Several AI boosters signed this week’s “mitigating extinction risks” statement, raising the possibility that insiders with billions of dollars at stake are attempting to showcase their capacity for self-regulation.
Speakers’ view at artificial general intelligence conference at the FedEx Institute of Technology, University of Memphis, March 5, 2008.
(brewbooks, Flickr/Attribution-ShareAlike/ CC BY-SA 2.0)
By Kenny Stancil
Common Dreams
This week, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics and others doesn’t go into detail about the existential threats posed by AI.
Instead, it seeks to “open up discussion” and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” according to the Center for AI Safety, a U.S.-based nonprofit whose website hosts the statement.
Geoffrey Hinton giving a lecture about deep neural networks at the University of British Columbia, 2013. (Eviatar Bach, CC BY-SA 3.0 Wikimedia Commons)
Lead signatory Geoffrey Hinton, often called “the godfather of AI,” has been sounding the alarm for weeks. Earlier this month, the 75-year-old professor emeritus of computer science at the University of Toronto announced that he had resigned from his job at Google in order to speak more freely about the dangers associated with AI.
Before he quit Google, Hinton told CBS News in March that the rapidly advancing technology’s potential impacts are comparable to “the Industrial Revolution, or electricity, or maybe the wheel.”
Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”
That frightening potential doesn’t necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called “artificial general intelligence” (AGI), which would encompass computers developing and acting on their own ideas.
“Until quite recently, I thought it was going to be like 20-to-50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.”
Pressed by the outlet if it could happen sooner, Hinton conceded that he wouldn’t rule out the possibility of AGI arriving within five years, a significant change from a few years ago when he “would have said, ‘No way.’”
“We have to think hard about how to control that,” said Hinton. Asked if that’s possible, Hinton said, “We don’t know, we haven’t been there yet, but we can try.”
The AI pioneer is far from alone. According to the 2023 AI Index Report, an annual assessment of the fast-growing industry published last month by the Stanford Institute for Human-Centered Artificial Intelligence, 57 percent of computer scientists surveyed said that “recent progress is moving us toward AGI,” and 58 percent agreed that “AGI is an important concern.”
Although its findings were released in mid-April, Stanford’s survey of 327 experts in natural language processing — a branch of computer science essential to the development of chatbots — was conducted last May and June, months before OpenAI’s ChatGPT burst onto the scene in November.
OpenAI CEO Sam Altman, who signed the statement shared Tuesday by the Center for AI Safety, wrote in a February blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”
The following month, however, Altman declined to sign an open letter calling for a half-year moratorium on training AI systems beyond the level of OpenAI’s latest chatbot, GPT-4.
OpenAI CEO Sam Altman speaking at an event in San Francisco in 2019. (TechCrunch/ CC BY 2.0 Wikimedia Commons)
The letter, published in March, states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Tesla and Twitter CEO Elon Musk was among those who called for a pause two months ago, but he is “developing plans to launch a new artificial intelligence start-up to compete with” OpenAI, according to the Financial Times, raising the question of whether his stated concern about the technology’s “profound risks to society and humanity” is sincere or an expression of self-interest.
Possible Bid for Self-Regulation
That Altman and several other AI boosters signed this week’s statement raises the possibility that insiders with billions of dollars at stake are attempting to showcase their awareness of the risks posed by their products in a bid to persuade officials of their capacity for self-regulation.
Isaac Asimov's three rules of robotics fully vindicated: US military drone controlled by AI killed its operator during simulated test https://t.co/sjM5zuJwzH
— Yanis Varoufakis (@yanisvaroufakis) June 2, 2023
Demands from outside the industry for robust government regulation of AI are growing. While ever-more dangerous forms of AGI may still be years away, there is already mounting evidence that existing AI tools are exacerbating the spread of disinformation, from chatbots spouting lies and face-swapping apps generating fake videos to cloned voices committing fraud.
Current, untested AI is hurting people in other ways, including when automated technologies deployed by Medicare Advantage insurers unilaterally decide to end payments, resulting in the premature termination of coverage for vulnerable seniors.
Critics have warned that in the absence of swift interventions from policymakers, unregulated AI could harm additional healthcare patients, hasten the destruction of democracy, and lead to an unintended nuclear war. Other common worries include widespread worker layoffs and worsening inequality as well as a massive uptick in carbon pollution.
A report published last month by Public Citizen argues that “until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause.”
“Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated,” the progressive advocacy group warned in a statement.
“History offers no reason to believe that corporations can self-regulate away the known risks — especially since many of these risks are as much a part of generative AI as they are of corporate greed,” the watchdog continued. “Businesses rushing to introduce these new technologies are gambling with people’s lives and livelihoods, and arguably with the very foundations of a free society and livable world.”
Kenny Stancil is a staff writer for Common Dreams.
This article is from Common Dreams.
Views expressed in this article may or may not reflect those of Consortium News.
The “operator killing drone” story has now been debunked (which probably means it’s true):
“US air force colonel ‘misspoke’ about drone killing pilot who tried to override mission”
“Colonel retracted his comments and clarified that the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’” (Guardian again)
A hypothetical “thought experiment.” Mmmm; “hypothetical thoughts.” Mmmmm. “Uncharted territory,” anyone?
The statement fails to address the potential benefits and positive societal impacts of AI. While the potential risks should not be downplayed, it is crucial to maintain a balanced view. Advanced AI has the potential to revolutionize industries, boost economies, and solve complex global problems.
Moreover, the singling out of ‘artificial general intelligence’ (AGI) as an existential threat without offering a detailed understanding or tangible solutions could be seen as an effort to create a state of fear and uncertainty. This would further justify their claim for self-regulation, a stance that would allow these companies to operate under fewer constraints and with more autonomy, which could lead to a consolidation of power within the industry.
Furthermore, calls for self-regulation among tech giants have historically led to a lack of accountability, where the responsibility to prevent and address harm is shifted away from the creators and onto the users. Instead of self-regulation, an external, neutral body with regulatory powers could ensure the ethical use of AI and prevent misuse.
In conclusion, while the risk mitigation of AI is undeniably crucial, the emphasis should be on collaborative, transparent, and diversified efforts rather than concentrating power within a few entities. Policies should be inclusive and protective of public interest, ensuring that AI development benefits society as a whole and not just a few corporate players.
If AI were truly “intelligent,” and thinking way faster than we do, we should expect it to make far better decisions, even for ourselves. The real danger is that the ultra rich oligarchs want to gain control of the AI and use it to subjugate and murder poor and working class people. *THAT* is the worst case scenario for the human race.
AI may be our last chance at getting out from under the boot of fascism. We’re already on the road to extinction. Any significant change in the world’s power structure must be regarded as a potential improvement!