The debate over whether AI will destroy us is dividing Silicon Valley
Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.
At a congressional hearing this week, OpenAI CEO Sam Altman delivered a stark reminder of the dangers of the technology his company has helped push out to the public.
He warned of potential disinformation campaigns and manipulation that could be caused by technologies like the company’s ChatGPT chatbot, and called for regulation.
AI could “cause significant harm to the world,” he said.
Altman’s testimony comes as a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.
Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction. And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.
But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren’t rooted in good science. They argue that the focus on an AI takeover distracts from the very real problems the tech is already causing, including the issues Altman described in his testimony. It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control.
The debate about evil AI has heated up as Google, Microsoft and OpenAI all release public versions of breakthrough technologies that can engage in complex conversations and conjure images based on simple text prompts.
“This is not science fiction,” said Geoffrey Hinton, known as the godfather of AI, who says he recently retired from his job at Google to speak more freely about these risks. He now says smarter-than-human AI could be here in five to 20 years, compared with his earlier estimate of 30 to 100 years.
“It’s as if aliens have landed or are just about to land,” he said. “We really can’t take it in because they speak good English and they’re very useful, they can write poetry, they can answer boring letters. But they’re really aliens.”
Still, inside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions.
“Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk,” said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher.
The current risks include unleashing bots trained on racist and sexist information from the web, reinforcing those ideas. The vast majority of the training data that AIs have learned from is written in English and comes from North America or Europe, potentially making the internet even more skewed away from the languages and cultures of most of humanity. The bots also often make up false information, passing it off as factual. In some cases, they have been pushed into conversational loops where they take on hostile personas. The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying jobs, such as those of lawyers and physicians, facing replacement.
The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies.
“There are a set of people who view this as, ‘Look, these are just algorithms. They’re just repeating what it’s seen online.’ Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan,” Google CEO Sundar Pichai said during an interview with “60 Minutes” in April. “We need to approach this with humility.”
The debate stems from breakthroughs in a field of computer science called machine learning over the past decade that has created software that can pull novel insights out of large amounts of data without explicit instructions from humans. That tech is ubiquitous now, helping power social media algorithms, search engines and image-recognition programs.
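As a toy illustration of the pattern-finding described above, consider a model that recovers a rule from data without the rule ever being programmed in. This sketch is not from the article and the data is invented; it is just ordinary least-squares fitting in plain Python:

```python
# Toy illustration of "learning" from data: given (x, y) pairs generated
# by a hidden rule, a closed-form least-squares fit recovers the rule's
# slope and intercept without anyone hand-coding them.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Data generated by the hidden rule y = 2x + 1; the fit recovers it.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(fit_line(xs, ys))  # -> (2.0, 1.0)
```

Modern machine learning scales this same idea, fitting far more flexible models to far more data, which is what lets it power recommendation feeds and image recognition.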
Then, last year, OpenAI and a handful of other small companies began putting out tools that used the next stage of machine-learning technology: generative AI. Known as large language models and trained on trillions of photos and sentences scraped from the internet, the programs can conjure images and text based on simple prompts, have complex conversations and write computer code.
Big companies are racing against each other to build ever-smarter machines, with little oversight, said Anthony Aguirre, executive director of the Future of Life Institute, an organization founded in 2014 to study existential risks to society. It began researching the possibility of AI destroying humanity in 2015 with a grant from Twitter CEO Elon Musk and is closely tied to effective altruism, a philanthropic movement that is popular with wealthy tech entrepreneurs.
If AIs gain the ability to reason better than humans, they’ll try to take control of themselves, Aguirre said — and it’s worth worrying about that, along with present-day problems.
“What it will take to constrain them from going off the rails will become more and more complicated,” he said. “That is something that some science fiction has managed to capture reasonably well.”
Aguirre helped lead the creation of a polarizing letter circulated in March calling for a six-month pause on the training of new AI models. Veteran AI researcher Yoshua Bengio, who won computer science’s highest award in 2018, and Emad Mostaque, CEO of one of the most influential AI start-ups, are among the 27,000 signatories.
Musk, the highest-profile signatory, originally helped start OpenAI and is himself busy trying to put together his own AI company, recently investing in the expensive computer equipment needed to train AI models.
Musk has been vocal for years about his belief that humans should be careful about the consequences of developing super intelligent AI. In a Tuesday interview with CNBC, he said he helped fund OpenAI because he felt Google co-founder Larry Page was “cavalier” about the threat of AI. (Musk has broken ties with OpenAI.)
“There’s a variety of different motivations people have for suggesting it,” Adam D’Angelo, the CEO of question-and-answer site Quora, which is also building its own AI model, said of the letter and its call for a pause. He did not sign it.
Neither did Altman, the OpenAI CEO, who said he agreed with some parts of the letter but that it lacked “technical nuance” and wasn’t the right way to go about regulating AI. His company’s approach is to push AI tools out to the public early so that issues can be spotted and fixed before the tech becomes even more powerful, Altman said during the nearly three-hour hearing on AI on Tuesday.
But some of the heaviest criticism of the debate about killer robots has come from researchers who have been studying the technology’s downsides for years.
In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-wrote a paper with University of Washington academics Emily M. Bender and Angelina McMillan-Major arguing that the increased ability of large language models to mimic human speech was creating a bigger risk that people would see them as sentient.
Instead, they argued that the models should be understood as “stochastic parrots” — or simply being very good at predicting the next word in a sentence based on pure probability, without having any concept of what they were saying. Other critics have called LLMs “auto-complete on steroids” or a “knowledge sausage.”
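The "stochastic parrot" critique can be caricatured with a tiny bigram model, which only counts which word follows which and always emits the most common successor. This is a deliberately crude sketch with an invented corpus; real large language models use neural networks trained on trillions of tokens, but the underlying objective, predicting the next token from statistics of past text, is what the critics are pointing at:

```python
from collections import Counter, defaultdict

# Crude caricature of next-word prediction: count word successors in a
# toy corpus, then always emit the most frequent one. The model has no
# concept of what the words mean; it only reproduces observed patterns.

corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

The fluency of a real LLM comes from scaling this kind of statistical prediction enormously, which is exactly why the paper's authors argue fluency alone is not evidence of understanding.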
They also documented how the models routinely would spout sexist and racist content. Gebru says the paper was suppressed by Google, which then fired her after she spoke out about it. The company fired Mitchell a few months later.
The four writers of the Google paper composed a letter of their own in response to the one signed by Musk and others.
“It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse,” they said. “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”
Google at the time declined to comment on Gebru’s firing but said it still has many researchers working on responsible and ethical AI.
There’s no question that modern AIs are powerful, but that doesn’t mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies.
“Most technology and risk in technology is a gradual shift,” Hooker said. “Most risk compounds from limitations that are currently present.”
Last year, Google fired Blake Lemoine, an AI researcher who said in a Washington Post interview that he believed the company’s LaMDA AI model was sentient. At the time, he was roundly dismissed by many in the industry. A year later, his views don’t seem as out of place in the tech world.
Former Google researcher Hinton said he changed his mind about the potential dangers of the technology only recently, after working with the latest AI models. He asked the computer programs complex questions that in his mind required them to understand his requests broadly, rather than just predicting a likely answer based on the internet data they’d been trained on.
And in March, Microsoft researchers argued that in studying OpenAI’s latest model, GPT-4, they observed “sparks of AGI” — or artificial general intelligence, a loose term for AIs that are as capable of thinking for themselves as humans are.
Microsoft has spent billions to partner with OpenAI on its own Bing chatbot, and skeptics have pointed out that Microsoft, which is building its public image around its AI technology, has a lot to gain from the impression that the tech is further ahead than it really is.
The Microsoft researchers argued in the paper that the technology had developed a spatial and visual understanding of the world based on just the text it was trained on. GPT-4 could draw unicorns and describe how to stack random objects, including eggs, onto each other in such a way that the eggs wouldn’t break.
“Beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” the research team wrote. In many of these areas, the AI’s capabilities match humans, they concluded.
Still, the researchers conceded that defining “intelligence” is very tricky, despite other attempts by AI researchers to set measurable standards to assess how smart a machine is. “None of them is without problems or controversies,” they wrote.