Key takeaways
- OpenAI CEO Sam Altman recently warned that “some really bad things” come with AI, especially deepfakes and other “really strange or scary moments.”
- OpenAI’s new video app, Sora 2, quickly claimed the top spot on Apple’s App Store in the days after it launched late last month, showing how quickly deepfake-style technology is becoming mainstream.
- Altman said he hopes society learns to build protective barriers before the technology becomes even more powerful.
Sam Altman, CEO of OpenAI, the company behind ChatGPT, has been issuing unexpectedly stark warnings about the effects of his own products and others like them.
“I expect really bad things to happen because of technology,” he said in a recent interview on venture capital firm Andreessen Horowitz’s a16z podcast.
The warning is not hypothetical. Videos made with Sora 2, the new version of OpenAI's video app, launched late last month on an invitation-only basis, quickly spread across social media as the app rose to number one in Apple's (AAPL) App Store in the US. Deepfakes generated with Sora soon circulated featuring Martin Luther King Jr. and other public figures, including Altman himself, who was depicted engaging in various forms of criminal activity. (OpenAI later blocked users from making Martin Luther King Jr. videos on Sora.)
But if Altman really expects “really bad things to happen,” why does his company seem to help hasten their arrival?
Why does this matter to you?
AI-generated deepfakes can be indistinguishable from real videos, making it difficult to trust what you see on social media. You may lose trust in the news or financial advice videos you see on platforms faster than tech companies and regulators can create safeguards. Scammers are already using similar tools to produce fraudulent videos. Be careful and always question the content you see on social media before taking it as truth.
Altman: we must ‘co-evolve’ with AI
Altman’s rationale for moving forward with this public release is that society needs a test drive.
“Very soon the world will have to deal with incredible video models that can fake anyone or show whatever you want,” he said on the podcast.
Instead of perfecting the technology behind closed doors, he argues that society and AI must “co-evolve,” and that “it can’t just be left to the end.” The theory: give people early exposure so communities can build norms and guardrails before these tools become even more powerful.
The stakes include losing what we have long considered evidence of the truth: videos of events have helped change world history. The upside, according to Altman, is that we will be better prepared when even more sophisticated tools arrive.
Warning
Holocaust denial videos created with Sora 2 garnered hundreds of thousands of likes on Instagram within days of the app’s launch, according to the Global Coalition Against Hate and Extremism. The organization maintains that OpenAI’s usage policies, which lack specific prohibitions against hate speech, have helped extremist content flourish online.
What Altman says comes later
Altman’s warning wasn’t just about fake videos. It was about what happens when too many of us outsource our decisions to algorithms that few people understand.
“I still think there will be some really strange or scary moments,” he said, emphasizing that just because AI hasn’t caused a catastrophic event yet “doesn’t mean it never will.”
“Billions of people talking to the same brain” could create “strange things on a societal scale,” he said. Put another way, it could set off unexpected chain reactions, triggering shifts in information, politics, and community trust that spread faster than anyone can control.
Despite these broad changes that affect us all, Altman has opposed regulating the technology.
“Most regulations probably have a lot of disadvantages,” he said, though he added that he supports “very careful safety testing” for what he called “extremely superhuman” models.
“I think we’ll develop some barriers around it as a society,” he said.
