
AI will be fine regardless of who wins the White House

by SuperiorInvest

Sam Altman, CEO of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Georgia, USA, on Monday, December 11, 2023.

Dustin Chambers | Bloomberg | Getty Images

DAVOS, Switzerland – OpenAI founder and CEO Sam Altman said generative artificial intelligence as a sector and the United States as a country are “going to be fine” no matter who wins the presidential election later this year.

Altman was responding to a question about Donald Trump’s resounding victory in the Iowa caucuses and the public “faced with the reality of this upcoming election.”

“I think America will be fine, no matter what happens in this election. I think AI will be fine, no matter what happens in this election, and we will have to work very hard to make it that way,” Altman said this week in Davos during an interview at Bloomberg House at the World Economic Forum.

Trump won the Iowa Republican caucus in a landslide on Monday, setting a new record for the Iowa race with a 30-point lead over his closest rival.

“I think part of the problem is that we say, ‘Now we’re faced with — you know, it never occurred to us that the things he’s saying might resonate with a lot of people, and now, all of a sudden, after his performance in Iowa, oh man.’ That’s very similar to what they do in Davos,” Altman said.

“I think there’s been a real failure to learn lessons about what working for the citizens of the United States is and what isn’t.”

Part of what has propelled leaders like Trump to power is a working-class electorate that resents the feeling of having been left behind, as advances in technology widen the divide. When asked whether there is a danger of AI causing that kind of damage, Altman replied: “Yeah, sure.”

“This is something bigger than just a technological revolution… Therefore, it will become a social issue, a political issue. It already has been in some ways.”

As voters in more than 50 countries, representing half the world’s population, head to the polls in 2024, OpenAI this week released new guidelines on how it plans to protect against abuse of its popular generative AI tools, including its chatbot, ChatGPT, as well as DALL·E 3, which generates original images.

“As we prepare for the 2024 elections in the world’s largest democracies, our focus is to continue our platform security work by raising accurate voting information, enforcing measured policies, and improving transparency,” the San Francisco-based company wrote in a blog post on Monday.

The strengthened guidelines include cryptographic watermarks on images generated by DALL·E 3, as well as an outright ban on the use of ChatGPT in political campaigns.

“Many of these are things we’ve been doing for a long time, and we have a release from the safety systems team that not only has moderation, but we can also leverage our own tools to scale our enforcement, which I think gives us a significant advantage,” said Anna Makanju, vice president of global affairs at OpenAI, on the same panel as Altman.

The measures aim to prevent a repeat of past disruptions in crucial political elections through the use of technology, such as the Cambridge Analytica scandal in 2018.

Reports in The Guardian and elsewhere revealed that the controversial political consultancy, which worked for Trump’s campaign in the 2016 US presidential election, collected data from millions of people to influence the election.

Altman, when asked about OpenAI’s steps to ensure its technology was not used to manipulate elections, said the company was “pretty focused” on the issue and has “a lot of anxiety” about getting it right.

“I think our role is very different from that of a distribution platform,” like a social media site or a news publisher, he said. “We have to work with them, so it’s like you generate here and distribute here. And there needs to be a good conversation between them.”

However, Altman added that he is less concerned about the dangers of artificial intelligence being used to manipulate the electoral process than he was during previous election cycles.

“I don’t think this will ever be the same as before. I think it’s always a mistake to try to fight the last war, but we do get to take away some of that,” he said.

“I think it would be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ We’re going to have to watch this relatively closely this year. [with] super strict monitoring [and] super tight feedback.”

While Altman is not worried about what the outcome of the US election will mean for AI, the shape of any new government will be crucial in determining how the technology is ultimately regulated.

Last year, President Joe Biden signed an executive order on AI, calling for new safety standards, protecting the privacy of American citizens, and advancing equity and civil rights.

One thing that worries many regulators and AI ethicists is the possibility that AI will worsen social and economic disparities, especially since the technology has been shown to contain many of the same biases that humans have.
