Meta to identify more AI-generated images before the next elections

by SuperiorInvest

Meta Platforms CEO Mark Zuckerberg arrives at the federal courthouse in San Jose, California, on December 20, 2022.

David Paul Morris | Bloomberg | Getty Images

Meta is expanding its efforts to identify images doctored by artificial intelligence as it seeks to root out misinformation and deepfakes ahead of upcoming elections around the world.

The company is building tools to identify AI-generated content at scale when it appears on Facebook, Instagram and Threads, it announced Tuesday.

Until now, Meta has labeled only AI-generated images developed with its own AI tools. Now, the company says it will look to apply those labels to content from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.

Labels will appear in all languages available in each app. But the change will not be immediate.

In a blog post, Nick Clegg, Meta’s president of global affairs, wrote that the company will begin labeling AI-generated images from external sources “in the coming months” and will continue working on the issue “over the next year.”

More time is needed to work with other AI companies to “align on common technical standards that indicate when content has been created using AI,” Clegg wrote.

Election-related misinformation caused a crisis on Facebook after the 2016 presidential election, when foreign actors, mostly from Russia, were able to create and spread highly charged and inaccurate content. The platform was repeatedly exploited in the following years, most notably during the Covid pandemic, when people used it to spread large amounts of misinformation. Holocaust deniers and QAnon conspiracy theorists also abounded on the site.

Meta is trying to show that it is prepared for bad actors to use more advanced forms of technology in the 2024 cycle.

While some AI-generated content is easily detected, this is not always the case. Services that claim to identify AI-generated text, such as essays, have been shown to exhibit bias against non-native English speakers. Detection is not much easier with images and videos, although there are often telltale signs.

Meta seeks to minimize uncertainty by working primarily with other AI companies that use invisible watermarks and certain types of metadata on images created on their platforms. However, there are ways to remove watermarks, an issue Meta plans to address.
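
Meta has not published the exact markers it will read, but one industry convention is the IPTC “digital source type” vocabulary, which tags generative output with the term trainedAlgorithmicMedia in an image’s XMP metadata. The sketch below shows how such a check could work in principle; the looks_ai_generated helper and the raw-bytes XMP scan are assumptions for this example, not Meta’s pipeline.

```python
import re
import sys

# IPTC "digital source type" term that marks content created by a
# generative model. This vocabulary is an industry convention; the
# exact markers Meta reads are not public, so this is illustrative.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's XMP metadata declares the image was
    produced by a trained algorithmic model (hypothetical helper)."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP packets are plain XML embedded directly in the image file.
    packet = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if packet is None:
        return False  # no XMP metadata present at all
    return AI_SOURCE_TYPE in packet.group(0)

if __name__ == "__main__":
    print(looks_ai_generated(sys.argv[1]))
```

A check like this covers only one signal: metadata can be stripped by screenshots, re-encoding, or deliberate editing, which is why Meta says it is also investing in detection that does not depend on it.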

“We are working hard to develop classifiers that can help us automatically detect AI-generated content, even if the content lacks invisible markers,” Clegg wrote. “At the same time, we are looking for ways to make it more difficult to remove or alter invisible watermarks.”
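
The two-track approach Clegg describes, classifiers plus harder-to-remove watermarks, makes more sense once you see how fragile a naive watermark is. The toy least-significant-bit scheme below, with hypothetical embed_lsb and extract_lsb helpers, is not Meta’s method; production watermarks are embedded far more robustly, but even light per-pixel noise, much gentler than a JPEG re-encode, erases this one.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each byte."""
    flat = pixels.ravel().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n hidden bits."""
    return pixels.ravel()[:n] & 1

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed_lsb(image, bits)
assert np.array_equal(extract_lsb(marked, bits.size), bits)

# Flipping random low-order bits -- a far gentler attack than a
# JPEG re-encode -- already reduces recovery to coin-flip accuracy.
noise = rng.integers(0, 2, size=marked.shape, dtype=np.uint8)
attacked = marked ^ noise
print("bits surviving:", np.mean(extract_lsb(attacked, bits.size) == bits))
```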

Audio and video can be even more difficult to monitor than images, because there is not yet an industry standard for AI companies to add invisible identifiers.

“We are not yet able to detect those signals and label this content from other companies,” Clegg wrote.

Meta will add a way for users to voluntarily disclose when they upload AI-generated video or audio. If they share deepfake content or another form of AI-generated content without disclosing it, the company “may apply sanctions,” the post says.

“If we determine that digitally created or altered image, video, or audio content creates a particularly high risk of materially misleading the public on a matter of importance, we may add a more prominent label if appropriate,” Clegg wrote.

WATCH: Meta is overly optimistic about revenue and cost growth
