Deepfake scams have looted millions. Experts warn it could get worse

by SuperiorInvest

3D generated face representing artificial intelligence technology


A rising wave of deepfake scams has looted millions of dollars from companies around the world, and cybersecurity experts warn it could get worse as criminals exploit generative AI to commit fraud.

A deepfake is a video, audio clip or image of a real person that has been digitally altered and manipulated, often using artificial intelligence, to convincingly misrepresent them.

In one of the biggest known cases this year, a Hong Kong finance worker was tricked into transferring more than $25 million to scammers who used deepfake technology to pose as his colleagues on a video call, authorities told local media in February.

Last week, British engineering firm Arup confirmed to CNBC that it was the company involved in that case, but could not go into details about the matter due to the ongoing investigation.

These threats have been growing since the release of OpenAI's ChatGPT in 2022, which quickly propelled generative AI technology into the mainstream, said David Fairman, chief information officer and chief security officer for APAC at Netskope.

“The public accessibility of these services has lowered the barrier to entry for cybercriminals: they no longer need to have special technological skills,” Fairman said.

The volume and sophistication of scams have expanded as artificial intelligence technology continues to evolve, he added.

Growing trend

Various generative AI services can be used to generate human-like text, images, and video content and can therefore act as powerful tools for illicit actors attempting to digitally manipulate and recreate certain individuals.

An Arup spokesperson told CNBC: “Like many other companies around the world, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes.”

The finance worker had reportedly attended the video call with people believed to be the company's chief financial officer and other staff, who asked him to make a money transfer. However, the rest of the attendees present at that meeting had, in reality, been digitally recreated deepfakes.

Arup confirmed that “fake voices and images” were used in the incident, adding that “the number and sophistication of these attacks has increased significantly in recent months.”

Chinese state media reported a similar case in Shanxi province this year involving a financial employee, who was tricked into transferring 1.86 million yuan ($262,000) to a scammer's account after a video call with a deepfake of his boss.

Broader implications

In addition to direct attacks, companies are increasingly concerned about other ways that fake photos, videos or speeches from their superiors could be used maliciously, cybersecurity experts say.

According to Jason Hogg, cybersecurity expert and executive-in-residence at Great Hill Partners, deepfakes of high-ranking members of a company can be used to spread fake news to manipulate stock prices, defame a company's brand and sales, and spread other harmful misinformation.

“That's just a sample of what happens,” said Hogg, who previously worked as an FBI special agent.

He highlighted that generative AI is capable of creating deepfakes based on a trove of digital information, such as publicly available content hosted on social networks and other media platforms.

In 2022, Patrick Hillmann, Binance's chief communications officer, claimed in a blog post that scammers had created a deepfake of him based on previous news interviews and television appearances, using it to trick clients and contacts into meetings.

Netskope's Fairman said such risks had led some executives to begin removing or limiting their online presence for fear that cybercriminals could use it as ammunition.

Deepfake technology has already become widespread outside the business world.

From fake pornographic images to doctored videos promoting kitchen utensils, celebrities like Taylor Swift have fallen victim to deepfake technology. Deepfakes of politicians have also proliferated.

Meanwhile, some scammers have created deepfakes of people's family and friends in an attempt to trick them out of money.

According to Hogg, the broader problems will accelerate and worsen for some time, because preventing cybercrime requires careful analysis to develop the systems, practices and controls needed to defend against new technologies.

However, cybersecurity experts told CNBC that companies can bolster their defenses against AI-powered threats through better staff education, cybersecurity testing, and requiring code words and multiple levels of approval for all transactions, measures that could have prevented cases like Arup's.

Clarification: This story has been updated to accurately reflect David Fairman's title at Netskope.
