Generative AI deepfakes can stoke misinformation or manipulate images of real people for unsavory purposes. They can also help threat actors pass two-factor authentication, according to an Oct. 9 research report from Cato Networks’ CTRL Threat Research.
AI generates videos of fake people looking into a camera
The threat actor profiled by CTRL Threat Research — known by the handle ProKYC — uses deepfakes to forge government IDs and spoof facial recognition systems. The attacker sells the tool on the dark web to aspiring fraudsters, whose ultimate goal is to infiltrate cryptocurrency exchanges.
Some exchanges require a potential account holder to both submit a government ID and appear live on video. With generative AI, the attacker can easily create a realistic-looking image of a person’s face. ProKYC’s deepfake tool then slots that picture into a fake driver’s license or passport.
The crypto exchanges’ facial recognition tests require brief proof that the person is present in front of the camera. The deepfake tool spoofs the camera, feeding it an AI-generated video of a person looking left and right.
SEE: Meta is the latest AI giant to create tools for photorealistic video.
The attacker then creates an account on the cryptocurrency exchange using the identity of the generated, non-existent person. From there, they can use the account to launder illegally obtained money or commit other forms of fraud. This type of attack, known as New Account Fraud, caused $5.3 billion in losses in 2023, according to Javelin Research and AARP.
Selling ways to break into networks isn’t new: ransomware-as-a-service schemes let aspiring attackers buy their way into systems.
How to prevent new account fraud
Etay Maor, chief security strategist at Cato Networks, offered several tips for organizations to prevent the creation of fake accounts using AI:
- Companies should scan for common traits of AI-generated videos, such as unusually high video quality — AI can produce images with greater clarity than a standard webcam typically captures.
- Watch or scan for glitches in AI-generated videos, especially irregularities around eyes and lips.
- Collect threat intelligence data from across your organization.
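The first two tips could feed into an automated screening step before a liveness check is accepted. The sketch below is a hypothetical heuristic, not Cato's implementation; the thresholds, the 0–1 sharpness score, and the function name are all illustrative assumptions.

```python
# Hypothetical pre-screening heuristic for liveness-check video submissions.
# Thresholds and inputs are illustrative assumptions, not Cato's rules.

def flag_suspicious_video(width: int, height: int, avg_sharpness: float,
                          webcam_max: tuple = (1920, 1080),
                          sharpness_ceiling: float = 0.85) -> list:
    """Return a list of reasons the submission merits manual review."""
    reasons = []
    # AI-generated video often exceeds the resolution of a real webcam feed.
    if width > webcam_max[0] or height > webcam_max[1]:
        reasons.append("resolution exceeds typical webcam output")
    # avg_sharpness is assumed to be a 0-1 score from a frame analyzer;
    # unusually clean frames are a common trait of generated video.
    if avg_sharpness > sharpness_ceiling:
        reasons.append("frames unusually sharp for a live camera")
    return reasons

# A 4K, very sharp submission would be routed to manual review:
print(flag_suspicious_video(3840, 2160, 0.95))
```

A real deployment would combine such coarse signals with the glitch detection Maor describes (artifacts around eyes and lips), since any single threshold is easy to game.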
It can be tricky to strike a balance between too much and too little scrutiny, Maor wrote in the report. “As mentioned above, creating biometric authentication systems that are super restrictive can result in many false-positive alerts,” he wrote. “On the other hand, lax controls can result in fraud.”