
You can’t stop generative AI fraud—but you can keep the criminals guessing

In order to combat illegal activity, we’ll have to make tough choices and strike a balance between privacy and identity safety.

[Source photo: piranka/Getty Images]

We’ve all experienced identity fraud in some way. At some point in your life, somebody has tried to be you. It’s hardly the sincerest form of flattery—more like a huge headache.

The term identity fraud dates back almost 60 years. Back then, it was less prevalent (not less dangerous) because it was done manually. Think of someone splicing a photo into a stolen passport.

Digital fraud is vastly more sophisticated—and commonplace. Even the most privacy-conscious leave a data trail that serves as ammunition for identity theft. Each day, more of you is distributed online. Think of the credit cards you own and how many accounts you’ve created on various platforms.

In response, we’ve learned to be vigilant and wary. We even tolerate multifactor security. Just as we were making headway, though, here comes generative AI.

Deepfakes are not new. What’s changed is that the technologies of deception have become commodities, turning them into convenient mass weapons for criminals.

You may be thinking, “[insert vendor] solves that.” Or my favorite: “We can fight AI with AI.” But it’s just not that simple.


Our data spreads like pollen and, unfortunately, you cannot wipe it from the interwebs. Your PII (personally identifiable information) is out there, ready to be used against you.

Let’s start with good ol’ deepfakes. Admittedly, that first doctored video with a synthetic Tom Cruise impressed everyone. Now, it’s gone far beyond fun and games. In 2022, the FBI warned about North Korea using deepfakes to win freelance contracts for its digital remote workers. Their motives: evade sanctions, generate foreign currency, penetrate corporate databases, and plant malware. Cases like this drove the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), to allocate millions to research detection of falsified videos.

“I was certain it was my boss on the phone, telling me to send the wire transfer. I know his German accent, intonation, and speaking cadence!” AI can generate fake, real-time audio that fools the unsuspecting. Just ask the British executive who wired $243,000 to a new “Hungarian supplier” because his “boss”—who was actually an AI-supported impersonator—told him over the phone that it was urgent. That was nearly four years ago and, today, AI-generated voices mimicking real people can even fool voice verification systems—a security front line that firms like Fidelity rely on heavily.

Then there’s synthetic ID fraud, where a made-up supplier or person is built from thin air. This is often a long con, and it’s one option for North Koreans applying for programming work. AI helps them generate a detailed, plausible social profile that helps divert suspicion. Anyone hiring them is unwittingly violating U.S. sanctions.


Like the Terminator, the fraudsters and technologies for deception never stop coming. Even earlier-generation AI fraud is tough to detect, from Photoshopped images to German-accented voices on the phone. I could go on. Next question: How do we stop it?

The strongest solution today is to come at it from every angle possible. Let’s say you’re a criminal. You want to hijack Bill Smith’s bank account. The bank requires a video call. Their official believes you are Mr. Smith even though you are using deepfake video to impersonate him. But your request, timing, or actions are unusual enough to trigger an alert, and suddenly you are asked to hold up your driver’s license. The alert and the challenge it triggers are driven by heuristics, which detect fraud by flagging unexpected deviations from a user’s established patterns.
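To make the heuristic idea concrete, here is a minimal sketch of pattern-deviation scoring. The signals, weights, and threshold are illustrative assumptions, not any bank’s actual rules; real systems weigh far more features.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    recipient: str
    hour: int  # 0-23, local hour when the request was made

def risk_score(txn: Transaction, history: list[Transaction]) -> int:
    """Score a request by how far it deviates from the user's usual behavior.
    Weights here are arbitrary placeholders for illustration."""
    score = 0
    known_recipients = {t.recipient for t in history}
    if txn.recipient not in known_recipients:
        score += 2  # never-before-seen payee
    typical_max = max((t.amount for t in history), default=0.0)
    if txn.amount > 2 * typical_max:
        score += 2  # far larger than anything the user has sent before
    usual_hours = {t.hour for t in history}
    if txn.hour not in usual_hours:
        score += 1  # request arrives at an atypical time of day
    return score

def requires_challenge(txn: Transaction, history: list[Transaction],
                       threshold: int = 3) -> bool:
    """True when the deviation score is high enough to demand a step-up
    check, like holding up a driver's license on the video call."""
    return risk_score(txn, history) >= threshold
```

A deepfaked face on the call doesn’t help the attacker here: the alert fires on *what* they ask for and *when*, not on how convincing they look.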

At the end of the day, there is no single silver bullet to stop this ever-moving target. The best chance to keep up with growing threat sophistication is to take a layered approach and constantly adapt. Here are some ways you can accomplish that.


  • Generative AI is powerful, but criminals using it still struggle to create a cohesive fake identity that is robust enough to hold up under pressure. A fake identity must consistently exhibit human-like behavior or it will stand out. Every additional “ask” makes it exponentially tougher for AI and bad actors to respond correctly. With that in mind, obtain the extra data you need from the person in context, judiciously—its purpose is security, not marketing.
  • Guard users’ PII carefully and use it only where the context requires. Does this acquisition of more personal user data sound like a nightmare for privacy? Mitigate that risk by automating PII collection and storage so people are not in the loop. To avoid even worse data breaches and fraud, make sure the challenge data is visible only at the moment it’s needed. Ensure the data you collect is used responsibly and that you are adhering to regulatory requirements. Rarely would it be necessary to use all available PII in challenges.
  • Present the user with additional and/or unexpected challenges if risk or uncertainty increases. Extra checks are only needed for users who are perceived as suspicious, and in transactions where the risk is elevated—such as large transfers of money to new recipients. The added checkpoint is akin to challenging a false alibi and forcing the suspect to embellish their story. This is where AI can be powerful for the defense, by requiring a face image, a voice print, or a government ID at just the right moment. It gives your organization a much greater chance to stop identity fraud.


Fraud that exploits generative AI is a threat, but it’s one we can defeat. Creating a more detailed, realistic online version of ourselves is the most practical, long-term strategy. Attackers are hard-pressed to impersonate a rich “digital twin” of each target. The downside is potential harm to our privacy. Cyber-attackers and autocrats will try to weaponize digital profiles, and that is a greater threat than generative AI by itself. Inevitably, to build our future we’ll have to make tough choices and strike a balance between privacy and identity safety. And that balance cannot be achieved via a single solution.



Rick Song is the cofounder and CEO of Persona.
