Dive Brief:
- Many software defenses against fraud are not up to the task of combating the rising threat of deepfakes and other scams, according to SentiLink CEO Naftali Harris.
- “The dirty little secret is that not all fraud controls out there are all that good,” Harris said during a panel discussion at the recent Money20/20 fintech conference in Las Vegas. The future of fighting fraud is not especially promising even with deployment of artificial intelligence defenses, he said.
- In 20 to 30 years, “scams and account takeover are going to be way, way worse,” Harris said. “Many of us are going to fall for these kinds of scams because it’s going to be very convincing,” he said. “That’s a world we haven’t reckoned with.”
Dive Insight:
Among many types of scams, deepfake phishing attacks — using AI-generated voice and video — are surging in number and becoming more difficult to detect, according to Keepnet Labs, a creator of an anti-phishing platform.
Deepfake files soared to 8 million last year from 500,000 in 2023, and companies lost an average of almost $500,000 for every deepfake-related incident, according to Keepnet.
Also, the number of detected deepfake incidents increased tenfold in 2023 compared with 2022, Keepnet said.
“Deepfakes are still emerging — they’re not rampant yet,” Oscilar CEO Neha Narkhede said, predicting that the attacks will “go mainstream.”
“What we are seeing on the ground is early but sure signs of deepfakes evolving from one-off impersonations into persistent, synthetic identities,” she said during the panel discussion.
This year, one of Oscilar’s cryptocurrency customers noticed seemingly ideal user growth, with hundreds of accounts trading in small volumes and ramping up into larger withdrawals, Narkhede said.
“Everything seemed fine to them in isolation, but when these signals were pulled together into a single platform, a different story emerged,” she said.
“There were similar GPU fingerprints, almost identical TLS signatures, matching video encoding patterns,” Narkhede said. “They all pointed to the fact that they probably were created by the same automation stack.”
Moreover, the pattern and velocity of trading were coordinated rather than “organic,” she said. “It turned out that this was a classic deepfake-driven synthetic network.”
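The detection approach Narkhede describes — pulling per-account signals into one place and looking for clusters that share the same automation stack — can be sketched in a few lines. This is a minimal illustration, not Oscilar’s implementation; the account records, field names and threshold below are hypothetical stand-ins for the GPU fingerprint, TLS signature and video-encoding signals mentioned above.

```python
from collections import defaultdict

# Hypothetical account records. In practice, each signal (GPU fingerprint,
# TLS signature, video encoding pattern) would come from separate telemetry.
accounts = [
    {"id": "a1", "gpu": "fp_001", "tls": "ja3_abc", "video": "enc_x"},
    {"id": "a2", "gpu": "fp_001", "tls": "ja3_abc", "video": "enc_x"},
    {"id": "a3", "gpu": "fp_001", "tls": "ja3_abc", "video": "enc_x"},
    {"id": "a4", "gpu": "fp_002", "tls": "ja3_def", "video": "enc_y"},
]

def flag_synthetic_clusters(accounts, min_cluster_size=3):
    """Group accounts by their combined device/network fingerprint and
    flag any cluster large enough to suggest one shared automation stack."""
    clusters = defaultdict(list)
    for acct in accounts:
        key = (acct["gpu"], acct["tls"], acct["video"])
        clusters[key].append(acct["id"])
    return {
        key: ids for key, ids in clusters.items()
        if len(ids) >= min_cluster_size
    }

flagged = flag_synthetic_clusters(accounts)
# Flags the a1/a2/a3 accounts, which share a single fingerprint combination.
```

Each signal looks benign in isolation, which is the point of the example: only joining them on a single platform reveals that hundreds of “users” resolve to one fingerprint.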
Deepfake fraudsters and other wrongdoers have no shortage of people’s personal data to exploit, mined from failures in cybersecurity, Harris said.
“There have been so many breaches that have happened, that I actually think that additional breaches of identity information don’t really give fraudsters that much more than they already have,” he said.
Effective fraud defense will come from collaboration between humans and AI, Narkhede predicted.
Humans “will be guiding AI agents — AI copilots that will do the heavy lifting while not taking humans out of the loop,” she said.
“Then, as AI agents start transacting on our behalf, I think the notion of identity itself is going to change from who you are to what is transacting on your behalf,” Narkhede said.