Dive Brief:
- C-suite leaders should brace themselves for a rise in artificial intelligence-powered impersonation scams targeting enterprises in the coming year, according to a report by fraud prevention firm Nametag.
- Trends such as the increased accessibility of “deepfake” technologies will likely accelerate this year, allowing bad actors to step up attacks like hiring fraud, where scammers pose as legitimate job candidates, according to Nametag’s 2026 Workforce Impersonation Report released last month.
- “It’s a perfect storm that leads us to really sense that 2026 will be the year of impersonation attacks,” Nametag CEO Aaron Painter said in an interview. AI technologies “have given new superpowers to bad actors, and they’re taking advantage of those tools,” he said.
Dive Insight:
More than 4.2 million fraud reports have been filed with the FBI since 2020, resulting in over $50.5 billion in losses, the FBI and American Bankers Association Foundation reported in September. A growing share of those scams is tied to deepfakes: AI-altered images, videos or audio recordings.
Deepfake technology has been a game-changer for fraudsters, making it substantially easier for them to steal anyone’s voice or likeness and carry out impersonation attacks, according to Nametag.
In 2024, British engineering group Arup was in the spotlight after reports that scammers successfully siphoned $25 million from the company by using deepfake technology to pose as the organization’s CFO.
Luxury sports car manufacturer Ferrari was unsuccessfully targeted in a 2024 deepfake attempt. As part of the scam, the fraudster tried to dupe a company executive into signing off on a transaction using WhatsApp messages that appeared to be sent by CEO Benedetto Vigna.
In a third example, fraud detection company Pindrop said last year that it was unsuccessfully targeted in a November 2024 hiring scam that involved deepfake technology.
The number of online deepfakes exploded from roughly 500,000 in 2023 to about 8 million in 2025, according to an estimate by cybersecurity firm DeepStrike.
Painter said impersonation attacks are poised to further accelerate this year, with departments such as information technology, human resources and finance serving as prime targets.
“You will see a rise of impersonation attacks across the various roles in the enterprise,” he said.
In scams targeting IT, fraudsters may pose as employees or contractors in an attempt to trick help desk staff into resetting a victim’s password or multi-factor authentication credentials, according to Nametag. Deepfake impersonation will become a “standard tactic in helpdesk social engineering playbooks” this year, the firm predicted.
Companies also face escalating fraud and cybersecurity risks when it comes to the hiring process, Nametag said. The responsibility for verifying job candidates’ identities is spread across multiple teams, providing “predictable openings” for scammers, according to the research.
By 2028, one in four job candidate profiles worldwide will be fake, according to a Gartner prediction.
The emergence of agentic AI has elevated the threat level, Nametag warned. “Once an [AI] agent has access and autonomy, it can be hijacked or misused just like any other user account,” the report said. With a hijacked agent, scammers can “initiate legitimate-looking actions, from data exports to software deployments, that bypass human oversight.”
Organizations were advised to make a “fundamental shift” in how they think about workforce identity. “That means verifying that the right human is behind the keyboard, phone, or AI agent, not blindly trusting that whoever can click a link, tap a push, or join a call is who they claim to be,” the report said.