AI-Powered Impostors Are Getting Hired. Here’s How
AI-assisted impostors blur the line between candidate and avatar, raising urgent questions about what identity even means in a remote-first workforce.

In the age of generative AI, identity no longer means what it used to. A resume can be fabricated with a few keystrokes. A face can be conjured out of thin air. A job interview can be passed with someone else whispering the answers into an earpiece, or simply with an AI model prompting a believable response in real time.
This isn’t speculative. It’s happening now.
Earlier this year, the Department of Justice announced multiple indictments involving North Korean nationals who allegedly infiltrated U.S. companies using synthetic identities, deepfake images, proxy interviews, and even U.S.-based “laptop farms” that masked the operatives’ true location. What looked like a typical remote developer turned out to be a sanctioned foreign agent, funding weapons programs and stealing intellectual property on behalf of a hostile regime.
If that seems like a one-off, consider this: some of those operatives were using AI tools like Anthropic’s Claude to write job applications, complete coding challenges, and even respond to workplace messages, all while hiding behind stolen credentials and fake photos. These aren’t isolated incidents; they point to a sophisticated pattern of deception.
When the Face Behind the Screen Isn’t Real
Anthropic has released a sobering threat intelligence report highlighting how large language models (LLMs), including its own, are being used by bad actors. The company analyzed thousands of usage attempts flagged for potential abuse and found clear patterns of fraud-related queries.
One key takeaway: employment fraud is among the most common malicious uses of AI the company observed. Anthropic’s internal research, based on red-team testing and real-world misuse signals, suggests a four-phase fraud lifecycle consistent with known FBI case files and DOJ indictments.
Phase 1: Fake It till You Make It – Crafting the Perfect Applicant
Getting a foot in the door is easier than ever, especially when LLMs are being misused to craft realistic resumes, cover letters, and even entire portfolios.
Anthropic observed threat actors using AI to pass screening assessments, auto-generate references, and develop fake personas across LinkedIn and GitHub. In one case, a test user prompted Claude to create a convincing backstory for a software engineer who graduated from a notable university, with experience tailored to the tech industry. In seconds, the model output a full employment history, technical project descriptions, and a compelling skills summary.
Phase 2: The AI Wingman – Application and Interview Deception at Scale
Getting hired is no longer a solo act. Increasingly, bad actors are showing up to job interviews with an invisible assistant, one that’s tireless, well-read, and alarmingly persuasive. In Phase Two of the employment fraud lifecycle, threat actors use generative AI not just to get their foot in the door but to be welcomed in.
Once an interview is secured, the AI support continues. Threat actors use these models to rehearse for behavioral questions, prepare responses to common technical prompts, and even generate real-time solutions during coding challenges. In some cases, Claude was observed walking users through detailed technical answers, tailored specifically to the job posting or interview scenario.
What emerges is a deeply unsettling reality. In some interviews, employers aren’t just evaluating a candidate; they’re evaluating a candidate and their AI co-pilot. And often, they can’t tell the difference.
Phase 3: Ghost in the Machine – Maintaining Employment with AI Assistance
Getting the job is only half the deception. Keeping it? That’s where the real performance begins.
In Phase Three of the employment fraud lifecycle, AI becomes more than a resume builder or interview coach; it becomes a constant crutch for day-to-day survival. Once inside the virtual walls of a company, threat actors lean heavily on tools like Claude to maintain the illusion of technical competence. Anthropic reports that roughly 80% of observed model usage suggests active, ongoing employment rather than one-off application help.
These operatives rely on generative AI to write and troubleshoot code, respond to team feedback, and even hold up their end of Slack threads and GitHub comments. It’s a digital masquerade: an AI-sustained impersonation of a skilled, collaborative team member. The model doesn’t just help them work; it helps them appear to work, and in distributed teams where face time is minimal, that’s often enough.
This phase also marks a troubling shift in risk. It’s not just about getting through the front door anymore. These operators are now embedded in engineering teams, with access to proprietary systems, intellectual property, and sensitive source code, creating a vector for both technical compromise and corporate espionage.
Phase 4: From Code to Cash – Fueling Sanctions Evasion with Payroll Fraud
For most employers, payroll is a routine cost of doing business. For state-sponsored threat actors, it’s a revenue stream.
Phase Four is where the implications become geopolitical. According to FBI estimates cited in the Anthropic report, these fraudulent employment schemes generate hundreds of millions of dollars annually for the North Korean regime, money that is then funneled into nuclear weapons development and other sanctions-violating programs.
Generative AI is making this grift scalable. In the past, an individual operator could barely manage one fraudulent role without raising flags. But with AI handling technical tasks, communications, and scheduling across time zones, that same operative can now hold down multiple jobs simultaneously, without detection.
This is more than cheating to land a job. In the hands of fraud rings or hostile nation-state actors, AI becomes a force multiplier that amplifies the scale, speed, and sophistication of employment fraud. Anthropic’s research showed that in many cases, prompts would evolve iteratively, with users fine-tuning inputs based on prior responses until they arrived at a fraud-ready output.
What’s most striking is the subtlety. These aren’t queries asking how to “hack a job interview.” They’re strategic and calculated, and they often resemble the kind of innocuous career coaching prompts recruiters might see in a LinkedIn post. That’s the danger. The line between support and subversion is thinner than ever.
The Rise of the AI Impostor
The FBI has repeatedly warned of this trend, calling it “Business Identity Compromise.” These AI-assisted impostors blur the line between candidate and avatar, raising urgent questions about what identity even means in a remote-first workforce. In this form of fraud, synthetic content is used not just to gain access, but to maintain the illusion of a legitimate employee. And that illusion can last for months, even years, if employers aren’t equipped to verify identity beyond the surface.
Benchmarking the Risk
A report released by HireRight, a global provider of background screening services, reinforces just how underprepared many employers remain. In North America, 34% of respondents to its 2025 Global Benchmark Report said they have little to no confidence that they could identify when candidates have used generative AI tools to assist with their job applications or resumes. That uncertainty is compounded by the fact that identity verification isn’t standard in many industries, especially when onboarding remote workers or freelancers. And while almost half of global respondents said they are exploring AI solutions to support core Human Resources functions, far fewer have assessed the risks those tools may introduce, or how they could be exploited by malicious actors. In fact, one in six respondents said their business had experienced ID fraud as part of their hiring process, and an additional three in 10 were unsure.
The gaps are clear, and threat actors are filling them.
From Reactive to Proactive: What Employers Can Do
The good news is that solutions exist, and leading employers are already shifting their approach. Here are five steps employers can take to combat the threat of fake workers:
Strengthen identity verification: Traditional document checks won’t cut it. Employers need layered identity assurance tools that combine government ID validation with biometric face matching and liveness detection. HireRight’s Global ID Check, for example, uses remote verification to confirm not just that a document is authentic but that the person presenting it is real and matches the image on the ID.
Rethink trust in the remote era: Just because someone can join a video call doesn’t mean they are who they claim to be. Employers should embed liveness checks throughout the hiring journey and beyond, such as during onboarding, at system login, or before granting access to sensitive data.
Train hiring teams to spot red flags: Educate talent acquisition professionals about modern fraud tactics: inconsistencies in work history, stock-photo-quality images, or odd hesitations in video interviews. Synthetic applicants don’t always behave like real ones.
Don’t ship first, ask questions later: Sending laptops to addresses that don’t match hiring records, or to third parties, should be a red flag, especially if equipment is requested urgently or rerouted at the last minute.
Collaborate with legal and security teams: Fake workers are more than an HR issue. The threat of synthetic hires presents data privacy, sanctions compliance, and cybersecurity problems. Employers should build cross-functional hiring resilience teams that include legal, compliance, IT, and HR stakeholders.
Parting Thoughts
Generative AI will continue reshaping the way we work, but it’s also reshaping how we deceive, infiltrate, and exploit. Employers are being targeted today by synthetic applicants backed by sophisticated actors and nation-state interests. And in a labor market increasingly built on remote work and digital credentials, trust must be earned, not assumed.
We must evolve our screening tools and policies as quickly as the threats do. In this era of AI-powered fraud, the question isn’t just “Can they do the job?”
It’s “Who’s really doing it?”
Release Date: September 22, 2025

Alonzo Martinez
Alonzo Martinez is Associate General Counsel at HireRight, where he supports the company’s compliance, legal research, and thought leadership initiatives in the background screening industry. As a senior contributor at Forbes, Alonzo writes on employment legislation, criminal history reform, pay equity, AI discrimination laws, and the impact of legalized cannabis on employers. Recognized as an industry influencer, he shares insights through his weekly video updates, media appearances, podcasts, and HireRight's compliance webinar series. Alonzo's commitment to advancing industry knowledge ensures HireRight remains at the forefront of creating actionable compliance content.