When Deepfakes Apply for Jobs: The Identity Verification Crisis

When Pindrop Security posted a job listing for a senior software engineer in 2024, one applicant stood out. The candidate, identified internally as “Ivan X,” had an impressive CV, strong technical credentials, and performed well during initial screening. There was just one problem. During the video interview, a recruiter noticed something subtle but unsettling: Ivan's facial expressions lagged behind his words by a fraction of a second. Further analysis revealed that Ivan was not a person at all. He was a real-time deepfake, a synthetic face layered over a live webcam feed, paired with a cloned voice driven by generative AI. Eight days later, Ivan resurfaced, applying through a different recruiter for the same role.
The deepest irony? The position Ivan applied for was on Pindrop's deepfake detection team.
“Deepfake candidates are infiltrating the job market at a crazy, unprecedented rate,” Vijay Balasubramaniyan, Pindrop's co-founder and CEO, told Fortune in July 2025. His company, which generates more than $100 million in annual recurring revenue from voice authentication technology, discovered that roughly 16.8 per cent of applicants to its own job postings were fake. That is not a rounding error. That is nearly one in six candidates submitting fabricated identities, synthetic voices, and AI-generated personas to a company whose entire business is detecting exactly this kind of fraud. While Ivan claimed to be located in western Ukraine, his IP address placed him thousands of miles to the east, in what appeared to be a possible Russian military facility near the North Korean border.
The implications extend far beyond one company's recruitment pipeline. Deepfake employment fraud has evolved from a curiosity into a systemic threat that touches national security, corporate espionage, financial crime, and the fundamental viability of remote work as we know it. The question is no longer whether fake candidates can fool employers. They already can, and they already have, at scale. The real question is what happens next.
The Numbers That Should Keep Every Hiring Manager Awake
The data on deepfake hiring fraud has moved from alarming to genuinely frightening in the space of two years. Gartner, the technology research and advisory firm, predicted in July 2025 that by 2028, one in four job candidate profiles worldwide will be fake. Not merely embellished or exaggerated, but entirely fabricated: AI-generated faces, synthetic voices, stolen credentials, and fictional professional histories assembled into convincing digital personas. A Gartner survey of 3,000 job candidates found that 6 per cent already admitted to participating in interview fraud, either by posing as someone else or by having another person interview on their behalf.
Resume Genius surveyed 1,000 hiring managers across the United States in January 2025 and found that 17 per cent had encountered candidates using deepfake technology to alter their video interviews. That figure had risen from just 3 per cent the previous year, representing a nearly sixfold increase in twelve months. Among millennial hiring managers, the encounter rate was even higher, at 24 per cent. The survey also revealed that 76 per cent of hiring managers reported that AI has made it significantly harder to detect impostor applicants, while 35 per cent had come across AI-created portfolio projects or creative work submitted as genuine accomplishments.
Pindrop's 2025 Voice Intelligence and Security Report documented a 1,300 per cent surge in deepfake fraud attempts across its monitoring systems in 2024, jumping from an average of one incident per month to seven per day. The report also noted a 756 per cent year-over-year increase in deepfake or replayed voices across enterprise telephone calls and a 704 per cent rise in face-swap attacks. Across all industries monitored, AI now drives 42.5 per cent of fraud attempts, with nearly one in three considered successful, according to data from Signicat cited in Pindrop's analysis.
Checkr, the background screening platform, surveyed 3,000 American managers and found that 59 per cent had personally suspected a candidate of using AI to misrepresent themselves during some stage of the hiring process. Nearly two-thirds, 62 per cent, believed that job seekers are now better at faking their identities with AI than human resources teams are at detecting them. Only 19 per cent said they were “extremely confident” their current hiring processes could catch a fraudulent applicant. The financial toll is already measurable: nearly one in three respondents said their companies had experienced delayed projects, missed revenue targets, or compliance issues as a direct result of fraudulent hires, with almost 25 per cent reporting losses exceeding $50,000 in the past year.
HYPR's 2025 State of Passwordless Identity Assurance Report, based on insights from 750 IT security decision-makers surveyed by S&P Global Market Intelligence's 451 Research division, found that a staggering 95 per cent of organisations had experienced some form of deepfake attack in the preceding year, including altered static imagery (50 per cent), manipulated live audio and video (44 per cent), and manipulated recorded audio and video (41 per cent). Nearly 40 per cent had suffered a generative AI-related security incident.
Experian's 2026 Future of Fraud Forecast, released in January 2026, identified deepfake job candidates as one of the five most significant fraud threats facing businesses this year. The report warned that “employers will unknowingly onboard individuals who aren't who they say they are, giving bad actors access to sensitive systems.” According to Experian's data, nearly 60 per cent of companies reported an increase in fraud losses between 2024 and 2025, while 72 per cent of business leaders identified AI-enabled fraud and deepfakes as among their top operational challenges. Kathleen Peters, chief innovation officer for fraud and identity at Experian North America, warned that the challenge lies in distinguishing between legitimate and malicious AI: “We want to let the good agents through to provide convenience and efficiency, but we need to make sure that doesn't accidentally become a shortcut for bad actors.”
Seventy Minutes to a Fake Identity
Palo Alto Networks' Unit 42 research team demonstrated in April 2025 just how accessible the underlying technology has become. A single researcher with no prior image manipulation experience, limited deepfake knowledge, and a five-year-old computer equipped with an RTX 3070 graphics card created a convincing synthetic identity capable of passing a video interview in just 70 minutes. The researcher used only freely available tools, an AI search engine, a passable internet connection, and AI-generated face images from thispersonnotexist.org, which permits the use of its generated faces for personal and commercial purposes. With these resources alone, multiple distinct identities were generated, each capable of appearing on a live video call in real time.
The message was stark: the barrier to creating a fake candidate has collapsed to essentially zero. Consumer-grade applications now allow attackers to overlay a realistic face onto a live webcam feed, controlling eye blinks and lip movements with simple keystrokes. Some software routes pre-rendered video through a virtual webcam. With just a 30-second audio sample, open-source models can replicate tone, accent, pacing, and filler-word habits convincingly enough to deceive an untrained listener. A 2025 iProov study found that only 0.1 per cent of participants could correctly identify all fake and real media shown to them, and humans can identify high-quality deepfake videos correctly only 24.5 per cent of the time.
The economics of deepfake creation have inverted entirely. While the average cost of producing a deepfake has dropped to approximately $1.33 according to industry estimates, the average cost of a deepfake fraud incident to the targeted organisation now stands at roughly $500,000, with large firms reporting losses of up to $680,000 per incident. Recovery costs average $1.5 million per major breach, with operational downtime stretching as long as seven days.
When a Cybersecurity Firm Falls for Its Own Threat
Perhaps no incident better illustrates the sophistication of deepfake employment fraud than what happened at KnowBe4, a prominent cybersecurity awareness training company, in July 2024. The firm needed a software engineer for its internal IT AI team. It posted the job, received applications, conducted four video conference interviews, ran background checks, verified references, and extended an offer. Everything checked out. The new hire was sent a company-issued Mac workstation.
The moment that laptop was received, it began loading malware.
KnowBe4's security systems detected the anomalous activity within hours. The company's IT team locked down the restricted onboarding account within 25 minutes of the first alert. Subsequent investigation, conducted in cooperation with the FBI and cybersecurity firm Mandiant, revealed that the hired individual was a North Korean operative using a stolen American identity and an AI-enhanced stock photograph. The operative had connected through a “laptop farm,” a physical location in the United States where accomplices host corporate devices, allowing overseas workers to connect through VPNs and appear to be working domestically.
No KnowBe4 systems were breached, and the company disclosed the incident publicly, making it one of the first organisations to openly discuss being victimised by the scheme. As the company later noted, it continues to receive applications from North Korean fake employees for its remote programmer positions “all the time,” and sometimes they constitute the bulk of applicants received. The incident helped KnowBe4 identify the location of the laptop farm and the American person assisting the North Korean programme, all of which was turned over to the FBI.
The KnowBe4 incident was not an isolated event. It was a symptom of one of the most extensive state-sponsored employment fraud operations in history.
Pyongyang's Remote Workforce
The scale of North Korea's remote IT worker infiltration programme is staggering. CrowdStrike, the cybersecurity firm that tracks the operation under the designation “Famous Chollima,” reported in August 2025 that it had investigated more than 320 incidents over the previous twelve months in which North Korean operatives gained fraudulent employment at Western companies. That figure represented a 220 per cent increase from the year before. Adam Meyers, senior vice president of CrowdStrike's counter-adversary operations, told Fortune that his team was investigating roughly one new incident every single day.
The scheme operates with industrial efficiency. North Korea trains young men in technology, sends them to elite schools in and around Pyongyang, and then deploys them in teams of four or five to locations in China, Russia, Nigeria, Cambodia, and the United Arab Emirates. From these outposts, the workers use AI-generated resumes, deepfake video technology, stolen American identities, and VPN infrastructure to apply for remote software development positions at Western companies. Once hired, they funnel their salaries back to the regime. Individual operatives can earn up to $300,000 annually, according to FBI Assistant Director Brett Leatherman, collectively generating hundreds of millions of dollars for designated entities including the North Korean Ministry of Defence.
The US Treasury Department, State Department, and FBI collectively estimate the scheme has generated hundreds of millions of dollars annually since 2018. The United Nations has placed the figure at between $250 million and $600 million per year. Fortune reported in April 2025 that thousands of North Korean operatives had infiltrated Fortune 500 companies, including what security experts described as “nearly every major company” in the technology sector.
The US Department of Justice has responded with escalating enforcement. In June 2025, the DOJ announced coordinated nationwide actions that included two indictments, an arrest, searches of 29 suspected laptop farms across 16 states, seizure of 29 financial accounts, and the takedown of 21 fraudulent websites. The indictment named Zhenxing “Danny” Wang, a New Jersey resident, along with six Chinese nationals and two Taiwanese nationals who had allegedly stolen the identities of approximately 80 American citizens and provided them to North Korean operatives, enabling employment at several Fortune 500 companies and generating over $5 million in illicit revenue. Between 10 and 17 June, the FBI executed searches of 21 premises across 14 states, seizing approximately 137 laptops. In November 2025, the DOJ announced further actions, including five guilty pleas and more than $15 million in civil forfeitures.
In one particularly alarming case, North Korean IT workers gained access to International Traffic in Arms Regulations (ITAR) data after being hired by a California-based defence contractor developing AI-powered military equipment. The Department of State has offered rewards of up to $5 million for information supporting efforts to disrupt North Korea's illicit financial operations.
Microsoft, which tracks the threat under the designation “Jasper Sleet,” disclosed that it had suspended 3,000 Outlook and Hotmail accounts created by the operatives as part of its disruption efforts. CrowdStrike's investigations revealed that the operatives frequently hold three or four remote jobs simultaneously, using generative AI to manage daily communications across multiple employers, responding in Slack, drafting emails, and completing coding tasks while routing salaries to accounts controlled by the North Korean regime. The operation has expanded geographically: US law enforcement has disrupted domestic laptop farms, so operatives are now pivoting to Western Europe, with CrowdStrike identifying new laptop farm infrastructure in Romania and Poland.
The Security Engineer Who Caught the Fake Twice
The experience of Dawid Moczadlo, co-founder of Vidoc Security Lab, a San Francisco-based vulnerability management startup, provides a particularly vivid illustration of how these operations work in practice. In early 2025, Vidoc posted a developer position on a Polish job board. One applicant claimed to live in Poland and had a Polish name, but on phone calls with Moczadlo and his co-founder, the candidate had a strong Asian accent.
“We noticed it after the third or fourth step of our interview process,” Moczadlo told The Register. “His camera was glitchy, you could see a person, but the person wasn't moving like a person. We spoke internally about him, and we thought, OK, this person is not real.” The applicant was rejected.
Two months later, it happened again. A second candidate, calling himself “Bratislav,” reached out through LinkedIn. He had approximately 500 connections, nine years of experience, and a computer science degree from the University of Kragujevac in Serbia. Everything appeared legitimate. But once again, the candidate had a strong Asian accent that contradicted his claimed Eastern European background, and his on-camera appearance looked subtly artificial. His way of speaking sounded, as Moczadlo described it, like he was reading ChatGPT-generated bullet points.
This time, Moczadlo was ready. Having faced scepticism from peers after the first incident, he began recording the interview. He then deployed a simple but effective test: he asked the candidate to place his hand in front of his face. The candidate refused. The reason was straightforward. Real-time deepfake face-swap filters cannot handle occlusion: the moment a hand passes in front of the face, the synthetic overlay breaks down, exposing the operator's true appearance underneath.
The Pragmatic Engineer newsletter later documented the case in detail, and Palo Alto Networks' Unit 42 team confirmed that the indicators matched known tactics attributed to North Korean IT worker operations. Several observers noted that the AI-generated face used by “Bratislav” bore an uncanny resemblance to a prominent Polish politician named Slawomir Mentzen.
Collateral Damage to Legitimate Candidates
The proliferation of deepfake candidates is not only a problem for employers. It is inflicting real damage on legitimate job seekers. As companies tighten screening and add layers of verification, genuine applicants face longer hiring timelines, more invasive identity checks, and an atmosphere of suspicion that can be profoundly alienating.
There is an additional, more insidious effect. Legitimate candidates may not receive callbacks because hiring managers suspect they might be deepfakes, and the applicants never learn why. As one security consultant quoted by CNBC observed, “you don't even get that call, and you don't know why you didn't get the call.” For workers in regions frequently associated with fraud operations, the stigma can be particularly acute, creating a form of geographic discrimination that has nothing to do with individual merit.
The trust deficit runs in both directions. Gartner's survey of job candidates found that only 26 per cent trusted AI to evaluate them fairly, even though 52 per cent believed AI was already screening their applications. Only half of candidates said they believed the jobs they were applying for were even legitimate. This mutual erosion of trust between employers and candidates represents a fundamental breakdown in the social contract of hiring.
What Employers Must Now Do
The collapse of trust in remote hiring has forced organisations to rethink identity verification from the ground up. The traditional hiring process, built around CVs, reference checks, and video interviews, was designed for a world where people were, at minimum, who they appeared to be on camera. That assumption no longer holds.
The countermeasures emerging across the industry fall into several overlapping categories, and the consensus among security experts is that no single approach is sufficient. Effective defence requires layering multiple verification methods across the entire hiring lifecycle.
Biometric Liveness Detection
The first and most technically sophisticated line of defence involves biometric liveness detection, technology that confirms a person on camera is a real, living human rather than a photograph, pre-recorded video, deepfake overlay, or mask. Active liveness detection requires the user to perform specific actions, such as blinking, turning their head, or smiling, to verify physical presence. Passive liveness detection operates invisibly in the background, analysing subtle cues like skin texture, micro-movements, light reflection patterns, and depth that distinguish genuine human presence from synthetic representations.
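To make the distinction concrete, the sketch below shows how an active liveness check might be orchestrated in principle: the system issues a randomly chosen challenge and a caller-supplied detector confirms the action was performed on camera within a time limit. The challenge list, function names, and stub detector are illustrative assumptions, not any vendor's actual API; a passive liveness system would replace the explicit challenge with background analysis of texture, depth, and micro-movement cues in the same video stream.

```python
import random
import time
from dataclasses import dataclass
from typing import Callable

# Hypothetical challenge set for an active liveness check. Real products
# (Pindrop, Veriff, Incode, Innovatrics, etc.) use proprietary challenges.
CHALLENGES = ["blink_twice", "turn_head_left", "smile", "raise_hand_to_face"]

@dataclass
class ChallengeResult:
    challenge: str
    completed: bool
    elapsed_seconds: float

def run_active_liveness_check(
    detect_action: Callable[[str, float], bool],
    timeout_seconds: float = 5.0,
) -> ChallengeResult:
    """Issue one randomly chosen challenge and ask the caller-supplied
    detector to confirm it was performed on camera within the timeout.
    `detect_action` is an assumed callback that watches the video feed."""
    challenge = random.choice(CHALLENGES)
    start = time.monotonic()
    completed = detect_action(challenge, timeout_seconds)
    elapsed = time.monotonic() - start
    return ChallengeResult(challenge, completed and elapsed <= timeout_seconds, elapsed)

if __name__ == "__main__":
    # Stub detector: pretends every challenge was observed after a short delay.
    def fake_detector(challenge: str, timeout: float) -> bool:
        time.sleep(0.1)  # stand-in for analysing webcam frames
        return True

    result = run_active_liveness_check(fake_detector)
    print(f"Challenge '{result.challenge}' passed: {result.completed} "
          f"({result.elapsed_seconds:.2f}s)")
```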
Companies such as Pindrop, Veriff, Incode, and Innovatrics now offer specialised tools for this purpose. Pindrop's Pulse for Meetings product integrates directly into Zoom, Microsoft Teams, and Webex sessions, alerting recruiters the instant it detects a deepfake and monitoring liveness continuously throughout the conversation. According to industry data, active liveness detection has been shown to reduce fraud by up to 91 per cent, particularly against deepfake attacks.
Gartner has projected that by 2026, more than 30 per cent of identity verification attacks will involve AI-generated media, and that 30 per cent of enterprises will consider identity verification and authentication solutions unreliable in isolation. The implication is clear: standalone ID checks are no longer enough.
Controlled Unpredictability in Interviews
Beyond technological solutions, security experts recommend what practitioners call “controlled unpredictability,” introducing real-time, unrehearsed elements into interviews that are difficult for pre-programmed deepfake systems to handle. This includes asking candidates to perform physical actions on camera (such as the hand-over-face test that exposed “Bratislav”), requesting sudden changes in camera angle or lighting, asking candidates to read randomly generated sentences aloud, and shifting from rehearsed questions to improvisational follow-ups that force context-dependent reasoning.
The logic is simple: deepfake systems excel at reproducing scripted, predictable interactions. They struggle with the spontaneous and the physical. By anchoring the interview in the real world, even for a few seconds, recruiters can expose the synthetic.
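As a rough illustration of how controlled unpredictability could be operationalised, the following sketch generates an unrehearsed read-aloud sentence and a random physical prompt for the interviewer to issue live. The word pools and prompts are invented for the example and are not drawn from any published protocol.

```python
import random
import secrets

# Illustrative word pools; any sufficiently random, pronounceable phrase works.
ADJECTIVES = ["quiet", "orange", "rapid", "hollow", "bright", "narrow"]
NOUNS = ["harbour", "ledger", "compass", "lantern", "orchard", "granite"]
VERBS = ["crosses", "measures", "repairs", "follows", "signals", "weighs"]

PHYSICAL_PROMPTS = [
    "Please pass your hand slowly in front of your face.",
    "Please turn your head to look over your left shoulder.",
    "Please pick up a nearby object and hold it beside your cheek.",
    "Please stand up briefly and adjust your camera angle.",
]

def random_read_aloud_sentence() -> str:
    """Build a nonsense-but-pronounceable sentence the candidate must read live;
    a pre-rendered or scripted deepfake cannot anticipate it."""
    return (f"The {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
            f"{random.choice(VERBS)} the {random.choice(ADJECTIVES)} "
            f"{random.choice(NOUNS)} at {secrets.randbelow(59):02d} past "
            f"{secrets.randbelow(12) + 1}.")

def interview_challenge() -> dict:
    """Return one spoken and one physical challenge for the interviewer to issue."""
    return {
        "read_aloud": random_read_aloud_sentence(),
        "physical": random.choice(PHYSICAL_PROMPTS),
    }

if __name__ == "__main__":
    print(interview_challenge())
```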
Hiring managers are also adopting what has been termed “forensic questioning,” a technique that relies on probing “why” and “how” follow-ups that force candidates to explain their reasoning process rather than recite polished answers. Checkr's survey found that 59 per cent of hiring managers now suspect candidates of using AI tools to misrepresent themselves during interviews, prompting this rise in probing techniques. When a candidate's technical knowledge has been generated by a large language model operating off-camera, this line of questioning tends to reveal inconsistencies quickly.
The Return of In-Person Verification
Perhaps the most significant structural shift in hiring practices has been the return of in-person interviews for critical roles. Google CEO Sundar Pichai told the Lex Fridman Podcast in June 2025 that the company was reintroducing “at least one round of in-person interviews for people,” specifically to ensure candidates had mastered “the fundamentals.” A Google spokesperson confirmed that the company had become more wary of AI abuses by candidates and had banned the use of AI tools during virtual interviews.
McKinsey began requiring at least one face-to-face meeting with potential recruits before extending offers, a policy it adopted roughly eighteen months before it became public in 2025. The consulting firm stated that “face-to-face interactions are necessary to assess the human qualities that can't be automated and that are core to how we partner with clients, things like judgement, empathy, creativity, and connection.”
Amazon, Anthropic, Cisco, and Deloitte have all implemented similar measures. Deloitte reinstated in-person interviews for its United Kingdom graduate programme. Amazon now requires candidates to formally acknowledge that they will not use unauthorised AI tools during the application process. Anthropic has an explicit ban on AI use during its application process.
The scale of this shift is considerable. A Gartner survey found that 72.4 per cent of recruiting leaders were conducting interviews in person to combat fraud. According to recruitment industry data, in-person interview requests surged from 5 per cent in 2024 to 30 per cent in 2025, a 500 per cent increase. Yet only 31 per cent of companies have implemented AI or deepfake detection software, suggesting that most organisations still rely on manual human reviews and traditional background checks that are increasingly trivial to defeat with modern AI tools.
Decentralised Digital Identity
Looking further ahead, technology companies are developing more fundamental solutions to the identity verification problem. Microsoft's Entra Verified ID uses decentralised identity standards, built on Decentralised Identifiers (DIDs) and Verifiable Credentials, to create tamper-proof digital credentials. Rather than relying on easily forged documents, the system works by having a trusted organisation, such as a government agency or previous employer, attest to a person's identity, employment history, or qualifications. The individual stores this cryptographic credential in a digital wallet and can present it to prospective employers, who can verify its authenticity without needing to collect or store sensitive personal documents.
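The underlying flow can be sketched with ordinary public-key signatures. The example below assumes the Python cryptography package and uses simplified field names rather than the full W3C Verifiable Credentials data model or Entra Verified ID's actual implementation: an issuer signs a credential, the holder stores it in a wallet, and a verifier checks the signature without ever seeing the underlying documents. Tampering with any claim breaks verification, which is what makes the credential tamper-proof in the sense described above.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer (e.g., a previous employer or government agency) ----------------
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published, e.g., via a DID document

credential = {
    "type": "EmploymentCredential",              # simplified; real VCs follow the W3C data model
    "issuer": "did:example:previous-employer",   # hypothetical DIDs
    "subject": "did:example:candidate-123",
    "claims": {"role": "Senior Software Engineer", "employed": "2019-2024"},
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)             # issuer attests to the claims

# --- Holder stores the signed credential in a digital wallet ----------------
wallet = {"credential": credential, "signature": signature}

# --- Verifier (the prospective employer) -------------------------------------
def verify_presentation(wallet_entry, issuer_pub) -> bool:
    """Check the issuer's signature without collecting any source documents."""
    data = json.dumps(wallet_entry["credential"], sort_keys=True).encode()
    try:
        issuer_pub.verify(wallet_entry["signature"], data)
        return True
    except InvalidSignature:
        return False

print("Credential valid:", verify_presentation(wallet, issuer_public_key))

# Any tampering by the holder invalidates the signature:
wallet["credential"]["claims"]["role"] = "Chief Technology Officer"
print("After tampering:", verify_presentation(wallet, issuer_public_key))
```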
Microsoft's Face Check feature, integrated into Entra Verified ID, uses Azure AI Vision for real-time biometric verification, comparing a live selfie against a government-issued photo ID while actively detecting deepfake attempts. The system does not store biometric data, addressing privacy compliance concerns while providing continuous verification capability. The approach essentially applies Zero Trust principles to hiring: never trust an identity claim, always verify it.
MajorKey Technologies launched IDProof+ in early 2025, a high-assurance identity verification solution built in collaboration with biometric identity firm authID and designed specifically for remote workforce onboarding. The product ensures each identity is matched to a live user rather than a synthetic profile, supporting zero-trust access frameworks and compliance-sensitive workflows.
What This Means for Remote Work
The deepfake hiring crisis arrives at a moment of particular tension in the debate over remote work. After years of pandemic-driven distributed employment, many companies had already been looking for reasons to mandate return-to-office policies. The deepfake threat provides a powerful new justification.
Yet framing the problem as an argument against remote work misses the deeper structural issue. The vulnerability is not remote work itself. The vulnerability is the inadequacy of the identity verification infrastructure that underpins digital interactions more broadly. Remote work simply happens to be the domain where this infrastructure failure is now most visible and most consequential.
Consider the numbers: the US Bureau of Labor Statistics reports that employers hired an average of 5 million people per month in 2024. Assuming three to six interviews per hire, Pindrop estimates that American hiring managers could face between 45 and 90 million deepfake candidate profiles per year by 2028 if Gartner's prediction holds. The economics of requiring every one of those candidates to attend in-person interviews are prohibitive. For globally distributed companies, they are impossible.
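The arithmetic behind that range is worth making explicit. A back-of-envelope calculation under the stated assumptions, five million hires per month, three to six interviews per hire, and Gartner's one-in-four fake-profile prediction, reproduces the 45 to 90 million figure.

```python
# Back-of-envelope reproduction of the 45-90 million estimate cited above.
hires_per_month = 5_000_000           # BLS average, 2024
hires_per_year = hires_per_month * 12
interviews_per_hire = (3, 6)           # assumed range
fake_share_2028 = 0.25                 # Gartner: 1 in 4 candidate profiles fake

low = hires_per_year * interviews_per_hire[0] * fake_share_2028
high = hires_per_year * interviews_per_hire[1] * fake_share_2028
print(f"Estimated fake candidate encounters per year: {low/1e6:.0f}M to {high/1e6:.0f}M")
# -> roughly 45M to 90M
```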
The more realistic path forward involves building what Veriff, the identity verification platform, has described as “trust infrastructure,” continuous, multilayered identity verification woven into every stage of the hiring process, from initial application through onboarding and beyond. This means automated ID verification at application time, deepfake detection during video interviews, biometric liveness checks at onboarding, and ongoing identity confirmation throughout employment. Advanced identity verification tools are expected to become standard best practices over the coming years, much as multi-factor authentication did before them.
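One way to picture such trust infrastructure is as a layered configuration of checks attached to each stage of the employment lifecycle, as in the rough sketch below. The stage and check names are illustrative assumptions for this example, not any vendor's product taxonomy.

```python
from dataclasses import dataclass

@dataclass
class VerificationStage:
    stage: str
    checks: list[str]

# Illustrative "trust infrastructure": layered, continuous identity checks
# rather than a one-time onboarding checkbox.
HIRING_LIFECYCLE = [
    VerificationStage("application", ["automated document ID verification",
                                      "IP and geolocation consistency"]),
    VerificationStage("video interview", ["deepfake and liveness detection",
                                          "audio-visual sync analysis",
                                          "controlled-unpredictability challenges"]),
    VerificationStage("onboarding", ["biometric liveness check",
                                     "verifiable credential check",
                                     "device and network provenance"]),
    VerificationStage("employment", ["periodic re-authentication",
                                     "behavioural anomaly detection"]),
]

for stage in HIRING_LIFECYCLE:
    print(f"{stage.stage}: {', '.join(stage.checks)}")
```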
The legal landscape is also evolving to compel these measures. The EU AI Act, with high-risk provisions taking effect in August 2026, mandates human oversight for AI systems used in employment decisions. In the United States, negligent hiring doctrine holds employers responsible when they “knew or should have known” of an employee's unfitness. Given the FBI's public warnings about deepfake employment fraud, dating back to a formal Public Service Announcement from the Internet Crime Complaint Center in June 2022, courts may increasingly conclude that employers should have anticipated synthetic identity fraud and implemented appropriate verification controls. Colorado's landmark AI Act, Senate Bill 24-205, which requires rigorous impact assessments for high-risk AI systems, has been delayed until June 2026 but signals the regulatory direction.
The financial stakes are substantial. Deloitte's Centre for Financial Services forecasts that generative AI-enabled fraud losses in the United States will climb from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32 per cent. According to Federal Trade Commission data, job scams alone jumped from $90 million in losses in 2020 to over $501 million in 2024.
The Arms Race That Will Define Hiring for a Generation
The deepfake employment threat is, at its core, an arms race. As detection technology improves, so does the sophistication of synthetic media. As employers deploy liveness checks and forensic questioning, fraud operators will develop countermeasures. The 1,300 per cent surge in deepfake fraud that Pindrop documented in 2024 will not be the peak. It will be remembered as the beginning. In the first quarter of 2025 alone, there were 179 deepfake incidents across industries, surpassing the total for all of 2024 by 19 per cent, according to Keepnet Labs data.
What distinguishes this arms race from previous cybersecurity challenges is its direct impact on something fundamental to economic life: the ability to know who you are employing. Identity, the most basic prerequisite of the employer-employee relationship, has become a contested technical problem. More than half of company leaders surveyed say their employees have received no training in identifying or addressing deepfake attacks, and about one in four leaders had little to no familiarity with the technology at all.
The organisations that will navigate this successfully are those that treat identity verification not as a one-time checkbox during onboarding but as a continuous process embedded throughout the employment lifecycle. They will combine technological solutions (biometric liveness detection, cryptographic credentials, AI-powered anomaly detection) with human judgement (forensic interviewing, in-person verification for high-access roles, ongoing performance validation). They will invest in training their recruiters to recognise the subtle tells of synthetic media, from audio-visual desynchronisation to evasive responses to physical requests. And they will collaborate across organisational boundaries, sharing threat intelligence through industry groups and Information Sharing and Analysis Centres.
The alternative, continuing to hire through processes designed for an era when people were demonstrably real on camera, is an increasingly dangerous gamble. As Pindrop's Balasubramaniyan put it to Fortune: “We are no longer able to trust our eyes and ears.”
That statement, from the CEO of a company whose business is authenticating human identity, may be the most unsettling assessment of the current moment in remote work. The person on the other side of your next Zoom interview might be exactly who they claim to be. Or they might be a synthetic construct, generated in 70 minutes on a five-year-old computer, designed to pass your screening, collect your salary, and access your systems.
The technology to tell the difference exists. The question is whether employers will deploy it before the next breach starts with an interview.
References and Sources
Fortune (2025) “There's an epidemic of fake workers, and 1 in 343 job applicants is now from North Korea, this security company says,” 2 July 2025. Interviews with Pindrop CEO Vijay Balasubramaniyan. Available at: https://fortune.com/2025/07/02/pindrop-ceo-vijay-balasubramaniyan-fake-job-applicants-north-korea/
Gartner (2025) “By 2028, 1 in 4 Candidate Profiles Will Be Fake,” survey of 3,000 job candidates, 31 July 2025. Available at: https://www.gartner.com/en/newsroom/press-releases/2025-07-31-gartner-survey-shows-just-26-percent-of-job-applicants-trust-ai-will-fairly-evaluate-them
Resume Genius (2025) “AI's Impact on Hiring in 2025,” Pollfish survey of 1,000 hiring managers, launched 8 January 2025. Available at: https://resumegenius.com/blog/job-hunting/ai-impact-on-hiring
Pindrop (2025) “2025 Voice Intelligence and Security Report,” documenting 1,300% surge in deepfake fraud. Available at: https://www.prnewswire.com/news-releases/pindrops-2025-voice-intelligence--security-report-reveals-1-300-surge-in-deepfake-fraud-302479482.html
Checkr (2025) “The Hiring Hoax: What 3,000 Managers Revealed about Hiring Fraud in 2025.” Available at: https://checkr.com/resources/articles/hiring-hoax-manager-survey-2025
HYPR (2025) “2025 State of Passwordless Identity Assurance Report,” conducted by S&P Global Market Intelligence 451 Research, surveying 750 IT security decision-makers. Available at: https://www.hypr.com/resources/report-state-of-passwordless
Experian (2026) “Future of Fraud Forecast 2026,” released 13 January 2026. Available at: https://www.experianplc.com/newsroom/press-releases/2026/experian-s-new-fraud-forecast-warns-agentic-ai--deepfake-job-can
Palo Alto Networks Unit 42 (2025) “False Face: Unit 42 Demonstrates the Alarming Ease of Synthetic Identity Creation,” April 2025. Available at: https://unit42.paloaltonetworks.com/north-korean-synthetic-identity-creation/
KnowBe4 (2024) “How a North Korean Fake IT Worker Tried to Infiltrate Us,” July 2024. Available at: https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us
CrowdStrike (2025) “2025 Threat Hunting Report,” documenting 320 Famous Chollima incidents and 220% year-over-year increase. Reported by Fortune, 4 August 2025. Available at: https://fortune.com/2025/08/04/north-korean-it-worker-infiltrations-exploded/
TechCrunch (2025) “North Korean spies posing as remote workers have infiltrated hundreds of companies, says CrowdStrike,” 4 August 2025. Available at: https://techcrunch.com/2025/08/04/north-korean-spies-posing-as-remote-workers-have-infiltrated-hundreds-of-companies-says-crowdstrike/
US Department of Justice (2025) “Justice Department Announces Coordinated, Nationwide Actions to Combat North Korean Remote Information Technology Workers' Illicit Revenue Generation Schemes,” June 2025. Available at: https://www.justice.gov/opa/pr/justice-department-announces-coordinated-nationwide-actions-combat-north-korean-remote
Microsoft Security Blog (2025) “Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations,” 30 June 2025. Available at: https://www.microsoft.com/en-us/security/blog/2025/06/30/jasper-sleet-north-korean-remote-it-workers-evolving-tactics-to-infiltrate-organizations/
The Register (2025) “I'm a security expert and I almost fell for this IT job scam,” interview with Dawid Moczadlo, 11 February 2025. Available at: https://www.theregister.com/2025/02/11/it_worker_scam
Pragmatic Engineer Newsletter (2025) “AI Fakers,” documenting Vidoc Security Lab deepfake candidate incidents. Available at: https://newsletter.pragmaticengineer.com/p/ai-fakers
CNBC (2025) “How deepfake AI job applicants are stealing remote work,” 11 July 2025. Available at: https://www.cnbc.com/2025/07/11/how-deepfake-ai-job-applicants-are-stealing-remote-work.html
Entrepreneur (2025) “Major Companies Including Google and McKinsey Are Bringing Back In-Person Job Interviews to Combat AI Cheating.” Available at: https://www.entrepreneur.com/business-news/google-mckinsey-reintroduce-in-person-interviews-due-to-ai/496041
FBI Internet Crime Complaint Center (2022) “Deepfakes and Stolen PII Utilized to Apply for Remote Work Positions,” Public Service Announcement, 28 June 2022. Available at: https://www.ic3.gov/PSA/2022/psa220628
Microsoft Security Blog (2025) “Imposter for hire: How fake people can gain very real access,” 11 December 2025. Available at: https://www.microsoft.com/en-us/security/blog/2025/12/11/imposter-for-hire-how-fake-people-can-gain-very-real-access/
Microsoft (2025) “Microsoft Entra Verified ID” overview and Face Check documentation. Available at: https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-verified-id
SHRM (2025) “How Employers Are Confronting Deepfake Interview Fraud.” Available at: https://www.shrm.org/topics-tools/news/talent-acquisition/how-employers-are-confronting-deepfake-interview-fraud
Pindrop (2025) “The Growing Trend of Deepfakes in Interviews,” documenting the “Ivan X” incident. Available at: https://www.pindrop.com/article/growing-trend-of-deepfakes-in-interviews/
Fortune (2025) “Thousands of North Korean IT workers have infiltrated the Fortune 500,” 7 April 2025. Available at: https://fortune.com/2025/04/07/north-korean-it-workers-infiltrating-fortune-500-companies/
The Hacker News (2026) “Deepfake Job Hires: When Your Next Breach Starts With an Interview,” January 2026. Available at: https://thehackernews.com/expert-insights/2026/01/deepfake-job-hires-when-your-next.html
Georgia Tech News Center (2025) “When a Video Isn't Real: Georgia Tech Alum Innovates Deepfake Detection for a New Era of Fraud,” 8 October 2025. Available at: https://news.gatech.edu/news/2025/10/08/when-video-isnt-real-georgia-tech-alum-innovates-deepfake-detection-new-era-fraud
Veriff (2025) “Fraud Index 2025,” reporting 78.65% of global respondents targeted by deepfake or AI-generated fraud. Available at: https://www.veriff.com/identity-verification/biometric-liveness-and-fraud-prevention
Deloitte Centre for Financial Services (2025) Forecasts of generative AI-enabled fraud losses reaching $40 billion by 2027. Referenced in Pindrop VISR 2025 and industry reports.
HR Dive (2025) “By 2028, 1 in 4 candidate profiles will be fake, Gartner predicts,” and “A job applicant can be deepfaked into existence in 70 minutes.” Available at: https://www.hrdive.com/news/fake-job-candidates-ai/757126/
Federal Trade Commission (2025) Data on consumer fraud losses exceeding $12.5 billion in 2024. Referenced in Experian Future of Fraud Forecast 2026.
iProov (2025) Study on human ability to detect deepfakes, finding only 0.1% of participants correctly identified all fake and real media. Referenced in Pindrop VISR 2025.
Keepnet Labs (2026) “Deepfake Statistics and Trends 2026,” documenting 179 deepfake incidents in Q1 2025. Available at: https://keepnetlabs.com/blog/deepfake-statistics-and-trends

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk