Fake AI Applicant Case in Japan Raises North Korea Job Scam Concerns
Artificial intelligence has made online hiring quicker and easier to manage. At the same time, it has opened the door to new security risks for companies relying on remote interviews. A recent case in Japan shows how AI can create highly convincing fake identities during the hiring process.
Earlier this month, a suspected deepfake applicant appeared during a virtual interview with a Tokyo IT firm. The interaction was brief but raised concerns about possible links to North Korean IT workers seeking jobs overseas.
The case has renewed focus on identity verification and the challenges posed by AI impersonation.
The incident was first reported by the Yomiuri Shimbun on Thursday. According to the report, the applicant joined the interview under a false identity.
During the call, he claimed to have been raised in the United States and insisted on working remotely. When told the job required in-person attendance, he abruptly ended the call. The exchange lasted about two minutes.
Despite the short duration, several unusual elements stood out. The applicant submitted an English résumé through a Japanese platform, listing experience at a major company and claiming native-level Japanese proficiency.
Identity Linked to a Real Tokyo IT Executive

Freepik | While AI makes recruitment more efficient, it also increases the risk of identity fraud, as shown in a Tokyo deepfake incident.
A closer review revealed something unexpected. The applicant’s background and profile aligned closely with those of Kenbun Yoshii, the CEO of a Tokyo-based IT company.
The discovery raised immediate suspicion. Publicly available photographs and videos of Yoshii appeared to have been used to construct the fake identity shown during the interview.
Yoshii later spoke publicly about the incident, describing it as “creepy and frightening.” He also said he had been contacted about similar job applications submitted under his name to other companies.
The pattern pointed to an organized operation rather than a one-off incident.
Deepfake Indicators Found in Analysis
Multiple organizations reviewed the interview footage to determine whether artificial intelligence had been used to alter the video.
Among those examining the material were the identity management company Okta and a Tokyo-based startup focused on deepfake detection. Their analysis found strong signs that the video had been manipulated using AI.
During the analysis, investigators noted several technical inconsistencies, including:
- Unnatural boundaries around the hairline
- Brief moments where the eyes appeared slightly misaligned
- Lip movements that did not perfectly match the audio
These visual inconsistencies are often associated with deepfake technology, where AI systems modify facial features or synchronize speech with pre-generated visuals.
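One of the listed indicators, lip movements out of step with the audio, can be illustrated with a toy heuristic: correlate a per-frame mouth-opening signal with the audio amplitude, and flag the clip when the two track poorly. This is a minimal sketch for illustration only; the function names, signals, and threshold are assumptions, not the method any of the firms cited here actually used.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lip_sync_suspicious(mouth_open, audio_level, threshold=0.5):
    """Flag a clip when mouth movement and audio loudness are weakly correlated.

    mouth_open:  per-frame mouth-opening ratio (e.g. from a face-landmark tracker)
    audio_level: per-frame audio amplitude for the same frames
    The 0.5 threshold is an illustrative placeholder, not a calibrated value.
    """
    return pearson(mouth_open, audio_level) < threshold
```

Real detectors combine many such signals (boundary artifacts, blink patterns, frequency-domain traces) with learned models; a single correlation check like this would be trivial to evade.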
Security specialists believe the incident may be connected to a wider pattern already observed in other regions. According to Okta, more than 6,500 similar cases have been documented globally in recent years.
In many of these situations, individuals believed to be North Korean IT workers used fabricated identities to secure remote employment at foreign companies. The workers often pose as software developers or technical specialists.
Once hired, the income earned from these positions is sometimes transferred back to North Korea. Analysts warn that such funds may contribute to government programs, including weapons development.
Evidence of Deepfake Experiments
Another cybersecurity firm, Trend Micro, has also examined the growing use of artificial intelligence in recruitment fraud. Its research found indications that North Korean cyber groups have experimented with deepfake tools to strengthen their job application tactics.
The investigation revealed that these groups frequently produce large batches of fabricated résumés. Many of these documents list advanced technical roles such as full-stack engineering positions.
By combining realistic résumés with AI-generated video identities, attackers can create highly convincing job applicants capable of passing initial recruitment screening.
Security researchers note that such activities initially focused on companies in the United States and Europe. However, the pattern now appears to be expanding into Japan.
Experts warn that businesses conducting remote hiring interviews face increasing difficulty verifying the true identity of applicants. Deepfake technology has advanced quickly, and many visual manipulations are no longer easy to detect during a normal video call.
Without specialized detection tools, even experienced recruiters may struggle to identify subtle irregularities.
Measures Experts Recommend for Recruiters

Freepik | Firms should adopt multi-factor checks and face-to-face meetings to ensure hiring integrity.
Cybersecurity experts highlight the growing need for stronger identity checks in hiring. As remote interviews remain widespread, particularly in the technology sector, improved screening has become essential.
Recommended safeguards include:
- Multi-factor identity verification for applicants
- In-person interviews when possible
- Detailed technical questioning to test claimed expertise
- Additional background checks for remote candidates
Using multiple layers of verification makes it harder for impersonators to slip through the hiring process.
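The layered approach above can be sketched as a simple all-layers-must-pass gate: an applicant clears screening only if every independent safeguard holds. The check names and record fields below are illustrative placeholders, not any vendor's actual API.

```python
def screen(applicant, checks):
    """An applicant clears screening only when every independent layer passes.

    `checks` is a list of callables, one per safeguard. Because the layers
    are independent, an impersonator must defeat all of them, not just one.
    """
    return all(check(applicant) for check in checks)

# Illustrative layers mirroring the recommendations (placeholder logic only).
checks = [
    lambda a: a.get("id_verified", False),       # multi-factor identity verification
    lambda a: a.get("live_interview", False),    # in-person or verified-live interview
    lambda a: a.get("technical_vetted", False),  # detailed technical questioning
    lambda a: a.get("background_ok", False),     # additional background check
]
```

The design point is conjunction: weakening any single layer (for example, waiving the live interview for "remote-only" candidates, as in the Tokyo case) reopens the gap the other layers cannot cover.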
The suspected deepfake interview involving Kenbun Yoshii’s identity shows how artificial intelligence can be used to exploit weaknesses in remote recruitment systems. It also reflects a wider global trend involving fraudulent applicants linked to North Korean IT worker operations.
As remote work continues to grow, companies are under more pressure to improve hiring security. Relying on video interviews alone may no longer be enough to confirm someone’s identity.
Stronger verification systems and more rigorous technical screening are likely to become standard as organizations respond to the rise of AI-driven impersonation.