A North Korean hacking group attempted to use fake IDs built with AI-generated deepfake photos in cyberattacks against South Korean targets, marking one of the first confirmed cases of AI being weaponized for identity fraud in the South, a South Korean security firm said on September 15.
Deepfakes in phishing attempts
Genians, a South Korean cybersecurity company, revealed in a report on September 15 that the Kimsuky group, believed to be backed by Pyongyang, launched hacking attempts in South Korea in July using deepfake images created with generative AI. Investigators traced how the hackers generated fake profile photos with ChatGPT and used them to fabricate a military employee ID.
Although models such as ChatGPT block direct requests to create ID cards, the hackers evaded those safeguards with prompts designed to exploit loopholes in the system.
They then attached the forged images to phishing emails titled “Request for ID card draft review” and used a deceptive domain, “mli.kr,” that mimicked the official South Korean military address “mil.kr.”
Targeting researchers and journalists
Kimsuky also targeted civilian researchers and journalists specializing in North Korea in July. The group sent emails that appeared to introduce a new AI tool for managing email accounts, but actually contained malware.
The case confirmed that hackers targeting South Korea are now using AI-generated fake identities and photos in cyberattacks.
![A ChatGPT logo is seen in this illustration taken on Jan. 22. [REUTERS]](https://koreajoongangdaily.joins.com/data/photo/2025/09/15/716aeb80-7c35-408a-b489-41189687b7ea.jpg)
Genians’ report stated that its analysis of technical indicators, such as malware and infrastructure, combined with contextual signs like targets, language patterns and past activity, revealed that “multiple deepfake cases correlated with threat indicators previously linked to the Kimsuky group.”
AI in global hacking efforts
AI-driven hacking attempts are on the rise worldwide, according to IT industry sources.
In August, Anthropic reported that North Korean hackers used its AI model, Claude, to fraudulently secure remote jobs at Fortune 500 technology firms, creating fake credentials and receiving coding assistance during the application process.
OpenAI also said in June that North Korean hackers had generated fake résumés and cover letters using AI in an attempt to create false identities for cyber operations.
BY KIM MIN-JEONG [paik.jihwan@joongang.co.kr]