North Korea’s Growing AI Arsenal: ChatGPT Sparks New Cybercrime Fears

February 24, 2025

North Korea appears to be ramping up its use of ChatGPT and other generative AI tools, raising concerns about how this technology could fuel the regime's notorious cybercrime activities. A recent video from the North Korean propaganda outlet Voice of Korea showed scholars at Kim Il Sung University using GPT-4 in training exercises, highlighting efforts to adopt advanced AI capabilities domestically. This development follows OpenAI's decision to ban accounts linked to North Korean users after discovering misuse of the platform to create fake job profiles and resumes as part of a wider regime-led scheme to secure remote employment under false identities.

Pyongyang has long been accused of using these deceptive hiring practices to funnel wages toward its nuclear weapons program. In January, Google's Threat Intelligence Group also revealed that North Korean hackers had been using Google’s Gemini chatbot for military espionage and cryptocurrency theft. With generative AI lowering language barriers and reducing the costs associated with executing scams, cybersecurity experts like Professor Kim Seung-joo of Korea University warn that North Korea’s AI adoption will further embolden its cybercrime operations.

As concerns mount, a debate persists: should AI platforms impose stricter restrictions to prevent state-backed misuse, or would such measures stifle innovation and legitimate international collaboration? As AI becomes more accessible globally, the challenge of balancing openness with security grows more pressing.

Source: The Korea Herald

Conceptual illustration of ChatGPT as a digital Trojan horse, symbolizing its use by hackers to infiltrate global financial systems and conduct cyberattacks.