AI and Cybercrime: How Generative AI is Empowering Hackers - Even Teenagers
February 28, 2025
The arrest of three Japanese teenagers for hacking into Rakuten Mobile’s system using AI-assisted programming has raised alarms about the dangers of AI knowledge in the wrong hands. The students, aged 14 to 16, used ChatGPT to enhance their self-made hacking program, gaining unauthorized access to 220,000 accounts and allegedly profiting through cryptocurrency transactions.
This case highlights a growing concern: as AI becomes more accessible, its potential for cybercrime also expands. On one hand, AI is a tool for innovation, automation, and progress. On the other, it provides an unprecedented advantage to those with malicious intent, reducing barriers to cybercriminal activities such as hacking, identity theft, and fraud.
Supporters of AI argue that the technology itself is not inherently harmful - what matters is how it is used. Ethical AI education and stronger security measures could help mitigate risks. However, critics warn that generative AI is making sophisticated cybercrime easier even for untrained individuals, potentially ushering in a new era of AI-powered digital crime.
This case forces us to ask difficult questions: How do we regulate AI without stifling innovation? Should AI providers place stricter restrictions on how their models can be used? And as hacking becomes more AI-driven, how can cybersecurity efforts keep up with an ever-evolving digital threat landscape?
Source: The Japan News