In April 2026, Japanese police arrested three teenagers, ages 14, 15, and 16, on charges related to a sustained automated attack campaign against Rakuten Mobile. The three had no formal programming education and no prior arrest records. What they had was ChatGPT and access to credential dump repositories. That was enough to run 220,000 automated attacks against a major carrier's signup infrastructure.
This case is not notable because it is sophisticated. It is notable because it is not. The barrier between curiosity and capability has effectively disappeared for a class of attacks that previously required scripting knowledge, an understanding of HTTP request flows, and enough tradecraft to avoid rate limiting. ChatGPT provided all of it on demand.
What the Attack Actually Was
Rakuten Mobile's signup flow had a vulnerability in how it validated new mobile subscriptions. The teens identified this through online forums where similar abuse patterns against other Japanese carriers had been documented. They then prompted ChatGPT to write automation scripts that could repeatedly hit the signup endpoint, each time generating a new fraudulent account using data sourced from the 3.3 billion leaked credential pairs stored on their devices.
The resulting accounts were not used for mobile service. They were immediately monetized: the fraudulent subscriptions were converted into prepaid value and liquidated as cryptocurrency. The operation netted approximately ¥7.5 million before Rakuten's fraud detection flagged the volume anomaly and the fraudulent signups began to be rejected.
The 3.3 billion credential pairs were sourced from publicly available breach dumps, not a fresh breach. Combo lists of this scale are freely distributed on Telegram channels. The teens did not steal the credentials. They downloaded them.
How ChatGPT Made This Possible
Prior to generative AI, this attack would have required the attacker to either write their own scripts or purchase them from a cybercrime forum. Writing functional HTTP automation scripts against a carrier-grade signup flow with proper session handling, retry logic, and proxy rotation requires knowledge most 14-year-olds do not have. Purchasing those scripts requires connections to criminal marketplaces and typically some track record in those communities.
ChatGPT removed both barriers. The teens described what they wanted in plain language. The model provided working code. When the initial scripts failed against Rakuten's rate limiting, the teens prompted for modifications. The model iterated with them. This is not a misuse edge case. This is the intended functionality of a coding assistant applied to a malicious task.
The Threat Model Has Changed
Security teams have long modeled their adversaries along a skill spectrum. Script kiddies sit at the low end: unsophisticated actors running existing tools they did not build. Advanced persistent threat groups sit at the high end: well-resourced, technically deep, patient. The assumption embedded in most security architectures is that low-skill actors are limited to low-impact attacks.
That assumption is now wrong. A 14-year-old with ChatGPT and a breach dump can run a multimillion-yen automated fraud campaign against a tier-one carrier. The skill floor for high-volume automated attacks has dropped to approximately zero. The only remaining barriers are willingness to act and access to a consumer AI subscription.
What this means practically: signup abuse, account takeover, and credential-stuffing defenses need to be calibrated against adversaries who can iterate on bypass techniques in real time using AI. Traditional bot detection tuned against known tool signatures is increasingly ineffective when the attacker can generate novel request patterns on demand.
What Defenders Should Do
The Rakuten attack succeeded because the signup flow lacked sufficient friction against automated abuse. The specific failure modes were not disclosed by investigators, but the pattern is consistent with missing or weak controls at several layers:
- No per-session device fingerprinting that persists across signup attempts
- Rate limiting tuned against human-speed interactions, not bot-speed bursts
- Insufficient linkage between new account creation and downstream fraud signals
- No velocity checks on prepaid credit conversion events post-signup
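The last two controls on that list can be combined into a simple post-signup velocity rule: flag any prepaid conversion that happens too soon after account creation, or any burst of conversions sharing a device signal. A minimal sketch, assuming hypothetical thresholds and a `device_key` fingerprint field (neither is from the case details):

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values come from baselining legitimate traffic.
MIN_ACCOUNT_AGE_S = 24 * 3600  # conversions within 24h of signup are suspect
BURST_WINDOW_S = 600           # 10-minute window for cross-account bursts
BURST_THRESHOLD = 5            # conversions sharing a device key in that window


@dataclass
class Conversion:
    account_created_at: float  # epoch seconds
    converted_at: float
    device_key: str            # fingerprint shared across the bot's sessions


def flag_conversions(events: list[Conversion]) -> list[Conversion]:
    """Return conversions that trip either velocity rule."""
    flagged = []
    by_device: dict[str, list[float]] = {}
    for e in sorted(events, key=lambda e: e.converted_at):
        # Rule 1: value extracted too soon after account creation.
        too_young = e.converted_at - e.account_created_at < MIN_ACCOUNT_AGE_S
        # Rule 2: too many conversions from one device key in the window.
        window = [t for t in by_device.get(e.device_key, [])
                  if e.converted_at - t <= BURST_WINDOW_S]
        window.append(e.converted_at)
        by_device[e.device_key] = window
        if too_young or len(window) >= BURST_THRESHOLD:
            flagged.append(e)
    return flagged
```

Either rule alone would have bitten an operation that created accounts and drained them at bot speed; together they force the attacker to age accounts and spread conversions, which raises cost and detection surface.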
The broader lesson is about threat model recalibration. Any internet-accessible flow that produces monetizable output needs to be designed assuming a well-resourced attacker with automation capability. The skill gap that previously made that assumption expensive to satisfy is gone. Design against it now.
Your signup and authentication flows may have the same exposure.
RedEye Security reviews application-layer abuse controls and tests signup flows against real-world bot and credential-stuffing scenarios. We find what your WAF misses.
Talk to us