A Russian-speaking cybercrime group has compromised more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month, leveraging off-the-shelf generative AI tools to automate and scale their operations, according to a new incident report from AWS.
Attack Campaign Overview
The campaign, which ran from mid-January to mid-February 2026, didn’t rely on sophisticated zero-day exploits. Instead, the attackers took a volume-over-finesse approach – scanning for exposed FortiGate management interfaces, trying commonly reused or weak credentials, and exfiltrating configuration files once inside.
What made this campaign notable was the depth of AI integration. AWS security researchers found AI-generated code, attack playbooks, scripts, and operational notes on compromised infrastructure, with AI-assisted tooling embedded across the entire attack chain rather than used for occasional scripting assistance.
“The volume and variety of custom tooling would typically indicate a well-resourced development team. Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.”
— CJ Moses, CISO at Amazon
Post-Compromise Activity
Once inside a target’s firewall, the attackers extracted:
- Administrator and VPN credentials
- Network topology details
- Firewall rules and configurations
From there, they moved laterally into Active Directory environments, dumped credentials, and targeted backup systems, including Veeam servers, suggesting preparation for ransomware deployment or data exfiltration.
Geographic Distribution
The activity was opportunistic rather than tightly targeted, with victims spread across Europe, Asia, Africa, and Latin America. AWS noted that some compromises may have enabled access to managed service providers (MSPs), amplifying downstream risk across multiple organizations.
Key Takeaways
AWS emphasizes that basic security hygiene would have stopped most of this activity:
- Keep management interfaces off the public internet
- Enforce multi-factor authentication (MFA)
- Avoid password reuse across systems
- Monitor for anomalous login attempts and configuration changes
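The last item on that checklist can be illustrated with a minimal sketch. This is not Fortinet's actual log schema or a production detection rule; the event format, threshold, and window size are assumptions for illustration only:

```python
from datetime import datetime, timedelta

# Assumed alerting parameters -- tune to your environment.
FAIL_THRESHOLD = 5              # failed logins within the window before alerting
WINDOW = timedelta(minutes=10)

def flag_bruteforce(events, threshold=FAIL_THRESHOLD, window=WINDOW):
    """Flag source IPs with `threshold` or more failed logins
    inside any sliding time window.

    `events` is an iterable of (timestamp, source_ip, outcome)
    tuples -- a simplified stand-in for parsed firewall log records.
    """
    flagged = set()
    recent = {}  # source_ip -> timestamps of recent failures
    for ts, ip, outcome in sorted(events):
        if outcome != "failed":
            continue
        bucket = recent.setdefault(ip, [])
        bucket.append(ts)
        # Discard failures that have aged out of the window.
        while ts - bucket[0] > window:
            bucket.pop(0)
        if len(bucket) >= threshold:
            flagged.add(ip)
    return flagged
```

Fed a stream of parsed log entries, the function returns the set of offending source addresses. In practice this logic would live in a SIEM correlation rule rather than a standalone script, but the principle is the same: credential-stuffing campaigns like this one leave a high-volume failure signature that simple thresholding can catch.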
This incident demonstrates how generative AI is lowering the barrier to entry for cybercriminals. A small group, potentially even a single actor, can now produce tooling that previously required a well-resourced development team. As AI capabilities continue to evolve, defenders must prioritize fundamentals and assume that sophisticated-looking attacks may originate from increasingly small operations.
Source: The Register
