OpenAI Confirms ChatGPT Exploited by Chinese and Russian Threat Actors for Cyberattacks

OpenAI has confirmed that Chinese and Russian state-affiliated threat actors have been exploiting ChatGPT to support malicious cyber and influence operations, one of the first documented cases of adversaries weaponizing generative AI for offensive cyber activity.

Chinese APT Groups Leverage ChatGPT for Cyber Operations

According to OpenAI’s investigation, Chinese threat actors associated with known cyber espionage units operated ChatGPT accounts to generate, translate, and refine phishing emails and malicious code components. These actors reportedly sought to optimize spear-phishing campaigns, automate technical reconnaissance, and craft convincing lures targeting defense, technology, and policy sectors worldwide.

The coordinated activity allegedly formed part of China’s wider cyber-espionage ecosystem, in which AI tools are being integrated into both defensive and offensive cyber strategies. OpenAI stated that the unauthorized accounts were linked to APT (Advanced Persistent Threat) networks commonly tracked by Western intelligence agencies.

Security analysts say the activity is among the earliest confirmed instances of state-linked Chinese hackers using generative AI directly in tactical cyber operations. The actors used GPT-based models to:

  • Draft professional-sounding English communications for social engineering
  • Improve malware documentation for internal collaboration
  • Explore vulnerabilities by simulating attack scenarios

While OpenAI emphasized that its tools were not used to hack systems directly, the capabilities were clearly leveraged to accelerate attack workflows and lower the barrier to entry for sophisticated operations.

Russian Rybar Network Operated AI-Powered Content Farm

In addition to Chinese hackers, OpenAI identified a Russian propaganda cluster centered around the “Rybar” network (Рыбарь, meaning “fisherman” in Russian). This group used ChatGPT to mass-produce multilingual posts, pro-Russian narratives, and social media comments distributed across X (formerly Twitter) and Telegram channels.

The operation, internally codenamed “Fish Food” by OpenAI investigators, demonstrated the adaptability of generative AI to disinformation campaigns. While Rybar itself is known as a high-profile military analysis channel aligned with the Russian Ministry of Defense, the associated network created dozens of anonymous accounts masquerading as users from different countries.

OpenAI’s report also referenced another operation nicknamed “Date Bait,” in which scam advertisements were promoted using AI-generated content targeting global audiences through paid placements.

Growing Concern Over AI Weaponization

The revelations highlight an expanding frontier in cybersecurity where threat actors blend AI with traditional attack mechanisms. Experts warn that generative AI platforms can compress attacker timelines and skill requirements by speeding up tasks such as:

  • Creating convincing phishing content in multiple languages
  • Generating fake profiles and personas
  • Launching multilingual disinformation campaigns
  • Accelerating malware development workflows

OpenAI has suspended multiple accounts tied to these operations and reaffirmed that it continuously improves abuse detection and content moderation systems using audit trails, behavioral analysis, and adversarial training. The company also collaborates with governments and private cybersecurity partners to detect state-linked exploitation of AI models.
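OpenAI has not published the internals of its detection systems, but the behavioral-analysis approach described above can be illustrated with a minimal, hypothetical sketch: flag accounts whose usage pattern suggests automated abuse, such as an unusually high request volume or a large share of near-identical prompts (a common signature of content farms). The thresholds and the `flag_suspicious_accounts` helper below are illustrative assumptions, not OpenAI's actual method.

```python
from collections import Counter

def flag_suspicious_accounts(events, rate_threshold=100, repeat_ratio=0.8):
    """Flag accounts whose usage suggests automated abuse.

    events: list of (account_id, prompt) tuples.
    An account is flagged if it exceeds rate_threshold requests, or if
    a large fraction of its prompts are duplicates of one another
    (a rough proxy for mass-produced propaganda or spam content).
    Thresholds here are illustrative, not derived from any real system.
    """
    by_account = {}
    for account, prompt in events:
        by_account.setdefault(account, []).append(prompt.strip().lower())

    flagged = set()
    for account, prompts in by_account.items():
        # Volume signal: raw request count over the observation window.
        if len(prompts) > rate_threshold:
            flagged.add(account)
            continue
        # Repetition signal: share of the single most common prompt.
        most_common_count = Counter(prompts).most_common(1)[0][1]
        if len(prompts) >= 5 and most_common_count / len(prompts) >= repeat_ratio:
            flagged.add(account)
    return flagged
```

A real pipeline would combine many more signals (account metadata, embedding-level prompt similarity, cross-account coordination), but even this two-signal heuristic shows why mass-produced influence content is detectable at the platform level: the economics of content farming push operators toward exactly the repetitive patterns such checks target.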

This incident serves as a critical reminder of the dual-use nature of AI technology and the evolving interplay between artificial intelligence and global security threats. As AI tools become more powerful and accessible, maintaining ethical safeguards and robust detection systems will be essential in preventing their weaponization by hostile actors.