A sophisticated attack campaign is exploiting user trust in artificial intelligence platforms to distribute the Atomic macOS Stealer (AMOS), representing a dangerous evolution in social engineering tactics that combines legitimate AI chatbot services with paid Google advertising.
According to research from Flare, threat actors are creating shareable AI chat links on ChatGPT and Grok containing step-by-step “installation guides” disguised as legitimate macOS troubleshooting instructions. These malicious conversations are then promoted to the top of Google search results through paid advertising campaigns.
The ClickFix Technique
The campaign leverages a technique known as “ClickFix,” in which users searching for common troubleshooting tasks, such as clearing disk space on macOS, are directed to seemingly authentic AI-generated instructions hosted on trusted domains. What makes the attack particularly effective is that it sidesteps URL- and domain-reputation defenses: because the content sits on legitimate, high-reputation sites, the link itself gives no signal that anything is wrong.
The malicious instructions are hosted on official ChatGPT and Grok websites rather than suspicious third-party domains, lending them an air of credibility that catches even security-conscious users off guard.
Devastating Impact on Victims
The infection process begins when users are tricked into opening Terminal and pasting what appears to be a harmless command. The malicious command downloads a script that repeatedly requests the user’s system password under the guise of legitimate system operations.
Once credentials are provided, the AMOS stealer immediately begins harvesting sensitive information including:
- Cryptocurrency wallet data from Electrum, Exodus, Coinbase, MetaMask, and Ledger Live
- Seed phrases and private keys enabling immediate theft of digital assets
- Browser credentials from Chrome, Safari, and Firefox
- Keychain credentials and personal files
- Active login sessions and autofill information
A persistent backdoor is also installed that survives system reboots and provides long-term remote access to the compromised machine.
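The report does not detail the persistence mechanism, but LaunchAgents are among the most common ways macOS malware survives reboots. As a hedged illustration of what hunting for such persistence can look like, the sketch below scans a directory of LaunchAgent property lists for jobs whose program arguments combine a download with shell execution. The directory argument, the token list, and the `find_suspicious_agents` helper are illustrative assumptions, not details from the Flare report.

```python
import plistlib
from pathlib import Path

# Substrings common in download-and-execute persistence jobs
# (illustrative heuristics, not indicators published by Flare).
SUSPICIOUS_TOKENS = ("curl", "wget", "base64", "osascript", "| sh", "| bash")

def find_suspicious_agents(agents_dir):
    """Return names of LaunchAgent plists whose ProgramArguments
    contain tokens typical of download-and-execute persistence."""
    hits = []
    for plist_path in sorted(Path(agents_dir).glob("*.plist")):
        try:
            with open(plist_path, "rb") as f:
                job = plistlib.load(f)
        except (plistlib.InvalidFileException, OSError):
            continue  # skip unreadable plists rather than crash
        args = " ".join(job.get("ProgramArguments", []))
        if any(token in args for token in SUSPICIOUS_TOKENS):
            hits.append(plist_path.name)
    return hits
```

On a real system this would be pointed at `~/Library/LaunchAgents`, and every hit should be reviewed manually; simple string matching inevitably produces false positives.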
Why It Works
The social engineering component proves remarkably effective because users inherently trust content served from reputable domains operated by OpenAI and xAI, and sponsored placement at the top of Google search results lends the lure additional credibility.
Defensive Recommendations
Organizations and individual Mac users should:
- Monitor for unsigned applications requesting system passwords
- Watch for unusual Terminal activity
- Track unexpected network connections to unfamiliar domains
- Educate users that instructions appearing on trusted AI platforms can be compromised through social engineering
- Independently verify any guidance requesting Terminal command execution through official vendor support channels before running it
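When educating users, it can help to show concretely what a ClickFix-style lure looks like to a scanner. The sketch below is a hypothetical heuristic; the pattern list and the `looks_like_clickfix` name are assumptions for illustration, not detection logic published by Flare. It flags pasted one-liners that combine a download or Base64 decode with immediate shell execution, the general shape of these attacks.

```python
import re

# Shapes common in copy-paste attack one-liners (hypothetical
# heuristics; the actual AMOS command is not reproduced here).
CLICKFIX_PATTERNS = [
    re.compile(r"(curl|wget)\b.*\|\s*(sh|bash|zsh)\b"),          # download piped to a shell
    re.compile(r"base64\s+(-d|--decode)\b.*\|\s*(sh|bash)\b"),   # decode-and-run
    re.compile(r"echo\s+['\"]?[A-Za-z0-9+/=]{40,}"),             # long inline Base64 blob
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches any high-risk pattern."""
    return any(pattern.search(command) for pattern in CLICKFIX_PATTERNS)
```

A clipboard monitor or security-awareness tool could use a check like this to warn before a user pastes such a command into Terminal, though determined attackers can trivially vary the syntax.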
This campaign serves as a stark reminder that even trusted platforms can be weaponized against users, and that AI-generated content should be treated with the same skepticism as any other online information source.
Source: Cyber Security News / Flare Research
