
Fake Claude Code Leak on GitHub Delivers Vidar Stealer — Why Saudi Bank Dev Teams Must Vet Every Download

A fake Claude Code repository on GitHub is delivering Vidar infostealer and GhostSocks proxy malware to developers who download it. Here's what Saudi bank security teams need to know — and do — right now.

FyntraLink Team

On March 31, 2026, Anthropic inadvertently shipped a 59.8 MB JavaScript source map inside the published npm package for Claude Code, exposing 513,000 lines of unobfuscated TypeScript. Within hours, threat actors weaponized the event — not by exploiting the leaked code itself, but by creating fake GitHub repositories that promise "unlocked enterprise features" while silently dropping Vidar information-stealer and GhostSocks proxy malware onto every machine that runs the download.

How the Fake Claude Code Campaign Works

Trend Micro researchers traced the campaign to a GitHub account ("idbzoomh") that published a repository titled with keywords like "leaked Claude Code" and "no usage restrictions." The repo was SEO-optimized to rank among the first Google results for developers searching for the leaked source. Victims download a 7-Zip archive containing a Rust-compiled binary named ClaudeCode_x64.exe. On execution, the dropper unpacks two payloads: Vidar, a commodity infostealer that harvests browser credentials, session tokens, crypto wallets, and MFA seeds; and GhostSocks, a SOCKS5 proxy implant that routes attacker traffic through the compromised machine, making the victim's IP address the exit node for further attacks.
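Because the lure archives are renamed per campaign, filename-based blocking is unreliable; hashing every download against the published IOC list before execution is the more durable check. The sketch below is a minimal Python illustration of that gate — the hash set is a placeholder, not a real IOC, and would be populated from the vendor advisories.

```python
import hashlib
from pathlib import Path

# Placeholder IOC set -- populate with the SHA-256 values published in the
# Trend Micro / Bitdefender advisories. The entry below is NOT a real IOC.
KNOWN_BAD_SHA256: set[str] = {
    "0" * 64,  # hypothetical placeholder entry
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large archives don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_malicious(path: Path) -> bool:
    """True if the file's SHA-256 matches a published dropper hash."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```

In practice this logic belongs in the download proxy or EDR policy rather than an ad-hoc script, but the principle is the same: compute the digest of the artifact, not its name, and compare against threat-intel feeds.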

The same operators have been running a rotating-lure campaign since February 2026, cycling through more than 25 distinct software brands — from AI coding assistants to trading platforms — under different names. Whatever the branding, every archive delivers the same Rust dropper, renamed to match each lure (TradeAI.exe for the trading theme, ClaudeCode_x64.exe for this one), with the same Vidar + GhostSocks payload chain. The Claude Code leak simply handed the operators a high-demand lure at the perfect moment.

Why Developer Workstations Are High-Value Targets

A compromised developer machine inside a bank is not just another infected endpoint. It typically holds SSH keys to production servers, API tokens for CI/CD pipelines, database connection strings, and cloud IAM credentials — the exact artifacts Vidar is designed to exfiltrate. Once GhostSocks is active, the attacker can pivot through the bank's network using the developer's own IP and VPN session, bypassing geo-fencing and behavioral analytics that would flag an external connection. In the 2021 Codecov and 2026 TeamPCP supply chain breaches, initial access followed the same pattern: compromise developer tooling first, then move laterally into build systems and production.

The Real Risk to Saudi Financial Institutions

Saudi banks and fintechs are accelerating AI adoption. Internal teams are experimenting with AI code assistants, LLM-based automation, and agent frameworks — tools that are often downloaded directly from npm, PyPI, or GitHub. SAMA's Cyber Security Common Controls (CSCC) explicitly require institutions to maintain a software inventory, enforce code-signing verification, and restrict execution of unapproved software (Domain 3: Technology Operations, Sub-domain 3.3). NCA's Essential Cybersecurity Controls (ECC) mirror this under control 2-6-1, mandating application whitelisting and integrity verification for all software installed on endpoints.

Yet in practice, developer workstations frequently sit outside the same endpoint hardening policies applied to corporate desktops. Many operate with local admin privileges, run unsigned binaries for testing, and connect to public package registries without a private mirror or checksum gate. This gap is precisely what campaigns like the fake Claude Code lure exploit.

Technical Indicators and Detection Guidance

Security operations teams should hunt for the following indicators across developer segments:

  1. File hashes: Monitor for the known Rust-compiled dropper hashes published by Trend Micro and Bitdefender in their April 2026 advisories. Feed these into your EDR and SIEM correlation rules immediately.
  2. Execution of unsigned Rust binaries: Legitimate developer tools are typically signed. A Rust PE executing from a user's Downloads or Temp folder with no valid Authenticode signature is a strong signal.
  3. SOCKS5 proxy behavior: GhostSocks listens on a high port and tunnels outbound traffic. Look for anomalous outbound connections on non-standard ports from developer VLANs, especially to residential IP ranges.
  4. Vidar C2 patterns: Vidar resolves its command-and-control server via Telegram or Steam profile bio fields. DNS requests to api.telegram.org or steamcommunity.com from non-browser processes on developer machines should trigger an alert.
  5. Credential exfiltration: Vidar dumps browser credential stores, crypto wallet files, and Authenticator backup codes. Sudden reads of Login Data, Cookies, or Web Data SQLite files by a non-browser process are telltale signs.
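Indicator 4 above translates naturally into a SIEM correlation rule. As a hedged sketch of the logic (the event schema, process names, and domain list are illustrative assumptions, not a vendor-specific query), the following Python filter flags DNS lookups of the Telegram/Steam resolution domains by anything other than a known browser:

```python
from dataclasses import dataclass

# Domains Vidar abuses for C2 resolution. Browsers contacting them is normal;
# any other process doing so on a developer workstation is suspicious.
WATCHED_DOMAINS = {"api.telegram.org", "steamcommunity.com"}

# Illustrative allowlist of browser image names -- extend for your estate.
BROWSER_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe", "brave.exe"}

@dataclass
class DnsEvent:
    host: str      # workstation that issued the query
    process: str   # image name of the querying process
    query: str     # domain looked up

def suspicious_c2_lookups(events: list[DnsEvent]) -> list[DnsEvent]:
    """Return DNS events where a non-browser process resolved a watched domain."""
    return [
        e for e in events
        if e.query.lower() in WATCHED_DOMAINS
        and e.process.lower() not in BROWSER_PROCESSES
    ]
```

The same predicate can be expressed in KQL, SPL, or Sigma for whichever SIEM the institution runs; the point is to key on the querying process, not the domain alone.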

Recommended Actions for CISOs and Security Leads

  1. Enforce application whitelisting on developer endpoints. Use Microsoft Defender Application Control (WDAC) or AppLocker policies that only allow signed, approved executables. This single control would have blocked the Rust dropper on first execution.
  2. Deploy a private package registry mirror. Route all npm, PyPI, and GitHub release downloads through an internal Artifactory or Nexus instance with automated malware scanning and checksum validation. Block direct downloads from public registries at the proxy level.
  3. Revoke and rotate developer credentials proactively. If any developer downloaded Claude Code binaries from unofficial sources in the past two weeks, treat their machine as compromised. Rotate SSH keys, API tokens, CI/CD secrets, and cloud IAM credentials tied to that workstation.
  4. Segment developer networks. Place developer VLANs behind stricter egress filtering. Block outbound SOCKS5 traffic and restrict Telegram/Steam API access from non-browser processes.
  5. Update threat intelligence feeds. Ingest the IOCs from Trend Micro's April 4, 2026 report ("Weaponizing Trust Signals: Claude Code Lures and GitHub Release Payloads") into your SIEM and EDR platforms.
  6. Conduct targeted awareness training. Brief development teams specifically on supply chain lure campaigns. Emphasize that "leaked" or "cracked" versions of commercial AI tools are overwhelmingly malware delivery vehicles.
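For action 4, blocking outbound SOCKS5 from developer VLANs is cheap to approximate at the egress filter: a SOCKS5 session opens with a fixed-shape client greeting (version byte 0x05, a method count, then that many auth-method bytes, per RFC 1928). The check below is a minimal heuristic sketch for the first payload of an outbound flow, not a complete protocol parser, and will not see greetings inside an encrypted tunnel:

```python
def looks_like_socks5_greeting(payload: bytes) -> bool:
    """
    Heuristic: does this payload match the RFC 1928 client greeting
    (0x05, <nmethods>, then exactly <nmethods> auth-method bytes)?
    A match on the first outbound packet of a flow from a developer VLAN,
    especially on a non-standard port, is a cheap signal for SOCKS5
    implants such as GhostSocks.
    """
    if len(payload) < 3 or payload[0] != 0x05:
        return False
    nmethods = payload[1]
    return nmethods >= 1 and len(payload) == 2 + nmethods
```

Most NGFW and IDS platforms already ship a SOCKS protocol signature; this sketch just shows how little traffic inspection the detection actually requires.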

SAMA and NCA Compliance Alignment

This incident maps directly to several regulatory controls Saudi financial institutions are already required to implement. Under SAMA CSCC, Domain 3.3 (Technology Operations) mandates software inventory management and integrity controls. Domain 3.4 (Change Management) requires formal approval before any new software is introduced into the environment. Under NCA ECC, control 2-6-1 covers application whitelisting, and control 2-7-2 addresses malware protection on endpoints. Institutions that have fully operationalized these controls would have blocked this campaign at the perimeter. Those that haven't should treat this as a priority gap to close before the next SAMA assessment cycle.

Conclusion

The fake Claude Code campaign is a textbook example of how threat actors weaponize trust — leveraging a real security event (the source code leak) to manufacture urgency and bypass developer skepticism. The malware itself is commodity-grade, but the delivery mechanism is surgically targeted at the people who hold the keys to your production infrastructure. Saudi financial institutions that treat developer workstations as second-class endpoints are leaving the front door open.

Is your organization prepared? Contact Fyntralink for a complimentary SAMA Cyber Maturity Assessment — including a focused review of your developer endpoint security posture and supply chain controls.
