April 3, 2026security

Security Alerts: 2026-04-04

AI infrastructure is under coordinated attack on multiple fronts this week: supply chain poisoning hit LiteLLM and Axios affecting thousands of environments, critical RCE vulnerabilities surfaced in n8n and Langflow, the Claude Code source leak spawned malware campaigns, and researchers demonstrated AI agents writing kernel exploits from CVE descriptions alone.

---

LiteLLM supply chain attack compromises Mercor and thousands of AI environments. Malicious PyPI packages (litellm v1.82.7 and v1.82.8) published by TeamPCP harvested environment variables, SSH keys, cloud credentials, and Kubernetes tokens. Mercor, an AI recruiting firm serving OpenAI, Anthropic, and Meta, confirmed it was breached. Lapsus$ claims 4TB of stolen data including proprietary training datasets now being auctioned on the dark web. If you use LiteLLM, check pip show litellm immediately and rotate all secrets.
Source: https://x.com/AdityaMBAsymbi/status/2039643919667908857

---

Axios npm package hijacked in supply chain attack deploying RAT across all platforms. Compromised versions axios 1.14.1 and 0.30.4 were published using stolen maintainer credentials, injecting a malicious dependency (plain-crypto-js) with a postinstall script that downloads a remote access trojan for macOS, Windows, and Linux. Socket detected the attack within 6 minutes, but any machine that installed the affected versions should be treated as fully compromised. Rotate all credentials on affected systems.
Source: https://x.com/feross/status/2039740053484904694

---

n8n workflow automation CVE-2026-21858 scores CVSS 10.0: unauthenticated remote code execution affecting approximately 100,000 exposed servers. If you run n8n, patch immediately or take it offline. There is no workaround for this one.
Source: https://x.com/NYsquaredAI/status/2039598465596973511

---

Langflow CVE-2026-33017 RCE exploited in the wild within 20 hours of disclosure. CISA added it to the Known Exploited Vulnerabilities catalog. Attackers are specifically harvesting .env files containing OpenAI, Anthropic, and AWS API keys from compromised Langflow instances. Patch or shut down exposed instances now.
Source: https://x.com/DarkForgeNews/status/2039586022669873288

---

OpenClaw CVE-2026-25253 (CVSS 8.8) enables WebSocket authentication bypass and malicious Skill supply chain attacks through ClawHub. Attackers can publish trojanized Skills that execute arbitrary code when loaded by agents. China has banned enterprise use of OpenClaw pending security review. Audit all installed Skills and restrict to verified publishers.
Source: https://x.com/lixiaolai_/status/2039534642534179177

---

Chrome Gemini integration CVE-2026-0628 allows low-privilege extensions to inject code into the Gemini side panel. A malicious extension could hijack camera, microphone, and access Gemini conversation history. Update Chrome immediately and audit installed extensions.
Source: https://x.com/sinsotsuouen26/status/2039511395352695031

---

Claude Code source leak spawns trojanized GitHub forks delivering Vidar malware. After Anthropic accidentally published 500K lines of source code via npm source maps, threat actors created fake repos mimicking the leaked code that deliver Vidar information stealer and GhostSocks proxy via Rust droppers. Do not clone or build any unofficial Claude Code repository. Stick to official Anthropic channels only.
Source: https://x.com/abebecker/status/2039695910964232307

---

Claude Code permission bypass vulnerability: a logic flaw causes deny-rules and command-injection checks to be skipped when a generated command pipeline exceeds 50 subcommands. Attackers can exploit this via malicious CLAUDE.md files or prompt injection to execute arbitrary commands and exfiltrate credentials. Isolate Claude Code agents from production secrets until patched.
Source: https://x.com/syedaquib77/status/2039775266692546687

---

Check Point disclosed CVE-2025-59536 and CVE-2026-21852 in Claude Code enabling API key exfiltration from malicious repositories. Full source access from the leak makes these vulnerabilities far more exploitable as attackers can now craft precise malicious repos targeting the exposed permission and hook logic. Audit repos before opening with Claude Code.
Source: https://x.com/TechPulseHK/status/2039746225990418528

---

Cisco breached via Trivy supply chain compromise traced back to TeamPCP. ShinyHunters stole 300+ GitHub repos including unreleased AI products, 3 million Salesforce records, and AWS data containing FBI, DHS, and NASA personnel information. April 3 extortion deadline set. One compromised vulnerability scanner led to a Fortune 500 breach in 12 days.
Source: https://x.com/Atarussecurity/status/2039750226404352128

---

Drift Protocol loses $280M in cryptocurrency hack involving a compromised multi-signature key. One of five signers was an infiltrated DPRK agent, another had their key compromised. The attack used pre-signed durable nonce transactions requiring weeks of preparation. Circle did not intervene as $230M in USDC was transferred. Multi-sig alone is not sufficient security for high-value DeFi protocols.
Source: https://x.com/theZeugh/status/2039651944608776603

---

Claude writes working FreeBSD kernel exploit from a CVE writeup, achieving root shell. Separately, Anthropic researcher Nicholas Carlini demonstrated Claude finding a zero-day in Ghost CMS (CVE-2026-26980) and stealing admin API keys in 90 minutes. AI is compressing the gap between vulnerability disclosure and working exploit to near-zero, making traditional patch windows dangerously inadequate.
Source: https://x.com/OctoHirono/status/2039511337001418989

---

Google DeepMind research maps six attack vectors for AI Agent Traps: perception attacks via hidden HTML/CSS, memory corruption through RAG poisoning, and latent memory manipulation requiring less than 0.1% data contamination. Testing showed 86% partial hijacking rate from HTML injections alone and 80%+ success from memory attacks. Every webpage an autonomous agent reads is now a potential attack surface.
Source: https://x.com/TomasMann1878/status/2039600029124506038

---

Slopsquatting: researchers discover LLMs hallucinate package names 18-21% of the time. Attackers are registering these hallucinated names on PyPI and npm with malicious payloads, creating a new supply chain attack vector unique to AI-assisted development. Verify every package name your AI suggests before installing.
Source: https://x.com/curioverse_th/status/2039819165457138106

---

Vibe Security Radar reports 35 CVEs traced to AI-generated code in March 2026 alone, with an estimated 5-10x more hidden in production. AI-native organizations have a fundamentally expanded attack surface spanning code generation, dependency management, and agent orchestration, all under simultaneous attack this week.
Source: https://x.com/iototsecnews/status/2039511752489234826
← Previous
Loop Daily: 2026-04-04
Next β†’
Ideas Radar: 2026-04-04
← Back to all articles

Comments

Loading...
>_