LiteLLM Compromised by Supply-Chain Attack — 95M Monthly Downloads at Risk
LiteLLM, one of the most widely used Python libraries for interacting with LLMs, with over 40,000 GitHub stars and 95 million monthly PyPI downloads, has been compromised in a supply-chain attack. Versions 1.82.7 and 1.82.8, pushed to PyPI on March 24, 2026, contain a malicious .pth file that executes automatically on every Python process startup, with no import of the package required.
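The no-import execution works because Python's `site` module, when it processes a site-packages directory at interpreter startup, executes any line in a `*.pth` file that begins with `import `. The following minimal sketch reproduces that mechanism safely in a temporary directory (the env-var payload here is a benign stand-in, not the actual malware):

```python
# Demonstrate why a malicious .pth file runs without any import:
# site.addsitedir() (called during startup processing of each
# site-packages dir) exec()s any .pth line starting with "import ".
import os
import site
import tempfile

tmp = tempfile.mkdtemp()

# Benign stand-in for the payload: set an env var instead of
# harvesting credentials.
with open(os.path.join(tmp, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

# Processing the directory triggers the import line immediately.
site.addsitedir(tmp)
print(os.environ.get("PTH_DEMO"))  # -> executed
```

This is also why uninstalling the package is not enough on its own: the `.pth` file lives at the top level of site-packages, so an audit should verify it is actually gone.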
The attack is attributed to TeamPCP, the same threat actor behind the Trivy vulnerability scanner compromise (March 19) and the Checkmarx KICS GitHub Action attack (March 23). The payload includes a mass credential harvester exfiltrating secrets via AES-256/RSA-4096 encrypted payloads to a C2 server, a persistence backdoor registered as a systemd service, and Kubernetes lateral movement capabilities that can deploy privileged pods to every node.
This is critical for the agentic ecosystem because LiteLLM is a transitive dependency for a growing number of AI agent frameworks, MCP servers, and LLM orchestration tools. Even developers who never explicitly installed LiteLLM may have it pulled in by other packages. The story reached 317 points on Hacker News within hours.
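Because the package may arrive transitively, checking `requirements.txt` alone is not sufficient; the installed environment has to be inspected. A minimal sketch using the standard-library `importlib.metadata` (the `litellm_status` helper is illustrative, not part of any advisory tooling; 1.82.7 and 1.82.8 are the compromised versions named above):

```python
# Check whether one of the known-compromised LiteLLM releases is
# present in the current environment, even if it was never
# installed directly.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in the advisory

def litellm_status() -> str:
    """Report whether a known-compromised LiteLLM release is installed."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "absent"
    return "compromised" if version in COMPROMISED else f"clean:{version}"

print(litellm_status())
```

Running this in every virtualenv, container image, and CI runner that touches agent code gives a quick first pass; a "clean" result still assumes the version metadata itself was not tampered with.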
Teams building on AI agent infrastructure should immediately audit their dependency trees for the affected releases and pin LiteLLM to a version published before the compromise. The incident underscores the growing attack surface as AI agent toolchains become deeply interconnected.
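Beyond version pinning, an audit can look for the attack's delivery mechanism directly: any `.pth` file in site-packages containing executable `import` lines. Note that legitimate tools (e.g. setuptools editable installs) use the same mechanism, so hits need manual review rather than automatic deletion. A sketch (the `executable_pth_lines` helper is illustrative):

```python
# Scan every site-packages directory for .pth files containing
# executable "import ..." lines, the mechanism this attack abused.
import glob
import os
import site

def executable_pth_lines():
    """Return (path, line) pairs for exec-able lines in .pth files."""
    dirs = list(site.getsitepackages())
    try:
        dirs.append(site.getusersitepackages())
    except AttributeError:
        pass  # some embedded interpreters lack a user site dir
    hits = []
    for d in dirs:
        for pth in glob.glob(os.path.join(d, "*.pth")):
            try:
                with open(pth) as f:
                    for line in f:
                        if line.startswith(("import ", "import\t")):
                            hits.append((pth, line.strip()))
            except OSError:
                continue  # unreadable file; skip rather than crash
    return hits

for path, line in executable_pth_lines():
    print(path, "->", line)
```

Environments where LiteLLM 1.82.7/1.82.8 was ever installed should additionally be treated as compromised: rotate any credentials the host could reach and check for the unexpected systemd services described above.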
GitHub Advisory: https://github.com/BerriAI/litellm/issues/9602