PyTorch Lightning Got Worm-Bombed
On April 30 someone published lightning 2.6.2 and 2.6.3 to PyPI with a hidden _runtime directory and an obfuscated JavaScript payload that fires the moment the module is imported. Socket flagged it as malicious 18 minutes after publish. The payload exfiltrates developer credentials, cloud secrets, env vars, crypto wallets, and tries to poison your GitHub repos on the way out. Lightning gets hundreds of thousands of downloads per day.
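If you want a quick check before doing anything else, here's a minimal sketch that reads the installed version from package metadata without importing lightning itself, since import is exactly what detonates the payload. The version set comes straight from the advisory above.

```python
# Check for the compromised lightning releases without importing the package.
# importlib.metadata reads dist-info from disk, so no package code executes.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"2.6.2", "2.6.3"}  # the two malicious releases

try:
    installed = version("lightning")
except PackageNotFoundError:
    installed = None

if installed in COMPROMISED:
    print(f"lightning {installed}: MALICIOUS release - rotate credentials now")
elif installed:
    print(f"lightning {installed}: not one of the flagged versions")
else:
    print("lightning is not installed in this environment")
```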
This is the Mini Shai-Hulud variant, the same family as the worm that tore through SAP-related npm packages last quarter. The fact that it jumped from npm into Python via an AI training framework is the part nobody should overlook. PyTorch Lightning sits inside basically every research lab, every startup training a model, every team running fine-tunes. Two malicious versions live for hours and the blast radius is half the AI world's CI runners.
The attack vector is also depressingly simple. Compromise a maintainer credential, ship a release, and the payload runs in every CI environment that auto-bumps to the latest version. No exotic exploit required. The Lightning team yanked the versions, but if you ran a build between publish and yank, assume your tokens are gone. Rotate AWS keys, GitHub PATs, Anthropic and OpenAI keys, the lot.
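If you're not sure whether a runner pulled a bad build, one rough signal is the hidden _runtime directory itself. Here's a minimal sketch that scans site-packages for it, again without importing anything; the assumption that the directory sits somewhere under the installed lightning package is mine, not the advisory's, so treat a hit as a strong signal and a clean result as inconclusive.

```python
# Scan site-packages for the hidden _runtime directory without importing
# lightning (the payload fires on import). Where exactly the directory sits
# inside the package is an assumption; absence here is not proof of safety.
import site
from pathlib import Path

search_roots = site.getsitepackages() + [site.getusersitepackages()]

for root in search_roots:
    pkg = Path(root) / "lightning"
    if not pkg.is_dir():
        continue
    for hit in pkg.rglob("_runtime"):
        if hit.is_dir():
            print("suspicious directory:", hit)
```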
The broader pattern is hard to ignore. LiteLLM in March, Lightning today, the Axios trojan that just hit OpenAI's macOS apps yesterday. AI infrastructure is now a top supply-chain target because that's where the keys with the largest blast radius live. Pinning versions and using PyPI's trusted publishers stopped being optional last quarter.

Semgrep's write-up: https://semgrep.dev/blog/2026/malicious-dependency-in-pytorch-lightning-used-for-ai-training/
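Pinning is also cheap to enforce mechanically. A minimal sketch of a CI gate that fails the build when any requirement floats, assuming a flat requirements.txt; the filename and the exact-pin-only policy are my choices, not anything from the write-up:

```python
# check_pins.py - fail CI when any requirement is not pinned to an exact
# version. Assumes a flat requirements.txt with one requirement per line.
import sys

def unpinned(path="requirements.txt"):
    loose = []
    with open(path) as f:
        for line in f:
            req = line.split("#")[0].strip()   # drop comments and whitespace
            if not req or req.startswith("-"): # skip blanks and pip options
                continue
            if "==" not in req:                # no exact pin -> flag it
                loose.append(req)
    return loose

if __name__ == "__main__":
    loose = unpinned()
    if loose:
        print("unpinned requirements:", *loose, sep="\n  ")
        sys.exit(1)
```

Exact pins still trust the index to serve the same bytes, so pair this with pip's --require-hashes mode, which rejects any artifact whose hash isn't listed in the requirements file.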