April 13, 2026 · Infrastructure · Agents · Open Source · Coding

Shuru Gives AI Agents a Sandbox That Actually Works

The biggest unsolved problem in AI coding agents isn't intelligence. It's trust. You let Claude Code or Cursor loose on your codebase, and it's running with full access to your filesystem, your env vars, your network. That's terrifying if you think about it for more than five seconds.

Shuru, from SuperHQ, takes a different approach. It boots ephemeral Linux microVMs on macOS using Apple's Virtualization.framework. Each run gets a fresh rootfs that resets when done. Your project mounts into the sandbox at /workspace, but changes stay in an overlay layer until you explicitly approve them in a review panel. You keep what works and discard the rest, before anything touches your local files.
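The copy-on-write idea behind that review flow can be sketched in a few lines. This is a conceptual illustration only, not Shuru's implementation (which uses an actual filesystem overlay inside the VM): reads fall through to the real project directory, writes land in a scratch "upper" layer, and nothing reaches the real tree until you approve it.

```python
import os
import shutil
import tempfile

class Overlay:
    """Toy copy-on-write view of a project directory.

    Reads fall through to the 'lower' (real) directory unless the file
    has been modified in the 'upper' (scratch) layer. Writes only ever
    touch the upper layer until explicitly approved.
    """

    def __init__(self, lower: str):
        self.lower = lower
        self.upper = tempfile.mkdtemp(prefix="overlay-")

    def read(self, relpath: str) -> str:
        # Upper layer wins if the agent has written the file; otherwise
        # fall through to the untouched project tree.
        for base in (self.upper, self.lower):
            path = os.path.join(base, relpath)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read()
        raise FileNotFoundError(relpath)

    def write(self, relpath: str, data: str) -> None:
        # All writes are diverted into the scratch layer.
        path = os.path.join(self.upper, relpath)
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, "w") as f:
            f.write(data)

    def approve(self, relpath: str) -> None:
        # Promote one reviewed change into the real project tree.
        shutil.copy(os.path.join(self.upper, relpath),
                    os.path.join(self.lower, relpath))

    def discard(self) -> None:
        # Throw the whole scratch layer away; the real tree is untouched.
        shutil.rmtree(self.upper)
```

The key property is the same one Shuru's review panel gives you: the real files are provably unmodified until `approve` runs.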

The secret management is clever. Your .env stays on the host. Agents only see opaque placeholders. A proxy on the host side swaps in real API keys on the wire, but only for HTTPS requests to hosts you've whitelisted. The agent literally cannot exfiltrate your secrets because it never sees them.
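The placeholder-swap pattern is simple to reason about in code. The sketch below is a hypothetical illustration of the idea, not Shuru's actual proxy: the placeholder string, secret value, and host list are all made-up names. The sandbox only ever holds the placeholder; the host-side proxy substitutes the real value, and only when the request targets a whitelisted host.

```python
from urllib.parse import urlparse

# Host-side state (hypothetical values for illustration).
# The agent inside the sandbox only ever sees the placeholder key.
SECRETS = {"{{OPENAI_API_KEY}}": "sk-real-key-kept-on-host"}
ALLOWED_HOSTS = {"api.openai.com"}

def rewrite_headers(url: str, headers: dict) -> dict:
    """Swap opaque placeholders for real secrets on the wire,
    but only for requests bound to a whitelisted host."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        # Non-whitelisted destination: the placeholder goes out as-is,
        # so there is nothing of value to exfiltrate.
        return dict(headers)
    return {k: SECRETS.get(v, v) for k, v in headers.items()}
```

Because substitution happens outside the VM, even a fully compromised agent can only ask the proxy to spend a secret at an approved endpoint; it can never read the secret itself.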

It's built in Rust, installable via Homebrew, and ships as an agent skill so Claude Code and Cursor can use it automatically. 617 stars and climbing. This is the kind of infrastructure the agent ecosystem desperately needs. Everyone talks about agent safety in abstract terms; Shuru actually ships a solution.

https://github.com/superhq-ai/shuru
