Darkbloom Turns Your Idle Mac Into an AI Inference Node
Eigen Labs just dropped something that makes a lot of sense on paper and very little sense to the cloud providers: Darkbloom, a decentralized inference network that runs on idle Apple Silicon Macs.
The pitch is simple. Over 100 million Macs sit around doing nothing most of the day, each packing 64-512 GB of unified memory and up to 819 GB/s of memory bandwidth. That's enough to run 500B-parameter models at interactive speeds. Darkbloom connects that idle capacity directly to demand, OpenAI-compatible API and all. Change one line in your code, get up to 70% cheaper inference.
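Here's roughly what that one-line change looks like with the official OpenAI Python client pointed at an OpenAI-compatible endpoint. The base URL, key, and model ID below are placeholders of mine, not values from Darkbloom's docs:

```python
# Hypothetical values throughout: swap in the real base URL, key, and model ID.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.darkbloom.dev/v1",  # the one-line change
    api_key="YOUR_DARKBLOOM_KEY",
)

resp = client.chat.completions.create(
    model="qwen-variant",  # placeholder model ID
    messages=[{"role": "user", "content": "Why is memory bandwidth the bottleneck for inference?"}],
)
print(resp.choices[0].message.content)
```

Everything else in your stack stays the same, which is the whole point of OpenAI compatibility.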
The hard part isn't the network — it's the trust problem. The machine owner has root access and physical custody, so they could theoretically snoop on your prompts and responses. Darkbloom's answer is four layers deep: client-side encryption before transmission, Secure Enclave key generation with attestation chains, OS-level locks blocking debugger attachment and memory inspection, and cryptographic output traceability. No subprocess, no local server, no IPC. The inference engine runs in-process with every software escape hatch sealed.
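Darkbloom hasn't published its wire format, but layer one, client-side encryption against a node key that came out of an attestation chain, could plausibly look like the sketch below: a fresh X25519 exchange per request plus an AEAD, written with the Python cryptography package. Every name and label here is my assumption, not Darkbloom's protocol.

```python
# Sketch only: Darkbloom's real protocol, key formats, and attestation checks are not public.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def encrypt_prompt(prompt: str, node_pubkey_bytes: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt a prompt so only the holder of the attested node key can read it.

    node_pubkey_bytes would be extracted from the node's Secure Enclave
    attestation chain and verified *before* any data leaves the client.
    """
    node_pub = X25519PublicKey.from_public_bytes(node_pubkey_bytes)
    eph_priv = X25519PrivateKey.generate()            # fresh ephemeral key per request
    shared = eph_priv.exchange(node_pub)              # ECDH shared secret
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"darkbloom-prompt-v0").derive(shared)  # hypothetical context label
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, prompt.encode(), None)
    eph_pub = eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph_pub, nonce, ciphertext                 # node re-derives key from eph_pub
```

The interesting part isn't the crypto, it's the other three layers: without the in-process engine and the debugger and memory-inspection locks, the operator could simply read the plaintext after the node decrypts it.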
For operators, the economics are wild. You retain 95% of revenue. Electricity costs $0.01-0.03/hour on Apple Silicon. Platform fee is literally 0%. If you have a Mac sitting there overnight, it might as well be earning.
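Back-of-envelope, plugging in a made-up revenue rate since the announcement doesn't quote one:

```python
# Only the 95% share and the electricity range come from the announcement; the rest is assumed.
gross_per_hour = 0.50                       # hypothetical billed inference, $/hour
operator_share = 0.95                       # operator keeps 95% of revenue
electricity = 0.03                          # upper end of the quoted $0.01-0.03/hour
net = gross_per_hour * operator_share - electricity
print(f"net ~ ${net:.2f}/hour")             # ~ $0.45/hour under these assumptions
```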
Currently in research preview with support for Gemma 4 26B, Qwen variants, MiniMax M2.5 239B, FLUX.2 image gen, and Cohere Transcribe. Rough edges expected. But the thesis — that Apple Silicon's memory bandwidth advantage makes it the natural substrate for distributed inference — is hard to argue with.
https://darkbloom.dev
https://github.com/Layr-Labs/d-inference