Anthropic Doubles Claude Code Limits on SpaceX's Colossus 1
Anthropic just signed for all the compute capacity at SpaceX's Colossus 1 data center. That's more than 300 megawatts and over 220,000 NVIDIA GPUs coming online within the month. The headline-grabber buried in the post: Anthropic and SpaceX are exploring putting AI compute capacity in orbit at multi-gigawatt scale.
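Those two headline numbers imply a sanity-checkable power envelope. A quick sketch, using only the figures from the announcement (the split between GPU draw and facility overhead like cooling and networking is not stated, so this is facility power per GPU, not chip TDP):

```python
# Back-of-envelope check on the announced numbers: 300 MW of capacity
# spread across 220,000 GPUs. Both figures come from the post; how the
# power divides between GPUs and facility overhead is unknown.
total_power_mw = 300
gpu_count = 220_000

watts_per_gpu = total_power_mw * 1_000_000 / gpu_count
print(f"{watts_per_gpu:.0f} W of facility power per GPU")  # ~1364 W
```

Roughly 1.4 kW per GPU, all-in, which is in the ballpark you'd expect for a modern accelerator plus its share of cooling and networking.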
The customer-facing payoff lands today. Claude Code's five-hour rate limits double for Pro, Max, Team, and seat-based Enterprise. Peak-hours throttling on Pro and Max goes away. API rate limits jump too: on the Opus models, Tier 1 input tokens per minute are up 1500% and output tokens per minute up 900%. Anyone who has hit the rate cap mid-session knows how much that ceiling mattered. It just got pushed up.
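One easy misread in those percentages: "up 1500%" means the new limit is 16x the old one, not 15x, because the increase is added on top of the original 100%. A minimal sketch of the arithmetic (the baseline limits below are hypothetical placeholders, not Anthropic's actual Tier 1 numbers):

```python
# "Up N%" means new = old * (100 + N) / 100.
# So +1500% is a 16x multiplier and +900% is a 10x multiplier.
def apply_increase(old_limit: int, pct_increase: int) -> int:
    """Return the new limit after a percentage increase."""
    return old_limit * (100 + pct_increase) // 100

# Hypothetical Tier 1 baselines, chosen for illustration only.
old_input_tpm = 20_000
old_output_tpm = 4_000

print(apply_increase(old_input_tpm, 1500))   # 320000  (16x)
print(apply_increase(old_output_tpm, 900))   # 40000   (10x)
```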
Stack this against the rest of Anthropic's compute book. Up to 5GW with Amazon (1GW landing by end of 2026), 5GW with Google + Broadcom starting 2027, $30B Azure capacity with Microsoft and NVIDIA, $50B with Fluidstack. Now SpaceX. The training-and-serving compute footprint isn't measured in dollars anymore — it's measured in gigawatts and counterparty diversity. If you're building serious agent workloads on Claude, the supply-side reality is now much less likely to bottleneck you.
The orbit detail is the long-term tell. Abundant solar power, no permitting headaches, no neighbor noise complaints: if Starship launch economics keep improving, putting GPU racks in space moves from sci-fi to spreadsheet. That's the next decade's infrastructure thesis, put in writing.
Announcement: https://www.anthropic.com/news/higher-limits-spacex