Anthropic Partners With SpaceX, Doubles Claude Code Limits — And Eyes Data Centers in Space
Anthropic just made the most aggressive compute play in AI history — and your Claude experience is about to change overnight.
On May 6, 2026, Anthropic announced a landmark partnership with SpaceX to gain access to all compute capacity at SpaceX’s Colossus 1 data center — a facility housing over 220,000 NVIDIA GPUs and drawing more than 300 megawatts of power. And that’s not even the wildest part. The two companies are also exploring orbital AI compute — putting data centers in space.
But what matters most to developers and Claude users right now? Anthropic is doubling Claude Code rate limits, removing peak-hour throttling, and substantially raising API caps for Opus models. All effective immediately.
Let’s break down everything you need to know about this deal, what changes today, and why Anthropic is betting on space-based supercomputers.
What Just Happened: Anthropic Signs Deal With SpaceX
Anthropic has signed an agreement with SpaceX — now merged with xAI under Elon Musk’s umbrella — to use 100% of the compute capacity at Colossus 1, one of the world’s largest AI supercomputers.
Colossus 1 sits in Memphis, Tennessee, and was built from the ground up in record time. It features:
- 220,000+ NVIDIA GPUs including H100, H200, and next-generation GB200 accelerators
- 300+ megawatts of compute capacity
- Infrastructure designed for large language model training, multimodal systems, scientific simulations, and generative AI at frontier scale
According to Anthropic’s official announcement, this additional capacity will “directly improve capacity for Claude Pro and Claude Max subscribers.”
From xAI’s side of the announcement: “Colossus delivers unprecedented scale for AI training, fine-tuning, inference, and high-performance computing workloads.”
Claude Code Rate Limits Doubled: What Changes Today
The SpaceX compute deal isn’t just a PR stunt — it’s translating into immediate, tangible improvements for Claude users. Three major changes went into effect on May 6, 2026:
1. Claude Code 5-Hour Rate Limits Doubled
If you’re a Claude Code user on Pro, Max, Team, or seat-based Enterprise plans, your 5-hour rate limit just doubled. This is huge for developers who’ve been hitting walls during long coding sessions. No more rationing your prompts or watching the clock — you now get twice the throughput in every 5-hour window.
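To make "twice the throughput in every 5-hour window" concrete, here is a minimal sketch of how a rolling usage window behaves when its cap doubles. The class, cap values, and timestamps are illustrative assumptions for this article, not Anthropic's actual accounting:

```python
from collections import deque
import time

class RollingWindowLimiter:
    """Illustrative rolling-window limiter: admits at most `cap`
    requests in any trailing window of `window_seconds`."""

    def __init__(self, cap, window_seconds=5 * 3600):
        self.cap = cap
        self.window = window_seconds
        self.events = deque()  # timestamps of accepted requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the trailing window
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) < self.cap:
            self.events.append(now)
            return True
        return False

# With the cap doubled, the same 5-hour window admits twice the requests.
old = RollingWindowLimiter(cap=50)   # hypothetical pre-change cap
new = RollingWindowLimiter(cap=100)  # hypothetical doubled cap
accepted_old = sum(old.allow(now=t) for t in range(200))
accepted_new = sum(new.allow(now=t) for t in range(200))
print(accepted_old, accepted_new)  # → 50 100
```

The point of the sketch: nothing about the window's length changes, only how much fits inside it before requests start bouncing.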
2. Peak Hours Throttling Removed
Previously, Claude Code users on Pro and Max plans saw reduced limits during peak hours — typically US business hours, when demand was highest. That throttling is now completely gone: you get the same full rate limits whether you’re coding at 2 PM or 2 AM Eastern. This was one of the most frustrating pain points for developers, and Anthropic has eliminated it entirely.
3. API Rate Limits Raised for Opus Models
Anthropic has “considerably” raised API rate limits for Claude Opus models. This affects developers building applications on the Opus tier — meaning your apps can now handle more concurrent requests and higher throughput without hitting rate limit errors.
These changes come after months of developer frustration. As The Register reported in March, Anthropic had acknowledged that Claude Code quotas were “running out too fast.” Developers across Reddit, X, and Hacker News had been vocal about hitting limits mid-task, losing context, and having their workflow disrupted.
Why Anthropic Needed SpaceX: The Compute Crisis
This deal didn’t happen in a vacuum. Anthropic has been facing a genuine compute shortage crisis throughout 2026.
The problem started becoming visible in late 2025 when Claude’s user base exploded. Claude Code, launched as a command-line agentic coding tool, became wildly popular among developers — but it consumes significantly more tokens than regular chat interactions. A single Claude Code session can burn through tokens at 10-50x the rate of a normal conversation.
By January 2026, developers were complaining about surprise usage limits. By March, the situation had deteriorated further, with some users reporting their 5-hour quotas lasting less than an hour during peak times.
Anthropic tried temporary fixes — like doubling limits during off-peak hours in late March — but these were band-aids. The company needed raw GPU capacity, and they needed it fast.
Enter SpaceX’s Colossus 1: 220,000 GPUs ready to go, available within the month. It’s the AI equivalent of calling in an airstrike when your ground forces are overwhelmed.
Anthropic’s Compute Empire: Every Deal on the Table
The SpaceX partnership is just one piece of Anthropic’s massive infrastructure buildout. Here’s the full picture of their compute agreements:
| Partner | Capacity / Commitment | Timeline | Details |
|---|---|---|---|
| Amazon (AWS) | Up to 5 GW | ~1 GW by end of 2026 | AWS Trainium chips, deepest cloud partnership |
| SpaceX/xAI | 300+ MW (220K GPUs) | Within May 2026 | Colossus 1 in Memphis, NVIDIA H100/H200/GB200 |
| Google + Broadcom | 5 GW | Starting 2027 | Google TPUs, custom silicon |
| Microsoft + NVIDIA | $30B Azure capacity | Rolling | Azure cloud infrastructure |
| FluidStack | $50B investment | Multi-year | American AI infrastructure |
Combined, these agreements put Anthropic on a path to more than 10 gigawatts of compute capacity. For context, a sustained 10 GW draw is roughly the total electricity consumption of a city of 7 million people, comparable to a small country.
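The "city of 7 million" comparison checks out on a back-of-envelope basis. The per-capita figure below is an assumption of roughly 12,000 kWh of electricity per person per year (a ballpark US total including industry and services, not just households):

```python
# Back-of-envelope check of the "10 GW ≈ city of 7 million" comparison.
# Assumption: ~12,000 kWh of electricity per person per year.
kwh_per_person_per_year = 12_000
hours_per_year = 8_760
avg_draw_kw = kwh_per_person_per_year / hours_per_year  # ~1.37 kW per person

capacity_gw = 10
capacity_kw = capacity_gw * 1_000_000  # 1 GW = 1,000,000 kW
people_supported = capacity_kw / avg_draw_kw

print(round(people_supported / 1e6, 1))  # → 7.3 (millions of people)
```

Different per-capita assumptions move the answer, but anywhere in the 10,000-14,000 kWh/year range lands in the same high-single-digit millions.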
Anthropic CEO Dario Amodei has repeatedly emphasized that compute is the primary bottleneck for AI progress. This infrastructure buildout reflects Anthropic’s belief that they’ll need orders of magnitude more compute for their next-generation models.
Orbital AI Compute: Data Centers in Space?
Here’s where things get genuinely science fiction. Buried in both Anthropic’s and xAI’s announcements is a line that could reshape the entire AI industry:
“As part of this agreement, Anthropic also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.”
xAI’s statement goes further, explaining the rationale: “The compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter.”
The logic is surprisingly practical:
- Abundant solar power — In a suitable orbit, solar panels receive near-continuous sunlight with no atmosphere or weather to filter it. A solar array in orbit can generate roughly 6x more energy per square meter than one on Earth.
- Passive cooling — Space has no air, so waste heat must be shed by radiation alone, which makes radiator design its own engineering problem. But terrestrial data centers spend enormous energy and water on active cooling; in orbit, heat rejection is passive once the radiators are deployed.
- No land constraints — Terrestrial data centers face NIMBY opposition, permitting delays, and physical space limitations. Orbit has none of these.
- SpaceX’s unique position — With Starship’s mass-to-orbit economics and SpaceX’s Starlink constellation experience, they’re the only company that could realistically build this.
Is this actually going to happen? It’s years away at minimum. But the fact that two of the most well-funded AI and space companies on Earth are even discussing it signals where the industry thinks compute demand is heading.
The Elon Musk Factor: From Lawsuit to Business Partner
There’s a delicious irony in this deal that shouldn’t be overlooked.
Elon Musk co-founded OpenAI, then left. He then sued OpenAI for abandoning its non-profit mission. He launched xAI and Grok as direct competitors to both OpenAI and Anthropic. And now? He’s selling compute to Anthropic.
In February 2026, xAI and SpaceX merged in what was called the biggest merger of all time, valued at $1.25 trillion. This means Anthropic is now effectively a customer of Musk’s combined AI-space empire.
It’s a pragmatic move on both sides. SpaceX/xAI has excess compute capacity at Colossus 1 that can generate revenue. Anthropic desperately needs GPUs and can’t wait for their Amazon and Google capacity to come online. Business beats rivalry.
What This Means for Developers
If you’re building with Claude, here’s the practical impact:
Claude Code users: Your workflow just got dramatically better. Double the rate limits means you can run longer agentic coding sessions without interruption. No more peak-hour slowdowns. If you abandoned Claude Code because of limits, now is the time to come back.
API developers: Higher Opus rate limits mean your production applications can scale more aggressively. If you were routing around rate limits with retry logic and request queuing, you may be able to simplify your infrastructure.
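For teams that keep their retry logic, jittered exponential backoff remains the standard pattern for absorbing 429 responses. The sketch below is generic Python with a placeholder callable standing in for any API client; nothing here reflects the Anthropic SDK's actual interface:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0,
                 retryable=frozenset({429, 503})):
    """Retry `call` on retryable HTTP status codes with jittered
    exponential backoff; return non-retryable responses immediately."""
    for attempt in range(max_retries):
        status, body = call()
        if status not in retryable:
            return status, body
        # Sleep base * 2^attempt, with jitter to avoid thundering herds
        time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    return status, body  # give up after max_retries

# Placeholder "API": fails twice with 429, then succeeds.
responses = iter([(429, None), (429, None), (200, "ok")])
status, body = with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # → 200 ok
```

With higher Opus caps, the backoff path simply fires far less often — the wrapper becomes cheap insurance rather than a hot code path.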
Enterprise teams: The combination of doubled limits and removed peak throttling makes Claude Code viable for larger team deployments where you previously couldn’t guarantee consistent performance.
The compute shortage isn’t over. Anthropic’s blog post carefully notes that new infrastructure takes “12 to 24 months to translate into available capacity.” The SpaceX deal helps because those GPUs are available immediately, but the broader capacity buildout — Amazon, Google, Microsoft — is a multi-year project.
Anthropic by the Numbers in 2026
To understand why Anthropic can make deals at this scale, consider where the company stands:
- $30 billion raised in Series G (February 2026) at a $380 billion valuation
- $19 billion+ annualized revenue
- 32% share of the enterprise LLM market
- Planning to raise $60 billion+ at IPO (expected second half of 2026)
- Compute agreements totaling $80+ billion in infrastructure
Anthropic has gone from a scrappy AI safety startup founded by former OpenAI researchers to one of the most valuable private companies in history. The SpaceX deal cements their position as a company willing to partner with anyone — even Elon Musk — to win the compute race.
International Expansion and Community Commitment
Anthropic also used this announcement to signal international expansion plans. Enterprise customers in regulated industries — financial services, healthcare, government — increasingly need in-region infrastructure for compliance and data residency requirements.
The company’s collaboration with Amazon includes additional inference capacity in Asia and Europe. Anthropic says they’re being “intentional” about where they expand, partnering with democratic countries that have secure supply chains.
In an unusual move for a tech company, Anthropic also committed to covering any consumer electricity price increases caused by their data centers in the US, and is exploring extending that commitment internationally. It’s a direct response to growing concerns about AI’s energy footprint.
The Bottom Line
This deal represents two of the most ambitious companies on Earth — one building the most capable AI, the other building rockets — joining forces to solve AI’s biggest bottleneck: compute.
For Claude users, the immediate impact is clear: double the rate limits, no more peak throttling, and higher API caps. For the industry, the SpaceX partnership and orbital compute discussions signal that the AI compute arms race is entering a new phase entirely.
The question isn’t whether AI companies will need space-based data centers. It’s how soon.