For context on where Anthropic sits in the broader AI landscape, see our overview of the best AI models in 2026.

Yesterday, Anthropic blocked third-party Claude access for every Pro and Max subscriber running tools like OpenClaw. If you had an agent humming along on your $200/month plan, it stopped working at noon Pacific on April 4th. No grace period. No grandfathering. Just a hard cutoff and a polite email offering you a refund.

I run OpenClaw myself, so this one hit close to home. Let me walk you through what actually happened, why Anthropic did it, and what your options look like going forward.

If you want to know more about Claude, you can read this comparison of Claude vs ChatGPT: https://aspirii.com/claude-vs-chatgpt-2026/

Anthropic blocked third-party Claude access — here’s how it went down

Boris Cherny, Anthropic’s Head of Claude Code, announced the change on X. Starting April 4, 2026, Claude subscriptions no longer cover usage routed through third-party harnesses. OpenClaw was first on the chopping block, but Anthropic confirmed the ban covers all third-party harnesses. More tools will be blocked in the coming weeks.

The short version? Anthropic says its subscription plans were built for conversational use through its own apps. Third-party tools were never supposed to be part of the deal.

Now, this didn’t come out of nowhere. If you’ve been paying attention, the writing has been on the wall for months. Anthropic’s Consumer Terms of Service — specifically Section 3.7 — have prohibited automated access through unauthorized tools since February 2024. That’s over two years. In January 2026, Anthropic engineer Thariq Shihipar hinted publicly that enforcement was coming. Then on February 20th, the company updated its legal compliance page to spell it out in plain language: using OAuth tokens from Claude Free, Pro, or Max accounts in any third-party product is a Terms of Service violation.

So yesterday wasn’t a new rule. It was the enforcement finally catching up to an existing one.

Why did Anthropic block third-party Claude access now?

Follow the money. A Claude Max subscription costs up to $200 per month. However, heavy OpenClaw users were reportedly burning through $1,000 to $5,000 per day in API-equivalent compute on those same flat-rate plans. Read that again — daily costs exceeding $1,000 on a $200 monthly plan. Obviously, that math doesn’t work.
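A quick sanity check on that gap, using the figures reported above (the burn rates are the reported estimates, not audited numbers):

```python
# How far a $200/month plan stretches against reported agent burn rates.
# Figures are the article's reported estimates, not official numbers.
subscription = 200                                # $ per month, Claude Max
daily_burn_low, daily_burn_high = 1_000, 5_000    # reported API-equivalent $/day
days = 30

low, high = daily_burn_low * days, daily_burn_high * days
print(f"monthly compute: ${low:,} - ${high:,} against ${subscription} in revenue")
print(f"cost/revenue ratio: {low // subscription}x - {high // subscription}x")
# prints: monthly compute: $30,000 - $150,000 against $200 in revenue
# prints: cost/revenue ratio: 150x - 750x
```

Even at the low end, one such subscriber costs 150 times what they pay. That is the math Anthropic decided to stop subsidizing.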

There’s a technical angle here too. Anthropic’s own tools — Claude Code and Claude Cowork — are built to maximize prompt cache hit rates. In other words, they reuse previously processed context to cut down on inference costs. Third-party harnesses like OpenClaw skip those optimizations entirely. As a result, a single heavy OpenClaw session eats up far more infrastructure than an equivalent Claude Code session producing the same output.
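To put rough numbers on why cache misses matter, here's a back-of-envelope sketch. It assumes the article's $3-per-million Sonnet input rate, plus Anthropic's published prompt-caching multipliers (cache writes at 1.25x base, cache reads at roughly 0.1x base); actual rates and token counts will vary:

```python
# Back-of-envelope: re-sending a large context every turn (no caching)
# versus writing it to the prompt cache once and reading it back cheaply.
# Rates are assumptions based on published pricing; token counts are made up.
BASE_INPUT_RATE = 3.00 / 1_000_000        # $ per input token
CACHE_WRITE_RATE = BASE_INPUT_RATE * 1.25  # cache writes cost a premium
CACHE_READ_RATE = BASE_INPUT_RATE * 0.10   # cache reads are heavily discounted

context_tokens = 150_000   # a hefty agent context
turns = 50                 # agent turns in one session

# Uncached: every turn pays full price for the whole context.
uncached = context_tokens * turns * BASE_INPUT_RATE

# Cached: one write on the first turn, cheap reads on the rest.
cached = (context_tokens * CACHE_WRITE_RATE
          + context_tokens * (turns - 1) * CACHE_READ_RATE)

print(f"uncached: ${uncached:.2f}")   # prints: uncached: $22.50
print(f"cached:   ${cached:.2f}")     # prints: cached:   $2.77
```

Roughly an order of magnitude difference for identical output, which is exactly the gap between a cache-optimized harness and one that skips the optimization.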

Cherny put it bluntly: subscriptions weren’t built for these usage patterns, and capacity is something Anthropic has to manage carefully.

The analogy that caught on over at Hacker News sums it up nicely. Anthropic was running an all-you-can-eat buffet, and the autonomous AI agents showed up as sumo wrestlers.

The competitive side of the story

Here’s where it gets interesting. The timing is hard to ignore.

OpenClaw creator Peter Steinberger joined OpenAI on February 14, 2026. Weeks later, Anthropic tightened its legal terms. Then came yesterday’s enforcement. Steinberger didn’t hold back in his response — he accused Anthropic of copying popular OpenClaw features into Claude Code first, and then locking out the open-source alternative.

He’s not entirely wrong about the feature overlap, either. Anthropic recently added Discord and Telegram messaging to Claude Code — capabilities that helped make OpenClaw popular in the first place. Steinberger and investor Dave Morin tried to negotiate a softer transition with Anthropic. They got a one-week delay. That was it.

But this isn’t just about OpenClaw. Anthropic has been drawing these lines for a while now. Back in August 2025, they revoked OpenAI’s API access after catching OpenAI employees using Claude through Cursor to benchmark competing models. More recently, xAI staff lost access too. An internal xAI memo from co-founder Tony Wu confirmed the block.

Whether you call it a walled garden or a sustainable business model depends on where you sit. Either way, Anthropic is clearly steering all usage through channels it controls.

What are your options after Anthropic blocked third-party Claude access?

If you were running OpenClaw on a Claude subscription, you have four paths forward. None of them are free.

Pay-as-you-go billing. Anthropic introduced an “Extra Usage” tier on top of your existing subscription. Third-party tool usage now gets billed separately at API rates: $3 per million input tokens and $15 per million output tokens for Sonnet 4.6, and $15 and $75 respectively for Opus 4.6. For anyone doing heavy agentic work, those numbers add up quickly.

Straight API access. You can skip the subscription entirely and use an Anthropic API key instead. The per-token pricing is the same, but batching can bring costs down at higher volumes.

Switch your model provider. OpenClaw still works with OpenAI, and OpenAI has been welcoming the displaced users with open arms. Google ran a similar crackdown on AI Ultra subscribers using OpenClaw earlier this year, though, so don’t assume any provider’s generosity is permanent.

Move to a compliant harness. Tools built on Anthropic’s official Agent SDK — like Nanoclaw — are reportedly still allowed. If you’re committed to Claude as your model, this might be the least disruptive option.
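To see how quickly the pay-as-you-go rates compound, here's a quick sketch using the per-token prices quoted above. The token counts are hypothetical, but a long agent session that re-sends its context each turn can plausibly reach tens of millions of input tokens:

```python
# Rough cost of one agentic session at the article's quoted API rates.
# Prices in $ per million tokens; token counts are hypothetical.
RATES = {
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
    "opus-4.6": {"input": 15.00, "output": 75.00},
}

def session_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a session given total input and output token counts."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A busy agent loop: 20M input tokens (context re-sent each turn), 1M output.
print(f"Sonnet: ${session_cost('sonnet-4.6', 20_000_000, 1_000_000):.2f}")
print(f"Opus:   ${session_cost('opus-4.6', 20_000_000, 1_000_000):.2f}")
# prints: Sonnet: $75.00
# prints: Opus:   $375.00
```

One heavy day on Opus at these rates costs more than a year of the old flat-rate plan, which explains why the “Extra Usage” tier stings.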

To take some of the sting out, Anthropic is offering a one-time credit equal to one month of your subscription cost. You need to claim it by April 17th. They’re also giving up to 30% off pre-purchased extra usage bundles, and full refunds are available if you want out entirely.

What this tells us about where the AI industry is heading

Anthropic isn’t alone in this. OpenAI has tightened third-party access rules. Google restricts certain Gemini API usage. Microsoft pushes Azure users toward first-party AI tools. The pattern across the industry is consistent: flat-rate subscriptions powering unlimited autonomous agent workloads were never going to last.

And honestly? I think most of us saw this coming. When a $200/month subscriber can generate thousands of dollars in daily compute costs, something has to give. The surprise isn’t that Anthropic did it. The surprise is that they let it run this long.

For anyone building production workflows on a consumer subscription — regardless of the provider — yesterday was a wake-up call. These terms can change overnight. If your infrastructure depends on it, plan accordingly.

The buffet is closed. What comes next depends on whether the industry can find a pricing model that actually works for both sides — the providers paying the GPU bills and the developers building the agentic future.


Niels — IT Architect & Cybersecurity Consultant at NIST-Solutions. For more on AI, security, and Microsoft infrastructure, visit nist-solutions.dk

For a comprehensive guide to Claude’s models and pricing, see our complete guide to Claude’s current models and pricing.