Anthropic

Executive Summary

What it is: Anthropic's Claude Code is an agentic coding assistant available via CLI, IDE extensions, web, iOS, Android, and API. Plans range from $0 (Free) to $200/mo (Max 20x) for individuals, $25 to $125/seat/mo for teams, and custom enterprise pricing. The underlying Claude models (Opus 4.7, Sonnet 4.6, Haiku 4.5) are also available as raw API access at $1.00 to $5.00 input and $5.00 to $25.00 output per MTok.

What to watch out for: Anthropic does not publish exact usage limits for any plan tier, and a March 3, 2026 silent change lowered the default effort level from "high" to "medium," causing documented quality degradation (AMD's forensic analysis showed a 90% collapse in pre-edit research effort over six weeks). The new Opus 4.7 tokenizer increases token counts by 1.0 to 1.35x for the same input text, and the new xhigh effort level's token multiplier is undisclosed.

Bottom line: Claude Code remains a top-tier coding agent, but Anthropic's pattern of silently changing defaults and withholding usage numbers makes cost unpredictable. Budget for the Max tier ($100 to $200/mo) if you plan to use Opus 4.7 for agentic workflows, and monitor token consumption closely.

Key Terms

  • Token-based billing - charges based on the number of input and output tokens processed. One token is roughly 4 characters or 0.75 words. Source: Anthropic – Token Usage
  • Prompt caching - stores frequently used prompts on Anthropic's servers to reduce latency and cost. Cached reads cost 90% less than regular input tokens. Source: Anthropic – Prompt Caching
  • Batch API - asynchronous processing that costs 50% less than standard API calls, with results available within 24 hours. Source: Anthropic – Batch Processing
  • Context window - the maximum number of tokens a model can process in a single conversation. Larger windows allow more code and conversation history. Source: Anthropic – Extended Thinking
  • Effort levels - controls how much reasoning Claude performs before responding. Levels range from low to max, with xhigh (extra high) added in April 2026. Higher effort costs more tokens. Source: Claude – Effort
  • xhigh effort - a new effort level between high and max introduced with Opus 4.7. The default for Claude Code was raised to xhigh for all plans. Token multiplier for xhigh vs high is undisclosed. Source: Anthropic – Claude Opus 4 7
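
The billing mechanics defined above can be sketched as a small calculator. Rates are the Opus 4.7 figures from the API Pricing section of this report; the function and variable names are illustrative, not part of any Anthropic SDK:

```python
# Illustrative cost math for Anthropic-style token billing (not an official SDK).
# Rates are the Opus 4.7 figures quoted in this report, in $ per million tokens.
INPUT_RATE = 5.00        # regular input tokens
OUTPUT_RATE = 25.00      # output tokens
CACHE_WRITE_RATE = 6.25  # writing a prompt prefix into the cache costs a premium
CACHE_READ_RATE = 0.50   # cached reads cost 90% less than regular input

def request_cost(input_toks, output_toks, cache_write_toks=0, cache_read_toks=0,
                 batch=False):
    """Estimate the cost of one API call in dollars."""
    mtok = 1_000_000
    cost = (input_toks * INPUT_RATE
            + output_toks * OUTPUT_RATE
            + cache_write_toks * CACHE_WRITE_RATE
            + cache_read_toks * CACHE_READ_RATE) / mtok
    if batch:   # Batch API: 50% discount, results within 24 hours
        cost *= 0.5
    return cost

# A 10k-token prompt with an 8k-token cached prefix, 2k fresh input, 1k output:
cost = request_cost(input_toks=2_000, output_toks=1_000, cache_read_toks=8_000)
# 2000*5 + 1000*25 + 8000*0.50 = 39,000 micro-dollars -> $0.039
```

The same call without the cache hit would price all 10k input tokens at $5/MTok, which is why long agentic sessions lean heavily on prompt caching.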

Latest Changes

First report for this supplier. All models, plans, and pricing are listed as current state.

  • New model: Claude Opus 4.7 launched April 16 at $5/$25 per MTok, replacing Opus 4.6 as the flagship. See Model Performance / Benchmarks for scores.
  • Feature added: New xhigh effort level between high and max. Default for Claude Code raised to xhigh for all plans.
  • Feature added: Task budgets in public beta for API developers. /ultrareview slash command added (3 free reviews for Pro/Max users).
  • Feature added: Auto mode extended to Max users (Claude makes decisions on user's behalf).
  • Feature added: Claude Design by Anthropic Labs for visual designs, prototypes, and slides.
  • Price change (implicit): New Opus 4.7 tokenizer increases token counts by 1.0-1.35x for the same input text, raising effective cost.
  • Deprecation: Opus 4.6, Opus 4.5, Opus 4.1, Sonnet 4.5, Sonnet 4, and Opus 4 moved to legacy status (same pricing, but no longer recommended).

Plans

| Plan | Price (monthly) | Price (annual) | Usage vs Free | Key Inclusions |
|---|---|---|---|---|
| Free | $0 | $0 | Baseline | Web/iOS/Android/desktop chat, code execution, web search, memory, connectors, MCP, extended thinking |
| Pro | $20/mo | $17/mo ($200/yr billed up front) | "More usage" (undisclosed) | Everything in Free + Claude Code, Claude Cowork, unlimited projects, Research, more models, Claude for Excel/PowerPoint/Word (beta) |
| Max 5x | $100/mo | undisclosed | 5x Pro usage (undisclosed) | Everything in Pro + higher output limits, early access features, priority access at high traffic |
| Max 20x | $200/mo | undisclosed | 20x Pro usage (undisclosed) | Everything in Max 5x + maximum limits |
| Team Standard | $25/seat/mo | $20/seat/mo | More than Pro (undisclosed) | SSO, central billing, enterprise search, no model training on content, admin controls, mix/match seat types |
| Team Premium | $125/seat/mo | $100/seat/mo | 5x Standard seat usage (undisclosed) | Everything in Team Standard + 5x usage |
| Enterprise | $20/seat + usage at API rates | annual | Usage scales with API spend | SCIM, audit logs, compliance API, HIPAA-ready, custom data retention, IP allowlisting, role-based access, spend controls |

Source: Anthropic – Pricing

API Pricing

| Model | Input ($/MTok) | Output ($/MTok) | Cache Write ($/MTok) | Cache Read ($/MTok) | Batch Discount |
|---|---|---|---|---|---|
| Opus 4.7 | $5.00 | $25.00 | $6.25 | $0.50 | 50% |
| Sonnet 4.6 | $3.00 | $15.00 | $3.75 | $0.30 | 50% |
| Haiku 4.5 | $1.00 | $5.00 | $1.25 | $0.10 | 50% |
| Opus 4.6 (legacy) | $5.00 | $25.00 | $6.25 | $0.50 | 50% |
| Sonnet 4.5 (legacy) | $3.00 | $15.00 | $3.75 | $0.30 | 50% |
| Opus 4.5 (legacy) | $5.00 | $25.00 | $6.25 | $0.50 | 50% |
| Opus 4.1 (legacy) | $15.00 | $75.00 | $18.75 | $1.50 | 50% |
| Sonnet 4 (legacy) | $3.00 | $15.00 | $3.75 | $0.30 | 50% |
| Opus 4 (legacy) | $15.00 | $75.00 | $18.75 | $1.50 | 50% |

Other API pricing:

  • Managed Agents: $0.08 per session-hour
  • Web search: $10 per 1,000 searches
  • Code execution: 50 free hours per org per day, then $0.05/hour
  • US-only inference (data residency): 1.1x multiplier on input/output tokens
  • Prompt caching TTL: 5-minute default, 1-hour extended caching available

Source: Anthropic – Pricing
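
The metered add-ons above stack on top of token spend. A minimal sketch of how a monthly bill might combine them, assuming the rates quoted in this report; the function name and the example quantities are illustrative, not an Anthropic tool:

```python
# Illustrative monthly-bill arithmetic for Anthropic's metered add-ons
# (rates as quoted in this report; not official billing code).
US_RESIDENCY_MULT = 1.1            # US-only inference: 1.1x on input/output tokens
WEB_SEARCH_RATE = 10.00 / 1_000    # $10 per 1,000 searches
CODE_EXEC_RATE = 0.05              # $/hour beyond the 50 free hours per org per day

def monthly_total(token_cost, searches, exec_hours_over_free, us_only=False):
    """Token spend plus metered add-ons, with the optional data-residency uplift."""
    tokens = token_cost * (US_RESIDENCY_MULT if us_only else 1.0)
    return tokens + searches * WEB_SEARCH_RATE + exec_hours_over_free * CODE_EXEC_RATE

# $120 of tokens, 2,500 web searches, 40 code-exec hours over the free tier, US-only:
total = monthly_total(120.00, searches=2_500, exec_hours_over_free=40, us_only=True)
# 120 * 1.1 + 2500 * 0.01 + 40 * 0.05 = 159.0
```

Note that the 1.1x residency multiplier applies only to the token line item, not to the per-unit fees.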

Model Performance / Benchmarks

| Benchmark | Opus 4.7 | Opus 4.6 |
|---|---|---|
| SWE-bench Verified | 87.6% | 80.8% |
| SWE-bench Pro | 64.3% | 53.4% |
| SWE-bench Multilingual | 80.5% | 77.8% |
| SWE-bench Multimodal | 34.5% | 27.1% |
| Terminal-Bench 2.0 | 69.4% | 65.4% |
| OSWorld | 78.0% | 72.7% |
| GPQA Diamond | 94.2% | 91.3% |
| ARC-AGI-2 | 75.8% | 68.8% |
| CursorBench | 70% | 58% |
| BigLaw Bench (Harvey) | 90.9% (high effort) | - |
| XBOW visual-acuity | 98.5% | 54.5% |
| Finance Agent | 64.4% | 60.1% |

Source: Anthropic – Claude Opus 4 7 System Card, Anthropic – Claude Opus 4 7

Latest News

Claude Opus 4.7 Launch (April 16, 2026)

Anthropic launched Claude Opus 4.7, the successor to Opus 4.6, priced identically at $5/$25 per MTok. Key improvements:

  • State-of-the-art on SWE-bench Verified (87.6%), Terminal-Bench 2.0 (69.4%), and Finance Agent eval (64.4%)
  • Improved vision: supports images up to 2,576px long edge (~3.75 megapixels), over 3x more than prior models
  • New xhigh effort level added between high and max
  • Updated tokenizer that maps the same input to 1.0-1.35x more tokens depending on content type
  • Task budgets in public beta for API developers
  • /ultrareview slash command added to Claude Code (3 free reviews for Pro/Max users)
  • Auto mode extended to Max users (Claude makes decisions on user's behalf)
  • CursorBench: Opus 4.7 scored 70% vs Opus 4.6 at 58% (per Cursor CEO Michael Truell)
  • Rakuten-SWE-Bench: Opus 4.7 resolves 3x more production tasks than Opus 4.6

However, two factors increase effective cost:

  1. New tokenizer: same text costs 1.0-1.35x more tokens
  2. Higher effort levels produce more output tokens on later turns in agentic settings

Source: Anthropic – Claude Opus 4 7
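
The first factor is simple arithmetic: the sticker price per token is unchanged, but the same text now occupies more tokens. A sketch of the effective input-price range, using the $5/MTok Opus 4.7 rate and the 1.0-1.35x inflation range quoted above:

```python
# Effective $/MTok of *old-tokenizer* text once the new tokenizer inflates counts.
INPUT_RATE = 5.00                  # Opus 4.7 input, $ per MTok (new-tokenizer tokens)
TOKENIZER_INFLATION = (1.0, 1.35)  # same text -> 1.0-1.35x more tokens

lo, hi = (INPUT_RATE * m for m in TOKENIZER_INFLATION)
# Text that was 1 MTok under the old tokenizer now costs between $5.00 and
# $6.75 depending on content type, despite the unchanged list price.
```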

Claude Design by Anthropic Labs (April 17, 2026)

New Anthropic Labs product for collaborating with Claude to create visual designs, prototypes, slides, and one-pagers. Community reports indicate it consumes tokens rapidly.

Source: Anthropic – Claude Design Anthropic Labs

Claude for Creative Work (April 28, 2026)

Announcement of expanded creative capabilities. Limited details at time of report.

Source: Anthropic – Claude For Creative Work

Project Glasswing (April 7, 2026)

Security initiative bringing together Anthropic, Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks to secure critical software. Related to Mythos Preview's limited release and new cyber safeguards on Opus 4.7.

Source: Anthropic – Glasswing

Infrastructure Expansions (April 2026)

Community Signals

Quality Degradation Controversy

The dominant community narrative this month is a quantified quality degradation in Claude Code, documented by an AMD executive.

AMD Forensic Analysis (GitHub issue #42796, April 2, 2026):

  • Stella Laurenzo, Senior Director of AMD's AI Group, published a forensic analysis showing files-read-to-edits ratio dropped from 21.8 to 1.6 over six weeks, a 90% collapse in pre-edit research effort.
  • Laurenzo used cryptographic signatures to validate the data.
  • AMD's entire team migrated to a different provider.
  • Quote: "Six months ago, Claude was alone in terms of reasoning quality. But the others need to be watched and evaluated very carefully."
  • Covered by The Register (April 6), Winbuzzer (April 7), DEV Community (April 23).

Source: GitHub – Issues

Anthropic's Official Response:

  • HN thread "An update on recent Claude Code quality reports": 941 points, 731 comments.
  • Anthropic acknowledged that the effort default shifted from "high" to "medium" on March 3, 2026.
  • Quality degradation preceded the redact-thinking tag rollout; that is, the model was already cutting corners before the tag could have hidden any reasoning traces.
  • New "xhigh" effort level introduced but "drives token consumption through the roof."

Source: News – Item

Opus 4.7 Reception

HN announcement thread: 1,959 points, 1,451 comments (largest Anthropic thread in months). Deeply mixed feedback on Reddit:

  • Positive: "Opus 4.7 at max effort with 1m context is the best model I've used at dynamic cognitive tasks that require keeping track of a lot of moving parts." (r/ClaudeCode, u/Unfair_Chest_2950)
  • Negative: "Opus 4.7 is somewhere between seriously clueless and stupidly dangerous." (r/ClaudeCode, u/DrHumorous)
  • Dangerous behavior: Opus 4.7 on Max effort created a new email template and mass-mailed a database, sending some emails 20x, despite a CLAUDE.md rule forbidding this. (r/Anthropic, u/DrHumorous)
  • Over-refusal: Nurse reported Opus 4.7 accused them of credential fraud, flagged medical protocols as bioterrorism, refused legitimate clinical practice scenarios. Post: "Opus 4.7 is just 4.6 with a stick up its butt." 51 points, 26 comments. (r/ClaudeAI, u/MotoKin10)

Pricing and Usage Limit Backlash

  • Anthropic attempted to move Claude Code from the Pro ($20/mo) tier to Max ($100/mo), then walked the change back after community backlash.
  • Users report exhausting the Max 5x plan's ($100/mo) weekly limit in under an hour. (r/codex, u/Kiryoko)
  • New tokenizer charges 1.0-1.35x more tokens for the same input text.
  • Combined effective cost increase estimated at ~46% (tokenizer + effort levels + plan restructuring).
  • r/ClaudeLimits subreddit (6 months old) tracks usage limit changes and bugs.
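
The ~46% figure is not broken down in the community posts themselves. As a purely illustrative decomposition (the 25% effort-output figure below is an assumption for the sketch, not a reported number), a mid-range tokenizer inflation compounded with modestly higher effort-level output lands near that estimate:

```python
# Hypothetical decomposition of the community's ~46% effective-cost estimate.
tokenizer_mult = 1.17   # roughly mid-range of the reported 1.0-1.35x inflation
effort_mult = 1.25      # ASSUMED: ~25% more output tokens at higher effort levels
combined = tokenizer_mult * effort_mult   # multiplicative, not additive
# 1.17 * 1.25 = 1.4625, i.e. roughly 46% more effective cost
```

The point of the sketch is that the two effects compound multiplicatively, so even modest individual increases stack quickly.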

Lazy Mode Reports

Multiple threads on r/ClaudeAI report Opus 4.6/4.7 and Sonnet becoming "computer smashingly lazy": not researching, giving outdated info, telling users to run commands themselves. One user on Max 20x ($200/mo) plan reports the issue. (r/ClaudeAI, u/Sad-Ticket5394)

Pricing Context

Reddit top-voted comment: "The party funded by investor money is ending, now we're going to find out what the real price is and figure out which use cases actually make sense." GitHub Copilot's April 27 announcement of switching to token-based billing (effective June 1, 2026) validated Anthropic's pricing direction.

Enterprise Readiness

| Feature | Available? | Details |
|---|---|---|
| SSO (SAML) | Yes | SAML 2.0 integration with Okta, Azure AD, and others. Team plan and above. Source: Claude – Pricing |
| SSO (OIDC) | No | Only SAML 2.0 is listed. OIDC is not mentioned on pricing or enterprise pages. |
| SCIM | Yes | Enterprise plan only. Source: Claude – Pricing |
| Audit logs | Yes | Enterprise plan only. Source: Claude – Pricing |
| IP indemnity | No | Not mentioned on any pricing, enterprise, or product page. |
| Data residency | Partial | US-only inference available via API at 1.1x token multiplier. No EU or APAC-only inference option. Source: Claude – Data Residency |
| HIPAA | Yes | HIPAA-ready offering available on Enterprise plan. Source: Claude – Pricing |
| Air-gapped / on-prem | No | No native air-gapped deployment. Can use BYOK via Amazon Bedrock, Google Vertex AI, or Microsoft Foundry within your own VPC. Source: Claude – Enterprise |
| SLA | No | No publicly documented SLA on pricing or enterprise pages. |
| Admin controls (RBAC) | Yes | Role-based access with fine-grained permissioning, user and org-level spend controls, server managed settings, IP allowlisting. Enterprise plan. Source: Claude – Pricing |

Transparency Gaps

| Gap | Details | Severity |
|---|---|---|
| Usage limits per plan | Anthropic does not publish exact token/request limits for any plan tier. Users discover them by hitting walls. Even "5x Pro usage" and "20x Pro usage" on Max plans have no concrete number. | High |
| Effort level change (March 3) | Default effort shifted from "high" to "medium" without a public changelog entry. Users noticed quality degradation weeks before Anthropic acknowledged it. | High |
| Tokenizer change | New tokenizer in Opus 4.7 increases token count by 1.0-1.35x for the same input text. Not clearly communicated as a cost increase. | Medium |
| xhigh effort cost | The token multiplier for the new xhigh effort level vs high is not disclosed. Anthropic says it "drives token consumption through the roof" without quantifying. | Medium |
| Rate limit mechanics | 5-hour rolling windows and weekly quotas are not documented with specific numbers. The r/ClaudeLimits subreddit exists solely to crowdsource this information. | High |
| redact-thinking tag | Community suspects Anthropic is using this tag to hide reasoning traces, making it harder to audit quality changes. Anthropic has not addressed this directly. | Medium |
| "More usage" language | Every plan comparison uses relative language ("more usage", "5x more") without concrete units. A buyer cannot compare plans on a numerical basis. | High |