Key Terms
- Credits - Augment's billing unit for measuring usage. Each action (chat, agent turn, code review) consumes credits based on the model used, context size, and response length. For example, a small task with 10 tool calls costs roughly 300 credits; a complex task with 60 tool calls costs roughly 4,300 credits. Source: Augmentcode – Credit Based Pricing
- Context Engine - Augment's proprietary codebase indexing and retrieval system that provides real-time context to models. It indexes at repo scale and stays current with live changes. Source: Augmentcode – Context Engine
- Intent - Augment's agent orchestration workspace for spec-driven development. A coordinator breaks work into discrete tasks before execution. Currently macOS only. Source: Augmentcode – Intent
- Auggie CLI - Augment's command-line agent interface. Supports model switching mid-conversation via the `/model` command. Source: Augmentcode – Cli
- Code Review - Augment's AI-powered PR review tool for GitHub pull requests, available on all paid plans. Enterprise tier adds advanced analytics, user allowlists, MCP configuration, multi-org support, and unlimited seats. Source: Augmentcode – Enterprise Features
- Prompt caching - storing frequently used prompts on the provider's servers to reduce latency and cost. Not directly exposed in Augment's credit system. Source: Augmentcode – Credit Based Pricing
- MCP (Model Context Protocol) - an open protocol for connecting AI agents to external tools and data sources. Augment supports MCP servers for integrations like Jira, Linear, and Notion. Source: Augmentcode – Context Engine Mcp
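The credit example under "Credits" gives two published data points (roughly 300 credits at 10 tool calls, roughly 4,300 at 60). A back-of-envelope estimator for budgeting, assuming simple linear interpolation between those two points (this is an assumption for illustration, not Augment's actual billing formula):

```python
# Rough credit estimator from the two published data points:
# ~300 credits at 10 tool calls, ~4,300 credits at 60 tool calls.
# Linear interpolation between them is an assumption, not Augment's formula.

def estimate_credits(tool_calls: int) -> float:
    """Interpolate credits between the two published (tool_calls, credits) points."""
    x1, y1 = 10, 300
    x2, y2 = 60, 4_300
    slope = (y2 - y1) / (x2 - x1)  # = 80 credits per additional tool call
    return y1 + slope * (tool_calls - x1)

print(estimate_credits(30))  # 1900.0 under this linear assumption
```

The implied marginal rate of about 80 credits per extra tool call is a useful sanity check when comparing a task's actual bill against its tool-call count.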
Latest Changes
First report for this supplier; all models, plans, and pricing below reflect the current state.
- New model: Claude Opus 4.7 becomes default model (April 16). 50% credit discount promo through April 30.
- New model: Gemini 3.1 Pro added as third model option (April 2). 268 credits per task.
- New model: Kimi K2.6 added as budget model (late April). 147 credits per task.
- Feature added: Multi-model strategy formalized (April 7). Reversed prior position that model pickers were a design failure.
- Feature added: AGENTS.md effectiveness study published (April 22). Best files deliver quality equivalent to upgrading Haiku to Opus.
- Feature added: AI-native transformation framework and podcast published.
- Feature added: Multiple changelog entries for Intent, VS Code, IntelliJ, and CLI throughout April.
Plans
| Plan | Price (monthly) | Included credits | Auto top-up | Users | Credit pooling | Key inclusions |
|---|---|---|---|---|---|---|
| Indie | $20/mo | 40,000 | $15/24k credits | 1 | N/A (single user) | Context Engine, coding agent, chat, MCP, code review, SOC 2 Type II |
| Standard | $60/mo per dev | 130,000 | $15/24k credits | Up to 20 | Yes (team level) | Everything in Indie + Slack integration, usage analytics |
| Max | $200/mo per dev | 450,000 | $15/24k credits | Up to 20 | Yes (team level) | Everything in Standard + email-based support |
| Enterprise | Custom | Custom | Custom | Unlimited | Yes | SSO (OIDC), SCIM, CMEK, ISO 42001, SIEM integration, data residency, granular access controls, audit trails, dedicated support, volume-based annual discounts |
All paid plans include: no AI training on customer data, code review (PR summaries and inline comments, auto and manual mode, PR guidelines, MCP support).
Credit top-ups purchased beyond a plan's included allotment expire 12 months after purchase.
Source: Augmentcode – Pricing
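The included allowances in the plan table work out to noticeably different effective rates, and top-ups are priced above every plan's included rate. A quick comparison from the published numbers (top-ups are the same $15 per 24,000 credits on all plans):

```python
# Effective $ per 1,000 credits, computed from the plan table above.
def rate_per_1k(price_usd: float, credits: int) -> float:
    """Dollars per 1,000 credits for a given monthly price and allowance."""
    return price_usd / credits * 1_000

plans = {"Indie": (20, 40_000), "Standard": (60, 130_000), "Max": (200, 450_000)}
for name, (price, credits) in plans.items():
    print(f"{name}: ${rate_per_1k(price, credits):.3f} per 1k credits")

# Auto top-up rate is identical on every plan: $15 per 24,000 credits.
print(f"Top-up: ${rate_per_1k(15, 24_000):.3f} per 1k credits")
```

Included credits range from $0.500 (Indie) down to about $0.444 (Max) per 1,000, while top-ups run $0.625 per 1,000, so heavy users are better served by a larger plan than by repeated top-ups.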
Terms explained:
- CMEK (Customer-Managed Encryption Keys) - enterprise customers control their own encryption keys rather than relying on the provider's default key management. Available only on Enterprise plan.
- ISO 42001 - an international standard for AI management systems, covering risk management and governance of AI systems. Source: Augmentcode – Pricing
- SIEM (Security Information and Event Management) - integration with enterprise security monitoring tools like Splunk or Datadog to forward audit logs. Available only on Enterprise plan.
API Pricing
Augment Code does not offer a standalone API. It is a closed-platform coding agent accessed via IDE extensions (VS Code, JetBrains), the Auggie CLI, the Intent workspace, Slack integration, and GitHub Code Review. There is no public token-based API pricing.
Instead, usage is billed through the credit system. Credit costs per model (based on a "standard medium-complexity task"):
| Model | Credits per task | Relative cost vs Sonnet 4.6 |
|---|---|---|
| Claude Haiku 4.5 | 88 | 30% |
| Kimi K2.6 | 147 | 50% |
| GPT-5.4 | 210 | 72% |
| GPT-5.1 | 219 | 75% |
| Gemini 3.1 Pro | 268 | 92% |
| Claude Sonnet 4.5 | 293 | 100% (baseline) |
| Claude Sonnet 4.6 | 293 | 100% (baseline) |
| GPT-5.2 | 390 | 133% |
| GPT-5.5 | 420 | 143% |
| Claude Opus 4.5 | 488 | 167% |
| Claude Opus 4.6 | 488 | 167% |
| Claude Opus 4.7 | 488 | 167% |
Launch promo: Claude Opus 4.7 credits are 50% off until April 30, 2026.
Notable: Opus 4.5, 4.6, and 4.7 all cost 488 credits per task despite capability differences. Sonnet 4.5 and 4.6 also cost the same (293 credits). The credit system does not distinguish between model generations within the same tier.
Background activities (Context Compression, System) consume a "small fraction" of total credits. Specific numbers are undisclosed.
Source: Augmentcode – Credit Based Pricing
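The "relative cost" column follows directly from the per-task figures against the Sonnet 4.6 baseline (293 credits), and the promo price is simple arithmetic. A quick check using the numbers as published:

```python
# Recompute the relative-cost column against the Sonnet 4.6 baseline.
BASELINE = 293  # Claude Sonnet 4.6 credits per standard task

credits = {
    "Claude Haiku 4.5": 88, "Kimi K2.6": 147, "GPT-5.4": 210, "GPT-5.1": 219,
    "Gemini 3.1 Pro": 268, "Claude Sonnet 4.6": 293, "GPT-5.2": 390,
    "GPT-5.5": 420, "Claude Opus 4.7": 488,
}

def relative_cost(model: str) -> int:
    """Credits as a whole-number percent of the Sonnet 4.6 baseline."""
    return round(credits[model] / BASELINE * 100)

for model in credits:
    print(f"{model}: {relative_cost(model)}%")

# Launch promo: Opus 4.7 at 50% off until April 30, 2026.
print("Opus 4.7 promo:", credits["Claude Opus 4.7"] // 2, "credits per task")
```

Straight rounding reproduces every row except Gemini 3.1 Pro, which computes to 91% rather than the published 92%, so Augment presumably rounds up somewhere. The promo brings Opus 4.7 to 244 credits per task, below Gemini 3.1 Pro and barely above GPT-5.1, for the duration of the discount.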
Model Performance / Benchmarks
Augment does not publish independent benchmark scores for Augment Code as a product. The AGENTS.md effectiveness study (April 22) is the closest to a published benchmark:
- Best AGENTS.md files deliver quality improvements equivalent to upgrading from Haiku to Opus.
- Worst AGENTS.md files make output worse than having no file at all.
- A single AGENTS.md can boost one metric by 25% on a bug fix while dropping another by 30% on a feature task in the same module.
- Recommended: keep files to 100-150 lines with progressive disclosure.
- 100% of AGENTS.md files are discovered automatically; orphan docs in `_docs/` folders get read in under 10% of sessions.
Source: Augmentcode – How To Write Good Agents Dot Md Files
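The 100-150-line recommendation is easy to enforce mechanically. A tiny CI-style check one might run over a checkout (a hypothetical script based on the study's guidance, not an Augment tool):

```python
# Hypothetical lint (not an Augment tool): flag AGENTS.md files that
# exceed the study's recommended 100-150 line budget.
from pathlib import Path

MAX_LINES = 150

def check_agents_md(name: str, text: str) -> list[str]:
    """Return a warning if the file exceeds the recommended line budget."""
    n_lines = len(text.splitlines())
    if n_lines > MAX_LINES:
        return [f"{name}: {n_lines} lines (> {MAX_LINES}); "
                "move detail into reference docs (progressive disclosure)"]
    return []

# Scan a repo checkout for oversized AGENTS.md files.
for path in Path(".").rglob("AGENTS.md"):
    for warning in check_agents_md(str(path), path.read_text(encoding="utf-8")):
        print(warning)
```

A length check obviously cannot catch the study's scarier failure mode (a file that helps one task while hurting another), but it keeps files inside the range where the study found the best results.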
Latest News
Gemini 3.1 Pro added as third model option (April 2, 2026)
Augment added Gemini 3.1 Pro alongside the Claude and GPT models. Augment's internal benchmarks put Gemini 3.1 Pro at 268 credits per standard task, 92% of Sonnet's cost. In real usage, Augment reports it runs about 2.6x cheaper per message than Opus 4.6. Best suited for planning, debugging, investigation, and daily execution; weaker than Opus at inferring intent, and can lose context in long conversations.
Source: Augmentcode – Gemini 3 1 Pro Now Available In Augment Code
Opus 4.7 becomes default model, 50% off promo (April 16, 2026)
Augment switched the default model to Claude Opus 4.7, with a 50% credit discount running through April 30, 2026. The company reports Opus 4.7 is better at staying on plan over longer horizons, with fewer tool-state failures and fewer "re-derive the approach" turns. Biggest impact is in Intent workflows where agents execute work end-to-end. Existing threads are not automatically migrated.
Source: Augmentcode – Opus 4 7 Is Now The Default In Augment
Kimi K2.6 added as budget model option (late April 2026)
Kimi K2.6 by Moonshot AI (hosted on Fireworks and Baseten) added at 147 credits per task, making it the second cheapest model after Haiku 4.5. Positioned as "smarter than Haiku and roughly 3-4x cheaper than Opus 4.7." Best for cheap agentic work.
Source: Reddit – Augmentcodeai (announcement by Augment Team)
Multi-model strategy formalized (April 7, 2026)
Matt Ball published "The era of single-model engineering is over," reversing the company's earlier position that model pickers were a design failure. The post argues that the lead among frontier models changes every few months, and teams need architecture that decouples the model layer from the harness (Context Engine) and orchestration (Intent) layers. The company now runs 3 provider families (Anthropic, OpenAI, Google) plus Kimi and expects to add more before year end.
Source: Augmentcode – The Era Of Single Model Engineering Is Over
AGENTS.md effectiveness study (April 22, 2026)
Augment published findings from an internal study measuring how AGENTS.md files affect code generation quality. Key findings: the best AGENTS.md files deliver quality improvements equivalent to upgrading from Haiku to Opus; the worst ones make output worse than having no file at all. A single AGENTS.md can boost one metric by 25% on a bug fix while dropping another by 30% on a feature task in the same module. The study recommends keeping files to 100-150 lines with progressive disclosure into reference documents. 100% of AGENTS.md files are discovered automatically; orphan docs in _docs/ folders get read in under 10% of sessions.
Source: Augmentcode – How To Write Good Agents Dot Md Files
AI-native transformation framework (April 7, 2026)
CTO Igor Ostrovsky and team published a four-stage transformation model for engineering organizations: (1) Adopt agents, (2) Shift to AI-native, (3) Expand scope, (4) Orchestrate. Claims roughly 70% of organizations are in stage 1, 20% in stage 2, 10% in stage 3, and almost none at stage 4. At Augment itself, agents are the primary driver in 3 of 6 SDLC phases (code generation, validation, incident response), with code review and build failures as co-owned, and ideation remaining human-led.
Source: Augmentcode – The Ai Native Transformation Path End To End
AI transformation podcast with DX CTO (April 28, 2026)
Emma Webb published a discussion with Justin Reock (CTO at DX) citing DX's longitudinal study of 500 companies showing a 7.5% median uplift in PR velocity from AI tools (13% average, 70% top performer). Atlassian data shows engineers spend only 16% of their time writing code, so even a perfect code generation tool addresses only 16% of the problem. The podcast argues that system-level improvements (documentation, CI/CD speed, code modularity) matter more than tool selection.
Source: Augmentcode – Ai Transformation Is A Systems Problem
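The 16%-of-time figure puts an Amdahl-style ceiling on what any code generation tool can deliver, which is the podcast's core argument. A quick illustration (the speedup factors are assumed values for illustration):

```python
# Amdahl-style bound: if coding is only `coding_share` of an engineer's
# time, speeding up coding by `speedup`x saves far less overall time
# than the headline speedup suggests.
def overall_time_saved(coding_share: float, speedup: float) -> float:
    """Fraction of total time saved when only the coding share is sped up."""
    return coding_share * (1 - 1 / speedup)

# Atlassian figure cited above: engineers spend ~16% of time writing code.
for speedup in (1.5, 2.0, float("inf")):
    saved = overall_time_saved(0.16, speedup)
    print(f"{speedup}x faster coding -> {saved:.1%} total time saved")
```

Even an infinitely fast code generator caps out at a 16% overall saving, which is why the podcast steers attention toward system-level bottlenecks like documentation, CI/CD speed, and code modularity.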
Changelog highlights (April 2026)
- Intent 0.3.6 (Apr 23): Model picker no longer auto-swaps when a provider is slow to load; auto-fallback waits for the user's provider to settle. Fixed delegation-group race condition.
- VSCode 0.840.7 (Apr 22): Added `/settings` slash command, connectivity check before login, forked conversation recovery. Fixed pinned open files not being included in retrieval context.
- IntelliJ v0.457.5 (Apr 21): Added built-in slash commands (`/fork`, `/ask`), Hooks settings view, Figma OAuth support.
- Intent 0.3.4 (Apr 17): Redesigned setup flow for new users. Agents more resilient under memory pressure, no longer killed mid-response in background tabs. Terminal titles sanitized to prevent credential leaks.
- Auggie CLI 0.24.0 (Apr 15): Image support in view tool, `auggie context stats` command, `--json` flag for `auggie account status`.
- CLI 0.23.0 (Apr 13): Git-based plugins from marketplace, connectivity check during login.
- Intent 0.2.37 (Apr 8): Archive and delete workspaces from sidebar, window layout survives auto-updates.
- CLI 0.22.0 (Apr 7): `/fork` slash command, custom themes, git-based plugins.
- IntelliJ 0.442.12 (Apr 3): Chat history recovery from server, Figma integration, cost tier badges on models in model picker.
Source: Augmentcode – Changelog
Community Signals
Reddit (r/AugmentCodeAI, April 2026)
User switching away from Augment to Claude Code. The top post this month, with 43 upvotes and 41 comments: "From Augment to Claude Code and I'm never looking back." The thread title itself signals that Augment faces retention pressure from Anthropic's first-party Claude Code product. Specific complaints were not accessible (Reddit blocked retrieval), but the upvote ratio and comment volume suggest the dissatisfaction resonates widely.
Source: Old – Top
Code completion removal frustration. "Bring back Code completion" received 23 upvotes and 23 comments (25 days ago). Augment appears to have removed or degraded an inline code completion feature, and users want it back. This aligns with the "Press F to pay respects to these two features" post (16 upvotes, 27 days ago).
Source: Old – Top
Sonnet 4.6 reliability issues. "Why is Sonnet 4.6 constantly hanging over the last weeks?" with 3 upvotes and 14 comments. Multiple users experiencing model hangs, which may explain some of the churn toward Claude Code.
Source: Old – Top
Prompt Enhancer quality regression. "Prompt Enhancer Went From Best In Class to Barely Useful" (5 upvotes, 4 comments, 11 days ago). A user reports that the prompt enhancement feature has degraded significantly.
Source: Old – Top
Wasted credits complaint. "Wasted credits, locking up paying accounts from use" (4 upvotes, 2 comments, 2 days ago). A user reports that wasted credits are locking their account from further use. Suggests the credit system may penalize users for failed or aborted tasks.
Source: Old – Top
Loss of service reports. "Loss of service" (4 upvotes, 8 comments, 28 days ago). Multiple reports of service outages during April.
Source: Old – Top
Intent platform availability. "Will Intent ever be supported for Windows?" (16 upvotes, 13 comments) and "What's the ETA for Intent on Linux/Windows?" (6 upvotes, 6 comments). Intent remains macOS-only, which is a significant limitation for teams on other platforms.
Source: Old – Top
Positive: Code Review for open source. "AugmentCode's Code Review tool: ~6 months into their OSS program, here's my experience" (4 upvotes, 3 comments, 2 days ago). An early professional user sharing their experience with the OSS program.
Source: Old – Top
Intent review from a senior dev. "Review of Intent from a senior dev" (9 upvotes, 22 comments, 20 days ago). A substantive review thread with high engagement.
Source: Old – Top
Model requests. Users requesting GLM 5.1 (15 upvotes, 4 comments) and BYOK (bring your own key) context engine (4 upvotes, 10 comments), suggesting demand for broader model coverage and self-hosting options.
Source: Old – Top
HackerNews
No significant HN discussions about Augment Code in April 2026. The most recent HN post about Augment was from February 2026 (Intent launch, 5 points, 0 comments). Augment does not appear to have strong organic HN traction.
Source: Hn
Enterprise Readiness
| Feature | Available? | Details |
|---|---|---|
| SSO (SAML) | No | SAML is not listed; the pricing page lists SSO as OIDC only (Enterprise plan). Source: Augmentcode – Pricing |
| SSO (OIDC) | Yes | OIDC SSO on Enterprise plan. Source: Augmentcode – Pricing |
| SCIM | Yes | Enterprise plan. Source: Augmentcode – Pricing |
| Audit logs | Yes | Audit trails on Enterprise plan. Source: Augmentcode – Pricing |
| IP indemnity | No | Not mentioned on pricing or product pages. |
| Data residency | Yes | Enterprise plan. Source: Augmentcode – Pricing |
| HIPAA | No | Not mentioned on pricing or product pages. |
| Air-gapped / on-prem | No | Not available. Augment is a cloud-based platform. |
| SLA | Partial | Referenced ("same core uptime and response targets") but actual uptime percentage not on pricing page. Linked to separate legal page. |
| Admin controls (RBAC) | Yes | Granular access controls, SIEM integration, CMEK on Enterprise plan. Source: Augmentcode – Pricing |
Transparency Gaps
| Metric | Status | Details |
|---|---|---|
| Exact credit-to-token mapping | Undisclosed | Credits are an abstract unit. The conversion to actual token consumption per model is not published, making it impossible to compare Augment's credit pricing to raw API token pricing. |
| Rate limits (requests/min, requests/hour) | Undisclosed | No rate limit documentation found in pricing or docs. Users report hangs (see Sonnet 4.6 reliability thread) but it is unclear whether these are rate limits, capacity issues, or bugs. |
| Context window per model | Undisclosed | Not specified in pricing or model docs. Different models have different context windows, but Augment does not document what context size each model receives. |
| Background activity credit consumption | Undisclosed ("small fraction") | Context Compression and System activities consume credits but specific percentages or ranges are not quantified. |
| Code Review credit cost | Undisclosed | Credits can be used for Code Review, but the per-PR credit cost is not itemized separately from agent/chat usage. |
| Prompt Enhancer credit cost | Undisclosed | Listed as a separate activity type in usage analytics but the per-use credit cost is not published. |
| "Standard medium-complexity task" definition | Vague | The credit-per-task table is based on a "standard medium-complexity task" but the task definition, prompt, and context size are not published. Actual credit consumption "varies based on task complexity, context size, and response length." |
| Enterprise pricing | Fully opaque | Listed as "Custom" with no starting price, per-seat range, or credit cost range. Requires sales contact. |
| SLA uptime target | Undisclosed on pricing page | The SLA is referenced ("same core uptime and response targets") but the actual uptime percentage and response time targets are not on the pricing page. Linked to a separate legal page. |
| Intent platform support timeline | Undisclosed | Intent is macOS-only. No public ETA for Windows or Linux support despite community demand (two separate threads). |
| Team size limits on Standard/Max | Up to 20 users | Standard and Max plans cap at 20 users but this is not prominently labeled as a limit. Teams of 21+ must go to Enterprise. |
| Auto top-up trigger threshold | Undisclosed | Auto top-up is available at $15/24k credits, but the trigger (when credits run out vs. when they reach a threshold) is not documented. |
| Model availability by plan | Undisclosed | All models appear available on all paid plans, but this is not explicitly stated. It is unclear if Kimi K2.6, Gemini 3.1 Pro, etc. are available on Indie. |