Compare top AI model providers in one up-to-date table covering flagship models, API access, context, and links to official docs.
1. **Enter your values.** Open the AI Model Comparison Table and fill in the required input fields with your numbers or selections.
2. **Review the calculation.** The tool computes the result automatically as you type; double-check your inputs to ensure accuracy.
3. **Interpret your results.** Review the calculated output, along with any breakdowns, charts, or explanations provided, to understand what the numbers mean for your situation.
Go deeper with workflow guides, side-by-side comparisons, and reusable embeds connected to this tool.
Add this tool to your website with a simple iframe.
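A minimal embed sketch follows. The `src` URL is a placeholder, not the tool's actual embed address; copy the real URL from the tool's embed panel before using it:

```html
<!-- Hypothetical embed snippet: replace src with the official embed URL -->
<iframe
  src="https://example.com/ai-model-comparison-table/embed"
  width="100%"
  height="800"
  style="border: 0;"
  loading="lazy"
  title="AI Model Comparison Table">
</iframe>
```

The `loading="lazy"` attribute defers loading until the iframe scrolls into view, and `title` keeps the embed accessible to screen readers.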
This tool is part of larger workflows. Open a hub to continue with the next relevant tools.
Continue your workflow with these tools from the same playbook.
AI vs Human Cost Calculator
Compare the cost of completing a task with AI tools versus hiring a human. Includes API costs, time savings, and ROI.
AI Automation Savings Calculator
Estimate time and cost savings from automating recurring business tasks with AI and workflow tools.
Customer Lifetime Value Calculator
Estimate customer lifetime value using monthly revenue, gross margin, and churn. Includes LTV:CAC ratio.
AI Agent Cost Estimator
Estimate monthly AI agent cost from model usage, tools, orchestration, and support overhead.
AI Automation Savings Calculator
Estimate time and cost savings from automating recurring business tasks with AI and workflow tools.
AI Business Case Generator
Generate a structured AI implementation business case with ROI, payback, risks, and timeline.
AI Carbon Footprint Calculator
Estimate annual carbon footprint of model usage based on query volume and usage patterns.
AI Chatbot ROI Calculator
Calculate expected ROI from deploying an AI chatbot for support, lead capture, and customer engagement.
AI Compliance Cost Calculator
Estimate AI compliance and governance costs by use case risk level and organizational scope.
AI Content Cost Calculator
Calculate the cost of producing content with AI versus hiring writers. Compare per-article costs and quality trade-offs.
AI Job Impact Analyzer
Analyze how AI will affect your job across 27 roles and 15 industries. See task-level automation risk, AI tools to learn, emerging roles, and a personalized action plan.
Next Step
Continue with AI vs Human Cost Calculator
19 models from 11 providers — 11 proprietary, 8 open source. Prices are official API rates in USD per 1M tokens; totals assume 1M input + 1M output tokens. Last verified: 2026-02-25.
Lowest API Cost
DeepSeek DeepSeek-V3.2
$0.70
1M input + 1M output tokens
Highest API Cost
Anthropic Claude Opus 4.6
$30.00
1M input + 1M output tokens
Free to Self-Host
8 open-source models
$0.00
DeepSeek, Llama 4, Qwen3, GLM, Kimi & more
| Provider | Model | Type | Best For | Input ($/1M) | Output ($/1M) | Total | Context | Performance | Links |
|---|---|---|---|---|---|---|---|---|---|
| DeepSeek | DeepSeek-V3.2 | Open Source | Ultra-low-cost experimentation and math/reasoning tasks | $0.28 | $0.42 | $0.70 per 2M tokens | 128K | AIME 2025 96%, HMMT 2025 99.2%, IMO gold medal | |
| Alibaba | Qwen3-Coder 480B | Open Source | Open-source coding tasks and code generation at scale | $0.22 | $1.00 | $1.22 per 2M tokens | 262K | Competes with Claude and GPT on coding benchmarks | |
| Mistral | Mistral Large 3 | Proprietary | Cost-sensitive production workloads and open-model flexibility | $0.50 | $1.50 | $2.00 per 2M tokens | 131K | Strong value at price point, open-weight available | |
| DeepSeek | DeepSeek-R1 | Open Source | Reasoning-heavy tasks and chain-of-thought problem solving | $0.55 | $2.19 | $2.74 per 2M tokens | 128K | Matches OpenAI o1 on competitive math and coding | |
| Google | Gemini 2.5 Flash | Proprietary | Cost-efficient high-volume tasks with large context needs | $0.30 | $2.50 | $2.80 per 2M tokens | 1M | Excellent cost-efficiency with full 1M context | |
| Moonshot AI | Kimi K2.5 | Open Source | Multimodal reasoning with video understanding | $0.60 | $3.00 | $3.60 per 2M tokens | 256K | Multimodal (text, image, video). Reasoning variant available | |
| Anthropic | Claude Haiku 4.5 | Proprietary | High-volume classification, chat, and lightweight tasks | $1.00 | $5.00 | $6.00 per 2M tokens | 200K | Fastest Claude model, strong for its cost tier | |
| Alibaba | Qwen3 235B | Open Source | Unrestricted commercial use and multilingual applications | $1.20 | $6.00 | $7.20 per 2M tokens | 262K | Full model family 0.6B-235B, strong multilingual performance | |
| OpenAI | GPT-5.1 | Proprietary | Balanced cost and performance for coding and agentic tasks | $1.25 | $10.00 | $11.25 per 2M tokens | 400K | Strong coding and agentic task performance | |
| Google | Gemini 2.5 Pro | Proprietary | General-purpose multimodal with Google ecosystem integration | $1.25 | $10.00 | $11.25 per 2M tokens | 1M | Strong thinking model with dynamic compute allocation | |
| Cohere | Command A | Proprietary | Enterprise RAG, tool use, and multilingual workloads | $2.50 | $10.00 | $12.50 per 2M tokens | 256K | 150% higher throughput than Command R+ | |
| Google | Gemini 3.1 Pro | Proprietary | Top-ranked overall performance and multimodal tasks | $2.00 | $12.00 | $14.00 per 2M tokens | 1M | LMArena ~1490 Elo, GPQA Diamond 94.3%, SWE-Bench 80.6% | |
| OpenAI | GPT-5.2 | Proprietary | Top-tier coding, complex reasoning, and agent workflows | $1.75 | $14.00 | $15.75 per 2M tokens | 400K | ARC-AGI >90%, SWE-Bench Pro 56.4%, Terminal-Bench 64% | |
| Anthropic | Claude Sonnet 4.6 | Proprietary | Production agents, long-context tasks, and quality writing | $3.00 | $15.00 | $18.00 per 2M tokens | 200K | Near-flagship quality at lower cost (per VentureBeat) | |
| xAI | Grok 4 | Proprietary | Reasoning-heavy apps with budget-friendly fast variants | $3.00 | $15.00 | $18.00 per 2M tokens | 256K | LMArena ~1483 Elo (Grok 4.1 variant) | |
| Anthropic | Claude Opus 4.6 | Proprietary | Enterprise knowledge work, deep reasoning, and agentic coding | $5.00 | $25.00 | $30.00 per 2M tokens | 200K (1M beta) | ARC-AGI-2 68.8%, GPQA Diamond 91%, Terminal-Bench 2.0 leader | |
| Meta | Llama 4 Maverick | Open Source | Self-hosted multimodal apps with strong coding performance | Self-host | Self-host | Free* | 1M | Exceeds GPT-4o and Gemini 2.0 Flash on coding and reasoning | |
| Meta | Llama 4 Scout | Open Source | Massive context windows and efficient self-hosted deployment | Self-host | Self-host | Free* | 10M | Industry-leading context for open models | |
| Z.AI | GLM-4.7 | Open Source | Top open-source coding and competitive math benchmarks | Self-host | Self-host | Free* | 200K | HumanEval 94.2%, SWE-Bench 73.8%, AIME 2025 95.7% | |

\*Free to license and self-host; you still pay for your own inference infrastructure.
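The Total column above is simply the sum of the input and output rates applied to 1M tokens each. A small sketch of that arithmetic (the `api_cost` function name is my own; rates are taken from the table):

```python
def api_cost(input_rate: float, output_rate: float,
             input_tokens: int = 1_000_000,
             output_tokens: int = 1_000_000) -> float:
    """Cost in USD for a token volume, given per-1M-token rates."""
    return input_rate * input_tokens / 1e6 + output_rate * output_tokens / 1e6

# DeepSeek-V3.2 at the table's rates ($0.28 in, $0.42 out):
assert round(api_cost(0.28, 0.42), 2) == 0.70
# Claude Opus 4.6 ($5.00 in, $25.00 out):
assert round(api_cost(5.00, 25.00), 2) == 30.00
```

The same function scales to real workloads, e.g. `api_cost(0.28, 0.42, input_tokens=10_000, output_tokens=2_000)` prices a single prompt-plus-response call.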