AI model comparison

Choose the kind of work and your top priority; the ranking is built from vendor pricing snapshots plus subjective quality and speed scores (see "How ranking works" below).

Suggested pick: GPT-5.4 mini (OpenAI) · Context: 270,000 tokens · Speed tier: 8

Ranked models

| # | Model | Provider | Context (tokens) | Input $/M | Output $/M | Speed tier | Blended $/M |
|---|-------|----------|------------------|-----------|------------|------------|-------------|
| 1 | GPT-5.4 mini | OpenAI | 270,000 | $0.75 | $4.50 | 8 | $2.24 |
| 2 | Claude Sonnet 4.6 | Anthropic | 200,000 | $3.00 | $15.00 | 7 | $7.95 |
| 3 | Gemini 3.1 Pro (preview) | Google | 1,000,000 | $2.00 | $12.00 | 6 | $5.96 |
| 4 | GPT-5.4 | OpenAI | 270,000 | $2.50 | $15.00 | 6 | $7.45 |
| 5 | Gemini 3.1 Flash-Lite (preview) | Google | 1,000,000 | $0.25 | $1.50 | 9 | $0.74 |
| 6 | Claude Haiku 4.5 | Anthropic | 200,000 | $1.00 | $5.00 | 9 | $2.65 |
| 7 | Claude Opus 4.6 | Anthropic | 200,000 | $5.00 | $25.00 | 5 | $13.25 |
| 8 | GPT-5.4 nano | OpenAI | 270,000 | $0.20 | $1.25 | 9 | $0.61 |
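A blended $/M figure like the last column can be computed as a weighted average of input and output prices for an assumed traffic mix. The sketch below assumes a hypothetical 3:1 input:output token mix for illustration; the table's own weighting is not published here, so this will not reproduce its numbers exactly.

```python
# Hypothetical sketch: blend separate input/output per-million-token prices
# into one rate for an assumed traffic mix. The 75% input share is an
# assumption, not the weighting used by the table above.

def blended_rate(input_per_m: float, output_per_m: float,
                 input_share: float = 0.75) -> float:
    """Weighted average price per million tokens for a given token mix."""
    return input_per_m * input_share + output_per_m * (1.0 - input_share)

# GPT-5.4 mini's listed prices ($0.75 in, $4.50 out) at a 3:1 mix:
rate = blended_rate(0.75, 4.50)  # 0.75*0.75 + 4.50*0.25 = 1.6875
```

Shifting `input_share` toward 1.0 models prompt-heavy workloads (classification, retrieval), while lower values model generation-heavy ones.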

Pros and cons by model

GPT-5.4 mini (OpenAI)
Pros
  • Much cheaper than full GPT-5.4.
  • Still strong for coding assistants and agents.
Cons
  • Weaker on hardest reasoning vs flagship.
  • Same regional/tier caveats as OpenAI.
Claude Sonnet 4.6 (Anthropic)
Pros
  • Balanced intelligence, cost, and latency.
  • Strong everyday coding and writing partner.
Cons
  • Mid-range pricing vs nano/Flash.
  • Throughput limits vary by plan.
Gemini 3.1 Pro (preview) (Google)
Pros
  • Huge multimodal context window.
  • Competitive flagship pricing vs peers.
Cons
  • Preview models may change or rate-limit.
  • Tiered input price above 200k tokens.
GPT-5.4 (OpenAI)
Pros
  • Strong reasoning and long context (≤270k tier).
  • Good default for complex code and analysis.
Cons
  • Higher output cost on long answers.
  • Rates may vary by region or tier.
Gemini 3.1 Flash-Lite (preview) (Google)
Pros
  • Lowest cost in this catalog for text.
  • Excellent for translation and high QPS.
Cons
  • Preview stability and quotas apply.
  • Less depth than Pro on hardest tasks.
Claude Haiku 4.5 (Anthropic)
Pros
  • Fast and cost-efficient Claude tier.
  • Good for extraction, support bots, drafts.
Cons
  • Weaker on frontier reasoning tasks.
  • Long-context quality below Opus/Sonnet.
Claude Opus 4.6 (Anthropic)
Pros
  • Top-tier for agents and difficult coding.
  • Large effective context for long specs.
Cons
  • Highest API cost in this table.
  • US-only inference can add a surcharge.
GPT-5.4 nano (OpenAI)
Pros
  • Lowest OpenAI tier cost.
  • Great for classification, routing, high volume.
Cons
  • Limited depth for hard math or long documents.
  • Not a substitute for flagship quality.

Data collected: 2026-03-27

How ranking works

Quality is scored subjectively from 1–5 and speed is a relative tier; neither reflects live benchmarks. The ranking blends cost, task fit, and speed, with weights that depend on your selected priority.
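A priority-weighted blend like the one described can be sketched as follows. The weight tuples, quality scores, and normalization constants below are made-up placeholders, not the page's actual data.

```python
# Hypothetical sketch of priority-weighted ranking: normalize cost, quality,
# and speed to [0, 1], then combine them with weights chosen by priority.

PRIORITY_WEIGHTS = {
    # (cost, quality, speed) weights; each tuple sums to 1.0 (assumed values)
    "cheapest": (0.6, 0.2, 0.2),
    "smartest": (0.1, 0.7, 0.2),
    "fastest":  (0.2, 0.2, 0.6),
}

def score(blended_cost: float, quality: int, speed_tier: int,
          priority: str, max_cost: float = 15.0) -> float:
    """Higher is better; cheaper, smarter, faster models score higher."""
    wc, wq, ws = PRIORITY_WEIGHTS[priority]
    cost_score = 1.0 - min(blended_cost / max_cost, 1.0)  # cheaper -> higher
    return wc * cost_score + wq * quality / 5.0 + ws * speed_tier / 10.0

# Placeholder (blended $/M, quality 1-5, speed tier) per model:
models = {
    "GPT-5.4 nano": (0.61, 3, 9),
    "Claude Opus 4.6": (13.25, 5, 5),
}
ranked = sorted(models, key=lambda m: score(*models[m], "cheapest"),
                reverse=True)
```

Under the "cheapest" priority the cost weight dominates, so the nano tier outranks Opus even with a lower quality score; switching to "smartest" reverses the ordering.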

Disclaimer

Figures are a planning snapshot from vendor pages on the collection date. Real bills depend on caching, batch tiers, tools, and prompt length.
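For a back-of-envelope check against the snapshot, a single request's cost is just tokens divided by a million times the per-million rate on each side. The sketch assumes GPT-5.4 mini's listed rates and ignores caching, batch tiers, and tool overhead, which the disclaimer notes can change real bills.

```python
# Hypothetical cost check using GPT-5.4 mini's listed rates
# ($0.75/M input, $4.50/M output); no caching or batch discounts assumed.

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float = 0.75, output_per_m: float = 4.50) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1e6 * input_per_m
            + output_tokens / 1e6 * output_per_m)

# 10,000 prompt tokens + 1,000 completion tokens:
cost = request_cost(10_000, 1_000)  # 0.0075 + 0.0045 = $0.012
```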
