Compare LLM API Pricing Across All Major Providers

Compare cost per token across all major LLM providers interactively.

Explore and Compare Prices Instantly

Use this live comparison table to explore pricing details for popular LLM APIs, including OpenAI, Anthropic's Claude, Google's Gemini, and Mistral. Easily sort and filter by provider, context window, and token pricing — prices can be displayed per 500, 1,000, or 1M tokens.

Columns: Model · Provider · Max Context · Price In · Price Out · Speed · Link

🔎 Want to check our data sources? View all provider documentation.

How to read this table

Price In: Cost of input tokens in your prompt (e.g. system or user messages).
Price Out: Cost of output tokens in generated text (e.g. chatbot responses).
Speed: Estimated relative response time (lower = faster).
Max Context: Maximum number of tokens the model can handle per request.
All prices are shown per 1,000 tokens by default. You can adjust this using the selector above.
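To estimate what a single request would cost from the table's figures, multiply each token count by the matching per-1,000-token price. Here is a minimal Python sketch; the function name and the example prices ($0.01 in, $0.03 out per 1K tokens) are hypothetical, not taken from the table:

```python
def estimate_request_cost(input_tokens, output_tokens,
                          price_in_per_1k, price_out_per_1k):
    """Estimate the USD cost of one API request.

    Prices are expressed per 1,000 tokens, matching the table's default unit.
    """
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# Hypothetical prices for illustration only: $0.01 in, $0.03 out per 1K tokens.
cost = estimate_request_cost(1500, 500, 0.01, 0.03)
print(f"${cost:.4f}")  # 1.5 * 0.01 + 0.5 * 0.03 = $0.0300
```

If a provider quotes prices per 1M tokens instead, divide by 1,000 before passing them in.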

What’s Coming Next

We’re just getting started. This project is evolving — soon you’ll find:

  • 📊 Real-world benchmarks – latency, throughput, and output quality comparisons
  • 📁 Data export – download pricing tables in CSV format
  • 📬 Smart alerts – get notified when LLM prices change
  • 🧩 AI tools directory – curated list of apps and no-code tools using LLMs
  • 📡 Public API – access pricing data programmatically

Have feedback or ideas? Let us know.


Don’t Miss a Drop – Get Notified