When it launched in July 2024, GPT-4o mini was the first model to outperform GPT-4 on the LMSYS chat-preference leaderboard while costing less than GPT-3.5 Turbo, roughly a 100x cost reduction versus GPT-4 at comparable quality. It scores 82.0% on MMLU, offers a 128K context window, and supports text and image input, function calling, and JSON mode at $0.15 per million input tokens and $0.60 per million output tokens.
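Function calling and JSON mode are both driven by the Chat Completions request body. As a minimal sketch of a JSON-mode request (this only builds the payload locally; the shape assumes the standard OpenAI-compatible Chat Completions format, and no endpoint or API key is shown):

```python
import json

def build_json_mode_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Build an OpenAI-compatible Chat Completions payload with JSON mode enabled."""
    return {
        "model": model,
        "messages": [
            # JSON mode expects the word "JSON" to appear somewhere in the prompt context.
            {"role": "system", "content": "Reply with a JSON object only."},
            {"role": "user", "content": prompt},
        ],
        # response_format switches the model into strict JSON output.
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request("Classify this ticket: 'My SIM has no signal.'")
print(json.dumps(payload, indent=2))
```

The same payload, POSTed to a Chat Completions endpoint with an API key, returns a response whose `content` field is guaranteed to parse as JSON.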
Discover the power and diversity of large language models available with Telnyx. Explore the options below to find the perfect model for your project.
Powered by our own GPU infrastructure, select a large language model, add a prompt, and chat away. For unlimited chats, sign up for a free account on our Mission Control Portal.
GPT-4o is the full multimodal model with maximum capability, while GPT-4o mini is a smaller, faster variant optimized for cost-efficient production tasks. GPT-4o mini scores 82% on MMLU while costing a fraction of GPT-4o.
GPT-4o mini is available in ChatGPT's free tier with usage limits. Through the API, it requires a paid account, with pricing at $0.15 per million input tokens.
GPT-4o mini scores 82.0% on MMLU (5-shot), surpassing GPT-3.5 Turbo (70.0%) by 12 points and approaching GPT-4 (86.4%) at less than 1% of the cost. On LMSYS chat preference it outperforms GPT-4 despite the 4-point MMLU gap, suggesting stronger conversational quality than its raw knowledge scores indicate. It is the highest quality-per-dollar model in this catalog.
Running GPT-4o mini through Telnyx Inference costs $0.15 per million input tokens and $0.60 per million output tokens. Processing 10,000,000 classification tasks at roughly 500 input and 500 output tokens each (5 billion tokens in each direction) would cost about $750 + $3,000 = $3,750, more than 60% cheaper than GPT-3.5 Turbo and roughly 100x cheaper than GPT-4 at comparable chat quality.
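That arithmetic generalizes to any workload. A small sketch (the default rates are the GPT-4o mini figures above; the per-task token counts are assumptions you would replace with your own):

```python
def batch_cost(tasks: int, input_tokens: int, output_tokens: int,
               input_rate: float = 0.15, output_rate: float = 0.60) -> float:
    """Estimate batch cost in dollars; rates are dollars per million tokens."""
    input_cost = tasks * input_tokens / 1_000_000 * input_rate
    output_cost = tasks * output_tokens / 1_000_000 * output_rate
    return input_cost + output_cost

# 10M classification tasks, 500 tokens in and 500 out per task
print(batch_cost(10_000_000, 500, 500))  # 3750.0
```

Swapping in another model's rates (say, $0.50/$1.50 for GPT-3.5 Turbo) makes the relative-cost comparison above easy to reproduce.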
| Organization | Model Name | Tasks | Languages Supported | Context Length | Parameters (0 = undisclosed) | Model Tier | License |
|---|---|---|---|---|---|---|---|
| deepseek-ai | DeepSeek-R1-Distill-Qwen-14B | text generation | English | 43,000 | 14.8B | medium | deepseek |
| fixie-ai | ultravox-v0_4_1-llama-3_1-8b | audio text-to-text | Multilingual | 8,000 | 8.7B | small | mit |
| google | gemma-2b-it | text generation | English | 8,192 | 2.5B | small | gemma |
| google | gemma-7b-it | text generation | English | 8,192 | 8.5B | small | gemma |
| meta-llama | Llama-3.3-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.3 |
| meta-llama | Llama-Guard-3-1B | safety classification | Multilingual | 128,000 | 1.5B | small | llama3.3 |
| meta-llama | Meta-Llama-3.1-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.1 |
| meta-llama | Meta-Llama-3.1-8B-Instruct | text generation | Multilingual | 131,072 | 8.0B | small | llama3.1 |
| minimaxai | MiniMax-M2.5 | text generation | English | 2,000,000 | 0 | large | minimaxai |
| minimaxai | MiniMax-M2.7 | text generation | English | 200,000 | 0 | large | minimaxai |
| mistralai | Mistral-7B-Instruct-v0.1 | text generation | English | 8,192 | 7.2B | small | apache-2.0 |
| mistralai | Mistral-7B-Instruct-v0.2 | text generation | English | 32,768 | 7.2B | small | apache-2.0 |
| mistralai | Mixtral-8x7B-Instruct-v0.1 | text generation | Multilingual | 32,768 | 46.7B | medium | apache-2.0 |
| moonshotai | Kimi-K2.5 | text generation | English | 256,000 | 1.0T | large | modified-mit |
| moonshotai | Kimi-K2.6 | text generation | English | 262,144 | 1.0T | large | modified-mit |
| Qwen | Qwen3-235B-A22B | text generation | English | 32,768 | 235.1B | large | apache-2.0 |
| zai-org | GLM-5.1-FP8 | text generation | English | 202,752 | 753.9B | large | mit |
| anthropic | claude-3-7-sonnet-latest | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-haiku-4-5 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-opus-4-6 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-sonnet-4-20250514 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| google | gemini-2.0-flash | text generation | Multilingual | 1,048,576 | 0 | large | google |
| google | gemini-2.5-flash | text generation | Multilingual | 1,048,576 | 0 | large | google |
| google | gemini-2.5-flash-lite | text generation | Multilingual | 1,048,576 | 0 | large | google |
| groq | gpt-oss-120b | text generation | English | 131,072 | 117.0B | large | groq |
| groq | kimi-k2-instruct | text generation | English | 131,072 | 1.0T | large | groq |
| groq | llama-3.3-70b-versatile | text generation | Multilingual | 131,072 | 70.6B | large | llama3.3 |
| groq | llama-4-maverick-17b-128e-instruct | text generation | Multilingual | 1,000,000 | 400.0B | large | llama4 |
| groq | llama-4-scout-17b-16e-instruct | text generation | Multilingual | 128,000 | 109.0B | large | llama4 |
| openai | gpt-3.5-turbo | text generation | Multilingual | 4,096 | 0 | large | openai |
| openai | gpt-4 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0125-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0314 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0613 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-1106-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-32k-0314 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-turbo-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4.1 | text generation | Multilingual | 1,047,576 | 0 | large | openai |
| openai | gpt-4.1-mini | text generation | Multilingual | 1,047,576 | 0 | large | openai |
| openai | gpt-4o | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4o-mini | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-5 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5-mini | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5.1 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5.2 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | o1-mini | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | o1-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | o3-mini | text generation | Multilingual | 200,000 | 0 | large | openai |
| xai-org | grok-2 | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-2-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3 | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-beta | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast-beta | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-mini | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-mini-fast | text generation | Multilingual | 131,072 | 0 | large | xai |
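When choosing from a catalog like the one above, a simple filter on context length and model tier narrows the field quickly. A sketch (the entries below are a hand-copied subset of the table; nothing here calls a live API):

```python
# A hand-copied subset of the catalog above: (model, context_length, tier)
CATALOG = [
    ("meta-llama/Meta-Llama-3.1-8B-Instruct", 131_072, "small"),
    ("mistralai/Mistral-7B-Instruct-v0.2", 32_768, "small"),
    ("meta-llama/Llama-3.3-70B-Instruct", 99_000, "large"),
    ("openai/gpt-4o-mini", 128_000, "large"),
]

def pick(min_context, tier=None):
    """Return model names whose context window and tier meet the constraints."""
    return [
        name for name, ctx, t in CATALOG
        if ctx >= min_context and (tier is None or t == tier)
    ]

print(pick(100_000))                  # every model with at least 100K context
print(pick(100_000, tier="small"))    # the small-tier subset of those
```

The same pattern extends naturally to filtering on license or language support once those columns are loaded into the catalog tuples.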
GPT-4o is a natively multimodal model that processes text, images, and audio jointly, while GPT-4 is text-only (with separate vision capabilities). GPT-4o is also faster and cheaper than GPT-4, making it the recommended successor for most applications.
"4o mini" refers to the small, fast variant of GPT-4o (the "o" stands for "omni" indicating multimodal capability). It is ChatGPT's default model for everyday tasks where speed and cost efficiency are prioritized over maximum reasoning depth.
Yes, GPT-4o mini is the default free model in ChatGPT. It is also available through the API at $0.15 per million input tokens, making it one of OpenAI's most affordable options.
Among OpenAI's mini models, GPT-4.1 mini currently offers the strongest overall performance, ahead of GPT-4o mini and GPT-5 mini, but the best choice depends on your task: GPT-4.1 mini leads on structured output, while GPT-5 mini is stronger on reasoning.
GPT-4o mini is priced at $0.15 per million input tokens and $0.60 per million output tokens through the API. Infrastructure providers offer access with additional benefits like co-located inference for lower latency.