Trained on 6 trillion tokens, three times the data volume of its 2B sibling, the 7B Gemma model switches from multi-query to standard multi-head attention and outperforms Llama 2 13B on MMLU despite being roughly half the size. Google optimized each model in the Gemma family with distinct architectural decisions rather than simply scaling a single design up or down.
Gemma 7B IT scores 64.3% on MMLU (5-shot), outperforming Llama 2 13B Chat (54.8%) despite being nearly half the size. Trained on 6 trillion tokens using Google's proprietary data pipelines, it posts the highest MMLU score among the 7B-class models listed below, though it trails Llama 3 8B Instruct (67.4%) by about 3 points.
Running Gemma 7B IT with Telnyx Inference costs $0.0002 per 1,000 tokens. Analyzing 1,000,000 customer chats at 1,000 tokens each would therefore cost $200, the same price as Mistral 7B Instruct and Llama 3 8B Instruct listed below.
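The arithmetic above is easy to generalize. A minimal sketch, assuming the $0.0002-per-1,000-tokens rate quoted here stays flat regardless of volume:

```python
# Cost arithmetic for the example above: $0.0002 per 1,000 tokens.
PRICE_PER_1K_TOKENS = 0.0002  # USD, Gemma 7B IT on Telnyx Inference

def inference_cost(num_requests: int, tokens_per_request: int) -> float:
    """Return the total cost in USD for a batch of requests."""
    total_tokens = num_requests * tokens_per_request
    return total_tokens / 1_000 * PRICE_PER_1K_TOKENS

# 1,000,000 chats at 1,000 tokens each: about $200.
print(inference_cost(1_000_000, 1_000))
```

Swap in a different per-1K rate to compare models at your own traffic volumes.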
Discover the power and diversity of large language models available with Telnyx. Explore the options below to find the perfect model for your project.
| Organization | Model Name | Tasks | Languages Supported | Context Length | Parameters | Model Tier | License |
|---|---|---|---|---|---|---|---|
| deepseek-ai | DeepSeek-R1-Distill-Qwen-14B | text generation | English | 43,000 | 14.8B | medium | deepseek |
| fixie-ai | ultravox-v0_4_1-llama-3_1-8b | audio text-to-text | Multilingual | 8,000 | 8.7B | small | mit |
| google | gemma-2b-it | text generation | English | 8,192 | 2.5B | small | gemma |
| google | gemma-7b-it | text generation | English | 8,192 | 8.5B | small | gemma |
| meta-llama | Llama-3.3-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.3 |
| meta-llama | Llama-Guard-3-1B | safety classification | Multilingual | 128,000 | 1.5B | small | llama3.3 |
| meta-llama | Meta-Llama-3.1-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.1 |
| meta-llama | Meta-Llama-3.1-8B-Instruct | text generation | Multilingual | 131,072 | 8.0B | small | llama3.1 |
| minimaxai | MiniMax-M2.5 | text generation | English | 2,000,000 | 0 | large | minimaxai |
| mistralai | Mistral-7B-Instruct-v0.1 | text generation | English | 8,192 | 7.2B | small | apache-2.0 |
| mistralai | Mistral-7B-Instruct-v0.2 | text generation | English | 32,768 | 7.2B | small | apache-2.0 |
| mistralai | Mixtral-8x7B-Instruct-v0.1 | text generation | Multilingual | 32,768 | 46.7B | medium | apache-2.0 |
| moonshotai | Kimi-K2.5 | text generation | English | 256,000 | 1.0T | large | modified-mit |
| Qwen | Qwen3-235B-A22B | text generation | English | 32,768 | 235.1B | large | apache-2.0 |
| zai-org | GLM-5.1-FP8 | text generation | English | 202,752 | 753.9B | large | mit |
| anthropic | claude-3-7-sonnet-latest | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-haiku-4-5 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-opus-4-6 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| anthropic | claude-sonnet-4-20250514 | text generation | Multilingual | 200,000 | 0 | large | anthropic |
| google | gemini-2.0-flash | text generation | Multilingual | 1,048,576 | 0 | large | |
| google | gemini-2.5-flash | text generation | Multilingual | 1,048,576 | 0 | large | |
| google | gemini-2.5-flash-lite | text generation | Multilingual | 1,048,576 | 0 | large | |
| groq | gpt-oss-120b | text generation | English | 131,072 | 117.0B | large | groq |
| groq | kimi-k2-instruct | text generation | English | 131,072 | 1.0T | large | groq |
| groq | llama-3.3-70b-versatile | text generation | Multilingual | 131,072 | 70.6B | large | llama3.3 |
| groq | llama-4-maverick-17b-128e-instruct | text generation | Multilingual | 1,000,000 | 400.0B | large | llama4 |
| groq | llama-4-scout-17b-16e-instruct | text generation | Multilingual | 128,000 | 109.0B | large | llama4 |
| openai | gpt-3.5-turbo | text generation | Multilingual | 4,096 | 0 | large | openai |
| openai | gpt-4 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0125-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0314 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-0613 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-1106-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-32k-0314 | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4-turbo-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4.1 | text generation | Multilingual | 1,047,576 | 0 | large | openai |
| openai | gpt-4.1-mini | text generation | Multilingual | 1,047,576 | 0 | large | openai |
| openai | gpt-4o | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-4o-mini | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | gpt-5 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5-mini | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5.1 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | gpt-5.2 | text generation | Multilingual | 400,000 | 0 | large | openai |
| openai | o1-mini | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | o1-preview | text generation | Multilingual | 128,000 | 0 | large | openai |
| openai | o3-mini | text generation | Multilingual | 200,000 | 0 | large | openai |
| xai-org | grok-2 | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-2-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3 | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-beta | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast-beta | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-fast-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-latest | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-mini | text generation | Multilingual | 131,072 | 0 | large | xai |
| xai-org | grok-3-mini-fast | text generation | Multilingual | 131,072 | 0 | large | xai |
Our chat playground is powered by our own GPU infrastructure: select a large language model, add a prompt, and chat away. For unlimited chats, sign up for a free account on our Mission Control Portal here.
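Outside the playground, models are typically called over an HTTP chat-completions API. The sketch below only assembles a request body; the endpoint URL, the `google/gemma-7b-it` model identifier, and the OpenAI-style message schema are assumptions for illustration — check the Telnyx API documentation for the authoritative request shape and authentication.

```python
import json

# Assumed endpoint for illustration only -- verify against the Telnyx docs.
ENDPOINT = "https://api.telnyx.com/v2/ai/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("google/gemma-7b-it", "Summarize this chat transcript.")
print(json.dumps(body, indent=2))
```

From here, POST the serialized body to the endpoint with your API key in the `Authorization` header.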
Check out our tools to help get you started.
Gemma is a family of lightweight, state-of-the-art open models developed by Google, designed for various text generation tasks like question answering, summarization, and reasoning. They are text-to-text, decoder-only large language models available in English. For more information, visit the Gemma model page on Hugging Face.
The Gemma 2B model is designed for efficiency and versatility in text generation tasks, trained on a context length of 8192 tokens. It offers open weights, pre-trained variants, and instruction-tuned variants, making it suitable for deployment in environments with limited resources. For detailed features, visit the Gemma 2B model page.
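When deploying against that 8,192-token context window, it helps to sanity-check prompt length before sending a request. A minimal sketch, assuming a crude 4-characters-per-token heuristic for English text (not Gemma's real tokenizer):

```python
# Rough check that a prompt fits in Gemma 2B's 8,192-token context window.
# The 4-chars-per-token ratio is a heuristic assumption, not Gemma's actual
# tokenizer; use the real tokenizer for exact counts.
CONTEXT_LENGTH = 8_192
CHARS_PER_TOKEN = 4  # heuristic for English text

def fits_in_context(text: str, reserved_for_output: int = 512) -> bool:
    """Estimate whether `text` plus the model's reply fits in the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_LENGTH

print(fits_in_context("hello " * 100))     # short prompt: fits
print(fits_in_context("hello " * 10_000))  # ~60k characters: does not fit
```

For production use, tokenize with the model's own tokenizer instead of estimating.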
Yes, you can fine-tune Gemma 2B on your dataset. Fine-tuning scripts and notebooks are available under the examples directory of the google/gemma-7b repository. Adapt these resources for Gemma 2B by changing the model-id to google/gemma-2b. For the original resources, visit the google/gemma-7b repository.
Gemma models were trained on a dataset totaling 6 trillion tokens, comprising web documents, code, and mathematical text to ensure a broad understanding of language, logic, and information. This diverse dataset enables Gemma models to perform a wide range of text generation tasks effectively.
The Gemma 2B model, while state-of-the-art, has limitations related to the quality and diversity of its training data, complexity of tasks, language ambiguity, factual accuracy, and ethical considerations. Users should be aware of these limitations and consider them when using the model for specific applications.
While the Gemma model project is developed by Google, the community can contribute by providing feedback, reporting issues, and sharing insights on the model's performance and applications through the Hugging Face community platform. Engage with the Gemma model community here.
For in-depth technical documentation, usage examples, and further resources on the Gemma models, visit the Gemma model page on Hugging Face. Additionally, you can explore the Gemma Technical Report, the Responsible Generative AI Toolkit, and the Gemma models on Vertex Model Garden for more detailed information.
Google has conducted structured evaluations, internal red-teaming, and implemented CSAM and sensitive data filtering to ensure the Gemma models meet internal policies for ethics and safety. Additionally, Google provides guidelines for responsible use and encourages developers to implement content safety safeguards. For more information, refer to the Responsible Generative AI Toolkit.