Trained from scratch on a 2-trillion-token corpus of 87% code (spanning 87 programming languages) and 13% natural language, this model punches well above its weight class. At just 6.7B parameters, it matches CodeLlama-34B, a model five times its size, scoring 66.1% pass@1 on HumanEval.
DeepSeek Coder 6.7B Instruct, like GPT-3.5 Turbo-0301, isn't ranked on the LLM Leaderboard.
Discover the power and diversity of large language models available with Telnyx. Explore the options below to find the perfect model for your project.
| Organization | Model Name | Tasks | Languages Supported | Context Length | Parameters | Model Tier | License |
|---|---|---|---|---|---|---|---|
| deepseek-ai | DeepSeek-R1-Distill-Qwen-14B | text generation | English | 43,000 | 14.8B | medium | deepseek |
| fixie-ai | ultravox-v0_4_1-llama-3_1-8b | audio text-to-text | Multilingual | 8,000 | 8.7B | small | mit |
| google | gemma-2b-it | text generation | English | 8,192 | 2.5B | small | gemma |
| google | gemma-7b-it | text generation | English | 8,192 | 8.5B | small | gemma |
| meta-llama | Llama-3.3-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.3 |
| meta-llama | Llama-Guard-3-1B | safety classification | Multilingual | 128,000 | 1.5B | small | llama3.3 |
| meta-llama | Meta-Llama-3.1-70B-Instruct | text generation | Multilingual | 99,000 | 70.6B | large | llama3.1 |
| meta-llama | Meta-Llama-3.1-8B-Instruct | text generation | Multilingual | 131,072 | 8.0B | small | llama3.1 |
| minimaxai | MiniMax-M2.5 | text generation | English | 2,000,000 | N/A | large | minimaxai |
| mistralai | Mistral-7B-Instruct-v0.1 | text generation | English | 8,192 | 7.2B | small | apache-2.0 |
| mistralai | Mistral-7B-Instruct-v0.2 | text generation | English | 32,768 | 7.2B | small | apache-2.0 |
| mistralai | Mixtral-8x7B-Instruct-v0.1 | text generation | Multilingual | 32,768 | 46.7B | medium | apache-2.0 |
| moonshotai | Kimi-K2.5 | text generation | English | 256,000 | 1.0T | large | modified-mit |
| Qwen | Qwen3-235B-A22B | text generation | English | 32,768 | 235.1B | large | apache-2.0 |
| zai-org | GLM-5 | text generation | English | 202,752 | 753.9B | large | mit |
| anthropic | claude-3-7-sonnet-latest | text generation | Multilingual | 200,000 | N/A | large | anthropic |
| anthropic | claude-haiku-4-5 | text generation | Multilingual | 200,000 | N/A | large | anthropic |
| anthropic | claude-opus-4-6 | text generation | Multilingual | 200,000 | N/A | large | anthropic |
| anthropic | claude-sonnet-4-20250514 | text generation | Multilingual | 200,000 | N/A | large | anthropic |
| google | gemini-2.0-flash | text generation | Multilingual | 1,048,576 | N/A | large | |
| google | gemini-2.5-flash | text generation | Multilingual | 1,048,576 | N/A | large | |
| google | gemini-2.5-flash-lite | text generation | Multilingual | 1,048,576 | N/A | large | |
| groq | gpt-oss-120b | text generation | English | 131,072 | 117.0B | large | groq |
| groq | kimi-k2-instruct | text generation | English | 131,072 | 1.0T | large | groq |
| groq | llama-3.3-70b-versatile | text generation | Multilingual | 131,072 | 70.6B | large | llama3.3 |
| groq | llama-4-maverick-17b-128e-instruct | text generation | Multilingual | 1,000,000 | 400.0B | large | llama4 |
| groq | llama-4-scout-17b-16e-instruct | text generation | Multilingual | 128,000 | 109.0B | large | llama4 |
| openai | gpt-3.5-turbo | text generation | Multilingual | 4,096 | N/A | large | openai |
| openai | gpt-4 | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-0125-preview | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-0314 | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-0613 | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-1106-preview | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-32k-0314 | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4-turbo-preview | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4.1 | text generation | Multilingual | 1,047,576 | N/A | large | openai |
| openai | gpt-4.1-mini | text generation | Multilingual | 1,047,576 | N/A | large | openai |
| openai | gpt-4o | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-4o-mini | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | gpt-5 | text generation | Multilingual | 400,000 | N/A | large | openai |
| openai | gpt-5-mini | text generation | Multilingual | 400,000 | N/A | large | openai |
| openai | gpt-5.1 | text generation | Multilingual | 400,000 | N/A | large | openai |
| openai | gpt-5.2 | text generation | Multilingual | 400,000 | N/A | large | openai |
| openai | o1-mini | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | o1-preview | text generation | Multilingual | 128,000 | N/A | large | openai |
| openai | o3-mini | text generation | Multilingual | 200,000 | N/A | large | openai |
| xai-org | grok-2 | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-2-latest | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3 | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-beta | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-fast | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-fast-beta | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-fast-latest | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-latest | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-mini | text generation | Multilingual | 131,072 | N/A | large | xai |
| xai-org | grok-3-mini-fast | text generation | Multilingual | 131,072 | N/A | large | xai |
Our playground runs on our own GPU infrastructure: select a large language model, add a prompt, and chat away. For unlimited chats, sign up for a free account on our Mission Control Portal here.
Check out our tools to help you get started.
DeepSeek Coder is a state-of-the-art code language model developed by DeepSeek AI, designed for high-performance code completion and infilling tasks. It is trained on 2T tokens (87% code across many programming languages, 13% natural language in English and Chinese) and is available in multiple sizes ranging from 1.3B to 33B parameters.
To use DeepSeek Coder, you can integrate it into your project using the Hugging Face Transformers library. First, install the library, then load the model and tokenizer with the provided model name "deepseek-ai/deepseek-coder-6.7b-instruct". You can then input your code requirements, and the model will assist with code completion and infilling tasks. For detailed usage instructions, refer to the model's homepage.
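A minimal sketch of that workflow is below. The model name is taken from the model card above; the dtype, greedy decoding, and `max_new_tokens` settings are illustrative choices rather than requirements, and the imports are done lazily so the sketch can be read without the libraries installed.

```python
# Sketch: loading DeepSeek Coder 6.7B Instruct with Hugging Face Transformers.
# Generation settings here are illustrative, not prescribed by the model card.

MODEL_NAME = "deepseek-ai/deepseek-coder-6.7b-instruct"

def load_model():
    # Lazy imports keep the sketch importable without torch/transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model

def generate(tokenizer, model, instruction, max_new_tokens=256):
    # The instruct variant ships a chat template; apply_chat_template builds
    # the expected prompt format from plain role/content messages.
    messages = [{"role": "user", "content": instruction}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
```

Typical usage would be `tokenizer, model = load_model()` followed by `generate(tokenizer, model, "Write a quicksort function in Python")`; running the 6.7B model in bf16 generally requires a GPU with roughly 16 GB of memory.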
Yes, DeepSeek Coder supports commercial use under its Model License. The code repository is licensed under the MIT License, ensuring flexibility and freedom for commercial and private projects alike. For more details, review the LICENSE-MODEL.
Yes, DeepSeek Coder is trained on a dataset that includes both English and Chinese natural languages, making it suitable for code completion tasks in projects that involve these languages. It's designed to understand and generate code based on the context provided in either language.
DeepSeek Coder achieves state-of-the-art performance among publicly available code models, outperforming others on several benchmarks, including HumanEval, MultiPL-E, MBPP, DS-1000, and APPS. Its training on a large corpus of 2T tokens with a significant percentage of code ensures superior model performance for a wide range of programming languages.
DeepSeek Coder is available in various sizes to suit different project requirements and computational capabilities, including 1.3B, 5.7B, 6.7B, and 33B parameter models. This flexibility allows users to select the most suitable model size for their specific needs.
If you encounter any issues or have questions regarding DeepSeek Coder, you can raise an issue through the Hugging Face repository or contact the DeepSeek team directly at [email protected]. The team is dedicated to providing support and ensuring users can effectively utilize the model for their coding projects.
DeepSeek Coder 6.7B is a code-focused language model trained from scratch on 2 trillion tokens, with a composition of 87% code and 13% natural language. The instruct variant is fine-tuned for instruction-following tasks like code generation, completion, and refactoring across multiple programming languages.
Yes, DeepSeek Coder models are specifically designed for coding tasks. They support code generation from natural language descriptions, code completion, debugging, and refactoring across languages including Python, Java, C++, and JavaScript. The models are available through multiple deployment options including local inference and hosted APIs.
For coding tasks, the DeepSeek Coder series outperforms the general-purpose DeepSeek models. The 33B instruct variant offers the strongest coding performance in the original series, while the newer DeepSeek Coder V2 models provide further improvements. The 6.7B variant offers a good balance between performance and resource efficiency for smaller deployments.
DeepSeek Coder has several known limitations including occasional hallucination of function names or APIs that don't exist, weaker performance on less common programming languages, and a 16K token context window that limits handling of very large codebases. These practical constraints are important to consider for production use cases.
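As a rough sanity check against that 16K window, you can estimate token counts before building a prompt. The 4-characters-per-token ratio below is a common heuristic, not the model's exact tokenizer, and the `reserve_for_output` budget is an illustrative choice:

```python
# Rough check of whether a set of source files fits DeepSeek Coder's
# 16K-token context window. Uses a ~4 characters-per-token heuristic;
# for exact counts, tokenize with the model's own tokenizer.
CONTEXT_WINDOW = 16_384
CHARS_PER_TOKEN = 4  # heuristic average for code, not exact

def estimated_tokens(text: str) -> int:
    # Ceiling division: round partial tokens up.
    return (len(text) + CHARS_PER_TOKEN - 1) // CHARS_PER_TOKEN

def fits_in_context(files: dict, reserve_for_output: int = 1024) -> bool:
    # Leave headroom for the model's generated output, not just the prompt.
    total = sum(estimated_tokens(src) for src in files.values())
    return total <= CONTEXT_WINDOW - reserve_for_output
```

With this heuristic, a 4,000-character file easily fits, while a 70,000-character codebase (roughly 17,500 estimated tokens) would need to be split or summarized before prompting.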
DeepSeek Coder is used for automated code generation, code completion, bug detection, and code explanation tasks. Development teams use it for accelerating prototyping and code automation workflows. Its compact 6.7B size makes it practical for local deployment where latency and data privacy matter.
Yes, DeepSeek Coder is open-source and released under a permissive license that allows both research and commercial use. The model weights are freely available on Hugging Face and can be run locally using frameworks like Ollama, vLLM, or llama.cpp.
DeepSeek Coder 6.7B is smaller and more specialized than GPT-4 or GPT-3.5 Turbo. On code-specific benchmarks, the larger DeepSeek Coder 33B matches GPT-3.5 Turbo on HumanEval. The tradeoff is between GPT's broader capabilities and DeepSeek's open-source accessibility with the ability to self-host and fine-tune.