Dolphin 2.5 Mixtral 8X7B

An open-source mixture-of-experts model built on Mixtral 8x7b, designed for general-purpose text generation, coding, and open-ended conversation.

About

Eric Hartford's Cognitive Computations team trained this model in 3 days on 4x A100 GPUs using QLoRA, fine-tuning Mistral's Mixtral 8x7B mixture-of-experts architecture with ~46.7B total parameters and ~12.9B active per forward pass. The training data was systematically filtered to remove alignment refusals, producing a deliberately uncensored model for code generation, creative writing, and open-ended conversation.
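The top-2 routing at the heart of this architecture is why only ~12.9B of the ~46.7B parameters are active per forward pass: each token is sent to just 2 of the 8 experts. The following toy scalar sketch illustrates the idea; it is not Mixtral's actual implementation, and the numbers are made up for demonstration:

```python
import math

def top2_route(gate_logits, expert_outputs):
    """Toy mixture-of-experts routing: pick the two highest-scoring
    experts and combine their outputs, weighted by a softmax over
    only the two selected gate logits (Mixtral-style top-2 gating)."""
    # Indices of the two largest gate logits
    top2 = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:2]
    # Softmax restricted to the selected experts
    exps = [math.exp(gate_logits[i]) for i in top2]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the chosen experts' outputs
    return sum(w * expert_outputs[i] for w, i in zip(weights, top2))

# 8 experts, as in Mixtral 8x7B; scalar "outputs" for illustration
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
outputs = [float(i) for i in range(8)]
print(top2_route(logits, outputs))  # mostly expert 1, some expert 4
```

Because the other six experts are never evaluated for this token, compute scales with the two selected experts rather than with the full parameter count.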

License: apache-2.0
Context window: 32,768 tokens (32K)

Use cases for Dolphin 2.5 Mixtral 8X7B

  1. Virtual Assistants: Create advanced virtual assistants for personal and business use, offering accurate and context-aware responses.
  2. Fraud Detection: Implement fraud detection systems in banking and e-commerce with the model’s rapid data processing capabilities.
  3. Medical Diagnostics: Assist healthcare professionals with AI diagnostics, analyzing patient data to suggest potential diagnoses.

Quality

Arena Elo: 1063
MMLU: N/A
MT Bench: N/A

With an Arena Elo score of 1,063, Dolphin 2.5 Mixtral 8X7B outperforms Gemma 2B IT's score of 989 on the LLM Leaderboard.

GPT-3.5 Turbo-1106: 1068
Llama 2 Chat 13B: 1063
Dolphin 2.5 Mixtral 8X7B: 1063
Zephyr 7B beta: 1053
Code Llama 70B Instruct: 1042
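Elo gaps like these translate into head-to-head win expectancies through the standard logistic Elo formula, where a rating difference of d points gives an expected score of 1 / (1 + 10^(d/400)). A quick sketch:

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score (win probability, counting draws as half)
    of A against B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Dolphin 2.5 Mixtral (1063) vs. GPT-3.5 Turbo-1106 (1068):
# a 5-point gap is nearly a coin flip.
print(round(elo_expected_score(1063, 1068), 3))
```

In other words, the 5-point gap to GPT-3.5 Turbo-1106 implies roughly even odds in a head-to-head comparison, while the 21-point gap to Code Llama 70B Instruct is a modest but real edge.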

What's Twitter saying?

  • New Uncensored Model by @MistralAI: Dolphin 2.5 Mixtral 8x7b, created by @erhartford, is based on Mixtral, the mixture of experts model by @MistralAI. It's strong at coding tasks and trained on diverse datasets. (Source: @jmorgan)
  • Testing Dolphin-2.5-mixtral-8x7b Uncensored: After a week of testing, Dolphin 2.5 Mixtral 8x7b has shown itself to be a powerful and creative LLM. It uses a SuperPrompt to ensure it meets all user requests. (Source: @BrianRoemmele)
  • Review of Dolphin 2.5 Mixtral Uncensored Model: What happens when you remove all censorship from Mixtral 8x7b? It answers any question you ask. (Sources: @erhartford, @MatthewBerman)

Explore Our LLM Library

Discover the power and diversity of large language models available with Telnyx. Explore the options below to find the perfect model for your project.

Organization: deepseek-ai
Model Name: DeepSeek-R1-Distill-Qwen-14B
Tasks: text generation
Languages Supported: English
Context Length: 43,000
Parameters: 14.8B
Model Tier: medium
License: deepseek

TRY IT OUT

Chat with an LLM

Powered by our own GPU infrastructure, select a large language model, add a prompt, and chat away. For unlimited chats, sign up for a free account on our Mission Control Portal.
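Programmatic access to a hosted chat model typically follows the OpenAI-style chat-completions shape. The sketch below only assembles the request payload; the endpoint URL and model identifier here are assumptions for illustration, so check the Telnyx inference docs for the exact values before sending anything:

```python
import json

# Hypothetical endpoint and model id, shown for illustration only;
# consult the provider's API reference for the real values.
API_URL = "https://api.telnyx.com/v2/ai/chat/completions"

def build_chat_request(prompt, model="dolphin-2.5-mixtral-8x7b",
                       max_tokens=256):
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are Dolphin, a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Write a haiku about dolphins.")
print(json.dumps(payload, indent=2))
# To send, POST the payload with your API key, e.g.:
# requests.post(API_URL, json=payload,
#               headers={"Authorization": "Bearer <YOUR_API_KEY>"})
```

Because Dolphin is uncensored, the system message is where you would set whatever behavioral constraints your deployment needs.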

HOW IT WORKS

Selecting LLMs for Voice AI

RESOURCES

Get started

Check out our helpful tools to help get you started.

  • Test in the portal

    Easily browse and select your preferred model in the AI Playground.

  • Explore the docs

    Don’t wait to scale, start today with our public API endpoints.

  • Stay up to date

    Keep an eye on our AI changelog so you don't miss a beat.

Sign up and start building

FAQs

What is Dolphin Mixtral?

Dolphin 2.5 Mixtral 8x7B is an open-source model created by Eric Hartford, built on Mistral AI's Mixtral 8x7B mixture-of-experts architecture. It was trained using QLoRA on the OpenHermes dataset for general-purpose text generation, coding, and conversational tasks.
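QLoRA makes this kind of fine-tune cheap by freezing the (4-bit quantized) base weights and training only small low-rank adapter matrices. A minimal pure-Python sketch of the underlying LoRA update, assuming the usual W + (alpha / r) * B @ A parameterization; real implementations operate on quantized tensors, not Python lists:

```python
def matmul(a, b):
    """Plain-Python matrix multiply over lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_apply(W, A, B, alpha=16, r=2):
    """Effective weight under a LoRA adapter: W + (alpha / r) * B @ A.
    In QLoRA, W stays frozen (and 4-bit quantized); only the small
    matrices A (r x d_in) and B (d_out x r) receive gradients."""
    scale = alpha / r
    delta = matmul(B, A)  # d_out x d_in, but rank at most r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]
```

Because A and B are tiny relative to W, the trainable parameter count drops by orders of magnitude, which is how a 3-day run on 4x A100s could adapt a ~46.7B-parameter base model.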

Is Dolphin Mixtral censored?

Dolphin 2.5 Mixtral is designed as an uncensored model, meaning it does not include built-in content filtering or refusal mechanisms. This makes it highly compliant with user instructions but also requires responsible deployment practices since it will not refuse harmful requests on its own.

Is Dolphin AI safe?

Dolphin models do not have built-in safety guardrails or moderation layers. Users who deploy Dolphin models are responsible for implementing their own safety measures and content filtering through system prompts, output filters, or external moderation tools.
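As one example of such a measure, a minimal post-generation output filter might look like the sketch below. The deny-list patterns are placeholders invented for illustration; a production deployment would rely on a dedicated moderation model or service rather than keyword matching:

```python
import re

# Placeholder deny-list for demonstration only; real deployments
# should use a proper moderation model or API instead.
BLOCKED_PATTERNS = [r"\bcredit card number\b", r"\bssn\b"]

def moderate_output(text):
    """Minimal post-generation filter: withhold the response if any
    deny-listed pattern appears, otherwise pass it through."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return "[response withheld by content filter]"
    return text

print(moderate_output("Here is a fun fact about dolphins."))
print(moderate_output("Sure, the SSN you asked about is ..."))
```

Layering a filter like this (plus a restrictive system prompt) on top of an uncensored model lets you keep its instruction-following flexibility while enforcing your own policy at the boundary.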

What is Dolphin Mixtral good for?

Dolphin 2.5 Mixtral excels at general text generation, creative writing, code generation, and open-ended conversation. Its mixture-of-experts architecture provides strong performance comparable to GPT-3.5 Turbo on many benchmarks while being fully open-source and self-hostable.

Dolphin 2.5 Mixtral 8X7B—Chat with this LLM