Hermes 2 Pro Mistral 7B

Outstanding accuracy in function calling and JSON structured outputs.

Choose from hundreds of open-source LLMs in our model directory.
About

With 7 billion parameters, Hermes 2 Pro Mistral 7B, developed by Nous Research, is a top-tier language model. It shines in benchmarks like GPT4All and BigBench, showcasing significant improvements in task performance. Its function calling precision and JSON mode capabilities are particularly noteworthy.

License: apache-2.0
Context window: 32,768 tokens

Use cases for Hermes 2 Pro Mistral 7B

  1. Language Translation: Quickly translates and understands multiple languages, making it great for global communication.
  2. Sentiment Analysis: Analyzes social media posts, customer reviews, and other public feedback to gauge public sentiment towards a product or service.
  3. Real-time Data Analysis: Provides immediate insights by analyzing live data, useful in areas like stock trading and live event monitoring.
Quality
Arena Elo: 1,074
MMLU: N/A
MT Bench: N/A

Hermes 2 Pro Mistral 7B scores 1,074 on the Chatbot Arena Leaderboard, ranking above Gemma 2B IT, which has a score of 989.

Chatbot Arena Elo scores for comparison:

  • Llama 2 Chat 70B: 1,093
  • Nous Hermes 2 Mixtral 8x7B: 1,084
  • Hermes 2 Pro Mistral 7B: 1,074
  • Mistral 7B Instruct v0.2: 1,072
  • GPT-3.5 Turbo-1106: 1,068

Performance
Throughput (output tokens per second): N/A
Latency (seconds to first token chunk received): N/A
Total Response Time (seconds to output 100 tokens): N/A

Performance metrics for this model were not available at the time of evaluation.

What's Twitter saying?

  • OpenAI API-Compatible Function Calling: Kyle Mistele discusses progress towards OpenAI API-compatible function calling in vllm_project using Mistral 7B instruct v0.3 and NousResearch Hermes 2 Pro models, with contributions from Hugging Face. (Source: @0xblacklight)
  • Advocating for OutlinesOSS: Rémi recommends OutlinesOSS for structured JSON output, citing Andrej Baranovskij's endorsement of Hermes-2-Pro-Mistral-7B LLM and other tools like UnstructuredIO and LangChainAI Pydantic parser. (Source: @remilouf)
  • Extracting JSON from Unstructured Text: Aman Arora explores methods for extracting JSON from unstructured text using open-source models and Pydantic schema, evaluating tools like Guidance, Instructor, DSPy, Guardrails-AI, and jsonformer. He also considers the Hermes-2-Pro-Mistral-7B.Q8_0.gguf model for this task. (Source: @amaarora)

Explore Our LLM Library

Discover the power and diversity of large language models available with Telnyx. Explore the options below to find the perfect model for your project.

TRY IT OUT

Chat with an LLM

Powered by our own GPU infrastructure, select a large language model, add a prompt, and chat away. For unlimited chats, sign up for a free account on our Mission Control Portal.

HOW IT WORKS
Sign up to get started with the Telnyx model library.
RESOURCES

Get started

Check out these tools to help you get started.

  • Test in the portal

    Easily browse and select your preferred model in the AI Playground.

  • Explore the docs

    Don’t wait to scale; start today with our public API endpoints.

  • Stay up to date

    Keep an eye on our AI changelog so you don't miss a beat.

FAQs

What is Hermes 2 Pro - Mistral 7B?

Hermes 2 Pro - Mistral 7B is an advanced version of the Nous Hermes 2 model, enhanced and retrained with an updated OpenHermes 2.5 Dataset and a new Function Calling and JSON Mode dataset. This model is designed for general task and conversation capabilities, excelling in Function Calling, JSON Structured Outputs, and scoring high in evaluations for these features.

How does Hermes 2 Pro improve on function calling and JSON structured outputs?

Hermes 2 Pro incorporates a special system prompt and multi-turn function calling structure, alongside a dedicated ChatML role, to make function calls and JSON structured outputs reliable and easy to parse. The model scored 90% on function calling evaluation and 84% on structured JSON output evaluation, indicating significant improvements in these areas.
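In the Hermes function-calling convention, the model wraps each call in `<tool_call>` … `</tool_call>` tags containing a JSON object with the function name and arguments. A minimal parser for that output might look like the sketch below (the tag names follow the Hermes Function Calling repository; the `get_weather` function and the sample completion are illustrative, not actual model output):

```python
import json
import re

# Non-greedy match for each <tool_call>{...}</tool_call> block.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(completion: str) -> list[dict]:
    """Pull every <tool_call> JSON payload out of a model completion."""
    calls = []
    for match in TOOL_CALL_RE.finditer(completion):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # skip malformed payloads rather than crashing
    return calls

# Illustrative completion in the Hermes tool-call style:
output = (
    "Sure, let me check the weather.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Chicago"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(output)
print(calls)
```

In practice, you would dispatch each parsed call to the matching Python function and feed the result back to the model in a follow-up turn.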

Where can I learn more about the function calling system for Hermes 2 Pro?

For detailed information on the function calling system utilized by Hermes 2 Pro, visit the GitHub repository at NousResearch Hermes Function Calling.

Can I use Hermes 2 Pro for text generation in languages other than English?

Hermes 2 Pro is primarily designed and optimized for English. While it may have capabilities in other languages, its performance is best with English text generation and processing tasks.

How do I format prompts for Hermes 2 Pro?

Hermes 2 Pro uses ChatML as the prompt format, which allows for a structured system for engaging in multi-turn chat dialogue. For general prompts, use the provided template format, and for function calling, follow the specific system prompts and structures provided in the documentation. More details can be found on the model's page.
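ChatML wraps each turn in `<|im_start|>role` … `<|im_end|>` markers, with the final assistant turn left open so the model continues from it. A minimal sketch of rendering a message list by hand (the system prompt text is illustrative; in production you would typically let the tokenizer's chat template do this):

```python
def build_chatml_prompt(messages: list[dict]) -> str:
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)
```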

Where can I find example inference code for Hermes 2 Pro?

Example code for using Hermes 2 Pro with Hugging Face Transformers library is available on the model's Hugging Face page. Additionally, for function calling, refer to the Hermes Function Calling GitHub repository for comprehensive guides and examples.

What are the system requirements to use Hermes 2 Pro in inference?

To use Hermes 2 Pro in inference, especially in 4bit mode, it requires around 5GB of VRAM. Ensure your setup meets this requirement for optimal performance.
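The ~5GB figure follows from simple arithmetic: 7 billion parameters at 4 bits (0.5 bytes) each is about 3.5GB of weights, plus overhead for the KV cache, activations, and framework buffers. A rough back-of-the-envelope check (the 1.5GB overhead is an assumed ballpark; actual usage varies with context length and runtime):

```python
def estimate_vram_gb(n_params_billion: float, bits_per_param: int,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weights at the given precision plus a flat overhead.

    The default overhead_gb is an assumption, not a measured value.
    """
    weight_gb = n_params_billion * bits_per_param / 8  # 1e9 params * bits/8 bytes
    return weight_gb + overhead_gb

# 7B parameters in 4-bit: ~3.5GB of weights, ~5GB total with overhead.
print(round(estimate_vram_gb(7, 4), 1))  # -> 5.0
```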

Where can I download quantized versions of Hermes 2 Pro?

Quantized versions of Hermes 2 Pro can be found on Hugging Face at NousResearch Hermes-2-Pro-Mistral-7B-GGUF.

How can I cite Hermes 2 Pro - Mistral 7B in my work?

To cite Hermes 2 Pro - Mistral 7B, use the following BibTeX entry:

@misc{Hermes-2-Pro-Mistral-7B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B},
  title={Hermes-2-Pro-Mistral-7B},
  author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}