Inference

ABOUT

Dedicated GPU infrastructure for fast inference

Many factors can influence how well AI models perform, including the hardware they run on. Top-tier model performance often demands substantial computational resources, creating a balancing act between cost efficiency and speed.

Our network of owned GPUs delivers rapid inference without excessive costs or extended timelines. Combined with Telnyx Storage, you can upload your data into buckets for instant summarization and automatic embedding, then use that data across proprietary and open-source models for the balance of control, cost efficiency, and speed your business needs to stay ahead.

Try it out

Chat with an LLM

Use custom data in proprietary and open-source models, or build your own on dedicated GPU infrastructure for fast inference at low costs.

Select a large language model, add a prompt, and chat away, all powered by our own GPU infrastructure. For unlimited chats, sign up for a free account on our Mission Control Portal.

FEATURES

Confidently integrate AI into your applications with dedicated infrastructure and distributed storage.

  • Instant embeddings

    Data in AI-enabled storage buckets can be vectorized in seconds to feed LLMs for fast, contextualized inference. See the sample request after this list.

  • Function calling

    Build smarter applications with function calling for open-source models. See the sketch after this list.

  • Autoscaling

    Count on our dedicated GPUs to handle high volumes of concurrent requests and scale automatically with your workload, ensuring optimal performance at all times.

  • JSON mode

    Ensure your inference output conforms to a regular expression or JSON schema for specific applications. See the constrained request sketched after this list.

  • Model flexibility

    Choose the best model for your use case. We currently support models from OpenAI, Meta, and MosaicML—with more on the way.

  • Low latency

    Go from data to inference in near-real time with the co-location of Telnyx GPUs and Storage.
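
Instant embeddings, for example, can be triggered with a single API call. The sketch below assumes an embeddings endpoint that accepts a storage bucket name; the path and the bucket_name field are illustrative, so check the Telnyx API reference for the exact request shape.

# Sketch: vectorize the documents in an AI-enabled storage bucket.
# The /v2/ai/embeddings path and "bucket_name" field are assumptions,
# not confirmed syntax; consult the Telnyx API reference.
curl -i -X POST \
  https://api.telnyx.com/v2/ai/embeddings \
  -H 'Authorization: Bearer YOUR_TELNYX_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "bucket_name": "my-knowledge-base"
  }'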
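
Function calling can be sketched as an OpenAI-style tools array on the chat completions endpoint shown under Resources. The tools field and its schema here are assumptions; verify the exact names against the Telnyx chat completions reference.

# Sketch: function calling with an open-source model.
# The "tools" array follows the OpenAI convention and is an assumption here.
curl -i -X POST \
  https://api.telnyx.com/v2/ai/chat/completions \
  -H 'Authorization: Bearer YOUR_TELNYX_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the weather in Chicago?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
          }
        }
      }
    ]
  }'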
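
JSON mode can similarly be sketched as a constrained request. The response_format field below follows the OpenAI convention and is an assumption; the Telnyx docs define the actual parameter for schema- or regex-guided output.

# Sketch: constrain model output to a JSON schema.
# "response_format" is an assumed field name, not confirmed Telnyx syntax.
curl -i -X POST \
  https://api.telnyx.com/v2/ai/chat/completions \
  -H 'Authorization: Bearer YOUR_TELNYX_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [
      {"role": "user", "content": "Extract the name and age from: Jane is 31."}
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "type": "object",
        "properties": {
          "name": {"type": "string"},
          "age": {"type": "integer"}
        },
        "required": ["name", "age"]
      }
    }
  }'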

BENEFITS

Scale confidently

Leverage our dedicated network of GPUs to scale your AI-powered services effortlessly.

>4K

GPUs

Cost-effective

Thanks to our dedicated infrastructure, Telnyx users can save over 40% compared to OpenAI and MosaicML on embeddings alone.

40%

Cheaper embeddings

Supported models

Access the latest open-source LLMs on one platform within days of release. Easily switch between models for ultimate flexibility.

20+

Models

Always-on support

Access our free around-the-clock support—available to every customer.

24/7

Award-winning support

PRODUCTS

See what you can build with our suite of AI APIs

HOW IT WORKS

Step 1 of 4: Set up a portal account.

PRICING

See our pricing

Get started with Telnyx Inference today via the Mission Control Portal. View our full pricing page for details.

Starting at

$0.0004

per 1K tokens of inference
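
At $0.0004 per 1K tokens, for example, one million tokens of inference costs just $0.40.
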
RESOURCES

Start building

Take a look at our helpful tools to get started
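
For example, here's a minimal chat completion request against the Telnyx Inference API: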

curl -i -X POST \
  https://api.telnyx.com/v2/ai/chat/completions \
  -H 'Authorization: Bearer YOUR_TELNYX_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello, World!"
      }
    ],
    "model": "meta-llama/Meta-Llama-3-8B-Instruct"
  }'

  • Article

    Stay up to date

    We post the latest updates from our AI platform on the changelog page, so you can stay in the know.

  • Coding Tutorial

    Smart README creation

    Create accurate READMEs using Telnyx's AI platform for seamless data management and inference.

  • Article

    Explore the library

    Explore 20+ large language models ready for testing and integration into your AI projects.

FAQ

What is inference in AI?

Inference in AI refers to the process by which a machine learning model applies its learned knowledge to make decisions or predictions on new, unseen data. It's the phase where the trained model is used to interpret, understand, and derive conclusions from inputs it wasn't exposed to during training.