Telnyx AI changelog

Stay up to date with the latest releases and tutorials from the Telnyx AI team.


Sign up to get updates delivered straight to your inbox.

Llama 3.1 support

July 24th, 2024

Meta’s latest models expand context length to 128K, add support across eight languages, and continue to advance state-of-the-art performance for their respective sizes.

Llama 3.1 support graph

Vision language model support

July 24th, 2024

The /chat/completions endpoint now supports Vision Language Models (VLMs) that are able to process both images (vision) and text (language) as input.

Check out our latest tutorial for examples.
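A VLM request can mix text and image content parts in a single user message. A minimal sketch in Python, assuming the standard OpenAI-style content-part shape; the endpoint path, model name, and image URL are placeholders, so check the /models endpoint for the VLMs actually available:

```python
# Sketch of a VLM request to the OpenAI-compatible /chat/completions endpoint.
# The endpoint path, model name, and image URL below are placeholders.
import json

API_URL = "https://api.telnyx.com/v2/ai/chat/completions"  # assumed path

def build_vlm_request(image_url: str, question: str) -> dict:
    """Build an OpenAI-style chat payload mixing text and image content parts."""
    return {
        "model": "example-vision-model",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vlm_request("https://example.com/photo.jpg", "What is in this image?")
print(json.dumps(payload, indent=2))
```

Sending it is then an authenticated POST of this JSON body to the endpoint.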

Function calling

May 31st, 2024

The Telnyx /chat/completions endpoint now supports function calling via the tools field. Function calling amplifies the capabilities of large language models by connecting them to your custom software.

Check out our latest tutorial for examples.
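As a hedged sketch, the `tools` field declares your functions to the model, and your code then executes the tool call the model returns. The function name, schema, and tool-call shape below follow the OpenAI convention and are purely illustrative:

```python
# Sketch of function calling: declare a tool, then dispatch the model's
# tool call to local code. All names here are illustrative.
import json

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def get_weather(city: str) -> str:
    # Stand-in for your custom software; a real version might call a weather API.
    return f"Sunny in {city}"

def dispatch(tool_call: dict) -> str:
    """Execute the local function named in a model-returned tool call."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return {"get_weather": get_weather}[name](**args)

# A tool call shaped like the OpenAI-compatible response the endpoint returns:
example_call = {"function": {"name": "get_weather", "arguments": '{"city": "Chicago"}'}}
print(dispatch(example_call))  # -> Sunny in Chicago
```

The result of `dispatch` would then be sent back to the model as a `tool` role message so it can compose its final answer.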

Function calling with Telnyx Inference

Ensuring structured output from LLMs

April 23rd, 2024

The /chat/completions endpoint now supports constrained decoding to ensure output conforms to a regular expression or JSON schema.

This provides fine-grained control tailored to your specific schema requirements. Check out our tutorial for examples.
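As a sketch, a schema-constrained request might look like the following; the exact field that carries the schema may differ on Telnyx (consult the API reference), so the OpenAI-style `response_format` shown here is an assumption. A cheap local check can confirm the output conforms:

```python
# Sketch of requesting schema-constrained output. The `response_format` field
# shown here is an assumed OpenAI-style shape; the model name is a placeholder.
import json

SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

payload = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Extract: Ada Lovelace, age 36."}],
    "response_format": {  # assumed field name for constrained decoding
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": SCHEMA},
    },
}

def conforms(raw: str) -> bool:
    """Cheap local check: valid JSON carrying all required keys."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return all(key in obj for key in SCHEMA["required"])

print(conforms('{"name": "Ada Lovelace", "age": 36}'))  # True
print(conforms("not json"))  # False
```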

Ensuring structured outputs from LLMs

We've expanded our LLM Library

April 11th, 2024

Our /chat/completions endpoint now supports many of the most popular open-source LLMs from CodeLlama, Deepseek, Meta, Mistral, and NousResearch.

For a full list, check the /models endpoint.
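Responses from an OpenAI-compatible /models endpoint can be parsed like so; the sample data below is illustrative, not a real model list:

```python
# Sketch: pull model identifiers out of an OpenAI-style /models response.
# The sample response is illustrative, not an actual Telnyx model list.
def model_ids(models_response: dict) -> list[str]:
    """Return the `id` of every model in an OpenAI-style list response."""
    return [m["id"] for m in models_response.get("data", [])]

sample = {
    "object": "list",
    "data": [
        {"id": "meta-llama/example-model-a", "object": "model"},
        {"id": "mistralai/example-model-b", "object": "model"},
    ],
}
print(model_ids(sample))
```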

Explore new open-source LLMs in the Mission Control Portal

Using AI to summarize Telnyx Storage objects

March 20th, 2024

The summarize API provides a single convenient endpoint to summarize any text, audio, or video file in a Telnyx Storage bucket. File summaries are done entirely in-house. Under the hood, we are using our /audio/transcriptions endpoint to transcribe audio and video files, and the /chat/completions endpoint to summarize.

This feature is available now in the portal and via API.

Features
The Telnyx Summary API supports the following formats:

  • Text formats: pdf, html, txt, json, csv
  • Audio and video formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm

Summaries can be generated for files up to 100MB.
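A sketch of a summarize request; the endpoint path and field names are assumptions based on the description above, so confirm them against the API reference:

```python
# Sketch of a summarize request for an object in a Telnyx Storage bucket.
# The endpoint path and field names below are assumptions.
SUMMARIZE_URL = "https://api.telnyx.com/v2/ai/summarize"  # assumed path

SUPPORTED_AUDIO_VIDEO = {"flac", "mp3", "mp4", "mpeg", "mpga", "m4a", "ogg", "wav", "webm"}

def build_summarize_request(bucket: str, filename: str) -> dict:
    """Point the summarize API at one file in a storage bucket."""
    return {"bucket": bucket, "filename": filename}

payload = build_summarize_request("my-recordings", "support-call.mp3")
print(payload)
```

Sending it would be an authenticated POST; audio and video files are transcribed first, then summarized, as described above.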


Pricing


Summary API pricing depends on the file type being summarized.


For audio and video files, pricing starts at $0.003/minute, in line with the pricing for the /audio/transcriptions endpoint. Text file summaries follow /chat/completions pricing, at $0.0003/1K tokens.


A portal view of storage buckets summarized using the Telnyx Summarize API

Connect your data on our AI platform

OpenAI Compatible /audio/transcriptions (BETA)

March 12th, 2024

The /audio/transcriptions API provides a speech-to-text endpoint to transcribe spoken words to text.

Features:

  • Supports flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, and webm file types.
  • Supports segment level timestamps.
  • Pairs nicely with our /chat/completions endpoint to summarize audio.

The Telnyx /audio/transcriptions API supports a 4x higher maximum file size than OpenAI: files up to 100MB, versus OpenAI's 25MB limit.

Pricing starts at $0.003/minute, 50% cheaper than OpenAI.

Follow our Call Summarization tutorial to get started.
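A sketch of assembling an OpenAI-style multipart transcription request; the endpoint path, model name, and timestamp parameter here are assumptions modeled on the OpenAI convention:

```python
# Sketch of an OpenAI-compatible transcription request assembled for a
# multipart upload. The path, model name, and timestamp parameter are assumed.
import io

API_URL = "https://api.telnyx.com/v2/ai/audio/transcriptions"  # assumed path

def build_transcription_request(audio_bytes: bytes, filename: str) -> dict:
    """Assemble multipart fields for a speech-to-text request (max 100MB)."""
    assert len(audio_bytes) <= 100 * 1024 * 1024, "files are limited to 100MB"
    return {
        "files": {"file": (filename, io.BytesIO(audio_bytes))},
        "data": {
            "model": "example-whisper-model",  # placeholder model name
            "timestamp_granularities[]": "segment",  # assumed segment-timestamp param
        },
    }

req = build_transcription_request(b"\x00" * 16, "call.wav")
print(sorted(req["data"]))
```

The `files`/`data` split mirrors how `requests.post(API_URL, files=..., data=...)` would send the multipart form.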


Audio Transcriptions API

Explore new updates in our AI Playground

February 29th, 2024

We’re excited to bring system prompts and chat to our AI Playground in the portal.

Features:

  • System prompts allow users to give context and instructions before asking a question to a model. For example, users can specify a role, how to personalize the response, or what tone to use.
  • Telnyx users can view the chat responses and interact with an LLM in the conversation field.
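In API terms, a system prompt is simply the first entry in the `messages` array, ahead of the user and assistant turns. A small helper, with illustrative content:

```python
# Sketch: a system prompt sets context and instructions before the user's
# question, just as in the portal Playground. Content here is illustrative.
def build_chat(system_prompt: str, history: list[tuple[str, str]]) -> list[dict]:
    """Prepend a system message to alternating (user, assistant) turns."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        if assistant_turn:
            messages.append({"role": "assistant", "content": assistant_turn})
    return messages

msgs = build_chat(
    "You are a concise support agent. Answer in one sentence.",
    [("How do I reset my SIM?", "")],
)
print([m["role"] for m in msgs])  # ['system', 'user']
```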

Storage for AI and System Prompts demo



Start testing today in the Mission Control Portal.

Pricing:

  • $0.0002 / 1K tokens for 7B parameter models
  • $0.0003 / 1K tokens for 13B, 34B, 8x7B parameter models
  • $0.001 / 1K tokens for 70B parameter models

Take a look at our pricing page for all our Inference pricing.
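The rates above scale linearly with total tokens (prompt plus completion), billed per 1K tokens; for example, 2,000 tokens through a 70B model cost $0.002. A quick calculator:

```python
# Worked example of the per-token rates listed above.
RATES_PER_1K = {"7B": 0.0002, "13B-34B-8x7B": 0.0003, "70B": 0.001}

def inference_cost(total_tokens: int, tier: str) -> float:
    """Dollars charged for total_tokens tokens at the given model tier."""
    return total_tokens / 1000 * RATES_PER_1K[tier]

# 2,000 tokens through a 70B model:
print(f"${inference_cost(2000, '70B'):.4f}")  # $0.0020
```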

OpenAI Compatible /chat/completions

February 22nd, 2024

The Chat Completions API enables the LLM to use chat history as context when generating a response.

Features:

  • Chat Completions with support for messages, temperature, max_tokens, stream, and more.
  • Retrieval augmented generation (RAG) with embedded Telnyx Storage buckets using the tools parameter.
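Putting the listed parameters together, a request might look like this sketch; the model name and the bucket-retrieval tool shape are assumptions, so verify both against the API reference:

```python
# Sketch of a /chat/completions request using the parameters above. The model
# name is a placeholder, and the retrieval tool shape is an assumption about
# the RAG integration with embedded Storage buckets.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize our storage policy docs."}
    ],
    "temperature": 0.2,
    "max_tokens": 256,
    "stream": True,  # ask for incremental server-sent-event chunks
    "tools": [
        {
            # Assumed shape for retrieval over an embedded Storage bucket.
            "type": "retrieval",
            "retrieval": {"bucket_ids": ["my-embedded-bucket"]},
        }
    ],
}
print(sorted(payload.keys()))
```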

Chat Completion API


Take a look at our Inference Pricing Page for a detailed pricing list.
