Stay up to date with the latest releases and tutorials from the Telnyx AI team.
Sign up to get updates delivered directly to your inbox.
July 24th, 2024
Meta’s latest models expand context length to 128K tokens, add support for eight languages, and continue to advance state-of-the-art performance for their respective sizes.
July 24th, 2024
The /chat/completions endpoint now supports Vision Language Models (VLMs) that are able to process both images (vision) and text (language) as input.
Check out our latest tutorial for examples.
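As a rough illustration, the sketch below sends an image URL and a text prompt together in one request using Python's requests library. The base URL, model identifier, image URL, and response shape are illustrative assumptions; check the /models endpoint and the tutorial for the exact values.

```python
import os
import requests

# Minimal sketch of a vision request to the /chat/completions endpoint.
# The base URL, model name, and image URL are illustrative assumptions.
API_KEY = os.environ["TELNYX_API_KEY"]

response = requests.post(
    "https://api.telnyx.com/v2/ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Qwen/Qwen2-VL-7B-Instruct",  # hypothetical VLM identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is in this image."},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://example.com/photo.jpg"},
                    },
                ],
            }
        ],
    },
)

# Assumes an OpenAI-style response body.
print(response.json()["choices"][0]["message"]["content"])
```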
May 31st, 2024
The Telnyx /chat/completions endpoint now supports function calling via the tools field. Function calling amplifies the capabilities of large language models by connecting them to your custom software.
Check out our latest tutorial for examples.
Function calling with Telnyx Inference
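For a sense of the request shape, here is a minimal sketch that declares a hypothetical get_weather tool via the tools field. The base URL and model name are assumptions, and the tool definition stands in for whatever custom software you want the model to call.

```python
import os
import requests

# Hedged sketch of a function-calling request via the tools field.
# The base URL, model name, and tool definition are illustrative assumptions.
API_KEY = os.environ["TELNYX_API_KEY"]

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical custom function
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    "https://api.telnyx.com/v2/ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # illustrative model
        "messages": [{"role": "user", "content": "What's the weather in Chicago?"}],
        "tools": tools,
    },
)

# If the model decides to call the tool, the assistant message should contain
# the function name and JSON-encoded arguments for your own code to execute.
print(response.json()["choices"][0]["message"])
```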
April 23rd, 2024
The /chat/completions endpoint now supports constrained decoding to ensure output conforms to a regular expression or JSON schema.
This provides fine-grained control tailored to your specific schema requirements. Check out our tutorial for examples.
Ensuring structured outputs from LLMs
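As an illustration only, the sketch below asks the model to return JSON conforming to a small schema. The exact request field for constrained decoding (shown here in an OpenAI-style response_format shape) is an assumption, as are the base URL and model name; the tutorial documents the precise fields.

```python
import os
import requests

# Hedged sketch of constraining output to a JSON schema. The request field
# used for constrained decoding is assumed, not confirmed.
API_KEY = os.environ["TELNYX_API_KEY"]

invoice_schema = {
    "type": "object",
    "properties": {
        "customer": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["customer", "total"],
}

response = requests.post(
    "https://api.telnyx.com/v2/ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # illustrative model
        "messages": [
            {"role": "user", "content": "Acme Corp owes $120.50. Extract the invoice."}
        ],
        # Assumed OpenAI-style shape for schema-constrained output.
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "invoice", "schema": invoice_schema},
        },
    },
)
print(response.json()["choices"][0]["message"]["content"])
```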
April 11th, 2024
Our /chat/completions endpoint now supports many of the most popular open-source LLMs from CodeLlama, Deepseek, Meta, Mistral, and NousResearch.
For a full list, check the /models endpoint.
Explore new OS LLMs in the Mission Control Portal
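A quick way to see what is available is to query the /models endpoint directly. The sketch below assumes the API lives under https://api.telnyx.com/v2/ai and that the response follows an OpenAI-style list under a "data" key.

```python
import os
import requests

# Hedged sketch of listing available models. The base URL and response shape
# are assumptions; the /models path comes from the announcement above.
API_KEY = os.environ["TELNYX_API_KEY"]

response = requests.get(
    "https://api.telnyx.com/v2/ai/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
for model in response.json().get("data", []):
    print(model.get("id"))
```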
March 20th, 2024
The summarize API provides a single convenient endpoint to summarize any text, audio, or video file in a Telnyx Storage bucket. File summaries are done entirely in-house. Under the hood, we are using our /audio/transcriptions endpoint to transcribe audio and video files, and the /chat/completions endpoint to summarize.
This feature is available now in the portal and via API.
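As a rough sketch, the request below points the summarize endpoint at a file already stored in a Telnyx Storage bucket. The endpoint path and the bucket/filename parameter names are assumptions for illustration; consult the API reference for the exact fields.

```python
import os
import requests

# Hedged sketch of summarizing a file in a Telnyx Storage bucket.
# The endpoint path and parameter names are assumed, not confirmed.
API_KEY = os.environ["TELNYX_API_KEY"]

response = requests.post(
    "https://api.telnyx.com/v2/ai/summarize",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "bucket": "my-storage-bucket",    # hypothetical bucket name
        "filename": "customer-call.mp3",  # hypothetical object key
    },
)
print(response.json())
```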
Features
The Telnyx Summary API supports text, audio, and video files stored in Telnyx Storage buckets.
Summaries can be conducted on files of up to 100MB.
Pricing
Summary API pricing is dependent on the file type being summarized.
For audio and video files, pricing starts from $0.003/minute, in line with the pricing for the /audio/transcriptions endpoint. Text file summary pricing is based on /chat/completions endpoint pricing, at $0.0003/1K tokens.
A portal view of storage buckets summarized using Telnyx Summarize API
March 12th, 2024
The /audio/transcriptions API provides a speech-to-text endpoint to transcribe spoken words to text.
Features:
The Telnyx /audio/transcriptions API supports a 4x higher maximum file size than OpenAI: users can transcribe files of up to 100MB, versus OpenAI's 25MB limit.
Pricing starts from $0.003/minute, 50% cheaper than OpenAI.
Follow our Call Summarization tutorial to get started.
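For orientation, the sketch below uploads a local audio file as multipart form data. The base URL, model identifier, and field names mirror the common OpenAI-style layout and are assumptions rather than the documented contract; the Call Summarization tutorial has the exact request.

```python
import os
import requests

# Hedged sketch of transcribing a local audio file (up to 100MB).
# The base URL, model name, and multipart field names are assumptions.
API_KEY = os.environ["TELNYX_API_KEY"]

with open("customer-call.mp3", "rb") as audio_file:
    response = requests.post(
        "https://api.telnyx.com/v2/ai/audio/transcriptions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": audio_file},
        data={"model": "distil-whisper/distil-large-v2"},  # illustrative model
    )
print(response.json())
```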
February 29th, 2024
We’re excited to bring system prompts and chat to our AI Playground in the portal.
Features:
Start testing today in the Mission Control Portal.
Pricing:
Take a look at our pricing page for all our Inference pricing.
February 22nd, 2024
The Chat Completions API enables the LLM to use chat history as context when returning a model-generated response.
Features:
Supports messages, temperature, max_tokens, stream, and more.
Take a look at our Inference Pricing page for a detailed pricing list.
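To show how chat history is carried as context, here is a minimal sketch that passes prior turns back in the messages array along with temperature, max_tokens, and stream. The base URL and model name are illustrative assumptions.

```python
import os
import requests

# Hedged sketch of a multi-turn chat request: prior turns are passed back in
# "messages" so the model can use the conversation as context.
API_KEY = os.environ["TELNYX_API_KEY"]

messages = [
    {"role": "system", "content": "You are a concise support assistant."},
    {"role": "user", "content": "What is Telnyx Inference?"},
    {"role": "assistant", "content": "An API for running open-source LLMs."},
    {"role": "user", "content": "Which endpoint do I call for chat?"},
]

response = requests.post(
    "https://api.telnyx.com/v2/ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # illustrative model
        "messages": messages,
        "temperature": 0.2,
        "max_tokens": 256,
        "stream": False,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```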