Learn how Telnyx is embracing AI to solve customer problems—and how you can build your own support bot using Telnyx tools.
By Ciaran Palmer
Try our support bot in the bottom right corner of the Mission Control Portal by clicking on the Telnyx logo and selecting “Ask AI Assistant”
Prompt, effective customer support is crucial to any business's success. A recent report from Juniper Research estimates that 70% of all customer support interactions will include chatbots by the end of 2023. Why the surge? The release of ChatGPT has made it feasible for companies to build support bots at scale, opening new channels for customer support.
In this rapidly evolving customer support landscape, Telnyx is taking a significant step forward with its new AI-powered assistant. For us, this development isn’t about keeping up with trends. We’re aligning with this shift through practical innovation to enhance the efficiency and quality of our customer interactions.
We’re using Telnyx Inference to improve customer support by giving users contextualized responses to questions while also improving answer times—addressing user needs more effectively.
Our in-house AI assistant is a testament to what you can achieve with the right tools. Whether you're an existing customer familiar with our commitment to quality or a new visitor exploring the potential of AI in customer support, this development marks an important step in our journey to provide an even better service to our customers.
Join us as we dive into how our NOC engineering team worked with our AI squad to develop our AI assistant and how you can leverage these advancements for your own business.
Earlier this year, our team of AI engineers embarked on an initiative to enhance our support experience with AI. The team quickly produced an in-house AI assistant that enabled Telnyx portal users to transition seamlessly from encountering a problem to finding a solution.
So how did they do it?
Ciaran Palmer, Lead Engineer, NOC Automation, takes us through how his team worked to get this project live.
My team embarked on a challenging endeavor: transforming Telnyx's customer-facing documentation into searchable vectors. The process was intricate because documents varied in format across our different sources.
The initial step was to craft loaders that could consistently retrieve and parse data from these sources. The resulting embedded documents allowed us to use customer queries to fetch the most relevant documentation, enhancing the chatbot's responses and helping our customers reach the information they’re looking for faster. This embedding process was crucial to prevent hallucinations, ensuring the chatbot provides accurate and reliable information.
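The retrieve-by-similarity step described above can be sketched in a few lines. This is a minimal illustration, not Telnyx's actual pipeline: the `embed` function here is a bag-of-words stand-in for a real embedding model (in production this would be a call to an embedding API), and the document names are invented.

```python
import math
from collections import Counter

# Placeholder embedding: a bag-of-words vector. A real system would call
# an embedding model here instead of counting words.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    # Rank documents by similarity to the query and return the top k names.
    q = embed(query)
    ranked = sorted(docs, key=lambda name: cosine(q, embed(docs[name])), reverse=True)
    return ranked[:k]

# Toy corpus standing in for embedded documentation pages.
docs = {
    "sms-guide": "Send an SMS message with the Telnyx messaging API",
    "sip-trunking": "Configure SIP trunking for voice calls",
}
print(retrieve("how do I send an SMS?", docs))  # → ['sms-guide']
```

The same shape applies at scale: embed every chunk once at load time, embed each incoming customer query, and hand the top-ranked chunks to the model as grounding context.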
Despite successfully embedding content from the three sites, we identified a gap in our solution. The API specification, critical for a developer-centric chatbot, presented a unique challenge due to its YAML formatting, which is not inherently suitable for vector embeddings.
To overcome this problem, we refined the specification into a format suitable for embedding.
Embedding this refined data allowed our chatbot to provide precise API-related support, fetching endpoint details as needed and offering a robust solution for developers.
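One plausible way to make an OpenAPI-style spec embeddable is to flatten it into one plain-text chunk per endpoint. The sketch below assumes the YAML has already been parsed into a Python dict (e.g. with a YAML library); the spec contents shown are invented for illustration, not Telnyx's real API specification.

```python
# Invented fragment of an OpenAPI-style spec, as it would look after
# YAML parsing.
spec = {
    "paths": {
        "/messages": {
            "post": {
                "summary": "Send a message",
                "description": "Send an SMS or MMS via the Messaging API.",
            }
        },
        "/calls": {
            "post": {
                "summary": "Dial a call",
                "description": "Start an outbound call.",
            }
        },
    }
}

def endpoint_chunks(spec: dict) -> list:
    # Produce one natural-language chunk per (method, path) pair so each
    # endpoint can be embedded and retrieved independently.
    chunks = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            text = f"{method.upper()} {path}: {op.get('summary', '')}. {op.get('description', '')}"
            chunks.append(text.strip())
    return chunks

for chunk in endpoint_chunks(spec):
    print(chunk)
```

Chunking per endpoint means a query like "how do I send a text message" retrieves only the relevant operation rather than the entire spec file.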
This innovative approach not only ensures rapid resolutions for our customers but does so without incurring additional overhead, reflecting Telnyx's commitment to efficiency and customer satisfaction.
When operating on GPT-4, we faced a significant challenge: its 8,000-token context limit. Our primary strategy to mitigate this issue was to segment our documents into paragraphs. This gave the team the flexibility to provide complete documents or to create a “snapshot” centered on the most relevant paragraph. Even for extensive documents, such as our SMS guide that spans multiple SDKs, we were able to keep the information below 2,000 tokens.
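The “snapshot” idea can be sketched as follows: start from the most relevant paragraph and expand outward to its neighbors until a token budget is reached. This is an assumed implementation, not the team's actual code, and the word count used here is a crude stand-in for a real tokenizer.

```python
def n_tokens(text: str) -> int:
    # Crude token estimate: one token per whitespace-separated word.
    # A production system would use the model's real tokenizer.
    return len(text.split())

def snapshot(paragraphs: list, best: int, budget: int = 2000) -> str:
    # Build a context window around paragraph `best`, growing outward
    # one neighbor at a time while the budget allows.
    chosen = {best}
    used = n_tokens(paragraphs[best])
    lo, hi = best - 1, best + 1
    while True:
        progressed = False
        if lo >= 0 and used + n_tokens(paragraphs[lo]) <= budget:
            chosen.add(lo)
            used += n_tokens(paragraphs[lo])
            lo -= 1
            progressed = True
        if hi < len(paragraphs) and used + n_tokens(paragraphs[hi]) <= budget:
            chosen.add(hi)
            used += n_tokens(paragraphs[hi])
            hi += 1
            progressed = True
        if not progressed:
            break
    # Reassemble the selected paragraphs in document order.
    return "\n\n".join(paragraphs[i] for i in sorted(chosen))
```

Because the window grows symmetrically from the hit, the snapshot stays contiguous and the surrounding context a reader would need travels with the matched paragraph.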
As we begin to transition the AI Assistant to run on Telnyx Inference, these token limits won’t apply to the bot, and we can start to unlock our new assistant’s true potential.
By harnessing the power of artificial intelligence, our AI Assistant enhances user interactions, offering responses and solutions that are closely tailored to each user's specific needs and questions. From handling common issues to providing detailed information about our services, our AI Assistant efficiently manages a wide array of customer requests.
With complete embedding of our API documentation, we can use the LLM to write simple code to help new users get started more quickly.
The LLM also uses SDK code examples to show developers how to do things in their native programming language, improving the rate at which new users become comfortable with Telnyx APIs.
We built our new chatbot to act as a middleman between Telnyx users and our vast documentation. In cases where a customer is looking for details in our docs, it’s likely the LLM will provide a quality answer—and much faster than a human could.
But some questions will always require a human touch. We’ve trained our bot to transfer customers to a live agent when it can't answer a question.
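One simple way to implement that handoff, sketched here as an assumption rather than Telnyx's actual logic, is to route on retrieval confidence: if no documentation chunk scores above a threshold, the bot escalates to a live agent instead of answering without grounding.

```python
# Assumed threshold; in practice this would be tuned against real
# conversation data.
ESCALATION_THRESHOLD = 0.3

def route(best_score: float) -> str:
    # Decide whether the LLM answers or the conversation is handed off.
    if best_score < ESCALATION_THRESHOLD:
        return "handoff:live_agent"
    return "answer:llm"

# A billing dispute with no matching docs escalates; a documented
# how-to question stays with the bot.
print(route(0.12))  # → handoff:live_agent
print(route(0.82))  # → answer:llm
```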
Our AI Assistant can be enabled on multiple platforms to help developers working with Telnyx products in their chosen channel.
Currently live in the Mission Control Portal and our developer Slack community, the AI Assistant will be coming to telnyx.com soon.
At Telnyx, we love testing out new technology. But the true aim of this project is to provide better, faster support to our customers.
The only way to know if we’ve hit the mark is to hear from you. Take our support bot for a spin to see if it can help you find what you’re looking for.
Initially built on OpenAI APIs, the next challenge was to have the new AI Assistant running on our latest product—Telnyx Inference.
Check back to see how James Whedbee and his team fulfilled specific requests to build Telnyx Inference to support our new AI Assistant.