
Your Voice AI Agent works until it doesn't. Here's why.

The hard part of building voice AI agents isn't connecting APIs. It's writing instructions that prevent the LLM from skipping steps, improvising, and breaking.

By Sonam Gupta, PhD

I recently built a voice-driven bug reporter that listens to a user, collects the details of an issue, and then creates a JIRA ticket through the Telnyx agent platform. On the surface, it sounds simple. Connect JIRA. Add a prompt. Let the model do its thing.

[Screenshot: Voice AI agent JIRA integration]

In reality, the experience taught me something completely different. The hard part was not the JIRA API. It was everything around it: the conversation flow, the timing, the tool interpretation, and the way LLMs behave when you need structured outcomes.


Prompting is a borderline art. The black-box nature of LLMs requires persistence.

Stephen Malito, Director, Solutions Engineering @ Telnyx


This post walks through what I learned, what surprised me, and the design patterns that ended up making the agent work reliably. Along the way, I will highlight a few screenshots from the Telnyx portal that show how the setup looks in practice.


Setting Up the Agent

At a high level, the setup for this agent was straightforward. You need:

  • an LLM
  • voice input and output configured
  • a JIRA integration
  • a prompt that directs the agent through the flow

LLM selection

This choice sets the agent's reasoning style and verbosity from the start.

[Screenshot: Selecting an LLM in the Telnyx portal]

Choosing a model that responds consistently and avoids rambling is surprisingly important for collecting structured fields.

Voice configuration

[Screenshot: Voice configuration for a Telnyx voice AI agent]

As this is a voice agent, pacing matters. If the transcription is slow or the TTS responses are too long, users feel friction. That forced me to write shorter prompts and limit responses to under twenty seconds.
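To make that limit concrete, here is a rough way to budget reply length before sending text to TTS. This is a minimal sketch, not part of the Telnyx setup; the ~150 words-per-minute speaking rate is an assumption you would tune to your chosen voice.

```python
# Rough guard for keeping spoken replies under ~20 seconds.
# The words-per-minute rate is an assumption, not a Telnyx setting.

WORDS_PER_MINUTE = 150
MAX_SECONDS = 20

def estimated_speech_seconds(text: str) -> float:
    """Estimate how long a TTS engine will take to read `text` aloud."""
    word_count = len(text.split())
    return word_count / (WORDS_PER_MINUTE / 60)

def truncate_for_voice(text: str) -> str:
    """Trim a reply to fit the time budget, cutting at a word boundary."""
    if estimated_speech_seconds(text) <= MAX_SECONDS:
        return text
    budget_words = int(MAX_SECONDS * WORDS_PER_MINUTE / 60)
    return " ".join(text.split()[:budget_words]) + "..."
```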

Connecting JIRA

Once JIRA is connected with an API token, the agent gains access to these tools:

  • jira__get_projects
  • jira__get_current_user
  • jira__create_issue

That is really all the setup you need before you start shaping the logic.

[Screenshot: JIRA integration with the voice AI agent]

What I Expected To Be Hard (But Was Not)

Creating a JIRA issue was the easy part. The tool call itself is simple: pass a project key, summary, description, priority, and issue type. Sending the request worked on the first try.
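For context, here is roughly what that create call amounts to if you made it directly against Jira Cloud's REST API. This is a sketch for illustration, not the Telnyx integration's internals; the site URL, email, and token are placeholders.

```python
# Roughly what jira__create_issue boils down to: a single POST to
# Jira's standard v2 REST endpoint. Credentials below are placeholders.
import requests

JIRA_SITE = "https://your-site.atlassian.net"  # placeholder
AUTH = ("you@example.com", "your-api-token")   # placeholder credentials

def create_issue(project_key: str, summary: str, description: str,
                 priority: str, issue_type: str = "Bug") -> str:
    """Create a Jira issue and return its key (e.g. 'DR-42')."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
            "priority": {"name": priority},
        }
    }
    resp = requests.post(f"{JIRA_SITE}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["key"]
```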

Other tasks that I assumed would be painful but were not:

  • listing projects through jira__get_projects
  • pulling the current user
  • authentication and permissions

These pieces were mechanical. The problem was everything between them.

What Was Actually Hard

Getting the model to follow a strict step-by-step flow

Here is the high-level flow I needed:

  1. Ask if the user wants project info or user status
  2. Ask for the project
  3. Collect the summary
  4. Collect a detailed description
  5. Ask for priority
  6. Confirm everything
  7. Finally create the issue

Sounds easy, but LLMs love to skip ahead or merge multiple fields into a single response. Without rigid gating, the agent would do things like:

  • ask for summary and description in one question
  • assume a project without asking
  • call the JIRA tool before confirming anything

The solution was to explicitly script each turn in the prompt, telling the model exactly what to ask and in what order. This taught me that most voice agents are only as smart as the conversation scaffolding you give them.
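To illustrate the scaffolding idea, here is a minimal sketch of turn gating outside the prompt: an ordered script where the agent may only ask the question for the first unfilled field. The step names and state handling are illustrative, not Telnyx APIs.

```python
# A minimal sketch of gating: the conversation is an ordered script,
# and the agent may only ask the question for the current step.
# The questions mirror the bug-detail flow described above.

SCRIPT = [
    ("project", "Which JIRA project should this bug go in?"),
    ("summary", "What's the bug? Give me a short summary."),
    ("description", "Can you describe what happened in detail?"),
    ("priority", "What's the priority? Low, Medium, Major, or Critical?"),
]

def next_question(collected: dict) -> str | None:
    """Return the one question the agent is allowed to ask next."""
    for field, question in SCRIPT:
        if field not in collected:
            return question
    return None  # all fields gathered; move on to confirmation

# Usage: each user turn fills exactly one slot before advancing.
collected = {"project": "DR"}
print(next_question(collected))  # -> "What's the bug? Give me a short summary."
```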

Preventing premature or incorrect tool calls

Even with instructions, the agent sometimes tried to call jira__create_issue early because it felt like it had enough information. Other times, it would call jira__get_projects when the user did not ask for it. I had to add clear constraints like:

  • “Only call a tool when the user directly asks for it”
  • “Do not create any issue before confirming with the user”

These guardrails made the agent predictable.
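The same guardrails can be expressed as a gate in code. A minimal sketch, assuming illustrative `collected` and `confirmed` state rather than any platform API:

```python
# Gate jira__create_issue behind complete slots plus user confirmation.

REQUIRED_FIELDS = {"project", "summary", "description", "priority"}

def may_call_create_issue(collected: dict, confirmed: bool) -> bool:
    """Permit the write only when every field is filled and confirmed."""
    return REQUIRED_FIELDS.issubset(collected) and confirmed

# The model "feeling like" it has enough information is not enough:
may_call_create_issue({"project": "DR", "summary": "Login fails"}, False)  # -> False
```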

Handling verbose or confusing tool responses

JIRA endpoints return structured data that is not voice-friendly. Too many fields. Nested structures. Lots of metadata the model does not need. The fix was to instruct the model to summarize tool outputs in a human-friendly way:

  • “List only the top three project names”
  • “Return only the user status field”

The agent became significantly easier to use once tool responses were filtered and summarized.
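As a sketch of that filtering step, here is how the project listing might be collapsed into one speakable sentence. The payload shape shown is a simplified stand-in for what Jira returns:

```python
# Collapse a verbose tool response into something speakable.
# The response shape is a simplified stand-in for Jira's project list.

def summarize_projects(projects: list[dict], limit: int = 3) -> str:
    """Reduce a raw jira__get_projects payload to one short spoken line."""
    names = [p["name"] for p in projects[:limit]]
    return "Your top projects are " + ", ".join(names) + "."

raw = [{"key": "DR", "name": "DevRel"}, {"key": "PORTAL", "name": "Portal"},
       {"key": "VOICEAI", "name": "Voice AI"}, {"key": "X", "name": "Extra"}]
print(summarize_projects(raw))  # -> "Your top projects are DevRel, Portal, Voice AI."
```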

Collecting clean fields one at a time

This was one of the biggest challenges. The agent must collect:

  • summary
  • description
  • priority
  • project

If you do not explicitly force one question per field, the model starts improvising. It might combine fields, misinterpret the priority, or treat part of the description as a summary. This reinforced a pattern that applies to all real-world agents:

LLMs do not handle slot filling reliably on their own. You must enforce it.
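Enforcing it can be as simple as validating each answer against the slot it is supposed to fill and re-asking on anything ambiguous. A minimal sketch for the priority slot; the matching logic is deliberately naive:

```python
# Enforced slot filling in miniature: accept an answer only if it names
# exactly one valid value, otherwise re-ask instead of guessing.
# The allowed list matches the prompt at the end of this post.

ALLOWED_PRIORITIES = {"low", "medium", "major", "critical"}

def fill_priority(answer: str) -> str | None:
    """Return a normalized priority, or None when the answer is ambiguous."""
    matches = [p for p in ALLOWED_PRIORITIES if p in answer.lower()]
    return matches[0].capitalize() if len(matches) == 1 else None

fill_priority("uh, make it critical I guess")  # -> "Critical"
fill_priority("medium or maybe major")         # -> None: re-ask, don't guess
```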

Designing a confirmation step that prevents bad writes

Before creating a ticket, the agent says:

“Let me confirm: project is X, summary is Y, description is Z, priority is P. Should I create this?”

This checkpoint mattered more than I expected. Without it, the agent would occasionally move forward with a wrong or incomplete field. With it, mistakes became catchable and fixable.

This is essential any time an agent writes to a system of record.
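Here is the checkpoint in miniature: read every field back and require an explicit yes before any write. The yes-detection below is a naive placeholder; in the real agent, the LLM interprets the reply:

```python
# Read everything back and require an explicit yes before writing.
# The yes/no matching is deliberately naive and purely illustrative.

def confirmation_prompt(slots: dict) -> str:
    return (f"Let me confirm: project is {slots['project']}, "
            f"summary is {slots['summary']}, "
            f"description is {slots['description']}, "
            f"priority is {slots['priority']}. Should I create this?")

def user_confirmed(reply: str) -> bool:
    """Only an explicit affirmative unlocks the create call."""
    return reply.strip().lower() in {"yes", "yep", "yeah", "sure", "go ahead"}
```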

Avoiding loops and making the agent recover gracefully

Tool calls fail sometimes. Or they return empty results. Or latency causes confusion. In my prompt, I added a rule:

“If something does not work, acknowledge it and move on.”

Without this, the agent would retry forever or get stuck in an apology loop. In production, graceful degradation matters as much as correctness.
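That rule translates naturally into a bounded retry with a spoken fallback. A sketch reusing the hypothetical `create_issue` helper from earlier; the single-retry cap is a design choice, not a platform default:

```python
# "Acknowledge it and move on" as code: one retry, then a spoken
# fallback that preserves the user's report instead of looping.

def create_with_fallback(slots: dict) -> str:
    for attempt in range(2):  # at most one retry, never an endless loop
        try:
            key = create_issue(slots["project"], slots["summary"],
                               slots["description"], slots["priority"])
            return f"Perfect! Created issue {key} in {slots['project']}."
        except Exception:
            continue
    return ("I tried to create the issue but ran into a problem. "
            f"Your bug report was: {slots['summary']} in {slots['project']}.")
```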

What the Debugging Process Looked Like

The Telnyx analysis tab became my best friend during this build. I used it to see:

  • exactly what the model said
  • the raw tool call payloads
  • the tool responses
  • whether the logic flowed correctly across turns

Watching the agent in real time revealed issues that were not visible from the prompt alone. It also helped validate that the model was respecting the order I intended.

[Screenshot: Conversation log for the voice AI agent]

The Patterns That Actually Made The Agent Work

Here are the design patterns that emerged and that I would reuse for any agent:

  • Breaking questions into smaller pieces reduced confusion
  • Confirming the ticket before creation prevented accidental writes
  • Summarizing tool outputs made the agent more voice-friendly
  • Adding explicit gating kept the model from calling tools early
  • Error handling mattered more than I expected
  • Tone shaping helped keep the interaction natural

What This Experience Taught Me About Real Agents

The biggest lesson is that the complexity of agent building is not in the API integrations. It is in shaping how the model asks questions, manages state, handles failures, interprets tools, and interacts with humans.

[Diagram: Instructions for voice AI agents]

In other words, the JIRA tool was the easy part. The conversation was the hard part, and that pattern applies to almost every real-world agent. If you get the flow, gating, and failure paths right, the agent feels reliable. If you do not, even the simplest task becomes unpredictable.

Conclusion

Building this agent was a great reminder that agents are not automagic. They need structure, good prompt architecture, error handling, and careful shaping of how they gather information from users. The Telnyx portal made it easy to iterate quickly, inspect calls, adjust prompts, and connect external tools. Once the agent logic was in place, the actual JIRA operations were trivial. Try building one yourself from here.

If you're curious what the final prompt looked like after all those iterations, here it is:


Voice AI Prompt - JIRA Bug Reporter Assistant

You are a friendly and efficient Bug Reporter assistant. Your job is to help users report bugs by voice and create properly formatted JIRA issues.

You have access to these JIRA tools:

  • jira__get_current_user: Get the status of the current user
  • jira__create_issue: Create a new JIRA issue
  • jira__get_projects: Get a list of JIRA projects

Conversation Flow:

1. Before the user reports the bug, ask if they want to get a list of projects or get the status of the current user. You can use the following for that:

  • If the user asks what projects they have access to, use the jira__get_projects tool to answer that question. Give the top 3 names.
  • If the user asks for the user status, use the jira__get_current_user tool to answer that.

2. After asking for projects or user status, ask for the project:

"Which JIRA project should this bug go in? For example, you might have DR, PORTAL, VOICEAI, or DEVREL."

3. Gather bug details (one at a time):

"What's the bug? Give me a short summary." "Can you describe what happened in detail?" "What's the priority? Low, Medium, Major, or Critical?"

4. Confirm

"Let me confirm:

  • Project: [PROJECT]
  • Summary: [SUMMARY]
  • Description: [DESCRIPTION]
  • Priority: [PRIORITY]

Should I create this?"

5. Create issue

If the user confirms, use the tool jira__create_issue with: { "project_key": "[PROJECT]", "summary": "[SUMMARY]", "description": "[DESCRIPTION]", "issue_type": "Bug", "priority": "[PRIORITY]" }

If it succeeds: "Perfect! Created issue [ISSUE KEY] in [PROJECT]."
If it fails: "I tried to create the issue but ran into a problem. Your bug report was: [SUMMARY] in [PROJECT]."

6. Ask if they want to report another

"Would you like to report another bug?"

If no, say "Thanks for reporting!" and hang up.

IMPORTANT:

  • Keep moving forward even if tool calls fail
  • Don't get stuck - if something doesn't work, acknowledge it and continue
  • Keep responses under 20 seconds
  • Be conversational and natural
  • Don't be abrupt while hanging up