Telnyx Voice AI agents now offer precise control over their response timing, activating only after users finish speaking. The Start Speaking Plan addresses the common problem of agents interrupting users mid-sentence, which often leads to incomplete data and failed tool calls.
The Start Speaking Plan provides four configurable wait times that can be tuned to different speech patterns.
What’s new
- Context-aware Endpointing: Our system now intelligently adapts wait times based on the context of the transcription, recognizing punctuation, sequences of digits, and natural mid-sentence pauses (see the sketch after this list).
- Number Dictation Support: We've extended wait times to accommodate natural pauses when users dictate numerical information such as phone numbers, order IDs, or account numbers.
- IVR-Friendly Timing: Agents interacting with automated systems will benefit from longer baseline delays, improving navigation and interaction.
- Fully Configurable: Tailor each wait time from 0 to 5 seconds to fit your use case.
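For illustration only, here is a minimal sketch of how context-aware endpointing could choose among the four wait times (the parameter names match those listed under Getting started below). The heuristics, thresholds, and helper function are hypothetical assumptions, not Telnyx's implementation:

```python
import re

# Hypothetical endpointing sketch: pick how long to wait after the last
# transcribed word before the agent starts speaking. Not Telnyx's code.
def choose_wait_seconds(transcript: str, plan: dict) -> float:
    text = transcript.rstrip()
    # Trailing digit sequence (phone number, order ID): allow more dictation time.
    if re.search(r"(\d[\s-]?){3,}$", text):
        return plan["on_number_seconds"]
    # Terminal punctuation: the sentence is likely complete, respond quickly.
    if text.endswith((".", "?", "!")):
        return plan["on_punctuation_seconds"]
    # Words but no terminal punctuation: likely a mid-sentence pause, wait longer.
    if text:
        return plan["on_no_punctuation_seconds"]
    # Nothing transcribed yet: fall back to the baseline wait.
    return plan["wait_seconds"]

plan = {
    "wait_seconds": 0.8,
    "on_punctuation_seconds": 0.1,
    "on_no_punctuation_seconds": 1.5,
    "on_number_seconds": 1.2,
}
print(choose_wait_seconds("My order number is 4821 7", plan))  # -> 1.2
```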
Why it matters
- Prevents incomplete data: Users finish providing complete information before the agent processes it.
- Handles edge cases: Works for accents, slow speakers, IVR interactions, and multilingual scenarios.
- Production control: Customize for your specific use case (e.g., healthcare requires different timing than e-commerce).
- Reduces retries: Get the right data on the first attempt.
Example use cases
- Healthcare: Longer waits for elderly patients checking calendars.
- Order fulfillment: Extended number detection for 10-16 digit order IDs.
- IVR navigation: Patient timing for robotic speech systems.
- Financial services: Careful endpointing for account numbers and routing numbers.
Getting started
- Log in to Mission Control Portal → Voice AI Assistant settings
- Configure the Start Speaking Plan (see the API sketch after this list):
  - wait_seconds: Baseline (0.3-1.5s)
  - on_punctuation_seconds: After sentences (0.1s)
  - on_no_punctuation_seconds: Mid-sentence (1.5s)
  - on_number_seconds: Digit sequences (0.5-1.5s)
- Test with number dictation and high-latency scenarios
- Deploy and monitor webhook success rates
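If you prefer to script the configuration rather than use the portal, a minimal sketch with Python and the requests library might look like the following. The four wait-time parameters are the ones documented above; the endpoint path, the start_speaking_plan field name, and the assistant ID are assumptions for illustration, so check the Telnyx API documentation for the actual schema:

```python
import os

import requests  # third-party HTTP client

TELNYX_API_KEY = os.environ["TELNYX_API_KEY"]
ASSISTANT_ID = "your-assistant-id"  # hypothetical placeholder

# Wait-time values tuned for a digit-heavy flow such as order ID capture.
payload = {
    "start_speaking_plan": {               # assumed field name
        "wait_seconds": 0.8,               # baseline wait
        "on_punctuation_seconds": 0.1,     # respond quickly after complete sentences
        "on_no_punctuation_seconds": 1.5,  # tolerate mid-sentence pauses
        "on_number_seconds": 1.2,          # leave room for digit dictation
    }
}

resp = requests.patch(
    f"https://api.telnyx.com/v2/ai/assistants/{ASSISTANT_ID}",  # assumed path
    headers={"Authorization": f"Bearer {TELNYX_API_KEY}"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)
```

After deploying, monitor webhook success rates (step 4 above) to confirm the new timings reduce retries.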
Talk to us
Need help tuning for your use case?
Check the documentation or connect with our voice AI team.