
Methodology

How We Measure

Every benchmark is reproducible. This page documents exactly how each measurement is taken, what counts as passing, and how you can verify the results yourself.

Overview

Telnyx benchmarks measure real infrastructure capabilities — not marketing claims. Each benchmark follows a defined methodology with clear pass/warn/fail thresholds, and every result includes instructions for independent reproduction.

Benchmarks are categorized by signal type and confidence level. Latency and throughput benchmarks are measured from production telemetry. Ownership and certification benchmarks are proven through public documentation and regulatory filings.

Signal Types

Each benchmark is classified by how its signal is obtained. This classification determines the confidence and reproducibility of the result.

Measured

Directly observed from production telemetry or synthetic tests. Represents real-world performance data captured at the time of measurement.

Examples: API latency, SIP call setup time, uptime percentage
Proven

Verified through public documentation, regulatory filings, or contractual commitments. Not directly measured but verifiable by third parties.

Examples: CLEC licenses, STIR/SHAKEN attestation level, datacenter locations
Derived

Computed from a combination of measured and proven signals. Requires interpretation of multiple data points.

Examples: vendor count, carrier ownership classification
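The three signal types above can be captured in a typed record. The following is a minimal sketch; the field names and `Signal` class are illustrative assumptions, not the actual benchmark schema.

```python
# Hypothetical representation of a typed, sourced, timestamped signal.
# Field names are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Signal:
    name: str
    signal_type: str        # "measured" | "proven" | "derived"
    value: object
    source: str             # e.g. "production telemetry", "FCC filing"
    measured_at: datetime

latency = Signal(
    name="api_latency_p50_ms",
    signal_type="measured",
    value=42.0,
    source="production telemetry",
    measured_at=datetime.now(timezone.utc),
)
print(latency.signal_type)  # measured
```

Because each record carries its own `signal_type` and `source`, a consumer can weight measured signals above derived ones without out-of-band context.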

Individual Methodologies

Detailed methodology for each benchmark, including measurement procedure, thresholds, and reproduction instructions.

Carrier Network Ownership

↑ Higher is better · boolean

Whether the communications provider owns and operates its own carrier network infrastructure (CLEC/IXC licenses, direct peering, own switching equipment). Providers that resell or aggregate other carriers' services are NOT carrier_owned.

Measurement Method

Binary: true if the provider holds carrier licenses and operates its own infrastructure; false if it resells third-party carrier services.

Thresholds

✓ Pass: 1

Reproduce

npx @telnyx/benchmark --carrier

Inter-Provider Hops

↓ Lower is better · hops

The number of intermediate network providers between the calling party and the called party. Zero hops means the call stays on the provider's own network from origination to termination. Higher hops indicate intermediaries that add latency, cost, and failure points.

Measurement Method

Count of distinct network operators in the call path excluding the originating and terminating endpoints. Measured via SIP signaling analysis.

Thresholds

✓ Pass: 0
⚠ Warn: 2

Reproduce

npx @telnyx/benchmark --carrier
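The pass/warn/fail banding above can be sketched as a small classifier. This is illustrative only and assumes the warn band covers values above the pass threshold up to and including the warn threshold; it is not the CLI's actual logic.

```python
# Hedged sketch: band a measured hop count against the thresholds above
# (pass: 0, warn: up to 2, fail otherwise). Assumed semantics, not the
# @telnyx/benchmark implementation.
def classify_hops(hops: int) -> str:
    if hops <= 0:
        return "pass"
    if hops <= 2:
        return "warn"
    return "fail"

print(classify_hops(0))  # pass
print(classify_hops(2))  # warn
print(classify_hops(3))  # fail
```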

Vendor Count for Voice AI Agent

↓ Lower is better · vendors

The minimum number of separate vendor contracts required to deploy a complete voice AI agent. A complete agent requires: SIP trunking, speech-to-text, LLM inference, and text-to-speech. A count of 1 means all components are available from a single provider.

Measurement Method

Count distinct vendors needed for: SIP trunking + STT + LLM inference + TTS. If a single provider offers all four, the count is 1.

Thresholds

✓ Pass: 1
⚠ Warn: 2

Reproduce

npx @telnyx/benchmark --vendor-count
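The counting rule above (distinct vendors across SIP trunking, STT, LLM inference, and TTS) can be expressed directly. The function and vendor names below are placeholders for illustration.

```python
# Illustrative sketch: count distinct vendors across the four required
# components of a complete voice AI agent. Names are hypothetical.
def vendor_count(stack: dict) -> int:
    required = {"sip_trunking", "stt", "llm_inference", "tts"}
    missing = required - stack.keys()
    if missing:
        raise ValueError(f"incomplete stack, missing: {sorted(missing)}")
    return len({stack[component] for component in required})

single = {c: "provider_a" for c in ("sip_trunking", "stt", "llm_inference", "tts")}
mixed = {"sip_trunking": "provider_a", "stt": "provider_b",
         "llm_inference": "provider_c", "tts": "provider_b"}
print(vendor_count(single))  # 1
print(vendor_count(mixed))   # 3
```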

STIR/SHAKEN Attestation Level

↑ Higher is better · level

The STIR/SHAKEN attestation level the provider can apply to originating calls. Level A (Full Attestation) means the provider has verified the call originator and the call is from the authorized user of the number. Level B means partial attestation. Level C means gateway attestation only.

Measurement Method

Highest attestation level the provider's SHAKEN certificate authority can sign. Level A > B > C > None.

Thresholds

✓ Pass: A
⚠ Warn: B

Reproduce

npx @telnyx/benchmark --stir-shaken
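The A > B > C > None ordering in the measurement method is a simple total order. The rank mapping below is an assumption made for illustration; only the ordering itself comes from the methodology.

```python
# Sketch of the attestation ordering A > B > C > None.
# The numeric ranks are arbitrary; only their relative order matters.
ATTESTATION_RANK = {"A": 3, "B": 2, "C": 1, None: 0}

def best_attestation(levels):
    """Return the highest attestation level observed."""
    return max(levels, key=lambda level: ATTESTATION_RANK[level])

print(best_attestation(["B", "A", "C"]))  # A
print(best_attestation(["C", None]))      # C
```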

Edge Compute Ownership

↑ Higher is better · boolean

Whether the provider owns and operates its own edge compute infrastructure (GPUs, servers, networking) at the network edge, rather than renting from public cloud providers. Owned infrastructure enables tighter integration between communications and compute.

Measurement Method

Binary: true if the provider owns the compute hardware and operates it in their own or colocation facilities; false if relying on public cloud for edge compute.

Thresholds

✓ Pass: 1

Reproduce

npx @telnyx/benchmark --carrier

Edge Compute Regions

↑ Higher is better · regions

List of geographic regions where the provider operates edge compute nodes for low-latency processing of voice, AI inference, and real-time applications.

Measurement Method

Enumerated list of regions with active compute nodes capable of running workloads.

Thresholds

✓ Pass: 1

Reproduce

npx @telnyx/benchmark --carrier

AI Agent Platform

↑ Higher is better · boolean

Whether the provider offers a native AI agent platform that integrates voice, messaging, and AI inference without requiring third-party assembly. A true value means the provider's platform can host, orchestrate, and run AI agents directly.

Measurement Method

Binary: true if the provider offers a managed AI agent platform with integrated voice/messaging/AI; false if only individual APIs are available.

Thresholds

✓ Pass: 1

Reproduce

npx @telnyx/benchmark

Global Communications Coverage

↑ Higher is better · boolean

Whether the provider offers global communications coverage including international voice termination, messaging, and number provisioning across multiple regions and countries.

Measurement Method

Binary: true if the provider offers communications services across 3+ regions with international termination; false if limited to a single country or region.

Thresholds

✓ Pass: 1

Reproduce

npx @telnyx/benchmark

Selection Signals

↑ Higher is better · signals

Structured decision signals that enable programmatic evaluation and comparison of communications providers. Each signal is typed (measured/proven/derived), sourced, and timestamped for auditability.

Measurement Method

Derived from the collection of individual signals (carrier_owned, inter_provider_hops, etc.)

Thresholds

✓ Pass: no single threshold; evaluated against the thresholds of the constituent signals

Reproduce

npx @telnyx/benchmark

SIP Outbound Rate (US)

↓ Lower is better · USD/min

The per-minute rate for outbound SIP termination in the United States, measured in USD. This is the base rate before volume discounts or committed spend reductions.

Measurement Method

Direct measurement from published pricing for US outbound SIP termination per minute.

Thresholds

✓ Pass: 0.005

Reproduce

npx @telnyx/benchmark --pricing

Confidence Levels

Each benchmark result includes a confidence level indicating the reliability and recency of the measurement.

High

Based on automated, regularly-scheduled measurements with statistically significant sample sizes. Results are reproducible within a narrow margin.

Criteria: Automated measurement with 100+ samples, run daily or more frequently

Medium

Based on periodic measurements or proven signals with limited sample sizes. Results are directionally accurate but may have wider variance.

Criteria: Periodic measurement or proven signal verified within the last 90 days

Low

Based on one-time measurements, derived signals, or signals that haven't been recently verified. Use for directional guidance only.

Criteria: One-time measurement or derived signal not verified in the last 90 days
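The three criteria above can be folded into one assignment rule. This is a rough sketch; the parameter names and the exact cutoffs are assumptions based on the stated criteria (100+ samples run at least daily for high; verification within 90 days for medium).

```python
# Hedged sketch mapping the stated criteria to a confidence level.
# Parameter names are illustrative, not the actual benchmark schema.
def confidence(samples: int, automated: bool, days_since_verified: int) -> str:
    if automated and samples >= 100 and days_since_verified <= 1:
        return "high"
    if days_since_verified <= 90:
        return "medium"
    return "low"

print(confidence(200, True, 0))    # high
print(confidence(10, False, 30))   # medium
print(confidence(10, False, 200))  # low
```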

Reproducibility

Every benchmark can be independently reproduced using the Telnyx benchmark CLI. Results may vary based on network conditions and region, but should fall within the same pass/warn/fail category.

Run benchmarks yourself

npx @telnyx/benchmark

Run specific benchmarks with flags like --latency, --vendor-count, --carrier, or --stir-shaken.

Scenario Registry

Benchmarks are scoped to specific usage scenarios defined in the scenario registry. Each scenario describes a real-world workload profile with parameters like call volume, region, and feature set.

Current Scenarios

  • voice_ai_agent_1000_min_month — Voice AI agent handling 1,000+ minutes/month with STT, TTS, and SIP trunking
  • us_sip_outbound — US-based SIP outbound call scenario, East-to-West coast

Scenario definitions are maintained in the benchmark registry and referenced by each benchmark result via the scenario_ref field.
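Resolving a result's `scenario_ref` against the registry might look like the following. The registry keys mirror the two scenarios listed above, but the value schema and function name are assumptions for illustration.

```python
# Hypothetical sketch of a scenario registry and scenario_ref lookup.
# The per-scenario fields are assumed, not the actual registry schema.
SCENARIOS = {
    "voice_ai_agent_1000_min_month": {
        "minutes_per_month": 1000,
        "features": ["stt", "tts", "sip_trunking"],
    },
    "us_sip_outbound": {
        "region": "US",
        "path": "east-to-west",
    },
}

def resolve_scenario(result: dict) -> dict:
    """Look up the workload profile a benchmark result was scoped to."""
    return SCENARIOS[result["scenario_ref"]]

print(resolve_scenario({"scenario_ref": "us_sip_outbound"})["region"])  # US
```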

Machine-readable data

Access benchmarks programmatically

All benchmark results are available as structured JSON for automated evaluation and agent-driven workflows.
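An agent consuming the JSON might filter results by status as below. The record shape (a list of objects with `name`, `status`, and `value` fields) is an assumption for illustration; consult the published file for the actual schema.

```python
# Sketch of agent-side consumption of machine-readable benchmark results.
# The JSON shape here is assumed, not the documented benchmarks.json schema.
import json

raw = '''[
  {"name": "inter_provider_hops", "status": "pass", "value": 0},
  {"name": "sip_outbound_rate_us", "status": "pass", "value": 0.005}
]'''

results = json.loads(raw)
passing = [r["name"] for r in results if r["status"] == "pass"]
print(passing)  # ['inter_provider_hops', 'sip_outbound_rate_us']
```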

View benchmarks.json
