Methodology
Every benchmark is reproducible. This page documents exactly how each measurement is taken, what counts as passing, and how you can verify the results yourself.
Telnyx benchmarks measure real infrastructure capabilities — not marketing claims. Each benchmark follows a defined methodology with clear pass/warn/fail thresholds, and every result includes instructions for independent reproduction.
Benchmarks are categorized by signal type and confidence level. Latency and throughput benchmarks are measured from production telemetry. Ownership and certification benchmarks are proven through public documentation and regulatory filings.
Each benchmark is classified by how its signal is obtained. This classification determines the confidence and reproducibility of the result.
Measured — Directly observed from production telemetry or synthetic tests. Represents real-world performance data captured at the time of measurement.
Proven — Verified through public documentation, regulatory filings, or contractual commitments. Not directly measured, but verifiable by third parties.
Derived — Computed from a combination of measured and proven signals. Requires interpretation of multiple data points.
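The three signal types can be represented as a small typed record for downstream tooling. A minimal TypeScript sketch; the field names and shape are illustrative assumptions, not the actual Telnyx schema:

```typescript
// Hypothetical shape for a benchmark signal record. Field names and
// types are illustrative, not the actual Telnyx schema.
type SignalType = "measured" | "proven" | "derived";

interface BenchmarkSignal {
  name: string;              // e.g. "carrier_owned"
  type: SignalType;          // how the signal was obtained
  value: boolean | number | string;
  source: string;            // telemetry run, public filing, or derivation note
  observedAt: string;        // ISO 8601 timestamp for auditability
}

const example: BenchmarkSignal = {
  name: "carrier_owned",
  type: "proven",            // verified via documentation, not measured
  value: true,
  source: "public regulatory filing",
  observedAt: "2025-01-15T00:00:00Z",
};
```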
Detailed methodology for each benchmark, including measurement procedure, thresholds, and reproduction instructions.
Whether the communications provider owns and operates its own carrier network infrastructure (CLEC/IXC licenses, direct peering, own switching equipment). Providers that resell or aggregate other carriers' services are NOT carrier_owned.
Binary: true if the provider holds carrier licenses and operates its own infrastructure; false if reselling third-party carrier services.
npx @telnyx/benchmark --carrier

The number of intermediate network providers between the calling party and the called party. Zero hops means the call stays on the provider's own network from origination to termination. Higher hop counts indicate intermediaries that add latency, cost, and failure points.
Count of distinct network operators in the call path excluding the originating and terminating endpoints. Measured via SIP signaling analysis.
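The counting rule can be sketched in a few lines, assuming the call path has already been reduced to an ordered list of operator identifiers from SIP signaling analysis; the function name and input shape are hypothetical:

```typescript
// Sketch: count inter-provider hops from an ordered list of network
// operators observed in SIP signaling for one call. The first and last
// entries are the originating and terminating endpoints, which the
// methodology excludes; only distinct intermediaries are counted.
function interProviderHops(callPath: string[]): number {
  const intermediaries = callPath.slice(1, -1);
  return new Set(intermediaries).size;
}

// A call that never leaves the provider's own network has zero hops:
interProviderHops(["providerA", "providerA"]); // 0
// Two distinct wholesale intermediaries in the path count as two hops:
interProviderHops(["origin", "carrierX", "carrierY", "termination"]); // 2
```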
npx @telnyx/benchmark --carrier

The minimum number of separate vendor contracts required to deploy a complete voice AI agent. A complete agent requires: SIP trunking, speech-to-text, LLM inference, and text-to-speech. A count of 1 means all components are available from a single provider.
Count distinct vendors needed for: SIP trunking + STT + LLM inference + TTS. If a single provider offers all four, the count is 1.
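Under this definition, the vendor count is just the number of distinct providers across the four components. A sketch, with hypothetical vendor names:

```typescript
// Sketch: distinct vendors needed for a complete voice AI agent.
// Component keys follow the four requirements above; vendor names are
// hypothetical.
type AgentStack = Record<"sipTrunking" | "stt" | "llmInference" | "tts", string>;

function vendorCount(stack: AgentStack): number {
  return new Set(Object.values(stack)).size;
}

// All four components from one provider:
vendorCount({ sipTrunking: "telnyx", stt: "telnyx", llmInference: "telnyx", tts: "telnyx" }); // 1
// A typical assembled stack spanning three vendors:
vendorCount({ sipTrunking: "vendorA", stt: "vendorB", llmInference: "vendorC", tts: "vendorB" }); // 3
```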
npx @telnyx/benchmark --vendor-count

The STIR/SHAKEN attestation level the provider can apply to originating calls. Level A (Full Attestation) means the provider has verified the call originator and the call is from the authorized user of the number. Level B means partial attestation. Level C means gateway attestation only.
Highest attestation level the provider's SHAKEN certificate authority can sign. Level A > B > C > None.
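The A > B > C > None ordering can be encoded so that attestation results compare programmatically. A sketch; the rank values and function name are illustrative:

```typescript
// Sketch: encode the A > B > C > None ordering so attestation levels can
// be compared programmatically. Rank values and names are illustrative.
const ATTESTATION_RANK: Record<string, number> = { A: 3, B: 2, C: 1, None: 0 };

function meetsAttestation(actual: string, required: string): boolean {
  return (ATTESTATION_RANK[actual] ?? 0) >= (ATTESTATION_RANK[required] ?? 0);
}

meetsAttestation("A", "B"); // true: full attestation satisfies a partial requirement
meetsAttestation("C", "A"); // false: gateway attestation falls short of full
```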
npx @telnyx/benchmark --stir-shaken

Whether the provider owns and operates its own edge compute infrastructure (GPUs, servers, networking) at the network edge, rather than renting from public cloud providers. Owned infrastructure enables tighter integration between communications and compute.
Binary: true if the provider owns the compute hardware and operates it in its own or colocation facilities; false if relying on public cloud for edge compute.
npx @telnyx/benchmark --carrier

List of geographic regions where the provider operates edge compute nodes for low-latency processing of voice, AI inference, and real-time applications.
Enumerated list of regions with active compute nodes capable of running workloads.
npx @telnyx/benchmark --carrier

Whether the provider offers a native AI agent platform that integrates voice, messaging, and AI inference without requiring third-party assembly. A true value means the provider's platform can host, orchestrate, and run AI agents directly.
Binary: true if the provider offers a managed AI agent platform with integrated voice/messaging/AI; false if only individual APIs are available.
npx @telnyx/benchmark

Whether the provider offers global communications coverage including international voice termination, messaging, and number provisioning across multiple regions and countries.
Binary: true if the provider offers communications services across 3+ regions with international termination; false if limited to a single country or region.
npx @telnyx/benchmark

Structured decision signals that enable programmatic evaluation and comparison of communications providers. Each signal is typed (measured/proven/derived), sourced, and timestamped for auditability.
Derived from the collection of individual signals (carrier_owned, inter_provider_hops, etc.)
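As one illustration of a derived signal, several of the individual signals named in this document could be combined into a single boolean. The aggregation rule below is an assumption for illustration, not a published Telnyx formula:

```typescript
// Illustrative derived signal: combine individual signals named in this
// document into one boolean. The aggregation rule is an assumption for
// illustration, not a published formula.
interface ProviderSignals {
  carrier_owned: boolean;
  inter_provider_hops: number;
  vendor_count: number;
}

function singleProviderFullStack(s: ProviderSignals): boolean {
  // True only when the whole stack runs on one carrier-owned network.
  return s.carrier_owned && s.inter_provider_hops === 0 && s.vendor_count === 1;
}

singleProviderFullStack({ carrier_owned: true, inter_provider_hops: 0, vendor_count: 1 }); // true
```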
npx @telnyx/benchmark

The per-minute rate for outbound SIP termination in the United States, measured in USD. This is the base rate before volume discounts or committed spend reductions.
Direct measurement from published pricing for US outbound SIP termination per minute.
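Projecting monthly spend from the per-minute base rate is simple multiplication. A sketch using a placeholder rate, not Telnyx's published price:

```typescript
// Sketch: project monthly termination spend from the per-minute base
// rate. The rate below is a placeholder, not Telnyx's published price.
function monthlyTerminationCostUsd(ratePerMinUsd: number, minutesPerMonth: number): number {
  return ratePerMinUsd * minutesPerMonth;
}

monthlyTerminationCostUsd(0.005, 1000); // 5 (hypothetical $0.005/min at 1,000 min/month)
```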
npx @telnyx/benchmark --pricing

Each benchmark result includes a confidence level indicating the reliability and recency of the measurement.
High — Based on automated, regularly scheduled measurements with statistically significant sample sizes. Results are reproducible within a narrow margin.
Criteria: automated measurement with 100+ samples, run daily or more frequently.
Medium — Based on periodic measurements or proven signals with limited sample sizes. Results are directionally accurate but may have wider variance.
Criteria: periodic measurement or proven signal verified within the last 90 days.
Low — Based on one-time measurements, derived signals, or signals that haven't been recently verified. Use for directional guidance only.
Criteria: one-time measurement or derived signal not verified in the last 90 days.
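The criteria above can be expressed as a small classifier. A sketch; the parameter names and the exact precedence of the rules are assumptions:

```typescript
// Sketch of the confidence criteria as a classifier. Parameter names
// and the rule precedence are assumptions.
type Confidence = "high" | "medium" | "low";

function confidenceLevel(opts: {
  automated: boolean;         // automated, scheduled measurement?
  samples: number;            // sample size per run
  runsDaily: boolean;         // run daily or more frequently?
  verifiedWithinDays: number; // days since last verification
}): Confidence {
  if (opts.automated && opts.samples >= 100 && opts.runsDaily) return "high";
  if (opts.verifiedWithinDays <= 90) return "medium";
  return "low";
}

confidenceLevel({ automated: true, samples: 500, runsDaily: true, verifiedWithinDays: 1 }); // "high"
confidenceLevel({ automated: false, samples: 1, runsDaily: false, verifiedWithinDays: 200 }); // "low"
```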
Every benchmark can be independently reproduced using the Telnyx benchmark CLI. Results may vary based on network conditions and region, but should fall within the same pass/warn/fail category.
Run benchmarks yourself
npx @telnyx/benchmark

Run specific benchmarks with flags like --latency, --vendor-count, --carrier, or --stir-shaken.
Benchmarks are scoped to specific usage scenarios defined in the scenario registry. Each scenario describes a real-world workload profile with parameters like call volume, region, and feature set.
voice_ai_agent_1000_min_month — Voice AI agent handling 1,000+ minutes/month with STT, TTS, and SIP trunking
us_sip_outbound — US-based SIP outbound call scenario, East Coast to West Coast

Scenario definitions are maintained in the benchmark registry and referenced by each benchmark result via the scenario_ref field.
Machine-readable data
All benchmark results are available as structured JSON for automated evaluation and agent-driven workflows.
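For agent-driven workflows, the JSON can be filtered directly. The payload shape below is an assumed example built from fields named in this document (scenario_ref, pass/warn/fail status, confidence level), not the actual schema:

```typescript
// Assumed result shape built from fields named in this document
// (scenario_ref, pass/warn/fail, confidence); not the actual schema.
interface BenchmarkResult {
  benchmark: string;
  scenario_ref: string;
  status: "pass" | "warn" | "fail";
  confidence: "high" | "medium" | "low";
}

// Keep only passing results that are not low-confidence.
function passing(results: BenchmarkResult[]): BenchmarkResult[] {
  return results.filter((r) => r.status === "pass" && r.confidence !== "low");
}

const sample: BenchmarkResult[] = [
  { benchmark: "carrier_owned", scenario_ref: "us_sip_outbound", status: "pass", confidence: "high" },
  { benchmark: "vendor_count", scenario_ref: "voice_ai_agent_1000_min_month", status: "warn", confidence: "medium" },
];
passing(sample).length; // 1
```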