
AI Voice Fraud and the Telecom Trust Layer

AI voice fraud losses are surging. The only layer where identity is actually enforceable is the carrier layer. Here is why Telnyx owns it.

By Deniz Yakışıklı

AI voice cloning broke phone-call trust. The fix lives at the carrier layer.


The internet was never designed to verify identity. Anyone can claim any identity on any digital channel. Email, messaging apps, social media: they all inherit the same fundamental limitation. Identity is asserted, not enforced.

The telephone network was built differently. Calls pass through licensed carriers, regulated routing infrastructure, and identity attestation frameworks like STIR/SHAKEN. 5.4 billion unique mobile subscribers rely on this network globally (GSMA, 2025). Identity isn't a feature bolted on top. It's a structural property of how the network operates.

That distinction didn't matter much until recently. But AI voice cloning has changed the stakes entirely. When a cloned voice can authorize a wire transfer, approve a credential reset, or impersonate a CEO on a live call, the question of who is really on the other end becomes a question with a dollar sign attached.

Voice deepfakes rose 680% year over year. In 2024, a Hong Kong corporation fell victim to a $25 million deepfake scam. Criminals cloned the CFO's voice and paired it with a fabricated video conference. Every person on the call except the victim was a deepfake. The CEO of WPP was targeted the same way. Scammers cloned his voice and deployed it on a fake Microsoft Teams call. That attack was detected, but only because someone thought to verify through an alternate channel. Relying on someone happening to double-check is not a security strategy.

The regulatory clock


Regulators are catching up to the threat. The U.S. AI Fraud Accountability Act (S.3982) would establish a federal framework for digital impersonation fraud. The EU AI Act already classifies impersonation systems as high-risk, and the EU Data Act adds transparency requirements around AI-generated content. Similar legislation is moving in the UK, Australia, and across Asia-Pacific. Organizations will need a documented way to identify and flag synthetic voice, exactly the kind of audit trail real-time detection produces.

The old verification methods were built for a world without AI voice cloning


The methods most organizations still rely on were not designed for a world where a voice can be cloned in seconds. They verify that someone has access to a channel, not that the person on the call is who they claim to be.

| Method | What it verifies | Why it fails now |
| --- | --- | --- |
| Security questions | Knowledge of personal data | Answers harvestable from social media |
| SMS passcodes | Access to a phone number | Vulnerable to SIM swapping |
| Callback procedures | Ownership of a number | Caller ID is spoofable |
| Voice matching | The voice sounds right | AI cloning defeats this directly |

Every one of these methods operates at the application layer, making assertions about identity that the network itself never validates. When the application is compromised, there's no deeper layer to catch the fraud.

Why no one else can close the gap


The market has split the problem in two, and neither half can solve it alone.

| | CPaaS providers (Twilio) | Voice AI platforms (Vapi, Bland, Retell) | Carrier (Telnyx) |
| --- | --- | --- | --- |
| What they own | API layer, developer tools | Agent orchestration, model routing | Carrier network, switch, edge compute |
| STIR/SHAKEN | Inherited from upstream carriers | None | A-level, signed at the switch |
| Number reputation | Pass-through management | None | Direct control and policing |
| Deepfake detection | None | None | Native, sub-4s via Resemble Detect |
| What breaks at scale | Inherited attestation is a weaker signal | Agents land as "Spam Likely" | Nothing; all layers are the same system |

Twilio can pass A-level signals through; it can't generate them, because it doesn't operate the switch. In a threat landscape measured in seconds, that gap in signal strength is the difference between a verified call and a plausible-looking one.

Voice AI platforms make it trivial to build a conversational agent. Model orchestration, prompt design, conversation flow: all handled. What they don't handle is telephony. The gap stays invisible until the agent goes to production and the calls start landing as "Spam Likely." Pickup rates collapse, and the team discovers that fixing reputation requires access to a layer it never had.

Both gaps point to the same missing piece. Trust as a feature is bolted on. Trust as a property of the network is built in. You can integrate a fraud score; you can't integrate a switch.

What the carrier layer actually verifies


Owning the carrier infrastructure unlocks three verification points nothing above it can reach.

At origination. Before the call connects, the network confirms it came from a legitimate device on a legitimate network. A-level STIR/SHAKEN attestation, signed at the switch, is the highest identity tier in the US telephone network, a cryptographic statement that the calling party is authorized to use the number. It can't be faked by someone who doesn't control the switch. Telnyx is the licensed carrier doing the signing, not a CPaaS layer relaying somebody else's signature.
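Concretely, the attestation level travels in the SIP Identity header as a signed PASSporT token (RFC 8588), whose claim set carries an `attest` value of "A", "B", or "C". The sketch below decodes that claim from a raw Identity header; it deliberately skips the ES256 signature check against the signing carrier's certificate, which any real verifier must perform:

```python
import base64
import json

def attestation_level(identity_header: str) -> str:
    """Read the STIR/SHAKEN attestation level ("A", "B", or "C")
    from a SIP Identity header carrying a PASSporT (RFC 8588).

    NOTE: this sketch only decodes the claim set. A real verifier
    must also validate the ES256 signature against the certificate
    referenced in the header's info parameter.
    """
    # Header shape: "<jwt>;info=<cert-url>;alg=ES256;ppt=shaken"
    jwt = identity_header.split(";")[0]
    payload_b64 = jwt.split(".")[1]
    # base64url payloads may arrive without padding; restore it
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["attest"]  # "A" = full attestation, signed at the switch
```

The point of the sketch is the asymmetry it exposes: anyone can decode the claim, but only the entity holding the carrier's signing key can produce a token whose signature verifies, which is why A-level attestation can't be faked from above the switch.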

During the call. Deepfake Detection with Resemble Detect runs natively on the voice pipeline and analyzes audio frame by frame. It returns a verdict in under four seconds, fast enough to interrupt a fraudulent transaction before it completes, not just record one for the post-mortem. Available through the Voice API and TeXML.
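The value of a sub-four-second verdict is that it arrives while the call can still be torn down. The decision logic might look like the following sketch; the payload field names (`result`, `confidence`, `latency_s`) are illustrative assumptions, not the documented webhook schema:

```python
# Hypothetical verdict payload shape; field names are assumptions
# for illustration, not the documented detection webhook schema.
def should_interrupt(verdict: dict,
                     confidence_threshold: float = 0.85,
                     max_latency_s: float = 4.0) -> bool:
    """Decide whether to tear down a live call on an in-call
    deepfake-detection verdict. Interrupt only when the detector
    is confident the audio is synthetic AND the verdict arrived
    fast enough to stop the transaction rather than document it."""
    is_synthetic = verdict.get("result") == "fake"
    confident = verdict.get("confidence", 0.0) >= confidence_threshold
    timely = verdict.get("latency_s", float("inf")) <= max_latency_s
    return is_synthetic and confident and timely
```

In production the True branch would issue a hangup or transfer command through the call-control API; keeping the decision a pure function makes the threshold policy easy to audit and test.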

Across the reputation surface. Number reputation becomes a managed surface instead of a black box. A dashboard and API show how carriers rate your numbers in real time, so you can act before pickup rates drop. Inbound spam is filtered at the network layer, before it ever reaches the application. Branded Calling displays caller identity directly on the recipient's screen: name, brand, purpose. And because Telnyx actively polices its own network, clean reputation is enforced rather than hoped for.
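Acting "before pickup rates drop" means treating reputation data as an alerting input. A minimal sketch of that triage step, assuming a per-number, per-carrier rating map (the data shape is an assumption for illustration, not the actual reputation API response):

```python
# The rating map shape is an assumption for illustration,
# not the actual reputation API response format.
def numbers_needing_attention(
    ratings: dict[str, dict[str, str]],
    flagged: frozenset = frozenset({"spam_likely", "scam_likely"}),
) -> list[str]:
    """Return the numbers any downstream carrier currently labels
    as spam, ordered worst-first (most carriers flagging) so
    remediation starts where pickup rates are hurt the most."""
    at_risk = {
        number: sum(1 for label in by_carrier.values() if label in flagged)
        for number, by_carrier in ratings.items()
    }
    return sorted((n for n, hits in at_risk.items() if hits),
                  key=lambda n: -at_risk[n])
```

Fed from the reputation dashboard's API on a schedule, a function like this turns a black-box signal into a routine ops alert.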

These capabilities only work together because they share the same infrastructure. You can't sign at A-level if you aren't the carrier. You can't run sub-four-second detection from behind three vendor hops. You can't manage reputation on a network you don't operate.

The phone call still matters. The trust model has to be rebuilt.


Voice remains the most immediate, intimate, and trusted channel in business. AI exploits that trust precisely because humans can no longer hear the difference. Rebuilding it means moving verification below the level of perception, to the layer where identity is structural, not asserted.

That layer is the carrier. Telnyx is the carrier.

Contact our team of experts to learn how Telnyx Identity Verification can protect your organization from AI voice fraud.

Deniz Yakışıklı

Sr. Product Marketing Manager


