Telepath
You think you need to build it yourself. You've got a capable team. You've shipped distributed systems before. How hard can a SIP bridge be?
Six months later, you're debugging why your AI agent feels sluggish. The user asks a question, there's a half-second pause that shouldn't exist, and the whole conversation dies. You're chasing phantom silence: the user stops talking, but your VAD doesn't trigger because of background fan noise. You're fighting jitter buffers that carriers forced on you because they assume you're a slow softphone, not a high-performance AI system.
The latency tax is real. And it's expensive.
Standard SIP implementations were built for humans, not machines. They prioritize audio smoothness over speed: 200ms of jitter buffer here, 100ms of Nagle's algorithm on the TCP signaling path there. By the time the audio reaches your LLM, the conversation feels processed, not present.
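Some of that tax is literally one socket option. If any part of your signaling path runs over TCP, Nagle's algorithm batches small writes and delays small SIP messages. A minimal Go sketch of turning it off (the endpoint address is illustrative; note that Go already disables Nagle by default, which is exactly the kind of per-runtime detail you end up auditing):

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Hypothetical SIP-over-TCP signaling endpoint; address is illustrative.
	raddr, err := net.ResolveTCPAddr("tcp", "sip.example.com:5060")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.DialTCP("tcp", nil, raddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Nagle's algorithm batches small writes, which delays small SIP
	// messages. Go turns it off by default; setting it explicitly
	// documents the intent, and other stacks often leave it on.
	if err := conn.SetNoDelay(true); err != nil {
		log.Fatal(err)
	}
}
```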
I've spent fifteen years building network infrastructure. What I learned is this: milliseconds matter. A 300ms delay doesn't sound like much until you're on a call with an AI that sounds like it's thinking through mud. The user starts talking slower. Pauses get longer. The whole interaction breaks down.
The problem isn't that SIP is broken. The problem is that SIP wasn't designed for this. It was designed for PBX systems and call centers where 500ms of latency was invisible to human ears. Your AI agent doesn't have that luxury.
Let me walk through what actually happens when you build this in-house:
Your carrier sends you G.711 (an 8kHz codec from 1972). You need to transcode it to 16-bit linear PCM for your LLM. That's another hop, another buffer, more latency.
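For a sense of what that hop involves: the standard CCITT expansion turns each G.711 μ-law byte into a linear sample with a few bit operations (this is the textbook algorithm, not anything Telepath-specific), and then you still have to resample 8kHz up to whatever your model expects:

```go
package main

import "fmt"

// ulawToPCM expands one G.711 mu-law byte into a 16-bit linear PCM
// sample, following the standard CCITT expansion.
func ulawToPCM(u byte) int16 {
	u = ^u                           // mu-law bytes are bit-inverted on the wire
	t := (int16(u&0x0F) << 3) + 0x84 // rebuild the mantissa, add the bias
	t <<= (u >> 4) & 0x07            // scale by the exponent segment
	if u&0x80 != 0 {
		return 0x84 - t // sign bit set: negative sample
	}
	return t - 0x84
}

func main() {
	// 0xFF encodes digital silence; 0x80 is the positive peak.
	fmt.Println(ulawToPCM(0xFF), ulawToPCM(0x80)) // 0 32124
}
```

The bit-twiddling itself is nearly free; the latency comes from the buffering every stage like this adds around itself.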
Carriers use aggressive silence suppression. They strip out "silence" to save bandwidth, but what they call silence isn't actually silence: it's the gap between words, exactly the signal your VAD needs to decide whether the user is done speaking. The suppression leaves digital artifacts that confuse your VAD, so you add your own silence detection on top of theirs. More buffering. More latency. Telepath's Clean Signal Protocol fixes this by adaptively handling breaks in the user's speech, working in tandem with the AI voice agent's VAD instead of against it.
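To make the failure concrete, here is a minimal sketch of the kind of repair you otherwise end up writing yourself (an illustration, not Telepath's Clean Signal Protocol): when the RTP timestamp jumps more than one packet ahead because the carrier suppressed a "silent" stretch, synthesize zeroed PCM so the downstream VAD sees continuous quiet audio instead of a hard discontinuity. It assumes 8kHz G.711 in 20ms packets (160 samples):

```go
package media

const samplesPerFrame = 160 // 20 ms of 8 kHz G.711

// Frame is one decoded RTP packet: its RTP timestamp plus linear PCM.
type Frame struct {
	Timestamp uint32
	PCM       []int16
}

// FillGaps forwards frames, inserting zeroed PCM wherever the RTP
// timestamp jumps more than one frame ahead (i.e. the carrier suppressed
// a stretch of "silence"), so the VAD sees continuous audio. It assumes
// in-order frames; a real pipeline reorders via a jitter buffer first.
func FillGaps(in <-chan Frame, out chan<- []int16) {
	var next uint32
	started := false
	for f := range in {
		if started {
			// uint32 subtraction handles RTP timestamp wraparound.
			for gap := f.Timestamp - next; gap >= samplesPerFrame; gap -= samplesPerFrame {
				out <- make([]int16, samplesPerFrame) // synthesized silence
			}
		}
		started = true
		next = f.Timestamp + uint32(len(f.PCM))
		out <- f.PCM
	}
	close(out)
}
```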
When a call goes sideways, is it your code? The carrier's network? The AI provider's API? You have no way to know. You're flying blind, instrumenting everything from scratch, spending weeks chasing problems that were solved by someone else years ago.
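"Instrumenting everything from scratch" means things like this: before you can even blame the carrier, you have to compute per-stream interarrival jitter yourself. A minimal sketch of the estimator from RFC 3550 section 6.4.1 (the type and method names here are mine):

```go
package media

// JitterEstimator tracks RTP interarrival jitter as defined in RFC 3550,
// section 6.4.1. All values are in RTP timestamp units (1/8000 s for G.711).
type JitterEstimator struct {
	jitter      float64
	lastTransit float64
	primed      bool
}

// Update takes one packet's arrival time (already converted to timestamp
// units) and its RTP timestamp, and returns the smoothed jitter estimate.
func (j *JitterEstimator) Update(arrival, rtpTS uint32) float64 {
	transit := float64(arrival) - float64(rtpTS)
	if j.primed {
		d := transit - j.lastTransit
		if d < 0 {
			d = -d
		}
		// RFC 3550 smoothing: move 1/16 of the way toward each new sample.
		j.jitter += (d - j.jitter) / 16
	}
	j.lastTransit = transit
	j.primed = true
	return j.jitter
}
```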
You got it working for 5 concurrent calls. Now you need 500. Suddenly your jitter buffers are fighting each other. Your packet handlers are context-switching. Your infrastructure costs explode. You hire more engineers to optimize what should be commoditized.
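The context-switching problem has a well-known shape. One common pattern, sketched here under the simplifying assumption that media is funneled to a single socket, is a single read loop that demultiplexes RTP by SSRC into per-call channels, rather than one socket and one handler per call:

```go
package media

import (
	"encoding/binary"
	"net"
)

// DemuxLoop reads all RTP off one UDP socket and fans packets out to
// per-call channels keyed by SSRC, so 500 calls cost one read loop
// instead of 500 context-switching handlers. The calls map is assumed
// read-only here; a real bridge would guard it with a lock.
func DemuxLoop(conn *net.UDPConn, calls map[uint32]chan<- []byte) error {
	buf := make([]byte, 1500) // one MTU's worth
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			return err
		}
		if n < 12 {
			continue // too short to carry an RTP header
		}
		ssrc := binary.BigEndian.Uint32(buf[8:12]) // SSRC lives at bytes 8-11
		ch, ok := calls[ssrc]
		if !ok {
			continue // unknown stream; a real bridge would log this
		}
		pkt := make([]byte, n)
		copy(pkt, buf[:n])
		select {
		case ch <- pkt:
		default: // never block the read loop on a slow consumer
		}
	}
}
```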
Let me be direct: building this yourself costs more than you think.
Not just in engineering time (though that's months you'll never get back). But in opportunity cost. Your team is debugging RTP packet handlers when they should be building your product. Your CTO is tuning UDP performance when they should be thinking about your roadmap. Your QA is chasing "calls sound choppy sometimes" instead of testing your actual value proposition.
And the worst part? Even after six months of work, you'll have something that handles 95% of cases well. The last 5% (the edge cases where packet loss spikes, where international routes have weird jitter patterns, where carriers push non-standard SIP headers) will haunt you forever.
We built Telepath from the ground up for speed and scalability, explicitly working around the shortcomings of SIP/RTP and adapting this tried-and-true technology to interface with modern AI voice agents. We are not a traditional SIP gateway: we stripped out everything that currently causes problems for AI voice agents on every other carrier on earth. We wrote our own RTP packet handlers that implement clean signal protocols, telling the AI voice agent precisely when a user has stopped speaking. We handle G.711, G.722, and Opus natively. We measure every millisecond.
The result is brutal simplicity: less than 300ms end-to-end latency. Your AI sounds present, not processed.
But here's the thing that matters more than the benchmark: you don't build it. You integrate it. You point your carrier at our SIP URI, paste your OpenAI API key, and you're done. One afternoon instead of six months.
Ask yourself:
Is telephony infrastructure the product your customers are paying you for?
Has your team shipped and operated carrier-grade RTP before?
Can your roadmap absorb six months of latency tuning before the first reliable call?
If you answered "no" to any of those, you're building a moat around the wrong thing.
Latency is table stakes for voice AI now. It's not a feature. It's a requirement. And building it yourself is a trap.
Don't spend six months fighting jitter buffers. Let us handle the infrastructure layer.