Telnyx

AIDA: A pragmatic look at how we use AI Agents internally

Building an AI assistant that lives in Slack, remembers context with Mem0, and scales through multi-agent architecture.

By Sonam Gupta, PhD

As AI systems become more capable, many teams are discovering that raw model intelligence is no longer the limiting factor. The harder problem is operational: how do you embed AI into real workflows in a way that saves time, builds trust, and scales with the organization?

Internally, we ran into this challenge across engineering, support, and operations. Teams were spending significant time switching between Slack, ticketing systems, dashboards, and internal tools just to assemble context. We didn't need another general-purpose chatbot. We needed an assistant that understood our systems, worked where people already collaborated, and respected human ownership of decisions. That's how AIDA, our AI Digital Assistant, came to life.

Designing for daily work, not demos

From the outset, AIDA was designed to live inside Slack. This wasn't about convenience - it was about alignment. Slack is where incidents are discussed, tickets are referenced, and decisions are made in real time. Any AI system that lived outside that flow would inevitably become secondary.

Equally important was defining what AIDA should not do. We intentionally avoided autonomous actions in production systems. AIDA's role is to surface relevant information, provide operational context, and assist with diagnosis - not to act independently. Humans remain accountable for decisions and changes. This human-in-the-loop approach has been central to building trust and sustained adoption.
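
One way a read-only boundary like this can be enforced in practice is with an explicit allowlist of tools the assistant may invoke. The sketch below is illustrative - the tool names and dispatcher are hypothetical, not AIDA's actual implementation:

```python
# Hypothetical sketch: enforce a human-in-the-loop boundary by allowlisting
# read-only tools. Any request for a mutating action is rejected before it
# reaches a production system, so a human must perform it instead.

READ_ONLY_TOOLS = {
    "get_ticket_status",
    "summarize_incident",
    "fetch_service_health",
}

class ToolNotPermitted(Exception):
    """Raised when the assistant requests a tool outside the allowlist."""

def dispatch_tool(name: str, handlers: dict, **kwargs):
    """Run a tool only if it is on the read-only allowlist."""
    if name not in READ_ONLY_TOOLS:
        raise ToolNotPermitted(
            f"'{name}' is not read-only; escalate to a human operator."
        )
    return handlers[name](**kwargs)
```

The allowlist makes the safety property auditable: adding a new capability is a deliberate code change rather than a side effect of model behavior.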

The importance of memory in real workflows

Early versions of AIDA quickly exposed a familiar limitation: statelessness. Conversations would end, context would disappear, and users would need to restate the same background repeatedly. That friction made the assistant feel transactional rather than collaborative.

To address this, we integrated Mem0 as AIDA's memory layer. This gave AIDA the ability to retain meaningful context across interactions - ongoing investigations, recurring operational questions, and user-specific patterns. Over time, the assistant began to behave less like a query interface and more like a teammate that remembers how work unfolds. This persistent context became a prerequisite for expanding AIDA's usefulness beyond conversation alone.
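
To make the pattern concrete, here is a minimal sketch of what a persistent memory layer does for an assistant. This is not Mem0's actual API - it reduces the idea to per-user storage plus naive keyword retrieval, where a real memory layer would use embeddings and relevance ranking:

```python
# Illustrative sketch of the persistent-memory pattern (not Mem0's real API):
# store short context snippets per user, then retrieve the most relevant ones
# for a new conversation so users don't have to restate background.

from collections import defaultdict

class ContextMemory:
    def __init__(self):
        self._store = defaultdict(list)  # user_id -> list of snippets

    def add(self, user_id: str, snippet: str) -> None:
        """Persist one piece of context for a user."""
        self._store[user_id].append(snippet)

    def search(self, user_id: str, query: str, k: int = 3) -> list[str]:
        """Naive keyword-overlap ranking; real systems use embeddings."""
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(s.lower().split())), s)
            for s in self._store[user_id]
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [s for score, s in scored[:k] if score > 0]
```

Retrieved snippets get prepended to the model's context, which is what makes a stateless model feel like a teammate that remembers the investigation from yesterday.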

Extending context beyond chat

Once AIDA could retain conversational context, the next challenge was expanding the scope of information it could access. Real operational understanding requires visibility into systems of record, not just what's said in Slack. Support data was a clear example. Engineers frequently needed to understand customer impact or ticket history during incidents, but that information lived in Zendesk, often disconnected from the discussions where decisions were being made.

To close this gap, we added a read-only Zendesk integration to AIDA. The goal wasn't automation for its own sake, but faster, more accurate context-sharing. With this integration, teams can query ticket status, recent updates, user history, and organization-level views directly from Slack. Engineers gain situational awareness without leaving the conversation, and support teams no longer need to manually relay information across tools. The read-only constraint is intentional. AIDA informs and summarizes, but humans remain responsible for responses, updates, and resolutions. This preserves clarity around ownership while significantly reducing context-switching.
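
A read-only lookup like this is straightforward to sketch. The endpoint path below follows Zendesk's public REST API (`GET /api/v2/tickets/{id}.json`); the subdomain, token handling, and helper names are illustrative assumptions, not AIDA's code:

```python
# Hedged sketch of a read-only Zendesk ticket lookup. Only GET requests are
# issued from this module, mirroring the read-only constraint described above.

import json
import urllib.request

def ticket_url(subdomain: str, ticket_id: int) -> str:
    """Build the read-only Zendesk endpoint for a single ticket."""
    return f"https://{subdomain}.zendesk.com/api/v2/tickets/{ticket_id}.json"

def fetch_ticket(subdomain: str, ticket_id: int, token: str) -> dict:
    """GET a ticket's current state; no write methods exist here by design."""
    req = urllib.request.Request(
        ticket_url(subdomain, ticket_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["ticket"]
```

Keeping the integration GET-only at the code level, rather than relying on prompt instructions, is what makes the ownership boundary enforceable.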

From a single assistant to a system of agents

As AIDA's role expanded, we began to see demand beyond Slack. Other internal services needed access to the same diagnostic and operational intelligence, but duplicating logic or tightly coupling systems wasn't sustainable.

This led us to adopt the Agent-to-Agent (A2A) protocol, an open standard that enables AI agents to communicate with each other programmatically. Rather than treating AIDA as a single monolithic assistant, A2A allowed us to expose its capabilities as a small set of focused, domain-specific agents. In practice, this meant separating concerns. One agent focuses on diagnostics and system health analysis. Another encapsulates Jira-related interactions. A third handles predefined operational workflows. These agents can be invoked not only by humans through Slack, but also by other internal services and agents in a structured, predictable way.
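
The separation of concerns can be pictured as a registry that routes tasks to focused agents. The real A2A protocol exchanges structured tasks between agents over HTTP with published capability descriptions; the sketch below reduces that to in-process handlers, and the agent names are illustrative:

```python
# Simplified sketch of splitting one assistant into domain-specific agents
# behind a registry. Not the A2A wire format - just the routing idea.

from typing import Callable

class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        """Expose an agent's capability under a stable name."""
        self._agents[name] = handler

    def route(self, name: str, task: str) -> str:
        """Send a task to the named agent; callers never see its internals."""
        if name not in self._agents:
            raise KeyError(f"no agent registered for '{name}'")
        return self._agents[name](task)

registry = AgentRegistry()
registry.register("diagnostics", lambda task: f"diagnostics: analyzing {task}")
registry.register("jira", lambda task: f"jira: looking up {task}")
```

Because callers address agents by capability rather than implementation, both a Slack user and another internal service can invoke the same diagnostics agent without coupling to its internals.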

This shift didn't materially change how most users interact with AIDA day to day, but it fundamentally changed how the system scales. AIDA became less of a bot and more of an internal AI platform.

How this compares to how AI is commonly used today

While building AIDA, we also spoke with other teams about how they use AI agents internally. A clear pattern emerged. Most internal AI usage today is concentrated in creative or marketing workflows - content generation, analysis, and experimentation. In engineering teams, AI adoption is often limited to copilots embedded in IDEs or design tools, augmenting individual productivity without changing how systems operate.

Where agentic systems do exist internally, they tend to be early-stage and narrowly scoped. Automated testing is a common entry point, with agents helping define test cases or validate expected outcomes, often triggered manually and not yet integrated into CI/CD pipelines. In many organizations, agentic automation is being built for clients, while internal workflows remain largely untouched. This reflects real constraints rather than lack of interest. Meaningful internal adoption requires persistent context, safe system access, and clear boundaries around responsibility. Without those, AI remains an add-on rather than part of the operational fabric.

AIDA was built explicitly to bridge this gap. Memory via Mem0, careful integration with systems like Zendesk, and interoperability through A2A were deliberate choices aimed at moving from experimentation to reliable internal use.

What this has changed for us

AIDA hasn't eliminated work, but it has removed friction. Teams spend less time searching for information and more time acting on it. Context is easier to establish, decisions are better informed, and collaboration across functions happens faster. Adoption has been organic because AIDA fits existing workflows and reinforces human ownership - it's perceived as an amplifier rather than a replacement.

Our experience building AIDA reinforced a simple idea: effective internal AI is not about autonomy - it's about alignment. Alignment with how people already work, with where context lives, and with who ultimately owns decisions. As AI systems evolve, we believe the future lies in networks of specialized, memory-aware agents that collaborate through open standards while keeping humans firmly in the loop. AIDA is our step in that direction - and it's already reshaping how we work every day.


If this has sparked your enthusiasm about AIDA, try building some Telnyx voice AI agents and chat with AIDA in our external Slack channel.

Building internal AI agents? Swap ideas with us on our subreddit.
