How Ramp Solved the Fatal Flaw in AI Agent Strategy ft. Rahul Sengottuvelu
Why major tech companies create frustrating AI agent experiences, and how a fresh approach to building AI assistants avoids the same trap.
Post methodology: Dust custom assistant @AIAscentEssay using Claude 3.7 with system prompt: Please take the supplied transcript text and write a substack-style post about the key themes in the talk at AI Ascent 2025. Style notes: After the initial mention of a person, use their first name for all subsequent mentions; Do not use a first person POV in the posts. Light editing and formatting for the Substack platform.
Rahul Sengottuvelu, Head of Applied AI at Ramp, delivered a concise yet powerful talk on why major companies are struggling with AI agent implementation—and offered a fresh approach to solve this widespread problem.
Rahul began by speaking directly to the established companies in the audience: those with existing user bases and mature software products. His message was clear: despite significant resources, even tech giants are creating frustrating AI agent experiences that consistently disappoint users.
The pattern is familiar to anyone who has used these assistants. You ask an AI agent to book a flight, which it manages to do, but it immediately fails when you try to do anything else with that booking. Or you ask the Google Slides AI to bold text on a slide, and it responds with the dreaded "Sorry, I can't do that." These experiences, coming from the world's largest software companies, reveal a fundamental flaw in how AI agents are being implemented.
Don’t connect models to the backend…
"People are building feature-incomplete, second-class experiences that frustrate their users, and everyone seems to fail in the same way," Rahul observed. The root of the problem lies in how companies approach AI agent development. Most organizations start with a simplified mental model of their application—a front end connected to a back end. But mature applications are vastly more complex, with hundreds of API endpoints and features developed over years.
When companies bolt on AI agents, they find themselves on an endless journey toward feature parity with their existing frontend. They slowly build the tools the language model can use, one by one, creating a massive gap between what users expect the AI to do and what it can actually accomplish.
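To make the failure mode concrete, here is a hypothetical illustration (not code from the talk) of the tool-by-tool approach, with schemas shaped roughly like common function-calling APIs. Every feature the agent should reach needs its own hand-written wrapper, and with hundreds of endpoints the list never catches up to the real product:

```python
# Hypothetical sketch of the tool-by-tool approach: each backend feature the
# agent should reach is wrapped by hand as a function-calling tool. The names
# and schemas below are illustrative only.
TOOLS = [
    {
        "name": "book_flight",
        "description": "Book a flight for the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "format": "date"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
    # ...then cancel_flight, change_seat, add_bag, issue_refund,
    # update_card_branding, and so on: one hand-written wrapper per endpoint,
    # forever chasing parity with the frontend.
]
```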
This problem is compounded by organizational structure. The main product typically has a well-staffed team of frontend engineers, product managers, and designers constantly improving the human experience. Meanwhile, the agent team works separately, perpetually playing catch-up and trying to recreate the same functionality through tools.
Editor’s note: This is reminiscent of the early days of mobile app development before people started building “mobile first.”
…Instead, connect them to the frontend
Rahul's solution is elegantly simple: "Computer use yourself."
Instead of painstakingly building tool interfaces for every feature, companies should leverage their existing frontend. At Ramp, the AI assistant can reach niche features, such as changing card branding, by spinning up a browser in the background with the user's credentials and navigating the frontend interface. The user only sees the result—that the agent completed the task.
This approach offers several advantages:
Feature completeness from day one
Leveraging (not fighting against) all the work of frontend teams
Handling authentication and user permissions automatically
Simplifying the navigation for computer agents
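What might this look like in practice? A minimal sketch, assuming a Playwright-driven headless browser and a stored user session; the URL, selectors, and the load_user_session helper are hypothetical, not Ramp's actual implementation:

```python
# Hypothetical sketch of "computer use yourself": the agent reaches a niche
# feature by driving the product's existing web UI in a headless browser,
# reusing the user's own authenticated session.
from playwright.sync_api import sync_playwright


def load_user_session(user_id: str) -> str:
    """Return a path to this user's saved Playwright storage state
    (cookies/local storage captured from a real login). Placeholder path."""
    return f"/var/sessions/{user_id}.json"


def update_card_branding(user_id: str, logo_path: str) -> str:
    """Expose the 'change card branding' UI flow to the agent as one tool."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # The browser context inherits the user's cookies, so the agent
        # automatically acts with that user's permissions.
        context = browser.new_context(storage_state=load_user_session(user_id))
        page = context.new_page()

        # Navigate the same screens a human would.
        page.goto("https://app.example.com/settings/cards/branding")
        page.set_input_files("input[type='file']", logo_path)
        page.get_by_role("button", name="Save").click()
        page.wait_for_selector("text=Branding updated")

        browser.close()
    return "Card branding updated."
```

Because the flow runs through the same UI the frontend team already ships and tests, new features become reachable by the agent without any additional tool-building work.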
"Don't reinvent the wheel," Rahul advised. By building heuristics and scaffolding on top of your frontend, you can create a more reliable agent experience while focusing resources on fixing what's broken rather than starting from scratch.
As AI agents become more prevalent in what Rahul called the emerging "Agent Economy," having complete tool interfaces across your feature set will become increasingly important. Companies with distribution advantages need to avoid the common trap of creating frustrating, limited AI experiences that ultimately disappoint users.
The core insight is worth repeating: before letting external agents computer-use your product, learn to computer-use yourself. This approach may be the key to avoiding the AI agent implementation failures we've seen from even the most sophisticated tech companies.