Top Ten Tips for AI Founders; Plus, ElevenLabs and the Future of Voice
For the one-year anniversary of Training Data, we crunched the advice our guests have shared into the ten most actionable takeaways... and we chatted with Mati Staniszewski of ElevenLabs.
This week we talked to Mati Staniszewski, co-founder and CEO of ElevenLabs, who explained how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide.
Each week, we ask our guests on Training Data for their advice for AI founders. Here’s a top ten list of what they said over the last year.
Top 10 AI Founder Takeaways From One Year of ‘Training Data’
For the first anniversary of Training Data we ran the key takeaways for each episode through GPT-4 to synthesize our guests’ best advice for AI founders.
Jun 26, 2025
Post methodology: @gpt4 via Dust: take this set of takeaways for AI founders from the training data podcast and compile a top ten list of the most frequent and most actionable takeaways. please maintain the same length and format with bold headings followed by a colon that is in the original source text. order the tips from 10 to 1 based on a combination of frequency and actionability; be sure to consider all of the supplied text in your response; briefly explain how each list item maps to specific episodes; please rank the episodes that have more than one mention. Light editing and reformatting for the Substack editor.
10. Price and Monetize Based on Delivered Value:
Move beyond seat-based or raw usage pricing. Align pricing with the concrete business outcomes your AI enables, whether it’s workflow completion, measurable savings, or strategic impact. This approach better captures the true value you create. (A rough sketch of the difference follows the episode list below.)
Amit Bendov/Gong (Ep 46): Price based on value delivered, not seats.
Manny Medina/Paid (Ep 39): Set a strategy to move up the pricing maturity curve, align pricing with customer value.
Clay Bavor/Sierra (Ep 11) and Bret Taylor/Sierra (Ep 43): Align pricing models with customer value and buying processes.
Joe Spisak/Meta, Llama (Ep 7): Value shifting from model development to application and customization.
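To make the pricing contrast concrete, here is a minimal sketch (purely illustrative) comparing a seat-based invoice to an outcome-based one. The `UsageReport` fields, rates, and function names are all hypothetical assumptions, not anything a guest actually described.

```python
# Illustrative comparison of seat-based vs. outcome-based billing.
# All rates and field names are hypothetical.

from dataclasses import dataclass


@dataclass
class UsageReport:
    seats: int                 # licensed users
    tickets_resolved: int      # outcomes the AI actually completed
    hours_saved: float         # estimated labor savings


def seat_based_invoice(report: UsageReport, price_per_seat: float = 50.0) -> float:
    """Traditional pricing: charges the same whether the AI delivered value or not."""
    return report.seats * price_per_seat


def outcome_based_invoice(
    report: UsageReport,
    price_per_resolution: float = 0.75,
    share_of_savings: float = 0.10,
    hourly_labor_cost: float = 40.0,
) -> float:
    """Value-aligned pricing: the bill scales with work completed and savings delivered."""
    resolution_fee = report.tickets_resolved * price_per_resolution
    savings_fee = report.hours_saved * hourly_labor_cost * share_of_savings
    return resolution_fee + savings_fee


if __name__ == "__main__":
    month = UsageReport(seats=200, tickets_resolved=12_000, hours_saved=900.0)
    print(f"Seat-based:    ${seat_based_invoice(month):,.2f}")
    print(f"Outcome-based: ${outcome_based_invoice(month):,.2f}")
```

The point of the outcome-based design is that the bill only grows when the product demonstrably does work, which is exactly the incentive alignment the guests describe.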
9. Develop Robust Infrastructure for Reliability and Scale:
Mission-critical AI needs enterprise-grade infrastructure: state management, observability, security, and extensibility. Treat reliability as a first-class engineering problem, with comprehensive testing, monitoring, and systematic error handling. (A minimal sketch of this mindset follows the episode list below.)
Sridhar Ramaswamy/Snowflake (Ep 16): Reliability and precision are critical for enterprise AI applications, robust engineering.
Ion Stoica/Databricks (Ep 25): Focus on solving real production problems, not demos, compound AI systems.
Sahir Azam/MongoDB (Ep 29): State management is essential, multi-modal data integration.
Harrison Chase/LangChain (Ep 1, Ep 44): Build infrastructure for persistence, observability, and scalability.
Nikesh Arora/Palo Alto Networks (Ep 30): Security must be built in, not bolted on.
Clay Bavor/Sierra (Ep 11): Agent OS, tools to manage complexity.
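As a concrete illustration of treating reliability as an engineering problem, here is a minimal sketch of a model call wrapped with retries, exponential backoff, and structured logging. `call_model` is a hypothetical stand-in for whatever provider SDK you use, not any specific guest's stack.

```python
# Minimal reliability wrapper: retries, exponential backoff, and logging
# around a (hypothetical) model call.

import logging
import random
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("agent-infra")


class ModelCallError(Exception):
    """Raised when the underlying model call fails."""


def call_model(prompt: str) -> str:
    # Placeholder for a real provider call; fails randomly to exercise retries.
    if random.random() < 0.5:
        raise ModelCallError("transient upstream error")
    return f"answer to: {prompt}"


def reliable_call(prompt: str, max_attempts: int = 4, base_delay: float = 0.5) -> str:
    """Retry transient failures with exponential backoff and log every attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = call_model(prompt)
            log.info("model call succeeded on attempt %d", attempt)
            return result
        except ModelCallError as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)))
    raise RuntimeError("unreachable")


if __name__ == "__main__":
    print(reliable_call("summarize Q2 pipeline"))
```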
8. Integrate AI Seamlessly Into Existing Workflows:
Especially for startups targeting the enterprise, the most effective AI products fit naturally into users’ current processes and environments, reducing friction and driving adoption. Strive for “zero-touch” or invisible automation that enhances productivity without disrupting established habits.
Thomas Dohmke/GitHub (Ep 8): Copilot’s success stems from meeting developers where they are.
Eric Glyman/Ramp (Ep 23): Zero-touch automation, systems work for us, not the other way around.
Arvind Jain/Glean (Ep 19): Context is critical, deep integration with enterprise systems.
Anish Agarwal and Raj Agrawal/Traversal (Ep 51): Architect for constant evolution, adapt to enterprise-scale complexity.
Harrison Chase/LangChain (Ep 1, Ep 44), Hema Raghavan/Kumo (Ep 26): Meet customers where they are, build infrastructure for persistence and scalability.
7. Architect for Continuous Evolution and Scalability:
AI-native companies must be ready to reevaluate and rearchitect their systems every 6–12 months as models, data, and compute evolve. Build flexible, modular infrastructure that can rapidly integrate new capabilities and scale with demand. (One way to keep a backend swap cheap is sketched after the episode list below.)
Anish Agarwal and Raj Agrawal/Traversal (Ep 51): Architect for constant evolution.
Kevin Scott/Microsoft (Ep 4): Architect applications flexibly for new advancements.
Amjad Masad/Replit (Ep 37): Evolve systems with AI capabilities, be ready to rearchitect.
Eric Glyman/Ramp (Ep 23), Joe Spisak/Meta, Llama (Ep 7): Leverage existing foundation models, focus on product, scale and execution are critical.
Lin Qiao/Fireworks (Ep 9): Anticipate shift from training to inference.
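One common way to make a future rearchitecture cheap is to depend on an interface rather than a specific provider. The sketch below assumes that pattern; `ProviderA`, `ProviderB`, and `Application` are hypothetical names, not a reference to any guest's system.

```python
# Minimal sketch of a modular architecture with swappable model backends.

from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider A] {prompt}"


class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider B] {prompt}"


class Application:
    """The product depends only on the ChatModel interface, so swapping the
    backend during a rearchitecture is a one-line change, not a rewrite."""

    def __init__(self, model: ChatModel) -> None:
        self.model = model

    def answer(self, question: str) -> str:
        return self.model.complete(question)


if __name__ == "__main__":
    app = Application(ProviderA())
    print(app.answer("draft a launch plan"))
    app.model = ProviderB()   # swap the backend as the landscape shifts
    print(app.answer("draft a launch plan"))
```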
6. Prioritize Data Quality, Transparency, and Trust:
High-quality, context-rich, and well-governed data is the foundation of reliable AI. Invest in curating, integrating, and explaining your data and outputs. Make transparency and explainability core to your product, especially in high-consequence or regulated domains. (A small sketch of citation-carrying answers follows the episode list below.)
Daniel Nadler/OpenEvidence (Ep 32): Relentless focus on accuracy and quality, transparency into source citations.
Nikesh Arora/Palo Alto Networks (Ep 30): Domain knowledge and data are the new moats.
Hema Raghavan/Kumo (Ep 26): Make trust and transparency core features.
Sahir Azam/MongoDB (Ep 29): Quality is the new frontier, multi-modal data integration.
Eric Glyman/Ramp (Ep 23), Arvind Jain/Glean (Ep 19): Customer-centric product development, knowledge infrastructure comes first.
Harrison Chase/LangChain (Ep 1, Ep 44): Observability, transparency in agent actions.
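As a sketch of what "transparency into source citations" can look like at the product layer, here is a toy example where every answer carries the sources it was derived from. The retrieval step is a stub, and none of the names refer to a real system.

```python
# Toy example: answers that always carry their sources.

from dataclasses import dataclass, field


@dataclass
class Source:
    title: str
    url: str


@dataclass
class CitedAnswer:
    text: str
    sources: list = field(default_factory=list)

    def render(self) -> str:
        cites = "\n".join(f"  [{i + 1}] {s.title} ({s.url})" for i, s in enumerate(self.sources))
        return f"{self.text}\nSources:\n{cites}"


def answer_with_citations(question: str, corpus: list) -> CitedAnswer:
    # Stub: a real system would retrieve relevant documents and ground the
    # generated text in them; here we just attach the top documents.
    return CitedAnswer(text=f"Draft answer to: {question}", sources=corpus[:2])


if __name__ == "__main__":
    docs = [Source("Clinical study A", "https://example.org/a"),
            Source("Review article B", "https://example.org/b")]
    print(answer_with_citations("What does the evidence say?", docs).render())
```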
5. Build Customer-First, Not Technology-Out:
Focus on deeply understanding and solving real user problems, not just showcasing technical prowess. Embed directly with customers, learn their workflows, and ensure your product delivers tangible, user-centered value. (OpenAI built technology-out, but that doesn't mean you should; there are very few OpenAIs.)
Amit Bendov/Gong (Ep 46): Build customer-first, not technology-out, focus on customer workflows.
Eric Glyman/Ramp (Ep 23): Lead with the benefit, customer-centric product development.
Clay Bavor/Sierra (Ep 11): Work closely with customer experience teams.
Arvind Jain/Glean (Ep 19), Sridhar Ramaswamy/Snowflake (Ep 16): Start with clear user value, AI founders should focus on making complex tasks simple for end users.
Christopher O’Donnell/Day (Ep 36), Matan Grinberg and Eno Reyes/Factory (Ep 2): Work with customers every day, align with customer needs.
4. Leverage Reasoning, Planning, and Agentic Capabilities:
The biggest near-term opportunity and differentiator is building systems that can reason, plan, and autonomously execute complex, multi-step tasks. Invest in inference-time compute, agent orchestration, and chaining together specialized capabilities to deliver superhuman results on real-world workflows. (A toy agent loop is sketched after the episode list below.)
Bob McGrew/Ex-OpenAI (Ep 50), Noam Brown, Ilge Akkaya and Hunter Lightman/OpenAI/o1 (Ep 15): Reasoning represents the biggest opportunity, thinking longer enables the model to tackle complex problems.
Hanson Wang and Alexander Embiricos/OpenAI/Codex (Ep 49), Anish Agarwal and Raj Agrawal/Traversal (Ep 51): Delegation-based workflows, moving complexity to inference-time compute.
Misha Laskin (Ep 5) and Ioannis Antonoglou (Ep 27)/ReflectionAI: Combining learning and search, depth is the missing piece in agents.
Jim Fan/NVIDIA (Ep 13), Jim Gao/Phaidra (Ep 10): Generalist and agentic capabilities, AI creativity as a differentiator.
Harrison Chase/LangChain (Ep 1, Ep 44): Custom cognitive architectures, orchestration and observability.
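For a sense of what "chaining together specialized capabilities" means mechanically, here is a toy agent loop: a planner decides the next step, tools execute it, and the observations feed back into the plan. The planner and tools are deliberately trivial stand-ins; a real system would put a reasoning model behind the planning step and real integrations behind the tools.

```python
# Toy agent loop: plan a step, call a tool, feed the observation back.

from __future__ import annotations

from typing import Callable

Tool = Callable[[str], str]

TOOLS: dict[str, Tool] = {
    "search": lambda q: f"search results for '{q}'",
    "summarize": lambda text: f"summary of: {text[:40]}...",
}


def plan_next_step(goal: str, history: list[str]) -> tuple[str, str] | None:
    """Toy planner: search first, then summarize, then stop.
    A reasoning model would make this decision at inference time."""
    if not history:
        return ("search", goal)
    if len(history) == 1:
        return ("summarize", history[-1])
    return None  # done


def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        tool_name, tool_input = step
        observation = TOOLS[tool_name](tool_input)
        history.append(observation)
    return history


if __name__ == "__main__":
    for obs in run_agent("competitive landscape for voice AI"):
        print(obs)
```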
3. Focus on Specialized, Domain-Specific Solutions:
Rather than building broad generalists, successful AI startups target high-friction, vertical problems where domain expertise, data context, and specialized workflows create true differentiation and defensibility. Proprietary data or proprietary process know-how can be critical advantages.
Winston Weinberg/Harvey (Ep 33): Build systems next-gen models cannot address, deep domain expertise.
Anish Agarwal and Raj Agrawal/Traversal (Ep 51), Carl Eschenbach/Workday (Ep 41): Enterprise fragmentation creates advantage, domain expertise and data context matter more than general intelligence.
Bob McGrew/Ex-OpenAI (Ep 50): Enterprise applications requiring deep domain integration remain safe from frontier lab competition.
Manny Medina/Paid (Ep 39): Be a hedgehog—best at a specific problem.
Paul Eremenko/P-1 AI (Ep 47): Cognitive automation, federated approach, start simple and scale.
Filip Kozera/Wordware (Ep 35), Max Jaderberg/Isomorphic Labs (Ep 40), Patrick Hsu/Arc Institute (Ep 38): Empowering the next generation of developers; general models, not local solutions; bilingual expertise.
2. Balance Human-AI Collaboration and Oversight:
The best AI systems amplify human strengths rather than replace them. Design for hybrid workflows, build in transparency, and ensure robust human-in-the-loop mechanisms for trust, quality, and adaptability. (A minimal approval-gate sketch follows the episode list below.)
Harrison Chase/LangChain (Ep 1, Ep 44), Anish Agarwal and Raj Agrawal/Traversal (Ep 51): Hybrid human-AI collaboration, human-in-the-loop is essential.
Amit Bendov/Gong (Ep 46): AI can’t yet be trusted for tasks requiring full accountability.
Raiza Martin and Jason Spielman/Google/NotebookLM (Ep 17), Christopher O’Donnell/Day (Ep 36): Augment rather than replace human capabilities, amplifying the human element.
Daniel Nadler/OpenEvidence (Ep 32): Trust through transparency and control.
Kareem Amin/Clay (Ep 21): Balancing automation and human creativity.
Kevin Scott/Microsoft (Ep 4): Augmentation rather than replacement.
Manny Medina/Paid (Ep 39): Don’t try to serve everyone—focus on human-centered design.
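Here is a minimal sketch of one human-in-the-loop mechanism: route any irreversible or low-confidence action to a reviewer before executing it. The confidence threshold and the console prompt are hypothetical choices, not a prescription from any guest.

```python
# Minimal human-in-the-loop gate: escalate risky actions for approval.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    confidence: float   # model's own estimate, 0.0 to 1.0
    reversible: bool


def needs_human_review(action: ProposedAction, threshold: float = 0.9) -> bool:
    """Escalate anything irreversible or below the confidence threshold."""
    return (not action.reversible) or action.confidence < threshold


def execute(action: ProposedAction) -> None:
    if needs_human_review(action):
        approved = input(f"Approve '{action.description}'? [y/N] ").strip().lower() == "y"
        if not approved:
            print("Skipped by reviewer.")
            return
    print(f"Executing: {action.description}")


if __name__ == "__main__":
    execute(ProposedAction("send refund of $25 to customer #1042",
                           confidence=0.82, reversible=False))
```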
1. Cultivate Rapid Iteration and Experimentation:
AI founders must embrace a culture of fast prototyping, continuous user feedback, and willingness to pivot. Launch experimental versions early—even if imperfect—to gather insight and evolve products in sync with technology and user needs. Build so that continuous model improvements are a tailwind for you rather than a headwind.
Josh Woodward/Google Labs (Ep 34) and Thomas Iljic, Jaclyn Konzelmann and Simon Tokumine/Google Labs (Ep 48): Rapid experimentation, startup-like environment, move quickly from ideas to user testing.
Sebastian Siemiatkowski/Klarna (Ep 6): Rapid experimentation and implementation, move swiftly from concept to production.
Thomas Dohmke/GitHub (Ep 8), Matan Grinberg and Eno Reyes/Factory (Ep 2): Embrace rapid iteration, incubation team, build, ship, and iterate.
Kareem Amin/Clay (Ep 21), Raiza Martin and Jason Spielman/Google/NotebookLM (Ep 17), Clay Bavor/Sierra (Ep 11): Iterative product development, move fast but stay focused.
Anish Agarwal and Raj Agrawal/Traversal (Ep 51): Constantly make six-month bets…willing to reevaluate architecture.
Key takeaways for each episode are in the transcript pages linked from the Training Data series page.
Awesome nuggets for any founder to take away.
I particularly like the first regarding pricing. Pricing AI based on seats or API calls underestimates the transformation it can drive. The real unlock is tying monetization to the job done, e.g. deals closed, tickets resolved, hours saved, insights surfaced, etc. This aligns incentives, proves ROI, and creates a defensible pricing moat. It also forces product teams to deeply understand and measure outcomes, not just activity.