Post methodology: Claude 4.0 via custom Dust assistant @TDep-SubstackPost with the system prompt: “Please read the text of the podcast transcript in the prompt and write a short post that summarizes the main points. Please make the post as concise as possible and avoid academic language or footnotes. Refer to podcast guests by their first names after the initial mention.” Light editing and reformatting for the Substack editor.
Jon Noronha, co-founder of Gamma, discovered something profound about presentations: everyone hates making them, but everyone needs them. Starting in 2020, when the pandemic had us all trapped on Zoom calls staring at slide decks, Jon and his co-founder Grant realized that visual communication was both the “language of business” and universally frustrating.
The breakthrough came when Jon revisited AI models he’d initially dismissed. His early tests of GPT-3 for slide creation in 2020 had been disappointing, but the emergence of image models like Stable Diffusion and DALL-E changed everything. “This is by far the most compelling thing I’ve ever seen that could do it,” Jon realized, seeing AI-generated images as the next evolution beyond PowerPoint’s clipart.
But it wasn’t just about pretty pictures. The real magic happened when improved text models solved what Jon calls the “blank page problem”: that overwhelming moment when you face “enter presentation title” and spiral through questions about story structure, fonts, colors, and imagery. Gamma’s AI transforms this paralysis into productivity, letting users go “from a vague idea to a fully worked out rough draft” where “their job was editing, not starting from scratch.”
What keeps Gamma from becoming just another PowerPoint clone is its heavy investment in experimentation. Drawing on Jon’s background at Optimizely, the team runs hundreds of A/B tests across multiple AI models. As Sam Altman was announcing GPT-5, Gamma engineers were already testing it against Claude and Gemini. “Every time a new model comes out, there’s somebody on our team making a code update,” Jon explains.
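Jon didn’t walk through the implementation on the podcast, but for the curious, here is a minimal sketch of how deterministic per-user bucketing for a cross-model A/B test can work. The model names, traffic split, and `pick_variant` helper are illustrative assumptions, not Gamma’s actual code.

```python
import hashlib

# Hypothetical traffic split for one experiment: the share of deck-generation
# requests routed to each candidate model (names are illustrative only).
SPLIT = [("claude", 0.8), ("gemini-flash", 0.1), ("new-model", 0.1)]

def pick_variant(user_id: str) -> str:
    """Deterministically bucket a user so they see the same model every session."""
    # Hash the user id into [0, 1) so assignment is stable across requests.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    r = (h % 10_000) / 10_000
    cumulative = 0.0
    for model, share in SPLIT:
        cumulative += share
        if r < cumulative:
            return model
    return SPLIT[0][0]  # fall back to the default model

print(pick_variant("user-123"))  # e.g. "claude"
```

Stable bucketing like this is what lets a small team compare quality and cost across models without users flickering between variants.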
This obsessive testing revealed surprising insights: reasoning models actually hurt creativity (“the longer a model thinks, the less creative it gets”), while Claude emerged as the team’s favorite for its visual taste, even though Gemini Flash wins on cost efficiency. With 20+ image models in its arsenal and a team that at one point was one-third designers, Gamma focuses on taste and on creating building blocks that AI can leverage.
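The routing logic behind those choices wasn’t described in detail; a rough sketch of per-task model routing under those findings might look like this, with the task names, model names, and `reasoning_budget` field all assumed for illustration.

```python
# Illustrative per-task routing table: use the model with the best visual taste
# for layout-heavy work, the cheapest model for bulk text, and keep reasoning
# budgets low for creative steps, since longer thinking tended to reduce creativity.
ROUTES = {
    "layout":    {"model": "claude",       "reasoning_budget": 0},
    "body_text": {"model": "gemini-flash", "reasoning_budget": 0},
    "outline":   {"model": "claude",       "reasoning_budget": 512},
}

def route(task: str) -> dict:
    """Return the model choice and reasoning budget for a given task type."""
    return ROUTES.get(task, ROUTES["body_text"])

print(route("layout"))  # {'model': 'claude', 'reasoning_budget': 0}
```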
The strategy is working. With more than 50 million users and cash-flow positivity on a team of just 30, Gamma has evolved beyond presentations into what Jon calls “visual storytelling”: expanding into documents, websites, and soon an API for automated visual content generation. When Andrej Karpathy tweeted asking for “the Cursor for slides,” the replies overwhelmingly pointed to Gamma.
Looking ahead, Jon envisions more agentic editing, where users can simply tell the AI to “expand this five-slide presentation to 20 slides” or automatically generate sales pitch decks from CRM data. While PowerPoint and Google Slides still dwarf Gamma’s user base at 500 million users each, Jon sees opportunity in the foundation models’ focus on coding rather than creative visual work.
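As a loose illustration only, an agentic edit or generation request of that kind could be expressed as a plain-language instruction plus structured context for the model to act on. Every field name below is hypothetical and not Gamma’s API.

```python
# Hypothetical shape of an agentic edit request: a user instruction plus the
# structured context needed to modify an existing deck (fields are illustrative).
expand_request = {
    "deck_id": "demo-deck",
    "instruction": "Expand this five-slide presentation to 20 slides",
    "constraints": {"keep_existing_slides": True, "max_words_per_slide": 60},
}

# Hypothetical generation request driven by CRM data rather than a deck.
generate_request = {
    "instruction": "Draft a sales pitch deck for this account",
    "context": {"crm_record": {"company": "Acme Corp", "stage": "evaluation"}},
}

print(expand_request["instruction"])
```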
The lesson for other AI application builders? Find your unique perspective, embrace experimentation across models and solve problems the foundation models aren't optimizing for. Instead of following the crowd, carve out neglected territories where your expertise still matters.
Hosted by Sonya Huang
Full transcript on sequoiacap.com
Mentioned in this episode:
Language Models are Few-Shot Learners: Original paper for GPT‑3 that Jon tried early for slide generation and found lacking at the time.
Training language models to follow instructions with human feedback: Cited as the key improvement that made models follow natural prompts and unlocked Gamma’s AI path.
Ideogram: Image-generation service; one of Gamma’s most-used image models.
FLUX (Black Forest Labs): State-of-the-art image model family; another top performer in Gamma’s image stack.
DSPy: Stanford framework for programmatic prompt/chain optimization. Gamma tested it but prefers human-legible, cross-model prompts for portability.