Anthropic CPO Mike Krieger: Building AI Products From the Bottom Up
Drawing on his Instagram experience, Mike Krieger discusses how AI product development requires a new approach that’s more responsive to model capabilities.
Post methodology: Dust custom assistant @AIAscentEssay using Claude 3.7 with system prompt: Please take the supplied transcript text and write a substack-style post about the key themes in the talk at AI Ascent 2025. Style notes: After the initial mention of a person, use their first name for all subsequent mentions; Do not use a first person POV in the posts. Light editing and reformatting for the Substack editor.
In a wide-ranging conversation at AI Ascent 2025, Anthropic Chief Product Officer Mike Krieger shared insights about AI product development, the future of agent-to-agent interactions, and how organizations are adapting to AI-powered workflows. The former Instagram co-founder brought a unique perspective that bridges consumer product development with cutting-edge AI research.
The Shift to Bottoms-Up Product Development
One of the most striking revelations from Mike's discussion was how AI product development requires a fundamentally different approach than traditional consumer products. At Instagram, Mike described a "tops down, 3 to 6 month time frame of planning" that was "much more planned and deliberate."
The AI landscape demands something different: "You just have to allow for much more bottoms up creativity, because the best products are the ones that are built very close to the model, and you can only kind of tell what they're capable of pretty late in the process."
This bottoms-up approach has yielded some of Anthropic's most successful products. Mike shared how Artifacts, one of their notable releases, began as "a research prototype that then got taken up by a designer and an engineer and then shipped to production."
The Birth of MCP: Solving Real Problems
The Model Context Protocol (MCP), which has become an industry standard, emerged organically from internal needs rather than a grand strategic vision. Mike explained that it started when engineers noticed redundant implementations for different integrations:
"We were implementing Google Drive integration, and then we're implementing GitHub integration. And those things should have more in common than not... And the third one that we were queuing up was going to be like yet another completely bespoke thing."
Rather than continuing with bespoke solutions, two engineers identified the common patterns and created what would become MCP. From there, it evolved into an open-source protocol that's now being adopted across the industry, with companies like Microsoft and Amazon contributing to its development.
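The core refactor is easy to picture even outside the full MCP spec (which is a richer JSON-RPC-based protocol). As a toy sketch, assuming hypothetical `DriveConnector` and `GitHubConnector` classes, the idea is to replace N bespoke integrations with one common interface that any client loop can consume:

```python
from abc import ABC, abstractmethod


class Connector(ABC):
    """Hypothetical common interface standing in for per-service glue code."""

    @abstractmethod
    def list_resources(self) -> list[str]:
        """Return the names of resources this service exposes."""

    @abstractmethod
    def read(self, name: str) -> str:
        """Return the content of one named resource."""


class DriveConnector(Connector):
    """Toy stand-in for a Google Drive integration."""

    def __init__(self, files: dict[str, str]):
        self._files = files

    def list_resources(self) -> list[str]:
        return sorted(self._files)

    def read(self, name: str) -> str:
        return self._files[name]


class GitHubConnector(Connector):
    """Toy stand-in for a GitHub integration."""

    def __init__(self, repos: dict[str, str]):
        self._repos = repos

    def list_resources(self) -> list[str]:
        return sorted(self._repos)

    def read(self, name: str) -> str:
        return self._repos[name]


def gather_context(connectors: list[Connector]) -> dict[str, str]:
    # One generic loop replaces a bespoke code path per integration.
    return {name: c.read(name) for c in connectors for name in c.list_resources()}
```

This is only an illustration of the "common patterns" insight, not Anthropic's implementation: the real protocol also standardizes how servers advertise tools and prompts, which is what lets third parties like Microsoft and Amazon build against it.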
AI's Impact on Organizational Efficiency
The accelerating capabilities of AI coding tools are creating interesting second-order effects within organizations. Mike noted that AI-powered code generation is exposing other organizational inefficiencies:
"It makes all of your other inefficiencies as a product organization extremely painful because now the alignment meeting is not just standing in the way of an hour of engineering work... it's standing in the way of the equivalent of 4 or 8 hours."
This acceleration is forcing teams to reconsider how they make decisions and drive alignment. While AI can synthesize meetings and tee up conversations, Mike acknowledged they're "not yet at the point where they're organizationally driving decision making."
The Future of Agent-to-Agent Interactions
Looking ahead, Mike identified two key areas of development for MCP and similar protocols. First is "taking action" - moving beyond bringing context into models to enabling them to automate workflows. The second frontier is agent-to-agent interaction:
"Internally we talk about like, at what point will your agents hire other agents and what does that economy of things even look like?"
These interactions raise complex questions about identity, discernment, and auditability. Mike highlighted the research challenge of teaching models appropriate discernment: "If you're transacting with a vendor, sure, you can reveal credit card information, but if it's just some other random agent you're talking to, probably not."
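The discernment Mike describes can be thought of as a disclosure policy keyed to counterparty identity. A minimal sketch, assuming a hypothetical `may_disclose` check and a simple dict describing the counterparty (none of this reflects how any lab actually implements it):

```python
# Fields an agent should never reveal to an unverified counterparty.
SENSITIVE_FIELDS = {"credit_card", "ssn", "api_key"}


def may_disclose(field: str, counterparty: dict) -> bool:
    """Toy policy gate: release sensitive fields only to verified vendors.

    `counterparty` is a hypothetical descriptor, e.g.
    {"role": "vendor", "verified": True}.
    """
    if field not in SENSITIVE_FIELDS:
        return True  # non-sensitive data flows freely
    # Sensitive data requires both a vendor role and identity verification.
    return counterparty.get("role") == "vendor" and bool(counterparty.get("verified"))
```

A real system would need auditable identity attestation rather than a self-reported role, which is exactly the open research question Mike points to.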
The Compute Conundrum
A recurring theme in Mike's comments was the critical importance of compute allocation decisions. Referencing the "AI 2027" speculative scenario, he confirmed that compute considerations are indeed top of mind at AI labs:
"What is our current compute story? What's the next generation of compute? Who do we partner with? That emphasis and the numbers in there are pretty directionally correct overall."
These decisions create complex tradeoffs: "Incrementally do you spend the extra time on RL or do you spend that time with a customer use case, or do you spend it on your next pre-train?" This becomes especially challenging when successful products consume significant inference capacity that could otherwise be used for research.
Making AI Accessible to Everyone
Despite the rapid advancement of AI capabilities, Mike identified a significant gap in usability: "These products are really hard to use effectively for most people approaching it for the first time."
This usability challenge keeps him "up at night," as he sees "a huge overhang of how models are useful to people and their capabilities today." The contrast with Instagram's intuitive first-time experience is stark - there's no equivalent yet of "the first time you open Instagram, it's like, what do you do? This thing, you take a photo."
The Normalization of AI Usage
Mike observed interesting social dynamics around AI usage, even within Anthropic itself. He described how performance review season revealed evolving attitudes:
"What's happening around performance review season is people were using it to generate their first drafts, which was very interesting in public."
This public visibility of AI usage is breaking down stigmas, similar to early Midjourney usage. Mike believes this shared visibility remains "very important" as people learn to integrate AI into their workflows, noting that these social dynamics are still "at the very beginning of how people even understand how to use this in their work."
Looking Ahead: Anthropic's Focus
When asked about Anthropic's future direction, Mike emphasized their focus on enabling models to "work for hours at a time." This requires developing capabilities like memory, advanced tool use, and organizational self-onboarding.
The pace of development remains breathtaking. Mike joked about people considering Claude 3.7 "old" despite it being released just months earlier: "We released that in February... the pace is very crazy."
As AI labs navigate the complex landscape of research priorities, product development, and compute allocation, Mike's insights offer a valuable glimpse into how one of the industry's leading companies is addressing these challenges while working to make AI more accessible and useful to everyone.