Will Open Source AI Overtake Closed Models? Ft. Ollama, Fireworks and OpenRouter
Jeff Morgan, Dmytro Dzhulgakov, and Alex Atallah debate the future of open source AI development.
Post methodology: Dust custom assistant @AIAscentEssay using Claude 3.7 with system prompt: Please take the supplied transcript text and write a substack-style post about the key themes in the talk at AI Ascent 2025. Style notes: After the initial mention of a person, use their first name for all subsequent mentions; Do not use a first person POV in the posts. Light editing and formatting for the Substack platform.
A panel discussion with Jeffrey Morgan (founder of Ollama), Dmytro Dzhulgakov (co-founder of Fireworks), and Alex Atallah (CEO and co-founder of OpenRouter), moderated by Sequoia partner Lauren Reeder, revealed several key insights about the current state and future of open source AI models.
The Case for Open Source
The panelists made compelling arguments for why open source models are essential to the AI ecosystem. Alex emphasized a fundamental principle: "Human genius can still come from anywhere around the world, and centralizing it behind model labs is very risky." He suggested that open source creates higher leverage by allowing innovation to emerge from unexpected places.
Jeff highlighted the practical aspects, noting that many open source models are deployed on non-data center hardware, making them more accessible to consumers. He also pointed out the importance of ownership for enterprises: "When they fine tune or train or distill a model with their own data, they want to own the entire model, not some piece of it."
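Ollama is itself an example of that accessibility: it serves open models behind a local REST API on ordinary consumer machines. Below is a minimal sketch of calling it, assuming a local Ollama install on its default port with a model already pulled; the model name is an illustrative placeholder.

```python
import requests

# Ollama serves a local REST API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # assumes `ollama pull llama3.2` has been run
        "prompt": "Why might an enterprise run models on its own hardware?",
        "stream": False,      # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```

Everything here runs on the user's own machine, which is exactly the non-data-center deployment Jeff describes.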
Dima compared AI to other general-purpose technologies like electricity, arguing that while applications and distribution can be regulated, banning the basic technology itself would hinder scientific progress. This perspective frames open source AI as a fundamental driver of innovation rather than a threat.
The DeepSeek Moment
A significant portion of the discussion centered on the "DeepSeek moment" in January 2025, when a Chinese model suddenly dominated the open source landscape. The panelists analyzed how this shift happened and what it revealed about the dynamics of the open source ecosystem.
Jeff observed that DeepSeek, like Meta and Alibaba, was primarily focused on solving consumer problems rather than just selling models. "They have no reason to keep these things closed source," he explained. "They get more feedback if they open source it."
Dima credited DeepSeek's success to excellent engineering, noting that "small teams can move really fast" when they integrate research with systems engineering. Alex highlighted three additional factors:
1. R1 was a genuinely strong open source reasoning model.
2. DeepSeek exposed the model’s thinking process, a UX innovation at the time because OpenAI had chosen not to make its "chain of thought" visible (see the sketch after this list).
3. DeepSeek’s own infrastructure limitations forced American companies to figure out how to scale it effectively.
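On the second point: many open source serving stacks surface DeepSeek-R1's reasoning by emitting it inline between think tags, which is what made the thinking process visible to users. A minimal sketch of separating that visible chain of thought from the final answer follows; the tag convention and the sample string are illustrative assumptions, not DeepSeek's documented output spec.

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split an R1-style response into (visible reasoning, final answer).

    Assumes the model wraps its chain of thought in <think>...</think>,
    a convention commonly used when serving DeepSeek-R1-style models.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()  # model emitted no visible reasoning
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

# Illustrative output shape, not a real model response:
raw = "<think>2 groups of 3 is 6; 2 + 6 = 8.</think>The answer is 8."
thinking, answer = split_reasoning(raw)
print(thinking)  # -> 2 groups of 3 is 6; 2 + 6 = 8.
print(answer)    # -> The answer is 8.
```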
The Open vs. Closed Source Balance
Currently, open source models represent only about 20-30% of inference tokens used on OpenRouter, according to Alex. However, the panel was optimistic about growth. When asked to predict the percentage of inference tokens that would run through open versus closed source models in five years, the responses were revealing:
Jeff predicted a 50/50 split, citing research on routing layers that intelligently switch between open and closed source models (see RouteLLM and Minions, and the routing sketch after this list).
Dima also suggested around 50/50 but noted that the open source landscape would likely be more diverse: "On closed source, it might be still a few leading players... On open source it would not be a single model. It would be much more families and fine-tunes and customizations."
Alex's prediction hinged on whether decentralized providers could become sustainable. Without decentralization, he leaned toward closed source maintaining a majority. However, he suggested that decentralization could tie "AI with a financial system that kicks off a cold start flywheel and helps new people serve traffic."
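To make the routing idea concrete, here is a minimal sketch, not RouteLLM's actual method: a keyword heuristic stands in for the learned classifier RouteLLM trains, and the OpenRouter model IDs are illustrative assumptions.

```python
import os
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API; model IDs below are
# placeholders chosen for illustration.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

CHEAP_OPEN_MODEL = "meta-llama/llama-3.1-8b-instruct"  # assumed model ID
STRONG_CLOSED_MODEL = "openai/gpt-4o"                  # assumed model ID

def looks_hard(prompt: str) -> bool:
    """Toy stand-in for a learned router, which would score each query
    with a trained classifier rather than keyword heuristics."""
    hard_markers = ("prove", "derive", "multi-step", "edge case")
    return len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers)

def route_and_complete(prompt: str) -> str:
    model = STRONG_CLOSED_MODEL if looks_hard(prompt) else CHEAP_OPEN_MODEL
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(route_and_complete("Summarize this paragraph in one sentence: ..."))
```

A production router would replace looks_hard with a model trained to predict, per query, whether the cheaper open model can match the stronger model's quality, which is the mechanism behind Jeff's 50/50 prediction.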
The Role of Fine-Tuning and Customization
A recurring theme was the importance of customization for enterprise adoption. Alex predicted that fine-tuning might have "a weak moment in the next year or so" as powerful reinforcement learning models scale up, but ultimately expected a shift similar to what happened with software: "People just want transparency and they want customizability."
Dima emphasized this point, noting a shift from pre-training to post-training and reinforcement learning: "If you want to try to build the model which is good for everything, you need to spend a lot of compute... But if you're customizing, you can spend maybe less compute and use your unique data and your unique way of solving problems." This trend, he suggested, would drive growth in open source adoption.
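That compute argument is what parameter-efficient fine-tuning methods such as LoRA exploit: train small adapters on your own data instead of all the weights. A minimal sketch using the Hugging Face transformers and peft libraries; the base model ID and hyperparameters are illustrative assumptions, not a recommended recipe.

```python
# pip install transformers peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-3.2-1B"  # assumed open-weights base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains low-rank adapters instead of the full weight matrices,
# so compute and data requirements are a fraction of full training.
lora = LoraConfig(
    r=8,                                # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights

# ...train on your own data with a standard Trainer loop, then:
# merged = model.merge_and_unload()            # fold adapters into the base
# merged.save_pretrained("my-custom-model")    # a complete, self-owned model
```

Because the adapters can be merged back into the base weights, the enterprise ends up owning a complete model rather than a piece of one, which is the ownership point Jeff raised earlier.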
The Path Forward
As the AI industry continues to evolve, the panel suggested that the competition between open and closed source models will drive innovation in both camps. The emergence of models like DeepSeek's R1 and the ongoing development of Meta's Llama family demonstrate that the landscape remains dynamic and unpredictable.
What seems clear is that open source AI represents more than just an alternative to closed models—it embodies a different philosophy about how AI should develop and who should have access to its capabilities. As Alex put it, when "the young kids of today grow up really thinking about state-of-the-art machine learning on the edge, they're just going to make something open source before they make something closed source."
The future of AI may well depend on finding the right balance between these approaches, leveraging the strengths of both to create a more innovative, accessible, and responsible AI ecosystem.
Related: Databricks’ Ion Stoica on the Training Data podcast