The industry had settled into a split between two approaches, open source and closed source, when DeepSeek suddenly appeared with an open-source model whose performance rivals OpenAI's, directly challenging business models built on closed code. The battle continues, and this divide is transforming the dynamics of the tech industry as it advances toward the coveted goal of artificial general intelligence.
The global AI leaders are split in their approaches—open or closed—and this divide is reshaping the industry.
At this point, the positions are clear: Meta and xAI have opted for open-source models like Llama 3.1 and Grok-1, while OpenAI and Google have preferred to protect their systems.
A CB Insights study notes that investment flows also split between the two approaches. Since 2020, private developers of open-source AI models have attracted €14.4 billion in venture capital funding, while closed-source developers have raised €36.2 billion—reflecting different bets on how AI innovation will unfold.
The main difference lies in access: closed-source keeps model details and weights private, while open-source makes them available for study, execution, and free adaptation.
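To make the distinction concrete, here is a minimal sketch of what open weights mean in practice, using the Hugging Face transformers library. The checkpoint name is only illustrative (Llama 3.1, mentioned above, requires accepting Meta's license); a closed model, by contrast, is reachable only through a hosted API, with no weights to download.

```python
# Minimal sketch of "open weights": the checkpoint itself can be downloaded,
# run locally, inspected, and fine-tuned. The model name is illustrative;
# Llama 3.1 checkpoints require accepting Meta's license on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open-source and closed-source AI models differ in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# A closed model exposes none of this: no weights to study, run, or adapt,
# only a remote API endpoint.
```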
DeepSeek arrives
DeepSeek has just entered the scene, emerging from the High-Flyer hedge fund created by its founder, Liang Wenfeng. Its strategy is to launch an open-source reasoning model nearly on par with OpenAI’s performance—directly challenging closed-source business models.
CB Insights notes:
“By drastically reducing development costs, DeepSeek is making the creation of advanced AI models more accessible to more companies and organizations. This democratization could increase competition as more players are able to enter the advanced AI field.”
Joaquín Cuenca, co-founder of Freepik, the Spanish search engine offering millions of high-quality graphic resources, explains that DeepSeek managed to train its base model with just $5 million worth of equipment, whereas U.S. rivals have spent around $100 million training similar models.
Cuenca says DeepSeek is an open-source model a step below what OpenAI can achieve: models that “start thinking about more difficult problems; spend more time thinking; solve issues that only a domain expert could solve.”
He believes DeepSeek benefits from open source because it can build on existing components, and because openness also serves as a marketing tool.
He adds:
“We’ve reached AI systems that are smarter than an average person. They know a lot, and the reasoning problem is being solved.”
The goal, he says, is to make AI smarter than all of humanity combined—capable of solving humanity’s problems, thinking as long as needed, and generating unprecedented economic impact.
CB Insights notes that open-source supporters are preparing for an open-source future, while closed-source advocates argue that revenues are crucial for obtaining top resources and talent.
The open-source ecosystem
Among open-source supporters, Meta CEO Mark Zuckerberg has publicly declared that “Meta is committed to open-source AI,” convinced that an open ecosystem will become the standard.
On the podcast The Lunar Society, Zuckerberg confirmed that Meta will keep offering open-source AI as long as it benefits the company.
Another open-source milestone: Hugging Face replicated OpenAI’s Deep Research functionality in just 24 hours, launching its own open-source version—Open Deep Research—led by research head Leandro von Werra.
Some believe the team was motivated by DeepSeek’s “black-box philosophy” with its R1 model, which is designed for deep, detailed reasoning tasks and stands apart from many other AI models.
A “black box” system is one whose internal processes are opaque—users cannot fully understand how it arrives at conclusions.
Mario Garcés, founder of The Mindkind, a Spanish startup competing with giants like Sam Altman's OpenAI to bridge the gap between generative AI and AGI, believes we face a trust issue:
“The ability to solve the problems we give AI is not the same as truly understanding them.”
Garcés cites John Searle’s Chinese Room thought experiment, which argues that machines may produce correct answers without actual understanding or consciousness—calling into question whether passing the Turing Test truly implies “mind” or comprehension.
Small models boost open-source adoption
CB Insights notes another trend: small models are driving open-source adoption.
Industry leaders and smaller players alike are releasing smaller, specialized open-source models:
- Phi (Microsoft)
- Gemma (Google)
- OpenELM (Apple)
This suggests a two-tier market:
- Frontier closed-source models for the most sophisticated applications
- Smaller open-source models for specialized or edge use cases
The closed-source model
Baidu CEO Robin Li recently said in an internal memo:
“Open-source models make little sense.”
From a business standpoint, he argues:
“Closed code allows you to make money, and only then can you attract computing resources and talent.”
CB Insights adds that closed-source developers continue to lead in private capital markets and that consolidation is forming around frontier models:
- Closed-source leaders like OpenAI, Anthropic, and Google will dominate the market.
- Only tech giants such as Meta, Nvidia, and Alibaba can afford the cost of developing open-source models that compete in performance.
Epoch AI estimates that frontier model training costs are growing 2.4× per year, driven by hardware, talent, and energy needs.
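As a rough illustration of what that growth rate implies (the $100 million starting point below is an assumption for the example, not an Epoch AI figure), three years of 2.4× annual growth compound as follows:

```python
# Hypothetical illustration of 2.4x annual growth in frontier training costs.
base_cost_musd = 100        # assumed starting cost, in millions of USD
growth_per_year = 2.4       # Epoch AI's estimated annual growth factor

for year in range(1, 4):
    cost = base_cost_musd * growth_per_year ** year
    print(f"Year {year}: ~${cost:,.0f}M")
# Year 1: ~$240M, Year 2: ~$576M, Year 3: ~$1,382M
```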
A security issue
Security remains central. Critics fear malicious actors could misuse open-source models to access harmful information, such as bomb-making instructions or cyberattack code.
There are also national security concerns: open-source models could enable foreign actors to develop military applications—weapon systems or intelligence technologies—compromising strategic advantages of leading AI nations.
Closed models often use techniques such as reinforcement learning from human feedback (RLHF) to limit harmful outputs. Open-source models, however, are more likely to be deployed without such safeguards.
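For readers curious about what RLHF involves technically, the toy sketch below covers only its preference-modelling step, on synthetic data: a small reward model learns to score the response a human preferred above the one they rejected, and that reward signal would then steer the language model during a later policy-optimization phase. All names, sizes, and data here are illustrative assumptions, not any lab's actual pipeline.

```python
# Toy sketch of the reward-modelling step behind RLHF (illustrative only).
# Each training pair holds the embedding of a human-preferred ("chosen")
# response and a "rejected" one; the reward model learns to rank them.
import torch
import torch.nn as nn

torch.manual_seed(0)

EMBED_DIM = 16                           # stand-in for a real model's hidden size
reward_model = nn.Linear(EMBED_DIM, 1)   # maps a response embedding to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Synthetic preference data: "chosen" responses are shifted so they are separable.
chosen = torch.randn(64, EMBED_DIM) + 0.5
rejected = torch.randn(64, EMBED_DIM) - 0.5

for step in range(200):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Bradley-Terry pairwise loss: push the preferred response's reward
    # above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In a full RLHF pipeline, this reward model would then score candidate outputs
# during policy optimization (e.g. PPO), discouraging harmful completions.
```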