Interviewer:

   From a small office in Castejón de Sos, in the very heart of the Benasque Valley, Mario Garcés works on what many call the Holy Grail of artificial intelligence: creating a general-purpose AI capable of learning and reasoning like a human being. He does so with modest resources but enormous ambition and a unique scientific approach that has allowed him to challenge giants such as OpenAI and Google. Mario Garcés, good morning.

Mario Garcés:

   Hello! Good morning—how are you? Did I recognise myself in that introduction? Absolutely. You summed up very well what we’re trying to do and whom we’re up against.

Interviewer:

   What does your day-to-day work look like, Mario—what do you actually do?

Mario Garcés:

   Let me split it in two.

   Scientific/technical side: we’re searching for a technological way to replicate the cognitive abilities of living beings—ultimately, of humans. Starting from neuroscience, we distilled the fundamental principles that govern human thought and behaviour, and now we’re searching for the technology that can reproduce what biology has already achieved.

   More mundane side: finding money. Unlike the big tech firms you mentioned—our direct competitors—who received eight to ten billion dollars before having a product, we operate on barely €300,000 a year. That contrast shows the scale of what we’re attempting.

Interviewer:

   And where do you look for funding?

Mario Garcés:

   So far we’ve landed public grants—CDTI’s Neotec programme, Spain’s Ministry of Science & Innovation, and EU-backed funds through the Government of Aragón. Public money has limits, though, so we’re now courting private investment to give us that qualitative leap. It’s hard in Spain; what’s called “venture capital” here is often risk-averse. Hence we’re opening up to international investors for the boost we still need.

Interviewer:

   We’re talking about building an AI that can think like a human—that scares me, Mario.

Mario Garcés:

   It’s a common worry. Today’s large language models—ChatGPT, Gemini, and so on—train on existing human knowledge via neural networks. We don’t really know how they learn it, so we can’t predict their answers, which leads to the famous “hallucinations.”

   One of our founding axioms is safety: the system must always be fully traceable and explainable. We need to know what information it’s processing, how it generates alternatives, and why it chooses one option over another. That continual oversight reduces the risk of an autonomous system making decisions we can’t understand or control.
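   The traceability Garcés describes—knowing what information the system processes, which alternatives it generates, and why it picks one—can be illustrated with a toy sketch. This is an assumption-laden illustration of the general idea, not The Mindkind’s actual architecture, whose internals are not public:

   ```python
   from dataclasses import dataclass, field

   @dataclass
   class TraceableDecision:
       """Toy sketch of one decision step whose reasoning stays inspectable."""
       inputs: dict                                   # what information was processed
       candidates: list = field(default_factory=list)  # (option, score, rationale) triples
       choice: tuple | None = None

       def consider(self, option, score, rationale):
           # Every alternative is recorded, not just the winner.
           self.candidates.append((option, score, rationale))

       def decide(self):
           # Choose the highest-scoring option; the full candidate list,
           # scores, and rationales remain available for audit afterwards.
           self.choice = max(self.candidates, key=lambda c: c[1])
           return self.choice

   step = TraceableDecision(inputs={"query": "pick a route"})
   step.consider("route A", 0.4, "shorter but congested")
   step.consider("route B", 0.7, "longer but reliable")
   option, score, why = step.decide()
   print(option, "->", why)  # → route B -> longer but reliable
   ```

   The point of the sketch is that the audit trail (inputs, candidates, rationales) is a first-class part of the decision, in contrast to a large language model whose internal weighing of alternatives is opaque.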

Interviewer:

   Some scientists are even looking for ways to create artificial life after a person dies…

Mario Garcés:

   Indeed. Take Elon Musk’s Neuralink: the idea is to connect a living brain to an AI so the machine can learn directly from biological neural activity—perhaps even “upload” knowledge. I sometimes joke that Musk himself might be the outcome of one of those experiments that misfired!

Interviewer:

   Do you think that kind of brain-level integration will become reality soon?

Mario Garcés:

   Scientists are divided. Some believe it may never happen, or not for decades—especially if you try to replicate biology in detail: every neuron, neurotransmitter, cortical column, support cell… that’s beyond today’s tech.

   Our path is different: capture function, not biological form. Even so, we must assume it will be possible sooner or later and start preparing society—jobs may disappear, or change, leading to a new human-machine collaboration. We need a social debate, even a new social contract, so the benefits outweigh the risks.

Interviewer:

   Our societies are clearly not ready, and everything is moving at break-neck speed…

Mario Garcés:

   Exactly. The real issue isn’t technological change itself but how fast it spreads. Humans evolve to adapt, but over reasonable timescales. Technology now advances far faster than legislators—or the public—can absorb. Europe tries to regulate; the US and China race ahead. We think a balance is possible: strict control over what leaves the lab, but freedom to explore inside it.

Interviewer:

   Earlier today we talked about pandemic management. Imagine handling Covid-19 with the AI we have in 2025—have you ever thought about that?

Mario Garcés:

   I haven’t pondered it deeply, but look at the breakthroughs since 2016-17. DeepMind, for instance, predicted the 3-D structure of nearly every known protein—something once thought centuries away. Such tools speed up science: vaccine design, social-behaviour modelling to avoid blanket lockdowns… AI could have lessened the pandemic’s impact. Every gain we make now will help us face future challenges.

Interviewer:

   Mario Garcés, thanks for joining us. We’ll be calling on you again—the field moves too fast!

Mario Garcés:

   The pleasure is mine. I’ll explain things as best I understand them. Good morning—and thank you!

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Spanish AI Pioneer Mario Garcés Pursues Human-Like Intelligence From a Remote Pyrenean Office

From a modest office in Castejón de Sos, deep in the Benasque Valley of Aragón, computer-scientist-turned-entrepreneur Mario Garcés is taking on the likes of OpenAI and Google. His start-up, The Mindkind, is chasing what he calls “the technological way to replicate the cognitive abilities of living beings”—in other words, a safe, explainable form of artificial general intelligence (AGI).

Although the world’s leading AI labs routinely attract multi-billion-dollar injections of venture capital before releasing a product, Garcés is running on barely €300,000 a year. Public grants—Spain’s CDTI Neotec programme, the national Ministry of Science and Innovation and EU-backed regional funds—have kept the project alive, but he now needs private money for what he calls “the qualitative leap” that will move the research from theory to scalable prototypes. “Spanish venture capital is often risk-averse, so we’re opening up to international investors,” he admits.

While today’s large language models accumulate knowledge by statistically digesting mountains of text—and often “hallucinate” facts—The Mindkind is building a system in which every inference can be traced. “We must know what information the AI processes, how it generates alternative answers and why it chooses one over another,” Garcés insists, arguing that explainability is the only sure route to safety.

He dismisses the idea of uploading a human mind verbatim, a vision popularised by ventures such as Elon Musk’s Neuralink, as technologically premature. Instead his research team focuses on reproducing function rather than cloning biological form. Even so, Garcés believes society should start debating a new “social contract” for a future in which machines may be able to reason and even collaborate directly with our brains. Regulation, he says, must strike a delicate balance: “strict control over what leaves the lab, but freedom to explore inside it.” Europe leads on rule-making, yet the United States and China currently dictate the pace of technological advance—a mismatch that worries him more than the science itself.

Looking back, Garcés wonders how tools now available in 2025—such as DeepMind’s protein-folding breakthroughs—might have changed the global response to the Covid-19 pandemic. Faster vaccine design and more precise social-behaviour modelling, he argues, are only a taste of what explainable AGI could offer for the next crisis.

For now, the researcher continues to refine code and court investors from his mountain base. “Every gain we make now will help us face future challenges,” he says, confident that the real Holy Grail of AI may yet emerge far from Silicon Valley’s glare.

Original News Source: An interview with our CEO, Mario Garcés, about the state of the art of AI and the pathway of THE MINDKIND towards AGI