
ai-PULSE 2025: Inside the Ideas and Announcements Shaping the Next Era of AI
A look inside the breakthrough ideas and announcements that defined ai-PULSE 2025 — from world models to robotics, voice AI, and Europe’s growing momentum.

AI’s next phase is defined by deployment, not demos.
At ai-PULSE 2025, leaders from defense, pharmaceuticals, music streaming, and automotive showed what happens when AI moves from labs into real-world environments. Whether a system is flying a fighter jet or reshaping global content personalization, the constraint shifts from model capability alone to robustness and auditability. AI delivers impact only when embedded cleanly inside existing operational systems, aligned with both internal policies and external regulation, and engineered for reliability under pressure.
This article brings together the key sessions that illustrated how AI is being applied across industries.
Thomas Palomares, AI Research Engineer at Helsing, presented the first AI system to fly a production fighter jet, a key milestone for real-world agentic AI. He reframed modern air combat as no longer a visual “dogfight,” but a beyond-visual-range (BVR) environment shaped by partial observability, massive sensor data, and seconds-long decision cycles. As he put it, it is “less of a physical brawl, but more like a high-speed 3D chess game played in a hurricane.”
Helsing’s Centaur system is designed for this reality. Trained using reinforcement learning, the agent reasons under uncertainty and long-term dependencies while running entirely onboard the aircraft. It ingests the same inputs as a human pilot — mission objectives, sensor feeds, situational context — and outputs guidance, maneuver recommendations, and weapon-timing suggestions, with final validation remaining with the pilot.
The hardest challenge was bridging simulation and live flight. High-fidelity simulators alone proved insufficient, leading Helsing to build AI-first simulators optimized for scale and speed, compressing decades of engagements into days of training. Robustness was prioritized through extensive randomization rather than perfect realism. Combined with a strict separation between certified flight-control systems and high-level tactical reasoning, this enabled successful live tests. The result, Palomares stressed, is “much more than just a better autopilot”: a system designed to give human pilots a decisive informational edge. (▶️ Watch session in full)
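Helsing has not published implementation details, but the randomization-over-realism idea Palomares described is, in essence, domain randomization: perturb simulator parameters on every training episode so the policy cannot overfit to one physics configuration and transfers better to live flight. A minimal, purely illustrative sketch — all parameter names and ranges below are hypothetical, not drawn from any real flight simulator:

```python
import random

# Hypothetical simulator parameters randomized each training episode.
# Ranges are illustrative only.
PARAM_RANGES = {
    "sensor_noise_std": (0.0, 0.3),   # radar/IR measurement noise
    "wind_speed_mps": (0.0, 40.0),    # atmospheric disturbance
    "target_rcs_scale": (0.5, 2.0),   # radar cross-section variation
    "comms_latency_s": (0.0, 0.5),    # delayed situational updates
}

def sample_domain() -> dict:
    """Draw one randomized simulator configuration (one 'domain')."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def train(policy, simulator, episodes: int):
    """Domain-randomized RL loop: every episode runs under a different
    physics draw, so the learned policy is robust to the sim-to-real gap."""
    for _ in range(episodes):
        simulator.reset(**sample_domain())
        rollout = simulator.run(policy)
        policy.update(rollout)  # any RL update rule (e.g. PPO) plugs in here
```

The design point is that robustness comes from the distribution of environments seen in training, not from the fidelity of any single one.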

Joel Belafa, CEO, Biolevate
Antoine De Torcy, Chief AI Officer, Biolevate
Cédric Mahé, Senior Global Medical Expert, RWE and Partnerships, Sanofi Vaccines
Sophia Metz, Founder, Biostream
Moderated by Biostream founder Sophia Metz, this session explored how AI is reshaping pharmaceutical R&D, from epidemiological surveillance to drug discovery and vaccine design. Metz guided the discussion across public health, drug discovery, and regulated deployment, consistently grounding technical ambition in real-world constraints.
Cédric Mahé (Sanofi Vaccines) highlighted structural limits in public health surveillance: delayed data, siloed indicators, and underused digital signals. As he stressed, “the method to determine what is the best antigen to put in the vaccine has not changed a lot for the last 50 years…” Drawing on pilots using GP software, social media, wastewater testing, and private labs, he argued for real-time, multi-source epidemiology and AI-driven antigen selection.
Antoine de Torcy (Biolevate) explained why this requires more than deploying frontier LLMs. LLMs remain limited by context size and long reasoning chains, making complex biomedical problems unsolvable in one go. Biolevate’s approach combines structured knowledge navigation, next-generation vector stores, and agent orchestration to deliver traceable, reproducible workflows compatible with healthcare regulation.
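Biolevate's stack is proprietary, but the traceability requirement de Torcy emphasized can be illustrated with a toy retrieval step that keeps a provenance record for every document it surfaces. A minimal sketch with a fake in-memory vector store — all document IDs, embeddings, and texts below are invented for illustration:

```python
import math

# Toy in-memory "vector store": (doc_id, embedding, text).
# Embeddings are hand-made 3-d vectors purely for illustration.
STORE = [
    ("doc-001", [1.0, 0.0, 0.0], "Antigen X phase-1 immunogenicity summary"),
    ("doc-002", [0.0, 1.0, 0.0], "Wastewater surveillance signal, region A"),
    ("doc-003", [0.9, 0.1, 0.0], "Antigen X preclinical toxicology review"),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the top-k documents with scores AND doc IDs, so every
    downstream claim can be traced back to its source (the audit requirement
    healthcare regulation imposes on such pipelines)."""
    ranked = sorted(STORE, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [{"doc_id": i, "score": round(cosine(query_vec, v), 3), "text": t}
            for i, v, t in ranked[:k]]
```

Carrying the `doc_id` through every step, rather than only the retrieved text, is what makes the workflow reproducible and auditable after the fact.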
Joel Belafa (Biolevate) anchored the discussion in impact: AI pipelines already run in production for compliance, literature review, and therapeutic discovery, including oncology programs reaching early preclinical validation. “We already filed patents based on discoveries we made with AI,” he shared.
Our speakers shared the same conviction: AI’s real leverage in pharma now lies in scalable, auditable systems that can operate reliably across science, industry, and public health. (▶️ Watch session in full)

For Spotify, AI is not about adding a single feature — it is reshaping the company’s core strategic pillars. As Romain Takeo Bouyer, Spotify's Global Head of Content Analytics, explained, AI now runs horizontally across personalization, ubiquity, and freemium, driving a major architectural shift “from prediction to reasoning.”
This transformation requires exposing real-time data at scale, redesigning interfaces for richer user input, and building more agentic systems. The result is a move from static recommendations to fluid interactions, embodied in the conversational DJ that lets listeners “ask about your listening history for the past 15 years” and get instant, contextual responses.
AI also accelerates Spotify’s ubiquity strategy, anchoring the service wherever users are. Features such as “Hey Spotify” and the service’s integration into ChatGPT extend listening and playlist creation into AI-native environments. On the monetization side, AI enhances the freemium funnel by powering premium-only features — such as Audiobooks Recap — that strengthen retention and make subscription economics more aligned with compute-intensive experiences. As Bouyer noted, Spotify’s long-standing marginal-cost model creates a natural synergy with subscription-based AI.
Finally, AI introduces new responsibilities. With 75 million tracks removed for abuse in 2025, Spotify is scaling enforcement systems, supporting DDEX metadata updates for AI transparency, and strengthening impersonation safeguards. Bouyer emphasized that innovation must go hand in hand with trust: the goal is not to replace artists but “to enhance the very human link between an artist and a listener.” (▶️ Watch session in full)

Andrei Bursuc, Deputy Scientific Director at Valeo, argued that autonomous driving remains fundamentally a safety problem, with the vast majority of accidents caused by human error. Over time, driving stacks have evolved from highly modular, hand-engineered systems toward architectures centered on large neural networks — either hybrid or fully end-to-end — both now critically dependent on foundation models.
Discussing foundation models’ ability to “import knowledge” far beyond narrow driving datasets, Bursuc distinguished between closed models, open-weight but opaque models, and fully open “open science” approaches that expose data, code, and checkpoints. For its part, Valéo has deliberately chosen the latter to improve transparency, debugging, and collective progress.
As proof of this strategy, Bursuc introduced FRANK, a fully open vision foundation model trained on 600 million publicly available images, showing strong performance on driving-relevant perception tasks. He then presented VaViM and VaVAM, a video world-model architecture that predicts future scene dynamics and couples them with action planning for safer behavior in rare or adversarial scenarios. Bursuc concluded that competitive open models can deliver meaningful advances without hyperscale data, while remaining safety-constrained. (▶️ Watch session in full)
On December 4, 2025, we hosted the third edition of ai-PULSE, Europe's premier AI conference.
With 1,600+ people gathered at STATION F in Paris and thousands more joining online, this was our biggest and most ambitious edition yet — a place where leading researchers, founders, and builders from Europe and beyond came to explore where AI is heading next.
If you couldn’t follow everything live, our white paper distills the key takeaways from 30+ sessions into one structured recap.

