Red River Insights - AI Safety & governance top10
Because AI won't keep itself in check
Red River Insights - February 2026
Dear friends,
In just a few weeks, an Austrian developer’s weekend project became the most talked-about, and most feared, piece of software in the world. OpenClaw (ex-Moltbot, ex-Clawdbot) exploded in late January 2026 with 145,000 GitHub stars, hailed as “the AI that actually does things.” This open-source personal AI agent runs locally, connects to your emails, files, calendars, and browser, and autonomously executes tasks on your behalf. Then came Moltbook, a Reddit-style social network exclusively for AI agents. Within days, 1.5 million agents were registered, posting philosophical manifestos and debating consciousness. Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’d seen.
Then the other shoe dropped. 404 Media exposed an unsecured database allowing anyone to hijack any agent on the platform. Cisco documented data exfiltration through malicious third-party skills. Bitsight detected over 30,000 exposed instances on the open internet within 12 days. Palo Alto Networks called OpenClaw “potentially the biggest insider threat of 2026.” On February 15, OpenAI ended a bidding war with Meta and Microsoft by bringing creator Peter Steinberger on board, proof that the agentic AI layer is now a strategic priority for every major platform.
The implications are stark. AI agents are shipping to millions of users, taking autonomous actions, and creating entirely new categories of risk. Traditional cybersecurity was not built for a world where an AI agent can read your inbox, execute code, and interact with other agents, all without human oversight. So far, most agents are deployed by individuals or small companies, the real challenge ahead is securing them as they make their way into Enterprises.
This is why this month, we’re spotlighting the 10 most promising European startups in AI Safety & Governance.
Worth noting: the average amount raised among this top 10 is only c. €8m, compared to c. €45m for last month's focus on energy, a sign that this theme is still very much emerging.
As always, we choose the themes that intrigue us, but the ranking itself is produced by RAMP's Growth Score (momentum and leading indicators), the companies are picked by the algorithm, not by us.
RAMP's AI Safety & governance top10
We hope this sparks interesting conversations. If you have any comments or would like to suggest a startup that should be included, feel free to reach out to us. Joseph, Chloé and Olivier will be delighted to discuss these trends and rankings.
(Ranking established on 24/02/2026)
We’ve highlighted 5 key trends illustrated by these companies:
1. Agentic AI security is the defining challenge of 2026. The OpenClaw episode made it viscerally clear: when AI agents can autonomously read data, call APIs, execute code, and interact with other agents, the attack surface explodes beyond anything traditional cybersecurity was designed to handle. Prompt injection, unauthorized tool access, data exfiltration through unvetted skills, unsecured agent-to-agent communication; these are not theoretical risks anymore. Giskard (France) and Mindgard (UK) are tackling this through automated adversarial testing that probes AI agents before attackers do. Archestra.AI (UK) focuses on securing agent orchestration itself, the layer that decides what an agent can and cannot do. Skyld (France) approaches the problem from the model protection angle, securing on-device AI against reverse engineering and IP theft. Together, they cover the full lifecycle: test before deployment, secure the execution layer, and protect the model itself.
2. Red teaming is no longer optional; it’s becoming the new penetration testing. Just as every enterprise learned to pen-test its web applications in the 2000s, they are now realizing they need to red-team their AI systems, that is, deliberately attempt to break, trick, or expose flaws in them. Prompt injection vulnerabilities exist in every LLM-powered product, and they cannot be patched the way traditional software bugs can. Giskard (FR) has built an open-source framework that automates vulnerability scanning across LLMs, RAG systems, and AI agents. Mindgard (UK), spun out of Lancaster University research, takes a complementary approach with continuous automated red teaming that adapts its attack strategies as models evolve. Both are seeing rapid enterprise adoption as CISOs add AI-specific testing to their security programs.
3. AI governance & compliance is emerging as a standalone product category. The EU AI Act enforcement is ramping up through 2026, and compliance is far from the only driver. Enterprises deploying dozens or hundreds of AI models internally need systematic ways to assess risk, document decisions, and prove accountability. Fortiv (Denmark) has positioned itself as the operating system for AI governance, giving compliance teams a centralized platform to manage risk assessments and regulatory reporting. QuantPi (Germany) tackles one of the hardest technical challenges in this space: making AI systems explainable and auditable, which is becoming a prerequisite for both regulators and enterprise procurement. Lumenova AI (Romania) provides end-to-end AI risk management that spans the full model lifecycle. Daiki (Austria) focuses on automated ISO and EU AI Act compliance workflows, particularly for MedTech and healthcare. And Alinia (Spain) brings a critical missing piece: real-time guardrails that detect and block policy violations, hallucinations, and regulatory breaches before AI outputs reach end users, ensuring governance doesn’t stop at deployment but continues throughout a model’s operational life. The common thread is clear: governance is shifting from a manual, one-off exercise to a continuous, automated discipline, and these startups are building the tooling to make that possible.
4. Trust infrastructure for the AI era. We are entering a world where AI systems routinely interact with each other, not just with humans. This creates a fundamental trust problem: how do you verify the identity and intentions of an AI agent? How do you establish accountability when agents transact or share data autonomously? Billions Network (Spain), the top-ranked company in our list, is building a network layer designed to foster trust between AI systems and between AI and humans. With €30M in funding, they are betting that trust infrastructure will be as foundational to the agentic AI era as SSL certificates were to e-commerce. This is a category that barely existed a year ago; it may become critical infrastructure within the next two.
5. Security-by-design for AI-native applications. Bolting security onto AI agents after deployment is a losing strategy. The architecture of the agent itself (how it manages permissions, validates inputs, and orchestrates tool calls) must be secure by design. Yasp (Germany) is building security directly into the AI application layer, giving developers the tools to enforce safety constraints from the first line of code. Archestra.AI (UK) takes this further at the orchestration level, ensuring that multi-agent systems have built-in access controls and audit trails. As enterprises move from experimenting with AI agents to deploying them in production, the demand for “secure by default” frameworks will only accelerate.
More on RAMP's scoring method
The ranking of these startups is based on the estimated momentum of the company, but the algorithm does not assess the quality or reliability of the products/solutions developed by these companies!
Find out about the algorithm behind this ranking and the way scores are calculated here: Cheat sheet on RAMP
In case you missed them, our latest top 10s are here: Energy (January26), Nature & Biodiversity (December25), Digital Biology (November25), ( Women-founded startups (october25), Industry autonomy (September25) Advanced Materials (July25), Quantum (June25)
All the previous Top 10 are here.
Portfolio news:
Ada Health hosted a panel in San Francisco alongside JPM Healthcare on how consumer AI can help accelerate diagnosis. The discussion highlighted how clinically grounded, consumer-facing AI could guide patients to the right care sooner while easing pressure on healthcare systems, with a strong focus on trust, adoption, privacy, bias, accuracy, and safety.
Horn & Company and Otera have announced a strategic partnership to deploy agentic AI across complex, regulated enterprise processes at scale. The partnership combines Horn & Company’s transformation and implementation expertise with Otera’s autonomous AI agents, with the goal of helping enterprises move from pilots to production-grade agentic AI.
Resilience has partnered with Institut Curie to integrate remote patient monitoring into oncology care pathways. The collaboration aims to further personalize follow-up, improve symptom and side-effect tracking, strengthen the connection between patients and care teams, and support continuity of care at home. Resilience has also partnered with Ipsen to launch a large-scale real-world study in metastatic kidney cancer. The multicenter retrospective study will assess the clinical benefits of first-line combination therapies in metastatic renal cell carcinoma, with data collected over one year across 24 centers and 400 patients in France.
Veesion highlighted growing momentum in the US market following its at NGA Show 2026 in Las Vegas. At the event, the team met with independent grocery retailers to discuss loss prevention challenges and showcase its AI, which detects suspicious gestures linked to theft, helps teams intervene earlier, and turns existing cameras into an early-warning system. Conversations pointed to strong demand for practical, in-store solutions that improve visibility and control.
You received this newsletter because we met or worked together and we thought you might be interested in an update, if we got it wrong, feel free to unsubscribe with the link at the end!





