We make it possible for organisations to use AI: safely, smartly, and on their own terms.

We handle governance, knowledge, and trust, so your teams and agents can use any AI model safely, on your terms.

Every organisation wants to use AI. Most can't, because they don't trust it with their data, their knowledge, or their reputation. Aimable changes that.

We built a platform that sits between your people and AI, enforcing your policies, grounding answers in your verified knowledge, and giving you full control over every interaction. So you can move fast without giving anything up.

2025
Founded
Amsterdam
Headquartered
EU
Built & hosted
Model-agnostic
Works with any AI

What we believe

01

AI should work for the organisation, not the other way around.

Most AI tools ask you to change how you work. We think the technology should adapt to your teams, your processes, and your standards, not the reverse.

02

Trust is earned through architecture, not promises.

Anyone can write a privacy policy. We build trust into the system itself: every interaction governed, every decision auditable, every boundary enforced by design.

03

Knowledge quality matters more than model choice.

The best model in the world gives bad answers without the right context. We focus on connecting AI to your verified knowledge, so the output is grounded, relevant, and reliable.

What drives us

The principles behind the Aimable platform, and behind everything we build.

A platform, not a plugin

AI governance shouldn't be an afterthought. We build it as the foundation, so everything on top is safe by default.

Sovereignty is non-negotiable

Your data, your models, your rules. Whether in our EU cloud or on your own terms, you keep full control.

Purpose-driven, not prompt-driven

Every Space has a purpose. Aimable uses it to route to the right model, apply the right policies, and draw from the right knowledge, automatically.

Transparency you can prove

Full audit trail on every interaction. What was asked, what was sent, what was redacted, what was returned, all exportable for compliance.

Humans and agents, same rules

An agent can't bypass redaction or access knowledge it has no rights to. Every interaction is governed identically β€” person or automated workflow.

Founding Team

Ian Zein

CEO

Former co-founder and CEO of Sentia, a 600-person international cloud company focused on mission-critical IT.

ArjΓ© Cahn

CPO

Former co-founder and CTO of Hippo and later Chief Product Officer at Bloomreach, a Silicon Valley unicorn.

Bart Evers

CTO

Former co-founder of Gillz, now part of VINCI Energies, with extensive enterprise engineering and delivery experience.

Ludger Visser

Founding & Lead Engineer

Senior engineer with deep experience in AI, machine learning, data, and software engineering.

Pim Verschoor

Founding Client Partner

Ten years of enterprise IT experience in cyber security and regulated industries.

Want to build something together?

We're building the platform for every organisation that needs governed AI. If that excites you, we'd love to hear from you.