AI systems don’t go rogue out of malice; they go rogue out of missing structure. Every organisation says it wants “responsible AI.” Fewer can explain what that means day to day—who decides if a use case is safe, who signs off, and how to prove that the decision was sound. That gap between policy and practice is where things go wrong. It’s also where real governance has to begin.
The European Union’s AI Act doesn’t actually ask for another set of policies—it asks for evidence of proportional control. In plain English: the bigger the risk, the tighter the checks, the better the record-keeping. When I set out to design an AI-governance framework for our organisation, I went back to an older idea: decision support. The same discipline that once helped managers make rational choices can help AI systems stay inside their lanes. The premise is simple—data, model, dialogue. Collect the right data. Apply consistent logic. Keep humans in the loop. If you do those three things, you’ve already satisfied half of what regulators and auditors will look for. More importantly, you’ve built a culture that thinks before it acts.
The framework I built runs on two complementary checks. The first is the Compliance Gate. Before any AI tool is used, it’s paired with its intended task and passed through a two-by-two grid—low-risk to high-risk tasks on one axis, well-governed to experimental tools on the other. If a pairing falls into a red zone—say, a generative model on sensitive data—the workflow stops automatically. No paperwork. No delay. Just a deterministic no until the missing controls are in place.
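A minimal sketch of that gate, assuming a simple red-zone lookup. The enum names and the single red pairing are illustrative, not the framework's actual classification:

```python
from enum import Enum

class TaskRisk(Enum):
    LOW = "low"
    HIGH = "high"

class ToolGovernance(Enum):
    WELL_GOVERNED = "well_governed"
    EXPERIMENTAL = "experimental"

# Illustrative red zone: a high-risk task paired with an experimental tool is a hard stop.
RED_ZONE = {(TaskRisk.HIGH, ToolGovernance.EXPERIMENTAL)}

def compliance_gate(task_risk: TaskRisk, tool: ToolGovernance) -> bool:
    """Return True if the tool/task pairing may proceed; False is the deterministic no."""
    return (task_risk, tool) not in RED_ZONE

# A generative model on sensitive data: high-risk task, experimental tool, so the workflow stops.
assert compliance_gate(TaskRisk.HIGH, ToolGovernance.EXPERIMENTAL) is False
assert compliance_gate(TaskRisk.LOW, ToolGovernance.WELL_GOVERNED) is True
```

The point of expressing the grid as code rather than a slide is that the "deterministic no" is literal: the same pairing returns the same answer every time, and nothing proceeds until the classification itself changes.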
The second is the AURA Review (AI Usage ROI Assessment). Once a tool is cleared for use, it’s scored with a proportionality equation that weighs value, residual risk, fit, and control cost. High-value, low-risk projects move quickly; high-risk, marginal-value experiments trigger more scrutiny. It’s a lightweight way to ensure that governance isn’t only stopping things—it’s also optimising them.
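The equation itself isn't published here, so the sketch below assumes one plausible form: value scaled by fit, discounted by residual risk and control cost. The field names, ranges, and weighting are assumptions, not the framework's actual formula:

```python
from dataclasses import dataclass

@dataclass
class AuraInputs:
    value: float          # expected business value, normalised 0..10
    residual_risk: float  # risk remaining after controls, 0..10
    fit: float            # fit with existing processes and skills, 0..10
    control_cost: float   # effort to operate the required controls, 0..10

def aura_score(i: AuraInputs) -> float:
    # Value scaled by fit, discounted by what the use case costs in risk and control effort.
    return (i.value * i.fit) / (1.0 + i.residual_risk + i.control_cost)

# High-value, low-risk work scores well; marginal, risky experiments do not.
print(aura_score(AuraInputs(value=9, residual_risk=1, fit=8, control_cost=2)))  # 18.0
print(aura_score(AuraInputs(value=3, residual_risk=7, fit=4, control_cost=5)))  # ~0.92
```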
What emerged during implementation was less a policy layer and more a feedback loop. Each governance decision generated structured data: who owned it, who reviewed it, what was approved, what evidence backed it up. Over time, even limited deployment produced a learning system, one that could highlight redundant tools, flag control gaps, and feed audit-trail automation. The same reasoning that makes a support organisation resilient or a privacy team effective applied here as well: translate judgement into process, and process into insight.
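As a hedged example, that structured data could be as simple as a record type whose fields mirror the ones named above (owner, reviewer, approval, evidence); the identifiers and evidence IDs below are made up:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceDecision:
    use_case: str
    tool: str
    owner: str
    reviewer: str
    approved: bool
    evidence: list[str] = field(default_factory=list)  # document IDs, DPA references, review links
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

decisions = [
    GovernanceDecision("summarise support tickets", "vendor-llm", "a.owner", "b.reviewer",
                       approved=True, evidence=["DPA-2024-17", "AURA-0042"]),
]

# The same records that satisfy an auditor also feed the learning loop,
# for example spotting several tools approved for overlapping use cases.
approved_tools = {d.tool for d in decisions if d.approved}
print(approved_tools)
```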
Good AI governance borrows more from engineering than from law. You don’t rely on humans remembering the right thing; you design systems that make the right thing the easiest thing. A missing approval? The workflow won’t proceed. A tool without a data-processing agreement? It can’t be selected. Every step generates evidence—not because someone demanded it, but because the design logic enforces it. That’s what “privacy by design” and “support by design” really mean in practice: controls that are invisible when they work and unmistakable when they don’t.
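One way to express "the workflow won't proceed" is a guard that every AI step must pass before it runs. The function name, fields, and log line below are illustrative, not an existing API:

```python
from typing import Optional

class ControlError(Exception):
    """Raised when a required control is missing; the workflow stops here."""

def require_controls(tool_name: str, dpa_signed: bool, approval_id: Optional[str]) -> None:
    """Enforcement point called before any AI step runs; field names are illustrative."""
    if not dpa_signed:
        raise ControlError(f"{tool_name}: no data-processing agreement on file")
    if approval_id is None:
        raise ControlError(f"{tool_name}: no approved governance decision for this use case")
    print(f"audit: {tool_name} cleared under approval {approval_id}")  # evidence by design

require_controls("vendor-llm", dpa_signed=True, approval_id="AURA-0042")
```

Because the guard either raises or logs, the evidence trail is a by-product of execution rather than a separate reporting task.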
A few lessons generalise from the experiment. Treat AI use cases like financial transactions: log them, classify them, trace them. Test with dummy data before you test your luck; synthetic or hashed datasets let you stress-test behaviour without exposing people. Build proportionality into the workflow; high risk doesn’t mean no-go, it just means more eyes, more context, more evidence. Make governance computable—if you can’t run your rules as code, they’ll never keep up with the systems they’re meant to control. And keep learning loops short, because governance is a living service, not a static document.
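Two of those lessons translate directly into code. The sketch below assumes nothing beyond the Python standard library: a salted hash stands in for "synthetic or hashed datasets" (illustrative only, not a production pseudonymisation scheme), and a rule written as a function over the decision log shows what "governance computable" can mean in a pipeline:

```python
import hashlib

def pseudonymise(identifier: str, salt: str = "rotate-me") -> str:
    """Salted hash as a stand-in identifier for stress tests with dummy data."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

def high_risk_cases_missing_reviewer(decision_log: list[dict]) -> list[str]:
    """A governance rule as code: return violations so a pipeline can fail on them."""
    return [d["use_case"] for d in decision_log
            if d.get("risk") == "high" and not d.get("reviewer")]

log = [
    {"use_case": "claims triage", "risk": "high", "reviewer": None},
    {"use_case": "ticket tagging", "risk": "low", "reviewer": "b.reviewer"},
]
print(pseudonymise("jane.doe@example.com"))   # stable pseudonym, no raw identifier stored
print(high_risk_cases_missing_reviewer(log))  # ['claims triage']
```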
For me, AI governance isn’t about bureaucracy; it’s about designing reliability. Whether you’re running global customer support or privacy engineering, the same principle holds: systems behave well when they have feedback, accountability, and a memory of what happened last time. That’s what this framework set out to prove. It’s decision support for decisions about machines—a way to make sure our AI systems, like our teams, think before they act.