Doing AI Governance | Edition #53: You Achieved ISO42001 Certification. Welcome to the Start Line.


This newsletter provides practical guidance, tools and resources for the real work of governing safe, secure and lawful AI.

Newsletter #53 - March 2026

Feature Article

You Achieved ISO42001 Certification. Welcome to the Start Line.

By James Kavanagh


The gap between compliance documentation and genuine system-level assurance is where AI governance is falling down. Drawing on lessons from the Grenfell Tower tragedy and frontier AI safety case research, this article makes the case that completing an assurance of organisational conformance, such as ISO 42001 certification, is where the real work begins, not where it ends.

I believe that real safety and governance work relentlessly focuses on the outcome, not the input. And standards like ISO 42001, frameworks like the NIST AI Risk Management Framework and even laws like the EU AI Act should only ever be treated as inputs.

ISO 42001 certification tells you that an organisation has started to build the machinery of effective AI governance. Processes exist, policies are written, roles are assigned, documentation is in order. That’s a genuine achievement and I don’t dismiss it. But I think we should be honest that it tells you almost nothing about whether any specific AI system is actually behaving safely or securely right now. That demands different questions, and a different mindset. This article is about that gap in practice and how to bridge it.

I draw on what I learned over nearly two decades building and working within management systems at Microsoft and Amazon, on the safety case approach from offshore and other safety-critical industries, and on recent frontier AI safety research from Anthropic and the UK AI Security Institute. The perspective I put forward is straightforward: if you can’t make a structured, evidence-based argument about why a specific system is acceptably safe in a specific context, your governance is incomplete; it may even degrade into performative theatre, regardless of what certificate is on the wall.

This one is technical. It’s written for practitioners who are doing the work, or about to. I hope it provides some good pointers on what to think about next. Read the full article

Did you know we've launched a NEW FREE course?

Doing the Work of AI Governance covers the adaptive governance approach that underpins everything we teach, through three real case studies of governance failure and one of governance done right.

From the Blog

Meaningful Human Oversight of AI

By James Kavanagh


The feature article above asks whether your governance can produce evidence that your AI systems are operating safely. This earlier article asks the question that sits underneath: can the humans in your governance process actually see what’s happening, and can they act in time to change the outcome?

It opens with the Dutch childcare benefits scandal, where a machine-learning fraud detection system destroyed thousands of families, while oversight existed on paper. Amnesty International described that oversight as “formal but not effective.” The article also covers Zillow’s iBuying collapse and Uber’s fatal self-driving crash, each showing a different way that human oversight erodes: through scaling pressure, through poor interface design, through authority that looks clear in a document but dissolves when it matters.

If you’re responsible for governance in your organisation, or you’ve recently taken on that responsibility, the ten pressure-test questions at the end of this article are worth running against your own systems. They surface uncomfortable truths early, which is exactly when you want them. Read the full article

The AI Governance Director's Brief (LinkedIn)

A Cautionary Tale of AI, Risk and a Third Party

By Alexandra El-Shamy (Edition 9: Published February 25, 2026)


“Builder.ai raised over $450 million, was ranked the third most innovative AI company in the world, and had Microsoft, the BBC and Virgin as customers. Then it collapsed, and it turned out the AI had been 700 engineers in India the whole time.”

In Edition 9 of the Director’s Brief, we use the Builder.ai story to explore a risk that sits in almost every organisation: AI supplier relationships that haven’t been properly assessed. The core issue with Builder.ai was fraud, but the exposure pattern is something directors should recognise. AI capabilities are hard to evaluate, contract terms are often accepted without scrutiny, and in most organisations the Board has no visibility into this risk at all. I share four questions I’d be asking if I believed my organisation was exposed. Read the full article

Credential + Capability Bundle

AIGP Exam Preparation + the AI Governance Practitioner Program

US$249

AIGP Exam Preparation: 117 video lessons, 700+ practice questions, 5 full practice exams. Aligned to the 2026 IAPP AIGP Body of Knowledge.

The AI Governance Practitioner Program: Four courses of the Foundation Track covering everything from building AI system inventory to writing governance policies, managing AI risk, and designing adaptive governance mechanisms.

AI Career Pro is independent and not affiliated with, endorsed by, or sponsored by the International Association of Privacy Professionals (IAPP).

What we're working on

Right now we're focused on two big things.

First, I'm completing the review of all 500 exam questions for our AIGP Exam Prep course. These sit alongside the 700 questions already in the course, and they'll be published within days. I think the best possible way to prepare for an exam is to sit practice exams, so I'm excited to release them very soon.

Second, the team are building out two specialist courses: one that dives deep into regulatory compliance, and one that dives deep into threat and risk modelling. Both build towards skills mastery in adaptive governance, and so they're coming with practitioner tools that have been in development for more than six months. I'm excited to show you what we've built.

If anything in this edition sparked a question, hit reply. I read every message.

PS. You're receiving this as a subscriber to communications from AI Career Pro. We respect your privacy, so please unsubscribe through the link below if you do not wish to receive these communications in the future.

PO BOX 7087, Redhead, NSW 2290
Unsubscribe · Preferences

Doing AI Governance

Join over 4,500 subscribers and learn about the real work of AI governance. Moving beyond theory, we focus on the practical application of AI governance in real-world organisations with case studies, tools, templates and guidance. Led by James Kavanagh, the AI governance practitioner who led governance at both AWS and Microsoft.
