Most boards know the EU AI Act exists. Few have asked the four questions that actually matter for their exposure.
The EU AI Act is not primarily a legal compliance question. It is a portfolio-classification question, and the board's role is to ensure that classification is being done seriously.
Why Legal Cannot Solve This Alone
The instinct when a new regulation arrives is to route it to legal and risk. That is necessary but insufficient. The EU AI Act classifies AI systems by risk category — prohibited, high-risk, limited-risk, minimal-risk — and the classification is fact-driven, not purely legal.
Legal can tell you what each category requires. Only the technical and business teams can tell you which category each of your systems is actually in. And that classification process is where most organisations are quietly behind.
The Four Questions Boards Should Be Asking
First: do we have a documented inventory of every AI system in use or in development across the enterprise — including third-party AI embedded in software we have licensed? Most boards will get an uncomfortable answer here.
Second: which of those systems fall into each EU AI Act category, and what is our confidence level in the classifications?
Third: for systems classified as high-risk, what is the conformity assessment plan and timeline?
Fourth: where are our biggest classification uncertainties, and what are we doing to resolve them before they become regulator-facing problems?
What Good Looks Like by End of Q3 2026
By the end of Q3 2026, an organisation that takes this seriously will have a maintained AI inventory with clear ownership; a documented classification methodology applied consistently; a tracked workstream for conformity assessment on high-risk systems; and a board cadence — quarterly at minimum — for reviewing material changes to the portfolio classification.
Organisations that get this in place quietly are not just managing regulatory risk. They are building a portfolio-management capability that will pay off well beyond the AI Act.
What to do next
- Commission a complete inventory of AI systems in use, including embedded third-party AI
- Establish a classification methodology and apply it consistently across the inventory
- Track conformity assessment timelines for high-risk systems at board cadence
- Resolve classification uncertainties proactively, not in regulator correspondence
Grant & Graham works with boards, audit committees, and CTOs of EU-operating enterprises. If your organisation is facing an AI Act readiness programme without a current, defensible system inventory, we can help. Our EU AI Act portfolio classification, governance design, and board advisory engagements deploy in days, not months. Get in touch or email andrew@grant-graham.co.uk.