The compliance officer's guide to explaining local-first AI to the board
A plain-spoken script for the governance, risk, or data protection lead who has five minutes on the board agenda to explain where the organisation stands on AI.
Most boards have now heard enough about AI to know they should be asking questions, but not quite enough to know which ones. If you're the person responsible for governance, risk, or data protection, there's a reasonable chance you've been given a five-minute slot somewhere between the cyber update and any other business to explain where the organisation stands.
This is a guide to using that time well.
Why the conversation has sharpened
A few things have shifted in the last year or so that make the AI question harder to defer.
The EU AI Act's main obligations for high-risk systems become enforceable on 2 August 2026, with fines of up to €15 million or 3% of global turnover for non-compliance, rising to €35 million or 7% for prohibited practices (CompliQuest). That doesn't mean every organisation is operating a high-risk system, but it does mean the documentation question — what AI do we use, what data does it touch, who is accountable for it — now has a deadline attached.
At the same time, the insurance market has started to treat AI as a distinct category of risk. The ISO has introduced generative-AI exclusions (CG 40 47, CG 40 48 and CG 35 08) that carriers are increasingly attaching at renewal, and some insurers have filed broader "absolute" AI exclusions in their D&O and E&O wordings (Policyholder Pulse). It's worth flagging to the board that the policies they renewed last year may not respond the same way this year.
The third shift is around data transfers. The EDPB's Schrems II guidance, the CJEU's own reasoning, and the European Commission's 2025 Cloud Sovereignty Framework all proceed from the same assumption: data held by a US-incorporated provider carries a residual transfer risk that contractual clauses alone cannot resolve (Kiteworks analysis of EDPB guidance). In practice, that means an AI assistant running in a US vendor's cloud is a harder GDPR conversation in 2026 than it was in 2023, even when the physical data centre is in Europe.
None of these is a reason to stop using AI. They are reasons the board is likely to want a clearer picture of which AI, where, and under whose control.
What local-first actually means
It's worth spending thirty seconds on the definition, because the phrase gets used loosely.
A local-first AI assistant runs the model and processes the context it needs on the user's own device. The content the assistant is working with (what's on screen, what's said in the room) stays on the machine. The vendor may provide software updates and improvements, but it does not receive the content the assistant is working with.
That is a different architecture from "our cloud AI has a private tenant," "data is encrypted in transit," or "we don't train on your data." Those are all reasonable vendor commitments, but they still assume the data leaves the device. Local-first means it doesn't.
The comparison that usually helps
Most boards find a side-by-side easier to follow than a narrative. One that tends to work:
| Question | Cloud AI assistant | Local-first AI assistant |
|---|---|---|
| Where is the data processed? | On vendor infrastructure | On the user's device |
| Who has technical access to prompts? | Vendor staff under their policies | Only the user |
| Whose legal process can compel disclosure? | The vendor's, including the US CLOUD Act where applicable | The organisation's own |
| What does "delete" mean? | Dependent on the vendor's retention policy | The file is removed |
| How broad is the capability? | Broad and quickly evolving | Good for specific tasks |
The point isn't that local-first is better at everything. It's that the rows on that table are the ones the board is being asked about, and the answers are more straightforward when the data hasn't left the building.
The questions the board will usually ask
A few come up almost every time.
Is it as capable as the well-known cloud products? For some tasks, no. For others, though, having richer local context matters more than raw model capability. The honest answer is that the right tool depends on the job, and a mixed estate is usually where organisations end up.
What happens when someone leaves? The existing leaver process covers it. When the laptop is wiped, the assistant's context goes with it. With a cloud product, the history sits in the vendor's tenant until someone explicitly requests deletion, and the deletion itself depends on the vendor's retention behaviour.
What's our exposure under the EU AI Act? Easier to document when the system runs locally, because the organisation controls the records the Act asks for — risk classification, data governance, human oversight, technical documentation. With a cloud product, some of those answers come from the vendor's conformity assessment, which the organisation inherits rather than writes. Worth noting that the ICO's Upper Tribunal win against Clearview confirmed UK GDPR applies to non-UK suppliers processing UK residents' data (Clifford Chance) — so the vendor's location doesn't change the compliance obligation.
What about insurance? Worth asking the broker two specific questions before the next renewal: whether the ISO AI exclusion endorsements are being attached to the general liability policy, and whether the D&O carrier has filed an absolute AI exclusion. The answers determine how much of the AI risk is actually insured. Local-first doesn't eliminate the exposure, but it narrows the kinds of claims most likely to trigger those exclusions.
The part that's worth being candid about
A US-incorporated cloud provider can be compelled to produce customer data under US legal process, regardless of where the data is physically stored and regardless of what the contract says. This is the CLOUD Act position, and the EDPB has been explicit that standard contractual clauses cannot override it (Looming Tech). Most organisations have decided to live with that risk for email and document storage because the alternatives are operationally difficult. The AI question is whether to take on the same residual exposure for a new category of data — one that includes, by design, the questions employees ask and the context they attach to them.
That's a judgment for the board, not a decision a compliance function can make alone. But they should be making it with the position understood, not assuming that data residency clauses resolve it.
What to propose at the end of the five minutes
Three things tend to be the right ask, and none of them commits the organisation to a particular product.
- A standing AI governance item on the board agenda until at least the August 2026 AI Act date, consistent with the Harvard Corporate Governance Forum's 2026 board agenda guidance (Harvard Law School Forum).
- A named executive owner for AI governance, with authority to approve or decline AI-related procurement.
- A stated preference for local-first AI, where the use case allows, with cloud AI permitted subject to a documented assessment of the data transfer and insurance questions above.
The third one is the useful one. It doesn't ban anything. It just makes the team choosing a cloud product explain why, which is usually enough to get better decisions out of the procurement process.
A reasonable closing line
Local-first AI isn't a silver bullet, and nobody should pretend the productivity case is settled. But on the specific questions the board is increasingly being asked — where does the data go, who can compel its disclosure, what do we tell the regulator — it produces answers that are easier to give and easier to defend. For a five-minute slot, that's usually the most useful thing the compliance function can offer.
Further reading
- EU AI Act enforcement timeline and fines (CompliQuest)
- ISO generative AI insurance exclusions (Polales Horton & Leonardi)
- "Absolute" AI exclusions in D&O and E&O policies (Policyholder Pulse)
- CLOUD Act and European data sovereignty (Kiteworks)
- The CLOUD Act and AWS EU regions (Looming Tech)
- ICO v Clearview AI (Clifford Chance)
- 2026 board agenda priorities (Harvard Law School Forum on Corporate Governance)