A UK-based asset manager's board paper lands on the chief risk officer's desk on a Monday morning. The board has been asked to certify, in writing and to the FCA, that the firm's AI tooling is consistent with its obligations under the Digital Operational Resilience Act. The tooling was bought in 2023, before DORA applied. The vendor's compliance pack is American. The CRO has six weeks. This is what DORA compliance looks like as an operational problem rather than a policy one, and it is the conversation taking place across regulated boards in 2026.
What changed between the original deployment and the demand for the certificate was not the technology. The model in the AI tool is broadly the same. What changed was the regulatory architecture around it. DORA applied across the EU from January 2025 and was adopted in equivalent form by UK financial regulators. The EU AI Act began applying in stages from 2025. NIS2 raised the cybersecurity bar for critical-sector firms across both jurisdictions. ISO 42001, the AI management system standard, started being requested in procurement diligence. AI tooling that pre-dated this architecture has not become non-compliant overnight, but it has become non-evidenceable, which in regulated work amounts to the same thing.
This guide explains what AI knowledge management for regulated industries actually requires in 2026, the four regulatory frameworks that now shape procurement, what DORA compliance specifically asks of an AI knowledge platform, the buyer questions that separate evidenceable platforms from confident-sounding ones, and where the sovereignty constraints on UK and EU buyers narrow the field. It is written for compliance and risk officers in financial services, health, and public sector, and for CTOs in regulated firms responsible for proving the AI procurement was defensible after the fact.
The 2026 reality: regulated industries cannot deploy AI without an audit trail
In a regulated firm, the question a board will be asked about any AI deployment is not "does it work" but "can you defend the deployment to a regulator." That question is older than AI. It has applied to outsourcing, cloud adoption, document management, and customer-data processing for two decades. What is new in 2026 is that the regulator's question now reaches all the way to the AI's source corpus, the model's processing layer, and the audit artefacts the platform produces.
A defensible AI deployment in a regulated firm requires three things the regulator can read on a Tuesday afternoon. First, a record of which documents the AI was permitted to answer from on a specified date, with named approvals attached to each one. Second, a record of where the AI processing actually happened: which infrastructure, in which jurisdiction, under which provider's contractual control. Third, a record of how each individual answer was constructed, with citation back to specific documents and versions, not just a summary block at the end.
Most consumer-grade and even enterprise-grade AI tooling produced before DORA's effective date does not produce these artefacts natively. Many produce something adjacent (query logs, permission audit trails, model usage statistics) that satisfies a security review but not a regulatory one. The gap between "we have logs" and "we can produce the curated source set, the approval chain, and the per-answer citation, on this date, for this user" is the gap an AI knowledge platform either closes or leaves open.
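The three records a regulator would expect can be sketched as minimal data structures. This is an illustrative shape only, with hypothetical field names, not any platform's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shapes; field names are illustrative, not a real schema.

@dataclass
class SourceApproval:
    """Which document the AI was permitted to answer from, and who approved it."""
    document_id: str
    version: str
    approved_by: str          # a named individual, not a role or group
    approved_on: date

@dataclass
class ProcessingRecord:
    """Where the AI processing actually happened for a given answer."""
    infrastructure: str       # e.g. "uk-dedicated-cluster"
    jurisdiction: str         # e.g. "UK"
    contractual_controller: str

@dataclass
class AnswerCitation:
    """How a specific clause of an answer traces back to a source version."""
    answer_id: str
    clause: str               # the sentence in the answer being evidenced
    document_id: str
    version: str
```

The point of the sketch is that each record is per-item and dated: a platform that can only emit aggregate logs cannot reconstruct these after the fact.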
For UK and EU regulated firms, this is also the moment when the broader sovereignty conversation, which we cover in our sovereign AI guide for UK organisations, intersects compliance head-on. A US-controlled AI processing layer creates compliance exposure that residency alone cannot resolve.
The four regulatory frameworks shaping AI procurement
Four frameworks now sit on every regulated procurement team's compliance map for AI tooling. They overlap, but each adds its own evidence demands.
DORA (Digital Operational Resilience Act)
DORA is the EU's financial-services regulation governing operational and technology resilience. It applies to banks, insurers, payment firms, asset managers, and crypto-asset service providers; UK regulators have implemented an equivalent framework. For AI tooling, DORA's relevance is concentrated in two areas: third-party ICT risk management, which sets the standard for due diligence on any provider whose service the firm depends on, and operational resilience, which requires the firm to demonstrate continuity of critical functions including those that depend on AI. The DORA Article 28 question of data jurisdiction, where the AI processing actually happens and under whose contractual jurisdiction, is the one most often raised in current procurement diligence.
EU AI Act
The EU AI Act introduces a risk-tier classification: prohibited uses, high-risk uses, limited-risk uses, and minimal-risk uses. For most regulated firms, the AI knowledge platform falls into the limited-risk or, if used to support regulated decisions, high-risk category. The key procurement consequence is auditability: high-risk AI must produce documentation a regulator can examine. This includes data governance, technical documentation of the model's behaviour, and a record of automated decisions. Vendors selling into regulated firms now expect to produce this documentation as part of due diligence.
NIS2
NIS2, the EU's network and information security directive, raises the cybersecurity bar for critical-sector firms. Its scope is broader than financial services and includes healthcare, transport, public administration, and digital infrastructure. For AI tooling, NIS2's relevance is principally in the supply-chain and incident-management requirements: the firm must understand its dependencies, must monitor for incidents in those dependencies, and must report when a critical service is compromised.
ISO 27001 and ISO 42001
A vendor that is ISO 27001-certified and has ISO 42001 certification underway signals both an established security posture and a serious commitment to AI governance. ISO 27001 (information security management) is the certification regulated firms have asked of their suppliers for two decades. ISO 42001 (AI management systems) is the new addition, published in 2023 and now starting to appear in procurement requests. Together, the two standards cover most of what an AI procurement diligence pack now needs.
What DORA compliance actually requires of an AI knowledge platform
For most UK and EU regulated firms in 2026, DORA is the framework whose evidence demands shape AI procurement most directly. Three areas matter most.
Third-party risk management
DORA expects firms to maintain a register of their ICT third parties, with risk assessments, contractual provisions, and ongoing monitoring. An AI knowledge platform is an ICT third party. The platform's own architecture decisions, including where it processes data, which other vendors it depends on, and what its incident-response posture looks like, flow through into the firm's DORA register. A vendor whose architecture is opaque or whose subcontractor stack is undisclosed creates work for the firm's compliance function.
Data jurisdiction and contractual scope
This is the area where most US-headquartered AI vendors create genuine difficulty for UK and EU regulated firms. UK or EU data residency is not the same as UK or EU contractual jurisdiction over the AI processing layer. A platform whose AI tier is operated by a US-headquartered company subject to US jurisdiction creates a residual exposure that residency alone cannot resolve. For DORA-regulated firms, this is the area that procurement diligence now focuses on, and the area that narrows the vendor shortlist most aggressively.
Operational resilience and continuity
DORA expects firms to demonstrate that critical functions continue under stress. If AI tooling becomes critical to a process the firm operates, and in many large firms it now does, the platform must support testing, must produce evidence of recovery objectives, and must give the firm visibility into incidents in the vendor's own infrastructure. A vendor whose status pages are sparse or whose incident reporting is on a best-efforts basis creates a DORA gap.
Evaluating an AI knowledge platform against compliance requirements
Six questions, in roughly the order an evaluation should ask them. These are the ones that separate platforms designed for regulated work from platforms retrofitted into it. The same questions apply for AI compliance evaluations across most regulated sectors, with weighting adjusted for the firm's specific framework exposure.
Where does the AI processing layer actually run, and under whose contractual control? If the answer involves any US-controlled inference step, the firm's compliance function will have to either accept the residual jurisdictional exposure or reject the vendor.
Can the platform produce the curated source set as it stood on a specific date? A defensible audit answer requires the platform to know which documents were eligible for AI answers on Tuesday three months ago and who approved each of them.
Are answers cited at the sentence level? A summary with a citation block at the end is weaker evidence than a summary where each clause links to a specific document and version. Regulators reviewing decisions retrospectively rarely have time to re-read the entire source.
Does the platform produce the audit artefacts DORA, ISO 42001 and NIS2 actually expect? A vendor's compliance pack should map to the specific evidence demands of each framework, not just generic security claims.
What is the third-party stack underneath the vendor? If the vendor's AI tier depends on a foundation-model API operated by a third party, that subcontractor relationship flows into the firm's DORA register and needs to be assessable.
Is there a sovereign tier, and what does it actually contain? Vendors with a "sovereign" or "regulated" tier vary widely in what the tier covers. The procurement-relevant question is whether the AI processing layer is in scope, not just the data-at-rest layer.
A more thorough version of these questions, applied across the full enterprise AI search landscape, lives in our enterprise AI search and AI organizational knowledge guide. Curation, the underlying capability that lets a platform answer the first two questions defensibly, is unpacked in our curated knowledge guide. The technical posture behind AnswerVault's answers to all six questions is documented on our security and compliance page, which procurement teams can reference directly in DORA Article 28 third-party assessments.
How AnswerVault delivers compliant AI knowledge management
AnswerVault is a governed AI knowledge layer designed from the start around the audit, citation and jurisdiction demands the four frameworks above now place on AI tooling.
The platform's central artefact is the audit trail. A document does not become eligible for AI answers because it sits in a connected source. It becomes eligible because a named subject matter expert approves it for inclusion, with the approval written into the audit trail at the moment it happens. When a document is superseded, the supersession propagates: the old version stops being used for answers, the new one takes over, and the historical record of which version was canonical on which date is preserved. This is the architecture that lets the platform answer "which documents were available to AI on the date the user asked the question, who approved them, and on whose authority": the question DORA, NIS2 and ISO 42001 are most likely to ask after the fact.
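The supersession mechanism described above can be sketched as a point-in-time eligibility query, assuming each approval record carries an effective date and an optional supersession date. Field names and the history below are illustrative, not AnswerVault's actual schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ApprovalRecord:
    document_id: str
    version: str
    approved_by: str
    effective_from: date
    superseded_on: Optional[date] = None   # set when a newer version takes over

def eligible_sources(records: list[ApprovalRecord], as_of: date) -> list[ApprovalRecord]:
    """Return the versions that were canonical for AI answers on `as_of`."""
    return [
        r for r in records
        if r.effective_from <= as_of
        and (r.superseded_on is None or as_of < r.superseded_on)
    ]

# Hypothetical history: policy-7 v1 was approved in March 2025
# and superseded by v2 in September 2025.
history = [
    ApprovalRecord("policy-7", "v1", "A. Patel", date(2025, 3, 1), date(2025, 9, 15)),
    ApprovalRecord("policy-7", "v2", "A. Patel", date(2025, 9, 15)),
]

assert [r.version for r in eligible_sources(history, date(2025, 6, 1))] == ["v1"]
assert [r.version for r in eligible_sources(history, date(2026, 1, 6))] == ["v2"]
```

The design point is that supersession never deletes the old record; it closes its validity window, so "which version was canonical on which date" stays answerable indefinitely.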
Citations are at the sentence level. Every clause in an answer resolves to a specific document, version and approver. A regulator asking where an answer came from gets a response with three nouns in it, not three paragraphs.
The platform is structured in three tiers, and the answer to the sovereignty question depends on which tier:
- Starter is UK-hosted on shared infrastructure with EU/UK data residency, suitable for SMEs and pilots.
- Business is UK-hosted, dedicated infrastructure, suitable for most regulated mid-market organisations.
- Enterprise sovereign is UK-controlled, contractually outside the jurisdictional reach of the CLOUD Act. For the Enterprise tier specifically, the AI processing layer is part of the sovereign boundary, not just the data-at-rest layer. This is the tier designed for DORA-regulated financial services, NIS2-regulated critical sectors, and public sector buyers whose sovereignty constraint is procurement-blocking.
AnswerVault is ISO 27001-aligned, with ISO 42001 certification underway. AI is included in every plan; there are no per-query usage charges, no separate API key requirements, and no need to bring your own model. Customer data is never used to train AI models, by AnswerVault or by our foundation-model providers. The web chat surface is the default, with Microsoft Teams, Slack, CLI, and API available as additional surfaces.
Next steps
If you are preparing an AI procurement defence for a regulated board or a regulator, the most useful first move is to map your specific framework exposure (DORA, EU AI Act, NIS2, ISO 27001, ISO 42001) to the audit artefacts your existing or candidate AI tooling can actually produce. The gaps in that mapping are the gaps the procurement decision needs to close. For the broader category context, our enterprise AI search and AI organizational knowledge guide walks through the procurement evaluation across all four platform shapes, with regulated buyers in mind.
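The mapping exercise can be sketched as a simple gap analysis. The framework requirements and artefact names below are illustrative placeholders, not official terminology from any of the frameworks:

```python
# Hypothetical gap analysis: map each framework's evidence demands to the
# artefacts the candidate tooling can actually produce, and surface what's missing.

required = {
    "DORA": {"third-party register entry", "point-in-time source set", "incident reporting"},
    "EU AI Act": {"technical documentation", "data governance record"},
    "NIS2": {"supply-chain dependency map", "incident reporting"},
    "ISO 42001": {"AI management system scope", "point-in-time source set"},
}

# What the existing or candidate tooling can evidence today (illustrative).
produced = {"point-in-time source set", "incident reporting", "technical documentation"}

# The gaps are what the procurement decision needs to close.
gaps = {fw: sorted(needs - produced) for fw, needs in required.items() if needs - produced}

for fw, missing in gaps.items():
    print(f"{fw}: missing {', '.join(missing)}")
```

A table of this shape, with the real framework clauses in place of the placeholders, is the artefact a board or regulator can actually read.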
Try AnswerVault free: AI knowledge management with built-in audit trail.
AnswerVault is built by Catapult CX, an enterprise technology consultancy. The product was originally developed for a global pharmaceutical company with strict data governance requirements; the same architecture now powers the SaaS platform.