Under the EU AI Act, AI systems are classified into four risk tiers (unacceptable, high, limited, minimal) with obligations proportional to the risk level. High-risk systems used in financial services contexts, including creditworthiness assessment and insurance risk pricing, face mandatory risk management, data governance, and human oversight requirements under Articles 9 through 15, plus registration in the EU database under Article 49. This article covers:
- How to determine whether your firm’s AI audit tools fall within the EU AI Act’s risk classification system under Article 6 and Annex III
- What the phased compliance timeline means for audit technology procurement decisions in 2026 and 2027
- How the AI Act’s transparency and human oversight requirements interact with your professional obligations under ISA 500 and ISA 220
- When your audit client’s use of AI in financial reporting creates a new risk assessment consideration under ISA 315
What the EU AI Act regulates and why auditors should care
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It’s the first jurisdiction-wide legal framework for artificial intelligence anywhere in the world, and it applies to any AI system placed on the EU market or used within the EU, regardless of where the provider is based. That extraterritorial reach matters. If your firm uses a US-developed AI audit tool on engagements in the Netherlands, the tool falls within the Act’s scope.
The Act introduces a risk-based classification model. Not all AI systems face the same obligations. Four tiers exist: unacceptable risk (banned outright since February 2025), high risk (subject to extensive obligations), limited risk (transparency requirements only), and minimal risk (no mandatory requirements beyond AI literacy).
Most audit technology falls in either the limited or minimal risk categories. But certain applications land in the high-risk bracket, particularly those that make or influence decisions about individuals’ access to financial services.
Why should this concern a non-Big 4 audit firm? First, if your firm uses AI tools in the audit process, you need to know what category those tools occupy and what that means for your professional responsibility under ISA 220.14 (engagement quality management). Second, if your audit client uses AI systems in their financial reporting process (automated revenue recognition, AI-driven expected credit loss models, predictive inventory provisioning), the reliability of that AI system becomes part of your evidence assessment under ISA 500.7.
The AI literacy obligation is already live. Since 2 February 2025, all organisations that deploy or use AI systems in the EU must ensure their staff have adequate AI literacy. That includes audit firms. If your team uses an AI tool to analyse journal entries, both ISA 220 and the AI Act expect them to understand what the tool does.
Risk classification: where audit tools land
Article 6 defines what makes a system “high risk.” Two pathways exist. The first (Article 6(1)) catches AI systems embedded in products already regulated under EU harmonisation legislation (medical devices, machinery, aviation); this path won’t affect audit tools. The second (Article 6(2)) references Annex III, which lists specific use cases classified as high risk because of their potential impact on people’s safety and fundamental rights.
Annex III, point 5(b) classifies AI systems intended to evaluate the creditworthiness of natural persons or establish their credit score as high risk. Point 5(c) covers AI for risk assessment and pricing in life and health insurance. These are the financial services categories. If your audit client is a bank using an AI-based credit scoring model, that model is a high-risk AI system subject to Articles 9 through 15.
For audit tools specifically, the classification depends on function. An AI tool that analyses a general ledger to flag statistical anomalies is performing data analysis. It doesn’t make decisions about individuals’ access to services or evaluate anyone’s creditworthiness. This likely qualifies as minimal risk, carrying no mandatory obligations beyond AI literacy.
An AI tool that predicts the probability of fraud by a specific employee, however, could trigger employment-related high-risk classification under Annex III, point 4 (AI systems used in employment contexts for monitoring and evaluating performance). The distinction matters for procurement.
When evaluating AI audit tools, your firm’s quality management system under ISQM 1.34 should include a classification assessment against the AI Act’s risk tiers. This isn’t just legal compliance. Knowing whether your tool is a minimal-risk analytics platform or a high-risk decision support system changes how you document reliance on the tool’s outputs in your audit evidence working papers.
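To make that classification assessment repeatable across tools, a firm could encode the screening questions as a simple decision function. The sketch below is illustrative only: the profile fields and the mapping are our own simplification of Article 6 and Annex III, not an official taxonomy.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "high risk (Annex III)"
    LIMITED = "limited risk (transparency only)"
    MINIMAL = "minimal risk (AI literacy only)"


@dataclass
class AiToolProfile:
    """Hypothetical descriptors for an AI tool under assessment."""
    name: str
    evaluates_creditworthiness: bool      # Annex III, point 5(b)
    prices_life_health_insurance: bool    # Annex III, point 5(c)
    monitors_individual_employees: bool   # Annex III, point 4
    interacts_with_natural_persons: bool  # transparency trigger


def classify(tool: AiToolProfile) -> RiskTier:
    # Simplified screening: any Annex III trigger means presumed high risk,
    # subject to the Article 6(3) rebuttal discussed below.
    if (tool.evaluates_creditworthiness
            or tool.prices_life_health_insurance
            or tool.monitors_individual_employees):
        return RiskTier.HIGH
    if tool.interacts_with_natural_persons:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# A general-ledger anomaly tool trips none of the Annex III triggers:
journal_tool = AiToolProfile("JE anomaly scanner", False, False, False, False)
print(classify(journal_tool))  # RiskTier.MINIMAL
```

The point of the structure is the audit trail: the answers to each screening question land in the working papers, not just the conclusion.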
One additional nuance: Article 6(3) allows providers to rebut the high-risk presumption for Annex III systems by demonstrating that the system doesn’t pose a significant risk to the health, safety, or fundamental rights of natural persons. Audit firms serving banking clients should request this rebuttal documentation if it exists. Separately, the EBA published a mapping exercise in late 2025 assessing the AI Act’s implications for the banking sector, finding that existing CRR/CRD internal governance requirements align closely with many AI Act obligations.
The compliance timeline and what shifts with the Digital Omnibus
Obligations activate on a staggered schedule. Prohibitions on unacceptable-risk AI practices took effect on 2 February 2025. Rules for general-purpose AI models (the large language models powering many new audit tools) became applicable on 2 August 2025. The main high-risk system obligations were originally scheduled for 2 August 2026.
That August 2026 date may shift. In November 2025, the European Commission published the Digital Omnibus proposal, which aims to simplify compliance across overlapping EU digital regulations (AI Act, NIS2, DORA, GDPR). For the AI Act specifically, the Digital Omnibus proposes linking the effective date of high-risk obligations to the availability of harmonised standards, with long-stop dates of 2 December 2027 for standalone high-risk systems and 2 August 2028 for product-embedded systems.
The Digital Omnibus is a proposal, not final law. It still needs to pass the legislative process. But it signals that the Commission recognises the implementation burden. For audit firms, this means a wider preparation window rather than an excuse to defer classification work. The GPAI rules are already live.
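For procurement planning, the milestone dates can be captured as data so each tool assessment records which obligations were live at the engagement date. A minimal sketch; the dates reflect the timeline described above, with the Digital Omnibus long-stop dates marked as proposed rather than enacted.

```python
from datetime import date

# Key applicability dates under Regulation (EU) 2024/1689.
# The last two entries are Digital Omnibus *proposals*, not final law.
MILESTONES = {
    "prohibitions + AI literacy (Art. 4, 5)": date(2025, 2, 2),
    "GPAI model rules": date(2025, 8, 2),
    "high-risk obligations (current law)": date(2026, 8, 2),
    "high-risk, standalone (Omnibus proposal)": date(2027, 12, 2),
    "high-risk, product-embedded (Omnibus proposal)": date(2028, 8, 2),
}


def live_obligations(as_of: date) -> list[str]:
    """Return the milestones already applicable at a given date."""
    return [label for label, d in MILESTONES.items() if d <= as_of]


print(live_obligations(date(2026, 3, 31)))
# ['prohibitions + AI literacy (Art. 4, 5)', 'GPAI model rules']
```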
For Dutch firms specifically, the AI Act applies directly as an EU Regulation. No national transposition is needed (unlike NIS2). Enforcement happens through national market surveillance authorities. Under Article 74(6), national financial supervisory authorities (the AFM and DNB in the Netherlands) are designated as market surveillance authorities for high-risk AI systems used by financial institutions. The AFM, which already inspects audit quality, will also oversee AI Act compliance for systems used in the financial sector.
Human oversight requirements meet ISA 500
Article 14 of the AI Act requires that high-risk systems be designed for effective human oversight. Deployers must ensure that oversight individuals understand the system’s capabilities and limitations, can correctly interpret its outputs, and can override or disregard the system when necessary. Four requirements in one article.
This aligns with what ISA 500.A33 already requires: audit evidence from automated tools must be evaluated for relevance and reliability, and the auditor must understand the tool sufficiently to assess whether the evidence is suitable.
The AI Act adds legal force to what was previously professional guidance only. If you use an AI tool that qualifies as high risk (unlikely for most audit analytics, but possible for tools influencing decisions about individuals), Article 14 demands documented human oversight arrangements that go beyond “the senior reviewed the output.”
Even for minimal-risk audit AI, the professional standards create equivalent expectations. ISA 220.14 requires the engagement partner to take responsibility for quality, including appropriate use of technology. ISQM 1.34 requires policies addressing automated tools, including understanding the methodology. If your AI tool uses a machine learning model to prioritise audit procedures and your team can’t explain the model’s basis for prioritisation, that’s an ISA 220 problem regardless of AI Act classification.
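Whether the trigger is Article 14 or ISA 220, the oversight elements can be expressed as a short evidence checklist the engagement team asks the tool owner to substantiate. The field names below are our own shorthand, not AI Act terminology.

```python
from dataclasses import dataclass


@dataclass
class OversightEvidence:
    """Human oversight elements per Article 14, as yes/no evidence fields."""
    understands_capabilities_and_limits: bool  # Art. 14(4)(a)
    can_interpret_outputs: bool
    can_override_or_disregard: bool
    oversight_person_assigned: bool

    def gaps(self) -> list[str]:
        # Any field without supporting evidence is a documentation gap.
        return [name for name, ok in vars(self).items() if not ok]


evidence = OversightEvidence(True, True, False, True)
print(evidence.gaps())  # ['can_override_or_disregard']
```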
The GPAI Code of Practice, endorsed by the European AI Board in August 2025, provides a “presumption of conformity” for providers that voluntarily adhere to it. Major providers including Amazon, Google, Microsoft, OpenAI, and Anthropic signed on as early signatories. If your audit tool vendor’s underlying AI model is covered by the Code, that provides some assurance. But it doesn’t reduce your firm’s obligation to evaluate whether the tool’s specific application in your audit context is appropriate.
Your client’s AI is your audit risk
The more significant impact of the AI Act on audit practice isn’t about your tools. It’s about the tools your clients use. When a client deploys AI in their financial reporting process, those systems become part of the information system relevant to financial reporting under ISA 315.25.
Consider an IFRS 9 expected credit loss model. If a banking client uses a machine learning algorithm to estimate probability of default for its retail loan portfolio, that algorithm is almost certainly a high-risk AI system under Annex III, point 5(b). As deployer, the bank must comply with Article 26: use the system per instructions, assign competent human oversight, monitor for risks, and report serious incidents. Failure on any of these creates both a regulatory exposure and an audit risk.
For the auditor, this means a layered assessment: you evaluate the financial reporting accuracy of the ECL output (ISA 540.13) and the regulatory compliance posture of the AI system simultaneously. If the bank developed the model in-house, it is also the provider; a missing conformity assessment or EU database registration per Article 49 is then non-compliance, and any resulting fines create a contingent liability under IAS 37.
The same logic applies to other AI-driven reporting processes. An insurance client using AI for claims reserving. A manufacturing client using predictive algorithms for inventory obsolescence provisioning. In each case, you assess: (a) whether the AI system is within scope, (b) what risk tier it occupies, (c) whether the client has met its deployer obligations, and (d) what the financial statement impact of non-compliance would be.
Ask your client during planning: “Do you use any AI or machine learning models in your financial reporting process?” If yes, request an inventory of those systems together with the monitoring records that Article 26(5) requires deployers of high-risk systems to keep. The absence of either is a control environment finding.
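The four-part assessment above can be recorded as one structured working-paper entry per client system. A sketch under our own field names (not AI Act terminology), showing how the (a) through (d) questions map to fields:

```python
from dataclasses import dataclass, field


@dataclass
class ClientAiSystemAssessment:
    """One row per AI system in the client's financial reporting process."""
    system: str
    in_scope: bool            # (a) within the AI Act's scope at all?
    risk_tier: str            # (b) tier per Article 6 / Annex III
    deployer_gaps: list[str] = field(default_factory=list)  # (c) Art. 26 gaps
    fs_impact: str = ""       # (d) financial statement impact of non-compliance

    def is_audit_risk(self) -> bool:
        return self.in_scope and self.risk_tier == "high" and bool(self.deployer_gaps)


ecl_model = ClientAiSystemAssessment(
    system="Retail PD / credit scoring model",
    in_scope=True,
    risk_tier="high",
    deployer_gaps=["no documented Art. 14(4)(a) oversight assessment"],
    fs_impact="Stage 2 transfers and ECL provision may be misstated",
)
print(ecl_model.is_audit_risk())  # True
```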
Worked example: Van der Berg Holding N.V.
Client profile: Van der Berg Holding N.V. is a Dutch mid-market financial services group with €110M in assets under management. The group includes a consumer lending subsidiary (Van der Berg Krediet B.V.) that uses an AI-powered credit scoring model licensed from a Belgian fintech provider. The holding also recently implemented an AI-based journal entry testing tool at group level for internal audit.
Step 1: Classify the client’s AI systems
The credit scoring model at Van der Berg Krediet B.V. evaluates creditworthiness of natural persons. This falls squarely within Annex III, point 5(b): high risk.
Articles 9 through 15 apply to the Belgian fintech as provider; deployer obligations under Article 26 apply to Van der Berg Krediet. The AI journal entry testing tool at group level performs general ledger data analysis. It doesn’t make decisions about natural persons, so it likely qualifies as minimal risk, leaving only the Article 4 AI literacy obligation.
Documentation note
“AI system classification assessment performed per EU AI Act Article 6 and Annex III. Credit scoring model (Van der Berg Krediet B.V.): high risk per Annex III, point 5(b). AI journal entry testing tool (group-level): minimal risk. Classification documented in the risk assessment working paper.”
Step 2: Evaluate compliance posture for the high-risk system
You request conformity documentation from the Belgian fintech provider. They produce a CE marking certificate and reference their EU AI database registration. You then request Van der Berg Krediet’s deployer compliance records under Article 26: human oversight assignments, monitoring procedures, instructions for use.
The credit risk manager has been designated as oversight person. However, they haven’t documented their assessment of the system’s capabilities and limitations per Article 14(4)(a). When asked, they confirm they “trust the model” but can’t describe the features driving the scoring algorithm.
Documentation note
“Article 14(4)(a) requires the oversight person to understand the system’s capabilities and known limitations. Van der Berg Krediet’s designated person cannot describe the model’s decision logic. Consequences: (1) AI Act non-compliance risk under Article 26(1), (2) audit concern about management’s ability to assess the reliability of AI-generated credit loss estimates under ISA 540.13(c).”
Step 3: Assess the impact on the ECL estimate
Van der Berg Krediet’s IFRS 9 Stage 1 and Stage 2 allocations depend on the credit scoring model’s output. If the model misclassifies borrowers (assigning higher creditworthiness than warranted), Stage 2 transfers will be understated and the ECL provision underestimated.
Given the lack of documented human oversight and the inability of the designated person to interpret model logic, you determine reliance on the AI model’s output alone is insufficient.
Documentation note
“Due to insufficient human oversight documentation, we will perform independent recalculation of the Stage 2 transfer for a sample of €8.2M in retail loan exposures. We will compare the AI model’s PD estimates against historical default data to assess reasonableness (ISA 540.18).”
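One simple way to operationalise the ISA 540.18 reasonableness check is to back-test the model’s probability-of-default (PD) estimates against realised defaults per score band. The figures and the 1.5x tolerance below are invented for illustration; the tolerance is an audit judgment, not a standard.

```python
# Back-test: compare model PD per score band against realised default rates.
# All figures are invented for illustration.
bands = {
    # band: (model PD, loans observed, defaults observed)
    "A": (0.005, 1200, 4),
    "B": (0.020, 800, 21),
    "C": (0.060, 400, 41),
}

for band, (model_pd, n, defaults) in bands.items():
    realised = defaults / n
    # Flag bands where the model looks materially optimistic.
    optimistic = realised > 1.5 * model_pd
    print(f"Band {band}: model PD {model_pd:.1%}, realised {realised:.1%}"
          + ("  <-- investigate" if optimistic else ""))
```

In this invented dataset, band C realises a 10.3% default rate against a 6.0% model PD, which is exactly the kind of divergence the independent recalculation should chase down.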
Step 4: Assess fine exposure
High-risk AI system non-compliance under the AI Act carries fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher; for SMEs, Article 99(6) caps the fine at the lower of the two. Treating Van der Berg Holding (€110M AUM, estimated €14M group revenue) as an SME group, maximum exposure is approximately €420K (3% of €14M). You assess disclosure requirements.
Documentation note
“Maximum AI Act fine exposure estimated at €420K (3% of €14M group revenue, SME cap). Probability of enforcement action in the current period assessed as possible but not probable, given that the main high-risk obligations are not yet applicable and may be deferred under the Digital Omnibus proposal. Disclosure as contingent liability under IAS 37.86 is appropriate. No provision recorded.”
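The cap arithmetic is simple enough to show explicitly. The sketch assumes Van der Berg qualifies as an SME under Article 99(6); verify that assumption before relying on the lower cap.

```python
def max_fine_high_risk(turnover_eur: float, is_sme: bool) -> float:
    """Article 99(4) cap: EUR 15M or 3% of worldwide annual turnover.
    Non-SMEs take the higher figure; SMEs the lower (Art. 99(6))."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# Van der Berg: ~EUR 14M group revenue, assumed to qualify as an SME.
print(f"{max_fine_high_risk(14_000_000, is_sme=True):,.0f}")  # 420,000
```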
Conclusion: The file demonstrates a systematic assessment of the client’s AI systems against the EU AI Act, identifies specific compliance gaps with financial statement implications, and documents audit responses proportionate to the risk.
Practical checklist for your next engagement
- During planning, ask whether your client uses AI or machine learning models in any process that feeds the financial statements. Request an inventory. If they don’t have one, note the absence and identify the key systems yourself.
- Classify each AI system against Article 6 and Annex III. Credit scoring and insurance pricing are high risk. General data analytics and process automation are typically minimal risk.
- For any high-risk AI system deployed by your client, request the provider’s conformity documentation and the client’s Article 26 deployer records (human oversight assignments, monitoring procedures, incident reporting arrangements, and the system’s instructions for use).
- Assess whether AI Act non-compliance creates a contingent liability under IAS 37.86 or a provision under IAS 37.14.
- For your own firm’s AI tools, perform a risk classification against the AI Act and document it in your quality management system under ISQM 1.34. Ensure staff have completed AI literacy training (Article 4, effective since February 2025).
- Where your client relies on an AI model for an accounting estimate (ECL, claims reserves, inventory provisioning), evaluate whether the human oversight arrangements under Article 14 give you sufficient basis to rely on the model’s output under ISA 540.13.
Common mistakes
- Assuming the AI Act only applies to technology companies. It applies to any organisation deploying AI within the EU, including audit clients in financial services and healthcare. The EBA confirmed in its November 2025 mapping exercise that credit institutions using AI credit scoring are in scope.
- Treating AI tool outputs as equivalent to traditional CAATs without evaluating model methodology. ISA 500.A33 requires you to assess reliability of audit evidence from automated tools. “The tool flagged it” isn’t sufficient documentation. Record what the tool does, what data it ingests, and why the output constitutes appropriate evidence.
- Ignoring the GPAI obligations that are already live. If your audit tool uses a general-purpose AI model (most modern analytics platforms do), the provider must comply with the GPAI transparency rules effective since August 2025. Ask your vendor whether they’ve adhered to the Code of Practice.
Frequently asked questions
Does the EU AI Act apply to audit software?
Yes, the EU AI Act applies to any AI system used within the EU, regardless of where the provider is based. Most audit analytics tools fall in the minimal or limited risk categories, carrying no mandatory obligations beyond AI literacy. However, tools that influence decisions about individuals (e.g., fraud prediction targeting specific employees) could trigger high-risk classification under Annex III.
What is the AI literacy obligation under the EU AI Act?
Since 2 February 2025, all organisations that deploy or use AI systems in the EU must ensure their staff have adequate AI literacy. For audit firms, this means team members using AI tools to analyse journal entries or predict misstatement risk must understand what the tool does and how it generates its outputs.
How does the EU AI Act affect auditing clients that use AI?
When a client deploys AI in their financial reporting process (e.g., AI-driven ECL models, automated revenue recognition), those systems become part of the information system relevant to financial reporting under ISA 315.25. The auditor must assess whether the AI system is within EU AI Act scope, what risk tier it occupies, whether the client has met deployer obligations, and what the financial statement impact of non-compliance would be.
What are the fines for EU AI Act non-compliance?
High-risk AI system non-compliance carries fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher (for SMEs, whichever is lower). For prohibited AI practices, fines reach €35 million or 7% of turnover. These create contingent liabilities that auditors must assess under IAS 37.
When do the high-risk AI system obligations take effect?
The main high-risk obligations were originally scheduled for 2 August 2026. The Digital Omnibus proposal (November 2025) may shift this to 2 December 2027 for standalone high-risk systems, linked to the availability of harmonised standards. The proposal is not yet final law. GPAI model rules and AI literacy are already live.
Further reading and source references
- Regulation (EU) 2024/1689 (EU AI Act): Articles 6, 9–15, 26, 49, and 74 on risk classification, high-risk obligations (including Article 14 human oversight), deployer obligations, EU database registration, and national enforcement.
- ISA 500, Audit Evidence: paragraphs 6, 7, A5, A12, and A33 on evidence quality and automated tools.
- ISA 220 (Revised), Quality Management for an Audit of Financial Statements: paragraph 14 on engagement partner responsibility for technology use.
- ISA 540 (Revised), Auditing Accounting Estimates: paragraph 13 on understanding management’s estimation process.
- ISA 315 (Revised 2019): paragraph 25 on the information system relevant to financial reporting.
- ISQM 1: paragraph 34 on quality management policies for automated tools.
- IAS 37: paragraphs 14 and 86 on provisions and contingent liability disclosure.