Key Points

  • The AI Act classifies AI systems into four risk tiers: unacceptable, high, limited, and minimal.
  • Fines reach up to EUR 35 million or 7% of global annual turnover for deploying a prohibited AI system.
  • Providers of high-risk AI systems used in credit scoring or insurance pricing must complete a conformity assessment before 2 August 2026.
  • Audit firms that deploy AI tools in engagement workflows should assess whether those tools qualify as high-risk under Annex III.

What is the AI Act (EU)?

The Act's risk-based framework assigns obligations proportionate to the potential harm an AI system can cause. Article 5 bans outright a narrow set of practices (social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement with limited exceptions, manipulative or subliminal techniques, and exploitation of vulnerabilities). High-risk systems listed in Annex III carry the heaviest compliance load: providers must maintain a quality management system, produce technical documentation, enable logging, conduct a conformity assessment, and register the system in the EU database before placing it on the market.

For financial services, Annex III point 5(b) classifies AI systems used to evaluate the creditworthiness of natural persons or establish their credit score as high-risk, and point 5(c) does the same for risk assessment and pricing in life and health insurance. This means any bank, insurer, or fintech operating in the EU that uses an AI-driven scoring model must satisfy the full Article 9 risk management requirements by 2 August 2026. Deployers (the organisations using the system, not just the providers who built it) carry their own obligations under Article 26, including human oversight, input data monitoring, incident reporting, and cooperation with national authorities.

The phased timeline matters. Prohibited practices applied from 2 February 2025. General-purpose AI model obligations (transparency, copyright disclosure, technical documentation, and systemic risk mitigation) became applicable on 2 August 2025. High-risk system rules apply from 2 August 2026. Systems embedded in products already regulated under EU sectoral legislation (medical devices, machinery) receive an extended deadline of 2 August 2027. Entities already preparing for CSRD reporting obligations will recognise the pattern: phased timelines with different deadlines for different categories of organisation. Where an entity uses AI systems in preparing its ESRS disclosures, both the AI Act and the CSRD apply simultaneously.
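
The phased dates lend themselves to a simple lookup. A minimal Python sketch, using the dates from the timeline above; the category labels are illustrative shorthand, not terms from the Regulation:

    from datetime import date

    # AI Act applicability dates per the phased timeline above.
    # Category labels are shorthand, not terms from the Regulation.
    APPLICABILITY = {
        "prohibited_practices": date(2025, 2, 2),        # Article 5 bans
        "gpai_obligations": date(2025, 8, 2),            # general-purpose AI models
        "high_risk_annex_iii": date(2026, 8, 2),         # Annex III high-risk systems
        "high_risk_annex_i_products": date(2027, 8, 2),  # AI embedded in regulated products
    }

    def applicable_categories(on: date) -> list[str]:
        """Return the obligation categories already in force on a given date."""
        return [cat for cat, start in APPLICABILITY.items() if on >= start]

    # At the FY2025 balance sheet date, only the first two categories apply.
    print(applicable_categories(date(2025, 12, 31)))
    # -> ['prohibited_practices', 'gpai_obligations']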

Worked example

Client: Schaefer, a German electronics company; FY2025; revenue EUR 310M; IFRS reporter. Schaefer deploys an AI-driven credit scoring tool to assess customer creditworthiness before extending trade credit. The tool assigns risk scores based on payment history and financial statement ratios, supplemented by external credit bureau data. Management treats the tool as an internal decision-support system and has not performed a regulatory classification.

Step 1 — Classify the AI system under the Act

The credit scoring tool evaluates the creditworthiness of natural persons (Schaefer's trade credit customers include sole proprietors). Annex III, point 5(b) of Regulation 2024/1689 classifies AI systems used for creditworthiness assessment as high-risk. The tool falls within scope.
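
The scoping test reduces to three questions. A simplified screening sketch in Python; the flag names are hypothetical, and this is no substitute for the documented legal analysis:

    # Simplified Annex III point 5(b) screen; flag names are hypothetical.
    def in_scope_annex_iii_5b(evaluates_creditworthiness: bool,
                              covers_natural_persons: bool,
                              solely_for_fraud_detection: bool) -> bool:
        # Point 5(b) covers creditworthiness evaluation of natural persons,
        # with a carve-out for systems used to detect financial fraud.
        return (evaluates_creditworthiness
                and covers_natural_persons
                and not solely_for_fraud_detection)

    # Schaefer's tool scores sole proprietors (natural persons) for trade credit.
    print(in_scope_annex_iii_5b(True, True, False))  # -> True: high-risk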

Documentation note: record the classification analysis, including the Annex III reference, a description of the tool's function, and the date the analysis was performed. Retain management's initial position that the tool was an out-of-scope internal decision-support system and the basis for revising it.

Step 2 — Map obligations to the provider and deployer

Schaefer procured the tool from an external vendor (the provider). The vendor must maintain technical documentation under Article 11 and supply instructions for use under Article 13; Article 16 sets out the provider's wider obligations, including the conformity assessment and CE marking. Schaefer, as the deployer, must assign human oversight to trained staff, ensure input data is relevant and sufficiently representative, monitor the system in operation, and report serious incidents to the national authority, all under Article 26.
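
The deployer-obligations checklist referenced in the documentation note below can be tracked simply. A sketch with hypothetical wording and status values; the Act prescribes the duties, not this format:

    # Hypothetical deployer-obligations checklist; each duty paraphrases
    # Article 26, and the status values are illustrative.
    checklist = {
        "Assign human oversight to trained, competent staff": "done",
        "Verify input data is relevant and sufficiently representative": "in progress",
        "Monitor operation per the provider's instructions for use": "open",
        "Report serious incidents to the provider and authority": "open",
    }
    open_items = [duty for duty, status in checklist.items() if status != "done"]
    print(f"{len(open_items)} deployer duties outstanding")  # -> 3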

Documentation note: file the vendor's Declaration of Conformity and CE marking documentation. Record Schaefer's deployer obligations checklist and the name of the designated human oversight officer. File the input data quality assessment performed on the training and live datasets.

Step 3 — Assess compliance gap and timeline

The high-risk obligations apply from 2 August 2026. At the balance sheet date (31 December 2025), Schaefer has approximately seven months to achieve compliance. Management estimates implementation costs at EUR 280,000 (external legal review EUR 85,000, IT adjustments EUR 120,000, staff training EUR 40,000, conformity assessment fees EUR 35,000). No provision is recognised because no present obligation exists at the reporting date; the expenditure relates to future compliance activities.
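
The figures reconcile as follows. A short Python check, using only the amounts and dates from the example above:

    from datetime import date

    # Management's cost estimate (EUR), per the breakdown above.
    cost_estimate = {
        "external_legal_review": 85_000,
        "it_adjustments": 120_000,
        "staff_training": 40_000,
        "conformity_assessment_fees": 35_000,
    }
    print(f"Total: EUR {sum(cost_estimate.values()):,}")  # -> Total: EUR 280,000

    # Time from the balance sheet date to the high-risk deadline.
    days_left = (date(2026, 8, 2) - date(2025, 12, 31)).days
    print(f"{days_left} days, roughly {round(days_left / 30.4)} months")
    # -> 214 days, roughly 7 months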

Documentation note: record the gap analysis and the cost estimate breakdown. State the accounting conclusion under IAS 37.14 that no provision is required because the obligation to comply has not yet crystallised into a present obligation at year-end. Note the disclosure consideration under IAS 37.89 if the expenditure is material.

Step 4 — Evaluate disclosure requirements

The auditor considers whether the AI Act's upcoming obligations create a contingent liability or require disclosure as a subsequent event. At 31 December 2025, the obligation is legislative (not entity-specific) and does not meet the IAS 37 definition of a present obligation. Under IAS 1.125 (or IFRS 18 once effective), management should disclose the expected compliance cost if it is material to users' understanding of the financial statements. The auditor evaluates whether the disclosure is adequate.
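
The reasoning in Steps 3 and 4 follows a short decision tree. A minimal sketch of that logic, not an authoritative model of IAS 37; real assessments involve judgement at every branch:

    # Sketch of the IAS 37 decision path applied above; inputs are
    # hypothetical booleans standing in for judgemental assessments.
    def ias37_treatment(present_obligation: bool,
                        outflow_probable: bool,
                        reliably_measurable: bool,
                        material_to_users: bool) -> str:
        if present_obligation and outflow_probable and reliably_measurable:
            return "recognise a provision (IAS 37.14)"
        if present_obligation:
            return "disclose a contingent liability (IAS 37.86)"
        if material_to_users:
            return "consider disclosure of the expected cost (IAS 1.125)"
        return "no provision, no disclosure"

    # Schaefer at 31 December 2025: no present obligation, cost likely material.
    print(ias37_treatment(False, False, True, True))
    # -> consider disclosure of the expected cost (IAS 1.125)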

Documentation note: document the assessment of disclosure completeness, referencing the AI Act timeline and the estimated cost. Record management's disclosure decision. If management omits disclosure, record the auditor's evaluation of whether the omission constitutes a misstatement under ISA 450.

Conclusion: the classification of Schaefer's credit scoring tool as high-risk under Annex III is defensible because it directly falls within point 5(b). The accounting treatment (no provision, potential disclosure) is consistent with IAS 37 because no present obligation exists at the reporting date, and the compliance costs relate to future regulatory requirements.

Why it matters in practice

  • Teams overlook the deployer's own obligations. Article 26 of the AI Act assigns distinct responsibilities to deployers (the entities using AI systems), separate from those of providers (the entities that developed them). Assuming the vendor's conformity documentation covers the deployer's duties leaves gaps in human oversight and input data quality monitoring, as well as incident reporting and corrective action procedures. The deployer obligations apply regardless of whether the provider has completed its own conformity assessment.
  • Audit firms using AI tools within their own engagement processes (automated journal entry testing, predictive analytics for risk assessment) often fail to assess whether those tools qualify as high-risk. If an AI tool influences audit conclusions on which third parties rely, the firm should evaluate the tool against Annex III and the firm's own ISQM 1 quality management system.

AI Act vs. GDPR

AI Act (Regulation 2024/1689) vs. GDPR (Regulation 2016/679), by dimension:

  • Scope. AI Act: governs AI systems placed on the market or deployed in the EU, regardless of where the provider is established. GDPR: governs processing of personal data of individuals in the EU, regardless of where the controller is established.
  • Trigger. AI Act: placing an AI system on the EU market or putting it into service within the EU. GDPR: processing personal data of individuals in the EU.
  • Risk framework. AI Act: four-tier classification (unacceptable, high, limited, minimal). GDPR: no formal risk tiers, but data protection impact assessments are required for high-risk processing under Article 35.
  • Maximum fine. AI Act: EUR 35 million or 7% of global turnover, whichever is higher. GDPR: EUR 20 million or 4% of global turnover, whichever is higher.
  • Overlap. AI systems processing personal data must comply with both regulations simultaneously; the AI Act does not override GDPR requirements.

The two regulations run in parallel. An AI credit scoring tool processes personal data (triggering GDPR) and performs creditworthiness assessment (triggering AI Act Annex III). The deployer must satisfy both sets of requirements. A GDPR-compliant system is not automatically AI Act-compliant, because the AI Act imposes additional obligations around technical documentation and conformity assessment that GDPR does not require, plus ongoing human oversight duties.
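
One way to see the gap is to treat each regime's duties as a set and take the difference. An illustrative sketch; the labels are shorthand and deliberately incomplete, not exhaustive obligation lists:

    # Shorthand obligation labels; illustrative and deliberately incomplete.
    gdpr = {"lawful_basis", "data_subject_rights", "records_of_processing", "dpia"}
    ai_act = {"dpia", "technical_documentation", "conformity_assessment",
              "human_oversight", "logging", "eu_database_registration"}

    # What a GDPR-compliant deployment still owes under the AI Act:
    print(sorted(ai_act - gdpr))
    # -> ['conformity_assessment', 'eu_database_registration',
    #     'human_oversight', 'logging', 'technical_documentation']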

Frequently asked questions

Does the AI Act apply to AI tools used by audit firms?

Yes, if the firm deploys AI systems within EU territory, it is a "deployer" under Article 3(4). The classification depends on the tool's function. An AI system that automates credit risk assessment for a client engagement could fall under Annex III, point 5(b). An AI tool used purely for internal scheduling would not. Firms should map each AI tool to the Act's risk categories before August 2026.

What are the fines for non-compliance with the AI Act?

Article 99 ties penalties to severity. Deploying a prohibited AI system carries fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk system requirements drops to EUR 15 million or 3% of turnover. Supplying incorrect information to authorities falls further, to EUR 7.5 million or 1% of turnover. For SMEs, the regulation applies the lower of the fixed amount and the turnover percentage.
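
The tiering and the SME rule can be expressed directly. A minimal Python sketch of the caps described above; the tier keys are shorthand:

    # Article 99 penalty caps: (EUR fixed amount, share of global turnover).
    TIERS = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_non_compliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }

    def max_fine(violation: str, global_turnover: float, is_sme: bool = False) -> float:
        fixed, pct = TIERS[violation]
        # General rule: the higher of the two caps; for SMEs, the lower.
        pick = min if is_sme else max
        return pick(fixed, pct * global_turnover)

    # A large group (EUR 2bn turnover) deploying a prohibited system:
    print(max_fine("prohibited_practice", 2_000_000_000))            # -> 140000000.0
    # An SME with EUR 50m turnover, same violation:
    print(max_fine("prohibited_practice", 50_000_000, is_sme=True))  # -> 3500000.0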

When do high-risk AI system obligations take effect?

The main high-risk obligations under Articles 8 to 15 (the requirements for high-risk systems, addressed chiefly to providers) and Article 26 (deployer obligations) apply from 2 August 2026. Systems embedded in products regulated by the EU harmonisation legislation listed in Annex I (such as medical devices and machinery) receive an extended deadline of 2 August 2027. Prohibited AI practices under Article 5 have applied since 2 February 2025.