- You’ll be able to distinguish continuous auditing from continuous monitoring using the IIA’s GTAG 3 framework, with clear ownership and output differences
- You’ll understand how continuous monitoring at your audit client affects your risk assessment under ISA 315.12 through ISA 315.26
- You’ll know when you can rely on continuous monitoring outputs as audit evidence under ISA 330 and when you can’t
- You’ll have a worked example of evaluating a client’s continuous monitoring programme during a statutory audit
The ownership question: who runs it determines what it is
The single most important distinction between continuous auditing and continuous monitoring is ownership. Continuous auditing is owned by the internal audit function. Continuous monitoring is owned by management (typically operations, finance, or IT). This isn’t a technicality. It determines independence, the purpose of the output, and what happens when an exception is found.
When internal audit runs continuous auditing, the output is an audit finding. The internal auditor assesses the exception, determines whether it indicates a control failure, and reports it through the audit reporting structure (to the audit committee, under the internal audit charter). Internal audit doesn’t fix the exception. It reports it.
When management runs continuous monitoring, the output is an operational alert. The AP manager sees a flagged duplicate invoice, investigates it, and resolves it. Management is both detecting and correcting. No independent assessment is involved.
That’s the line.
This ownership distinction is why GTAG 3 warns against internal auditors assuming ownership of continuous monitoring processes. If internal audit builds and operates the monitoring tool, they lose independence over the process they built. GTAG 3 recommends that internal audit design the methodology, hand it to management for ongoing operation, and then test whether management’s monitoring is working effectively.
How continuous auditing works in practice
Continuous auditing uses technology to perform audit procedures more frequently than the traditional annual cycle. Instead of testing a sample of 25 purchase orders during October fieldwork, the internal audit team configures an automated script that tests every purchase order against defined criteria (approval thresholds, segregation of duties, vendor master file consistency) on a weekly or daily basis.
The IIA’s 2025 North American Pulse survey found that 78% of Chief Audit Executives identified data analytics as their teams’ most critical capability gap. Continuous auditing is the practical application of data analytics to the audit cycle, and most internal audit functions are still building the capability rather than operating it at scale.
A continuous auditing programme typically covers two areas. Continuous controls assessment (CCA) tests whether specific internal controls are operating effectively on an ongoing basis. Continuous risk assessment (CRA) analyses transaction data to identify emerging risk patterns (unusual journal entries, concentration of vendor payments, changes in transaction volumes) that might warrant a targeted audit. Both feed into the internal audit plan: CCA confirms that tested controls remain effective between audit cycles, and CRA directs audit resources toward areas where the risk profile is shifting.
For the typical mid-tier audit client (€10M to €100M revenue), continuous auditing by internal audit is still uncommon. Most mid-market companies in Europe don’t have a full-time internal audit function, let alone one with the data analytics capability to run automated testing. Where it exists, it’s usually at the larger end of the mid-market (€50M+ revenue) or at subsidiaries of larger groups where the parent’s internal audit function pushes the methodology down.
How continuous monitoring works in practice
Continuous monitoring is more common than continuous auditing because it serves an operational purpose, not just an assurance purpose. Any automated exception report that management uses to oversee a process qualifies as continuous monitoring. The AP duplicate invoice flag is one example. Others include automated three-way matching (purchase order, goods receipt, invoice), payroll monitoring that flags employees receiving payments to changed bank accounts, and IT security monitoring that alerts on failed login attempts or privilege escalations.
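The three-way match mentioned above reduces to a simple comparison of three records. This is a minimal sketch under assumed record layouts, not any specific ERP's matching logic.

```python
from decimal import Decimal

# Hedged sketch of automated three-way matching; record layouts are
# assumptions, not taken from any specific ERP system.
def three_way_match(po: dict, receipt: dict, invoice: dict) -> list[str]:
    """Compare purchase order, goods receipt, and invoice; return mismatches."""
    issues = []
    if invoice["qty"] > receipt["qty"]:
        issues.append("billed quantity exceeds quantity received")
    if invoice["unit_price"] != po["unit_price"]:
        issues.append("invoice price differs from agreed PO price")
    return issues

po = {"qty": 100, "unit_price": Decimal("12.50")}
receipt = {"qty": 95}                                   # short delivery
invoice = {"qty": 100, "unit_price": Decimal("12.50")}  # vendor bills full order

# A non-empty result routes the invoice for manual review before payment.
mismatches = three_way_match(po, receipt, invoice)
```

Management owns both halves of the loop here: the tool detects the mismatch, and an AP clerk resolves it before the payment run. That detect-and-correct pairing is what makes it monitoring rather than auditing.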
The connection to the external audit is direct. Under ISA 315.26, the auditor is required to obtain an understanding of the entity’s internal control relevant to the audit. If the client operates continuous monitoring over a significant transaction cycle, that monitoring is part of the entity’s control environment. You need to understand it, evaluate its design, and decide whether to test its operating effectiveness as part of your audit strategy under ISA 330.
The design question is whether the monitoring tool tests the right things. An AP duplicate detection tool that only matches exact invoice amounts won’t catch a duplicate where the vendor submitted €10,000.00 on one invoice and €10,000 on the other (different formatting, same amount). An IT access monitoring tool that alerts on failed logins but doesn’t flag when a terminated employee’s account remains active for 30 days after departure has a design gap.
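The formatting gap described above is fixable by normalising amounts before comparing them. A minimal sketch, assuming UK/US-style formatting (comma as thousands separator); a production rule would need locale handling:

```python
from decimal import Decimal

def normalise_amount(raw: str) -> Decimal:
    """Strip the currency symbol and thousands separators before comparing.

    Assumes UK/US-style formatting (comma as thousands separator); a
    production rule would need locale handling for e.g. "10.000,00".
    """
    return Decimal(raw.replace("\u20ac", "").replace(",", "").strip())

# Exact string matching misses this pair; normalised values are equal.
first, second = "\u20ac10,000.00", "\u20ac10,000"
same_as_strings = (first == second)                              # False
same_as_amounts = (normalise_amount(first) == normalise_amount(second))  # True
```

A tool that compares raw strings has the design gap; a tool that compares normalised values does not. Which one the client built is exactly what the design evaluation should establish.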
The operating effectiveness question is whether management actually investigates and resolves the exceptions the tool generates. A monitoring tool that flags 200 exceptions per week, of which management investigates 40 and ignores 160, isn’t operating effectively regardless of how well it’s designed. Your ISA 330 test of operating effectiveness should include a sample of flagged exceptions and trace them through to management’s investigation and resolution.
Where they overlap and where the IIA draws the line
The overlap happens when internal audit uses the same technology platform as management’s continuous monitoring. If the internal audit team and the AP team both use the same data analytics tool to run exception queries on the same transaction data, the question is whether internal audit is performing an independent assessment or merely duplicating management’s monitoring.
GTAG 3 addresses this directly. Internal audit should test management’s continuous monitoring, not replicate it. If management monitors AP for duplicate invoices, internal audit’s continuous auditing procedures should test whether management’s monitoring is effective (by independently re-running the detection logic, comparing results, and checking whether exceptions were resolved). Internal audit should not be the first line of detection for duplicate invoices. That’s management’s job.
The practical test is: if internal audit stopped running its continuous auditing procedures tomorrow, would management’s monitoring continue to catch the same exceptions? If yes, the roles are properly separated. If no (because internal audit’s tool is the only detection mechanism), internal audit has assumed a management responsibility and compromised its independence under IIA Standard 1100.
For the external auditor, this matters when deciding whether to rely on internal audit work under ISA 610 (Revised). If internal audit’s “continuous auditing” is actually performing management’s monitoring function, the independence assumption underlying ISA 610 reliance is weakened.
What this means for external auditors under ISA 315 and ISA 330
If your audit client has implemented continuous monitoring over a significant process, you face a specific decision chain during planning.
First, under ISA 315.12, you need to understand the monitoring. What processes does it cover? What exception criteria does it use? How frequently does it run? Who reviews the output? What happens when an exception is flagged? This goes into your understanding of internal control and informs your risk assessment.
Second, you decide whether to adopt a controls-reliance strategy under ISA 330. If the client’s three-way matching tool processes 100% of purchase transactions and routes all mismatches for manual review before payment, your tests of details on completeness and accuracy of accounts payable may be reduced (not eliminated, because ISA 330.18 still requires substantive evidence).
Third, if you decide to rely on the monitoring controls, you need to test their operating effectiveness under ISA 330.8. This means selecting a sample of exceptions generated by the tool, tracing them to management’s investigation and resolution, and evaluating whether the monitoring ran consistently throughout the period. A tool that was operational for ten months but offline for two months during a system migration doesn’t provide year-long assurance. You’ll need to design alternative procedures (substantive testing) for the gap period, and document why you’re splitting the testing approach across two periods in your ISA 330 documentation.
Fourth, you evaluate whether the monitoring tool itself is reliable. Under ISA 315.A165 through A167 (IT general controls), the automated controls within the monitoring tool (the exception detection logic, the completeness of data feeds, the access controls over the tool’s configuration) need to be addressed. If someone in the AP team can modify the duplicate detection threshold without logging the change, the tool’s integrity is compromised.
Worked example: evaluating a client’s AP continuous monitoring programme
Client scenario: Brouwer Logistiek B.V., a Dutch logistics company with €38M revenue, uses an automated monitoring tool (built in-house using Python scripts running against their ERP database) that flags potential duplicate invoices, invoices exceeding approval thresholds, and payments to new vendors added in the last 30 days. The AP manager reviews flagged items daily. Brouwer has no internal audit function.
Step 1: Understand the monitoring tool’s scope and design (ISA 315.12 through ISA 315.26)
Request the technical documentation for the monitoring tool. Identify that it runs four detection rules: exact duplicate invoice numbers, approximate amount matching (within €50), approval threshold breach (invoices above €5,000 without dual sign-off), and new vendor payments. The tool pulls data nightly from the ERP system. The AP manager receives an email each morning with flagged items.
Documentation note
Record the four detection rules, the data source (ERP nightly extract), the frequency (daily), and the responsible person (AP manager) in the planning memorandum under ISA 315 understanding of internal control. Note that Brouwer has no internal audit function, so no continuous auditing exists. All monitoring is management-owned.
Step 2: Evaluate design effectiveness
The approximate amount matching uses a €50 tolerance. The tool would flag a near-duplicate pair of €10,000 and €10,047 (difference of €47, within tolerance) but would miss a pair of €10,000 and €10,150 (difference of €150, outside tolerance). Assess whether the €50 tolerance is appropriate given the average invoice size (€4,200). At that average, a €50 tolerance is approximately 1.2% of a typical invoice, which is reasonable for detecting exact and near-exact duplicates but may miss split invoices.
Documentation note
Record the design assessment in the audit file. Note the tolerance limitation and consider whether additional substantive procedures are needed for the split invoice risk.
Step 3: Test operating effectiveness (ISA 330.8)
Select a sample of 20 exceptions flagged by the tool during the audit period (stratified across all four detection rules). For each, trace from the flag to the AP manager's investigation record and resolution. Confirm that 18 of 20 were investigated within two business days, with documented conclusions. Two exceptions from March were investigated after five business days due to staff absence. Assess whether the delay is a deficiency: it is, but not a significant one, because the invoices were not paid during the delay period.
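The stratified selection can be made reproducible for the audit file by fixing a random seed. A sketch under an assumed exception-log layout (the rule names mirror Brouwer's four detection rules; everything else is illustrative):

```python
import random

RULES = ["exact_duplicate", "approx_amount", "threshold_breach", "new_vendor"]

def stratified_sample(exceptions: list[dict], per_rule: int = 5,
                      seed: int = 2024) -> list[dict]:
    """Select per_rule exceptions from each detection rule.

    A fixed seed keeps the selection reproducible so it can be documented
    in the audit file. Record layout is an assumption for illustration.
    """
    rng = random.Random(seed)
    sample = []
    for rule in RULES:
        pool = [e for e in exceptions if e["rule"] == rule]
        sample.extend(rng.sample(pool, min(per_rule, len(pool))))
    return sample

# Illustrative population: 12 flagged items per rule over the period.
population = [{"id": f"{rule}-{i}", "rule": rule}
              for rule in RULES for i in range(12)]
selected = stratified_sample(population)  # 20 items, 5 per detection rule
```

Stratifying by rule ensures the operating-effectiveness conclusion covers all four detection rules rather than being dominated by whichever rule fires most often.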
Documentation note
Record the sample results. Conclude that the monitoring tool is operating effectively with a minor observation about investigation timeliness during staff absences. Communicate the observation to management under ISA 265.10 (other deficiencies). It does not rise to significant deficiency level under ISA 265.9.
Step 4: Test the reliability of the tool itself (IT general controls)
Confirm who has access to modify the Python scripts. Only the IT administrator and the financial controller have write access to the script repository. Changes are logged in version control (Git). Review the change log for the audit period. One change was made in June to add the new vendor detection rule. The change was approved by the financial controller and tested before deployment. Conclude that access controls and change management over the monitoring tool are adequate.
Documentation note
Record the ITGC assessment for the monitoring tool in the IT controls section of the audit file. Reference ISA 315.A165 through A167.
Practical checklist for assessing client monitoring programmes
- Ask every audit client during planning whether they operate any automated monitoring or exception reporting over significant transaction cycles. If they do, add it to your ISA 315 understanding of internal control.
- For each monitoring tool, document the detection rules, data source, frequency, responsible person, and exception investigation process. This is the minimum needed to evaluate design effectiveness.
- If you plan to rely on the monitoring controls to reduce substantive testing, test operating effectiveness by selecting a sample of flagged exceptions and tracing them through to resolution. Don’t just confirm the tool runs. Confirm that management acts on its output.
- Evaluate the IT general controls over the monitoring tool itself: who can change the detection logic, how changes are logged, and whether the data feed from the source system is complete.
- If the client also has an internal audit function performing continuous auditing over the same process, assess whether the roles are properly separated. Internal audit should test management’s monitoring, not operate it.
Common mistakes when relying on automated monitoring outputs
- Treating continuous monitoring as a control without testing it. The monitoring tool is a control. Like any control you intend to rely on, it requires a test of operating effectiveness under ISA 330.8. Accepting management’s assertion that “the tool runs every day” without testing a sample of exception investigations is equivalent to accepting management’s assertion that a manual approval control operates without testing any approvals.
- Confusing continuous monitoring with continuous auditing when the client calls it “continuous auditing.” If management built and operates the tool, and management investigates the exceptions, it’s continuous monitoring regardless of what the client calls it. The label doesn’t change the independence analysis. Document the actual ownership in your audit file.
Frequently asked questions
What is the difference between continuous auditing and continuous monitoring?
Continuous auditing is owned by the internal audit function and uses automated tools to test controls or transactions on an ongoing basis, producing audit findings reported through the audit reporting structure. Continuous monitoring is owned by management (operations, finance, or IT) and uses automated tools to track business processes and flag exceptions for management investigation and resolution. The key distinction is ownership: internal audit reports findings independently, while management both detects and corrects.
How does continuous monitoring affect the external audit under ISA 315?
Under ISA 315.26, if the client operates continuous monitoring over a significant transaction cycle, the auditor must understand it as part of the entity’s internal control. This includes what processes it covers, exception criteria, frequency, who reviews output, and what happens when exceptions are flagged. This understanding informs the risk assessment and the decision on whether to adopt a controls-reliance strategy under ISA 330.
Can external auditors rely on continuous monitoring outputs as audit evidence?
External auditors can rely on continuous monitoring controls to reduce (but not eliminate) substantive testing under ISA 330, but only after testing the monitoring’s operating effectiveness under ISA 330.8. This means selecting a sample of flagged exceptions, tracing them to management’s investigation and resolution, evaluating consistency of the monitoring throughout the period, and assessing IT general controls over the tool itself under ISA 315.A165 through A167.
What does the IIA’s GTAG 3 say about the relationship between continuous auditing and monitoring?
GTAG 3 recommends that internal audit should test management’s continuous monitoring, not replicate it. If management monitors AP for duplicate invoices, internal audit should test whether that monitoring is effective rather than being the first line of detection. If internal audit builds and operates the monitoring tool, they lose independence over the process. The practical test is whether management’s monitoring would continue to catch exceptions if internal audit stopped its procedures.
What are common mistakes when relying on automated monitoring outputs in an audit?
The two most common mistakes are: treating continuous monitoring as a control without testing it (accepting management’s assertion that the tool runs daily without testing exception investigations, which violates ISA 330.8), and confusing continuous monitoring with continuous auditing when the client labels it incorrectly (if management built, operates, and investigates exceptions from the tool, it’s monitoring regardless of what the client calls it).
Further reading and source references
- IIA GTAG 3 (Second Edition), Continuous Auditing: Coordinating Continuous Auditing and Monitoring to Provide Continuous Assurance: the definitive framework for distinguishing continuous auditing from continuous monitoring.
- ISA 315 (Revised 2019), Identifying and Assessing the Risks of Material Misstatement: paragraphs 12–26 on understanding the entity’s internal control, and A165–A167 on IT general controls.
- ISA 330, The Auditor’s Responses to Assessed Risks: paragraphs 8–18 on tests of controls and substantive procedures.
- ISA 610 (Revised), Using the Work of Internal Auditors: reliance on internal audit work and independence considerations.
- IIA 2025 North American Pulse Survey: data analytics as the most critical capability gap for internal audit functions.
- ISA 265, Communicating Deficiencies in Internal Control: paragraphs 9–10 on communicating significant and other deficiencies.