Key Takeaways
- How to apply four data analytics techniques to real engagement data under ISA 520 and ISA 500
- How to document analytics as audit evidence that satisfies ISA 500.A2’s relevance and reliability requirements
- Where analytics fits in the risk assessment process under ISA 315.A14
- What a working paper looks like when analytics supports a substantive conclusion
What ISA 520 and ISA 500 Actually Require from Analytics
ISA 520 does not mention the words “data analytics.” The standard refers to analytical procedures: comparisons of recorded amounts to expectations the auditor develops (ISA 520.4). But the standard is technology-agnostic on purpose. Whether you build your expectation in Excel or a specialist tool, the requirements in ISA 520.5 are the same. The expectation must be sufficiently precise. Significant differences must be identified. And those differences require investigation.
ISA 500.A2 adds the evidence dimension. Any audit evidence you obtain from analytics must be relevant to the assertion and reliable given its source. A Benford’s Law test on journal entries is relevant to the occurrence assertion. A trend analysis on revenue is relevant to completeness and accuracy. But neither produces sufficient evidence on its own. ISA 330.A5 requires the auditor to consider the combination of procedures, not a single analytics output.
Where analytics changes the economics of a mid-tier engagement is in the ratio of effort to coverage. Testing 60 journal entries from a random sample tells you almost nothing about the other 139,940 entries. Running an outlier filter across the full population in 20 minutes tells you which 500 entries warrant investigation. Coverage is orders of magnitude better. And the cost is comparable.
The practical implication: analytics is strongest as a risk identification tool (ISA 315.A14 references the use of data analysis in understanding the entity) and as corroborative evidence layered on top of traditional testing. It does not replace testing. It changes where you point your tests. That distinction matters for documentation.
The real barrier is documentation, not technology
One common misconception is that analytics requires specialist software. It doesn’t. The ciferi analytical review calculator runs ISA 520 procedures on trial balance data without any installation. For journal entry testing, a filtered pivot table in Excel covers most of what mid-tier firms need. Every analytics procedure needs a documented expectation, a documented output, and a documented conclusion linking the two. If you skip any of those steps, the analytics adds time without adding evidence.
The Four Techniques That Produce Audit Evidence
Not every analytics technique is equally useful on a mid-tier engagement. These four produce output that translates directly into working paper evidence. Each one maps to a specific ISA requirement.
Trend Analysis and Fluctuation Testing (ISA 520.5)
ISA 520.5 requires the auditor to develop an expectation of recorded amounts. Trend analysis does this by comparing current-period figures to prior periods or budgets. The key is precision. ISA 520.A5 states that the precision of the expectation depends on the level at which data is disaggregated and the reliability of the source.
An entity-level revenue comparison (this year versus last year) is a starting point. A revenue comparison disaggregated by product line, by quarter, with an expectation built from known volume changes and contractual price adjustments, is a substantive analytical procedure under ISA 520.5. The difference between the two is a few hours of work and a substantial step up in evidence quality.
Most mid-tier teams stop at entity level because disaggregation takes time. But the engagement letter usually lists the client’s segments. The prior-year file usually contains disaggregated data. Building the disaggregated expectation from those two sources adds perhaps 90 minutes to the analytical procedure and transforms it from a planning-stage screening tool into a substantive procedure.
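A disaggregated expectation is mechanically simple. The sketch below, using hypothetical segment names, amounts, and a hypothetical threshold (none of these are client data), builds the expectation from prior-period figures plus documented adjustments and compares each segment's actual against the investigation threshold:

```python
# Hypothetical prior-period revenue per segment; names and amounts are
# illustrative assumptions, not client data.
prior_year = {"segment_a": 10_000_000, "segment_b": 4_000_000}

# Known adjustments documented at planning (positive = expected increase):
# volume changes, contractual price changes, customers gained or lost.
adjustments = {"segment_a": -500_000, "segment_b": 250_000}

# Current-period actuals from the trial balance.
actual = {"segment_a": 9_300_000, "segment_b": 4_700_000}

# Investigation threshold tied to performance materiality (assumed figure).
threshold = 300_000

results = {}
for segment in prior_year:
    expectation = prior_year[segment] + adjustments[segment]
    difference = actual[segment] - expectation
    results[segment] = (expectation, difference, abs(difference) > threshold)
```

The design point is that the expectation and the threshold are fixed before the comparison runs, which is the sequence ISA 520.5 asks you to document.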
Outlier Detection in Journal Entries (ISA 240.32)
ISA 240.32 requires the auditor to test journal entries for indicators of fraud, specifically those made at unusual times, by unexpected individuals, or involving unusual accounts. Outlier detection automates the identification step.
Sort journal entries by posting time. Entries posted between 22:00 and 06:00 get flagged. So do entries posted on weekends or public holidays. Filter by user. Entries posted by accounts not normally authorised for journal entry processing get flagged. Filter by account combination. Revenue to cash with no offsetting receivable movement gets flagged. None of these are automatic findings. Each flag is a starting point for inquiry under ISA 240.A46.
You can run all of these filters in a single pivot table. Export the general ledger, add columns for day of week (the WEEKDAY function in Excel), posting hour (extract from the timestamp), and user ID. Conditional formatting on weekends and out-of-hours entries takes under ten minutes.
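The same filters are straightforward to script if you prefer working outside Excel. This sketch uses only Python's standard library; the record layout, user list, and cut-off times are assumptions to adapt to the actual GL export:

```python
from datetime import datetime

# Hypothetical journal entry records: (posting timestamp, user ID, amount).
entries = [
    (datetime(2024, 3, 4, 14, 30), "jsmith", 1_200.00),    # Monday afternoon
    (datetime(2024, 3, 9, 23, 15), "jsmith", 8_500.00),    # Saturday, 23:15
    (datetime(2024, 3, 6, 2, 45), "sysadmin", 40_000.00),  # Wednesday, 02:45
]

# Assumption: taken from the client's user access list.
AUTHORISED_USERS = {"jsmith", "mvries"}

def flags(ts, user):
    """Return the ISA 240.32-style risk indicators that apply to an entry."""
    f = []
    if ts.weekday() >= 5:             # Saturday = 5, Sunday = 6
        f.append("weekend")
    if ts.hour >= 22 or ts.hour < 6:  # posted between 22:00 and 06:00
        f.append("out_of_hours")
    if user not in AUTHORISED_USERS:
        f.append("unexpected_user")
    return f

flagged = []
for ts, user, amount, in entries:
    f = flags(ts, user)
    if f:
        flagged.append((ts, user, amount, f))
```

As in the pivot-table version, each flag is a starting point for inquiry, not a finding.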
Document filter criteria before applying them
If you apply filters, scan the results, then write up the criteria retrospectively, a reviewer cannot confirm the criteria were independent of the results. Write the filter criteria in the planning memo. Apply them to the data. Document the output. This sequence matters for ISA 240.32(a) defensibility.
Benford’s Law First-Digit Testing (ISA 240 and ISA 500)
Benford’s Law predicts the expected frequency distribution of first digits in naturally occurring numerical datasets. In an unmanipulated dataset, the digit 1 appears as the first digit approximately 30.1% of the time, digit 2 about 17.6%, and the frequency decreases logarithmically through to digit 9 at 4.6%. Deviations from this distribution can indicate data manipulation or fabrication. They can also signal systematic error in the recording process.
The mathematical basis is straightforward: when you combine data from multiple sources with different orders of magnitude, the leading digits follow a logarithmic distribution rather than a uniform one. This is why it works well on accounting populations.
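A minimal first-digit test needs only the expected frequencies log10(1 + 1/d) and a chi-squared comparison against the observed counts (the critical value at 95% confidence with 8 degrees of freedom is 15.51). This is a sketch assuming positive, non-zero amounts:

```python
import math

def benford_expected(digit):
    """Expected first-digit frequency under Benford's Law: log10(1 + 1/d)."""
    return math.log10(1 + 1 / digit)

def first_digit(amount):
    """Leading non-zero digit of a positive amount (assumed > 0)."""
    s = str(abs(amount)).lstrip("0.")
    return int(s[0])

def chi_squared(amounts):
    """Chi-squared statistic comparing observed first-digit counts to Benford."""
    n = len(amounts)
    observed = {d: 0 for d in range(1, 10)}
    for a in amounts:
        observed[first_digit(a)] += 1
    return sum(
        (observed[d] - n * benford_expected(d)) ** 2 / (n * benford_expected(d))
        for d in range(1, 10)
    )
```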
This is not a pass/fail test. A deviation in first-digit frequency does not prove fraud or error. It identifies populations that warrant further investigation.
The technique works best on large, naturally occurring datasets: accounts payable, revenue transactions, expense claims. It works poorly on datasets with artificial constraints (invoices priced at €9.99, rents at round numbers, utility bills in narrow ranges). Always document which datasets you applied the test to and why certain populations were excluded.
Stratification and Gap Analysis (ISA 530.A10)
When you do move to sampling, ISA 530.A10 encourages stratification to reduce the variability within each stratum and thereby reduce sample size. Data analytics handles the stratification mechanically. Sort the population by value. Identify natural breakpoints. Separate high-value items for 100% testing, middle-value items for representative sampling, and low-value items for analytical coverage.
The breakpoints should link to performance materiality. Any item individually exceeding performance materiality goes into the 100% testing stratum. Items between the trivial threshold and performance materiality form the middle stratum, sampled representatively. Items below the trivial threshold receive analytical coverage only (unless a specific risk applies). This is not arbitrary. ISA 530.A10 references reducing sample risk by ensuring that each stratum is internally homogeneous.
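The stratification rule reduces to a few lines of code. The sketch below uses the €760K performance materiality from the worked example later in this article and an assumed trivial threshold; the population values are hypothetical:

```python
# Performance materiality from the worked example; trivial threshold is an
# assumed illustrative figure from the engagement's materiality working paper.
PERFORMANCE_MATERIALITY = 760_000
TRIVIAL_THRESHOLD = 38_000

# Hypothetical population of item values.
population = [1_200_000, 820_000, 410_000, 95_000, 40_000, 12_000, 900]

def stratify(items):
    strata = {"test_100pct": [], "sample": [], "analytical_only": []}
    for amount in items:
        if amount > PERFORMANCE_MATERIALITY:
            strata["test_100pct"].append(amount)     # individually material
        elif amount > TRIVIAL_THRESHOLD:
            strata["sample"].append(amount)          # representative sampling
        else:
            strata["analytical_only"].append(amount) # analytical coverage only
    return strata

strata = stratify(population)
```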
Gap analysis is the related technique: sort sequential documents (invoice numbers, cheque numbers, delivery notes) and identify gaps or duplicates. Missing invoice numbers in a sales ledger are a completeness assertion flag. Duplicate cheque numbers in the cash disbursements journal are an occurrence assertion flag. Both map directly to assertion-level testing under ISA 330.
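Gap analysis itself is a set comparison over the document number range. A sketch on hypothetical invoice numbers:

```python
from collections import Counter

# Hypothetical invoice numbers from a sales ledger extract.
invoice_numbers = [1001, 1002, 1004, 1005, 1005, 1007]

def gaps_and_duplicates(numbers):
    """Return missing sequence numbers (completeness flag) and duplicated
    numbers (occurrence flag) in a nominally sequential document series."""
    present = set(numbers)
    gaps = [n for n in range(min(numbers), max(numbers) + 1) if n not in present]
    duplicates = [n for n, count in Counter(numbers).items() if count > 1]
    return gaps, duplicates

gaps, duplicates = gaps_and_duplicates(invoice_numbers)
```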
The ciferi materiality calculator can help set the threshold above which items receive 100% testing, since your stratification breakpoints should relate to performance materiality.
What You Need from the Client (and When to Ask for It)
Request the full general ledger extract during planning, not during fieldwork. If you wait until fieldwork starts, you lose the ability to run analytics before arriving on site. The ideal data request includes the complete journal entry file (all fields, not a summary), the trial balance at both year-end and comparative period, the chart of accounts, and the user access list for the accounting system.
Most ERP systems (Exact Online, SAP Business One, AFAS, Twinfield) export to CSV or Excel. Ask the client for the export in the original format. Do not accept PDF printouts of transaction lists. They cannot be filtered or analysed programmatically.
If the client pushes back on providing the full journal entry file
Cite ISA 240.32. The auditor is required to test journal entries and other adjustments for indicators of fraud. The data request is not optional. It is a direct consequence of the standard’s requirements.
Ask for the chart of accounts alongside the general ledger. Without the chart of accounts, you cannot identify which account combinations are unusual. Revenue debited against a general expense account is suspicious only if you know the account classification. Most clients can export it in under five minutes.
For multi-entity clients, request the general ledger for each entity separately. Consolidation adjustments should come as a separate file. Running analytics on a consolidated GL that includes eliminations produces false flags on intercompany entries and obscures entity-level patterns. Keep the populations separate until you need to consolidate.
Worked Example: Van Houten Metaal B.V.
Van Houten Metaal B.V. is a Dutch steel fabrication company with €38M revenue, 82,000 journal entries in the current period, a team of one partner and two associates, and a prior-year audit file with no significant findings.
1. Extract and Profile the Data
Export the full general ledger and trial balance from the client’s Exact Online system. Load the journal entry file into Excel. Row count: 82,412 entries.
Documentation note
Record the extraction date, source system, period covered, total number of records, and file hash in WP A.1.1. Cross-check the trial balance to the general ledger totals to confirm completeness of the extract.
2. Run Benford’s First-Digit Test on Accounts Payable
Isolate the 14,208 AP transactions. Count the first-digit frequency. Result: digit 5 appears at 12.4% versus the expected 7.9%. The chi-squared statistic exceeds the critical value at 95% confidence. This does not confirm manipulation. It identifies the AP population as warranting focused testing.
Documentation note
Record the Benford’s output table, the population tested, the deviation identified, and the follow-up procedure planned in WP B.3.2. Attach the frequency distribution table as a supporting schedule.
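As a sanity check on the example's figures (not part of the working paper itself): at this population size, the deviation on digit 5 alone contributes more to the chi-squared statistic than the 95% critical value for the full nine-digit test, so the stated conclusion follows from the one digit quoted:

```python
import math

n = 14_208                        # AP transactions in the example population
expected_p = math.log10(1 + 1/5)  # Benford expected share for digit 5 (~7.9%)
observed_p = 0.124                # observed share of digit 5 in the example

# Chi-squared contribution from digit 5 alone.
contribution = n * (observed_p - expected_p) ** 2 / expected_p

# Critical value at 95% confidence with 8 degrees of freedom.
CRITICAL_8DF_95 = 15.507
exceeds = contribution > CRITICAL_8DF_95
```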
3. Flag Unusual Journal Entries for ISA 240.32 Testing
Filter the full journal entry population. Criteria: entries posted between 22:00–06:00, entries posted on weekends or public holidays, entries with rounded amounts above €10,000, entries debiting revenue accounts with credits to non-standard offsetting accounts. Result: 347 entries flagged (0.42% of the population). Select the 25 highest-value flagged entries for inquiry and corroboration.
Documentation note
Record the filter criteria, the number of entries flagged, the basis for selecting the 25 for testing, and the results of inquiry for each in WP C.2.1.
4. Perform Disaggregated Trend Analysis on Revenue
Van Houten operates in two segments: structural steel (68% of revenue, €25.8M) and architectural metalwork (32%, €12.2M). Compare each segment’s quarterly revenue to prior year, adjusting for the known loss of one structural steel client (€1.4M annualised) and a new architectural contract (€0.8M, first deliveries in Q2).
Structural steel expectation: €24.4M (prior year minus lost client). Actual: €25.1M. Difference: €0.7M, within the 5% investigation threshold set at planning (performance materiality of €760K). No further work required on this segment.
Architectural metalwork expectation: €11.8M (prior year plus new contract, pro-rated for Q2 start). Actual: €13.4M. Difference: €1.6M, exceeding the investigation threshold. This drives a substantive inquiry and corroboration procedure under ISA 520.7.
Documentation note
Record the expectation, the basis for each adjustment, the actual result, the investigation threshold, and the conclusion for each segment in WP D.1.3. The difference in architectural metalwork is the key audit matter for this section.
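The segment comparison above reduces to an arithmetic check. This sketch uses the example's figures (in € millions) and takes performance materiality of €0.76M as the investigation threshold:

```python
# Figures from the worked example, in € millions. The threshold is taken as
# performance materiality (€0.76M) set at planning.
threshold = 0.76

segments = {
    "structural_steel":        {"expectation": 24.4, "actual": 25.1},
    "architectural_metalwork": {"expectation": 11.8, "actual": 13.4},
}

results = {}
for name, s in segments.items():
    diff = round(s["actual"] - s["expectation"], 1)
    results[name] = (diff, abs(diff) > threshold)  # (difference, investigate?)
```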
5. Conclude and Document
The analytics produced four outputs: a population requiring focused AP testing (Benford’s), 25 journal entries for ISA 240.32 testing, a completed revenue analytical procedure with one segment requiring follow-up, and stratification breakpoints for sampling in remaining populations. Total time spent on analytics: approximately 6 hours from extraction through documentation.
The output directed 80% of the remaining fieldwork to the areas where risk was highest. Without the analytics, the team would have sampled randomly from the full AP population, tested 25 journal entries selected by value only, and performed entity-level revenue analysis that would not have identified the architectural metalwork difference. The €1.6M difference in that segment exceeded performance materiality. A random approach would have missed it.
Documentation note
Include a summary of all analytics performed, the conclusions reached, and the linkage to further procedures in the engagement completion memo (WP Z.1). Cross-reference each analytics output to the specific working paper where the follow-up procedure is documented.
Practical Checklist for Your Next Engagement
- Confirm data extraction completeness before running any analytics. Cross-check the extracted trial balance to the general ledger and to the client’s own reports. Record the reconciliation in your working papers. (ISA 500.A31)
- Document the expected result before you see the actual result. ISA 520.5(a) requires you to develop an expectation. If you document the expectation after seeing the output, you’ve introduced confirmation bias and your procedure is weaker evidence.
- Set your investigation threshold at planning, tied to performance materiality. ISA 520.A13 permits both quantitative and qualitative thresholds. Write the threshold in the planning memo before fieldwork starts.
- For every analytics flag, document the follow-up procedure performed, the evidence obtained, and the conclusion reached. An unexplained flag left in the file is worse than no analytics at all. The AFM has flagged exactly this pattern: analytics performed but not followed through. (ISA 520.7)
- Link each analytics output to the assertion it addresses. A journal entry outlier test addresses the occurrence and accuracy assertions. A Benford’s test addresses occurrence. Revenue trend analysis addresses completeness and accuracy. If you can’t name the assertion, the procedure isn’t mapped correctly.
- Keep the technology simple unless the engagement justifies otherwise. Excel pivot tables, the ciferi ISA 520 calculator, and basic sorting cover 90% of what a mid-tier engagement needs. ISA 520 does not require specialist software. It requires a documented expectation and a documented conclusion.
Common Mistakes Regulators Flag
- Not documenting the expected outcome before obtaining the actual result. The AFM’s 2023 inspection report identified this as a recurring deficiency: auditors performed analytical procedures but did not document the expected outcome before obtaining the actual result. Without a pre-determined expectation, the procedure under ISA 520.5 is incomplete because the auditor cannot demonstrate that the comparison was independent.
- Not documenting why filter criteria were chosen. The FRC’s 2022–23 Audit Quality Inspection Report noted that firms using data analytics on journal entry testing frequently failed to document why specific filter criteria were chosen. Under ISA 240.32(a), the selection criteria must relate to identified fraud risk factors. Filtering by “round amounts” without linking that criterion to a specific fraud risk for the entity does not satisfy the requirement.
Frequently Asked Questions
Does data analytics in audit require specialist software?
No. ISA 520 does not require specialist software. Excel pivot tables, the ciferi ISA 520 analytical review calculator, and basic sorting cover 90% of what a mid-tier engagement needs. The barrier is not technology — it is documentation.
Can data analytics replace traditional audit sampling?
Analytics does not replace testing. It changes where you point your tests. ISA 330.A5 requires the auditor to consider the combination of procedures, not a single analytics output. Analytics is strongest as a risk identification tool and as corroborative evidence layered on top of traditional testing.
What is Benford’s Law and how is it used in audit?
Benford’s Law predicts the expected frequency distribution of first digits in naturally occurring numerical datasets. In an unmanipulated dataset, the digit 1 appears as the first digit approximately 30.1% of the time, with frequency decreasing logarithmically through to digit 9 at 4.6%. Deviations can indicate data manipulation or systematic error. It works best on large, naturally occurring datasets like accounts payable and revenue transactions.
When should I request the client’s general ledger data?
Request the full general ledger extract during planning, not during fieldwork. If you wait until fieldwork starts, you lose the ability to run analytics before arriving on site. The ideal data request includes the complete journal entry file, the trial balance at year-end and comparative period, the chart of accounts, and the user access list.
What are the most common regulator findings on data analytics in audit?
The AFM’s 2023 inspection report identified a recurring deficiency: auditors performed analytical procedures but did not document the expected outcome before obtaining the actual result. The FRC noted that firms frequently failed to document why specific filter criteria were chosen and how they related to identified fraud risk factors.