What you'll learn
- You'll be able to build a response matrix that satisfies ISA 240.43 by linking every fraud risk to a procedure with documented nature, timing, extent, and unpredictability adjustments
- You'll understand why the AFM's most common ISA 240 finding (no documented link between risks and procedures) happens and how to prevent it
- You'll know how the six mandatory minimum responses (two overall responses, journal entry testing, estimates review, significant unusual transactions, and analytical procedures near completion) fit into the matrix
- You'll be able to explain, for each procedure, how it differs from error-focused testing (ISA 240.44)
You've identified four fraud risks on the engagement. They're sitting in your risk register. But when the reviewer opens the file, the question isn't whether you found the risks. It's whether every single one traces to a procedure that was actually performed, with documented reasoning for why that procedure (and not another) addresses that specific fraud risk.
A fraud risk response matrix under ISA 240 maps each identified fraud risk to a specific audit procedure, documenting how the nature, timing, and extent of that procedure differ from what you would do if the risk were one of error alone, with separate columns for unpredictability and unbiased design.
What ISA 240 actually requires for fraud risk responses
ISA 240.43 requires the auditor to design and perform further audit procedures whose nature, timing, and extent are responsive to the assessed risks of material misstatement due to fraud at the assertion level. ISA 240.44 adds a second layer: the auditor must consider whether, in selecting audit procedures, there is a need to address the risk of management override of controls. This is not optional. Every fraud risk in your register needs a procedure, and every procedure needs to show how it responds to that specific risk.
The standard also introduces two concepts that most audit files miss entirely. First, ISA 240.42 requires an element of unpredictability in the selection of audit procedures. You are expected to vary the nature, timing, or extent of your procedures from year to year, or to perform procedures that entity personnel would not anticipate. If you tested revenue at year-end last year and you test revenue at year-end this year using the same sample sizes and the same approach, you have not introduced unpredictability.
Second, ISA 240 (Revised) adds unbiased design confirmation (paragraph 42 of the revised text). For each procedure, you state what contradictory evidence the procedure could reveal. If the procedure can only confirm management's position, it fails the unbiased design test. This is a direct response to the observation that auditors tend to design procedures that seek confirming evidence rather than disconfirming evidence, which is exactly the bias that fraud exploits.
These requirements exist because fraud is different from error. Errors are random. Fraud is deliberate, concealed, and designed to avoid detection by the exact procedures auditors usually run. A response matrix that mirrors your error-testing programme has missed the point of ISA 240 entirely. The response must be qualitatively different from your error-focused work, and the matrix is where you demonstrate that difference.
Why unstructured working papers create the AFM's most common finding
The AFM has flagged the same deficiency across multiple inspection cycles: no documented link between identified fraud risks and the procedures performed to address them. The finding does not mean firms are ignoring fraud. It means the link between risk identification and response is invisible in the working papers.
Here is how it typically happens. The team discusses fraud in planning. Risks go into a risk summary. Procedures go into a separate audit programme. The programme references the risks somewhere in a header, maybe a footnote. But when a reviewer tries to trace one specific fraud risk from the register to the exact procedure that addresses it (and back again), the chain breaks. The risk says "management override." The programme says "journal entry testing." Nobody documented which specific aspect of management override the journal entry testing addresses, what was different about the timing compared to error testing, or why the selection criteria were designed the way they were.
The problem is structural. When the risk register and the audit programme are separate documents with no formal linkage, the connection exists only in the engagement team's heads. If the senior who designed the procedures leaves the firm, or if the file is reviewed two years later by an inspector who never met the team, the link is gone.
A response matrix fixes this by making the chain explicit. One row per response. A column linking to the risk register. Separate columns for nature, timing adjustment, extent adjustment, unpredictability, evidence type, and unbiased design. The reviewer can trace any risk forward to a procedure and any procedure back to a risk without opening a second working paper. The risk register points forward to the response matrix. The response matrix points backward to the risk register. Both directions work without ambiguity.
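When the register and matrix live in structured form (a spreadsheet export, for instance), the two-way trace can even be checked mechanically. A minimal Python sketch, using invented risk and response identifiers, not a prescribed file format:

```python
# Hypothetical risk register and response matrix; all IDs and descriptions are invented.
risk_register = {
    "FR-01": "Fictitious intercompany revenue",
    "FR-02": "Understated fuel cost accruals",
    "FR-03": "Duplicate supplier payments",
    "FR-04": "Management override of controls (presumed)",
}

response_matrix = {
    "R-01": {"linked_risk": "FR-01", "nature": "Counterparty confirmations"},
    "R-02": {"linked_risk": "FR-02", "nature": "Retrospective accrual test"},
    "R-03": {"linked_risk": "FR-03", "nature": "Full-population AP analytics"},
    "R-04": {"linked_risk": "FR-04", "nature": "Journal entry testing"},
}

def trace_gaps(risks, responses):
    """Return risks with no response row, and rows pointing at unknown risks."""
    covered = {row["linked_risk"] for row in responses.values()}
    orphan_risks = sorted(set(risks) - covered)  # forward trace breaks here
    dangling_rows = sorted(rid for rid, row in responses.items()
                           if row["linked_risk"] not in risks)  # backward trace breaks here
    return orphan_risks, dangling_rows

orphans, dangling = trace_gaps(risk_register, response_matrix)
# Empty results in both directions mean every chain traces cleanly.
```

In this toy data every risk is covered; deleting the R-03 row would surface FR-03 in `orphans`, which is exactly the gap a reviewer's manual trace is meant to catch.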
This bidirectional cross-referencing is what regulators look for. The AFM does not expect auditors to write more. They expect auditors to connect what they have already written.
What goes into each column of the response matrix
A well-structured response matrix contains seventeen columns organised across five blocks. Here is what each block covers and why it exists.
The first block is response design. Every row starts with a linked risk identifier (pointing back to the risk register), a response category (overall response, management override procedure, or assertion-level response), the nature of the procedure in specific terms, the timing adjustment explaining when the procedure will run and why that timing differs from prior year or from normal error-testing timing, and the extent adjustment explaining how far the procedure goes and what drives that scope.
The nature column is where most teams fall short. "Test revenue" is not a nature description. "Confirm a sample of intercompany freight bookings directly with the counterparty entity, inspecting delivery documentation from the receiving warehouse rather than from the client's own system" is a nature description. The level of specificity matters because it demonstrates that the procedure was designed for this fraud risk, not copied from a generic programme.
The second block covers unpredictability and evidence. The unpredictability column (required by ISA 240.42) describes what makes this procedure different from what entity personnel would expect. "Performed at interim" is not unpredictable if you did interim testing last year too. Unpredictability means genuinely varying your approach: testing a different location, using a different selection method, applying procedures at unexpected times of year, or testing accounts that the client would not anticipate you examining. If you have done the same procedure the same way for three years, the entity knows what to expect, and a person committing fraud knows what to avoid.
The evidence type column prevents over-reliance on inquiry. ISA 240 is explicit that inquiry alone is not sufficient evidence for fraud procedures. If "inquiry only" is selected, you must document why no corroborating evidence is obtainable. In practice, this column should almost never show "inquiry only" for an assertion-level fraud response.
The unbiased design column states what contradictory evidence the procedure could surface. For revenue testing, unbiased design might mean: "This procedure could reveal bookings with no corresponding receipt at the counterparty, which would indicate fictitious revenue." For estimates, it might mean: "This procedure could reveal a systematic pattern of overstatement, which would indicate management bias." The point is to demonstrate that the procedure was designed to find disconfirming evidence, not just to tick a box.
The third block addresses significant unusual transactions. For any response that touches transactions outside the normal course of business, you document the business rationale evaluation required by ISA 240.52. This is not a column that applies to every row. It is activated when the response involves a transaction that meets the "significant unusual" threshold.
The fourth block is execution and conclusion. Separate columns for work done, results obtained, and the conclusion drawn. These columns get completed during or immediately after fieldwork (not at the review stage, which is another recurring inspection finding). The AFM has specifically noted that response matrices completed entirely at the review stage suggest fraud procedures were not performed contemporaneously with the fieldwork.
The fifth block is sign-off. Preparer, reviewer, dates. The dates matter because they establish when the work was done relative to the engagement timeline.
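For teams that keep the matrix in a structured tool rather than a spreadsheet, the five blocks can be summarised as one record per row. This is a sketch only; the field names are illustrative, not a prescribed template:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ResponseRow:
    # Block 1: response design
    response_id: str             # one row per response
    linked_risk_id: str          # points back to the risk register
    response_category: str       # overall / management override / assertion-level
    nature: str                  # specific procedure description
    timing_adjustment: str       # when, and why it differs from error-testing timing
    extent_adjustment: str       # how far, and what drives the scope
    # Block 2: unpredictability and evidence
    unpredictability: str        # what entity personnel would not anticipate
    evidence_type: str           # should rarely be "inquiry only"
    unbiased_design: str         # contradictory evidence the procedure could reveal
    # Block 3: significant unusual transactions (only when applicable)
    business_rationale: Optional[str] = None
    # Block 4: execution and conclusion (completed during or just after fieldwork)
    work_done: str = ""
    results: str = ""
    conclusion: str = ""
    # Block 5: sign-off (dates establish when the work was done)
    preparer: str = ""
    prepared_on: Optional[date] = None
    reviewer: str = ""
    reviewed_on: Optional[date] = None
```

Making blocks 4 and 5 optional-with-defaults mirrors the workflow: design fields are mandatory at planning, execution and sign-off fields are filled in during fieldwork.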
How fraud responses differ from error testing
This is the question that separates a good response matrix from a copied audit programme. ISA 240.44 requires the auditor to consider the nature, timing, and extent of procedures in light of the specific fraud risk. That means you cannot write "test revenue cut-off" as your response to a revenue fraud risk and leave it there. You need to show what is different about this procedure compared to what you would do if the risk were one of unintentional error.
Nature changes when you move from testing whether a transaction is recorded correctly to testing whether the transaction is real. Error testing checks classification and measurement. Fraud testing checks existence, occurrence, and whether the underlying documentation is authentic. For a revenue fraud risk, the nature shift might be from vouching recorded revenue to invoices (testing accuracy) to confirming revenue directly with the customer and inspecting delivery documentation from a third party (testing occurrence). The difference is the source of evidence: for fraud testing, you seek evidence from outside the entity's own records because those records are the very thing that may have been manipulated.
Timing changes when you deliberately move procedures to periods the client does not expect. If you always test revenue at year-end, the entity knows that. A person committing fraud can ensure the fraudulent entries are cleaned up before your testing window. Varying the timing to test an unexpected quarter, or performing unannounced inventory observations, is a timing response to fraud risk. Document why the timing was chosen and how it differs from prior year. If the timing is identical to last year, explain why the unpredictability requirement is nevertheless satisfied (perhaps because you varied nature or extent instead).
Extent changes when you increase sample sizes beyond what ISA 530 would require for error testing alone, or when you test full populations using data analytics rather than sampling. The extent adjustment column captures this: how many more items, what broader scope, or what additional locations compared to error testing. Full-population analytics are particularly effective for journal entry testing because fraud often involves a small number of entries that would be missed by statistical sampling but are identifiable through pattern analysis.
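As an illustration of what full-population pattern analysis might look like, the sketch below scans every journal entry against a few invented selection criteria (round amounts, period-end posting, watched preparers); real criteria would come from the engagement team's fraud discussion, and all data here is hypothetical:

```python
from datetime import date

# Hypothetical journal entry population; IDs, amounts, and users are invented.
entries = [
    {"id": "JE-1001", "amount": 250_000.00, "posted": date(2024, 12, 31), "user": "cfo"},
    {"id": "JE-1002", "amount": 18_437.12,  "posted": date(2024, 7, 14),  "user": "clerk1"},
    {"id": "JE-1003", "amount": 99_999.00,  "posted": date(2024, 12, 30), "user": "clerk2"},
]

def flag_entries(population, period_end, watch_users):
    """Scan 100% of entries; flag patterns a statistical sample would likely miss."""
    flagged = []
    for je in population:
        reasons = []
        if je["amount"] % 10_000 == 0:              # suspiciously round amount
            reasons.append("round amount")
        if (period_end - je["posted"]).days <= 2:   # posted at or near period end
            reasons.append("near period-end")
        if je["user"] in watch_users:               # preparer flagged in team discussion
            reasons.append("posted by watched user")
        if reasons:
            flagged.append((je["id"], reasons))
    return flagged

flags = flag_entries(entries, date(2024, 12, 31), watch_users={"cfo"})
```

The point of the full-population scan is visible even in this toy data: JE-1003 would have roughly a one-in-thirty chance of landing in a sample of 30, but the scan flags it deterministically.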
If you cannot articulate the difference between your fraud response and what you would have done for an error risk in the same assertion, the response is not responsive to fraud. Go back to the risk and ask what a person trying to conceal this fraud would do to avoid detection by your current procedure. Then design the procedure to counteract that concealment strategy.
The six mandatory minimum responses
Every ISA 240 engagement requires at least six responses in the matrix before you add entity-specific procedures. These are pre-populated in a structured working paper and cover the mandatory procedures that apply regardless of the entity's specific circumstances.
The first two are overall financial-statement-level responses. Response one addresses assignment and supervision: who on the team has the knowledge, skill, and ability to handle engagement responsibilities given the assessed fraud risks, and what level of supervision they need (ISA 240.43). This is not a generic staffing note. It should name the team members, state their relevant experience, and explain how the supervision plan addresses the fraud risks identified. If the engagement has a complex revenue recognition fraud risk, the person testing revenue should have relevant sector experience.
Response two addresses accounting policy evaluation: whether the entity's accounting policies (particularly for subjective measurements and complex transactions) may indicate fraudulent financial reporting (ISA 240.45). This evaluation is separate from the ISA 540 estimates work. Here, you are asking whether management's choice of accounting policies (not just their application) could be motivated by a desire to manipulate reported results. Policies that maximise revenue recognition speed, defer expense recognition, or choose the most favourable measurement basis are all indicators worth documenting.
The next three are management override procedures, which ISA 240 treats as presumed risks on every engagement regardless of other assessed risks.
Journal entry testing (ISA 240.49 Revised) gets its own detailed working section with gating steps that must be completed before any selection begins. The response matrix row links to that working section and documents the nature, timing, and extent at a summary level. The detailed testing happens in the dedicated journal entry testing section.
Estimates review (ISA 240.50-51 Revised) requires a retrospective comparison of prior-year estimates to actual outcomes, looking for directional bias across multiple periods. Again, the response matrix row links to the dedicated estimates and bias review section where the detailed work is documented.
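The directional-bias check in a retrospective review reduces to a sign test across periods. A sketch with invented estimate history; the threshold logic (every period erring the same way) is the pattern the standard asks you to look for, while the numbers are purely illustrative:

```python
# Hypothetical prior-year estimates vs. actual outcomes, oldest period first.
history = [
    {"year": 2021, "estimate": 1_200_000, "actual": 1_340_000},
    {"year": 2022, "estimate": 1_350_000, "actual": 1_490_000},
    {"year": 2023, "estimate": 1_500_000, "actual": 1_610_000},
]

def directional_bias(history):
    """Return 'understated'/'overstated' if every period errs the same way, else None."""
    diffs = [h["actual"] - h["estimate"] for h in history]
    if all(d > 0 for d in diffs):
        return "understated"   # actuals consistently above the estimate
    if all(d < 0 for d in diffs):
        return "overstated"    # actuals consistently below the estimate
    return None                # mixed signs: no consistent direction

bias = directional_bias(history)  # every year in this toy data is understated
```

A single large variance may be an honest estimation miss; three consecutive variances in the same direction are what shifts the conversation from estimation uncertainty to possible management bias.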
Significant unusual transactions (ISA 240.52 Revised) require evaluation of the business rationale for transactions outside the normal course of business. The response matrix row for this procedure captures each significant unusual transaction identified and the conclusion on whether the business rationale suggests fraud.
The sixth is analytical procedures performed near the end of the audit (ISA 240.53). This is not the same as your ISA 520 analytical procedures. This is a fraud-specific evaluation: do the results of near-completion analytics indicate a previously unrecognised risk of material misstatement due to fraud? The procedures might be identical in form to ISA 520 analytics, but the evaluation lens is different. You are looking for anomalies that suggest fraud, not just unexpected variances that suggest error.
After these six, you add entity-specific responses for each assertion-level fraud risk identified in the risk register. Each assertion-level risk should have at least one response row, though complex risks may require multiple responses addressing different aspects.
Worked example: Dijkstra Logistics B.V.
Scenario: Dijkstra Logistics B.V. is a Dutch freight forwarding company with revenue of EUR 68M. The engagement team identified three fraud risks beyond management override: inflated revenue through fictitious intercompany freight bookings (assertion-level), understated fuel cost accruals to meet covenant targets (assertion-level), and misappropriation of cash through duplicate supplier payments (assertion-level). The risk register lists these three alongside the presumed management override risk.
The team populates the first six mandatory rows. The two overall responses document the team composition (a senior with prior logistics experience assigned to revenue testing) and the accounting policy evaluation (focus on revenue recognition timing given intercompany volumes). Documentation note: Row 1 records "Senior [name] assigned to revenue stream testing based on 3 years logistics sector experience. Partner to review all intercompany entries above EUR 50,000." Row 2 records specific accounting policies under review: percentage-of-completion on long-haul contracts, intercompany elimination timing, fuel cost accrual methodology.
For the intercompany revenue fraud risk, the team adds a seventh row. Nature: confirm a sample of intercompany freight bookings directly with the counterparty entity, inspecting delivery documentation from the receiving warehouse (not from Dijkstra's own system). Timing: Q3 testing (prior year tested Q4 only). Extent: full population analysis of intercompany bookings above EUR 25,000, sample of 30 below that threshold. Unpredictability: Q3 timing is new; prior year tested Q4. Selection includes bookings initiated by two specific employees flagged during the team discussion. Unbiased design: the procedure could reveal bookings with no corresponding receipt at the counterparty, which would indicate fictitious revenue. Documentation note: "Intercompany confirmations sent directly to [counterparty warehouse manager]. Delivery notes obtained from counterparty system, not from Dijkstra's ERP. Q3 testing window selected to introduce timing unpredictability."
For the fuel cost accrual risk, the team adds an eighth row. Nature: retrospective test of prior-year fuel accrual versus actual costs in the current year, combined with independent recalculation using published diesel price indices. Timing: performed at completion when actual costs are known. Extent: all fuel accruals above EUR 15,000. Unpredictability: independent price index used instead of management's supplier contracts (this is the first year using an external benchmark). Unbiased design: the procedure could reveal systematic understatement if actual costs consistently exceed the accrual. Documentation note: "Fuel price data obtained from [independent index]. Variance analysis performed for each month. Cumulative understatement of EUR 42,000 identified (0.06% of revenue, below PM of EUR 340,000). No fraud indicator but pattern noted for current-year evaluation in the estimates section."
For the duplicate supplier payment risk, the team adds a ninth row. Nature: data analytics on the full AP payment file (duplicate amounts to same supplier within 5 business days, same amount to different bank accounts for the same supplier). Timing: full-year population. Extent: 100% of payments. Unpredictability: first year using full-population analytics on AP (prior year sampled). Unbiased design: the procedure could reveal duplicate payments regardless of whether flagged by management's own controls. Documentation note: "AP population of 14,200 payments extracted from [system]. Analytics identified 23 potential duplicates. 21 confirmed as legitimate (credit notes, instalment payments). 2 referred to management for investigation. Value: EUR 8,400 total. No fraud indicator. Management confirmed both items were processing errors subsequently recovered."
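The two selection criteria in this row translate directly into a pairwise scan over the payment file. A sketch with invented payment data; a production version would also use a holiday calendar and tolerate near-identical amounts, neither of which this simplification handles:

```python
from datetime import date, timedelta

# Hypothetical AP payment extract; suppliers, amounts, and accounts are invented.
payments = [
    {"supplier": "S-001", "amount": 4_200.00, "paid": date(2024, 3, 4),  "account": "A"},
    {"supplier": "S-001", "amount": 4_200.00, "paid": date(2024, 3, 8),  "account": "A"},
    {"supplier": "S-002", "amount": 9_950.00, "paid": date(2024, 6, 3),  "account": "B"},
    {"supplier": "S-002", "amount": 9_950.00, "paid": date(2024, 9, 16), "account": "C"},
]

def business_days_between(d1, d2):
    """Count weekdays from the earlier date (exclusive) to the later (inclusive); no holidays."""
    lo, hi = sorted((d1, d2))
    return sum(1 for n in range(1, (hi - lo).days + 1)
               if (lo + timedelta(days=n)).weekday() < 5)

def find_duplicates(population, window=5):
    """Flag same-supplier, same-amount pairs paid close together or to different accounts."""
    flags = []
    for i, a in enumerate(population):
        for b in population[i + 1:]:
            if a["supplier"] != b["supplier"] or a["amount"] != b["amount"]:
                continue
            if business_days_between(a["paid"], b["paid"]) <= window:
                flags.append(("same amount within window", a["supplier"]))
            elif a["account"] != b["account"]:
                flags.append(("same amount, different bank account", a["supplier"]))
    return flags

flags = find_duplicates(payments)
```

The first pair lands in the five-business-day window; the second pair is months apart but pays the same amount to a different bank account, so each criterion fires exactly once in this toy data.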
The result: a reviewer opening this file can trace every fraud risk to a procedure, see exactly how the procedure differs from error testing, confirm the unpredictability element, verify that the evidence obtained goes beyond inquiry, and check the unbiased design confirmation. Both directions of the cross-reference work without ambiguity.
Practical checklist
- Confirm every risk in the fraud risk register has at least one linked response row in the matrix (ISA 240.43).
- Check that the six mandatory minimum responses are populated before adding entity-specific rows. Do not leave the mandatory rows until completion.
- For each response, verify that the nature, timing, or extent column (at least one) explains how this procedure differs from what you would do for an error risk in the same assertion.
- Complete the unpredictability column for every response (ISA 240.42). If it says "N/A" or is blank, the response will be flagged on review.
- Before signing off, trace two risks forward (risk to response to work done) and two responses backward (response to risk to source). If any chain breaks, the matrix has a gap.
- Complete "Work Done / Results" and "Conclusion" columns during or immediately after fieldwork, not at the review stage.
Common mistakes
- Copying the audit programme into the response matrix without adjusting nature, timing, or extent for fraud risk. The AFM has flagged this pattern repeatedly: the fraud response is identical to the error response, which means the fraud risk was not actually addressed.
- Leaving the unpredictability column blank or writing "procedures varied." ISA 240.42 requires a description of what was varied and how. "Procedures varied" tells a reviewer nothing about what was actually different.
- Completing the work done and conclusion columns weeks after fieldwork, based on memory rather than contemporaneous notes. Regulators look at completion dates. A response matrix finalised entirely at the review stage signals that fraud procedures were an afterthought rather than a planned part of the engagement.
- Omitting the unbiased design column or filling it with generic statements like "procedure is unbiased." The column must state what contradictory evidence the procedure could reveal for this specific fraud risk. If you cannot name the contradictory evidence, reconsider the procedure design.