Key takeaways

  • Where the real differences between firm tiers show up in the audit file (not in the standards applied but in how they’re applied)
  • How staffing models, methodology platforms, and review structures vary across firm sizes under ISA 220.13–19
  • How to evaluate which approach produces better outcomes for different client types
  • What these differences mean if you’re moving between firm tiers or choosing an auditor

The methodology gap is a technology gap

Every audit firm in Europe applies the same ISAs. That’s the starting point, and it’s where the similarity ends.

Big 4 firms audit through proprietary platforms (Deloitte’s Omnia, PwC’s Aura, EY’s Canvas, KPMG’s Clara). These tools don’t just store working papers. They enforce methodology. If ISA 315.12 requires the auditor to identify and assess risks of material misstatement, the platform won’t let you move to the testing phase until the risk assessment module is completed. Fields are mandatory. Sign-offs are sequential. Exception reports fire automatically when a required step is bypassed.
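The enforcement pattern described above (mandatory completion, sequential sign-offs, automatic exception reports) can be sketched generically. A minimal illustration in Python, assuming nothing about any vendor's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class EngagementFile:
    """Toy model of a methodology platform's workflow gate (illustrative only)."""
    risk_assessment_complete: bool = False
    sign_offs: list[str] = field(default_factory=list)

def start_testing_phase(file: EngagementFile) -> None:
    # Sequential gate: testing cannot open until the risk assessment
    # module is complete; otherwise an exception report fires.
    if not file.risk_assessment_complete:
        raise RuntimeError("Exception report: risk assessment module incomplete")
    file.sign_offs.append("testing phase opened")
```

The point is structural: the check lives in the software, not in a reviewer's diligence.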

Mid-tier firms (BDO, Mazars, Grant Thornton, Baker Tilly) sit in between. They typically use commercial audit software (CaseWare, Inflo, TeamMate, or firm-developed platforms) with standardized templates that reference ISA paragraphs. Those templates prompt the auditor but don’t physically prevent them from skipping a step. A senior associate at a mid-tier firm can technically sign off on a risk assessment that has empty sections. The review process catches it (usually). The technology doesn’t block it.

Small firms (two to eight partners, often SRA members in the Netherlands) frequently work in Word and Excel. The methodology lives in the partner’s head and in whatever template pack the firm purchased or built years ago. ISA compliance depends almost entirely on the individual practitioner’s knowledge and discipline. Some small-firm partners produce files that would survive any inspection. Others produce files that wouldn’t survive a first-year reviewer’s checklist.

This isn’t a quality judgment. It’s a structural observation about how different firm sizes convert the same standards into audit evidence, and how that creates different risk profiles for clients and different inspection outcomes when the AFM or FRC reviews the work.

Staffing and supervision under ISA 220

ISA 220.13 requires the engagement partner to determine that sufficient and appropriate resources are assigned to the engagement. How firms interpret “sufficient and appropriate” varies dramatically by tier.

On a €50M revenue client, a Big 4 team might include a partner, a senior manager, two seniors, and four associates. The partner reviews the final file. Detail-level review falls to the senior manager. Seniors direct the associates’ work, and contact between associates and the partner might be limited to the planning meeting and the closing meeting. This structure creates consistency (every file gets the same layered review) and distance (the partner may not personally understand every judgment in the file).

At a mid-tier firm, the same engagement might have a partner, a manager, and two seniors. Fewer layers mean the partner is closer to the detail. Staff interact with the partner weekly during fieldwork. Files from mid-tier engagements tend to show more visible partner involvement, but review coverage depends more heavily on whether the manager is competent.

A small firm taking on that same client (if the firm accepts it at all) might assign the partner, one qualified senior, and one assistant. Review happens in real time because there aren’t enough layers for it to happen sequentially. When the partner is strong, this produces exceptional files. When the partner is stretched across too many engagements, file quality drops with no safety net underneath.

ISA 220.19 requires the engagement partner to review the audit documentation at appropriate points. What counts as “appropriate” depends entirely on infrastructure. At a Big 4 firm, the methodology platform generates review prompts at each milestone. Mid-tier firms define review timing by firm policy, typically at planning completion, interim review, and pre-issuance. At a small firm, review timing is whatever the partner’s schedule allows.

Where small firms have a genuine advantage

Regulatory reports focus on deficiencies. That creates a misleading impression that larger firms always produce better audits.

The FRC’s 2022–23 inspection results showed that 76% of FTSE 350 audits (all Big 4) required improvement or significant improvement. AFM inspection results at mid-tier and smaller firms varied widely by firm but averaged fewer findings on certain indicators, particularly partner involvement at planning and the depth of client knowledge documented in the file.

Small firms hold two structural advantages that no amount of Big 4 technology replicates.

Partner proximity

When the engagement partner personally interviews the client’s finance director, visits the warehouse, and reviews the bank confirmations, professional skepticism at the top of the file carries a different weight. ISA 200.A22 describes professional skepticism as an attitude. Attitudes are harder to delegate than procedures. A partner who spent four days on-site has a different skeptical posture than one who reviewed a summary prepared by a senior manager who reviewed a memo prepared by a senior who reviewed a spreadsheet prepared by an associate.

Client continuity

Small-firm partners often audit the same clients for decades. ISA 315.13 requires the auditor to obtain an understanding of the entity and its environment. A partner in year 15 of an engagement has an understanding that a Big 4 team in year two of a rotation cannot match, regardless of how many hours they spend on walkthroughs and industry research. That accumulated knowledge shows up in the risk assessment: specific risks identified, unusual transactions flagged, accounting estimates challenged with real knowledge of the business.

The fragility of these advantages

These advantages are real but fragile. They depend entirely on the individual partner. If that partner retires, burns out, or takes on too many clients, the firm has no system to catch the decline.

Where Big 4 methodology prevents common failures

The AFM’s inspection reports consistently identify the same deficiencies at smaller firms: insufficient risk assessment documentation, weak linkage between identified risks and audit procedures, and inadequate evaluation of misstatements. These are exactly the areas where Big 4 methodology platforms provide the most structural support.

A Deloitte auditor who fails to document the linkage between a risk and a test procedure gets a system alert. Incomplete risk-response matrices trigger flags automatically. A small-firm auditor who misses the same linkage has nothing between them and an AFM finding except their own diligence and their partner’s review.
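The linkage check itself is mechanically simple, which is why platforms can automate it. A hedged sketch, with risk and procedure names invented for illustration:

```python
def unlinked_risks(risks: set[str], linkage: dict[str, list[str]]) -> set[str]:
    """Return identified risks with no responsive procedure attached —
    the gap an integrated platform flags automatically."""
    return {risk for risk in risks if not linkage.get(risk)}

# Hypothetical risk register for a manufacturing client
risks = {"related_party_pricing", "warranty_provision", "revenue_cutoff"}
linkage = {
    "related_party_pricing": ["compare purchase prices to market rates"],
    "warranty_provision": [],  # risk identified, no procedure linked yet
}
print(unlinked_risks(risks, linkage))
```

In a Word-and-Excel file, this reconciliation exists only if someone builds and maintains it by hand.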

ISQM 1.16(c) requires firms to establish quality objectives related to the performance of engagements. Big 4 firms satisfy this by building quality objectives into the software. Mid-tier firms satisfy it through templates and training. Small firms satisfy it through policies that may or may not translate into consistent practice on every file.

The PCAOB’s 2023 inspection report found that firms with integrated methodology platforms had 40% fewer findings related to risk assessment documentation compared to firms using manual or template-based approaches. Technology doesn’t replace judgment, but it does prevent the most common omissions. And omissions (not errors of judgment) are what generate the majority of inspection findings at smaller firms.

Worked example: the same client audited by different firm tiers

Dekker Precision Parts B.V. manufactures industrial components in Eindhoven. Annual revenue: €38M. Total assets: €27M. One significant related party: a majority shareholder who also supplies raw materials at non-market rates. A pending warranty claim of approximately €1.2M.
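All three files below land on the same materiality figure. A minimal sketch of the benchmark arithmetic, assuming the 1%-of-revenue heuristic the example implies (the benchmark and percentage are judgment calls under ISA 320, not prescribed values):

```python
def overall_materiality(benchmark_eur: float, pct: float = 0.01) -> float:
    """Overall materiality as a chosen percentage of a chosen benchmark."""
    return benchmark_eur * pct

revenue = 38_000_000  # Dekker Precision Parts annual revenue
materiality = overall_materiality(revenue)
print(f"Overall materiality: EUR {materiality:,.0f}")  # → EUR 380,000

# The pending warranty claim exceeds materiality, so it cannot be passed over
print(1_200_000 > materiality)  # → True
```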

The Big 4 file

The team sets overall materiality at €380K (1% of revenue). The platform requires a separate assessment of the related party as a significant risk under ISA 550.18. A mandatory work program for related party transactions generates automatically. The warranty provision triggers an ISA 540 (revised) template for accounting estimates. Total team: six people across four levels. Budgeted hours: 820.

What the file looks like: 140+ working papers generated by the platform. Every ISA requirement mapped. Review notes embedded in the system. Complete, systematic, and expensive.

The mid-tier file

The team sets overall materiality at €380K using the same benchmark. During planning, the manager flags the related party and creates a separate section in CaseWare. The warranty provision gets documented in the provisions working paper with an ISA 540 assessment. Total team: four people across two levels. Budgeted hours: 480.

What the file looks like: 60–80 working papers. ISA requirements covered, but with more judgment about which areas need detailed documentation versus which can be addressed in a summary memo. Complete, practical, and cost-effective for the client.

The small firm file

The partner sets overall materiality at €380K. During planning (conducted personally), the partner notes the related party in the planning memo and decides to test the pricing of related party purchases against market rates. The warranty provision is discussed directly with the client’s lawyer and documented in a single working paper. Total team: two people. Budgeted hours: 280.

What the file looks like: 25–35 working papers. Critical judgments are well-documented because the partner made them personally. Supporting areas have lighter documentation. If an inspector reviews this file, the related party work will pass. Whether the completeness of disclosure testing meets ISA 550.25 depends on how thorough the partner was in an area where the firm has no template.

The takeaway

Same client. Same ISAs. The files look nothing alike. File quality depends less on which standards were applied and more on the infrastructure and individual judgment behind the application.

Practical checklist for evaluating your firm’s approach

  1. Map your firm’s methodology to the ISA requirements that generate the most inspection findings (ISA 315 risk assessment, ISA 540 accounting estimates, ISA 550 related parties, ISA 570 going concern). Where your methodology has no structured prompt or template for a high-risk requirement, you have a gap.
  2. Count the layers between the person performing the work and the person signing the opinion. If there are fewer than two review layers on a significant-risk area, document how partner proximity compensates for the missing structural review.
  3. Compare your firm’s budgeted hours on a €30M–€50M revenue client against mid-tier benchmarks. If your hours are less than 60% of the mid-tier average, identify which ISA requirements are getting less time and whether that creates risk.
  4. Review your last two internal inspection reports. Categorize findings by whether they relate to documentation (fixable with better templates) or judgment (fixable only with better training or staffing). That split tells you whether your problem is infrastructure or people.
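Items 3 and 4 reduce to simple arithmetic worth doing explicitly. A sketch using this article's worked-example figures (the 60% threshold and the finding categories come from the checklist above, not from any standard):

```python
from collections import Counter

def below_benchmark(own_hours: float, mid_tier_hours: float, threshold: float = 0.60) -> bool:
    """Item 3: flag engagements budgeted under the threshold share of mid-tier hours."""
    return own_hours / mid_tier_hours < threshold

print(below_benchmark(280, 480))  # 280/480 ≈ 0.58 → True, worth investigating

# Item 4: split internal inspection findings into documentation vs judgment
findings = ["documentation", "documentation", "judgment", "documentation"]
print(Counter(findings))          # mostly documentation → an infrastructure problem
```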



Frequently asked questions

What is the main difference between Big 4 and small firm audit approaches?

The core difference is not the standards (all apply the same ISAs) but how they operationalize those standards. Big 4 firms embed ISA requirements into proprietary software that constrains auditor judgment with mandatory fields and sequential sign-offs. Small firms often work in Word and Excel, where ISA compliance depends almost entirely on the individual practitioner’s knowledge and discipline.

Do Big 4 firms always produce better audit quality than small firms?

No. The FRC’s 2022–23 inspection results showed that 76% of FTSE 350 audits (all Big 4) required improvement or significant improvement. Small firms hold structural advantages in partner proximity and client continuity. When the partner is strong, small firms can produce exceptional files.

How do staffing models differ across firm tiers?

On a mid-market client (€38M–€50M revenue in this article's examples), a Big 4 team might field six to eight people across four levels (820 budgeted hours in the worked example). A mid-tier firm uses four people (480 hours); a small firm assigns two (280 hours). Fewer layers at smaller firms mean the partner is closer to the detail, but review coverage depends more heavily on individual competence.

What advantage do small audit firms have over Big 4?

Two structural advantages: partner proximity (the engagement partner personally performs key procedures, so professional skepticism at the top of the file carries a different weight) and client continuity (decades of accumulated knowledge that shows up in specific risk identification and in estimates challenged with real knowledge of the business). These advantages are real but fragile: they depend entirely on the individual partner.

Where does Big 4 methodology prevent common audit failures?

Big 4 platforms provide the most structural support in risk assessment documentation, risk-to-procedure linkage, and misstatement evaluation — exactly where inspectors find the most deficiencies at smaller firms. The PCAOB found that firms with integrated platforms had 40% fewer risk assessment documentation findings compared to firms using manual approaches.

Source references

  • FRC, 2022–23 Inspection Results (FTSE 350 and Tier 2/3 firms)
  • AFM, Inspection reports on PIE and non-PIE audit firms
  • PCAOB, 2023 Inspection Report
  • ISA 220 (Revised), Quality Management for an Audit of Financial Statements
  • ISQM 1, Quality Management for Firms that Perform Audits or Reviews of Financial Statements
  • ISA 315 (Revised 2019), Identifying and Assessing the Risks of Material Misstatement