Building Trust in AI: Mitigating Risk While Unlocking Value in Audit
As artificial intelligence becomes more integrated into the audit process, the profession faces a delicate balancing act - how to harness AI’s transformative potential while maintaining the integrity, transparency, and trustworthiness that underpin the value of an audit. For mid-tier firms especially, the stakes are high. Adopting AI too hastily, or without the right guardrails, risks compromising audit quality. But failing to act risks being left behind by faster-moving competitors and client expectations.
In this post, we explore the core trust challenges around AI in audit, such as hallucination, provability, and traceability, and how firms can approach automation responsibly. We also showcase how Menzies LLP, a leading mid-tier firm, implemented AI to automate General IT Control (GITC) assessments under ISA 315, demonstrating how thoughtful adoption can elevate audit quality, consistency, and client experience.
Why AI in Audit Needs a Foundation of Trust
Auditing is fundamentally about trust. Clients, regulators, and the public rely on auditors to provide an independent, reliable view of an entity's health - financial, operational, compliance, and IT. When introducing AI into this process, auditors must ensure the same level of accountability and rigour applies.
Yet AI brings new types of risk. The Financial Reporting Council's recent thematic review of automated tools lays this bare. The report highlighted a range of concerns from across the profession - including inconsistent documentation of AI tool performance, a lack of clear governance frameworks, and difficulty tracing how conclusions were reached. Without mitigating these risks, AI systems can threaten rather than support audit quality.
Three of the biggest trust challenges go hand in hand:
1) Hallucination - Accuracy Risk in Generative AI
Generative AI can produce fluent, authoritative-sounding outputs that are factually wrong - a phenomenon known as hallucination. In everyday consumer use this risk may be tolerable, but in audit, where precision and accuracy are non-negotiable, it creates real danger. An unvetted AI tool might generate risk assessments, control descriptions, or audit responses that “look right” but aren't grounded in the underlying data - which is exactly why provability matters.
2) Provability - Defensibility of Judgments
Auditors must be able to justify every decision, especially when challenged by regulators. But some AI tools operate as black boxes: decisions are made, but the reasoning behind them is opaque. If a partner can’t explain how a conclusion was reached or what data supported it, the credibility of the entire engagement is undermined, making traceability increasingly important.
3) Traceability - Linking Inputs to Outputs
Traceability is key to audit quality. Auditors need to show how specific inputs (like client-provided documentation or risk factors) led to specific outputs (like identified risks or control testing approaches). With AI in the mix, ensuring a clear audit trail becomes essential.
Designing for Trust - How Platformed Addresses These Risks
At Platformed, trust isn't a feature - it's the foundation. Our AI Auditor was built with audit-specific risks in mind, particularly for firms performing ISA 315 audits.
- Hallucination prevention through source-linked generation - Platformed’s responses are grounded in your client’s own policies, documents, and evidence. Users can trace every output back to its source and validate it before it’s used - meaning nothing is fabricated from thin air
- Provability through transparent logic - Each AI-driven suggestion includes a clear rationale and supporting evidence. Whether it’s a suggested risk, control, or test, Platformed shows exactly what inputs were used and how the AI reached its recommendation
- Traceability through data lineage - Every decision and data point in Platformed is logged, versioned, and auditable, as the sketch after this list illustrates. This means your team - and your reviewers - can follow the logic from initial documentation all the way through to final output
- Control over frameworks, bespoke to each client and firm - Audit leads can control which prompts, frameworks, and reference materials are used - and ensure consistency across team and firm
- Continuous validation - As recommended in the FRC’s thematic review, Platformed supports both initial and periodic testing of its outputs. Firms can review AI decisions against known benchmarks and adjust logic or inputs accordingly
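To make these ideas concrete, here is a minimal sketch of what a source-linked, traceable output record could look like. It is an illustration only - the names (SourceReference, AuditOutput, is_grounded) and fields are hypothetical and do not reflect Platformed's actual data model or API - but it shows the principle: every generated statement carries its rationale and pointers back to the client evidence it was derived from, so anything ungrounded can be rejected before it reaches the audit file.

```python
# Hypothetical illustration only: Platformed's internal data model is not
# public. This sketch shows one way a "source-linked" AI output could be
# represented so every generated statement traces back to client evidence.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass(frozen=True)
class SourceReference:
    """A pointer to the client evidence a statement was derived from."""
    document_id: str       # e.g. an uploaded policy or system screenshot
    excerpt: str           # the passage the model actually relied on
    retrieved_at: datetime


@dataclass
class AuditOutput:
    """An AI-generated suggestion plus the lineage needed to defend it."""
    statement: str                           # e.g. a suggested GITC risk
    rationale: str                           # why the tool proposed it
    sources: List[SourceReference] = field(default_factory=list)
    reviewed_by: Optional[str] = None        # unset until a human signs off

    def is_grounded(self) -> bool:
        # A statement with no supporting source should never reach the file.
        return len(self.sources) > 0


# Example: a suggested risk traceable back to a named policy document.
output = AuditOutput(
    statement="User access reviews are performed annually, not quarterly.",
    rationale="The access-control policy specifies an annual review cycle.",
    sources=[SourceReference(
        document_id="client-access-policy-v3.pdf",
        excerpt="Access rights shall be reviewed on an annual basis.",
        retrieved_at=datetime.now(timezone.utc),
    )],
)
assert output.is_grounded()  # reject anything the model "made up"
```

Keeping the evidence excerpt alongside the statement is what turns an AI suggestion from a black-box answer into something a partner can defend under regulatory challenge.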
Case Study: Menzies LLP and the AI-Powered GITC Assessment
A practical example of trust-centric AI in action comes from Menzies LLP - a UK top 25 firm serving mid-market clients. In a recent case study, Menzies shared how it implemented Platformed to automate parts of its GITC assessment process under ISA 315.
The Challenge
With audit complexity growing and new standards requiring more rigorous understanding of IT environments, Menzies needed a way to perform GITC assessments at scale - without overwhelming staff or compromising quality. But introducing AI into such a critical part of the audit required careful planning.
According to Simon Massey, Audit Partner at Menzies, “If we hadn’t invested in technology, we’d probably need five times more people just to get through the workload. But we’ve made a choice - instead of offshoring, we’re tech-enabling.”
Read the interview with Simon Massey of Menzies in Accountancy Age here
The Implementation
The Menzies team worked with Platformed to embed the firm’s ISA 315 methodology directly into the AI’s framework. This meant the system wasn’t just generating GITC assessments generically - it was doing so in line with how Menzies trained its auditors.
“Working with Platformed ensures we will be able to offer a consistent and high-quality approach across all our audits. Platformed empowers our auditors to work more efficiently, raising the bar even further on audit quality and maintaining high consistency.”
Tom Woods, Partner (Head of Audits) at Menzies LLP
The Results
- Consistency - All GITC assessments now follow a common structure, using firm-approved language and logic
- Efficiency - What used to take hours of manual drafting is now accelerated, freeing up staff for higher-value tasks
- Assurance - Partners and reviewers have full visibility into how assessments were generated and can drill down into the sources used
What the FRC Says About Trust in Automation
The FRC’s thematic review offers a broader context for Menzies’ approach. It notes that while automated tools and AI can support audit quality, many implementations lack the controls needed to ensure reliability. In particular, the FRC calls for:
- Clear documentation of judgments and logic underpinning automated decisions
- Robust initial testing and ongoing validation of tool performance
- Firm-wide governance over how tools are selected, configured, and deployed
- Full traceability of inputs, processing steps, and outputs
Firms using Platformed are already aligning with these best practices. Platformed’s AI Auditor enables firms to embed their methodology, control access, monitor performance over time, and demonstrate compliance with emerging regulatory expectations.
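As a thought experiment, the FRC's call for ongoing validation can be pictured as a simple benchmark harness: periodically re-run the tool against past engagements whose correct conclusions are already agreed, and escalate any drift for human review. The sketch below is hypothetical - validate_tool, BenchmarkCase, and the stub fake_tool are illustrative names, not Platformed features - but it captures the shape of the practice.

```python
# Hypothetical sketch of the FRC's "ongoing validation" recommendation:
# re-run the tool against engagements with known, partner-agreed answers
# and flag any case where the tool misses too many expected findings.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass(frozen=True)
class BenchmarkCase:
    """A past engagement whose correct GITC conclusions are already known."""
    case_id: str
    inputs: str                  # e.g. the client IT documentation supplied
    expected_findings: Set[str]  # findings the firm agreed were correct


def validate_tool(
    generate_findings: Callable[[str], Set[str]],  # the AI tool under test
    benchmarks: List[BenchmarkCase],
    min_recall: float = 0.9,
) -> List[str]:
    """Return the IDs of benchmark cases where the tool missed too much."""
    failures: List[str] = []
    for case in benchmarks:
        produced = generate_findings(case.inputs)
        hits = produced & case.expected_findings
        recall = len(hits) / len(case.expected_findings)
        if recall < min_recall:
            failures.append(case.case_id)
    return failures


# A stubbed "tool" stands in for the real system in this example.
def fake_tool(inputs: str) -> Set[str]:
    return {"no-quarterly-access-review"}


benchmarks = [BenchmarkCase(
    case_id="FY24-client-A",
    inputs="client IT documentation for the benchmark engagement",
    expected_findings={"no-quarterly-access-review", "shared-admin-account"},
)]

# Any failing case is escalated for human review before the tool is relied on.
print(validate_tool(fake_tool, benchmarks))  # -> ['FY24-client-A']
```

In practice a firm would track precision as well as recall and record each run, so the validation history itself becomes auditable evidence of governance.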
Balancing Innovation and Accountability
For audit firms, the message is clear - AI is not a silver bullet, but it can be a powerful accelerator when implemented responsibly. The key is to anchor every automation effort in trust, transparency, and traceability.
Firms that succeed will be those that:
- Treat AI as an augmentation tool, not a replacement for human judgment
- Demand rigorous documentation, testing, and oversight of any AI-powered system
- Choose platforms that are purpose-built for audit, rather than generic AI tools
- Involve audit leaders and compliance teams early and often in the implementation process
Trust Is the Differentiator
As clients become more digital, regulatory expectations increase, and staffing pressures grow, AI will only become more central to audit delivery. But the firms that thrive won’t just be the fastest adopters - they’ll be the most thoughtful.
By embedding provability, traceability, and governance into their AI strategies, firms like Menzies are setting a new benchmark for quality in the age of automation.
Platformed is proud to support that journey. Platformed’s AI Auditor empowers firms to meet ISA 315 and broader audit requirements with confidence - not just through faster outputs, but through transparent, defensible, and auditable AI-powered decisions.
To learn more or see how Platformed can help your firm, visit platformed.com/financial-audit.