AI Transparency Notice
Revision: October 1, 2025
This notice concerns the use of AI technologies with Mastermind’s services.
Transparency and integrity are core to how Mastermind operates. As the field of assessment and third-party assurance evolves, so do the tools we use to serve our clients with excellence. This notice explains how we thoughtfully integrate artificial intelligence (AI) into our audit processes, not to replace human judgment or accelerate inspection procedures indiscriminately, but to enable deeper and more meaningful engagement with our clients during the limited time we share each year.
This is not a disclaimer or notice issued in response to a regulatory requirement. It is our way of being clear and direct about how we approach new technologies that inherently surface security and privacy considerations. Here, we outline the AI tools we use, the purposes they serve, and the guardrails we maintain to uphold the rigorous standards you expect from a company audacious enough to call itself Mastermind.
When used responsibly, AI helps us redirect time away from low-value tasks such as manually reviewing lengthy procedural documents. It also returns time to our auditees, because assessors can answer many questions themselves through offline document review rather than scheduling additional walkthroughs. This allows us to focus on critical thinking, professional skepticism, and in-depth analysis, the skills essential to delivering objective, high-quality audits. Our approach is designed to amplify our team’s expertise, not to automate away the nuance and diligence central to effective auditing.
We have also adopted the AI Principles of the Organisation for Economic Co-operation and Development (OECD) as a cornerstone of our governance framework. These internationally recognized principles guide us in promoting inclusive growth and well-being; respecting human-centred values and fairness; ensuring transparency and explainability; maintaining robustness, security, and safety; and upholding accountability.
AI, applied in a calculated and responsible way, also helps us attract and retain the most experienced professionals in the field. It means we can avoid the cheap compromises of nearshoring, offshoring, or replacing skilled work with AI agents, and instead give our teams the tools to focus on what they do best.
Mastermind primarily uses GPT-5, with limited use of GPT-4o, GPT-3.5/4-mini, and GPT-4.5 (research use), to assist with two activities: summarizing documented policies and procedures provided by clients, and suggesting potential key process audit trails for assessors to investigate further. We disclose this use of AI openly so that client trust and safety teams have the context they need to evaluate our services and confirm that our approach aligns with any applicable regulatory obligations, ethical expectations, or internal compliance requirements.
OpenAI’s models can support the automated analysis, summarization, and interpretation of client-provided evidence, including both text and multimodal data (text and image), allowing audits to be completed with greater efficiency, accuracy, and analytical depth.
OpenAI’s models can algorithmically generate recommended key business or audit processes (“audit trails”) for further investigation by assessors. They do so by applying advanced reasoning and natural language understanding to client-provided design documents, such as control policies and procedure artifacts, and evaluating them against the normative criteria in scope for the audit.
Protecting client information is central to how we apply AI in our audit engagements. Our approach follows internationally recognized standards and regulatory frameworks, with clear governance protocols, human-in-the-loop oversight, defined access controls, and full data lifecycle management to ensure secure and compliant handling of client data from acquisition through processing, retention, and eventual disposal.
All data entered into our AI tools is manually provided by our audit professionals. These platforms do not have direct access to, or integrations with, our systems of record or retention solutions. This deliberate separation reduces exposure risk and ensures that confidential information is never automatically extracted, exported, or harvested by external services.
We use AI tools exclusively to process governance documentation, including policy, procedure, and design artifacts, and per standard operating procedure, we do not enter client production data, transactional records, or other operating effectiveness evidence into these platforms. This limited scope reduces data risk and supports best-practice data minimization principles.
We use paid AI tool subscriptions that allow us to enforce administrative controls such as user-level access permissions and rapid revocation of access when necessary. These subscriptions also provide enhanced contractual protections, including more restrictive and transparent terms of use, defined data retention limits, and stronger legal obligations around data rights than those offered by free versions.
We routinely review our data handling practices and privacy configurations to ensure continuous alignment with applicable regulations and industry standards. These reviews help us identify emerging risks early and implement timely updates to maintain compliance and data protection.
All outputs generated by AI tools in the audit process undergo human review and approval before finalization or external distribution. This oversight preserves professional judgment, maintains accuracy, and mitigates risks associated with automated decision-making.
We monitor updates to AI models, relevant risk assessments, and evolving best practices communicated by OpenAI and other recognized authorities. We also evaluate any releases of new or updated system cards to determine potential impacts on our audit processes and adjust our internal controls as needed to address new risks and maintain safe, compliant AI usage.
All personnel using AI tools receive targeted training on tool capabilities, limitations, and applicable compliance requirements, including the mandatory AI literacy requirement under Article 4 of the EU AI Act, effective February 2, 2025. All Mastermind staff have participated in and successfully completed an AI literacy program developed by CeADAR and Ireland’s Department of Enterprise, Trade and Employment, supported by the European Digital Innovation Hub and the European Commission, including graded assessments. This training equips staff to apply AI responsibly and in ways that complement professional expertise.
By following these measures, we uphold our commitment to transparency, data protection, and the responsible use of AI to enhance, not replace, auditors’ professional judgment.