AI Transparency Notice

Revision: October 1, 2025

This notice concerns the use of AI technologies with Mastermind’s services.

Transparency and integrity are core to how Mastermind operates. As the field of assessment and third-party assurance evolves, so do the tools we use to serve our clients with excellence. This notice explains how we thoughtfully integrate artificial intelligence (AI) into our audit processes, not to replace human judgment or accelerate inspection procedures indiscriminately, but to enable deeper and more meaningful engagement with our clients during the limited time we share each year.

This is not a disclaimer or notice issued in response to a regulatory requirement. It is our way of being clear and direct about how we approach new technologies that inherently surface security and privacy considerations. Here, we outline the AI tools we use, the purposes they serve, and the guardrails we maintain to uphold the rigorous standards you expect from a company audacious enough to call itself Mastermind.

When used responsibly, AI helps us redirect time away from low-value tasks such as manually reviewing lengthy procedural documents. It also returns time to our auditees: questions our assessors can answer themselves through offline review no longer need to take up an auditee’s calendar. The capacity we recover goes toward critical thinking, professional skepticism, and in-depth analysis, the skills essential to delivering objective, high-quality audits. Our approach is designed to amplify our team’s expertise, not to automate away the nuance and diligence central to effective auditing.

We have also adopted the AI Principles of the Organisation for Economic Co-operation and Development (OECD) as a cornerstone of our governance framework. These internationally recognized principles guide us in:

  • Upholding the rule of law, fairness, and human rights
  • Being open about how our AI works and why
  • Keeping systems and data safe and secure
  • Owning our decisions and how we make them
  • Promoting inclusive growth and human well-being

AI, applied in a calculated and responsible way, also helps us attract and develop the most experienced professionals in the field. It means we can avoid the cheap compromises of nearshoring, offshoring, or replacing skilled work with AI agents, and instead give our teams the tools to focus on what they do best.

Mastermind primarily uses GPT-5, with limited use of GPT-4o, GPT-3.5/4-mini, and GPT-4.5 (research use), to assist with two activities: summarizing documented policies and procedures provided by clients, and suggesting potential key process audit trails for further investigation by assessors. We disclose this use of AI openly so that client trust and safety teams have the context they need to evaluate our services and confirm that our approach aligns with any applicable regulatory obligations, ethical expectations, or internal compliance requirements.

Client Evidence Review

Capabilities & Use Case

OpenAI’s models can support the automated analysis, summarization, and interpretation of client-provided evidence, whether text-only or multimodal (text and image), allowing audits to be completed with greater efficiency, accuracy, and analytical depth.
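
To make this concrete, the sketch below shows the general shape of such a call using the official openai Python SDK. The file name, prompts, and model configuration are illustrative assumptions rather than our production setup, and the output of a call like this is always treated as a draft for assessor review.

    # A minimal sketch of multimodal evidence summarization, assuming the
    # official "openai" Python SDK. File name, prompts, and settings are
    # illustrative, not Mastermind's actual configuration.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Evidence is supplied manually by the assessor; there are no system
    # integrations that feed data to the tool automatically.
    with open("access_review_screenshot.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-5",  # the primary model named in this notice
        messages=[
            {
                "role": "system",
                "content": "Summarize audit evidence factually. Do not infer "
                           "details that are not visible in the inputs.",
            },
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Summarize this access-review screenshot for an assessor."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            },
        ],
    )

    draft_summary = response.choices[0].message.content  # human-reviewed before use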

Key Risks

  • Misinterpretation or Fabrication: Risk of producing inaccurate summaries or introducing fabricated details when interpreting evidence.
  • Privacy Incident: Risk of exposing or mishandling personal data or other sensitive data contained in client evidence during sampling or review.
  • Over-Refusal: Risk of declining to process benign yet relevant evidence, particularly in multimodal contexts.

OpenAI Controls & Mitigations

  • Data Filtering: Use of rigorous pre-training data filtering to reduce exposure to PII and other sensitive content.
  • Refusal Training: Training the model to decline requests involving unsafe, illegal, or regulated content.
  • Multimodal Refusal Evaluation: Assessing refusal behavior for appropriateness across both text and image inputs.
  • Monitoring & Logging: Continuously monitoring for over-refusal and other unnecessary rejections, while verifying accuracy in evidence handling.

Mastermind Considerations

  • Data Privacy: Minimize unnecessary collection of personal data, request client redaction when samples contain nonessential personal data, and ensure all evidence is processed in compliance with data protection regulations (e.g., GDPR, CCPA).
  • Access Controls: Restrict model access to authorized personnel and approved systems.

Key Process Audit Trail Generation

Capabilities & Use Case

OpenAI’s models can generate recommended key business or audit processes (“audit trails”) for further investigation by assessors. They do so by applying advanced reasoning and natural language understanding to client-provided design documents, such as control policies and procedure artifacts, and evaluating those documents against the normative criteria in scope for the audit.
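
As an illustration of what this suggestion step can look like, the following sketch asks for candidate audit trails in a structured, parseable form via the official openai Python SDK. The criterion named (ISO/IEC 27001) and the prompt wording are assumptions made for the example, not a description of any specific engagement.

    # A minimal sketch of audit-trail suggestion, assuming the official
    # "openai" Python SDK. Criteria, prompts, and file names are illustrative;
    # assessors treat the output as leads to investigate, never as conclusions.
    import json
    from openai import OpenAI

    client = OpenAI()

    # Client-provided design document, entered manually by the assessor.
    policy_text = open("change_management_policy.txt").read()

    response = client.chat.completions.create(
        model="gpt-5",
        response_format={"type": "json_object"},  # constrain output to parseable JSON
        messages=[
            {
                "role": "system",
                "content": "You suggest candidate key processes ('audit trails') "
                           "for human assessors to investigate. Cite the policy "
                           "passage behind each suggestion. Return a JSON object "
                           "with a 'trails' array of objects having the keys "
                           "process, rationale, and source_excerpt.",
            },
            {
                "role": "user",
                "content": "Normative criteria in scope: ISO/IEC 27001 change "
                           "management controls.\n\nPolicy document:\n" + policy_text,
            },
        ],
    )

    suggestions = json.loads(response.choices[0].message.content)["trails"]
    # Every suggestion is logged and dispositioned by an assessor before testing.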

Key Risks

  • Hallucination: Risk of generating inaccurate, non-compliant, or unsupported process steps.
  • Bias: Risk of introducing unintended social, demographic, or procedural biases into process recommendations.
  • Over-Refusal: Risk of unnecessarily omitting valid process steps through over-refusal, particularly in edge-case scenarios.

OpenAI Controls & Mitigations

  • Alignment Techniques: Implementation of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align model outputs with intended use cases, regulatory requirements, and compliance standards.
  • Safety Evaluations: Execution of extensive pre-deployment evaluations to assess and mitigate risks of hallucination, bias, and inappropriate refusal behavior.
  • Instruction Hierarchy: Training models to enforce a hierarchical instruction framework, prioritizing system-level directives to minimize susceptibility to prompt manipulation or conflicting instructions.
  • Moderation API & Classifiers: Deployment of automated content moderation and classification systems to detect and filter unsafe, non-compliant, or sensitive outputs (see the sketch following this list).
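
As an illustration of that moderation layer, the sketch below shows an output-side check using OpenAI’s publicly documented Moderation API through the official Python SDK. The handling of a flagged output is an illustrative assumption, not a description of our internal tooling.

    # A minimal sketch of an output-side moderation check, using OpenAI's
    # Moderation API via the official "openai" Python SDK. The handling of a
    # flagged result below is illustrative, not Mastermind's actual workflow.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    draft_output = "...model-generated audit-trail suggestion..."

    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=draft_output,
    )

    if moderation.results[0].flagged:
        # Flagged drafts are quarantined for human review rather than used.
        raise ValueError("Draft flagged by moderation; route to a reviewer.")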

Mastermind Considerations

  • Auditability: All process generation activities are logged and governed by human-on-the-loop controls, ensuring continuous oversight, review, and verification.
  • Documentation: Generated outputs are systematically reviewed to validate completeness, accuracy, and alignment with audit requirements.

Protecting client information is central to how we apply AI in our audit engagements. Our approach follows internationally recognized standards and regulatory frameworks, with clear governance protocols, human-in-the-loop oversight, defined access controls, and full data lifecycle management to ensure secure and compliant handling of client data from acquisition through processing, retention, and eventual disposal.

Manual Data Input & No Direct Storage Connections

All data entered into our AI tools is manually provided by our audit professionals. These platforms do not have direct access to, or integrations with, our systems of record or retention solutions. This deliberate separation reduces exposure risk and ensures that confidential information is never automatically extracted, exported, or harvested by external services.

Scope of AI Tool Use: Governance Documentation

We use AI tools exclusively to process governance documentation, including policy, procedure, and design artifacts, and per standard operating procedure, we do not enter client production data, transactional records, or other operating effectiveness evidence into these platforms. This limited scope reduces data risk and supports best-practice data minimization principles.

Administrative and Contractual Safeguards

We use paid AI tool subscriptions that allow us to enforce administrative controls such as user-level access permissions and rapid revocation of access when necessary. These subscriptions also provide enhanced contractual protections, including more restrictive and transparent terms of use, defined data retention limits, and stronger legal obligations around data rights than those offered by free versions.

Ongoing Compliance Reviews

We routinely review our data handling practices and privacy configurations to ensure continuous alignment with applicable regulations and industry standards. These reviews help us identify emerging risks early and implement timely updates to maintain compliance and data protection.

Human-in-the-Loop (HITL)

All outputs generated by AI tools in the audit process undergo human review and approval before finalization or external distribution. This oversight preserves professional judgment, maintains accuracy, and mitigates risks associated with automated decision-making.
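
For a concrete picture of that gate, the sketch below models a minimal review record. The field and function names are hypothetical, invented for this example; our actual tooling differs, but the invariant is the same: nothing is finalized or distributed without an explicit reviewer decision.

    # A minimal human-in-the-loop gate: no AI-generated text leaves the
    # pipeline without an explicit reviewer decision. Record structure and
    # names are hypothetical, not Mastermind's actual tooling.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ReviewRecord:
        draft: str        # AI-generated draft
        reviewer: str     # assessor who dispositioned it
        approved: bool
        final_text: str   # possibly edited by the reviewer
        reviewed_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    def finalize(draft: str, reviewer: str, approved: bool, final_text: str) -> ReviewRecord:
        """Return an auditable record; only approved text may be distributed."""
        record = ReviewRecord(draft, reviewer, approved, final_text)
        if not record.approved:
            raise PermissionError("Draft rejected; nothing leaves the pipeline.")
        return record

    # Usage: the assessor edits and signs off before external distribution.
    record = finalize(
        draft="AI summary of the access-control policy ...",
        reviewer="j.doe",
        approved=True,
        final_text="Assessor-verified summary of the access-control policy ...",
    )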

Model Updates & Monitoring

We monitor updates to AI models, relevant risk assessments, and evolving best practices communicated by OpenAI and other recognized authorities. We also evaluate any releases of new or updated system cards to determine potential impacts on our audit processes and adjust our internal controls as needed to address new risks and maintain safe, compliant AI usage.

User Training

All personnel using AI tools receive targeted training on tool capabilities, limitations, and applicable compliance requirements, including the mandatory AI literacy requirement under Article 4 of the EU AI Act, effective February 2, 2025. All Mastermind staff have participated in and successfully completed an AI literacy program developed by CeADAR and Ireland’s Department of Enterprise, Trade and Employment, supported by the European Digital Innovation Hub and the European Commission, including graded assessments. This training equips staff to apply AI responsibly and in ways that complement professional expertise.

By following these measures, we uphold our commitment to transparency, data protection, and the responsible use of AI to enhance, not replace, auditors’ professional judgment.