No Audio

Guide to AI Governance – Frequently Asked Questions

Introduction

AI governance is among the most complex and consequential topics facing boards and executive teams today. As AI adoption accelerates across organisations, business leaders are moving quickly to unlock its potential — often faster than governance, oversight and accountability frameworks can mature to ensure responsible AI deployments.

In our view, AI governance extends well beyond technology oversight. It encompasses accountability structures, ethical guardrails, data governance, cybersecurity, privacy, regulatory compliance, control design, workforce impact, as well as the critical importance of realising long-term value. These questions are rarely confined to a single function. Instead, they span the enterprise, engaging CFOs, CIOs, CISOs, CROs, compliance and legal leaders, internal audit, human resources, operations and marketing — often simultaneously. And, of course, boards are focused on asking the right questions.

To address these evolving concerns, Protiviti has developed this comprehensive AI Governance FAQ guide. We provide practical, cross-functional perspectives on the governance of AI systems and data, while also addressing broader implications across compliance, cybersecurity, finance, people and culture, customer experience, operations, internal audit and board oversight. The questions are designed to surface the issues leaders are confronting as AI moves from experimentation to embedded, enterprise-wide use.

We have structured this guide to serve as a reference document rather than a linear narrative. As a result, certain content and themes may appear across multiple sections to ensure clarity and accessibility. Readers are encouraged to focus on the questions most relevant to their roles or immediate priorities.

Of note, this material aligns with Protiviti’s proprietary AI Capability Map, which defines the areas and processes organisations must develop to create and accelerate value-delivering business outcomes while establishing trust, transparency and effective governance across the AI lifecycle (see accompanying graphic).

The information contained herein is for general informational purposes only and does not constitute legal or accounting advice. Organisations should consult their own legal, technology and financial advisers regarding specific legal, regulatory, technology, and financial accounting and disclosure questions relevant to their circumstances.

 

Section 1: Setting the foundation — AI governance, accountability and operating model

+ EXPAND ALL

1. What does “effective AI governance” look like in practice?

+

Effective AI governance consists of practices, principles and frameworks that ensure AI systems are financially sound, responsibly deployed and aligned with an organisation’s overall business strategy. Following are some key characteristics of effective AI governance:

  • Effective allocation of resources enables the organisation to formulate AI strategies, scoping out the best use cases through a rigorous business case process and laying the necessary data foundations to advance the business with emphasis on generating expected ROI.
  • Ethical frameworks and guidelines clarify the organisation’s values and guardrails in deploying and using AI solutions by addressing issues such as bias, fairness, accountability and transparency.
  • Interdisciplinary collaboration involves collaboration across various disciplines, including technical, legal, ethical and business perspectives, to address the multifaceted impacts of AI, including its impact on the workforce.
  • Risk management focuses on identifying and prioritizing risks, and implementing monitoring and control processes that reduce risks associated with AI systems to an acceptable level. It implements continuous monitoring and evaluation processes to ensure effective adaptation to new challenges and findings.
  • Transparency and explainability ensures AI solutions are useful to decision-makers. As an alternative to a black box, explainable AI models enable decision-makers to understand model output, engendering confidence in them when considering recommended actions.
  • Accountability mechanisms are essential to hold individuals or teams accountable for the responsible performance of AI systems. These mechanisms include auditing processes to review AI system performance and adherence to internal governance policies.
  • Data governance integration involves data governance practices specifically tailored for AI, focusing on data quality, privacy and security in training and deploying AI models.
     

In summary, the attributes of effective AI governance can facilitate acceleration of AI adoption without compromising quality and confidence as well as build an AI operating model that scales and brings value to the business. AI governance is about ensuring trust through secure AI solutions, navigating the known and developing regulatory landscape, and addressing gaps in AI talent and skills.

2. How is AI governance different from traditional IT governance?

+

AI governance focuses on AI technologies. Traditional IT governance manages IT resources and infrastructure. Both address compliance with regulations such as security and privacy.

  • AI governance emphasises responsible deployment. By contrast, the emphasis on ethical considerations may not be as prominent in traditional IT governance.
  • AI governance is more dynamic in nature. Accordingly, AI governance frameworks must be flexible and adaptable. However, traditional IT systems are more static, with governance relying more on established processes and standards.
  • Stakeholders and stakeholder engagement may vary. AI governance engages multiple stakeholders, including internal and external parties — e.g., ethics champions, social scientists and community representatives. By contrast, traditional IT governance typically involves just internal stakeholders focused on business needs.
  • The regulatory landscape differs. As AI regulations emerge and change, governance must adapt to comply with new laws and standards that specifically address AI's evolving implications. Traditional IT governance relies on established frameworks and compliance requirements that may not evolve as rapidly.
     

Thus, the ethical considerations, interdisciplinary collaboration and dynamic nature of managing risk distinguish effective AI governance from traditional IT governance.

3. How is AI governance different from traditional data governance?

+

AI governance and traditional data governance are similar in that they focus on ensuring data quality, integrity, security, compliance and ethical use. However, the scope of traditional data governance is broader as it addresses data lifecycle management through policies, controls and best practices. AI governance builds upon this foundation:

  • For training data, robust measures to detect and mitigate bias are used to ensure AI datasets are accurate, representative and ethically sourced. Privacy and compliance are paramount, necessitating the removal of sensitive information or anonymisation of that information by permanently removing personal identifiers to make re-identification impossible.
  • When managing prompts, best practices include sanitising user inputs to prevent injection attacks, securely logging interactions and maintaining transparency about prompt usage and consent.
  • Model outputs require rigorous validation processes — both automated and human-in-the-loop — to ensure they are safe, fair and explainable, especially in high-risk use cases. Continuous monitoring for bias and ethical risks is essential, alongside clear accountability for addressing errors or unsafe content on a timely basis.
  • Data retention policies must define how long various data types are stored, ensure secure disposal and strictly limit access, all in accordance with applicable legal and business requirements.
  • Comprehensive data lineage tracking enables full transparency and traceability throughout the AI lifecycle, supporting auditability, troubleshooting and regulatory reporting.
     

The bottom line: Traditional data governance applies more broadly across an organisation’s entire data lifecycle and encompasses the effective management of data for various purposes beyond AI. Effective AI governance relies on this foundation to address the distinct challenges of AI.

4. What constitutes an AI ethical framework?

+

The EU AI Act sets mandatory legal requirements for safety and fundamental rights with a focus on human oversight, privacy, transparency and fairness. Meanwhile, the National Institute of Standards and Technology (NIST) and Institute of Electrical and Electronics Engineers (IEEE) guidelines promote human well-being, transparency, accountability, safety, data privacy and trust.

To illustrate what constitutes an AI ethical framework, an overview of a typical AI ethical framework may include:

  • Fairness: Ensure AI systems do not discriminate against individuals or groups based on race, gender, age or other protected characteristics.
  • Transparency: Communicate how AI systems should operate, including sources of their training data, their intended purpose, the decision-making processes involved and their limitations.
  • Accountability: Establish clear responsibilities for the outcomes produced by AI systems, ensuring that stakeholders can be held accountable for their actions.
  • Privacy, safety and security: Protect personally identifiable information and other sensitive data, maintain safe and secure AI deployment and comply with relevant regulations.
  • Human oversight: Maintain human involvement (human-in-the-loop) in critical decision-making processes to ensure organisational values, ethical considerations and regulatory requirements are taken into account.
  • Sustainability: Promote AI technologies that support the organisation’s environmental, social and governance (ESG) initiatives, referencing such frameworks as the ones promulgated by the Carbon Disclosure Project and International Sustainability Standards Board.
     

Guidelines should be supported with specific implementation direction and effective enforcement mechanisms. For example, an ethical framework might call for the following measures:

  • Regular audits of AI algorithms for bias, use of diverse datasets for training, and implementation of corrective measures when biases are detected.
  • Development of explainable AI models to avoid the frustration of the “black box” problem.
  • Maintenance of documentation that outlines the AI system’s design and purpose.
  • Defined roles and responsibilities within the organisation for AI governance, and specific mechanisms for reporting and addressing issues related to AI decisions.
  • Implementation of data minimisation practices, securing personal data and ensuring informed consent from users prior to data collection.
  • Testing of AI systems to identify potential risks and provide assurance that established measures and control processes are operating as intended.
  • Design of AI systems that require human approval for significant decisions, particularly in sensitive areas such as healthcare, criminal justice and finance.
  • Evaluation of the environmental impact of AI systems and prioritisation of energy-efficient algorithms and infrastructure.
     

 An AI ethical framework serves as a guiding document for organisations aiming to develop and deploy AI technologies responsibly. Each principle in the framework should be adapted to address specific business and regulatory needs and stakeholder expectations.

5. Who owns the AI governance process, including AI governance decisions across the organisation, and how should accountability be allocated between management, risk, compliance and internal audit?

+

A well-defined ownership structure and clear accountability among various organisational groups and stakeholders are what make AI governance work. Ownership starts at the top of the organisation and works its way down to the first, second and third lines.

It begins with executive leadership. This could be the CEO and a chief AI officer (CAIO) or other senior executive sponsor. Executive leaders are ultimately accountable for AI governance decisions. They set the strategic direction, prioritise resources, and establish an AI governance framework that is aligned with the strategy, supports efficient allocation of resources and ensures responsible AI deployment.

An AI governance committee develops policies, monitors compliance with ethical standards, assesses risks and provides guidance on AI initiatives. Its composition should include an executive sponsor (a CEO direct report, e.g., a CAIO or member of the executive team), and representatives from line management, risk, compliance, legal, IT, ethics and internal audit. See Question 53 for a discussion of committee composition and the standard responsibilities for each committee member.

Line management (first line) consists of business unit leaders, operators, product owners and technical teams responsible for the development and operational management of AI systems. As model owners, they are accountable for the proper functioning of the models they deploy in accordance with their prescribed, intended role and with applicable regulatory frameworks. They identify and mitigate AI-related risks, ensure compliance with governance policies, allocate resources for risk management and maintain documentation for transparency in AI decision-making.

Risk management and ethics teams (second line) develop and communicate AI governance policies, conduct risk assessments, provide training to management on risk mitigation strategies and monitor adherence to established frameworks. They also challenge and support management in implementing effective governance practices.

Compliance teams (second line) monitor compliance with laws and regulations related to AI and conduct periodic reviews. In collaboration with risk management and ethics teams, they provide guidance on ethical considerations in AI deployments and ensure alignment of governance practices with compliance standards.

Internal audit (third line) assesses the performance of AI governance frameworks, evaluates compliance with internal policies and external regulations, and recommends improvements. In reporting its findings to executive leaders, the AI governance committee, and the board of directors or a designated board committee, internal audit provides independent assurance to the effective functioning of AI governance.

This governance structure works best through effective collaboration. Collaboration mechanisms include regular meetings, clear reporting lines, joint training sessions to ensure all stakeholders understand their respective AI governance roles, and continuous feedback loops from second-line functions and internal audit to the governance committee to improve AI governance practices. The focus on issues resolution and addressing audit findings is an important aspect of the collaborative process. By clearly defining ownership and accountability across these groups and fostering collaboration, organisations can establish a robust AI governance framework that promotes ethical AI use while effectively managing risks.

6. How do business leaders balance business value with privacy, IP protection and regulatory obligations?

+

When formulating AI priorities and strategy and selecting and designing AI use cases, management must balance the quest for business value with critical concerns around data privacy, intellectual property protection, and compliance with applicable laws and regulations.

This balancing act begins with ensuring that the data used by AI systems complies with privacy laws, ethical standards, and the company’s brand promise around protecting personally identifiable information (PII) and other sensitive information. Techniques such as data anonymisation or encryption help protect sensitive data; however, if overly restrictive, these privacy measures can limit the richness and utility of data in ways that can potentially impact AI performance. Conversely, insufficient privacy protections expose the organisation to reputational harm and legal risk. Thus, the organisation must find a middle ground that enables innovation while safeguarding personal data.

Intellectual property protection and regulatory compliance are equally important. AI solutions often rely on proprietary algorithms and confidential data; robust IP protection strategies (like secure development environments and non-disclosure agreements) are necessary to safeguard innovations and proprietary information from imitation, theft and unauthorised disclosure. However, excessive restrictions can stifle collaboration and innovation, so management must weigh the benefits of sharing data against the risks of IP exposure.

Regulatory frameworks governing AI — such as the EU AI Act or anti-discrimination laws — require organisations to invest in bias mitigation, transparency and compliance measures, which may increase costs and slow deployment but are vital for avoiding penalties and maintaining stakeholder trust.

Overall, there are trade-offs that must be thoughtfully considered. To manage them effectively, an integrated approach that addresses technical, legal and ethical considerations is needed. By implementing privacy-preserving techniques, safeguarding IP, adhering to laws and regulations, and fostering cross-functional collaboration, management can design AI use cases that drive innovation and competitive advantage while minimising risks.

7. Does the scope of AI governance encompass AI transformation initiatives?

+

Yes. The AI governance framework should emphasise the critical need for alignment between AI initiatives and overall business objectives, which inherently influences how resources and capital are allocated. As discussed further in the response to Question 38, the potentially high opportunity costs of AI investments necessitate that AI use cases be prioritised using three essential factors: value, feasibility and risk. The objective is to identify opportunities that have clear value levers, can be realistically implemented within the current environment and have risk profiles the organisation can govern confidently. This selection process is an AI governance imperative. To that end, governance structures often require organisations to establish clear ownership, define policies and set priorities that ensure AI investments support strategic goals and deliver measurable business value.

Boards and executive management set the tone in overseeing AI investments and monitoring how capital is deployed to maximise value while minimising financial and reputational risks. ROI tracking, risk assessments and strategic alignment reviews are integral to this oversight process. This focus merits their attention because of the myriad examples of business transformation opportunities AI offers across industries:

  • Boost productivity and efficiency. Whether it's manufacturing, logistics or services, AI agents can operate autonomously, make decisions and take actions based on real-time data and learned experiences, adapt to new situations, and continuously improve their performance over time.
  • Scale dynamically as tasks and data volumes increase. As businesses grow, AI solutions can enable scaling operations to handle varying workloads and adapt to new tasks without human intervention. The flexibility their deployment offers enables expansion without significant resource increases.
  • Improve speed of customer service. AI agents can handle queries faster and more accurately to deliver personalised experiences at scale. This transformation can improve response times and resolution rates in consumer-facing industries. See response to Question 65 for further discussion.
  • Enhance decision-making. In finance, healthcare and strategy-setting, agentic AI can facilitate smarter, more informed decisions by analysing large amounts of data in real time, spotting hidden patterns and highlighting key insights.
  • Innovate products and services. For example, in healthcare, revolutionising patient care with personalised advice and remote treatment would be game-altering.
  • Add another dimension of labour. Both generative and agentic AI can be transformative in performing work along with employees and contractors, thereby increasing efficiency and reducing costs.
     

Opportunities for transforming business processes with AI solutions are endless. For example, in cybersecurity, AI agents autonomously monitor and respond to incidents. In IT support, they can act autonomously to perform software updates. In human resources, they can streamline hiring processes. Supply chain management can benefit from AI-driven logistics. Marketing strategies are enhanced by AI-driven analysis of consumer behaviour.

Thus, AI governance embraces the transformation possibilities of AI, just as it encompasses the protection and response aspects of managing AI deployments. The governance focus on investment decisions should ensure resources are directed at the most impactful AI initiatives. In doing so, AI governance integrates ethical, legal and reputational considerations into the capital allocation process to ensure responsible AI deployment.

8. What are the structural characteristics of an AI governance operating model that enables innovation and ideation while balancing and maintaining appropriate oversight and control?

+

An innovation-enabling AI governance operating model balances structure with flexibility while ensuring responsible AI deployments. The following structure helps to achieve this vital balance:

A centralised centre of excellence involves stakeholders from business, technical, risk, compliance and ethics functions to ensure balanced decision-making and rapid resolution of issues. Responsible for oversight, transparency and coordination of all AI initiatives, this group sets standards, policies and best practices while supporting agile innovation across business units.

Defined roles and responsibilities clarify ownership across AI initiatives. A RACI matrix can be useful in delineating responsibilities by identifying who is responsible, who is accountable, who should be consulted and who should be informed. This delineation positions teams to focus on their specific tasks without overlap, allowing for faster decision-making and execution.

Relentless focus on the stakeholder experience is a strategic imperative when developing AI solutions. Chatbots, virtual customer assistants, recommendation engines, AI-driven email responders, personalized marketing platforms and intelligent routing tools are examples of AI deployments that serve as the primary interface between organisations and their customers, employees and other users. If these solutions fail to meet stakeholder expectations for responsiveness, relevance or ease of use, they can quickly erode trust and satisfaction and, in the case of customers, lead to attrition. Conversely, AI solutions that enhance the customer journey — by making interactions smoother, faster and more personalised — encourage greater adoption and foster long-term loyalty. Personalisation drives higher engagement and retention rates — consider streaming services like Netflix that use AI-driven recommendation systems to suggest content tailored to individual preferences. The same concept works well with employees and other users.

Cross-functional collaboration creates interdisciplinary teams that include stakeholders from technology, business, legal, compliance and ethics. Diverse perspectives facilitate innovation by fostering creative solutions and encouraging innovative thinking, as team members bring different expertise and insights to the table. A cross-functional team consisting of data scientists, business analysts and compliance officers can work collaboratively on AI projects, ensuring effective ideation and creativity while also aligning innovative solutions with regulatory requirements. For example, a healthcare organisation might form a team that includes clinicians, data engineers and ethics champions to develop an AI-driven diagnostic tool that meets clinical needs while also adhering to ethical standards.

Flexible oversight mechanisms offer a governance framework that allows for regular checkpoints and audits throughout the AI development lifecycle while maintaining flexibility. Flexible oversight and accountability for AI investment decisions and budgets enable rapid iteration, failing fast and experimentation by avoiding bureaucratic delays and encouraging teams to test new ideas quickly. An agile governance model where prototypes are regularly reviewed and adjusted based on feedback from users and customers allows innovation teams to pivot timely in response to feedback and market needs.

Innovation pipelines and idea management are made possible through structured processes for submitting, evaluating and prioritising AI innovation ideas based on criteria that balance business value, technical feasibility and risk. A transparent idea management process encourages employees to contribute innovative concepts, knowing there’s a clear path for evaluation and potential implementation. For example, a financial institution may use an internal platform where employees can propose AI applications for fraud detection, with a committee assessing feasibility and potential impact.

Ethical and responsible AI principles embedded into the governance framework facilitate the building of trust with stakeholders and the public, creating a supportive environment for innovation. For example, a technology company developing a facial recognition system may establish guidelines to ensure the technology is free from bias, thereby fostering public acceptance and encouraging further innovation in AI applications.

Continuous learning and feedback loops enable ongoing learning and adaptation based on project outcomes, regulatory changes, and feedback from audits, users and external stakeholders. Continuous learning encourages teams to experiment, learn from failures within defined guardrails and iterate on their projects, promoting a “fail fast” mindset and culture of responsible innovation. For example, a retail organisation utilising AI for inventory management can analyse performance data and customer feedback to refine algorithms, leading to improved accuracy and efficiency over time.

An AI governance operating model that effectively balances innovation with oversight and control is characterised by defined roles, cross-functional collaboration, flexible oversight, structured idea management, ethical principles and continuous learning.

9. How are organisations ensuring awareness of responsible AI usage to enable implementation of appropriate frameworks, enhance cyber resilience and maintain information confidentiality?

+

As discussed in Question 4, awareness of the strategic importance of responsible AI deployments begins with an AI ethical framework that is aligned with legal and regulatory requirements in all markets in which the organisation operates. Other aspects of awareness include:

Offering targeted training and communication to increase the focus on responsible AI practices. Tailoring training for different audiences, including developers, business leaders and end users, is best practice. Examples include workshops and webinars on topics such as data privacy, bias mitigation and secure AI deployment, as well as scenario-based training to illustrate to employees the real-world implications of irresponsible AI use. Internal communication channels can be used to share updates on responsible AI initiatives.

Enhancing cyber resilience by integrating cybersecurity measures into AI governance to protect against threats like adversarial attacks, data breaches and model manipulation. Regular penetration testing on AI systems to identify vulnerabilities and the use of encryption and access controls to safeguard sensitive data used in AI models are best practices to address emerging cyber threats while ensuring compliance.

Fostering cross-functional collaboration among AI experts, legal advisers, ethics champions and cybersecurity professionals to review and approve AI projects and ensure they align with ethical and security standards. The AI governance or steering committee discussed in Questions 5 and 53 can facilitate this collaboration through regular meetings to discuss risks, opportunities and lessons learned from AI deployments.

Engaging stakeholders and the public through awareness campaigns to educate external stakeholders, including customers, policymakers and partners, about responsible AI usage. Publishing AI principles and a code of ethics on the organisation’s website along with supporting materials to simplify complex concepts can build confidence in the organisation’s focus on AI’s benefits and boundaries.

Measuring and refining awareness efforts through surveys and feedback channels to gauge employee and stakeholder understanding of responsible AI practices. Metrics such as training completion rates and incident reports related to AI misuse can offer valuable insights regarding areas where attention is needed.

By combining robust frameworks, targeted training, cross-functional collaboration and public engagement, organisations can effectively raise awareness of responsible AI usage.

10. How should AI governance priorities, strategies and policies be memorialised and communicated to stakeholders?

+

Clarity, accessibility and engagement are the focus when memorialising and communicating AI governance priorities, strategies and policies to stakeholders.

Organisations should develop comprehensive documentation that clearly outlines ethical guidelines, roles and responsibilities, risk management practices, and compliance measures. These documents should be easily accessible to all relevant stakeholders through secure repository websites that provide a centralised digital hub for the organisation and should be accessible only to employees.

Regular training sessions, workshops, webinars and structured communication channels — such as newsletters and town hall meetings — should be used to educate internal stakeholders, provide updates and foster engagement. Stakeholder feedback mechanisms facilitate continuous improvement of AI governance practices. As for customers, ecosystem partners and regulatory bodies, relevant summaries of AI governance policies should be posted on the company’s website to emphasise the organisation’s commitment to compliance and ethical standards in AI governance.

To reinforce the importance of AI governance, executive leadership should actively participate in communicating the organisation’s priorities and strategies, demonstrating their commitment to responsible AI use. Transparency is further enhanced by publishing regular reports on governance efforts, compliance and outcomes, which helps build trust with both internal and external stakeholders. Aligning governance communications with recognised regulatory standards and industry best practices also clarifies the organisation’s dedication to ethical and compliant AI deployment. This structured approach assures AI governance is not only well-documented but also consistently communicated and understood, ensuring all stakeholders are informed, engaged and aligned with the organisation’s commitment to responsible AI usage.

11. What new positions or roles should C-level executives be considering to ensure the requisite talent and expertise resides on their respective teams as AI plays an increasing role in the business?

+

C-level executives should ensure their teams possess the necessary talent and expertise to balance the organisation’s focus on AI-driven innovation and responsible AI deployments. New positions and roles should focus on bridging technical expertise with strategic oversight, as well as addressing emerging challenges in AI governance, ethics and workforce transformation.

At the executive management level, examples include introducing specialised roles like chief AI officer (CAIO), AI ethics officer or data scientist-in-residence, as well as updating multiple position descriptions across the organisation to emphasise AI literacy, data-driven decision-making and change management capabilities.

Following are specific comments pertaining to different executives:

The chief financial officer (CFO) and finance team may benefit from roles such as an AI-driven financial analyst or a predictive analytics specialist to enhance financial planning, forecasting and cost optimisation through advanced modelling. The finance organisation may gain tangentially from this approach, which widens the universe of potential finance team members to include not only accountants, but also data analysts (or jobs that include both), making a finance role more appealing to current and future employees.

The chief information officer/chief technology officer (CIO/CTO) should consider roles such as AI infrastructure architects or machine learning engineers to manage scalable AI systems, modernise legacy technologies and ensure integration across the enterprise.

The chief information security officer (CISO) should establish roles like an AI risk specialist or a cyber threat intelligence analyst with expertise in securing AI systems against adversarial attacks and ensuring compliance with data privacy laws.

The chief risk/compliance officer (CRO/CCO) should introduce positions such as an AI model risk manager or regulatory technology (RegTech) specialist to oversee AI model validation, regulatory compliance and governance frameworks.

The chief human resources officer (CHRO) should create roles like a workforce transformation lead or an AI talent strategist to manage reskilling initiatives, workforce planning and ethical adoption of AI in HR processes.

The chief marketing officer (CMO) should consider roles like an AI-powered marketing strategist or a customer data scientist to drive personalized campaigns, optimise customer engagement and uphold ethical marketing standards.

The chief operating officer (COO) should consider adding a process automation manager or an AI operations specialist to focus on streamlining workflows, optimising supply chains and scaling AI-enabled operational efficiencies.

The chief audit executive (CAE) should consider hiring individuals with data and AI expertise and establishing roles such as AI audit analyst or continuous monitoring specialist to ensure transparency, compliance and risk mitigation in AI systems.

The chief legal officer/general counsel (CLO/GC) should develop positions like an AI regulatory counsel or an intellectual property specialist to navigate evolving legal landscapes, protect IP and ensure compliance with AI-related laws.

12. What training and development initiatives should C-level executives be considering to ensure the requisite talent and skillsets reside on their respective teams as AI plays an increasing role in the business?

+

C-level executives should prioritise training and development programs that enhance AI literacy and strategic thinking across their teams. They should invest in leadership development programs to equip their teams with the skills needed to lead AI-driven transformations, promote a culture of continuous learning and ensure employees are adept at leveraging AI tools effectively across the business while navigating the associated risks. Following are specific comments pertaining to different executives:

  • The CFO should focus on training related to AI-driven financial forecasting, risk analytics and data visualisation techniques to improve decision-making and operational efficiency. Training on the operation of specific AI models and ecosystem tools, as a skill, will also enable finance organisations to focus more on data analysis than on data creation.
  • The CIO/CTO must prioritise upskilling in AI technology management, ensuring their teams can integrate AI into IT systems while aligning with business strategy and cybersecurity best practices.
  • The CISO should emphasise training on AI-enhanced cybersecurity measures, threat intelligence and data privacy frameworks to protect against new risks introduced by AI technologies.
  • The CRO/CCO must focus on developing expertise in regulatory technology and using AI to enhance compliance monitoring, risk assessment and governance frameworks.
  • The CHRO should lead initiatives on AI's impact on talent acquisition, retention and workforce reskilling, ensuring HR teams are proficient in people analytics and change management.
  • The CMO should leverage training in AI for customer insights, personalisation and campaign automation, focusing on ethical data usage and digital engagement strategies.
  • The COO needs to prioritise training in operational efficiency through AI, emphasising process automation, supply chain optimisation and data-driven decision-making to enhance overall productivity.
  • The CAE needs to adopt training on AI for advanced audit analytics and continuous monitoring to enhance agility and effectiveness in the audit process. Training should focus on both auditing the use of AI and auditing with AI.
  • The CLO/GC should prioritise education on AI-related legal trends, intellectual property rights and evolving data regulations to navigate compliance challenges effectively.

Section 2: The CFO perspective, financial discipline and ROI generation

+ EXPAND ALL

13. What are the primary considerations for measuring ROI from deployments of AI tools, services and infrastructure? How are total costs estimated?

+

Measuring ROI from AI deployments involves several factors that help organisations evaluate the effectiveness and value of their investments. Following are measurable factors that have a direct impact on the return side of the ROI equation:

Cost savings from the reduction in operational costs due to increased efficiency, automation of processes and minimised manual intervention. These savings include reduced labour costs, reductions in error rates (thereby eliminating the costs of rework) and optimisation of resources (e.g., reduced costs of fuel through optimisation of trucking routes).

Increased revenue generated as a result of AI initiatives, such as improved sales through personalised marketing, enhanced customer experiences or new product offerings enabled by AI capabilities.

Employee productivity gains resulting from AI tools that assist in decision-making, streamline workflows and reduce time spent on repetitive tasks. These gains can lead to higher output with the same or fewer resources, and also mitigate pervasive or emerging employee/supply demand issues (in finance as well as other areas of the organisation) by allowing the redirection of resources.

Quality improvements through AI’s impact on product or service quality, including reductions in complaints, returns, defects and errors requiring rework.

Scalability from how AI solutions enable the organisation to scale operations — handle higher volumes of data, transactions and users — without a proportionate increase in infrastructure. Scalable AI applications can support growth while maintaining or improving profitability.

Cost considerations also have a direct impact on measuring ROI, either as a determinant of investment or a measure of ongoing total costs:

  • AI infrastructure costs related to hardware (servers, GPUs), software licenses, cloud services and data storage solutions required to support AI applications.
  • Development and implementation costs incurred from hiring data scientists, engineers and consultants, as well as expenses for designing, testing and deploying AI models.
  • Data acquisition and management costs related to acquiring, cleaning and managing data for training AI models; such costs may include purchasing datasets, data storage solutions, and tools for data preprocessing and governance.
  • Upskilling and training costs needed to train employees on new AI technologies and tools. These costs are driven by formal training programs, workshops and continuous learning initiatives to ensure staff can effectively utilize AI solutions so that their value proposition can be fully realised.
  • Energy costs; the development, training and operation of AI models — especially large-scale or generative models — can require substantial computational resources, leading to significant energy consumption and associated costs that can impact the overall profitability and sustainability of AI initiatives, particularly as organisations scale their AI deployments.
  • Maintenance and support costs associated with maintaining AI systems, including regular updates, monitoring, troubleshooting and technical support. These costs can accumulate over time as systems require adjustments and enhancements.
  • Compliance and regulatory costs from legal consultations, audits and implementing necessary compliance measures to provide the necessary assurance that AI systems comply with relevant regulations and standards.
  • Change management costs associated with communication efforts, stakeholder engagement and strategies to address resistance to change. As with any major change, there is a “change curve” effect.
     

There are also measurable factors that have an impact on ROI (albeit the impact may not be as direct or as tangible as the ones listed above), but the impact may be appropriate to consider as “soft” benefits when constructing the business case for investing in AI use cases and initiatives:

Customer satisfaction and retention improvements due to AI-driven enhancements in service quality, personalization and responsiveness are likely to sustain customer loyalty and drive referrals to new customers.

Enhanced brand reputation is achieved through improved quality.

Faster time-to-market in developing and launching new products or services as a result of AI capabilities can lead to competitive advantages and increased market share.

More effective risk mitigation results from the extent to which AI deployments reduce risks, such as fraud detection, compliance monitoring or predictive maintenance, thereby preventing costly incidents.

Innovation enablement from how AI technologies facilitate innovation leads to new business models, processes or markets, driving long-term growth and sustainability.

Positive employee engagement is achieved through the use of AI tools that enhance work experiences, reduce burnout and allow increased emphasis on more strategic tasks.

Opportunity costs represent the potential benefits that the organisation may forego by investing resources in AI instead of alternative projects or initiatives. The question is whether the AI investment is the best use of available resources.

By incorporating these measurable direct and indirect factors into the ROI analysis, organisations can achieve a more accurate and comprehensive understanding of the financial implications of their AI deployments. Balancing both the costs and benefits will enable more informed decision-making regarding the value of AI investments undertaken (to enable an organisation to “fail fast,” if necessary, and move on) and future AI investments.

14. How are CFOs and finance organisations evaluating AI investment business cases?

+

CFOs and finance organisations are increasingly adopting a disciplined, financially-driven approach to evaluating AI investment business cases, moving beyond the initial technical hype and fear of missing out (FOMO) to focus on clear, measurable financial outcomes and strategic alignment.

In assessing AI investment proposals, the total cost of ownership should be considered along with the expected ROI. As discussed in Question 13, direct costs like technology acquisition as well as obvious and hidden labour costs associated with AI implementation and maintenance are relevant considerations in measuring the total cost of ownership. The full cost goes beyond third-party spend and includes the time it takes to implement, manage change, train and hire.

As for the expected ROI, the focus for AI use cases has shifted from experimental spending to achieving stricter ROI thresholds and faster payback periods, often expecting returns within six to 12 months for high-value projects like customer support and sales automation. The attractiveness of the payback period depends on the nature of the project; for example, less than six months for simple automation or high-impact, low-complexity use cases and 12-18 months for complex or strategic initiatives. A strong strategic justification may be required for projects with a payback of greater than 18 months, as may be the case for complex agentic AI projects offering high long-term potential. Thresholds are more difficult to set as practice varies depending on the maturity of the organisation’s AI capabilities, and the practice and timelines are as evolutionary as AI itself.

Successful AI investment proposals present robust ROI frameworks that highlight productivity improvements, revenue expansion and “soft” benefits rather than merely focusing on cost savings. CFOs prefer comprehensive business cases that quantify current performance metrics, define up-front baseline metrics and offer evidence-based plans for AI deployment and performance management and monitoring. Proposals that emphasise AI’s technical capabilities without addressing financial returns, risks and opportunity costs are less likely to gain approval until they do. Business case documents and processes for IT investments can be leveraged as a starting point.

In addition, CFOs are acutely aware of the complexity and high failure rates associated with AI projects. Accordingly, they place significant importance on scrutinising all costs involved, including data preparation, integration, training and process redesign, along with any integration fees and ongoing SaaS costs. Successful AI investments prioritise human enablement, strategic alignment and sustainable value creation, focusing on job augmentation rather than mere automation. Continuous evaluation mechanisms are established to ensure AI systems deliver accountable, risk-adjusted returns throughout their lifecycle.

Finally, there is the preservation of institutional memory. Scrutiny of a comprehensive business case is only worthwhile when there is reporting on the performance of AI deployments against the expectations set by that business case with senior management and the board. Post-mortems establish confidence in those executives who consistently deliver to expectations, and that credibility enables future requests to be more readily considered.

See Questions 38 and 41 for further discussion. Question 38 offers commentary on how AI use cases should be prioritised so that capital isn’t wasted on “AI for AI’s sake.” Question 41 offers suggestions for how to discuss AI ROI with senior executives and board members who may be fatigued with discussions of AI hype without realising tangible results.

15. When determining ROI for AI deployments, how are total savings — including time savings, cost reductions and other relevant efficiencies — assessed and quantified?

+

The starting point for determining ROI for AI implementations involves the need to plan and carefully craft and document the baseline amounts against which the changes can be measured. While many companies are seeking to drive revenue growth with AI by enhancing customer experiences, enabling product innovation and accelerating sales cycles, there are many use cases that reduce costs.

Time savings are quantified by measuring the reduction in manual effort achieved through automation and process optimisation enabled by AI. Metrics such as hours saved per employee or reduced cycle times for specific workflows are used to calculate productivity gains. For example, AI-powered chatbots can significantly reduce customer service response times, measured by comparing average handling times before and after deployment. These time savings can be converted into monetary value by calculating the labour costs associated with the time saved and reallocating resources to higher-value activities.

Cost savings are assessed by identifying areas where AI improves efficiency, reduces waste or eliminates redundancies. For example, predictive maintenance solutions powered by AI can reduce equipment downtime and repair costs, which are quantified by tracking historical maintenance expenses versus those incurred after AI implementation. Additionally, organisations may compare pre- and post-deployment costs (materials, labour and overhead) and evaluate reductions in error rates, rework costs or fraud losses as direct financial benefits of AI adoption.

Other relevant efficiencies can arise from AI deployments, often yielding intangible efficiencies that contribute to ROI, such as improved decision-making, enhanced scalability and better customer experiences. These are further discussed in Question 13. They are quantified using metrics like customer retention or satisfaction rates, revenue growth or increased market share. For example, AI-driven personalization engines can boost sales by tailoring recommendations to individual customers (measured by tracking conversion rates and average order values before and after implementation). Similarly, scalability benefits are assessed by analysing the ability to handle higher volumes of transactions or data without proportionally increasing costs.

There is a combination of methods used to quantify savings comprehensively, including:

  • Baseline comparisons: Establishing benchmarks for key performance indicators prior to AI deployment and measuring improvements afterward.
  • Cost-benefit analysis: Comparing the total investment in AI (including infrastructure, training and maintenance) against realised savings and revenue gains.
  • Payback period: Assessing project risk by calculating the time period required to recover the initial cost of an investment.
  • Net present value and internal rate of return: Evaluating long-term financial returns on AI investments.
  • Scenario modelling: Using simulations to estimate potential savings under different usage conditions or adoption levels.
     

By combining these methods and metrics, organisations can develop a holistic view of the ROI from AI deployments, ensuring that both tangible and intangible benefits are captured and aligned with strategic objectives, and enabling timely calls on the ongoing utility of investments as their use evolves.

Section 3: AI risk identification, controls and integration with ERM

+ EXPAND ALL

16. How should organisations consider AI-related risks in a way that aligns with enterprise risk management and audit planning?

+

The Committee of Sponsoring Organisations’ (COSO) ERM Framework provides a useful perspective for purposes of integrating AI-specific risks into an organisation’s enterprise risk management (ERM) process. Simply stated, a structured approach to identifying, assessing, prioritising, sourcing, measuring, monitoring and mitigating risk applies to AI-related risks just as it does to all other enterprise risks.

Identifying AI-related risks begins with creating a comprehensive inventory of all AI systems in use or under development, including their purposes, stakeholders and data sources. Collaboration with cross-functional teams (IT, legal, compliance, ethics and business units) can facilitate the identification of risks associated with specific AI systems. A risk taxonomy that includes AI-specific risks may be useful for this purpose. Established frameworks (the NIST AI Risk Management Framework) can be useful in classifying AI risks into such domains as operational, ethical, technical, legal and reputational risks.

Criteria and approaches for assessing risk include assessment of business impact — how risks could affect strategy execution, financial performance, operational efficiency, customer satisfaction or competitive position. They also include reputational impact and the consequences of non-compliance with applicable laws and regulations — such as fines, sanctions or lawsuits — and the probability of risk occurrence based on historical data, industry trends and the complexity of the AI system.

A scoring or ranking system (low, medium, high) can be useful for purposes of prioritising risks considering their likelihood and impact. Risk maps are commonly used for this purpose, and AI-specific risks should be considered along with other enterprise risks. Risks can also be prioritised based on their potential to disrupt critical business processes or harm key stakeholders. High-impact, high-likelihood risks with high velocity for disruption should receive immediate focus.

AI-related risks arise from various sources across the lifecycle of AI systems. Sourcing priority risks offers insights on how to monitor, measure and mitigate them. Examples of key sources include:

  • Poor-quality, incomplete or biased training data
  • Privacy risks from how PII or other sensitive information is used in AI training data and prompts
  • Black-box models with limited transparency
  • Model drift
  • Failure of AI systems integrated into critical business processes
  • Dependency on external vendors or third-party AI tools
  • AI-generated harmful, biased or unethical outcomes
  • Non-compliance with emerging AI regulations or industry standards.
     

Scenario-based workshops with the right stakeholders at the table can be effective in sourcing potential failures or unintended consequences of AI deployments.

Measuring AI-related risks moves beyond risk mapping to quantify them where possible. Simulations can be used to explore worst-case scenarios and stress-test AI systems under different conditions. For instance, organisations can simulate cyberattacks to evaluate the robustness of AI-driven cybersecurity tools.

Specific metrics that are incorporated on a dashboard enable ongoing monitoring. Metrics can cover such areas as data quality and integrity, model performance and reliability, bias and fairness, transparency and explainability, privacy and data protection, operational effectiveness, and other pertinent matters. See the response to Question 17 below for examples of metrics. Risk tolerances used for AI-specific risks should be aligned with the company’s AI governance policies. Real-time monitoring systems that track AI performance metrics can enable detection of anomalies and flag emerging risks — providing an early warning capability. This is particularly important for dynamic environments where data patterns evolve rapidly, as in the case of high-risk AI deployments.

Mitigating risk strategies involves proactive measures to reduce risks to an acceptable level and ensure responsible AI deployment. Key risk-mitigation strategies include governance frameworks, bias mitigation techniques, privacy-preservation methods, explainability and transparency, robust security measures, regulatory compliance, feedback loops and continuous improvement, as well as training and awareness.

By systematically identifying, assessing, prioritising, sourcing, measuring, monitoring and mitigating AI-related risks through the lens of the organisation’s ERM framework, management can ensure their AI initiatives align with ERM principles. With respect to audit planning, it is best practice for internal audit to coordinate its activities with the ERM process to include high-priority risks — including AI-specific risks — in the audit plan.

17. What are examples of metrics relevant to AI-related risks?

+

Metrics, measures and monitoring are discussed in the prior question in conjunction with integrating AI-specific risks into the ERM process. Below are examples of metrics related to AI risk areas:

AI RISK AREA

RELEVANT METRICS

Data quality and integrity
  • Percentage of missing or incomplete data in training datasets
  • Number of detected data anomalies or outliers per month
  • Frequency of data validation errors during model updates
Model performance and reliability
 
  • Model drift incidents detected (number of times performance drops below a set threshold)
  • Frequency of unexpected or unexplainable outputs
  • Time between model retraining cycles
Bias and fairness
  • Disparity ratio across demographic groups in model outputs, such as approval rates by gender or ethnicity
  • Number of bias-related complaints or incidents escalated
  • Count of bias or fairness audits conducted per year
Transparency and explainability
  • Percentage of AI models with documented explainability reports
  • Number of user requests for explanation of AI decisions fulfilled
  • Average time to respond to explainability requests
Privacy and data protection
  • Number of privacy breaches
  • Incidents of non-compliance with data protection policies
  • Volume of personal data processed
Security and resilience
  • Number of security vulnerabilities identified in AI solutions
  • Frequency of penetration tests or adversarial attack simulations performed
  • Average time to remediate detected security issues
Regulatory and policy compliance
  • Number of AI models reviewed for regulatory compliance
  • Incidents of non-compliance with relevant AI regulations
  • Frequency of policy updates or governance reviews related to AI
Operational effectiveness
  • System uptime or availability percentage for AI-powered services
  • Time required to reveal and remediate AI-related incidents
  • Number of operational interruptions caused by AI failures
Stakeholder engagement and training
  • Percentage of relevant staff completing responsible AI training
  • Number of stakeholder AI-related consultations
  • Frequency of internal communications on AI risk management

While not intended to be exhaustive or a one-size-fits-all framework, these illustrative KPIs and metrics provide actionable insights into the ongoing performance, governance and risk of AI systems. A balanced family of metrics supports continuous monitoring and effective risk management.

18. How should AI risks be mapped to existing internal control frameworks without creating parallel or duplicative governance structures?

+

AI risks should be mapped to existing internal control frameworks by integrating AI-specific risk considerations directly into established governance, risk management and control processes. Organisations can leverage authoritative frameworks — such as those provided by COSO and NIST — to align AI risks with standard control domains like technology, security, privacy and compliance.

This approach involves embedding AI risk assessments into current risk evaluation cycles, mapping AI controls to specific processes and activities, and utilising existing audit and compliance teams for oversight. By applying an integrated approach to adapt current frameworks and processes already in place to accommodate AI’s unique risk profile, organisations leverage proven structures, minimise duplication, and ensure a consistent, holistic risk view across all technologies.

Many organisations are still early in their AI adoption journey. It is common to see AI-specific intake, review or governance processes established to create visibility and consistency. These focused mechanisms can be appropriate while AI use is emerging, fragmented or not yet well understood. However, they should be viewed as transitional, with a clear intent to integrate AI risk management into existing ERM frameworks as adoption matures.

The foundational principle is that AI does not introduce entirely new categories of risk — it changes how existing risks manifest. Accordingly, AI risks should ultimately be mapped into the organisation’s established risk and control frameworks (ERM, COSO, SOX, IT general controls, data governance, third-party risk management), rather than managed indefinitely through standalone structures.

The following table illustrates examples of common AI risk themes and how they align to existing risk and control domains. Because AI risks often cut across multiple frameworks depending on the use case, the table focuses on risk themes and control domains, rather than assigning risks to a single framework.

AI RISK THEMEMAPPED RISK / CONTROL DOMAIN
Bias or unfair outcomesCompliance and conduct risk
Inaccurate or hallucinated outputsOperational risk; financial reporting risk
Model drift or unapproved changesChange management; monitoring controls
Reliance on AI outputs w/o human oversightManagement review 
Use of sensitive or regulated dataData privacy and cybersecurity
Vendor embedded or black box AIThird-party risk management

Importantly, this integration does not mean business as usual. While AI-related risks align to familiar domains, many existing controls were designed for deterministic systems and traditional decision models. As a result, organisations often need to enhance and adapt existing controls — such as management review, change management, monitoring and third-party oversight — to ensure they remain effective in an AI-enabled environment. Long-term effectiveness comes from integration rather than separation, using familiar frameworks while ensuring the controls within them evolve to address AI-specific risks.

19. What are some useful control frameworks that should be considered to support AI risk management and control evaluation activities?

+

Following are some useful control frameworks to consider:

According to the National Institute of Standards and Technology (NIST) website, the NIST AI Risk Management Framework is intended to “guide critical infrastructure operators towards specific risk management practices to consider when engaging AI-enabled capabilities.” Organisations can utilise it to establish a common understanding of AI risks and develop tailored risk management strategies.

According to the International Organisation for Standardisation, ISO/IEC 42001:2023 — AI Management System Standard is “designed for entities providing or utilising AI-based products or services, ensuring responsible development and use of AI systems.” Organisations can adopt this standard to establish a formal governance structure for AI that addresses risks and promotes responsible AI usage.

NIST Control Overlays for Securing AI Systems is a framework that builds upon existing security controls and overlays AI-specific guidelines to address unique vulnerabilities in AI systems. The overlays focus on various AI use cases, providing implementation-focused controls that enhance security and privacy in AI deployments. Organisations can use this standard to strengthen their security posture for AI systems while ensuring compliance with established federal and industry standards.

The Cloud Security Alliance AI Controls Matrix is specifically focused on AI, complementing traditional cybersecurity controls and addressing risks associated with AI systems in cloud and hybrid environments. Organisations can utilise it to conduct robust evaluations of AI controls, ensuring that AI systems deployed in cloud environments meet security and compliance standards.

Section 4: The CIO/CTO perspective, AI architecture and security

+ EXPAND ALL

20. Does the selection of an AI tool or package differ from other systems selections?

+

Yes. The unique nature of AI technologies gives rise to considerations beyond a standard system selection process, although those can serve as a starting point. AI tools can learn through trial-and-error, adjusting their behaviour rather than remaining fixed after deployment. These dynamic factors require organisations to evaluate not only the technical capabilities of the tool but also its ethical implications, explainability features, scalability and adaptability to changing data patterns. Ease of integration and regulatory compliance are other considerations.

While traditional systems are typically evaluated based on fixed functionality and performance metrics, AI tools require continuous oversight to ensure models remain reliable, unbiased and aligned with organisational goals. This necessitates a focus during the selection process on factors like model retraining, data quality management and lifecycle monitoring. Ultimately, selecting an AI tool involves balancing technical innovation with robust risk management and ethical considerations, making it a more complex and iterative process than traditional system selection.

21. What security and architecture “must haves” should be in place before deploying or scaling generative AI?

+

To ensure secure and scalable deployment of generative AI systems, organisations must implement robust security, architecture and governance measures. Following are key “must haves” when deploying generative AI:

Identity and access management prevents unauthorised access by ensuring users and applications have appropriate permissions based on their respective roles. Role-based access controls should be enforced, and least-privilege principles should be in place to restrict access to sensitive AI systems and data. Multi-factor authentication and centralised identity management solutions also enhance security.

Data loss prevention tools help secure sensitive information — such as intellectual property and customer data — by preventing leaks or misuse during training, inference or storage. Other best practices include applying encryption for data at rest and in transit, and setting policies to block unauthorised data transfers.

Model access controls ensure only authenticated and authorized users or systems can interact with generative AI models. Strict access controls should be established for AI models, including application programming interfaces (API) that allow different software applications to communicate and exchange data with one another while preventing unauthorised use or tampering. Token-based authentication offers yet another secure protocol. Additionally, model lifecycle management controls should be established to govern model versioning, updates, retraining and retirement, ensuring changes (and access) are reviewed and traceable if risk thresholds are exceeded.

Robust logging and monitoring enables real-time monitoring and forensic analysis to detect anomalies, misuse or breaches. Comprehensive logging mechanisms should be implemented to capture all interactions with AI systems, including user queries, model responses and system actions. Best practices include use of centralised logging platforms integrated with SIEM (security information and event management) tools for continuous monitoring. AI-specific incident response procedures should also be established (including the ability to disable models, tools, data sources, etc.) if misuse, hallucinations or security incidents are detected.

Secure integration patterns prevent vulnerabilities in integrations that could expose systems to external threats or data leakage. Secure integration architectures should be designed to connect generative AI systems to enterprise applications and workflows. Best practices include use of APIs with secure protocols (such as https or OAuth), validation of inputs to avoid injection attacks and adoption of zero-trust architecture principles.

Encryption and key management maintain data security in storage and processing. This protection is accomplished through encryption of sensitive data used or generated by AI systems and secure key management practices. Best practices include use of hardware security modules or cloud-based key management services for encryption.

Bias and output validation prevent reputational and legal risks associated with biased or harmful content generation. Mechanisms to detect and mitigate bias in model outputs should be implemented to ensure fairness and compliance with ethical standards. Model outputs should be audited regularly and models fine-tuned using diverse and representative datasets.

Adversarial robustness ensures the reliability and security of generative AI systems in hostile environments. Defenses should be deployed against adversarial attacks, such as input manipulation or poisoning of training data. Best practices include adversarial training techniques and monitoring for suspicious activity targeting AI models.

Regulatory compliance avoids legal penalties and ensures ethical use of generative AI systems. It ensures adherence to regulations in all jurisdictions in which the company operates. That is why it is important to maintain transparent documentation of data usage, model design and decision-making processes.

Scalability and resilience ensure AI platforms can manage increasing workload demand and data access efficiently and securely while maintaining availability and performance. Architectures should support independent scaling across models, retrieval layers and orchestration services while maintaining high availability and fault tolerance. Best practices include use of containerization to encapsulate everything the system needs to run and behave consistently across different computing environments (such as development, testing and production) and cloud-native architectures to scale resources dynamically.

22. How does management assess and mitigate AI-specific technology risks without slowing delivery or hindering innovation?

+

Assessing and mitigating AI-specific technology risks requires a proactive, integrated approach that balances robust risk management with the need for speed and innovation. Following are ways to address key risks while maintaining delivery velocity:

Data leakage in AI systems is the unintentional exposure, retention or output of sensitive training data or user input. It occurs when models memorise confidential information, leading to intellectual property theft, privacy violations and regulatory penalties. Assessing this risk entails identifying sensitive data used by AI systems, evaluating where it could be exposed (for example, during training, inference or storage), and conducting regular audits of data access logs and implementation of automated tools to detect unusual data transfer patterns. It is mitigated through data encryption, access controls, zero-trust architecture and privacy-preserving techniques. To maintain speed, automate compliance checks and data governance processes to reduce manual oversight and integrate privacy-preserving techniques into the AI pipeline from the start to avoid rework later.

Prompt injection is a security vulnerability where attackers use crafted inputs to override AI's original instructions, forcing it to behave maliciously or bypass safety guardrails. To assess this risk, test AI models for vulnerabilities to malicious prompts using adversarial testing techniques and monitor logs for unusual or harmful inputs that may indicate potential injection attempts. The risk is mitigated with input validation, contextual constraints and human oversight. Build input validation and monitoring tools into the development pipeline to catch issues early without disrupting workflows.

Insecure plugins/tools in AI systems are vulnerable extensions or third-party applications (browser tools, API connectors) that AI models automatically invoke. They pose risks like data exfiltration, remote code execution and unauthorised access because they often trust AI-generated inputs without validation or strict access controls. Assess this risk with security reviews of third-party plugins, APIs and tools before integration and continuous monitoring for updates or vulnerabilities in external components. The risk is mitigated with secure integration, sandboxing and regular patching. To maintain speed, establish a pre-approved list of trusted plugins/tools and automate vulnerability scanning for faster integration.

Model supply chain risk means vulnerabilities that can arise at any stage of an AI model's lifecycle when using third-party tools, e.g., pre-trained models or datasets. These risks include data poisoning, model tampering and embedded malware, which can cause security breaches, biased outputs or complete system failure. To assess this risk, evaluate the provenance of pre-trained models and datasets to identify potential risks, such as tampering or bias, and audit third-party vendors for adherence to security and ethical standards. Mitigate the risk through model validation, provenance tracking and adversarial training. To maintain speed, use automated tools to assess model quality and compliance quickly, reducing manual evaluation time.

Unauthorised shadow AI risk refers to employees using unsanctioned, unmonitored AI tools (such as ChatGPT, Copilot or specialised apps) for work without IT approval, creating major security, privacy and compliance risks. These unauthorised systems can leak sensitive intellectual property, violate data regulations, and produce inaccurate or biased outputs. In assessing this risk, regularly scan the organisation’s network for unauthorised AI deployments and conduct interviews or surveys with business units to uncover any unapproved AI initiatives. Mitigate the risk through centralised AI governance, mandatory approval processes and endpoint detection tools to identify unapproved AI systems running within the organisation. To maintain speed, streamline approval processes by creating clear guidelines and templates for submitting AI use cases, enabling faster review and decision-making.

Strategies to balance risk management and delivery speed include:

  • Leverage automated tools for risk assessments, compliance checks and monitoring to minimise manual effort and accelerate processes (anomaly detection systems, vulnerability scanners and AI lifecycle management platforms).
  • Deploy risk-based prioritisation to focus mitigation efforts on the highest-impact and most likely risks rather than attempting to address all risks equally.
  • Integrate security and governance measures into the AI development lifecycle from the design phase to avoid costly delays later.
  • Foster cross-functional collaboration between technical teams, compliance officers and business leaders to encourage a more holistic focus in addressing risk while ensuring alignment with delivery goals.
  • Use continuous real-time monitoring to detect and address emerging risks dynamically, reducing the need for disruptive interventions.
  • Implement feedback loops to drive continuous improvement in risk mitigation practices.

23. What does a “good” AI reference architecture look like from a governance and risk perspective — and how is it different for generative AI versus traditional analytics and machine learning?

+

A “good” AI reference architecture from a governance and risk management perspective is one that integrates robust controls, ethical safeguards and risk mitigation measures into the design of the AI system and throughout the AI lifecycle. It ensures compliance with regulatory requirements, transparency in decision-making, accountability for outcomes and alignment with organisational goals. The architecture must support scalability, resilience and continuous monitoring to address evolving risks while fostering innovation. Most importantly, governance is enforced through architectural design choices — such as defined trust boundaries and embedded monitoring — rather than relying solely on policies or manual reviews.

That said, the governance and risk management requirements differ significantly between generative AI systems and traditional analytics or machine learning models due to their distinct capabilities and operational contexts. Generative AI systems, such as large language models (LLMs), present unique governance and risk challenges that extend beyond the model itself — shifting risk into prompts, orchestration logic and downstream actions. These include risks such as misinformation, explainability challenges and exposure to prompt injection attacks. Governance frameworks for these systems must include mechanisms for output validation, bias detection and transparency while ensuring compliance with emerging AI regulations. Privacy-preservation techniques, input sanitisation, monitoring and access controls are also needed. As a result, effective generative AI architectures rely more on continuous runtime controls and monitoring versus one-time pre-deployment validation.

In contrast, traditional analytics and machine learning models are typically more deterministic and operate within predictive boundaries, enabling governance to be centralised around data quality, validation and auditability instead of runtime behaviour control. Governance therefore emphasises prevention and assurance rather than ongoing behavioral containment. While bias is still a concern, traditional models often operate within narrower domains, making bias easier to detect and mitigate through controlled datasets and feature engineering. Their governance focuses on ensuring high-quality, structured training data, mitigating model drift through regular retraining and monitoring, and maintaining operational alignment. Traditional models, such as regression or decision trees, are also generally more interpretable. Compliance efforts emphasise transparency in decision-making and auditability, which is easier to establish due to the deterministic nature of these models.

24. Which control layers must be explicitly designed into the AI stack versus handled through enterprise controls already in place?

+

To manage AI systems effectively, it is essential to distinguish between control layers that must be explicitly designed into the AI stack and those that can be managed through existing enterprise controls. The following table outlines these distinctions:

STACK COMPONENT

AI STACK CONTROLS
(explicit design required)

ENTERPRISE CONTROLS
(managed through existing controls)

Data
  • Data quality management
  • Bias detection and mitigation
  • Data privacy and protection measures
  • Data lineage tracking
  • General data governance policies
  • Data retention and disposal policies
  • Data loss prevention and encryption
Model
  • Model behavior validation and testing
  • Explainability and interpretability mechanism
  • Adversarial robustness controls
  • Monitoring for model drift and performance
  • Standard performance monitoring frameworks
  • Compliance with existing regulatory requirements
  • Change management and software development lifecycle
Application
  • Input validation and sanitisation
  • Output validation and filtering
  • Access controls specific to AI functionalities
  • User interaction monitoring
  • Existing application security controls
  • Network security measures
Infrastructure
  • Container security and orchestration controls
  • Secure deployment practices
  • Resource allocation and scaling policies
  • Overall IT security policies
  • Disaster recovery and business continuity plans
Identity
  • Role-based access controls specific to AI systems
  • Multi-factor authentication for AI access
  • Centralised identity management systems
  • Existing user access management protocols

In addition, there are third-party risk management considerations that should be integrated into the analysis. We have devoted the following Section 5 to discussing these considerations.

Section 5: Third-party risk management considerations

+ EXPAND ALL

25. How does management build and maintain a complete inventory of AI systems and use cases (including embedded vendor AI and AI features in enterprise platforms) and classify them by risk and control requirements?

+

As discussed below, management should adopt an approach that encompasses identification, classification, control mapping and maintenance.

Identification: Conduct an audit of all AI systems in use by engaging with business units, IT and procurement teams. Utilise automated discovery tools that scan for AI components across applications, databases and cloud services. Reconcile the AI systems inventory against vendor and subcontractor lists, and confirm which external providers host, develop, operate or materially influence AI functionality. In addition, review vendor contracts and documentation to identify embedded AI features and capabilities. Document each identified AI system in a centralised repository, including details such as system name, purpose, owner, data sources, model types, and whether the system is in-house or vendor-supplied.

Classification: Develop a risk classification framework based on factors such as data sensitivity, potential impact on operations, compliance requirements and ethical considerations. Use an appropriate scoring system to evaluate risks associated with each AI system and feature, considering aspects such as bias, privacy implications and operational dependencies. Incorporate third-party inherent risk factors into the framework, including fourth-party/subcontractor reliance, concentration risk, geographic/data residency exposure, and the extent of vendor access to sensitive or confidential data. Define clear vendor due diligence triggers (e.g., high-risk AI use cases, use of sensitive data, customer-facing decisioning) and align classification outcomes to required third-party risk management (TPRM) activities. Engage cross-functional teams to assess each AI system against the established criteria, ensuring a consistent and thorough evaluation process.

Control mapping: Based on the risk classification, assign appropriate control requirements for each AI system. This includes technical controls (access management and data encryption), operational controls (monitoring and auditing) and governance controls (compliance with regulations and ethical guidelines). For vendor-supplied or externally hosted AI, map controls such as contractual safeguards (e.g., data processing terms, confidentiality, IP usage restrictions, subcontractor requirements), audit and assessment rights (e.g., SOC report cadence), model/material change notification, incident notification timelines, and security and privacy addenda mandating specific technical and operational controls that ensure data is handled securely and in compliance with regulations. Create a control matrix linking each AI system to its specific control requirements. Ensure this mapping aligns with existing ERM frameworks and industry best practices.

Maintenance: Establish a routine for reviewing and updating the inventory to reflect changes in AI systems, new deployments or modifications to existing systems. Periodic audits provide assurance as to inventory accuracy. Foster ongoing communication with business units and IT teams to capture new AI initiatives and ensure the inventory remains comprehensive and current. For vendor-supported AI, align inventory updates to the TPRM lifecycle, including periodic risk reassessments, review of updated SOC reports and security/privacy attestations, monitoring of vendor performance and incidents, and confirmation of subcontractor changes. Implement feedback mechanisms to continuously improve the inventory management process.

26. How should management govern third-party and open-source AI, and who owns ongoing risk when a provider’s model or terms change?

+

Management should govern third-party and open-source AI by implementing rigorous vendor due diligence, robust contractual controls, continuous monitoring and well-defined exit strategies.

Due diligence focuses on assessing the provider’s technical capabilities, security practices, ethical standards, compliance with regulations, and reputation. This assessment includes evaluating the provenance of models and datasets, understanding the provider’s approach to risk mitigation, and reviewing their history of performance and responsiveness to incidents. For open-source AI, management must analyse community activity, code quality, licensing terms and potential vulnerabilities before adoption.

Contractual controls are essential to define responsibilities, data ownership, intellectual property rights, security requirements and compliance obligations. Contracts should include provisions for transparency in model updates, audit rights, incident reporting, and clear delineation of liability for errors, breaches or misuse.

Additionally, management should establish mechanisms for continuous monitoring — such as regular performance reviews, compliance checks and security audits — to ensure third-party and open-source AI systems continue to meet organisational standards.

Exit strategies should be documented in contracts, specifying conditions for termination, transition support, data retrieval and safe decommissioning of AI assets if the provider’s terms or model reliability change.

Ownership of ongoing risk when a provider’s model or terms change typically rests with the organisation deploying the AI, as they are responsible for ensuring continued compliance, security and ethical use. Management must proactively monitor for changes in vendor terms, model behaviour or regulatory requirements, and promptly assess their business impact. If risks arise due to provider modifications, the organisation should be prepared to invoke contractual protections, adjust controls or switch providers as needed.

Clear accountability, supported by strong governance and contractual frameworks, enables organisations to manage third-party and open-source AI risks while maintaining operational resilience and compliance.

27. What architectural and contractual controls are necessary to manage model updates, vendor drift, changes in terms of use and downstream risk ownership?

+

Organisations must implement both architectural and contractual controls within their AI governance framework.

Architecturally, this means designing systems with modularity and interoperability in mind, allowing for seamless integration, replacement or rollback of models as needed. Version control, robust change management processes and automated testing environments are essential to evaluate the impact of model updates before deployment. Additionally, continuous monitoring mechanisms should be established to detect performance degradation, bias or compliance issues resulting from model or vendor changes. These technical safeguards ensure organisations can respond quickly and effectively to updates or unexpected vendor behaviors without disrupting business operations.

Contractually, organisations should negotiate clear terms with vendors regarding update notification requirements, transparency into model changes, and rights to audit or review modifications. Contracts should specify obligations for timely communication of any changes in terms of use, data handling practices or model functionality. Provisions should address downstream risk ownership by defining liability, indemnification and remediation procedures if vendor-driven changes introduce risks of non-compliance. Well-defined exit strategies, including data portability and support for transitioning to alternative solutions, further protect the organisation’s interests.

28. How should organisations assess and govern AI risk introduced through third-party software, SaaS platforms and vendor-embedded AI that may not be visible to traditional risk assessments?

+

Organisations should recognise that AI risk increasingly enters the environment indirectly — through third-party software, SaaS platforms and vendor-embedded functionality — often without being visible through traditional vendor risk or IT assessments. Governing this risk effectively requires shifting the focus from who the vendor is to what the AI capability does and how it influences business outcomes.

A practical starting point is improving visibility. In many cases, the gap is not the vendor itself, but the fact that traditional third-party risk management functions were not designed to ask AI-specific questions or surface embedded AI risk. Organisations should enhance vendor intake and inventory processes to identify where AI is embedded, even when it is not marketed as a standalone “AI tool.” This includes understanding what functions the AI performs, what data it uses, whether outputs are relied upon for operational or compliance decisions, and how changes to the AI model could affect downstream processes. The absence of visibility — not malicious intent — is often the largest third-party AI risk driver.

Risk assessment should then move beyond static vendor questionnaires and consider how AI behaves over time. Unlike traditional software, AI outputs may be probabilistic, adaptive, and influenced by changes in data, configuration or vendor updates. Organisations should assess risks such as lack of transparency, data provenance gaps, bias, model drift and overreliance on AI outputs — particularly where AI influences regulated, financial or high judgment activities.

Governance expectations should be integrated into existing third-party risk management, rather than handled through a separate AI process. This includes:

  • Defining clear ownership for AI-enabled vendor capabilities
  • Requiring disclosure of embedded AI functionality, including intended use, decision impact, and reliance on customer or third-party data
  • Incorporating AI-specific contractual terms, such as:
    • Notification and approval thresholds for material AI or model changes
    • Clear limitations on data use, retention and model training
    •  Audit, assurance or independent assessment rights for AI-enabled services
  •  Aligning AI oversight with existing security, privacy and compliance controls
     

Where full transparency is not feasible due to proprietary constraints, organisations should seek alternative assurance mechanisms such as independent assessments, performance metrics and ongoing monitoring commitments.

Ongoing monitoring is especially important for third‑party AI and should be scaled based on the risk and impact of the AI‑enabled capability, not the vendor itself. Rather than treating vendor assessments as point‑in‑time events, organisations should apply risk‑based monitoring aligned to how the AI is used and what it influences — recognizing that low‑impact AI functionality may warrant lighter oversight, while AI supporting regulated activities, financial reporting, customer outcomes or high‑judgment decisions may require closer and more frequent monitoring. Monitoring mechanisms should detect material changes in AI behaviour, performance or risk profile — such as vendor updates, changes in data sources, increased automation of decision‑making or expanded reliance on AI outputs — with clearly defined escalation paths and reassessment triggers so that governance remains proportionate and keeps pace with change.

Ultimately, organisations should apply a simple but effective principle: risk accountability cannot be outsourced. While AI capabilities may be delivered by vendors, responsibility for how those capabilities affect operations, compliance and reporting remains with the organisation. Embedding AI-aware governance into third-party risk processes helps ensure vendor-introduced AI risks are identified, understood and managed with the same rigor applied to internally developed solutions.

Section 6: Incident response and resilience

+ EXPAND ALL

29. What should the organisation continuously monitor to demonstrate AI is performing operationally as intended and staying within risk appetite — and who reviews the reports, how often and with what thresholds?

+

Continuous monitoring should focus on key areas such as model performance, bias indicators, security events, usage patterns and control effectiveness.

Model drift (such as degradation in accuracy or relevance due to changing data patterns) should be tracked using performance metrics such as accuracy, precision and recall.

Bias indicators should be monitored using such fairness metrics as demographic parity or disparate impact ratios.

Security events like unauthorised access, adversarial attacks or data breaches require real-time monitoring through logging and anomaly detection systems.

Usage patterns should be analysed to identify deviations from expected behaviour, such as unusual input/output trends or excessive usage that may signal misuse or inefficiencies.

Control effectiveness should be assessed by testing governance mechanisms, such as access controls, audit trails and compliance with regulatory standards.

Reports generated from these monitoring activities should be reviewed by different stakeholders at varying frequencies based on the criticality of the metrics:

  • Operational teams should review real-time alerts and daily reports to address immediate issues, such as performance drops or security incidents.
  • Risk and compliance functions (second line) should conduct weekly or monthly reviews to evaluate adherence to thresholds for bias, security and overall risk alignment.
  • Executive committees or senior leadership should receive quarterly summaries of key metrics and high-priority risks, ensuring alignment with the organisation’s strategic goals and risk appetite.
  • Board-level oversight may be required for critical AI systems, particularly when thresholds are breached or significant risks emerge.
     

Monitoring thresholds are useful in triggering early warning alerts for potential issues before they become significant problems. For example, a maximum allowable model error rate or minimum fairness ratio can serve as a benchmark for performance and bias monitoring. Security thresholds might include acceptable incident response times or predefined limits on unauthorised access attempts.

30. What does an “AI incident” look like for the organisation (harmful output, privacy breach, model failure, regulatory trigger), and what is the playbook for containment, rollback/kill switch, customer communications and board reporting?

+

An “AI incident” is any event where an AI or machine learning system causes, or is at risk of causing, an incident that generates harmful output, spawns privacy breaches, gives rise to operational failures or triggers regulatory reporting requirements. Examples include the generation of toxic or biased outputs, leakage of sensitive personal data, evidence of model hallucinations, accuracy losses due to model drift or data poisoning, and emergence of adversarial attacks or legal/regulatory triggers (for instance, GDPR or EU AI Act reporting events). Security breaches like prompt injection, bypassing safety rules to make the AI misbehave (“jailbreaking”) or model theft also qualify as AI incidents and require immediate attention.

The response playbook begins with rapid detection and containment. Management should activate containment protocols — disabling affected endpoints, isolating compromised models, updating guardrails and removing compromised data sets. For severe incidents, a “kill switch” may be triggered to bring AI operations to a halt, requiring dual authorisation from leaders such as the CISO and CTO. Rollback procedures involve restoring the last-known good model checkpoint, reverting corrupted training data or deployment and preserving forensic logs for post-incident analysis. Remediation includes retraining and revalidating models before redeployment to ensure improved controls and resilience.

Customer and board communications are essential for transparency and regulatory compliance. Affected customers should receive prompt, pre-approved notifications explaining the nature of the incident, impacted systems or data, and steps being taken for remediation — meeting requirements such as GDPR’s 72-hour breach notification. Ongoing updates and guidance should be provided as the situation evolves. Boards and executive teams must be immediately informed of critical incidents, receiving reports that summarise the event, its impact, containment measures, regulatory exposure and recovery timelines. Post-incident reviews should document root causes, actions taken and lessons learned, feeding improvements back into the organisation’s AI incident response playbook and governance framework.

31. What architectural capabilities must exist to disable, roll back or constrain AI systems rapidly when risk thresholds are exceeded or incidents occur?

+

One critical capability is the establishment of the “kill switch” mechanism mentioned in the previous question that allows for immediate deactivation of the AI model’s functionalities. This should be integrated at both the application and infrastructure levels, enabling authorised personnel to halt AI operations quickly without disrupting other system components. The kill switch should be easily accessible and require dual authorisation to prevent unauthorised use.

Organisations also need robust version control and rollback capabilities within their AI architecture. This involves maintaining multiple versions of models and datasets, along with comprehensive logs of changes made over time. By implementing automated deployment pipelines with rollback functionality, teams can revert to a previous stable version of an AI model or application in response to detected issues or performance degradation. This capability should be coupled with real-time monitoring tools that assess model performance continuously against predefined risk thresholds, triggering alerts and automated responses when anomalies are detected.

Finally, dynamic resource allocation and throttling mechanisms are essential for constraining AI systems during incidents. These mechanisms allow organisations to limit the operational capacity of AI models — such as reducing the volume of requests processed or restricting access to sensitive functionalities — while investigations and remediation efforts are underway.

32. How should AI incident response be integrated into existing cyber, data and operational resilience playbooks — and when does AI require a different response model?

+

Integrating AI incident response into existing cyber, data and operational resilience playbooks requires a systematic approach that adapts established incident response frameworks — such as those based on NIST SP 800-61 — to address the unique risks and failure modes associated with AI systems. Public companies in the United States need to fold this into their required cyber reporting and disclosure processes, as well.

This integration involves extending traditional incident response phases — detection, containment, investigation and recovery — to include AI-specific considerations like model drift, adversarial attacks, data poisoning and bias incidents. Organisations should inventory their AI assets and equip incident response teams with the necessary expertise, including data scientists and machine learning engineers, to handle AI-related incidents effectively. Additionally, monitoring tools should be established to capture AI-centric telemetry, such as prompt logs and model inferences, while scenario-specific playbooks should be tested and updated regularly to ensure both speed and accuracy in managing AI incidents.

AI may necessitate a distinct response model when threats involve manipulation of the AI system itself, such as prompt injection, model exploitation or “jailbreaking” attacks that do not conform to traditional definitions of system breaches or data exfiltration. In these scenarios, standard incident response playbooks may fall short. The probabilistic and adaptive nature of AI systems calls for ongoing performance monitoring, prompt pattern analysis and specialised forensic techniques, such as model interrogation or feature attribution (an explainable AI technique that assigns importance scores to input features based on their relative contributions to a model’s specific output to explain why a model made a decision by highlighting the input factors that influenced the output most).

Furthermore, decisions made by AI can have significant downstream impacts on business operations, compliance and fairness. Therefore, organisations must tailor their response protocols to encompass these unique challenges, ensuring that incident response strategies are robust enough to manage the complexities inherent in AI technologies.

Section 7: The CRO/CCO perspective

+ EXPAND ALL

33. What should CROs and CCOs be focused on first with AI — risk reduction or growth enablement?

+

Both — but in the right order. The first priority is being business-first, meaning define where AI can drive real value-added outcomes and strategic impact (such as enhanced customer experiences, revenue generation, cost efficiencies and improved resilience), and then set the guardrails so model capabilities can scale responsibly. If the risk and compliance second line shows up at the decision-making table only as the “department of no,” it will get bypassed. The value proposition going forward is strategic enablement — helping the business make better decisions and operate more efficiently, not just reacting to issues after the fact.

34. How do we stop AI governance from becoming a bureaucracy that slows the business down?

+

Governance has to be a routing mechanism, not a speed bump. The win is an operating model that differentiates low-risk from high-risk and applies the right rigor at the right time — intake, tiering, approvals, testing and monitoring — so that the organisation can move fast when it should, and slow down when it must. That means a use case inventory, risk-tiering approach and clear decision rights across stakeholders. Executed well, AI governance actually accelerates innovation because innovation, risk and compliance teams have a shared view of the rules of the road. See responses to Questions 6 and 8 for further commentary.

35. What’s the most important outcome that should be demanded from AI programs across the three lines?

+

Measurable business outcomes — not “we deployed a model.” Outcomes that contribute only a better write-up, a faster summary, or a mere shaving of an hour or two off a specific manual task should not be confused with transformation. The real value of AI solutions shows up when they help the organisation change the underlying process with fewer handoffs, fewer exceptions, less rework, better controls embedded upstream and clearer accountability. That is where alignment of the three lines matters. When business processes are transformed effectively, it results in improved business performance, more effective monitoring and compliance, and positive testing results — outcomes that impact the first, second and third lines positively.

36. How should AI risk appetite be set in a way the business can actually use it?

+

Risk appetite can’t be abstract. Explicit boundaries are needed by AI type and use case — predictive versus generative versus agentic. These boundaries must be articulated in terms of quantitative tolerances the business can execute against, such as error rates, bias thresholds, human-review triggers, incident definitions, escalation thresholds and criteria for “kill switches.” These thresholds should be baked into funding gates, design gates and deployment gates. Only through this process can the board and senior management gain the confidence that AI deployments are being scaled within defined limits, rather than hoping AI behaves.

37. Where do CROs/CCOs see the biggest AI risk that senior leaders often underestimate?

+

Two places — accountability and data. As AI shifts decisions from deterministic to probabilistic, ownership blurs quickly as business leaders, model developers, data scientists and technology experts all touch the process. This overlapping dynamic results in a lack of accountability in which everyone assumes someone else is accountable. Separately, AI outcomes are only as defensible as the underlying data — quality, lineage, consent, retention and appropriate use. If the training data and related controls aren’t ready, “turning on AI” won’t create ROI. It will create noise, exceptions and reputational risk.

38. How should AI use cases be prioritised so that capital isn’t wasted on “AI for AI’s sake”?

+

The process of prioritising AI use cases should be guided by three essential factors: value, feasibility and risk.

By evaluating each use case through this lens, organisations can identify opportunities that have clear value levers, can be realistically implemented within the current environment (meaning the absence of formidable technical or operational barriers), and have risk profiles the organisation can govern confidently. This selection process must be considered carefully, as a lot of money can be invested in AI monitoring agents and oversight tooling. These investments can carry high opportunity costs when it is more effective to build robust business processes that proactively minimize risk. The most successful AI portfolios incorporate both oversight tools and improved business processes. However, the starting point should always be initiatives that create outcomes the business can experience and measure.

39. What does “good” look like for second-line enablement — without stepping on first-line ownership?

+

The second line should set standards, guardrails and decision frameworks — not run the use cases. The first line owns the business outcomes and day-to-day controls. The second line defines policy and standards, risk classifications, intake/tiering, required testing and documentation, monitoring metrics, third-party expectations, and issue escalation. The third line provides independent assurance that the program is operating as intended. When each line plays its respective role, the business moves faster and AI is deployed responsibly and effectively.

40. How are questions about model risk management altered to address AI deployments?

+

AI does not eliminate the inherent risks associated with models. Instead, it alters the ways in which failures can occur and accelerates the rate at which issues may proliferate. This reality necessitates a thoughtful approach to risk management, prompting important practical questions:

  • Is it possible to test the model thoroughly in a controlled environment before it is launched?
  • Can we continuously monitor the model for signs of drift, misuse and new modes of failure as they emerge?
  • Are we able to explain the outcomes produced by the model at a level that will satisfy the requirements of regulators, auditors and customers?
  • Do we have well-defined triggers for re-validating the model when there are changes to data, prompts, vendors or model versions?
     

These four questions touch upon critical issues — identifying potential issues before launch, ongoing oversight to ensure reliability and safety over time, transparency that supports accountability and builds trust, and ensuring model reliability and relevance as circumstances evolve.

41. What’s the right way to talk about AI ROI with senior executives and board members who may be fatigued with discussions of AI hype without realising tangible results?

+

It may be best to approach this matter directly. Much of what is currently delivered represents incremental improvements — resulting in enhancements of efficiency for certain tasks but falling short of fundamentally transforming processes. This reality has led to considerable frustration in boardrooms and C-suites. What they see may be something tantamount to a science project. What they want is a controlled growth engine.

The solution is to define ROI through measurable, specific elements such as time savings, error reduction, minimised manual controls, decreased cycle times, fewer exceptions and expedited issue resolution. These metrics should be linked to a comprehensive roadmap that updates both data and process infrastructure. Without effective measurement, discussions will continue to focus on speculative hype rather than substantive progress.

42. If the organisation could do only three things to advance AI governance over the next 90 days, what should they be?

+

Following are three immediate actions for advancing AI governance:

Establish the foundational governance practices. The first priority is to focus on scalable foundational elements. This includes clarifying AI governance or steering committee decision rights to ensure key stakeholders are empowered to guide and oversee AI initiatives (see Questions 5 and 53 for further discussion of this committee). In addition, organisations should implement robust AI intake and inventory tiering processes to identify, register and categorise new AI models and tools systematically. Finally, it is essential to define the minimum control requirements that apply across all AI systems, ensuring consistent governance and risk management in the deployment and operation of models.

Select high-impact use cases. The second step is to identify three to five use cases where AI can deliver significant value. Target areas characterised by high volume, manual and repetitive tasks that currently pose pain points for the organisation. By focusing on these opportunities, AI can help remove operational friction and simultaneously strengthen control outcomes, driving measurable improvements in efficiency and effectiveness.

Define a measurement and reporting model. The third action is to develop a clear measurement model. This involves specifying what will be reported on a monthly basis — such as key risk indicators and key performance indicators — to track progress and outcomes. Organisations should also establish criteria for what constitutes an incident, clarify who has authority to approve exceptions, and outline protocols for stopping or rolling back AI systems if tolerance thresholds are exceeded. These steps provide transparency, accountability and safety in AI operations.

Section 8: The CISO perspective

+ EXPAND ALL

43. What does “effective AI governance” mean for a CISO?

+

For a CISO, “effective AI governance” means a comprehensive framework is in place and enforced — one that ensures AI systems are secure, resilient and aligned with the organisation’s risk appetite and regulatory obligations.

This involves integrating AI-specific security controls — such as robust access management, data protection, monitoring for adversarial threats and incident response — into existing cybersecurity programs, while also addressing emerging risks like model manipulation, data leakage and unauthorised shadow AI. Effective AI governance for the CISO requires cross-functional collaboration to ensure transparency, accountability and ethical use of AI, continuous risk assessment, and the ability to respond rapidly to incidents or evolving threats, while also enabling innovation in a controlled and compliant manner.

44. Which AI risks fall under the CISO’s accountability?

+

AI risks under the CISO’s accountability primarily encompass those related to security, privacy and resilience of AI systems. This includes protecting sensitive data used or generated by AI models from breaches, unauthorised access or leakage; ensuring robust identity and access management controls around AI tools and infrastructure; and defending against adversarial threats such as prompt injection, model poisoning or exploitation of vulnerabilities in third-party AI components. The CISO is also responsible for monitoring AI system activity for signs of misuse or anomalous behaviour, implementing data loss prevention measures, and safeguarding the integrity and confidentiality of both training data and model outputs.

Additionally, the CISO is accountable for risks arising from insecure integration of plugins or APIs, supply chain vulnerabilities in pre-trained or vendor-supplied models, and the proliferation of unauthorised or “shadow AI” within the organisation. This includes ensuring all AI deployments — whether developed internally or procured from vendors — adhere to enterprise security policies, are subject to regular audits and have effective incident response protocols in place.

While CISOs collaborate with other stakeholders on broader governance, ethical and compliance issues, their core accountability centers on maintaining the security and operational resilience of AI systems throughout their lifecycle.

45. What is the CISO’s role in enabling AI innovation to move fast without increasing cyber and data risk?

+

The CISO’s role in enabling rapid AI innovation is to implement robust security and data governance measures that allow the organisation to adopt and scale AI technologies quickly while minimising cyber and data risks.

This involves embedding security controls — such as identity and access management, data loss prevention, continuous monitoring, and incident response — directly into AI development pipelines and operational processes. By collaborating closely with business, IT and data teams, the CISO ensures innovation is supported by clear policies, automated guardrails and risk-based assessments. The objective is clear: Enable the efficient launch of new AI initiatives without compromising security or compliance. The CISO also fosters a culture of security awareness and proactive risk management, empowering teams to innovate confidently within established boundaries.

46. What are the most common AI security blind spots?

+

The most common AI security blind spots often stem from insufficient attention to the unique vulnerabilities of AI systems.

One significant blind spot is data-related risks, including poor data governance, inadequate protection of sensitive training data and exposure to data poisoning attacks that compromise model integrity. Organisations may overlook the need for robust data validation, encryption and anonymisation practices, leaving AI systems vulnerable to manipulation or breaches.

Model-specific threats, such as adversarial attacks involving malicious inputs or model extraction attacks in which attackers reverse-engineer proprietary algorithms, are also a concern. These risks are exacerbated by a lack of monitoring and testing for adversarial robustness during the AI lifecycle.

Insecure integrations and supply chain risks represent another blind spot. Many organisations fail to adequately vet third-party AI tools, pre-trained models or APIs embedded in their systems, exposing them to vulnerabilities introduced by vendors.

Similarly, prompt injection risks — where malicious prompts manipulate generative AI outputs — are often underestimated, especially in user-facing applications.

Additionally, shadow AI deployments, where business units independently adopt AI tools without IT oversight, create governance gaps and increase the risk of non-compliance, data leakage and security breaches.

Addressing these blind spots requires proactive measures such as comprehensive risk assessments, continuous monitoring, secure integration practices and cross-functional collaboration to ensure AI systems are resilient and aligned with organisational security policies.

47. How should AI risks integrate with existing cyber frameworks?

+

AI risks should be integrated with existing cyber frameworks by embedding AI-specific considerations into the established governance, risk management and compliance processes the organisation already has in place.

This integration involves mapping AI-related risks — such as data privacy, model bias, adversarial attacks and vendor risks — in conjunction with existing risk assessment methodologies like those outlined in the NIST, ISO or COBIT frameworks. By leveraging crosswalks that map the relationships among the frameworks and standards the organisation is using, AI controls can be aligned with traditional cybersecurity practices and ensure AI systems are subject to the same rigorous security standards and monitoring protocols as other technology assets.

Fostering collaboration between AI teams and cybersecurity professionals will enable organisations to create a unified approach that embraces a holistic view of risk, facilitates compliance with regulatory requirements and promotes responsible AI deployment without creating parallel governance structures.

48. How should CISOs evaluate generative AI platforms and tools? How should they evaluate AI agents?

+

For both generative AI platforms and AI agents, CISOs should adopt a comprehensive, risk-based framework that emphasises security, compliance and ethical considerations.

In regard to generative AI, this includes assessing the platform's ability to secure data throughout the entire AI lifecycle starting with data acquisition and model training all the way to model deployment and monitoring. CISOs should examine the transparency of the generative AI model, looking for explainability features that allow stakeholders to understand how outputs are generated. Additionally, they need to evaluate the platform against established security standards and frameworks (NIST, ISO) to ensure compliance with regulations like GDPR or CCPA. Key aspects such as data privacy measures, bias detection capabilities and incident response protocols should also be scrutinised to mitigate risks associated with harmful outputs or data breaches.

In evaluating AI agents, CISOs should focus on the operational context in which these agents will be deployed and their potential impact on business processes. This assessment should address each agent's integration with existing systems, its access to sensitive data, and the security measures in place to protect against unauthorised access or misuse. CISOs also should consider the agent's ability to operate within defined governance frameworks, ensuring it adheres to organisational policies and regulatory requirements. Continuous monitoring and auditing capabilities should be evaluated to track the agent's performance, detect anomalies and assess compliance with established controls.

49. What is the CISO’s role in managing third-party and vendor AI risk?

+

See the questions and responses listed under Section 5 for a discussion of third-party risk management considerations. With that background, the CISO’s role in managing third-party and vendor AI risk is to ensure all external AI solutions integrated into the organisation’s environment meet stringent security, privacy and compliance standards.

This involves leading rigorous vendor due diligence, assessing the security posture of AI providers, and ensuring contractual agreements include clear requirements for data protection, incident reporting, audit rights, and ongoing compliance with regulatory and ethical guidelines. The CISO should oversee continuous monitoring of third-party AI system performance, manage risks related to model updates or changes in vendor terms, and maintain clear accountability for any incidents or vulnerabilities introduced through vendor solutions.

50. How should CISOs monitor AI systems that continuously evolve?

+

CISOs should monitor continuously evolving AI systems by implementing a robust, adaptive monitoring framework that tracks key risk indicators such as model performance, bias, drift, security events and compliance with governance policies.

This process should involve deploying real-time monitoring tools to detect anomalies, adversarial attacks or unintended outputs, while also conducting regular audits to ensure the systems align with ethical and regulatory requirements. Continuous controls monitoring also is essential to assess the effectiveness of security measures and governance frameworks as both the AI system and threat landscape evolve. In addition, CISOs should establish feedback loops to incorporate insights from incidents, audits and stakeholder reviews into ongoing system updates.

51. What skills and roles should CISOs build as AI deployment scales?

+

CISOs must build skills and roles that combine traditional cybersecurity expertise with AI-specific knowledge to address emerging risks effectively.

Key skills include understanding machine learning concepts, data governance, adversarial AI threats and regulatory requirements specific to AI systems. CISOs should also foster cross-functional collaboration by creating dedicated roles such as AI security specialists, data privacy officers and ethical AI advisers who focus on securing AI systems, mitigating bias and ensuring compliance. Additionally, CISOs must develop strategic leadership capabilities to align AI risk management with organisational goals while enabling the innovation necessary to grow and sustain the business.

52. How should CISOs measure success in AI governance?

+

See Question 1 for a discussion of what constitutes “effective AI governance” in practice. CISOs should measure success in AI governance through a combination of quantitative and qualitative metrics that reflect the effectiveness of risk management, compliance and ethical practices within AI deployments.

Key performance indicators might include the reduction in security incidents related to AI systems, the frequency and severity of bias or fairness issues identified and addressed, and adherence to regulatory compliance benchmarks. Additionally, CISOs should assess stakeholder confidence and satisfaction through feedback mechanisms, such as surveys or audits, to gauge perceptions of AI governance effectiveness. Regular reporting on these metrics to executive leadership and the board can provide insights into the maturity of AI governance frameworks, enabling continuous improvement and alignment with organisational objectives while fostering trust in AI technologies. See Question 17 for other illustrative examples of metrics related to common AI risk areas.

Section 9: The human element and CHRO perspective

+ EXPAND ALL

53. Which functions and leaders should be represented on an organisation’s AI governance committee and what are the standard roles and responsibilities?

+

Organisation and functioning of an AI governance committee are discussed in the responses to Questions 5, 9 and 42. The committee should include a diverse set of functions and leaders to ensure comprehensive oversight and effective management of AI initiatives. Composition of the committee and standard responsibilities for each role include:

COMMITTEE MEMBERRESPONSIBILITY
Executive lead (CAIO, CIO or CTO)Provide strategic direction for AI initiatives, ensure alignment with business objectives and oversee AI integration into internal processes.
Business/product ownerRepresent business needs and ensure AI solutions address actual business requirements, maximise value and align with customer needs.
Legal/compliance officerAssess AI-related legal risks and work within the organisation to ensure AI projects adhere to relevant laws, regulations and internal policies.
Information security/IT leadAddress cybersecurity, data privacy and reliability of AI systems, ensuring data is handled securely and AI models are protected from threats.
Data management/AI technical leadProvide technical expertise on data governance, model selection, system integration and best practices in AI development and deployment.
Privacy/HR representativeOversee the handling of employee or customer data, address fairness and equity concerns, and evaluate workforce impact of AI implementations.
Ethics and responsible AI leadChampion ethical considerations in AI development and deployment, conduct regular ethical reviews and promote public trust in AI solutions.
Risk management leadIdentify AI-specific risks and work within the organisation to develop effective risk mitigation strategies.
CAEAdvise on emerging AI risks and aligning innovation with risk appetite; ensure internal audit is up to date on current AI processes and uses.
Other stakeholders (as needed)Depending on the nature of AI implementations, other representatives or industry specialists may be included to provide additional insights and perspectives.

54. How should the criteria and accountability for AI-driven people decisions — hiring, performance, workforce planning, upskilling — be defined and enforced?

+

The criteria and accountability for AI-driven people decisions should be defined clearly within a framework that emphasises fairness, transparency and compliance with ethical standards.

Organisations should establish specific metrics and benchmarks for AI tools used in these processes to ensure they are aligned with diversity and inclusion goals. For instance, criteria for hiring algorithms should include parameters that actively mitigate bias and promote equitable outcomes, while performance evaluation systems should be designed to provide clear explanations for decisions made by AI, allowing employees to understand how evaluations are derived. Additionally, organisations should implement regular audits of AI systems to assess their impact on workforce diversity and overall employee satisfaction, ensuring the technology aligns with organisational values and regulatory requirements.

Accountability for AI-driven people decisions should involve multiple stakeholders, including HR leaders, compliance officers and data scientists, to create a governance structure that oversees the implementation and functioning of AI systems. Establishing clear lines of responsibility ensures any biases or inaccuracies identified in AI outputs are addressed promptly and transparently. Organisations should also incorporate feedback mechanisms that allow employees to report concerns regarding AI-driven decisions, fostering an environment of trust and continuous improvement without fear of retribution.

55. What governance mechanisms are needed to ensure employees, managers and HR teams have the right skills and judgment to use AI responsibly in their day-to-day work?

+

Organisations must implement governance mechanisms that combine clear policies, targeted training and ongoing oversight. This includes establishing explicit AI usage guidelines and ethical standards, launching AI and data literacy programs tailored to different roles, and providing scenario-based training on interpreting AI outputs and recognising potential risks or biases.

Regular audits, human-in-the-loop decision protocols and transparent communication channels further reinforce accountability and foster a culture of continuous learning and ethical awareness. Cross-functional governance bodies — such as AI governance committees or centers of excellence — should oversee skill development initiatives, monitor compliance and adapt educational content as technology and regulations evolve, ensuring the workforce is both competent and vigilant in their use of AI systems.

56. How should AI governance inform target operating model design, job redesign, skills modernisation and evolving role definitions across the workforce? 

+

AI governance should embed AI capabilities into organisational workflows while also ensuring ethical use, compliance and value creation. Operating models must evolve to integrate human-AI collaboration, assigning tasks to AI systems where automation adds efficiency while reserving strategic and creative roles for employees. Job redesign should focus on transitioning employees from repetitive roles to more analytical or oversight-driven positions, such as managing AI outputs or interpreting insights. 

Skills modernisation is critical, requiring organisations to invest in upskilling programs that build AI literacy, data analysis capabilities, prompting skills and socio-emotional intelligence to maximize human-AI interaction value. Role definitions should emphasise hybrid responsibilities, such as mentoring AI systems and leveraging their outputs for decision-making. 

Updating job descriptions for AI integration requires more than adding new tasks to existing job descriptions. A thorough review of roles and job functions to identify core responsibilities that can be automated or enhanced by AI is required. The question is how does AI change work, and changes to job descriptions should reflect that by highlighting collaboration with AI systems, overseeing outputs, interpreting data-driven insights and addressing ethical considerations. To that end, employee input is vital for effective redesign and addressing concerns about job displacement or skill gaps.

57. How can HR ensure AI adoption drives sustainable workforce transformation rather than spawn unmanaged job erosion or just localised efficiency gains? 

+

HR can ensure AI adoption drives sustainable workforce transformation by pivoting from an “automation-first” to an “augmentation-first” strategy, focusing on job redesign, fostering skills-based talent management, and emphasising proactive, human-centric change management. Key approaches include implementing “humans in the loop” for critical decisions, upskilling employees to use AI tools, and measuring success based on total system output and employee engagement rather than just headcount reduction.

Following are five steps HR can take to drive sustainable AI transformation:

Shift from job elimination to job redesign. Rather than replacing jobs, HR redesigns them by pairing human judgment (creativity, empathy, critical reasoning) with AI speed (data processing, pattern recognition). Placing the emphasis on tasks rather than roles leads to delineating the tasks that can be automated, the tasks that should be augmented by AI and the tasks requiring purely human intervention. A focus on “humans in the loop” emphasises human oversight in key decisions (hiring, promotion, performance evaluation) to avoid bias, ensure ethics and maintain quality. 

Implement strategic reskilling and capability building. A strategy to build AI fluency treats AI literacy as a core competency for the entire workforce. HR should analyse the impact of AI on specific jobs to create upskilling pathways that enable employees to transition into new roles rather than becoming redundant. As AI may create emerging roles, HR should proactively define these new roles, such as AI trainers, explainers and ethicists. 

Drive total system output over localised gains. Shift HR metrics away from “cost-per-hire” or “administrative efficiency” toward total system impact. This means emphasising total workday throughput (improvements in overall company agility and speed), quality and velocity of decision-making, employee experience and engagement, and pilot-led adoption in which AI tools are tested in small, cross-functional groups to measure the impact on overall productivity before scaling. 

Foster a sustainable, human-centric culture. HR should foster clear communications of the AI roadmap, including potential changes to roles, to reduce anxiety and build trust. Employees should be engaged in the design of new, AI-enabled workflows to ensure tools are usable and valuable rather than disruptive. Employees should also be rewarded for experimenting with and adopting AI tools in their daily work. 

Establish robust AI governance. Regular audits of AI tools for algorithmic bias in hiring, performance management and promotion ensure ongoing fairness and equity in their deployment. The AI governance or steering committee discussed in Questions 5 and 53 can play the pivotal role in AI governance.

Organisations viewing AI solutions as an extension of the workforce will likely be the most successful in transitioning to the evolving new world of work.

58. What indicators should HR and executive leadership monitor to determine whether AI governance is functioning as intended from a people, culture and trust perspective?

+

HR and executive leadership should monitor a range of indicators that can be grouped into three key dimensions:

People-centricEmployee sentiment and engagementRegularly survey employees to gauge their perceptions of AI’s impact on their roles, fairness in decision-making and overall job satisfaction.
Upskilling and reskilling metricsTrack participation rates and outcomes of training programs designed to help employees adapt to AI-driven changes, ensuring workforce readiness.
Career progression and mobilityMonitor whether AI-enabled processes are supporting equitable opportunities for promotions, lateral moves and professional development across the workforce.
Cultural indicatorsBias and fairness metricsEvaluate AI systems for evidence of bias or disparate impact on different demographic groups, using fairness metrics such as demographic parity or disparate impact ratios.
Adoption rates and collaborationMeasure how well employees are adopting AI tools and how effectively they are collaborating with these systems to enhance productivity and innovation.
Ethical compliance and reportingMonitor adherence to ethical guidelines for AI use, including the frequency and resolution of ethics-related concerns raised by employees or stakeholders.
Trust/transparency indicatorsTransparency in AI decisionsAssess whether employees and stakeholders understand how AI systems make decisions, supported by explainability tools and clear communication.
Incident reporting and resolutionTrack the number and severity of AI-related incidents (biased outputs, harmful decisions) and the effectiveness of remediation efforts.
Stakeholder confidenceConduct periodic feedback sessions with internal and external stakeholders to measure trust in AI systems and governance practices.

59. How should AI governance, as it relates to people, evolve as AI moves from experimentation to embedded, enterprise-wide use across the workforce? 

+

As AI transitions from isolated pilot projects to enterprise-wide deployment, AI governance related to people must shift from ad hoc oversight to a structured, proactive framework that prioritizes transparency, accountability and workforce engagement. 

Organisations should establish clear policies delineating roles and responsibilities for AI oversight, including cross-functional collaboration involving HR, compliance, IT and business leaders. These bodies must ensure ethical considerations — such as fairness, bias mitigation and privacy — are embedded in every stage of the AI lifecycle. Regular training and communication programs are essential to equip employees with the knowledge to interact responsibly with AI, understand its limitations and raise concerns when necessary.

With resistance to change a challenge for many organisations, employees need to understand the ground rules for the responsible and ethical use of AI models. Accordingly, management should communicate their strengths and limitations, the intention to deploy them thoughtfully with purpose, the initial use cases planned and how the models align with broader strategic initiatives. Reskilling and upskilling initiatives are essential for those employees with job functions that are impacted. The policies and ground rules for its use should be aligned with applicable laws and regulations and the need to protect the company’s intellectual property, address cybersecurity and privacy risks, and reinforce monitoring protocols and accountabilities. 

As AI becomes more deeply integrated into workflows and decision-making, governance should emphasise continuous monitoring and feedback mechanisms. This includes instituting processes for reviewing and auditing AI outcomes, gathering employee input, and adapting controls as technology and regulations evolve.

60. How does the organisation ensure AI-enabled people decisions are fair, explainable and aligned with company values — rather than just technically compliant?

+

Organisations must implement robust governance frameworks that embed ethical principles into every phase of AI development and deployment. This involves proactively mitigating algorithmic bias through diverse training data and regular audits, fostering explainability so both technical and non-technical stakeholders can understand decisions, and integrating cross-functional teams (including HR, ethics and legal) to shape and review AI systems against the company’s mission and values. Ongoing monitoring, stakeholder feedback and transparent communication further reinforce trust and accountability, ensuring AI-driven outcomes reflect organisational standards of fairness and social responsibility rather than just compliance checkboxes.

61. What are the most significant people-related risks associated with AI adoption, and who owns the mitigation of them?

+

From the standpoint of the CHRO, the most significant people-related risks associated with AI adoption include algorithmic bias, workforce displacement and the dangers of over-automation.

Bias in AI systems can arise from unrepresentative or flawed training data, leading to discriminatory outcomes that unfairly impact certain groups. This not only erodes trust but also exposes organisations to reputational and legal risks.

Workforce displacement is another critical concern, as automation can render certain roles redundant, creating anxiety, resistance and morale issues among employees. 

Over-automation, where human judgment is excessively replaced by AI, can result in a loss of empathy in decision-making, reduced employee engagement and unintended consequences in situations requiring nuanced human understanding.

The responsibility for mitigating these risks lies across multiple stakeholders within the organisation. HR must lead efforts to address workforce impacts, such as upskilling employees for new roles and fostering a culture of adaptability. IT and data science teams are responsible for ensuring fairness and explainability in AI models by conducting bias audits and implementing ethical design principles. Leadership, including C-suite executives, must champion responsible AI use by embedding company values into AI governance frameworks and ensuring alignment with organisational goals. 

Ultimately, cross-functional collaboration and continuous monitoring are essential to ensuring AI adoption enhances, rather than undermines, the well-being and trust of the workforce.

Section 10: Customer experience, branding and the CMO perspective

+ EXPAND ALL

62. How does management ensure customers clearly understand when they are interacting with AI, and what impact does that have on trust, satisfaction and conversion?

+

Organisations should prioritise transparency by explicitly communicating the use of AI in customer interactions through clear labeling and messaging. This could involve building trust by informing customers at the outset of their engagement that they are interacting with an AI system, along with providing insights into the capabilities and limitations of the AI. When customers understand they are engaging with AI, it can enhance their satisfaction by setting realistic expectations and reducing frustration from misinterpretations of AI responses. Ultimately, this clarity can lead to higher conversion rates of customers completing desired actions, as they are more likely to engage positively with a service they perceive as transparent and trustworthy.

63. What controls should be in place to ensure AI-generated content consistently reflects an organisation’s brand voice, values and ethical standards across all customer touchpoints?

+

Organisations should implement comprehensive controls such as detailed brand guidelines and human oversight. Brand guidelines should be translated into AI-specific playbooks that define tone of voice, approved terminology and ethical boundaries, including topics to avoid and protocols for fact-checking and bias mitigation. They should be integrated into the AI’s training data and prompts. Additionally, content stewardship involves human reviewers who oversee high-stakes outputs, refine language, and validate that the content adheres to the brand’s storytelling style and ethical commitments before it reaches customers.

Organisations also should establish robust governance frameworks that include automated checks for brand compliance, tools to flag deviations and mechanisms to ensure regulatory adherence and quality assurance, particularly in sensitive industries like healthcare or finance. Continuous monitoring, feedback loops and regular audits help refine AI-generated outputs, keeping them aligned with brand values and evolving customer sentiment. 

Transparency is another critical control, with brands openly communicating their use of AI in content creation to foster trust and maintain ethical integrity across all customer touchpoints.

64. How does management identify, test and mitigate bias in AI-driven customer interactions that may lead to unfair treatment, exclusion or reputational damage?

+

Organisations should adopt a structured approach that begins with understanding the types of bias that may arise — such as racial, gender or socioeconomic — that often stem from biased training data or model design.

Conducting thorough audits of training datasets and employing algorithmic analysis to evaluate model outputs across different demographic segments help detect any disparate impacts on different demographic groups. Testing for bias can be achieved by simulating diverse user scenarios and utilising advanced bias detection tools that analyse data, algorithms and outputs. Mitigation strategies include data debiasing techniques, using fairness-aware algorithms, and incorporating human oversight in decision-making processes.

65. When and how can customers seamlessly escalate from AI to a human — and are those points clearly governed to prevent frustration or negative experiences?

+

Customers can seamlessly escalate from AI to a human when predefined triggers are met, such as the AI failing to resolve an issue after multiple attempts, or detecting rising frustration or negative customer sentiment. Complex, high-value cases that require human judgment may also be a trigger. 

Escalation can also occur upon direct customer request, like saying “talk to a human.” To ensure these points are clearly governed and to prevent frustration, organisations should implement intelligent escalation rules and ensure all interaction history and context are passed to a human agent to avoid repetition for the customer. Governance practices include monitoring escalation data, training agents to handle escalations empathetically and using technology to facilitate immediate, well-routed transfers. These measures enhance customer satisfaction by ensuring smooth transitions and resolving issues effectively without unnecessary delays or confusion.

66. How is customer data governed when used by AI models, and how does management ensure consent, privacy and data minimisation standards are consistently enforced?

+

Customer data governance in AI models involves establishing a framework that ensures data is collected, processed and utilised in compliance with legal and ethical standards. Robust data management practices should dictate how data is sourced, stored and shared, along with ensuring transparency about data usage. A “privacy by design” focus embeds privacy considerations into AI systems from initial adoption to deployment and ultimate disposal. Additionally, regular audits and monitoring are essential to ensure compliance with data protection regulations mandating strict guidelines on consent, data access and user rights.

To enforce consent, privacy and data minimisation standards consistently, organisations should implement modern consent management frameworks that go beyond traditional checkbox approaches. This includes providing clear, plain-language explanations of what data is collected, the purpose for which it is collected and how long it will be retained, along with offering users granular control over their data preferences (as is required by law in many jurisdictions). Data minimization practices should limit the collection to only what is necessary for AI model objectives, with mechanisms in place for prompt data deletion when it is no longer needed. These practices enable organisations to enhance customer trust.

67. What governance mechanisms ensure AI systems used in customer-facing operations are resilient, monitored and fail safely without disrupting service?

+

Automated performance monitoring tools, real-time anomaly detection dashboards and predefined fail-safe protocols such as graceful degradation or seamless fallback to human agents are mechanisms that ensure AI systems in customer-facing operations are resilient, continuously monitored and able to fail safely. These mechanisms are supported by rigorous lifecycle governance — including regular audits, red-teaming exercises and revalidation of models — to identify vulnerabilities proactively and maintain operational stability. Clear escalation paths, transparent incident management procedures (see question responses in Section 6) and ongoing risk assessments further ensure any system failures are quickly addressed without disrupting service. Embedding these controls within a robust governance framework, aligned with regulatory and ethical standards, enables organisations to deliver reliable, trustworthy AI-driven customer experiences while minimising operational risks.

68. How does management govern those risks introduced by third-party AI models or vendors that could directly impact the customer experience or brand perception?

+

See the responses to the questions in Section 5 for a discussion of third-party risk management (TPRM) considerations in managing AI-related risks. Because these risks can impact customer experiences and brand perception, organisations must modernise their TPRM frameworks to address AI-specific risks such as model bias, lack of transparency and data security vulnerabilities. This includes conducting rigorous due diligence during vendor selection, requiring vendors to adhere to responsible AI policies, and incorporating AI-specific clauses into contracts, such as obligations for transparency, bias mitigation and incident response. Organisations also should implement structured risk assessments, prioritising high-risk vendors for deeper evaluation, and align their practices with global standards like the NIST AI Risk Management Framework. Continuous monitoring of vendor performance, regular audits and clear escalation protocols ensure identified issues are mitigated promptly, thus safeguarding customer trust and protecting the organisation’s reputation.

69. If an AI-driven interaction causes customer harm or public backlash, who is accountable, and what is the organisation’s defined response and remediation process?

+

Legal frameworks generally hold the business using the AI system responsible for its outcomes, regardless of whether the harm arises from the AI itself or human failures. Thus, if an AI-driven interaction causes customer harm or public backlash, accountability primarily rests with the deploying organisation. Vendors or developers may share accountability if there is evidence that negligence in design, training or contractual obligations contributed to the issue. However, the organisation deploying the AI retains the primary responsibility for ensuring ethical use, fairness and compliance, as using AI does not absolve a company of its duty of care to customers or the public. Furthermore, customers look to the brand, not to the ecosystem. 

Looking inwardly, accountability rests with model owners. As the ones ultimately responsible for their design, development and operation, they are accountable for their proper functioning. The defined response and remediation process begins with rapid detection through active monitoring, analysing customer complaints and conducting internal reviews to identify issues early. Immediate containment measures, such as pausing the AI system or placing outputs under human review, prevent further harm. Impact assessment and root cause analysis follow, investigating the breadth of harm and identifying underlying issues like data bias or governance failures. When and where required, regulatory bodies are notified and affected customers are informed, with remediation actions undertaken, including correcting outputs, offering compensation or providing appeal mechanisms. The final steps include retraining or revalidating the AI system, updating governance protocols and implementing lessons learned to prevent recurrence.

70. How does management ensure AI-powered experiences are accessible to customers with disabilities or low digital fluency — and governed accordingly?

+

Companies should embed inclusive design principles from the outset and adhere to globally recognised accessibility standards (such as Web Content Accessibility Guidelines and the Americans with Disabilities Act). This focus on inclusion involves leveraging AI technologies like speech recognition, image-to-speech tools and simplified interfaces to accommodate diverse needs while conducting usability testing with individuals from these groups to identify pain points early. Governance frameworks include regular audits using automated accessibility testing tools, continuous user feedback loops, and policy updates to ensure compliance and adaptability. Additionally, training teams on accessibility fundamentals and collaboration with advocacy groups to stay aligned with evolving best practices promote ethical and equitable delivery of AI-powered services.

71. What safeguards ensure AI errors are detected quickly and corrected before their effect proliferates across the customer base? 

+

Robust monitoring systems that leverage real-time analytics and anomaly detection algorithms can ensure AI errors — such as inaccurate outputs, hallucinations and wrong recommendations — are detected quickly and corrected before they proliferate. These systems continuously track the performance of AI models against established benchmarks, flagging any deviations or anomalies for immediate review. Additionally, organisations can establish feedback mechanisms, allowing users to report inaccuracies or issues directly, which helps in identifying problems that may not be captured by automated monitoring alone. Regular model evaluations and updates, informed by user feedback and performance data, further enhance the accuracy and reliability of AI outputs.

To correct identified errors effectively, organisations should have predefined response protocols that outline the steps for investigation, containment and remediation. This includes immediate human intervention when an error is flagged, enabling the appropriate experts to assess the situation and implement corrections quickly. Root cause analysis is essential for understanding why and sourcing where the error occurred, leading to updates in training data, algorithm adjustments or changes in governance processes to prevent recurrence.

72. How does the organisation ensure AI decisions and messaging align with its stated ESG, DEI and corporate responsibility commitments?

+

Alignment of AI decisions and messaging with the organisation’s ESG (environmental, social and governance), DEI (diversity, equity and inclusion), and corporate responsibility commitments requires embedding these principles into its AI governance frameworks from the outset. ESG and DEI criteria should be integrated into use case review and approval, model design, training data selection and output validation processes. Cross-functional oversight should reinforce this integration. Regular audits should assess compliance with these commitments, and continuous monitoring mechanisms should be in place to detect and address any misalignments or unintended impacts. Transparent reporting, stakeholder engagement and ongoing employee training further reinforce the alignment of AI outcomes with the organisation’s broader ethical and social values, ensuring that technology serves as an extension of its mission and responsibility to all stakeholders.

Section 11: The COO and operations perspective

+ EXPAND ALL

73. How does the organisation ensure the governance process doesn’t delay or restrict business progress that drives operational value in the short term?

+

By adopting a flexible and adaptive governance framework that emphasises agility and responsiveness to changing business needs, management can ensure the governance process does not delay or restrict business progress. This involves establishing clear roles, responsibilities and streamlined decision-making processes that empower teams to act quickly while still adhering to essential compliance and ethical standards. To that end, governance policies should be reviewed periodically to identify opportunities to improve their agility and ensure continued alignment.

Additionally, the organisation should promote a culture of collaboration between governance and operational teams, facilitating open communication and feedback loops that allow for real-time adjustments and innovations. The intention is to balance the control side of AI governance with the innovative entrepreneurial side of the business so that neither is too disproportionately strong relative to the other. By leveraging technology such as automated compliance checks and data analytics, governance can be integrated into daily operations without creating bottlenecks. This proactive approach enables the organisation to drive short-term operational value while maintaining the integrity and effectiveness of its governance processes, ultimately supporting sustained business progress.

See responses to Questions 6, 8, 22, 33 and 34 for further commentary.

74. How does the organisation ensure the governance process is flexible enough to handle rapid and ongoing change around AI?

+

This question captures an important concern from an operations perspective. An agile governance framework should emphasise adaptability and continuous improvement. All hands on deck contributing to the deployment of AI solutions should understand that innovation is the organisation’s lifeblood. No participant at the table should function as the “department of no.” 

The organisation ensures the governance process is flexible enough to handle rapid and ongoing changes around AI by implementing a cross-functional advisory board — such as the AI governance committee discussed in the responses to Questions 5 and 53. This board should regularly assess emerging AI trends, technologies and regulatory developments, thus allowing for timely updates to governance policies and practices in response to new insights, risks and opportunities. Furthermore, it should foster a culture of innovation and collaboration, encouraging stakeholder feedback.

The objective is to foster collaborative decision-making processes so that the organisation can move fast when it should and slow when it must. By prioritizing ongoing training and upskilling of employees on the latest AI tools and ethical considerations, the organisation equips its workforce to navigate changes effectively, ensuring governance remains relevant and supportive of strategic objectives.

75. How is AI governance leveraged over the long term when it comes to running and maintaining AI models? 

+

Over the long term, AI governance is leveraged to ensure AI models remain aligned with business goals, compliant with evolving regulations, and resilient to changing customer needs, operational risks and ethical values throughout their lifecycle. This is achieved by embedding structured, cross-functional oversight processes that include continuous monitoring of model performance, regular audits for fairness and bias, and clear accountability for decision-making. Governance frameworks should never be static or check-the-box frameworks. They should be designed to adapt as both technology and as regulatory landscapes change, incorporating mechanisms for ongoing policy review, impact assessments and documentation of all model updates or retraining exercises.

Long-term governance also involves establishing dedicated committees (such as the AI governance committee discussed in the responses to Questions 5 and 53) or centers of excellence that oversee risk management, legal compliance and ethical standards across all AI initiatives. As AI capabilities and the technology landscape evolve rapidly, these bodies must remain engaged in ensuring operational consistency through standardised deployment processes, enforcing data privacy and security requirements and providing immediate remediation protocols if issues arise. By integrating feedback loops, transparency and lessons learned from incidents, organisations create a culture of responsible AI that supports trustworthy, scalable and adaptable AI adoption over time.

76. As employees often leverage AI by deploying tools that are not supported by the enterprise, how can the organisation empower them without restricting innovation?

+

This reality presents a source of tension in many organisations. As discussed in the responses to Questions 22, 44 and 46, when business units independently adopt AI tools without IT oversight (shadow AI deployments), governance gaps can arise and create non-compliance risks, data leakage incidents and security breaches.

The pace of change in the marketplace requires organisations to empower their employees to innovate. The question is how does the governance framework balance flexibility and compliance? For starters, employees should be offered access to approved, company-supported AI tools that are versatile and user-friendly, reducing the incentive to rely on unsanctioned alternatives. To further encourage innovation, the organisation should provide ongoing training and upskilling programs, enabling employees to understand the capabilities and limitations of its existing AI solutions while equipping them with the skills to use these tools effectively and ethically.

That said, the use of non-company-supported tools continues. While the organisation should establish clear policies and communication around approved technologies, mechanisms like controlled sandboxes or pilot programs can enable employees to explore new tools in a secure environment. This makes sense because innovation involving any technology or tool entails experimentation — starting small, learning by doing, keeping track of innovation in the marketplace and responding to value-adding opportunities in an agile way. Feedback loops are also critical, as employees should be encouraged to propose new tools or approaches, which are then evaluated for compliance and potential integration into the company’s ecosystem. The challenge, of course, is that the duration of time required to complete these evaluations can frustrate operators. However, by aligning governance with support for safe experimentation, the organisation can ensure employees can innovate freely without compromising security, compliance or organisational values.

Section 12: The CAE perspective

+ EXPAND ALL

77. What is internal audit’s role in establishing and/or showcasing effective AI governance? How can internal audit most effectively support enterprise AI initiatives?

+

Internal audit’s role is to provide independent assurance and risk‑informed insight that AI governance structures, controls and oversight mechanisms are appropriately designed, aligned to enterprise risk management and operating as intended. Internal audit most effectively supports enterprise AI initiatives by developing a sufficient understanding of AI technologies, associated risks, relevant standards, and emerging regulatory and governance expectations. This understanding enables internal audit to ask the right questions and apply credible challenge — translating AI concepts into familiar risk and control expectations such as accountability, data management, change management and monitoring. By doing so, internal audit can assess governance design, clarify control expectations and set clear standards for audit‑ready evidence before AI becomes embedded in critical or regulated processes. In this role, internal audit serves as a trusted source of risk intelligence, helping management anticipate where governance practices must evolve, while at the same time not assuming ownership or decision‑making responsibility.

Internal audit also helps showcase effective AI governance through independent assessments and clear, evidence‑based reporting to executive management, boards and audit committees. By articulating how AI risks are being identified, managed and monitored within existing enterprise frameworks, internal audit helps provide confidence that AI is being governed with the same discipline applied to other enterprise risks.

78. What does “audit-ready” AI look like today, and what evidence should management be prepared to provide to internal audit?

+

Audit‑ready AI does not mean that AI systems are perfect, fully mature or risk‑free. Rather, it means the organisation can clearly explain how AI is being used, why it is being used, how risks are identified and managed, and how reliance on AI outputs is governed. An audit‑ready environment emphasises transparency, accountability and control — allowing internal audit to evaluate AI use with the same discipline applied to other enterprise risks.

At a practical level, management should be prepared to provide evidence that demonstrates:

  • Clear visibility into AI use — an inventory of AI use cases (including third‑party and embedded AI), with defined business purpose, risk classification and ownership.
  • Defined governance and accountability — documented roles and responsibilities, escalation paths and oversight mechanisms for AI‑related decisions.
  • Documented risk assessments — identification of key AI risks (data quality, bias, model drift, security and regulatory exposure) and how those risks are mitigated within existing enterprise frameworks.
  • Controls across the AI lifecycle — evidence of controls over data inputs, development or configuration changes, access, monitoring and exception handling.
  • Ongoing monitoring and validation — defined metrics, thresholds and processes to monitor performance, detect issues and respond when AI behaves outside expectations.
  • Change management and traceability — records of significant changes to models, configurations, data sources or vendors, and how those changes are reviewed and approved.
 

Importantly, audit‑ready AI is context‑dependent. The level of formality and evidence required should scale with the criticality of the AI use case, the degree of automation or reliance, and the potential regulatory, financial or operational impact. Internal audit’s role is not to validate every AI output, but rather to assess whether management has established sufficient governance, controls and evidence to support responsible and defensible AI use.

79. How should internal audit balance its advisory and assurance roles as organisations adopt AI at different levels of maturity?

+

Internal audit must adjust the balance between advisory involvement and independent assurance without compromising objectivity. In the early stages, when AI use cases are emerging and governance structures are still being formed, internal audit can play a valuable advisory role by helping management identify key risks, design foundational controls and clarify accountability. At this stage, the objective is not to validate performance, but rather to ensure risk and control considerations are embedded from the outset. Providing perspective on governance, policies and guardrails early can help avoid more difficult remediation later.

As AI adoption scales and becomes more integral to core business processes, internal audit’s emphasis should shift increasingly toward assurance and independent evaluation. Once governance frameworks, controls and monitoring mechanisms are in place, internal audit’s role is to assess whether these elements are operating as intended and keeping pace with change. This includes evaluating data integrity, model risk management, third‑party dependencies, and management’s ability to detect and respond to model drift or unintended outcomes. At higher levels of maturity, internal audit’s credibility rests on its ability to provide objective insight into whether AI risks are being managed sustainably, not merely designed well.

Across all stages of maturity, the overriding challenge for CAEs is preserving independence while remaining relevant. Internal audit must be clear about when it is advising on what “good” looks like versus when it is opining on whether expectations are being met. Effective CAEs establish guardrails around advisory involvement, document role transitions and communicate openly with the audit committee about how internal audit’s posture evolves as AI adoption advances. CAEs must also be careful not to remain in advisory mode longer than necessary; lingering too long can blur role clarity and dilute the function’s ability to deliver timely, independent assurance as AI risk becomes operational and material. Ultimately, the goal is not to choose between advisory or assurance, but to apply each deliberately in a manner that supports responsible innovation and disciplined risk oversight.

80. How does a CAE balance achieving process efficiency and automation goals with maintaining proper controls?

+

A CAE should view AI-driven efficiency and automation as a controlled transformation rather than a trade-off between speed and control. Leading practices emphasise embedding “controls by design” into AI-enabled processes by building controls into workflows, not adding them after deployment. This is accompanied by clearly defined ownership, documented boundaries for use and established human-in-the-loop review triggers for higher-risk activities. Frameworks such as the NIST AI RMF and ISO 42001 reinforce the need for defined risk tolerances, accountability and continuous monitoring to ensure AI operates within acceptable risk parameters. This also requires clearly defined exception handling processes so that when outputs fall outside expected parameters or system behaviour deviates from intended use, there are predefined escalation paths, fallback procedures and human intervention points.

In practice, balance is achieved by applying a risk-based approach and anchoring on AI use case risk while leveraging automation to strengthen control effectiveness. Higher-risk AI use cases, such as those impacting financial decisions or external stakeholders, require more rigorous validation, monitoring and governance. Lower-risk use cases can operate with lighter controls.

Importantly, automation through deploying AI can enhance assurance by enabling real-time monitoring, automated validations and alerts for unusual behaviour. However, automated controls can fail silently. Unlike human-executed controls, automated controls do not generate noise when they stop working. As a result, continuous monitoring of both the underlying process and the operation of the control itself is mandatory. This includes validating that alerts are functioning, thresholds remain appropriate and exception handling processes are triggered when needed.

The goal is not to slow down innovation, but to ensure it operates within clearly defined guardrails that are actively monitored and supported by effective exception management. 

81. How can internal audit adapt its audit approach to address AI systems that are dynamic, probabilistic and continuously evolving?

+

AI systems differ from traditional systems in that behaviour can change over time and outputs are probabilistic, which means they provide the most likely answer rather than a single “correct” one. As a result, internal audit needs to shift from point-in-time testing toward a lifecycle-based approach that evaluates controls across design, development, deployment and ongoing monitoring.

The focus should be less on validating individual AI outputs and more on assessing whether the organisation has established effective governance and control mechanisms to manage that variability. That means assessing whether the organisation has governance and operational processes that monitor AI model performance, detect drift, manage retraining, and address bias or unintended outcomes. It also includes evaluating explainability, specifically whether the organisation can meaningfully interpret and justify AI-driven outputs, decisions and model behaviour, particularly for higher-risk use cases where transparency is required for oversight, regulatory compliance or stakeholder trust.

From a practical standpoint, audit teams adapt by:

  • Auditing the AI governance and monitoring framework (roles, decision rights, approvals, inventories and periodic reviews);
  • Testing whether monitoring controls exist and are effective (alerts, thresholds, escalation paths and documented reviews); and 
  • Validating that changes (model updates, prompt changes, data source changes and vendor version changes) trigger re-testing and refreshed risk assessments.
 

In addition, audit procedures should explicitly assess explainability controls, including the availability of documentation, model interpretability techniques and the ability for stakeholders to understand and challenge outputs. The goal is to determine whether the AI system remains reliable, controlled and aligned with its intended use over time.

To perform this work effectively, internal audit teams also need to evolve their capabilities by adopting new skills and tools. This includes building foundational knowledge of AI concepts, model behaviour and common risks, as well as leveraging tools for data analysis, model monitoring and scenario-based testing. As AI becomes more embedded in business processes, audit teams must be able to evaluate both the technical and governance aspects of AI systems with sufficient depth and consistency. 

82. What are the most common AI risk blind spots for CAEs, including risks introduced through third-party and embedded AI solutions?

+

As organisations accelerate their adoption of AI, many risks emerge not from the technology itself, but from how it is implemented, governed and relied upon. For CAEs, blind spots often develop when AI capabilities evolve faster than oversight frameworks or when their presence is not fully understood across the enterprise and extended value chain.

Following are among the most common AI risk blind spots CAEs should be mindful of:

  • Shadow AI” operating outside formal oversight and awareness, where employees use generative AI tools, develop ad hoc models or enable embedded AI features without approval, documentation or governance. This creates a fundamental blind spot, as internal audit cannot assess risks, controls or compliance for AI use cases it does not know exist.
  • Limited visibility into embedded and third-party AI, where AI capabilities within vendor platforms or business tools operate outside formal governance, inventory and risk assessment processes.
  • Unclear ownership and accountability, particularly when responsibilities for data, models, oversight and outcomes are fragmented across functions or shared with vendors.
  • Applying traditional controls to non-traditional risk, where static, point-in-time control frameworks fail to address the adaptive, learning nature of AI systems and their unintended consequences.
  • Model drift, data integrity and change risk, as AI performance and behaviour evolve over time without triggering reassessment or enhanced monitoring.
  • Overreliance on AI outputs by users, especially when explainability, confidence levels and human challenge mechanisms are weak or applied inconsistently.
 

Together, these blind spots underscore the importance of moving beyond point-in-time assessments toward ongoing visibility, accountability and monitoring. For CAEs, the goal is not to slow innovation, but to ensure governance, risk management and assurance practices evolve in step with how and where AI is actually used — inside the organisation and across its ecosystem.

83. When is the best time to engage the CAE in the process of evaluating AI innovation initiatives?

+

The “best time” is across the entire lifecycle of AI initiatives, inclusive of pre-initiative, adoption and ongoing, though with different roles at each stage.

Internal auditors are uniquely positioned to provide independent assurance to support governing bodies through both advisory and assurance services, and the CAE's presence on AI governance committees helps ensure the internal audit function is up to date on current AI processes and uses throughout the organisation. In this capacity, the CAE should also operate as a strategic risk partner to the business, helping leadership understand emerging AI risks, align innovation with risk appetite and make informed decisions. Further, early engagement, often called “shifting left,” allows internal audit to provide proactive advisory input on risk identification, ethical considerations, data considerations and regulatory alignment before development and implementation begin. This prevents the “bolting on” of controls late in the process, which is often less effective and more costly.

However, internal audit’s role is not a “one-and-done” exercise. Engagement must be sustained, and as the AI initiative progresses, internal audit’s role shifts. During adoption and deployment, internal audit provides independent assurance over whether controls are appropriately designed and implemented. After deployment, the focus moves to ongoing monitoring and internal audit’s periodic testing to verify controls remain fit-for-purpose as AI models and vendors change and the business and technology environments evolve. Internal audit can support a cadence of periodic reviews (such as quarterly monitoring reviews, annual governance effectiveness assessments and targeted deep dives for high-risk AI use cases) while preserving independence by being explicit about when it is acting in an advisory capacity versus providing assurance.

The Institute of Internal Auditors (IIA) notes internal audit can be a trusted adviser on AI, but participation should be structured so independence as an assurance provider is not compromised. This is how the CAE can remain engaged as a strategic risk adviser throughout the initiative without “owning” the program, enabling internal audit to add value and help to drive continued oversight as AI systems evolve. To that end, the recently released COSO publication, “Achieving Effective Internal Control Over Generative AI,” may be useful as a context as it applies the principles underpinning its 2013 integrated control framework to address the unique risks and internal controls pertaining to generative AI.

84. Should the CAE be involved in evaluating the adoption of new AI tools? Do some generative AI systems have inherently better controls than others?

+

Yes, the CAE should be involved in the evaluation of any new generative AI tool. Internal audit’s role is not to “pick the tool,” but rather to verify that consistent risk management and control expectations are applied.

With respect to third-party AI platforms, internal audit should assess whether appropriate third-party risk management processes are in place, including due diligence, contractual protections and ongoing monitoring. This evaluation also should be aligned with established standards such as ISO/IEC 42001 for AI management systems and ISO/IEC 27001 for information security, including assessing whether tool providers maintain relevant certifications to these standards and whether those certifications appropriately cover the services being used. Internal audit involvement helps ensure the organisation is asking the right questions before rolling tools out broadly.

In terms of specific tools, no generative AI platform is inherently better controlled in a way that removes the organisation’s responsibility. While vendors may offer different features or safeguards, control effectiveness ultimately depends on how the organisation configures, governs and monitors usage. Risks can arise from user behaviour, poor data handling or lack of oversight, regardless of the platform. Leading practices like the NIST AI Risk Management Framework stress that third-party components can complicate risk measurement and accountability if vendors aren’t transparent about methodologies, controls or documentation; therefore, internal audit and second line risk functions should evaluate vendor capabilities such as auditability, documentation, incident response, security/privacy protections and contractual rights (such as system and organisation controls (SOC) reports and right-to-audit clauses). The IIA similarly highlights obtaining SOC reports for externally hosted data and ensuring service-level agreements include right-to-audit. These are concrete ways to compare vendors beyond marketing claims.

It is also important to recognise that not adopting enterprise-grade generative AI tools introduces its own risks. If organisations do not provide approved, governed solutions, employees may turn to publicly available or less secure tools without appropriate controls, increasing the risk of data leakage, non-compliant usage and lack of visibility. From an audit perspective, this reinforces the need to evaluate not only the risks of adoption, but also the risks of non-adoption and shadow AI usage.

Ultimately, the CAE’s role in the adoption of new AI tools is to evaluate whether controls are consistently applied across tools and supported by operational evidence. The focus should remain on whether the organisation has implemented appropriate controls such as access management, data protection, user training and monitoring, and whether those controls are operating effectively in practice.

85. How should internal audit functions evolve their talent model and skill requirements as AI reshapes the nature of audit work?

+

Internal audit functions should evolve toward a human-led, AI-augmented talent model where technology enhances, rather than replaces, the role of the auditor. AI is fundamentally changing how audit work is performed by automating repetitive, data-intensive activities and enabling deeper analysis at scale. As a result, auditors can shift their focus toward higher-value activities such as applying judgment, engaging stakeholders and generating insights. This shift requires intentional alignment between talent strategy and AI enablement so that internal audit can operate more efficiently while maintaining appropriate oversight and professional skepticism.

To support this transition, internal audit should move away from rigid, role-based structures and adopt a skills-based model that emphasises flexibility and continuous development, where auditors regularly update their skills to keep pace with evolving technology. This requires an evolution of the talent model to support a more dynamic, innovation-oriented culture. CAEs should embed ongoing training, experimentation and practical AI use into day-to-day audit activities, supported by governance aligned with enterprise standards and leading frameworks. All auditors should develop a baseline level of AI fluency, including an understanding of how AI tools work, their limitations and how to deploy them effectively in audit activities. A smaller subset of the team should build more advanced capabilities, such as configuring AI tools or developing solutions, while others may serve as subject-matter specialists in areas such as data analytics or AI risk. At the same time, core audit competencies such as risk assessment, control evaluation and regulatory interpretation remain essential and become more impactful when combined with AI capabilities.

As AI takes on more routine tasks, the differentiating value of internal audit will increasingly come from human capabilities. Skills such as the ability to challenge and interpret results become more important. While this reinforces the need for a continuous learning culture, CAEs need to focus on building AI capabilities, whether through targeted upskilling of existing staff, bringing in subject-matter specialists, or strategically hiring individuals with data and AI expertise, and thereby treat AI as part of their broader talent portfolio and as a force multiplier for capacity and insight.

86. For organisations that have not yet established formal AI governance, what are the first three actions a CAE should take?

+

For organisations early in their AI journey, the CAE’s role is to help leadership understand emerging AI‑related risk exposure, create shared awareness and prepare internal audit for future assurance.

First, understand how AI is being used across the organisation. At this stage, a fully comprehensive AI risk assessment is rarely feasible, as most organisations lack formal governance structures, defined ownership and a complete inventory of AI use cases. Rather than attempting a comprehensive assessment, the CAE should focus on grounding the discussion in common AI risk themes and then understanding where AI — whether built internally or embedded in third‑party tools — is showing up in the business. From there, the CAE can identify where AI may be introducing meaningful risk, especially in areas involving sensitive data, financial reporting, regulatory decisions or heavy reliance on automated outputs. The objective is to surface material exposure and blind spots, not to rate or fully assess every AI system before governance and inventory processes are in place.

Second, bring that preliminary risk perspective to executive leadership and the audit committee to build shared awareness. The CAE should communicate what is currently understood about AI use, where visibility remains limited, and why existing risk assessment and control processes may not fully capture AI-related risks today. Framing the discussion around material exposure, areas of uncertainty and how AI adoption may be outpacing current oversight helps leadership calibrate risk and understand the limitations of early stage assessments. The objective is not to prescribe solutions or formal governance structures, but rather to establish a shared understanding of risk, clarify why certain exposures warrant attention and reinforce the need for more structured risk assessment as AI use expands.

Finally, prepare the internal audit function for more robust risk assessment and assurance as AI governance matures. The CAE should consider how AI is likely to affect the audit universe over time, including potential SOX or compliance implications, changes to risk assessment assumptions, and the types of evidence internal audit will need once AI governance and inventory processes are established. Early planning may include identifying skill gaps, updating audit methodologies and determining where advisory support today can help enable more effective, risk-based assurance in the future.

87. How can the CAE help boards understand AI risks and governance maturity without requiring directors to have deep technical expertise?

+

By adopting a straightforward communication strategy involving plain, business-friendly language that translates complex AI concepts into relatable business terms, the CAE can play a valuable role in helping boards understand the organisation’s AI risks and governance maturity without immersing directors into technical detail. The CAE can:

  • Build board AI fluency by positioning AI as a business enabler and risk area, using short education sessions, analogies and case studies.
  • Provide executive summaries from management or independent experts to translate technical reports into key takeaways.
  • Use structured AI governance frameworks, such as maturity assessments (initial, defined, advanced) supported with concrete business examples.
  • Tie AI risks to familiar categories like operational, financial and reputational risk.
  • Run scenario-based discussions and use visual aids (dashboards, risk matrices) to make AI risks and governance maturity easier to understand and discuss.
 

This is a real opportunity for the CAE to contribute value in the boardroom, as directors want to know what they need to know and act upon it without being immersed in myriad details.

Section 13: The CLO/GC perspective

Note: There are many issues that are discussed in this FAQ that fall within the purview of the CLO/GC. These matters include, among others: management of personal and sensitive data entering AI systems; compliance with applicable privacy laws; managing AI-generated outputs for accuracy, bias and legal liability; contractual accountability of AI vendors; audit rights with respect to third-party relationships; AI incident response plans; and governing AI use in high-stakes decisions, such as hiring, lending, healthcare and legal. These matters are not covered in this section.

+ EXPAND ALL

88. Where should data used to train AI tools and AI-generated data be stored, and who legally or ethically owns that data? 

+

Data used to train AI tools and the resulting AI-generated data should be stored in secure, compliant environments that adhere to relevant data protection regulations and organisational policies. For example, on-premises data centers and cloud solutions are common storage locations, both of which require encryption, access controls and regular audits to protect sensitive information. 

Ownership rights of training data typically reside with the functional or unit users of the organisation or the third-party vendors that collected or purchased the data, depending on contractual agreements and applicable laws. Ethical considerations arise when using personally identifiable information; organisations must obtain informed consent from individuals whose data is used, ensuring transparency and upholding privacy rights. 

In contrast, ownership of AI-generated data can be more complex, often falling to the organisation that deployed the AI system, although it may also involve shared rights with data providers or users who contributed to the AI model's outputs. Ethically, organisations should consider the implications of how AI-generated data is used, ensuring it aligns with their values and does not perpetuate bias or harm. This dual focus on legal ownership and ethical stewardship is paramount in fostering trust and accountability in AI data governance practices.

89. Should organisations implement an AI acceptable use policy, and if so, how should such a policy be effectively enforced? How does this policy concept differ from an AI governance policy?

+

An AI acceptable use policy guides employees and stakeholders on appropriate AI tool usage, specifying clear do's and don'ts to prevent misuse, ethical concerns and non-compliance. Its goal is responsible individual behaviour. 

An AI governance policy covers organisation-wide management of AI, including oversight, risk, compliance and ethics. It outlines how AI is developed, deployed, monitored and updated, addressing issues like bias, explainability, privacy and accountability, and involves teams across IT, legal, HR and compliance. While these policies differ in focus, they are interconnected. The AI acceptable use policy operates within the framework established by the AI governance policy, ensuring that individual users adhere to the organisation’s overarching principles and standards for AI. 

AI acceptable use policies are essential as AI becomes integral to business operations and decision-making processes. Without clear guidelines, there is the risk of employee misuse, leading to bias, privacy issues, regulatory non-compliance, reputational harm, or inaccurate or unethical results. An acceptable use policy clarifies what constitutes proper and improper AI use, promotes legal and ethical compliance, and serves as a safeguard for innovation and experimentation. 

Enforcing an AI acceptable use policy combines monitoring tools, regular audits and reporting channels to detect unauthorised activity and ensure compliance. Disciplinary actions, from retraining to termination, address violations and promote accountability. While too much oversight can hinder innovation and rigid policies risk obsolescence, enforcement works best through clear communication, ongoing education, adaptive updates and fair consequences to keep the acceptable use policy effective and relevant.

90. How should exposure to IP infringement — both inbound and outbound — be managed?

+

Managing exposure to intellectual property (IP) infringement requires a combination of proactive and reactive strategies designed to mitigate risks, uphold legal compliance and protect valuable assets. 

To prevent inbound infringement (when an organisation inadvertently infringes on the IP rights of others), organisations should conduct regular IP audits and review contracts, licenses and agreements to identify potential risks associated with third-party rights. Robust internal policies are established to guide employees on respecting third-party IP, supported by regular training programs that educate staff about copyright, trademark and patent laws. Technology-driven solutions, such as automated IP monitoring tools, help detect potential violations early, ensuring the organisation avoids costly disputes. If claims of infringement arise, organisations should consult legal counsel to assess exposure, negotiate settlements or seek licensing agreements to remediate the issue. 

To protect against outbound infringement (when others infringe on the organisation’s IP), organisations should secure their IP rights through patents, trademarks, copyrights and trade secret protections in relevant markets. Monitoring services and market surveillance tools are employed to identify unauthorised use of the organisation’s IP by competitors or other parties. Upon detecting infringement, organisations should document the evidence and issue formal cease-and-desist notices. The use of mediation or arbitration can also resolve conflicts efficiently. However, if necessary, organisations may need to escalate matters to court to seek injunctions, damages or royalties. Strategic partnerships, such as cross-licensing agreements, can further reduce exposure to outbound infringement risks while fostering collaborative opportunities.

An integrated approach to IP management combines inbound and outbound strategies into a cohesive framework, emphasising risk assessment, legal protections, employee education and active enforcement. Organisations also leverage cross-functional teams — including legal, compliance and operations — to ensure comprehensive oversight of IP risks.

91. What are the key challenges in applying AI governance across jurisdictions and how should they be managed?

+

AI governance varies significantly across jurisdictions, reflecting different institutional priorities, technological capabilities and political systems. For example, the EU focuses on comprehensive, rights-based regulations (EU AI Act), emphasising ethical standards and risk management. The United States favors innovation-driven, sectoral self-regulation with voluntary frameworks and industry guidelines. China employs centralised, state-controlled governance, blending strategic national plans with sector-specific rules and strong state oversight. India adopts a hybrid model balancing policy direction, digital growth and emerging legislative measures to support innovation and oversight. 

With just these four examples, it is clear why multinationals face challenges ensuring compliance across jurisdictions. Varied laws create regulatory fragmentation and rapid AI advancements are outpacing regulatory updates. This necessitates the use of adaptive frameworks to address the compliance complexities facing organisations operating globally. Insufficient operational guidance for risk assessments and real-time compliance monitoring may create implementation gaps. As AI deployments become more autonomous, ensuring meaningful human involvement becomes more difficult. Finally, disparities in infrastructure and expertise lead to uneven protection and accountability.

Given these challenges, a layered, adaptive approach enables organisations to manage risk, maintain trust and ensure responsible AI deployment at scale across diverse regulatory environments. A centralised, principles-based governance framework should be established to promote consistency in core areas such as transparency, accountability, ethical use and data security, while allowing for tailored adaptations to meet region-specific legal and regulatory requirements. Regulatory mapping — systematically analysing the laws and regulatory guidelines of each jurisdiction in which the organisation operates — helps to identify overlaps, conflicts and unique local obligations. Management then harmonises internal policies to comply with the strictest applicable standards and implements modular controls for regions with unique requirements. Cross-functional teams — including legal, compliance, IT and business leaders — collaborate to interpret new regulations, operationalise compliance, and ensure ongoing monitoring and audit trails.

92. How are AI systems inventoried, audited and retired over time?

+

Inventorying AI systems involves systematically cataloging all AI systems in use across the organisation, including both proprietary and third-party solutions. Typical methods include developing a centralised, regularly updated AI asset registry that records key details such as each system’s purpose, data sources, model version, deployment environment, responsible owners and integration points. Best practices emphasise cross-functional collaboration — engaging IT, compliance, business units and procurement — to ensure comprehensive visibility and avoid shadow AI deployments. Challenges in this phase include identifying all AI assets in decentralised or legacy environments, keeping the inventory current amid rapid innovation and ensuring accurate documentation as models evolve or are retrained.

Auditing AI systems is an ongoing process that evaluates systems for regulatory compliance, ethical alignment, security and performance. Audits typically involve reviewing training data for bias, validating model accuracy, assessing explainability, and checking adherence to internal and external standards. Automated monitoring tools and periodic manual reviews help detect drift, unintended consequences or emerging risks. Best practices call for independent audits, transparent reporting and stakeholder engagement — including input from legal, technical and business leaders — to ensure objectivity and accountability. Common challenges include the technical complexity of auditing “black box” models, resource constraints for frequent audits and navigating evolving regulatory requirements across jurisdictions.

Retiring AI systems becomes necessary when models become obsolete or unsupported, pose unacceptable risks, or are abandoned in favor of another tool or approach pursuant to an ROI consideration. The retirement process starts with clear decommissioning criteria — for example, declining performance, regulatory changes, end-of-vendor support — and involves planning for data migration, archiving or secure deletion. Best practices require thorough impact assessments and stakeholder communication, and audit trails must be maintained for traceability. Organisations often face challenges in disentangling AI systems from critical business processes, managing dependencies and ensuring continuity during transitions. A structured retirement protocol helps minimise operational disruption, mitigate data security risks, and maintain compliance with retention and privacy obligations.

93. How can the organisation’s AI governance framework be structured as a living document that evolves at the pace of change?

+

The organisation’s AI governance framework can be structured as a living document by establishing it as an agile, continuously updated resource managed by a permanent cross-functional committee that includes representatives from legal, compliance, technical and business units. This committee should meet regularly — on a set cadence and in response to significant regulatory, legal or technological developments — to review and revise governance policies, controls and documentation. 

The framework should incorporate mechanisms for ongoing horizon scanning of emerging laws and regulations, court decisions, and advances in AI technology. This ensures rapid integration of new requirements and best practices. Transparent version control, audit trails documenting changes and open communication channels for stakeholder feedback are essential to maintain accountability and responsiveness. By employing these practices, the governance framework remains robust yet adaptable, capable of evolving in step with the evolving landscape of AI regulation and the fast pace of innovation.

94. When an AI model makes or influences a business decision, what documentation is the organisation required to preserve, and how can management ensure the systems actually are preserving it?

+

The organisation is required to preserve comprehensive logs, audit trails and model artifacts documenting an AI model’s decision-making process. Logs should include detailed records, such as input data, reasoning steps, external tools used and final outputs. 

Audit trails should document user and administrator interactions, including approvals, edits and queries, along with timestamps and contextual details like file IDs or case numbers. Model artifacts requiring preservation include training data snapshots, model versions, evaluation results and known failure modes. These records collectively provide traceability, enabling organisations to reconstruct how decisions were made and validate their compliance with ethical and legal standards. 

Management should also ensure that systems are effectively capturing these artifacts by implementing robust governance mechanisms and technical controls. For example, cryptographically signed, immutable logs protect data integrity, while automated reporting and storage reduce errors and ensure consistency. Governance teams or committees should oversee policy enforcement, conduct regular audits of logging processes, and review captured data for completeness and accuracy. Clear roles and responsibilities — such as assigning data stewards and model owners — further ensure accountability and separation of duties. Regular system reviews, combined with monitoring dashboards and compliance checks, help management verify that all required records are being captured and preserved correctly.

95. What constitutes an adequate “contemporaneous record” of how AI model decisions are made?

+

An adequate “contemporaneous record” refers to detailed, time-stamped documentation created at or immediately after the moment an AI model renders a decision. This record captures all relevant information necessary to trace, explain and validate the decision-making process, including input data, processing steps, model version, parameters, external tools or APIs used, as well as the final output or recommendation. The record also logs any human interventions, overrides or approvals associated with the decision, along with contextual metadata such as user IDs, timestamps and case references.

Contemporaneous records provide the operational ability to preserve, explain and defend every AI-assisted decision. Their retention clearly indicates that record retention policies should specifically address AI-generated content, model outputs, training data and prompt histories. Contemporaneous records enable reconstruction of the exact circumstances under which a decision was made, making them essential for regulatory compliance, auditability and transparency, particularly when an AI model: 

  • Is used to facilitate tax decisions, transfer pricing or financial reporting;
  • Contributes to a decision-making process that is under investigation and requests are made on the decision-making process, requiring management to present information regarding the inputs, model version and output, as well as who acted on that output;
  • Produces artifacts subject to a legal hold situation;
  • Generates probabilistic outputs incorporating uncertainty, randomness or statistical inference in the decision-making process (versus deterministic outputs) that are subject to e-discovery requests;
  • Is updated, retrained or decommissioned, requiring preservation of the version that supported decisions requiring explanation; and
  • Flags or misses a compliance issue, raising the need for sufficient end-to-end logging to demonstrate reasonable controls were in place.
 

Other relevant questions to consider:

  • Are the organisation’s AI vendors contractually required to preserve and produce system logs, model configurations and audit trails when it issues a legal hold?
  • Is the organisation prepared for regulators and opposing counsel who understand AI well enough to ask for model cards, bias audits and decision logs as part of their discovery process?
 

Some of these issues continue to be clarified by regulation and in the courts. But the intent of contemporaneous records is clear. They support post-hoc reviews, facilitate root cause analysis in the event of errors or disputes, and provide evidence for ethical and legal accountability. Their maintenance ensures that decisions can be explained to stakeholders, regulators or affected individuals, and helps organisations demonstrate adherence to governance standards, fairness principles and operational best practices.

Section 14: Board and audit committee oversight

+ EXPAND ALL

96. In conjunction with the AI governance process, how should the board engage with management?

+

In today’s rapidly changing AI-enabled environment, directors should ensure they are sufficiently educated about AI so that they are positioned to engage the CEO and senior executives in conversations regarding AI strategy and risks. This is particularly important given stakeholder demands to generate sufficient measurable returns from AI initiatives and ensure responsible deployments of the technology. To that end, following are action items for boards to consider when engaging with management during the AI oversight process: 

Understand how “successful integration” of AI is defined and measured. Boards should constructively engage and, if necessary, challenge management on where AI is being deployed, how outcomes are measured, and whether AI initiatives are delivering consistent, scalable value rather than one-off, localised gains. Consider such metrics and indicators as:

  • Percentage of AI use cases deployed at scale in core operations
  • ROI performance of AI initiatives compared to business cases
  • Management confidence trends in AI integration effectiveness
 

Emphasise ethical and responsible AI as an enterprise governance priority. Boards should incorporate defined accountability, risk oversight and integration into the organisation’s broader governance framework. Consider such metrics and indicators as:

  • Existence and maturity of an ethical AI governance framework
  • Management confidence in responsible AI deployment
  • Frequency and severity of AI-related risk issues escalated to the board
 

For organisations early in their AI journey, encourage management to shift the focus. Move beyond process efficiencies, cost savings and productivity gains to a more transformative emphasis, including improvements in customer experiences, products and services that drive revenue growth and market share. Consider such metrics and indicators as:

  • Distribution of AI investments across efficiency versus strategic growth use cases
  • AI initiatives directly tied to customer experience, revenue or market expansion
  • Management articulation of AI’s role in go-to-market and product strategies
 

Evaluate whether the nature, extent and timing of the board’s oversight of management’s AI governance framework is fit-for-purpose. Assess the scope and scale of the company’s AI deployments. Consider such metrics and indicators as:

  • Evidence that the governance framework is functioning effectively
  • Evidence that the framework is periodically recalibrated as the organisation becomes more AI-mature
 

Ensure management has formalised AI risk governance and integrated it into the ERM process. There should be regular board‑level visibility as AI initiatives scale. Consider such metrics and indicators as:

  • Percentage of major AI initiatives subject to documented risk and governance review
  • Nature of AI-related risks incorporated into the ERM process
  • Frequency of board or committee reporting on AI ethics, transparency and trust
  • Periodic risk discussions integrated with assessments of AI value creation opportunities, rather than treating these considerations as an afterthought
 

Evaluate management’s end‑to‑end AI roadmap that links AI aspirations to enabling, ongoing investments in technology infrastructure and workforce capabilities. Consider such metrics and indicators as:

  • Progress against the approved AI roadmap, with emphasis on technology modernisation and capability enhancement milestones
  • Investments in AI training, upskilling and talent linked to management’s objectives
  • Percentage of AI initiatives delayed due to infrastructure or skills constraints
  • Management focus shifting from pilots to repeatable, enterprise-wide value

97. In conjunction with the AI governance process, how should the board improve its oversight?

+

Organisations in the early stages of their AI journey struggle more with tactical obstacles such as planning and identifying impactful use cases. While the emphasis on initial quick wins is understandable, sustainable long-term ROI is best realised when ethical considerations, transparency and trust are integrated consistently throughout the AI lifecycle.

It is incumbent upon the board to reinforce the importance of focusing on strategic-level obstacles that, once addressed, will enable their organisation to advance more effectively on their AI transformation journey to achieve value. Here are three suggestions for directors to consider in localised their AI governance impact:

Make AI a standing board agenda item linked explicitly to enterprise strategy, value creation, innovation priorities and competitive positioning. Consider such metrics and indicators as:

  • Frequency with which AI appears as a standing agenda item at full board or designated board committee meetings
  • Management’s ability to articulate how AI initiatives support strategic objectives, innovation priorities and competitive positioning through a consistent reporting cadence
  • Extent to which AI discussions shift from ad hoc and reactive updates to forward-looking strategic dialogues
  • Increased board confidence in management’s AI narrative and linkage to long-term value creation
 

Calibrate the board’s AI oversight priorities with the maturity of the organisation’s AI capabilities and ability to deliver expected ROI, adjusting discussion topics as these factors evolve. In terms of ROI, exclusive Protiviti research (March 2026) reveals more AI-mature organisations tend to integrate AI into strategy, innovation and competitive position, whereas less AI-mature organisations tend to concentrate on identifying opportunities and use cases and establishing governance frameworks, likely because foundational elements are not yet in place. Consider such metrics and indicators as:

  • Management’s clarity in articulating the organisation’s AI maturity and ROI generation capabilities
  • How board discussion time is allocated across AI strategy and competitive positioning, AI implementation progress and measures of success, and governance frameworks and foundational capabilities
  • Shift of oversight discussions beyond foundational questions as AI maturity increases, e.g., fewer debates about “what AI is” and more focus on scaling, optimisation and strategic impact
 

Evaluate the executive accountability framework for AI transformation and the board–management engagement model; if necessary, take steps to strengthen them. Consider such metrics and indicators as:

  • Clarity and consistency of executive ownership for AI outcomes, as reported to the board
  • Alignment of the board oversight model used (full board versus distributed committee model) with AI’s strategic importance to the organisation
  • Frequency and quality of management briefings on AI risks, opportunities and progress emphasising AI as an enterprise priority, not a technology issue
  • Clarity of AI oversight responsibilities across the full board and board committees
  • Board members demonstrating greater AI fluency through regular management briefings, independent learning and ongoing board education

98. What information should boards and designated board committees expect regarding AI governance, risk and performance to fulfill their oversight responsibilities effectively? 

+

To support a robust AI governance oversight process, boards should receive regular updates on AI governance, risks and performance. If the responsibility for AI governance oversight has been delegated to the audit, technology or risk committee, the full board should still receive periodic AI-related updates, particularly in relation to strategy. 

On the governance front, directors should have a clear understanding of the organisation’s AI strategy, including its AI maturity relative to peers and how AI initiatives align with business goals; the ethical principles guiding development and responsible deployment; and the accountability structures in place to ensure sufficient human oversight.

Other relevant topics include giving directors an understanding of:

  • The AI governance framework;
  • The work of the AI governance committee or an equivalent cross-functional committee (see responses to Questions 5 and 53);
  • Defined roles, responsibilities and accountabilities (see response to Question 5);
  • Escalation protocols for AI-related issues;
  • How the AI lifecycle is managed, including inventorying, versioning, ownership and documentation of AI models and systems, and the processes for model development, deployment, monitoring, updating and retirement;
  • How compliance with applicable laws and regulations across jurisdictions is being addressed; and
  • Regular updates regarding policy changes and efforts to embed corporate responsibility commitments into AI practices.
 

Timely updates on the identification and assessment of significant AI risks should be provided to directors. Findings from internal audit and external audits and regulatory notifications related to AI should provide insights regarding the number of compliance and security incidents and the effectiveness of AI-specific controls and incident response mechanisms. Particular attention should be given to the effectiveness of vendor risk management practices, ongoing monitoring of high-risk models, the status of remediation actions and lessons learned. Performance reporting should include both operational and ethical metrics presented on a board dashboard. Metrics should address model reliability, fairness assessments, compliance incidents and financial performance to focus on whether AI delivers value, remains compliant and conforms to the organisation’s strategy and standards. Some examples are included in the responses to Questions 17 and 95.

99. How should AI risks be communicated to the board or a designated board committee?

+

AI risks should be communicated to the board or a designated board committee in a clear, concise and structured manner that aligns with the organisation’s broader risk management framework. 

The objective of risk reporting is to enable the board to provide effective oversight and support proactive decision-making. This involves presenting AI risks in terms of their potential impact on strategic objectives, operations, reputation and compliance, using plain language. A dashboard that informs the board of emerging risks, trends and regulatory changes is an effective practice. To that end, the response to Question 17 offers examples of metrics related to AI risk areas that could be considered for inclusion on the dashboard. Management should also prioritise key risks — such as algorithmic bias, regulatory non-compliance, data privacy concerns, cybersecurity vulnerabilities and third-party exposures — for purposes of discussions with the board regarding mitigation strategies, ongoing monitoring efforts and any gaps in controls.

Section 15: SOX/ICFR considerations

+ EXPAND ALL

100. If AI technologies are supporting processes that are in-scope for Sarbanes-Oxley (SOX) compliance, does this mean that AI should be considered an “in scope” SOX application?

+

Not necessarily. The use of AI within a SOX in-scope process does not automatically mean the AI itself should be treated as an in-scope SOX application. SOX scoping should continue to be driven by financial statement and disclosure impact and reliance, not by the presence of AI alone. The SEC has strongly emphasised the avoidance of “AI washing” within financial statement disclosures and the filings in which they are included, so appropriately determining the materiality of AI usage within internal control over financial reporting (ICFR) processes is critical.

AI should be considered in-scope for SOX when it is used in a manner that is material to financial reporting or the operation of key controls. This includes situations where AI directly performs, automates or replaces a control; materially influences control execution or judgment; or generates outputs that management relies on without sufficient human review.

In many cases, AI functions as supporting technology rather than a primary SOX application — for example, assisting with data preparation, analysis or drafting activities. In these scenarios, the focus is typically on whether existing SOX controls adequately address the risks introduced by AI, rather than treating the AI tool itself as a standalone in-scope system. Conversely, AI that automates control execution, performs calculations, applies rules or materially influences judgments affecting estimations incorporated in the financial statements may warrant inclusion in SOX scope or require enhancements to existing SOX controls. In the event AI capabilities are added to a pre-existing SOX application, existing controls should be evaluated to determine whether they appropriately address potential risks introduced — while application scope remains unchanged, existing control processes might require change.

Effective SOX scoping therefore requires understanding how AI is used, not simply whether it exists. Key considerations include the degree of reliance on AI outputs, the level of automation versus human oversight, the potential for error or bias to affect financial reporting, and whether AI usage changes the nature or risk profile of existing controls.

An effective approach to consider AI is: “What does it do, and what controls are in place to oversee its functionality?”

AI roleExamplePotential SOX implication
Informational or advisory only (human reviews and approvals)Using AI to develop templates used in the financial reporting processOften out of scope; assess IT general controls and data reliability
Automates steps affecting financial data or controlsAutomated approval of journal entries based on defined parameters within an AI solutionLikely SOX-relevant; evaluate application controls and change management
Black box logic influencing financial reporting estimates or classificationsLeveraging a third-party AI solution to fully manage software development lifecycle processes for in-scope SOX applicationsHeightened SOX relevance; stronger governance, validation and monitoring likely required

As AI becomes more embedded in financial processes, organisations should expect to enhance risk assessments, documentation and control design to reflect AI-specific risks — even when AI is not classified as an in-scope SOX application.

101. What are the primary risk and control considerations as AI technologies are deployed to support financial processes and control activities?

+

When AI is deployed to support financial processes or control activities, the key consideration is how it changes risk, judgment and control execution — not simply whether AI is present. From an internal audit and SOX perspective, AI introduces opportunities to enhance efficiency and insight, while also creating new governance and control expectations that should be addressed deliberately.

Primary risk areas to consider include:

  • Model behaviour and judgment risk: AI may apply logic or learn patterns that influence estimates, classifications or exceptions. If outputs affect financial reporting, the risk of bias, drift (performance changing over time) or unexplained results increases.
  • Data integrity and lineage: Input data quality is key. Inaccurate, incomplete or poorly governed data can propagate errors into financial results.
  • Change and version management: AI models can evolve more frequently than traditional applications; uncontrolled changes can undermine control reliability.
  • Transparency and explainability: Limited ability to explain how an AI produced an output (e.g., “black box” behaviour) can challenge auditability and management review.
  • Overreliance on automation: Controls that rely heavily on AI may degrade if human oversight is not defined clearly.
  • Regulatory and compliance risk: Deployed AI systems may be prohibited by existing regulatory guidance. Additionally, new or revised guidance may cause previously compliant AI solutions to become non-compliant.
 

As additional AI systems are deployed in support of ICFR processes, it is important to codify AI-specific risks within existing risk and control frameworks leveraging best practice frameworks such as the NIST AI RMF or ISO 42001. Doing so provides organisations with strong structural support to respond to questions from external parties (customers, regulators, auditors) on the adequacy of their AI risk management program.

102. When AI is used to automate or support SOX-relevant controls, how should the control framework and testing approach adapt to maintain regulatory defensibility?

+

When AI is used to automate or support SOX-relevant controls, regulatory defensibility is maintained by adapting — not replacing — the existing control framework. Yes, there is the overriding objective of determining that controls over financial reporting are designed and operating effectively. But what’s different when AI is involved is where and how evidence is produced, reviewed and monitored.

How the control framework should adapt AI-enabled controls should continue to map to COSO principles (risk assessment, control activities and monitoring), with added clarity around accountability, data and change management. Management should explicitly define whether AI is assisting a control (decision support) or executing it (automation), and ensure roles, approvals and escalation paths are documented. Governance expectations — such as model ownership, acceptable use and performance monitoring — can be aligned to the NIST AI Risk Management Framework (NIST AI RMF) without creating a parallel SOX framework. 

Formally demonstrating alignment to best practice frameworks such as the NIST AI RMF provides organisations with strong structural support to respond to regulatory inquiries on the adequacy of their AI risk control framework and testing approaches.

Testing should focus less on inspecting independent AI solution execution and more on validating the control logic, inputs and oversight mechanisms of the AI system itself.

Control elementTesting emphasis
AI logic and/or modelDesign validation; alignment with control objectives
Data inputsCompleteness, accuracy and access controls
Ongoing monitoringMonitoring metrics, thresholds and exception handling
Change managementFormal approval, testing and version tracking

In instances where AI outcomes are not fully explainable, stronger management review controls and evidence of challenge become especially important.

103. How might external auditors react to internal audit/management’s use of AI in support of SOX testing activities? What impact might AI deployments have on external audit reliance strategies?

+

In regard to the reaction of external auditors to internal audit and management use of AI:

External auditors have generally taken a cautious approach to relying on AI-enabled SOX testing to date, particularly where AI is used to generate audit evidence, assess control operation or support judgment-based conclusions affecting the financial statements. This caution reflects the relative newness of AI-enabled testing, limited regulatory precedent, and concerns around explainability, validation and consistency of outputs.

At the same time, this level of reluctance is unlikely to persist as AI becomes more deeply embedded in SOX processes and as expectations from regulators and standard setters continue to evolve. External auditors will increasingly need to evaluate AI-enabled testing in the same way they assess other automated or technology-enabled procedures — based on governance, competence, objectivity and evidence reliability.

Over time, well-designed incorporation of AI-enabled testing has the potential to enhance external audit reliance, particularly when it improves consistency, expands coverage (including full-population testing) and strengthens monitoring activities. The key determinant is not the use of AI itself, but whether management and internal audit can clearly demonstrate how the AI model operates, how outputs are validated, and how exceptions are identified, evaluated and resolved. Especially in the early stages of AI-enabled testing, demonstrating appropriate human oversight (including evidence of review and sign-off) is critical. Retaining evidence of auditor AI usage (for instance, screenshots or prompt parameters, manual review of completeness and accuracy) is helpful to support external reliance conversations.

In regard to the impact of AI on external audit reliance strategies:

As AI adoption accelerates, external auditors may also adjust their overall audit strategies, including the nature and timing of walkthroughs, the extent of substantive testing and the involvement of specialists. Organisations that align AI-enabled testing with established SOX principles — and engage auditors early — will be better positioned as reliance expectations mature.

While AI does not change the fundamentals of SOX reliance, it alters how governance, transparency and evidence quality is approached. But when implemented thoughtfully, AI-enabled testing can support more effective and scalable assurance for both management and external auditors.

Strong AI governance can support or increase reliance on internal audit work, consistent with The IIA’s Three Lines Model — particularly where internal audit demonstrates independence, competence and disciplined oversight of AI tools. Practical steps include documenting how AI supports testing objectives, retaining auditable evidence and aligning with auditors on expectations. Over time, organisations that mature their AI governance may find external auditors more willing to leverage AI-enabled testing rather than duplicate it.

104. When AI is used to assess a full population (rather than a sample) as part of control testing and test exceptions are identified based on the full population assessment, how will this impact external audit's view of control effectiveness?

+

When AI is used to test a full population rather than a sample, external auditors generally view this as a potential enhancement to coverage and risk insight, not an automatic conclusion about control effectiveness. The key determinant is how management and internal audit interpret, govern and respond to the exceptions identified — and whether the AI-enabled testing is reliable and well-controlled. While expanding control testing to include full populations has the potential to enhance overall risk mitigation, it is critical to consider the additional lift needed to support completeness, accuracy and appropriate handling of unexpected occurrences.

From an external audit perspective, full-population testing can strengthen confidence in exception detection, but it also raises expectations around precision and follow-up. Unlike sampling, where some deviation is anticipated, population-level testing often surfaces a broader range of anomalies such as data issues, outlier cases or items outside the control’s intended scope. Auditors will focus on whether exceptions are appropriately triaged, root-caused and linked back to the relevant COSO control objectives and financial reporting assertions.

How exceptions are handled matters more than how many are identified:

Full population testing outcomeLikely external auditor focus
Clear, explainable exceptions with documented remediationSupports control effectiveness while identifying potential process enhancements
High exception volume with defined thresholds and rationaleMay still be effective if well-governed; however, may point to gaps in existing control execution
Unexplained or unmanaged exceptionsRaises questions about operating effectiveness or control design; internal audit’s testing process may also be evaluated to confirm appropriate parameters are in place

To maintain defensibility, management should ensure AI-based testing includes defined thresholds, management review controls, and clear evidence of challenge and resolution. Where AI logic or results are less transparent, stronger oversight and documentation become especially important.

Section 16: Monitoring, remediation and continuous improvement

+ EXPAND ALL

105. How should organisations monitor AI performance and risk over time, and what triggers should prompt reassessment or remediation?

+

Continuous, data-driven oversight processes that track both technical and ethical indicators are key to enabling effective monitoring of AI performance and associated risks. 

KPIs include model accuracy, precision, recall and error rates, which measure how reliably the AI system achieves its intended outcomes. In addition, organisations should monitor fairness and bias indicators — such as disparate impact across demographic groups — as well as explainability scores and user satisfaction ratings. Tracking operational data like system uptime, latency and resource consumption provides insight into AI’s reliability and efficiency. For customer-facing systems, feedback loops capturing complaints, escalation rates and anomalous outputs are essential for identifying emerging issues that may proliferate quickly, impacting trust or compliance.

Risk monitoring requires collecting data on compliance incidents, audit findings and security vulnerabilities, as well as tracking changes in regulatory requirements or business objectives. Triggers for reassessment or remediation include significant drops in model performance, detection of bias or unfair treatment, recurring customer complaints, regulatory non-compliance, or the introduction of new data sources or use cases. Additionally, major changes in the external environment — such as new laws, market shifts or technological advances — should prompt a thorough review of affected AI systems.

106. When AI governance or control issues are identified, what does effective remediation look like, and how should progress be tracked and reported?

+

Effective remediation of identified AI governance or control issues involves a systematic approach that begins with rapid detection and containment of the issue, followed by a thorough root cause analysis to understand and source the underlying factors contributing to the problem. This process should include immediate corrective actions, such as pausing the affected AI system or implementing manual oversight to prevent further harm. Organisations should then update governance protocols, policies and training programs to address the identified gaps and prevent recurrence. 

Organisations can track remediation progress through established metrics and timelines, with regular reporting to stakeholders on remediation efforts, including updates on corrective actions taken, improvements made and any changes in risk assessments. Utilising dashboards to enable executives and directors to visualise progress and compliance with remediation plans ensures transparency and accountability.

Loading...