Hospitality Company Builds Foundation for Pursuing Responsible AI, Mitigating Risk with Comprehensive Standards and Enhanced Controls

Client Snapshot


The client is a multinational hospitality company that manages and franchises a broad portfolio of hotels and resorts.


Client Situation

Before exploring and adopting generative artificial intelligence capabilities across the enterprise, the client wanted to establish a comprehensive AI governance standard and control framework to address any new risks.


Work Performed

Protiviti conducted a thorough analysis of the client’s existing information security and technology standards and controls, then enhanced those controls and developed a customised AI governance standard, addressing the risks and concerns cited in leading industry standards on AI.



The introduction of a comprehensive governance standard and updated controls paved the way for the organisation to move forward aggressively, yet responsibly, with critical AI initiatives.


As is the case with many organisations today, this multinational hospitality company was contemplating how best to introduce and manage model-based/generative artificial intelligence (AI) across its worldwide network. The balance between risk and reward, a common challenge for technology leaders, was top of mind. Specifically, the client wanted to:

  • enable AI use and development within the organisation, without creating undue risk, for such use cases as decision support, customer service, knowledge management and productivity improvements
  • develop a better understanding of the industry standards to which the organisation needed to align
  • accomplish both in concert with its existing standards and controls.

The overriding objective was to ensure that the updated AI governance standard be “actionable, digestible and comprehensible.” Minimising risk and creating new opportunities to apply AI technology to the business would put this client well ahead of its competitors.

In addition, it was important to this company’s leaders that any controls introduced incorporate the corporate values of ethics and fairness, which are built into the organisation’s code of conduct. This consideration raises a paradox for clients looking to introduce AI aggressively – AI must be treated as an actor expected to behave as an ethical, fair person would, and that expectation must then be translated into day-to-day operations.

Tight timeline, detailed reviews

The client’s periodic control review and assessment, a window for addressing annual changes to controls, was scheduled one month from the start of this project, so the client could quickly incorporate, publish and operationalise the controls for immediate use by the organisation. During that timeframe, the team thoroughly analysed the existing information security and technology standards and controls, provided suggested revisions where possible, and delivered a minimised set of actionable net-new controls along with guidance for implementing them. This would immediately drive improvements to AI risk mitigation and help accelerate AI-first transformation.

While working through this comprehensive analysis, we worked closely with the client to identify the industry standard(s) to which the organisation would align, established how best to integrate with the existing standards and controls, and developed a process to gain consensus for this change across the organisation. Decisions on alignment were made with the client to ensure suitability and practicality.

The result was an AI governance standard tailored to the client, incorporating risk management principles, compliance requirements and industry practices. Our team went through each risk called out by the leading AI governance frameworks and determined the best way to mitigate the risk: employing an existing control, modifying a control or creating a net-new control. The resulting controls were mapped back to the NIST AI Risk Management Framework (NIST AI 100-1), ISO/IEC 23894:2023, the European Union AI Act and other emerging industry guidance, along with our proprietary, leading-practices frameworks, ensuring traceability and a comprehensive strategy to address AI risks.

Building upon what was already in place was a key success criterion for the client, so the team also mapped the guidance against the client’s 134 existing controls to identify where those controls provided coverage, where controls needed to be modified to address AI risks or where net new controls needed to be created. Leveraging most existing content versus subjecting the organisation to a complete overhaul proved to be a factor in exceeding the client’s expectations.
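The disposition exercise described above can be pictured as a simple classification: each framework risk is checked against the existing control inventory and tagged as already covered, needing modification, or needing a net-new control. The sketch below is purely illustrative; the control IDs, risk names and matching rule are hypothetical assumptions, not the client's actual controls or tooling.

```python
# Illustrative sketch of a risk-to-control disposition exercise.
# All control IDs, domains and risks below are hypothetical examples.

def classify_risk(risk, existing_controls):
    """Return (disposition, control_id) for one framework-identified risk."""
    for control in existing_controls:
        if risk["domain"] in control["domains"]:
            if risk["ai_specific"] and not control["covers_ai"]:
                # A control exists for this domain but predates AI: extend it.
                return ("modify", control["id"])
            # The existing control already provides coverage as written.
            return ("existing", control["id"])
    # No control touches this domain: a net-new control is required.
    return ("net_new", None)

# Hypothetical slice of an existing control inventory.
controls = [
    {"id": "SEC-012", "domains": {"data privacy"}, "covers_ai": False},
    {"id": "SEC-045", "domains": {"access management"}, "covers_ai": True},
]

# Hypothetical risks drawn from an AI governance framework.
risks = [
    {"name": "training-data leakage", "domain": "data privacy", "ai_specific": True},
    {"name": "unauthorised model access", "domain": "access management", "ai_specific": True},
    {"name": "model bias in decisions", "domain": "fairness", "ai_specific": True},
]

for r in risks:
    disposition, cid = classify_risk(r, controls)
    print(f"{r['name']}: {disposition} ({cid})")
```

Run over the full inventory, this kind of tagging yields exactly the three buckets the engagement produced: controls reused as-is, controls modified for AI coverage and net-new controls to be drafted.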

To address concerns about the ethics of AI fitting seamlessly into the client’s code of conduct, we incorporated into that code how AI ‘behaviour’ would match the company’s dedication to treating its employees and customers fairly. We asked, “How do we govern humans?” and “How should we govern AI?”; the overlap between those two questions became more powerful as that paradigm was discussed with the client.

Mitigating risk, ensuring compliance

The development of a tailored AI governance standard significantly improved the client’s ability to manage and control its AI initiatives, guiding the responsible and ethical use of AI technologies throughout the organisation. This standard established the ‘rules’ for the client’s employees who are using AI, including the newer, model-based and generative types of AI. Enhancing the client’s controls to mitigate AI risks and align with industry-leading AI frameworks and regulations such as the NIST AI RMF, ISO/IEC 23894 and the EU AI Act supported compliance with industry practices and requirements, reducing the risk of non-compliance.

The process of establishing clear guidelines around ethical considerations ensures that employees using AI technologies do so responsibly. This includes ensuring fairness and transparency in automated decisions, which are increasingly scrutinised by regulators and the public alike. Through comprehensive control mapping, gaps in existing controls were identified, allowing a targeted approach to improving the security posture against potential cyber threats or data breaches linked to AI applications.

By creating a governance standard specifically designed to address the unique risks associated with AI, such as biases in decision-making or data privacy issues, the client was better equipped to identify and mitigate these risks before they could result in violations and reputational damage. This proactive approach was critical for maintaining trust in their AI systems.

This tailored AI governance standard significantly improved the client's ability to guide and accelerate the responsible and ethical use of AI technologies across the organisation.

Benefits realised

The output we created gave the client the assurance and confidence needed to expeditiously pursue AI solutions and use cases, knowing that the right governance building blocks and the right framework to protect the organisation were in place. Having the right governance framework is essential and acts as an accelerator for AI-powered transformation.

The careful analysis enabled our team to deliver results that exceeded the client’s expectations, particularly given the limited timeframe. Most of the controls required by the NIST AI RMF already existed and simply needed to be extended to address AI technologies. Leveraging the majority of existing controls allowed the client to focus on the few unique controls that needed to be developed to effectively govern and control AI across the company. The new standard and controls were custom-tailored to fit seamlessly into the client’s structure.

Once the mapping exercise was complete, daily working sessions were held to develop the new AI standard with particular attention to the modification of existing controls and the creation of new controls. In total, we delivered one new AI standard and nine net new AI controls, along with six modified controls. Additionally, updated procurement procedures, an acceptable use policy, user training and processes were implemented to help the client educate and enforce the new controls.

With these updates, the client is now well positioned to integrate the latest, most powerful types of AI into its daily operations, giving it a competitive advantage over its peers across the industry.