NextGen Risk Management: How Do Machines Make Decisions?


Introduction

Effective risk identification and monitoring are integral to an organisation’s success and to improving strategic decision-making. Accurate and timely risk identification and assessment help drive efficiencies and improve customer experiences with business processes.

Consistent with its agile risk management philosophy, Protiviti presents its perspective on establishing and sustaining leading practices for identifying, assessing, mitigating and monitoring risks stemming from artificial intelligence (AI).

AI and Risk Management

Many organisations are quickly adopting AI based on the benefits it can create. AI technologies have the potential to advance established industries by improving the efficiency and accuracy of company operations and customer experiences. Additionally, AI is opening the door to entirely new operating models, ushering in a new set of competitive dynamics that rewards organisations focused on interpreting and extracting internal and external data quickly and accurately[1].

Machine learning, a type of AI, utilises the fields of knowledge discovery and data mining. Machine learning algorithms study and react to data automatically, without human assistance or intervention, enabling systems to learn from experience and improve. However, using machine learning and AI increases complexity and creates new, more dynamic risks that may lead to unintended consequences.
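
To make the learning step concrete, here is a minimal sketch in Python with scikit-learn; the synthetic data set, the random-forest algorithm and all parameter values are illustrative assumptions rather than a recommended configuration:

```python
# A minimal sketch of supervised machine learning; the synthetic data and
# the random-forest choice are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "experience": 1,000 labelled observations with 10 features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # decision rules are induced from the data

print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point is that the decision rules are induced from labelled examples rather than specified by a human, which is precisely what makes the resulting risks harder to anticipate.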

To manage this new and changing risk environment, an organisation needs a properly established risk management foundation. Organisations can leverage existing risk management frameworks to create a framework that can identify and oversee the wide range of risks associated with AI. For instance, risk frameworks used to assess new products, services and activities can be leveraged as AI is developed, implemented and changed. Another useful framework is a model risk management (MRM) framework, which is based on identifying, measuring and monitoring all risks related to a model, generally a component of AI in the form of a machine learning algorithm.

MRM practices mitigate the risks of traditional econometric model lifecycles; however, they often fail to capture the risks presented by AI. While these frameworks can be leveraged, organisations may not currently be equipped and resourced to handle all the risks and ongoing monitoring an AI environment requires. To account fully for risks posed by AI, organisations’ existing frameworks and risk practices can be tailored with some well-targeted enhancements within the AI lifecycle, as discussed in detail below.

As use of AI continues to expand exponentially, risk and compliance functions will be challenged to rethink resourcing, traditional oversight monitoring techniques, and how to leverage existing frameworks to ease implementation and fully manage risks.


AI in the Marketplace

The financial services industry continues to invest heavily in artificial intelligence systems, leading other industries such as manufacturing, healthcare and professional services. Research firm IDC expected the banking industry to spend more than $5 billion on artificial intelligence systems in 2019. Overall, IDC projects that spending on AI systems will reach $97.9 billion in 2023, more than two and a half times the $37.5 billion spent in 2019[2].

Financial institutions are incorporating AI into asset management, fraud detection, credit risk management and regulatory compliance, to name a few use cases. Specifically, these organisations are turning to machine learning models as an alternative to traditional models to gain faster, more accurate and more insightful predictions and classifications for their risk management and financial management business decisions. Several types of AI components and the effects they have on organisations are provided below.

[Figure: AI use in the marketplace]

Incorporating and monitoring AI correctly is critical. There have been several instances where major organisations rushed to deploy AI, only to discover unmitigated risks and unintended consequences in their applications. In 2018, a major consumer brand discovered that the AI used in its hiring process discriminated against female job applicants. The software was designed to align a candidate’s history with that of employees who had proven successful at the company over the previous 10 years[3]. The algorithm was not designed to discriminate, but the data set on which the model relied introduced bias and caused unintended consequences. The following table shows common risks that organisations are encountering through the use of AI:

Key Risks Posed by AI

[Figure: Common risks of AI]

Although AI is innovative and technically complex, at its foundation is a core model that quantifies theories, techniques and assumptions from processed input data. The difference with AI is the exponential increase in model complexity driven by intricate algorithms, vast unstructured data sets and potentially immense decision trees. AI, and machine learning specifically, removes the element of human subject-matter expertise from the decision process, which can result in unwanted risk exposure.

As the use of machine learning models continues to expand across the financial services industry, regulators are increasing their attention on model risk. Model risk generally stems from the following three root causes:

  • A model has fundamental errors that cause it to produce inaccurate or biased outputs when viewed against the design objective and intended business use.
  • A model is implemented or used inappropriately, or its limitations or assumptions are not fully understood.
  • A model is misused because of a misunderstanding of its purpose and limitations.

To avoid these challenges, organisations should consider these fundamental questions:

  • Do you know how the machine learning model was built?
  • Do you know its purpose?
  • Do you know how to use the results and how success is defined?

The Federal Reserve Board (FRB) has reinforced that SR 11-7/OCC 2011-12[4] (Guidance on Model Risk Management) remains the applicable regulatory guidance on the use of AI, and it has given no indication that new standards or requirements are forthcoming. Although SR 11-7/OCC 2011-12 provides a foundation for establishing risk management frameworks that mitigate risks posed by AI systems, guidance and expectations have not been expanded and formalised to address the dynamic changes, unintended results and bias risks[5] posed by AI.

Organisations can proactively mitigate these unique AI risks by establishing cross-functional frameworks based on a clearly defined scope for each AI solution and its interdependencies with existing risks in the operating environment. Consider the use of a chatbot as an example. An organisation will need to consider legal, compliance, reputational and operational risks if any issues (discrimination, bias, privacy, etc.) arise from the use of the chatbot.

Recently, the New York Department of Financial Services launched an investigation into gender discrimination in financial institutions’ consumer algorithms used to determine credit limits[6]. Needless to say, organisations using AI for decisions are facing scrutiny across the full risk taxonomy. Given these challenges, organisations should enhance their current risk management framework by establishing a cross-functional risk governance process to ensure AI risks are understood, assessed and mitigated throughout the AI lifecycle.

AI Lifecycle and Effective Challenge

Insight into the lifecycle will help organisations navigate various considerations, including risk and compliance, governance and reporting, data management, technology, and workforce and training implications. Additionally, organisations should cultivate an environment of effective challenge, in which decision-making processes promote a range of views, current practices and AI solutions are independently tested and validated prior to implementation and production, and engagement remains open and constructive. Organisations can take the following actions now to enhance risk mitigation during the AI lifecycle:

1. Design and Mitigate


AI Governance Build-Out

  • Adapt and extend existing model governance to fit AI tools, specifically the use and maintenance of models, validation of models, and the adequate disclosure of model assumptions and limitations.
  • Review and update the model risk policy (the definition of model risk, the scope of MRM, roles and responsibilities, the model approval and change process, and the management of model weaknesses) to encompass the new risks that AI presents.
  • Develop an AI policy consisting of requirements around use, development, and ongoing monitoring, which include roles and responsibilities for business leaders, independent risk and compliance managers, and technology and operations functions.
  • Determine the interpretability requirements based on the organisation’s risk appetite as part of the AI policy.
  • Develop a methodology around bias to ensure fairness and address algorithmic bias, as well as bias against humans (a minimal bias-check sketch follows this list).
  • Configure a risk-based methodology consisting of severity tiers, which will incorporate the necessary requirements to implement AI successfully.
  • Formalise a well-defined project oversight and change management framework around AI systems.
  • Improve data quality programmes to profile input data and strengthen data governance (i.e., embed data requirements and a rigorous data monitoring process).
  • Build a data warehouse for all performance monitoring and testing data. This will allow an AI tool to feed and manage the data repository easily once the structure is built.
  • Configure application resiliency controls, detailed business-continuity planning and disaster recovery.
  • Track and aggregate monitoring in centralised warehouses and align to issue and change management programmes.
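
Referenced in the bias bullet above, here is a minimal sketch, in Python with pandas, of one common fairness check: the disparate impact ratio, i.e., the favourable-outcome rate for a protected group divided by the rate for a reference group. The data, group labels and the 0.8 threshold (a widely used rule of thumb) are illustrative assumptions, not a complete bias methodology:

```python
# A hedged sketch of a disparate impact check; data, labels and the 0.8
# threshold are illustrative assumptions, not a complete bias methodology.
import pandas as pd

def disparate_impact(decisions: pd.Series, groups: pd.Series,
                     protected: str, reference: str) -> float:
    """Ratio of favourable-outcome rates between two groups (1.0 = parity)."""
    rate_protected = decisions[groups == protected].mean()
    rate_reference = decisions[groups == reference].mean()
    return rate_protected / rate_reference

# Hypothetical decisions: 1 = approved, 0 = declined.
decisions = pd.Series([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = pd.Series(["F", "F", "M", "M", "F", "M", "F", "F", "M", "M"])

ratio = disparate_impact(decisions, groups, protected="F", reference="M")
print(f"disparate impact ratio: {ratio:.2f}")  # below ~0.8 often flags concern
```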

AI Tool Design

  • Define the purpose and scope of the AI solution clearly, including its methodology, decision criteria, and data requirements.
  • Hold meetings with key stakeholders to understand the AI tool requirements, desired output and use cases.
  • Before developing an AI tool, map its process workflow, including data inputs, variables, and monitoring triggers to gain a full understanding of the foundation of the tool.
  • Complete documentation of the AI tool’s underlying model, covering its purpose, design, assumptions, parameterisation, testing, limitations and user instructions.
  • Identify scale and potential inherent risks that may be triggered with the use of an AI solution.
  • Examine the amount of change that a business will be required to undergo as it relates to building and running the AI tool in production.
  • Embed, understand and analyse rules and regulatory requirements in the algorithm design and monitoring.
  • Define hyperparameters, including a standard set of analyses to be run on input data and output results (a hyperparameter-search sketch follows this list).
  • Perform quality control during pre-implementation rollout.
  • Obtain appropriate approvals and signoffs for development and use of the AI tool.
  • Build mechanisms within the AI tool to ensure accountability and adequate access to redress. Algorithms, data and design processes should all be auditable.
  • Configure consistent and recurring testing in a live environment.
  • Conduct preliminary analytics on the outputs generated by the tool to understand its limitations and determine optimal parameters when building out the tool.
  • Validate the parameters chosen through human subject-matter experts (SMEs) and industry benchmarks.
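
As referenced in the hyperparameter bullet above, the sketch below (Python/scikit-learn; the grid values, model choice and synthetic data are assumptions for illustration) shows one way to declare the hyperparameter space explicitly and tune it on training data only, so the selected settings can be documented and reproduced:

```python
# A minimal sketch of declaring and searching a documented hyperparameter
# space; the grid values and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {                      # the documented search space
    "n_estimators": [50, 100],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)        # tuned on training data only

print("selected hyperparameters:", search.best_params_)
print("hold-out AUC:", round(search.score(X_test, y_test), 3))
```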

2. Implement


  • Ensure the approved project plan serves as the baseline or source of record, and acts as a “contract” of the work to be performed to successfully implement the AI tool.
  • Hold meetings with key stakeholders to introduce the AI and designate model owners and SMEs to monitor performance.
  • Configure a cross-functional team consisting of data scientists, AI experts, model risk experts, data officers, regulatory experts, and any key stakeholders to help mitigate risks associated with the implementation of the AI tool.
  • Establish and monitor controls and human override in the design of the algorithm to control inputs, processing and outcomes during implementation (a human-override sketch follows this list).
  • Conduct proof-of-concept testing and/or controlled case studies before going into live production.
  • Develop an implementation plan for moving the AI solution into production and assist with the implementation phase.
  • Develop and formalise protocols for communicating the use of the newly implemented AI tool to internal and external stakeholders (e.g., consumers, investors, regulators).
  • Perform a production readiness analysis to ensure the AI solution can be implemented successfully.
  • Perform validation testing of the AI tool prior to implementation and make final updates to mitigate any material weaknesses of the tool.
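
As referenced in the human-override bullet above, one simple control pattern routes low-confidence model decisions to a human reviewer rather than actioning them automatically. The sketch below is a hedged illustration in plain Python; the threshold value, queue and case identifiers are assumptions, not a prescribed control design:

```python
# A hedged sketch of a human-override control: predictions whose confidence
# falls below a documented threshold are escalated to a human reviewer.
# The threshold and review queue are illustrative assumptions.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.75  # illustrative; would be set from the risk appetite

@dataclass
class OverrideGate:
    review_queue: list = field(default_factory=list)

    def decide(self, case_id: str, score: float) -> str:
        """Auto-decide only when the model is confident; otherwise escalate."""
        confidence = max(score, 1.0 - score)
        if confidence < REVIEW_THRESHOLD:
            self.review_queue.append(case_id)  # human SME takes the decision
            return "ESCALATED_TO_HUMAN"
        return "APPROVE" if score >= 0.5 else "DECLINE"

gate = OverrideGate()
print(gate.decide("loan-001", score=0.92))  # APPROVE
print(gate.decide("loan-002", score=0.55))  # ESCALATED_TO_HUMAN
print("pending human review:", gate.review_queue)
```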

3. Testing and Effective Challenge


  • Perform rigorous and continuous testing of underlying/input data.
  • Perform scheduled backups and parallel testing of underlying/input data.
  • Conduct periodic testing of the controls in place to guardrail underlying/input data.
  • Perform post-implementation AI validation testing and exceptions testing and conduct a risk assessment.
  • Review AI model findings and hold meetings with key stakeholders and SMEs to discuss key takeaways.
  • Review performance threshold exception reports to identify areas of improvement for the model.
  • Formalise review of key risks inherent in AI and its operational component (e.g., economic variables, qualitative factors).
  • Perform a quality assurance review of surrounding business objectives, stated benefits and process flow.
  • Review choice of architecture, hyper-parameters, optimisers, regularisation and activation functions.
  • Conduct an independent assessment as it relates to operating within parameters outlined in the approval documentation.
  • Modify parameters dynamically to reflect emerging patterns in the input data, as this will replace the traditional approach of periodic manual review and model refresh.
  • Provide insight regarding risk and compliance considerations that align to the use of AI.
  • Conduct an independent audit to ensure the design and effectiveness of controls relied upon to mitigate the model’s risks.
  • Perform an independent assessment of the process for establishing and monitoring limits on model use.
  • Conduct a bias/variance analysis.
  • Develop a challenger model using alternative algorithms to benchmark output performance (a champion/challenger sketch follows this list).
  • Perform a post-implementation analysis to determine if the change management process or methodologies need to be modified.
  • If needed, redesign and recalibrate the AI model based on the findings, discussions, and risk and compliance considerations.
  • Incorporate appropriate human intervention throughout each component of the AI lifecycle.
  • Develop an AI feedback loop consisting of existing complaints and customer feedback to allow an organisation to understand and quickly resolve AI issues and/or defects.
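
As referenced in the challenger-model bullet above, the sketch below (Python/scikit-learn) compares a production "champion" algorithm against an alternative on the same hold-out data. The synthetic data and the two model choices are illustrative assumptions:

```python
# A minimal sketch of champion/challenger benchmarking on a shared hold-out
# set; the data and model choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
challenger = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("champion", champion), ("challenger", challenger)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name} hold-out AUC: {auc:.3f}")
```

A persistent gap in favour of the challenger is a signal to investigate the champion's fitness for use, not an automatic trigger to replace it.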

AI Risk Management Framework

Numerous organisations are intensely focused on gaining a competitive advantage through AI implementation. To succeed, organisations need to commit to monitoring and understanding risks posed by AI.

As AI becomes more prevalent, it is crucial for organisations to move into an agile risk target state to manage AI risks. An organisation can align its MRM infrastructure with the enhanced procedures and controls while incorporating new AI activity governance, agile implementation and effective challenge of AI tools. Establishing an AI risk framework will benefit an organisation’s ability and speed to innovate. The framework can be applied to all three lines of defence and updated regularly to reflect evolving best practices and regulatory expectations. The updated framework can leverage existing governance and risk management activities while catering to AI.

[Figure: AI risk management framework]

With an agile AI risk framework, organisations should, at a minimum, implement the following activities and concepts per the framework components:

1. Governance


  • A formalised governance structure will establish accountability around the execution of the AI lifecycle. It will also assign appropriate resources and processes required to assess the design and performance of the AI tool.
  • Organisations will be required to ensure resources possess the appropriate skill sets needed to challenge, control and monitor the use of AI. However, due to the complexity of AI, the skill set needed to govern AI effectively will have to be tailored to the sustainability and specific business use of each AI tool.

- For example, a line-of-business SME will be needed to verify if the expected AI outputs are achieved, while a technology SME is needed to verify if the AI was efficiently integrated into an organisation’s technological infrastructure without falling into algorithmic loops that overload the system.

  • With the enhancement of the governance structure, organisations will need to incorporate the following:

- A formalised, documented, clear, and comprehensive definition of AI.

- Defined roles and responsibilities.

- A formalised and socialised project governance charter.

- A formalised and responsive change management process.

2. Inventory & Risk Assessment


  • Organisations will immediately need to revisit their tool inventories to ensure AI models are included (a minimal inventory-record sketch follows this list). A robust model inventory provides management with a comprehensive overview of all models in use, including model owners, restrictions on use and validation status. Lack of a robust method to update the model inventory on a regular basis can result in undocumented model changes, inefficient processes to risk rate models, and ineffective performance monitoring.
  • The organisation’s model risk assessment process, as required under regulatory guidance, will need to be formally adapted to incorporate AI. The risk assessment process will need to assess model impact risk, covering both the assumptions that are drawn from models and the impact of decisions based upon model output. Conducting a risk assessment allows an institution to understand inherent risks of the business, products and services, as well as the effectiveness of the controls in place. A periodic risk assessment will support appropriate scheduling of monitoring to ensure resources are allocated and risk is mitigated.
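
As referenced in the inventory bullet above, here is a hedged sketch in Python of one way to structure an inventory record so that owner, restrictions, risk tier and validation status are always captured. The field names, tier labels and example record are illustrative assumptions, not a regulatory schema:

```python
# A hedged sketch of a model inventory record; field names and values are
# illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInventoryRecord:
    model_id: str
    name: str
    owner: str
    business_use: str
    restrictions_on_use: str
    risk_tier: str            # e.g. "high" / "medium" / "low"
    validation_status: str    # e.g. "validated", "pending", "expired"
    last_validated: date

inventory = [
    ModelInventoryRecord(
        model_id="ML-0042", name="Retail credit-limit model",
        owner="Consumer Lending", business_use="Credit limit assignment",
        restrictions_on_use="Domestic retail portfolio only",
        risk_tier="high", validation_status="pending",
        last_validated=date(2019, 11, 1),
    ),
]

# Simple inventory query: flag high-risk models whose validation has lapsed.
stale = [r.model_id for r in inventory
         if r.risk_tier == "high" and r.validation_status != "validated"]
print("high-risk models needing validation:", stale)
```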

3. Data Aggregation & Quality


  • Organisations will need an effective and transparent process to improve underlying or input data throughout the model’s tenure. A formalised and documented model input change management process and communication plan is critical to the aggregation and quality of underlying or input data used in the AI tool. The key stakeholders (model owner, model user, model approver, and independent reviewer) will be required to maintain and/or understand the following components:

- Data quality and data set integration.

- Data architecture and data infrastructure.

- Understand, review, assess and remediate algorithms.

- Transparency of algorithms.

- Effective controls in place to guardrail underlying/input data.

4. Integrated Development & Implementation


  • The successful development and implementation of AI solutions within an enterprise depends largely on the design and effectiveness of the control and testing process. An enhanced control framework and continuous testing can help reduce inherent risks to a residual risk level that aligns with the organisation’s risk appetite and framework. Currently, organisations tend to test new initiatives within a sandbox environment; however, given the complexity and development of AI, they should consider configuring consistent and recurring testing outside a sandbox. Developing a control framework and testing process would allow organisations to identify gaps and potential options for improvement quickly. The control process should be determined by, and aligned with, an established and enhanced risk assessment framework. The risk assessment process is critical, as it helps to determine the controls needed to mitigate the inherent risks.
  • Organisations should consider the key risks generated from the use of AI. For example, data bias will require organisations to produce impartial decisions by examining the choice of data. As bias in AI can trigger costly errors, organisations will need to focus on the front end of the AI lifecycle: the development of the AI tool. One way to identify data bias is by benchmarking against other models or the opinions of SMEs. Appropriate de-biasing techniques should be used to remove bias from development data. In addition to traditional methods such as downscaling and quantile mapping, randomisation and sample weighting should be incorporated to correct data bias. The statistical soundness of selecting unbiased development and holdout data should be given extra emphasis for machine learning models (a sample-weighting sketch follows this list).
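
As referenced above, sample weighting is one of the simpler de-biasing techniques: training weights are set inversely proportional to group frequency so an under-represented group is not drowned out. The sketch below is a hedged illustration in Python/NumPy; the group labels and weighting formula are assumptions, and real de-biasing would be validated against fairness metrics:

```python
# A minimal sketch of sample weighting for de-biasing; group labels and the
# weighting formula are illustrative assumptions.
import numpy as np

groups = np.array(["A", "A", "A", "A", "A", "A", "A", "A", "B", "B"])

# weight_g = N / (n_groups * count_g): under-represented groups weigh more.
values, counts = np.unique(groups, return_counts=True)
weight_map = {g: len(groups) / (len(values) * c) for g, c in zip(values, counts)}
sample_weights = np.array([weight_map[g] for g in groups])

print(weight_map)  # {'A': 0.625, 'B': 2.5}
# These weights can be passed to most learners, e.g.
# model.fit(X, y, sample_weight=sample_weights)
```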

5. Ongoing Performance Monitoring


  • Performance monitoring is essential to mitigating risks connected to AI tools. Effective monitoring will help an organisation draw clear conclusions to support business decisions. An effective performance monitoring function comes from a highly automated monitoring and testing programme, using a common methodology and real-time reporting. Organisations can enhance the rigour of the performance monitoring function by using the techniques below (a drift-monitoring sketch follows these techniques):

- Real-time monitoring and bias output reporting.

- Results and output-based testing.

- Proactive trend, concentration and correlation identification.

- Assurance of appropriate and compliant recommendations.

- Continuous automated exception identification, alert system and reporting.

- Proper skill set.

- Repurposing workforce.

- Reskilling workforce.

- Multidisciplinary team structure with formal project management.
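
As referenced above, one way to automate exception identification is a population stability index (PSI) check for drift in input data. The sketch below is a hedged illustration in Python/NumPy; the synthetic baseline and live data are assumptions, and the 0.1/0.25 alert thresholds are common industry conventions rather than regulatory requirements:

```python
# A hedged sketch of drift monitoring with the population stability index;
# data and the 0.1/0.25 thresholds are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between baseline and live distributions (higher = more drift)."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # keep tail values in range
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # development-time input data
live = rng.normal(0.4, 1.0, 5000)      # shifted production data

score = psi(baseline, live)
alert = "OK" if score < 0.1 else "WATCH" if score < 0.25 else "ALERT"
print(f"PSI = {score:.3f} -> {alert}")
```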

  • Effective challenge requires the cooperation and alignment of all three lines of defence, as each plays a specific role. The first line of defence, specifically model developers and owners, works to understand and monitor the risks arising from the use of an AI tool. The second line, the model validators, independently establishes key protocols for risk and compliance decisions while working with model developers and owners. Lastly, the third line of defence, specifically audit, conducts its own tests to ensure that the residual model risk of the AI tool does not surpass the established risk appetite. The scope of activities by the third line of defence will remain similar to that under the traditional MRM framework; however, the third line will be required to expand its skill set to understand how AI algorithms work and their intended use, as well as the risk they pose to technology infrastructure and operations. To have the most impact, effective challenge must include the following:

- Two-way communication on strategic business and risk decisions as it relates to the use of the AI tool.

- Transparency and direction to business and risk leadership before issues arise from the use of the AI tool.

- Full use of the AI tool according to the established risk appetite.

  • Additionally, it will be critical for organisations to maintain human subject-matter oversight rather than relying strictly on software solutions to render analysis, as software may fail to understand the impact of results. Lastly, organisations should review and update policies, procedures and processes periodically to encompass the changes that AI brings, which, in turn, will help an organisation effectively evaluate an AI tool.

6. Independent Validation


  • As with any model, periodic independent validations[7] will continue to be a focal point of AI monitoring. To assess the innovations of AI, model validators will need to understand the challenges, such as a model’s fitness for use, and develop customised methods for validating AI tools. The validation will still be required to assess models broadly from four perspectives: conceptual soundness, process verification, ongoing monitoring and outcomes analysis (an outcomes-analysis sketch follows this list).
  • SR 11-7 and OCC 2011-12 require that model documentation be comprehensive and detailed enough that a knowledgeable third party could recreate the model without access to the model development code. The complexity of AI and the model development process are likely to make documentation of AI tools much more challenging than traditional model documentation. It is recommended that organisations standardise their model development and validation procedures for AI and provide a model documentation template consistent with regulatory expectations and their model risk management policies and standards.
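
As referenced above, the sketch below illustrates only the outcomes-analysis perspective: predicted probabilities are compared with realised outcomes via the Brier score and a simple calibration table. The synthetic data are an assumption; a real validation would use production outcomes:

```python
# A minimal sketch of outcomes analysis: Brier score plus a calibration
# table of predicted vs observed rates. Data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
pred = rng.uniform(0, 1, 2000)                          # predicted probabilities
actual = (rng.uniform(0, 1, 2000) < pred).astype(int)   # realised outcomes

brier = float(np.mean((pred - actual) ** 2))
print(f"Brier score: {brier:.3f}  (lower is better)")

# Calibration table: mean prediction vs observed rate per decile bucket.
buckets = np.digitize(pred, np.linspace(0, 1, 11)[1:-1])
for b in range(10):
    mask = buckets == b
    if mask.any():
        print(f"bucket {b}: predicted {pred[mask].mean():.2f}, "
              f"observed {actual[mask].mean():.2f}")
```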

7. Postmortem Review


  • An organisation will need to plan strategically and execute effectively on performance monitoring results, as postmortem reviews will be crucial to refining and improving the models. Organisations will need to thoroughly examine the analysis and explanation of the AI output, conduct bias and interpretability analysis, and review performance threshold exceptions and the controls in place. Based on these examinations and reviews, organisations will need to continually redesign and recalibrate the AI tool for continuous improvement.

Conclusion

With continued investment in AI, the use of AI in business processes and practices is only growing larger in scope and deeper in granularity. To stay ahead and monitor risk effectively and efficiently, organisations will need not only to utilise AI as a comprehensive and valued tool but also to practise agile risk and compliance management. Competitive advantages will come not only from how organisations use AI but also from how well they avoid mistakes, ensure smooth customer experiences, prevent violations of law and explain to customers and regulators what their AI is intended to do.

An AI tool will never be fully free of risk, but an efficient and effective AI risk management framework will keep risk manageable and enable organisations to respond to fluctuations in the outputs and decisions generated by AI. The key for all organisations using AI today is to build and maintain AI in a responsible and transparent way, which, in turn, will help reduce operational cost and, more important, maintain the confidence of customers.


[1] “The New Physics of Financial Services: How Artificial Intelligence Is Transforming the Financial Ecosystem,” World Economic Forum, Aug. 15, 2018.
[2] IDC Worldwide Artificial Intelligence Spending Guide.
[3] Forbes Insights.
[4] “What Are We Learning about Artificial Intelligence in Financial Services?”
[5] Validation of Machine Learning Models: Challenges and Alternatives.
[6] NYDFS Apple Card investigation.
[7] Validation of Machine Learning Models: Challenges and Alternatives.