Major U.S. bank, a pioneer in the use of machine learning models, teams with Protiviti to improve its model validation framework
An early adopter of machine learning (ML) models requested assistance with ML model validation after identifying internal expertise gaps.
Work with the client to improve their model validation framework with relevant ML protocols and apply it to a prioritized list of ML models.
A collaborative approach helped meet the validation goal and deadline and delivered an improved ML model validation framework, satisfying regulators and the board.
Following the financial crisis of 2007-2008, regulators issued specific guidance (notably the Federal Reserve's SR 11-7, Supervisory Guidance on Model Risk Management) to help banks reduce the risk of financial losses or other adverse consequences stemming from decisions based on incorrect or misused financial models. Since then, the guidance has become the model risk management bible for financial institutions. It is used to ensure that model validation, typically performed annually, can identify vulnerabilities in the models and manage them effectively.
Recently, the rapid advance and broader adoption of machine learning (ML) models have added complexity and time to the model validation process. In particular, ML models have exposed expertise gaps in in-house validation teams trained in traditional modeling techniques. Those gaps often include a shortage of quantitative expertise and an incomplete understanding of the statistical requirements for ML models, including the different ML techniques used to predict and classify data. Any of these gaps can undermine the validation process.
One U.S. bank, an early adopter of ML models, realized that its model validation team not only lacked the proper expertise to validate some of the bank’s models but was also short-staffed in the face of a regulatory deadline.
Although the bank had validation policies and procedures in place, the model validation team was aware that validating the bank’s ML models was far from straightforward and differed significantly from the process for validating traditional statistical models. Climbing the ML learning curve would take some time, which the team did not have. The bank’s head of model risk management requested assistance with the bank’s model validation efforts from Protiviti’s team of machine learning and artificial intelligence experts.
Improving the Framework
The bank had already ranked its ML models from high to low priority based on the potential financial exposure of using them incorrectly. Protiviti brought its ML model validation framework and a team of experts who joined forces with the bank’s validation personnel to work through this prioritized list. Thinking ahead, the bank’s team also asked us to spearhead a gap analysis of its own model validation framework against the supervisory guidance and Protiviti’s ML model validation framework. Working together, we identified the best practices from each framework, including the elements necessary to validate the bank’s ML models, and updated the bank’s framework accordingly.
Additionally, as the project progressed, the bank and Protiviti developed tools to continually assess the bank’s improved framework to ensure that it remained focused on the most important aspects of the ML models and that validation protocols for those models would not fall through the cracks.
The client’s team, composed of model validation and risk managers and led by the head of machine learning, gave the project its full support and dedicated all the necessary resources. For example, bank stakeholders gave Protiviti access to a computing and data system, unfamiliar to our team, on which the ML models ran, and facilitated communication with the system’s vendor when questions arose. This proactive, collaborative approach created an atmosphere of success in which the two organizations frequently problem-solved and drafted validation reports together.
By laying the proper foundation and combining frameworks early in the process, Protiviti and the bank set the project on the right course from the beginning. The enhanced ML validation foundation enabled a broad scope of ML models — text mining, supervised learning, anomaly detection, natural language processing, etc. — to be validated in a matter of weeks. The validations were appropriately rigorous and tailored to the risk profile and complexity of the machine learning technique, and in alignment with regulatory expectations.
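To give a flavor of the kind of quantitative test such validations rely on, the sketch below computes a population stability index (PSI), a drift check commonly used in banking model validation. This is an illustrative example only, not the bank’s actual protocol; the function name and thresholds are assumptions for the sketch.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI), a common drift check in model
    validation. Compares the distribution of model scores at development
    time (`expected`) with recent production scores (`actual`).
    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 monitor,
    > 0.25 investigate."""
    # Bin edges taken from the development-time distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores

    exp_frac = np.histogram(expected, edges)[0] / len(expected)
    act_frac = np.histogram(actual, edges)[0] / len(actual)

    # Floor the fractions to avoid log(0) and division by zero.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)

    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Identical distributions give PSI of zero; a shifted production
# population gives a clearly elevated value.
rng = np.random.default_rng(0)
dev_scores = rng.normal(0.0, 1.0, 10_000)
prod_scores = rng.normal(0.5, 1.0, 10_000)  # drifted population
psi = population_stability_index(dev_scores, prod_scores)
```

In practice a validator would run checks like this per model, alongside technique-specific tests (holdout performance, feature importance stability, and so on), with the depth of testing scaled to each model’s risk ranking.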
The increasing adoption of machine learning in the financial industry will continue to put pressure on banks to identify and manage the associated financial risks of ML models. That task promises to grow only more complex over time as artificial intelligence capabilities advance. Seeking expert guidance that helps banks implement a robust ML model validation framework will not only ease the validation burden, but it will also give banks confidence that their models will pass muster with regulators.