Reducing Risk Through Model Validation

POWERFUL INSIGHTS

Virtually no organization is immune to model risk. The last two decades are filled with examples of model risk manifesting itself in significant losses for organizations that failed to pay enough attention to model limitations. Hundreds of billions of dollars have been lost to faulty models or the improper use of models.
However, while model risk is unavoidable, it can be managed and controlled.


“Those institutions faring better during the recent turmoil generally placed relatively more emphasis on validation, independent review, and other controls for models and similar quantitative techniques. They also continually refined their models and applied a healthy dose of skepticism to model output.”

–Ben Bernanke, 2008¹


Issue

Quantitative models are once again under a microscope. Models are often blamed for inaccurate valuations, incorrect risk management strategies and significant investment losses. Though models alone did not cause the global financial crisis, many assign a large portion of the blame to them.

The financial crisis is not the first instance of significant losses related to the use of quantitative models. In 1998, Long-Term Capital Management (LTCM) collapsed following Russia’s default on its debt; its models relied on historically estimated correlations and assumed little contagion across markets. In 1992, Deutsche Bank lost $500 million due to the use of flat volatility in its options pricing models.

Every model-related crisis underscores the inherent dangers in overreliance on quantitative modeling. However, the benefits of such analytical tools are too great to ignore. Despite some current skepticism, financial markets will continue to innovate and rely on models.

Challenges and Opportunities

Models are simplified and idealized representations of the real world and, by definition, will be wrong in some cases. Model failure can be caused by a variety of limitations, such as:

  • Use of incorrect assumptions
  • Inappropriate calibration
  • Development based on deficient data
  • Flaws in the logical or theoretical structure
  • Incorrect coding or implementation of an otherwise valid, well-calibrated model
  • Confirmation bias (models are used to confirm prior beliefs, and results that contradict the desired outcome are disregarded)
  • Improper application (models developed for one set of products or circumstances are applied blindly to a different set of products or circumstances)

While every model has some limitations, it is important to know the extent to which a model can be wrong yet remain useful. Well-controlled model development and robust model validation create the opportunity to identify and reduce the impact of model limitations.
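
To make the “deficient data” and “improper application” limitations above concrete, the sketch below is a purely hypothetical illustration in Python (all names and numbers are invented for this example): a simple model is calibrated on a narrow range of data, performs well there, and then produces large, unrecognized errors when applied well outside that range.

```python
import numpy as np

# Hypothetical example: a "model" calibrated on a narrow range of inputs.
# The true relationship is nonlinear, but within the calibration window it
# looks approximately linear, so a linear fit appears adequate.
rng = np.random.default_rng(0)

def true_process(x):
    """Stand-in for the real-world relationship the model approximates."""
    return np.exp(0.5 * x)

# Calibration data drawn only from a benign range (x in [0, 1]).
x_calib = rng.uniform(0.0, 1.0, size=200)
y_calib = true_process(x_calib) + rng.normal(0.0, 0.02, size=200)

# Fit a simple linear model on that range.
slope, intercept = np.polyfit(x_calib, y_calib, deg=1)
model = lambda x: slope * x + intercept

# Inside the calibration range the approximation error is small ...
print("error at x=0.5:", abs(model(0.5) - true_process(0.5)))
# ... but applying the same model well outside that range (a stressed input)
# produces a much larger error: the improper-application failure mode.
print("error at x=3.0:", abs(model(3.0) - true_process(3.0)))
```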

Our Point of View

Organizations should apply lessons from historical model-related losses to improve current modeling practices.

Many model or process failures can be avoided through meaningful model validation. Too often, however, validations focus narrowly on discrete model components, such as testing mathematical formulas, or are not performed at intervals that capture changing market and economic conditions. Such a narrow approach generally fails to identify weaknesses across the entire modeling framework, such as model failure at extreme inputs, hard-coded parameters or invalid assumptions.
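
As a minimal sketch of what “looking beyond the formula” can mean, the Python snippet below runs a few behavioral checks on a toy probability-of-default model. The model, its coefficients and the thresholds are all hypothetical and purely illustrative; they are not any institution’s implementation or a complete validation program.

```python
import math

# Hypothetical toy model used only to illustrate behavioral validation checks.
def pd_model(risk_driver, stress_multiplier=1.0):
    """Toy logistic PD model; the coefficients stand in for calibrated parameters."""
    z = -2.0 + 0.8 * risk_driver
    return min(1.0, stress_multiplier / (1.0 + math.exp(-z)))

def validate_pd_model(model):
    # 1. Output must remain a valid probability, including at extreme inputs.
    for driver in (-100.0, -10.0, 0.0, 10.0, 100.0):
        p = model(driver)
        assert 0.0 <= p <= 1.0, f"PD outside [0, 1] at risk_driver={driver}"

    # 2. Output should be monotone in the risk driver (a higher driver value
    #    should never lower the PD in this toy specification).
    assert model(2.0) >= model(1.0) >= model(0.0), "PD not monotone in risk driver"

    # 3. Stressed parameters must not push output past its logical boundary.
    assert model(10.0, stress_multiplier=2.0) <= 1.0, "stress scenario breaches boundary"

validate_pd_model(pd_model)
print("all checks passed on this toy model")
```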

A comprehensive approach to model validation provides greater coverage and yields greater benefits to model developers and users. Holistic validation of a model includes a review of its inputs, assumptions, theory, analytics, output, and associated governance and controls in order to determine whether the model is operating as designed and remains fit for its intended use.

PROVEN DELIVERY 

How We Help Companies Succeed

Our Model Risk Management practice helps organizations by assessing, designing and implementing model governance programs and by conducting independent model validations. Our holistic approach to model validation includes evaluating the model governance framework and underlying model structure, inputs, assumptions, analytics and output.

We also develop customized quantitative models, refine and calibrate existing models, and design stress testing and scenario analysis programs to supplement existing analytics.

Example

A large financial institution wanted to increase its understanding of the economic capital modeling framework it had implemented. Protiviti performed a validation of the institution’s economic capital framework, including credit, interest rate, operational and market risk models. We conducted a thorough review of model theory, framework, assumptions and processes, and created parallel replica models to validate the client’s model analytics and calculations. Our validation identified a number of weaknesses in the client’s model, such as:

  • Technical documentation too incomplete to allow independent replication of the methodology
  • Excessive sensitivity to key inputs, which caused model output to exceed maximum boundaries, especially under stress scenarios
  • Inconsistent distribution assumptions for market risk modeling: lognormal for income simulation and normal for valuation
  • Incorrect parameters hard-coded into the model, creating 60 percent model error in some critical circumstances
  • Potential model error of 10 percent due to improper implementation of the approved modeling framework
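
As a purely hypothetical sketch of the parallel-replica idea described in this example, the Python snippet below re-implements a simple calculation from documented methodology and compares the two results against an agreed tolerance. The names `client_var` and `replica_var`, the data and the tolerance are invented for illustration and are not the client’s actual analytics; a gap above tolerance is the kind of discrepancy that can point to hard-coded parameters or implementation errors.

```python
import numpy as np

# Shared input data for both implementations (illustrative only).
rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=10_000)

def client_var(pnl, confidence=0.99):
    """Output of the model under review (here, a simple historical VaR)."""
    return -np.quantile(pnl, 1.0 - confidence)

def replica_var(pnl, confidence=0.99):
    """Independently coded replica built from the documented methodology."""
    sorted_pnl = np.sort(pnl)
    idx = int(np.floor((1.0 - confidence) * len(sorted_pnl)))
    return -sorted_pnl[idx]

original = client_var(returns)
replica = replica_var(returns)
relative_gap = abs(original - replica) / abs(replica)

# A gap above the agreed tolerance flags a potential implementation or
# documentation issue for further investigation.
tolerance = 0.01
print(f"client: {original:.6f}  replica: {replica:.6f}  gap: {relative_gap:.2%}")
print("within tolerance" if relative_gap <= tolerance else "exceeds tolerance - investigate")
```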

Protiviti’s team provided recommendations to address the limitations identified, improve the model’s suitability and increase model transparency to executive management.

¹ At the Federal Reserve Bank of Chicago’s Annual Conference on Bank Structure and Competition, Chicago, Illinois, May 15, 2008.

Contacts

Cory Gunderson
+1.212.708.6313
[email protected]
Shaheen Dil, Ph.D.
+1.212.603.8378
[email protected]

Ready to work with us?