Human v. machine: Tackling artificial intelligence risks in financial institutions

In the novel Tell the Machine Goodnight, Katie Williams tells the story of Pearl, a technician for Apricity Corporation, which has developed a machine that “uses a sophisticated metric, taking into account factors of which we are not consciously aware,” and, with 99.7% accuracy, offers recommendations for what will make people happy. Does this narrative provide a glimpse into our future? Will machines, using complex and unexplainable reasoning, and with a very high degree of accuracy, make all our decisions for us? You will have to read the book to find out how Pearl and her happiness machine fare. We are comfortable saying, though, that for the financial services industry we expect that machines, specifically artificial intelligence (AI), will significantly improve the efficiency and effectiveness of much of what we do today and will help us make better decisions. But, unlike Pearl, financial institution users will need not only to be able to explain how their machines work, but also to defend how they reach conclusions. Therein lies the challenge of AI for the financial services industry.

What is artificial intelligence?

Artificial intelligence is the “simulation of human intelligence processes by machines, especially computer systems.”[1] AI techniques include machine learning (i.e., how a computer develops its intelligence), natural language processing, automation and robotics, and machine vision.
While many people think of AI as a recent phenomenon, the beginnings of modern AI can be traced to the works of the classical philosophers, who attempted to describe the process of human thinking as the mechanical manipulation of symbols.[2] That said, the field of AI was not formally founded until 1956, when the term “artificial intelligence” was coined at a conference at Dartmouth College. For several decades thereafter, interest and investment in AI remained strong, but were eventually replaced by disappointment when AI failed to realise its expected potential of producing a machine as intelligent as a human. As the 21st century neared, successful applications of machine learning in both the public and private sectors, more powerful computer hardware, and the accessibility of enormous sets of data spurred new interest and advancements, many of which – e.g., IBM’s Watson, Amazon’s Alexa, Google Maps, and those chatbots on nearly every company’s website – are now familiar. In fact, AI has become so prevalent that we likely don’t give much thought anymore to its impact on our daily lives.

AI adoption in the financial services industry

Based on a recent global survey of over 500 financial services professionals, AI-enabled innovation is now mission critical for the financial services industry[3] – not surprising when the annual value of AI and analytics for global banking alone has been estimated at as much as $1 trillion.[4] AI is being used by the financial services industry in the front, middle, and back office. Use cases include:

  • Customer service engagement – use of the chatbots mentioned above to handle routine customer service requests.
  • Customer satisfaction – use of predictive analytics to detect emerging signs of customer dissatisfaction, even before the customer has submitted a complaint.
  • Know Your Customer – use of AI to perform digital ID verification in mere seconds to authenticate the address and true identities of individuals, to gather and extract data needed to meet customer onboarding requirements, and to risk rate customers.
  • Credit decisioning – use of machine learning algorithms that leverage traditional and nontraditional data to determine loan eligibility.
  • Securities trading – use of AI to monitor structured and unstructured data to make more accurate stock trading decisions.
  • Transaction processing – use of biometrics (voice, video, and fingerprint) to authenticate clients and authorise transactions.
  • Document processing – use of machine vision and natural language processing to scan and process documents.
  • Fraud/money laundering detection – use of machine learning to detect patterns of fraud and money laundering.
  • Early warning systems – use of AI to evaluate internal and external data to identify early indicators of stress that may signal customer credit problems.

The list goes on, and future uses are limited only by our imaginations. It’s worth noting that it’s not just the financial services industry that is interested in AI – financial regulators are as well. The UK’s Financial Conduct Authority, as an example, noted in its 2022/23 Business Plan[5] that it intends to use AI in supervision and enforcement.

The regulation of AI

The industry adoption of AI has attracted the attention of a variety of international bodies and national regulators who are focused on the best way to regulate the risks of AI without stifling innovation and undercutting the many potential benefits. Integral to their deliberations is determining where existing laws and regulations (e.g., data privacy, antidiscrimination) can be applied to AI and where the novel risks of AI (e.g., autonomous decision-making) may require new regulatory approaches. Given the complexities involved, some regulators, for now at least, are issuing principles and guidelines as they continue to evolve their thinking. However, draft regulation is under consultation in the EU, and other regulators are consulting with the public and private sectors to stimulate debate on the best way to regulate AI while encouraging its safe use.

The key challenges and risks of AI

Asking industry stakeholders to name the biggest challenges to achieving their AI goals will yield different responses, but several common themes emerge,[6] as seen in the chart below. Missing from the chart are the regulators, who would agree with many of these challenges and add some concerns focused on outcomes. One of the more comprehensive discussions about the challenges and risks of AI for the financial services industry is included in the February 2022 final report of the Artificial Intelligence Public-Private Forum,[7] a collaboration among the Bank of England, the Financial Conduct Authority, and the private sector. The following discussion draws on that document and the chart above to highlight the challenges that both regulators and the industry agree must be addressed in any risk assessment of AI.

Budget

Cost estimates for implementing complete AI solutions may range from US$20,000 to US$1,000,000,[8] not including ongoing operating costs for running AI models and storing huge data sets. It’s no wonder, then, that budget is a concern for most stakeholders and that significant attention needs to be paid to determining the ROI of AI investments – a challenging task in itself. Going into an AI project, financial institutions need to identify the problem they think AI can solve and determine what metrics, quantitative and qualitative, will be used to track results, while also defining the short- and long-term indicators of business success.
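To make the quantitative side of that tracking concrete, the minimal sketch below computes a simple multi-year ROI figure for a hypothetical AI project. Every cost and benefit number is an illustrative assumption, not a benchmark.

```python
# Minimal sketch: one quantitative metric an institution might track.
# All figures below are hypothetical illustrations.

def simple_roi(annual_benefit: float, annual_run_cost: float,
               upfront_cost: float, years: int) -> float:
    """Return ROI as a fraction of total cost over the evaluation horizon."""
    total_benefit = annual_benefit * years
    total_cost = upfront_cost + annual_run_cost * years
    return (total_benefit - total_cost) / total_cost

# Example: an assumed $400,000 build cost, $120,000/year to run,
# and $350,000/year in projected savings over a three-year horizon.
print(f"3-year ROI: {simple_roi(350_000, 120_000, 400_000, 3):.0%}")  # ~38%
```

In practice, a figure like this would sit alongside the qualitative indicators mentioned above (customer satisfaction, regulatory feedback) rather than drive the investment decision on its own.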

Technology infrastructure

Beyond the problem of non-integrated legacy systems that the financial services industry has dealt with historically, AI will further test technology infrastructure. To realise the potential of AI, financial institutions will need, among other considerations, state-of-the-art technology, high computing power and scalable storage capacity. Institutions that have underinvested in technology will first need to address those gaps.

Availability and integrity of data

Data is at the heart of both industry and regulatory concerns. The key feature of AI is its ability to process large volumes of data and unearth hidden patterns. The use of unstructured data (e.g., images for biometrics), data from third-party providers, and synthetic data (i.e., data that is artificially manufactured and not generated from real-world events) carries potential benefits but also risks. Establishing and maintaining data quality are thus considered key challenges in AI development.

Many financial institutions will have clearly established data strategies and governance programmes. However, preexisting control frameworks may need to be adapted for AI to govern a more complex mix of data sources, to document data quality, provenance, completeness and representativeness, and to support consistent monitoring.
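As an illustration of what that consistent monitoring might look like in code, the sketch below runs a few basic quality checks on a data extract. The column names, example data, and choice of checks are assumptions for illustration only.

```python
# Minimal sketch of automated data quality checks that could feed an AI data
# governance programme. Column names, example data, and checks are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str) -> dict:
    """Return simple completeness, uniqueness, and schema indicators."""
    return {
        "row_count": len(df),
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),  # share of non-null values per column
        "duplicate_keys": int(df[key_column].duplicated().sum()),   # possible lineage/provenance issues
        "columns": list(df.columns),                                 # schema snapshot for documentation
    }

# Example with a hypothetical customer extract
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "income": [52_000, None, 61_000, 48_000],
    "postcode": ["EC2A", "SW1A", None, "M1"],
})
print(quality_report(customers, key_column="customer_id"))
```

Checks like these only add value if their outputs are documented and escalated through the governance programme itself, which is where the adaptation described above comes in.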

Data privacy and security

Existing data governance frameworks should already consider the need to manage data privacy, data access, and compliance with data protection laws and regulations, including cross-border requirements. With AI, additional privacy risks may arise when data sets from a variety of internal and external sources are combined and used for a decisioning process, and these, too, must be managed. Cutting-edge AI systems may also be at greater security risk and a target for bad actors looking to exploit weaknesses in a financial institution’s operational resilience armoury.

Complexity

Complexity – the ability of technology to process vast data sets from various sources using complex models (which may include so-called deep learning models that “learn” and adapt) and produce complex outputs – is one of the key challenges for managing the risks arising from AI models. Many model governance frameworks and processes are not designed with this complexity in mind and will need to be revisited.

Explainability

Another significant challenge AI users face is explaining how certain AI models work, especially those that are dynamic, i.e., continually learning. There are increasing calls from the industry for more clarity on what level of explainability or interpretability is necessary. These calls prompt the question “Explainable to whom?” Explainability will vary according to the context: Every stakeholder (e.g., customer, regulator, data scientist, senior management) will require a different level of understanding.

For customers, the focus should be not only on model features, but also on the nature and clarity of communications. For example, a retail customer seeking a loan may want or deserve to know that AI will be used to make the credit decision, the data that will be used to make that decision, and how to challenge an unfavourable outcome if they believe it inaccurately represents their creditworthiness. In-depth details about the data used or the model parameters are unlikely to be of value to most retail customers, though they are critical for helping the institution and regulators understand and accept the model.
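For the data scientists and validators at the other end of that spectrum, one widely used, model-agnostic starting point is permutation importance. The sketch below is an assumption-laden illustration: it trains a hypothetical credit model on synthetic data with made-up feature names, then ranks the features by how much shuffling each one degrades held-out performance.

```python
# Minimal sketch of a model-agnostic explainability technique, permutation
# importance, using scikit-learn. The synthetic data and the credit-style
# feature names are illustrative assumptions, not a real scoring model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "tenure_months", "utilisation", "missed_payments"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Measure how much shuffling each feature degrades performance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>16}: {score:.3f}")
```

A ranking like this does not make a dynamic model fully explainable, but it gives each stakeholder group a defensible starting point for the level of detail they need.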

Fairness

One of the most cited concerns from regulators about AI is that model bias may lead to discriminatory outcomes. As treating customers fairly is a tenet of many regulatory and consumer protection laws, achieving fairness with AI is a key area of regulatory focus. Similarly, antidiscrimination legislation is enshrined in the laws of many countries and will apply to AI models.

Biased outcomes can be attributed to a variety of sources, including data, the model’s algorithm, the model itself, and the use and interpretation of inputs and outputs. Since biases may not be immediately apparent, outputs need to be challenged. A mechanism for challenging outputs could be defining a set of criteria for measuring bias in models and evaluating bias and fairness throughout the development life cycle, covering both technical and business issues. Although human-based modelling is also subject to unintended bias or errors, humans, unlike machines, can be held accountable for their decisions. Many regulators are therefore calling for a “human in the loop” in AI processes.
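One way to make such criteria concrete is to compute simple group-level metrics on model decisions. The sketch below computes approval rates by group and their ratio (often called a disparate impact ratio); the groups, decisions, and any threshold applied to the ratio are illustrative assumptions, not a regulatory standard.

```python
# Minimal sketch of one candidate fairness criterion: the ratio of approval
# rates between groups. Group labels and decisions are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   1 ],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates.to_dict())                          # approval rate per group
print(f"disparate impact ratio: {ratio:.2f}")   # values well below 1.0 warrant investigation
```

Metrics of this kind are a trigger for human review, not a verdict: a low ratio flags an outcome that the people accountable for the model need to investigate and explain.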

Model governance and validation

Model governance and validation are critical to the successful implementation and maintenance of AI systems. The complexity, speed and scale of AI models amplify existing model risks and present new challenges. For example, dynamic AI models learn continuously from live data and consequently generate variable outputs. As such, existing governance processes and assessments of model risk need to be realigned to this continuous adaptation cycle. Other factors in AI model risk management include unanticipated changes in the statistical properties of the input data (data drift), in the relationship between inputs and outputs (concept drift), or in model performance over time (model drift). The speed of change also differentiates AI model risk management from conventional model risk management. AI models can change quickly through many small, incremental updates. Many existing model validation processes are not structured to respond to rapid, small changes, which individually may not be material, but collectively may have a significant impact on the outcome. Validation procedures, therefore, need to be customised to effectively challenge core model components.
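As one concrete example of the ongoing monitoring this implies, the sketch below computes a population stability index (PSI), a common data drift measure, by comparing a live feature distribution against its training baseline. The bin count, the rule-of-thumb threshold in the comment, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of a data drift monitor using the population stability index.
# Bin count, threshold, and synthetic distributions are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])           # count out-of-range live values in edge bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                       # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(50_000, 10_000, 10_000)   # e.g., applicant income at training time
live = rng.normal(46_000, 12_000, 10_000)       # shifted live population
print(f"PSI = {psi(baseline, live):.3f}")        # rule of thumb: > 0.25 often triggers review
```

Individually small shifts may not breach a threshold, which is why monitors like this need to be run and logged continuously rather than only at periodic validation points.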

Board and management oversight

Boards of directors and senior management play a key role in challenging and providing oversight of the strategy, risk appetite, and culture of a financial institution. They must also play a key role in developing and implementing an effective AI risk management and controls framework. Achieving this requires an understanding of AI and a commitment to the investments in technology and human resources needed to ensure AI programmes function effectively. In countries such as the UK and Australia, where management accountability regimes are established, management will need to consider how to evidence that this oversight exists, including by instilling a culture of ethics and fairness.

Skills and upskilling

The technical skills needed to develop and sustain an effective AI programme include programming, data science and statistics, system development, neural network architecture, model development and validation, and more. And then there are the soft skills: creativity, analytical thinking, communication and the ability to work as part of a team. Financial institutions need to be willing to invest in people with these skills in adequate numbers. Just as important, they need to commit to upskilling other personnel with at least a working knowledge of the principles of AI to become “citizen data scientists” who are able to participate in the creative process and explain the impacts of AI to various stakeholders, including customers and regulators.

The artificial intelligence imperative: Call to action

The financial services industry has already made clear that it views AI as a strategic imperative and will continue to invest in and evaluate its applicability. While AI is still in the early stages of adoption and regulators are cautiously monitoring developments, the industry can demonstrate its willingness and ability to partner with regulators to ensure that its use of AI has public acceptance and is seen to deliver real benefits to society while minimising and managing potential risks. Achieving that degree of support is how we define success.

To achieve that success, we offer the following advice to financial institutions:

  • Understand the current state of play and the AI opportunities available.
  • Set a strategy, risk appetite, and budget for AI that recognise and address the challenges discussed above, including the impact on the institution’s culture and the long-term impact on operations and staffing.
  • Identify gaps (e.g., data, risk management, talent) that may hinder AI adoption and develop action plans for bridging them, and continually consider how these gaps may temper AI adoption plans.
  • Establish a centralised function, including representatives from all three lines of defence, for vetting AI opportunities before they are submitted to management for approval.
  • Consider engaging outside experts to supplement existing talent and/or to challenge AI plans.
  • Establish regular checkpoints to assess effectiveness of AI investments and determine whether course corrections are needed, or, in the words of management guru Tom Peters, “Test fast, fail fast, adjust fast.”
  • Develop a comprehensive training and awareness programme that focuses on upskilling current personnel at all levels and informing and educating customers about the use of AI.
  • Continue to engage transparently with regulators on advancements in AI to foster support and acceptance.
  • Never lose sight of the need for a “human in the loop.”
  • Don’t sit on the sidelines.

Not every financial institution can or should move at the same pace. However, ignoring the advances that AI can bring will threaten a financial institution’s ability to compete and possibly even its longer-term viability.

[1] “What is Artificial Intelligence (AI)?”, TechTarget, www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence.

[2] “A Brief History of Artificial Intelligence,” by Tanya Lewis, LiveScience, December 4, 2014, www.livescience.com/49007-history-of-artificial-intelligence.html.

[3] “State of AI in Financial Services: 2022 Trends,” NVIDIA Corporation, 2022, www.nvidia.com/content/dam/en-zz/Solutions/industries/finance/ai-financial-services-report-2022/fsi-survey-report-2022-web-1.pdf.

[4] “AI-Bank of the Future: Can Banks Meet the AI Challenge?”, McKinsey, September 2020, www.mckinsey.com/~/media/mckinsey/industries/financial%20services/our%20insights/ai%20bank%20of%20the%20future%20can%20banks%20meet%20the%20ai%20challenge/ai-bank-of-the-future-can-banks-meet-the-ai-challenge.pdf?shouldIndex=false.

[5] “Business Plan 2022/23,” Financial Conduct Authority, July 4, 2022, www.fca.org.uk/publications/business-plans/2022-23.

[6] “State of AI in Financial Services: 2022 Trends.”

[7] “Artificial Intelligence Public-Private Forum,” Bank of England and the Financial Conduct Authority, February 2022, www.bankofengland.co.uk/-/media/boe/files/fintech/ai-public-private-forum-final-report.pdf?la=en&hash=F432B83794DDF433D091AA5.

[8] “How Much Does Artificial Intelligence Cost in 2021?”, by Sayantani Sanyal, Analytics Insight, June 9, 2021, www.analyticsinsight.net/how-much-does-artificial-intelligence-cost-in-2021/.
