MAS’ AI Governance Mandate Sets the Bar for Financial Institutions

By Dr. Bernard Tan, Protiviti Singapore Director and cybersecurity expert, and Sam Bassett, Protiviti Singapore Managing Director and Technology Lead

Artificial intelligence has moved decisively beyond the experimentation phase in financial services. What began as advanced analytics and predictive modelling has rapidly evolved into generative AI copilots, autonomous agents, embedded decision engines and customer-facing AI systems. In many institutions, AI is no longer peripheral. It is becoming an essential component of operational infrastructure.

Against this backdrop, the Monetary Authority of Singapore (MAS) has issued its proposed Guidelines on Artificial Intelligence Risk Management (AIRG). The direction is unmistakable. As AI adoption accelerates, governance expectations and practices in financial services must mature in parallel.

While the new Guidelines complement MAS’ existing Fairness, Ethics, Accountability and Transparency (FEAT) principles, they go further in operationalising how AI risks should be governed across financial institutions. AIRG signals a structural shift that elevates AI from a technology initiative to a board-level risk category.

Elevating AI to the board agenda

A defining feature of AIRG is its explicit emphasis on board and senior management accountability. MAS makes clear that leadership plays a critical role in establishing and overseeing robust AI risk management frameworks. This is more than symbolic.

Historically, AI governance has often been fragmented across innovation teams, data science functions and model risk units. AIRG reframes AI risk as enterprise risk.
Boards are expected to define AI risk appetite, embed AI into enterprise risk frameworks, ensure clear lines of accountability and, where risks are material, establish appropriate governance forums such as cross-functional AI risk committees.

For directors and senior executives, this raises fundamental strategic questions:

- Where is AI embedded in core business processes?
- Which AI systems materially affect customer outcomes or financial performance?
- What is our tolerance for model opacity, bias risk or automation risk?
- Do we have credible escalation and oversight mechanisms?

In effect, MAS is asking institutions to treat AI not as experimental innovation, but rather as a managed strategic capability.

The inventory imperative

One of the most operationally significant expectations under AIRG is the requirement for financial institutions to establish and maintain an accurate, up-to-date inventory of AI use cases, systems and models. This requirement reflects a practical truth: AI cannot be governed if it cannot be seen.

In many organisations, AI capabilities have proliferated organically, embedded in vendor solutions, business unit tools, digital platforms and internal productivity applications. Without a consolidated view, institutions risk fragmented oversight and inconsistent controls.

Beyond simple identification, MAS expects financial institutions to assess the risk materiality of each AI use case. At a minimum, this assessment needs to consider:

- Impact, including financial, operational, customer and regulatory consequences
- Complexity, including model novelty and explainability challenges
- Reliance, including degree of autonomy and availability of human alternatives

Higher-risk AI systems must be subject to more stringent controls.

This structured classification exercise is likely to be transformative. It often reveals shadow AI deployments, unclear ownership structures and inconsistent validation practices.
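The impact, complexity and reliance assessment lends itself to a simple scoring exercise. The following Python sketch is illustrative only: the criterion names mirror the Guidelines, but the scoring scale, the additive aggregation and the tier thresholds are assumptions for this sketch, not values prescribed by MAS.

```python
from dataclasses import dataclass

# Illustrative 1-3 scores per criterion; scales and cut-offs are assumptions.
@dataclass
class AIUseCase:
    name: str
    impact: int      # financial, operational, customer, regulatory consequences
    complexity: int  # model novelty, explainability challenges
    reliance: int    # degree of autonomy, availability of human alternatives

def materiality_tier(uc: AIUseCase) -> str:
    """Map a use case to a control tier from its aggregate score."""
    score = uc.impact + uc.complexity + uc.reliance
    if score >= 7 or uc.impact == 3:   # any high-impact case escalates
        return "high"    # independent validation, enhanced monitoring
    if score >= 5:
        return "medium"  # standard lifecycle controls
    return "low"         # baseline policy requirements

inventory = [
    AIUseCase("credit-scoring model", impact=3, complexity=2, reliance=3),
    AIUseCase("email-drafting copilot", impact=1, complexity=1, reliance=1),
]
for uc in inventory:
    print(uc.name, "->", materiality_tier(uc))
```

In practice the escalation rule matters as much as the arithmetic: a single high-impact dimension can warrant the top tier regardless of the total, which is one way to keep proportionality from diluting oversight of customer-facing systems.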
More importantly, it establishes the foundation for proportionate governance, which is a central principle of AIRG.

Lifecycle governance, not point-in-time controls

Another highlight of the proposed Guidelines is the emphasis on end-to-end lifecycle controls. AI risk is dynamic. Models evolve. Data drifts. Use cases expand. Generative AI systems may behave unpredictably under novel prompts. Governance frameworks that rely solely on pre-deployment checks are insufficient.

MAS expects institutions to implement controls that span the entire AI lifecycle, including:

- Data management, quality and security
- Fairness and bias mitigation
- Transparency and explainability standards
- Human oversight mechanisms
- Independent validation for higher-risk AI
- Ongoing monitoring and incident management
- Technology and cybersecurity safeguards
- Change management and safe decommissioning

The inclusion of elements such as adversarial testing, monitoring thresholds and kill switches highlights MAS’ recognition that AI risks can crystallise in real time.

The lifecycle approach defined in the AIRG aligns AI governance more closely with existing model risk management and technology risk disciplines, while extending them to address the distinct characteristics of generative and autonomous systems.

Proportionate but not optional

A notable strength of AIRG is its proportionate application framework. Where AI is deeply embedded or mission-critical, full governance expectations apply. Where AI is used in a limited, assistive capacity (for example, drafting emails or summarising documents), financial institutions are subject to more basic policy requirements.

However, proportionate does not mean permissive. Even assistive AI usage requires clear ownership, defined boundaries for acceptable use, human review protocols and approved tool governance.
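The monitoring thresholds and kill switches referenced in the lifecycle controls can be sketched in a few lines. This is a minimal illustration, not an implementation: the metric names, threshold values and fallback behaviour are assumptions, and a real deployment would wire the kill switch into incident management and escalation workflows.

```python
# Hypothetical ongoing-monitoring check; metrics and limits are illustrative.
THRESHOLDS = {
    "prediction_drift": 0.15,  # max tolerated shift vs. validation baseline
    "error_rate": 0.05,        # max tolerated live error rate
}

def evaluate_metrics(metrics: dict[str, float]) -> list[str]:
    """Return the names of any metrics that breach their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

def monitor(metrics: dict[str, float], kill_switch) -> bool:
    """Disable the model and signal an incident if any threshold is breached."""
    breaches = evaluate_metrics(metrics)
    if breaches:
        kill_switch()  # e.g. route traffic to a human-reviewed fallback
        return False   # model taken out of service
    return True        # model remains live

# Usage: a drift breach trips the kill switch.
disabled = []
ok = monitor({"prediction_drift": 0.22, "error_rate": 0.01},
             kill_switch=lambda: disabled.append(True))
```

The design point is that breach detection and the disabling action are separate, auditable steps, so the same thresholds can feed alerting, escalation and decommissioning decisions.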
Given the rapid internal deployment of generative AI tools across many financial institutions, this clarification is timely. The message from MAS is clear: all AI use in financial services requires governance. The intensity of control should match the level of risk.

Capability as a strategic differentiator

Beyond structural controls, MAS emphasises the importance of adequate skills, training and technological capacity to manage AI responsibly. This is a critical point. AI governance cannot be sustained through policy documents and a “checklist” mindset alone. Institutions must invest in:

- Board and senior management education
- Cross-functional risk and compliance capability
- Technical validation expertise
- Robust infrastructure and cybersecurity resilience

As AI systems grow more sophisticated, so too must the capabilities required to oversee them. Institutions that fail to build internal understanding risk either over-constraining innovation or underestimating exposure.

From compliance exercise to strategic enabler

It would be easy to interpret AIRG as introducing another compliance obligation in an already complex regulatory environment. That view would miss the opportunity for growth the Guidelines present. Strong AI governance enables rather than constrains innovation. Clear risk appetite statements empower responsible experimentation. Structured inventories allow institutions to scale high-value AI use cases confidently. Robust monitoring builds customer and regulatory trust. AI governance, in fact, is now a leadership mandate.

Financial institutions stand at a pivotal moment. AI capabilities will continue to advance. Customer expectations will evolve. Regulatory scrutiny will intensify. The institutions that succeed will not be those that retreat from AI innovation.
They will be those that pair technological ambition and enablement with governance discipline, transforming AI from an experimental tool into a resilient, trusted and strategically governed capability.

Call to action

- Conduct an AIRG readiness assessment: Establish a clear baseline of your current AI governance, controls and oversight against MAS expectations. Identify high-risk use cases, such as GenAI and customer-facing AI, that may require immediate attention.
- Clarify governance and accountability: Ensure board and senior management ownership of AI risk. Define clear roles across the three lines of defence and formalise escalation and oversight structures for material AI initiatives.
- Build a centralised AI inventory and risk materiality framework: Implement consistent AI identification criteria and develop a centralised inventory. Apply structured risk materiality assessments so that higher-impact AI systems receive proportionate controls.
- Embed proportionate lifecycle controls: Strengthen controls across data governance, fairness and explainability. Ensure AI risks are monitored, documented and managed consistently across the full lifecycle.
- Invest in capability and culture: Equip boards, management and risk teams with the knowledge to oversee AI effectively. Foster a risk-aware culture that balances innovation with control and accountability.

How Protiviti can help

Protiviti supports financial institutions in operationalising MAS AIRG requirements through a structured and proportionate approach. We conduct readiness and gap assessments to benchmark current practices against MAS expectations. We help design and strengthen AI governance structures, including board and senior management oversight, clear three-lines-of-defence accountability, AI risk policies and cross-functional AI oversight forums.
We also establish robust AI identification, centralised inventory and risk materiality frameworks to ensure higher-risk use cases, such as GenAI and customer-facing AI, receive appropriate oversight.

Across the AI lifecycle, Protiviti designs and embeds practical controls covering data governance, fairness and explainability. We align policies with MAS FEAT principles, TRM Guidelines and outsourcing expectations to ensure regulatory consistency. Beyond framework design, we deliver board and management training, AI risk capability building and independent assessment services, including governance reviews and internal audit program design, helping institutions strengthen AI resilience.

About the authors

Dr Bernard Tan is a director at Protiviti Singapore with over 25 years of experience in financial services and consulting, with proven expertise in IT, cybersecurity, digital banking, and operational and anti-money laundering (AML) audits. He has been responsible for the APAC IT Audit and Data Analytics teams. Bernard is an elected board director of the Singapore ISACA Chapter and has served as a panel judge for the Singapore Cybersecurity Awards. Additionally, he has been a speaker and moderator at various technology and cybersecurity conferences.

Sam Bassett is the country leader for Protiviti Singapore. With over 25 years' experience, he has worked primarily in financial services, with consulting firms or directly in the banking industry, across Asia, Europe and the Middle East. His strengths are in building client and stakeholder relationships, managing and engaging teams, and delivering change to support strategic, tactical and operational goals.