MAS AI Risk Management

Ensuring Responsible AI Use: MAS Guidelines For Financial Institutions In Singapore

Are you prepared to embed responsible AI practices at the heart of your organisation?

With generative AI copilots, autonomous agents, and embedded decision engines becoming operational infrastructure, artificial intelligence is reshaping the financial sector. Recognising this, the Monetary Authority of Singapore (MAS) has proposed new guidelines on AI risk management for all financial institutions (FIs), ensuring AI - including generative and agent-based models - is used safely, ethically, and in line with robust risk management expectations. These guidelines are designed to complement, not replace, MAS’s FEAT principles (Fairness, Ethics, Accountability, Transparency), supporting innovation while safeguarding trust and integrity.

MAS organises its expectations for AI risk management into five foundational pillars:

  1. Board and senior management oversight
  2. AI identification and inventory with risk materiality assessment
  3. End-to-end lifecycle controls
  4. Capability and capacity building
  5. Proportionate application based on business criticality

These guidelines signal a shift in how AI is perceived within organisations - from a mere technological initiative to a significant board-level risk category.

Business outcomes of robust AI risk management

By aligning with the MAS guidelines, FIs can achieve:

  • Stronger regulatory compliance and reduced risk of supervisory findings or penalties
  • Enhanced board and management oversight over AI-driven strategic initiatives
  • Improved customer trust through responsible, transparent, and fair AI deployment
  • Reduced operational, model, and third-party risks across the AI lifecycle
  • Greater innovation confidence with proportionate controls for higher-risk AI use cases

How you can prepare

To meet MAS expectations and future-proof your AI strategy, take these practical steps:

  1. Conduct an AI risk management guidelines (AIRG) readiness assessment:
    Establish a clear baseline for your current AI governance, controls and oversight against MAS expectations. Identify high-risk use cases, such as generative AI and customer-facing AI, that may require immediate action.
  2. Clarify governance and accountability:
    Ensure board and senior management own AI risk. Define clear roles across the three lines of defence and formalise escalation and oversight structures for material AI initiatives.
  3. Build a centralised AI inventory and risk materiality framework:
    Implement consistent AI identification criteria and develop a centralised inventory. Apply structured risk materiality assessments so that higher-impact AI systems receive proportionate controls.
  4. Embed proportionate lifecycle controls:
    Strengthen controls across data governance, fairness and explainability. Ensure AI risks are monitored, documented, and managed consistently across the full lifecycle.
  5. Invest in capability and culture:
    Equip boards, management and risk teams with the knowledge to oversee AI effectively. Foster a risk-aware culture that balances innovation with control and accountability.
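As an illustration of step 3, a structured risk materiality assessment can start as a simple scoring of each inventoried use case against the impact, complexity and reliance criteria mentioned on this page. The sketch below is purely illustrative: the 1–3 scales, thresholds and tier names are hypothetical assumptions, not prescribed by MAS.

```python
# Illustrative risk materiality scoring for a centralised AI inventory.
# Criteria (impact, complexity, reliance) follow the dimensions named in
# this page; the scales and tier thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    impact: int      # 1 (low) to 3 (high), e.g. customer-facing decisions
    complexity: int  # 1 to 3, e.g. generative or agent-based models score high
    reliance: int    # 1 to 3, degree of autonomy without human review


def materiality_tier(uc: AIUseCase) -> str:
    """Map a use case to a control tier; higher tiers warrant
    proportionately stricter lifecycle controls and oversight."""
    score = uc.impact + uc.complexity + uc.reliance
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"


inventory = [
    AIUseCase("credit-decision copilot", impact=3, complexity=3, reliance=2),
    AIUseCase("internal document search", impact=1, complexity=2, reliance=1),
]
for uc in inventory:
    print(f"{uc.name}: {materiality_tier(uc)}")
```

In practice the scoring dimensions, weights and escalation thresholds would be calibrated with the board and second line of defence, and recorded alongside each inventory entry so higher-impact systems demonstrably receive proportionate controls.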

Protiviti helps financial institutions through comprehensive AIRG readiness and gap assessments, benchmarking existing governance, risk, and control frameworks against MAS expectations. We design and enhance AI governance structures, policies, and operating models, including board oversight and cross-functional AI risk committees.

Our team builds centralised AI inventories and risk materiality methodologies aligned to impact, complexity, and reliance criteria. We embed end-to-end lifecycle controls covering data, fairness, explainability, validation, third-party risk, monitoring, and incident management.

In addition, we strengthen technology and cybersecurity controls, deliver targeted AI risk training, and provide independent assurance and audit support to prepare institutions for MAS supervisory reviews.

  1. Board oversight
    What information does our board need to fulfil its oversight responsibilities for AI risk?
  2. High-risk use cases
    How should financial institutions identify and prioritise high-risk AI use cases?
  3. Lifecycle controls
    What lifecycle controls are required under MAS’s AI risk management guidelines?
  4. Regulatory alignment
    How can we ensure our AI risk management framework is aligned with MAS FEAT principles and regulatory expectations?

Find out more about Protiviti Singapore’s consulting services:

Blogs

March 11, 2026
8 min read

MAS’ AI Governance Mandate Sets the Bar for Financial Institutions

Artificial intelligence has moved decisively beyond the experimentation phase in financial services. What began as advanced analytics and predictive modelling has rapidly evolved into generative AI copilots, autonomous agents, embedded decision engines and customer-facing AI systems. In many institutions, AI is no longer peripheral. It is becoming an essential...
