AI Risk: Practical Considerations for Financial Institutions

Artificial Intelligence (AI) is no longer a peripheral experiment in financial services. Today, AI systems are actively deployed across credit assessment, fraud detection, customer onboarding, advisory tools, and operational automation. These systems increasingly influence decisions with direct financial and consumer impact.

In November 2025, the Monetary Authority of Singapore (MAS) issued a consultation paper on the proposed Guidelines on Artificial Intelligence (AI) Risk Management (the Guidelines). The Guidelines are timely and important: they make clear that AI now sits firmly within MAS’ oversight rather than being treated as an innovation side project.

From Innovation Tool to Regulated Risk

Until recently, AI systems were treated as technology enablers, owned by IT teams or external vendors and managed outside traditional risk frameworks. The Guidelines decisively change this approach by positioning AI as a formal risk category, comparable to credit, market, operational, and compliance risk.

Under the Guidelines, Regulated Financial Institutions are expected to implement controls across the full AI lifecycle, including:

  • board and senior management oversight,
  • end-to-end model lifecycle governance,
  • data quality and bias management,
  • explainability and transparency,
  • system robustness and resilience, and
  • continuous monitoring and review.

The rationale is straightforward: when AI systems influence financial outcomes, failures or bias can directly impact investors, market integrity, and regulatory standing.

Real-World Case Studies: Why Governance Matters

1. Bias and Fairness in AI-Driven Decision-Making

A widely cited global example is the Apple Card controversy, where the credit-limit algorithm was alleged to offer significantly higher limits to men than to women with similar financial profiles. The lack of transparency around how decisions were made triggered public backlash and regulatory attention.

While Singapore has not experienced a similarly publicised case, the authorities have been clear that the absence of public enforcement does not imply regulatory tolerance. Rather, Singapore’s approach is preventative. AI systems used in onboarding, credit assessment, transaction monitoring, and customer segmentation are expected to undergo fairness testing and outcome monitoring to avoid “silent harm” before it escalates.

Lesson for financial institutions:

AI models used for credit or risk decisions must be explainable, auditable, and demonstrably fair. Even perceived bias in AI-driven decision systems, whether used in onboarding, client segmentation, or risk scoring, can erode trust and attract regulatory attention. MAS’ emphasis on fairness, data governance, and explainability is designed to prevent such outcomes before they escalate.
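
Fairness testing of this kind can begin with simple outcome metrics. The following is a minimal sketch, not a MAS-prescribed method: it computes the disparate impact ratio across groups (the “four-fifths rule” heuristic). The column names, sample data, and the 0.8 threshold are illustrative assumptions.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest approval rate across groups.

    A common screening heuristic (the "four-fifths rule") flags values
    below 0.8 for investigation. This is one coarse metric, not a full
    fairness assessment.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative data: column names and values are hypothetical.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
if ratio < 0.8:  # illustrative threshold, not a regulatory standard
    print(f"Disparate impact ratio {ratio:.2f} below 0.8 - escalate for review")
```

A single ratio like this is only a trigger for deeper review; it does not establish or rule out unfairness on its own.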

2. Historical Data and Embedded Bias

Globally, several major banks have faced allegations that AI-influenced mortgage lending systems resulted in discriminatory outcomes across certain demographic groups. These cases highlight how historical data, if not properly assessed, can embed and perpetuate bias when used to train AI models.

This risk is particularly relevant in Singapore’s data-rich financial environment, where behavioural and transactional datasets are heavily relied upon. Without disciplined governance, AI systems trained on legacy patterns may unintentionally reinforce exclusionary or skewed outcomes in credit scoring, fraud thresholds, or customer risk profiling.

Lesson for financial institutions:

In Singapore’s data-rich environment, AI models trained on behavioural or transactional datasets may unintentionally reinforce skewed outcomes in areas such as client risk profiling, transaction surveillance thresholds, or suitability assessments. MAS expects firms to actively test for bias and monitor outcomes throughout the AI lifecycle — not merely at deployment.
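
One way to make “throughout the AI lifecycle” concrete is to compare live model scores against the distribution observed at training time. The sketch below, a minimal illustration assuming a numeric model score, uses the population stability index (PSI), a common drift statistic; the ten-bin layout and the 0.2 alert threshold are rule-of-thumb conventions rather than figures from the Guidelines.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time baseline and live production values.

    Rule-of-thumb interpretation (convention, not regulation):
    < 0.1 stable, 0.1-0.2 watch, > 0.2 material drift.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid division by zero / log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 10_000)  # scores at model approval (synthetic)
prod_scores = rng.normal(0.6, 0.1, 10_000)   # scores observed in production (synthetic)
psi = population_stability_index(train_scores, prod_scores)
print(f"PSI = {psi:.3f}" + (" - investigate drift" if psi > 0.2 else ""))
```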

3. Regulatory Consequences of Weak AI Governance

In the United States, regulators have imposed significant financial penalties on banks where algorithmic credit models disproportionately rejected applications from minority groups. Investigations cited biased training data, insufficient oversight, and weak governance frameworks as contributing factors.

Lesson for financial institutions:

MAS’ approach is preventative, but the accountability principle remains. AI failures are not treated as technical glitches — they are governance and compliance failures. Responsibility for AI outcomes rests squarely with boards and senior management, regardless of whether models are developed internally or sourced from third-party vendors.

4. AI-Driven Fraud and External Threats — A Singapore Reality

In Singapore, AI risk is not limited to internal models. AI-powered fraud is already a material external threat. Recent industry reporting highlights a growing use of AI-assisted impersonation, synthetic messaging, and automated social-engineering techniques to bypass traditional controls. These tactics allow fraudsters to scale scams more efficiently and tailor them to specific victims, making detection significantly more challenging for Regulated Financial Institutions.

Reports indicate that digital banks in Singapore have experienced millions of dollars in scam-related losses, alongside rising dispute volumes and operational strain. While not all incidents are explicitly labelled as “AI-driven”, the increasing sophistication and automation of scam methods point to the growing role of AI-enabled tools in facilitating these attacks.

Singapore authorities, including MAS, the Singapore Police Force (SPF), and the Cyber Security Agency (CSA), have issued public advisories warning of scams involving AI-generated deepfake audio and impersonation. These scams often target finance teams and senior management, exploiting trust in internal processes to induce fraudulent payments or disclosure of sensitive information.

Global research and reporting shows that AI is increasingly being leveraged to enhance the sophistication of investment and cryptocurrency scams. Analysts note that AI tools can generate realistic phishing content, fabricate fake trading platforms and endorsements, and automate social-engineering campaigns that deceive investors into transferring funds or engaging with fraudulent services. Deepfake technology, in particular, is being used to impersonate trusted figures and exploit investor trust in online environments. While these trends are documented internationally, they reflect risk patterns that are directly relevant to Singapore’s investor and digital asset landscape.

Lesson for financial institutions:

AI governance must extend beyond internal models to encompass external threat exposure, particularly where AI is leveraged by malicious actors to exploit trust, speed, and automation. In this context, MAS’ focus on resilience, monitoring, and escalation reflects the reality that AI is also reshaping the threat landscape that fund managers must defend against, including risks to payment controls, authorisation workflows, and operational security.

5. Over-Reliance on AI Outputs and Operational Oversight

A governance-focused example involved a major bank reversing workforce decisions after discovering that an AI voice bot failed to deliver expected efficiency gains. The issue was not technological failure alone, but over-reliance on AI outputs without adequate validation or human oversight.

Lesson for financial institutions:

AI-driven insights should inform, not replace, human judgement. MAS’ emphasis on human-in-the-loop controls and fallback mechanisms is especially relevant where AI outputs influence investment decisions, surveillance alerts, or operational responses.
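
In practice, a human-in-the-loop control can be as simple as a confidence gate that routes uncertain model outputs to a reviewer rather than auto-actioning them. The sketch below is a hypothetical illustration; the threshold value and field names are assumptions, and a production control would itself need to be calibrated, logged, and governed.

```python
from dataclasses import dataclass

# Illustrative threshold; in practice it would be calibrated and governed.
REVIEW_THRESHOLD = 0.90

@dataclass
class ModelOutput:
    case_id: str
    decision: str      # e.g. "approve" / "decline" / "flag"
    confidence: float  # model-reported confidence in [0, 1]

def route(output: ModelOutput) -> str:
    """Auto-action only high-confidence outputs; send the rest to a human."""
    if output.confidence >= REVIEW_THRESHOLD:
        return f"{output.case_id}: auto-actioned ({output.decision})"
    return f"{output.case_id}: queued for human review ({output.decision}, conf={output.confidence:.2f})"

for o in [ModelOutput("TX-1", "approve", 0.97), ModelOutput("TX-2", "flag", 0.62)]:
    print(route(o))
```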

What MAS is Ultimately Guarding Against

Taken together, these cases illustrate the risks MAS seeks to address through its AI Risk Management Guidelines:

  • opaque “black-box” decision-making,
  • untested bias in AI outputs,
  • over-reliance on third-party vendors,
  • insufficient board awareness of AI risks, and
  • weak escalation or incident-response frameworks.

In articulating these risks, MAS has also made clear that the proposed Guidelines are not developed in isolation. They build on earlier information papers and supervisory work on artificial intelligence, advanced analytics, and model risk management, which identified similar governance gaps and good practices across the industry. The proposed Guidelines consolidate these earlier insights into a more structured and comprehensive governance framework.

MAS’ message is not anti-innovation. Rather, it reflects the expectation that AI systems influencing financial outcomes must meet the same governance standards as any other critical function.

Practical Implications for Regulated Financial Institutions

To align with MAS’ proposed supervisory expectations, Regulated Financial Institutions should take a proportionate, risk-based approach, starting with an assessment of whether, and how, AI is used within the organisation. This is particularly important given that AI may be embedded within tools and systems that firms do not always consciously label as “AI”.

As an initial step, a Regulated Financial Institution should determine:

  • whether AI is currently deployed or embedded in business processes, including within investment, risk, compliance, operational or support functions;
  • the nature, purpose and materiality of those use cases, considering the extent to which AI influences decisions, outcomes or controls;
  • the potential risks arising from such usage, including governance, data, fairness, operational and regulatory risks; and
  • whether AI-enabled features embedded in third-party platforms or tools may access, process, store, or analyse sensitive or confidential information without the firm’s explicit awareness or intent.

In particular, such entities should recognise that certain “always-on” or default AI features within email, cloud, document management, or productivity platforms may process data automatically. Even where there is no intention to deploy AI, such features may still interact with sensitive information unless actively restricted.

Accordingly, Regulated Financial Institutions should assess:

  • what data such AI-enabled features can access,
  • whether that data may be retained, analysed, or used for model improvement by third-party providers, and
  • whether the firm has the practical ability to restrict, disable, or switch off such AI functions where the associated risks are not acceptable.

Based on this assessment, such entities should then focus on the following:

  • AI Use-Case Mapping: Identify and maintain a clear inventory of where and how AI is used (or embedded by default) across the organisation, together with an assessment of the associated risks; a minimal sketch of such an inventory follows this list.
  • Governance & Accountability: Assign clear ownership for AI use and AI-related risks at senior management and board levels, commensurate with the extent and materiality of AI adoption.
  • Policy & Control Frameworks: Institute basic policies governing the use of AI, including permitted and disallowed uses, data access boundaries, and review mechanisms, scaled appropriately to the firm’s level of AI adoption.
  • Vendor Oversight: Strengthen due diligence and contractual controls for third-party AI solutions, including clarity on data access, retention, and training practices.
  • Monitoring & Review: Where AI is integrated into business processes or platforms, implement ongoing checks to identify bias, model drift, unintended data exposure, or control weaknesses.
  • Training & Awareness: Ensure staff are adequately trained on applicable AI guidelines and understand how AI-enabled features operate, the types of data they may access or process, and the relevant escalation pathways and governance expectations.
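
To make the use-case mapping item above concrete, an inventory can be kept as structured records rather than free text, so it can be queried during reviews and inspections. The schema below is a hypothetical sketch; the fields are illustrative assumptions, not a MAS-mandated format.

```python
from dataclasses import dataclass, field
from enum import Enum

class Materiality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str
    business_function: str          # e.g. investment, compliance, operations
    vendor_or_internal: str
    influences_decisions: bool      # does the output drive outcomes or controls?
    data_accessed: list[str] = field(default_factory=list)
    materiality: Materiality = Materiality.LOW
    owner: str = "unassigned"       # accountable senior owner

# Illustrative entry; names and values are hypothetical.
inventory = [
    AIUseCase(
        name="Transaction monitoring scorer",
        business_function="compliance",
        vendor_or_internal="vendor",
        influences_decisions=True,
        data_accessed=["transactions", "customer profiles"],
        materiality=Materiality.HIGH,
        owner="Head of Compliance",
    ),
]

# Simple governance query: high-materiality use cases without a named owner.
gaps = [u.name for u in inventory
        if u.materiality is Materiality.HIGH and u.owner == "unassigned"]
print("Ownership gaps:", gaps or "none")
```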

Proactive and proportionate action will not only reduce regulatory exposure but also help Regulated Financial Institutions avoid inadvertent data leakage, strengthen investor confidence, and improve inspection readiness.

Conclusion: Getting AI Governance Right Before It Goes Wrong

The Guidelines are timely and pragmatic. Global and local experience shows that AI-related failures, whether due to bias, opacity, or weak oversight, can escalate rapidly into regulatory, legal, and reputational crises.

For Singapore fund managers, payment entities involved in cross-border and domestic money transfers, crypto-related businesses, trust service providers, and similar firms, disciplined AI governance is no longer optional. It is a core component of sustainable operations, regulatory credibility, and long-term trust.

How Curia Regis Can Support

Curia Regis works with Regulated Financial Institutions in Singapore across the Capital Markets space as well as Payment Services, and the intersection of the two. This includes:

  • designing AI governance and risk frameworks,
  • integrating AI into compliance and enterprise risk management,
  • conducting independent AI readiness and control assessments, and
  • aligning operational practices with MAS supervisory expectations.

As regulatory focus on AI intensifies, proactive governance will be the differentiator between resilient regulated entities and ill-prepared ones.