
Deep Dive into AI-Model Risk Management

In mid-2024, the Monetary Authority of Singapore (MAS) conducted a thematic review of financial institutions’ model risk management practices for Artificial Intelligence (AI), including Generative AI. The review highlighted several key areas where financial institutions (FIs) could improve their AI governance and risk management. This topic will be of particular interest to FIs that:
a) Deploy AI in their investment methodology; and/or
b) Use AI across operational business functions as a tool or support system to streamline administrative processes.

Governance and Oversight
MAS has identified several key issues related to AI governance in the finance sector and recommended best practices to address these challenges. One major issue is the lack of a comprehensive AI governance framework, which has led to inconsistent approaches in AI development, deployment and risk management across FIs.

To address this, MAS recommends that FIs, particularly the larger ones, establish cross-functional oversight forums to coordinate AI governance and risk management across different departments. Additionally, FIs should review and update their existing policies and procedures to specifically address AI-related risks, such as model bias, explainability1 and operational risks. Developing clear ethical guidelines for AI use is also essential, with an emphasis on fairness, transparency and accountability.

MAS also suggests that FIs implement robust data management practices to ensure data quality and integrity. Furthermore, FIs should adhere to rigorous model development and validation processes to ensure the accuracy and reliability of their AI models. Ongoing model monitoring and retraining are essential to address issues such as model drift and performance degradation. In addition, FIs must identify and manage operational risks associated with AI, including data breaches and system failures, and ensure robust risk management practices for third-party AI providers.

Many FIs also face challenges due to insufficient in-house AI capabilities, which hinder their ability to effectively develop, deploy and manage AI models. FIs are therefore recommended to invest in building their AI capabilities through targeted training and education.

By addressing these issues and implementing the recommended solutions, FIs can effectively manage AI risks, ensure responsible AI use and maintain a competitive edge in the rapidly evolving AI landscape.

Key Risk Management Systems and Processes
One of the significant challenges FIs face is accurately identifying AI usage across various departments. Due to the broad and evolving nature of AI technologies, it can be difficult to define and categorise the different types of AI models being utilised. This complexity can lead to gaps in understanding the full scope of AI integration within the FI.

To address this challenge, FIs should establish clear definitions of AI, considering critical factors such as model complexity, autonomy and the potential impact of the models in question. A centralised inventory of AI models is essential to maintain a comprehensive view, capturing key attributes such as the purpose, scope and risk profile of each model. Furthermore, regular reviews and updates to this inventory are necessary to ensure that it reflects the most current state of the FI’s AI landscape.
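To make the inventory idea concrete, the record below is a minimal illustrative sketch in Python; the field names, model names and review dates are hypothetical examples, not attributes prescribed by MAS.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIModelRecord:
    """One entry in a centralised AI model inventory (illustrative fields only)."""
    name: str
    purpose: str          # business use, e.g. credit decisioning
    scope: str            # departments / processes the model touches
    risk_profile: str     # e.g. "low", "medium", "high"
    owner: str            # accountable business unit
    last_reviewed: date   # supports the periodic-review requirement

# Hypothetical inventory entries for illustration.
inventory = [
    AIModelRecord("credit-scorer-v3", "retail credit decisions",
                  "Lending", "high", "Credit Risk", date(2024, 6, 1)),
    AIModelRecord("email-triage", "routes customer emails",
                  "Operations", "low", "Customer Ops", date(2024, 3, 15)),
]

# A comprehensive view makes questions such as "which high-risk models
# are overdue for review?" a one-line query.
overdue_high_risk = [
    m.name for m in inventory
    if m.risk_profile == "high" and m.last_reviewed < date(2024, 6, 30)
]
print(overdue_high_risk)  # → ['credit-scorer-v3']
```

Keeping purpose, scope and risk profile as structured fields, rather than free text scattered across departments, is what makes the regular reviews MAS recommends practical to run.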

Assessing the risk materiality of AI models is a critical task in determining how to allocate resources and implement effective risk management strategies. The risk posed by AI systems can vary significantly depending on factors such as model complexity, the potential impact on the FI and the degree of reliance on AI in key business processes. Without a structured approach to assessing risk, FIs may struggle to identify high-risk areas that require immediate attention.

To effectively assess risk materiality, FIs should conduct a comprehensive risk assessment that considers multiple dimensions, including:
a) potential impact on the business;
b) complexity of the models; and
c) degree to which AI is embedded into operations.

It is also important to regularly review the risk materiality of AI models to reflect any changes in the AI landscape or the FI’s evolving risk profile.
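One simple way to operationalise a multi-dimensional assessment is a weighted scoring model that maps dimension ratings to a materiality tier. The sketch below is illustrative only: the weights, 1–5 rating scale and tier thresholds are assumptions an FI would need to calibrate for itself.

```python
# Hypothetical weights for the three dimensions discussed above.
DIMENSIONS = {"business_impact": 0.5, "model_complexity": 0.3, "embeddedness": 0.2}

def materiality_tier(scores: dict) -> str:
    """Map 1-5 ratings on each dimension to a weighted risk tier."""
    total = sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)
    if total >= 4.0:
        return "high"
    if total >= 2.5:
        return "medium"
    return "low"

# A complex, high-impact model embedded in core operations scores "high".
print(materiality_tier({"business_impact": 5,
                        "model_complexity": 4,
                        "embeddedness": 3}))  # → high
```

Because the risk landscape evolves, the same scoring function can simply be re-run at each periodic review with updated ratings, so tier changes are detected rather than assumed.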

Implementing effective risk management for AI requires a comprehensive approach that spans the entire AI lifecycle, from model development and deployment to ongoing monitoring. Risk management must cover all stages to ensure that AI models are robust, perform as expected and comply with ethical and regulatory standards.

A robust model development and validation process is crucial. This should include:
a) rigorous data quality checks;
b) appropriate model selection; and
c) continuous performance evaluation.

Ethical considerations must also be incorporated throughout the lifecycle, ensuring fairness, transparency and accountability in AI systems. Establishing strong change management processes is essential to properly manage updates and alterations to AI models and their underlying infrastructure. Last but not least, comprehensive disaster recovery and business continuity plans should be in place to mitigate the impact of potential disruptions caused by AI failures.

By addressing these key issues and implementing the proposed solutions, FIs can better manage AI risks and ensure that AI technologies are used responsibly and ethically across the enterprise.

Development and Deployment of AI
A significant concern highlighted in the review is ensuring the quality and reliability of AI models. This encompasses several factors, starting with data quality, as the relevance, representativeness and integrity of data used to train and validate models are crucial for ensuring model performance. Rigorous model validation processes must be followed to assess the performance, accuracy and robustness of AI models, ensuring they meet the necessary standards before deployment.

Given the complexity of AI models, it is essential to develop techniques that can explain the decision-making process of these models, enhancing transparency and trust. Furthermore, ensuring fairness in AI models is paramount. Bias in data or algorithms can lead to inequitable outcomes, so employing bias mitigation techniques is necessary to ensure that models generate fair, unbiased results.

To address these issues, MAS recommends several solutions. FIs should implement robust data management practices to ensure that the data used is of high quality, integrity and security. Additionally, they should adhere to best practices in model development and validation, such as using appropriate statistical techniques and machine learning algorithms to mitigate risks like overfitting2. Rigorous testing of models is essential to evaluate performance, accuracy and robustness under different scenarios. To enhance model transparency, organisations should develop and implement techniques for model explainability. Finally, employing bias mitigation techniques is critical to ensuring fair and equitable outcomes, reducing the risk of discrimination or unintended consequences in AI decision-making.
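The overfitting risk mentioned above is easiest to see with a held-out dataset: a model that merely memorises its training data looks perfect in-sample but fails on data it has not seen. The toy models below are deliberately simplistic illustrations, not a validation methodology.

```python
import random

random.seed(0)

# Toy task: predict whether an integer is even.
train = [(x, x % 2 == 0) for x in random.sample(range(1000), 50)]
test = [(x, x % 2 == 0) for x in random.sample(range(1000, 2000), 50)]

# An "overfitted" model: a lookup table that memorises training pairs
# and falls back to a constant guess for anything unseen.
memorised = dict(train)
def overfit_model(x):
    return memorised.get(x, False)

# A generalising model that has learned the actual rule.
def general_model(x):
    return x % 2 == 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(overfit_model, train))  # perfect on seen data
print(accuracy(overfit_model, test))   # collapses on unseen data
print(accuracy(general_model, test))   # generalises to unseen data
```

This is why validation against data held out from training is a standard part of the rigorous testing MAS calls for: in-sample performance alone cannot distinguish the two models.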

Once AI models are deployed, continuous monitoring is essential to identify and address issues related to their performance. Over time, models may experience performance degradation or drift due to changes in underlying data or operational conditions. To maintain model accuracy and reliability, it is crucial to establish effective monitoring and retraining procedures. Additionally, managing changes to AI models and their underlying infrastructure requires a structured approach, as such changes can have significant operational impact. Implementing effective change management processes ensures that any alterations to AI models are controlled, documented and appropriately evaluated before deployment, minimising potential disruptions and maintaining the integrity of the model over time.

Regularly assessing model performance helps identify issues early and ensures that models continue to deliver accurate results.
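One widely used drift metric that such a monitoring procedure might compute is the Population Stability Index (PSI), which compares the distribution of model inputs or scores in production against the distribution at deployment. The sketch below is a minimal pure-Python illustration; the binned distributions and the 0.25 alert threshold are assumed example values (0.25 is a common rule of thumb, not a regulatory requirement).

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions
    (each a list of proportions summing to 1)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at deployment
current = [0.10, 0.15, 0.30, 0.45]    # distribution observed in production

drift = psi(baseline, current)
if drift > 0.25:  # common rule-of-thumb threshold for significant drift
    print(f"PSI={drift:.3f}: significant drift, review/retraining triggered")
```

Running such a check on a schedule turns "regularly assessing model performance" into an automated control that flags drift before it degrades decisions.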

By addressing these key issues and implementing the proposed solutions, FIs can effectively manage the risks associated with AI, ensuring the responsible, transparent and ethical use of this technology. Adopting these best practices will not only help in mitigating AI-related risks but also promote innovation and enhance the long-term effectiveness of AI systems.

Conclusion
The thematic review highlights the importance of robust AI risk management practices for FIs. Key areas of focus include establishing strong governance and oversight, implementing rigorous risk management processes, enhancing AI capabilities, promoting transparency and explainability and prioritising ethical AI. By addressing these challenges, FIs can harness the power of AI while mitigating risks, ensuring responsible AI use and maintaining a competitive edge in the rapidly evolving AI landscape.

What does this mean for you?
Stay ahead of the curve and ensure your business thrives in a rapidly evolving AI landscape.

As technology progresses and shapes the financial industry, ask our expert Singapore team at Curia Regis to assist you with tailored compliance solutions to meet your unique business needs. We’ll guide you through the complexities of MAS regulations and emerging global standards, minimising compliance risk and maximising opportunities.

Want to learn more? Contact us at [email protected] or follow our LinkedIn page at https://uk.linkedin.com/company/curiaregis for the latest regulatory updates.

  1. Ability to understand how an AI model arrived at a particular decision or prediction ↩︎
  2. When an AI model learns the training data too well, it performs extremely well on the training data but poorly on new data it has not seen during training ↩︎
