Use of AI in Financial Institutions: Focus on Governance and Risk Management

February 18, 2025
Dr. Tamara Teves
The importance of artificial intelligence (“AI”) has increased significantly across all areas of life in recent years, particularly in the financial industry. Accordingly, the Swiss Financial Market Supervisory Authority (“FINMA”) addressed the risks associated with AI in its Risk Monitor 2023 and shared its supervisory experience with AI in FINMA Guidance 08/2024, titled “Governance and Risk Management in the Use of Artificial Intelligence,” published on December 18, 2024. In addition, on February 12, 2025, the Swiss Federal Council published an overview of possible approaches to regulating AI in Switzerland.
The Federal Council has opted for a regulatory approach aimed at strengthening Switzerland’s position as an innovation hub, safeguarding fundamental rights, including economic freedom, and enhancing public trust in AI.
It is encouraging that Switzerland continues to rely on its proven principles of technology-neutral, principles-based, and targeted regulation for AI, as it has done with the regulation of DLT, blockchain, and digital assets.
New technologies drive efficiency gains, cost reductions, new business opportunities, and scalability. At the same time, FINMA recognizes the risks associated with the use of AI and therefore expects appropriate risk management and solid governance from financial institutions.
Risks to be Considered
Based on its supervisory activities, FINMA identifies operational risks as the primary concern for financial institutions, particularly model risks (e.g., lack of robustness, accuracy, or explainability, as well as bias) and IT and cyber risks.
Additional risks stem from increased dependency on third parties, such as hardware or cloud service providers. There are also legal and reputational risks, as well as challenges in assigning internal responsibilities. As a result, autonomous or poorly explainable system behavior, combined with dispersed responsibilities, makes the use of AI applications difficult to approve for licensed financial institutions and license applicants alike.
What Financial Institutions Must Consider to Meet FINMA’s Expectations When Using AI
1. Strong Governance: AI applications are expected to demonstrate robustness, accuracy, freedom from bias, stability, and explainability. Regular testing (see point 3) and the clear assignment of responsibilities to employees with appropriate skills and experience are essential. When AI applications are procured externally, the same standards apply. It is therefore advisable to address testing, control mechanisms, data quality standards, responsibilities, and liability early in outsourcing contracts. In addition, financial institutions must ensure that the service providers involved have the necessary expertise and experience.
2. Data Quality Guidelines and Controls: Internal guidelines and controls are necessary to ensure data quality for AI applications. Since AI systems often learn autonomously from data without human intervention, data origin and quality are often more critical than model selection. Poor data quality can result from inaccuracies, inconsistencies, incompleteness, non-representative samples, or outdated information. Historical data may carry biases that skew future predictions or may no longer be representative due to changing environments. There is also a risk of unintentionally using deliberately manipulated data. Regulators expect financial institutions to establish internal policies and guidelines to ensure data completeness, accuracy, and integrity, as well as secure data availability and access (FINMA Guidance 08/2024, dated December 18, 2024, p. 5); a simple illustration of such automated checks follows this list.
3. Testing and Ongoing Monitoring: Careful selection of performance indicators, regular testing, and continuous internal monitoring of AI applications are required. Employees should define key questions and performance expectations in advance and verify that these indicators are met, ideally before deploying the AI application. This includes monitoring and testing AI outputs when input data changes (see the monitoring sketch after this list).
4. Explainability of AI Results: AI application results must be traceable, explainable, and reproducible. Financial institutions must be able to conduct and document critical internal evaluations of these results.
5. Independent Review Requirement: As with blockchain technology, FINMA expects developers (not users) of AI applications to arrange independent reviews by qualified personnel or third parties. For significant applications, FINMA assesses whether the independent review provides an objective, expert, and unbiased opinion on the suitability and reliability of the AI application for a specific use case, and whether the results of the review were considered during the development process (FINMA Guidance 08/2024, dated December 18, 2024, p. 7).
6. Internal Documentation (Manuals or Guidelines): Financial institutions are expected to maintain comprehensible internal documentation (manuals or guidelines) for their employees. Such documentation should cover the application’s purpose, data selection and preparation processes, model selection, performance metrics, assumptions, limitations, testing procedures, controls, and fallback solutions.
7. Risk Inventory with Risk Classification (Risk Management and Control Matrix): Financial institutions must establish consistent criteria for identifying AI applications, their significance, and their specific risks and likelihood of occurrence, depending on their use cases and functions. A risk management or internal control system (ICS) matrix is recommended for maintaining such a risk inventory (a simplified example entry is sketched below).
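For illustration only, the following minimal sketch shows how the data-quality expectations in point 2 could be translated into automated checks before a model is (re)trained. It assumes the data arrives as a pandas DataFrame; the column name "as_of_date", the thresholds, and the function name are illustrative assumptions, not FINMA requirements.

```python
# Minimal sketch of automated data-quality checks before (re)training an AI model.
# Column names, thresholds, and the freshness window are illustrative assumptions.
import pandas as pd

def run_data_quality_checks(df: pd.DataFrame,
                            timestamp_col: str = "as_of_date",
                            max_missing_ratio: float = 0.01,
                            max_age_days: int = 30) -> list[str]:
    """Return a list of human-readable findings; an empty list means all checks passed."""
    findings = []

    # Completeness: flag columns with too many missing values.
    missing = df.isna().mean()
    for col, ratio in missing.items():
        if ratio > max_missing_ratio:
            findings.append(f"Column '{col}' is {ratio:.1%} missing (limit {max_missing_ratio:.1%}).")

    # Integrity: flag exact duplicate records.
    n_duplicates = int(df.duplicated().sum())
    if n_duplicates:
        findings.append(f"{n_duplicates} duplicate rows found.")

    # Timeliness: flag data older than the agreed refresh window.
    age_days = (pd.Timestamp.now() - pd.to_datetime(df[timestamp_col]).max()).days
    if age_days > max_age_days:
        findings.append(f"Newest record is {age_days} days old (limit {max_age_days} days).")

    return findings
```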
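Likewise, the ongoing monitoring described in point 3 can be reduced to comparing observed performance indicators against expectations defined before deployment. The metric names and threshold values below are illustrative assumptions an institution might agree internally, not prescribed values.

```python
# Minimal sketch of output monitoring against pre-defined performance expectations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceThreshold:
    metric: str                      # e.g. "accuracy" or "false_positive_rate" (illustrative)
    minimum: Optional[float] = None  # breach if observed value falls below this
    maximum: Optional[float] = None  # breach if observed value rises above this

THRESHOLDS = [
    PerformanceThreshold("accuracy", minimum=0.90),
    PerformanceThreshold("false_positive_rate", maximum=0.05),
]

def evaluate(observed: dict) -> list:
    """Compare observed metrics on the latest batch to the agreed thresholds."""
    breaches = []
    for t in THRESHOLDS:
        value = observed.get(t.metric)
        if value is None:
            breaches.append(f"Metric '{t.metric}' was not reported.")
        elif t.minimum is not None and value < t.minimum:
            breaches.append(f"{t.metric}={value:.3f} below minimum {t.minimum:.3f}.")
        elif t.maximum is not None and value > t.maximum:
            breaches.append(f"{t.metric}={value:.3f} above maximum {t.maximum:.3f}.")
    return breaches

# Example: escalate to the responsible model owner if any threshold is breached.
issues = evaluate({"accuracy": 0.87, "false_positive_rate": 0.02})
if issues:
    print("Escalate to model owner:", issues)
```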
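Finally, the risk inventory in point 7 can be kept in any format; the sketch below merely illustrates one possible structure for a single entry, using an assumed five-point likelihood and impact scale and illustrative classification thresholds.

```python
# Minimal sketch of one entry in an AI risk inventory (risk management / ICS matrix).
# The scales, thresholds, and labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    application: str   # name of the AI application
    use_case: str      # business function it supports
    risk: str          # identified risk, e.g. model bias or data drift
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    control: str       # mitigating control and responsible owner

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def classification(self) -> str:
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

# Example entry (fictitious).
entry = AIRiskEntry(
    application="Client onboarding scoring",
    use_case="KYC risk rating",
    risk="Bias in historical training data",
    likelihood=3,
    impact=4,
    control="Quarterly bias testing; owner: Head of Model Risk",
)
print(entry.classification)  # -> "medium" (score 12)
```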
Given that the Swiss Federal Council considers the financial sector a sensitive area, alongside healthcare and law enforcement, financial institutions must fully understand the logic behind AI-driven decisions. This requirement applies even though financial data is not classified as "particularly sensitive data" under the Swiss Data Protection Act (DSG) (cf. Federal Office of Justice (BJ), Rechtliche Basisanalyse im Rahmen der Auslegeordnung zu den Regulierungsansätzen im Bereich künstliche Intelligenz, dated August 31, 2024, p. 41). For instance, when AI is used to generate or execute investment recommendations, the underlying decision-making logic must be understood, tested, documented, and retested, and all associated risks must be identified both before and during the deployment of the AI system.
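As a practical illustration of such traceability, the sketch below shows one way an institution could record each AI-generated recommendation so that it can later be reproduced and explained. The field names and the file format are assumptions for illustration, not a prescribed method.

```python
# Minimal sketch of an audit-log record for an AI-generated investment recommendation,
# capturing what would be needed to reproduce and explain the decision later.
import json
from datetime import datetime, timezone

def log_recommendation(model_id: str, model_version: str, inputs: dict,
                       output: dict, random_seed: int,
                       path: str = "ai_decision_log.jsonl") -> None:
    """Append one traceable decision record to a JSON-lines log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to a tested, documented model release
        "random_seed": random_seed,      # needed to reproduce stochastic outputs
        "inputs": inputs,                # the data the model actually saw
        "output": output,                # the recommendation and its confidence
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```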
While using (and especially developing) AI applications in financial institutions may involve a significant administrative burden, this effort can be highly beneficial. Thorough preparation and documentation help build a robust risk management matrix and, most importantly, effective control mechanisms. A well-structured risk management framework is essential to ensuring the resilience and sustainable success of financial institutions.
The Regulatory Team at Advoro is happy to assist you with these matters.