Artificial intelligence
Moderator:
How is OSFI considering AI – both for use by OSFI as the regulator, and for those OSFI regulates?
Superintendent Peter Routledge:
- AI is part of OSFI’s expanding integrity and security risk lens. AI technologies present both opportunities and risks to the financial system. OSFI is focused on how AI affects institutions’ financial resilience and operational, cyber, and integrity and security risk profiles.
- We encourage thoughtful adoption—AI tools may become essential for areas like fraud detection and cybersecurity, but institutional governance and control must keep pace with deployment.
- OSFI is taking a measured, fact-based approach to AI oversight. AI can both reduce and amplify existing risks, depending on how it is used, which makes those risks harder to identify and manage. That is why many of our existing risk management frameworks, including those for cybersecurity, third-party risk, and model risk (to name a few), remain highly relevant and provide a strong foundation for oversight.
- We are building supervisory capacity, and through our ongoing work and collaboration with experts, we are assessing how institutions are deploying AI — especially in areas like credit risk, model risk, cybersecurity, and decision-making. We’re also analyzing how AI may amplify or mask existing risks.
- Internally, OSFI is actively exploring the safe, transparent use of AI and automation to support supervision, analysis, and internal operations. Our approach prioritizes accountability, explainability, and data security.
Moderator:
AI has risks and opportunities. How is OSFI thinking about these?
Superintendent Peter Routledge:
- AI is a powerful tool—but it must be adopted responsibly. AI can enhance productivity and efficiency, both for financial institutions and for OSFI itself. But if deployed without proper safeguards, it can also amplify a range of existing risks.
- AI increases multiple risk types—both internally and externally. Internally, AI can raise model risk, operational risk, and legal or reputational risk (to name just a few). Externally, malicious actors using AI tools can increase cyber threats, financial fraud, and geopolitical interference.
- OSFI’s existing guidance already addresses many AI-related risks. We already supervise institutions’ management of model risk, cyber resilience, third-party relationships, and operational risk. These are foundational to mitigating AI risks.
- OSFI collaborates actively on AI oversight. We’re working with stakeholders and international partners to stay ahead of technological change.
- Through the Financial Industry Forum on Artificial Intelligence (FIFAI) series with the Global Risk Institute in 2022, we developed shared knowledge and responsible adoption principles, such as the EDGE framework (Explainability, Data, Governance, Ethics). In 2024, we collaborated with FCAC to co-publish the Risk Report – AI Uses and Risks at Federally Regulated Financial Institutions. Most recently, we began working with partners on the second FIFAI.
