On December 22, the National Credit Union Administration (NCUA) updated its Artificial Intelligence (AI) resource page to consolidate key technical and policy references for federally insured credit unions. The page sits within NCUA's broader cybersecurity and financial technology resources and is explicitly framed as support for evaluating third‑party AI vendors and performing due diligence on them. It links AI oversight back to existing NCUA guidance on third‑party relationships, including 07‑CU‑13 (Evaluating Third Party Relationships) and 01‑CU‑20 (Due Diligence Over Third Party Service Providers).
NCUA notes that credit unions are increasingly using AI to enhance member service, streamline operations, and remain competitive, while also facing AI‑specific risks such as algorithmic opacity, fair lending concerns, data privacy and security, operational resilience, and model risk. The resources on the page are presented as tools to help address those issues rather than as new regulatory requirements.
AI Governance and Risk Management
For AI governance, NCUA directs credit unions to the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) AI resources. NIST’s materials provide a structured approach to AI design, development, governance, and use, including practical recommendations for managing risks to individuals and organizations. NCUA highlights that these resources may assist credit unions in developing “trustworthy” AI systems that align with their cooperative, member‑focused mission.
NCUA also cites a Committee of Sponsoring Organizations (COSO) of the Treadway Commission paper titled “Realize the Full Potential of Artificial Intelligence: Applying the COSO Framework and Principles to Help Implement and Scale Artificial Intelligence.” That document applies the COSO enterprise risk management framework to AI, covering governance structures, board oversight, risk appetite, risk assessment methodologies, and performance monitoring for AI implementations in areas such as member services, fraud detection, and operational efficiency.
AI Data Security and Secure Deployment
The NCUA resource page points to two AI‑focused publications from the Cybersecurity and Infrastructure Security Agency (CISA). The first is a Cybersecurity Information Sheet on AI Data Security, which discusses securing the data that powers AI systems across the AI lifecycle, including data supply chain security, protection against maliciously modified data, and managing data drift to preserve the integrity and accuracy of AI‑driven decisions. NCUA notes that these materials may assist credit unions in building data security frameworks for AI training and operational data.
The second CISA document, “Deploying AI Systems Securely,” addresses methods for securely deploying and operating AI systems developed by external entities. It covers issues such as protecting model weights, implementing secure APIs, and establishing continuous monitoring protocols for AI systems in production. NCUA positions this guidance as a resource for credit unions considering or using AI for member services, fraud detection, and operational efficiency, with an emphasis on maintaining system integrity and protecting member data.
AI in Financial Services and Deepfake‑Driven Fraud
To place AI in a financial sector context, NCUA references a U.S. Department of the Treasury report, “Artificial Intelligence in Financial Services.” That report examines both traditional AI and generative AI use cases and addresses data privacy and security standards, bias and explainability challenges, consumer protection issues, concentration risk, and third‑party vendor management related to AI technologies. NCUA suggests that credit unions can use this report to better understand the regulatory landscape and risk mitigation expectations as they evaluate AI tools.
Finally, NCUA highlights a FinCEN report on “Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” This publication describes how criminals use AI‑generated deepfakes to create fake identity documents, photos, and videos to evade customer verification controls, outlines specific red‑flag indicators of such activity, and offers best practices for strengthening identity verification and reporting suspicious activity. NCUA notes that credit unions can use this material to enhance fraud detection capabilities and member protection against AI‑enabled scams.
Our Take
Taken together, the resources on NCUA's updated AI hub signal that supervisory expectations around AI will be grounded in existing, well‑known frameworks rather than in a bespoke AI rulebook or in regulation by enforcement action.
The update also confirms that AI is firmly within the scope of third‑party oversight and traditional safety‑and‑soundness, compliance, and cybersecurity disciplines. Credit unions exploring or expanding AI use can expect NCUA examiners to look to these same sources as benchmarks when assessing how credit unions are governing AI solutions and managing associated risks.
