The CFPB recently published a blog post about the agency’s ongoing efforts to monitor industry developments and innovation and how those changes align with regulatory obligations under the consumer protection laws the CFPB administers. The post specifically addressed the use of artificial intelligence (AI) and machine learning (ML) in connection with the adverse action notices required under the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA).
The CFPB acknowledged the tension that new technology creates in this area: the potential for gains in efficiency versus the potential to create or amplify regulatory risks. “In considering AI or other technologies, the Bureau is committed to helping spur innovation consistent with consumer protections.”
It’s worth noting that the CFPB is not the only federal agency grappling with the effects of AI and algorithms in underwriting. The Department of Housing and Urban Development (HUD) has a regulation pending in the related area of fair lending that addresses concerns raised by the use of algorithms in underwriting.
In its post, the CFPB highlighted both the flexibility in the current law that could help foster the use of AI/ML and the agency’s interest in engaging with industry on ways the regulatory scheme could evolve.
First, the CFPB noted that there may be uncertainty in the industry about how AI/ML fits within the current regulatory framework, but explained: “The existing regulatory framework has built-in flexibility that can be compatible with AI algorithms.”
To illustrate this point, the CFPB pointed to the official commentary to Regulation B (which implements ECOA), which states that a creditor disclosing a factor is not required to describe how that factor adversely affected the application or how it relates to creditworthiness. As the CFPB put it: “[t]hus, the Official Interpretation provides an example that a creditor may disclose a reason for a denial even if the relationship of that disclosed factor to predicting creditworthiness may be unclear to the applicant. This flexibility may be useful to creditors when issuing adverse action notices based on AI models where the variables and key reasons are known, but which may rely upon non-intuitive relationships.”
The CFPB also emphasized that ECOA and Regulation B neither require a creditor to use a particular list of reasons nor limit the reasons that can be included, noting that under the current regulation creditors must accurately describe the factors relied upon, even if those factors do not appear on the current sample forms.
Next, the CFPB highlighted some of the tools it offers to help reduce regulatory uncertainty around AI/ML, explaining that it wants to promote innovation while also facilitating compliance with the agency’s consumer protection laws. Specifically, it highlighted the three policies the agency announced in September 2019 (see the CFPB’s announcement here, as well as our previous coverage):
- A revised Policy to Encourage Trial Disclosure Programs (TDP Policy);
- A revised No-Action Letter Policy (NAL Policy); and
- The Compliance Assistance Sandbox Policy (CAS Policy).
The CFPB encouraged companies to make use of these tools and highlighted, in particular, that the TDP Policy and the CAS Policy provide for a legal safe harbor, which could help reduce the regulatory uncertainty companies may fear when using AI/ML in connection with adverse action notices.
The agency noted that it is particularly interested in exploring three areas:
- The methodologies for determining the principal reasons for an adverse action (the official comments that provide examples were issued in 1982); a simplified sketch of one such methodology appears after this list.
- The “accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models.”
- How to convey the principal reasons in a way that is understandable and accurately reflects the factors used in the model, including “how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason.”
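To make the first of these areas concrete, below is a minimal, hypothetical sketch of one common industry approach to selecting principal reasons from a simple linear scorecard: rank each input by how many score points the applicant lost relative to the best attainable value for that input. The feature names, weights, and values are invented for illustration, and neither the CFPB’s post nor Regulation B prescribes this (or any other) particular methodology.

```python
# Hypothetical illustration only: feature names, weights, and values are
# invented, and this is not a methodology endorsed by the CFPB or required
# by Regulation B.

FEATURES = {
    # name: (applicant_value, model_weight, best_possible_value)
    "payment_history_score":     (0.60, 40.0, 1.00),
    "credit_utilization":        (0.85, -30.0, 0.10),
    "months_since_last_inquiry": (2, 0.5, 24),
}

def principal_reasons(features, top_n=2):
    """Rank each input by the score points lost versus its best attainable value."""
    shortfalls = []
    for name, (value, weight, best) in features.items():
        lost_points = weight * (best - value)  # points the applicant "left on the table"
        shortfalls.append((lost_points, name))
    shortfalls.sort(reverse=True)
    return [name for lost, name in shortfalls[:top_n] if lost > 0]

print(principal_reasons(FEATURES))
# e.g. ['credit_utilization', 'payment_history_score']
```

The CFPB’s second and third areas of interest arise because this kind of per-input attribution is straightforward for a linear model but far less settled for deep learning and complex ensemble models, and because the top-ranked inputs must still be translated into reasons an applicant can understand.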
The agency’s statement stressed its hope that, by providing these tools, stakeholders will be encouraged to explore the use of AI/ML and to engage with the CFPB regarding the impact of current regulations. Specifically, the CFPB encouraged companies to use the TDP Policy to “test disclosures that may improve upon existing adverse action disclosures, including in ways that might go beyond the four corners of the regulations without causing consumer harm.”
The statement also hinted that the agency is gauging the need for regulatory updates to better match industry innovation, saying that “applications granted under the innovation policies, as well as other stakeholder engagement with the Bureau, may ultimately be used to help support an amendment to a regulation or its Official Interpretation.”