Last week, the Consumer Financial Protection Bureau (CFPB or Bureau) released its latest Supervisory Highlights report, focusing on the use of advanced technologies in credit scoring models. This edition covers select examinations of institutions that use credit scoring models, including models built with advanced technology commonly marketed as AI/ML, to make credit decisions. The report repeated the CFPB’s previous statements that there is “no ‘advanced technology’ exception” to federal consumer protection laws (an exception that, to our knowledge, no industry participant has suggested exists) and asserted that financial institutions will need to improve their practices to ensure compliance with the Equal Credit Opportunity Act (ECOA) and Regulation B. That includes actively searching for less discriminatory alternatives, critically evaluating the use of alternative data, and rigorously testing and validating adverse action reasons.
The CFPB’s Specific Industry Observations:
- Credit Card Lenders: Examiners conducted statistical analyses of certain institutions’ underwriting and pricing practices and found disproportionately negative outcomes for Black or African American and Hispanic applicants when compared to white applicants. According to the report, certain credit scoring models contributed to disparities in multiple card products, particularly for Black and African American applicants. Institutions were thus directed to use compliance tools to search for less discriminatory alternative credit models and to enhance their fair lending compliance management systems, including by maintaining fair lending controls that can evaluate and address potential risks associated with credit scoring models.
- Auto Lenders: Examiners identified risks associated with using a large number of input variables in credit scoring models. For example, some institutions accepted an input variable whenever their analysis identified any purported contribution of that input to the model’s accuracy, without assessing whether and how much the input actually contributed to accuracy and without documenting a business justification for using it. As a result, institutions may have used model inputs that were predictive of prohibited characteristics without considering alternatives that could have had less discriminatory effects while contributing equally to the model’s accuracy. Institutions were directed to review input variables for fair lending risks and to consider less discriminatory alternatives that meet their business needs.
- Auto Lenders’ Adverse Action Notices: Examiners assessed certain institutions’ use of credit scoring models built using AI/ML technology, including models that in some cases used more than a thousand variables. Examiners found that the institutions did not sufficiently ensure compliance with adverse action notice requirements, such as how they selected the reasons given in adverse action notices when the adverse action was based on the model score. Examiners directed institutions to test and validate the methodologies used to identify reasons in adverse action notices.
Key Takeaways:
- Search for Less Discriminatory Alternatives (LDAs): The CFPB is explicitly pushing creditors to actively search for and implement LDAs for their current and future credit scoring models. Moreover, as the Bureau previewed in its August 2024 comment in response to the U.S. Treasury’s RFI regarding artificial intelligence, examination teams have been searching for LDAs to creditors’ models when the creditors themselves have not done so, including by using open-source automated debiasing methodologies to identify potential alternative models that, according to the CFPB, can reduce disparities while maintaining predictive accuracy. Examiners have directed institutions to increase the rigor of their testing protocols, and specifically to include LDA analyses in those protocols (a simplified sketch of that kind of analysis appears after this list).
- Criticism of Models with Large Numbers of Attributes: The report highlights the risks associated with credit scoring models that use a large number of input variables (e.g., more than 1,000), including alternative data not directly related to consumers’ finances. According to the CFPB, such models can be difficult to monitor for proxies of prohibited bases under ECOA. Accordingly, institutions must ensure adequate review of input variables for fair lending risks before they are selected as model inputs.
- Skepticism of Alternative Data: The CFPB continues to be skeptical of the use of alternative data in credit scoring models, particularly when such data is not directly related to consumers’ financial behavior. This skepticism is rooted in the perceived potential for these variables to act as proxies for prohibited bases, thereby increasing the risk of discrimination.
- Fair Lending Testing: The report underscores the importance of comprehensive fair lending testing, both for evaluating models for disparate treatment and for assessing disparate impact.
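To make these testing and LDA expectations more concrete, below is a minimal, purely illustrative sketch of how an institution’s fair lending testing might quantify outcome disparities and compare a candidate alternative model against a baseline. It assumes numpy arrays of model scores, outcomes, and group labels, a single score cutoff, the adverse impact ratio as the disparity metric, and AUC as the accuracy measure; none of these choices is prescribed by the CFPB, and real-world programs use considerably more sophisticated analyses.

```python
# Illustrative sketch only: a simplified fair lending comparison between a
# baseline credit model and one candidate less discriminatory alternative
# (LDA). The score cutoff, the adverse impact ratio metric, and the use of
# AUC for accuracy are assumptions for illustration, not a CFPB-prescribed
# methodology.
import numpy as np
from sklearn.metrics import roc_auc_score


def approval_rate(scores: np.ndarray, threshold: float) -> float:
    """Share of applicants approved at a given score cutoff."""
    return float(np.mean(scores >= threshold))


def adverse_impact_ratio(scores: np.ndarray, group: np.ndarray,
                         protected: str, reference: str,
                         threshold: float) -> float:
    """Approval rate of the protected group divided by that of the reference
    group; values well below 1.0 flag a potential disparity to investigate."""
    prot = approval_rate(scores[group == protected], threshold)
    ref = approval_rate(scores[group == reference], threshold)
    return prot / ref


def compare_models(y_true: np.ndarray, group: np.ndarray,
                   baseline_scores: np.ndarray, candidate_scores: np.ndarray,
                   protected: str, reference: str,
                   threshold: float = 0.5) -> dict:
    """Contrast predictive accuracy (AUC) and disparity (AIR) for two models."""
    results = {}
    for name, scores in [("baseline", baseline_scores),
                         ("candidate_lda", candidate_scores)]:
        results[name] = {
            "auc": roc_auc_score(y_true, scores),
            "air": adverse_impact_ratio(scores, group, protected,
                                        reference, threshold),
        }
    return results
```

On this simplified view, a candidate model whose adverse impact ratio moves meaningfully closer to 1.0 with little or no loss in AUC would warrant further evaluation as a potential LDA; in practice, institutions layer on segment-level analyses, proxy reviews of individual inputs, and documentation of the business justification for whichever model they ultimately adopt.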
Our Take:
This special edition of Supervisory Highlights contains the clearest directives yet from the CFPB on issues that have been percolating for some time: less discriminatory alternatives to credit models, scrutiny of models’ specific input variables, and the methods used to derive adverse action reasons from machine learning models. It is now clear beyond doubt that the CFPB expects any underwriting model to be subjected to disparate impact analysis, including a search for LDAs and scrutiny of specific input variables before they are selected for use in the model. We expect that CFPB examiners’ reliance on their own statistical analyses to determine whether models result in a disparate impact and whether LDAs exist may lead the Bureau to take a more aggressive stance toward institutions that fail to identify and implement LDAs on their own.
With respect to adverse action notices, as discussed here and here, the CFPB is pushing the same message that it has for the past several years — that creditors must explain the specific reasons for denying an application or other adverse action that is based on a complex credit model — while still not specifying any particular method to derive adverse action reasons from machine learning models. However, the Bureau’s report explains that it will look for evidence of “testing and validation” of methodologies used to identify principal adverse action reasons in its supervisory examinations.
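For illustration only, below is a minimal sketch of one long-standing approach for scorecard-style models, sometimes described as a “points below max” comparison, which ranks the attributes on which an applicant lost the most points relative to the best achievable values and reports the top few as principal reasons. The Bureau has not endorsed this or any other specific method, and the attribute names, point values, and reason codes in the sketch are hypothetical.

```python
# Hypothetical sketch of a "points below max" approach to selecting principal
# adverse action reasons for a scorecard-style model. The attribute names,
# point values, and reason-code text are invented for illustration; this is
# not a CFPB-endorsed or required methodology.
from typing import Dict, List, Tuple


def principal_reasons(applicant_points: Dict[str, float],
                      max_points: Dict[str, float],
                      reason_codes: Dict[str, str],
                      top_n: int = 4) -> List[Tuple[str, float]]:
    """Rank attributes by points lost versus the best achievable value and
    return the reason text for the largest shortfalls."""
    shortfalls = {attr: max_points[attr] - pts
                  for attr, pts in applicant_points.items()}
    ranked = sorted(shortfalls.items(), key=lambda kv: kv[1], reverse=True)
    return [(reason_codes[attr], shortfall) for attr, shortfall in ranked[:top_n]]


# Example with made-up attributes and point values.
reasons = principal_reasons(
    applicant_points={"utilization": 10, "payment_history": 35, "inquiries": 18},
    max_points={"utilization": 30, "payment_history": 40, "inquiries": 20},
    reason_codes={
        "utilization": "Proportion of balances to credit limits is too high",
        "payment_history": "Delinquency on accounts",
        "inquiries": "Too many recent inquiries",
    },
)
print(reasons)
```

Whatever method an institution chooses, whether a points-based comparison like this or a model-attribution technique for more complex AI/ML models, the report signals that examiners will expect documented testing and validation showing that the disclosed reasons actually reflect the principal drivers of the adverse decision.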
It will be interesting to see how these issues play out after any leadership change at the CFPB, but we believe these basic principles are likely to continue as Bureau expectations regardless of the change in administration. For these reasons, we recommend that institutions create robust fair lending controls to assess their credit models and thoroughly document their fair lending analyses and the business justifications for using such models.