On May 26, 2022, the Consumer Financial Protection Bureau (CFPB or Bureau) announced that federal anti-discrimination law requires companies to explain to applicants the specific reasons for denying an application for credit or taking other adverse actions, even if the creditor is relying on credit models using complex algorithms.
In a corresponding Consumer Financial Protection Circular published the same day, the CFPB started with the question, “When creditors make credit decisions … do these creditors need to comply with the Equal Credit Opportunity Act’s (ECOA) requirement to provide a statement of specific reasons to applicants against whom adverse action is taken?”
Yes, the CFPB confirmed. Per the Bureau's analysis, both ECOA and Regulation B require creditors to provide statements of specific reasons to applicants when adverse action is taken. The CFPB is especially concerned with so-called "black-box" models, in which decisions are based on outputs from complex algorithms that may make it difficult to accurately identify the specific reasons for denying credit or taking other adverse actions.
This most recent circular asserts that federal consumer financial protection laws and adverse action requirements should be enforced, regardless of the technology used by creditors, and that creditors cannot justify noncompliance with ECOA based on the mere fact that the technology they use to evaluate credit applications is “too complicated,” “too opaque in its decision-making,” or “too new.”
The Bureau's statements are hardly novel. Regulation B requires adverse action notices and contains no exception for machine learning models, or for any other kind of underwriting decision-making for that matter. It's difficult to understand why the Bureau thought it necessary to restate such a basic principle, but what is even more difficult to understand is why the Bureau has not provided any guidance on the appropriate method for deriving adverse action reasons from machine learning models.

The official commentary to Regulation B provides specific adverse action logic applicable to logistic regression models, but the Bureau noted in a July 2020 blog post that there was uncertainty about the most appropriate method for deriving adverse action reasons from a machine learning model. That same blog post even stated that the Bureau would consider resolving this uncertainty by amending Regulation B or its official commentary. A few months later, the Bureau hosted a Tech Sprint on adverse action notices during which methods for deriving adverse action reasons from machine learning models were presented directly to the Bureau. Now, a year and a half later, the Bureau has still declined to provide any such guidance, and the May 26 announcement simply emphasizes, and perpetuates, the same uncertainty that the Bureau itself recognized in 2020, without offering any solution whatsoever. It is disappointing, to say the least.
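To make the contrast concrete, below is a minimal sketch of the kind of adverse action logic the Regulation B commentary contemplates for additive scoring models such as logistic regression: compare each feature's contribution to the applicant's score against the average applicant's contribution, and report the features with the largest shortfalls as the candidate reasons. The feature names, coefficients, and values are hypothetical, and this "points below average" reading is one commonly discussed approach, not a method the Bureau has endorsed.

```python
import numpy as np

def adverse_action_reasons(coefs, applicant, population_mean, names, top_n=4):
    """Candidate adverse action reasons for an additive (e.g., logistic
    regression) score, where a higher score means more creditworthy.

    Each feature's contribution is coef * value; the shortfall measures
    how far the applicant's contribution falls below the average
    applicant's contribution."""
    shortfall = coefs * (population_mean - applicant)
    order = np.argsort(shortfall)[::-1]  # largest shortfalls first
    return [names[i] for i in order[:top_n] if shortfall[i] > 0]

# Hypothetical four-feature model: log-odds score = sum(coef_i * x_i)
names = ["credit_utilization", "months_since_last_delinquency",
         "inquiries_last_6mo", "average_account_age_years"]
coefs = np.array([-2.0, 0.5, -0.8, 0.3])
population_mean = np.array([0.30, 24.0, 1.0, 8.0])
applicant = np.array([0.85, 3.0, 5.0, 1.5])

print(adverse_action_reasons(coefs, applicant, population_mean, names))
# ['months_since_last_delinquency', 'inquiries_last_6mo',
#  'average_account_age_years', 'credit_utilization']
```

The sketch works because an additive model decomposes exactly into per-feature contributions. A gradient-boosted tree ensemble or neural network has no single canonical decomposition of that kind, which is precisely the uncertainty the Bureau flagged in 2020 and the May 26 circular leaves unresolved.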