On April 25, officials from the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) (together, the Agencies) issued a joint statement warning that automated systems, including artificial intelligence (AI), used in credit, housing, and employment decisions may “perpetuate unlawful bias,” “automate unlawful discrimination,” and produce other “harmful outcomes.” To combat these perceived risks, the Agencies resolved to monitor the development and use of automated systems and to promote responsible innovation, while underscoring that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”
In the joint statement, each of the Agencies recapped its recent activities related to automated systems or AI.
- As discussed here, in May 2022, the CFPB published a circular relating to adverse action notices and AI/machine learning models, stating that federal consumer financial laws apply regardless of the technology being used. The joint statement notes that “[t]he circular also made clear that the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”
- In January 2023, the DOJ filed a statement of interest in federal court asserting that the Fair Housing Act applies to algorithm-based tenant screening services.
- As discussed here, in June 2022, the FTC issued a report to Congress titled “Combatting Online Harms Through Innovation,” cautioning against reliance on artificial intelligence to combat online problems and noting concerns that these tools carry an inherent potential for inaccuracy, bias, and discrimination, and can harm marginalized communities. The agency has also warned businesses that it may violate the FTC Act “to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks.”
- In addition to the EEOC’s enforcement activities on discrimination related to AI, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.
The joint statement concluded by reiterating the Agencies’ “pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
The joint statement did not announce any new policies by any of the Agencies. It does, however, mark another coordinated “all of government” effort by a consortium of federal agencies to enforce existing federal consumer financial protection laws and to work collaboratively on AI risks, similar to the approach taken by the DOJ, the CFPB, and the federal banking agencies in the “Combatting Redlining Initiative” announced in October 2021. Although the Agencies offer little to no guidance on what regulated companies should do, they are signaling that they are looking for examples of AI/machine learning-related harms to consumers to pursue in enforcement actions.