On October 30, President Biden issued a sweeping Executive Order calling on Congress to enact privacy legislation and directing federal agencies to review existing rules and explore potential new rulemakings governing the use of artificial intelligence (AI) across various sectors of the U.S. economy. Among other things, the Executive Order will require developers of the most powerful AI systems to share safety test results with the federal government, establish standards for detecting AI-generated content to fight consumer fraud, and develop AI tools to identify and fix vulnerabilities in critical software. According to the White House fact sheet, the stated goal of the Executive Order is to “ensure that America leads the way in seizing the promise and managing the risks of [AI].” To that end, the Executive Order focuses on national security, privacy, discrimination and bias, healthcare safety, workplace surveillance, innovation, and global leadership.
According to the Executive Order, the advancement, development, and use of AI should be governed by eight guiding principles: (1) AI must be safe and secure; (2) responsible innovation, competition, and collaboration will allow the U.S. to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges; (3) the responsible development and use of AI require a commitment to supporting American workers; (4) AI policies must be consistent with advancing equity and civil rights; (5) the interests of Americans who increasingly use and interact with AI-enabled products must be protected; (6) Americans’ privacy and civil liberties must be protected; (7) the federal government must manage the risks from its own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI; and (8) the federal government should lead the way to global societal, economic, and technological progress, as the U.S. has in previous eras of disruptive innovation and change.
Action items relevant to the financial services industry include the following:
- Within 90 days, the Assistant Attorney General in charge of the U.S. Department of Justice (DOJ) Civil Rights Division is directed to convene a meeting of the heads of federal civil rights offices to discuss: preventing and addressing discrimination in the use of automated systems, including algorithmic discrimination; increasing coordination between the DOJ’s Civil Rights Division and federal civil rights offices concerning issues related to AI and algorithmic discrimination; and promoting public awareness of potential discriminatory uses and effects of AI.
- Within 150 days, the Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- Within 180 days, the Secretary of Housing and Urban Development and the Director of the Consumer Financial Protection Bureau (CFPB) are encouraged to issue additional guidance:
- addressing the use of tenant screening systems in ways that may violate the Fair Housing Act, the Fair Credit Reporting Act, or other relevant federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes; and
- addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010, or the Equal Credit Opportunity Act apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising.

Tenant screening has already been a priority for the CFPB this year. As discussed here, on February 28, the CFPB and the Federal Trade Commission (FTC) jointly issued a Request for Information seeking public comment on how background screening affects individuals seeking rental housing in the United States. The deadline for comments was May 30.
- Independent regulatory agencies are encouraged to consider using their full range of authorities to protect American consumers from fraud, discrimination, threats to privacy, and other risks arising from the use of AI, including risks to financial stability. They are also encouraged to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.
The Executive Order builds upon the Blueprint for an AI Bill of Rights, a set of five principles released last year by the White House Office of Science and Technology Policy. As discussed here, the five principles were designed to protect the rights of Americans in the age of AI.