On October 4, 2022, the White House Office of Science and Technology Policy released a set of five principles, known as the Blueprint for an AI Bill of Rights, designed to protect the rights of Americans in the age of artificial intelligence (AI). Developed over the course of a year, the principles are intended to guide the design, use, and deployment of automated systems. Although the principles are not binding, the White House hopes they will encourage tech companies to deploy AI more responsibly and to limit the use of surveillance.

The blueprint describes the harms the principles aim to address:
“In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity — often without their knowledge or consent. These outcomes are deeply harmful — but they are not inevitable.”

The White House intends the principles to apply under a two-part test: (1) automated systems that (2) have the potential to meaningfully impact people’s rights, opportunities, or access to critical resources or services.

  • First Principle — the public should be protected from unsafe or ineffective systems.
    • Automated systems should be developed with consultation from diverse communities and experts to identify concerns, risks, and potential system impacts.
    • Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use.
    • Americans should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems.
  • Second Principle — automated systems should be used and designed in an equitable way to prevent algorithmic discrimination.
    • Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions; gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
    • Developers of automated systems should take proactive and continuous measures, including equity assessments and algorithmic impact assessments, to protect individuals and communities from algorithmic discrimination.
  • Third Principle — the public should be protected from abusive data practices.
    • Developers of automated systems should seek permission for the collection, use, access, transfer, and deletion of data where possible and, where not possible, should use privacy-by-design safeguards by default.
    • All consent requests should be brief and understandable in plain language.
  • Fourth Principle — the public should know when an automated system is being used, and understand how and why it contributes to outcomes that impact individuals.
    • Developers of automated systems should provide a plain language description of how the system functions and the role automation plays in the system, including when an algorithmic system is used to make a decision impacting an individual.
  • Fifth Principle — consumers should be able to opt out of automated systems and have access to a person who can quickly remedy issues.
    • Consumers should have timely access to a person, via a fallback or escalation process, if an automated system fails or produces an error, or in order to contest a decision it makes.

In 2019, the European Commission published a similar set of principles, the Ethics Guidelines for Trustworthy AI. The European Parliament is currently drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of those guidelines.