White House Releases AI Bill of Rights Blueprint

By MedTech Intelligence Staff

On Tuesday, October 4, 2022, the White House released a Blueprint for an Artificial Intelligence (AI) Bill of Rights geared toward protecting the American public as the use of AI and machine learning expands throughout industry and online.

Against a backdrop of growing concern surrounding biased data and rights to privacy and informed consent, the White House has released the “Blueprint for an AI Bill of Rights” that lays out five principles and associated practices to protect the American public against potential harm. “Among the great challenges posed to democracy today is the use of technology, data and automated systems in ways that threaten the rights of the American public,” read the White House statement introducing the blueprint. The statement cites the use of algorithms in hiring and credit decisions that exacerbate unwanted inequities and discrimination, as well as unchecked social media data collection as just some of the concerns facing the country, while also recognizing the significant benefits brought about through the use of automated systems in agriculture, emergency preparedness and health care.

“This important progress must not come at the price of civil rights or democratic values,” the statement reads. “The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights—including the right to privacy, which he has called ‘the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.’”

The blueprint includes five principles that lay out individuals’ rights and signal potential regulatory frameworks to guide the design, use and deployment of automated systems to protect those rights. They include, in part:

Safe and Effective Systems: You should be protected from unsafe or ineffective systems.

“Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards … Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.”

Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

“… Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight …”

Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

“… Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed …”

Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

“Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes … Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.”

Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

“You should be able to opt out from automated systems in favor of a human alternative, where appropriate … You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.”

Read the Blueprint for an AI Bill of Rights

