Artificial intelligence impacts Americans on a daily basis in their personal lives and in the workplace. Yet while federal and state governments have so far made relatively minimal efforts to govern its development, design, and use, that appears to be changing. The White House Office of Science and Technology Policy (OSTP) recently released its “Blueprint for an AI Bill of Rights,” a non-binding whitepaper intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. The blueprint sets out five key principles to help protect Americans in the age of artificial intelligence, and employers would be wise to consider them when developing their own policies and practices.
1. Safe and Effective Systems
“You should be protected from unsafe or ineffective systems.”
The OSTP recommends that automated systems be developed in consultation with diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering safety. Rather, they should be designed to proactively protect individuals from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.
This is an important principle, the OSTP explained, given unfortunate situations that have occurred in the past.
2. Algorithmic Discrimination Protections
“You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
The White House recommends that designers, developers, and deployers of automated systems take proactive and continuous measures to protect individuals and communities from algorithmic discrimination, which occurs when automated systems contribute to unjustified different treatment of, or impacts on, people based on race, color, ethnicity, sex, or other classifications protected by law. Automated systems should be used and designed in an equitable way, which should include proactive equity assessments, use of representative data, protection against proxies for demographic features, accessibility for people with disabilities, disparity testing, and organizational oversight.
The OSTP developed this principle based on historical situations of algorithmic discrimination.
3. Data Privacy
“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.”
The report recommends that individuals be protected from privacy violations through design choices that ensure protections are included by default in automated systems. This specifically includes ensuring that data collection conforms to reasonable expectations and that only the data strictly necessary for the specific context is collected.
Designers, developers, and deployers of automated systems should seek permission from individuals and respect their decisions regarding the collection, use, access, transfer, and deletion of their data in appropriate ways. Requests for consent should be brief, understandable, and give individuals agency over the collection and use of their data. In addition, continuous surveillance and monitoring should not be used in the workplace or in other contexts where surveillance is likely to limit rights, opportunities, or access.
The OSTP cited specific past situations that warrant this principle.
4. Notice and Explanation
“You should know that an automated system is being used, and understand how and why it contributes to outcomes that impact you.”
The OSTP advises designers, developers, and deployers of automated systems to provide accessible, plain-language documentation that describes the overall functioning of the system and the role automation plays, gives notice that such systems are in use, identifies the entity responsible for the system, and explains outcomes in a way that is clear, timely, and accessible.
The “Notice and Explanation” principle likewise stems from situations that have occurred in the past.
5. Human Alternatives, Consideration, and Fallback
“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”
Finally, the White House recommends that individuals be able to opt out of automated systems in favor of a human alternative, where appropriate. They should also have access to timely human consideration and remedy through a fallback and escalation process if an automated system fails or produces an error, or if the individual wishes to appeal or contest its impact.
The OSTP finds this principle necessary given instances that have occurred in the past.
The OSTP also advises that independent evaluation and/or reporting be used to confirm the protections afforded by these five principles.
Next Steps for Businesses
The Blueprint for an AI Bill of Rights is designed to support policies and practices that protect individuals’ rights in the development and use of automated systems. For businesses, it is a strong sign that the White House is taking artificial intelligence seriously. It is also an indication that significant legislation surrounding artificial intelligence will likely be proposed at the federal and state levels. Businesses should stay abreast of these developments to ensure that their practices comply with applicable rules and regulations governing artificial intelligence.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
© Fisher Phillips | Attorney Advertising