The U.S. can improve its AI governance strategy by addressing online biases

The United States has been working to codify the National Artificial Intelligence (AI) Initiative, which focuses on six strategic pillars: improving AI innovation, advancing trustworthy AI, creating new education and training opportunities through AI, improving existing infrastructure through new technologies, facilitating federal and private sector utilization of AI to improve existing systems, and promoting an international environment that supports further advances in AI. In April 2022, the U.S. Department of Commerce and the National Institute of Standards and Technology (NIST) announced members of the inaugural National Artificial Intelligence Advisory Committee (NAIAC), which is tasked with advising the Biden administration on how to proceed with national AI governance efforts. At its first meeting on May 4, 2022, the NAIAC discussed the use of AI as it pertains to U.S. competitiveness, issues related to the workforce, and whether there is adequate national oversight of AI systems. Taken together, the objectives of the national AI initiative and the creation of the NAIAC should ensure strategic and timely approaches to the design and deployment of autonomous systems, as well as further establish national norms.

Nicol Turner Lee, Senior Fellow, Governance Studies; Director, Center for Technology Innovation (Twitter: @nturnerlee)

Samantha Lai, Research Assistant, Center for Technology Innovation, The Brookings Institution (Twitter: @_SamanthaLai_)

Of equal importance, the technology needs to be improved for domestic use cases as part of this national effort, especially in areas with the potential to create either differential treatment or disparate impact for federally protected and other vulnerable populations. If the U.S. excludes such considerations from national governance discussions, historic and systemic inequalities will be perpetuated, limiting the integration of the needs and lived experiences of certain groups into emerging AI innovations. Poor or inadequate decisions around financial services and creditworthiness, hiring, criminal justice, health care, education, and other scenarios that predict social and economic mobility stifle inclusion and undercut democratic values such as equity and fairness. These and other potential harms must be paired with pragmatic solutions, starting with a comprehensive and universal definition of bias, or of the specific harm being addressed. Further, the process must include solutions for legible and enforceable frameworks that bring equity into the design, execution, and auditing of computational models to thwart historical and present-day discrimination and other predatory outcomes.

While the NAIAC is the appropriate next step in gathering input from various stakeholders within the private and public sectors, as well as from universities and civil society, representatives from more inclusive and affected groups are also key to developing and executing a more resilient governance approach. In 2021, prior to the NAIAC’s formation, the Brookings Institution Center for Technology Innovation (CTI) convened a group of stakeholders to better understand and discuss the U.S.’s evolving positions on AI. Leaders represented national and local organizations advocating for various historically disadvantaged and other vulnerable populations.

The goal of the Brookings dialogue was to delve into existing federal efforts and identify areas for more deliberate engagement on civil and equal rights protections. In the end, roundtable experts called for increased attention to the intended and unintended consequences of AI for more vulnerable populations. Experts also overwhelmingly found that any national governance structure must include analyses of sensitive use cases that are exacerbated when AI systems leverage poor-quality data, rush to innovate without consideration of existing civil rights protections, and fail to account for the broader societal implications of inequalities that enable AI systems to discriminate against or surveil certain populations with greater precision.

In some respects, the roundtable concurred with the need for a “Bill of Rights for an AI-Powered World,” a framework introduced in 2021 by the White House Office of Science and Technology Policy (OSTP). Here, OSTP is calling for the clarification of “the rights and freedoms we expect data-driven technologies to respect” and for general safeguards to prevent abuse in the U.S. But without direct discussion of how bias is defined in the public domain, and which specific use cases should be prioritized, the U.S. will fall behind in protecting and including historically disadvantaged groups as AI systems evolve.

In this blog, we offer a brief overview of key points from the roundtable discussion, and further clarify definitions of bias that were shared during the roundtable. We also surface scenarios where the U.S. can effectuate change, including in the fields of law enforcement, hiring, financial services, and more. We conclude with priorities that could be undertaken by the newly established advisory committee, and the federal government writ large, to make progress on inclusive, responsible, and trustworthy AI systems for more vulnerable groups and their communities.

Defining AI bias

To start, the U.S. needs a common understanding of AI and the related problems it can generate, which is important in a space where meanings can be ambiguous and in some instances fragmented. The National Artificial Intelligence Initiative has defined trustworthy AI as appropriately reflecting “characteristics such as accuracy, explainability, interpretability, privacy, reliability, robustness, safety[,] . . . security or resilience to attacks,” all while “ensur[ing] that bias is mitigated.” In a more recent report, NIST defined bias as “an effect that deprives a statistical result of representativeness by systemically distorting it.” Added to these definitions are general definitions adopted among the private sector, which equate bias mitigation with fairness models. A previous Brookings report approaches the definition from a more comparative lens, framing bias as “outcomes which are systemically less favorable to individuals within a particular group and where there is no relevant difference between groups that justifies such harms.” Further, the authors suggest that algorithmic biases in machine learning models can lead to decisions with a collective, disparate impact on certain groups of people even absent the programmer’s intention to discriminate.
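To make this comparative framing concrete, here is a minimal, hypothetical sketch (in Python) of how an auditor might quantify “systemically less favorable” outcomes: compare each group’s rate of favorable decisions and compute a disparate-impact ratio, often checked informally against a four-fifths (0.8) threshold. The group labels, decision counts, and threshold below are illustrative assumptions, not figures from the roundtable or from NIST.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the favorable-outcome rate for each group.

    `records` is a list of (group, favorable) pairs, where `favorable`
    is True when the model produced the beneficial outcome
    (e.g., a loan approval or an interview invitation).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in records:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's rate to the reference group's rate."""
    return rates[protected] / rates[reference]

# Hypothetical model decisions, purely for illustration.
decisions = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 35 + [("group_b", False)] * 65

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates, protected="group_b", reference="group_a")
print(rates)            # {'group_a': 0.6, 'group_b': 0.35}
print(round(ratio, 2))  # 0.58 -- below the informal 0.8 ("four-fifths") flag
```

A low ratio is a prompt for further investigation rather than proof of unlawful discrimination; under the comparative definition above, the question remains whether any relevant difference between groups justifies the gap.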

At face value, the U.S. definitions tend to be broad and somewhat generalized when compared to those from the EU, which has positioned AI according to practical degrees of risk. Most notably, the EU Artificial Intelligence Act categorizes AI uses into tiers of risk. Those of unacceptable risk would be prohibited (for example, the use of facial recognition for law enforcement), while high-risk systems would be authorized but subject to scrutiny before they can gain access to the EU market (for example, AI used for hiring and calculating credit scores). Meanwhile, limited- and minimal-risk AI, such as AI chatbots and AI used in inventory management, would be subject to light transparency obligations. Civil and human rights are factored into the definitions offered by the Organisation for Economic Co-operation and Development (OECD) and other international bodies. The OECD defines innovative and trustworthy AI as that which reflects respect for human rights and democratic values; standards for inclusive growth; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability. Compared to the U.S., international entities have taken a more proactive, and perhaps prescriptive, approach to defining bias to ensure some common consensus on the harms being addressed.

While roundtable participants did not reach full consensus on a single definition of AI bias, they did offer perspectives on the outcomes that should be further investigated, especially those that collide with the public interest and equity. Generally, diversity and inclusion are treated as afterthoughts in AI development and execution, flagged only when systems go awry, resulting in quick fixes that do not address the breadth of harms from such technologies. Roundtable experts also shared that most biases occur as a consequence of poor data quality, which is discussed later in this blog. Experts further pointed to the lack of privacy in this technological age, which continues to leave marginalized groups more vulnerable to unmitigated data collection without their knowledge. In sum, roundtable participants found that AI biases reflect larger systemic issues of societal discrimination, poor data quality, and the lack of data privacy protections. There was also mention of how the lack of workforce diversity in the computer and data sciences hinders more inclusive approaches.

These factors shared during the roundtable underscore why the U.S. needs more focused guidance on how to attain inclusive, equitable, and fair AI. The Biden administration has already centered equity among federal initiatives, including AI. Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, directs the U.S. Department of Defense to advance equitable AI by “investing in agency-wide responsible AI development and investing in the development of a more diverse AI workforce, including through partnerships with Historically Black Colleges and Universities (HBCUs) and Minority Serving Institutions (MSIs).” The previous administration provided a running start on AI governance when it took up discussions and strategies for how federal agencies could harness the transformative capabilities of AI. The Equal Employment Opportunity Commission (EEOC) has started this process in its own work focused on mitigating disparities in AI-driven hiring tools for people with disabilities. Yet more needs to be done in the U.S. to affirmatively own the existence of online data biases and flesh out areas for change.


Red flag use cases

The fact of the matter is that if the federal government gets bias identification and mitigation wrong, it will erode trust in the efficacy of autonomous systems, especially among everyday citizens whose lives are becoming more dependent on them. Below are some of the use cases in housing, hiring, criminal justice, health care, finance, political disinformation, and facial recognition that are already raising red flags due to limited oversight.

AI and housing

America has a long history of racist housing and lending practices, enabled by racialized policies including the Indian Removal Acts, the Fugitive Slave Act, the Repatriation Act, and more. Today, biases in home appraisals and loan approvals continue to pose systemic challenges in mortgage applications and ownership, as zoning ordinances and redlining widen gaps for Black applicants. While laws such as the Fair Housing Act of 1968 and the Equal Credit Opportunity Act of 1974 curbed housing discrimination on a mass scale, discrimination abounds, and AI is fostering inequities with even greater precision. Automated mortgage lending systems have been found to charge Black and Hispanic borrowers significantly higher prices for mortgage loans, at a difference of roughly $800 million a year. Meanwhile, online lenders have followed trends of discrimination set by face-to-face lenders, cumulatively rejecting a total of 1.3 million creditworthy Black and Latino applicants between 2008 and 2015. While some argue that app-based approvals are 40% less likely to offer higher mortgage rates to borrowers of color and do not reject an individual’s application based on race alone, the technology still emboldens disparities when it comes to appraisals for existing owners: even with the technology, homes in majority-Black neighborhoods are appraised for 23% less than properties in mostly white neighborhoods.

AI and hiring

Over the years, more and more companies have turned to AI to reduce operational costs and increase hiring efficiency. However, these systems are not divorced from the differences that men and women experience in the workplace, and hiring algorithms have been shown to favor white applicants over people of color. For example, one study found that targeted ads on Facebook for supermarket cashier positions were shown to an audience that was 85% women, while ads for jobs with taxi companies were shown to an audience that was 75% Black. In another instance, Amazon rolled back a hiring algorithm that rejected female applicants, or any resume referencing women’s activities, because the algorithm had been trained on a largely male dataset of engineers. Added to these examples is the use of emotion recognition technology (ERT) to evaluate candidates during the hiring process. Research has found that Black and Hispanic men have been passed over for employment when pre-screened by such ERT tools, as the disproportionately negative results generated by the AI disqualified them early in the hiring process.

AI and criminal justice

A history of biased and discriminatory laws has reinforced racism in the criminal justice system, which disproportionately polices and incarcerates low-income people and people of color. Black people are incarcerated at five times the rate of white people, and the introduction of AI in this space has only added another perpetrator of injustice to the system. The PATTERN algorithm, created by the Department of Justice as part of the First Step Act, was used to predict recidivism and shorten criminal sentences based on good behavior. Yet the algorithm has been shown to exhibit biases against people of color, overpredicting recidivism among minority inmates by two to eight percent compared to white inmates. Other risk-assessment algorithms have exhibited similar biases, such as the COMPAS algorithm used in New York, Wisconsin, California, and other states. A ProPublica investigation found that Black defendants were twice as likely as white defendants to be labeled high risk but not re-offend, while white defendants were more likely to be labeled low risk but then re-offend. Such risk-assessment tools receive widespread use across the criminal justice system, from initial sentencing to determining early releases, exacerbating existing biases within the system with little oversight.

AI and health care

AI use in health care has also been shown to exacerbate social inequities. An algorithm used to determine transplant list placement had a race coefficient that placed Black patients lower on the list than white patients, even though Black Americans are significantly more likely than white Americans to have kidney failure. Meanwhile, an algorithm used by hospitals to predict which patients would need follow-up care identified a group that was only 18% Black and 82% white, when the split should have been nearly 50/50. Notably, an AI tool for skin cancer detection had been tested primarily on white patients and failed to produce accurate diagnoses for darker-skinned patients. Many of these questions of life and death are left to the whims of biased technology, worsening existing health inequities faced by people of color.

AI and financial services

AI usage in financial systems perpetuates further bias. Some FinTech algorithms perpetuate lending patterns known to charge Latinx and African American borrowers 7.9 and 3.6 basis points more in interest for purchase and refinance mortgages, respectively. While researchers have found that FinTech discriminates 40% less than face-to-face lenders, historically marginalized communities continue to be disproportionately impacted. There is also the problem of credit invisibility, which affects 44 million Americans who are “disconnected from mainstream financial services and thus do not have a credit history.” Because FinTech cannot assess future borrowing or credit behavior among these populations due to the lack of data, they remain stricken by the wealth gaps in the U.S. that limit financial independence.
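For readers unfamiliar with the unit, one basis point is one one-hundredth of a percentage point. The short sketch below is a rough, hypothetical illustration of what a 7.9-basis-point rate gap can mean in dollars over the life of a mortgage; the $300,000 principal, 30-year term, and 5.0% baseline rate are assumptions chosen purely for demonstration and do not come from the research cited above.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal, years = 300_000, 30    # hypothetical loan size and term
baseline_rate = 0.050             # assumed baseline annual rate
gap = 0.00079                     # 7.9 basis points = 0.079 percentage points

base_total = monthly_payment(principal, baseline_rate, years) * years * 12
higher_total = monthly_payment(principal, baseline_rate + gap, years) * years * 12

print(f"Extra cost over the life of the loan: ${higher_total - base_total:,.0f}")
```

Even a gap measured in single-digit basis points compounds into a meaningful extra cost for an individual borrower, which is why seemingly small pricing disparities add up to the aggregate figures reported in the research.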

AI and political disinformation

Unmitigated data collection, coupled with the use of artificial intelligence in social media algorithms, has enabled bad actors to spread disinformation targeting marginalized groups. During the 2016 U.S. presidential election, Russian operatives took advantage of social media to target Black people, spreading messages seeking to incite racial conflict and discourage Black people from going to the ballot box. Online voter suppression, conducted through misleading information about the presence of law enforcement at polling places or the spread of incorrect voting information, has been used to target racial minorities and prevent them from casting their votes. Political disinformation threatens the civil rights of racial minorities, erecting barriers to their full participation in American democracy.

Facial recognition technologies

Many of the previously mentioned use cases rely upon face detection and facial recognition technologies, which have technical shortcomings when it comes to the identification and classification of diverse subjects. In law enforcement and criminal justice settings, the use of inaccurate facial recognition technologies (FRT) has resulted in multiple wrongful arrests, disproportionately affecting Black and Latino populations. FRT has also delivered similarly sinister consequences when applied to the surveillance of public housing residents, whose access to apartment units has been dictated by FRT results.

The depth and breadth of AI biases underscore the need for principled guidance and solutions from the federal government. Further, digital competitiveness cannot be fully realized in the U.S. without a framework that proactively addresses these and other persistent domestic challenges restricting the inclusiveness of emerging technologies. Before we offer several proposals for how the U.S. might proactively address online biases, it is worth noting two additional elements that roundtable participants highlighted as exacerbating AI biases – data quality and workforce diversity.

Traumatized data

Existing data documents the historically unjust treatment and underrepresentation of historically marginalized communities. For example, some job recruitment ads assume that people of color, or women, are less qualified because they are less represented in the workforce compared to mainstream populations, or white men. None of these assumptions consider the broader societal factors and injustices in education and the labor market that have made it difficult for women and people of color to participate equitably. Similarly, the cost of housing loans for Black and Brown communities tends to be disproportionately higher, a result of decades of discriminatory housing laws and redlining. In her work, University of Virginia Data Activist in Residence and criminologist Renee Cummings refers to these disparities as “data trauma” because historical legacies get baked into existing datasets and show up in machine learning algorithms. And because these nuances are tightly correlated with the society in which we live, the data becomes harder to disentangle from explicit and unconscious assumptions. When incorporated into AI models, data trauma is carried forward and inflicted on future generations on whom the models are used, across a plethora of use cases that determine social mobility, economic justice, and even civil rights.

Workforce diversity

Another important consideration brought up by Brookings roundtable participants is the need for workforce diversity in AI. Facebook and Google have nominal representation of women on their AI research staff, at 15% and 10% respectively, according to the AI Now Institute. The figures for Black tech workers are even lower: only 2.5% of Google’s workforce is Black, and the numbers at Facebook and Microsoft are only 4%. These problems go beyond the talent pipeline, as “workers in tech companies experience deep issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing them to leave or avoid working in the AI sector altogether.” The lack of diversity in tech spaces means that machine learning algorithms and other autonomous systems are being developed without the lived experiences necessary to avert poor data treatment or to create better products and services.

How the U.S. should prioritize increased equal opportunity in AI

Returning to what the U.S. is doing now, the newly established NAIAC created five working groups during its first meeting: leadership in trustworthy AI, leadership in research and development, supporting the U.S. workforce and providing opportunity, leadership in competitiveness, and international cooperation. It also established a subcommittee on AI and law enforcement, tasked with investigating issues of bias and data security. Though these subgroups are largely commercially focused, their establishment signals the administration’s commitment to human-centered AI and to combating the biases commonly found in AI systems. Human-centered AI seeks to ensure the equitable and responsible use of technology while considering how explicit biases shape existing technologies.


While these aspects of the NAIAC may help advance the equitable and fair treatment of disparate groups, civil rights activists and interdisciplinary experts still need to be part of these discussions. We also suggest a set of additional recommendations to inform the broader advisory committee and its subgroups as they design and execute a national AI governance strategy.

1. The U.S. needs to update the existing civil rights regime and determine how it should be interpreted for AI governance.

For years, civil rights activists have fought for equitable access to housing, loans, jobs, and more. As the previous examples show, AI biases threaten to set back such progress and leave people with little recourse for the remediation of harmful and deceptive practices. The current civil rights regime is ill-equipped and outdated. For example, the Fair Housing Act prohibits housing discrimination based on race, disability, sex, and other factors, but it does not account for discriminatory algorithms that allow housing advertisers to target renters of specific races and ethnicities. Federal laws preventing voter intimidation fall short of tackling online disinformation that seeks to alienate and frighten communities of color. Right now, it is also difficult for individuals to bring lawsuits against tech companies. Because people’s experiences differ across social media platforms, it is difficult for one person to ascertain whether the information they are viewing differs from what others see, especially when that information is predatory. Among a long list of legislative actions in Congress, there are already bills in circulation seeking to combat digital harms, including legislation limiting targeted advertising, such as the Banning Surveillance Advertising Act, and legislation seeking to prevent online voter intimidation, such as the Deceptive Practices and Voter Intimidation Prevention Act of 2021. But broader efforts to update our civil rights regime and fight back against online harms will be integral to protecting the civil rights of historically marginalized groups. To better harmonize harm-reduction efforts, an assessment of the existing civil rights regime is a starting point toward more responsible AI and greater equity.

2. The U.S. should identify specific use cases that warrant more stringent oversight and potential regulatory action, including in financial services, health care, employment, and criminal justice.

When contemplating AI risk, it is important to outline and specify which use cases require stringent oversight and regulatory action. The NAIAC could be a vehicle for employing frameworks similar to the EU AI Act, specifying and classifying use cases by degree of risk to determine appropriate levels of regulation. Multiple agencies are already working to combat AI biases across different sectors, like the previously mentioned effort at the U.S. Equal Employment Opportunity Commission (EEOC). NIST has also recently released guidance on managing bias in AI. To proceed, there should be an inventory, assessment, and coordination of red-flag areas among government agencies that prompts discussion of both remedies and potential enforcement actions to directly address higher-risk scenarios that foreclose equal opportunity for vulnerable populations.
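As a thought experiment only – not a proposed regulatory taxonomy – the sketch below illustrates how such an inventory might tag use cases with EU-style risk tiers and surface the red-flag areas that would prompt discussion of remedies or enforcement. The example use cases, tier assignments, and agency pairings are hypothetical placeholders.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "pre-market scrutiny and ongoing audits"
    LIMITED_OR_MINIMAL = "light transparency obligations"

# Hypothetical inventory entries: (use case, lead agency, assigned tier).
inventory = [
    ("real-time facial recognition for policing", "DOJ",  RiskTier.UNACCEPTABLE),
    ("resume-screening algorithm",                "EEOC", RiskTier.HIGH),
    ("credit-scoring model",                      "CFPB", RiskTier.HIGH),
    ("retail inventory forecasting",              "None", RiskTier.LIMITED_OR_MINIMAL),
]

# Surface the red-flag areas that would prompt discussion of remedies
# and potential enforcement actions.
red_flags = [(use, agency) for use, agency, tier in inventory
             if tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]

for use, agency in red_flags:
    print(f"Red flag: {use} (lead agency: {agency})")
```

The value of such an exercise lies less in the code than in the coordination it forces: agencies would have to agree on tier definitions, name a lead agency for each use case, and decide which tiers trigger remedies or enforcement.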

3. The U.S. must deepen its participatory approach to AI governance that encourages public input, industry best practices, and consumer disclosures.

AI governance should be democratized to allow input beyond technologists. A participatory framework should be created to gather public input, incorporate industry best practices, and provide consumer disclosures that maximize transparency for those most impacted by these new technologies. Audits and impact assessments will also be key in the rollout of new technologies, focusing in particular on determining disparate impact, the quality of the data used, and the documentation kept. Particularly sensitive algorithms – especially those used by the federal government – should undergo regular reviews to evaluate their long-term impacts on more vulnerable groups. Consumer input should also be valued more. Currently, there are limited means for consumers to provide suggestions and feedback to those creating algorithms. For example, regulatory sandboxes and consumer feedback loops for AI models that pose substantive risk to citizens and consumers could be used in deliberative, debiasing efforts.

4. To achieve true equity, the U.S. should consider and employ an anti-racist approach to AI systems from design to implementation.

Much of the most egregious AI bias emanates from existing systemic inequalities that are rooted in racism. While the diversification of developer teams, scrutiny of data biases, and widespread consumer input will help level the playing field in AI design and execution, they are not always enough. Understanding and identifying bias is integral to the efficacy and usefulness of an algorithm. That is why the U.S. needs best practices that uphold the integrity of algorithmic design and execution processes and avoid the explicit discriminatory and predatory practices that are institutionally innate. What the roundtable revealed is that an anti-racist framework is needed: one that prompts policymakers to ensure inclusive representation by addressing structural challenges, such as the limited research endowments of minority-serving institutions, including HBCUs, HSIs, and even community colleges, and by placing guardrails on AI systems that could replicate in-person inequities, as in law enforcement applications. The main theme here is that AI will not cancel out the historical and structural circumstances that led to such disparities if they are not intentionally acknowledged and addressed.

Conclusion

The national AI governance dialogue provides an opportunity to set new norms for how the U.S. tackles AI biases. Discussing these issues creates the chance to clarify definitions and enact policy changes with a meaningful probability of mitigating biases and moving closer to a more inclusive economy. While the process began under the Trump administration, the Biden White House can finish it, bringing to bear the non-technical considerations of emerging technologies.

Beyond these important considerations, stakeholders in these emerging technologies must trace the problems back to their roots, which lie in the lack of diversity on design teams and in data that carries forward the trauma and discrimination of the past. By reviewing the existing civil rights regime, outlining cases in need of oversight, encouraging more democratic participation in AI governance, and incorporating anti-racist principles into every aspect of the algorithmic design process, it is possible that, with the joint efforts of tech companies, government institutions, civil rights groups, and citizens, existing AI biases could be upended. More importantly, protections for historically marginalized groups can be better integrated into national governance, bringing the U.S. closer to the goal of equal opportunity for all in the digital age.


The authors are grateful for the contributions of experts at the 2021 roundtable and in the discussions that followed the formal meeting.

Amazon, Apple, Facebook, Google, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.
