Many businesses use artificial intelligence (“AI”), algorithms, software, and other forms of technology to make employment-related decisions. Employers now have an array of computer-based tools at their disposal to assist in hiring employees, monitoring job performance, determining pay or promotions, and establishing the terms and conditions of employment, and many rely on software that incorporates algorithmic decision-making and AI at various stages of the employment process.
For example, some employers use resume scanners that prioritize applications containing certain keywords, and some use video interviewing software to evaluate candidates based on their facial expressions and speech patterns. Some employers use “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements. Others use testing software that creates “job fit” scores for applicants or employees based on their personalities, aptitudes, cognitive skills, or perceived “cultural fit,” or employee-monitoring software that rates employees based on their keystrokes or other task-based factors. Employers may adopt these tools with benign intentions: to be more efficient, increase objectivity, or decrease the potential effects of implicit bias. However, the use of these tools may inadvertently disadvantage job applicants and employees with disabilities and may even violate the Americans with Disabilities Act (“ADA”).
Accordingly, on May 12, 2022, the U.S. Equal Employment Opportunity Commission (“EEOC”) released guidance advising employers that the use of AI and algorithmic decision-making tools to make employment decisions could result in unlawful discrimination against applicants and employees with disabilities. The EEOC’s technical assistance discusses potential pitfalls employers should be aware of so that such tools are not used in discriminatory ways. Specifically, the guidance outlines how existing ADA requirements may apply to the use of AI in employment-related decisions and offers “promising practices” to help employers comply with the ADA when using AI decision-making tools. This guidance is not meant to be new policy; rather, it is intended to clarify existing principles for the enforcement of the ADA and previously issued guidance.
The ADA and analogous state laws prohibit covered employers from discriminating against qualified employees and applicants based on known physical or mental disabilities, and also require employers to provide those individuals with reasonable accommodations for their disabilities. According to the EEOC, one of the most common ways that an employer’s use of AI or other algorithmic decision-making tools could violate the ADA is if the employer fails to provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. Further, ADA violations may arise if an employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual could do the job with a reasonable accommodation. Moreover, employers may violate the ADA if they use an algorithmic decision-making tool that runs afoul of the ADA’s restrictions on disability-related inquiries and medical examinations.
With these issues in mind, the EEOC identified a number of “promising practices” that employers should consider to help mitigate the risk of ADA violations connected to their use of AI tools. Specifically, to comply with the ADA when using algorithmic decision-making tools, the EEOC recommends the following:
1. Employers must provide reasonable accommodations when legally required, including accommodations necessary for job applicants or employees to be rated fairly and accurately by these tools.
2. Employers should reduce the chances that algorithmic decision-making tools will disadvantage individuals with disabilities, either intentionally or unintentionally.
3. Employers should also seek to minimize the chances that algorithmic decision-making tools will assign poor ratings to individuals who are able to perform the essential functions of the job, with a reasonable accommodation if one is legally required.
4. Before adopting an algorithmic decision-making tool, employers should ask the vendor to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.
As technology continues to develop and the use of AI in employment decision-making becomes even more prevalent, the EEOC will likely expand on its guidance regarding employers’ use of AI and how it intersects with the ADA and other federal anti-discrimination laws. As always, we will continue to update you on any new developments.