The dangers of using AI: Feds release important new guidance
As employers’ use of artificial intelligence (AI) tools continues to grow, so does the risk of running afoul of Title VII — the federal law that bans employment discrimination based on a number of protected classes including race and sex.
To help employers stay on the right side of the law, the EEOC has released a new technical assistance document that aims to explain the application of Title VII to the use of automated systems, including systems that incorporate AI tools.
Title VII does not outright ban the use of automated systems to assist in administering a variety of HR functions, such as those relating to hiring, performance monitoring, and decisions about pay and promotions.
But “[w]ithout proper safeguards, their use may run the risk of violating existing civil rights laws,” the EEOC advises.
So what are the legal issues that can arise, and what are the safeguards that should be in place?
Let’s dive in.
Understanding ‘disparate impact’
Key to understanding the dangers posed by the use of AI is an understanding of the legal concept known as “disparate impact.”
An employment practice can violate Title VII if it facially and intentionally discriminates against members of a class of people who are protected by the law. In the law, this is known as “disparate treatment.” An obvious example: An employer has a practice of not hiring people of a particular race.
But there is a second way for violations of Title VII to occur, and that is where the concept of disparate impact comes in.
Essentially, this means that even a facially neutral policy or practice can violate Title VII if it disproportionately and adversely affects the members of a particular class that the law protects.
If that happens – and the employer is unable to show that the policy or practice is legitimately related to job requirements – then the plaintiff can win a Title VII claim based on a disparate impact theory. And even if the employer establishes this “business necessity” defense, the plaintiff can still win by showing there is another way for the employer to satisfy its legitimate interests without adversely impacting a protected class of people.
Example: An employer has a height requirement that applies equally to all candidates for employment but screens out a disproportionate number of female job applicants. If the employer cannot show that the height requirement is legitimately related to the job, a female applicant can succeed on a Title VII claim based on a disparate impact theory.
So what’s the AI tie-in?
Here’s how disparate impact comes into play with respect to AI.
Fact: Automated systems can unintentionally have the effect of screening out a disproportionate number of protected class members. Say, for example, that an AI tool is programmed to zero in on job candidates whose characteristics mirror those of an employer’s star employees. If the star employees are all men, then the tool’s use can lead to disparate impact discrimination against women.
The guidance document specifically focuses on whether an employer’s selection procedures – including procedures relating to hiring, promotion and firing – result in disparate impact discrimination under Title VII.
It does not address disparate treatment or protections provided by other federal employment discrimination laws.
AI questions and answers
The document uses a series of questions and answers to guide employers in the proper use of AI when making hiring, promotion and firing decisions.
Here are some of the questions and answers that the document provides. (Some questions/answers have been paraphrased.)
Can an employer’s use of an algorithmic decision-making tool be a “selection procedure” that is subject to EEOC guidelines?
Yes. EEOC guidelines addressing selection procedures “apply to algorithmic decision-making tools when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees.”
How can employers check to determine whether their use of an AI tool has an unlawful disparate impact?
“[E]mployers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is ‘substantially’ less than the selection rate for individuals in another group.”
If the use of such a tool does have a disparate impact on a protected group, the employer must show that its use of the tool is job-related and consistent with business necessity.
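The selection-rate comparison the EEOC describes can be sketched in a few lines of arithmetic. The agency has long used a “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate may indicate adverse impact (it is an indicator for further scrutiny, not a legal test). The data, group labels and function names below are purely illustrative, not drawn from the guidance:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Flag groups whose selection rate is less than 80% of the
    highest group's rate -- the EEOC's 'four-fifths' rule of thumb
    for spotting a potentially adverse impact."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Hypothetical numbers: 48 of 80 male applicants selected vs. 12 of 40 female.
rates = {
    "men": selection_rate(48, 80),    # 0.60
    "women": selection_rate(12, 40),  # 0.30
}
flags = four_fifths_check(rates)
# Women's rate (0.30) is only 50% of men's (0.60), so it is flagged.
print(flags)  # {'men': False, 'women': True}
```

A flagged result does not by itself establish a violation, but it is the kind of check the guidance suggests running before, and while, a tool is in use.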
If an AI tool is developed or administered by a third party, can an employer still be liable for disparate impact discrimination resulting from the tool’s use?
Yes. For example, an employer might be liable under Title VII if it uses a test that was developed by an outside vendor and produces a disparate impact. In addition, employers may be liable under Title VII if they give software vendors the authority to act on their behalf and those vendors then use tools that produce disparate impact discrimination.
The guidance essentially advises employers to vet vendors to make sure the tools they use will not discriminate against applicants or employees.
What should an employer do if it finds out that a particular AI tool would produce disparate impact discrimination?
Employers finding themselves in that position “can take steps to reduce the impact or select a different tool in order to avoid engaging in a practice that violates Title VII,” the guidance says.
Employers should continually self-analyze to make sure that the practices they are using do not result in disparate impact discrimination, the guidance adds.
The bottom line
Bottom line: The EEOC is serious about making sure employers do not use AI tools in a way that produces disparate impact discrimination. Employers must be proactive in making sure that such tools do not produce that result – and must understand that they can be held liable for the actions of an outside vendor acting on their behalf to develop AI tools and/or implement their use.
The full guidance, which is part of a broader agency initiative addressing the proper use of AI by employers, is available on the EEOC’s website.