Legal Considerations for HR Teams Using AI in the Workplace
While it’s true that AI is transforming HR, its adoption comes with significant legal challenges. So it’s crucial for HR teams to understand the legal considerations for AI use in the workplace.
What noted science fiction writer Isaac Asimov cautioned in 1988 remains equally true today: “[t]he saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
Let’s take a closer look at the legal considerations for AI use in the workplace.
How HR Uses AI: Common Applications
Many HR pros use AI tools to screen and evaluate résumés and cover letters; to identify potential candidates through online platforms and social media; and to analyze applicants’ speech and facial expressions during interviews.
AI is also being used to onboard new employees, generate performance reviews and monitor ongoing employee performance.
Legal Considerations for AI Use
Understanding legal considerations for AI is a must for HR teams looking to stay compliant and avoid costly mistakes. Here’s what you need to know about AI as it relates to discrimination laws, other employment laws and AI-specific legislation.
1. Discrimination Laws
Wherever AI is a factor in employment decisions, there is also exposure to employment discrimination claims. If decisions involving AI produce a statistically significant disparity, the potential for a disparate impact claim under Title VII or other statutes arises. Such claims are already being filed and may prove difficult for employers to defend.
First, it may be difficult to isolate where and how the AI tools impacted the decision. If so, Title VII allows potential plaintiffs a simpler route to proving their case: “if the complaining party can demonstrate to the court that the elements of a respondent’s decisionmaking process are not capable of separation for analysis, the decisionmaking process may be analyzed as one employment practice.” 42 USC §2000e-2(k)(1)(B)(i).
Second, AI systems are often trained on historical data and so can inadvertently reinforce discriminatory patterns, resulting in outcomes that disproportionately affect employees based on race, gender, age, or other protected characteristics. And the black-box nature of many AI tools makes such patterns difficult to detect and control.
Finally, such disparate impact claims impose a notoriously heavy burden on defendants under Title VII: to prove that “the challenged practice is job related for the position in question and consistent with business necessity.” 42 USC §2000e-2(k)(1)(A)(i).
Plus, such suits are almost inevitably class actions.
And, the EEOC has taken notice. In May 2022, the agency issued guidance on AI concerning the ADA, warning that AI tools could unintentionally exclude individuals with disabilities, fail to provide reasonable accommodations, or make inappropriate disability-related inquiries.
The following year, the EEOC issued AI guidance concerning Title VII, stating that if an AI tool produces a selection rate for a protected group that is less than 80% of the rate for the most favorably treated group (the “four-fifths rule”), the tool could be discriminatory unless the employer proves it is job related and consistent with business necessity.
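To make the arithmetic concrete, here is a minimal sketch of the four-fifths rule calculation. The group names and applicant counts are purely hypothetical, and a ratio below 0.8 is a screening flag warranting review, not a legal conclusion.

```python
# A minimal sketch of the EEOC "four-fifths rule" arithmetic.
# All group names and counts below are hypothetical illustrations.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool.
groups = {
    "Group A": selection_rate(selected=48, applicants=80),  # 0.60
    "Group B": selection_rate(selected=12, applicants=30),  # 0.40
}

# Each group's rate is compared to the most favorably treated group.
highest_rate = max(groups.values())

for name, rate in groups.items():
    impact_ratio = rate / highest_rate
    flag = "below 80% threshold -- review" if impact_ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here Group B’s impact ratio is 0.40 / 0.60 ≈ 0.67, which falls below the 0.8 rule of thumb and would warrant closer scrutiny.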
2. Other Employment Laws
AI can be used to test integrity. However, such uses may trigger exposure under the Employee Polygraph Protection Act (EPPA), which limits the use of lie detector tests for pre-employment screening or during employment.
States often have parallel statutes, and two recent class actions have used the Massachusetts anti-polygraph law to challenge employers’ use of AI to measure the integrity of candidates.
AI can also be used to measure productivity or even activity: e.g., tracking keystrokes, mouse clicks, web browsing, etc. to determine whether an employee is “active” or “idle.” These systems may send real-time alerts to employers upon detecting that an employee is not “working” and thus may tempt employers to use those records to limit pay.
This may be a dangerous temptation: in its recent Field Assistance Bulletin No. 2024-1, the U.S. Department of Labor cautioned that relying solely on automated timekeeping and monitoring systems without proper human oversight can lead to compliance issues under federal wage and hour laws. Such metrics are not definitive indicators of “hours worked” under the FLSA and do not replace the need to assess whether the employee was actually permitted or required to work during that time.
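To illustrate why such metrics are crude, here is a minimal, hypothetical sketch of the kind of idle-detection logic these tools rely on; the threshold and event timestamps are invented for illustration, not drawn from any actual vendor product.

```python
# A hypothetical sketch of activity-based "idle" detection of the kind
# described above: a fixed inactivity threshold classifies each gap between
# input events. Reading, thinking, or a work phone call produces no events,
# which is why such metrics are not definitive indicators of "hours worked."
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=5)  # hypothetical threshold

def classify_gaps(events: list[datetime]) -> list[tuple[datetime, datetime, str]]:
    """Label each gap between consecutive input events as active or idle."""
    spans = []
    for prev, curr in zip(events, events[1:]):
        label = "idle" if curr - prev > IDLE_THRESHOLD else "active"
        spans.append((prev, curr, label))
    return spans

# Hypothetical keystroke/mouse timestamps for part of a workday.
events = [
    datetime(2024, 6, 3, 9, 0),
    datetime(2024, 6, 3, 9, 2),
    datetime(2024, 6, 3, 9, 20),  # 18-minute gap: reading a document?
    datetime(2024, 6, 3, 9, 21),
]

for start, end, label in classify_gaps(events):
    print(f"{start:%H:%M} -> {end:%H:%M}: {label}")
```

The 18-minute gap here could mean the employee stepped away, or that they were reading, thinking, or on a work call, which is exactly why the DOL says such signals cannot substitute for a human assessment of hours worked.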
3. AI-Specific Laws
States and municipalities are adding AI-specific laws to regulate AI generally and/or its use in employment.
So HR’s use of AI needs to be checked against the local laws wherever the employer has locations or wherever candidates are recruited, interviewed, or vetted (and this area is too new to predict how courts will decide which state’s law applies in which context).
Illinois
In Illinois, before an employer conducts an AI interview or, more generally, uses AI to make employment decisions, the employer must notify the applicant that AI will be used.
Employers that rely on AI to analyze video interviews must also obtain prior consent from the candidate to use the AI program and report demographic data about the applicants to the Illinois Department of Commerce and Economic Opportunity annually by December 31.
Additionally, August 2024 amendments to the Illinois Human Rights Act now explicitly prohibit using AI in a manner that discriminates against candidates and employees based on protected characteristics and using zip codes as a proxy for a protected class.
California
California’s AB 2930 sought to prevent “algorithmic discrimination.” The proposed bill would have required impact assessments, notice to those affected, governance programs, and public policy disclosures from employers and developers using AI for “consequential decisions.”
Violations under the proposed bill could have resulted in civil liability, allowing individuals and public attorneys to sue for damages if AI tools caused harm.
Although the bill ultimately did not pass, it is expected that similar legislation will be introduced in California in 2025 and beyond.
Colorado
Effective February 2026, Colorado will allow job applicants to challenge hiring decisions made using AI.
Employers will be required to: (i) notify applicants when an adverse decision is based on AI, and (ii) offer a formal process for applicants to challenge the decision and request human review. Employers will be presumed to have exercised reasonable care if they: (i) implement risk management policies and programs, (ii) conduct annual impact assessments, (iii) provide various notices to applicants, and (iv) promptly notify the Attorney General upon discovering “algorithmic discrimination.”
New York City
Employers and employment agencies are forbidden from using AI-driven tools for hiring or promotion unless an independent bias audit is conducted beforehand. That audit must assess selection rates and impact ratios across various demographic categories. Plus, candidates must be informed about the use of such tools.
Vermont
In Vermont, H114 (which is pending but has not yet passed) seeks to restrict the use of automated decision systems (ADS) for employment-related decisions. ADS is defined in the proposed bill as “an algorithm or computational process that is used to make or assist in making employment-related decisions, judgments, or conclusions.”
Washington
Washington’s H.B. 1951 (which is pending but has not yet passed) seeks to prohibit employers from using AI in a discriminatory manner. The pending legislation would also require employers to perform annual assessments of their AI use and to provide notice to applicants and employees who are subject to the use of AI.
Best Practices for HR Teams Using AI
When it comes to legal considerations for AI, controlling the risks starts with established rules. It is too early for equivalent rules to have emerged for AI in HR, but there are suggestions that may form a rubric of best practices:
- Geography: As with so many other issues in HR (e.g., paid leave), there will be a need to check jurisdiction by jurisdiction to determine what is permitted, what is forbidden and what is required.
- Risk financing: Employers will want contractual protections from their AI vendors: ideally indemnification, but at the very least representations that can be reliably enforced. And if there is EPLI or other insurance, confirm what is covered.
- Cost-benefit assessment: AI carries risk under both AI-specific statutes and general employment statutes, but it also offers rewards. For each AI use in HR, there will need to be a considered judgment of whether the rewards are concrete and exceed the risks.
- Diligence: Vet outside vendors to ensure that they have adopted certified procedures designed to promote non-discriminatory outputs.
- Customization: Explore whether AI vendors (or your own programmers) can customize AI programs to focus on a limited number of non-discriminatory candidate characteristics that matter most to your organization.
- Anticipation: Anticipate the need for and incorporate processes for accommodating candidates with disabilities or other special needs.
- Vigilance: Running the numbers on a RIF is now standard operating procedure. The same should be done with AI assessments: early notice permits correcting small problems before they become immense, and testing ensures that qualified candidates are not unintentionally excluded. Continue to test after implementation (a minimal testing sketch follows this list).
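By way of illustration only, here is a minimal sketch of the kind of ongoing disparity check counsel and statistical experts might run on an AI tool’s outcomes. The counts are hypothetical, and a small p-value flags a disparity worth investigating, not a legal violation.

```python
# A minimal sketch of ongoing adverse-impact testing using a two-proportion
# z-test, built from the standard library only. All counts are hypothetical;
# real audits should involve counsel and qualified experts.
from math import erf, sqrt

def two_proportion_p_value(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in selection rates between groups."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical quarterly outcomes from an AI screening tool.
p_value = two_proportion_p_value(sel_a=120, n_a=200, sel_b=45, n_b=100)
print(f"p-value: {p_value:.4f}")  # small values suggest a disparity worth investigating
```

With these hypothetical numbers the p-value is roughly 0.014, meaning the gap in selection rates is unlikely to be chance and would merit a closer look before problems compound.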
Employment attorneys Kevin Connelly and Brian Casillas also contributed to this article.