AI Hiring Compliance: What New AI Lawsuit Means for HR Leaders
As AI hiring tools become embedded in core HR workflows, AI hiring compliance is moving higher on HR leaders’ priority lists. A class action lawsuit against Eightfold serves as an early reminder that adopting AI-enabled tools requires a clear understanding of how those systems operate, even when key processes happen behind the scenes.
To understand why this case matters from an HR compliance perspective, it is helpful to focus on what the complaint alleges and what it has not yet established.
What the Eightfold Lawsuit Is Claiming About AI Hiring Tools
The complaint claims that Eightfold uses AI to generate candidate evaluations that applicants don’t see and may not even know exist. According to the plaintiffs, those evaluations draw on a wide range of data and are used to score or rank candidates in ways that can influence who advances in the hiring process.
The dispute turns on how those evaluations are treated under existing law. The plaintiffs argue they function like consumer reports under the federal Fair Credit Reporting Act, as well as related California law, which would trigger notice and access requirements for candidates.
HR teams have already seen how FCRA compliance risks in background checks can lead to costly consequences when those requirements are mishandled.
The plaintiffs’ argument centers less on any single hiring outcome and more on whether candidates should have been told these evaluations were being created, and whether they should have been given access to them.
What’s notable from an HR compliance standpoint is where the complaint draws its line. It focuses on how AI-driven evaluations are generated and classified — not on intent or bias.
Why HR Leaders Should Pay Attention to AI Hiring Compliance Now
AI hiring tools now influence far more than a single screening step. They shape how candidates are ranked, filtered, and routed through workflows that HR teams rely on to manage volume and speed. That broader role puts these tools squarely in the middle of everyday hiring decisions.
From a compliance perspective, that matters because evaluations can be generated and applied long before a human decision is made. But there’s also a practical business reason to pay attention. Organizations invest heavily in HR technology to widen the talent pool and identify strong candidates faster. When AI-driven criteria or data inputs don’t align with how HR actually wants roles scoped, qualified candidates can be filtered out without anyone realizing it.
That’s where AI hiring compliance and operational oversight intersect. Knowing how a tool evaluates candidates helps HR teams assess legal risk, but it also helps ensure the technology is doing the job it was purchased to do. If the logic inside a system is overly restrictive, based on assumptions HR wouldn’t endorse, or pulling in data HR didn’t expect, the result can be missed talent as well as compliance questions.
“If a candidate didn’t knowingly provide the data, it shouldn’t be used to judge them,” says Barb Hyman, founder and CEO of Sapia.ai.
This shifts the conversation from “Are we compliant?” to “Do we understand how our hiring technology is shaping outcomes?” That question sits squarely with HR, and it’s what makes informed conversations with vendors essential as AI becomes a standard part of hiring workflows.
AI Hiring Compliance Questions HR Should Be Asking Vendors
Once automated hiring tools are part of routine workflows, AI hiring compliance depends on whether HR understands how those tools are shaping candidate movement through the process. These four questions focus on the areas most likely to affect compliance and hiring outcomes.
- What data does the tool use to evaluate candidates? This question helps HR confirm whether evaluations rely only on applicant-provided information or include additional data that could raise compliance or trust concerns.
- Where in the hiring process does AI influence decisions? Ask this to get clarity about whether AI is advisory or actively filtering candidates before human review.
- When AI influences decisions, what ability does HR have to override or adjust those evaluations? This one shows whether HR can intervene when outcomes don’t align with hiring intent or role requirements.
- What changes over time, and how are customers notified? Asking this upfront helps HR avoid surprises caused by model updates or configuration shifts that alter outcomes without HR being aware.
A more detailed ATS vendor questions guide is available for teams conducting deeper vendor evaluations.
Eightfold Is Not the Only AI Hiring Lawsuit HR Should Watch
The claims against Eightfold are part of a broader set of legal challenges focused on how automated hiring systems are used. This case isn’t unique in raising questions about the role technology plays in screening and selection decisions.
A separate, ongoing lawsuit involving Workday has raised concerns about how AI-driven tools may affect hiring outcomes. While the legal arguments are different, both cases center on how automated systems factor into hiring decisions, rather than on how employers describe their use of technology.
Seen alongside one another, these lawsuits point to a shift in where legal scrutiny is landing. Attention is moving toward how hiring systems function inside real workflows, including how evaluations are created, applied, and carried forward. That makes it increasingly important for HR leaders to understand how their hiring tools operate in practice, because those mechanics are becoming central to how risk is assessed.
What This Means for AI Governance in HR
Recent litigation is helping clarify where AI governance gaps tend to appear in hiring. As automated tools influence screening and evaluation decisions, AI hiring compliance increasingly depends on how well HR understands and oversees the systems they rely on.
In practice, AI governance in HR shows up in everyday decisions. That includes how tools are configured, how evaluations move through workflows, and how changes are managed once technology is live. Clear ownership matters, particularly when outcomes don’t align with hiring intent or when questions arise about how decisions were made.
As AI continues to shape hiring processes, governance becomes an ongoing responsibility rather than a one-time review.
“Trust is fragile in hiring, and once it’s lost, both employers and candidates pay the price,” Hyman says.
That’s the thread connecting cases involving Eightfold and Workday. They don’t suggest HR should step back from AI. They reinforce why AI governance needs to be intentional, informed, and grounded in the realities of day-to-day hiring.
Free Training & Resources
Webinars
Provided by SkyStem
White Papers
Provided by Perkspot
Resources
The Cost of Noncompliance
Test Your Knowledge
The Cost of Noncompliance
The Cost of Noncompliance
