AI Hiring Discrimination: New Ruling in the Workday Lawsuit

AI is changing hiring fast. Workday’s AI hiring discrimination lawsuit sends a clear warning: failing to spot potential algorithmic bias can land employers in serious legal trouble.
The Mobley v. Workday case highlights why HR leaders can’t afford to overlook these risks – even if they trust their HR tech vendors.
This isn’t about just one software platform. It’s about the increasing pressure on employers to really understand the HR technology behind employment decisions.
Why Mobley v. Workday Should Be on HR’s Radar
Even if your organization doesn’t use Workday, the questions raised here go far beyond any single vendor. How much do you know about what your hiring algorithms are doing? What level of oversight do you have over the AI tools involved in your hiring process?
That’s why the latest move in this case matters so much. It’s not just legal drama – it’s a strong signal that employers’ use of AI is coming under closer legal scrutiny.
Here’s a brief recap – and the latest legal development.
Mobley v. Workday Recap: Lawsuit Alleges AI Hiring Discrimination
Beginning in 2017, Derek Mobley, a Black man over 40 with anxiety and depression, applied to more than 100 jobs at companies using Workday’s screening tools – and he says he was rejected every time.
He filed a lawsuit against Workday, alleging AI hiring discrimination based on race, age and disability. Specifically, Mobley claimed Workday embeds AI and machine learning into its tools to “make hiring decisions” – meaning its software can automatically reject candidates or move them forward in the hiring process. He supported this allegation with one example: receiving a job rejection in the middle of the night, within an hour of applying.
A federal court in California found that this supported Mobley’s claim that his rejection “decision was automated” rather than made with human oversight. The court allowed his disparate impact claims against Workday to proceed.
Latest Ruling in the Case
In early 2025, Mobley filed a motion seeking preliminary certification of a nationwide class action on the age discrimination claim.
If granted, this would allow other similarly situated candidates to join the lawsuit – those over the age of 40 who applied to jobs through, and were rejected by, the Workday platform.
A federal court in California issued its ruling on May 16.
Workday presented several arguments opposing class status. One was the “logistical hurdles to identifying members of the collective,” given the vast number of applicants at companies using Workday.
The court was not swayed. “If the collective is in the ‘hundreds of millions’ of people, as Workday speculates, that is because Workday has been plausibly accused of discriminating against a broad swath of applicants. Allegedly widespread discrimination is not a basis for denying notice,” the court said.
Workday also raised concerns about the scope of certification, suggesting eligibility should be limited to candidates who were qualified and had a reasonable chance of being hired.
The court rejected that argument, emphasizing the central question: whether Workday’s system has a disparate impact on applicants over 40.
The court clarified that applicants don’t need to prove qualification, rejection, or likelihood of hire to have standing in a disparate impact claim. Under this theory, the harm lies in being denied a fair chance to compete, not in the final hiring decision.
The court granted preliminary certification of a class action under the ADEA, covering candidates over 40 who applied through and were rejected by the Workday platform “from September 24, 2020, through the present,” the ruling stated.
“We continue to believe this case lacks merit. This preliminary ruling comes early in the case, before the facts are fully established. We are confident that once the facts are presented, the plaintiff’s claims will be dismissed,” a Workday spokesperson said in an email to HRMorning.
Case Provides Example of Disparate Impact Theory
Disparate impact isn’t new. Despite a recent executive order discouraging its use by federal agencies, laws like the ADEA still allow disparate impact claims in private lawsuits such as Mobley’s.
On April 23, President Trump signed an executive order instructing federal agencies not to rely on disparate impact theory when enforcing federal civil rights laws.
In our coverage of Trump’s disparate impact order, we clarified that several laws explicitly allow the use of disparate impact claims. One of those laws is the ADEA – the law at issue in this case.
The bottom line: Mobley v. Workday reinforces that the disparate impact theory remains legally viable, despite the executive order discouraging its use by federal agencies. While the federal government’s enforcement priority may have shifted, private employers can still face disparate impact claims through civil litigation.
What Comes Next in the Mobley Case
Here’s where the case currently stands: The court allowed Mobley’s claims to move forward as a nationwide class action. Workday raised concerns about the notice going out to potential class members.
The court laid out clear next steps and deadlines as the case moves forward:
- By May 28, both sides must file a joint update with a proposed plan for identifying and notifying class members. If they can’t agree, each side will submit its own plan with an explanation.
- On June 4, the judge will hold a video conference to review the proposed schedules and decide how to proceed.
Meanwhile, the parties continue to work through discovery, the process in which both sides exchange information and evidence to prepare for trial.
The Class Action Status and Its Implications
Granting class action status means the case now represents a broader group of applicants, not just Mobley. That raises the stakes for employers using algorithmic hiring tools, especially at scale.
It also signals that courts may be open to examining systemic patterns in how these tools affect different groups, even when there’s no intent to discriminate.
What to Watch as the Case Moves Forward: The HR Tech Details
Among other things, discovery will examine how Workday’s screening tools operate and whether they create unintended bias. That scrutiny will reach employer oversight practices, algorithm training data and decision processes – and what it uncovers could reshape AI hiring practices broadly.
Two specific Workday tools have already come under scrutiny, putting concrete technologies at the center of the dispute. Both were included in the court’s discussion – a signal of their importance to the case and their potentially wider industry implications.
HR Tech Tool No. 1: Candidate Skills Match
According to court records, Candidate Skills Match is part of a subscription service called “Workday Recruiting.” It extracts skills from the employer’s job posting and the applicant’s materials to determine how well the applicant’s skills match the role they applied for. The results are reported to the employer as “strong,” “good,” “fair,” “low,” “pending,” or “unable to score.”
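Court records describe those output labels but not the underlying scoring logic. For illustration only, here’s a minimal sketch of how a skills-match score might map to those categories – the overlap formula, thresholds and function name are all assumptions, not Workday’s actual implementation:

```python
# Illustrative sketch only -- the real scoring logic is not public.
# Assumptions: skills are already extracted as sets of strings, the
# score is a simple overlap ratio, and the thresholds are invented.

def skills_match_label(job_skills: set[str], applicant_skills: set[str]) -> str:
    """Map a skill-overlap ratio to the labels described in court records."""
    if not job_skills:
        return "unable to score"  # nothing to compare against
    overlap = len(job_skills & applicant_skills) / len(job_skills)
    if overlap >= 0.75:
        return "strong"
    if overlap >= 0.50:
        return "good"
    if overlap >= 0.25:
        return "fair"
    return "low"

# "pending" would be a processing state rather than a score, so it's omitted.
print(skills_match_label(
    {"python", "sql", "etl", "airflow"},
    {"python", "sql", "excel"},
))  # 2 of 4 posting skills matched -> "good"
```

Even in this stripped-down form, the design question is visible: every extraction choice and threshold is a place where bias can enter.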
HR Tech Tool No. 2: Workday Assessment Connector
According to Mobley’s complaint, Workday Assessment Connector uses machine learning to adjust recommendations based on employer behavior, which could raise legal concerns if it amplifies bias based on protected characteristics.
In response, Workday said the tool acts as a bridge, allowing employers to access only “third-party” AI features. It also noted that customers with Workday Recruiting subscriptions may “enable, disable, use, or ignore its many features,” including its AI features.
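To see why a learning loop like this worries plaintiffs, consider a toy simulation – it assumes nothing about Workday’s actual system. It models a recommender that slightly over-weights whichever group dominates its training history, with each round’s hires feeding back in as new training data:

```python
# Toy feedback-loop illustration -- not Workday's system. Assumptions
# (all invented): the recommender scores each group by its historical
# share raised to a power > 1, and the employer's 100 new hires per
# round follow those recommendations and feed back into the history.

history = {"under_40": 60, "over_40": 40}  # mildly skewed starting data

def recommend_shares(hist: dict[str, int], boost: float = 1.5) -> dict[str, float]:
    """Over-weight the dominant group, modeling pattern amplification."""
    total = sum(hist.values())
    raw = {g: (n / total) ** boost for g, n in hist.items()}
    norm = sum(raw.values())
    return {g: s / norm for g, s in raw.items()}

for rnd in range(1, 6):
    for group, share in recommend_shares(history).items():
        history[group] += round(100 * share)  # hires feed back into history
    over_40 = history["over_40"] / sum(history.values())
    print(f"Round {rnd}: over-40 share of hires to date = {over_40:.1%}")
```

In this toy setup, a modest 60/40 starting skew widens every round – the amplification concern raised in the complaint, and one that requires no intent to discriminate.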
Given these developments, it’s important to consider the critical questions and practical actions that can help manage the risks and opportunities ahead.
Questions HR Leaders Should Be Asking Now
As AI takes on a bigger role in hiring, HR leaders can’t just assume the tech is working as it should. It’s time to take a closer look at what these tools are actually doing and how they might be shaping employment outcomes – fairly or not.
Here are a few questions worth asking to help you spot red flags, tighten oversight and stay ahead of what’s quickly becoming a legal and ethical minefield.
Are You Clear on What Your AI Hiring Tools Actually Do?
Understanding what your AI tools are doing behind the scenes is critical. Are you aware of the data sources, criteria, and algorithms influencing candidate evaluations? Transparency helps expose bias before it becomes a liability.
Who Monitors Your AI Tools – and How Often?
Effective oversight means regularly reviewing AI tools for fairness and potential algorithmic discrimination. Who is responsible for monitoring these systems, and how frequently are they audited? Strong governance reduces legal risks and supports more equitable hiring outcomes.
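One concrete starting point is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if a group’s selection rate falls below 80% of the highest group’s rate, that’s a widely used red flag for adverse impact. Here’s a minimal audit sketch – the applicant counts are illustrative, and a real audit should add statistical significance testing:

```python
# Minimal adverse-impact check using the EEOC "four-fifths rule."
# The screening numbers below are illustrative, not real data.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (advanced past screen, total applicants).
    Returns each group's selection rate divided by the highest rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

screen_outcomes = {
    "under_40": (300, 1000),  # 30% pass the automated screen
    "over_40":  (180, 1000),  # 18% pass the automated screen
}

for group, ratio in impact_ratios(screen_outcomes).items():
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# over_40: 0.18 / 0.30 = 0.60 -> well below the 0.8 threshold
```

Running a check like this at each stage of the funnel – not just on final hires – mirrors the court’s point that the alleged harm occurs at screening, before any human decision.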
Are You Prepared to Explain Hiring Decisions?
Increased reliance on AI raises questions about accountability. Can your team clearly articulate how hiring decisions are made when AI is involved? Being able to explain these outcomes builds trust and is crucial in responding to challenges or legal inquiries. Could you confidently explain these decisions to a federal investigator – or in a courtroom?
How HR Can Get Ahead of the Risk
As legal scrutiny around algorithmic bias intensifies, HR has a critical role to play in mitigating risk and reinforcing fairness in hiring.
Getting ahead means taking deliberate steps to strengthen internal oversight, understand your vendors’ technology, and stay informed on legal developments. These actions reduce exposure and position HR as a strategic partner in responsible innovation.
Build Cross-Functional Oversight with Legal and Tech
HR owns a major piece of the risk puzzle, but it can’t solve it in a silo. AI-related risk requires coordinated oversight from legal, compliance and IT.
Establish regular reviews of hiring tools with these stakeholders to assess risk, evaluate how algorithms function, and determine where accountability lies. A cross-functional approach helps ensure alignment on both legal exposure and ethical expectations.
Review Contracts and Audit Your HR Tech Stack
Third-party tools can create risk that isn’t visible on the surface. Review your contracts for indemnification clauses, audit rights, and any language related to algorithmic bias.
At the same time, audit your tech stack to understand which tools include AI features, what data they use, and whether those features are active. Even passive use of embedded AI can carry legal implications.
Keep Pace With Fast-Moving AI Legal Trends
The legal environment around AI in employment is evolving quickly. What’s considered acceptable today could trigger litigation tomorrow. Assign a point person or team to track court cases, agency guidance, and state legislation that may affect your processes. Staying informed helps HR lead, not react.
Mobley v. Workday, Inc., No. 23-cv-00770-RFL, 2025 U.S. Dist. LEXIS 94475 (N.D. Cal. May 16, 2025).