A single federal case in California has the potential to establish liability standards for AI hiring tools that will govern the entire industry. Most HR leaders have heard the name Mobley v. Workday. Very few understand what's actually being argued, or why the theory being tested could turn every AI vendor contract in your filing cabinet into a legal exposure.
---
Derek Mobley is a Black man over 40 who applied for more than 100 jobs through Workday's platform and was rejected by every one of them. His lawsuit alleges that Workday's AI screening tools systematically discriminated against him on the basis of race, age, and disability — and that Workday itself, as the vendor providing those tools to employers, bears liability under federal civil rights law.
The case is proceeding as a collective action under the ADEA, a mechanism similar to a class action except that similarly situated applicants must affirmatively opt in. If certified, it wouldn't just affect Mobley; it could sweep in a significant portion of the applicant population that interacts with Workday-powered hiring processes.
The core legal theory is worth understanding in detail. The complaint argued that Workday is an "employment agency" under Title VII, the ADEA (Age Discrimination in Employment Act), and the ADA. That's a significant stretch from how these laws have historically been applied; they were written with human staffing agencies in mind, not software platforms. Judge Rita Lin didn't accept that framing wholesale, but she allowed the case to proceed past the motion to dismiss on a closely related theory: that Workday can be liable as an agent of its employer clients, because those clients delegated traditional hiring functions, screening and rejecting applicants, to its software. That any vendor liability theory survived this stage is itself a meaningful signal.
Under the ADEA theory specifically, Workday's AI is alleged to screen out older applicants, whether by weighing age directly or by relying on correlates like graduation year, years-of-experience patterns, or other signals that function as proxies for age. This is the theory worth watching closely. Disparate impact claims under the ADEA carry a lower bar than intentional discrimination claims: a plaintiff doesn't need to show that Workday intended to discriminate, only that the tool's outcomes fell disproportionately on older workers.
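What "fell disproportionately" means in practice is usually measured with selection rates and impact ratios. Here's a minimal sketch of that math using the EEOC's four-fifths rule as the screening heuristic; the numbers, group labels, and threshold handling are illustrative assumptions, not figures from the Mobley filings.

```python
# Minimal sketch of a disparate impact check on screening outcomes.
# All numbers are hypothetical; nothing here comes from the Mobley case.

def selection_rate(advanced: int, applied: int) -> float:
    """Share of a group's applicants who made it past the screen."""
    return advanced / applied if applied else 0.0

# Hypothetical screening outcomes, bucketed by ADEA-protected status.
outcomes = {
    "under_40":    {"advanced": 480, "applied": 1000},
    "40_and_over": {"advanced": 290, "applied": 1000},
}

rates = {group: selection_rate(n["advanced"], n["applied"])
         for group, n in outcomes.items()}

# Impact ratio: the protected group's rate divided by the most-favored
# group's rate. The EEOC's four-fifths rule treats a ratio below 0.8 as
# evidence of adverse impact worth investigating: a screening heuristic,
# not a legal verdict.
impact_ratio = rates["40_and_over"] / max(rates.values())

print(f"selection rates: {rates}")
status = "flag for review" if impact_ratio < 0.8 else "within guideline"
print(f"impact ratio: {impact_ratio:.2f} ({status})")
```

This is essentially the same impact-ratio math NYC's bias audit rule requires to be computed and published, which is why the analysis reappears under the disclosure obligations below.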
Why does this matter beyond the specific parties? Because the vendor liability theory, if it holds, fundamentally changes the compliance calculus for every company in the AI hiring space. Right now, most employers operate under the assumption that if their AI vendor gets sued, that's the vendor's problem. California's FEHA automated decision system regulations took effect in October 2025 and explicitly establish joint liability between employers and vendors — but Mobley is testing whether federal law reaches the same conclusion without waiting for California-style state-level regulation.
If Workday is found to be a functional employment agency, then every AI screening vendor could be subject to the same liability framework. That changes how vendors write their contracts, how they conduct bias audits, and — critically — how much they'll charge you for indemnification provisions.
There's a second case worth knowing alongside Mobley: Kistler v. Eightfold, which proceeds on a different legal theory, the Fair Credit Reporting Act. That case alleges that Eightfold's scoring of job candidates constitutes a consumer report under the FCRA, which would trigger disclosure and adverse action notice requirements that Eightfold didn't fulfill. If that theory holds, AI scoring tools that candidates never see could be regulated like credit reports. Taken together, Mobley and Kistler suggest the legal system is developing multiple approaches to AI hiring liability in parallel.
What should you do right now?
First, pull your AI vendor contracts and read the indemnification language. If your vendor promises to defend and indemnify you against discrimination claims arising from their tool's outputs, understand how solid that promise actually is and whether it survives vendor insolvency or acquisition.
Second, understand what disclosure and consent obligations apply to the tools you're running and whether you're meeting them. Illinois requires notice when AI influences an employment decision. NYC requires bias audits and publication of results. California's FEHA regs require impact assessments. Document that you've done the analysis, even if you conclude no additional action is needed; a sketch of one way to track this follows.
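One way to keep that analysis current is to maintain it as a living inventory rather than a one-off memo. A rough sketch of the idea in Python; the jurisdiction labels, obligation phrasings, and tool names are simplified assumptions, not an authoritative statement of any law.

```python
# Illustrative jurisdiction checklist: obligations are loose paraphrases
# of the rules discussed above, not legal advice, and the tool inventory
# is hypothetical.

OBLIGATIONS = {
    "IL":  {"notice when AI influences an employment decision"},
    "NYC": {"annual bias audit", "publish audit results", "candidate notice"},
    "CA":  {"impact assessment", "candidate notice", "appeal process"},
}

# Hypothetical inventory: where each tool screens candidates, and which
# obligations you currently hold documentation for.
tools = {
    "resume_screener_v2": {
        "jurisdictions": ["IL", "NYC", "CA"],
        "documented": {"candidate notice", "annual bias audit"},
    },
}

for name, tool in tools.items():
    for state in tool["jurisdictions"]:
        gaps = OBLIGATIONS[state] - tool["documented"]
        if gaps:
            print(f"{name} / {state}: missing {sorted(gaps)}")
```

The point isn't the code; it's that gaps surface automatically whenever a tool, jurisdiction, or obligation changes, instead of waiting for the next annual review.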
Third, build a paper trail. The question in litigation is often not whether your AI discriminated — it's whether you took reasonable steps to prevent it. Documentation of bias testing, vendor selection criteria, and human review processes is your best defense.
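What counts as a defensible paper trail will vary, but one low-effort pattern is an append-only log where every bias test you run leaves a record. A minimal sketch, with every field name and value an illustrative assumption:

```python
# Minimal sketch of an append-only bias-testing log entry.
# Every field name and value here is an illustrative assumption.
import json
from datetime import datetime, timezone

record = {
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "tool": "resume_screener_v2",             # hypothetical tool name
    "test": "adverse impact ratio, age 40+",  # e.g., the check sketched above
    "result": {"impact_ratio": 0.60, "threshold": 0.8, "flagged": True},
    "action_taken": "human review added for flagged applicant tier",
    "reviewed_by": "hr-compliance@example.com",
}

# One JSON object per line: easy to grep, and easy to produce in
# discovery as evidence of an ongoing, documented testing process.
with open("bias_audit_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```

A log like this does double duty: it proves the testing happened, and it timestamps the remediation decisions that followed.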
The legal environment around AI hiring tools is evolving faster than most compliance calendars. Cases like Mobley are setting precedents in real time.
---
Quick Hits
Kistler v. Eightfold: The FCRA Theory
The Kistler case argues that AI talent platforms that score and rank candidates are producing "consumer reports" under the Fair Credit Reporting Act — which would require disclosure to candidates and adverse action notices when they're rejected based on those scores. If the FCRA theory holds, it would impose notice and dispute rights on a broad category of AI screening tools that currently operate entirely behind the scenes.
California FEHA ADS Regulations
California's automated decision system regulations, which took effect October 1, 2025, impose joint liability on employers and their AI vendors for discriminatory outcomes. The regs require employers to conduct and document impact assessments, provide candidates with notice that AI is being used, and establish processes for candidates to appeal automated decisions. These aren't aspirational guidelines — they're enforceable requirements.
Insurance Coverage for AI Discrimination Claims
Employment practices liability insurance (EPLI) policies are increasingly being tested on whether they cover AI-related discrimination claims. Some insurers are adding AI exclusions; others are adding AI-specific endorsements. Review your EPLI policy with outside counsel to understand whether a Mobley-style claim against your company would be covered. Many HR leaders assume it would be; the actual policy language often says otherwise.
---
The Operator's Take
The Mobley case is getting coverage in legal trade press, but it deserves prominent space in every HR leader's strategic risk assessment. The reason is simple: if the vendor liability theory holds, the AI hiring vendor market transforms overnight. Vendors will have to carry significant insurance, conduct rigorous bias audits, and price compliance overhead into their contracts. Some smaller vendors won't survive that shift.
From where I sit, building AI products that touch hiring decisions, the honest answer is that liability clarification is good for the market. Right now, there are vendors selling AI screening tools who have done minimal bias auditing and are banking on the compliance landscape staying fuzzy. Mobley clears the fog. Companies that have been doing the work — transparent audits, documented testing, defensible processes — will be better positioned. Companies that have been hoping the legal clock would keep ticking are running out of runway.
Get ready now rather than scrambling when the first adverse action notice lands.
---
Your disclosure and consent language is either protecting you or exposing you. Templates purpose-built for the current regulatory environment will save you legal fees and give you defensible documentation if questions arise.
Get it here → AI Screening Disclosure & Consent Templates
---