The compliance conversations in HR AI always focus on the tools — the bias audits, the disclosure notices, the impact assessments. Rarely does anyone focus on the contract. That's a mistake. In most standard AI vendor agreements, employers absorb a disproportionate share of liability for outcomes they have limited visibility into. That's the arrangement you're accepting when you sign without negotiating.
---
California's FEHA Automated Decision System regulations, which took effect in October 2025, changed the liability landscape in a specific and important way. Under those regulations, employers and vendors are jointly liable for discriminatory outcomes produced by AI hiring tools. Joint liability sounds like shared responsibility. In practice, it means plaintiffs can sue both parties — and employers often make easier targets because they have the employment relationship.
That legal reality should be changing how HR and legal teams approach AI vendor contracts. Most aren't there yet.
Here's what's typically missing from standard AI hiring vendor agreements:
Transparency about how the model works.
Most vendor contracts include language allowing vendors to protect their "proprietary algorithms" from disclosure. That's reasonable from a trade secret perspective. It becomes a problem when you have no mechanism to understand why a candidate received the score they did, and that candidate then files a discrimination charge. You cannot defend decisions you cannot explain. Your contract needs a defined process for explainability on request, even if the full model isn't disclosed.
Bias testing requirements with teeth.
Vendors increasingly include language noting that they conduct bias audits. Few contracts specify: Who conducts the audit? What methodology? How often? What happens if the audit reveals adverse impact? What are the vendor's remediation obligations? If your contract doesn't answer these questions, "we conduct bias audits" is marketing language, not a contractual commitment. Push for specifics — audit cadence, methodology disclosure, and notification requirements when adverse impact is found.
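To make "adverse impact" concrete: the most common first-pass check auditors run is the EEOC four-fifths rule, which flags any group whose selection rate falls below 80% of the highest group's rate. Here's a minimal sketch of that calculation; all group names and counts are hypothetical.

```python
# Minimal sketch of the EEOC four-fifths (80%) rule, the usual first-pass
# adverse-impact screen in a bias audit. Group names and counts below
# are hypothetical.

outcomes = {
    "group_a": {"applied": 400, "selected": 120},  # 30% selection rate
    "group_b": {"applied": 300, "selected": 60},   # 20% selection rate
}

# Selection rate per group, then each rate compared against the highest.
rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

NYC Local Law 144's mandated bias audits are built around essentially this impact-ratio calculation. A contract that names the metric, the threshold, and the notification obligation gives you something you can actually enforce; "we conduct bias audits" does not.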
Data retention and deletion terms.
AI hiring tools process candidate data. That data may persist in the vendor's model long after the candidate relationship ends. Under GDPR, CCPA, and an expanding set of state biometric privacy laws, candidates have rights to access and deletion. Your contract needs to specify: what data is retained, for how long, where it's stored, and what your vendor's process is for honoring deletion requests. Vague "we comply with applicable law" language isn't sufficient.
Cooperation with investigations and audits.
When an EEOC charge lands, or when your state's fair employment agency opens an investigation, you need your vendor's cooperation. Standard contracts often make it difficult or expensive to get the data and documentation a regulator might request out of your vendor. Negotiate explicit cooperation obligations: not just a general agreement to "work together," but specific commitments on data production, audit participation, and response timelines.
Indemnification that's actually proportionate.
Standard vendor contracts often cap indemnification at the value of the contract — which for a $25,000-a-year SaaS subscription means your vendor's maximum liability is $25,000 even if their tool produces discriminatory outcomes at scale. For AI tools that touch millions of hiring decisions, that cap is insufficient. Negotiate uncapped indemnification for claims arising from the vendor's model design and bias testing failures.
The Mobley v. Workday collective action is instructive here. That lawsuit, a nationwide ADEA age discrimination claim, targets the AI vendor directly: the court allowed it to proceed on the theory that Workday acted as an agent of the employers who used its screening tools. If a vendor can be liable as your agent, the inverse follows: you remain answerable for what your agent's tools do. Joint liability, in practice.
The NIST AI Risk Management Framework 1.0 and ISO/IEC 42001:2023 both offer useful structure for thinking about AI vendor risk. Neither is law in the US, but regulators increasingly reference them as standards for reasonable care. Documentation showing you evaluated your vendors against these frameworks before signing is worth having.
The practical starting point: pull your current AI vendor contracts. Have legal redline them against the provisions above. Most will be deficient in multiple areas. Use that gap analysis as leverage in your next renewal conversation.
---
Quick Hits
The Workday collective action: what employers need to know.
The Mobley v. Workday lawsuit alleges that Workday's AI screening tools discriminated against applicants based on age, race, and disability. Workday is the named defendant, on the theory that it acted as an agent of the employers who used its tools; if that theory holds, the employers' own exposure doesn't disappear. The case is still moving through the courts, but the legal theory has traction. Every employer using AI hiring tools should understand their vendor contracts' indemnification and cooperation provisions before the next renewal.
The NIST AI Risk Management Framework isn't optional anymore.
NIST AI RMF 1.0 was published as voluntary guidance. Increasingly, it's being cited by regulators, referenced in state AI legislation, and used by plaintiffs' attorneys to establish what "reasonable care" looks like for AI systems. If you haven't mapped your AI hiring tools against the NIST framework, you're operating without a standard that courts are beginning to treat as baseline.
ISO/IEC 42001:2023: the AI management certification you'll be asked about.
ISO 42001 is the international standard for AI management systems. Enterprise procurement teams and large clients are beginning to ask vendors whether they're certified. That certification conversation will eventually flow downstream to HR AI vendors. Ask your vendors where they are on 42001 compliance — and note the ones who don't know what you're talking about.
---
The Operator's Take
Building AI products that sit inside hiring decisions has made me acutely aware of something most HR leaders don't think about until it's too late: the contract is the compliance instrument. Not the bias audit. Not the disclosure notice. The contract.
Because the contract is what defines what your vendor actually owes you when something goes wrong. The bias audit that happens once a year according to a methodology the vendor chose tells you what they want to tell you. The contract is what you can enforce.
Most HR teams don't have legal counsel that specializes in AI vendor contracts. Most general counsel offices are still figuring out what questions to ask. That gap is real, and it's creating risk exposure that most boards don't know exists. Close it before a regulator or a plaintiff's attorney closes it for you.
---
Disclosure notices and consent frameworks are the front-line compliance requirement under NYC LL144, Illinois's AI Video Interview Act, and California's FEHA regulations. Getting them wrong doesn't just risk a fine; it signals to regulators that your AI governance is inadequate. My templates are built against actual regulatory requirements, not generic legal boilerplate.
Get it here → AI Screening Disclosure & Consent Templates ($49 on Gumroad)