There is no federal law governing AI in hiring in the United States. What there is: a rapidly expanding patchwork of state and local regulations with conflicting requirements, different enforcement mechanisms, and extraterritorial reach that most legal teams haven't caught up to. The cost of ignoring this is no longer theoretical.
---
In 2021, New York City passed Local Law 144 — the first law in the country specifically regulating the use of AI and automated employment decision tools in hiring. It required employers using covered tools to conduct annual bias audits, publish the results publicly, and notify candidates before using AI in the screening process.
That single law took the industry years to fully absorb. Today, it's just the beginning.
Illinois has its Artificial Intelligence Video Interview Act, which predates NYC's law and requires employers to explain how AI analyzes video interviews, get written consent, limit data sharing, and destroy recordings within 30 days of a candidate's deletion request. The Illinois AI in Hiring Act, passed more recently, extends notice requirements to any situation where AI influences an employment decision, not just video interviews.
Colorado's AI Act extends a framework originally built for consumer protection into employment contexts. Employers deploying high-risk AI systems face significant disclosure, documentation, and risk management obligations.
California's FEHA Automated Decision System regulations took effect in late 2025 and represent the most sophisticated employment-specific AI regulation yet enacted in the United States. Uniquely, California holds both the employer and the AI vendor jointly liable for discrimination — meaning if your screening vendor's model produces disparate impact, you share the legal exposure even if you didn't build the model.
The EU AI Act classifies employment AI as high-risk and imposes conformity assessment, transparency, and human oversight requirements. For any company hiring in Europe or using vendors that process European candidate data, these obligations are already phasing in.
Amazon's privacy counsel put it plainly: "Unless your company is willing to engage in digital isolationism, you will be subject to laws outside of where you physically exist."
The practical problem is that these laws don't harmonize. An action that satisfies California's disclosure requirements may not satisfy Illinois' consent requirements. A bias audit methodology compliant with NYC's standards may not meet what EU regulations will eventually require. Building a compliance program that satisfies all of them simultaneously requires deliberate architecture — not just checking boxes on each law individually.
Here's what a framework that actually works looks like.
Start with a complete inventory of every AI tool touching your hiring process. Not just the tools your recruiting team intentionally deployed — also the AI-assisted features built into your ATS, your background check provider, your job board matching algorithms. Most employers discover tools they didn't realize were covered.
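One way to keep that inventory from going stale is to maintain it as structured data rather than a slide or a spreadsheet nobody updates. A minimal sketch in Python, with hypothetical field names; the real schema is whatever your legal and recruiting teams agree to track:

```python
from dataclasses import dataclass, field

@dataclass
class HiringAITool:
    """One entry in the AI hiring-tool inventory (illustrative fields only)."""
    name: str                               # e.g. "ATS resume ranking module"
    vendor: str
    decision_stage: str                     # sourcing, screening, interview, or offer
    jurisdictions: list[str] = field(default_factory=list)   # where covered candidates sit
    last_bias_audit: str | None = None      # date of the most recent audit, if any
    disclosure_language: str | None = None  # notice text shown to candidates

# The inventory usually surfaces tools nobody deployed on purpose:
# job board matching, ATS scoring features, background check automation.
inventory = [
    HiringAITool(
        name="Resume screening model",
        vendor="ExampleVendor",             # hypothetical
        decision_stage="screening",
        jurisdictions=["NYC", "IL", "CA", "EU"],
        last_bias_audit="2025-03-01",
        disclosure_language="AI-assisted screening notice v2",
    ),
]
```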
For each tool, document the vendor's bias audit methodology, adverse impact data by protected class, and what disclosure language they provide or recommend. This documentation is the foundation of your defense if a complaint is ever filed.
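"Adverse impact data by protected class" has a concrete core: the selection rate for each group, and each group's rate divided by the most-favored group's rate (the impact ratio), which is the central figure an NYC-style bias audit reports. A rough sketch with made-up numbers and hypothetical group labels:

```python
def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Selection rate per group, divided by the highest group's selection rate."""
    rates = {group: selected[group] / total[group] for group in total}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical screening outcomes for one tool over one audit period.
selected = {"group_a": 180, "group_b": 120}
total = {"group_a": 400, "group_b": 350}

for group, ratio in impact_ratios(selected, total).items():
    # A ratio below 0.80 trips the traditional four-fifths rule flag for adverse impact.
    status = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```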
Build a consent and disclosure workflow that meets the most demanding standard across all jurisdictions you operate in — typically a combination of California's specificity requirements and Illinois' written consent requirements. One notice that satisfies both will satisfy most other jurisdictions as well.
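In practice, "meets the most demanding standard" becomes a short checklist the workflow enforces before any candidate is screened. A sketch under that assumption; the step names are hypothetical and the authoritative list comes from counsel:

```python
# Steps the strictest combination of jurisdictions requires before screening begins.
REQUIRED_BEFORE_SCREENING = [
    "ai_use_disclosed",          # candidate told an automated tool will be used
    "how_ai_works_explained",    # Illinois-style plain-language explanation
    "written_consent_captured",  # Illinois-style written consent on file
]

def missing_steps(candidate_record: dict) -> list[str]:
    """Return the disclosure and consent steps still outstanding for this candidate."""
    return [step for step in REQUIRED_BEFORE_SCREENING if not candidate_record.get(step)]

outstanding = missing_steps({"ai_use_disclosed": True, "written_consent_captured": False})
if outstanding:
    print("Hold screening until complete:", ", ".join(outstanding))
```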
Establish an annual audit cycle. Not because every jurisdiction legally requires it annually (though NYC does), but because models drift. Training data changes. What was valid last year may perform differently on this year's candidate pool. Annual audits aren't a compliance formality — they're good risk management.
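This is also why re-running the numbers matters more than re-filing last year's report: the same impact ratio check against fresh candidate data is what surfaces drift. Continuing the earlier sketch, a simple year-over-year comparison (the tolerance is illustrative, not a legal threshold):

```python
def drift_report(previous: dict[str, float], current: dict[str, float],
                 tolerance: float = 0.05) -> list[str]:
    """Flag groups whose impact ratio moved more than `tolerance` since the last audit."""
    findings = []
    for group, ratio in current.items():
        delta = ratio - previous.get(group, ratio)
        if abs(delta) > tolerance:
            findings.append(f"{group}: impact ratio shifted {delta:+.2f} since last audit")
    return findings

# Hypothetical impact ratios from two consecutive annual audit cycles.
audit_prior = {"group_a": 1.00, "group_b": 0.88}
audit_now = {"group_a": 1.00, "group_b": 0.76}

for finding in drift_report(audit_prior, audit_now):
    print(finding)  # the tool that passed last year warrants a fresh look this year
```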
Assign a named owner for AI hiring compliance. In most organizations, AI tools were deployed by recruiting operations, governed (loosely) by IT, and not reviewed by legal until something went wrong. That structure doesn't work anymore. Someone needs to own this.
The cost of building a genuine compliance program is substantial but predictable. The cost of a class action or regulatory enforcement action is neither.
---
Quick Hits
Maryland's AI Bills
Maryland has passed legislation requiring employers to disclose when AI is used in employment decisions and creating a private right of action for violations. Like California, Maryland's law creates direct liability for employers — not just for the tools they build, but for the tools they buy. More states are moving in this direction, and the trajectory is clear: employer liability for third-party AI tools is becoming the norm, not the exception.
The EEOC Enforcement Gap
The EEOC has issued guidance on AI in hiring — focusing on how existing anti-discrimination law applies to automated tools — but enforcement actions specifically targeting AI have been limited. That's not because the risk is low; it's because large-scale AI discrimination cases require technical expertise that regulators are still building. The enforcement environment will tighten. Proactive compliance now is dramatically less expensive than reactive remediation later.
EU AI Act Employment Timeline
The EU AI Act's high-risk AI provisions — which cover employment AI — are phasing in over a multi-year period, but the obligation to begin conformity assessments is already in motion for new deployments. Companies using AI to screen candidates in Europe need to be working on this now, not when the final enforcement deadlines arrive. The documentation requirements alone are significant.
---
The Operator's Take
I sit in an unusual position — I build the tools, and I track the regulations closely. Here's my honest read: most of the compliance burden created by these laws is reasonable. Transparency, bias auditing, human oversight — these aren't unreasonable asks. What's genuinely burdensome is the patchwork. Complying with five different regulatory frameworks that use different definitions, different thresholds, and different enforcement mechanisms is expensive in a way that has nothing to do with whether your AI is actually good or fair. The industry needs federal preemption — a single national standard that's rigorous but coherent. Until then, the compliance cost asymmetrically burdens smaller employers who can't afford the legal infrastructure, while large enterprises absorb it as a line item. That's not a good outcome for anyone.
---
Disclosure and consent documentation is where most employers fall short first — either because they're using generic language that doesn't meet the specific requirements of the laws that apply to them, or because they have no disclosure process at all. The AI Screening Disclosure & Consent Templates I put together are jurisdiction-aware, covering the specific requirements of NYC, Illinois, California, and the EU. They're ready to use after your legal team's review.
Get it here → AI Screening Disclosure & Consent Templates
---