The companies that get hit hardest by AI hiring regulations are rarely the ones using the most aggressive AI. They're the ones who deployed tools without documenting their decisions, and then can't demonstrate compliance when someone asks. Ignorance isn't a defense when the law requires affirmative disclosure. A practical audit framework is the difference between a defensible posture and an expensive problem.
---
Let's start with what an AI audit actually is, because the term gets used loosely. An AI audit for HR purposes is a structured process to identify every AI tool touching employment decisions, assess its risk level, test for disparate impact, document your compliance posture, and ensure your team knows how to operate within the guardrails.
This isn't theoretical anymore. California's FEHA Automated Decision System regulations took effect October 1, 2025, making employers and vendors jointly liable for discriminatory outcomes. NYC Local Law 144 requires annual bias audits for automated employment decision tools. Illinois requires notice whenever AI influences any employment decision. Colorado's AI Act applies consumer protection principles to employment contexts. The EU AI Act classifies employment AI as high-risk with ongoing monitoring requirements.
The patchwork is genuinely complicated — and it's only getting more so. Here is a five-step framework that addresses all of it.
Step 1: Inventory every AI tool in your hiring process.
Most HR teams can name the obvious ones — their ATS, their video interview platform, their background check provider. They miss the non-obvious ones: the AI features embedded in their ATS that they didn't explicitly turn on, the resume scoring algorithm their staffing agency uses, the LinkedIn Recruiter AI features their recruiters are using without centralized awareness.
Build a complete map. For each tool, document: vendor name, what AI functionality is active, what data is being processed, what employment decisions it influences, and who owns the vendor relationship.
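To make that concrete, here is a minimal sketch of an inventory record in Python. The fields mirror the list above; the vendor, features, and owner shown are hypothetical, and your own inventory might live in a spreadsheet just as well as in code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One row in the AI hiring-tool inventory."""
    vendor: str                       # vendor name
    ai_functionality: list[str]       # which AI features are actually active
    data_processed: list[str]         # categories of candidate data it touches
    decisions_influenced: list[str]   # employment decisions it informs
    relationship_owner: str           # who owns the vendor relationship
    deployed_on: date | None = None   # go-live date, if known

# Hypothetical entry; tool and owner names are illustrative.
inventory = [
    AIToolRecord(
        vendor="ExampleVideoInterview Inc.",
        ai_functionality=["automated response scoring"],
        data_processed=["interview video", "transcripts"],
        decisions_influenced=["advance/reject after screening interview"],
        relationship_owner="Head of Talent Acquisition",
        deployed_on=date(2024, 3, 1),
    ),
]
```

The point isn't the tooling; it's that every tool has the same five fields filled in, so nothing hides in a recruiter's browser extension.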
Step 2: Classify each tool by risk level.
Not all AI in HR carries the same regulatory and ethical weight. A tool that recommends whom to reject is categorically different from a tool that schedules interview reminders. Create a tiered classification: tools that directly influence selection decisions (highest risk), tools that influence candidate ranking or scoring (high risk), tools that support process efficiency without affecting selection (lower risk).
This classification determines what compliance obligations apply. NYC LL144 covers "automated employment decision tools" that make or substantially assist in employment decisions — not every AI tool in your stack. Knowing which tier each tool falls into focuses your compliance work where it matters.
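As a rough illustration of the triage, here is a sketch in Python. The keyword heuristic is purely illustrative, a first pass to be confirmed by a human who knows how each tool is actually used and by counsel's reading of each statute.

```python
from enum import Enum

class RiskTier(Enum):
    HIGHEST = "directly influences selection decisions"
    HIGH = "influences candidate ranking or scoring"
    LOWER = "process support only; no effect on selection"

def classify_tool(decisions_influenced: list[str]) -> RiskTier:
    """First-pass keyword heuristic; final tiering is a human call."""
    text = " ".join(decisions_influenced).lower()
    if any(w in text for w in ("reject", "advance", "hire", "select")):
        return RiskTier.HIGHEST
    if any(w in text for w in ("rank", "score", "shortlist", "match")):
        return RiskTier.HIGH
    return RiskTier.LOWER

print(classify_tool(["recommends whom to reject"]).name)      # HIGHEST
print(classify_tool(["ranks candidates by fit score"]).name)  # HIGH
print(classify_tool(["schedules interview reminders"]).name)  # LOWER
```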
Step 3: Run bias tests on high-risk tools.
For every tool classified as high-risk or highest-risk, you need disparate impact analysis. At minimum, this means analyzing selection rates by race/ethnicity, gender, age, and other protected characteristics at each stage where AI is making or informing decisions.
If your vendor provides audit results, review them critically. NYC LL144 requires the bias audit to be conducted by an independent auditor — self-reported vendor results don't satisfy the requirement. Ask for the methodology, the demographic groups tested, and the adverse impact ratios. If a vendor can't or won't provide this, that's significant information.
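The core arithmetic is simple enough to sanity-check a vendor's numbers yourself. Below is a minimal sketch in Python with made-up counts; the 0.8 threshold is the EEOC four-fifths rule of thumb, a screening heuristic rather than the legal standard any particular statute applies.

```python
def adverse_impact_ratios(stage: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio per group: each group's selection rate divided by the
    highest group's rate. A ratio under 0.8 trips the EEOC four-fifths
    rule of thumb (a screening heuristic, not a legal conclusion)."""
    rates = {g: (sel / apps if apps else 0.0) for g, (sel, apps) in stage.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical counts at one AI-scored stage: (selected, applicants)
stage = {"group_a": (48, 120), "group_b": (30, 100), "group_c": (12, 50)}
for group, ratio in adverse_impact_ratios(stage).items():
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

If a vendor's reported ratios don't reproduce from the underlying counts, that's a question worth asking before you rely on the audit.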
Step 4: Document everything.
Compliance posture lives and dies on documentation. What did you know, when did you know it, what did you do about it? For each AI tool, maintain records of: when it was deployed, what bias testing has been done and when, what disclosure was made to candidates, what human review process exists, and what action was taken when bias was identified.
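One lightweight way to keep those records is an append-only log, one timestamped entry per event. A sketch in Python with a hypothetical schema; no regulator prescribes this exact layout, but the fields mirror the list above.

```python
import json
from datetime import datetime, timezone

# Hypothetical per-tool compliance record; keys mirror the list above.
record = {
    "tool": "ExampleVideoInterview Inc.",
    "deployed_on": "2024-03-01",
    "bias_test": {
        "date": "2025-01-15",
        "auditor": "independent-auditor-x",  # LL144 requires independence
        "method": "impact ratios by race/ethnicity, gender, age",
        "findings": "all ratios >= 0.85",
        "action_taken": "none required",
    },
    "candidate_disclosure": "pre-interview notice posted and emailed",
    "human_review": "recruiter reviews every auto-reject before it is sent",
    "logged_at": datetime.now(timezone.utc).isoformat(),
}

# Append-only, timestamped entries answer "what did you know, and when?"
with open("ai_compliance_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```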
California FEHA regulations impose specific documentation requirements. The EU AI Act requires technical documentation, conformity assessments, and logs of AI system operation. NIST AI RMF and ISO/IEC 42001:2023 provide frameworks for structuring this documentation in a way that regulators recognize.
Documentation isn't just defensive. It's the foundation for continuous improvement. If you don't document your bias test results over time, you can't know whether your interventions are actually working.
Step 5: Train your team.
The most technically compliant AI system will produce discriminatory outcomes if the humans operating it aren't trained on what they're looking at. This means training recruiters on what the AI is recommending and why, how to critically evaluate AI outputs rather than accept them automatically, when and how to override AI recommendations, and how to recognize and report potential bias.
The UW research showing that humans mirror AI bias roughly 90% of the time in severe cases is a devastating indictment of uninstructed human review. "Human in the loop" is only meaningful if the humans in the loop are trained to exercise genuine judgment. That requires deliberate, ongoing training — not a one-time onboarding module.
This five-step framework won't make you immune to litigation or regulatory action. Nothing does. But it puts you in the category of employers who can demonstrate reasonable care, good-faith compliance efforts, and a documented process for continuous improvement. That matters enormously when something goes wrong.
---
Quick Hits
Impact Assessment Requirements Are Expanding
Multiple jurisdictions now require, or are considering requiring, algorithmic impact assessments before AI is deployed in hiring. These assessments evaluate whether a system is likely to have discriminatory effects before it goes live: a proactive requirement, not a reactive one. If you're deploying new AI tools, impact assessment should be part of the vendor due diligence process, not an afterthought.
Notification and Consent Obligations Vary Widely
Illinois requires notice whenever AI influences any employment decision. California FEHA requires disclosure to candidates that an automated decision system is being used. NYC requires employers to post notice of bias audit results. These requirements don't perfectly overlap, which means a compliance framework that works in one state may be incomplete in another. Design for the most stringent requirement in each category and you're covered everywhere.
NIST AI RMF vs. ISO 42001: Which Framework to Use?
Both frameworks are useful; they're not mutually exclusive. NIST AI RMF 1.0 is U.S.-centric and focused on risk management processes. ISO/IEC 42001:2023 is a certifiable management system standard more aligned with EU expectations. For companies operating in both markets, ISO 42001 is increasingly the credible baseline — and its documentation requirements map well onto what regulators are asking for.
---
The Operator's Take
Most HR teams treat compliance as a legal problem and hand it off to counsel. That's a mistake. Counsel can tell you what the regulations say. They can't tell you which of your tools are actually making discriminatory decisions, because they don't have visibility into your stack. The people who need to own AI compliance in HR are the operators — the HR leaders and recruiting leaders who know what tools are deployed, how they're being used, and where the gaps are. Legal is a resource and a backstop, not the primary owner. When I see HR teams treat compliance as someone else's problem, I already know what their audit is going to find.
---
The five steps above give you the framework. The hard part is execution — knowing exactly what to document, what bias tests to run, what vendor questions to ask, and what the output should look like when you're done.
Get it here → Complete AI Hiring Compliance Toolkit ($119)
---