Local Law 144 Compliance Guide: What AI Hiring Tools Need in 2026
By The Screening Room
Local Law 144 compliance in 2026: who’s in scope, what a bias audit must include, posting + notice rules, and how to avoid common compliance traps.
If you’re responsible for hiring in 2026 and you use any AI that screens, scores, ranks, tags, or recommends candidates for New York City roles, Local Law 144 compliance is no longer a “legal note” you can delegate and forget. It’s an operational discipline: evidence that your automated employment decision tool (AEDT) was audited, your notices went out on time, and your public posting is complete.
From an operator’s perspective, LL144 is deceptively simple: do an annual bias audit, publish a summary, and provide notice. In practice, most programs fail in the seams—scope confusion, missing “distribution date,” auditors who aren’t truly independent, audits run on the wrong data, or notices that technically exist but can’t be proven.
This guide is a practical walk-through of what Local Law 144 requires, what the NYC rules add, and how to build a compliance workflow that will hold up when someone asks for receipts.
What Local Law 144 does (in plain English)
Local Law 144 of 2021 restricts employers and employment agencies from using an AEDT in New York City unless three things are true: (1) the tool has had a bias audit within the last year, (2) a summary of the most recent audit results is publicly available, and (3) covered candidates and employees received notice.
The NYC Department of Consumer and Worker Protection (DCWP) is the agency that enforces the law, and DCWP’s own AEDT page is the easiest official starting point because it links to the rule, FAQs, and complaint channel.
The key thing to understand: LL144 is not an “AI model law.” It’s a “use law.” It attaches to how the tool is used in a hiring or promotion workflow—not whether you call it “AI,” not whether the vendor markets it as “machine learning,” and not whether the final decision is made by a human.
If a tool’s simplified output substantially assists or replaces discretionary decision-making, you should assume it’s an AEDT until you can prove otherwise.
Who it applies to (and the two scope mistakes I see most)
LL144 obligations fall on employers and employment agencies using AEDTs—not on vendors. The DCWP FAQ is explicit that the employer/employment agency is responsible for ensuring a bias audit was done before use.
That creates the first common mistake: teams assume “the vendor is handling it.” Vendors can coordinate an audit and provide materials, but the compliance exposure is still on the employer.
The second common mistake is over-indexing on headquarters location. Your company doesn’t need to be based in NYC. What matters is whether the AEDT is being used “in the city” as defined in DCWP guidance. If you’re hiring for a role located in NYC (even part-time), or a remote role tied to an NYC office, you’re in scope. For agencies, if the agency is in NYC, it’s in scope; if it’s outside NYC, the role/location analysis still matters.
Finally, notice requirements are tied to NYC residents (candidates or employees). Many teams miss this because their ATS captures “location” as a preference, not residency, and they don’t have a clean flag that a given candidate is an NYC resident.
Practical takeaway: before you think about audits, build an “in-scope detector” in your workflow—something you can point to that identifies when LL144 applies.
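To make that concrete, here is a minimal sketch of what an in-scope detector could look like. The `Requisition` fields and the `"NYC"` location convention are hypothetical stand-ins for whatever your ATS actually stores; treat this as a conservative screen, not a legal determination.

```python
from dataclasses import dataclass

@dataclass
class Requisition:
    """Hypothetical job-req record; field names are illustrative, not from any real ATS."""
    role_location: str          # e.g. "NYC", "Remote", "Austin"
    tied_to_nyc_office: bool    # remote role associated with an NYC office
    uses_aedt: bool             # any simplified-output tool touches this req's workflow

def ll144_in_scope(req: Requisition) -> bool:
    """Rough LL144 trigger: an AEDT used for a role located in NYC or tied to
    an NYC office. Conservative by design; confirm edge cases against DCWP guidance."""
    if not req.uses_aedt:
        return False
    return req.role_location == "NYC" or req.tied_to_nyc_office
```

When the detector fires, the LL144 workflow (audit check, posting check, notice) starts; when it doesn't, you still keep the evaluation on record.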
What counts as an AEDT under the NYC rules
The NYC rules (6 RCNY Subchapter T) narrow and operationalize the meaning of “substantially assist.” An AEDT includes a computational process that issues a simplified output (score, tag, classification, ranking, recommendation) where the employer or agency does one of the following:
- Relies solely on the simplified output, with no other factors considered; or
- Uses the simplified output as one of a set of criteria where it is weighted more than any other criterion; or
- Uses the simplified output to overrule conclusions derived from other factors (including human decision-making).
The definition is workflow-driven. The exact same model could be “covered” in one use case and “not covered” in another depending on whether the output is predominant and decision-driving.
Also note what is not a simplified output: tools that only translate or transcribe existing text (for example, converting a resume from PDF or transcribing a video interview) are carved out in the NYC rules.
The bias audit requirement: what it actually entails
The law’s headline is “annual bias audit,” but the NYC rules and DCWP FAQ define the minimum audit content.
Minimum audit calculations
At minimum, an independent auditor must calculate:
- Selection rates (or scoring rates, depending on how the tool is used)
- Impact ratios
And those calculations must be done across:
- Sex categories
- Race/ethnicity categories
- Intersectional sex-by-race/ethnicity categories
The DCWP FAQ describes this as the minimum required evaluation, and the NYC rules define key terms like selection rate, scoring rate, and impact ratio.
Selection rate vs scoring rate (which one applies?)
This is where a lot of audits go wrong.
If your AEDT makes yes/no advancement decisions, assigns candidates to a “qualified/unqualified” bucket, or otherwise classifies people into groups used for screening, you’re in selection-rate land.
If your AEDT produces a numerical score (or equivalent) and candidates are moved forward based on being above some threshold, you’re in scoring-rate land.
The NYC rules define scoring rate specifically as the rate at which individuals in a category receive a score above the sample’s median score.
If your tool produces both (score + recommendation), your auditor should be explicit about which output is “the simplified output” used in the workflow, and why the chosen analysis matches the actual decision path.
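Using the scoring-rate definition above (the share of a category scoring above the full sample’s median), a sketch of the calculation might look like this. The input shape is an assumption; real audit data arrives with IDs, timestamps, and demographic joins.

```python
from statistics import median

def scoring_rates(scores_by_category: dict[str, list[float]]) -> dict[str, float]:
    """Scoring rate per the NYC rules: the share of each category scoring
    above the FULL sample's median. Assumes the input covers the whole sample."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    m = median(all_scores)
    return {
        cat: sum(1 for s in scores if s > m) / len(scores)
        for cat, scores in scores_by_category.items()
    }
```

Note the median is taken over the whole sample, not per category; computing per-category medians is a common implementation mistake that silently changes the metric.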
Impact ratio: the math you’re going to publish
The NYC rules define impact ratio as either:
- Selection rate for a category divided by the selection rate of the most selected category; or
- Scoring rate for a category divided by the scoring rate for the highest scoring category.
The law itself doesn’t adopt the four-fifths rule, but the impact ratio structure is obviously aligned to adverse impact thinking. Operators should treat an impact ratio below 0.80 as a red flag that demands a root-cause review, even if LL144 doesn’t explicitly force a remediation step.
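The impact-ratio arithmetic is simple enough to sanity-check yourself before the auditor does. A sketch, with a 0.80 review threshold as an internal flag (the law itself sets no pass/fail line):

```python
def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio per the NYC rules: each category's rate divided by the
    highest category's rate. Works for selection rates or scoring rates."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

def flag_low(ratios: dict[str, float], threshold: float = 0.80) -> list[str]:
    # The 0.80 threshold mirrors the four-fifths rule of thumb; LL144 sets no
    # pass/fail line, so treat flags as review triggers, not verdicts.
    return [cat for cat, r in ratios.items() if r < threshold]
```

The highest-rate category always gets a ratio of 1.0; everything else is measured against it.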
Historical data is the default
DCWP’s FAQ states that historical data from the tool’s use must be used, with exceptions when there is insufficient data for a statistically significant audit. The NYC rules allow test data as a fallback in that situation.
Two important operational details:
You can rely on an audit that uses pooled historical data across multiple employers using the same AEDT, but only if you provided your own historical data to the auditor—or it’s your first time using the tool.
You cannot infer or impute demographic data using software. The DCWP FAQ explicitly prohibits using inferred demographic data for the audit.
Small sample exclusions (the 2% rule)
Both the NYC rules and DCWP FAQ discuss the ability to exclude categories that represent less than 2% of the data used for the bias audit from the impact ratio calculations.
Operationally, this is not a “get out of jail free” card. If you exclude groups, your published summary should explain why, include counts, and be consistent year over year.
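A quick sketch of the 2% screen, assuming you already have per-category counts from the audit dataset:

```python
def excludable_categories(counts: dict[str, int], floor: float = 0.02) -> list[str]:
    """Categories under 2% of the audited data may be excluded from the
    impact-ratio calculations; the published summary should still report
    their counts and explain the exclusion."""
    total = sum(counts.values())
    return [cat for cat, n in counts.items() if n / total < floor]
```

Keeping this logic explicit (rather than letting the auditor eyeball it) is what makes the exclusions consistent year over year.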
Independence: what disqualifies an auditor
DCWP does not maintain a list of approved auditors. The law doesn’t require pre-approval.
But independence is defined. An auditor is not independent if they:
- Were involved in using, developing, or distributing the AEDT;
- Have an employment relationship with the employer, agency, or the vendor during the audit; or
- Have a direct financial interest or material indirect financial interest in the employer, agency, or vendor.
From an operator lens: independence is less about the logo on the report and more about whether the relationship would survive a conflict-of-interest question.
Publishing requirements: what must be on your website before use
LL144 compliance is not complete when the audit PDF exists in someone’s inbox. You must make required information publicly available before you use the tool.
The NYC rules require that, before use, employers/employment agencies make publicly available (in a clear and conspicuous manner on the employment section of their website):
The date of the most recent bias audit and a summary of results. That summary must include:
- Source and explanation of the data used
- Number of individuals assessed that fall within an unknown category
- Number of applicants/candidates
- Selection or scoring rates (as applicable)
- Impact ratios for all categories
In addition to the audit summary, the posting must include the tool’s distribution date, defined as the date you began using that specific AEDT.
Two practical implications:
- “Distribution date” is not a vendor release date. It’s your date of first use.
- “Clear and conspicuous” means this can’t be buried in a PDF linked from a footer. Put it where a candidate would reasonably find it: careers page, hiring FAQ, or an “AI in Hiring” disclosure page linked from job postings.
Notice requirements: build it like you’ll need to prove it
DCWP revised its educational slide deck to clarify the timing: notice must be provided 10 business days prior to using an AEDT.
The DCWP FAQ also lays out acceptable notice channels:
- Job posting, or email/mail
- For job applicants, notice can alternatively be posted on the employment section of the website (and it does not have to be position-specific)
- For candidates for promotion, notice can be in a written policy/procedure (not position-specific)
The notice must tell candidates/employees:
- That an AEDT will be used
- The job qualifications or characteristics the AEDT will assess
- Instructions for how to request a reasonable accommodation
Separately, the NYC rules require a process for individuals to request information about the type and source of data collected by the AEDT and the data retention policy, and to provide that information within 30 days of a written request.
Operator advice: treat notice as a system-of-record problem, not a template problem. If you can’t demonstrate when notice was delivered relative to when the AEDT was used, you’re effectively non-compliant.
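If notice timing is tracked in code, the 10-business-day window can be computed explicitly. This sketch counts Monday through Friday only and ignores holidays, which a production calendar should handle:

```python
from datetime import date, timedelta

def earliest_aedt_use(notice_date: date) -> date:
    """Add 10 business days (Mon-Fri) to the notice delivery date; a
    simplistic sketch that ignores public holidays."""
    d, remaining = notice_date, 10
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d
```

Storing the computed earliest-use date next to the notice event makes the timing provable in one query instead of a forensic reconstruction.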
Penalties and enforcement: why 2026 is a “tightening” year
Penalties under LL144 are not theoretical. DCWP can impose civil penalties of up to $500 for a first violation and $500 to $1,500 for each subsequent one, each day of noncompliant AEDT use counts as a separate violation, and notice failures are separate violations on top of that.
The New York State Comptroller’s December 2025 audit of DCWP’s enforcement program is the most important “2026 update” because it signals where enforcement is going next.
The Comptroller found DCWP’s complaint intake process ineffective, limited follow-through, and a mismatch between DCWP’s review of bias audit postings and the Comptroller’s own re-review (DCWP found one issue; the Comptroller identified at least 17 potential non-compliance instances across the same set of companies).
The Comptroller also noted that DCWP entered a memorandum of understanding with NYC’s Office of Technology and Innovation (OTI) for technical support and tools like an “Enforcement Workbook,” but DCWP wasn’t consistently using those resources.
Why this matters: the audit includes recommendations that, if implemented, push DCWP toward more structured enforcement and better identification of non-compliance beyond complaints.
Translation: if you’ve been assuming LL144 enforcement is “light,” 2026 is the year to get your house in order.
Common compliance gaps (what breaks in real organizations)
Here are the failure modes I see most often when I review hiring stacks:
1) You don’t actually know where AEDTs exist in your workflow
Teams inventory “the assessment vendor” but miss AI embedded inside:
- ATS ranking/matching features
- CRM outreach prioritization
- Interview scheduling that filters candidates
- Identity/fraud screening that affects advancement
Fix: maintain a living AEDT register with “simplified output,” where it shows up, and which decision it influences.
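A register entry can be as simple as a structured record. The field names below are illustrative, not a standard schema; use whatever your GRC tooling supports, as long as every field is captured.

```python
from dataclasses import dataclass

@dataclass
class AEDTRegisterEntry:
    """One row of a living AEDT register; field names are illustrative."""
    tool: str                 # e.g. "ATS ranking module"
    simplified_output: str    # score / tag / classification / ranking / recommendation
    workflow_step: str        # where in the hiring flow it fires
    decision_influenced: str  # which decision the output feeds
    distribution_date: str    # your first-use date, ISO format
    last_audit_date: str      # ISO date of the most recent bias audit
```

The point is less the data structure than the discipline: every simplified output in the stack has exactly one row, and the row answers "which decision does this influence?"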
2) Your audit is real, but it’s not aligned to your use
A common mismatch: the audit analyzes scoring rates, but your workflow actually uses a yes/no recommendation that calls for selection rates (or vice versa).
Fix: document the decision path (inputs → simplified output → human action) and make the auditor reference it explicitly.
3) Your auditor independence is shaky
If the “independent auditor” is the vendor’s consultancy arm, or a firm with revenue tied to selling the tool, you’re inviting trouble.
Fix: collect an auditor independence statement and capture conflicts in your vendor risk file.
4) Your public posting is incomplete
Most postings miss at least one of these: unknown counts, distribution date, source/explanation of data used, or full category tables.
Fix: use a posting checklist that maps one-to-one to 6 RCNY § 5-303.
5) Your notice exists, but you can’t prove timing
The easiest way to fail a compliance review is “we post it on our website” with no record of when it went live, or “we email candidates” with no durable log.
Fix: store notice events (timestamp, channel, candidate population, template version) in a system you can export.
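A minimal sketch of such a notice event record, with a checksum to make tampering detectable. The field names mirror the list above and are assumptions, not a prescribed format:

```python
import hashlib
import json
from datetime import datetime, timezone

def notice_event(candidate_population: str, channel: str, template_version: str) -> dict:
    """Build an exportable notice-delivery record: timestamp, channel,
    population, and template version, plus a checksum over the payload."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "channel": channel,                          # e.g. "email", "careers-page"
        "candidate_population": candidate_population,
        "template_version": template_version,
    }
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Whether these land in a database table or an append-only log matters less than being able to export them on demand.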
What vendors should be telling you (and what they’re not)
If you buy AI hiring software, the vendor should be able to answer these without hand-waving:
- Is this feature an AEDT under NYC’s definition when configured the way we intend to use it?
- What exactly is the simplified output (score/ranking/recommendation), and how is it produced?
- What data do you need from us to run a valid bias audit, and what do you provide?
- What demographic categories does the audit support (sex, race/ethnicity, intersectional), and how do you handle unknowns?
- Can the audit be run on our historical data, and can we get extracts that an independent auditor can use?
- What changes to the model trigger a re-audit (new version, new features, new model weights, changed thresholds)?
- What evidence do we have that the tool’s output is not overriding other criteria (if the vendor claims it’s not covered)?
If a vendor can’t answer these, you’re buying compliance debt.
Action checklist for HR leaders (a minimal LL144 operating system)
Use this as your internal implementation plan.
Step 1: Build your AEDT inventory
- List every tool that produces a simplified output used in screening or promotion
- For each: capture where it runs (ATS, assessment, chatbot), the output type, and the workflow step
- Tag which roles/locations create NYC exposure
Step 2: Decide in-scope logic
- Define what “in the city” means for your org using DCWP guidance
- Define how you identify NYC residents for notice
- Create a trigger: when a job req is NYC-linked, LL144 workflow starts
Step 3: Engage an independent auditor early
- Validate independence against NYC rule criteria
- Align on whether selection-rate or scoring-rate analysis matches your workflow
- Define data needs and timelines (don’t wait until renewal week)
Step 4: Prepare data (and stop guessing)
- Capture the outcome used for screening (selected/advanced vs not)
- Capture the score if applicable
- Capture self-reported demographic data where lawful, and document unknowns
- Document the time period and any limitations
Step 5: Publish the required posting before use
Create a dedicated “AI in Hiring” page and publish:
- Bias audit date + summary tables (rates + impact ratios)
- Unknown counts
- Source/explanation of data used
- Distribution date (your first use)
Step 6: Operationalize notice
- Ensure notice goes out 10 business days before AEDT use
- Log notice delivery as an auditable event
- Maintain version control of notice language
Step 7: Add a quarterly compliance review
- Confirm audits are within the 12-month window
- Confirm public posting is still live and accurate
- Confirm no new AEDT features were turned on without triggering review
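The 12-month check in that review is worth automating. This sketch uses a strict 365-day window; if counsel reads "within one year" as anniversary-based, adjust accordingly.

```python
from datetime import date

def audit_is_current(last_audit: date, today: date) -> bool:
    """True if the most recent bias audit falls within a rolling 365-day
    window; a simplification of the annual-audit requirement."""
    return (today - last_audit).days <= 365
```

Run it against every row of the AEDT register each quarter, and the renewal conversation with your auditor starts months early instead of the week the audit lapses.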
The operator’s bottom line
Local Law 144 is manageable if you treat it as a workflow with evidence, not a one-time PDF.
The bias audit is just one artifact. Your real risk sits in the operational layer: scope detection, data hygiene, auditor independence, public posting completeness, and a notice system you can prove.
If you want a shortcut, start with a single question: “If DCWP asked us tomorrow, could we show our work?” If the answer is no, you don’t have Local Law 144 compliance—you have optimism.
Want a ready-to-use compliance checklist that maps directly to what auditors ask for (data requirements, posting fields, and vendor questions)? Get the AI Bias Audit Checklist ($29). It’s built for HR and product teams who need evidence—not vibes.