Building AI products that touch millions of hiring decisions. Covering the intersection of AI, HR technology, and compliance.
The Screening Room covers everything AI in HR Tech — from product launches and vendor teardowns to regulatory compliance and implementation strategy. Written from an operator's perspective by someone building AI products that touch millions of hiring decisions. We cut through vendor marketing to deliver actionable intelligence for HR leaders, TA professionals, and anyone evaluating or deploying AI in the hiring stack.
Articles
Local Law 144 compliance in 2026: who’s in scope, what a bias audit must include, posting and notice rules, and how to avoid common compliance traps.
Greenhouse Real Talent combines AI candidate matching, fraud signals, and identity verification with CLEAR. What it means for recruiting teams in 2026.
HR is about to buy “answers,” not workflows. Before you turn on an HR chatbot, decide whether you want a vendor copilot or your own internal AI layer, and lock down citations, permissions, and change control.
For years, AI in recruiting meant a bolt-on. This week the system of record went agentic. When the ATS becomes the recruiter, build vs. buy stops being a procurement question and becomes an architecture decision.
Every significant voluntary departure looks obvious in hindsight. The performance reviews that trailed off, the meeting invitations that went unaccepted, the Slack messages that got shorter. The data was there — it just wasn't being read. AI changes that equation entirely, and the implications are more complex than the vendor brochures suggest.
The AI hiring content you're consuming is mostly written with enterprise buyers in mind. Eightfold at $50K a year. HireVue serving a third of the Fortune 100. Paradox contracts that start at $25K. That advice doesn't translate to a 500-person company — and trying to apply it will cost you money and credibility with your leadership team.
Most companies evaluate AI hiring tools the same way they evaluate any SaaS product: demo, feature comparison, price negotiation. That process will get you a vendor with a good sales team. It won't tell you whether their AI is going to create a discrimination lawsuit in 18 months. The evaluation criteria most companies use are exactly backwards.
A single federal case in California has the potential to establish liability standards for AI hiring tools that will govern the entire industry. Most HR leaders have heard the name. Very few understand what's actually being argued — or why the theory being tested could make every AI vendor contract in your filing cabinet a legal exposure.
Most companies spend thousands of dollars optimizing where they post jobs and almost nothing on what those job posts actually say. That's backwards. The words in your job description are filtering candidates before a single resume is reviewed — and for most organizations, they're doing it in ways you'd never consciously endorse.
The most expensive hire you'll ever make is the one you didn't need to make. Companies have been pouring budget into external recruiting while sitting on untapped talent that already knows the culture, the systems, and the customers. AI is finally making it possible to fix that.
The HR function has been trying to earn a seat at the revenue table for decades. AI is the most compelling business case the function has ever had — but most HR leaders are presenting it wrong, with the wrong metrics, to the wrong audience. Here's how to build a business case that gets funded.
Every major ATS now has an AI story. Workday, Greenhouse, iCIMS, SAP SuccessFactors — they've all shipped AI features in the last 12 months. The pitch is simple: "You're already here, and now we do AI too." It's compelling, and for a lot of HR teams, it's going to be the wrong choice.
Every few months there's a new wave of "AI is coming for recruiter jobs" content. It performs well because fear performs well. But the data tells a more nuanced story — and the real competitive risk for most HR professionals isn't the technology itself. It's the widening skills gap between those who learn to work with it and those who wait for it to go away.
The companies that get hit hardest by AI hiring regulations are rarely the ones using the most aggressive AI. They're the ones who deployed tools without documenting their decisions, and then can't demonstrate compliance when someone asks. Ignorance isn't a defense when the law requires affirmative disclosure. A practical audit framework is the difference between a defensible posture and an expensive problem.
Every few years, a vendor comes along claiming to solve the entire talent problem in one platform. Most of them are overpromising. Eightfold is different enough to take seriously — and complicated enough that you should go in with clear eyes about what it actually delivers versus what the pitch deck says.
The AI HR tech market is overbuilt, overfunded, and overdue for a reckoning. When a market has hundreds of point solutions all solving adjacent problems, consolidation isn't a possibility — it's a mathematical inevitability. The question isn't whether your vendor gets acquired. It's whether you'll be caught flat-footed when it happens.
Research from the University of Washington Information School found that AI resume screeners preferred white-associated names over Black-associated names 85% of the time. That's not a bug report from a niche product. It's a finding about AI tools that are, right now, filtering candidates at companies you've heard of. Understanding what the research actually says — and what it means for organizations using these tools — is not optional.
Enterprise AI HR tools have list prices. Then they have real costs. The gap between the two is where most organizations get surprised — usually 12 to 18 months after signing, when they're calculating what it actually took to get the tool working. Building a real total cost of ownership framework before you sign isn't pessimism. It's basic financial discipline applied to a category where vendors have strong incentives to show you only part of the picture.
"We always have a human review AI recommendations before any hiring decision." I've heard some version of this from almost every enterprise HR leader I've spoken to about AI governance. It sounds rigorous. Research out of the University of Washington suggests it's largely ineffective — and in some cases, actively counterproductive. This is worth sitting with.
The compliance conversations in HR AI always focus on the tools — the bias audits, the disclosure notices, the impact assessments. Rarely does anyone focus on the contract. That's a mistake. In most standard AI vendor agreements, employers absorb a disproportionate share of liability for outcomes they have limited visibility into. That's the arrangement you're accepting when you sign without negotiating.
Olivia doesn't sleep. She doesn't have a bad morning that makes her brusque with the ninth candidate of the day. She speaks 100 languages and has probably screened more candidates this week than your entire recruiting team has in the last year. Paradox built something that actually works at scale — and it's worth understanding exactly how.
The first wave of AI in HR was about recommendations. The next wave doesn't ask for permission. Agentic AI — autonomous systems that execute multi-step workflows without human handholding — has moved from lab curiosity to enterprise deployment faster than almost anyone predicted. If you think this is just another vendor buzzword, you haven't been paying attention to the adoption numbers.
Survey data on AI in HR is everywhere and mostly useless — "leaders are excited about AI's potential" covers a multitude of organizational sins. The CHRO data I'm looking at right now is different because it shows not just sentiment, but the gap between what executives say about AI and what their organizations are actually doing. That gap is where the interesting story lives.
Every organization with a data science team eventually asks the same question: why are we paying a vendor for this when we could build it ourselves? In AI recruiting, that question has a specific answer — and most of the teams asking it don't like the answer when they work through it honestly.
Every time an AI screening tool produces a discriminatory outcome, the framing is "AI bias." I want to push back on that framing — not to defend the outcome, but because the wrong diagnosis leads to the wrong treatment. The AI isn't the problem. The data is. And fixing the data is a lot harder than switching vendors.
There is no federal law governing AI in hiring in the United States. What there is: a rapidly expanding patchwork of state and local regulations with conflicting requirements, different enforcement mechanisms, and extraterritorial reach that most legal teams haven't caught up to. The cost of ignoring this is no longer theoretical.
HireVue is either the most important assessment tool in enterprise hiring or a very expensive placebo. Depending on who you ask, you'll get a confident answer in either direction. I've spent time under the hood. The truth is more nuanced — and more useful — than either camp admits.
The AI-in-HR conversation used to be about potential. That conversation is over. We've moved from "should we explore this?" to "how do we scale what's already running?" — and the teams still debating whether to experiment are now a full cycle behind.