Artificial intelligence is now making the first call on the vast majority of job applications in the United States, and a new patchwork of state regulations is forcing employers to rethink how they use the technology.
Industry data from MSH indicates that AI handles 95% of initial candidate screening in 2026, a sharp increase from prior years. The shift has automated one of the most consequential steps in the hiring process, often before a human recruiter ever reviews a résumé.
The scale is significant. According to reporting from HR Defense Blog, AI-powered hiring tools processed more than 30 million applications in 2024. That same year, the tools triggered hundreds of discrimination complaints, raising concerns among regulators about bias, transparency, and accountability in automated decision-making.
State regulators move first
In the absence of comprehensive federal legislation, individual states and cities have begun setting their own rules.
New York City's Local Law 144 requires annual independent bias audits of automated employment decision tools, according to legal analysis from Clark Hill and Holland & Knight. Employers using AI to screen candidates in the city must have a third party verify the tools are not producing discriminatory outcomes.
Colorado's SB 24-205, which takes effect in June 2026, requires impact assessments for high-risk AI systems, including those used in hiring. It is among the most comprehensive state-level AI laws enacted to date.
California's Civil Rights Council has extended the state's existing anti-discrimination laws to cover AI hiring tools and added a four-year record-keeping mandate for employers.
Illinois' Artificial Intelligence Video Interview Act requires candidate consent and imposes specific reporting obligations on employers that use AI to analyze video interviews.
Federal regulators are watching as well. The Equal Employment Opportunity Commission's Strategic Enforcement Plan for 2024 through 2028 lists AI and automated decision-making as a priority area for enforcement.
Compliance challenge for multi-state employers
The growing list of state laws creates a complex compliance environment, particularly for companies operating across multiple jurisdictions. Some states require audits. Others require consent. Several require impact assessments or detailed record-keeping that many employers were not previously tracking.
Human resources experts say one of the biggest challenges is visibility. Many companies do not have a clear inventory of which AI systems are involved in their hiring workflows, from résumé screening software and scheduling chatbots to platforms that rank candidates or analyze video interviews. Each of those systems could be subject to audit under one or more of the new laws.
Companies looking to get ahead of the regulations are starting by examining what AI interview tools actually do, what risks they carry, and what questions to ask vendors. Mid-market firms in particular are reassessing how they integrate hiring tools, many of which were adopted quickly without a compliance framework in place.
The shift has also driven demand for recruitment analytics platforms that give employers a clearer view of how hiring decisions are being made and documented.
A changing conversation
For years, the case for AI in hiring centered on efficiency: faster screening, larger candidate pools, and lower cost per hire. The new regulatory environment is shifting the conversation toward fairness, accountability, and whether candidates have meaningful recourse when an algorithm rejects them.
The states are effectively asking a question the tech industry has largely deferred: who is responsible when an automated hiring decision goes wrong?
That question does not yet have a clear answer. But the laws rolling out over the next two years will force companies to come up with one, and HR leaders who have not started preparing may find themselves catching up quickly.

