Trump Signs Executive Order to Block State AI Laws: What HR Needs to Know
HR teams using AI in hiring and other employment decisions are caught in the middle of a federal-state power struggle over regulation. On Dec. 11, President Trump signed an executive order aimed at blocking state AI laws, raising immediate questions for employers that have been preparing compliance plans for 2026.
What the Executive Order Says About State AI Laws
The new order, titled Ensuring a National Policy Framework for Artificial Intelligence, directs federal agencies to block or limit state and local laws that regulate AI where those laws are viewed as interfering with national AI policy or interstate commerce. Agencies must review existing and proposed state AI requirements and identify actions that could preempt, challenge or restrict them.
It instructs the Attorney General to form an AI Litigation Task Force within 30 days to challenge state AI laws in court. It also directs the Commerce Department to publish, within 90 days, an evaluation of existing state AI laws and identify “onerous” provisions for potential challenge.
A single federal approach is the stated goal, with the White House arguing that a patchwork of state laws creates compliance headaches and barriers to innovation. The order frames state AI regulation as a potential obstacle to economic competitiveness and technological development.
Importantly for employers, the executive order does not immediately invalidate state AI laws or suspend existing compliance obligations. Instead, it sets up a review and enforcement process that could lead to future agency action, funding conditions or legal challenges aimed at curbing state authority.
Danielle Ochs, Technology Practice Group Co-Chair for Ogletree Deakins, said the order raises uncertainty but does not pause current compliance obligations, and litigation is likely to drag on before anything changes.
Until courts or federal agencies act, state AI laws governing employment practices, hiring tools, transparency and bias mitigation remain in effect. Legal experts expect challenges over whether the administration can preempt state AI laws without new federal legislation.
Why It Matters for HR Compliance
HR teams face mounting pressure: 83% of companies had planned to use AI to screen resumes in 2025, according to a survey by Resume Builder – yet state rules demand audits, notices and bias tests that vary widely. For example, Colorado treats many hiring tools as “high-risk” systems that require impact assessments, while Illinois mandates specific notices and safeguards when AI is used in video interviews or other employment decisions.
The executive order may signal future federal relief, but it doesn't pause state attorney general enforcement or private lawsuits. In the meantime, employers are reviewing vendor tools and tightening documentation around how AI influences hiring and other employment decisions.
Compliance planning already creates real costs. For instance, HR teams have to build notice and consent workflows tied to Illinois requirements, document AI decision inputs to align with California civil rights expectations, and prepare impact assessments where Colorado treats employment tools as high-risk.
“The most dangerous assumption is believing employers no longer have to ensure workplace tools are free from discriminatory impact,” Ochs said. “Title VII, the ADA, and state civil rights laws still apply, regardless of this executive order.”
The bottom line: Even if some AI-specific rules get challenged later, the duty to avoid discriminatory impact does not go away.
State AI Laws for HR to Watch
A variety of state AI laws touch HR, hiring and workforce decisions, as states move to regulate bias, transparency and tech use in workplaces. Here’s a snapshot of key states to track:
California AI Laws
California’s HR risk is driven less by a single “AI hiring” statute and more by how existing civil rights rules apply when automated tools influence employment decisions. Under FEHA, employers remain responsible for disparate impact and related documentation risks tied to AI‑assisted screening, scoring, and selection, even when a third‑party vendor supplies the tool.
Applicant tracking systems, resume screening tools and interview analytics are treated as decision inputs that must be defensible. Expect growing pressure to document how these tools are used, what factors they weigh and how outcomes are monitored for bias.
Separately, California has enacted AI transparency requirements aimed at developers of high-impact systems. While those laws don’t directly regulate HR teams, they can shape what vendors must disclose during due diligence and renewals.
Illinois AI Laws
Illinois regulates AI directly through employment law, making it one of the clearest state examples of HR exposure.
House Bill 3773, signed Aug. 9, 2024, amends the Illinois Human Rights Act and takes effect Jan. 1, 2026. It requires notice to applicants and employees when AI is used in recruiting, hiring, promotion, or other employment decisions, and it reinforces that employers cannot use AI in a way that results in unlawful discrimination. Expect agency guidance ahead of the effective date and be ready to adjust notice workflows.
Illinois also has the Artificial Intelligence Video Interview Act, effective Jan. 1, 2020, which applies when AI is used to analyze video interviews. The law requires advance notice, an explanation of how the AI works and applicant consent. It also requires deleting interview videos within 30 days after an applicant’s request and instructing any other parties with copies to delete them, too.
In practice, this means recruiting workflows need a clear trigger for when notices go out, documentation showing where AI is used in screening and selection, and vendor support for consent, retention, deletion and bias controls.
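The 30-day deletion requirement is concrete enough to sketch in code. The following is an illustrative sketch only: it models the deletion window after an applicant's request, but the function names, the `requests` structure, and the applicant IDs are hypothetical, not drawn from the statute or any vendor system.

```python
from datetime import date, timedelta

# Hypothetical sketch: models the Illinois AI Video Interview Act's
# 30-day window for deleting interview videos after an applicant's request.
# Data shapes and names here are illustrative, not a real vendor API.
DELETION_WINDOW_DAYS = 30

def deletion_deadline(request_date: date) -> date:
    """Latest date by which the video (and any copies) must be deleted."""
    return request_date + timedelta(days=DELETION_WINDOW_DAYS)

def overdue_requests(requests: dict[str, date], today: date) -> list[str]:
    """Return applicant IDs whose deletion deadline has already passed.

    `requests` maps an applicant ID to the date of their deletion request.
    """
    return [applicant_id
            for applicant_id, requested in requests.items()
            if today > deletion_deadline(requested)]
```

A workflow like this only flags overdue items; actually deleting the recordings, and instructing third parties holding copies to do the same, still has to happen in the recording platform itself.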
Colorado AI Laws
In May 2024, Colorado became the first state to enact comprehensive AI legislation. The law regulates high-risk AI systems and includes protections against discrimination in employment, housing, government services, and financial services. It mandates annual impact assessments for employment AI tools like hiring algorithms.
The law was originally set to take effect Feb. 1, 2026, but enforcement was postponed to June 30, 2026. That delay positions Colorado employers as early test cases for disparate impact claims tied to AI hiring tools.
New York AI Laws
New York’s clearest HR exposure sits at the city level, where New York City directly regulates automated employment decision tools used in hiring and promotion.
New York City Local Law 144 requires employers to complete an independent bias audit before using automated employment decision tools to screen candidates or employees for hiring or promotion. Employers must also provide advance notice to candidates and employees that an automated tool will be used and disclose the job qualifications and characteristics the tool evaluates.
The law applies broadly to tools that substantially assist or replace discretionary decision-making, including resume screening software, candidate ranking tools, and algorithmic scoring systems, even when provided by third-party vendors.
In practice, this means HR teams need documented bias audits updated annually, clear notice language embedded in recruiting workflows, and vendor cooperation on audit data, methodology, and public posting requirements. Enforcement is underway through the DCWP, making New York City an early test case for active enforcement of AI hiring rules.
Legal Uncertainty: Can Federal Rules Override State AI Hiring Laws?
The executive order does not override state statutes on its own, and legal challenges are expected over whether the administration can preempt state AI laws without congressional action. Until courts or agencies act, HR compliance obligations under existing state law remain unchanged.
While the federal-state fight plays out, the safest approach is to keep following current state requirements and document where AI touches hiring and employment decisions.
Next Steps for HR Compliance Under the AI Order
“We’ve been recommending to HR leaders that they have strong AI governance in place, and that advice hasn’t changed,” Ochs said. It’s important for key stakeholders to understand where AI is used and to establish the parameters and guidelines that govern its use.
Use this short checklist to update your compliance and governance approach:
- Inventory AI tools. Catalog all AI tools used by HR, from applicant screening to performance analytics.
- Review state obligations. Map states where you have employees against state AI law requirements. Identify notice, consent, transparency or bias testing obligations for each.
- Standardize where possible. Align internal policies with the most stringent state requirements to create uniform practices.
- Update candidate and employee notices. Make sure notices about AI use in hiring and employment decisions comply with states’ transparency laws.
- Establish bias testing and documentation. Implement clear procedures for testing algorithmic systems for discriminatory outputs. Keep records of testing, outcomes and mitigation steps.
- Train HR teams. Ensure HR professionals understand AI policy obligations where you operate.
- Monitor legal developments. Track litigation related to the executive order and follow state AI law updates, as timelines and outcomes remain uncertain.
- Prepare for federal standards. Anticipate a national AI regulatory framework that could change compliance needs further.
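For the bias testing step above, one common first-pass screen is the “four-fifths rule” from the EEOC’s Uniform Guidelines: a group’s selection rate below 80% of the highest group’s rate flags potential adverse impact. The sketch below illustrates that arithmetic only; the group names and counts are hypothetical, and a real audit requires statistical testing and legal review, not just this ratio.

```python
# Minimal sketch of the four-fifths (80%) rule screen for adverse impact
# in selection rates. Group names and counts are hypothetical examples.

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    `groups` maps group name -> (selected, applicants). A ratio below 0.8
    flags potential adverse impact under the four-fifths rule.
    """
    rates = {name: selected / applicants
             for name, (selected, applicants) in groups.items()}
    top = max(rates.values())
    return {name: rate / top for name, rate in rates.items()}

# Hypothetical applicant pool: group_a selected 48 of 100, group_b 30 of 100.
flagged = {name: ratio
           for name, ratio in impact_ratios(
               {"group_a": (48, 100), "group_b": (30, 100)}).items()
           if ratio < 0.8}
```

Here `group_b`’s selection rate (30%) is only 62.5% of `group_a`’s (48%), so it would be flagged for closer review. Keeping this calculation, its inputs, and any mitigation steps on file supports the documentation practices described above.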