Artificial Intelligence (AI) is rapidly reshaping how organizations recruit, screen, and manage talent. In response, lawmakers are quickly moving to promote fairness, transparency, and accountability in its use. With no federal AI law yet in place, states such as Colorado, California, Illinois, and Texas are stepping in, introducing sweeping regulations that directly impact employers, human resources professionals, and staffing firms. Sarah Kalaei, Eastridge’s General Counsel, breaks down what these new laws mean for your business and provides practical compliance steps to help you stay ahead.
(Effective October 1, 2025)
California’s Civil Rights Department (CRD) has adopted new regulations, setting one of the nation’s strongest standards for the use of Automated Decision Systems (ADS) in employment. The rules define an ADS broadly as any computational process that makes or assists in making employment decisions such as hiring, promotions, training, or performance evaluations. This includes AI-powered resume screeners, chatbots, and algorithmic scoring tools. Under the amended regulations, employers may not use ADS or selection criteria that result in discrimination against applicants or employees based on characteristics protected under the Fair Employment and Housing Act (FEHA). The CRD’s new standards apply to all employers operating in California, signaling a clear message to organizations to ensure that technology-driven hiring and management tools are fair, transparent, and bias-free.
What the New Rules Mean for Employers, Staffing Leaders, and Clients:
If an ADS tool filters out applicants and employees in ways that produce biased patterns and results, all parties (employers, staffing partners, vendors, and clients) could be held legally responsible.
Eastridge’s Recommendation:
(Effective January 1, 2026)
California continues to lead in regulating AI and has recently adopted new regulations under the California Consumer Privacy Act (CCPA) to address the use of Automated Decision-Making Technology (ADMT) in employment-related decisions. The new rules apply to for-profit entities that (a) conduct business in California and (b) either have gross annual revenues exceeding $26.6 million or process large volumes of personal data.
Under these regulations, ADMT is defined as any technology that processes personal information and uses computation to replace or substantially replace human decision-making. The rules apply when ADMT is used to make “significant decisions” about California residents, such as hiring, work assignments, compensation, promotions, demotions, or terminations.
What Employers Need to Know:
CCPA-covered entities that use ADMT to make significant employment decisions without meaningful human involvement will soon face new compliance obligations.
To qualify as meaningful human involvement, the decision-maker must (a) understand how to interpret the ADMT's outputs, (b) review and analyze the ADMT's output and any other relevant information before finalizing a decision, and (c) have the authority to modify or overturn the ADMT's recommendation. If these conditions are met, the decision may fall outside the ADMT rules. In practice, however, maintaining that level of involvement at scale, particularly when evaluating thousands of applicants, poses significant operational challenges and may be difficult to achieve consistently. Employers should assess their current systems now to determine whether real human review exists or whether adjustments will be needed to achieve compliance by 2026.
Eastridge’s Recommendation:
(Effective January 1, 2026)
Illinois has also joined the growing list of states addressing the risks of using AI in hiring and employment decisions. The Illinois Human Rights Act has been amended to make it unlawful for employers to use AI in ways that result in discrimination, even if unintentional. The law broadly defines “artificial intelligence” to include any machine-based system that generates outputs influencing employment decisions. Unlike California, Illinois does not require formal bias or impact assessments, though conducting such reviews may help employers defend against potential claims. Employers must also notify employees when AI is used in recruitment, hiring, promotion, or other employment-related decisions, reflecting a continued national trend toward transparency in AI use.
(Effective January 1, 2026)
Texas has entered the AI compliance landscape with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA 2.0). The law applies to any individual or entity that conducts business in Texas or provides products or services to Texas residents and that develops, distributes, or deploys AI systems in the state.
TRAIGA 2.0 prohibits the development or use of AI systems intended to unlawfully discriminate against protected classes based on race, color, national origin, sex, age, religion, or disability. It also requires government agencies and healthcare providers to give consumers clear and understandable notice when interacting with AI systems.
Notably, the law’s definition of “consumer” excludes individuals in employment or commercial contexts, meaning private employers are not required to disclose AI use in hiring or employment decisions. While TRAIGA 2.0 does not directly regulate workplace AI, its emphasis on transparency and consumer notification reflects a broader national trend likely to influence future AI legislation.
| Compliance Area | Action Required | Frequency |
| --- | --- | --- |
| Bias Testing | Conduct AI audits/bias testing for disparate impact | Every 6–12 months |
| Vendor Oversight | Review AI vendors for compliance and liability terms | Annually |
| Transparency | Notify applicants/employees when AI is used | Ongoing |
| Record Retention | Maintain AI-related documentation and results | 4 years minimum |
| Training | Educate HR/staff on AI compliance | Semi-annually |
| Content Provenance | Tag or watermark AI-generated materials | By 2026 |
Eastridge Workforce Solutions is already preparing clients for these changes.
With Eastridge, employers can focus on building great teams, while we handle the evolving complexity of AI compliance.
AI promises tremendous efficiency in recruiting, but it also introduces new compliance responsibilities. The message from lawmakers is clear: innovation cannot come at the expense of fairness and accountability. By proactively addressing bias, transparency, and documentation, and by partnering with trusted staffing experts like Eastridge Workforce Solutions, your organization can stay compliant, ethical, and competitive in the age of intelligent hiring.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. Employers should consult with qualified legal counsel to understand how these regulations apply to their specific circumstances.
Ready to future-proof your hiring practices?
Contact Eastridge to learn how we can help your business stay compliant with the latest AI employment laws while delivering the talent you need.