Navigating The New Wave of AI Discrimination and Transparency Laws: What Employers and Staffing Leaders Need to Know
Artificial Intelligence (AI) is rapidly reshaping how organizations recruit, screen, and manage talent. In response, lawmakers are quickly moving to promote fairness, transparency, and accountability in its use. With no federal AI law yet in place, states such as Colorado, California, Illinois, and Texas are stepping in, introducing sweeping regulations that directly impact employers, human resources professionals, and staffing firms. Sarah Kalaei, Eastridge’s General Counsel, breaks down what these new laws mean for your business and provides practical compliance steps to help you stay ahead.
California’s New Rules on Automated Decision Systems
(Effective October 1, 2025)
California’s Civil Rights Department (CRD) has adopted new regulations, setting one of the nation’s strongest standards for the use of Automated Decision Systems (ADS) in employment. The rules define an ADS broadly as any computational process that makes or assists in making employment decisions such as hiring, promotions, training, or performance evaluations. This includes AI-powered resume screeners, chatbots, and algorithmic scoring tools. Under the amended regulations, employers may not use ADS or selection criteria that result in discrimination against applicants or employees based on characteristics protected under the Fair Employment and Housing Act (FEHA). The CRD’s new standards apply to all employers operating in California, sending a clear message: technology-driven hiring and management tools must be fair, transparent, and bias-free.
What Employers Need to Know:
- Joint Liability: The regulations expand the definition of “agent” to include vendors, developers, and others who use ADS on behalf of employers. As a result, both employers and their vendors can be held liable for discriminatory outcomes produced by ADS.
- Record Keeping Requirements Extended to Four Years: Employers must now retain employment and personnel records, including data generated by the ADS, data inputs, outputs, and bias testing results, for at least four years.
- Bias Testing Encouraged: Regular audits are recommended to detect and correct discriminatory patterns. Courts and regulators may consider the quality, frequency, scope, results, and employer response to bias testing when assessing compliance or damages.
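To make the bias-testing concept concrete, below is a minimal sketch of the EEOC’s “four-fifths” rule of thumb, a common starting point for disparate-impact screening (it is a screening heuristic, not a legal standard in itself). All group labels and numbers are hypothetical, and a real audit requires statistical rigor and legal counsel.

```python
# Illustrative sketch only: a simple adverse-impact screen using the
# EEOC "four-fifths" rule of thumb. Group labels and counts below are
# hypothetical; a real bias audit needs statistical testing and counsel.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(results):
    """results: {group: (selected, applicants)}.
    Returns each group's selection rate divided by the highest
    group's rate. Ratios below 0.8 warrant closer review."""
    rates = {g: selection_rate(s, a) for g, (s, a) in results.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes from an ADS resume screener
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group_b’s impact ratio of 0.62 falls below the 0.8 threshold, which is exactly the kind of pattern an employer would want to detect, document, and correct before regulators or plaintiffs do.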
What the New Rules Mean for Employers, Staffing Leaders, and Clients:
If an ADS tool filters out applicants and employees in ways that produce biased patterns and results, all parties (employers, staffing partners, vendors, and clients) could be held legally responsible.
Eastridge’s Recommendation:
- Audit all ADS and AI-based recruiting tools.
- Review and update vendor agreements to clearly define compliance duties, indemnity provisions, and notice requirements for any litigation or regulatory inquiry.
- Consider implementing bias testing and documenting corrective actions.
- Update recordkeeping policies to comply with the new four-year retention requirement.
- Ensure human oversight when using ADS in decision-making.
California Expands AI Regulation Under the CCPA
(Effective January 1, 2026)
California continues to lead in regulating AI and has recently adopted regulations under the California Consumer Privacy Act (CCPA) to address the use of Automated Decision-Making Technology (ADMT) in employment-related decisions. The new rules apply to for-profit entities that (a) conduct business in California and (b) either have gross annual revenues exceeding $26.6 million or process large volumes of personal data.
Under these regulations, ADMT is defined as any technology that processes personal information and uses computation to replace or substantially replace human decision-making. The rules apply when ADMT is used to make “significant decisions” about California residents, such as hiring, work assignments, compensation, promotions, demotions, or terminations.
What Employers Need to Know:
CCPA-covered entities that use ADMT to make significant employment decisions without meaningful human involvement will soon face new compliance obligations.
To qualify as meaningful human involvement, the decision-maker must know how to interpret the ADMT’s outputs, must review those outputs and any other relevant information before finalizing a decision, and must have authority to modify or overturn the ADMT’s recommendation. If these conditions are met, the decision may fall outside the ADMT rules. In practice, however, maintaining that level of involvement at scale, particularly when evaluating thousands of applicants, poses significant operational challenges and may be difficult to achieve consistently. Employers should assess now whether real human review exists or whether adjustments will be needed to achieve compliance by 2026, including:
- Providing notice to individuals before using ADMT. The notice must include the purpose of the technology, a description of how the technology works, information and instructions on the right to opt out, instructions for accessing data processed by the ADMT, information on appealing decisions, and anti-retaliation protections.
- Conducting detailed risk assessments evaluating the potential impact of ADMT use, including whether the benefits outweigh the risks to consumers, the business, and other stakeholders. A summary of each risk assessment must be submitted to the California Privacy Protection Agency by April 1, 2028.
- Implementing and honoring opt-out mechanisms and access rights for affected individuals.
- Establishing an appeals process for individuals impacted by significant ADMT-based decisions.
Eastridge’s Recommendation:
- Identify and document all ADMT tools currently in use and determine whether CCPA regulations apply.
- Develop and distribute pre-use notices to affected individuals.
- Conduct and maintain pre-use risk assessments for each ADMT tool.
- Create and implement procedures to manage opt-out requests and data access rights.
- Establish a clear, fair appeals process for individuals affected by ADMT-based employment decisions.
Illinois Enacts Artificial Intelligence Regulations
(Effective January 1, 2026)
Illinois has joined the growing list of states addressing the risks of using AI in hiring and employment decisions. The Illinois Human Rights Act has been amended to make it unlawful for employers to use AI in ways that result in discrimination, even if unintentional. The law broadly defines “artificial intelligence” to include any machine-based system that generates outputs influencing employment decisions. Unlike California, Illinois does not require formal bias or impact assessments, though conducting such reviews may help employers defend against potential claims. Employers must also notify employees when AI is used in recruitment, hiring, promotion, or other employment-related decisions, reflecting a continued national trend toward transparency in AI use.
Texas Enacts Responsible Artificial Intelligence Governance Act
(Effective January 1, 2026)
Texas has entered the AI compliance landscape with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA 2.0), which takes effect January 1, 2026. The law applies to any individual or entity that conducts business in Texas or provides products or services to Texas residents and is involved in the development, distribution, or deployment of AI systems in Texas.
TRAIGA 2.0 prohibits the development or use of AI systems intended to unlawfully discriminate against protected classes based on race, color, national origin, sex, age, religion, or disability. It also requires government agencies and healthcare providers to give consumers clear and understandable notice when interacting with AI systems.
Notably, the law’s definition of “consumer” excludes individuals in employment or commercial contexts, meaning private employers are not required to disclose AI use in hiring or employment decisions. While TRAIGA 2.0 does not directly regulate workplace AI, its emphasis on transparency and consumer notification reflects a broader national trend likely to influence future AI legislation.
Compliance Roadmap
| Compliance Area | Action Required | Frequency |
| --- | --- | --- |
| Bias Testing | Conduct AI audits/bias testing for disparate impact | Every 6–12 months |
| Vendor Oversight | Review AI vendors for compliance and liability terms | Annually |
| Transparency | Notify applicants/employees when AI is used | Ongoing |
| Record Retention | Maintain AI-related documentation and results | 4 years minimum |
| Training | Educate HR/staff on AI compliance | Semi-annually |
| Content Provenance | Tag or watermark AI-generated materials | By 2026 |
How Eastridge Helps Employers Stay Compliant
Eastridge Workforce Solutions is already preparing clients for these changes through:
- AI Compliance Reviews: Auditing recruitment systems for fairness and bias.
- Vendor Vetting: Ensuring your technology partners meet state transparency and accountability standards.
- Disclosure Language: Providing ready-to-use notice templates for candidate communications.
- Data Retention Systems: Implementing recordkeeping protocols aligned with California’s four-year rule.
- HR Training: Equipping teams with practical tools to identify and mitigate AI-related risks.
With Eastridge, employers can focus on building great teams, while we handle the evolving complexity of AI compliance.
Final Takeaway
AI promises tremendous efficiency in recruiting, but it also introduces new compliance responsibilities. The message from lawmakers is clear: innovation cannot come at the expense of fairness and accountability. By proactively addressing bias, transparency, and documentation, and by partnering with trusted staffing experts like Eastridge Workforce Solutions, your organization can stay compliant, ethical, and competitive in the age of intelligent hiring.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. Employers should consult with qualified legal counsel to understand how these regulations apply to their specific circumstances.
Ready to future-proof your hiring practices?
Contact Eastridge to learn how we can help your business stay compliant with the latest AI employment laws while delivering the talent you need.
