Employers: Ensure You Are in Compliance with California’s New AI Anti-Discrimination Rules
Under the Fair Employment and Housing Act (FEHA)
California has made it clear that artificial intelligence (AI) utilization must conform to long-standing anti-discrimination laws under the Fair Employment and Housing Act (FEHA). Accordingly, the California Civil Rights Council recently updated existing anti-discrimination laws in California’s FEHA to specifically address the use of technology in making employment decisions and to protect against employment discrimination arising from AI, algorithms, and other automated decision systems (ADS).
Compliance Implications for Employers Adopting AI Agent Technologies
As California employers newly incorporate AI agents into their employment procedures, it is vital that they comply with the new anti-discrimination provisions adopted by the Civil Rights Council. Below, we consider the specific provisions.
Effective October 1, 2025, California employers must comply with new FEHA regulations that hold them accountable for discrimination resulting from the use of AI and ADS in the employment process. This applies to employers and companies with five or more employees.
These new rules apply to ADS, meaning computational processes that use AI, machine learning, or algorithms, which employers use to make or assist in employment decisions such as hiring, firing, promotion, and compensation.
Some common examples include:
- Resume screening software that scans for specific terms or patterns;
- Targeted job advertisements or recruiting materials directed at specific groups;
- Video interview analysis and skill assessments;
- Analysis of facial expressions, voice, and word choice during online interviews; and
- Predictive analytics for promotions or performance evaluations.
Why AI Regulations are Needed
ADS, which may rely on algorithms or artificial intelligence, are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants and employees, including recruitment, hiring, and promotion. While these tools can bring efficiency and benefits, they can also exacerbate biases and lead to discrimination against specific groups.
Thus, California’s new AI rules under the Fair Employment and Housing Act prohibit employers from using AI and ADS that result in disparities in the hiring process or direct discrimination against applicants based on protected characteristics, such as race, age, religion, gender, disability, and national origin.
Process for Issuing the Regulations
Under California law, the California Civil Rights Department is responsible for enforcing many of the state’s robust civil rights laws, including in the areas of employment, housing, businesses and public accommodations, and state-funded programs and activities.
As part of those efforts, the Civil Rights Council develops and issues regulations to implement state civil rights laws, including for new and emerging technologies. With respect to the issue of ADS, the Civil Rights Council’s final regulations are the result of public hearings and input from experts, as well as federal reports and guidance. These regulations went into effect on October 1, 2025.
The New AI Antidiscrimination Laws in California’s FEHA Aim to
- Protect against bias based on protected characteristics such as race, age, religion, gender, disability, and national origin;
- Ensure employers maintain employment records;
- Extend liability to third-party vendors and staffing agencies acting on an employer’s behalf; and
- Ensure that ADS assessments do not constitute unlawful medical inquiries.
Key Provisions of California’s New AI Antidiscrimination Laws Under FEHA
1. Protection Against Bias
The use of ADS may violate California law if it harms applicants and/or employees based on protected characteristics, such as race, age, religion, gender, disability, or national origin.
Anti-Bias Testing as an Affirmative Defense
The Regulations provide that, to defend against a discrimination claim based on the use of ADS, employers can show that they conducted anti-bias testing to avoid unlawful discrimination prior to and after adopting ADS. § 11009(f). The regulations identify six relevant aspects of such testing, including the quality, efficacy, recency, and scope of such testing, as well as the results of the testing or other due diligence.
The courts will gauge how recently companies audited ADS used in conjunction with employment decisions, how thorough the employer’s testing was, what the results of audits showed, and whether the employer made corrections in line with the audits’ findings. In practice, this makes regular testing and documentation essential to defending discrimination claims that implicate AI tools.
The new regulations provide that bias audits and similar proactive measures can be used as evidence in discrimination cases when ADS are used in connection with employment decisions, such as hiring, termination, or promotion.
2. Maintenance of Employment Records
The regulations were amended to require employers and covered entities to preserve ADS-related records, personnel records, and employment records for at least four years.
This includes retaining the data used to run ADS tools, the outputs generated (including scores or rankings), the criteria applied to job or promotion candidates, and the results of any testing or evaluations. If a complaint is filed by an applicant or employee, an employer must retain these records for even longer.
3. Vendor Responsibility and Liability
The regulations explicitly state that liability can extend to third-party vendors or staffing agencies used by the employer. Thus, if the employer’s staffing partner or AI software provider uses an ADS tool on the employer’s behalf and it has a discriminatory impact, the employer may still be held responsible.
The regulations also extend liability for ADS discrimination to an employer’s agent, which is defined as anyone acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity.
4. Medical Inquiry Restrictions
The regulations state that AI tools that analyze applicants for physical or mental traits may constitute unlawful medical inquiries.
AI-driven ADS assessments, such as questions, games, puzzles, or other challenges that elicit or are used to obtain information about a medical condition or disability, may constitute an unlawful medical inquiry under FEHA. Similarly, ADS assessments that measure an applicant’s or employee’s skills, dexterity, reaction time, or other abilities or characteristics may be unlawful if used in hiring, termination, or promotion decisions.
Key Takeaways and Compliance Reminders for Employers
Employers should take the following steps to ensure they are in compliance with the new FEHA requirements:
- Implement and document anti-bias testing: establish a plan outlining the frequency and nature of such testing; document the testing process, criteria, and results; and be prepared to show how any issues identified during testing were addressed.
- Update their anti-discrimination and reasonable accommodation policies to address the use of ADS.
- Update their data retention policies to ensure ADS-related records are preserved for at least four years.
- Review their vendor contracts and add obligations related to testing, transparency, termination, data protection, and compliance. Because liability can extend to third-party vendors or staffing agencies used by the employer, employers should require transparency regarding any testing and allocate responsibility for compliance and liability through appropriate warranties and indemnification clauses.
- Ensure that ADS programs and AI-driven assessments used to elicit information about a disability do not constitute an unlawful medical inquiry or otherwise violate FEHA.
Next Steps For Employers
California’s recently updated AI rules under the Fair Employment and Housing Act came into effect on October 1, 2025, and reflect the rapidly changing role of technology in the workplace.
Going forward, California employers using AI tools in employment decision-making and hiring processes should be prepared to be held accountable for bias audits, recordkeeping, and oversight of vendors and staffing agencies.
Therefore, employers should familiarize themselves with these new AI and ADS rules and update their employment policies accordingly to avoid AI compliance issues and litigation. Civil rights regulations governing AI and ADS in the workplace will likely expand across state lines.