California’s Proposed AI Employment Decision Rules: October 2024 Update
California is leading efforts to regulate artificial intelligence in employment, with the California Civil Rights Division issuing revised rules in October 2024 to clarify key terms and address public concerns about the use of AI in hiring and promotions.
In recent years, California has positioned itself at the forefront of regulating artificial intelligence (AI) and automated decision systems (ADS), particularly in employment practices. The California Civil Rights Division (CRD) has been actively refining its proposed rules to address concerns surrounding the use of these systems in hiring, promotions, and other employment decisions. Following public testimony at a July hearing, the CRD issued revised rules in October 2024 that provide clearer definitions of key terms such as "automated decision system," "agent," and "employment agency."
Revised Definitions in Focus
One of the most significant updates in the October revision of the CRD’s proposed rules revolves around the definitions of key terms, which have a direct impact on employers who rely on AI and ADS in their employment practices.
Automated Decision System (ADS)
In the revised rules, the definition of an ADS has been expanded to include a "computational process that makes a decision or facilitates human decision-making regarding an employment benefit." This broad definition encompasses systems utilizing AI, machine learning, algorithms, statistics, and other data processing tools.
While the original definition provided a general framework, the October version goes further in specifying activities these systems might engage in, such as screening, evaluating, categorizing, and making recommendations. The rules clarify that ADS can facilitate decisions about hiring, promotions, pay, benefits, and other employment matters. However, ambiguity remains around what it means to "facilitate" human decision-making, raising questions about whether systems that assist but do not fully automate decisions fall under the rules.
For instance, an AI tool used to verify education credentials may flag discrepancies between an applicant's claimed degree and the degree they actually earned. While the tool doesn't make the final decision, it may influence the human decision-maker. This raises questions about whether such a tool, which aids but doesn’t complete the decision process, qualifies as an ADS subject to regulation. As a result, it is crucial for employers to recognize that automated systems used for even seemingly minor employment decisions may fall within this scope absent further clarification.
Notably, the proposed rule specifically excludes technologies like word processing and spreadsheet software, website hosting, and cybersecurity tools. However, there is still room for interpretation about whether more basic automation processes, like simple if/then workflows, fall within the scope of these regulations.
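To make the ambiguity concrete, consider a hypothetical rule-based screening workflow of the kind the rules arguably leave unresolved. This is an illustrative sketch only; the field names, rules, and experience threshold are invented and do not come from the proposed regulations or any particular vendor's product.

```python
# Hypothetical if/then screening workflow. Even with no machine learning,
# this rule-based filter arguably "facilitates" a hiring decision by
# determining which applicants a human recruiter ever sees.

def screen_applicant(applicant: dict) -> str:
    """Route an applicant using simple, hand-written rules."""
    # Rule 1: flag a mismatch between the claimed and verified degree,
    # leaving the final call to a human reviewer.
    if applicant["claimed_degree"] != applicant["verified_degree"]:
        return "flag_for_review"
    # Rule 2: filter on years of experience (an invented threshold).
    if applicant["years_experience"] < 2:
        return "reject"
    return "advance"

applicants = [
    {"claimed_degree": "BS", "verified_degree": "BS", "years_experience": 5},
    {"claimed_degree": "MS", "verified_degree": "BA", "years_experience": 7},
    {"claimed_degree": "BS", "verified_degree": "BS", "years_experience": 1},
]

print([screen_applicant(a) for a in applicants])
# ['advance', 'flag_for_review', 'reject']
```

Note that the second rule rejects applicants outright with no human involvement at all, while the first merely routes a file to a reviewer; whether both, one, or neither branch makes this tool an ADS is exactly the interpretive question the revised definition leaves open.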
Agent
The definition of "agent" has also been notably clarified. An agent is now any person acting on behalf of an employer to exercise a function traditionally performed by the employer, meaning tasks that would historically be the employer's responsibility. These include key functions such as applicant recruitment, screening, hiring, promotion, and decisions related to pay, benefits, or leave. By framing the definition around these traditionally employer-led activities, the revised rule refines the scope of who can be considered an agent, ensuring that third parties who step in to perform them, such as vendors using AI to screen resumes or assist with hiring, are subject to the same compliance standards.
This expanded definition emphasizes that even when employers outsource administrative functions, such as payroll or benefits management, those vendors may still be considered agents if they are performing tasks typically managed by the employer. Employers must, therefore, closely scrutinize their partnerships with third-party vendors, especially those using AI or machine learning, to ensure compliance with the revised definition. This reinforces the need for employers to recognize that delegating traditionally employer-led functions to third parties does not absolve them from regulatory obligations.
Employment Agency
The definition of an employment agency has been refined to include any entity undertaking, for compensation, services to identify, screen, and procure job applicants or employees. The emphasis in the revised rules is on “screening” as a crucial step in procuring an applicant, positioning it as a key function of employment agencies.
This revision sharpens the distinction between screening resumes for specific terms or patterns and the broader process of selecting candidates, aligning more closely with the concept of applicant procurement. However, the proposed rules lack a clear definition of what “screening” entails, creating potential uncertainty for employers, particularly those relying on third-party vendors for background checks. Without explicit guidance on whether background checks fall under the definition of screening, employers may struggle to determine their compliance obligations.
Considerations for Employers Using AI in Criminal Screening
The CRD’s revised proposed rules place a strong emphasis on preventing discriminatory practices when using AI and ADS in employment decisions, including the sensitive area of criminal history screening. Employers must ensure that their use of these systems follows the same legal standards as human decision-making, particularly the requirement that criminal history can only be considered after a conditional offer of employment has been made.
The rules require that ADS used in criminal background checks operate transparently. Employers must provide applicants with the reports and decision-making criteria used by the system, ensuring compliance with anti-discrimination laws. In addition, employers must conduct regular anti-bias testing of these systems and retain records of these tests, along with any data used, for at least four years.
The focus on transparency and fairness aligns with broader trends, such as the White House’s Blueprint for an AI Bill of Rights and the EEOC’s guidelines on algorithmic fairness. Employers should be diligent in auditing their AI systems to avoid disparate impacts on protected classes, particularly when it comes to decisions involving criminal history. They must ensure that the criteria used by ADS are job-related and necessary for business purposes and consider less discriminatory alternatives where available.
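One widely used starting point for the kind of anti-bias testing described above is the EEOC's long-standing four-fifths (80%) guideline, which compares selection rates across groups. The sketch below illustrates that calculation with invented audit data; it is a simplified example, not a method prescribed by the proposed rules, and a defensible anti-bias audit would involve considerably more rigorous statistical analysis.

```python
# Simplified four-fifths (80%) rule check: a group's selection rate below
# 80% of the highest group's rate is commonly treated as evidence of
# possible adverse impact. All figures here are invented for illustration.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected_count, total_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict) -> dict:
    """Return each group's impact ratio relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: round(rate / top, 3) for g, rate in rates.items()}

# Invented audit data: group -> (selected by the ADS, total applicants)
audit = {"group_a": (48, 100), "group_b": (30, 100)}

ratios = four_fifths_check(audit)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # {'group_a': 1.0, 'group_b': 0.625}
print(flagged)  # ['group_b']
```

An audit like this, run periodically with its inputs and results retained, is one practical way to generate the test records and underlying data the rules would require employers to keep for at least four years.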
CRD Solicits Public Input
The California Civil Rights Division is continuing to refine its proposed rules on automated decision systems in employment, and now is the time for employers to engage in the process. The CRD is accepting written comments on the most recent modifications through November 18, 2024. This is a critical opportunity for employers to ensure that the regulations on AI and ADS are clear, practical, and reflective of modern hiring practices.
Comments can be submitted via email to Council@calcivilrights.ca.gov. For more information and to review the proposed modifications, visit the CRD’s webpage at calcivilrights.ca.gov/civilrightscouncil.
Parting Thoughts
The October revisions to the CRD’s proposed rules on automated decision systems represent a significant step in California’s efforts to regulate AI in employment practices. For employers, this means taking a closer look at how AI and ADS are utilized, particularly in recruitment and criminal history checks. The expanded definitions of ADS, agent, and employment agency require careful scrutiny of both technology use and relationships with third-party vendors. As these rules continue to evolve, staying informed and proactively assessing compliance will be crucial for navigating California’s forward-looking AI regulations in employment.
Release Date: October 31, 2024
Alonzo Martinez
Alonzo Martinez is Associate General Counsel at HireRight, where he supports the company’s compliance, legal research, and thought leadership initiatives in the background screening industry. As a senior contributor at Forbes, Alonzo writes on employment legislation, criminal history reform, pay equity, AI discrimination laws, and the impact of legalized cannabis on employers. Recognized as an industry influencer, he shares insights through his weekly video updates, media appearances, podcasts, and HireRight's compliance webinar series. Alonzo's commitment to advancing industry knowledge ensures HireRight remains at the forefront of creating actionable compliance content.