The California Civil Rights Department continues to refine its proposed regulations regarding automated decision-making systems in employment.
In recent years, California has been at the forefront of regulating artificial intelligence (AI) and automated decision-making systems (ADS), particularly in employment practices. The California Civil Rights Department (CRD) is actively pursuing amendments to its proposed regulations to address concerns about the use of these systems in hiring, promotion, and other employment decisions. Following public testimony at a July hearing, the CRD issued revised proposed regulations in October 2024. These changes respond to that testimony and sharpen the definitions of key terms such as “automated decision-making system,” “agent,” and “employment agency.”
Revised definitions in focus
One of the most important updates in the October revision of the CRD’s proposed regulations centers around the definition of key terms that directly impact employers who rely on AI and ADS in their employment practices.
Automated-decision system (ADS)
The revised regulations expand the definition of ADS to include “a computational process that makes a decision or facilitates human decision-making regarding an employment benefit.” This broad definition covers systems that use AI, machine learning, algorithms, statistics, and other data-processing tools.
While the original definition provided a general framework, the October version specifies in more detail the activities in which these systems may be involved, including screening, assessment, classification, and recommendations. The regulations make clear that an ADS can facilitate decision-making regarding hiring, promotion, pay, benefits, and other employment matters. However, the phrase “facilitates” human decision-making remains ambiguous, raising questions about whether systems that support decision-making but do not fully automate it fall under this rule.
For example, AI tools used to verify educational backgrounds can flag discrepancies between an applicant’s claimed degree and the degree actually earned. Although the tool does not make the final decision, it can influence the human decision-maker. This raises the question of whether such tools, which assist but do not complete the decision-making process, qualify as regulated ADSs. Absent further clarification, employers should be aware that even automated systems used for seemingly minor employment decisions may fall within this scope.
Notably, the proposed rules specifically exclude technologies such as word processing and spreadsheet software, website hosting, and cybersecurity tools. However, there is still room for interpretation as to whether more basic automated processes, such as simple if/then workflows, fall within the scope of these regulations.
Agent
The definition of “agent” has also become noticeably clearer. An agent is now defined as a person who performs, on behalf of an employer, functions that have traditionally been performed by the employer. This concept of functions “traditionally performed by an employer” is central to the revised definition, and it covers key activities such as applicant recruitment, selection, hiring, promotion, and decisions regarding compensation, benefits, and time off. With this framework in place, the revised rules sharpen the scope of who counts as an agent, ensuring that third parties performing traditionally employer-driven activities are treated as agents and are subject to the same compliance standards.
This expanded definition means that even if an employer outsources administrative functions such as payroll or benefits administration, those vendors may still be considered agents. Employers should therefore closely scrutinize their partnerships with third-party vendors, particularly those that use AI and machine learning, to ensure compliance with the revised definition. It also reinforces that delegating traditionally employer-driven functions to third parties does not relieve employers of their regulatory obligations.
Employment agency
The definition of an employment agency has been refined to include any entity that is compensated for services that identify, screen, or source job applicants and employees. The revised rules emphasize selection as a key step in sourcing applicants, positioning it as a core function of employment agencies.
This revision clarifies the distinction between screening resumes for specific terms or patterns and the broader process of selecting candidates, which aligns more closely with the concept of applicant sourcing. However, the proposed rule does not clearly define what “screening” entails, creating potential uncertainty for employers, especially those that rely on third-party vendors for background checks. Without clear guidance on whether a background check falls within the definition of screening, employers may have difficulty determining their compliance obligations.
Considerations for employers using AI for criminal background checks
The CRD’s revised proposed rules focus on preventing discriminatory practices when AI and ADS are used to make employment decisions, including in the sensitive area of criminal background checks. Employers must ensure that the use of these systems is subject to the same legal standards as human decision-making, in particular the requirement that criminal history be considered only after a conditional offer of employment has been made.
This rule requires that ADS used for criminal background checks operate transparently. Employers must provide applicants with the reports and decision-making criteria used in the system to ensure compliance with anti-discrimination laws. In addition, employers must regularly conduct anti-bias testing of these systems and retain records of these tests, along with the data used, for at least four years.
The focus on transparency and fairness is consistent with broader trends, such as the White House’s Blueprint for an AI Bill of Rights and the EEOC’s guidance on algorithmic fairness. Employers should be diligent in auditing their AI systems to avoid disparate impacts on protected classes, especially in decisions related to criminal records. Employers should also ensure that the criteria used by an ADS are job-related and necessary for business purposes, and consider less discriminatory alternatives where available.
CRD invites public input
As the California Civil Rights Department continues to refine its proposed regulations regarding automated decision-making systems in employment, now is the time for employers to engage in the process. The CRD is accepting written comments on the latest changes until November 18, 2024. This is an important opportunity for employers to help ensure that regulations around AI and ADS are clear, practical, and reflective of modern employment practices.
Comments can be sent via email to council@calcivilrights.ca.gov. For more information and to review the proposed amendments, please visit the CRD webpage at calcivilrights.ca.gov/civilrightscouncil.
Parting thoughts
The October revisions to the CRD’s proposed rules on automated decision-making systems represent an important step in California’s efforts to regulate AI in employment practices. For employers, this means taking a closer look at how AI and ADS are being used, particularly in recruitment and criminal background checks. Given the expanded definitions of ADS, agent, and employment agency, both the use of technology and relationships with third-party vendors must be carefully scrutinized. As these rules continue to evolve, employers should stay informed and proactively assess compliance to keep pace with California’s advancing regulation of AI in employment.