Could AI reject your resume? California tries to prevent a new kind of discrimination
Summary
Key points about the proposed regulations:
1. California is proposing new regulations to restrict how employers can use artificial intelligence (AI) in hiring and employment decisions.
2. The California Civil Rights Council has released draft regulations that:
- Clarify that it is illegal to use AI systems that harm applicants or employees based on protected characteristics
- Require employers to keep AI-related employment records for four years
- Prohibit third parties from aiding employment discrimination via AI systems
- Define key terms like "automated decision system" and "adverse impact"
3. The regulations would apply to AI used for resume screening, interviewing, personality assessments, job ad targeting, and other hiring/employment processes.
4. Employers could be held liable for discrimination resulting from AI systems, similar to human-made decisions.
5. The rules aim to prevent AI from perpetuating bias or creating new forms of discrimination in hiring and employment.
6. Public comments on the proposed regulations are being accepted until July 18, 2024.
7. These efforts align with broader initiatives like the White House's AI Bill of Rights and EEOC guidelines on algorithmic fairness.
8. California legislators have also introduced bills to further regulate AI in employment, including impact assessments and governance programs.
9. The regulations and proposed laws reflect growing concerns about potential algorithmic bias and discrimination in AI-powered employment tools.
California regulators are moving to restrict how employers can use artificial intelligence to screen workers and job applicants — warning that using AI to measure tone of voice, facial expressions and reaction times may run afoul of the law.
The draft regulations say that if companies use automated systems to limit or prioritize applicants based on pregnancy, national origin, religion or criminal history, that’s discrimination.
Members of the public have until July 18 to comment on the proposed rules. After that, regulators in the California Civil Rights Department may amend the rules before eventually approving them, subject to final review by the state Office of Administrative Law, capping off a process that began three years ago.
The rules govern so-called “automated decision systems” — artificial intelligence and other computerized processes, including quizzes, games, resume screening, and even advertising placement. The regulations say using such systems to analyze physical characteristics or reaction times may constitute illegal discrimination. The systems may not be used at all, the new rules say, if they have an “adverse impact” on candidates based on certain protected characteristics.
The draft rules also require companies that sell predictive services to employers to keep records for four years in order to respond to discrimination claims.
A crackdown is necessary in part because while businesses want to automate parts of the hiring process, “this new technology can obscure responsibility and make it harder to discern who’s responsible when a person is subjected to discriminatory decision-making,” said Ken Wang, a policy associate with the California Employment Lawyers Association.
The draft regulations make clear that third-party service providers act as agents of the employer, and they hold employers responsible for discrimination that results from those agents’ systems.
The California Civil Rights Department started exploring how algorithms, a type of automated decision system, can impact job opportunities and automate discrimination in the workplace in April 2021. Back then, Autistic People of Color Fund founder Lydia X. Z. Brown warned the agency about the harm that hiring algorithms can inflict on people with disabilities. Brown told CalMatters that whether the new draft rules will offer meaningful protection depends on how they’re put in place and enforced.
Researchers, advocates and journalists have amassed a body of evidence that AI models can automate discrimination, including in the workplace. Last month, the American Civil Liberties Union filed a complaint with the Federal Trade Commission alleging that resume screening software made by the company Aon discriminates against people based on race and disability despite the company’s claim that its AI is “bias free.” An evaluation of leading artificial intelligence firm OpenAI’s GPT-3.5 technology found that the large language model can exhibit racial bias when used to automatically sift through the resumes of job applicants. Though the company uses filters to prevent the language model from producing toxic language, internal tests of GPT-3 also surfaced race, gender, and religious bias.
Protecting people from automated bias understandably attracts a lot of attention, but sometimes hiring software that’s marketed as smart makes dumb decisions. Wearing glasses or a headscarf or having a bookshelf in the background of a video job interview can skew personality predictions, according to an investigative report by German public broadcast station Bayerischer Rundfunk. So can the font a job applicant chooses when submitting a resume, according to researchers at New York University.
California’s proposed regulations are the latest in a series of initiatives aimed at protecting workers against businesses using harmful forms of AI.
In 2021, New York City lawmakers passed a law to protect job applicants from algorithmic discrimination in hiring, although researchers from Cornell University and Consumer Reports recently concluded that the law has been ineffective. And in 2022, the Equal Employment Opportunity Commission and the U.S. Justice Department clarified that employers must comply with the Americans with Disabilities Act when using automation during hiring.
The California Privacy Protection Agency, meanwhile, is considering draft rules that, among other things, define what information employers can collect on contractors, job applicants, and workers, and that allow those individuals to see what data employers collect, to opt out of such collection, or to request human review.
Pending legislation would further empower the source of the draft revisions, the California Civil Rights Department. Assembly Bill 2930 would allow the department to demand impact assessments from businesses and state agencies that use AI in order to protect against automated discrimination.
Outside of government, union leaders increasingly argue that rank-and-file workers should be able to weigh in on the effectiveness and harms of AI in order to protect the public. Labor representatives have had conversations with California officials about specific projects as the state experiments with how to use AI.
Civil Rights Council Releases Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems
SACRAMENTO – The California Civil Rights Council today announced the release of proposed regulations to protect against discrimination in employment resulting from the use of artificial intelligence, algorithms, and other automated decision-making systems. In compliance with the Administrative Procedure Act, the Civil Rights Council has initiated the public comment period for the proposed regulations and encourages interested parties to submit public comments by the July 18, 2024 deadline.
“Employers are increasingly using artificial intelligence and other technologies to make employment decisions — from recruitment and hiring to promotion and retention,” said Civil Rights Councilmember Hellen Hong. “While these tools can bring a range of benefits, they can also contribute to and further perpetuate bias and discrimination based on protected characteristics. The proposed regulations clarify how existing rules protecting against employment discrimination apply to these emerging technologies, and I encourage anyone who is interested to participate in the regulatory process by submitting public comment.”
“We’re proud of California’s innovative spirit,” said Civil Rights Department Director Kevin Kish. “Through advances in technology and artificial intelligence, we’re taking steps to tackle climate change, develop cutting edge treatments in healthcare, and build the economy of tomorrow. At the same time, we also have a responsibility to ensure our laws keep pace and that we retain hard-won civil rights. The proposed regulations announced today represent our ongoing commitment to fairness and equity in the workplace. I applaud the Civil Rights Council for their work.”
Under California law, the California Civil Rights Department (CRD) is charged with enforcing many of the state’s robust civil rights laws, including in the areas of employment, housing, businesses and public accommodations, and state-funded programs and activities. As part of those efforts, the Civil Rights Council — a branch of CRD — develops and issues regulations to implement state civil rights laws. With respect to automated-decision systems, the Civil Rights Council’s initial proposed regulations are the result of a series of public discussions, including an April 2021 hearing, and careful consideration of input from experts and the public, as well as federal reports and guidance.
Automated decision-making systems — which may rely on algorithms or artificial intelligence — are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants or employees, including with respect to recruitment, hiring, and promotion. While these tools can bring myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. Whether it is a hiring tool that rejects women applicants by mimicking the existing features of a company’s male-dominated workforce or a job advertisement delivery system that reinforces gender and racial stereotypes by directing cashier ads to women and taxi jobs to Black workers, many of the challenges have been well documented.
Among other changes, the Civil Rights Council’s proposed regulations seek to:
- Clarify that it is a violation of California law to use an automated decision-making system if it harms applicants or employees based on protected characteristics.
- Ensure employers and covered entities maintain employment records, including automated decision-making data, for a minimum of four years.
- Affirm that the use of an automated decision-making system alone does not replace the requirement for an individualized assessment when considering an applicant’s criminal history.
- Clarify that third parties are prohibited from aiding and abetting employment discrimination, including through the design, sale, or use of an automated decision-making system.
- Provide clear examples of tests or challenges used in automated decision-making system assessments that may constitute unlawful medical or psychological inquiries.
- Add definitions for key terms used in the proposed regulations, such as “automated-decision system,” “adverse impact,” and “proxy.”
The Civil Rights Council and CRD encourage all interested parties and members of the public to participate in the regulatory process. Written comments must be submitted by 5:00 PM PT on July 18, 2024. Comments may be submitted by email to council@calcivilrights.ca.gov. A public hearing on the proposed regulations will be held at 10:00 AM PT on July 18, 2024. Additional information on how to submit public comment and participate in the hearing is available on the Civil Rights Council’s website.
The notice of proposed rulemaking, initial statement of reasons for the proposed regulations, and proposed text of the regulations are also available on the Council’s website.
For email updates on the proposed rulemaking and other Civil Rights Council activities, you can subscribe online at https://calcivilrights.ca.gov/subscriptions/.
The California Civil Rights Department (CRD) is the state agency charged with enforcing California’s civil rights laws. CRD’s mission is to protect the people of California from unlawful discrimination in employment, housing, public accommodations, and state-funded programs and activities, and from hate violence and human trafficking. For more information, visit calcivilrights.ca.gov.
California Proposes New Anti-Discrimination Rules When Artificial Intelligence Impacts Hiring
In response to growing concerns about algorithmic bias in employment practices, the California Civil Rights Council (the Council) has proposed amendments to the regulations implementing the Fair Employment and Housing Act (FEHA) to address discrimination arising from the use of automated decision systems. The proposed amendments align with broader efforts, including the White House’s Blueprint for an AI Bill of Rights and the Equal Employment Opportunity Commission’s (EEOC) guidelines on algorithmic fairness, to ensure that technological advancements do not perpetuate existing biases or create new forms of discrimination in the employment lifecycle.
Definition and Scope of AI Under the Act
The proposed amendments define an “automated decision system” as a computational process that screens, evaluates, categorizes, recommends, makes, or facilitates decisions impacting applicants or employees. This includes systems using machine learning, algorithms, statistics, or other data processing or artificial intelligence techniques. This definition is generally consistent with other AI laws, such as the Colorado Artificial Intelligence Act and New York City’s Local Law 144.
The proposed amendments cover various activities, processes, tools, and solutions performed by automated decision systems, including:
- Using computer-based tests to make predictive assessments about an applicant or employee; measure skills, dexterity, reaction time, and other abilities or characteristics; or measure personality traits, aptitude, attitude, and cultural fit.
- Directing job advertisements or other recruiting materials to targeted groups.
- Screening resumes for specific terms or patterns before any human review of applicant materials.
- Analyzing facial expressions, word choice, and/or voice in online interviews.
- Ranking or prioritizing applicants based on their work schedule availability.
The examples provided by the Council are illustrative and non-exhaustive. They aim to ensure that all forms of automated decision systems are scrutinized for fairness and compliance with anti-discrimination standards, reflecting a robust approach to modernizing employment practices.
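To make the scope concrete, note how little machinery it takes to fall within the definition. Below is a hypothetical sketch of a keyword-based resume screen, one of the listed examples; the keywords and resume text are invented for illustration, and nothing in the proposed regulations prescribes this design:

```python
# Hypothetical sketch of a bare-bones "automated decision system" under the
# proposed definition: a script that screens resumes for specific terms or
# patterns before any human review. All keywords and text are invented.

REQUIRED_TERMS = {"python", "sql"}            # employer-chosen skill keywords
DISQUALIFYING_PATTERNS = {"employment gap"}   # a pattern that can encode bias

def screen_resume(text: str) -> bool:
    """Return True if the resume advances to human review."""
    lowered = text.lower()
    # Reject outright if any disqualifying pattern appears.
    if any(pattern in lowered for pattern in DISQUALIFYING_PATTERNS):
        return False
    # Otherwise advance only if every required keyword is present.
    return all(term in lowered for term in REQUIRED_TERMS)

resume = "Data analyst with Python and SQL experience."
print(screen_resume(resume))  # True: this resume advances
```

Even a filter this simple is a computational process that screens applicants before human review, and criteria like employment gaps can act as proxies for protected characteristics such as pregnancy or disability, which is one reason the proposed regulations define "proxy" as a key term.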
Who Is Affected?
The proposed amendments apply to any organization that regularly pays five or more individuals for work or services. An employer’s “agent” and “employment agencies” are also covered.
The proposed amendments define “agent” as any person acting on behalf of an employer, directly or indirectly. This includes third parties providing services related to hiring or employment decisions, such as recruiting, applicant screening, hiring, payroll, benefits administration, evaluations, decision-making about workplace leaves or accommodations, or administering automated decision systems for these purposes.
The proposed amendments also revise the definition of “employment agency” to mean any person providing compensated services to identify, screen, or procure job applicants, employees, or work opportunities, including those who offer such services through automated decision systems. This revision specifies the range of activities an employment agency covers and acknowledges the increasingly common use of automated systems in providing these services.
Employer Impact
The proposed amendments clarify that it is unlawful for employers to use selection criteria, including automated decision systems, if such use results in an adverse impact or disparate treatment based on characteristics protected under FEHA. Employers may be liable for discrimination stemming from these systems, just as they would be for decisions made without them. However, employers can defend their use of automated decision systems by demonstrating that the criteria were job-related and necessary for the business, and that no less discriminatory alternatives were available. Additionally, evidence that employers conducted anti-bias testing or took similar measures to prevent discrimination will be considered in their defense.
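The proposed regulations do not prescribe any particular form of anti-bias testing, but a common starting point in practice is a selection-rate comparison such as the EEOC’s four-fifths guideline. The following is a minimal, illustrative sketch; the group labels and counts are hypothetical, and a ratio below 0.8 is a screening heuristic, not a legal conclusion:

```python
# Minimal sketch of a selection-rate ("four-fifths rule") check of the kind
# an employer might run as part of anti-bias testing. All group labels and
# counts are hypothetical; the proposed regulations do not mandate this or
# any other specific statistical test.

# Hypothetical outcomes of an automated resume screen:
# group -> (number screened, number advanced by the system)
outcomes = {
    "group_a": (200, 120),
    "group_b": (180, 72),
    "group_c": (150, 90),
}

# Selection rate for each group: advanced / screened.
rates = {g: advanced / screened for g, (screened, advanced) in outcomes.items()}

# Compare each group's rate to the highest rate. Under the EEOC's
# four-fifths guideline, a ratio below 0.8 is a common red flag for
# potential adverse impact.
top_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

In practice, a flagged ratio would typically prompt further statistical analysis and documentation, which dovetails with the record-keeping provisions discussed below: the data needed to run such a test is precisely the kind of automated decision system data employers would be required to retain.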
Employers are accountable for the actions of their agents, while agents themselves are liable for assisting or enabling any discriminatory employment practices resulting from the use of automated decision systems.
Consideration of Criminal History in Employment Decisions
The proposed amendments also clarify the role of automated decision systems in the consideration of an applicant’s criminal history under the FEHA regulations. In particular:
- § 11017.1(a): The Council proposes adding that employers using automated decision systems to consider criminal history must comply with the same restrictions as human-based inquiries. Specifically, employers are prohibited from inquiring into or assessing an applicant’s criminal history until after a conditional job offer has been extended, ensuring that automated systems cannot be used to make such inquiries earlier unless specific exceptions apply.
- § 11017.1(d)(2)(C): If an employer makes a preliminary decision to withdraw a job offer based on criminal history assessed by an automated decision system, it must provide the applicant with the report or data generated by the system, along with information on the assessment criteria used.
- § 11017.1(d)(4): Using an automated decision system alone does not qualify as an individualized assessment of an applicant’s criminal history. Employers must conduct additional human-based processes to determine if a conviction directly relates to the job.
Other Clarifications of Law and Additional Requirements
The proposed provisions aim to prohibit the use of automated decision systems that result in disparate treatment based on various protected characteristics. Specifically, these amendments seek to prevent discrimination arising from the use of such systems concerning sex, pregnancy, childbirth or related medical conditions, marital status, religious creed, disability, and age.
Notable are the Council’s proposed amendments concerning automated decision systems used to conduct medical and psychological examinations. The Council notes that administering personality-based questions through automated decision systems, such as inquiries into optimism, emotional stability, or extroversion, can constitute prohibited medical inquiries. Similarly, using gamified screens within these systems to evaluate physical or mental abilities, like tests requiring rapid clicking or quick reaction times, may also violate FEHA. Such examinations could be deemed unlawful if they are not directly related to job requirements or if reasonable accommodations are not provided for individuals with disabilities.
Record Keeping Obligations
The proposed inclusion of records pertaining to automated decision systems extends beyond traditional documentation to encompass data used in the training, operation, and outputs of such systems. By explicitly stating that these records must be retained for a minimum of four years following the last use of the automated decision system, the amendment establishes a clear and enforceable standard for recordkeeping practices.
Additionally, the amendment addresses the accountability of entities involved in the provision, sale, or use of automated decision systems on behalf of covered entities. The proposal promotes transparency and accountability of all covered entities throughout the employment process by stipulating that these parties must maintain relevant records, including but not limited to automated decision system data.
Next Steps
The Civil Rights Council urges the public to engage in the regulatory process by submitting written comments by 5:00 PM PT on July 18, 2024. Comments can be sent via email to council@calcivilrights.ca.gov. A public hearing on the proposed regulations is scheduled for 10:00 AM PT on July 18, 2024. For additional information on how to contribute to the discussion and participate in the hearing, the public is encouraged to review the Council’s Notice of Proposed Rulemaking.
Parting Thoughts
The proposed amendments extend the reach of FEHA to encompass automated decision systems. By aligning these technologies with existing legal obligations, California expands the scope of protections against discrimination in employment, adapting its regulations to the digital era. Employers are encouraged to submit public comments and to actively monitor the Council’s website for developments in this regulatory process.