Could AI reject your resume? California tries to prevent employers from using AI to unfairly screen applicants

Summary

Here's a summary of key points about the newly proposed regulations:

1. California is proposing new regulations to restrict how employers can use artificial intelligence (AI) in hiring and employment decisions.

2. The California Civil Rights Council has released draft regulations that:

  • Clarify it's illegal to use AI systems that harm applicants/employees based on protected characteristics
  • Require employers to keep AI-related employment records for 4 years
  • Prohibit third parties from aiding employment discrimination via AI systems
  • Define key terms like "automated decision system" and "adverse impact"


3. The regulations would apply to AI used for resume screening, interviewing, personality assessments, job ad targeting, and other hiring/employment processes.

4. Employers could be held liable for discrimination resulting from AI systems, similar to human-made decisions.

5. The rules aim to prevent AI from perpetuating bias or creating new forms of discrimination in hiring and employment.

6. Public comments on the proposed regulations are being accepted until July 18, 2024.

7. These efforts align with broader initiatives like the White House's Blueprint for an AI Bill of Rights and EEOC guidelines on algorithmic fairness.

8. California legislators have also introduced bills to further regulate AI in employment, including impact assessments and governance programs.

9. The regulations and proposed laws reflect growing concerns about potential algorithmic bias and discrimination in AI-powered employment tools.

Could AI reject your resume? California tries to prevent a new kind of discrimination | KPBS Public Media

By Khari Johnson / CalMatters

Published June 20, 2024 at 3:04 PM PDT

California regulators are moving to restrict how employers can use artificial intelligence to screen workers and job applicants — warning that using AI to measure tone of voice, facial expressions and reaction times may run afoul of the law.

The draft regulations say that if companies use automated systems to limit or prioritize applicants based on pregnancy, national origin, religion or criminal history, that’s discrimination.

Members of the public have until July 18 to comment on the proposed rules. After that, regulators in the California Civil Rights Department may amend the rules before eventually approving them, subject to final review by the state Office of Administrative Law, capping off a process that began three years ago.

The rules govern so-called “automated decision systems” — artificial intelligence and other computerized processes, including quizzes, games, resume screening, and even advertising placement. The regulations say using such systems to analyze physical characteristics or reaction times may constitute illegal discrimination. The systems may not be used at all, the new rules say, if they have an “adverse impact” on candidates based on certain protected characteristics.

The draft rules also require companies that sell predictive services to employers to keep records for four years in order to respond to discrimination claims.

A crackdown is necessary in part because while businesses want to automate parts of the hiring process, “this new technology can obscure responsibility and make it harder to discern who’s responsible when a person is subjected to discriminatory decision-making,” said Ken Wang, a policy associate with the California Employment Lawyers Association.

The draft regulations make clear that third-party service providers are agents of the employer, and they hold employers responsible for the conduct of those agents.

The California Civil Rights Department started exploring how algorithms, a type of automated decision system, can impact job opportunities and automate discrimination in the workplace in April 2021. Back then, Autistic People of Color Fund founder Lydia X. Z. Brown warned the agency about the harm that hiring algorithms can inflict on people with disabilities. Brown told CalMatters that whether the new draft rules will offer meaningful protection depends on how they’re put in place and enforced.

Researchers, advocates and journalists have amassed a body of evidence that AI models can automate discrimination, including in the workplace. Last month, the American Civil Liberties Union filed a complaint with the Federal Trade Commission alleging that resume screening software made by the company Aon discriminates against people based on race and disability despite the company’s claim that its AI is “bias free.” An evaluation of leading artificial intelligence firm OpenAI’s GPT-3.5 technology found that the large language model can exhibit racial bias when used to automatically sift through the resumes of job applicants. Though the company uses filters to prevent the language model from producing toxic language, internal tests of GPT-3 also surfaced race, gender, and religious bias.

“This new technology can obscure responsibility.”

Ken Wang, policy associate with the California Employment Lawyers Association

Protecting people from automated bias understandably attracts a lot of attention, but sometimes hiring software that’s marketed as smart makes dumb decisions. Wearing glasses or a headscarf or having a bookshelf in the background of a video job interview can skew personality predictions, according to an investigative report by German public broadcast station Bayerischer Rundfunk. So can the font a job applicant chooses when submitting a resume, according to researchers at New York University.

California’s proposed regulations are the latest in a series of initiatives aimed at protecting workers against businesses using harmful forms of AI.

In 2021, New York City lawmakers passed a law to protect job applicants from algorithmic discrimination in hiring, although researchers from Cornell University and Consumer Reports recently concluded that the law has been ineffective. And in 2022, the Equal Employment Opportunity Commission and the U.S. Justice Department clarified that employers must comply with the Americans with Disabilities Act when using automation during hiring.

The California Privacy Protection Agency, meanwhile, is considering draft rules that, among other things, define what information employers can collect on contractors, job applicants, and workers, and that would allow those individuals to see what data employers collect, opt out of such collection, or request human review.

Pending legislation would further empower the source of the draft revisions, the California Civil Rights Department. Assembly Bill 2930 would allow the department to demand impact assessments from businesses and state agencies that use AI in order to protect against automated discrimination.

Outside of government, union leaders now increasingly argue that rank-and-file workers should be able to weigh in on the effectiveness and harms of AI in order to protect the public. Labor representatives have had conversations with California officials about specific projects as they experiment with how to use AI.

Civil Rights Council Releases Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems

State of California

SACRAMENTO – The California Civil Rights Council today announced the release of proposed regulations to protect against discrimination in employment resulting from the use of artificial intelligence, algorithms, and other automated decision-making systems. In compliance with the Administrative Procedure Act, the Civil Rights Council has initiated the public comment period for the proposed regulations and encourages interested parties to submit public comments by the July 18, 2024 deadline.

“Employers are increasingly using artificial intelligence and other technologies to make employment decisions — from recruitment and hiring to promotion and retention,” said Civil Rights Councilmember Hellen Hong. “While these tools can bring a range of benefits, they can also contribute to and further perpetuate bias and discrimination based on protected characteristics. The proposed regulations clarify how existing rules protecting against employment discrimination apply to these emerging technologies, and I encourage anyone who is interested to participate in the regulatory process by submitting public comment.”

“We’re proud of California’s innovative spirit,” said Civil Rights Department Director Kevin Kish. “Through advances in technology and artificial intelligence, we’re taking steps to tackle climate change, develop cutting edge treatments in healthcare, and build the economy of tomorrow. At the same time, we also have a responsibility to ensure our laws keep pace and that we retain hard-won civil rights. The proposed regulations announced today represent our ongoing commitment to fairness and equity in the workplace. I applaud the Civil Rights Council for their work.”

Under California law, the California Civil Rights Department (CRD) is charged with enforcing many of the state’s robust civil rights laws, including in the areas of employment, housing, businesses and public accommodations, and state-funded programs and activities. As part of those efforts, the Civil Rights Council — a branch of CRD — develops and issues regulations to implement state civil rights laws. With respect to automated-decision systems, the Civil Rights Council’s initial proposed regulations are the result of a series of public discussions, including an April 2021 hearing, and careful consideration of input from experts and the public, as well as federal reports and guidance.

Automated decision-making systems — which may rely on algorithms or artificial intelligence — are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants or employees, including with respect to recruitment, hiring, and promotion. While these tools can bring myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. Whether it is a hiring tool that rejects women applicants by mimicking the existing features of a company’s male-dominated workforce or a job advertisement delivery system that reinforces gender and racial stereotypes by directing cashier ads to women and taxi jobs to Black workers, many of the challenges have been well documented.

Among other changes, the Civil Rights Council’s proposed regulations seek to:

  • Clarify that it is a violation of California law to use an automated decision-making system if it harms applicants or employees based on protected characteristics.
  • Ensure employers and covered entities maintain employment records, including automated decision-making data, for a minimum of four years.
  • Affirm that the use of an automated decision-making system alone does not replace the requirement for an individualized assessment when considering an applicant’s criminal history.
  • Clarify that third parties are prohibited from aiding and abetting employment discrimination, including through the design, sale, or use of an automated decision-making system.
  • Provide clear examples of tests or challenges used in automated decision-making system assessments that may constitute unlawful medical or psychological inquiries.
  • Add definitions for key terms used in the proposed regulations, such as “automated-decision system,” “adverse impact,” and “proxy.”

The Civil Rights Council and CRD encourage all interested parties and members of the public to participate in the regulatory process. Written comments must be submitted by 5:00 PM PT on July 18, 2024. Comments may be submitted by email at council@calcivilrights.ca.gov. A public hearing on the proposed regulations will be held at 10:00 AM PT on July 18, 2024. Additional information on how to submit public comment and participate in the hearing is available here.

The notice of proposed rulemaking, initial statement of reasons for the proposed regulations, and proposed text of the regulations are available here.

For email updates on the proposed rulemaking and other Civil Rights Council activities, you can subscribe online at https://calcivilrights.ca.gov/subscriptions/.

The California Civil Rights Department (CRD) is the state agency charged with enforcing California’s civil rights laws. CRD’s mission is to protect the people of California from unlawful discrimination in employment, housing, public accommodations, and state-funded programs and activities, and from hate violence and human trafficking. For more information, visit calcivilrights.ca.gov.

Update on California’s Efforts to Regulate the Use of AI in Employment Decision-Making

By Alice Wang and Ellie McPike on April 13, 2023

Updated May 19, 2023: AB 331 has died in committee and will not become law this session.

  • California’s Civil Rights Council has revised proposed regulations governing the use of automated-decision systems.
  • A proposed bill, AB 331, would impose obligations on employers to evaluate the impact of an automated decision tool (ADT), prohibit use of an ADT that would contribute to algorithmic discrimination, add a new notice requirement, and create a governance program.
  • A separate bill, SB 721, would create a temporary Working Group to deliver a report to the legislature regarding artificial intelligence.

California continues to take steps to regulate the burgeoning use of artificial intelligence, machine learning, and other data-driven statistical processes in making consequential decisions, including those related to employment. The California Civil Rights Council (CRC) recently issued updated proposed regulations governing automated-decision systems. The agency had issued a working draft in March 2022. In addition to these regulatory efforts, California lawmakers have introduced two bills designed to further regulate AI in employment. California’s efforts at oversight now consist of the following:

  • The CRC’s Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems;
  • Assembly Bill No. 331, to add Chapter 25 (commencing with Section 22756) to Division 8 of the Business and Professions Code, relating to artificial intelligence; and
  • Senate Bill No. 721, “California Interagency AI Working Group,” to add and repeal Section 11546.47 of the Government Code, relating to artificial intelligence.

While these approaches share the common goal of minimizing the potential negative consequences of artificial intelligence when deployed (in relevant part) in the employment and personnel management contexts, they also represent simultaneous efforts by disparate bodies racing to be the first in California to regulate new technology. Adding to the confusion in an increasingly crowded space is the fact that these entities are proposing their own unique definitions for similar terminology (e.g., “adverse impact” by the CRC versus “algorithmic discrimination” by the California Assembly, and “automated-decision system” by the CRC versus “automated decision tool” by the California Assembly), when it remains unanswered who should even be defining these terms as a threshold matter. Taken as a whole, the CRC, whose mission is to promulgate regulations that implement California’s civil rights laws, appears to be leapfrogging the legislative process with its efforts.

The following summarizes the latest primary updates regarding California’s three-pronged approach.

Civil Rights Council’s Proposed Rules

The latest iteration of the CRC’s Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems was released on February 10, 2023.  Since first publishing draft modifications to its antidiscrimination law in March 2022, the CRC has continued to refine its definitions of key terms without altering the primary substance of the proposed regulations.  The CRC’s most recent proposal includes the following primary updates:

Key Updates to Definitions

  • Introduces definition for adverse impact, which includes, but is not limited to, “the use of a facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by the Act.  ‘Adverse impact’ is synonymous with ‘disparate impact.’”
  • Introduces definition for artificial intelligence to mean a “machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
  • Introduces definition for machine learning to mean the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
  • Broadens the definition of agent from a person acting on behalf of an employer to provide services “related to … the administration of automated-decision systems for an employer’s use in recruitment, hiring, performance, evaluation, or other assessments that could result in the denial of employment or otherwise adversely affect the terms, conditions, benefits, or privileges of employment” to “the administration of automated-decision systems for an employer’s use in making hiring or employment decisions” (emphasis added).
  • Introduces definition for proxy to mean a “technically neutral characteristic or category correlated with a basis protected by the Act.”
  • Provides a more fulsome list of examples of tasks that constitute automated-decision systems, and clarifies that automated-decision systems exclude word-processing software, spreadsheet software, and map navigation systems.
  • Provides an example of the capability of algorithms to “detect patterns in datasets and automate decisions [sic] making based on those patterns and datasets.”
  • Renames “Machine-Learning Data” as “Automated-Decision System Data” and refines its definition.

Key Updates to Defense to Unlawful Employment Practice and Recordkeeping Obligations

  • Clarifies how an employer or covered entity can defend against a showing that it engaged in an unlawful use of selection criteria that resulted in an adverse impact on, or disparate treatment of, an applicant, employee, or class of applicants or employees on a protected basis: the employer or covered entity can show that the selection criteria, as used, are job-related for the position in question and consistent with business necessity, and that there is no less-discriminatory policy or practice that serves the employer’s goals as effectively as the challenged policy or practice.
  • Extends recordkeeping obligations not just to any person who sells or provides an automated-decision system or other selection criteria to an employer or covered entity, but also to any person “who uses an automated-decision system or other selection criteria on behalf of an employer or other covered entity.”
  • Clarifies the scope of records to be preserved, rather than simply referring to “records of the assessment criteria used by the automated-decision system.”

Assembly Bill No. 331

Introduced by Assembly Member Bauer-Kahan on January 30, 2023, Assembly Bill No. 331 (AB 331) would, similar to NYC Local Law 144, impose obligations on employers to evaluate the impact of an automated decision tool (ADT) and to provide notice regarding its use, and provide for formation of a governance program. AB 331 would also prohibit a deployer from using an ADT in a way that contributes to algorithmic discrimination.

Impact Assessment

AB 331 would require a deployer and a developer of an ADT to perform an impact assessment on or before January 1, 2025, and annually thereafter, for any ADT used. The impact assessment must include:

  • a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts;
  • a description of the ADT’s outputs and how the outputs are used to make, or are a controlling factor in making, a consequential decision;
  • a summary of the type of data collected from natural persons and processed by the ADT when it is used to make, or is a controlling factor in making, a consequential decision;
  • a statement of the extent to which the deployer’s use of the ADT is consistent with or varies from the statement required of the developer;
  • an analysis of the potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information;
  • a description of the safeguards that are or will be implemented by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the ADT known to the deployer at the time of the impact assessment;
  • a description of how the ADT will be used by a natural person, or monitored when it is used, to make or be a controlling factor in making, a consequential decision; and
  • a description of how the ADT has been or will be evaluated for validity or relevance.

Notice Requirements

AB 331 would also require a deployer, at or prior to an ADT being used to make a consequential decision, to notify any natural person who is the subject of the consequential decision that an ADT is being used to make, or is a controlling factor in making, the consequential decision, and to provide that person with:

  • a statement of the purpose of the ADT;
  • contact information for the developer; and
  • a plain-language description of the ADT that includes a description of any human components and how any automated component is used to inform a consequential decision.

Furthermore, if a consequential decision is made solely based on the output of an ADT, a deployer must accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation, if technically feasible.

Governance Program

AB 331 would also require a deployer or developer to establish and maintain a governance program to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use of the ADT. In relevant part, the governance program must provide for an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this chapter, and for reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the ADT, the state of technical standards, and changes in business arrangements or operations of the deployer or developer.

Senate Bill No. 721

Senate Bill No. 721 (SB 721), titled “California Interagency AI Working Group,” was introduced on February 16, 2023. It would create a Working Group charged with delivering a report to the legislature regarding artificial intelligence; the group would be disbanded by January 1, 2030.

The Working Group would consist of 10 members: two appointees by the governor, two appointees by the president pro tempore of the Senate, two appointees by the speaker of the Assembly, two appointees by the attorney general, one appointee by the California Privacy Protection Agency, and one appointee by the Department of Technology. The Working Group would be chaired by the director of technology, and the members must be Californians with expertise in at least two of the following areas: computer science, artificial intelligence, the technology industry, workforce development, and data privacy.

The Working Group would be required to accept input from a broad range of stakeholders, including from academia, consumer advocacy groups, and small, medium and large businesses affected by artificial intelligence policies.  The Working Group would be required to:

  • Recommend a definition of artificial intelligence as it pertains to its use in technology for use in legislation;
  • Study the implications of the usage of artificial intelligence for data collection to inform testing and evaluation, verification and validation of artificial intelligence to ensure that artificial intelligence will perform as intended, and minimize performance problems and unanticipated outcomes;
  • Determine proactive steps to prevent artificial intelligence-assisted misinformation campaigns and unnecessary exposure for children to the potentially harmful effects of artificial intelligence;
  • Determine the relevant agencies to develop and oversee artificial intelligence policy and implementation of that policy; and
  • Determine how the Working Group and the Department of Justice can leverage the substantial and growing expertise of the California Privacy Protection Agency in the long-term development of data privacy policies that affect the privacy, rights, and the use of artificial intelligence online.

SB 721 would also require the Working Group to submit a report to the legislature regarding the foregoing on or before January 1, 2025, and every two years thereafter. SB 721, if enacted, would remain in effect until January 1, 2030.

With the proliferation of new regulations and laws, it is more important than ever for employers to stay abreast of developments in this area, especially given the potential for a patchwork of obligations for those who choose to incorporate qualifying artificial intelligence into their personnel management processes. Littler will continue to monitor and report on significant developments.


California Proposes New Anti-Discrimination Rules When Artificial Intelligence Impacts Hiring

Alonzo Martinez

In response to growing concerns about algorithmic bias in employment practices, the California Civil Rights Council (the Council) has proposed amendments to the Fair Employment and Housing Act (FEHA) to address discrimination arising from the use of automated decision systems. The proposed amendments align with broader efforts, including the White House’s Blueprint for an AI Bill of Rights and the Equal Employment Opportunity Commission’s (EEOC) guidelines on algorithmic fairness, to ensure that technological advancements do not perpetuate existing biases or create new forms of discrimination in the employment lifecycle.

Definition and Scope of AI Under the Act

The proposed amendments define an “automated decision system” as a computational process that screens, evaluates, categorizes, recommends, makes, or facilitates decisions impacting applicants or employees. This includes systems using machine learning, algorithms, statistics, or other data processing or artificial intelligence techniques. This definition is generally consistent with other AI laws, such as the Colorado Artificial Intelligence Act and New York City’s Local Law 144.

The proposed amendments cover various activities, processes, tools, and solutions performed by automated decision systems, including:

  • Using computer-based tests to make predictive assessments about an applicant or employee; measure skills, dexterity, reaction time, and other abilities or characteristics; or measure personality traits, aptitude, attitude, and cultural fit.
  • Directing job advertisements or other recruiting materials to targeted groups.
  • Screening resumes for specific terms or patterns before any human review of applicant materials.
  • Analyzing facial expressions, word choice, and/or voice in online interviews.
  • Ranking or prioritizing applicants based on their schedule availability.

The examples provided by the Council are illustrative and non-exhaustive. They aim to ensure that all forms of automated decision systems are scrutinized for fairness and compliance with anti-discrimination standards, reflecting a robust approach to modernizing employment practices.
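
To make concrete how simple a covered system can be, here is a minimal, hypothetical Python sketch of the kind of pre-human resume screen described in the examples above. The keywords and resumes are invented for illustration; nothing here is drawn from an actual vendor's product.

```python
import re

# Hypothetical example: a crude keyword screener that filters resumes
# before any human review. Even something this simple would appear to
# fall within the proposed definition of an "automated decision system."

REQUIRED_PATTERNS = [
    r"\bpython\b",          # must mention the language
    r"\b\d+\+?\s+years\b",  # must state years of experience
]

def passes_screen(resume_text: str) -> bool:
    """Return True only if the resume matches every required pattern."""
    text = resume_text.lower()
    return all(re.search(pattern, text) for pattern in REQUIRED_PATTERNS)

applicants = {
    "applicant_a": "Senior engineer with 8 years of Python experience.",
    "applicant_b": "Recent career changer; completed a Python bootcamp.",
}

for name, resume in applicants.items():
    print(name, "->", "advances" if passes_screen(resume) else "screened out")
```

Under the proposed rules, the relevant question would not be how sophisticated such a filter is, but whether its use produces an adverse impact on a protected basis.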

Who Is Affected?

The proposed amendments apply to any organization that regularly pays five or more individuals for work or services. An employer’s “agent” and “employment agencies” are also covered.

The proposed amendments define “agent” as any person acting on behalf of an employer, directly or indirectly. This includes third parties providing services related to hiring or employment decisions, such as recruiting, applicant screening, hiring, payroll, benefits administration, evaluations, decision-making about workplace leaves or accommodations, or administering automated decision systems for these purposes.

The proposed amendments revise the definition of “employment agency” to mean any person providing compensated services to identify, screen, or procure job applicants, employees, and work opportunities, including those who offer these services through automated decision systems. This clarification specifies the range of actions covered by an employment agency and acknowledges the increasingly common use of automated systems in providing these services.

Employer Impact

The proposed amendments clarify that it is unlawful for employers to use selection criteria, including automated decision systems, if such use results in adverse impacts or disparate treatment based on characteristics protected under FEHA. Employers may be liable for discrimination stemming from these systems, just as they would be for decisions made without them. However, employers can defend their use of automated decision systems by demonstrating that the criteria were job-related, necessary for the business, and that no less discriminatory alternatives were available. Additionally, evidence that employers conducted anti-bias testing or took similar measures to prevent discrimination will be considered in their defense.
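
The proposed amendments do not prescribe a particular method of anti-bias testing, but one widely used first screen for adverse impact is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a group's selection rate below 80% of the highest group's rate is generally treated as evidence of adverse impact. A minimal Python sketch of that arithmetic, using hypothetical numbers:

```python
# Hypothetical selection outcomes for an automated screening tool,
# broken out by demographic group: (applicants, selected).
selections = {
    "group_a": (200, 100),  # 50% selection rate
    "group_b": (150, 45),   # 30% selection rate
}

# Selection rate per group, with the highest rate as the benchmark.
rates = {group: sel / apps for group, (apps, sel) in selections.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within four-fifths"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

A failing ratio is not conclusive proof of discrimination, and a passing one is not a safe harbor; it is simply the kind of evidence on which the job-relatedness and anti-bias-testing defenses described above would be built.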

Employers are accountable for the actions of their agents, while agents themselves are liable for assisting or enabling any discriminatory employment practices resulting from the use of automated decision systems.

Consideration of Criminal History in Employment Decisions

The proposed amendments also revise the FEHA to clarify the role of automated decision systems in the consideration of an applicant’s criminal history. In particular:

  • § 11017.1(a): The Council proposes adding that employers using automated decision systems to consider criminal history must comply with the same restrictions as human-based inquiries. Specifically, employers are prohibited from inquiring into or assessing an applicant’s criminal history until after a conditional job offer has been extended, and automated systems may not be used to do so earlier unless specific exceptions apply.
  • § 11017.1(d)(2)(C): If an employer uses an automated decision system in its initial assessment and may withdraw a job offer based on criminal history, it must provide the applicant with the report or data generated by the system, along with information on the assessment criteria used.
  • § 11017.1(d)(4): Using an automated decision system alone does not qualify as an individualized assessment of an applicant’s criminal history. Employers must conduct additional human-based processes to determine if a conviction directly relates to the job.

Other Clarifications of Law and Additional Requirements

The proposed provisions aim to prohibit the use of automated decision systems that result in disparate treatment based on various protected characteristics. Specifically, these amendments seek to prevent discrimination arising from the use of such systems concerning sex, pregnancy, childbirth or related medical conditions, marital status, religious creed, disability, and age.

Notable are the Council’s proposed amendments concerning automated decision systems used to conduct medical and psychological examinations. The Council identifies that administering personality-based questions through automated decision systems, such as inquiries into optimism, emotional stability, or extroversion, can constitute prohibited medical inquiries. Similarly, using gamified screens within these systems to evaluate physical or mental abilities, like tests requiring rapid clicking or quick reaction times, may also run afoul of FEHA. Such examinations, if not directly related to job requirements or lacking reasonable accommodations for individuals with disabilities, could be deemed unlawful.

Record Keeping Obligations

The proposed inclusion of records pertaining to automated decision systems extends beyond traditional documentation to encompass data used in the training, operation, and outputs of such systems. By explicitly stating that these records must be retained for a minimum of four years following the last use of the automated decision system, the amendment establishes a clear and enforceable standard for recordkeeping practices.
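
As a rough illustration of what that standard could mean in practice, here is a hypothetical Python sketch of a per-decision audit record. The field names and structure are invented, since the proposed text describes categories of data and a retention period rather than any particular schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Assumed retention window per the proposed four-year minimum; the
# proposal measures retention from the last use of the system, so
# anchoring it to each decision date is a simplification.
RETENTION = timedelta(days=4 * 365)

@dataclass
class ADSDecisionRecord:
    applicant_id: str
    system_name: str     # which automated decision system was used
    system_version: str  # model or ruleset version at decision time
    inputs_summary: str  # data the system considered
    output: str          # score, rank, or recommendation it produced
    decision: str        # employment action ultimately taken
    decided_at: datetime = field(default_factory=datetime.now)

    def retain_until(self) -> datetime:
        return self.decided_at + RETENTION

record = ADSDecisionRecord(
    applicant_id="A-1042",
    system_name="resume_screen",
    system_version="2024.06",
    inputs_summary="resume text; screening questionnaire",
    output="rank 37 of 214",
    decision="not advanced to interview",
)
print("retain until:", record.retain_until().date())
```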

Additionally, the amendment addresses the accountability of entities involved in the provision, sale, or use of automated decision systems on behalf of covered entities. The proposal promotes transparency and accountability of all covered entities throughout the employment process by stipulating that these parties must maintain relevant records, including but not limited to automated decision system data.

Next Steps

The Civil Rights Council urges the public to engage in the regulatory process by submitting written comments by 5:00 PM PT on July 18, 2024. Comments can be sent via email to council@calcivilrights.ca.gov. A public hearing on the proposed regulations is scheduled for 10:00 AM PT on July 18, 2024. For additional information on how to contribute to the discussion and participate in the hearing, the public is encouraged to review CRC’s Notice of Proposed Rulemaking.

Parting Thoughts

The proposed amendments extend the reach of the Fair Employment and Housing Act (FEHA) to encompass automated decision systems. By aligning these technologies with existing legal obligations, California expands the scope of protections against discrimination in employment, adapting regulations to the digital era. Employers are encouraged to submit public comments and actively monitor the Council's website for developments in this regulatory process.
