Assemblymember Rebecca Bauer-Kahan Introduces Measure to Regulate Automated Decision Systems

Published On: February 22, 2024


On Thursday (2/15/24), Assemblymember Rebecca Bauer-Kahan (D-Orinda) introduced AB 2930, which aims to regulate Automated Decision Systems (ADS) by assessing and eliminating algorithmic bias. The bill is similar to a measure she introduced last year that failed to pass.

The bill is the latest in a series of AI-related measures introduced this session, including:

  • Senate Bill 1047 by Senator Scott Wiener (D-San Francisco) would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act.
  • Senate Bill 896 by Senator Bill Dodd (D-Napa) “builds upon recent AI directives from President Joe Biden and Gov. Gavin Newsom to encourage innovation while ensuring the rights and opportunities of all Californians are protected,” according to the author.
  • Senate Bill 892 by Senator Steve Padilla (D-San Diego) would require the Department of Technology (CDT) to establish safety, privacy, and nondiscrimination standards for AI services and would prohibit related state contracts unless they comply with those standards.
  • Senate Bill 893 by Senator Steve Padilla (D-San Diego) would establish the California Artificial Intelligence Research Hub within the Government Operations Agency.

Details from the legislative summary of AB 2930 include:

This bill would require a deployer to, at or before the time an automated decision tool is used to make a consequential decision, as defined, notify any natural person that is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision and to provide that person with, among other things, a statement of the purpose of the automated decision tool. The bill would, if a consequential decision is made solely based on the output of an automated decision tool, require a deployer to, if technically feasible, accommodate a natural person’s request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation, as prescribed.
This bill would prohibit a deployer from using an automated decision tool in a manner that results in algorithmic discrimination, which the bill would define to mean the condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.

This bill would authorize certain public attorneys, including the Attorney General, to bring a civil action against a deployer or developer for a violation of the bill and would authorize a court to award, only in an action for a violation involving algorithmic discrimination, a civil penalty of $25,000 per violation. The bill would require a public attorney to, before commencing an action for injunctive relief, provide 45 days’ written notice to a deployer or developer of the alleged violations of the bill and would provide a deployer or developer a specified opportunity to cure those violations, if the deployer or developer provides the person who gave the notice an express written statement, under penalty of perjury, that the violation has been cured and that no further violations shall occur. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program.

About the Author: Staff

Contributors to this site include writers, analysts, and researchers who occasionally use AI tools to perform routine tasks, such as analyzing and transcribing documents and checking grammar and spelling.