Senator Scott Wiener (D-San Francisco) has introduced SB 294, known as the Safety in Artificial Intelligence Act, which aims to establish a comprehensive framework for the safe development of AI models in California, the legislator announced.
The core objectives of SB 294 include mandating responsible scaling practices for AI labs, robust testing for safety risks in advanced models, and the imposition of strong liability for damages resulting from foreseeable safety risks. The proposal also seeks to create CalCompute, a cloud-based compute cluster available to AI researchers and smaller developers within California’s public university system.
The bill is currently classified as an “intent bill,” meaning it cannot advance through the regular legislative process this late in the year but can spark discussion and gather feedback from stakeholders both within and outside the AI industry.
Over the past decade, AI models, particularly language models, have found applications in diverse fields, from cybersecurity to healthcare and even space exploration. However, these advancements have also raised concerns about the potential misuse of AI for malicious purposes, such as cyberattacks and the creation of dangerous technologies.
“AI models are likely to improve their current capabilities further, and to suddenly develop new and surprising capabilities very rapidly,” Wiener said in the press release. “Thus, policymakers cannot afford to wait to engage with the technology: this technology is too important for us to wait to react until risks have already been realized. Good policy can help us contain risks at low cost while harnessing this technology’s power for good.”
The Safety in AI Act seeks to foster innovation while minimizing the risks posed by AI technologies. It applies primarily to companies developing AI models at the cutting edge of the industry, with its requirements triggered by the computational power used in model development and training.
Under the framework outlined in SB 294, companies developing advanced AI models must disclose their safety risk testing plans, improvement strategies, responses to potential dangers, and plans for ensuring model safety as they scale up.
To support responsible AI development, SB 294 proposes the launch of CalCompute, an AI cloud compute cluster dedicated to safe and secure AI system research. The initiative builds on Stanford’s National Research Cloud proposal and would leverage California’s world-class public university system.
In addition to these measures, the bill calls for commercial cloud computing companies to institute Know Your Customer (KYC) policies for offerings large enough to train advanced AI models.