Bill Text: CA SB1047 | 2023-2024 | Regular Session | Amended
Bill Title: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Spectrum: Partisan Bill (Democrat 4-0)
Status: (Vetoed) 2024-09-29 - In Senate. Consideration of Governor's veto pending. [SB1047 Detail]
Amended in Assembly June 20, 2024 |
Amended in Assembly June 05, 2024 |
Amended in Senate May 16, 2024 |
Amended in Senate April 30, 2024 |
Amended in Senate April 16, 2024 |
Amended in Senate April 08, 2024 |
Amended in Senate March 20, 2024 |
Introduced by Senator Wiener (Coauthors: Senators Roth, Rubio, and Stern) | February 07, 2024 |
LEGISLATIVE COUNSEL'S DIGEST
Digest Key
Vote: MAJORITY Appropriation: NO Fiscal Committee: YES Local Program: YES
Bill Text
The people of the State of California do enact as follows:
SECTION 1.
This act shall be known, and may be cited, as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
SEC. 2.
The Legislature finds and declares all of the following:
SEC. 3.
Chapter 22.6 (commencing with Section 22602) is added to Division 8 of the Business and Professions Code, to read:
CHAPTER 22.6. Safe and Secure Innovation for Frontier Artificial Intelligence Models
22602.
As used in this chapter:
(e)“Covered guidance” means either of the following:
(1)Guidance issued by the National Institute of Standards and Technology and by the Frontier Model Division that is relevant to the management of safety risks associated with artificial intelligence models that may possess hazardous capabilities.
(2)Industry best practices, including safety practices, precautions, or testing procedures undertaken by developers of comparable models that are relevant to the management of safety risks associated with artificial intelligence models that may possess hazardous capabilities.
(f)
(g)“Critical harm” means a harm listed in paragraph (1) of subdivision (n).
(i)(1)“Derivative model” means an artificial intelligence model that is a derivative of another artificial intelligence model, including either of the following:
(A)A modified or unmodified copy of an artificial intelligence model.
(B)A combination of an artificial intelligence model with other software.
(2)“Derivative model” does not include either of the following:
(A)An entirely independently trained artificial intelligence model.
(B)An artificial intelligence model, including one combined with other software, that is fine-tuned using a quantity of computing power greater than 25 percent of the quantity of computing power, measured in integer or floating-point operations, used to train the original model.
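The carve-out in subparagraph (B) is a pure arithmetic test: a fine-tuned model stops being a "derivative model" once the fine-tuning compute exceeds 25 percent of the compute used to train the original model. A minimal illustrative sketch of that test (the function name and FLOP figures are hypothetical examples, not statutory text):

```python
def exceeds_derivative_threshold(original_training_ops: float,
                                 fine_tuning_ops: float) -> bool:
    """Return True if fine-tuning compute exceeds 25% of the original
    model's training compute (measured in integer or floating-point
    operations), i.e. the fine-tuned model would fall outside the
    "derivative model" definition under subdivision (i)(2)(B)."""
    return fine_tuning_ops > 0.25 * original_training_ops

# Example: a base model trained with 1e25 operations.
# Fine-tuning with 3e24 ops exceeds the 25% threshold (2.5e24),
# so the result is no longer a derivative model; 2e24 ops does not.
print(exceeds_derivative_threshold(1e25, 3e24))  # True
print(exceeds_derivative_threshold(1e25, 2e24))  # False
```

Note the statute measures the threshold in raw operation counts, not in wall-clock time or hardware cost, so the comparison is a single ratio.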
(j)(1)“Developer” means a person that creates, owns, or otherwise has responsibility for an artificial intelligence model.
(2)“Developer” does not include a third-party machine-learning operations platform, an artificial intelligence infrastructure platform, a computing cluster, an application developer using sourced models, or an end-user of an artificial intelligence model.
(k)“Fine tuning” means the adjustment of the model weights of an artificial intelligence model after it has finished its initial training by training the model with new data.
(l)
(m)(1)“Full shutdown” means the cessation of operation of a covered model, including all copies and derivative models, on all computers and storage devices within the custody, control, or possession of a nonderivative model developer or a person that operates a computing cluster, including any computer or storage device remotely provided by agreement.
(2)“Full shutdown” does not mean the cessation of operation of a covered model to which access was granted pursuant to a license that was not granted by the licensor on a discretionary basis and was not subject to separate negotiation between the parties.
(n)(1)“Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model that does not qualify for a limited duty exemption:
(A)The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
(B)At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.
(C)At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human with the necessary mental state and causes either of the following:
(i)Bodily harm to another human.
(ii)The theft of, or harm to, property.
(D)Other grave threats to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.
(2)“Hazardous capability” includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.
(3)On and after January 1, 2026, the dollar amounts in this subdivision shall be adjusted annually for inflation to the nearest one hundred dollars ($100) based on the change in the annual California Consumer Price Index for All Urban Consumers published by the Department of Industrial Relations for the most recent annual period ending on December 31 preceding the adjustment.
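Paragraph (3) defines a mechanical adjustment: scale each dollar amount by the change in the California CPI-U, then round to the nearest one hundred dollars. A short sketch of that computation (the function name and CPI index values are illustrative assumptions; the Department of Industrial Relations publishes the actual index):

```python
def adjusted_threshold(base_amount: int,
                       cpi_base: float,
                       cpi_current: float) -> int:
    """Adjust a statutory dollar threshold for inflation per the
    California CPI-U, rounded to the nearest $100 as subdivision
    (n)(3) requires. CPI values here are hypothetical inputs."""
    adjusted = base_amount * (cpi_current / cpi_base)
    return int(round(adjusted / 100.0)) * 100

# Example: the $500,000,000 damage threshold after 3% CPI growth.
print(adjusted_threshold(500_000_000, 300.0, 309.0))  # 515000000
```

Because rounding happens only after the full scaling, successive annual adjustments compound on the prior adjusted figure rather than drifting from repeated truncation.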
(o)“Limited duty exemption” means an exemption, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model, which applies if a developer can provide reasonable assurance that a covered model does not have a hazardous capability and will not come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.
(p)“Machine-learning operations platform” means a solution that includes a combined offering of necessary machine-learning development capabilities, including exploratory data analysis, data preparation, model training and tuning, model review and governance, model inference and serving, model deployment and monitoring, and automated model retraining.
(q)
(r)
(s)
(t)“Posttraining
(u)
(v)
(a)Before initiating training of a covered model that is not a derivative model, a developer of that covered model may determine whether the covered model qualifies for a limited duty exemption.
(1)In making the determination authorized by this subdivision, a developer shall incorporate all applicable covered guidance.
(2)A developer may determine that a covered model qualifies for a limited duty exemption if the covered model will have lower performance on all benchmarks relevant under subdivision (f) of Section 22602 and has an equal or lesser general capability than either of the following:
(A)A noncovered model that manifestly lacks hazardous capabilities.
(B)Another model that is the subject of a limited duty exemption.
(3)Upon determining that a covered model qualifies for a limited duty exemption, the developer of the covered model shall submit to the Frontier Model Division a certification under penalty of perjury that specifies the basis for that determination.
(4)A developer that makes a good faith error regarding a limited duty exemption shall be deemed to be in compliance with this subdivision if the developer reports its error to the Frontier Model Division within 30 days of completing the training of the covered model and ceases operation of the artificial intelligence model until the developer is otherwise in compliance with subdivision (b).
(b)Before initiating training of a covered model that is not a derivative model and is not the subject of a limited duty exemption, and until that covered model is the subject of a limited duty exemption, the developer of that covered model shall do all of the following:
22603.
(a) Before a developer initially trains a covered model, the developer shall do all of the following:
(3)Implement all covered guidance.
(4)
(A)Provides reasonable assurance that if a developer complies with its safety and security protocol, either of the following will apply:
(i)The developer will not produce a covered model with a hazardous capability or enable the production of a derivative model with a hazardous capability.
(ii)The safeguards enumerated in the protocol will be sufficient to prevent unreasonable risk of critical harms from the exercise of a hazardous capability in a covered model.
(i)Describes in detail how the testing procedure incorporates fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.
(ii)Describes in detail how the testing procedure incorporates the possibility of posttraining modifications.
(iii)Describes in detail how the testing procedure incorporates the requirement for reasonable margin for safety.
(iv)Describes in detail how the testing procedure addresses the possibility that a covered model can be used to make posttraining modifications or create another covered model in a manner that may generate hazardous capabilities.
(v)
(D)
(E)If applicable, describes
(F)
(G)
(H)Meets other criteria stated by the Frontier Model Division in guidance to achieve the purpose of maintaining the safety of a covered model with a hazardous capability.
(5)
(6)
(7)
(8)
(9)Refrain from initiating training of a covered model if there remains an unreasonable risk that an individual, or the covered model itself, may be able to use the hazardous capabilities of the covered model, or a derivative model based on it, to cause a critical harm.
(10)Implement other measures that are reasonably necessary, including in light of applicable guidance from the Frontier Model Division, National Institute of Standards and Technology, and standard-setting organizations, to prevent the development or exercise of hazardous capabilities or to manage the risks arising from them.
(c)(1)Upon completion of the training of a covered model that is not the subject of a limited duty exemption under subdivision (a) and is not a derivative model, the developer shall perform capability testing sufficient to determine if a limited duty exemption applies with respect to the covered model pursuant to its safety and security protocol.
(2)Upon determining if a limited duty exemption applies with respect to the covered model, a developer of the covered model shall submit to the Frontier Model Division, under penalty of perjury, a certification of compliance with the requirements of this section within 90 days and no more than 30 days after initiating the commercial, public, or widespread use of the covered model that includes both of the following:
(A)The basis for the developer’s determination whether a limited duty exemption applies.
(B)The specific methodology and results of the capability testing undertaken pursuant to this subdivision.
(d)Before initiating the commercial, public, or widespread use of a covered model that is not subject to a limited duty exemption, a developer of the nonderivative version of the covered model shall do all of the following:
(1)Implement reasonable safeguards and requirements, informed by the training and testing process, to do all of the following:
(A)Prevent an individual from being able to use the hazardous capabilities of the model, or a derivative model, to cause a critical harm.
(B)Prevent an individual from being able to use the model to create a derivative model that is used to cause a critical harm.
(C)Ensure, to the extent reasonably possible, that the covered model’s actions and any resulting critical harms can be accurately and reliably attributed to it and any user responsible for those actions.
(2)(A)Provide reasonable requirements to developers of derivative models to prevent an individual from being able to use a derivative model to cause a critical harm.
(B)If a developer provides access to the derivative model in a form that makes fine tuning possible, provide information to developers of that derivative model in a manner that will enable them to determine whether they have done a sufficient amount of fine tuning to meet the threshold described in subparagraph (B) of paragraph (2) of subdivision (i) of Section 22602.
(3)Refrain from initiating the commercial, public, or widespread use of a covered model if there remains an unreasonable risk that an individual may be able to use the hazardous capabilities of the model, or a derivative model based on it, to cause a critical harm.
(4)Implement other measures that are reasonably necessary, including in light of applicable guidance from the Frontier Model Division, National Institute of Standards and Technology, and standard-setting organizations, to prevent the development or exercise of hazardous capabilities or to manage the risks arising from them.
(e)
(f)
(C)Other information useful to accomplishing the purposes of this subdivision, as determined by the Frontier Model Division.
(g)(1)
(2)The report required by this subdivision shall be made not later than 72 hours after the developer learns that an artificial intelligence safety incident has occurred, or the developer learns facts sufficient to establish a reasonable belief that an artificial intelligence safety incident has occurred.
(h)(1)(A)Reliance on an unreasonable limited duty exemption does not relieve a developer of its obligations under this section.
(B)A determination that a covered model qualifies for a limited duty exemption that results from a good faith error reported pursuant to paragraph (4) of subdivision (a) is not an unreasonable limited duty exemption.
(2)A limited duty exemption is unreasonable if the developer does not take into account reasonably foreseeable risks of harm or weaknesses in capability testing that lead to an inaccurate determination.
(3)A risk of harm or weakness in capability testing is reasonably foreseeable, if, by the time that a developer releases a model, an applicable risk of harm or weakness in capability testing has already been identified by either of the following:
(A)Any other developer of a comparable or comparably powerful model through risk assessment, capability testing, or other means.
(B)By the National Institute of Standards and Technology, the Frontier Model Division, or any independent standard-setting organization or capability-testing organization cited by either of those entities.
22604.
(a) A person that operates a computing cluster shall implement
(1)
(2)
(3)
(b)
(c)Annually
(d)
(e)
(f)Retain a customer’s Internet Protocol addresses used for access or administration and the date and time of each access or administrative action.
22605.
(a) A developer of a covered model that provides commercial access to that covered model shall provide a transparent, uniform, publicly available price schedule for the purchase of access to that covered model at a given level of quality and quantity subject to the developer’s terms of service and shall not engage in unlawful discrimination or noncompetitive activity in determining price or access.
22606.
(a) If the Attorney General finds that a person is violating this chapter, the Attorney General may bring a civil action pursuant to this section.
22607.
(a) Pursuant to subdivision (a) of Section 1102.5 of the Labor Code, a developer shall not prevent an employee from disclosing information to the Attorney General if the employee has reasonable cause to believe that the information indicates that the developer is out of compliance with the requirements of Section 22603.
22608.
The duties and obligations imposed by this chapter are cumulative with any other duties or obligations imposed under other law and shall not be construed to relieve any party from any duties or obligations imposed under other law and do not limit any rights or remedies under existing law.
SEC. 4.
Section 11547.6 is added to the Government Code, to read:
11547.6.
(a) As used in this section:
(2)“Limited duty exemption” has the same meaning as defined in Section 22602 of the Business and Professions Code.
(b)
(c)
(10)(A)On or before July 1, 2026, issue guidance regarding both of the following:
(i)Information relevant to determining whether an artificial intelligence model is a covered model, as defined in Section 22602 of the Business and Professions Code.
(ii)Technical thresholds and benchmarks relevant to determining whether a covered model is subject to a limited duty exemption under paragraph (2) of subdivision (a) of Section 22603 of the Business and Professions Code.
(d)