Bill Text: CA SB294 | 2023-2024 | Regular Session | Amended
Bill Title: Health care coverage: independent medical review.
Spectrum: Partisan Bill (Democrat 6-0)
Status: (Engrossed) 2024-08-15 - August 15 hearing: Held in committee and under submission. [SB294 Detail]
Download: California-2023-SB294-Amended.html
Amended in Senate January 03, 2024
Amended in Senate September 13, 2023

Introduced by Senator Wiener
February 02, 2023
LEGISLATIVE COUNSEL'S DIGEST
Existing law requires the Secretary of Government Operations to develop a coordinated plan to, among other things, investigate the feasibility of, and obstacles to, developing standards and technologies for state departments to determine digital content provenance. For the purpose of informing that coordinated plan, existing law requires the secretary to evaluate, among other things, the impact of the proliferation of deepfakes, defined to mean audio or visual content that has been generated or manipulated by artificial intelligence that would falsely appear to be authentic or truthful and that features depictions of people appearing to say or do things they did not say or do without their consent, on state government, California-based businesses, and residents of the state.
This bill would express the intent of the Legislature to enact legislation related to artificial intelligence that would relate to, among other things, establishing standards and requirements for the safe development, secure deployment, and responsible scaling of frontier AI models in the California market by, among other things, establishing a framework of disclosure requirements for AI models, as specified.
Digest Key
Vote: MAJORITY   Appropriation: NO   Fiscal Committee:
Bill Text
The people of the State of California do enact as follows:
SECTION 1.
The Legislature finds and declares all of the following:
SEC. 2.
Section 1368.012 is added to the Health and Safety Code, to read:
1368.012.
(a) Commencing July 1, 2025, a health care service plan that provides coverage for the treatment of mental health or substance use disorders pursuant to Section 1374.72 shall treat a modification, delay, or denial issued in response to an authorization request for coverage of treatment for a mental health or substance use disorder for an enrollee up to 26 years of age as if the modification, delay, or denial is also a grievance submitted by the enrollee in accordance with Sections 1368, 1368.01, and 1368.015.
SEC. 3.
Section 1374.37 is added to the Health and Safety Code, to read:
1374.37.
(a) (1) Commencing July 1, 2025, a health care service plan that, itself or through its delegates, upholds its decision, in whole or in part, to modify, delay, or deny a health care service in response to a grievance submitted by an enrollee or processed pursuant to Section 1368.012, or has a grievance that is otherwise pending or unresolved upon expiration of the relevant timeframe specified in Sections 1368.01 and 1374.30, shall automatically submit within 24 hours a decision regarding a disputed health care service to the Independent Medical Review System and all information that informed the health care service plan’s conclusion if the health care service plan’s decision is to deny, modify, or delay either of the following with respect to an enrollee up to 26 years of age:
SEC. 4.
Section 10169.4 is added to the Insurance Code, to read:
10169.4.
(a) Commencing July 1, 2025, a disability insurer that provides coverage for the treatment of mental health or substance use disorders pursuant to Section 10144.5 shall treat a modification, delay, or denial issued in response to an authorization request for coverage of treatment for a mental health or substance use disorder for an insured up to 26 years of age as if the modification, delay, or denial is also a grievance submitted by the insured in accordance with this article.
SEC. 5.
Section 10169.6 is added to the Insurance Code, immediately following Section 10169.5, to read:
10169.6.
(a) (1) Commencing July 1, 2025, a disability insurer that, itself or through its delegates, upholds its decision, in whole or in part, to modify, delay, or deny a health care service in response to a grievance submitted by an insured or processed pursuant to Section 10169.4, or has a grievance that is otherwise pending or unresolved upon expiration of the relevant timeframe specified in Section 10169, shall automatically submit within 24 hours a decision regarding a disputed health care service to the Independent Medical Review System and all information that informed the disability insurer’s conclusion if the disability insurer’s decision is to deny, modify, or delay either of the following with respect to an insured up to 26 years of age:
SEC. 6.
No reimbursement is required by this act pursuant to Section 6 of Article XIII B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII B of the California Constitution.
(a) The Legislature finds and declares all of the following:
(1) The Legislature is aware of rapid advancements in artificial intelligence (AI), specifically regarding large language models and other foundation models developed at the frontier of the discipline, that have demonstrated remarkable abilities and proficiency across various domains, including passing the bar exam and other professional and academic examinations, producing text that plausibly imitates the style of particular individuals, producing highly realistic images, and writing working computer code.
(2) The Legislature is aware of AI’s potential to improve people’s lives by widening access to, and substantially improving the quality of, medical care, education, software development tools, mental health services, translation services, academic research, and other applications no one has yet anticipated.
(3) The Legislature is cautious about the potential of these frontier AI models, and their future versions and variants, to be used for automated cybercrime, large-scale social engineering and propaganda campaigns, or biological weapon design, as well as other unforeseen malicious uses. The Legislature is aware that in some cases, companies have released large language models that demonstrated early versions of these and other dangerous capabilities despite guardrails intended to prevent these behaviors. Additionally, the Legislature understands that even leading experts are unable to fully account for how frontier AI models execute complex tasks or thoroughly predict the emergent capabilities future variants of the technology are likely to display.
(4) The Legislature is concerned about the potential for dangerous or even catastrophic unintended consequences to arise from the development or deployment of future frontier AI models.
(5) The Legislature anticipates that, due to the unique potential for self-reinforcing feedback loops, the rapid pace of technical advancements in AI requires a legislative approach that is proactive in anticipating the risks that current and future variants of the technology present to public safety, in order to enable the safe harnessing of the technology’s full potential for public benefit.
(6) The Legislature acknowledges the importance of ensuring that measures intended to safeguard the interests of society at large do not also, inadvertently, concentrate power in the hands of a select few corporations, stifle broad-based innovation, or make new beneficial medical, educational, and myriad other technologies inaccessible or less affordable to those who need them.
(b) It is the intent of the Legislature to enact legislation that would relate to all of the following:
(1) Establishing standards and requirements for the safe development, secure deployment, and responsible scaling of frontier AI models in the California market. This will be achieved by establishing a framework of disclosure requirements for AI models developed using a quantity of computation above a level to be specified either via legislation or via guidance from an existing or new public agency, intended to apply exclusively to models on the cutting edge of current capabilities. This framework may include, but is not limited to, requirements that companies and AI laboratories submit concrete plans for responsible scaling of new models when increasing the scale of training computation used beyond that of the largest models currently available, detailed analyses of the risks their models could pose to public safety through malicious use or unintended consequence, the safeguards they have in place to lower these risks, analysis of whether there are levels of AI capability for which current safeguards are not sufficient, details on what tests they run, and how frequently, in order to provide early warning of those capabilities emerging, and roadmaps for how safeguards would need to improve if risks increase over time. Under those requirements, evidence about model capabilities, risks, and safeguards may be required in advance of model development, at checkpoints during model development for very large model training runs, and at regular intervals throughout the process in light of additional advancements in tooling and fine-tuning techniques. The reviewing body would have authority to conduct further audits of laboratories whose analysis is not satisfactory.
(2) Measures to ensure that frontier AI models are subject to high standards of precautions against harm to society, including both of the following:
(A) Information security requirements in order to ensure that frontier AI models are protected from advanced persistent threats, including foreign state actors.
(B) Establishing liability for those who fail to take appropriate precautions against significant harm, whether through malicious uses or unintended consequences that threaten public safety, to be specified by this legislation or accompanying agency guidance.
Measures to this effect may include data retention requirements to ensure that the role of cutting-edge AI models in damaging incidents can be investigated and understood, and that liability for harms can be shared between malicious actors and parties that foreseeably made powerful AI systems available to those actors without appropriate safeguards.
(3) Requiring that commercial cloud-computing vendors implement prudent “Know Your Customer” practices for offerings powerful enough to be used in the development of the most advanced models.
(4) Taking additional steps to ensure that the economic impacts of a transition to a world with highly capable AI systems do not intensify already severe economic inequality and painful workforce dislocation and that the economic benefits of this new technology are widely distributed.
(5) Leveraging California’s world-leading public university and community college systems to advance research into the safe and secure development of AI systems. With appropriate security protocols in place, a state-level version of a national research cloud could help ensure that California plays a globally central role in the rigorous evaluation and development of AI systems. CalCompute would be a collaboration between academics, policymakers, and industry experts from large institutions to guide the development of AI in responsible and secure directions and ensure the benefits of this technology are spread widely.