Multiple bills in California targeting artificial intelligence could have significant implications for providers and digital health companies operating both inside and outside the state.
The three bills, all of which passed the state's Assembly and Senate earlier this year, have not yet been signed by California Gov. Gavin Newsom. The bills would require providers to disclose when AI is used for patient communication, instruct organizations to test models for bias and establish a framework for how developers could be held liable for harm.
Related: Where AI in healthcare is receiving venture capital investment
A spokesperson from Newsom’s office did not comment on whether he plans to sign or veto the bills.
California’s push to regulate AI comes as the federal government attempts to thread the needle of providing guardrails that spur adoption without hampering innovation and model development. While most industry observers say federal legislation is necessary, Congress doesn't appear close to passing any bills, which has largely left the door open for the industry to govern itself.
Some states, such as California, are taking the lead on legislating the use of AI in healthcare, said René Quashie, vice president of digital health at industry trade group the Consumer Technology Association. Quashie predicted more state legislatures would follow a blueprint from California and other states, such as Colorado, that have already passed AI legislation.
Here are the three California bills and what each means for healthcare.
Healthcare-focused bill would require AI disclosures
AB-3030: Healthcare Services: Artificial Intelligence would require a licensed or certified healthcare provider to disclose the use of generative AI in patient communications. The legislation would require telehealth providers to go a step further and display these disclosures prominently throughout the interaction. For audio communications, the bill calls for a verbal disclaimer at the start and end of the interaction.
The measure passed through both chambers of the state’s legislature and multiple committees.
This bill could have an impact on clinical documentation, which is a popular space in healthcare AI. Quashie said providers using clinical documentation services that create clinical notes would likely be forced to comply with the bill. While many industry stakeholders are concerned multiple states will pass different sets of AI regulation, at least one industry trade group was satisfied with the approach California is using in this bill.
"If California was going to do something, this is a pretty darn reasonable set of criteria to put in place," said Kyle Zebley, executive director of the American Telemedicine Association's lobbying arm, ATA Action. "If you really feel, on behalf of your constituents, you have to act before the federal government, this [bill is] something that would be reasonable and sensible to do."
AI Transparency Act looms for large entities
SB-942, the California AI Transparency Act, is less focused on healthcare but would require entities operating in the state with more than one million monthly website visitors or users to disclose which of their content was generated by AI. The legislation would also require these organizations to provide users with a free AI detection tool to help them assess whether content was generated by AI.
Building the tool while complying with existing laws presents another layer of regulation for healthcare and digital health companies, said Jeremy Sherer, healthcare partner at law firm Orrick.
“The AI detection tool is going to have to disclose system data without revealing personal information," said Sherer, who advises digital health startups. "It's another layer of regulation that healthcare industry stakeholders building tools that involve generative AI need to factor in."
Silicon Valley fears proposed AI safety bill
A third AI bill, SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would give the state’s attorney general power to bring legal action against AI model developers and provide whistleblower protection for employees, contractors and subcontractors disclosing information to regulators. Beginning in 2026, developers would be required to retain a third-party auditor and perform an independent compliance audit of their model.
Venture capital firms such as Andreessen Horowitz and former Speaker of the House Nancy Pelosi (D-Calif.) encouraged Newsom to oppose the measure. Developers of large language models such as Anthropic, Meta and OpenAI have also opposed the bill, according to reports. While the legislation is not focused on healthcare, some in the industry are concerned about its potential impact.
“We are subject to so much regulation. We have invested so many millions of dollars in complying with that regulation,” said Andrew Hines, founder and chief technology officer at electronic medical record company Canvas Medical. “A bill like this, it could really diminish competition as folks are trying to develop new models [and] disincentivize development of covered models.”
Bryan Sivak, founder and managing partner at early-stage healthcare venture capital firm Evidenced, said the bill could change how investors value AI companies and alter the amount of capital needed to build them.