
AI is reshaping how we work, learn, and interact, but the law is scrambling to keep up. In 2025, states rolled out a mix of AI regulations, from broad “privacy style” frameworks to narrowly targeted rules for healthcare, therapy, and online platforms. Understanding this patchwork is essential for businesses, professionals, and consumers alike, particularly for those operating in multiple states. In this post, we break down the AI laws recently passed or updated across the United States in 2025, highlighting key provisions and why they matter.

California

The California AI Transparency Act, initially passed in 2024, was amended by AB 853 to delay implementation until August 2, 2026. In addition, the bill requires that, beginning in 2027, any online platform that distributes content to more than 2,000,000 unique monthly users during a year must be able to detect “provenance data,” which is metadata that shows when content was generated or modified. Also starting in 2027, any generative AI hosting platform that is subject to the law will be required to make publicly available the embedded disclosures required by the AI Transparency Act in its outputs.

California also passed two other AI-related bills in recent months. SB 243 expands safety requirements for “companion chatbot” platforms. It requires these platforms to clearly notify users that the chatbot is artificially generated and not human, and it imposes additional disclosure requirements and measures to prohibit sexually explicit content when the user is a minor. The bill also requires chatbot operators to maintain and publicly describe protocols that prevent chatbots from generating content related to suicide or self-harm and file annual reports with California’s Office of Suicide Prevention detailing those safety measures. Importantly, it also provides a private right of action for individuals who have suffered an injury due to a violation of SB 243.

Finally, the Transparency in Frontier Artificial Intelligence Act (TFAIA) requires large AI companies in California that build and train AI models using supercomputers (“frontier models”) to publicly share a safety and risk-management plan, disclose assessments of potentially catastrophic risks (like material and consequential misuse or loss of control), report serious incidents to the state quickly, and protect whistleblowers who raise safety concerns. It also gives the state the power to obtain civil penalties up to $1 million per violation and requires periodic reporting and oversight.

Colorado

Colorado’s Artificial Intelligence Act was signed by the governor with a request that it be “fine-tuned” before it goes into effect. The governor called a special legislative session on August 21, in part, to tweak or delay the start of the Act. On August 28, 2025, the governor signed a bill delaying the Act’s effective date until June 30, 2026, allowing more time for further substantive negotiations.

Illinois

The Illinois Wellness and Oversight for Psychological Resources Act permits licensed therapy and psychotherapy professionals to use AI tools for administrative or treatment support only after certain disclosures are provided to patients and with full human oversight. The Act prohibits any person from providing, advertising, or offering therapy or psychotherapy services, including through AI tools, unless that person is a licensed professional. The Act took effect immediately upon passage on August 1, 2025.

Oregon

Oregon passed two bills, both effective January 1, 2026. One amends the crime of unlawful dissemination of an intimate image to add digitally created or manipulated images. The other prohibits non-human entities, such as AI, from using certain nursing titles (e.g., RN).

Texas

Texas was prolific in its passage of AI-related bills. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) is effective January 1, 2026. Its stated purposes are to advance the responsible development and use of AI systems, provide transparency around and protect individuals from risk, and provide notice regarding state agencies’ use of AI. It includes limitations on the use of biometric identifiers in AI systems, requirements for health care providers to disclose the use of AI in services or treatment, and prohibitions against certain uses of AI, such as with the intent to unlawfully discriminate against a protected class or for unlawful pornography. It also establishes a sandbox program to test AI tools.

Texas also passed three more narrowly tailored bills, all of which went into effect on September 1, 2025. HB 581 requires websites that can be used to create certain AI-generated sexual material to take steps to protect minors, including through age verification. Similarly, HB 3133 requires social media platforms to provide a way for consumers to alert them to explicit deep fake material and requires prompt removal of the material. Finally, Texas added to its AI repertoire with SB 1188, which, in part, permits health care practitioners to use AI for diagnostic purposes if they review the records and disclose the use of AI to patients.

Utah

Utah enacted SB 0226, which took effect on May 7, 2025 and imposes new rules around the use of generative AI in consumer transactions and regulated services. The bill expressly provides that it is not a defense to a violation of consumer protection laws to assert that AI was responsible for the violation. The bill also contains separate disclosure requirements for when an individual is interacting with AI in the context of consumer transactions (upon request) and certain services provided by regulated occupations (affirmative prominent disclosure). If a business’s AI implementation clearly and conspicuously discloses that it is AI at the outset of an individual’s interaction with it, the law provides a safe harbor. Utah also passed SB 0332, which extended the repeal date of the Artificial Intelligence Policy Act (UAIPA), previously set to expire on May 1, 2025, to July 1, 2027. Extending the repeal date buys the legislature more time to assess how the UAIPA works in practice and consider evolving AI technologies before deciding whether to renew, revise, or let the existing Act expire altogether.

Washington

Washington passed HB 1205, under which a person is guilty of criminal impersonation if they knowingly distribute a forged digital visual or audio likeness of another person as genuine, with the intent to defraud, harass, threaten, or intimidate another person for an unlawful purpose, while knowing (or having reason to know) that the digital likeness is not genuine. There is a carveout, however, that preserves the ability to distribute visual representations or audio recordings for matters of cultural, historical, political, religious, educational, newsworthy, or public interest. Telecommunications service and network providers are exempt from criminal liability under the new law.

Next Steps to Comply with AI Regulations

AI is moving fast, and so is the law. With states taking different approaches, from broad disclosure rules to narrow professional restrictions, it is crucial to know which AI regulations apply to your industry and jurisdiction. Staying informed helps you stay compliant and use AI responsibly, allowing you to build trust with clients and customers in this rapidly changing landscape. Your legal professionals at Miller Nash are available to answer any questions you or your clients may have as states’ efforts to regulate AI continue to evolve.

This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.
