{"id":31711,"date":"2026-05-08T00:36:11","date_gmt":"2026-05-08T00:36:11","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/31711\/"},"modified":"2026-05-08T00:36:11","modified_gmt":"2026-05-08T00:36:11","slug":"recent-ai-regulatory-developments-in-the-united-states","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/31711\/","title":{"rendered":"Recent AI Regulatory Developments in the United States"},"content":{"rendered":"<p>While the EU Artificial Intelligence (AI) Act has set forth a relatively uniform framework for AI regulation in the EU, U.S. AI regulation has so far primarily consisted of a patchwork of state laws\u2014which continue to evolve at a rapid pace. Despite the Trump administration calling for Congress to pass AI legislation that would preempt overly burdensome state laws in its\u00a0<a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2026\/03\/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf\" rel=\"nofollow noopener\" target=\"_blank\">National Policy Framework for Artificial Intelligence<\/a>, many states appear to be actively moving ahead with new legislation. Here are the top areas the states are targeting, followed by some key takeaways:<\/p>\n<p>                            Companion Chatbots.\u00a0Several states have passed laws that regulate the operation of \u201ccompanion chatbots,\u201d which generally refer to AI systems with a natural language interface that provide human-like responses to user inputs and simulate human conversation and interaction. Many of these laws require operators of companion chatbots to provide a clear and conspicuous disclosure to the user that they are interacting with a chatbot, not a human.\u00a0See, e.g.,\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB243\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 243<\/a>. 
New York\u2019s companion chatbot law requires these disclosures to be made at the beginning of a user\u2019s interaction with a companion chatbot and at least every three hours during continued interaction. Washington state\u2019s companion chatbot law, which includes similar requirements, will go into effect in January 2027.\u00a0<a href=\"https:\/\/lawfilesext.leg.wa.gov\/biennium\/2025-26\/Pdf\/Bills\/Session%20Laws\/House\/2225-S.SL.pdf?q=20260406060409\" rel=\"nofollow noopener\" target=\"_blank\">WA HB 2225\u00a0<\/a><a href=\"https:\/\/lawfilesext.leg.wa.gov\/biennium\/2025-26\/Pdf\/Bills\/Session%20Laws\/House\/2225-S.SL.pdf?q=20260406060409\" rel=\"nofollow noopener\" target=\"_blank\">\u00a7 3(2)(a)-(b)<\/a>;\u00a0<a href=\"https:\/\/legislation.nysenate.gov\/pdf\/bills\/2025\/S3008C\" rel=\"nofollow noopener\" target=\"_blank\">NY SB S3008C Art. 47,\u00a0<\/a><a href=\"https:\/\/legislation.nysenate.gov\/pdf\/bills\/2025\/S3008C\" rel=\"nofollow noopener\" target=\"_blank\">\u00a7 1702<\/a>. Other states have imposed harm mitigation and reporting obligations for particularly vulnerable users, such as minors and individuals expressing self-harm or suicidal ideation. Notably, beginning on July 1, 2027, California\u2019s chatbot law will require companion chatbot operators to report annually to the state\u2019s Office of Suicide Prevention their protocols to detect, remove, and respond to instances of suicidal ideation by users.\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB243\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 243\u00a0<\/a><a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB243\" rel=\"nofollow noopener\" target=\"_blank\">\u00a7 22603(a)(2)-(3)<\/a>. Chatbot laws have also recently been introduced at the federal level. 
(See the\u00a0<a href=\"https:\/\/www.commerce.senate.gov\/wp-content\/uploads\/2026\/04\/LAN26253.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Children\u2019s Health, Advancement, Trust, Boundaries, and Oversight in Technology Act<\/a>\u00a0(CHATBOT Act) and the\u00a0<a href=\"https:\/\/www.congress.gov\/119\/bills\/s3062\/BILLS-119s3062is.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Guidelines for User Age-Verification and Responsible Dialogue Act<\/a>\u00a0(GUARD Act).)<br \/>\n                            Surveillance Pricing.\u00a0States are also beginning to regulate surveillance pricing, which generally refers to the practice of collecting consumers\u2019 personal information and charging different prices to a consumer or group of consumers for identical goods or services.<\/p>\n<p>On April 28, 2026, Maryland became the first state to\u00a0prohibit\u00a0certain differential pricing practices. Effective October 1, 2026,\u00a0<a href=\"https:\/\/mgaleg.maryland.gov\/2026RS\/bills\/hb\/hb0895e.pdf\" rel=\"nofollow noopener\" target=\"_blank\">HB 0895<\/a>\u00a0will prohibit food retailers and third-party delivery services from (1) using protected class data to offer or price goods in a way that denies consumers equal access to benefits or services; and (2) engaging in dynamic pricing, defined generally as the discriminatory practice of offering or setting a personalized price for a good or service that is specific to a consumer based on the consumer\u2019s personal data. 
The law exempts certain practices such as loyalty programs, subscription-based contracts, and pricing differences based on costs, supply, or demand.<\/p>\n<p>While the Maryland law prohibits dynamic pricing in certain sectors, New York\u2019s\u00a0<a href=\"https:\/\/www.nysenate.gov\/legislation\/laws\/GBS\/349-A\" rel=\"nofollow noopener\" target=\"_blank\">Algorithmic Pricing Disclosure Act<\/a>\u00a0requires entities to provide a \u201cclear and conspicuous disclosure\u201d alerting consumers that their personal data is being used to set a personalized price. In July 2025, the National Retail Federation (NRF) sued to block the law, arguing that it violates the First Amendment rights of businesses by compelling them to use specific language in their consumer-facing messaging. The lawsuit was dismissed with prejudice in October 2025; the court determined that the law\u2019s disclosure requirement triggers a more permissive standard of scrutiny because the pricing law \u201cmandates the disclosure of \u2018purely factual and uncontroversial\u2019 commercial speech.\u201d<a href=\"https:\/\/www.wsgr.com\/en\/insights\/recent-ai-regulatory-developments-in-the-united-states.html#1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a>\u00a0The NRF appealed the decision, but the law remains in effect pending appeal.<\/p>\n<p>                            Nonconsensual Publication of Intimate Images and AI-Generated \u201cDeepfakes.\u201d\u00a0The federal\u00a0<a href=\"https:\/\/www.congress.gov\/119\/plaws\/publ12\/PLAW-119publ12.pdf\" rel=\"nofollow noopener\" target=\"_blank\">TAKE IT DOWN Act<\/a>\u00a0(TiDA) will go into effect on May 19, 2026. 
Among other things, TiDA will make it illegal to \u201cknowingly publish\u201d or threaten to publish intimate images without a person\u2019s consent\u2014including AI-generated \u201cdeepfakes.\u201d The law will also require covered platforms (defined as public websites and online services that primarily provide a forum for user-generated content) to remove nonconsensual intimate depictions within 48 hours of receiving notice from a victim. Covered platforms must also take steps to remove duplicative content.<\/p>\n<p>Federal law and many state laws concerning obscenity and child sexual abuse material (CSAM) have historically applied to AI-generated images. However, in recent months several states have also expanded the scope of their existing criminal laws to more explicitly include the creation and distribution of certain AI-generated images. For example, California now classifies artificially generated or digitally altered CSAM as child pornography.\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202320240AB1831\" rel=\"nofollow noopener\" target=\"_blank\">CA AB 1831<\/a>\u00a0and\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202320240SB1381\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 1381<\/a>. 
Similarly,\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202320240SB926\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 926<\/a>\u00a0criminalizes the creation and distribution of computer-generated sexually explicit content.<\/p>\n<p>                            Regulation of High-Risk Use Cases.\u00a0In May 2024, the Colorado state legislature enacted the\u00a0<a href=\"https:\/\/leg.colorado.gov\/sites\/default\/files\/2024a_205_signed.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Colorado Artificial Intelligence Act<\/a>\u00a0(CAIA), which would apply to developers and deployers of \u201chigh risk AI systems,\u201d defined as AI systems that make, or are a substantial factor in making, a \u201cconsequential decision.\u201d Although the fate of the CAIA is uncertain as a result of ongoing litigation, new regulations under the\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/codes_displayText.xhtml?division=3.&amp;part=4.&amp;lawCode=CIV&amp;title=1.81.5\" rel=\"nofollow noopener\" target=\"_blank\">California Consumer Privacy Act<\/a>\u00a0(CCPA) have been approved that include a similar framework. Under the\u00a0<a href=\"https:\/\/cppa.ca.gov\/regulations\/pdf\/ccpa_updates_cyber_risk_admt_appr_text.pdf\" rel=\"nofollow noopener\" target=\"_blank\">new CCPA regulations<\/a>, businesses that use \u201cautomated decision-making technology\u201d (ADMT) to make \u201csignificant decisions\u201d about consumers will be required, among other things, to provide consumers with a pre-use notice, the ability to opt out of the use of ADMT, and access to information about the business\u2019s ADMT use. 
As we noted in a\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/insights\/cppa-approves-new-ccpa-regulations-on-ai-cybersecurity-and-risk-governance-and-advances-updated-data-broker-regulations.html\" rel=\"nofollow noopener\" target=\"_blank\">previous client alert<\/a>, the regulations would cover companies that use AI to \u201csubstantially replace\u201d human decision-making surrounding \u201csignificant decisions.\u201d \u201cSignificant decisions\u201d are those that result in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services. These regulations went into effect on January 1, 2026, but businesses must come into compliance with the new ADMT requirements by January 1, 2027.<br \/>\n                            Generative AI Transparency.\u00a0Some state AI laws have focused on transparency of training data and watermarking. For example,\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202320240AB2013\" rel=\"nofollow noopener\" target=\"_blank\">CA AB 2013<\/a>\u2014which came into effect on January 1, 2026\u2014requires developers of generative AI systems or services to post documentation on their websites regarding the data they used to train those systems or services.\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billNavClient.xhtml?bill_id=202320240SB942\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 942<\/a>\u2014which also came into effect on January 1, 2026\u2014requires covered providers to include in AI-generated images, videos, and audio content created by their generative AI systems a latent disclosure regarding the provenance of the content.<\/p>\n<p>At the federal level, the\u00a0<a href=\"https:\/\/foushee.house.gov\/imo\/media\/doc\/foushee_protecting_consumers_from_deceptive_ai_act.pdf\" rel=\"nofollow noopener\" 
target=\"_blank\">Protecting Consumers From Deceptive AI Act<\/a>\u00a0was introduced on April 23, 2026. This bill would direct the National Institute of Standards and Technology (NIST) to develop guidelines for watermarking, digital fingerprinting, and provenance metadata for AI-generated audio and visual content. It would also require NIST to support labeling standards for AI-modified content on platforms and develop frameworks for identifying AI-generated text.<\/p>\n<p>                            Frontier Model Regulation.\u00a0As discussed in a\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/insights\/2026-year-in-preview-ai-regulatory-developments-for-companies-to-watch-out-for.html\" rel=\"nofollow noopener\" target=\"_blank\">prior client alert<\/a>, California and New York have enacted sweeping state laws regulating frontier AI models. Frontier AI models are generally defined as the most advanced general-purpose models that can enable advanced reasoning, generation of images, text, and audio, and the functioning of agentic workflows. Most provisions of California\u2019s Transparency in Frontier AI Act (or\u00a0<a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB53\" rel=\"nofollow noopener\" target=\"_blank\">CA SB 53<\/a>) became effective on January 1, 2026. Both CA SB 53 and New York\u2019s Responsible AI Safety and Education (RAISE) Act require large frontier AI model developers to, among other requirements, create and publish an AI safety and security framework, report certain safety incidents, and provide transparency disclosures related to frontier AI models\u2019 risk assessment and use.<\/p>\n<p>In March 2026, New York\u00a0<a href=\"https:\/\/legislation.nysenate.gov\/pdf\/bills\/2025\/S8828\" rel=\"nofollow noopener\" target=\"_blank\">amended<\/a>\u00a0the RAISE Act to more closely mirror CA SB 53, which could potentially ease some multistate compliance challenges for large frontier AI model developers. 
First, the amendment narrows the scope of frontier developers classified as \u201clarge frontier developers\u201d that are subject to certain additional requirements. Specifically, the amendment replaces the previous compute-based definition of a \u201clarge frontier developer\u201d with a revenue-based threshold that is the same as the one under CA SB 53 (i.e., developers that exceed $500 million in annual gross revenue in the preceding calendar year). Second, the amendment significantly reduces civil penalties that the New York Attorney General may impose from $10 million to $1 million for a first violation, and from $30 million to $3 million for subsequent violations. This change more closely mirrors the penalties under CA SB 53\u2014which are capped at $1 million per violation. Finally, enforcement of the RAISE Act is now delayed until January 2027. This amendment gives frontier AI model developers more time to assess and comply with their revised obligations under the RAISE Act.<\/p>\n<p>Takeaways: How should companies develop and implement a compliance framework around these regulations?<\/p>\n<p>                            Determine which laws apply:\u00a0Although compliance with the emerging patchwork of numerous state laws may seem daunting, some apply to developers, while others apply to deployers; some apply only in specific sectors; and some apply only for specific use cases. 
In addition, many of them include exceptions.<br \/>\n                            Monitor enforcement and civil litigation trends: Regulators and civil litigants may proceed with enforcement actions and lawsuits even in the absence of new laws, leveraging existing consumer protection statutes and other theories of liability.<br \/>\n                            Develop an AI governance framework:\u00a0This may include developing an internal governance structure for approval of new products\/systems; creating diligence questions for third-party vendors; and creating guardrails around AI use. Provide workforce members with guidance on the use of off-the-shelf AI tools, such as how to disable the use of their inputs for model training.<br \/>\n                            Maintain and update incident response plans: This includes updating and adapting cybersecurity incident response plans for the AI context, in addition to developing incident response plans for other types of AI-related safety and security incidents.<br \/>\n                            Develop compliant external disclosures:\u00a0These may range from disclosures that AI is being used, to the training data transparency documentation required of model developers.<\/p>\n<p>Wilson Sonsini works with clients developing, deploying, and using AI across the regulatory spectrum, and we are actively monitoring state and federal AI laws and regulations as well as litigation and enforcement trends. 
For more information, please contact\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/people\/maneesha-mithal.html\" rel=\"nofollow noopener\" target=\"_blank\">Maneesha Mithal<\/a>,\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/people\/demian-ahn.html\" rel=\"nofollow noopener\" target=\"_blank\">Demian Ahn<\/a>,\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/people\/hale-melnick.html\" rel=\"nofollow noopener\" target=\"_blank\">Hale Melnick<\/a>,\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/people\/michelle-ullman.html\" rel=\"nofollow noopener\" target=\"_blank\">Michelle Ullman<\/a>, or any member of Wilson Sonsini\u2019s\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/services\/industries\/artificial-intelligence-and-machine-learning.html\" rel=\"nofollow noopener\" target=\"_blank\">Artificial Intelligence and Machine Learning<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.wsgr.com\/en\/services\/practice-areas\/regulatory\/data-privacy-and-cybersecurity.html\" rel=\"nofollow noopener\" target=\"_blank\">Data, Privacy, and Cybersecurity<\/a>\u00a0practices.<\/p>\n<p><a name=\"1\"\/>[1]\u00a0<a href=\"https:\/\/assets.law360news.com\/2397000\/2397728\/https-ecf-nysd-uscourts-gov-doc1-127138350514.pdf\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/assets.law360news.com\/2397000\/2397728\/https-ecf-nysd-uscourts-gov-doc1-127138350514.pdf<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"While the EU Artificial Intelligence (AI) Act has set forth a relatively uniform framework for AI regulation 
in&hellip;\n","protected":false},"author":2,"featured_media":23335,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2]}