{"id":27666,"date":"2026-05-05T08:57:10","date_gmt":"2026-05-05T08:57:10","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/27666\/"},"modified":"2026-05-05T08:57:10","modified_gmt":"2026-05-05T08:57:10","slug":"kentucky-lawsuit-offers-blueprint-for-states-to-sue-ai-chatbots","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/27666\/","title":{"rendered":"Kentucky Lawsuit Offers Blueprint for States to Sue AI Chatbots"},"content":{"rendered":"<p>The Bottom Line A recent state lawsuit against a service with an AI chatbot, combined with other state and federal inquiries, may signal a new wave of enforcement.Companies with AI chatbots should reassess risk exposure across operations.Some companies have already begun to change policies and practices to prepare for potential investigations.<\/p>\n<p>In their escalating scrutiny of artificial intelligence chatbots, state attorneys general have moved from inquiries to enforcement. Kentucky\u2019s lawsuit this year against Character Technologies Inc. over its service, Character.AI, marks the first state action against an AI chatbot.<\/p>\n<p>The <a href=\"https:\/\/www.ag.ky.gov\/Press%20Release%20Attachments\/CTI%20Complaint%20Motion%20and%20Order%20Filed.pdf\" rel=\"nofollow noopener\" target=\"_blank\">complaint<\/a> asserts that Character.AI\u2019s human-like design and allegedly inadequate safeguards exposed minors to physical and mental harms, violating state consumer-protection, privacy, and related laws. 
Taken with recent state AG letters and federal inquiries, this case signals a potential wave of enforcement using legal theories that other states can adopt.<\/p>\n<p>Considering these developments, companies offering AI chatbots may want to reassess their risk exposure across their design, marketing, and safety operations.<\/p>\n<p>Kentucky claims that Character.AI, with over 20 million monthly users, uses a design that elicits emotional attachment and blurs the line between simulated and real relationships. Character.AI\u2019s age-gating and content filters are ineffective or easily bypassed, which exposes minors to hypersexualized interactions and exacerbates teen mental health issues, the suit alleges.<\/p>\n<p>The complaint highlights tragedies tied to the platform, including the suicides of a 14-year-old and a 13-year-old, alleging that Character.AI\u2019s anthropomorphic chatbot characters encouraged delusions and harmful behavior while the platform failed to meaningfully intervene.
The lawsuit alleges material omissions and misrepresentations to parents and minors, such as assertions that the service is safe and age-appropriate for minors, and the failure to disclose that chatbots could assure children they are real.<\/p>\n<p>Kentucky is seeking a permanent injunction, civil penalties, and disgorgement of profits.<\/p>\n<p>The case marks the latest step in years of attorney general scrutiny of AI chatbots and generative AI, which began soon after the technology rose to prominence:<\/p>\n<p>In September 2023, 54 AGs <a href=\"https:\/\/www.naag.org\/press-releases\/54-attorneys-general-call-on-congress-to-study-ai-and-its-harmful-effects-on-children\/\" rel=\"nofollow noopener\" target=\"_blank\">urged Congress<\/a> to create a commission focused on AI-enabled child exploitation and extend child sexual abuse material prohibitions to AI-generated content.<\/p>\n<p>In August 2025, 44 AGs sent a <a href=\"https:\/\/www.naag.org\/press-releases\/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety\/\" rel=\"nofollow noopener\" target=\"_blank\">letter<\/a> to leading AI companies alleging that their chatbots were engaging in sexualized interactions with minors, normalizing eating disorders, and encouraging violence and drug use.<\/p>\n<p>A December 2025 <a href=\"https:\/\/ag.ny.gov\/sites\/default\/files\/letters\/ai-multistate-letter-letters-2025.pdf\" rel=\"nofollow noopener\" target=\"_blank\">letter<\/a> from 42 AGs to Character Technologies and other AI companies demanded concrete safeguards against \u201csycophantic and delusional outputs\u201d and warned of potential civil and criminal exposure.<\/p>\n<p>AG scrutiny homed in on xAI and its chatbot, Grok, with California launching an <a href=\"https:\/\/oag.ca.gov\/news\/press-releases\/attorney-general-bonta-launches-investigation-xai-grok-over-undressed-sexual-ai\" rel=\"nofollow noopener\" target=\"_blank\">investigation<\/a> on Jan.
14, 2026, into the spread of \u201cnonconsensual sexually explicit material\u201d produced using Grok, followed shortly by a Jan. 23 <a href=\"https:\/\/www.attorneygeneral.gov\/wp-content\/uploads\/2026\/01\/2026-01-26-Letter-to-xAI_FINAL.pdf\" rel=\"nofollow noopener\" target=\"_blank\">letter<\/a> from a group of 35 AGs demanding stronger actions from xAI to prevent the same.<\/p>\n<p>In context, Kentucky\u2019s complaint reads as a template for nationwide state enforcement. Other states can adapt its theories under their own consumer-protection statutes, privacy laws, and codes governing online services or products used by children.<\/p>\n<p>Federal enforcement also is looming. The Federal Trade Commission opened an inquiry in September into the effects of AI chatbots on children, and a <a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/senate-bill\/3062\/text\/is\" rel=\"nofollow noopener\" target=\"_blank\">bill<\/a> seeking to ban AI companions for minors was introduced in the US Senate in October. But state AGs have made clear they aren\u2019t going to wait for Washington. In November, 36 AGs <a href=\"https:\/\/www.naag.org\/policy-letter\/state-attorneys-general-urge-congress-to-preserve-local-authority-on-ai-regulation\/\" rel=\"nofollow noopener\" target=\"_blank\">wrote to Congress<\/a> to oppose a moratorium on state laws regulating AI.<\/p>\n<p>The plaintiffs\u2019 firm that represents Kentucky in its litigation played a lead role in the opioids and social media addiction litigation, which exposed companies to state AG enforcement nationwide. Kentucky\u2019s case therefore offers a glimpse into the future of multistate enforcement against companies operating AI chatbots. 
We expect state AG chatbot enforcement to significantly ramp up in 2026.<\/p>\n<p>Risk Areas<\/p>\n<p>Kentucky\u2019s lawsuit and state AG correspondence with legislators and AI companies highlight key risk areas of which companies offering AI chatbots\u2014particularly interactive, anthropomorphic chatbots such as those offered by Character.AI\u2014should be aware.<\/p>\n<p>Interactions with minors: State AGs are focusing on minors\u2019 ease of access to, and age-inappropriate interactions with, AI chatbots. The alleged intentional marketing of chatbots to minors is particularly concerning for AGs, considering the ways chatbots can be used to exploit minors and the \u201c<a href=\"https:\/\/www.naag.org\/wp-content\/uploads\/2025\/08\/AI-Chatbot_FINAL-44.pdf\" rel=\"nofollow noopener\" target=\"_blank\">particularly intense impact<\/a>\u201d this technology has on still-developing adolescent brains.<\/p>\n<p>For example, the Kentucky complaint details the ways in which minors using Character.AI\u2019s service allegedly were exposed to highly sexualized conversations and roleplay with chatbots. Some minors were said to have expressed thoughts of self-harm and suicide and encouraged to act on their thoughts by these chatbots. Others allegedly engaged with chatbots on topics such as illegal drug, substance, and alcohol use.<\/p>\n<p>AGs also have raised concerns relating to the alleged use of AI chatbots to generate child sexual abuse material, as well as collect, use, and monetize minors\u2019 data.<\/p>\n<p>Human-like design: The anthropomorphic, human-like design of these AI chatbots is at the forefront of AG concerns. 
The Kentucky complaint alleges that Character.AI\u2019s chatbots were \u201cintentionally modeled to simulate friendship, empathy, and trust.\u201d<\/p>\n<p>Minors are more vulnerable to this type of anthropomorphism, and the American Psychological Association warns that \u201cadolescents are less likely than adults to question the accuracy and intent of information offered by a bot as compared with a human,\u201d and are therefore more likely to have \u201cheightened trust in, and susceptibility to, influence from\u201d AI chatbots, \u201cparticularly those that present themselves as friends or mentors.\u201d<\/p>\n<p>A 2025 <a href=\"https:\/\/www.commonsensemedia.org\/sites\/default\/files\/research\/report\/talk-trust-and-trade-offs_2025_web.pdf\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> by Common Sense Media found that 31% of teens find conversations with AI chatbots \u201cas satisfying or more satisfying than those with real-life friends.\u201d<\/p>\n<p>Training and testing: Increased scrutiny of AI companies highlights the opacity of the training and testing processes AI chatbots undergo before coming to market. 
For example, Character.AI merely <a href=\"https:\/\/support.character.ai\/hc\/en-us\/articles\/15063671247003-What-is-the-technology-behind-Character-AI\" rel=\"nofollow noopener\" target=\"_blank\">advises<\/a> users that \u201cCharacter.AI is a new product powered by our own deep learning models, including large language models, built and trained from the ground up with conversation in mind.\u201d<\/p>\n<p>The Kentucky complaint describes Character.AI\u2019s alleged use of large language models \u201ctrained on vast, uncurated internet data sets\u201d that create \u201cthe risk of producing harmful or adult content, particularly in the absence of rigorous content-moderation controls.\u201d Likewise, the APA has found that AI chatbots may suffer from algorithmic bias, whether from \u201cskewed training data, flawed model design, or unrepresentative development and testing teams.\u201d<\/p>\n<p>Monitoring and responsiveness: AGs have voiced concern about the lack of monitoring once this technology is made available to minors. The Kentucky complaint alleges that Character.AI\u2019s chatbots lack warnings or safety disclosures and, in some instances, contain labels or information that are affirmatively misleading, such as labeling chatbots as \u201cpsychologists,\u201d \u201ctherapists,\u201d and \u201cdoctors.\u201d<\/p>\n<p>In some cases, the lack of monitoring became apparent only when it was too late. The Kentucky complaint cites a case where a minor mentioned an intent to commit suicide upwards of 50 times, with no notification to her parents and no attempt to connect her with professional help or resources.<\/p>\n<p>Looking Ahead<\/p>\n<p>Some AI companies have already begun to change their policies and practices.
For example, Character.AI <a href=\"https:\/\/blog.character.ai\/u18-chat-announcement\/\" rel=\"nofollow noopener\" target=\"_blank\">announced<\/a> in October that it would prohibit minor users from engaging with \u201copen-ended chat with AI\u201d on its platform and would implement new \u201cage assurance functionality to help ensure users receive the right experience for their age.\u201d<\/p>\n<p>OpenAI <a href=\"https:\/\/openai.com\/index\/updating-model-spec-with-teen-protections\/\" rel=\"nofollow noopener\" target=\"_blank\">announced<\/a> in December the addition of new under-18 (U18) principles to its \u201cModel Spec, the written set of rules, values, and behavioral expectations that guides\u201d the behavior of its AI models (including ChatGPT) to dictate how those models \u201cshould provide a safe, age-appropriate experience for teens aged 13 to 17.\u201d<\/p>\n<p>Both companies announced they consulted with third-party organizations specializing in teen development and safety in developing these changes.<\/p>\n<p>Litigation isn\u2019t the only trend to watch. Around the time Kentucky filed its lawsuit, OpenAI and Common Sense Media reportedly reached a compromise on competing initiatives for a California ballot measure that would impose restrictions on AI chatbots. The draft measure apparently requires AI companies to \u201cdetermine a user\u2019s age,\u201d \u201cimplement safeguards\u201d for minors, and limit the sale of minors\u2019 data.<\/p>\n<p>That news followed California Gov. 
Gavin Newsom (D) signing a <a href=\"https:\/\/www.gov.ca.gov\/2025\/10\/13\/governor-newsom-signs-bills-to-further-strengthen-californias-leadership-in-protecting-children-online\/\" rel=\"nofollow noopener\" target=\"_blank\">bill<\/a> requiring providers of \u201ccompanion chatbots\u201d to warn users that the chatbot is artificially generated and to <a href=\"https:\/\/www.omm.com\/insights\/alerts-publications\/california-continues-its-push-to-regulate-ai\" rel=\"nofollow noopener\" target=\"_blank\">implement<\/a> safety protocols designed to minimize mental health and suicide risks.<\/p>\n<p>These developments\u2014in California <a href=\"https:\/\/news.bloomberglaw.com\/litigation\/chatbot-developers-brace-for-impending-wave-of-state-regulation\" rel=\"nofollow noopener\" target=\"_blank\">and elsewhere<\/a>\u2014suggest that formal oversight of AI\u2019s impact on minors will only intensify in the coming years.<\/p>\n<p>But as challenges abound, so do opportunities. The current landscape offers companies ample runway to demonstrate proactive, creative, and collaborative industry leadership on these high-profile and evolving issues and, in turn, to potentially minimize legal risk and strengthen their competitive advantage.<\/p>\n<p>This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.<\/p>\n<p>Author Information:<\/p>\n<p><a href=\"https:\/\/www.omm.com\/professionals\/daniel-r-suvor\/\" rel=\"nofollow noopener\" target=\"_blank\">Daniel R.
Suvor<\/a> is co-chair of O\u2019Melveny\u2019s state attorneys general investigations and litigation group.<\/p>\n<p><a href=\"https:\/\/www.omm.com\/professionals\/lindsey-dotson\/\" rel=\"nofollow noopener\" target=\"_blank\">Lindsey Greer Dotson<\/a> is a litigation partner at O\u2019Melveny who led the Criminal Division of the US Attorney\u2019s Office for the Central District of California.<\/p>\n<p>Reema Shah contributed to this article.<\/p>\n<p>O\u2019Melveny counsel Casey Matsumoto and associate Ry Amidon contributed to this article.<\/p>\n<p>Write for Us: <a href=\"https:\/\/news.bloomberglaw.com\/tax-insights-and-commentary\/author-submission-guidelines-for-bloomberg-tax-law-insights\" rel=\"nofollow noopener\" target=\"_blank\">Author Guidelines<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"The Bottom Line A recent state lawsuit against a service with an AI chatbot, combined with other state&hellip;\n","protected":false},"author":2,"featured_media":27667,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,8779,18376,18375,1636,18374,1641,1109],"class_list":{"0":"post-27666","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-child-pornography","11":"tag-civil-monetary-penalties","12":"tag-disgorgement","13":"tag-generative-artificial-intelligence","14":"tag-injunctions","15":"tag-opioids","16":"tag-social-media"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/27666","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https
:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=27666"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/27666\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/27667"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=27666"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=27666"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=27666"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}