<h1>Why Section 230, social media’s favorite American liability shield, may not protect Big Tech in the AI age</h1>
<p>Published October 4, 2025</p>
<p>Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.</p>
<p>Earlier this year, internal <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">documents obtained by Reuters</a> revealed that Meta’s AI chatbot could, under official company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. A spokesperson told Fortune: “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”</p>
<p>Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and startup Character.AI are both currently <a href="https://fortune.com/2025/09/14/ai-chatbots-teens-children-mental-health-suicide-openai-chatgpt-regulation-lawsuit/">defending themselves against lawsuits alleging</a> that their chatbots encouraged minors to take their own lives; both companies deny the claims and <a href="https://fortune.com/2025/09/14/ai-chatbots-teens-children-mental-health-suicide-openai-chatgpt-regulation-lawsuit/">previously told Fortune</a> they had introduced more <a href="https://openai.com/index/introducing-parental-controls/">parental controls</a> in response.</p>
<p>For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that made the internet.” The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts—similar to telephone companies—rather than publishers. Courts have long reinforced this protection. AOL, for example, relied on the defense to dodge liability for defamatory posts in a 1997 court case, and Facebook used it to avoid a terrorism-related lawsuit in 2020.</p>
<p>But while Section 230 has historically protected tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.</p>
<p>“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way—pulling quotes, snippets, or sources in the manner of a search engine or feed,” Chinmayi Sharma, associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, organic outputs personalized to a user’s prompt.</p>
<p>“That looks far less like neutral intermediation and far more like authored speech,” she said.</p>
<p>At the heart of the debate: Are AI algorithms shaping content?</p>
<p>Section 230 protection is weaker when platforms actively shape content rather than just hosting it. While traditional failures to moderate third-party posts are usually protected, design choices, like building chatbots that produce harmful content, could expose companies to liability. Courts haven’t addressed this yet; there are no rulings to date on whether AI-generated content is covered by Section 230. But legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully shielded under the act.</p>
<p>Some cases around the safety of minors are already being fought out in court. Three lawsuits have separately accused OpenAI and <a href="http://Character.AI">Character.AI</a> of building products that harm minors and of failing to protect vulnerable users.</p>
<p>Pete Furlong, lead policy researcher at the Center for Humane Technology, who worked on the case against Character.AI, said the company had not claimed a Section 230 defense in the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.</p>
<p>“Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that that’s really important because it’s kind of a recognition by some of these companies that that’s probably not a valid defense in the case of AI chatbots.”</p>
<p>While he noted that this issue has not been settled definitively in a court of law, he said that the protections from Section 230 “almost certainly do not extend to AI-generated content.”</p>
<p><strong>Lawmakers are taking preemptive steps</strong></p>
<p>Amid increasing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from responsibility.</p>
<p>In 2023, Sen. Josh Hawley’s No Section 230 Immunity for AI Act sought to amend Section 230 of the Communications Decency Act to exclude generative artificial intelligence from its liability protections. The bill, which was later blocked in the Senate owing to an objection from Sen. Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content generated by their systems. <a href="https://thehill.com/policy/technology/5488200-repeal-section-230-law-tech/">Hawley has continued</a> to advocate for the full repeal of Section 230.</p>
<p>“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230 protections as far as possible to provide protection to platforms,” Collin R. Walke, an Oklahoma-based data-privacy lawyer, told Fortune. “Therefore, in anticipation of that, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not responsible for the information output based upon the user input.”</p>
<p>Courts have previously ruled that algorithms that simply organize or match user content without altering it are “content neutral,” and that platforms aren’t treated as the creators of that content. By this reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs might also avoid liability for what users see.</p>
<p>“From a pure textual standpoint, AI platforms should not receive Section 230 protection because the content is generated by the platform itself. Yes, code actually determines what information gets communicated back to the user, but it’s still the platform’s code and product—not a third party’s,” Walke said.</p>