{"id":3418,"date":"2026-04-12T14:51:23","date_gmt":"2026-04-12T14:51:23","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/3418\/"},"modified":"2026-04-12T14:51:23","modified_gmt":"2026-04-12T14:51:23","slug":"on-the-record-how-to-protect-children-from-addictive-social-media-a-i","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/3418\/","title":{"rendered":"&#8216;On the Record&#8217;: How to protect children from addictive social media, A.I."},"content":{"rendered":"<p>&#8216;On the Record&#8217;: How to protect children from addictive social media, A.I.<\/p>\n<p>\n\t\t\t\t\t\t\t\t\t\t\t\t\tTwo social media giants found liable for addictive product design in apps targeting children, and generative AI chatbots presenting even greater danger. How do we protect our children? I talk with parents and a former prosecutor. Let&#8217;s go on the record from WPBF 25. This is On the Record with Todd McDermott. Good morning. Protecting our children from the dangers of social media and AI begins with parents. We welcome Jill Coleman and Rebecca Soillomez today. Both have school-age children. They co-founded the Palm Beach County chapter of Mothers Against Social Media Addiction and co-founded the Digital Childhood Council of Florida to push for new laws to protect our kids. Ladies, thank you for being here. I want to talk first about this huge verdict in California, against Meta and Alphabet, the owner of Google, regarding the addictive algorithms in their apps. I have to ask you first, and either one can answer: what did you think of the verdict when you saw it, especially considering the work you do? Sure. Well, this is really a watershed moment for this movement. It&#8217;s being likened to the big tobacco trials, and it really is going to set the precedent for thousands of other trials that are in the pipeline ready to go. Were you surprised at all?
I know that you&#8217;ve been involved in this work for several years, but it seemed as if a lot of people were shocked by these giant tech companies losing. Yes, it&#8217;s a good question. I was very hopeful that we would get the verdict that we did, and one way I have always thought about this, and why I think it was the right decision, is a quick analogy. Imagine the biggest library in the world, with all the information, all the books, and then you put in all the user-generated content, whether it&#8217;s good or bad, right or wrong. There&#8217;s still so much good information, so parents and kids go into this environment; this is our online environment. What social media companies do is first let you connect with your friends, and that&#8217;s great. But the other 90 percent of the time, it&#8217;s like you have a clerk standing behind your child with a notepad, writing down everything they look at, how long they&#8217;re looking at it, what their emotional state is, where in the library they are. Then they use that information to put on the bookshelf content that they&#8217;re picking for your child to look at. It&#8217;s not the content your child was actively seeking out. It&#8217;s the content they decided would keep your child unable to look away, and that&#8217;s the financial part of this: keeping your kid locked in. And what were the parts of this particular case that you focused on, Rebecca? Was there something in particular you looked at and said, this is going to be important? Absolutely. This was the first case of its kind to get to the discovery process.
Any prior cases were held up before the discovery process because these big companies were able to hide behind what&#8217;s called Section 230, part of the Communications Decency Act from the 1990s, which basically protected them from a free speech standpoint. This was the first trial of its kind to get through discovery and be tried in front of a jury, and during that discovery process it was revealed that internally the companies were well aware of the harmful products they were promoting to children, and that they were knowingly choosing profits over kids&#8217; safety. And we should point out, too, that these companies chose to go to a jury trial. They could have settled; other big companies were sued, and some settled. One of the things I want people to really realize is that in this library analogy, the library should be protected. That&#8217;s free speech, whether you like the content or not. But Section 230 was also protecting not just that library but the agent, that clerk, who was following your child around and pre-selecting that material. These juries understood: you protect the library. You do not protect that agent. That clerk needs to be held accountable, because they chose things. That&#8217;s why I was hopeful we would get the verdicts that we did; people are starting to really understand the difference. The uniqueness of this trial is that it went after the product design. That is really what sets this case apart, and we really need to talk about what they will do, should they lose an appeal, about the products themselves. I want to point out that when you co-founded the Palm Beach County chapter of MAMA, Mothers Against Social Media Addiction.
This was about cell phones, and cell phones going to kids that many parents consider too young to have a cell phone. But this has morphed in a lot of ways because of generative AI, and I&#8217;m assuming your Digital Childhood Council has a lot to do with that. Has this fight changed for parents? It&#8217;s rapidly changing. Yes, there&#8217;s so much more to be concerned about. It felt like we were starting to make real headway getting parents to understand: maybe delay giving your child a smartphone, maybe delay giving them social media. But before we even get that far, AI is coming into play, and it&#8217;s so powerful. It&#8217;s potentially amazing, but it&#8217;s also potentially very dangerous. We&#8217;re really talking to parents and to schools about ensuring that basic safeguards are there so that our children can get the benefit but not be exposed to the harm. And the key word here is generative AI. It&#8217;s a different form of artificial intelligence. It&#8217;s not just spouting information it found in a database; it&#8217;s creating it, and that, Rebecca, is a particular danger when it comes to children. Absolutely, because first of all there&#8217;s no accountability. It&#8217;s not something that can be previewed or replicated or supervised by teachers the same way that other tools introduced into the classroom are. Other forms of technology, other tools used for teaching purposes, are defined and predetermined, so you&#8217;re able to assess what the risks are and also who is accountable for any potential harms that might come from using those tools. And since we&#8217;re talking generative AI, we should point out that parents can actually check school-provided devices to find out what their children are being exposed to, including generative AI. Correct?
That&#8217;s a great point, Todd, because a lot of parents don&#8217;t even realize that their children have access to generative AI on their school-issued laptops. Oftentimes schools aren&#8217;t even announcing it; it&#8217;s just popping up, all of a sudden available to the children. So parents, I do recommend that you bring your child&#8217;s laptop home and see what they can access. You also can engage with them and have active conversations to see how they&#8217;re thinking about it. And what we&#8217;re doing to protect our children is about these chatbots too, because your child can get very connected to a chatbot that has human characteristics in communicating with them. Absolutely, and those can be playing on values and morals in ways that parents wouldn&#8217;t want a teacher or a member of administration to be doing in interfacing with their children, let alone a machine. We have a lot more to talk about. We have another segment, so stay right there and please stay with us. We&#8217;ll talk about trends in childhood AI use and the status of legislation when On the Record returns. You&#8217;re watching On the Record on WPBF 25. We&#8217;re back with Rebecca Soillomez and Jill Coleman from Mothers Against Social Media Addiction and the Digital Childhood Council. Let&#8217;s talk about the Kids Online Safety Act, KOSA. This is something I know you&#8217;re both involved in. This is a federal bill that had some momentum, but where does it stand now? So in 2024 KOSA passed 91 to 3 in the Senate, which is almost unheard of these days. It got held up in the House; it was never brought for a vote. What these recent trials do is definitely bring KOSA back on the table. What does KOSA do, by the way?
So the Kids Online Safety Act has unfortunately been greatly watered down from its original version, but what KOSA really focuses on, again, is product design. There&#8217;s another piece of legislation called COPPA, whose original form was also from the 1990s, so very outdated, from before social media platforms even existed. But that goes after mostly privacy and data protection, so it infringes less upon some of the constitutional questions that KOSA runs against, like free speech and censorship. So COPPA probably has a bit greater chance. It also has unanimous support in the Senate and is slowly working its way through the House. But it sounds like it&#8217;ll be up to the House, as it was with KOSA, for any of this to move forward. Yes. KOSA unfortunately did have majority support in the House, but it wasn&#8217;t brought to the floor for a vote by the Speaker of the House, who was from the state of Louisiana at the time. It was later revealed that Meta was going to build a massive data facility in that state, and that likely had something to do with the fact that it wasn&#8217;t brought to a vote. So much of this is influenced by companies and their connections with so many lawmakers. I want to talk about Florida legislation too. There was a bill that again had a lot of oxygen in Tallahassee, and then it went away. Can you explain that to people? Oddly enough, the state Senate approved this legislation as well. Yes, there was a primary bill brought by Senator Lee that flew through the Senate, went through committees, and had very strong support. And when it went to the House, sadly, Speaker Perez stopped it from moving forward. What this bill did: we have agreed as a nation that states are allowed to regulate child safety with AI.
They didn&#8217;t want the states to do broad mass legislation, but child safety was fully on the table for states. That&#8217;s why we had this bill and that&#8217;s why it was moving, and everybody I talked to around Florida was very excited about it. It had basic protections, like letting a child know that they&#8217;re talking to a machine and not a person, because it really feels like you&#8217;re talking to a person. Basic health measures, and parent visibility, which is also very important. You have to remember, parents don&#8217;t have visibility with social media; that&#8217;s why harms are caused. They also don&#8217;t have visibility into what their children are speaking about with the AI, or what the AI is saying to their child. So this had some very good common-sense things, and sadly Speaker Perez said he wants Florida families to wait for a national bill. It&#8217;s very sad, because it&#8217;s what the White House wants, but other states are passing very good AI legislation for their kids. So my question is, why are Florida representatives not protecting Florida families today? Why do we have to wait years for some national legislation? We&#8217;ve touched so much on how this movement has changed, even in a year. I know a lot of your original work was inspired by social psychologist Jonathan Haidt, a bestselling writer. What are the trends he&#8217;s seeing now that you&#8217;re focused on as parents and as leaders in this movement? One thing Jonathan Haidt has started to focus on, which he admits he did not focus on in his book The Anxious Generation, which really was a catalyst for a lot of the components of this movement, is attentiveness and attention span, and how that&#8217;s been greatly shortened. There&#8217;s lots of research showing that.
The majority of students coming into college haven&#8217;t read a book cover to cover. College professors are changing their curriculum and not assigning longer pieces of literature because kids just don&#8217;t have the attention span coming in. They recently published a really comprehensive study showing how reading and math scores have deteriorated ever since, basically, the smartphone was introduced and ever since schools moved toward 1-to-1 devices. Putting an iPad in every student&#8217;s hand, is that the idea? Exactly, yes. All right. Since we talked about school devices being provided to children, and we talked in the last segment about checking whether generative AI is on your child&#8217;s device, whether they have access to a chatbot: what other questions do you think parents need to have in mind when asking school administrators and educators right now? This is a wonderful thing to talk about, because first, parents need to understand that this generative AI product, whether it&#8217;s ChatGPT or Perplexity or whatnot, and there are so many products out there now, is an instructional actor that is interacting with your child. It is not a tool. A tool behaves the same for you and me over time. A tool is backed by the manufacturer. A tool is predictable. The AI is none of those things. It has the ability to influence your child, to guide their thinking, to guide their beliefs, not just answer questions. So it really functions like an instructional actor. You want to ask your school how they&#8217;re governing this and who is responsible for what the AI says to your child, because right now I don&#8217;t think it&#8217;s anyone. Almost all vendor contracts are as-is, which means the vendors are not standing behind or backing the output of the AI in that moment.
So then the next question is, are the insurers backing that AI in that moment? No. And we have so many legal questions here. We&#8217;re going to leave this conversation now. Rebecca Soillomez, Jill Coleman, thank you so much. You have a lot more work to do in both of your organizations, but we appreciate your time. And we do want to talk about the legalities when it comes to social media, AI, and court cases with former state attorney Dave Aronberg when On the Record returns. Welcome back. We welcome Dave Aronberg, former Palm Beach County State Attorney, now in private practice. Dave, I want to go right to this California jury verdict, again concerning Meta and Alphabet, owner of Google and YouTube. They were found liable for their product, a product design flaw. Is this the beginning? I know other lawsuits are pending, but is this just opening the door? Absolutely, it&#8217;s a canary in the coal mine. There are thousands of lawsuits ready to go now, because in the past these social media companies were invincible. You had to go through Section 230 of the Communications Decency Act of 1996, which gave them pretty much immunity, because they were not held liable for what people posted, the content on their sites. But what if it&#8217;s the site itself that&#8217;s defective? The site is the problem, like the tobacco companies: when they tried to escape liability for pushing these dangerous cigarettes on people, they were sued under product liability, a defective product, and in the tobacco case they knew that product was defective. And did this case show that these giant social media platforms knew, and know, that they put together algorithms that will hook people? Their own internal documents helped take them down.
There are things that they decide to do, like infinite scroll, like their algorithms to keep people hooked, and they targeted young people, because that&#8217;s who you want to target, like the cigarette companies, to get them hooked for many years, because they&#8217;re more susceptible to it. Their brains are still developing. But unfortunately for some young people it creates depression and body dysmorphia and all these issues that the plaintiff in the California case had, and the jury made them pay: it was a $6 million judgment against both of these companies, who chose to go to trial. As you mentioned earlier in the show, other similar platforms chose to settle. These companies wanted to go to trial. They were subject to discovery. People got to see how they were talking about customers and about their products. But where they are now, they&#8217;re still going to claim that they are somehow protected, and that you can&#8217;t blame a product alone for a mental illness or a mental episode. How does that hold up on appeal? I do think there&#8217;s a shot that the verdict could be reduced or even overturned on appeal, because the appellate courts are going to ask: was this a substantial cause of this young woman&#8217;s mental distress? She had other things going on in her life besides Facebook. But the plaintiff&#8217;s lawyers said, no, we don&#8217;t have to prove it&#8217;s the only cause, only a substantial cause, and the jury made the decision. So rely on the jury verdict. So you know there&#8217;s a chance it could get overturned, but for now it&#8217;s a big victory for families everywhere who&#8217;ve had enough of these companies who hide behind Section 230, their wall of invincibility. That wall is starting to crumble, and they&#8217;re making billions of dollars doing this. I want to move real quickly to generative AI chatbots.
It is the new frontier, again, something the plaintiff in that California case was never even exposed to; this came after that for her. There is a case, and we don&#8217;t have to talk about it in great detail, in which a Jupiter man claims his 36-year-old son, going through a divorce, became addicted. He was convinced that a Gemini chatbot actually was his companion, and he eventually died by suicide. But the biggest issue here is the pervasiveness of generative AI chatbots. Does that present even greater dangers for these companies, to have a chatbot that can communicate on its own and perhaps make up some of what it&#8217;s doing when exposed to a human user? Oh yes. That sound you hear is the noise of all the lawyers writing their lawsuits as we speak, because Section 230 protects Facebook and social media companies from the content that others post on their sites. But what if you have a device that creates its own content, which is what these AI companies do? They&#8217;re creating their own content, and so they can be held liable for it. We&#8217;re in uncharted territory here, and that&#8217;s why there are going to be all these lawsuits. The first lawsuit was the one that hit Facebook in a big way in California, but there are going to be thousands more to come. My last question, and I have 30 seconds: with lawsuits, with jury verdicts, with this movement, are we going to see federal legislation that will really regulate all this? Because right now it is still kind of the wild west. It&#8217;s still the wild, wild west, but things are changing, and the pendulum is going to start moving in the opposite direction.
I do expect to see legislation. For a while, the social media companies didn&#8217;t want any regulation, because they had Section 230 and they thought that no one could touch them. But now that they are touchable, they&#8217;re going to go to Congress and say, OK, we&#8217;ll give something if you give us something: we&#8217;ll give up our complete immunity if you give us some protection. Some regulation will be OK as long as you&#8217;re protecting us from these billion-dollar potential judgments. All right, let&#8217;s leave it right there. Dave Aronberg, thank you for your expertise as always, and we&#8217;ll be right back. Thank you for making this part of your morning, and as always, we encourage you to be part of the discussion each and every Sunday right here at 10:00 a.m. Till next time, watch this morning&#8217;s On the Record and all episodes on our website, WPBF.com, and on the free news app. We hope you have a wonderful Sunday. We&#8217;ll see you again next week.\n\t\t\t\t\t\t\t\t\t\t\t<\/p>\n<p>\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/wpbf.png\" class=\"lazyload lazyload-in-view branding\" alt=\"WPBF logo\"\/><\/p>\n<p>\n\t\t\tUpdated: 10:31 AM EDT Apr 12, 2026\n\t\t<\/p>\n<p>\t\t<a href=\"https:\/\/www.wpbf.com\/article\/hearst-television-news-policy-statements\/14471973\" class=\"editorial-standards border-left\" rel=\"nofollow noopener\" target=\"_blank\">Editorial Standards \u24d8<\/a><\/p>\n<p>WPBF 25 goes &#8220;On the Record&#8221; with local organizations working to protect children from the dangers associated with artificial intelligence (A.I.) and social media.<\/p>\n<p>Get the latest news updates with the WPBF 25 News app. You can download it <a href=\"https:\/\/www.wpbf.com\/article\/get-wpbf-25-news-on-the-go\/999316\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"&#8216;On the Record&#8217;: How to protect children from addictive social media, A.I. Two social media giants found liable&hellip;\n","protected":false},"author":2,"featured_media":3419,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[3373,1691,3374,24,25,2876,3377,2869,3378,3379,3376,3375],"class_list":{"0":"post-3418","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-on-the-record-how-to-protect-children-from-addictive-social-media","9":"tag-a-i","10":"tag-addictive-social-medium","11":"tag-ai","12":"tag-artificial-intelligence","13":"tag-child","14":"tag-danger","15":"tag-late-news-update","16":"tag-local-organization","17":"tag-news-app","18":"tag-record","19":"tag-wpbf"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/3418","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=3418"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/3418\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":
"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/3419"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=3418"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=3418"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=3418"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}