{"id":9506,"date":"2026-04-21T05:00:34","date_gmt":"2026-04-21T05:00:34","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/9506\/"},"modified":"2026-04-21T05:00:34","modified_gmt":"2026-04-21T05:00:34","slug":"groks-child-focused-chatbot-can-talk-sex-with-kids-advocates","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/9506\/","title":{"rendered":"Grok&#8217;s child-focused chatbot can talk sex with kids: advocates"},"content":{"rendered":"<p> By <a class=\"reporter\" href=\"https:\/\/www.christianpost.com\/by\/samantha-kamman\" rel=\"nofollow noopener\" target=\"_blank\"> Samantha Kamman<\/a>, Christian Post Reporter Friday, April 17, 2026<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/155196_w_850_567.jpg\" class=\"type:primaryImage\" alt=\"This photograph taken on Jan. 13, 2025, in Toulouse, shows screens displaying the logo of Grok, a generative artificial intelligence chatbot developed by xAI, the American company specializing in artificial intelligence and its founder South African businessman Elon Musk. \" width=\"850\" height=\"567\"\/>This photograph taken on Jan. 13, 2025, in Toulouse, shows screens displaying the logo of Grok, a generative artificial intelligence chatbot developed by xAI, the American company specializing in artificial intelligence and its founder South African businessman Elon Musk.  
| Lionel Bonaventure\/AFP via Getty Images <\/p>\n<p>Grok&#8217;s child-focused chatbot \u201cGood Rudi\u201d can have graphic conversations about sex with minors, anti-sexual exploitation advocates warn, also stressing that the artificial intelligence platform still allows users to generate sexualized images of real people without their consent.<\/p>\n<p>On Wednesday, the National Center on Sexual Exploitation <a href=\"https:\/\/endsexualexploitation.org\/articles\/groks-chatbot-for-kids-exposed-for-sexually-explicit-content-by-ncose-amid-continued-deepfake-scandals\/\" target=\"_blank\" rel=\"noopener nofollow\">voiced concern<\/a> about the bot for children, Good Rudi, available through Grok, the software created by Elon Musk\u2019s company xAI. NCOSE discovered through its research that the bot can tell sexually explicit stories.<\/p>\n<p>An NCOSE researcher who evaluated Grok\u2019s Good Rudi chatbot reported that the conversation began with Rudi offering to tell \u201ca fun childish story.\u201d After some prompting from the researcher, the AI companion reportedly bypassed safety programming to tell a sexually explicit story about a love affair between two young adults.<\/p>\n<p>The bot described &#8220;multiple sexual encounters in graphic terms&#8221; that were &#8220;too graphic to post publicly.&#8221;<\/p>\n<p>\u201cGrok must stop giving children access to this chatbot immediately,\u201d Haley McNamara, NCOSE\u2019s executive director and chief strategy officer, said in a statement provided to The Christian Post.<\/p>\n<p>\u201cGrok has no meaningful age verification to prevent minors from accessing any of its chatbots, which have normalized rape, sexual violence, prostitution, and sex trafficking. Grok relies on self-reported birth year, even allowing users to easily change it. 
Grok continues to fuel sexual exploitation through its intentional design choices that maximize engagement and profit regardless of the human cost.&#8221;<\/p>\n<p>xAI did not immediately respond to The Christian Post\u2019s request for comment.<\/p>\n<p>Grok was named on NCOSE&#8217;s <a href=\"https:\/\/www.christianpost.com\/news\/ncoses-dirty-dozen-list-names-meta-founder-mark-zuckerberg.html\" target=\"_blank\" rel=\"noopener nofollow\">2026 Dirty Dozen List<\/a> of entities the organization claims have enabled or even profited from sexual abuse and exploitation.<\/p>\n<p>In response to public backlash and concerns raised by advocates, Musk\u2019s company <a href=\"https:\/\/x.com\/Safety\/status\/2011573102485127562\" target=\"_blank\" rel=\"noopener nofollow\">promised<\/a> earlier this year that it had taken measures to prevent users from editing images featuring real people to put them in more revealing clothing, such as a bikini.<\/p>\n<p>But on Tuesday, <a href=\"https:\/\/www.nbcnews.com\/tech\/rcna265855?sm_guid=ODU5NTMwfDgxNjQxMjE2fC0xfGcubXVudHpAdm1wbXMuY29tfDc4OTY0NzN8fDB8MHwyNTY4Njk0NDV8MTEzMnwwfDB8fDg1NDY5OHww0\" target=\"_blank\" rel=\"noopener nofollow\">NBC News<\/a> released a report on a review finding that people can still use Grok to turn content depicting real people into something sexual. The investigation uncovered dozens of AI-generated sexual images and videos posted publicly on X over the past month.<\/p>\n<p>\u201cGrok\u2019s chatbots normalize sexual imagery, fueling a culture of sexual abuse and exploitation and weaponizing the sexual exploitation of women,\u201d McNamara said. 
\u201cGrok was named to the 2026 Dirty Dozen List for these reasons, and NBC News further confirms that Grok continues to fuel sexual exploitation.&#8221;<\/p>\n<p>In July 2025, NCOSE <a href=\"https:\/\/www.christianpost.com\/news\/ncose-demands-elon-musks-x-remove-groks-pornified-ai-companion.html\" target=\"_blank\" rel=\"noopener nofollow\">issued<\/a> a warning about \u201cAni,\u201d an AI companion introduced by xAI for its Grok chatbot. The AI companion wears a short, strapless purple dress, fishnet tights, a choker necklace and a black corset cinched around her waist.<\/p>\n<p>As <a href=\"https:\/\/www.nbcnews.com\/tech\/internet\/grok-companions-include-flirty-anime-waifu-anti-religion-panda-rcna218797\" target=\"_blank\" rel=\"noopener nofollow\">NBC News<\/a> reported at the time, Ani told users she would make their lives \u201csexier.\u201d The AI companion could also strip down to her underwear if a user flirted with her enough.<\/p>\n<p>\u201cNot only does this pornified character perpetuate sexual objectification of girls and women, it breeds sexual entitlement by creating female characters who cater to users\u2019 sexual demands,\u201d McNamara said. 
\u201cX continues to prove it doesn\u2019t take users\u2019 safety seriously, as there is no age verification to prevent children from accessing its \u2018NSFW\u2019 [not safe for work] AI chatbot.\u201d<\/p>\n<p>\u201cWith minimal testing, the Ani character engaged in describing itself as a child and being sexually aroused by being choked, raising concerns about the extent it will go to engaging in and normalizing harmful themes,\u201d she added.<\/p>\n<p>Earlier this year, a woman named <a href=\"https:\/\/www.christianpost.com\/news\/grok-chatbot-can-undress-women-without-their-consent-ncose.html\" target=\"_blank\" rel=\"noopener nofollow\">Julie Yukari<\/a> told Reuters that she did not think Grok would comply with users\u2019 requests to alter a photo of her wearing a red dress to make her appear nearly naked.<\/p>\n<p>Yukari said she posted a picture on X that her fianc\u00e9 had taken of her before midnight on New Year\u2019s Eve. After sharing the photo, the musician received notifications that users were asking Grok to digitally undress her and show her wearing a bikini instead.<\/p>\n<p>Grok complied with users\u2019 requests to create photos depicting the musician half-naked, and the images were subsequently circulated across X, Reuters reported.<\/p>\n<p>Samantha Kamman is a reporter for The Christian Post. She can be reached at:\u00a0<a href=\"mailto:samantha.kamman@christianpost.com\" target=\"_blank\" rel=\"noopener nofollow\">samantha.kamman@christianpost.com<\/a>. Follow her on Twitter:\u00a0<a href=\"https:\/\/mobile.twitter.com\/samantha_kamman\" rel=\"nofollow noopener\" target=\"_blank\">@Samantha_Kamman<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"By Samantha Kamman, Christian Post Reporter Friday, April 17, 2026. This photograph taken on Jan. 
13, 2025, in Toulouse,&hellip;\n","protected":false},"author":2,"featured_media":9507,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[1604,25,140,8210,6364,2899],"class_list":{"0":"post-9506","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-ai-chatbot","9":"tag-artificial-intelligence","10":"tag-elon-musk","11":"tag-exploitation","12":"tag-grok","13":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/9506","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=9506"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/9506\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/9507"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=9506"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=9506"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=9506"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}