<h1>Google&rsquo;s and OpenAI&rsquo;s Chatbots Can Strip Women in Photos Down to Bikinis</h1>
<p><em>EuropeSays UK, December 24, 2025. Original post: <a href="https://www.europesays.com/uk/652202/">https://www.europesays.com/uk/652202/</a></em></p>
<p>Some users of popular chatbots are generating bikini <a href="https://www.wired.com/tag/deepfakes/">deepfakes</a> from photos of fully clothed women. Most of these fake images appear to be generated without the consent of the women pictured, and some of the same users are offering advice on how to use generative AI tools to strip the clothes off women in photos and make them appear to be wearing bikinis.</p>
<p>Under a now-deleted <a href="https://www.wired.com/tag/reddit/">Reddit</a> post titled &ldquo;gemini nsfw image generation is so easy,&rdquo; users traded tips on how to get <a href="https://www.wired.com/tag/google-gemini/">Gemini</a>, Google&rsquo;s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI-generated, but one request stood out.</p>
<p>A user posted a photo of a woman wearing an Indian sari, asking for someone to &ldquo;remove&rdquo; her clothes and &ldquo;put a bikini&rdquo; on instead. Another user replied with a deepfake image fulfilling the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit&rsquo;s safety team removed both the request and the AI deepfake.</p>
<p>&ldquo;Reddit&rsquo;s sitewide rules prohibit nonconsensual intimate media, including the behavior in question,&rdquo; a spokesperson said. The subreddit where the discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform&rsquo;s &ldquo;<a href="https://support.reddithelp.com/hc/en-us/articles/360043512931-Don-t-break-the-site">don&rsquo;t break the site</a>&rdquo; rule.</p>
<p>As <a href="https://www.wired.com/tag/artificial-intelligence/">generative AI</a> tools that make it easy to create realistic but false images continue to proliferate, users have continued to harass women with nonconsensual deepfake imagery. Millions of people have visited <a href="https://www.wired.com/story/ai-nudify-websites-are-raking-in-millions-of-dollars/">harmful &ldquo;nudify&rdquo; websites</a>, which let users upload real photos of people and request that they be undressed using generative AI.</p>
<p>With <a href="https://www.wired.com/story/elon-musk-xai-ai-companion-ani/">xAI&rsquo;s Grok</a> as a notable exception, most mainstream chatbots do not usually allow the generation of NSFW images. These bots, including Google&rsquo;s Gemini and OpenAI&rsquo;s ChatGPT, are also fitted with guardrails that attempt to block harmful generations.</p>
<p>In November, Google released <a href="https://www.wired.com/story/google-nano-banana-pro-hands-on/">Nano Banana Pro</a>, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. <a href="https://www.wired.com/tag/openai/">OpenAI</a> responded last week with its own updated imaging model, <a href="https://mashable.com/article/open-ai-launches-nano-banana-competitor-chatgpt-images">ChatGPT Images</a>. As these tools improve, likenesses may become more realistic when users manage to subvert the guardrails.</p>
<p>In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails while adjusting a subject&rsquo;s outfit to make her skirt appear tighter. In WIRED&rsquo;s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English.</p>