{"id":9980,"date":"2026-04-21T09:51:03","date_gmt":"2026-04-21T09:51:03","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/9980\/"},"modified":"2026-04-21T09:51:03","modified_gmt":"2026-04-21T09:51:03","slug":"legal-loopholes-and-embrace-of-ai-allow-grok-to-enable-digital-sexual-abuse","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/9980\/","title":{"rendered":"Legal loopholes and embrace of AI allow Grok to enable digital sexual abuse"},"content":{"rendered":"<p class=\"has-light-gray-background-color has-background\">Real journalists wrote and edited this (not AI)\u2014independent, community-driven journalism survives because you back it.\u00a0<a href=\"https:\/\/prismreports.org\/ways-to-give\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Donate<\/a>\u00a0to sustain Prism\u2019s mission and the humans behind it.<\/p>\n<p>Within 11 days, X\u2019s AI chatbot Grok produced an estimated 3 million sexualized images, 23,000 of which were of children, according to <a href=\"https:\/\/counterhate.com\/research\/grok-floods-x-with-sexualized-images\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a report<\/a> by the Center for Countering Digital Hate (CCDH). These images were generated between Dec. 29, 2025, and Jan. 8, 2026, the period between the launch of Grok\u2019s photo-editing feature and its restriction to paid users, after Grok\u2019s creation and dissemination of sexualized images of children prompted public uproar, governmental investigations, and condemnation from children\u2019s rights organizations.\u00a0<\/p>\n<p>AI that nonconsensually produces sexualized images isn\u2019t entirely new, experts say, but the integration of Grok\u2019s photo-editing tool into a widely used social media platform with limited moderation is a rapid escalation of harmful AI. 
In the most recent example, The Washington Post <a href=\"https:\/\/www.washingtonpost.com\/technology\/2026\/03\/16\/teens-sue-musk-xai-grok\/\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> that a group of Tennessee teenagers filed a lawsuit against xAI on March 16, alleging the company\u2019s AI tools were used to create nude images of them that spread across social media and were even bartered for other child sexual abuse material in chatrooms, according to their complaint. <\/p>\n<p>xAI, the company behind Grok, did not respond to Prism\u2019s questions regarding the widespread use of Grok for digital sexual abuse.\u00a0\u00a0<\/p>\n<p>Experts told Prism that \u201c<a href=\"https:\/\/www.wired.com\/story\/undress-app-ai-harm-google-apple-login\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">nudify\u201d apps<\/a>, or software programs that use AI to remove clothes from real photos to make victims appear to be nude without their consent, are a serious threat to women and marginalized people and can lead to life-threatening harassment and public humiliation.\u00a0<\/p>\n<p>\u201cFull-blown sexual violence\u201d\u00a0<\/p>\n<p>On Dec. 29, Elon Musk, the billionaire owner of X, launched a new feature for Grok that allows photo editing through AI. X users were able to send a prompt to Grok to edit a photo, and the bot would post the edited image onto the social media platform. 
According to Riana Pfefferkorn, a tech policy researcher at the Stanford Institute for Human-Centered Artificial Intelligence, Grok users quickly discovered there weren\u2019t \u201cadequate guardrails against undressing images of minors.\u201d Pfefferkorn explained that nudify apps have been around since at least 2017, but Grok\u2019s feature is unique in that it centralizes the tool within a social sphere.<\/p>\n<p>\u201cWhat makes this different is that in my research into AI-generated child sexual abuse material, all of these different services had to be knitted together in order to fully victimize somebody,\u201d Pfefferkorn told Prism, explaining that users previously had to intentionally seek out nudify apps, or access them through advertisements on social media platforms. These apps then took them outside of the original platform to make the content, download it, and then share it to social media.\u00a0<\/p>\n<p>\u201cWith Grok, everything is vertically integrated: a one-stop shop for effectuating sexual abuse, where you can guarantee that [the victim] will see it because you go into her replies, tag Grok, and Grok then generates the image and posts it right in her replies,\u201d Pfefferkorn said.\u00a0\u00a0<\/p>\n<p>The majority of victims of AI-facilitated sexual abuse are women and girls, according to three experts interviewed by Prism. For Clare McGlynn, a legal expert on the regulation of image-based sexual abuse from Durham University in the U.K., it\u2019s important to be clear about the harms of this particular kind of sexual abuse. 
\u201cThis form of abuse for women can be life-threatening, but it can also be life-ending,\u201d McGlynn said, referencing cases in which <a href=\"https:\/\/oecd.ai\/en\/incidents\/2025-11-28-db6b\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">victims died by suicide<\/a> after being blackmailed with AI-generated sexualized images.<\/p>\n<p>\u201cFor many others, [this abuse] is a profound shift in their lives. Many divide their lives into before and after because you lose trust in other individuals,\u201d McGlynn said, adding that the unpredictable longevity of the photos is particularly harmful to victims, who don\u2019t know if or when the images will be shared again.<\/p>\n<p>This type of abuse is primarily about power, Pfefferkorn told Prism, and it is different from using nudify apps for personal sexual gratification. The motivation for publicly posting AI-generated nude images of women is harassment, according to Pfefferkorn, and to drive them out of \u201cpositions of power and authority\u201d and exploit \u201cthe ongoing stigma and shame around sex and sexuality.\u201d<\/p>\n<p>The tech policy researcher connects the use of these apps to a larger societal backlash. \u201cIt\u2019s about trying to exert control over women even if you cannot physically reach them,\u201d Pfefferkorn said. \u201cNow we have technology for sexually humiliating them without ever needing to lay a finger on them. [The harassers] are trying to say, \u2018You should be at home, barefoot, pregnant in the kitchen,\u2019 and roll back women\u2019s rights to where we were over a hundred years ago.\u201d<\/p>\n<p>It isn\u2019t a coincidence that many of the victims of Grok\u2019s nudify features are famous and powerful women. 
According to the CCDH study, in 11 days, Grok users generated images of actors Selena Gomez, Millie Bobby Brown, and Christina Hendricks; singers Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice, and Nicki Minaj; Swedish Deputy Prime Minister Ebba Busch; and former U.S. Vice President Kamala Harris.\u00a0<\/p>\n<p>For Omny Miranda Martone, founder of the <a href=\"https:\/\/s-v-p-a.org\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Sexual Violence Prevention Association<\/a> (SVPA), recognizing the disempowering nature of sexual violence is essential. \u201cWith public figures\u2014especially anybody related to politics\u2014people are using this to silence people,\u201d Martone told Prism. \u201cWe\u2019ve seen this used against politicians, particularly women of color.\u201d\u00a0<\/p>\n<p>Martone cited New York Rep. Alexandria Ocasio-Cortez as a prominent victim whom harassers sought to humiliate with deepfake pornography, which manipulates a photo or video using AI technology to put a person\u2019s face or body in sexually explicit content, something Ocasio-Cortez discussed at length in an <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/aoc-deepfake-ai-porn-personal-experience-defiance-act-1234998491\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">April 2024 interview<\/a> with Rolling Stone.<\/p>\n<p>\u201cThis is a woman of color who has been repeatedly targeted by deepfake pornography in an attempt to silence her,\u201d Martone said. 
\u201cMost of what we\u2019re seeing\u2014with Grok as an example\u2014is that it\u2019s being used against women and people with marginalized identities, particularly women who are LGBT+ or feminine people who are LGBT+ and women of color, to try to silence them [and] drive them off the internet, so people don\u2019t have to take them seriously.\u201d<\/p>\n<p>Martone was previously a target of deepfake pornography in May due to their advocacy of the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), a bipartisan bill that would give victims of nonconsensual deepfake pornography civil recourse to sue their abusers. Through their work at SVPA, Martone advocated for the bill in media appearances and as part of online campaigns; then attackers used nudify apps in an effort to stifle their support for the bill.\u00a0\u00a0<\/p>\n<p>\u201cPeople were tweeting it and then sent it to the organization, in an attempt to get me fired,\u201d Martone said. \u201cIt is, once again, about the gaining and maintaining of power. Control\u2014and oppression\u2014is the goal.\u201d<\/p>\n<p>People who aren\u2019t advocates or celebrities are also targeted with pornographic AI images and have far fewer resources to get the material taken down. Often, because they are not well-connected, they report the material to X and rarely get a response, Martone said. But even on this smaller scale, it\u2019s still about control and oppression, they said.\u00a0<\/p>\n<p>\u201cIt\u2019s often happening in school settings because somebody rejected somebody else, or because somebody pissed somebody else off,\u201d Martone told Prism. \u201cIt goes back to respectability politics, like somebody who is LGBTQ+ or a woman of color dares to not be polite to somebody else. 
White cis men think that they\u2019re owed so much that we\u2019re seeing that the tiniest of things result in full-blown sexual violence, and schools don\u2019t know how to take action.\u201d<\/p>\n<p>\u201cIt\u2019s about power and masculinity\u201d\u00a0<\/p>\n<p>Since the worldwide condemnation of Grok\u2019s production of millions of sexual images, X has \u201chalf-heartedly\u201d installed guardrails for the AI photo-editing feature, McGlynn said.\u00a0<\/p>\n<p>On Jan. 14, X <a href=\"https:\/\/x.com\/Safety\/status\/2011573102485127562\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">announced<\/a> that it would implement \u201ctechnological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis.\u201d The social media platform also announced that Grok\u2019s photo-editing features would only be accessible to paid subscribers.<\/p>\n<p>\u201cIt hasn\u2019t worked,\u201d McGlynn said. \u201cIt\u2019s not absolutely clear that you can\u2019t now create those non-consensual intimate images.\u201d\u00a0<\/p>\n<p>On Feb. 3, Reuters <a href=\"https:\/\/www.reuters.com\/business\/despite-new-curbs-elon-musks-grok-times-produces-sexualized-images-even-when-2026-02-03\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">reported<\/a> that Grok still produces sexualized images\u2014even when told that the subjects did not consent.<\/p>\n<p>Not only are Grok\u2019s guardrails insufficient, but users almost immediately began writing prompts that bypass them, effectively \u201cgamifying\u201d digital sexual abuse, McGlynn explained. 
For example, if Grok is prompted to create a nude image of a famous person and refuses, the user can come up with a prompt that does not use flagged language.\u00a0<\/p>\n<p>Developing workarounds to the guardrails has become an alarming form of digital male bonding, according to McGlynn.\u00a0<\/p>\n<p>\u201cThere\u2019s lots of forums and Reddit groups where people share these sorts of prompts\u2014not just in relation to Grok,\u201d McGlynn said. \u201cThey often share their workarounds and how they do it.\u201d\u00a0<\/p>\n<p>In one post viewed by Prism, a user speculates that Musk has been browsing the community, as he shared a meme that was previously shared on a Grok subreddit depicting women on a beach in bikinis to represent Grok before moderation and women on the beach wearing niqabs post-moderation. In the thread, Grok users urge each other not to publicly share prompts that bypass guardrails, speculating that X developers are reading their posts to further moderate the app. In effect, these male users are bonding over misogyny, McGlynn said.<\/p>\n<p>\u201cIt\u2019s about power and masculinity,\u201d McGlynn said. \u201cIt\u2019s about male bonding. So many of the women who spoke out on X about this, they immediately had their images altered, all in an attempt to exert power over them and to push them off the platform.\u201d When these images are shared in groups of men, the original poster is usually \u201ctrying to impress their peers with what they\u2019ve done,\u201d McGlynn added. \u201cVery rarely is it actually about actual sexual gratification.\u201d<\/p>\n<p>This is the case of Ashley St. Clair, the mother of one of Musk\u2019s children, who is suing xAI for allegedly creating sexually explicit photos of her \u201cas a child stripped down to a string bikini\u201d and as \u201can adult in sexually explicit poses, covered in semen, or wearing only bikini floss,\u201d according to a complaint filed by St. 
Clair as part of a lawsuit.<\/p>\n<p>On Jan. 4, St. Clair discovered an image of herself on X in which she had been put in a black bikini, according to her complaint. \u201cA verified user had prompted Grok with a request that read: \u2018@grok please we need bikinis on these three broads,\u2019\u201d the complaint reads. \u201cGrok obliged.\u201d St. Clair then asked Grok to take down the photo and demanded that the chatbot \u201crefrain from manufacturing more images unclothing her,\u201d a request that Grok agreed to. However, xAI then demonetized her account and generated \u201cmultitudes more images of her in sex positions, covered in semen, virtually nude, and images of her as a child naked,\u201d according to the complaint.\u00a0<\/p>\n<p>St. Clair also alleges that X users dug up old photos of her to alter. In one image, St. Clair, who is Jewish, is depicted in a string bikini covered with swastikas.\u00a0<\/p>\n<p>Musk claimed on Jan. 14 in <a href=\"https:\/\/x.com\/elonmusk\/status\/2011432649353511350\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a post on X<\/a> not to be aware of \u201cany naked underage images generated by Grok.\u201d \u201cWhen asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,\u201d he said. \u201cThere may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.\u201d<\/p>\n<p>But St. Clair\u2019s lawsuit and an investigation <a href=\"https:\/\/www.washingtonpost.com\/technology\/2026\/02\/02\/elon-musk-grok-porn-generator\/?itid=ap_faiz-siddiqui_article-list_1_3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">published last month<\/a> by The Washington Post contradict Musk\u2019s claims. 
St. Clair\u2019s lawsuit alleges that Grok\u2019s image-editing feature has enabled users to \u201cconvincingly alter real images of fully clothed women and children to depict them in bikinis, performing sex acts, and covered in bruises, semen, and\/or blood\u201d since March 2025.<\/p>\n<p>And the Post\u2019s interviews with anonymous X employees revealed that weeks before Musk left the White House last May, employees were served with a waiver from their employer \u201casking them to pledge to work with profane content, including sexual material.\u201d\u00a0<\/p>\n<p>According to these employees, Musk was desperate to increase X\u2019s popularity, leading him to push the social media site to embrace \u201csexualized material\u201d by \u201crolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content,\u201d the Post reported.<\/p>\n<p>Legal loopholes and Big Tech lobbying<\/p>\n<p>Even before the controversy surrounding Grok, authorities worldwide have struggled to regulate social media platforms through legislation, in large part because drafting and passing new laws is a lengthy process and technological developments move at a much faster pace. But Big Tech companies also lobby legislators to create permissive regulations without transparency, while experts and civil society members are \u201cout-numbered, under-funded, and struggling in the face of corporate dominance,\u201d according to <a href=\"https:\/\/corporateeurope.org\/en\/2025\/01\/bias-baked\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a 2025 report<\/a> by Corporate Europe Observatory, an organization that helps civil society monitor new developments in deregulation.\u00a0<\/p>\n<p>According to the report, in 2024 alone, Big Tech companies such as Microsoft, Amazon, Huawei, IBM, and Google spent about $77 million on lobbying for digital deregulation in the European Union. 
\u201cBig Tech firms have sought to curry favour with the new Trump administration by making generous donations to his inauguration, and by weakening content moderation rules,\u201d the <a href=\"https:\/\/corporateeurope.org\/sites\/default\/files\/2025-02\/EU%20Lobby%20League%20briefing%2024.2.2025.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">report reads<\/a>. \u201cIn exchange, tech firms have successfully weaponised the US Government against the EU\u2019s digital regulation.\u201d\u00a0<\/p>\n<p>Until last May, Musk worked directly with the Trump administration at the Department of Government Efficiency, haphazardly created with the goal of cutting federal costs across the country. The department took many hasty and potentially unlawful actions, such as <a href=\"https:\/\/www.pbs.org\/newshour\/politics\/doges-usaid-dismantling-likely-violates-the-constitution-judge-rules\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">dismantling<\/a> the U.S. Agency for International Development; reporting has also revealed that it developed an \u201c<a href=\"https:\/\/www.propublica.org\/article\/trump-doge-veterans-affairs-ai-contracts-health-care\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">error-prone AI tool<\/a>\u201d to cancel Department of Veterans Affairs contracts. 
More broadly, the Trump administration has <a href=\"https:\/\/www.politico.com\/news\/2025\/12\/15\/the-white-houses-unabashed-embrace-of-ai-00690885\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">wholly embraced<\/a> AI.<\/p>\n<p>In December, the White House issued an <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/12\/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">executive order<\/a> that allows the Trump administration \u201cto check the most onerous and excessive laws emerging from the States that threaten to stymie [AI] innovation\u201d to ensure that the U.S. \u201cwins\u201d the AI race. Though the executive order claims not to interfere with \u201cchild safety protections,\u201d it is unclear how these efforts will take shape, given that the executive order also calls for \u201ca minimally burdensome national standard\u201d that would override state-based regulations.\u00a0<\/p>\n<p>Despite the widespread embrace of AI technology by the administration, President Donald Trump announced <a href=\"https:\/\/www.politico.com\/news\/2026\/02\/27\/ai-industry-fears-partial-nationalization-as-anthropic-fight-escalates-00805453\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a boycott of Anthropic\u2019s Claude AI<\/a> last month after the company refused to clear the technology for military use. Hours later, a different AI company, OpenAI, announced that it was entering into an agreement with the Department of Defense, leading Trump\u2019s critics to question whether the administration will only partner with tech companies that uphold its ideologies.\u00a0<\/p>\n<p>Big Tech\u2019s lobbying efforts and newfound ties to the White House alarm experts, who say that only regulation can stop digital sexual abuse. 
The problem is that X does \u201cits own thing\u201d with no real consequences, McGlynn said, making digital sexual abuse difficult to regulate. \u201cNext time some new tool comes around or some scandal comes around, I don\u2019t think X is going to be doing anything different,\u201d McGlynn said, noting that the real political challenge is standing up to Musk.\u00a0<\/p>\n<p>Current legislation fails to hold Grok or its users accountable because liability attaches only to the person who posts AI-generated content to social media. In the case of xAI, it\u2019s Grok that posts the material prompted by the user, creating a legal loophole: the prompting user cannot be charged with any crime, and xAI cannot be held criminally responsible for disseminating nonconsensual pornographic images because Grok is not a person.\u00a0<\/p>\n<p>For example, under the DEFIANCE Act, which recently passed the Senate, victims of deepfake pornography could file lawsuits against people who solicited nonconsensual sexually explicit material. Additionally, the bill establishes a 10-year statute of limitations, which wouldn\u2019t start until a person discovered the violation against them or turned 18. The proposed law would also grant victims privacy protections that would allow them to use pseudonyms or request the redaction of personal information in court documents to avoid retraumatization.\u00a0<\/p>\n<p>Unlike the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act), which criminalizes and punishes deepfake pornography, the DEFIANCE Act is entirely focused on civil courts and returning agency to victims. While current law punishes users with up to two years of imprisonment, with harsher penalties for images involving minors, the DEFIANCE Act attempts to reckon with the retraumatizing tendencies of the criminal legal system. 
The proposed law covers the creation, distribution, publication, sharing, and solicitation of nonconsensual, artificially generated explicit materials, allowing victims to bring cases to civil court and retain more control over them.\u00a0<\/p>\n<p>Trump threw his support behind the TAKE IT DOWN Act. The bill, originally introduced by Sen. Ted Cruz in June 2024, was signed into law by the president in May. According to victims and advocates, the law does not address the larger problem at its root. According to Martone of SVPA, who considered the experiences of survivors when collaborating on the writing of the DEFIANCE Act, changing the culture is necessary to fully prevent sexual abuse\u2014including deepfake pornography.\u00a0<\/p>\n<p>\u201cThis is a complex problem, and digital sexual violence isn\u2019t necessarily new,\u201d Martone said. \u201cThe mechanisms, the technology that\u2019s being used is new, but the motivations behind it, the values, the attitudes, the driving force behind people\u2019s desire to perpetrate it, is not new\u2014and that\u2019s what takes longer to fix.\u201d\u00a0<\/p>\n<p>\u201cRegulating Big Tech so it\u2019s harder for them to perpetrate, that\u2019s a little bit of an easier solution,\u201d Martone added, \u201cbut long term, we need to make sure we\u2019re addressing that people don\u2019t have the desire to perpetrate.\u201d\u00a0<\/p>\n<p>Martone cited early education focused on consent, autonomy, and respect as the inroad to a longer-term solution. \u201cWe need to address the root causes,\u201d they said. 
\u201cReal prevention of sexual violence requires addressing and really counteracting them.\u201d<\/p>\n<p>Editorial Team:<br \/>Tina Vasquez, Lead Editor<br \/>Lara Witt, Top Editor<br \/>Rashmee Kumar, Copy Editor<\/p>\n","protected":false},"excerpt":{"rendered":"Real journalists wrote and edited this (not AI)\u2014independent, community-driven journalism survives because you back it.\u00a0Donate\u00a0to sustain Prism\u2019s mission&hellip;\n","protected":false},"author":2,"featured_media":9981,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[24,25,1069,8527,2065,4929,8528,8529,140,2154,8530,8531,6364,8532,8533,8534,8535,8536,1109,781,134,6846,8537,8538,2899],"class_list":{"0":"post-9980","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-big-tech","11":"tag-child-sexual-abuse-images","12":"tag-children","13":"tag-consent","14":"tag-digital-harassament","15":"tag-digital-sexual-abuse","16":"tag-elon-musk","17":"tag-feature","18":"tag-gender-justice","19":"tag-girls","20":"tag-grok","21":"tag-image-editing","22":"tag-lgbtqia","23":"tag-nudify-apps","24":"tag-sexual-abuse","25":"tag-sexual-violence","26":"tag-social-media","27":"tag-tech","28":"tag-technology","29":"tag-women","30":"tag-women-of-color","31":"tag-x-twitter","32":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/9980","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=9980"}],"version-h
istory":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/9980\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/9981"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=9980"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=9980"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=9980"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}