{"id":931638,"date":"2026-05-01T22:58:27","date_gmt":"2026-05-01T22:58:27","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/931638\/"},"modified":"2026-05-01T22:58:27","modified_gmt":"2026-05-01T22:58:27","slug":"reduced-symptom-reporting-quality-during-human-chatbot-versus-human-physician-interactions","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/931638\/","title":{"rendered":"Reduced symptom reporting quality during human\u2013chatbot versus human\u2013physician interactions"},"content":{"rendered":"<p>Healthcare is undergoing a digital transformation characterized by the rapid integration of artificial intelligence (AI) and telemedicine platforms into primary care and triage settings<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 1\" title=\"Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44&#x2013;56 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR1\" id=\"ref-link-section-d55241549e499\" target=\"_blank\" rel=\"noopener\">1<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 2\" title=\"Z&#xF6;ller, N. et al. Human&#x2013;AI collectives most accurately diagnose clinical vignettes. Proc. Natl Acad. Sci. USA 122, e2426153122 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR2\" id=\"ref-link-section-d55241549e502\" target=\"_blank\" rel=\"noopener\">2<\/a>. Central to any care setting are patients\u2019 self-reported symptoms, which guide diagnostic workflows and clinical decision-making<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 3\" title=\"Popovich, I., Szecket, N. &amp; Nahill, A. Framing of clinical information affects physicians&#x2019; diagnostic accuracy. Emerg. Med. J. 36, 589&#x2013;594 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR3\" id=\"ref-link-section-d55241549e506\" target=\"_blank\" rel=\"noopener\">3<\/a>. This is particularly true for consumer-facing applications where patients initiate contact via online symptom checkers, AI-based self-triage tools or general-purpose large language models (LLMs) before accessing professional care<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 4\" title=\"Kopka, M., von Kalckreuth, N. &amp; Feufel, M. A. Accuracy of online symptom assessment applications, large language models, and laypeople for self-triage decisions. NPJ Digit. Med. 8, 178 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR4\" id=\"ref-link-section-d55241549e510\" target=\"_blank\" rel=\"noopener\">4<\/a>. In such self-triage scenarios, any degradation in the quality of patients\u2019 free-text input can directly undermine downstream triage recommendations, even when the underlying LLM or algorithm performs well under standardized test conditions<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 5\" title=\"Bean, A. M. et al. Reliability of LLMs as medical assistants for the general public: a randomized preregistered study. Nat. Med. 
32, 609&#x2013;615 (2026).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR5\" id=\"ref-link-section-d55241549e514\" target=\"_blank\" rel=\"noopener\">5<\/a>.<\/p>\n<p>Research on human\u2013AI interaction has suggested that individuals adapt their language and behaviour based on perceptions of the recipient\u2019s nature (human versus AI)<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 6\" title=\"Goergen, J., de Bellis, E. &amp; Klesse, A. K. AI assessment changes human behavior. Proc. Natl Acad. Sci. USA 122, e2425439122 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR6\" id=\"ref-link-section-d55241549e521\" target=\"_blank\" rel=\"noopener\">6<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 7\" title=\"Mou, Y. &amp; Xu, K. The media inequality: comparing the initial human&#x2013;human and human&#x2013;AI social interactions. Comput. Hum. Behav. 72, 432&#x2013;440 (2017).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR7\" id=\"ref-link-section-d55241549e524\" target=\"_blank\" rel=\"noopener\">7<\/a>. Such adaptations could compromise clinical utility: if patients alter their symptom descriptions when communicating with an AI chatbot, the medical suitability of those narratives may also change. Understanding these dynamics becomes crucial as digital-first strategies expand and the use of medically unsupervised LLMs for health queries increases rapidly<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 8\" title=\"Haupt, C. E. &amp; Marks, M. AI-generated medical advice&#x2014;GPT and beyond. JAMA 329, 1349&#x2013;1350 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR8\" id=\"ref-link-section-d55241549e528\" target=\"_blank\" rel=\"noopener\">8<\/a>.<\/p>\n<p>Prior work has primarily focused on the \u2018output side\u2019 of human\u2013AI interaction in the medical domain. For instance, identical medical advice was found to be judged as less reliable, less empathic and less worth following when labelled as AI generated compared with human generated<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 9\" title=\"Reis, M., Reis, F. &amp; Kunde, W. Influence of believed AI involvement on the perception of digital medical advice. Nat. Med. 30, 3098&#x2013;3100 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s44360-026-00116-y#ref-CR9\" id=\"ref-link-section-d55241549e535\" target=\"_blank\" rel=\"noopener\">9<\/a>. However, studies examining the \u2018input side\u2019 of this interaction remain scarce. Notably, no study has systematically examined how the perceived recipient (human physician versus AI chatbot) affects the generation of symptom reports, especially regarding their suitability for an initial medical urgency assessment. Understanding these differences is essential to optimize digital triage approaches and to ensure equitable clinical outcomes when patients interact with AI-supported platforms or LLMs.<\/p>\n<p>We conducted a preregistered online experiment with a large UK-based sample (n\u2009=\u2009500), stratified by demographics to approximate the composition of the UK adult population. 
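To make the rating step concrete, the following is a minimal sketch of how a blinded, LLM-based suitability rating could be implemented. It assumes the OpenAI Python SDK; the rubric wording and the model identifier (here the paper's rater, GPT-5.2) are illustrative placeholders rather than the authors' exact protocol, which is documented in their Supplementary Material.

```python
# Sketch of a blinded LLM-based suitability rating pipeline.
# Assumptions (not from the paper): the OpenAI Python SDK, a placeholder
# model name and an illustrative rubric.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are rating free-text symptom reports. Rate how suitable the report "
    "is for an initial medical urgency assessment on a scale from 1 (urgency "
    "of treatment cannot be inferred) to 5 (urgency can be inferred "
    "precisely). Respond with the number only."
)

def rate_report(report_text: str, model: str = "gpt-5.2") -> int:
    """Return a 1-5 suitability rating for one symptom report."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic ratings for reproducibility
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": report_text},
        ],
    )
    return int(response.choices[0].message.content.strip())
```

Because each report is passed without any condition label, the rater stays blind to whether the author believed they were writing to an AI chatbot or a physician.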
We found that participants' symptom reports were 8% less suitable for an initial medical urgency assessment when participants believed they were interacting with an AI chatbot instead of a human physician (based on ratings from GPT-5.2; see Fig. 1), non-standardized regression coefficient |b| = 0.22, P < 0.001 (Cohen's d = 0.34; 95% confidence interval (CI), 0.16–0.52).

Fig. 1: Suitability of symptom reports by recipient type. In the AI condition (mean = 2.60, s.e.m. = 0.04), symptom reports were 8% less suitable for an initial medical urgency assessment than in the human physician condition (mean = 2.82, s.e.m. = 0.04; rated by GPT-5.2). We calculated a linear mixed-effects model to test for differences between the two conditions. Error bars represent the standard errors of the individual subject means.
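As a sketch of this analysis under stated assumptions (statsmodels for the linear mixed-effects model, a random intercept per participant because each person contributed two reports, and hypothetical file and column names):

```python
# Sketch of the condition contrast with a linear mixed-effects model.
# File and column names are illustrative, not the authors' data layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("symptom_reports.csv")
# expected columns: participant, disease ('headache'/'flu'),
#                   recipient ('ai'/'physician'), suitability (1-5)

model = smf.mixedlm(
    "suitability ~ recipient + disease",  # fixed effects
    data=df,
    groups=df["participant"],             # random intercept per participant
)
result = model.fit()
print(result.summary())  # the 'recipient' coefficient corresponds to b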
To validate these ratings, we randomly selected 200 symptom reports from each condition (that is, 400 reports in total) and obtained additional assessments from four licensed physicians (two neurologists for the 'headache' reports and two internists/pulmonologists for the 'flu' reports; intraclass correlation (ICC) of the human raters: ICC(1,2) = 0.75; 95% CI, 0.70–0.80). Two experts in the respective domain independently evaluated each report, and we calculated the average of their ratings (interrater reliability of GPT-5.2 and the average human-coded suitability ratings: ICC(1,1) = 0.70; 95% CI, 0.65–0.75). We also repeated our analyses with suitability ratings generated by other LLMs (DeepSeek R1, GPT-o3 mini, GPT-o4 mini, GPT-4o mini and GPT-4.1 mini) and compared these ratings with the human experts' assessments. For each LLM, suitability ratings were significantly reduced for the AI group compared with the human physician group, |b| values ≥ 0.15, P values ≤ 0.008. Importantly, this effect was also evident for the reduced subset of human-coded responses, |b| = 0.26, P = 0.021. GPT-5.2 achieved the highest interrater reliability of all tested LLMs (see Supplementary Material 2 for details). Table 1 shows the average suitability ratings for both recipient conditions and the different raters.

Table 1: Average suitability ratings (s.d. in parentheses) for each combination of recipient and rater type (based on the unaggregated data).
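A reliability check of this kind can be reproduced with standard tooling. The sketch below assumes pingouin's intraclass_corr on long-format data with hypothetical file and column names; it is not the authors' analysis script.

```python
# Sketch of the interrater-reliability check via intraclass correlations.
# Long format: one row per (report, rater) pair; names are illustrative.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("validation_ratings.csv")
# expected columns: report_id, rater ('expert_1', 'expert_2', 'gpt'),
#                   score (1-5)

icc = pg.intraclass_corr(
    data=ratings, targets="report_id", raters="rater", ratings="score"
)
# 'ICC1' corresponds to ICC(1,1) for single raters; 'ICC1k' to ICC(1,k)
# for the average of k raters, as reported above.
print(icc[icc["Type"].isin(["ICC1", "ICC1k"])])
```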
Exploratory analyses further revealed that symptom reports addressed to a believed human physician were significantly more detailed (that is, contained significantly more characters) than those addressed to the AI chatbot (human physician: mean = 255.60, s.d. = 111.56; AI chatbot: mean = 228.75, s.d. = 84.81), |b| = 26.85, P = 0.003, and the level of detail positively affected suitability ratings (GPT-5.2), |b| < 0.01, P < 0.001. Crucially, there was a significant indirect effect of recipient type on suitability ratings via detail level, |b| = 0.13; 95% bootstrap CI, 0.05–0.21. This outcome suggests that differences in suitability between perceived recipients may be driven by differences in the level of detail of the symptom reports.
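A minimal sketch of such a bootstrap test of the indirect effect, assuming simple OLS path models rather than the full mixed-effects specification, with hypothetical column names:

```python
# Sketch of the bootstrap test for the indirect effect of recipient type
# on suitability via report length (character count). Simplified to OLS;
# column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("symptom_reports.csv")  # one row per report
df["is_ai"] = (df["recipient"] == "ai").astype(int)

rng = np.random.default_rng(seed=1)
indirect = []
for _ in range(5000):
    boot = df.sample(frac=1.0, replace=True, random_state=rng)
    # a path: recipient -> detail (character count)
    a = smf.ols("n_chars ~ is_ai", data=boot).fit().params["is_ai"]
    # b path: detail -> suitability, controlling for recipient
    b = smf.ols("suitability ~ n_chars + is_ai",
                data=boot).fit().params["n_chars"]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect (a*b), 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```

The indirect effect is the product of the a path (recipient to character count) and the b path (character count to suitability, controlling for recipient); it is deemed significant when the bootstrap CI excludes zero.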
Our findings extend previous research on anti-AI biases in healthcare from the output stage (how patients evaluate AI-generated advice) to the input stage, where patients formulate the information on which AI systems depend. This distinction matters because input-stage degradation operates upstream of any clinical algorithm, rendering it invisible to standard model benchmarks. Although the observed effects are of moderate size at the individual level, they could be highly meaningful at scale. This especially applies to consumer-facing applications, where millions of users may rely on AI chatbots or LLM-based assistants for an initial medical urgency assessment before contacting clinicians [4,8]. In such settings, users' belief that they are interacting with AI could lead to less suitable symptom reports, resulting in less accurate triage recommendations or risk assessments, regardless of the underlying model's performance [5].

Our exploratory analyses suggest that the present effect may be driven by less detailed requests when participants believe they are interacting with AI. In that case, information that is important for medical triage may be incomplete or absent. As the underlying motives were not directly measured, we can only speculate about the reasons for this outcome. Potential explanations include so-called uniqueness neglect (the belief that AI cannot take the unique characteristics of the user into account) [10] and a perceived lack of transparency regarding AI's decision-making processes [11]. In line with such reasoning, framing a medical AI tool as more (versus less) capable of providing customized (versus generic) responses has been found to increase willingness to use such a tool [10]. Similarly, providing additional explanations of AI's decision-making processes increased acceptance of medical AI [11]. When interacting with AI, individuals may abbreviate or withhold comprehensive symptom information, whether out of general scepticism about the algorithm's diagnostic capabilities, privacy concerns or the belief that this information is unnecessary for an AI tool. For consumer-facing systems, this suggests that interface design, onboarding instructions and in-conversation prompts should explicitly encourage users to provide rich, narrative symptom descriptions (for instance, by giving concrete examples of high-quality reports or by dynamically probing for missing details) to mitigate input-stage degradation.
Overall, our study highlights further obstacles to AI adoption in healthcare. In line with recent findings [5], technical issues are not the main challenge for real-world deployment of medical AI; rather, the main obstacle is how humans interact with such tools. To facilitate effective AI integration, future research should further elucidate human–AI interaction in this context. For instance, the presentation of the AI system (for example, its degree of human likeness [12] or its believed capability of taking the user's unique characteristics into account [10]) or explanatory features regarding the AI's decision-making processes [11] could moderate this effect. Finally, recent evidence indicates that public scepticism towards medical AI extends beyond the technology itself to the physicians who use AI tools [13]. Future research should therefore examine whether patients provide less accurate symptom reports even when interacting with human physicians, if they suspect their physician is relying on AI assistance.

As a limitation on the external validity of our findings, it should be noted that we used hypothetical scenarios with selected medical conditions for all participants. This design choice allowed us to conduct a highly controlled investigation of a previously unexplored research question. To enhance the real-world implications of our findings, we deliberately selected common and relatable conditions that are easy to remember or imagine and for which medical consultation of LLMs is likely. Furthermore, we invited the participants of our experiment to a follow-up study to analyse how their previous experience with the presented conditions affects our findings. Importantly, our results remained robust when including only participants who indicated that they were experiencing the respective symptoms at the time they participated in the study (see Supplementary Material 4 for details). At the same time, real-world patient reports may differ in several respects from simulated symptom reporting (for example, regarding emotional salience and informational completeness).
Thus, a crucial next step for future research is to establish whether the present effect generalizes to real-world clinical encounters, where the emotional salience and the stakes differ.

References

1. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).
2. Zöller, N. et al. Human–AI collectives most accurately diagnose clinical vignettes. Proc. Natl Acad. Sci. USA 122, e2426153122 (2025).
3. Popovich, I., Szecket, N. & Nahill, A. Framing of clinical information affects physicians' diagnostic accuracy. Emerg. Med. J. 36, 589–594 (2019).
4. Kopka, M., von Kalckreuth, N. & Feufel, M. A. Accuracy of online symptom assessment applications, large language models, and laypeople for self-triage decisions. NPJ Digit. Med. 8, 178 (2025).
5. Bean, A. M. et al. Reliability of LLMs as medical assistants for the general public: a randomized preregistered study. Nat. Med. 32, 609–615 (2026).
6. Goergen, J., de Bellis, E. & Klesse, A. K. AI assessment changes human behavior. Proc. Natl Acad. Sci. USA 122, e2425439122 (2025).
7. Mou, Y. & Xu, K. The media inequality: comparing the initial human–human and human–AI social interactions. Comput. Hum. Behav. 72, 432–440 (2017).
8. Haupt, C. E. & Marks, M. AI-generated medical advice—GPT and beyond. JAMA 329, 1349–1350 (2023).
9. Reis, M., Reis, F. & Kunde, W. Influence of believed AI involvement on the perception of digital medical advice. Nat. Med. 30, 3098–3100 (2024).
10. Longoni, C., Bonezzi, A. & Morewedge, C. K. Resistance to medical artificial intelligence. J. Consum. Res. 46, 629–650 (2019).
11. Cadario, R., Longoni, C. & Morewedge, C. K. Understanding, explaining, and utilizing medical artificial intelligence. Nat. Hum. Behav. 5, 1636–1642 (2021).
12. Sestino, A. & D'Angelo, A. in Personalized Medicine Meets Artificial Intelligence (eds Cesario, A. et al.) 249–260 (Springer, 2023).
13. Reis, M., Reis, F. & Kunde, W. Public perception of physicians who use artificial intelligence. JAMA Netw. Open 8, e2521643 (2025).