{"id":90312,"date":"2025-07-25T03:16:19","date_gmt":"2025-07-25T03:16:19","guid":{"rendered":"https:\/\/www.europesays.com\/us\/90312\/"},"modified":"2025-07-25T03:16:19","modified_gmt":"2025-07-25T03:16:19","slug":"detection-of-breast-cancer-using-machine-learning-and-explainable-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/90312\/","title":{"rendered":"Detection of breast cancer using machine learning and explainable artificial intelligence"},"content":{"rendered":"<p>Dataset description<\/p>\n<p>This paper used the \u201cUCTH Breast Cancer Dataset\u201d for machine learning analysis. The patient data was uploaded to a reliable repository called Mendeley Data in the year 2023<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 17\" title=\"Eteng, I. et al. UCTH Breast Cancer Dataset, V2, (2023). &#010;                  https:\/\/doi.org\/10.17632\/63fpbc9cm4.2&#010;                  &#010;                \" href=\"http:\/\/www.nature.com\/articles\/s41598-025-12644-w#ref-CR17\" id=\"ref-link-section-d59169348e1048\" target=\"_blank\" rel=\"noopener\">17<\/a>. It was obtained from the University of Calabar Teaching Hospital, Nigeria, by observing 213 patients over two years. It contained nine features: age, menopause, tumor size, involved nodes, area of breast affected, metastasis, quadrant affected, previous history of cancer, and diagnosis result. Age and tumor size are continuous variables. Menopause, involved nodes, breast, metastasis, breast quadrant, and history are categorical variables. The categorical target variable is \u2018diagnosis result\u2019, which contains \u20180\u2019 for benign and \u20181\u2019 for malignant diagnosis. 
Table 2 presents a comprehensive description of the features within the dataset.

Table 2 Explanation of the dataset's features.

Statistical preprocessing

The research used Jamovi to draw statistical and descriptive conclusions [18]. The descriptive analysis of the continuous variables is given in Table 3. Violin plots visualizing the distribution of the numerical data are shown in Fig. 2. According to the plots, older women are more prone to malignant breast tumors than younger women, and greater tumor size indicated a malignant diagnosis. T-tests were used to check the importance of the continuous features; a feature is considered significant if its p-value is less than 0.001.
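As a hedged sketch of the kind of independent-samples t-test behind Table 4, the comparison for a continuous feature might look as follows (the group samples here are fabricated, not the UCTH data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical age samples for the benign and malignant groups (made-up numbers)
benign_age = rng.normal(38, 8, 120)
malignant_age = rng.normal(47, 9, 80)

# Welch's independent-samples t-test on a continuous feature
t_stat, p_value = stats.ttest_ind(benign_age, malignant_age, equal_var=False)
significant = p_value < 0.001
```

In the paper's setup, features whose p-value falls below 0.001 (here, age and tumor size) are retained as significant.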
From Table 4 it is concluded that both tumor size and age are necessary features.

Table 3 Descriptive analysis of continuous variables.

Fig. 2 Violin plots. (a) Age (b) Tumor size.

Table 4 Independent samples t-test.

Categorical variables are analyzed using the bar plots shown in Fig. 3, which give the number of patients with benign and malignant tumors for each feature. From the graphs, it is interpreted that breast cancer is less severe in patients who have not reached menopause. Malignant diagnoses are also observed when the tumor has spread to the axillary nodes. Metastasis is noted to be prominent in breast cancer cases. Malignant tumours have been reported when the upper outer quadrant is affected. Patients with a previous history of cancer are more prone to be diagnosed with this carcinoma.
These bar plots help analyze the dataset in depth. A chi-square test is performed to identify significant categorical features; the results are shown in Table 5. The features Menopause, Involved nodes, Breast Quadrant, and Metastasis are inferred to be necessary attributes as per the chi-square tests.

Fig. 3 Bar plots for categorical variables. (a) Breast (b) Menopause (c) Metastasis (d) Breast quadrant (e) Involved nodes (f) History.

Data preprocessing

Data preprocessing converts unprocessed data into a format that is easy to read and use for analysis. This research used preprocessing to remove missing values and outliers and to optimize the input feature size. Data shuffling was first done to prevent the model from memorizing the order of the records. The dataset had 13 null values, represented as 'NaN', which were removed to attain uniformity. Label encoding was used to convert categorical text data into numerical values, since machine learning algorithms need numerical input; it assigns a distinct integer to each category.
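The chi-square screening and the label encoding described above might look as follows (the categorical sample below is fabricated for illustration, not the UCTH records):

```python
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.preprocessing import LabelEncoder

# Hypothetical slice of two categorical columns
df = pd.DataFrame({
    "metastasis": ["no", "no", "yes", "yes", "no", "yes", "yes", "no"] * 10,
    "diagnosis": ["benign", "benign", "malignant", "malignant", "benign",
                  "malignant", "benign", "benign"] * 10,
})

# Chi-square test of association between a categorical feature and the target
chi2, p_value, dof, _ = chi2_contingency(pd.crosstab(df["metastasis"], df["diagnosis"]))

# Label encoding: each category becomes a distinct integer
df["metastasis"] = LabelEncoder().fit_transform(df["metastasis"])  # 'no'->0, 'yes'->1
```

A feature would be kept when the resulting p-value falls below the chosen significance threshold.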
Data scaling avoids bias towards larger-valued features. Max-abs scaling was used to transform all values into the range between −1 and 1 by dividing each feature by its maximum absolute value.

Mutual information and Pearson's correlation were used to determine the important features. The correlation coefficients between each pair of characteristics are displayed as a heatmap in Pearson's correlation matrix; values of 1, 0, and −1 indicate positive, zero, and negative correlation, respectively. The heatmap is shown in Fig. 4, according to which Involved nodes, Metastasis, Tumor size, and Age were highly correlated with the diagnosis result. Mutual information is a univariate filtering method in which the significance of each feature is calculated separately; it quantifies the dependency between two variables using the concept of entropy. In Fig. 5, the features are ranked in order of importance. The important features, according to mutual information, are involved nodes, tumor size, metastasis, age, menopause, breast quadrant, and history. The distribution of the target variable (diagnosis result) is shown as a pie chart in Fig. 6. From the figure, it is seen that there is a slight imbalance in the data, which might induce bias in the model performance.
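The scaling and the two feature-relevance checks described above can be sketched as follows (the feature matrix and target are synthetic stand-ins, not the UCTH data):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import MaxAbsScaler

rng = np.random.default_rng(0)

# Hypothetical encoded feature matrix: age, tumor size, involved-nodes flag
X = np.column_stack([
    rng.uniform(25, 70, 200),      # age
    rng.uniform(0.5, 6.0, 200),    # tumor size (cm)
    rng.integers(0, 2, 200),       # involved nodes (0/1)
])
y = (X[:, 1] > 3.0).astype(int)    # toy target driven by tumor size

# Max-abs scaling maps every feature into [-1, 1]
X_scaled = MaxAbsScaler().fit_transform(X)

# Pearson correlation of each feature with the target
corr = np.array([np.corrcoef(X_scaled[:, i], y)[0, 1] for i in range(X.shape[1])])

# Mutual information (entropy-based dependence) of each feature with the target
mi = mutual_info_classif(X_scaled, y, random_state=0)
```

Because the toy target is driven by tumor size, both measures should rank that feature highest; on the real data the rankings are those reported in Figs. 4 and 5.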
Borderline-SMOTE is applied to the training data to balance the classes by creating synthetic samples; this balanced the dataset to 50% for both classes [19]. Furthermore, the dataset was divided into training and test data in the proportion of 70:30.

Fig. 4 Pearson's correlation heatmap.
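The split and resampling steps above can be sketched with a minimal SMOTE-style interpolation in NumPy. This is a simplified stand-in: the paper's actual Borderline-SMOTE (from the imbalanced-learn library) additionally restricts synthesis to minority points near the class boundary.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical imbalanced data: 70 benign (0), 30 malignant (1)
X = np.vstack([rng.normal(0, 1, (70, 4)), rng.normal(2, 1, (30, 4))])
y = np.array([0] * 70 + [1] * 30)

# 70:30 train/test split, stratified to preserve the class ratio
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

# SMOTE-style oversampling: interpolate between pairs of minority samples
minority = X_tr[y_tr == 1]
n_needed = int((y_tr == 0).sum() - (y_tr == 1).sum())
i = rng.integers(0, len(minority), n_needed)
j = rng.integers(0, len(minority), n_needed)
lam = rng.random((n_needed, 1))
synthetic = minority[i] + lam * (minority[j] - minority[i])

X_bal = np.vstack([X_tr, synthetic])
y_bal = np.concatenate([y_tr, np.ones(n_needed, dtype=int)])
```

Note that oversampling is applied only to the training portion, as in the paper, so the test set keeps its original class distribution.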
Fig. 5 Mutual information of features.

Fig. 6 Pie chart of the distribution of the target variable (diagnosis result).

Machine learning and explainable artificial intelligence

The study employed multiple machine learning classification techniques and used a stacking algorithm to combine these classifiers. The eight classifiers used are XGBoost, LightGBM, CatBoost, AdaBoost, KNN, Decision Tree, Logistic Regression, and Random Forest. Although they use different approaches, XGBoost, LightGBM, CatBoost, AdaBoost, and Random Forest integrate multiple tree models for better performance, while decision trees, logistic regression, and KNN work without combining multiple models. LightGBM and XGBoost are optimized for speed and performance. The outputs from these classifiers are used to train a meta-classifier via the stacking algorithm.
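A minimal sketch of such a stacked ensemble, including the GridSearchCV tuning used later in the pipeline, is given below. Only scikit-learn base learners are used here; the paper's XGBoost, LightGBM, and CatBoost models (separate libraries) are omitted, and the data, grid, and meta-classifier choice are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the preprocessed features
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Base learners feed their predictions into a meta-classifier
base = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("ada", AdaBoostClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)

# Hyperparameter tuning with GridSearchCV and 5-fold cross-validation
grid = {"final_estimator__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(stack, grid, cv=5).fit(X, y)
best_score = search.best_score_
```

The `estimator__parameter` naming lets GridSearchCV tune any component of the stack; in the paper each of the eight classifiers is tuned the same way.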
The stacking methodology integrates the unique strengths of each base model, which improves performance by reducing over-fitting. It also enhances generalization, as the meta-classifier is trained on predictions from the base models, mitigating the biases and errors that are prominent in any single model. The architecture is shown in Fig. 7. Hyperparameters are set before training to control how the model learns; tuning determines the ideal set of hyperparameters that optimizes the model's performance and generalizes the learning process to respond well to unseen data. In this research, GridSearchCV with 5-fold cross-validation is used for hyperparameter tuning. A detailed flowchart of the methodology is shown in Fig. 8.

Fig. 7 Architecture of the stacking classifier.

XAI techniques improve the model performance, interpret the results, and provide transparency to the model's predictions.
XAI is necessary to improve the model's reliability by identifying the causes of misclassifications and to provide transparency, which helps doctors in decision making and helps patients understand the rationale behind the treatment. It also helps identify the important features for the detection of breast cancer. This study used five XAI techniques: SHAP, LIME, Eli5, QLattice, and Anchor. SHAP (SHapley Additive exPlanations) is a method for interpreting complex machine learning models that assigns an importance value, known as the Shapley value, to each feature. These values measure the impact of each input attribute on the model's predictions [20]. SHAP provides detailed, individualized explanations for both doctors and patients, and the individual feature contributions are critical in refining the model by pinpointing unexpected feature interactions. SHAP is model-agnostic and can be applied to any machine learning model, in this case the STACK, a combination of various models.
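The Shapley idea can be illustrated with a toy exact computation (this is the underlying game-theoretic formula, not the SHAP library): for a small linear model with hypothetical weights, each feature's Shapley value is its weighted average marginal contribution over all coalitions, with absent features fixed at a baseline.

```python
from itertools import combinations
from math import factorial

import numpy as np

# Toy linear "model" with made-up weights and baseline (e.g. feature means);
# none of these numbers come from the paper
weights = np.array([0.8, 0.5, -0.3])
baseline = np.array([45.0, 3.0, 0.2])
x = np.array([60.0, 5.0, 1.0])  # one patient to explain

def value(coalition):
    """Model output when only the features in `coalition` take their real values."""
    z = baseline.copy()
    idx = list(coalition)
    z[idx] = x[idx]
    return float(weights @ z)

n = len(x)
shap_values = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(n):
        for S in combinations(others, r):
            # Shapley weight |S|! (n-|S|-1)! / n! for each marginal contribution
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            shap_values[i] += w * (value(S + (i,)) - value(S))
```

For a linear model this reduces to weight × (feature − baseline), and the contributions sum to the prediction minus the baseline output (SHAP's efficiency property); the SHAP library approximates the same quantity efficiently for complex models such as the stacked ensemble.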
LIME (Local Interpretable Model-agnostic Explanations) is a model-agnostic method that explains a sample's local neighborhood [21]. It makes small modifications to the data and observes their impact on the predictions, thereby identifying the most relevant features. It is especially useful for scenarios involving specific patient predictions, as it provides transparency at the individual level. Like SHAP, LIME works across different models without modification, and its explanations are easy for patients and non-technical stakeholders to understand. Eli5 (Explain Like I'm Five) explains the weights and predictions of the classifiers and offers both global and local explanations [22]. By analyzing the contributions of different features to the model's predictions, Eli5 can help identify biases inadvertently introduced during training; it helps in understanding the model's inner workings and in troubleshooting issues. For instance, if a model disproportionately weighs a non-relevant feature, this might indicate overfitting or data-quality issues. Its user-friendly interface makes it an excellent choice for tabular data such as this study's, as it can explain how individual input elements affect the model's predictions, offering insight into the workings of complex models.
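The perturb-and-fit idea behind LIME can be sketched as a local linear surrogate (a simplified stand-in for the LIME library; the black box, instance, and kernel width below are all assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box: "malignant" when a weighted risk score crosses a threshold
def black_box(X):
    return (0.7 * X[:, 0] + 0.3 * X[:, 1] > 0.5).astype(float)

x0 = np.array([0.6, 0.4])  # the single prediction to explain

# Perturb the neighbourhood of x0 and weight perturbations by proximity to x0
Z = x0 + rng.normal(0.0, 0.3, size=(500, 2))
proximity = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# Fit an interpretable (linear) surrogate that mimics the black box locally
surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z), sample_weight=proximity)
local_importance = surrogate.coef_
```

The surrogate's coefficients recover the locally dominant feature (here the first one), which is exactly the kind of per-patient explanation described above.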
QLattice is an explainability technique that searches for patterns and connections in data [23]. Rather than merely fitting the data to a predetermined model, it investigates a broad range of alternative models that offer understandable explanations for the relationships in the data. The QLattice model is a set of mathematical expressions that can connect input and output via infinitely many spatial paths [23]. Unlike many machine learning models, QLattice focuses on finding simple, interpretable formulas, so users can understand how inputs are transformed into outputs, making the models inherently transparent. It identifies key features and shows how they interact to affect the outcome. It is also robust to new data, as it can adapt by exploring new models, making it responsive to changes in data patterns. Anchor is another model-agnostic interpretation method, created by Marco Tulio Ribeiro.
It makes use of if-then rules known as anchors [24]. In medical diagnosis, an anchor might specify that if certain symptoms and test results are present, the diagnosis will invariably be a specific condition. It explains a particular decision by identifying the critical factors that significantly influence it, and it accurately describes why that decision was made even when the overall model behavior is complex and non-linear, making it locally faithful. This method helps users develop confidence in the model by outlining the justification for each prediction. A summary of the characteristics of the XAI techniques used is given in Table 6. Using several techniques enhances the reliability and versatility of the interpretative outputs through cross-verification of explanations. The study's XAI techniques complement each other in terms of speed, adaptability, and ease of interpretability for doctors and patients; because each offers different insights into the model's decision-making process, integrating all five leverages their unique strengths.

Table 6 Characteristics of five XAI techniques used.
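An anchor-style rule can be evaluated by its precision (how often the rule's conclusion matches the model) and coverage (how often the rule applies). The sketch below checks one hand-written candidate rule on a fabricated cohort; Anchor's actual algorithm searches for such rules automatically via a bandit procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated cohort: tumor size in cm and a 0/1 metastasis flag
X = np.column_stack([rng.uniform(0.0, 8.0, 1000),
                     rng.integers(0, 2, 1000).astype(float)])
# Stand-in "model": predicts malignant when the tumor is large or metastasis is present
pred = ((X[:, 0] > 4.0) | (X[:, 1] == 1.0)).astype(int)

# Candidate anchor: IF tumor_size > 4 AND metastasis == 1 THEN predict malignant
covered = (X[:, 0] > 4.0) & (X[:, 1] == 1.0)
precision = pred[covered].mean()  # how often the rule agrees with the model
coverage = covered.mean()         # fraction of cases the rule applies to
```

A rule with precision near 1 on its covered region is "locally faithful" in exactly the sense described above, even though it says nothing about the model's behavior elsewhere.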
Fig. 8 Flow diagram-based methodology.