{"id":185337,"date":"2025-08-29T19:14:16","date_gmt":"2025-08-29T19:14:16","guid":{"rendered":"https:\/\/www.europesays.com\/us\/185337\/"},"modified":"2025-08-29T19:14:16","modified_gmt":"2025-08-29T19:14:16","slug":"assessment-of-university-students-earthquake-coping-strategies-using-artificial-intelligence-methods","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/185337\/","title":{"rendered":"Assessment of university students\u2019 earthquake coping strategies using artificial intelligence methods"},"content":{"rendered":"<p>In this section; information about Coping with Earthquake Stress Strategy Dataset, Confusion Matrix and Performance Metrics, Matthews Correlation Coefficient, Chi-Square Test, Correlation Matrix, Cross Validation, Artificial Intelligence Models, Logistic Regression, Bagging and Random Forest are given. Flow chart of the study is shown in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s41598-025-17555-4#Fig1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a>. All methods were performed in accordance with the relevant guidelines and regulations, and the study was approved by Ethics Committee for Social and Behavioral Sciences at Necmettin Erbakan University, in line with the Declaration of Helsinki.<\/p>\n<p>Coping with earthquake stress strategy dataset (CESS dataset)<\/p>\n<p>In the study, the Scale of Coping Strategies with Earthquake Stress developed by Yondem and Eren was used<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 19\" title=\"Yondem, Z. D. &amp; ve Eren, A. Deprem Stresi &#x130;le Ba&#x15F; Etme stratejileri &#xD6;l&#xE7;e&#x11F;inin Ge&#xE7;erlik ve G&#xFC;venirlik &#xC7;al&#x131;&#x15F;malar&#x131;. T&#xFC;rk Psikolojik Dan&#x131;&#x15F;ma Ve Rehberlik Dergisi. 
3 (30), 60&#x2013;75 (2008).\" href=\"http:\/\/www.nature.com\/articles\/s41598-025-17555-4#ref-CR19\" id=\"ref-link-section-d204527936e479\" rel=\"nofollow noopener\" target=\"_blank\">19<\/a>. Earthquake Stress Coping Strategies Scale was applied to 858 people. The data were collected electronically and made available for analysis. The created data set consists of 24 variables in total. These extracted features are transferred to the dataset. Using the cross-validation method, the trained data are directed to classification models. The trained algorithms provide output according to the determined classes.<\/p>\n<p>In the data set creation stages, demographic questions were first asked and then a scale consisting of 16 questions was used. Informed consent was obtained from all participants prior to their involvement in the study. Data were collected from 858 university students between 19.01.2024 and 07.03.2024. In total, a data set consisting of 24 variables was obtained. The 24 variables are shown in Table <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"table anchor\" href=\"http:\/\/www.nature.com\/articles\/s41598-025-17555-4#Tab1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a>. The dataset in Table\u00a0<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"table anchor\" href=\"http:\/\/www.nature.com\/articles\/s41598-025-17555-4#Tab1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a> includes both categorical variables (e.g., gender, residence type) and ordinal Likert-scale items. Categorical variables were processed using label encoding to convert them into numerical values. The Likert-scale items were treated as ordinal data to preserve their inherent order. No normalization or scaling was applied to these variables. 
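The encoding step described above can be sketched as follows. This is a minimal illustration assuming pandas (the study does not specify its software), and the column names and values are hypothetical:

```python
import pandas as pd

# Toy records mimicking the dataset's structure: categorical demographics
# plus ordinal Likert-scale items (hypothetical column names and values).
df = pd.DataFrame({
    "gender": ["female", "male", "female"],
    "residence_type": ["dormitory", "family", "rental"],
    "CESS_1": [4, 2, 5],  # Likert items kept as-is: already ordinal integers
    "CESS_2": [3, 1, 4],
})

# Label-encode the categorical variables into integer codes;
# the Likert items are left untouched, and nothing is scaled or normalized.
for col in ["gender", "residence_type"]:
    df[col] = df[col].astype("category").cat.codes

print(df.dtypes)  # all columns are now numeric
```

Note that `cat.codes` assigns codes in sorted category order; any consistent mapping would serve the same purpose here.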
Class labels were determined from the average scores participants received on the 16-item CESS scale, using the following thresholds: 1.00–2.50 = Disagree; 2.51–3.00 = Agree; 3.01–5.00 = Strongly Agree. While setting these thresholds, the frequency distribution of the mean CESS scores was examined, and the frequencies were observed to cluster within these intervals; the distribution is illustrated in Fig. 2. The data were therefore categorized into three distinct classes for classification purposes, an approach that reflects the ordinal nature of Likert-type data. Standard classifiers were selected over ordinal-specific ones because the clustered distributions enabled effective nominal separation, as evidenced by the high MCC values. The three classes are:

Table 1 Coping with earthquake stress strategy dataset variables.

Fig. 1 Flow chart of the study.

Fig. 2 Threshold-based frequency distributions.

Table 2 Sample confusion matrix notation and explanations.

- Disagree (the person disagrees with the coping with earthquake stress strategy).
- Agree (the person agrees with the coping with earthquake stress strategy).
- Strongly agree (the person strongly agrees with the coping with earthquake stress strategy).

The logistic regression algorithm was chosen as a baseline method for its high interpretability and its suitability for multi-class classification problems. Ensemble algorithms such as random forest and bagging were preferred for their robustness to the complex, non-linear relationships often observed in psychological data, their resistance to overfitting, and their ability to achieve high accuracy; these ensemble methods improve generalizability by reducing variance.

Confusion matrix and performance metrics

The confusion matrix and performance metrics are basic tools used to evaluate and compare the performance of artificial intelligence algorithms.
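The threshold rule used in the dataset section to assign the three class labels can be sketched as a simple mapping from mean CESS score to class; a minimal illustration, not the study's actual code:

```python
def cess_class(mean_score: float) -> str:
    """Map a mean CESS score (on the 1-5 Likert range) to one of the
    three classes using the study's thresholds:
    1.00-2.50 = Disagree; 2.51-3.00 = Agree; 3.01-5.00 = Strongly Agree."""
    if mean_score <= 2.50:
        return "Disagree"
    elif mean_score <= 3.00:
        return "Agree"
    else:
        return "Strongly Agree"

# A participant whose 16 item scores average to 2.8 falls in the middle band.
print(cess_class(2.8))  # prints "Agree"
```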
These tools are used to determine how accurately the model performs in classification and prediction tasks [20].

The confusion matrix is a table used to evaluate the performance of artificial intelligence algorithms. It provides four pieces of information: TP, TN, FP, and FN. From this information, it can be observed which types of errors the algorithm makes and in which classes it is successful [21,22,23]. The sample confusion matrix notation and explanations used in this study are presented in Table 2, and the calculation of the TP, TN, FP, and FN values is presented in Table 3.

Table 3 The calculation of TP, TN, FP and FN values.

Performance metrics are statistical measures used in artificial intelligence and data science to quantify how well a model performs [24]. They are used to evaluate the accuracy, precision, errors, and overall performance of the model [25,26]. In this study, the accuracy, precision, recall, and F1-score metrics are used.
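Assuming scikit-learn (the study does not specify its software), the confusion matrix and these metrics can be computed from a set of predictions as follows; the labels below are toy values, not the study's results:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, precision_score, recall_score)

# Hypothetical ground-truth and predicted labels for the three classes.
y_true = ["Disagree", "Agree", "Strongly Agree",
          "Agree", "Disagree", "Strongly Agree"]
y_pred = ["Disagree", "Agree", "Agree",
          "Agree", "Disagree", "Strongly Agree"]

labels = ["Disagree", "Agree", "Strongly Agree"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)  # rows: true class; columns: predicted class

# Multi-class metrics; macro-averaging weights every class equally.
acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
```

The multi-class MCC discussed later in this section is available analogously via `sklearn.metrics.matthews_corrcoef`.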
The descriptions and formulas of the metrics used are given in Table 4.

Table 4 Descriptions and formulas of performance metrics and the Matthews correlation coefficient.

Four main performance metrics are used to evaluate the success of the model. Accuracy is the ratio of correct predictions to the total number of predictions. Precision is the proportion of correctly predicted positive results among all positive predictions. Recall measures how well the model identifies the actual positive cases. The F1-score is the harmonic mean of precision and recall and is used to measure the overall performance of the model. The formulas for these metrics are presented in Table 4.

Matthews correlation coefficient (MCC)

The Matthews correlation coefficient (MCC) is a robust and comprehensive statistical measure used to evaluate the performance of classification models, and it is especially valued for providing reliable results on imbalanced datasets. Unlike basic metrics such as accuracy, MCC takes true positives, true negatives, false positives, and false negatives into account simultaneously, offering a more nuanced assessment [27]. Its values range from −1 to +1, where +1 indicates perfect classification, 0 corresponds to random guessing, and −1 signifies complete misclassification. While most commonly applied to binary classification problems, MCC also has generalized versions suitable for multi-class classification. By reflecting the balance of performance across all classes, MCC provides a fair and holistic evaluation of a model. In this study, the MCC values calculated for the logistic regression, random forest, and bagging algorithms were high, underscoring their accuracy and reliability; the MCC values are reported in the results tables for each algorithm.

Chi-square test (χ²)

The chi-square test is a statistical method used in classification problems to determine whether a feature has a significant association with the target classes. It is especially preferred when working with categorical data. The test quantifies the degree to which each feature is related to the classes, with the aim of identifying which features truly contribute to the classification process [28].

The chi-square test can be applied directly to the data without any model training [29], which allows significant variables to be identified independently of any specific model. As a result of the test, each feature receives a score reflecting its ability to discriminate between classes. Features with high scores are considered more important, while those with low scores can be deemed unnecessary and removed from the dataset.

This approach not only speeds up model training but can also improve accuracy. In addition, its fast execution on large datasets is a significant advantage.
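Model-free feature scoring of this kind can be sketched with scikit-learn's `chi2` scorer (an assumed tool, shown here with toy data; the study's actual χ² values in Table 5 come from its own dataset):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Toy non-negative ordinal features (rows: participants, columns: items)
# with toy two-class labels; all values are hypothetical.
X = np.array([[5, 1, 3],
              [4, 2, 3],
              [1, 3, 3],
              [2, 4, 3],
              [1, 5, 3],
              [5, 2, 3]])
y = np.array([1, 1, 0, 0, 0, 1])

scores, p_values = chi2(X, y)  # one chi-square score per feature
print(scores)                  # higher score = stronger class association

# Keep only the k best-scoring features, dropping the low-scoring ones.
X_top = SelectKBest(chi2, k=2).fit_transform(X, y)
```

In this toy example the third column is constant across classes, so its score is zero and `SelectKBest` discards it.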
The chi-square test is particularly effective and reliable as a preliminary analysis tool when working with categorical data, or with numerical data that can be discretized into categories.

As a result of the chi-square test, the three features contributing most significantly to the classification process were identified as CESS_10 (“I believe in fate and that it cannot be changed,” χ² = 130.674), CESS_13 (“I try to be more optimistic about life,” χ² = 85.521), and CESS_11 (“I fulfill my religious duties more often,” χ² = 79.574) (Table 5).

Table 5 Results of χ² variables.

Correlation matrix

A correlation matrix is a symmetric square matrix that quantitatively represents the relationships between the variables of a multivariate dataset. It contains the Pearson correlation coefficient for each pair of variables, ranging from −1 to +1 and indicating the direction and strength of the relationship. The diagonal elements of the matrix are always 1, because every variable is perfectly correlated with itself. The correlation matrix is a fundamental tool in multidimensional data analysis, factor analysis, and various statistical modeling techniques, providing researchers with critical information for understanding the complex relationships between variables [30,31].

- −1 represents a perfect negative correlation between two variables.
- 0 represents the absence of a linear relationship between two variables.
- +1 represents a perfect positive correlation between two variables.

Cross validation

Cross-validation is a statistical resampling method used to evaluate the performance of an artificial intelligence model on unseen data as objectively and accurately as possible. The dataset is divided into k parts; in each iteration one part is used as the test set and the remaining parts form the training set. This process is repeated for all k parts, the performance of the model is measured at each iteration, and the overall performance is determined by averaging the results. This approach shows how the model performs on different data subsets and yields more reliable results [22,32,33]. Cross-validation is illustrated in Fig. 3.
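The procedure can be sketched with scikit-learn's `KFold` (an assumed tool; the synthetic data, fold count, and model below are illustrative, not the study's configuration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in data shaped like the CESS dataset:
# 858 samples, 23 predictors, 3 classes.
X, y = make_classification(n_samples=858, n_features=23, n_informative=6,
                           n_classes=3, random_state=42)

# Plain (non-stratified) k-fold with shuffling: each of the k parts serves
# once as the test set while the remaining parts form the training set.
cv = KFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(scores.mean())  # overall performance = average over the k folds
```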
In this study, stratified folds were intentionally not used: given the relatively small size and clustered nature of the dataset, stratification could produce overly homogeneous folds and affect model training. Random folds were used instead, to better reflect the natural variability of the data.

Fig. 3 Cross validation.

Artificial intelligence models

Artificial intelligence models are computer systems designed to perform various tasks. These models attempt to mimic human intelligence and demonstrate advanced capabilities in certain areas [34]. Logistic regression, bagging, and random forest algorithms were used in this study.

Logistic regression

Logistic regression is a statistical method and artificial intelligence algorithm used for classification problems.
It is mainly used to model the relationship between independent variables and a categorical dependent variable. It applies the logistic function to transform the input variables into a probability between 0 and 1, which typically represents the probability of an event occurring or of belonging to a class. Logistic regression is widely used in binary classification problems in particular, but it can also be adapted to multi-class problems [35,36,37].

Bagging

Bagging is an ensemble learning method used to improve prediction performance. Random bootstrap samples are drawn from the original dataset, and an independent model is trained on each sample; the predictions of all models are then combined into a single prediction, by voting for classification problems and by averaging for regression problems [38,39].

Random forest

Random forest is a powerful artificial intelligence algorithm among the ensemble learning methods; it combines a large number of decision trees.
The algorithm is based on the bagging method and additionally uses feature randomization: each decision tree is trained on a randomly selected sample of the original dataset, and a randomly selected subset of features is considered at each node [40]. This approach increases the diversity of the model and reduces the risk of overfitting. In the prediction phase, the results of all trees are combined, using majority voting for classification and averaging for regression. Random forest offers high accuracy, good generalization ability, and applicability to different types of data [26,41,42].
rsion-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/185337\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/185338"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=185337"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=185337"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=185337"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}