{"id":42474,"date":"2025-07-06T04:19:16","date_gmt":"2025-07-06T04:19:16","guid":{"rendered":"https:\/\/www.europesays.com\/us\/42474\/"},"modified":"2025-07-06T04:19:16","modified_gmt":"2025-07-06T04:19:16","slug":"chinese-inscription-restoration-based-on-artificial-intelligent-models","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/42474\/","title":{"rendered":"Chinese inscription restoration based on artificial intelligent models"},"content":{"rendered":"<p>Related technologies and practices<\/p>\n<p>Machine learning is a branch of artificial intelligence that aims to endow computers with human-like intelligence<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 9\" title=\"Liu, H., Chaudhary, M. &amp; Wang, H. Towards trustworthy and aligned machine learning: a data-centric survey with causality perspectives. arXiv preprint arXiv:2307.16851. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.2307.16851&#010;                  &#010;                 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR9\" id=\"ref-link-section-d661859020e422\" rel=\"nofollow noopener\" target=\"_blank\">9<\/a>. Deep learning, an important branch of machine learning, uses complex structures to construct powerful multi-layer neural network models<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 10\" title=\"LeCun, Y., Bengio, Y. &amp; Hinton, G. Deep learning. Nature 521, 436&#x2013;444 (2015).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR10\" id=\"ref-link-section-d661859020e426\" rel=\"nofollow noopener\" target=\"_blank\">10<\/a>.<\/p>\n<p>When inscriptions have missing characters, filling them in becomes a crucial and challenging task in inscription restoration. 
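Concretely, the task can be framed as predicting a masked character from its surrounding context. A toy sketch of this framing (the mini-corpus and the mask symbol □ are illustrative assumptions, not the paper's data or method):

```python
from collections import Counter

def predict_masked(corpus, text, mask="□", top_n=3):
    """Rank candidate characters for the masked slot by how often they
    appear between the same left/right neighbors across the corpus."""
    i = text.index(mask)
    left = text[i - 1] if i > 0 else None
    right = text[i + 1] if i + 1 < len(text) else None
    counts = Counter()
    for line in corpus:
        for j, ch in enumerate(line):
            l = line[j - 1] if j > 0 else None
            r = line[j + 1] if j + 1 < len(line) else None
            score = (l == left) + (r == right)   # 0, 1, or 2 matching neighbors
            if score:
                counts[ch] += score
    total = sum(counts.values())
    return [(ch, c / total) for ch, c in counts.most_common(top_n)]
```

A Transformer replaces the hand-built neighbor counts with learned contextual representations, but the input/output contract (context in, ranked characters out) is the same.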
Currently, the mainstream deep learning models for this task are based on the Transformer, a powerful architecture that processes sequential data through self-attention mechanisms and captures long-distance dependencies in text<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 11\" title=\"Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.1706.03762&#010;                  &#010;                 (2017).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR11\" id=\"ref-link-section-d661859020e433\" rel=\"nofollow noopener\" target=\"_blank\">11<\/a>. BERT<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 12\" title=\"Devlin, J., Chang, M.-W., Lee, K. &amp; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proc. 2019 conference of the North American Chapter of the Association For Computational Linguistics: Human Language Technologies Vol. 1 (long and short papers). 4171&#x2013;4186. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.1810.04805&#010;                  &#010;                 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR12\" id=\"ref-link-section-d661859020e437\" rel=\"nofollow noopener\" target=\"_blank\">12<\/a> and RoBERTa<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 13\" title=\"Liu, Y. et al. Roberta: A robustly optimized Bert pretraining approach. arXiv preprint arXiv:1907.11692. 
&#010;                  https:\/\/doi.org\/10.48550\/arXiv.1907.11692&#010;                  &#010;                 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR13\" id=\"ref-link-section-d661859020e441\" rel=\"nofollow noopener\" target=\"_blank\">13<\/a> are two famous models based on Transformers. In practice, BERT and RoBERTa have shown excellent performance in natural language understanding tasks. Assael et al. proposed a model called Pythia, which combines LSTM<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 14\" title=\"Hochreiter, S. &amp; Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735&#x2013;1780 (1997).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR14\" id=\"ref-link-section-d661859020e445\" rel=\"nofollow noopener\" target=\"_blank\">14<\/a> and an attention mechanism to predict missing ancient Greek inscriptions. On the PHI-ML dataset, Pythia\u2019s prediction error rate reached 30.1%, lower than that of epigraphers<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 15\" title=\"Assael, Y., Sommerschield, T. &amp; Prag, J. Restoring ancient text using deep learning: a case study on Greek epigraphy. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 6369&#x2013;6376, &#010;                  https:\/\/doi.org\/10.18653\/v1\/D19-1668&#010;                  &#010;                 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR15\" id=\"ref-link-section-d661859020e449\" rel=\"nofollow noopener\" target=\"_blank\">15<\/a>. Afterwards, Assael et al. 
further proposed Ithaca, which was inspired by BigBird<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 16\" title=\"Zaheer, M. et al. Big bird: transformers for longer sequences. Adv. neural Inf. Process. Syst. 33, 17283&#x2013;17297 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR16\" id=\"ref-link-section-d661859020e454\" rel=\"nofollow noopener\" target=\"_blank\">16<\/a>. Its architecture consists of multiple Transformer decoders that can effectively capture contextual information. It achieved an accuracy of 62% when restoring damaged text. After collaborating with epigraphers, the accuracy was improved to 72%<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 17\" title=\"Assael, Y. et al. Restoring and attributing ancient texts using deep neural networks. Nature 603, 280&#x2013;283 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR17\" id=\"ref-link-section-d661859020e458\" rel=\"nofollow noopener\" target=\"_blank\">17<\/a>. Fetaya et al. achieved an accuracy of 88.5% in the testing task of recovering Babylonian script based on recurrent neural networks<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"Fetaya, E., Lifshitz, Y., Aaron, E. &amp; Gordin, S. Restoration of fragmentary Babylonian texts using recurrent neural networks. Proc. Natl. Acad. Sci. 117, 22743&#x2013;22751. &#010;                  https:\/\/doi.org\/10.1073\/pnas.2003794117&#010;                  &#010;                 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR18\" id=\"ref-link-section-d661859020e462\" rel=\"nofollow noopener\" target=\"_blank\">18<\/a>. Kang et al. 
proposed a multi-task learning method based on a Transformer network to effectively restore and transform the historical records of the Joseon Dynasty<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 19\" title=\"Kang, K. et al. Restoring and mining the records of the Joseon dynasty via neural language modeling and machine translation. In Proceedings of 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021. 4031&#x2013;4042, &#010;                  https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.317&#010;                  &#010;                 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR19\" id=\"ref-link-section-d661859020e466\" rel=\"nofollow noopener\" target=\"_blank\">19<\/a>.<\/p>\n<p>Since Chinese characters are composed of graphic radicals rather than alphabetic letters, NLP models for restoring missing or incomplete ancient Chinese texts must rely heavily on context. Yu et al. achieved good results in automatic sentence-breaking and punctuation models for ancient Chinese by training BERT on ancient Chinese texts<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 20\" title=\"Yu, J., Wei, Y. &amp; Zhang, Y. Automatic ancient Chinese texts segmentation based on BERT. J. Chin. Inf. Process. 33, 57&#x2013;63 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR20\" id=\"ref-link-section-d661859020e473\" rel=\"nofollow noopener\" target=\"_blank\">20<\/a>. Dongbo et al. constructed SikuBERT and SikuRoBERTa for the intelligent processing of ancient Chinese texts using a high-quality Chinese text corpus. 
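The self-attention mechanism at the heart of these Transformer-based models can be reduced to a minimal single-head sketch; using identity query/key/value projections is a simplifying assumption made here for brevity, not a property of any cited model:

```python
import math

def softmax(xs):
    m = max(xs)                      # shift for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """Single-head scaled dot-product self-attention with identity
    Q/K/V projections: every position attends to every other, which is
    what lets the model capture long-distance dependencies."""
    d = len(X[0])
    out = []
    for q in X:                      # each row is the query for one position
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        w = softmax(scores)          # attention weights over all positions
        out.append([sum(wj * vj[i] for wj, vj in zip(w, X)) for i in range(d)])
    return out
```

Each output vector is a weighted mixture of all input positions, so distance in the sequence imposes no penalty on interaction, unlike in recurrent models.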
The result showed that their proposed models outperformed the basic BERT model and other models in ancient Chinese text-processing tasks<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 21\" title=\"Wang, D. et al. Construction and application of pre-trained models of siku quanshu in orientation to digital humanities. Libr. Trib. 42, 31&#x2013;43 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR21\" id=\"ref-link-section-d661859020e477\" rel=\"nofollow noopener\" target=\"_blank\">21<\/a>. Sheng et al. constructed a high-quality text corpus of ancient books of Chinese traditional medicine, trained and compared several deep learning models, and found that the Roberta model had the best performance, which could help the restoration of ancient books of traditional Chinese medicine<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 22\" title=\"Sheng, W., Lu, Y., Liu, W., Hu, W. &amp; Zhou, C. The restoration of missing texts in ancient books of Traditional Chinese Medicine based on deep learning. Chin. J. Med. Libr. Inf. Sci. 31, 1&#x2013;7. &#010;                  https:\/\/doi.org\/10.3969\/j.issn.1671-3982.2022.08.001&#010;                  &#010;                 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR22\" id=\"ref-link-section-d661859020e481\" rel=\"nofollow noopener\" target=\"_blank\">22<\/a>. Zheng J. and Sun J. used ensemble learning methods to integrate Chinese BERT, Chinese RoBERTa, SikuBERT, and SikuRoBERTa for prediction tasks related to ancient Chinese. 
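Such an ensemble can be sketched as interpolating two models' output distributions, with the interpolation weight chosen by grid search on a development set. The toy scoring functions in the test are illustrative stand-ins, not the cited systems:

```python
def ensemble_predict(p_a, p_b, w):
    """Interpolate two models' character distributions with weight w on model A."""
    chars = set(p_a) | set(p_b)
    return {c: w * p_a.get(c, 0.0) + (1 - w) * p_b.get(c, 0.0) for c in chars}

def grid_search(dev_set, p_a_fn, p_b_fn, steps=11):
    """Pick the interpolation weight that maximizes top-1 accuracy on a
    dev set of (context, gold_character) pairs."""
    best_w, best_acc = 0.0, -1.0
    for i in range(steps):
        w = i / (steps - 1)
        correct = 0
        for ctx, gold in dev_set:
            mix = ensemble_predict(p_a_fn(ctx), p_b_fn(ctx), w)
            if max(mix, key=mix.get) == gold:
                correct += 1
        acc = correct / len(dev_set)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```

The same scheme extends to more than two models by searching over a weight vector instead of a single scalar.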
The ensemble of SikuBERT and SikuRoBERTa, with weights selected by grid search, achieved the best performance among the ensemble models<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 23\" title=\"Zheng, J. &amp; Sun, J. Ensembles of BERT models for ancient chinese processing. In 16th International Conference on Advanced Computer Theory and Engineering (ICACTE) 1&#x2013;7. &#010;                  https:\/\/doi.org\/10.1109\/ICACTE59887.2023.10335238&#010;                  &#010;                 (IEEE, 2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR23\" id=\"ref-link-section-d661859020e485\" rel=\"nofollow noopener\" target=\"_blank\">23<\/a>. Han et al. proposed RAC-BERT, an improved BERT model based on Chinese character radicals. By randomly replacing Chinese characters with ones sharing the same radical, the model reduced the computational complexity while maintaining higher performance<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 24\" title=\"Han, L. et al. RAC-BERT: character radical enhanced BERT for ancient Chinese. In CCF International Conference on Natural Language Processing and Chinese Computing 759&#x2013;771. &#010;                  https:\/\/doi.org\/10.1007\/978-3-031-44696-2_59&#010;                  &#010;                 (Springer, 2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR24\" id=\"ref-link-section-d661859020e489\" rel=\"nofollow noopener\" target=\"_blank\">24<\/a>.<\/p>\n<p>CV models are also useful in inscription restoration. These tasks involve partially damaged characters in inscriptions. 
At present, the main model architectures used for ancient inscriptions are convolutional neural networks (CNN), generative adversarial networks (GAN), Transformers, and their extended architectures.<\/p>\n<p>Convolutional neural network is a common model composed of convolutional layers, activation functions, pooling layers, and fully connected layers. CNN effectively extracts local features from images<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 25\" title=\"Hadji, I. &amp; Wildes, R. P. What do we understand about convolutional networks? arXiv preprint arXiv:1803.08834. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.1803.08834&#010;                  &#010;                 (2018).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR25\" id=\"ref-link-section-d661859020e500\" rel=\"nofollow noopener\" target=\"_blank\">25<\/a>. Zhang used CNN to extract features from residual inscriptions. His model adopted the cross-layer idea of ResNet<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 26\" title=\"He, K., Zhang, X., Ren, S. &amp; Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770&#x2013;778. &#010;                  https:\/\/doi.org\/10.1109\/CVPR.2016.90&#010;                  &#010;                 (2016).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR26\" id=\"ref-link-section-d661859020e504\" rel=\"nofollow noopener\" target=\"_blank\">26<\/a>. By adding residual modules based on VGGNet<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 27\" title=\"Simonyan, K. &amp; Zisserman, A. Very deep convolutional networks for large-scale image recognition. 
arXiv preprint arXiv:1409.1556. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.1409.1556&#010;                  &#010;                 (2014).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR27\" id=\"ref-link-section-d661859020e508\" rel=\"nofollow noopener\" target=\"_blank\">27<\/a>, the accuracy of residual inscription text recognition was improved<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 28\" title=\"Zhang, W. Q. Research on Recognition Algorithm of Damaged Inscriptions Based on Deep Learning, North University of China. &#010;                  https:\/\/doi.org\/10.27470\/d.cnki.ghbgc.2020.000398&#010;                  &#010;                 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR28\" id=\"ref-link-section-d661859020e512\" rel=\"nofollow noopener\" target=\"_blank\">28<\/a>. Xing and Ren addressed the issue of insufficient character information extraction in existing models by improving the context encoder<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 29\" title=\"Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T. &amp; Efros, A. A. Context encoders: feature learning by inpainting. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 2536&#x2013;2544. 
&#010;                  https:\/\/doi.org\/10.1109\/CVPR.2016.278&#010;                  &#010;                 (2016).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR29\" id=\"ref-link-section-d661859020e516\" rel=\"nofollow noopener\" target=\"_blank\">29<\/a> and adding dilated convolutions to learn the structural features of characters, thereby repairing missing stroke inscriptions<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 30\" title=\"Xing, C. &amp; Ren, Z. Binary inscription character inpainting based on improved context encoders. IEEE Access 11, 55834&#x2013;55843 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR30\" id=\"ref-link-section-d661859020e521\" rel=\"nofollow noopener\" target=\"_blank\">30<\/a>. Feng et al. used DenseNet<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 31\" title=\"Huang, G., Liu, Z., Van Der Maaten, L. &amp; Weinberger, K. Q. Densely connected convolutional networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4700&#x2013;4708. &#010;                  https:\/\/doi.org\/10.1109\/CVPR.2017.243&#010;                  &#010;                 (2017).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR31\" id=\"ref-link-section-d661859020e525\" rel=\"nofollow noopener\" target=\"_blank\">31<\/a> from the backbone network to alleviate gradient vanishing and model degradation, while enhancing feature reuse and transfer to improve the recognition performance of ancient handwritten texts<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 32\" title=\"Feng, R., Zhao, F., Chen, S., Zhang, S. &amp; Zhu, S. A handwritten ancient text detector based on improved feature pyramid network. 
Pattern Recognit. Lett. 172, 195&#x2013;202 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR32\" id=\"ref-link-section-d661859020e529\" rel=\"nofollow noopener\" target=\"_blank\">32<\/a>. Zhao et al. proposed the Ga-RFR network, which reduces feature redundancy in the generated feature maps by using gated convolutions instead of regular convolutions, thereby improving the restoration performance of Chinese inscriptions<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 33\" title=\"Zhao, L. et al. Ga-RFR: Recurrent Feature Reasoning with Gated Convolution for Chinese Inscriptions Image Inpainting. In Proc. International Conference on Artificial Neural Networks. 320&#x2013;331. &#010;                  https:\/\/doi.org\/10.1007\/978-3-031-44210-0_26&#010;                  &#010;                 (Springer, 2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR33\" id=\"ref-link-section-d661859020e533\" rel=\"nofollow noopener\" target=\"_blank\">33<\/a>. Lou Y. also used gated convolution to improve deep networks, reducing the generation of a large amount of redundant feature information, and combining multiple loss functions to enhance the model\u2019s ability to repair inscription images<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 34\" title=\"Lou, Y. Research on Inscription Image Inpainting Technology Based on Gated Convolution and Attention Mechanism, Qilu University of Technology. 
&#010;                  https:\/\/doi.org\/10.27278\/d.cnki.gsdqc.2024.000626&#010;                  &#010;                 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR34\" id=\"ref-link-section-d661859020e537\" rel=\"nofollow noopener\" target=\"_blank\">34<\/a>.<\/p>\n<p>A GAN consists of a generator and a discriminator, which compete constantly during training, ultimately enabling the generator to produce samples that are increasingly difficult to distinguish from real data<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 35\" title=\"Goodfellow, I. et al. Generative adversarial networks. Commun. ACM 63, 139&#x2013;144 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR35\" id=\"ref-link-section-d661859020e544\" rel=\"nofollow noopener\" target=\"_blank\">35<\/a>. This approach and functionality make GANs well suited to image-based text restoration tasks. GANs provide a new solution for the restoration of inscriptions, especially those with large-scale missing regions, because the network can fill in the gaps in the image through generative techniques. For example, Li N. and Yang W. used a GAN with global and local consistency preservation (GLC-GAN) to complete handwritten text images, and proposed a two-level completion system consisting of rough and fine completion<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 36\" title=\"Nong-qin, L. &amp; Wei-xin, Y. Handwritten character completion based on generative adversarial networks. J. Graph. 40, 878 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR36\" id=\"ref-link-section-d661859020e548\" rel=\"nofollow noopener\" target=\"_blank\">36<\/a>. Wenjun et al. 
imitated human writing behavior and proposed a dual-branch structure character recovery network EA-GAN that integrated GAN and attention. This network has good performance in extracting character features, and even if the damaged area of the text is large, EA-GAN can also accurately recover damaged characters<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 37\" title=\"Wenjun, Z., Benpeng, S., Ruiqi, F., Xihua, P. &amp; Shanxiong, C. EA-GAN: restoration of text in ancient Chinese books based on an example attention generative adversarial network. Herit. Sci. 11, 42 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR37\" id=\"ref-link-section-d661859020e552\" rel=\"nofollow noopener\" target=\"_blank\">37<\/a>. Liu et al. used an edge detection module to collect edge information on character strokes and guided GAN to learn the structure and semantics of characters. Their method achieved better repair quality<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 38\" title=\"Liu, H., He, X., Zhu, J. &amp; He, X. Inscription-image inpainting with edge structure reconstruction. In International Conference on Image and Graphics 16&#x2013;27. &#010;                  https:\/\/doi.org\/10.1007\/978-3-031-46311-2_2&#010;                  &#010;                 (Springer, 2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR38\" id=\"ref-link-section-d661859020e556\" rel=\"nofollow noopener\" target=\"_blank\">38<\/a>.<\/p>\n<p>Based on the Transformer architecture, Chen et al. proposed a lightweight Qin Bamboo Slips text recognition model QBSC Transformer, which used a fusion of separable convolution and window self-attention mechanism to extract global features of Qin Bamboo Slips text. 
It significantly reduced computational complexity while maintaining high accuracy<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 39\" title=\"Chen, M., Chen, B. &amp; Xia, R. Qin bamboo slips character recognition algorithm based on lightweight transformer model. Comput. Simul. 42, 459&#x2013;467 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR39\" id=\"ref-link-section-d661859020e563\" rel=\"nofollow noopener\" target=\"_blank\">39<\/a>. Hao and Chen combined Swin Transformer and Mask R-CNN<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 40\" title=\"He, K., Gkioxari, G., Doll&#xE1;r, P. &amp; Girshick, R. Mask R-CNN. In Proc. IEEE International Conference on Computer Vision 2961&#x2013;2969. &#010;                  https:\/\/doi.org\/10.1109\/ICCV.2017.322&#010;                  &#010;                 (2017).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR40\" id=\"ref-link-section-d661859020e567\" rel=\"nofollow noopener\" target=\"_blank\">40<\/a> to perform text segmentation on Chinese inscriptions<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 41\" title=\"Hao, S. &amp; Chen, Y. A text detection algorithm for ancient Chinese writing based on Swin transformer and mask R-CNN. In Proc. of Third International Conference on Computer Science and Communication Technology (ICCSCT 2022). 1250602. &#010;                  https:\/\/doi.org\/10.1117\/12.2661859&#010;                  &#010;                 (SPIE, 2022).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR41\" id=\"ref-link-section-d661859020e571\" rel=\"nofollow noopener\" target=\"_blank\">41<\/a>. 
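The gated convolutions used by several of the CNN-based approaches above can be sketched in a few lines: one convolution produces features, a parallel convolution produces a soft mask, and the sigmoid-activated mask modulates the features so the network can down-weight damaged regions. The hand-written kernels in the test are illustrative placeholders, not learned weights:

```python
import math

def conv2d(img, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN practice)
    over a 2D list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

def gated_conv2d(img, feat_kernel, gate_kernel):
    """Gated convolution: features are scaled elementwise by a learned
    soft gate in [0, 1], sigmoid(gate_conv(img))."""
    f = conv2d(img, feat_kernel)
    g = conv2d(img, gate_kernel)
    return [[fv * (1 / (1 + math.exp(-gv))) for fv, gv in zip(fr, gr)]
            for fr, gr in zip(f, g)]
```

In real inpainting networks both kernels are trained jointly; the gate learns to suppress responses from masked or noisy pixels rather than being specified by hand.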
Swin Transformer processes image data effectively through hierarchical feature maps and windowed self-attention mechanisms, and has also shown excellent performance in image classification tasks<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. In Proc. IEEE\/CVF International Conference on Computer Vision 10012&#x2013;10022. &#010;                  https:\/\/doi.org\/10.1109\/ICCV48922.2021.00986&#010;                  &#010;                 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR42\" id=\"ref-link-section-d661859020e575\" rel=\"nofollow noopener\" target=\"_blank\">42<\/a>.<\/p>\n<p>In addition, other models and repair strategies have achieved satisfactory results. Lin et al. judged whether Chinese characters were damaged based on character splitting and embedding, and then used a bipartite graph neural network to predict the possible identities of the damaged characters<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 43\" title=\"Lin, G., Wu, N., He, M., Zhang, E. &amp; Sun, Q. Damaged inscription recognition based on hierarchical decomposition embedding and bipartite graph. J. Electron. Inf. Technol. 46, 564&#x2013;573 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR43\" id=\"ref-link-section-d661859020e582\" rel=\"nofollow noopener\" target=\"_blank\">43<\/a>. Sun C. and Hou M. focused on using edge detection methods to smooth and fill extracted text contours to restore blurred text<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Sun, C. &amp; Hou, M. 
Virtual restoration of stone inscriptions based on image enhancement and edge detection. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 10, 217&#x2013;222 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR44\" id=\"ref-link-section-d661859020e586\" rel=\"nofollow noopener\" target=\"_blank\">44<\/a>.<\/p>\n<p>For repairing inscriptions, NLP models and CV models have achieved encouraging results, but both have limitations. The performance of NLP models is greatly limited by the volume of prepared ancient texts, since grammar, syntax, genre, and usage vary heavily across dynastic periods. For the writing of Chinese characters, it is challenging to recognize a character among thousands of regular and variant forms in repositories spanning over 30 kinds of fonts<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 45\" title=\"Wang, C. et al. Review of Chinese characters generation and font transfer based on deep learning. J. Image Graph. 27, 3415&#x2013;3428 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR45\" id=\"ref-link-section-d661859020e593\" rel=\"nofollow noopener\" target=\"_blank\">45<\/a>. This motivates us to apply two kinds of models to restore ancient Chinese inscriptions in both the textual and visual dimensions, which is a natural solution. The NLP model is undoubtedly essential for predicting missing or damaged characters: it outputs many possible characters in descending order of probability. However, experts can still hardly decide on the right one because of model hallucination. If partial radicals of a character remain, an ancient-character CV model can propose characters sharing those radicals. Experts then choose the more likely characters from the narrower range defined by the two groups of predicted characters. 
However, CV models are not available if the character has completely disappeared.<\/p>\n<p>Technical roadmap<\/p>\n<p>The architecture proposed in this paper predicts missing or incomplete characters in Chinese inscriptions through an NLP model and a CV model. We introduce pre-trained models to aid model training and propose a joint strategy for combining the two models. The technical roadmap is illustrated by the flowchart in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Fig2\" rel=\"nofollow noopener\" target=\"_blank\">2<\/a>.<\/p>\n<p><b id=\"Fig2\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig. 2<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/www.nature.com\/articles\/s40494-025-01900-x\/figures\/2\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig2\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/07\/40494_2025_1900_Fig2_HTML.png\" alt=\"figure 2\" loading=\"lazy\" width=\"685\" height=\"721\"\/><\/a><\/p>\n<p>Technology Roadmap of inscription restoration.<\/p>\n<p>In the model training step, texts and images of inscriptions are pre-processed for higher quality. Then, the NLP model and CV model are trained from selected pre-trained models. Once the models are validated and deployed, missing or incomplete characters are fed into them for restoration. The top-n predicted characters with the highest probabilities are presented to experts for the final decision. If characters are incomplete, that is, partially damaged, the CV model outputs complete characters based on the incomplete images. In these most common cases, characters with the highest f1_score are the preferred recommendations. 
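A minimal sketch of this joint recommendation step, ranking characters proposed by both models by the harmonic mean of their probabilities (the toy probability tables in the test are illustrative assumptions):

```python
def f1_score(p_n, p_v):
    """Harmonic mean of the NLP-model and CV-model probabilities for one
    candidate character; high only when both models agree."""
    return 2 * p_n * p_v / (p_n + p_v) if p_n + p_v else 0.0

def recommend(nlp_probs, cv_probs, top_n=3):
    """Rank characters proposed by BOTH models by their combined score,
    narrowing the range an expert has to consider."""
    shared = set(nlp_probs) & set(cv_probs)
    scored = sorted(((f1_score(nlp_probs[c], cv_probs[c]), c) for c in shared),
                    reverse=True)
    return [(c, s) for s, c in scored[:top_n]]
```

The harmonic mean is deliberately conservative: a candidate assigned a near-zero probability by either model scores near zero overall, regardless of the other model's confidence.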
The f1_score, whose original definition involves precision and recall, is introduced here to combine the two probability values output by the NLP model and the CV model. In this research, f1_score is redefined over the two probability values in Eq. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"equation anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Equ1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a>.<\/p>\n<p>$${\text{f1}}\_{\text{score}}=\frac{2{P}_{n}{P}_{v}}{{P}_{n}+{P}_{v}}$$<\/p>\n<p>\n                    (1)\n                <\/p>\n<p>where Pn is the probability of the character output by the NLP model and Pv is the probability of the character output by the CV model.<\/p>\n<p>Predictions supported by two different models contribute more to selecting the right character than those of any individual model. Figure <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Fig3\" rel=\"nofollow noopener\" target=\"_blank\">3<\/a> shows a prediction example.<\/p>\n<p><b id=\"Fig3\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig. 
3<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/www.nature.com\/articles\/s40494-025-01900-x\/figures\/3\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig3\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/07\/40494_2025_1900_Fig3_HTML.png\" alt=\"figure 3\" loading=\"lazy\" width=\"685\" height=\"360\"\/><\/a><\/p>\n<p>Data preparation<\/p>\n<p>For the inscription textual data, the types of collected data include but are not limited to:<\/p>\n<ol class=\"u-list-style-none\">\n<li>\n<p>a. Metal and stone vessels: bronzes, coins, etc.<\/p>\n<\/li>\n<li>\n<p>b. Rocks: cliffs, stone carvings, tablets, etc.<\/p>\n<\/li>\n<li>\n<p>c. Steles: tombstones, merit steles, chronicle steles, inscription steles, religious steles, etc.<\/p>\n<\/li>\n<li>\n<p>d. Others: seals, oracle bones, epitaphs, statue tablets, stone scriptures, tower inscriptions, architectural inscriptions, etc.<\/p>\n<\/li>\n<\/ol>\n<p>We first removed punctuation marks, because ancient Chinese inscriptions contain no punctuation. Then, we marked the missing or incomplete characters with unweighted special symbols to exclude them from the computation during model training.<\/p>\n<p>Since samples of inscription writing styles are fairly sparse, we faced the challenge of recognizing characters from different historical periods and stylistic variations. We therefore augmented the dataset with the multiple localized experts few-shot font generation network (MX-Font). As a few-shot font generation (FFG) model, MX-Font extracts multiple style features that are not explicitly conditioned on component labels but are captured automatically by multiple experts, each representing a different local concept. 
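<\/p>
<p>The text-cleaning steps described above can be sketched as follows. The punctuation set, the lacuna symbol, and the [MASK] placeholder are assumptions of this sketch, not the paper's exact symbols; the example sentence is adapted from the one used later in this section.<\/p>

```python
import re

MASK = "[MASK]"  # placeholder for a missing character (assumed symbol)
LACUNA = "□"     # editorial mark for an illegible character (assumption)

def preprocess(text: str) -> str:
    """Strip editorial punctuation (absent from original inscriptions)
    and mark lacunae so they can be excluded from the training loss."""
    text = re.sub(r"[，。、；：？！「」『』（）]", "", text)
    return text.replace(LACUNA, MASK)

print(preprocess("唐秘書監，知章之□也。"))
```

<p>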
With these multiple experts, MX-Font can capture diverse local concepts and generalizes to unseen languages<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Park, S., Chun, S., Cha, J., Lee, B., &amp; Shim, H. Multiple heads are better than one: few-shot font generation with multiple localized experts. In Proc. IEEE International Conference on Computer Vision 13880&#x2013;13889. &#010;                  https:\/\/doi.org\/10.1109\/ICCV48922.2021.01364&#010;                  &#010;                 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR46\" id=\"ref-link-section-d661859020e800\" rel=\"nofollow noopener\" target=\"_blank\">46<\/a>.<\/p>\n<p>We collected 22,000 calligraphy images of traditional Chinese characters from public calligraphy repositories and inscription databases. The stele text data from the Zhejiang University stele inscription database (Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Fig4\" rel=\"nofollow noopener\" target=\"_blank\">4<\/a>)<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 47\" title=\"Zhejiang stele inscription database. &#010;                  http:\/\/csid.zju.edu.cn\/tomb\/stone&#010;                  &#010;                .\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR47\" id=\"ref-link-section-d661859020e810\" rel=\"nofollow noopener\" target=\"_blank\">47<\/a> and other inscription databases amounted to 11,588 inscriptions.<\/p>\n<p><b id=\"Fig4\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig. 
4<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/www.nature.com\/articles\/s40494-025-01900-x\/figures\/4\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig4\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/07\/40494_2025_1900_Fig4_HTML.png\" alt=\"figure 4\" loading=\"lazy\" width=\"685\" height=\"442\"\/><\/a><\/p>\n<p>Zhejiang University stele inscription database.<\/p>\n<p>We classified the calligraphy images into different stylistic groups such as Kaishu, Xingshu, Caoshu, Weibei, Xiaozhuan, and Lishu. Then OpenCV was used to convert the calligraphy images into SVG images, and FontForge<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"FontForge. &#010;                  https:\/\/github.com\/fontforge\/fontforge&#010;                  &#010;                .\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR48\" id=\"ref-link-section-d661859020e835\" rel=\"nofollow noopener\" target=\"_blank\">48<\/a> was used to convert the Chinese character SVG images into TTF files. These TTF files were merged with the MX-Font files of the same font styles. Finally, we used MX-Font to generate images of over 19,000 traditional Chinese characters in different calligraphy styles. Examples are shown in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Fig5\" rel=\"nofollow noopener\" target=\"_blank\">5<\/a>.<\/p>\n<p><b id=\"Fig5\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig. 
5<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/www.nature.com\/articles\/s40494-025-01900-x\/figures\/5\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig5\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/07\/40494_2025_1900_Fig5_HTML.jpg\" alt=\"figure 5\" loading=\"lazy\" width=\"685\" height=\"505\"\/><\/a><\/p>\n<p>Traditional Chinese character font styles.<\/p>\n<p>In addition, because the surfaces of many inscriptions have become illegible over their long history, we oversampled the image data to simulate the defacement of inscriptions. We used four operations to simulate the damaged state of inscription text: scratch, loss, stain, and blur. We applied each operation several times, then merged the processed images with the original data to expand the dataset.<\/p>\n<p>The scratch operation: randomly drawing line segments on the image.<\/p>\n<p>The loss operation: randomly cropping the image to a random size.<\/p>\n<p>The stain operation: randomly covering areas with irregular shapes.<\/p>\n<p>The blur operation: adding 8 different levels of noise to the image, using Gaussian noise and salt-and-pepper noise respectively. We also applied all of these operations together to a single image.<\/p>\n<p>Taking the Chinese character \u201c\u9515\u201d as an example, the augmented images are shown in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Fig6\" rel=\"nofollow noopener\" target=\"_blank\">6<\/a>.<\/p>\n<p><b id=\"Fig6\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig. 
6<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/www.nature.com\/articles\/s40494-025-01900-x\/figures\/6\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig6\" src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/07\/40494_2025_1900_Fig6_HTML.png\" alt=\"figure 6\" loading=\"lazy\" width=\"685\" height=\"267\"\/><\/a><\/p>\n<p>Augmented images of a Chinese character.<\/p>\n<p>NLP model preparation<\/p>\n<p>The masked language model (MLM) objective is well suited to the pre-processed inscription task. Meanwhile, ancient Chinese inscriptions are written entirely in traditional characters, their writing formats and word styles are as diverse as those in ancient books, and no pre-trained models for inscriptions exist as yet. We therefore chose SikuRoBERTa as the pre-trained model, following Zheng and Sun\u2019s research: Siku-series models pay more attention to the current and contextual tokens than existing BERT-based models<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 49\" title=\"Zheng, J. &amp; Sun, J. Exploring the word structure of ancient Chinese encoded in BERT models. Proc. 16th International Conference on Advanced Computer Theory and Engineering (ICACTE) 41&#x2013;45. &#010;                  https:\/\/doi.org\/10.1109\/ICACTE59887.2023.10335485&#010;                  &#010;                 (IEEE, 2023).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR49\" id=\"ref-link-section-d661859020e906\" rel=\"nofollow noopener\" target=\"_blank\">49<\/a>. 
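<\/p>
<p>The kind of masked-character infilling the fine-tuned model performs can be illustrated with a deliberately tiny stand-in: a left-neighbour bigram counter over a toy corpus. The real system uses SikuRoBERTa with full bidirectional context; the corpus and names below are illustrative only.<\/p>

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the inscription dataset (illustrative only).
corpus = ["唐秘書監知章之後也", "知章之後居會稽", "秘書監之印"]

# Count which characters follow each character (a crude bigram model).
following = defaultdict(Counter)
for line in corpus:
    for left, right in zip(line, line[1:]):
        following[left][right] += 1

def fill_mask(text: str, mask: str = "□", top_n: int = 3):
    """Rank candidates for the masked slot from its left neighbour."""
    i = text.index(mask)
    return [c for c, _ in following[text[i - 1]].most_common(top_n)]

print(fill_mask("知章之□也"))
```

<p>Even this crude model ranks the attested continuation first; the fine-tuned MLM does the same with far richer context on both sides of the mask.<\/p>
<p>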
We also expanded the vocabulary of SikuRoBERTa by adding 757 new Chinese characters found in the collected inscriptions.<\/p>\n<p>As an improved version of BERT, RoBERTa removes the next-sentence prediction task and strengthens the masked-language task, which suits the prediction of missing or incomplete characters in inscriptions. It achieves better performance through dynamic masking, larger batches, and more training data<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 13\" title=\"Liu, Y. et al. Roberta: A robustly optimized Bert pretraining approach. arXiv preprint arXiv:1907.11692. &#010;                  https:\/\/doi.org\/10.48550\/arXiv.1907.11692&#010;                  &#010;                 (2019).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR13\" id=\"ref-link-section-d661859020e913\" rel=\"nofollow noopener\" target=\"_blank\">13<\/a>; dynamic masking increases the diversity of the dataset and enables the model to learn its different modes better. Taking \u5510\u79d8\u66f8\u76e3\u77e5\u7ae0\u4e4b\u5f8c\u4e5f as an example, dynamic masking results are shown in Table <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"table anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Tab1\" rel=\"nofollow noopener\" target=\"_blank\">1<\/a>:<\/p>\n<p>The pre-processed dataset was fed into the model. Because the inscription dataset is small compared with the corpus used for the pre-trained NLP model, we oversampled it to 10 times its original volume to increase the weight of the fine-tuning data, and we reduced the learning rate. As a result, the model learned the context information contained in inscriptions well. The masking ratio was set to 0.15, as proposed for MLM.<\/p>\n<p>CV model preparation<\/p>\n<p>Character recognition can also be regarded as a kind of image classification task. 
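<\/p>
<p>Framed as classification, every damaged glyph image is assigned to one of the known character classes. A nearest-reference toy on 3x3 binary \u201cimages\u201d shows the framing; the paper fine-tunes a Swin Transformer on real rubbing images, and everything below is illustrative.<\/p>

```python
import numpy as np

# Toy 3x3 binary "glyph images" for two character classes (illustrative).
references = {
    "一": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=float),
    "丨": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),
}

def classify(img):
    """Assign an image to the class with the nearest reference glyph."""
    return min(references, key=lambda c: np.abs(img - references[c]).sum())

damaged = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 0]], dtype=float)  # "一" with loss
print(classify(damaged))
```

<p>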
We chose the Swin Transformer as the pre-trained model for incomplete character restoration. Compared with Vision Transformers, which apply the standard Transformer structure directly to images<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 50\" title=\"Dosovitskiy, A. et al. An image is worth 16x16 words: transformers for image recognition at scale. Conference paper of the International Conference on Learning Representations (ICLR) 2021. &#010;                  https:\/\/openreview.net\/pdf?id=YicbFdNTTy&#010;                  &#010;                 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR50\" id=\"ref-link-section-d661859020e1035\" rel=\"nofollow noopener\" target=\"_blank\">50<\/a>, the Swin Transformer introduces the hierarchical construction method commonly used in CNNs to build a hierarchical Transformer<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. In Proc. IEEE\/CVF International Conference on Computer Vision 10012&#x2013;10022. &#010;                  https:\/\/doi.org\/10.1109\/ICCV48922.2021.00986&#010;                  &#010;                 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#ref-CR42\" id=\"ref-link-section-d661859020e1039\" rel=\"nofollow noopener\" target=\"_blank\">42<\/a>, which effectively extracts strokes, structures, and other textual features from text images at different levels.<\/p>\n<p>For example, at a lower level the model may focus on strokes, while at a higher level it may focus on the overall layout of the text. 
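<\/p>
<p>The window-based processing behind this hierarchy can be illustrated on a toy feature map. The partitioning below mirrors Swin\u2019s non-overlapping windows, and the roll mirrors its shifted-window step; this is a sketch, not the library implementation.<\/p>

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W) feature map into non-overlapping w x w windows."""
    H, W = x.shape
    return (x.reshape(H // w, w, W // w, w)
             .transpose(0, 2, 1, 3)
             .reshape(-1, w, w))

fmap = np.arange(16).reshape(4, 4)          # toy 4x4 "feature map"
windows = window_partition(fmap, 2)         # 4 windows of shape 2x2
shifted = np.roll(fmap, shift=(-1, -1), axis=(0, 1))  # shifted-window step
shifted_windows = window_partition(shifted, 2)
print(windows.shape, shifted_windows.shape)
```

<p>After the shift, each window straddles the boundaries of the previous windows, which is how information flows between neighbouring windows across layers.<\/p>
<p>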
In addition, the shifted windows of the Swin Transformer allow the model to capture the relative position and layout information between strokes.<\/p>\n<p>Evaluation metrics<\/p>\n<p>Perplexity, defined in Eq. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"equation anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Equ2\" rel=\"nofollow noopener\" target=\"_blank\">2<\/a>, is used to evaluate the NLP model after training.<\/p>\n<p>$${\\rm{Perplexity}}={{\\rm{e}}}^{-\\frac{1}{N}\\mathop{\\sum }\\limits_{i=1}^{N}\\log \\,p({x}_{i}|{x}_{ < i})}$$<\/p>\n<p>\n                    (2)\n                <\/p>\n<p>Where N is the total number of tokens, xi is the i-th character in the text, and p is the probability the model assigns to xi given the preceding characters. Perplexity indicates how well the model predicts the text: the lower the value, the better the performance.<\/p>\n<p>We used accuracy to evaluate the trained CV model. It is defined in Eq. 
<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"equation anchor\" href=\"http:\/\/www.nature.com\/articles\/s40494-025-01900-x#Equ3\" rel=\"nofollow noopener\" target=\"_blank\">3<\/a>.<\/p>\n<p>$${Accuracy}=\\frac{{TP}+{TN}}{{TP}+{TN}+{FP}+{FN}}$$<\/p>\n<p>\n                    (3)\n                <\/p>\n<p>Where TP (True Positive) is the number of positive predictions whose actual values are positive.<\/p>\n<p>TN (True Negative) is the number of negative predictions whose actual values are negative.<\/p>\n<p>FP (False Positive) is the number of positive predictions whose actual values are negative.<\/p>\n<p>FN (False Negative) is the number of negative predictions whose actual values are positive.<\/p>
,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=42474"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/42474\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/42475"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=42474"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=42474"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=42474"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}