The recent period has marked the rise of the Chinese AI industry: Chinese companies are now shipping highly cost-effective, open-source models that rival their Western counterparts in many respects.
Like the rest of the AI industry, however, these models suffer from hallucination, and in some Chinese models the problem has even worsened over time. China has taken steps to address it, primarily through its legal system, but it remains an unresolved challenge.
In February 2026, a Weibo user reported trying Tiger Brokers, which has integrated DeepSeek, to analyze Alibaba’s financial report and was surprised by the result: when they cross-checked the figures provided by the AI, they found that they had been made up. This is a striking example of how these models can hallucinate and of the harm such errors can cause.
Chinese researchers have begun analyzing the phenomenon of hallucination. A joint study by researchers at Fudan University and the Shanghai AI Laboratory, for example, established a hallucination benchmark for Chinese language models, and a team from the University of Science and Technology of China and Tencent’s YouTu Lab introduced a tool to combat AI hallucination.
Chinese AI companies, thus far, have not taken strong measures and appear primarily interested in protecting themselves legally; the terms of use for DeepSeek, Qwen 3, and Manus, for example, state clearly that the companies are not responsible for any errors generated by their models. Only two of the companies briefly mention reducing hallucination in their technical model cards.
At the state level, China is leveraging laws and policies to hold creators of AI models accountable and to require them to ensure trustworthiness. The Governance Principles for a New Generation of Artificial Intelligence, for instance, call for the gradual achievement of trustworthiness. Article 4 of the Interim Measures for the Management of Generative Artificial Intelligence Services prohibits content such as fake and harmful information and requires effective measures to increase the accuracy and reliability of generated content, while Article 7 requires improving the truthfulness, accuracy, objectivity, and diversity of training data. In September 2025, China released the second version of its AI safety governance framework, which lists hallucination among the risks to be mitigated.
China’s interest in regulating hallucinations is largely motivated by its broader policy of controlling information flows; the Chinese leadership clearly has no interest in seeing false information circulate that undermines its legitimacy or contradicts the official state narrative.
Despite all these efforts, China needs to focus more on this issue, given the performance of its models. Recent evaluations of the truthfulness and reliability of AI models show that Chinese models still lag behind international models such as GPT-5 and the Claude series, and DeepSeek’s recent R1 model hallucinates more than its V3 predecessor.
Chinese models are nonetheless gaining widespread adoption, and total global downloads of Chinese open-source models are expected to surpass those of American models in the near future. Users still appear to prioritize cheap and open access to Chinese models over the quality of their output, except in Italy, where regulators previously probed DeepSeek over the risk of false information.