AI applications continue to rapidly expand into all areas of life. They are transforming processes and workflows in the domains they permeate, while also creating new opportunities. However, alongside these contributions, AI also brings various risks, ranging from compromising data security to leaving individuals vulnerable, reinforcing biases, deepening inequalities and generating misinformation. These risks vary in scale and nature depending on the specific characteristics of the field in which AI is applied.
Journalism is one of the fields most profoundly affected by AI, whose influence is deeply felt across a wide spectrum including data analysis, content creation, content personalization and editorial processes. It has become an especially valuable ally in investigative journalism. Moreover, AI now contributes to every stage of the news cycle, from the gathering and reporting of news to its storytelling and distribution. In areas where digitalization is extensive, AI acts as a transformative force. Given that journalism is one such field, many researchers argue that AI is not merely a tool in journalism but a transformative power that is reshaping the profession itself.
The widespread adoption of machine learning has opened new horizons, particularly for investigative journalism. It has enabled the easy analysis of big data based on the specific details of a given topic, as well as the identification of underlying patterns within the data. This in-depth contribution has significantly facilitated and enhanced the quality of investigative journalism and news production, especially in complex fields such as elections, health, education, finance and monetary markets, and sports. Thanks to AI, information with news value and complex narratives, previously difficult to detect due to structural complexity, can now be uncovered and presented to the public. As a result, news production capacity has increased significantly with AI technologies. For news agencies in particular, this increased capacity provides a major advantage in terms of both public influence and economic gain.
On the other hand, it has also become possible to conduct in-depth public opinion analysis through social media and other digital platforms. In this way, reader and viewer responses to news content can be evaluated more comprehensively. Additionally, analyzing user preferences on news platforms and recommending new content accordingly has become a common practice, helping to extend the time users spend on these platforms.
One of the most significant contributions of AI is its ability to enable personalized content production. AI, which is widely used to generate personalized educational content in the field of education, has similarly started to be extensively applied in journalism for collecting, evaluating and distributing personalized content tailored to individual users. In short, AI technologies are making increasingly essential contributions to enhancing productivity and efficiency in journalism. The expectation is that the time saved through this increase in productivity will be used to improve the overall quality of journalism.
Research findings on the impact of AI on employee productivity indicate that increases in efficiency and output are particularly significant among low- and medium-skilled workers. In other words, AI technologies help compensate for skill gaps in these employee groups. When used in journalism in this way – complementing rather than replacing humans – AI can enhance productivity without causing major negative effects on employment. At the same time, it can create additional time that journalists can devote to improving the quality of their reporting.
However, there is a clear risk that journalism positions involving routine tasks, such as writing standard news reports and performing data analysis, may be fully taken over by AI. On the other hand, as noted above, the integration of AI technologies into journalism as a transformative force requires workers in the field to rapidly acquire new skills to remain relevant in a changing industry. Therefore, improving AI literacy and skills among journalism professionals is of critical importance. Without investment in the development of these capabilities, many journalists may face the risk of losing their current positions.
Meanwhile, the greatest risk associated with personalized news content is the reduction in content diversity and the reinforcement of informational comfort zones by steering users toward echo chamber-like content. As a result, individuals are increasingly exposed to information that supports their existing beliefs and attitudes, while their access to differing opinions and news becomes limited. This makes it more difficult for people to encounter diverse content, and the interpretation of events begins to vary significantly depending on the boundaries of each echo chamber. One of the greatest risks facing modern societies is the clustering of the public into distinct groups and their confinement within echo chambers. As AI further enhances the personalization of news content, it is likely to intensify the formation of these echo chambers. This poses a serious threat to the overall health and cohesion of modern societies.
Although AI is highly capable of analyzing big data and detecting patterns, the lack of transparency in how these analyses are conducted due to the “black box” nature of many AI systems raises serious concerns, particularly in news content production and investigative journalism. The opaque nature of AI-generated analysis and content can result in the production of news that lacks transparency and accountability. Since AI itself cannot be held responsible for the content it produces, an important question arises: Can journalists who use AI in this way be held accountable for non-transparent content and analysis? This issue is also actively debated in the academic world.
For example, as generative AI tools began to be used in the production of scientific articles and even appeared as co-authors in some cases, editorial teams of academic journals faced intense debate over whether AI could be recognized as an author. Prestigious journals such as Science have taken a firm stance, stating not only that AI cannot be listed as an author, but also that AI-generated content, such as text or graphics, should not be used in academic articles at all. However, more flexible policies have gradually emerged. According to these, AI can never be considered a co-author, but if it contributes to the quality of a scientific article, its role in the production process must be clearly disclosed within the article. At the heart of all these debates and efforts to find solutions lies the fundamental issue that AI cannot bear responsibility for its contributions and cannot be held accountable for its actions. A similar precaution must be implemented in the field of journalism as well.
Another major concern regarding the widespread use of AI in journalism is the risk of perpetuating biases. Since AI technologies make predictions, perform optimizations and generate content based on real-world data, the training data effectively serves as a form of memory. This “memory” can contain biased judgments and linguistic patterns related to religion, race, gender and other characteristics of different social groups – biases that can be directly reproduced in new content. As a result, AI-generated journalistic content may replicate these same biases, leading to the proliferation of biased news. Furthermore, when such biased content circulates within echo chambers and is repeatedly interpreted through the lens of partial perspectives, it increases the risk of deepening social inequalities. The same dynamic is present in culturally embedded content generation through AI. As we discussed in a previous article titled “The Powerful Wave of Orientalism Driven by Artificial Intelligence,” AI applications continue to produce content that preserves orientalist tones. These systems attempt to maintain control over the right to represent “the East” from a detached, often Western and white-centric perspective, disconnected from the reality of the cultures they depict.
In addition, with the advancement of artificial intelligence technologies, the production of highly realistic yet false video content (deepfakes) has become increasingly widespread. The ease with which such manipulative and misleading content can be created not only heightens social unrest but also poses threats to individual safety. In this context, another risk with negative implications for journalism is AI's potential to generate false content. As is well known, generative AI sometimes produces information that appears coherent within the text but is factually incorrect, a phenomenon referred to as “hallucination” or “confabulation.” Relying entirely on AI for news content production increases not only the risk of biased reporting but also the risk of misinformation. Therefore, editorial oversight is critically important in eliminating such risks. To ensure this, editorial teams must possess a strong level of AI literacy, and this literacy must be continuously updated.
In summary, AI applications have a transformative and therefore far-reaching impact on the field of journalism. The opportunities it provides have already significantly reshaped processes and workflows in this domain and have led to notable economic gains. However, it is also clear that this transformation brings numerous risks, ranging from negative effects on employment in journalism to the production of biased and false content. As in other fields, the most human-centered approach in journalism is to use AI technologies in a way that complements human effort rather than replaces it. Otherwise, while the economic benefits of AI may concentrate in the hands of a narrow group, the risks it poses will affect broader segments of society. Moreover, the risks associated with AI have made editorial oversight more critical than ever before. In this context, increasing AI literacy and supporting the development of related skills will enhance the potential to benefit from these technologies in a balanced and responsible way.