{"id":18079,"date":"2026-04-27T09:11:17","date_gmt":"2026-04-27T09:11:17","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/18079\/"},"modified":"2026-04-27T09:11:17","modified_gmt":"2026-04-27T09:11:17","slug":"potential-futures-for-the-ipccs-approach-to-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/18079\/","title":{"rendered":"Potential futures for the IPCC\u2019s approach to artificial intelligence"},"content":{"rendered":"<p>Rise of the agents<\/p>\n<p>In this scenario, agentic AI becomes widespread within the next few years, fundamentally changing how professionals work. Agentic AI refers to artificial agents that use natural language interfaces to execute sequences of actions on users\u2019 behalf<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Gabriel, I. et al. The ethics of advanced AI assistants. Preprint at &#010;                  https:\/\/doi.org\/10.48550\/arXiv.2404.16244&#010;                  &#010;                 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR42\" id=\"ref-link-section-d442367237e645\" rel=\"nofollow noopener\" target=\"_blank\">42<\/a>, raising questions about the agents\u2019 alignment with human priorities, the influence of the agents on their human users, and the potential need for regulatory agents to monitor other agents<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 43\" title=\"Gabriel, I., Keeling, G., Manzini, A. &amp; Evans, J. We need a new ethics for a world of AI agents. 
Nature 644, 38&#x2013;40 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR43\" id=\"ref-link-section-d442367237e649\" rel=\"nofollow noopener\" target=\"_blank\">43<\/a>.<\/p>\n<p>Literature search<\/p>\n<p>As these agents become ubiquitous across academia, research, government, and the private sector, there will be implications for literature search\u2014agents may collaborate to scout the literature. Agents could be embedded within the IPCC process, continuously screening the literature. Strict bounds for automated inclusion, such as confining agents to known databases for peer-reviewed literature, could make it challenging to include \u201cgray\u201d (i.e., not peer-reviewed) literature or Indigenous Knowledge. Conversely, expanding the scope to include gray literature, where AI could help break language barriers and enable cross-lingual search, would place a significant verification burden on authors, given the mixed quality of such sources.<\/p>\n<p>Synthesis and assessment<\/p>\n<p>Another set of agents could synthesize the literature and update draft chapters, including generating data visualizations from the underlying data in cited papers, whilst maintaining a detailed log that authors can verify against established benchmarks and that expert reviewers and the public can consult for transparency.<\/p>\n<p>Communication<\/p>\n<p>IPCC reports must be written with content optimized not just for human readers, but for AI consumption and processing. Consequently, much of the assessment and its communication transforms into agent-to-agent information exchange, where content is accessed, interpreted, and transmitted through AI intermediaries (i.e., Large Language Models\u2013LLMs), in addition to humans. 
This requires the IPCC to consider how its assessments can be structured for optimal parsing and synthesis by both LLMs and human readers.<\/p>\n<p>A critical concern emerges under such a scenario: what happens when users extract IPCC text and process it through chatbots for interpretation? Such practices are already occurring<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Thulke, D. et al. ClimateGPT: towards AI synthesizing interdisciplinary research on climate change. Preprint at: &#010;                  https:\/\/doi.org\/10.48550\/arXiv.2401.09646&#010;                  &#010;                 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR44\" id=\"ref-link-section-d442367237e679\" rel=\"nofollow noopener\" target=\"_blank\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 45\" title=\"Vaghefi, S. A. et al. ChatClimate: grounding conversational AI in climate science. Commun. Earth Environ. 4, 480 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR45\" id=\"ref-link-section-d442367237e682\" rel=\"nofollow noopener\" target=\"_blank\">45<\/a>. Should the IPCC pre-empt potentially problematic third-party interpretations by providing \u201cofficial\u201d chatbot access? On the other hand, having a record of agentic decisions could provide a \u201ctraceability\u201d in the IPCC\u2019s reasoning that might enhance its credibility for some. Still, there will continue to be inherent limitations in LLMs used for communicating scientific reports<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 2\" title=\"Al Khourdajie, A. The role of artificial intelligence in climate change scientific assessments. PLoS Clim. 
4, e0000706 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR2\" id=\"ref-link-section-d442367237e686\" rel=\"nofollow noopener\" target=\"_blank\">2<\/a>. These include probabilistic pattern generation that can erode scientific nuance, stochastic outputs that challenge reproducibility, static parametric knowledge that may become outdated unless augmented with new knowledge retrieval (i.e., Retrieval-Augmented Generation, RAG), and hallucinations that produce plausible-sounding but false information.<\/p>\n<p>                    Implications for IPCC as an institution<\/p>\n<p>This raises fundamental questions about workflows, audience and format. First, should the IPCC develop its own agentic capabilities for IPCC authors? Second, how much should the IPCC optimize its outputs to be read and processed by AI agents, and what changes to the preparation and presentation of the report should be undertaken with agentic readers in mind? Third, should the IPCC actively house an LLM to help readers navigate the report and its data? The IPCC already produces multiple formats (PDFs, printed copies, webpages, infographics). Creating an official LLM that reproduces paragraphs verbatim would be technically straightforward, and it could essentially operate as enhanced search functionality that could benefit accessibility, albeit at the risk of hallucinations, among other limitations inherent to the LLM architecture. Alternatively, RAG systems, which ground responses directly in source documents rather than relying solely on parametric knowledge (\u201coriginal\u201d LLM knowledge), could mitigate hallucination risks by linking outputs back to specific passages in the reports. 
However, even RAG-based approaches are not immune to limitations, as the underlying language model may still misinterpret or misrepresent retrieved content<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 2\" title=\"Al Khourdajie, A. The role of artificial intelligence in climate change scientific assessments. PLoS Clim. 4, e0000706 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR2\" id=\"ref-link-section-d442367237e697\" rel=\"nofollow noopener\" target=\"_blank\">2<\/a>.<\/p>\n<p>Furthermore, in such a highly automated environment, the traditional human-led approval process for SPMs may suddenly appear to be \u2018outdated\u2019. But any attempt at automating the generation of SPMs creates tensions with the still human-led, consensus-based negotiation mode of the UNFCCC that prevails in IPCC panel sessions. More broadly, if agentic AI is widespread in society, it may call the traditional pacing of an IPCC assessment cycle (5\u20137 years) into question. There are frequent suggestions that the IPCC produce short, accessible reports with updated information, tailored to respond nimbly to knowledge needs, or even move toward enabling ongoing learning processes in a dynamic approach<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Hermansen, E. A. T., Boasson, E. L. &amp; Peters, G. P. Climate action post-Paris: how can the IPCC stay relevant? npj Clim. Action 2, 30 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR46\" id=\"ref-link-section-d442367237e704\" rel=\"nofollow noopener\" target=\"_blank\">46<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 47\" title=\"Asayama, S. et al. 
Three institutional pathways to envision the future of the IPCC. Nat. Clim. Chang. 13, 877&#x2013;880 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR47\" id=\"ref-link-section-d442367237e707\" rel=\"nofollow noopener\" target=\"_blank\">47<\/a>. With the development of agentic AI, professionals may become accustomed to tasking agents with discovering instant answers, and the pace of IPCC assessment may come to feel obsolete.<\/p>\n<p>                    Implications for IPCC authors<\/p>\n<p>Adoption of agentic AI would fundamentally transform the author\u2019s role from a writer to an expert verifier and agent manager.<\/p>\n<p>                    Wider social implications<\/p>\n<p>Whilst AI intermediaries could enhance access to text, they risk diluting the IPCC&#8217;s carefully chosen language around uncertainty and confidence. The very authority of the IPCC, built on human expertise and deliberation, could be undermined by the perception that machines are the primary constructors, readers and interpreters of its work.<\/p>\n<p>                  Superior truth machine<\/p>\n<p>In this scenario, there is a low degree of automation in knowledge processing\u2014humans guide the process and make the final judgments\u2014and there is high trust in AI-generated output. AI-generated output becomes viewed as superior to human equivalents, and is often turned to as an arbiter of truth in social and political disputes about knowledge. LLMs gain trust, being perceived as more comprehensive, balanced, and free from human political biases. Users become accustomed to AI summaries that are accurate and readable, leading to higher perceptions of credibility and trustworthiness of the authors<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Markowitz, D. M. 
From complexity to clarity: how AI enhances perceptions of scientists and the public&#x2019;s understanding of science. PNAS Nexus 3, pgae387 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR48\" id=\"ref-link-section-d442367237e738\" rel=\"nofollow noopener\" target=\"_blank\">48<\/a>. Private platforms for scientific summaries, already being launched by scientific publishers, present themselves as tools for human creators to undertake scientific synthesis and consensus building, and emerge as competitors to traditional assessments, potentially offering more rapid summaries or even providing functions of assessment and knowledge synthesis.<\/p>\n<p>Literature search<\/p>\n<p>Humans are directing the literature search, but their expert choices are open to greater scrutiny, as anyone could use a trusted AI tool to check the same body of literature for omissions. AI augmentation could facilitate inclusion of literature in multiple languages through multilingual search and summarization, including technical reports. The IPCC has also been critiqued for having reductive approaches<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 49\" title=\"Carmona, R. et al. Analysing engagement with indigenous peoples in the intergovernmental panel on climate Change&#x2019;s sixth assessment report. npj Clim. Action 2, 29 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR49\" id=\"ref-link-section-d442367237e749\" rel=\"nofollow noopener\" target=\"_blank\">49<\/a> and structural barriers that limit incorporation of Indigenous Knowledge<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 50\" title=\"Rashidi, P. &amp; Lyons, K. Democratizing global climate governance? 
The case of indigenous representation in the Intergovernmental Panel on Climate Change (IPCC). Globalizations 20, 1312&#x2013;1327 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR50\" id=\"ref-link-section-d442367237e753\" rel=\"nofollow noopener\" target=\"_blank\">50<\/a> and traditional knowledge<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 51\" title=\"Ford, J. D. et al. Including indigenous knowledge and experience in IPCC assessment reports. Nat. Clim. Change 6, 349&#x2013;353 (2016).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR51\" id=\"ref-link-section-d442367237e757\" rel=\"nofollow noopener\" target=\"_blank\">51<\/a>; GenAI could be employed in ways that enhance or limit this incorporation, opening the possibility of building AI systems that draw on Indigenous knowledge systems<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 52\" title=\"Lewis, J. E., Whaanga, H. &amp; Yolg&#xF6;rmez, C. Abundant intelligences: placing AI within Indigenous knowledge frameworks. AI Soc. 40, 2141&#x2013;2157 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR52\" id=\"ref-link-section-d442367237e761\" rel=\"nofollow noopener\" target=\"_blank\">52<\/a>, but also creating risks of eroding cultural knowledge or of data-grabbing that does not accord with principles of Indigenous Data Sovereignty<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 53\" title=\"Perera, M. et al. Indigenous peoples and artificial intelligence: a systematic review and future directions. Big Data Soc. 
12, 20539517251349170 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR53\" id=\"ref-link-section-d442367237e765\" rel=\"nofollow noopener\" target=\"_blank\">53<\/a>. Expanding the scope in this way places a verification burden on authors, making the extent of such inclusion a matter of human direction and judgment.<\/p>\n<p>Synthesis and assessment<\/p>\n<p>The primary challenge comes not from automation within the IPCC, but from externally produced \u201cshadow versions\u201d of the reports. These AI-generated alternative assessments\u2014produced by groups with specific political or scientific agendas\u2014could emerge to contest the IPCC\u2019s findings using the same corpus of literature. Shadow versions could also emerge during the review process, followed by claims that the IPCC ignored superior AI-assisted input. This creates an arms-race dynamic. Government delegations, invested in maintaining influence over the \u201cofficial\u201d climate narrative, will aim to reinforce the IPCC version, particularly for the SPM approval.<\/p>\n<p>Communication<\/p>\n<p>Users will increasingly question whether they should trust human-crafted text over AI alternatives, particularly when AI versions appear more up to the task by updating more frequently or being more comprehensive in scope. The wide availability of competing AI-generated versions of reality allows users to \u201cshop for\u201d interpretations that align with their preferences.<\/p>\n<p>                    Implications for IPCC as an institution<\/p>\n<p>If different chapters or even whole Working Groups adopt varying approaches to AI integration, these differences will become publicly visible, exposing weaknesses in human-only assessment. 
The institution may also face pressure from \u201cgotcha\u201d papers that highlight discrepancies or show user preferences for AI-generated summaries, potentially undermining the IPCC\u2019s credibility.<\/p>\n<p>If shadow assessments are inevitable, it becomes untenable for the IPCC to take a restrictive stance towards AI. It will be under pressure to develop a strategy for proactively incorporating and responding to AI alternatives. The question becomes how to maintain the authority of human-led assessment when AI output is perceived to be superior. Embedding AI with clear benchmarking criteria may be necessary for the IPCC to defend the credibility of its main products, not only the Working Group and Special Reports, but also the carefully crafted SPMs.<\/p>\n<p>When it comes to the SPMs, how much they deviate from the underlying assessment will become an object of AI-supported re-analysis\u2014and governments will deliberate them in approval plenaries with the help of the various AI agents that they can build or access. Countries that make use of technology ecosystems developed in China or the United States may have their results influenced by the technology stacks\u2014the resources, chips, networks, applications and algorithms, data, and GenAI models\u2014that have developed in those national contexts, and the affordances of those systems may impact their workflows. At the same time, this AI-supported deliberation does not affect the way the negotiated SPM can be used in the still human-led and consensus-based UNFCCC negotiations.<\/p>\n<p>                    Implications for authors<\/p>\n<p>AI\u2019s legitimacy and perceived superiority bring challenges for authors. Should each chapter team develop an official AI-generated shadow version, against which they justify their differing judgment? On the one hand, this could elucidate the value of their expertise, but equally could prove cognitively demanding and demoralizing. 
This would shift the authors\u2019 role from primary synthesizers and evaluators to expert adjudicators, who must develop clear technical and scientific benchmarks to justify why their conclusions should be preferred to a trusted AI output. At the same time, many authors may embrace AI models as scientific collaborators in their own research<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Markowitz, D. M. From complexity to clarity: how AI enhances perceptions of scientists and the public&#x2019;s understanding of science. PNAS Nexus 3, pgae387 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR48\" id=\"ref-link-section-d442367237e806\" rel=\"nofollow noopener\" target=\"_blank\">48<\/a>, which would spill over into their IPCC work.<\/p>\n<p>                    Wider social implications<\/p>\n<p>This widespread trust in AI as an arbiter of truth in science has profound social consequences. First, there is the risk that diverse forms of knowledge will be completely disregarded. Second, as social science literature points out, the struggle about policy alternatives and the meaning of climate change itself is the basis of legitimacy in democratic decision-making. If AI is used in climate governance in ways that shortcut this debate by presenting a single \u201ccorrect\u201d assessment, there is a risk of closing down policy options in ways that limit robust decision-making<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 54\" title=\"Machen, R. &amp; Pearce, W. Anticipating the challenges of AI in climate governance: an urgent dilemma for democracies. WIREs Clim. Change 16, e70002 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR54\" id=\"ref-link-section-d442367237e818\" rel=\"nofollow noopener\" target=\"_blank\">54<\/a>. 
In a study of the challenges of AI in climate governance, Ruth Machen and Warren Pearce caution about \u201cnot just the importing of particular methods but also of particular logics, esthetics, and values into processes of environmental governance\u201d<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 54\" title=\"Machen, R. &amp; Pearce, W. Anticipating the challenges of AI in climate governance: an urgent dilemma for democracies. WIREs Clim. Change 16, e70002 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR54\" id=\"ref-link-section-d442367237e822\" rel=\"nofollow noopener\" target=\"_blank\">54<\/a>.<\/p>\n<p>                  Anticipatory resistance<\/p>\n<p>In this scenario, AI augments humans, but people are also wary of it, leading to pressure to take precautionary or restrictive approaches that may underutilize AI\u2019s potential. At the same time, its augmentation capabilities are not equally accessible\u2014prompting some to raise the equity dimensions of restrictive approaches.<\/p>\n<p>Literature search<\/p>\n<p>Some observers reframe restrictive policies on AI use as gatekeeping mechanisms through which developed countries maintain epistemic control. However, the central challenge is the escalating pressure of managing the ever-growing body of scientific literature manually. 
Human synthesis and assessment are viewed as a premium product; at the same time, authors face a deluge of literature, and the underlying literature\u2019s quality is under constant critique, making synthesis and assessment more difficult.<\/p>\n<p>Synthesis and assessment<\/p>\n<p>There remains some pressure to make use of AI tools, to save time and make participation easier, but critics are concerned about examples of bias in AI leading to biased syntheses<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 55\" title=\"Debnath, R., Creutzig, F., Sovacool, B. K. &amp; Shuckburgh, E. Harnessing human and machine intelligence for planetary-level climate action. npj Clim. Action 2, 20 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR55\" id=\"ref-link-section-d442367237e851\" rel=\"nofollow noopener\" target=\"_blank\">55<\/a> even when humans are in the driver\u2019s seat and AI is merely augmenting their work. The credibility of any AI-generated content is increasingly questioned. Climate advocacy groups mobilize against AI overuse in the IPCC, criticizing its alignment with the same techno-optimist paradigm driving the ecological crisis. This creates an internal crisis: authors know that highly automated tools could process the deluge of literature more efficiently, but also believe that using them would compromise the report\u2019s credibility. Governments face pressure from constituents suspicious of AI involvement.<\/p>\n<p>Communication<\/p>\n<p>There is increased demand for using AI to make the IPCC reports easier for speakers of varied languages to navigate and use in policy decisions. 
At the same time, interfaces for querying and having conversations about the reports face questions of bias as well.<\/p>\n<p>                    Implications for IPCC as an institution<\/p>\n<p>The institution must balance real concerns about AI quality and access with the potential reputational risk that AI restrictions perpetuate existing power imbalances in the assessment process and accessibility to its findings. For member governments, the degree of involvement of AI features in different parts of the underlying assessment becomes an additional layer of scrutiny during the SPM approval.<\/p>\n<p>                    Implications for authors<\/p>\n<p>Authors from different regions experience AI tools differently. Whilst developed country authors with limited technical skills struggle with advanced features, developing country authors may view even basic AI translation and writing assistance as transformative, creating new avenues for tension within author teams. Authors also feel overtaxed by the amount of literature to synthesize and assess, and knowing that there are tools that could help becomes a source of frustration.<\/p>\n<p>                    Wider societal implications<\/p>\n<p>The IPCC\u2019s legitimacy is derived from its traditional, human-deliberative process, which is framed as a core strength. At the same time, parts of the broader climate community begin to view the IPCC as dated, or even as holding back progress, given its power to set norms.<\/p>\n<p>                  Public backlash<\/p>\n<p>Widespread backlash to GenAI in this highly automated scenario has a variety of drivers: concern within academia about using AI, perceived deskilling, surveillance applications of AI, unscrupulous behavior on the part of AI companies, environmental and energy use impacts of AI, or experience with socially damaging or misaligned AI products. 
AI may induce job loss, or it may lead to a speculative bubble wherein companies prove unable to monetize its applications and stock-market losses have ripple effects\u2014either outcome can lead to backlash. For the IPCC, a legitimacy crisis emerges from the use of AI, and the perceived \u201cpurity\u201d of human-led scientific assessments becomes compromised. This scenario illustrates how social norms about technology can override technical merit. Media narratives about GenAI shape public reception negatively<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 56\" title=\"Yang, X., Song, B., Chen, L., Ho, S. S. &amp; Sun, J. Technological optimism surpasses fear of missing out: a multigroup analysis of presumed media influence on generative AI technology adoption across varying levels of technological optimism. Comput. Hum. Behav. 162, 108466 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR56\" id=\"ref-link-section-d442367237e896\" rel=\"nofollow noopener\" target=\"_blank\">56<\/a>, creating a context where any AI involvement taints perceived legitimacy.<\/p>\n<p>Literature search<\/p>\n<p>Despite the availability of automated tools capable of comprehensively scanning the literature, low trust in AI and social perceptions of AI lead to non-adoption by some authors or chapters, also reflecting disciplinary cultures. Concrete choices about workflows trigger broader cultural and ideological debates about the role of AI in society. Literature that uses GenAI may be filtered out.<\/p>\n<p>Synthesis and assessment<\/p>\n<p>Documentation of automated workflows becomes a major challenge because report authors have become accustomed to opacity in such workflows in scientific production. 
The cognitive load and the demands on authors\u2019 time mount in comparison to other work duties.<\/p>\n<p>Communication<\/p>\n<p>AI detection tools may be used by critics to accuse the IPCC of using GenAI and discredit it. The IPCC Bureau and Secretariat may also invest time in developing communications interfaces only for them to go unadopted, with the whole project rejected.<\/p>\n<p>                    Implications for IPCC as an institution<\/p>\n<p>The IPCC faces attacks from multiple directions. Climate advocates hostile to AI due to its environmental impacts feel betrayed by an institution that sanctions its use, and use of AI tools becomes weaponized as evidence of corporate capture or techno-solutionism, drawing on research suggesting that AI biases perceptions of environmental challenges toward incremental rather than radical or transformative solutions, and avoids associating environmental challenges with social justice issues<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 57\" title=\"Van Der Ven, H. et al. Does artificial intelligence bias perceptions of environmental challenges? Environ. Res. Lett. 20, 014009 (2025).\" href=\"http:\/\/www.nature.com\/articles\/s43247-026-03514-y#ref-CR57\" id=\"ref-link-section-d442367237e930\" rel=\"nofollow noopener\" target=\"_blank\">57<\/a>. Meanwhile, supporters of AI use criticize restrictions as evidence of the IPCC\u2019s capture by advocacy interests, and frame non-use as a Luddite stance. The institution might be forced to publicly adopt a restrictive stance on AI to maintain legitimacy with key stakeholders, even at the cost of internal efficiency and completeness. 
In this light, the human-led SPM approval process holds the potential to be more prominently presented as a core asset of the organization.<\/p>\n<p>                    Implications for authors<\/p>\n<p>Using AI in academic contexts becomes stigmatized, with researchers unwilling to face reputational damage from association with AI tools. Author teams fragment between those who view AI as necessary for comprehensive assessment and those who see it as fundamentally compromising scientific integrity. The collegial spirit needed for producing an assessment erodes under such conditions.<\/p>\n<p>                    Implications for wider society<\/p>\n<p>In the public eye, the perception of division within the climate research community over the methods of assessment spills over into perceptions of divisions about the findings of climate science itself, eroding trust.<\/p>\n","protected":false},"excerpt":{"rendered":"Rise of the agents In this scenario, agentic AI becomes widespread within the next few years, fundamentally 
changing&hellip;\n","protected":false},"author":2,"featured_media":18080,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,6839,2878,12980,88,617,12979],"class_list":{"0":"post-18079","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-climate-change","11":"tag-communication","12":"tag-earth-sciences","13":"tag-environment","14":"tag-general","15":"tag-research-management"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/18079","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=18079"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/18079\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/18080"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=18079"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=18079"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=18079"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}