{"id":370264,"date":"2025-11-10T23:38:18","date_gmt":"2025-11-10T23:38:18","guid":{"rendered":"https:\/\/www.europesays.com\/us\/370264\/"},"modified":"2025-11-10T23:38:18","modified_gmt":"2025-11-10T23:38:18","slug":"environmental-impact-and-net-zero-pathways-for-sustainable-artificial-intelligence-servers-in-the-usa","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/370264\/","title":{"rendered":"Environmental impact and net-zero pathways for sustainable artificial intelligence servers in the USA"},"content":{"rendered":"<p>The methodology framework of this study aims to achieve two goals: (1) project the energy\u2013water\u2013climate impacts of AI servers in the United States from 2024 to 2030, addressing widespread concerns about AI development, and (2) identify the best and worst practices for each influencing factor to chart net-zero pathways towards the water and climate targets set for 2030. Unlike many previous climate-pathway studies, which often extend projections to 2050 to better integrate climate goals, this study focuses on the period from 2024 to 2030 because of the great uncertainties surrounding the future of AI applications and hardware development. To assess these uncertainties, scenario-based projections are first constructed to obtain potential capacity-growth patterns of AI servers. Technology dynamics, such as SUO and ALC adoption, are defined with best, base and worst scenarios, and a similar method is employed to capture the impact of grid decarbonization and spatial distribution. The models and data required in the calculation process are described in the following sections. 
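The scenario construction described above amounts to crossing the best/base/worst levels of each influencing factor into a scenario grid. A minimal sketch in Python; the factor names and the uniform three-level structure are illustrative assumptions, not the study's exact scenario definitions:

```python
from itertools import product

# Hypothetical factor levels: the study defines best, base and worst cases
# for technology dynamics (e.g. SUO and ALC adoption) and applies a similar
# treatment to grid decarbonization.
FACTORS = {
    "suo_adoption": ["best", "base", "worst"],
    "alc_adoption": ["best", "base", "worst"],
    "grid_decarbonization": ["best", "base", "worst"],
}

def build_scenarios(factors):
    """Enumerate every combination of factor levels as a list of dicts."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

scenarios = build_scenarios(FACTORS)
# 3 factors with 3 levels each yield 27 combined scenarios
```

In practice only a handful of these combinations are carried forward as named scenarios, but the full grid is useful for bounding the projection range.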
More details on model assumptions and data generation are provided in sections 1\u20134 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>.<\/p>\n<p>Data description and discussion<\/p>\n<p>This section provides a comprehensive overview of the data used in this study. Historical DGX (Nvidia\u2019s high-performance AI server line) parameters were sourced from official documentation, and future scenarios were projected on the basis of historical configurations and current industry forecasts. To estimate the number of AI server units, we collected the most up-to-date industry report data to project the future manufacturing capacity of CoWoS technology, which is the bottleneck for top-tier AI server production. The data sources for the preceding process are introduced and validated in section 1 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>. AI server electricity usage was assessed using recent experimental data on maximum power<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Patel, P. et al. Characterizing power management opportunities for LLMs in the cloud. In Proc. 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 
3, 207&#x2013;222 (Association for Computing Machinery, 2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR44\" id=\"ref-link-section-d44482331e1186\" target=\"_blank\" rel=\"noopener\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 45\" title=\"Dodge, J. et al. Measuring the carbon intensity of AI in cloud instances. In Proc. 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT &#x2019;22) 1877&#x2013;1894 (Association for Computing Machinery, 2022).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR45\" id=\"ref-link-section-d44482331e1189\" target=\"_blank\" rel=\"noopener\">45<\/a>, idle power<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Patel, P. et al. Characterizing power management opportunities for LLMs in the cloud. In Proc. 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 3, 207&#x2013;222 (Association for Computing Machinery, 2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR44\" id=\"ref-link-section-d44482331e1193\" target=\"_blank\" rel=\"noopener\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Hu, Q., Sun, P., Yan, S., Wen, Y. &amp; Zhang, T. Characterization and prediction of deep learning workloads in large-scale GPU datacenters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC &#x2019;21), 1-15 (2021). 
&#010;                https:\/\/doi.org\/10.1145\/3458817.3476223&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR46\" id=\"ref-link-section-d44482331e1196\" target=\"_blank\" rel=\"noopener\">46<\/a> and utilization rate<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Hu, Q., Sun, P., Yan, S., Wen, Y. &amp; Zhang, T. Characterization and prediction of deep learning workloads in large-scale GPU datacenters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC &#x2019;21), 1-15 (2021). &#10;                https:\/\/doi.org\/10.1145\/3458817.3476223&#10;                &#10;              \" href=\"#ref-CR46\" id=\"ref-link-section-d44482331e1200\">46<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Jeon, M. et al. Analysis of large-scale multi-tenant GPU clusters for DNN training workloads. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), 947&#x2013;960 (2019). &#10;                https:\/\/www.usenix.org\/conference\/atc19\/presentation\/jeon&#10;                &#10;              \" href=\"#ref-CR47\" id=\"ref-link-section-d44482331e1200_1\">47<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Weng, Q. et al. Beware of Fragmentation: scheduling GPU-sharing workloads with fragmentation gradient descent. In 2023 USENIX Annual Technical Conference (USENIX ATC 23), 995&#x2013;1008 (2023). 
&#10;                https:\/\/www.usenix.org\/conference\/atc23\/presentation\/weng&#10;                &#10;              \" href=\"#ref-CR48\" id=\"ref-link-section-d44482331e1200_2\">48<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 49\" title=\"Hu, Q. et al. Characterization of large language model development in the datacenter. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), 709&#x2013;729 (2024). &#010;                https:\/\/www.usenix.org\/conference\/nsdi24\/presentation\/hu&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR49\" id=\"ref-link-section-d44482331e1203\" target=\"_blank\" rel=\"noopener\">49<\/a>, derived from existing AI server systems. PUE and WUE values for AI data centres across different locations were calculated using operational data from previous studies<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 30\" title=\"Lei, N. &amp; Masanet, E. Climate- and technology-specific PUE and WUE estimations for US data centers using a hybrid statistical and thermodynamics-based approach. Resour. Conserv. Recycl. 182, 106323 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR30\" id=\"ref-link-section-d44482331e1207\" target=\"_blank\" rel=\"noopener\">30<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 50\" title=\"Lei, N. &amp; Masanet, E. Statistical analysis for predicting location-specific data center PUE and its improvement potential. 
Energy 201, 117556 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR50\" id=\"ref-link-section-d44482331e1210\" target=\"_blank\" rel=\"noopener\">50<\/a> and industrial resources<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 51\" title=\"3M specialty fluids. 3M &#010;                https:\/\/www.3m.com\/3M\/en_US\/p\/c\/electronics-components\/specialty-fluids\/&#010;                &#010;               (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR51\" id=\"ref-link-section-d44482331e1215\" target=\"_blank\" rel=\"noopener\">51<\/a>, combined with the collected average climate data for each state<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 52\" title=\"Meteonorm 8.2 Global Meteorological Database (version 8.2). Meteotest AG &#010;                https:\/\/mn8.meteonorm.com&#010;                &#010;               (2023).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR52\" id=\"ref-link-section-d44482331e1219\" target=\"_blank\" rel=\"noopener\">52<\/a>. The allocation ratios of AI servers to each state were determined on the basis of the configurations of existing and planned AI data centres, collected from reports of major AI companies in the United States, with data sources detailed in section 2 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>. In addition, projections for grid carbon and water factors were derived from the ReEDS model<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 31\" title=\"Ho, J. et al. 
Regional Energy Deployment System (ReEDS) Model Documentation Version 2020 (National Renewable Energy Lab, 2021); &#010;                https:\/\/docs.nrel.gov\/docs\/fy21osti\/78195.pdf&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR31\" id=\"ref-link-section-d44482331e1226\" target=\"_blank\" rel=\"noopener\">31<\/a>, using its default scenario data<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 33\" title=\"2023 electricity ATB technologies and data overview. NREL &#010;                https:\/\/atb.nrel.gov\/electricity\/2023\/index&#010;                &#010;               (2023).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR33\" id=\"ref-link-section-d44482331e1230\" target=\"_blank\" rel=\"noopener\">33<\/a>. All datasets employed in this study are publicly available, with most originating from well-established sources. A key uncertainty lies in estimating the number of manufactured AI server units, as official supply-chain reports remain largely opaque. To maintain transparency and ensure reproducibility, we rely on the best available industry reports rather than commercial sources such as International Data Corporation (IDC) data<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 53\" title=\"Worldwide Quarterly AI Infrastructure Tracker (IDC, 2024); &#010;                https:\/\/www.idc.com\/getdoc.jsp?containerId=IDC_P37251&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR53\" id=\"ref-link-section-d44482331e1234\" target=\"_blank\" rel=\"noopener\">53<\/a>, which are not openly accessible and would limit future validation, despite their potential to provide better estimates. 
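As a rough illustration of how the grid carbon and water factors mentioned above enter the later footprint calculations, the off-site (scope 2) footprints scale linearly with electricity use. A minimal sketch with hypothetical intensity values, not actual ReEDS outputs:

```python
def scope2_footprints(electricity_twh, carbon_kg_per_kwh, water_l_per_kwh):
    """Off-site carbon (Mt CO2) and water (billion litres) from grid factors.

    electricity_twh:    annual AI server electricity use in a state (TWh)
    carbon_kg_per_kwh:  grid carbon intensity (kg CO2 per kWh)
    water_l_per_kwh:    grid water intensity (litres per kWh)
    """
    kwh = electricity_twh * 1e9                 # 1 TWh = 1e9 kWh
    carbon_mt = kwh * carbon_kg_per_kwh / 1e9   # kg -> Mt
    water_bil_l = kwh * water_l_per_kwh / 1e9   # L -> billion L
    return carbon_mt, water_bil_l

# Hypothetical example: 10 TWh at 0.4 kg CO2/kWh and 2 L/kWh
carbon, water = scope2_footprints(10, 0.4, 2.0)
```

The state-resolved calculation simply repeats this product with each state's projected electricity use and its local grid factors for each year.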
Further validation of the applied data is provided in sections 1 and 4 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>.<\/p>\n<p>AI server power capacity projections<\/p>\n<p>The energy consumption of AI servers is projected to be driven predominantly by top-tier models designed for large-scale generative AI computing<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 6\" title=\"de Vries, A. The growing energy footprint of artificial intelligence. Joule 7, 2191&#x2013;2194 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR6\" id=\"ref-link-section-d44482331e1250\" target=\"_blank\" rel=\"noopener\">6<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 7\" title=\"Crawford, K. Generative AI&#x2019;s environmental costs are soaring&#x2014;and mostly secret. Nature 626, 693 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR7\" id=\"ref-link-section-d44482331e1253\" target=\"_blank\" rel=\"noopener\">7<\/a>. This trend is attributed to their substantial power requirements and the increasing number of units being deployed. In this study, we estimate the power capacity of these high-performance AI servers by examining a critical manufacturing bottleneck: the CoWoS process<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 54\" title=\"TSMC reportedly sensing increased orders again, CoWoS production capacity surges. 
TrendForce &#010;                https:\/\/www.trendforce.com\/news\/&#010;                &#010;               (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR54\" id=\"ref-link-section-d44482331e1257\" target=\"_blank\" rel=\"noopener\">54<\/a>. This process, which is controlled nearly exclusively by the Taiwan Semiconductor Manufacturing Company, has been a key determinant of AI server manufacturing capacity in recent years<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 55\" title=\"TSMC explores radical new chip packaging approach to feed AI boom. NIKKEI Asia &#010;                https:\/\/asia.nikkei.com\/business\/tech\/semiconductors\/tsmc-explores-radical-new-chip-packaging-approach-to-feed-ai-boom&#010;                &#010;               (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR55\" id=\"ref-link-section-d44482331e1261\" target=\"_blank\" rel=\"noopener\">55<\/a>. Our analysis uses forecast data and projection assumptions of the CoWoS process to estimate total production capacity. Several other factors are integral to translating this capacity into the power capacity of AI servers: the CoWoS size of AI chips, which determines how many chips can be produced from each wafer; the rated power of future AI servers, which reflects the power demand per unit; and the adoption patterns of AI servers, which dictate the mix of various server types over time. The values of these factors are derived mainly from the DGX systems produced by Nvidia, which dominate the top-tier AI server market<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 56\" title=\"Shah, A. Nvidia shipped 3.76 million data-center GPUs in 2023, according to study. 
HPC Wire &#010;                https:\/\/www.hpcwire.com\/2024\/06\/10\/nvidia-shipped-3-76-million-data-center-gpus-in-2023-according-to-study&#010;                &#010;               (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR56\" id=\"ref-link-section-d44482331e1265\" target=\"_blank\" rel=\"noopener\">56<\/a>.<\/p>\n<p>Considering the factors influencing total AI server capacity shipments and the associated uncertainties, we generate distinct scenarios as follows:<\/p>\n<ul class=\"u-list-style-bullet\">\n<li>\n<p>Mid-case scenario: the CoWoS capacity is projected to increase slightly after 2026, consistent with the growth rate in 2023. Under this scenario, AI servers\u2019 rated power is expected to have a linear relationship with the anticipated die-size increase, while adoption patterns remain aligned with the current trajectory.<\/p>\n<\/li>\n<li>\n<p>Low-demand scenario: characterized by lower CoWoS capacity growth and lower AI server rated power compared with the mid-case scenario, this reflects a scenario of lower overall demand for AI servers.<\/p>\n<\/li>\n<li>\n<p>Low-power scenario: maintains the same assumptions as the mid-case scenario but with lower AI server rated power, representing efficiency gains in AI hardware and software development.<\/p>\n<\/li>\n<li>\n<p>High-application scenario: assumes lower AI server rated power alongside high CoWoS capacity, capturing the potential rebound effect where efficiency gains drive increased AI workload deployment.<\/p>\n<\/li>\n<li>\n<p>High-demand scenario: features higher CoWoS capacity expansion, higher AI server rated power and higher adoption of new servers compared with the mid-case scenario, reflecting a scenario of strong AI server demand.<\/p>\n<\/li>\n<\/ul>\n<p>Based on the assumptions and scenarios outlined, the annual projections for top-tier AI server shipments and their average rated power are calculated as 
follows:<\/p>\n<p>$$\\begin{array}{l}{N}_{\\mathrm{AI}}=\\frac{{C}_{\\mathrm{CoWoS}}\\times {R}_{\\mathrm{Nvidia}}\\times {\\sum }_{i}{R}_{i}{n}_{i}}{{N}_{\\mathrm{GPU}}}\\\\ {\\bar{P}}_{\\mathrm{AI}}=\\mathop{\\sum }\\limits_{i}{R}_{i}{P}_{i}\\end{array}$$<\/p>\n<p>\n                    (1)\n                <\/p>\n<p>where \\({N}_{\\mathrm{AI}}\\) and \\({\\bar{P}}_{\\mathrm{AI}}\\) represent the annually projected shipments and average rated power of the top-tier AI servers. \\({R}_{\\mathrm{Nvidia}}\\) is the ratio of CoWoS capacity allocated to top-tier AI servers and is set as 40% for 2022, 40.7% for 2023, 48.5% for 2024 and 54.3% for 2025, according to industry reports<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 57\" title=\"Kung, F. See Generative AI&#x2019;s Impact on the AI Server Market to 2025 (TrendForce, 2024); &#010;                https:\/\/files.futurememorystorage.com\/proceedings\/2024\/20240808_BMKT-301-1_KUNG.pdf&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR57\" id=\"ref-link-section-d44482331e1507\" target=\"_blank\" rel=\"noopener\">57<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 58\" title=\"Nvidia secures 60% of TSMC&#x2019;s doubled CoWoS capacity for 2025. Digitimes Asia &#010;                https:\/\/www.digitimes.com\/news\/a20241122PD200\/nvidia-tsmc-capacity-cowos-2025.html&#010;                &#010;               (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR58\" id=\"ref-link-section-d44482331e1510\" target=\"_blank\" rel=\"noopener\">58<\/a>. For years beyond 2025, this ratio is assumed to remain constant at the 2025 value due to a lack of further data. The sensitivity analysis regarding this value is provided in Fig. 
<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#Fig6\" target=\"_blank\" rel=\"noopener\">6<\/a>. \\({C}_{\\mathrm{CoWoS}}\\) is the projected CoWoS capacity within each scenario. \\({N}_{\\mathrm{GPU}}\\) is the number of graphics processing units (GPUs) per server and is set as 8, reflecting the configuration of the most commonly used AI server systems<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 59\" title=\"DGX systems: built for the unique demands of AI (Nvidia, 2024); &#010;                https:\/\/www.nvidia.com\/en-gb\/data-center\/dgx-systems\/&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR59\" id=\"ref-link-section-d44482331e1562\" target=\"_blank\" rel=\"noopener\">59<\/a>. In addition, \\({R}_{i}\\), \\({n}_{i}\\) and \\({P}_{i}\\) represent the projected adoption ratio, unit yield per CoWoS wafer and rated power of the ith type of chip in each year, respectively. The details of the projections and related data sources are provided in section 1 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>, Supplementary Figs. 
<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">1\u20134<\/a> and Supplementary Table <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">1<\/a>.<\/p>\n<p>AI server electricity usage calculation<\/p>\n<p>The applied AI server electricity usage model is a utilization-based approach initially derived from CPU (central processing unit)-dominant servers<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 60\" title=\"Fan, X., Weber, W.-D. &amp; Barroso, L. A. Power provisioning for a warehouse-sized computer. In Proc. 34th Annual International Symposium on Computer Architecture Vol. 35, 13&#x2013;23 (Association for Computing Machinery, 2007).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR60\" id=\"ref-link-section-d44482331e1671\" target=\"_blank\" rel=\"noopener\">60<\/a> and can be written as follows:<\/p>\n<p>$${P}_{\\mathrm{server}}=({P}_{\\max }-{P}_{\\mathrm{idle}})u+{P}_{\\mathrm{idle}}$$<\/p>\n<p>\n                    (2)\n                <\/p>\n<p>The preceding model assumes that the total server power has a linear relationship with the processor utilization rate u. While this relationship has been well validated for CPU machines, its application to GPU utilization is less established except for a few cases<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 61\" title=\"Masanet, E. R., Brown, R. E., Shehabi, A., Koomey, J. G. &amp; Nordman, B. Estimating the energy use and efficiency potential of US data centers. Proc. 
IEEE 99, 1440&#x2013;1453 (2011).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR61\" id=\"ref-link-section-d44482331e1746\" target=\"_blank\" rel=\"noopener\">61<\/a>. However, several recent studies have shown a strong correlation between GPU utilization and overall server power consumption when dealing with AI workloads<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Patel, P. et al. Characterizing power management opportunities for LLMs in the cloud. In Proc. 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 3, 207&#x2013;222 (Association for Computing Machinery, 2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR44\" id=\"ref-link-section-d44482331e1750\" target=\"_blank\" rel=\"noopener\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Hu, Q., Sun, P., Yan, S., Wen, Y. &amp; Zhang, T. Characterization and prediction of deep learning workloads in large-scale GPU datacenters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC &#x2019;21), 1-15 (2021). &#010;                https:\/\/doi.org\/10.1145\/3458817.3476223&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR46\" id=\"ref-link-section-d44482331e1753\" target=\"_blank\" rel=\"noopener\">46<\/a>, indicating that GPUs are the dominant contributors to energy use in AI servers<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 45\" title=\"Dodge, J. et al. Measuring the carbon intensity of AI in cloud instances. In Proc. 
2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT &#x2019;22) 1877&#x2013;1894 (Association for Computing Machinery, 2022).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR45\" id=\"ref-link-section-d44482331e1757\" target=\"_blank\" rel=\"noopener\">45<\/a>. Although systematic experimental validation specific to GPUs is still limited, the consistency of findings across various case studies supports the assumption that the linear relationship applies here as well. The maximum power \\({P}_{\\max }\\) and idle power \\({P}_{\\mathrm{idle}}\\) are derived from recent DGX system experimental results, and their values are set as 88% and 23% of the server rated power, respectively<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Patel, P. et al. Characterizing power management opportunities for LLMs in the cloud. In Proc. 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 3, 207&#x2013;222 (Association for Computing Machinery, 2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR44\" id=\"ref-link-section-d44482331e1812\" target=\"_blank\" rel=\"noopener\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Hu, Q., Sun, P., Yan, S., Wen, Y. &amp; Zhang, T. Characterization and prediction of deep learning workloads in large-scale GPU datacenters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC &#x2019;21), 1-15 (2021). 
&#010;                https:\/\/doi.org\/10.1145\/3458817.3476223&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR46\" id=\"ref-link-section-d44482331e1815\" target=\"_blank\" rel=\"noopener\">46<\/a>. A sensitivity analysis was conducted to quantify this uncertainty, as shown in Supplementary Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#Fig6\" target=\"_blank\" rel=\"noopener\">6<\/a>. Moreover, the GPU processor utilization \\(u\\) is calculated as follows:<\/p>\n<p>$$u={u}_{\\mathrm{active}}\\times {r}_{\\mathrm{active}}$$<\/p>\n<p>\n                    (3)\n                <\/p>\n<p>where \\({u}_{\\mathrm{active}}\\) and \\({r}_{\\mathrm{active}}\\) represent the average processor utilization of active GPUs and the ratio of active GPUs to total GPUs, respectively. Note that \\({u}_{\\mathrm{active}}\\) and \\({r}_{\\mathrm{active}}\\) commonly have higher values during training than during inference<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 62\" title=\"Ye, Z., et al. Deep learning workload scheduling in GPU datacenters: a survey. ACM Comput. Surv. 56, 146 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR62\" id=\"ref-link-section-d44482331e1968\" target=\"_blank\" rel=\"noopener\">62<\/a>. Specifically, we use currently available AI traces, including Philly trace<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 47\" title=\"Jeon, M. et al. Analysis of large-scale multi-tenant GPU clusters for DNN training workloads. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), 947&#x2013;960 (2019). 
&#010;                https:\/\/www.usenix.org\/conference\/atc19\/presentation\/jeon&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR47\" id=\"ref-link-section-d44482331e1972\" target=\"_blank\" rel=\"noopener\">47<\/a>, Helios trace<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 46\" title=\"Hu, Q., Sun, P., Yan, S., Wen, Y. &amp; Zhang, T. Characterization and prediction of deep learning workloads in large-scale GPU datacenters. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC &#x2019;21), 1-15 (2021). &#010;                https:\/\/doi.org\/10.1145\/3458817.3476223&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR46\" id=\"ref-link-section-d44482331e1976\" target=\"_blank\" rel=\"noopener\">46<\/a>, PAI trace<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Weng, Q. et al. Beware of Fragmentation: scheduling GPU-sharing workloads with fragmentation gradient descent. In 2023 USENIX Annual Technical Conference (USENIX ATC 23), 995&#x2013;1008 (2023). &#010;                https:\/\/www.usenix.org\/conference\/atc23\/presentation\/weng&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR48\" id=\"ref-link-section-d44482331e1981\" target=\"_blank\" rel=\"noopener\">48<\/a> and Acme trace<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 49\" title=\"Hu, Q. et al. Characterization of large language model development in the datacenter. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), 709&#x2013;729 (2024). 
&#010;                https:\/\/www.usenix.org\/conference\/nsdi24\/presentation\/hu&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR49\" id=\"ref-link-section-d44482331e1985\" target=\"_blank\" rel=\"noopener\">49<\/a>, to determine the \\({r}_{\\mathrm{active}}\\) for training and inference tasks. These traces provide comprehensive analyses of the relationship between GPU utilization rate and job characteristics. Based on the data provided in these works, the \\({r}_{\\mathrm{active}}\\) is set as 50% and 90% for inference and training, respectively. Moreover, the \\({u}_{\\mathrm{active}}\\) values are further determined on the basis of recent experimental studies<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Patel, P. et al. Characterizing power management opportunities for LLMs in the cloud. In Proc. 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 3, 207&#x2013;222 (Association for Computing Machinery, 2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR44\" id=\"ref-link-section-d44482331e2055\" target=\"_blank\" rel=\"noopener\">44<\/a>. The values are set as 50% and 80% for inference and training, respectively. Therefore, the processor utilization rates for inference and training in this work are set as 25% and 72%, respectively. Following previous works<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 63\" title=\"Wu, C.-J. et al. Sustainable AI: environmental implications, challenges and opportunities. Proc. Mach. Learn. Syst. 
4, 795&#x2013;813 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR63\" id=\"ref-link-section-d44482331e2060\" target=\"_blank\" rel=\"noopener\">63<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 64\" title=\"Berthelot, A., Caron, E., Jay, M. &amp; Lef&#xE8;vre, L. Estimating the environmental impact of generative-AI services using an LCA-based methodology. Procedia CIRP 122, 707&#x2013;712 (2024).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR64\" id=\"ref-link-section-d44482331e2063\" target=\"_blank\" rel=\"noopener\">64<\/a>, our base estimates assume 30% of computing capacity for training and 70% for inference. A detailed sensitivity analysis on the impact of these utilization-rate settings is provided in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#Fig6\" target=\"_blank\" rel=\"noopener\">6<\/a>.<\/p>\n<p>Assessment of the environmental footprints of AI servers<\/p>\n<p>This study employs a state-level allocation method to evaluate the energy, water and carbon footprints of AI servers. To capture the current and future distributions of AI server capacity, we compiled data on current and under-construction large-scale data centres belonging to major purchasers of top-tier AI servers, including Google, Meta, Microsoft, AWS, xAI and Tesla. The analysis incorporates the location, building area and construction year of each data centre to calculate the state-level distribution of server capacity by annually aggregating the total building area for each state. On the basis of our calculations, no major changes in spatial distribution are projected between 2024 and 2030, even with the anticipated addition of new data centres. 
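As a minimal sketch of how these ratios combine, the effective processor utilization can be treated as the product of r_active and u_active, weighted by the assumed 30%/70% training/inference capacity split. The fleet-energy helper and its power figures below are hypothetical illustrations, not values from this study:

```python
# Minimal sketch (assumption): effective processor utilization as the product
# of the active-time ratio (r_active) and the utilization-while-active ratio
# (u_active), with capacity split 30% training / 70% inference.
# Power figures in the helper are hypothetical, not values from the study.

R_ACTIVE = {"inference": 0.50, "training": 0.90}  # share of time processors are active
U_ACTIVE = {"inference": 0.50, "training": 0.80}  # utilization while active

def effective_utilization(task):
    """Effective processor utilization = r_active * u_active."""
    return R_ACTIVE[task] * U_ACTIVE[task]

def fleet_energy_twh(n_servers, max_power_kw, hours, training_share=0.30):
    """Annual fleet electricity (TWh): maximum power scaled by the
    capacity-weighted effective utilization (1 TWh = 1e9 kWh)."""
    util = (training_share * effective_utilization("training")
            + (1 - training_share) * effective_utilization("inference"))
    return n_servers * max_power_kw * util * hours / 1e9

print(round(effective_utilization("inference"), 2))  # 0.25
print(round(effective_utilization("training"), 2))   # 0.72
```

This reproduces the 25% and 72% figures quoted above; the fleet-level helper only illustrates where those rates would enter an electricity estimate.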
Therefore, we assume the current spatial distribution will remain constant from 2024 to 2030 to account for uncertainties in directly integrating the projected contributions of under-construction data centres. Further details on the methodology and spatial distribution results are provided in section 2 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>.<\/p>\n<p>For each state, the actual energy consumption can be derived from the server electricity usage and the PUE value of AI data centres. Meanwhile, the water footprint and carbon emissions are analysed across three scopes. Scope 1 encompasses the on-site water footprint, calculated on the basis of on-site WUE (shortened as WUE in this work) and on-site carbon emissions (typically negligible for data centres<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"Siddik, M. A. B., Shehabi, A. &amp; Marston, L. The environmental footprint of data centers in the United States. Environ. Res. Lett. 16, 064017 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR18\" id=\"ref-link-section-d44482331e2085\" target=\"_blank\" rel=\"noopener\">18<\/a>). Scope 2 includes off-site water footprint and carbon emissions, which are contingent on the local grid power supply portfolio. Scope 3, representing embodied water footprint and carbon emissions during facility manufacturing, lies beyond the spatial scope of this study. A regional PUE and WUE model, following the approach of previous research<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 30\" title=\"Lei, N. &amp; Masanet. 
Climate- and technology-specific PUE and WUE estimations for US data centers using a hybrid statistical and thermodynamics-based approach. Resour. Conserv. Recycl. 182, 106323 (2022).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR30\" id=\"ref-link-section-d44482331e2089\" target=\"_blank\" rel=\"noopener\">30<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 50\" title=\"Lei, N. &amp; Masanet, E. Statistical analysis for predicting location-specific data center PUE and its improvement potential. Energy 201, 117556 (2020).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR50\" id=\"ref-link-section-d44482331e2092\" target=\"_blank\" rel=\"noopener\">50<\/a>, is applied to estimate the PUE and WUE values of AI data centres in different states. This hybrid model integrates thermodynamics and statistical data to generate estimates on the basis of local climate data. Specifically, we collected the average climate data of each state between 2024 and 2030 from an existing climate model<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 52\" title=\"Meteonorm 8.2 Global Meteorological Database (version 8.2). Meteotest AG &#010;                https:\/\/mn8.meteonorm.com&#010;                &#010;               (2023).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR52\" id=\"ref-link-section-d44482331e2096\" target=\"_blank\" rel=\"noopener\">52<\/a>, which are then used to calculate the PUE and WUE values for each state. Considering that the specific cooling settings for AI data centres are unknown, the base values are calculated by averaging the worst and best cases. 
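The base-case estimation just described can be sketched as a simple midpoint of the best- and worst-case cooling scenarios. The state names and efficiency values below are placeholders for illustration, not outputs of the hybrid model:

```python
# Placeholder sketch (assumption): base-case PUE/WUE for each state taken as
# the mean of the best- and worst-case cooling estimates. State names and
# values are illustrative, not outputs of the hybrid climate model.

pue_cases = {"Virginia": (1.10, 1.50), "Texas": (1.15, 1.60)}  # (best, worst)
wue_cases = {"Virginia": (0.20, 1.80), "Texas": (0.30, 2.20)}  # L/kWh (best, worst)

def base_values(cases):
    """Average the best- and worst-case estimate for every state."""
    return {state: (best + worst) / 2 for state, (best, worst) in cases.items()}

base_pue = base_values(pue_cases)
base_wue = base_values(wue_cases)
```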
The model parameters are detailed in Supplementary Table <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">2<\/a>. Subsequently, the Scope 2 water footprint and carbon emissions are calculated on the basis of the grid water and carbon factors derived from the ReEDS model<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 31\" title=\"Ho, J. et al. Regional Energy Deployment System (ReEDS) Model Documentation Version 2020 (National Renewable Energy Lab, 2021); &#010;                https:\/\/docs.nrel.gov\/docs\/fy21osti\/78195.pdf&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR31\" id=\"ref-link-section-d44482331e2103\" target=\"_blank\" rel=\"noopener\">31<\/a>. This approach also allows us to incorporate the projected data-centre load data, which can further interact with the grid system through services such as demand response. The validation of the ReEDS model results by using current high-resolution data is presented in Supplementary Figs. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">7<\/a> and <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">8<\/a>, and the related discussion is presented in section 4 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>. 
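Putting these pieces together, the per-state Scope 1 and Scope 2 accounting might look like the following sketch. The function and all input values are hypothetical; in the study, the PUE/WUE come from the hybrid climate model and the grid factors from ReEDS:

```python
# Hypothetical sketch of the per-state accounting described in the text:
#   total energy   = IT electricity * PUE
#   Scope 1 water  = IT electricity * on-site WUE (on-site carbon ~ 0)
#   Scope 2 water  = total energy * grid water factor
#   Scope 2 carbon = total energy * grid carbon factor
# All input values below are illustrative placeholders, not study results.

def state_footprint(it_energy_gwh, pue, wue_l_per_kwh,
                    grid_water_l_per_kwh, grid_carbon_kg_per_kwh):
    total_kwh = it_energy_gwh * 1e6 * pue          # 1 GWh = 1e6 kWh
    return {
        "energy_gwh": total_kwh / 1e6,
        "scope1_water_ML": it_energy_gwh * 1e6 * wue_l_per_kwh / 1e6,  # megalitres
        "scope2_water_ML": total_kwh * grid_water_l_per_kwh / 1e6,
        "scope2_carbon_kt": total_kwh * grid_carbon_kg_per_kwh / 1e6,  # kilotonnes
    }

fp = state_footprint(it_energy_gwh=100, pue=1.3, wue_l_per_kwh=1.0,
                     grid_water_l_per_kwh=2.0, grid_carbon_kg_per_kwh=0.35)
```

The sketch only fixes the bookkeeping; swapping in state-specific PUE/WUE and grid factors yields the per-state footprints aggregated in the study.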
Optimization and analytical techniques are employed during the simulation to identify the parameter settings that constitute best and worst practices for industrial efficiency efforts, spatial distribution and grid decarbonization. Moreover, the water scarcity and remaining renewable-energy potential of each state are computed from the calculated environmental costs and standard data from previous literature<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"Siddik, M. A. B., Shehabi, A. &amp; Marston, L. The environmental footprint of data centers in the United States. Environ. Res. Lett. 16, 064017 (2021).\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR18\" id=\"ref-link-section-d44482331e2117\" target=\"_blank\" rel=\"noopener\">18<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 65\" title=\"Lopez, A., Roberts, B., Heimiller, D., Blair, N. &amp; Porro, G. US renewable energy technical potentials: a GIS-based analysis (NREL, 2012); &#010;                https:\/\/www.osti.gov\/servlets\/purl\/1219777&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR65\" id=\"ref-link-section-d44482331e2120\" target=\"_blank\" rel=\"noopener\">65<\/a>. The preceding calculation process depends mainly on previously established approaches, and its integration into our framework is further discussed in sections 3 and 4 of <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM1\" target=\"_blank\" rel=\"noopener\">Supplementary Information<\/a>.<\/p>\n<p>Uncertainties and limitations<\/p>\n<p>There are substantial uncertainties inherent in projecting the evolution of AI servers. 
Our analysis presents a range of scenarios based on current data to evaluate the impacts of data-centre operational efficiency, spatial distribution and grid development. However, several key uncertainties remain unmodelled in this work. To aid interpretation of our study and outline future research directions, these uncertainties are categorized as follows:<\/p>\n<ul class=\"u-list-style-bullet\">\n<li>\n<p>Model and algorithm innovations: breakthroughs in AI models and algorithms could fundamentally alter computing requirements.<\/p>\n<\/li>\n<li>\n<p>Supply-chain uncertainties: the complex production process of AI servers may reveal new bottlenecks beyond the current CoWoS technology, leading to varying expansion patterns.<\/p>\n<\/li>\n<li>\n<p>Hardware and facility evolutions: continued improvements in AI computing hardware and data-centre efficiency may substantially affect the environmental impact of these servers.<\/p>\n<\/li>\n<li>\n<p>Out-of-scope factors: other major contributors, such as market forces and geopolitical influences, lie outside the scope of this study but could be critical to this process.<\/p>\n<\/li>\n<\/ul>\n<p>The impacts of these factors are multifaceted and challenging to model with existing data. For example, while the recent release of DeepSeek has been interpreted as reducing the energy demands of AI servers, it may also trigger a rebound effect by spurring increased AI computing activity, ultimately resulting in higher overall energy, water and carbon footprints<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 43\" title=\"Pipe, A. &amp; Rattner, N. How DeepSeek&#x2019;s lower-power, less-data model stacks up. 
Wall Street Journal (16 February 2025); &#010;                https:\/\/www.wsj.com\/tech\/ai\/deepseek-ai-how-it-works-725cb464&#010;                &#010;              \" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#ref-CR43\" id=\"ref-link-section-d44482331e2164\" target=\"_blank\" rel=\"noopener\">43<\/a>. However, to the best of our knowledge at the time of writing, no data were yet available to model this complex process. To further assess the influence of unpredictable uncertainties, we conducted a sensitivity analysis on key factors, including manufacturing capacities for AI servers, US allocation ratios, server lifetimes, idle and maximum power ratios and training\/inference distributions. As shown in Fig. <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#Fig6\" target=\"_blank\" rel=\"noopener\">6<\/a>, our findings suggest that the key conclusions of this study are expected to remain robust as long as the impact of future uncertainties does not notably exceed the ranges considered. 
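The one-at-a-time structure of such a sensitivity scan can be sketched as follows. The factor ranges and the stand-in footprint function are assumptions for illustration only; the actual analysis runs the full model:

```python
# Toy one-at-a-time sensitivity scan (assumption): vary each factor across
# its range while holding the others at base values, and record the spread
# in the footprint. Three of the scanned factors are shown for brevity;
# the ranges and the stand-in footprint function are illustrative only.

BASE = {"capacity": 1.0, "us_share": 0.7, "lifetime_yr": 5.0}
RANGES = {"capacity": (0.8, 1.2), "us_share": (0.5, 0.9), "lifetime_yr": (4.0, 6.0)}

def footprint(p):
    # Stand-in for the full model: impact scales with manufacturing capacity
    # and US allocation share, and falls with longer server lifetimes.
    return 100.0 * p["capacity"] * p["us_share"] * (5.0 / p["lifetime_yr"])

def sensitivity_scan():
    spread = {}
    for factor, (lo, hi) in RANGES.items():
        vals = [footprint({**BASE, factor: v}) for v in (lo, hi)]
        spread[factor] = (min(vals), max(vals))
    return spread

for factor, (low, high) in sensitivity_scan().items():
    print(f"{factor}: {low:.1f} to {high:.1f}")
```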
Given the highly dynamic nature of AI evolution, our modelling approach allows for future revisions as more data become available on potential shifts in industry trends.<\/p>\n<p>Reporting summary<\/p>\n<p>Further information on research design is available in the <a data-track=\"click\" data-track-label=\"link\" data-track-action=\"supplementary material anchor\" href=\"http:\/\/www.nature.com\/articles\/s41893-025-01681-y#MOESM2\" target=\"_blank\" rel=\"noopener\">Nature Portfolio Reporting Summary<\/a> linked to this article.<\/p>