# AI labs like Meta, DeepSeek, and xAI earned the worst grades possible on an existential safety index

A recent report card from an AI safety watchdog isn't one that tech companies will want to stick on the fridge.

The Future of Life Institute's [latest AI safety index](https://futureoflife.org/ai-safety-index-winter-2025/) found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The organization graded eight companies across categories like safety frameworks, risk assessment, and current harms.

Perhaps most glaring was the "existential safety" line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute.

"Reviewers found this kind of jarring," Tegmark told us.

The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.

Anthropic, OpenAI, and Google DeepMind took the top three spots with overall grades of C+ or C. Then came, in order, Elon Musk's xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which got Ds or a D-.

Tegmark blames a lack of regulation, which has meant that the cutthroat competition of the AI race trumps safety precautions. California [recently passed](https://www.techbrew.com/stories/2025/10/02/california-ai-law-us-tech-regulation) the first law requiring frontier AI companies to disclose safety information around catastrophic risks, and New York is [currently within spitting distance](https://www.techbrew.com/stories/2025/11/18/new-york-ai-safety-bill-alex-bores) of doing the same. Hopes for federal legislation are [dim](https://www.techbrew.com/stories/2025/12/02/trump-gop-push-to-block-state-ai-laws), however.

"Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe," Tegmark said.

In lieu of government-mandated standards, Tegmark said the industry has begun to take the group's regularly released safety indexes more seriously; four of the five American companies now respond to its survey (Meta is the only holdout). And companies have made some improvements over time, Tegmark said, citing Google's transparency around its whistleblower policy as an example.

But real-life harms reported around issues like [teen suicides](https://www.nytimes.com/2025/10/24/magazine/character-ai-chatbot-lawsuit-teen-suicide-free-speech.html) that chatbots allegedly encouraged, [inappropriate interactions](https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/) with minors, and [major cyberattacks](https://www.axios.com/2025/11/13/anthropic-china-claude-code-cyberattack) have also raised the stakes of the discussion, he said.

"[They] have really made a lot of people realize that this isn't the future we're talking about—it's now," Tegmark said.

The Future of Life Institute recently enlisted public figures as diverse as Prince Harry and Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper Will.i.am to sign a [statement](https://superintelligence-statement.org/) opposing work that could lead to superintelligence.

Tegmark said he would like to see something like "an FDA for AI, where companies first have to convince experts that their models are safe before they can sell them."

"The AI industry is quite unique in that it's the only industry in the US making powerful technology that's less regulated than sandwiches—basically not regulated at all," Tegmark said. "If someone says, 'I want to open a new sandwich shop near Times Square,' before you can sell the first sandwich, you need a health inspector to check your kitchen and make sure it's not full of rats…If you instead say, 'Oh no, I'm not going to sell any sandwiches. I'm just going to release superintelligence.' OK! No need for any inspectors, no need to get any approvals for anything."

"So the solution to this is very obvious," Tegmark added. "You just stop this corporate welfare of giving AI companies exemptions that no other companies get."

This report was [originally published](https://www.techbrew.com/stories/2025/12/05/ai-labs-future-of-life-institute-report-card) by [Tech Brew](https://www.techbrew.com/).