{"id":18315,"date":"2026-04-27T12:12:13","date_gmt":"2026-04-27T12:12:13","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/18315\/"},"modified":"2026-04-27T12:12:13","modified_gmt":"2026-04-27T12:12:13","slug":"openai-just-changed-its-principals-heres-whats-changing","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/18315\/","title":{"rendered":"OpenAI just changed its principals. Here\u2019s what\u2019s changing"},"content":{"rendered":"<p class=\"mb-4 text-lg md:leading-8 break-words\">OpenAI is less concerned with artificial general intelligence (AGI) than it was almost a decade ago and is instead prioritising a broader rollout of its technology, according to a new mission statement for the company.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">On Sunday, OpenAI published an update to the company\u2019s \u201cOur Principles\u201d document, which sets out how the company will run its technology in the future.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">There are some key differences between this new set of principles and what the company prioritised almost a decade ago, when it was a nascent non-profit artificial intelligence (AI) research organisation.<\/p>\n<p>De-emphasis on artificial general intelligence<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">In<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/web.archive.org\/web\/20230714043611\/https:\/\/openai.com\/charter\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"elm:link;elmt:article_link;slk:2018;itc:0;sec:content-canvas\" class=\"link \"> 2018<\/a>, OpenAI was staunchly focused on artificial general superintelligence (AGI): the idea that their technology would surpass human intelligence, but now, it is just part of the company\u2019s wider AI rollout.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Both versions of the company\u2019s principles say that OpenAI\u2019s mission is to guarantee this technology \u201cbenefits 
all of humanity,\u201d but the 2018 version explicitly mentions building it safely and beneficially.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">\u201cOur primary fiduciary duty is to humanity,\u201d the document reads. \u201cWe anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/openai.com\/index\/our-principles\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"elm:link;elmt:article_link;slk:2026 version;itc:0;sec:content-canvas\" class=\"link \"> 2026 version<\/a>, however, says it needs to continue to build safe systems, but that society needs to contend with \u201ceach successive level of AI capability, understand it, integrate it, and figure out the best path forward together.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The way forward, as CEO and cofounder Sam Altman sees it in 2026, is to democratise AI at all levels by giving everyone access to it and resisting the idea that the technology could \u201cconsolidate power in the hands of the few\u201d.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The 2026 principles document also says that OpenAI expects to work with governments, international agencies and other AGI initiatives to \u201csufficiently solve serious alignment, safety or societal problems before proceeding further\u201d with its work.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Examples could include using ChatGPT to defend against models that could create new pathogens, or integrating cyber-resilient models into critical infrastructure.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Altman gave some clues about 
OpenAI\u2019s de-emphasis of AGI on his personal blog earlier this month.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">AGI has a \u201cring of power\u201d to it that \u201cmakes people do crazy things,\u201d Altman wrote. To counter this, he said the only solution is to \u201corient towards sharing the technology with people broadly, and for no one to have the ring.\u201d<\/p>\n<p>OpenAI no longer pledges to step aside for a safety-conscious competitor<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">In 2018, OpenAI said it was concerned that AGI development was becoming \u201ca competitive race without time for adequate safety precautions.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">It committed to stop competing with, and start assisting, any \u201cvalue-aligned, safety-conscious project\u201d that came close to building AGI before it did.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">\u201cWe will work out specifics\u2026 but a typical triggering condition might be a \u2018better-than-even chance of success in the next two years,\u2019\u201d the 2018 document reads.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">In 2026, there is no mention of stepping aside to help a greater cause. 
Instead, the document acknowledges that OpenAI \u201cis a much larger force in the world than it was a few years ago,\u201d and pledges to be transparent about when and how its operating principles could change.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The company is in fierce competition with several rivals, including Anthropic.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">In February, Anthropic<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/www.euronews.com\/next\/2026\/03\/24\/anthropic-v-us-department-of-war-ai-company-challenges-government-in-court\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"elm:link;elmt:article_link;slk:refused;itc:0;sec:content-canvas\" class=\"link \"> refused<\/a> to give US President Donald Trump\u2019s administration unfettered access to its AI for the military, which led the administration to label the company a supply chain risk and order federal agencies to stop using Anthropic&#8217;s AI assistant Claude in March.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">On February 28, OpenAI then stepped in to fill the void, signing a deal with the Department of War, which<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/au.news.yahoo.com\/cancel-chatgpt-ai-boycott-surges-105144733.html\" data-ylk=\"elm:link;elmt:article_link;slk:saw some users;itc:0;sec:content-canvas;outcm:mb_qualified_link;_E:mb_qualified_link;ct:story;\" class=\"link  yahoo-link\" rel=\"nofollow noopener\" target=\"_blank\"> saw some users<\/a> boycotting ChatGPT in favour of Claude.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Anthropic was also<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/www.euronews.com\/business\/2026\/04\/18\/the-rapid-ascent-of-anthropic-inside-the-strategy-behind-an-800-billion-valuation\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"elm:link;elmt:article_link;slk:valued;itc:0;sec:content-canvas\" 
class=\"link \"> valued<\/a> this month at $800 billion (\u20ac696 billion), on par with OpenAI.<\/p>\n<p>Vague society-wide callouts<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">In the 2026 document, OpenAI asks for several societal changes so the world can better adapt to AI.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">\u201cWe envision a world with widespread flourishing at a level that is currently difficult to imagine,\u201d the document reads. \u201cA lot of the things we\u2019ve only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Related<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">This future is not guaranteed, because AI can either be \u201cheld by a small handful of companies using and controlling superintelligence,\u201d or \u201cheld in a decentralised way by people,\u201d the document reads.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The principles document also reiterates some of OpenAI\u2019s recent<a data-yga=\"{\" ylinkelement=\"\" href=\"https:\/\/www.euronews.com\/next\/2026\/04\/07\/robot-taxes-four-day-work-week-inside-openais-plan-for-an-ai-driven-economy\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"elm:link;elmt:article_link;slk:policy suggestions;itc:0;sec:content-canvas\" class=\"link \"> policy suggestions<\/a>, such as asking governments to consider \u201cnew economic models\u201d and to develop new technology that will drive down the costs of AI infrastructure.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">\u201cA lot of the things that we do that look weird\u2014buying huge amounts of compute while our revenue is relatively small\u2026 are driven by our fundamental belief in a future of universal prosperity,\u201d the document reads.<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI is less concerned with 
artificial general intelligence (AGI) than it was almost a decade ago and is&hellip;\n","protected":false},"author":2,"featured_media":18316,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,3013,25,4167,157,370],"class_list":{"0":"post-18315","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-artificial-general-intelligence","10":"tag-artificial-intelligence","11":"tag-document","12":"tag-openai","13":"tag-sam-altman"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/18315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=18315"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/18315\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/18316"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=18315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=18315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=18315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}