The Duke and Duchess of Sussex have joined artificial intelligence pioneers and Nobel laureates in calling for a ban on developing superintelligent AI systems.
Harry and Meghan are among the signatories of a statement calling for “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) is the term for AI systems, yet to be developed, that exceed human levels of intelligence at all cognitive tasks.
The statement calls for the ban to remain in place until there is “broad scientific consensus” on developing ASI “safely and controllably” and there is “strong public buy-in”.
It has also been signed by the AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow “godfather” of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; Susan Rice, a former US national security adviser under Barack Obama; the former Irish president Mary Robinson; and the British author and broadcaster Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.
The statement, targeted at governments, tech firms and lawmakers, was organised by the Future of Life Institute (FLI), a US-based AI safety group that called for a hiatus in developing powerful AI systems in 2023, soon after the emergence of ChatGPT made AI a political and public talking point around the world.
In July, Mark Zuckerberg, the chief executive of the Facebook parent Meta, one of the big AI developers in the US, said development of superintelligence was “now in sight”. However, some experts have said talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to achieving any technical breakthroughs.
Nonetheless, FLI says the prospect of ASI being achieved “in the coming decade” carries a host of threats, ranging from the loss of human jobs and civil liberties to national security risks and even human extinction. Existential fears about AI focus on the potential for a system to evade human control and safety guidelines and take actions contrary to human interests.
FLI released a US national poll showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six in 10 believing superhuman AI should not be made until it is proven safe or controllable. The survey of 2,000 US adults also found that only 5% supported the status quo of fast, unregulated development.
The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical point at which AI matches human levels of intelligence at most cognitive tasks – an explicit goal of their work. Although this is one notch below ASI, some experts warn it could also carry an existential risk by, for instance, being able to improve itself to superintelligent levels, as well as posing a threat to the modern labour market.