The world may not have noticed, but we’ve crossed one of the most anticipated milestones in computing history. When machines began conversing indistinguishably from humans, turning the long-imagined Turing test into reality, daily life barely flinched. Yet, according to Sam Altman, CEO of OpenAI, the shift marks the start of something much deeper: a world where artificial intelligence doesn’t just talk like humans but thinks and discovers like them.

“Most of the world still thinks about AI as chatbots and better search,” Altman noted, “but today, we have systems that can outperform the smartest humans at some of our most challenging intellectual competitions.” 

From chatbots to discovery engines 

Altman said that AI’s capabilities have evolved from completing tasks that take humans seconds to handling challenges that take hours, and soon days or weeks. The exponential drop in the cost of intelligence, estimated at 40x per year, is accelerating that transition.

By 2026, OpenAI expects systems capable of making small scientific discoveries, and by 2028, those capable of major breakthroughs. “We do not know how to think about systems that can do tasks that would take a person centuries,” Altman admitted, hinting at the profound cognitive leap ahead. 

Society’s co-evolution with AI 

Despite rapid progress, Altman believes daily life will remain relatively stable. “Society finds ways to co-evolve with the technology,” he said. “The way we live has a lot of inertia even with much better tools.” 

He envisions AI as a force for abundance — transforming healthcare, climate modeling, and education — while also redefining work and the economy. “It is even possible that the fundamental socioeconomic contract will have to change,” he said, but stressed that in a world of “widely distributed abundance,” lives can become far more fulfilling. 

Safety, oversight and the need for ‘AI building codes’

OpenAI’s approach, Altman emphasised, is rooted in safety as empowerment. “Although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic,” he said. The company is calling for shared safety standards among frontier labs, akin to building codes or fire safety norms. 

Altman proposed several principles for navigating AI’s future responsibly: 

  • Shared standards and transparency among leading AI labs. 
  • Public oversight proportional to AI’s capabilities. 
  • An AI resilience ecosystem, comparable to cybersecurity frameworks for the internet. 
  • Continuous impact measurement by both labs and governments. 
  • Individual empowerment, ensuring that people can use AI “on their own terms.” 

Building a resilient AI future 

Altman argued that as AI approaches superintelligence, collaboration between nations and frontier developers will be critical. He cited the need for coordination to prevent bioterrorism, manage self-improving systems, and ensure accountability to public institutions. 

“The high-order bit should be accountability,” Altman said. “But how we get there might have to differ from the past.” 

Ultimately, Altman sees AI as a foundational utility — on par with electricity or clean water — one that must be broadly accessible and aligned with human goals. “The north star,” he said, “should be helping empower people to achieve their goals.”