OpenAI is setting its sights on a new frontier: superintelligence. In a recent blog post, CEO Sam Altman outlined the company’s plans to prioritize the development of artificial intelligence systems that surpass human capabilities, marking a shift beyond the company’s long-stated goal of artificial general intelligence.
Altman described superintelligence as a transformative technology capable of achieving feats far beyond the capabilities of current AI. “This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again,” he wrote. He pointed to accelerating scientific discovery as a prime example of how superintelligence could benefit society.
From AGI to superintelligence
The decision to focus on superintelligence reflects OpenAI’s confidence in its ability to build artificial general intelligence (AGI)—AI systems that match human-level capabilities. Superintelligence, by contrast, represents a further leap: systems that exceed human abilities outright.
While the concept is promising, it comes with significant risks. OpenAI has been vocal about the challenges of aligning superintelligent systems with human values. In 2023, the company formed a specialized team to address these risks, dedicating 20% of its computing power to training a “human-level automated alignment researcher.” By mid-2024, however, the team had been disbanded amid concerns that safety efforts were being overshadowed by product development.
Despite these setbacks, Altman emphasized OpenAI’s commitment to safety. “We believe the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology,” he wrote.
Debates on the timeline to superintelligence
The timeline for achieving superintelligence remains a topic of debate. OpenAI previously estimated it could happen within a decade, but Altman recently suggested it might be just a few thousand days away. Not everyone agrees.
Brent Smolinski, IBM’s VP and global head of Technology and Data Strategy, dismissed these projections as exaggerated. “I don’t think we’re even in the right zip code for getting to superintelligence,” he stated, citing AI’s current reliance on massive datasets, its limited scope, and the absence of consciousness or self-awareness.
Smolinski suggested that quantum computing might be the key to unlocking true superintelligence; IBM predicts the technology could begin addressing real-world problems before 2030.
In the near term, AI agents—semi-autonomous generative AI systems—are expected to become increasingly prevalent. These agents can interact with applications and make decisions in unstructured environments, with use cases ranging from software development to customer service.
Altman predicted that 2025 could see the first widespread deployment of AI agents in the workforce, fundamentally altering business operations. Research from Gartner supports this, forecasting that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024.
As OpenAI embarks on this ambitious journey, the company remains focused on balancing innovation with caution. Altman expressed optimism about the transformative potential of superintelligence but underscored the importance of acting with care to ensure these advancements benefit humanity.
“We’re pretty confident that in the next few years, everyone will see what we see—that the need to act with great care while maximizing broad benefit and empowerment is so important,” Altman concluded.
While the path to superintelligence is fraught with uncertainty, OpenAI’s bold vision could redefine the future of technology and its role in society.