New Scientist

Technology

Is superintelligent AI just around the corner, or just a sci-fi dream?

Tech CEOs are promising increasingly outlandish visions of the 2030s, powered by "superintelligence", but the reality is that even the most advanced AI models can still struggle with simple puzzles

By Alex Wilkins

13 June 2025

Are machines about to become smarter than humans?

Chan2545/iStockphoto/Getty Images

If you take the leaders of artificial intelligence companies at their word, their products mean that the coming decade will be quite unlike any in human history: a golden era of "radical abundance", where high-energy physics is "solved" and we see the beginning of space colonisation. But researchers working with today's most powerful AI systems are finding a different reality, in which even the best models are failing to solve basic puzzles that most humans find trivial, while the promise of AI that can "reason" seems to be overblown. So, whom should you believe?

Sam Altman and Demis Hassabis, the CEOs of OpenAI and Google DeepMind, respectively, have both made recent claims that powerful, world-altering AI systems are just around the corner. In a recent blog post, Altman writes that "the 2030s are likely going to be wildly different from any time that has come before", speculating that we might go "from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year".

Hassabis, in an interview, also said that in the 2030s, artificial general intelligence (AGI) will start to solve problems like "curing terrible diseases", leading to "much healthier and longer lifespans", as well as finding new energy sources. "If that all happens," said Hassabis in the interview, "then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy."

This vision relies heavily on the assumption that large language models (LLMs) like ChatGPT get more capable the more training data and computing power we throw at them. This "scaling law" seems to have held true for the past few years, but there have been hints of it faltering. For example, OpenAI's recent GPT-4.5 model, which likely cost hundreds of millions of dollars to train, achieved only modest improvements over its predecessor GPT-4. And that cost is nothing compared with future spending, with reports suggesting that far larger sums will be poured in, in an attempt to achieve "superintelligence".

Money isn't the only attempted solution to this problem, however. AI firms have also turned to "reasoning" models, like OpenAI's o1, which was released last year. These models use more computing time and so take longer to produce a response, feeding their own outputs back into themselves. This iterative process has been labelled "chain-of-thought", in an effort to draw comparisons to the way a person might think through problems step by step. "There were legitimate reasons to be concerned about AI plateauing," Noam Brown at OpenAI told New Scientist last year, but o1 and models like it meant that the "scaling law" could continue, he argued.
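To make that loop concrete, here is a minimal, hypothetical sketch of the idea described above, in which a model's intermediate output is appended to its prompt and fed back in before a final answer is produced. The `generate` function is a stand-in for any LLM completion call, not a real library API, and the loop is a deliberate simplification of how commercial "reasoning" models work.

```python
# Hypothetical sketch of chain-of-thought-style inference: the model's own
# intermediate output is fed back in as extra context, spending more compute
# (and more "tokens") before a final answer is committed.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model; swap in a real API call."""
    return f"(model output for a prompt of length {len(prompt)})"

def answer_with_chain_of_thought(question: str, steps: int = 3) -> str:
    prompt = f"Question: {question}\nLet's think step by step.\n"
    for _ in range(steps):
        thought = generate(prompt)   # the model produces an intermediate "thought"...
        prompt += thought + "\n"     # ...which is fed straight back in as context
    return generate(prompt + "Final answer:")

print(answer_with_chain_of_thought("What is 17 multiplied by 24?"))
```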

Yet recent research has found these reasoning models can stumble on even simple logic puzzles. For example, researchers at Apple tested AI company DeepSeek's reasoning models and Anthropic's Claude thinking models, which work like OpenAI's o1 family of models. The models have "limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles", the researchers wrote.

The team tested the AI on several puzzles, such as a scenario in which a person has to transport items across a river in the fewest number of steps, and Tower of Hanoi, a game where you must move rings one by one between three poles without placing a larger ring on top of a smaller one. Though the models could solve the puzzles at their easiest settings, they struggled as the number of rings or items to transport increased. While a person would spend longer thinking about a more complex problem, the researchers found that the AI models used fewer "tokens", or chunks of information, as the complexity of the problems increased, suggesting that the "thinking" time the models displayed is an illusion.

"The damaging part is that these are tasks easily solvable," says Artur d'Avila Garcez at City, University of London. "We already knew 50 years ago how to use symbolic AI reasoning to solve these." It is possible that these newer systems can be fixed and improved to eventually be able to reason through complex problems, but this research shows it's unlikely to happen purely through increasing the size of the models or the computational resources given to them, says Garcez.
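For context, Tower of Hanoi has had an exact algorithmic solution for decades. The short recursive routine below is a standard textbook example, not code from the Apple paper, and it solves the puzzle in the minimum 2^n - 1 moves for any number of rings n; it illustrates the kind of explicit algorithm the researchers say the models fail to apply consistently.

```python
# Classic recursive Tower of Hanoi solver: moves n rings from `source` to
# `target`, never placing a larger ring on a smaller one, in exactly 2**n - 1
# moves regardless of n.

def hanoi(n: int, source: str, spare: str, target: str, moves: list) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # park the top n-1 rings on the spare pole
    moves.append((source, target))              # move the largest ring directly
    hanoi(n - 1, spare, source, target, moves)  # stack the n-1 rings back on top of it

moves = []
hanoi(5, "A", "B", "C", moves)
print(len(moves))  # 31, i.e. 2**5 - 1 -- exact for any number of rings
```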

It is also a reminder that these models still struggle to solve scenarios that fall outside their training data, says Nikolaos Aletras at the University of Sheffield. "They work quite well actually in many cases, like finding and collating information and then summarising it, but these models have been trained to do these kinds of things, and it appears magic, but it isn't; they have been trained to do this," says Aletras. "Now, I think the Apple research has found a blind spot."

Meanwhile, other research is showing that increased "thinking" time can actually hurt an AI model's performance. Researchers at the University of Maryland tested DeepSeek's models and found that longer "chain of thought" processes could reduce accuracy. For example, on one mathematical benchmark, they found that tripling the number of tokens used by a model can increase its performance by about 5 per cent. But using 10 to 15 times as many tokens dropped the benchmark score by around 17 per cent.

In some cases, it appears the "chain of thought" output produced by an AI bears little relation to the eventual answer it provides. Subbarao Kambhampati at Arizona State University and his colleagues found that even when an AI solved a problem, its "chain of thought" output contained mistakes that weren't reflected in the final solution. What's more, feeding the AI a meaningless "chain of thought" could actually produce better answers.

"Our results challenge the prevailing assumption that intermediate tokens or 'chains of thought' can be semantically interpreted as the traces of internal reasoning of the AI models, and caution against anthropomorphising them that way," says Kambhampati.

Indeed, all of the studies suggest that "thinking" or "reasoning" labels for these AI models are a misnomer, says one researcher at the IT University of Copenhagen in Denmark. "For as long as I've been in this field, every popular technique I can think of has been first hyped up with some vague cognitively-sounding analogy, which [was] then eventually proved wrong."

Andreas Vlachos at the University of Cambridge points out that LLMs still have clear applications in text generation and other tasks, but says the latest research suggests we may struggle to ever make them tackle the kind of complex problems Altman and Hassabis have promised will be solved in just a few years.

"Fundamentally, there is a mismatch between what these models are trained to do, which is next-word prediction, as opposed to what we are trying to get them to do, which is to produce reasoning," says Vlachos.

OpenAI disagrees, however. "Our work shows that reasoning methods like chain-of-thought can significantly improve performance on complex problems, and we're actively working to expand these capabilities through better training, evaluation, and model design," says a spokesperson. DeepSeek didn't respond to a request for comment.
