New evidence suggests AI can learn by thinking like humans

The findings indicate that artificial minds can learn without the need for new external data, challenging what we previously believed about thinking.

The idea that machines can think has been the subject of philosophical and scientific debate for decades. To what extent can an artificial intelligence (AI) replicate human cognitive capabilities? With recent advances in AI models, particularly those trained on vast amounts of textual data, science is closer than ever to an affirmative answer. A paper titled "Learning by thinking in natural and artificial minds," published today in Trends in Cognitive Sciences, makes the case emphatically: artificial intelligence not only processes information, but can also learn just like humans through thinking. And yes, in a sense, AI can think.

Learning without observing: the revolution of AI thinking

Learning, as we understand it in humans, usually involves observing the outside world. Throughout our lives, we acquire knowledge through interaction with our environment, using our senses to collect data that we then process and store.

However, learning is not limited to this process of observation. Science has shown that both humans and AI can learn without receiving new information from the outside world. This phenomenon, known as "learning by thinking" (LbT), has opened a whole new door in the study of minds, both natural and artificial.

This type of learning by thinking is particularly interesting in the context of artificial intelligence . Advanced models, such as the large language models (LLMs) that power virtual assistants like GPT-4, not only generate responses based on stored data, but are also capable of correcting and improving themselves without receiving additional external data. 

An example mentioned in the article shows how GPT-4 can rectify an error in a mathematical calculation simply by explaining the process to itself step by step. This type of learning reflects a phenomenon parallel to what humans experience when explaining concepts to themselves or performing mental simulations.

The enigma of learning by thinking: is it really possible?

The concept of LbT poses an intriguing dilemma: how is it possible for a mind, whether human or artificial, to generate new knowledge without receiving external input? This conundrum, known as the "learning-by-thinking paradox," finds an answer in the way both humans and AIs reorganize and reinterpret elements already present in their "mind" or database.

In the case of humans, scientists have shown that processes such as explanation, mental simulation , comparison, and analogical reasoning are key to learning without external observation. These processes allow people to generate new cognitive representations and arrive at novel conclusions. Remarkably, modern AIs have proven capable of performing similar processes.

In AI models like LLMs, step-by-step thinking has proven especially effective. When a model like GPT-4 is asked to perform a complex task, breaking down the problem into intermediate steps (known as chain-of-thought prompting) significantly increases its ability to arrive at correct solutions. This type of reasoning is remarkably similar to the human process of breaking down a large problem into smaller, more manageable parts to arrive at an optimal solution. What AI is demonstrating here is not just algorithmic computation, but a way of thinking that leads to learning.
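In practice, chain-of-thought prompting amounts to little more than changing how the question is posed, so that the model is invited to show intermediate steps before answering. The helper below is a rough sketch of that idea; the exact phrasing and function names are illustrative assumptions, not code from the paper:

```python
def direct_prompt(question: str) -> str:
    """Plain prompt: ask for the answer outright."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought prompt: nudge the model to reason step by step
    before committing to a final answer."""
    return f"Q: {question}\nA: Let's think step by step."

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")
print(direct_prompt(question))
print(chain_of_thought_prompt(question))
```

The only difference between the two prompts is the trailing cue, yet studies on models like GPT-4 report that this small change measurably improves accuracy on multi-step problems.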

The role of simulation and analogy

One of the most interesting examples of learning by thinking, both in humans and in AI, is the use of mental simulations . Simulation is something we do constantly, even if we are not always aware of it. A classic example in humans is imagining how three connected gears would move when one of them is activated. In this case, the brain performs an internal simulation without needing to physically see the gears.
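The gear example can even be written down as an explicit simulation: in a chain of meshed gears, each gear turns opposite to its neighbor, so the direction of any gear follows deterministically from the first. A minimal sketch of that "mental model":

```python
def gear_direction(first_direction: str, n: int) -> str:
    """Direction of the n-th gear (1-indexed) in a chain of meshed gears,
    given the direction of the first. Adjacent meshed gears counter-rotate."""
    opposite = {"clockwise": "counterclockwise",
                "counterclockwise": "clockwise"}
    direction = first_direction
    for _ in range(n - 1):  # each mesh between neighbors flips the rotation
        direction = opposite[direction]
    return direction

# If gear 1 turns clockwise, gear 3 turns clockwise too (two flips).
print(gear_direction("clockwise", 3))  # clockwise
```

The brain presumably does not run a loop like this, but the point stands: the answer is derived internally, without ever observing real gears.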

Modern AIs can also perform internal simulations. In the field of deep reinforcement learning , systems use simulations to predict future outcomes and learn from those simulated processes. This type of simulation is a direct reflection of how humans can learn by imagining hypothetical situations and evaluating possible outcomes without directly experiencing those events. Thus, in both natural and artificial minds, simulation is a powerful tool for generating new knowledge without relying on external observations.
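A toy sketch of simulation-based decision making (an illustrative assumption, not the systems described in the paper): an agent with an internal model of a one-dimensional world "imagines" the outcome of each action before acting in the real environment:

```python
def simulate(position: int, action: int) -> int:
    """Internal model of the world: moving left (-1) or right (+1) on a line."""
    return position + action

def plan(position: int, goal: int, actions=(-1, +1)) -> int:
    """Pick the action whose *simulated* outcome lands closest to the goal,
    without taking any real action in the world."""
    return min(actions, key=lambda a: abs(simulate(position, a) - goal))

print(plan(position=2, goal=5))  # +1: the imagined rightward step gets closer
```

Real model-based reinforcement learning systems roll out far longer simulated trajectories through learned world models, but the principle is the same: knowledge is extracted from imagined outcomes rather than new observations.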

Another crucial way to learn by thinking is through analogical reasoning. Charles Darwin , for example, used the analogy between natural selection and selective breeding to develop his theory of evolution. Similarly, AIs can employ analogical reasoning to solve complex problems. 

Recent studies in AI have shown that, when given a problem to solve, an AI can generate several analogous examples to arrive at a solution, a technique known as "analogical prompting." This process not only lets the AI solve the problem, but lets it do so much as a human would, by comparing similar situations.
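As with chain-of-thought, analogical prompting boils down to how the request is phrased: the model is asked to recall and solve analogous problems before tackling the target one. A sketch of such a prompt builder (the wording is an assumption, not the studies' exact template):

```python
def analogical_prompt(problem: str, n_examples: int = 3) -> str:
    """Analogical prompting (sketch): ask the model to first recall
    analogous problems and their solutions, then solve the target problem."""
    return (
        f"Problem: {problem}\n"
        f"Recall {n_examples} relevant, analogous problems. "
        "For each, describe it and explain its solution. "
        "Finally, solve the original problem."
    )

print(analogical_prompt(
    "Find the area of a triangle with vertices (0,0), (4,0), (0,3)."))
```

The self-generated analogies play the role of the worked examples a human tutor might supply, except that here the model produces them for itself.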

Reasoning: the path to new conclusions

Reasoning is another key tool for learning by thinking. In humans, reasoning can lead to conclusions that were not obvious at first glance. This happens because reasoning requires connecting previously acquired pieces of information to generate new conclusions. For example, by recognizing that today is Wednesday and remembering that on Wednesdays one should not park in a certain area of campus, a person may infer that one should not park there today, a conclusion that was not explicit from the beginning.

The same is true in AI. By employing step-by-step reasoning processes, such as those used in models like GPT-4, AI can reach new conclusions that were not explicitly present in the initial data. In this way, both humans and AI use reasoning as a way to extract new information from previous representations.
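The parking example is a textbook case of forward chaining: applying an if-then rule to known facts yields a fact that was never stated explicitly. A toy inference engine makes the mechanism concrete (an illustrative sketch, not the paper's formalism):

```python
def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly apply if-then rules (premises -> conclusion) to the known
    facts until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)  # a fact never observed directly
                changed = True
    return derived

facts = {"today_is_wednesday", "no_parking_on_wednesdays"}
rules = [({"today_is_wednesday", "no_parking_on_wednesdays"},
          "no_parking_today")]
print("no_parking_today" in forward_chain(facts, rules))  # True
```

No new input arrives from the outside world; the conclusion "no_parking_today" is manufactured entirely by recombining what was already stored.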

How far can AI thinking go?

The big question that arises is to what extent we can consider AI to actually "think." While current AI models do not think in the human sense of the word (they lack consciousness and subjective experience), the capabilities they demonstrate in performing thought-based learning processes are impressive. What once seemed exclusive to the human domain is now replicated by machines.

Models like GPT-4 don't just process information; they can reorganize it, learn from it, and come to new conclusions without outside intervention. In this sense, one could argue that, in its own way, AI is beginning to think.

The future of learning in AI

As artificial intelligence continues to evolve, LbT capabilities in these systems could make a significant difference in how they interact with the world. The ability to learn without relying solely on external data will allow them to be more efficient and adaptable in changing environments. This ability to "think" will allow them not only to perform tasks more accurately, but also to devise solutions that we humans have not anticipated.

In the end, the question of whether machines can think has already been answered. They may not think like us, but, on their own terms, AIs have begun to demonstrate that thinking is not just the privilege of biological minds.