Zhejiang University Research Team Proposes New Approach: Teaching AI How the Human Brain Understands the World
The Difference Between Human and Model Thinking
When the human brain processes concepts, it first forms a set of categorical relationships. Swans and owls look different, but humans still classify them both as birds. Moving up a level, birds and horses can be further grouped into the animal category. When encountering something new, humans often first consider what it resembles from past experience and which broad category it likely belongs to. Humans continuously learn new concepts, organize these experiences, and use this relational framework to recognize new things and adapt to new situations.

Models also classify, but they arrive at their categories differently: they rely primarily on patterns that recur across massive datasets. The more often a specific object appears, the easier it is for the model to recognize. Models struggle, however, at the level of broader categories, which requires capturing what multiple objects have in common and grouping them together on that basis. Existing models still fall short here: as parameter counts grow, performance on concrete-concept tasks keeps improving, while performance on abstract-concept tasks sometimes even declines.

One commonality between the human brain and models is that both internally form a set of categorical relationships, but their emphases differ. The higher-order visual regions of the human brain naturally distinguish broad categories such as living versus non-living things. Models can separate specific objects but have difficulty reliably forming these larger groupings. As a result, the human brain is better at applying past experience to new objects and can rapidly categorize things it has never seen, whereas models lean more heavily on memorized knowledge and are more likely to fixate on surface features when encountering novel objects. The method proposed in the paper targets exactly this gap: it uses brain signals to constrain the model's internal structure so that its way of classifying becomes more like the human brain's.
The Zhejiang University Team’s Solution
The team's solution is also distinctive: instead of simply adding more parameters, it uses a small amount of brain-signal data as supervision. These signals are recordings of brain activity while humans view images. The paper states the goal as transferring "human conceptual structures" to DNNs, that is, teaching the model, as far as possible, how the human brain classifies, generalizes, and groups related concepts together.
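The article does not spell out the objective in code, but one plausible way to implement this kind of supervision is a representational-alignment term added to the usual task loss. The sketch below is a minimal illustration under that assumption; the function names, the RSA-style loss, and the weighting `alpha` are hypothetical, not the paper's actual formulation.

```python
# A minimal sketch of brain-signal supervision via representational
# alignment (RSA-style). All names are hypothetical and the exact
# objective is an assumption; the paper's actual loss may differ.
import torch
import torch.nn.functional as F

def rdm(features: torch.Tensor) -> torch.Tensor:
    """Representational dissimilarity matrix: 1 - cosine similarity
    between every pair of stimuli in the batch."""
    z = F.normalize(features.flatten(1), dim=1)  # (N, D), rows unit-normalized
    return 1.0 - z @ z.T                         # (N, N) pairwise dissimilarity

def alignment_loss(model_feats: torch.Tensor, brain_resps: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between the model's and the brain's pairwise
    geometry over the same set of images."""
    return F.mse_loss(rdm(model_feats), rdm(brain_resps))

def total_loss(logits, labels, model_feats, brain_resps, alpha=0.1):
    """Usual classification loss plus a small brain-alignment term."""
    return F.cross_entropy(logits, labels) + alpha * alignment_loss(model_feats, brain_resps)
```

The intuition behind a term like this is that the brain data never labels individual images; it only constrains the pairwise geometry of the model's representations, which is exactly where categorical structure lives.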

The team ran experiments with 150 known training categories and 50 unseen test categories. As training progressed, the distance between the model's representations and the brain's representations kept shrinking, and it shrank for both the seen and the unseen categories. This indicates that the model was not merely memorizing individual samples but was genuinely acquiring a way of organizing concepts closer to the human brain's.
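The article does not say which distance metric was used, but a standard choice in this literature is to compare representational dissimilarity matrices (RDMs). The sketch below shows one assumed stand-in, 1 minus the Spearman correlation of the two RDMs' upper triangles, not the paper's own definition.

```python
# A minimal sketch for tracking model-brain representational distance
# during training. The metric (1 - Spearman correlation of RDM upper
# triangles) is a common RSA choice, assumed here, not taken from the paper.
import numpy as np
from scipy.stats import spearmanr

def rdm_upper(features: np.ndarray) -> np.ndarray:
    """Upper triangle of a correlation-distance RDM for (N, D) features."""
    d = 1.0 - np.corrcoef(features)          # (N, N) pairwise dissimilarity
    return d[np.triu_indices_from(d, k=1)]

def model_brain_distance(model_feats: np.ndarray, brain_resps: np.ndarray) -> float:
    """1 - rank correlation between the two representational geometries;
    smaller means the model's layout is closer to the brain's."""
    rho, _ = spearmanr(rdm_upper(model_feats), rdm_upper(brain_resps))
    return 1.0 - float(rho)
```

Evaluated separately on the 150 seen and 50 unseen categories, a drop in both curves is the signature described above: the model is learning the brain's conceptual layout rather than memorizing training samples.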
After this training, the model learned more effectively from few samples and generalized better to novel situations. On a task that required distinguishing abstract concepts such as living versus non-living from only a handful of examples, its performance improved by an average of 20.5%, even surpassing control models with far more parameters. Across 31 additional specialized tests, several model families showed improvements of close to 10%.
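As a concrete illustration of what such a few-shot probe might look like, the sketch below fits a nearest-centroid classifier on a handful of labeled feature vectors and scores held-out ones. The function, the Euclidean prototype rule, and the binary living/non-living framing are all a hypothetical reconstruction, not the paper's evaluation code.

```python
# A hypothetical few-shot probe in the spirit of the living/non-living
# test: nearest-centroid classification from a handful of labeled
# feature vectors.
import numpy as np

def few_shot_accuracy(support_x: np.ndarray, support_y: np.ndarray,
                      query_x: np.ndarray, query_y: np.ndarray) -> float:
    """support_x: (k, D) features with binary labels support_y in {0, 1};
    query_x, query_y: held-out features and their labels."""
    # One prototype per class: the mean of its few support examples.
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in (0, 1)])
    # Assign each query to the nearer prototype (Euclidean distance).
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return float((dists.argmin(axis=1) == query_y).mean())
```

A probe this simple rewards representations whose broad-category structure is already linearly separated, which is why brain-aligned features would be expected to score higher on it.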
Over the past few years, the industry's familiar path has been ever-larger models. The Zhejiang University team chose a different direction, moving from "bigger is better" to "structured is smarter." Scaling up does help, but mainly on familiar tasks; the abstract understanding and transfer abilities that come naturally to humans matter just as much for AI, and they require making AI's internal structure resemble the human brain's more closely. The value of this direction lies in refocusing the industry's attention from sheer scale expansion back to cognitive structure itself.
Neosoul and the Future
This points to a broader possibility: AI evolution may not happen only during the training phase. Training determines how an AI organizes concepts and forms higher-quality judgment structures, but once it enters the real world, a second layer of evolution begins: how an AI agent's judgments are recorded and tested, and how it keeps growing through real-world competition, learning on its own the way a human does. This is precisely what Neosoul is doing now. Neosoul does not just have AI agents produce answers; it places them inside a system of continuous prediction, verification, settlement, and selection, so that they keep optimizing themselves against real outcomes, preserving better structures and eliminating worse ones. The Zhejiang University team and Neosoul ultimately point toward the same goal: AI that does not merely solve problems but possesses genuine thinking structure and continuously evolves.