
AI Learns Like Us, but With the Same Flaws

It takes only a few board games before you can learn the rules of a new one almost instantly. That ability to shift from slow practice to quick inference is a defining feature of human learning and, according to a new study, it is also emerging in artificial intelligence.

Researchers at Brown University report that the way humans combine working memory and long-term memory has a striking parallel in how AI systems blend two modes of learning. The study, published in the Proceedings of the National Academy of Sciences, shows that the trade-offs and synergies between these processes line up in surprising ways. The finding not only reshapes how scientists think about the brain but also points toward building more intuitive AI.

Jake Russin, a postdoctoral research associate in computer science at Brown, led the work while holding a joint appointment in the labs of cognitive scientist Michael Frank and computer scientist Ellie Pavlick. His focus was the intersection of computational neuroscience and machine learning, and he suspected the line between human and machine cognition was thinner than it seemed.

“These results help explain why a human looks like a rule-based learner in some circumstances and an incremental learner in others,” Russin said.

In everyday life, these two modes show up everywhere. You can memorize the rules of tic-tac-toe in a single sitting, but learning to play a piano piece takes weeks of repetition. Psychologists have long known that humans switch between fast “in-context” learning and slower incremental learning. What had remained hazy was how those two streams integrate.

To test his theory, Russin turned to meta-learning, a method in which AI systems are trained not just on tasks, but on how to learn tasks. One experiment mirrored human studies: the AI was trained on lists of colors and animals, then asked to identify novel pairings such as “green giraffe.” At first, it stumbled. But after being challenged with 12,000 related tasks, the system began to infer rules quickly, showing that flexible in-context learning arose only after sufficient incremental learning.
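The paper's exact training setup isn't detailed here, but the episodic structure of such an experiment can be sketched. The toy code below (the vocabulary, the `make_episode` helper, and the held-out "green giraffe" pairing as the compositional test case are illustrative assumptions, not the study's actual data) shows how meta-learning episodes are typically constructed: each episode gives the model a small support set of color-animal pairings and then queries it on a combination it has never seen, so that succeeding requires inferring the compositional rule in context rather than memorizing pairs.

```python
import itertools
import random

# Illustrative vocabularies; the study's actual stimuli may differ.
COLORS = ["red", "green", "blue", "yellow"]
ANIMALS = ["cat", "dog", "giraffe", "zebra"]

def make_episode(rng, holdout=("green", "giraffe"), n_support=6):
    """Build one meta-learning episode.

    Returns a support set of observed color-animal pairings and a
    query pair. The held-out pairing is excluded from every support
    set, so answering the query correctly requires composing the
    color rule and the animal rule in context, not recall.
    """
    pairs = [p for p in itertools.product(COLORS, ANIMALS) if p != holdout]
    support = rng.sample(pairs, n_support)
    query = holdout
    return support, query

rng = random.Random(0)
# A real run would generate thousands of such episodes (the article
# mentions 12,000 related tasks) and train a model across them.
episodes = [make_episode(rng) for _ in range(3)]
```

In a full meta-learning pipeline, a model would be trained across many such episodes; the claim in the study is that the ability to solve the novel query in context emerges only after extensive incremental training on related episodes.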

Pavlick offered a plainspoken analogy:

“At the first board game, it takes you a while to figure out how to play,” she said. “By the time you learn your hundredth board game, you can pick up the rules of play quickly, even if you’ve never seen that particular game before.”

The team also discovered a trade-off familiar from human memory research: harder tasks, once solved, are more likely to stick. Frank noted that this mirrors a paradox in human cognition. Errors cue long-term memory updates, while easy in-context solutions boost flexibility without anchoring knowledge for the long haul.

For Frank, who specializes in biologically inspired models, the work was a way to fold disparate theories together. “Our results hold reliably across multiple tasks and bring together disparate aspects of human learning that neuroscientists hadn’t grouped together until now,” he said.

Beyond theory, the implications are practical. The researchers suggest that acknowledging the dual nature of learning is essential for building AI that is both capable and trustworthy, particularly in sensitive areas like mental health. Pavlick emphasized that understanding the overlap between human and AI cognition is key to designing systems that people can trust.

The study, supported by the Office of Naval Research and the National Institute of General Medical Sciences, underscores how much remains to be mapped in the shared territory of brains and machines. The resemblance may not mean AI is “thinking” like us—but it does mean we might learn something about ourselves by watching it learn.

Humans rely on two types of learning: quick “in-context” learning, like picking up the rules of a simple game, and slow incremental learning, like practicing a musical instrument. The Brown University team showed that AI trained through meta-learning develops a similar pattern. Flexible learning only emerges after extensive practice, and trade-offs appear between retention and flexibility. These parallels suggest both opportunities and limits for how AI might mimic or diverge from human thought.

Journal: Proceedings of the National Academy of Sciences
DOI: 10.1073/pnas.2510270122



