Recommendation Algorithms Change How You Learn, Not Just What You See

Recommendation algorithms don’t just show you more of what you like. They actually change what you learn and how confident you feel about it, according to a new study in the Journal of Experimental Psychology: General.

Researchers at Ohio State University tested this with 346 people learning to categorize made-up aliens: crystal-like creatures with six hidden features. Some participants explored freely, choosing both which features to inspect and which alien to examine next. Others still picked which features to check, but an algorithm decided which alien they'd see next, based on what they'd clicked before.

The algorithm created filter bubbles fast. Once it noticed someone checking certain features, it showed them more aliens with those same features. People ended up focusing on just a few dimensions and ignoring the rest. Their knowledge got narrow.
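
To make that feedback loop concrete, here is a minimal sketch of how it can arise. Everything below is illustrative: the feature names, the frequency-weighted scoring rule, and the `recommend_next` and `inspect` helpers are assumptions for the sake of the example, not the algorithm the researchers actually used.

```python
import random
from collections import Counter

# Hypothetical sketch of a filter-bubble feedback loop.
# Feature names, scoring rule, and helpers are assumptions
# for illustration, not the study's actual algorithm.

FEATURES = ["f1", "f2", "f3", "f4", "f5", "f6"]  # six hidden features

def make_alien():
    """Generate a random alien as a binary vector over the six features."""
    return {f: random.randint(0, 1) for f in FEATURES}

def recommend_next(counts, candidates):
    """Serve the candidate alien scoring highest on the features the
    learner has inspected most often -- reinforcing existing attention."""
    return max(candidates, key=lambda a: sum(counts[f] * a[f] for f in FEATURES))

def inspect(alien, counts):
    """Model selective attention: among the alien's present features,
    the learner re-checks the one clicked most, ties broken randomly."""
    present = [f for f in FEATURES if alien[f]] or FEATURES
    top = max(counts[f] for f in present)
    return random.choice([f for f in present if counts[f] == top])

counts = Counter()
for trial in range(100):
    alien = recommend_next(counts, [make_alien() for _ in range(10)])
    counts[inspect(alien, counts)] += 1

# Attention typically collapses onto one or two features: the recommender
# amplifies whichever features happened to get clicked early.
print(counts.most_common())
```

Run this a few times and the dominant feature changes from run to run. The bubble is path-dependent: it is seeded by early clicks, not by anything about the aliens themselves.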

Nothing to do with politics

This wasn’t about ideology or pre-existing beliefs. Participants knew absolutely nothing about these aliens going in. The distortion came purely from interacting with the algorithm.

“But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” said Giwon Bahg, who led the study.

After training, participants had to classify new aliens they’d never seen. The people who learned under algorithmic guidance got a lot of answers wrong. But here’s the thing: they felt confident about those wrong answers.

When they saw aliens from categories they’d never encountered during training, they should have been uncertain. Instead, they often picked a familiar category and rated their confidence high. The confusion matrices showed systematic, coherent misclassifications, like they’d built neat but completely inaccurate theories about how the alien world worked.
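
For readers who want to see what "systematic, coherent misclassifications" look like in a confusion matrix, here is a toy illustration. The category labels and all the numbers are invented for the example; they are not the study's data.

```python
import numpy as np

# Toy illustration only -- invented numbers, not the study's data.
# Rows: true category of the test alien; columns: the learner's response.
# A learner trained in a filter bubble never saw categories C and D,
# yet funnels them coherently into familiar ones rather than guessing.
labels = ["A", "B", "C", "D"]
confusion = np.array([
    [18,  2,  0,  0],   # true A: mostly correct
    [ 3, 17,  0,  0],   # true B: mostly correct
    [15,  5,  0,  0],   # true C (never trained): lumped into A
    [ 4, 16,  0,  0],   # true D (never trained): lumped into B
])

# Random guessing would spread each unseen row evenly across columns.
# Instead, each unseen category maps almost entirely onto one familiar
# category -- the signature of a neat but wrong internal theory.
row_share = confusion / confusion.sum(axis=1, keepdims=True)
for lab, row in zip(labels, row_share):
    print(lab, np.round(row, 2))
```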

The overconfidence problem

“People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they have never experienced,” said study co-author Brandon Turner.

That’s the real danger. The narrowness feels like expertise. The gaps don’t register as gaps. You never realize you’re missing the full picture.

The study used fictional aliens specifically to strip away all the noise: no prior beliefs, no values, no personal history. Even in this artificial setup, participants fell into selective attention and overconfident generalization. The algorithm didn't just reflect their preferences. It helped create them.

In the real world, where everything carries social meaning, the stakes are obviously higher. Personalized feeds are built to maximize engagement, not to diversify what you see. And this study shows that systems optimized for keeping you scrolling can reshape how you understand everything else.

The researchers raise one question that’s hard to shake: what happens when kids, still building basic mental models about the world, learn primarily through systems that feed them more of whatever they just consumed?

The study used college students getting course credit, which is standard for this kind of research but also means the results might look different with younger or older participants. The team is planning follow-up work to test whether people can learn to recognize when an algorithm is shaping their view and whether that awareness helps.

For now, the takeaway is straightforward. If you're learning something new mostly through algorithmic feeds (YouTube tutorials, TikTok explainers, algorithm-sorted search results), you're not just getting information. You're getting a version of reality that's been filtered through what you've already looked at. And you'll probably feel more sure about what you know than you should.

Journal: Journal of Experimental Psychology: General
DOI: https://doi.org/10.1037/xge0001763

