Why do some science news stories catch our eye, even if they use exaggerated, irrelevant or inaccurate information?
Readers favor a certain explanatory style when learning about complicated scientific topics. This bias is the subject of a new paper in the journal Cognition from researchers at the University of Pennsylvania. Deena Weisberg, a senior fellow in the psychology department at the School of Arts & Sciences, philosophy graduate student Jordan Taylor and former postdoc Emily Hopkins found that people prefer descriptions that include information from more reductive scientific fields, even when those details aren’t relevant to understanding the finding.
One way people make a complex idea clear is to break it down into basic, digestible chunks. “If I’m trying to talk about how a car works,” Weisberg said, “I have to stop talking about the whole car at a certain point and instead start talking about its components and how they work together.”
She said describing a scientific discovery may work similarly: We prefer explanations that break findings into smaller parts or refer to more fundamental processes, known as reductive explanations. But are we biased toward these even when they are unhelpful or unnecessary for comprehending the science?
Weisberg began studying this effect as a graduate student. She researched people’s biases toward neuroscience language in news stories about psychology, such as when an article referred to a specific part of the brain. Weisberg and Hopkins still notice this emphasis on neuroscience in science news.
“If you do a search for the phrase ‘literally changes your brain,’ you will get thousands of hits,” said Hopkins, who completed this work while a postdoc at Penn. “‘Meditation literally changes your brain,’ ‘marriage literally changes your brain.’ People are really fascinated by this.” Everything we do changes our brain in some way, Hopkins pointed out, so such headlines are illogical and often irrelevant to the actual scientific findings.
The researchers wanted to know whether this phenomenon applied only to neuroscience or reflected a more general bias in how scientific findings are described and understood. So they set up an experiment to test how persuasive reductive information was to people reading explanations of scientific facts.
First, they decided on a hierarchy of sciences based on which fields cited each other the most. From least to most reductive, that went from social sciences such as sociology and political science to psychology, neuroscience, biology, chemistry and physics. “It was partly intuitive,” Hopkins said. “Mental activity can reduce down to the brain, and neural activity can reduce to the cells and the biology of the brain. That can reduce down to the chemical processes, which can reduce down to the physics.”
Weisberg and Hopkins then partnered with experts across the University of Pennsylvania to write a series of explanations of scientific topics that varied along two dimensions: quality of explanation, good versus bad, and whether or not reductive information was included.
“Good” explanations described a phenomenon well, while “bad” ones omitted information relevant to understanding why that phenomenon occurs. “Reductive” ones included extra information from a more reductive field, such as explaining how a cell behaves biologically by referring to its internal chemistry. “Non-reductive” explanations used only evidence from a single science, for instance, describing how the cell’s biological behavior fits with the rest of the biological system. Critically, the reductive explanations added no information necessary for grasping the phenomenon.
Study participants, who included Penn undergraduates and a sample of people recruited through a crowdsourcing website, received explanations that used different combinations of these variables: reductive good, reductive bad, non-reductive good and non-reductive bad. They then rated how satisfied they felt with each type of explanation.
The researchers found that, across all sciences, participants showed a significant bias for reductive details, rating good descriptions that used reductive information higher than those without it. The same held true for logically flawed explanations.
“It’s not like their compass for judging information is entirely broken in any sense,” Weisberg said. “The strongest and most consistent finding is that they can tell between good and bad. The good ones are always better than the bad. Within that, you get a bump for the reductive explanation.”
Though most people will not enter a scientific field themselves, it is important to recognize what biases come into play when they try to learn about a scientific discovery, Weisberg said. Such considerations may have ramifications for how science is taught.
“Most college undergraduates are not going to be professional scientists,” she said, “but they are going to be consumers of science, people who are using it in some way in their daily lives.”
In future research, Weisberg and Hopkins said they plan to investigate whether and how other factors, such as graduate training or a general knowledge of scientific practices, can reduce this bias.
The study was supported by the John Templeton Foundation.