People Show ‘Blind Insight’ Into Decision Making Performance

People can gauge the accuracy of their decisions, even if their decision making performance itself is no better than chance, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.

In the study, people who showed chance-level decision making still reported greater confidence about decisions that turned out to be accurate and less confidence about decisions that turned out to be inaccurate. The findings suggest that the participants must have had some unconscious insight into their decision making, even though they failed to use that knowledge in making their original decisions, a phenomenon the researchers call “blind insight.”

“The existence of blind insight tells us that our knowledge of the likely accuracy of our decisions — our metacognition — does not always derive directly from the same information used to make those decisions, challenging both everyday intuition and dominant theoretical models of metacognition,” says researcher Ryan Scott of the University of Sussex in the UK.

Metacognition, the ability to think about and evaluate our own mental processes, plays a fundamental role in memory, learning, self-regulation, and social interaction, and it differs markedly across mental states, such as in certain mental illnesses or states of consciousness.

“Consciousness research reveals many instances in which people are able to make accurate decisions without knowing it, that is, in the absence of metacognition,” says Scott. The most famous example is blindsight, in which people are able to discriminate visual stimuli even though they report that they can’t see the stimuli and that their discrimination judgments are mere guesses.

Scott and colleagues wanted to know whether the opposite scenario — metacognitive insight in the absence of accurate decision making — could also occur:

“We wondered: Can a person lack accuracy in their decisions but still be more confident when their decision is right than when it’s wrong?” Scott explains.

The researchers looked at data from 450 student volunteers, aged 18 to 40. The volunteers were presented with a “short-term memory task” in which they were shown strings of letters and were asked to memorize them. After the memory task, the researchers revealed that the order of the letters in the strings actually obeyed a complex set of rules.

The participants were then shown a new set of letter strings, half of which followed the same rules. For each string, they judged whether or not it followed the rules and rated how confident they were in that judgment.

To explore the relationship between decision making and metacognition, the researchers examined data from participants whose performance was at or below chance for the first 75% of the test strings (inaccurate decision makers) and data from participants who performed significantly above chance over the same proportion of trials (accurate decision makers).
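
For concreteness, deciding whether a participant performed “significantly above chance” on those early trials could be done with a simple one-sided binomial test on their accuracy. The Python sketch below is illustrative only; the cutoff and the exact test are assumptions, not necessarily the authors’ classification rule.

    from math import comb

    def above_chance(n_correct, n_trials, p=0.5, alpha=0.05):
        """One-sided binomial test: is accuracy significantly above chance?
        Illustrative criterion; the authors' exact rule may differ."""
        # P(X >= n_correct) under the chance (p = 0.5) null hypothesis
        p_value = sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
                      for k in range(n_correct, n_trials + 1))
        return p_value < alpha

    # Example: 40 of 60 early trials correct is clearly above chance,
    # while 31 of 60 is not.
    print(above_chance(40, 60))  # True  -> "accurate decision maker"
    print(above_chance(31, 60))  # False -> at or near chance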

Looking at the data from the remaining 25% of trials, the researchers found that, despite their overall chance-level performance, inaccurate decision makers made reliable confidence judgments about their decisions. In fact, the reliability of their confidence judgments did not differ from the reliability of confidence judgments made by accurate decision makers.
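
What “reliable confidence judgments” means here can be made concrete with a type-2 analysis: does confidence discriminate correct from incorrect responses, regardless of whether the responses themselves are accurate? The Python sketch below computes an AUROC-style measure of that discrimination, assuming per-trial confidence ratings and correctness labels; the variable names and the measure are illustrative and not necessarily the paper’s exact analysis.

    import numpy as np

    def type2_auroc(confidence, correct):
        """Probability that a randomly chosen correct trial received higher
        confidence than a randomly chosen incorrect trial (0.5 = no insight)."""
        confidence = np.asarray(confidence, dtype=float)
        correct = np.asarray(correct, dtype=bool)
        conf_hit, conf_miss = confidence[correct], confidence[~correct]
        if conf_hit.size == 0 or conf_miss.size == 0:
            return np.nan  # undefined if all trials are correct or all incorrect
        greater = (conf_hit[:, None] > conf_miss[None, :]).mean()
        ties = (conf_hit[:, None] == conf_miss[None, :]).mean()
        return greater + 0.5 * ties  # ties count as half

    # Example: chance-level decisions (about half correct) but slightly higher
    # confidence on the trials that happened to be correct -- the "blind
    # insight" pattern reported for the inaccurate decision makers.
    rng = np.random.default_rng(0)
    correct = rng.random(500) < 0.5
    confidence = np.where(correct, rng.normal(3.2, 1.0, 500),
                          rng.normal(2.8, 1.0, 500))
    print(type2_auroc(confidence, correct))  # noticeably above 0.5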

In other words, the participants exhibited the dissociation opposite to blindsight: they knew when they were wrong, despite being unable to make accurate judgments. The researchers named the phenomenon “blind insight” to reflect that relationship.

Taken together, these findings do not support the type of bottom-up, hierarchical model of metacognition proposed by many researchers. Such models, framed in terms of signal detection theory, hold that low-level sensory signals drive first-order judgments (e.g., “Is this correct?”) and, ultimately, second-order metacognitive judgments (e.g., “How confident am I about whether this is correct?”).

In this study, however, there was no reliable signal driving decision making for the inaccurate decision makers; thus, according to the established models, there should have been no signal available to drive second-order confidence judgments. The fact that confidence was nevertheless greater for correct responses demonstrates that such a hierarchical model is flawed. Based on these findings, the researchers argue that there must be other pathways to metacognitive insight and that a radical revision of models of metacognition is required.
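
A toy simulation helps illustrate why a strictly single-signal, hierarchical account struggles here. In the sketch below (Python, with illustrative parameters), both the first-order decision and the confidence rating are read off the same noisy internal signal; when that signal carries no information (d' = 0), confidence cannot distinguish correct from incorrect responses, which is the opposite of what the inaccurate decision makers showed.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 100_000
    d_prime = 0.0  # chance-level first-order sensitivity, as in the inaccurate group

    # Internal evidence: rule-following strings shift the signal by d_prime.
    follows_rules = rng.random(n_trials) < 0.5
    evidence = rng.normal(d_prime * follows_rules, 1.0)

    # First-order judgment and confidence are both derived from the same signal.
    says_rules = evidence > 0
    confidence = np.abs(evidence)  # distance from the criterion as confidence

    correct = says_rules == follows_rules
    print("accuracy:", correct.mean())                           # ~0.5
    print("confidence | correct:", confidence[correct].mean())   # essentially equal...
    print("confidence | wrong:  ", confidence[~correct].mean())  # ...to this
    # With d_prime = 0 the two confidence means coincide: a single-signal
    # hierarchical model predicts no metacognitive insight when decisions
    # are at chance.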

The full article is available online.

Study co-authors include Zoltan Dienes, Adam B. Barrett, Daniel Bor, and Anil K. Seth of the University of Sussex.

All data and materials have been made publicly available via Open Science Framework and can be accessed at https://osf.io/ivdk4/files/. The complete Open Practices Disclosure for this article can be found at http://pss.sagepub.com/content/by/supplemental-data. This article has received badges for Open Data and Open Materials. More information about the Open Practices badges can be found at https://osf.io/tvyxz/wiki/view/ and http://pss.sagepub.com/content/25/1/3.full.

