You know what you’re going to say before you say it, right? Not necessarily, research suggests. A study from researchers at Lund University in Sweden shows that auditory feedback plays an important role in helping us determine what we’re saying as we speak. The study is published in Psychological Science, a journal of the Association for Psychological Science.
“Our results indicate that speakers listen to their own voices to help specify the meaning of what they are saying,” says researcher Andreas Lind of Lund University, lead author of the study.
Theories about how we produce speech often assume that we start with a clear, preverbal idea of what to say that goes through different levels of encoding to finally become an utterance.
But the findings from this study support an alternative model in which speech is more than just a dutiful translation of this preverbal message:
“These findings suggest that the meaning of an utterance is not entirely internal to the speaker, but that it is also determined by the feedback we receive from our utterances, and from the inferences we draw from the wider conversational context,” Lind explains.
For the study, Lind and colleagues recruited Swedish participants to complete a classic Stroop test, which provided a controlled linguistic setting. During the Stroop test, participants were presented with various color words (e.g., “red” or “green”) one at a time on a screen and were tasked with naming the color of the font that each word was printed in, rather than the color that the word itself signified.
The participants wore headphones that provided real-time auditory feedback as they took the test — unbeknownst to them, the researchers had rigged the feedback using a voice-triggered playback system. This system allowed the researchers to substitute phonologically similar but semantically distinct words (e.g., exchanging “grey” and “green”) in real time, a technique they call “Real-time Speech Exchange,” or RSE.
Data from the 78 participants indicated that when the timing of the insertions was right, only about one third of the exchanges were detected.
On many of the non-detected trials, when asked to report what they had said, participants reported the word they had heard through feedback, rather than the word they had actually said. Because accuracy on the task was actually very high, the manipulated feedback effectively led participants to believe that they had made an error and said the wrong word.
Overall, Lind and colleagues found that participants accepted the manipulated feedback as having been self-produced on about 85% of the non-detected trials.
Together, these findings suggest that our understanding of our own utterances, and our sense of agency for those utterances, depend to some degree on inferences we make after we’ve made them.
Most surprising, perhaps, is the fact that while participants received several indications about what they actually said — from their tongue and jaw, from sound conducted through the bone, and from their memory of the correct alternative on the screen — they still treated the manipulated words as though they were self-produced.
This suggests, says Lind, that the effect may be even more pronounced in everyday conversation, which is less constrained and more ambiguous than the context offered by the Stroop test.
“In future studies, we want to apply RSE to situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way an interview or conversation develops,” says Lind.
“While this is technically challenging to execute, it could potentially tell us a great deal about how meaning and communicative intentions are formed in natural discourse,” he concludes.
Co-authors on the study include Lars Hall, Björn Breidegard, and Christian Balkenius of Lund University, and Petter Johansson of Lund University and Uppsala University. This work was supported by Uno Otterstedt’s Foundation (Grant EKDO2010/54), the Crafoord Foundation (Grant 20101020), the Swedish Research Council, the Bank of Sweden Tercentenary Foundation, the Pufendorf Institute, and the European Union Goal-Leaders project (Grant FP7 270108).
I agree with what the writer says: people respond better to what they hear than to what they read. Often when I study, I memorize what I utter with my mouth better than what I just scan or read. What we hear will determine our response.
It is quite amazing how our brains work. I never thought that speech could be manipulated. This actually makes sense, because I also remember something better if I say it out loud. This helps when having to study difficult concepts.
This was a very interesting read. I have watched Brain Games a few times and seen how our brains can be manipulated, but I never thought that our speech could be manipulated too. The research that has been done on this aspect concerns something so much a part of our daily lives that we do not notice how our speech is being shaped on a daily basis. You can plan out exactly what you want to say to someone, but when it comes to it, it never comes out the way you planned. A person’s reactions to what you say, and their response, play a role in what you are going to say next. So I fully agree with what the author has been saying in this post.
This research shows that the brain responds better to what we hear than to what we see. It is much easier for the brain to memorize what we read out loud than what we read without saying it. That is why we need to hear what we say to know what we are saying.