What’s particularly new about this study is that it shows not only that silent reading produces high-frequency electrical activity in auditory areas, but that this activity is specific to voices speaking a language. It appeared only when the person was paying attention to the task.
The authors believe these results support the hypothesis that we all produce an “inner voice” when reading silently. That voice is enhanced by attention, suggesting it’s probably not an automatic process but something that occurs when we attentively process what we are reading. So the next time you read silently, remember that it’s not quite so silent to your brain.
Read the full piece at Silent reading isn’t so silent, at least, not to your brain | Neurotic Physiology
I was at a conference recently in which a Pulitzer Prize-winning writer of the narrative style gave tips about how to do it well. One tip was to have someone read your writing to you out loud and make changes so that it sounds better that way rather than just reading it silently.
Being a journalist and therefore inherently skeptical, I wondered whether optimizing text to sound better when read out loud (to the writer, no less, but that’s a different point) might actually make the piece less effective when a person reads it silently: whether the aesthetics and content of prose hit people differently when they hear it versus when they read it.
We know from media research on news consumption, for instance, that comprehension ramps up more slowly when people are listening to a TV or radio story than when they read a story in a newspaper, magazine, or on a screen. That is why we teach broadcast writers to open the lede with some filler info before getting to the important stuff — “Police in Columbia this evening have reported that…” — to give listeners time to start really paying attention. For text articles, by contrast, we teach that the first five words need to contain critical information that hooks a reader’s attention — “Two men killed a clerk…” — or else the percentage of readers drifting to something else increases drastically.
Of course that’s not quite the same thing. It has at least something to do with the fact that readers can easily backtrack in a text story to reacquire information that didn’t sink in during the first pass whereas listeners just miss it in a broadcast story (which is why we also teach repetition in broadcast stories but not in a print story, where it is annoying and a waste of space). Similarly, you can get away with more convoluted sentences in text articles than you can in aural stories where people more easily lose track of information structures; conversely, aural stories can use more conversational and informal structures, even incomplete sentences, whereas those can easily generate confusion in written text.
But just the fact of these differences made me wonder if there might be other — perhaps linguistic or neurological — factors that result in something that sounds good when read out loud not necessarily coming across as effectively when quietly read, or vice versa.
Fortunately I have a great source on the subject in my son, Oliver, who is just shy of finishing his doctorate in linguistics, the scientific study of language and how people use it. He put me onto the article quoted above, and added the points below.
The research seems to say that, yes, writers of the read word should pay particular attention to how their text sounds in their readers’ heads if they want their stories to connect with those readers more effectively.
- Subvocalization is a key component of reading for comprehension and retention. Even skilled readers do it, and if you block their ability to subvocalize by making them count or mumble while they read, their comprehension plummets. Only trained speed-readers can avoid this. The effect is attenuated if they can go back and reread things.
- There are different theories for why this is the case, but a solid one (popular with linguists) is that normal literacy involves learned processes that are tacked on to natural language processing. Reading involves converting a visual representation into something like an acoustic one (or at least a phonological one), and then processing it like speech. So this view would say that for the vast majority of literate people, all reading is eye-to-“ear”-to-brain. (There are some exceptions. Common, short words aren’t reliably subvocalized.)
- For neurotypical people, listening leads to better comprehension than reading, assuming the listener is paying close attention. Again, things are different if the text is available for rereading.
Interesting. I guess that gives me a good reason to use the text-to-speech engine on my laptop.