Ever wonder how your brain distinguishes all the sounds in a language? How does it know “b” is different from “z”?
Researchers may now be closer to understanding how the brain processes sounds, or at least those made in English. Taking advantage of a group of hospitalized epilepsy patients who had electrodes hooked directly to their brains to monitor for seizures, Dr. Edward Chang and his colleagues at the University of California, San Francisco, and University of California, Berkeley, were able to listen in on the brain as it listened to 500 English sentences spoken by 400 different native English speakers.
A specific part of the brain, the superior temporal gyrus, is responsible for translating auditory signals into something the brain “hears.” Until recently, however, neuroscientists assumed that the smallest chunks of sound the brain distinguished were phonemes, such as the “b” or “z” sounds. But Chang and his team found that the brain parses English sounds even further, into something they called “features.” Linguists have long known about these distinctions, which include categories such as plosives and fricatives and arise from the way the lips or tongue move air to produce a sound. But Chang’s work showed for the first time that the brain processes speech in these finer-grained units as well.
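To make the distinction concrete, here is a toy sketch, not drawn from Chang’s study, of how phonemes can be described as bundles of articulatory features. The feature labels follow standard phonetics, but the simplified mapping and the helper function are illustrative assumptions only.

```python
# Toy illustration: phonemes as bundles of articulatory features.
# The feature labels (plosive, fricative, voiced, etc.) are standard phonetics;
# the mapping and helper below are simplified assumptions for demonstration.

PHONEME_FEATURES = {
    "b": {"plosive", "voiced", "labial"},
    "p": {"plosive", "voiceless", "labial"},
    "d": {"plosive", "voiced", "alveolar"},
    "z": {"fricative", "voiced", "alveolar"},
    "s": {"fricative", "voiceless", "alveolar"},
    "f": {"fricative", "voiceless", "labiodental"},
}

def shared_features(a: str, b: str) -> set[str]:
    """Return the articulatory features two phonemes have in common."""
    return PHONEME_FEATURES[a] & PHONEME_FEATURES[b]

if __name__ == "__main__":
    # "b" and "z" share almost nothing, which is part of why they sound so distinct...
    print(shared_features("b", "z"))  # {'voiced'}
    # ...while "b" and "p" differ only in voicing.
    print(shared_features("b", "p"))  # {'plosive', 'labial'}
```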
“We needed to have the right level of resolution on the order of millimeters and milliseconds to address this kind of question,” he says. “It’s about how small or microscopic neural responses can be recorded.” Having the electrodes implanted directly on the six participants’ brains allowed Chang and his team to record nerve firings with unprecedented spatial and temporal resolution.
And why do the findings matter? Mapping how the brain processes features such as plosives and fricatives could lead to a better understanding of conditions such as dyslexia and autism, which involve problems encoding acoustic signals. “When we can pinpoint down to the level of individual speech sounds, and how those are being processed by the brain, then we can have a much more powerful model of how to think about these disorders,” he says.
Comparing the way native and non-native English speakers encode English sounds, for example, could reveal how much of that processing is hard-wired and how much is learned. Constant exposure to certain sounds can strengthen some neural connections and weaken others, which could explain why some sounds in foreign languages are difficult for non-native speakers to hear and produce. Similar differences could be occurring in people with learning or speech disorders.
Still, says Chang, languages around the world share a surprising number of linguistic features, which might suggest that the differences between languages are mostly learned and malleable. “It’s a fascinating, complicated system,” he says. “This is just the starting point. There is a ton more work to do in understanding how we work up from individual chunks of features to syllables and words, and from there to meaning.”