The first few months of a baby’s life bring a flurry of challenges for a still-developing brain. Sights, sounds, smells, touches and other sensory and emotional experiences flood in, waiting to be processed and filed away as the foundation for everything from language to emotions to socializing with others. What happens if things don’t find their right place in the brain during these critical months? Some research suggests it results in developmental delays later on—and that’s just what neuroscientist April Benasich and her colleagues from Rutgers University found in a new study, published in the Journal of Neuroscience.
Previous studies, by Benasich and others, show that children who learn to speak late or who develop reading disorders like dyslexia already showed differences as infants in detecting small distinctions in speech, such as the difference between da and ba.
Genetic factors certainly play a role, but up to 10% of the babies Benasich has studied had no family history of developmental problems, yet still showed language trouble when they started talking. That’s why she turned to studying the brain maps of healthy babies before they learned to speak. These maps show how infants detect and respond to sounds in their environment—from words spoken to them to the humming of a dishwasher. In these early months, their brains are primed to sort out this cacophony of auditory stimuli and to make ever more refined distinctions among them. Doing so requires distinguishing tiny differences, both in the sounds themselves and in their frequencies. “Babies do this naturally; this is their job, since they want to be able to pick sounds out quickly and figure out whether they need to pay attention to them,” says Benasich.
For the study, she fitted the babies with skull caps studded with electronic sensors that mapped their EEG activity as they were presented with different, non-linguistic tones. Some of the babies were played sounds that changed ever so slightly, such as in tone or frequency, and whenever there was a change, a small video popped up in the corner of a screen in front of them. The babies naturally turned to watch the video, so the scientists used these eye turns as a signal that the babies had heard and recognized the transition in sounds and were expecting to see the video. Another group of babies was played the same sounds but without the video training, and a control group didn’t hear the sounds at all.
It wasn’t the sounds themselves but the changes in them that were key to priming the babies’ brains. Babies trained to pay attention to these changes showed more robust mapping of language sounds later on, when they started to babble; by 18 months, they showed brain mapping patterns similar to those of two-year-olds. They were faster at discriminating different sounds, and quicker to pay attention to even tiny differences in inflection or frequency, compared to babies who weren’t played the sounds. The babies who only listened to the sounds, without the training, fell somewhere between these two groups when it came to their language mapping networks.
Benasich says that the training lays the foundation in babies’ brains for processing language sounds more efficiently, including very tiny variations among them. Their brains set up different neural routes for each sound, like a well-organized airport with separate runways designated for northbound and southbound flights. Other babies were less adept at this, essentially routing every sound through the same neural network—akin to sending every plane off the same runway, leading to delays as some have to bank and redirect in the opposite direction. In language, says Benasich, this cruder processing of sounds could result in delays in reading, speaking or language acquisition, as toddlers end up having to “manually” process sounds in a more tedious, less automatic way. “Instead of automatically discriminating sounds without pausing, they have to stop and think about what that sound might be, and that leads them to hesitate a little,” she says. “That small hesitation makes a huge difference in how well they learn and process language.”
The training, she says, was minimal – the babies’ parents brought them in for six- to eight-minute sessions once a week for about six weeks. Yet she was “surprised by how robust the effects are for the babies.”
The study involved healthy babies who did not have risk factors for language disorders, so the training served only to enhance their later language learning. But the team is currently studying a group of babies at higher risk of language deficits, either because of genetic risk factors or because they have siblings affected by such disorders. If these babies show different brain patterns compared to those not at risk, then it’s possible that EEG patterns in response to sounds could predict which infants are at risk of developing language problems even before they start to talk.
Benasich is also working on developing her test into a toy that parents can buy and use with their babies; if a baby is developing normally, the training can only accelerate and enhance its language skills later on, while for those who are struggling, the training could help them avoid learning disabilities when they start school. It’s not possible to screen every baby, but if parents and doctors are able to take advantage of such a tool, then she hopes that more language-based disorders might be avoided. “Babies naturally do this, but for those who are having trouble, we are guiding them to pay more attention to things that are important in their environment, such as language-based sounds,” she says. “We think we could make a huge difference in the number of kids who end up with learning problems.”