Infant Exposure to Brief Auditory Cues Can Support Language Development
It matters what your baby hears. Even during sleep, the sounds that infants are exposed to can play a big role in language development, especially for babies at risk of language delays, according to a Rutgers University-Newark neuroscientist.
While it’s well known that music and speech boost babies’ ability to learn, there is also robust evidence that certain brief auditory cues in an infant’s environment are analyzed by the developing brain and used to guide the formation of networks involved in language processing.
Researcher April Benasich, an expert in early brain plasticity who studies infant language and cognitive development, demonstrated that infants who were passively exposed to a series of brief non-speech sounds once a week for six weeks were able to more accurately identify and discriminate syllables and had better language scores at 12 and 18 months compared to infants who had not received that exposure.
Her findings were published in the journal Cerebral Cortex.
The study is important because it’s the first to show that passive exposure to non-speech sounds facilitates the formation and strengthening of neuronal connections that are essential to language processing. These sounds contain tiny acoustic transitions, on the order of tens of milliseconds, similar to those that allow babies to detect that language is present.
Previous research in Benasich’s lab showed that interactive exposure to certain auditory cues had a significant impact on critical brain networks and improved both attention and infant language outcomes over time. But the jury was still out on whether passively exposing infants to these same types of sounds would have an effect on language networks. The new findings show that it does, with clear impacts on both language processing and later language outcomes. The results suggest that supporting rapid auditory processing abilities early in development, even with only passive exposure, can positively influence later language.
“The ability to impact developing language networks passively is a very important step forward. The passive route provides a simpler, cheaper alternative to promote optimal networks, allowing parents the opportunity to support typical development at home as well as offering a path to an accessible intervention in the clinic or pediatrician’s office for infants at high risk for developmental language disorders,” said Benasich, a professor of neuroscience at Rutgers-Newark’s Center for Molecular and Behavioral Neuroscience and the nation’s first endowed chair in Developmental Cognitive Neuroscience.
Her previous research found that measures of rapid auditory processing ability can be used to identify infants at highest risk of language delay and impairment, providing an opportunity to intervene early and improve outcomes.
Benasich believes that neuroscientists must move faster to provide the public with tools to improve brain health. To that end, she co-founded a company called RAPT Ventures, Inc. (RVI), which recently launched its first product, the RAPTbaby Smarter Sleep Sound Machine, designed to give parents a soothing, cognitively supportive sound environment for infants and young children. The machine’s variation in sound and tone is what makes it effective, she said.
“Babies need the small sound transitions that brains must analyze to develop language,” she explained. “Their brains are hardwired to analyze any pertinent environmental sounds coming in. If those sounds are all the same frequency, all at the same intensity, the brain might stop listening for these important variations, which could impede the creation of language networks.”
For Benasich, making important neuroscience findings more accessible to the public and to healthcare providers is a critical part of her mission as a scientist and informs the products her company creates. “I feel a strong responsibility to make that happen, particularly where it pertains to the development of young brains,” she said.