by New York University, March 04, 2019
How is our speech shaped by what we hear? The answer varies, depending on the make-up of our brain's pathways, a team of neuroscientists has found. The research, which maps how we synchronize our words with the rhythm of the sounds we hear, offers potential methods for diagnosing speech-related afflictions and evaluating cognitive-linguistic development in children.
"Some people spontaneously synchronize the pace of their
speech to match the rhythm of the speech they are hearing, while others do not," explains Florencia Assaneo, a post-doctoral researcher in New York University's Department of Psychology and the lead author of the study, which appears in the journal
Nature Neuroscience. "Whether you synchronize or not predicts functional and structural aspects of our language brain network as well as our ability to learn new words."
"These discoveries result from a novel
behavioral test, which reveals how individual differences are predictive of audio-motor synchronization and neurophysiological function, among other phenomena," adds David Poeppel, a professor of psychology and neuroscience at NYU and director of the Max Planck Institute for Empirical Aesthetics in Frankfurt and the study's senior author. "The potency of such a test as a tool may lead to new discoveries in language research and perhaps help to spot afflictions such as Alzheimer's, Parkinson's, or multiple sclerosis."
Extensive research has been done on how we synchronize our body movements to sound input, such as tapping our foot to the rhythm of a song. But less understood is how our brain functions in a similar speech scenario, such as singing along to a favorite tune.
The question of whether the human ability to speak is tightly connected with our ability to synchronize to the world around us is a significant one. For example, it's known that preschoolers' proficiency in synchronizing their bodies to a beat predicts their language abilities.
However, scientists have not examined whether there is a direct link between speech production rhythms—i.e., the coordinated movements of the tongue, lips, and jaw that constitute speech—and the rhythms of the perceived audio signal.
"In other words, are our mouths coupled to our ears?" Assaneo asks.
To explore this question, the scientists, who also included researchers from the University of Barcelona and the Catalan Institution for Research and Advanced Studies (ICREA), conducted a series of experiments in which subjects listened to a rhythmic sequence of syllables (e.g., "lah," "di," "fum") and, at the same time, were asked to whisper the syllable "tah" continuously.
The findings, based on more than 300 test subjects, revealed an unexpected division in how we verbalize sounds in response to what we hear. Some spontaneously synchronized their whispering to the syllable sequence (high synchronizers) while others remained impervious to the external rhythm (low synchronizers).
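To give a concrete sense of how such a grouping can be quantified, a common approach in this literature is to measure phase-locking between the amplitude envelope of the produced (whispered) speech and the rhythm of the heard syllables. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' published analysis pipeline; the 4.5 Hz syllable rate, the filter settings, and the synthetic envelopes are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Illustrative parameters -- the 4.5 Hz syllable rate and bandwidth are
# assumptions for this sketch, not values taken from the published study.
FS = 100.0            # sampling rate of the speech envelopes (Hz)
SYLLABLE_RATE = 4.5   # presentation rate of the heard syllables (Hz)
BANDWIDTH = 1.0       # half-width of the band-pass filter (Hz)

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase band-pass filter around the syllable rate."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def phase_locking_value(env_produced, env_heard, fs=FS, rate=SYLLABLE_RATE, bw=BANDWIDTH):
    """Phase-locking value (0 = no coupling, 1 = perfect coupling) between the
    produced (whispered) and heard speech envelopes at the syllable rate."""
    phases = []
    for env in (env_produced, env_heard):
        filtered = bandpass(env - np.mean(env), rate - bw, rate + bw, fs)
        phases.append(np.angle(hilbert(filtered)))
    phase_diff = phases[0] - phases[1]
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Synthetic example: one speaker locked to the stimulus rhythm, one not.
t = np.arange(0, 60, 1 / FS)
heard = 1 + np.cos(2 * np.pi * SYLLABLE_RATE * t)
locked = 1 + np.cos(2 * np.pi * SYLLABLE_RATE * t + 0.3)   # "high synchronizer"
unlocked = 1 + np.cos(2 * np.pi * 3.7 * t + 1.0)           # "low synchronizer"

print("high synchronizer PLV:", round(phase_locking_value(locked, heard), 2))
print("low synchronizer PLV:", round(phase_locking_value(unlocked, heard), 2))
```

In a measure like this, whispering that yields a value near 1 would correspond to the "high synchronizers" described above, while values near 0 would correspond to "low synchronizers."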
This division raised additional questions, such as: Does the grouping based on this test tap into how people's brains are organized? And does it have any behavioral consequences with broader significance? To answer these, the researchers deployed a battery of additional techniques.
First, the researchers asked whether white matter pathways, which affect learning and coordinate communication among different parts of the brain, differ between groups. To do this, they studied MRI data from the subjects. Here, the team found that high synchronizers have more white matter volume in the pathways connecting speech-perception (listening) areas with speech-production (speaking) areas than do low synchronizers.
Second, they used magnetoencephalography (MEG), a technique that tracks neural dynamics, to record brain activity while participants passively listened to rhythmic syllable sequences. High synchronizers showed more brain-to-stimulus synchrony than did low synchronizers: their neural activity oscillated at the same frequency as the perceived syllable rate in the part of the brain linked to speech-motor planning.
"This implies that areas related to speech production are also recruited during speech perception, which likely helps us track external speech rhythms," observes Assaneo.
Finally, the scientists tested whether being a high or low synchronizer predicts how well people learn new words. Specifically, they studied the early stages of language learning: the ability to identify a sound as a word, even without knowing its meaning. The results showed that high synchronizers were better learners of these words than were low synchronizers.
"In everyday life, this could give the high synchronizers an advantage," notes Assaneo. "For example, when in a foreign country, they may more easily pick up words in an unfamiliar language from the people talking around them."
More information: Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning, Nature Neuroscience (2019). DOI: 10.1038/s41593-019-0353-z, https://www.nature.com/articles/s41593-019-0353-z
https://medicalxpress.com/news/2019-03-rhythm-language-brain-path.html