Scientists have identified a region of the brain that is sensitive to the timing of speech, a feature that plays a key role in human language.
Timing is an important part of human speech: to understand what others are saying, the brain must learn to interpret speech's different time signatures, Duke University reported.
Speech unfolds over several timescales: phonemes, the shortest units of speech, last roughly 30 to 60 milliseconds; syllables last 200 to 300 milliseconds; and whole words are longer still. To deal with this information, the auditory system is believed to sample it in "chunks" roughly equivalent to an average consonant or syllable.
In a recent study, a team of researchers cut recordings of foreign speech into chunks ranging from 30 to 960 milliseconds in length and reassembled them using a new algorithm, creating what the researchers dubbed "speech quilts." The shorter the chunks, the greater the disruption to the speech's original structure.
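As a rough illustration of the chunk-and-reassemble idea, the sketch below (Python with NumPy; the function name, sample rate, and placeholder signal are ours, not from the study) simply cuts a waveform into equal-length segments and shuffles them. It is a crude stand-in for the researchers' actual quilting algorithm, but it shows why shorter segments scramble the original structure more thoroughly.

```python
import numpy as np

def simple_quilt(signal, sample_rate, segment_ms, seed=0):
    """Cut a 1-D audio signal into equal-length segments and
    reassemble them in random order (a simplified stand-in for
    the study's "speech quilt" stimuli)."""
    seg_len = int(sample_rate * segment_ms / 1000)   # samples per segment
    n_segments = len(signal) // seg_len              # drop any trailing partial segment
    segments = signal[:n_segments * seg_len].reshape(n_segments, seg_len)
    order = np.random.default_rng(seed).permutation(n_segments)
    return segments[order].ravel()

# Example: the shorter the segments, the more the original order is disrupted.
rate = 16000                                   # assumed 16 kHz mono recording
speech = np.random.randn(rate * 2)             # placeholder for 2 seconds of speech
quilt_30ms = simple_quilt(speech, rate, 30)    # heavily scrambled structure
quilt_960ms = simple_quilt(speech, rate, 960)  # longer stretches left intact
```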
The speech quilts were played to participants while they underwent brain scans, revealing that a region of the brain called the superior temporal sulcus (STS) became highly active during the 480- and 960-millisecond quilts, but much less so during the 30-millisecond quilts.
"That was pretty exciting. We knew we were onto something," said Tobias Overath, an assistant research professor of psychology and neuroscience at Duke.
The STS works to integrate auditory and other sensory information, and this is the first time it has been shown to respond to time structures in speech. To back up their findings, the researchers tested control sounds that mimicked speech. The control stimuli were also rearranged into quilts and played to participants, and the researchers observed that the STS did not respond to these control quilts in the same way.
"We really went to great lengths to be certain that the effect we were seeing in STS was due to speech-specific processing and not due to some other explanation, for example, pitch in the sound or it being a natural sound as opposed to some computer-generated sound," Overath said.
The findings were published in a recent edition of the journal Nature Neuroscience.