Characterizing the temporal processing of speech in the human auditory cortex

Project: Research project

Project Details

Description

Project Summary

Time is the fundamental dimension of sound, and temporal integration is thus fundamental to speech perception. To recognize a complex structure such as a word in fluent speech, the brain must integrate across many different timescales spanning tens to hundreds of milliseconds. These timescales are considerably longer than the duration of responses at the auditory nerve; the auditory cortex must therefore integrate acoustic information over long and varied timescales to encode linguistic units. At the same time, the nature of the intermediate units of representation between sound and meaning remains debated: focal brain injuries have shown selective impairment at all levels of linguistic processing (phonemic, phonotactic, and semantic), yet current models of spoken word recognition disagree on the existence and type of these representational levels. The neural basis of temporal and linguistic processing remains speculative, partly because noninvasive human neuroimaging techniques lack the spatiotemporal resolution needed to study the encoding of fluent speech.

Our multi-PI proposal overcomes these challenges by assembling a team of researchers and clinicians with complementary expertise at NYU and Columbia University. We propose to record invasively from a large number of neurosurgical patients, which provides a rare and unique opportunity to collect direct cortical recordings across several auditory regions. We propose novel experimental paradigms and analysis methods to investigate where, when, and how acoustic features of speech are integrated over time to encode linguistic units. Our experimental paradigms will determine the functional and anatomical organization of stimulus integration periods in primary and nonprimary auditory cortical regions and relate temporal processing in these regions to the emergence of phonemic-, phonotactic-, and semantic-level representations. Finally, we will determine the nonlinear computational mechanisms that enable the auditory cortex to integrate fast features over long durations, which is essential for speech recognition. Understanding the temporal processing of speech in primary and nonprimary auditory cortex is critical for developing complete models of speech perception in the human brain, and for understanding how these processes break down in speech and communication disorders.
Status: Finished
Effective start/end date: 9/1/21 – 5/31/22

Funding

  • National Institute on Deafness and Other Communication Disorders: US$681,122.00

ASJC Scopus Subject Areas

  • Neuroscience (all)
  • Speech and Hearing
