Background
How brain regions involved in speech production relate to those involved in perceiving acoustic signals is not yet fully understood. In this study, we compared brain activations during speech production with activations elicited by perceptual discrimination of vocalized sounds, using ultra-high-field 7 Tesla functional magnetic resonance imaging (fMRI) at 1 mm isotropic voxel resolution.
Methods
Twenty subjects (7 men; mean age 29 ± 9.0 years) underwent fMRI during a speech perception task and a speech production task at the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital (MGH). The Institutional Review Board at MGH approved the study, and all subjects gave written informed consent. Blood oxygenation level dependent (BOLD) fMRI data were acquired using a simultaneous multi-slice (SMS) echo planar imaging (EPI) sequence (TR 2.8 s). In a phoneme discrimination task, subjects were presented with pairs of /ba/ or /da/ syllables (stimulus onset asynchrony 1 s); the pairs were randomized to be either identical or separated by three steps along an 8-step continuum between the prototypic /ba/ and /da/ sounds. After each stimulus pair, subjects indicated whether the two syllables were the same or different by pressing one of two buttons with the right index or middle finger. All sounds were presented at a comfortable volume through MR-compatible insert headphones. In a speech-sound production task, subjects silently produced the lip-rounded vowel /u/ in response to the visual cue "U" or pursed their lips in response to the cue "P". The cues were presented in random order every 2–4 s. Sixty trials of each condition, interspersed with fixation trials, were collected over two runs.
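For concreteness, the following is a minimal Python sketch of the discrimination-task trial structure described above. The generate_trials function, the 50/50 same/different split, and the seed handling are illustrative assumptions, not the study's actual stimulus code; only the 8-step continuum, the three-step separation for "different" pairs, and the 1 s stimulus onset asynchrony come from the Methods.

```python
import random

# Hypothetical sketch of the /ba/-/da/ discrimination trial structure:
# an 8-step continuum (step 1 = prototypic /ba/, step 8 = prototypic /da/),
# with pairs that are either identical or three steps apart, in random order.

CONTINUUM_STEPS = list(range(1, 9))      # 8-step /ba/-/da/ continuum
SOA_S = 1.0                              # stimulus onset asynchrony (s)

def generate_trials(n_trials: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < 0.5:           # assumed 50/50 same/different split
            # "same" trial: both syllables are the identical continuum step
            step = rng.choice(CONTINUUM_STEPS)
            pair = (step, step)
        else:
            # "different" trial: steps separated by three continuum intervals
            first = rng.choice([s for s in CONTINUUM_STEPS if s + 3 <= 8])
            pair = (first, first + 3)
            if rng.random() < 0.5:       # randomize presentation order
                pair = pair[::-1]
        trials.append({"pair": pair, "soa_s": SOA_S,
                       "same": pair[0] == pair[1]})
    rng.shuffle(trials)
    return trials
```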
Results
Preliminary univariate fMRI analyses using a parametric modulation approach within the general linear model (GLM) indicated that BOLD activations related to phoneme-category variability in the /ba/–/da/ discrimination task were strongest in both the left precentral-premotor and left inferior frontal cortical areas. Largely the same regions were also activated in the silent vocalization versus lip pursing contrast of the production task. Conversely, BOLD signal changes associated with purely acoustic variability of the sounds were strongest in the bilateral auditory cortices.
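As a rough illustration of the kind of parametric-modulation GLM design referred to above, here is a minimal sketch using nilearn. The onsets, durations, and per-trial modulation values are made-up placeholders, not the study's actual model; nilearn's "modulation" column simply scales each event's amplitude, and a full parametric model would typically also include an unmodulated task regressor.

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR = 2.8                                   # repetition time (s), as in Methods
n_scans = 200                              # placeholder run length
frame_times = np.arange(n_scans) * TR

events = pd.DataFrame({
    "trial_type": ["syllable_pair"] * 4,
    "onset": [10.0, 40.0, 70.0, 100.0],    # placeholder trial onsets (s)
    "duration": [2.0] * 4,
    # hypothetical per-trial parametric modulator, e.g. an index of
    # phoneme-category variability for each syllable pair
    "modulation": [0.2, 0.9, 0.5, 0.7],
})

# Build a design matrix in which the event regressor is amplitude-scaled
# by the modulation values and convolved with a canonical HRF.
design = make_first_level_design_matrix(frame_times, events,
                                        hrf_model="glover")
print(design.columns.tolist())
```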
Conclusions
The results support the hypothesis that left-hemisphere articulatory-motor networks active during speech production are also engaged during perceptual categorization of acoustic signals.
Funding
Research reported in this abstract was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Numbers K23DC018022, R01DC017991, R01DC016765, and R01DC016915.