Visual phonetic processing localized using speech and nonspeech face gestures in video and point‐light displays

Authors: Lynne E. Bernstein, Jintao Jiang, Dimitrios Pantazis, Zhong‐Lin Lu, Anand Joshi
Affiliations:
1. Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California
2. Department of Psychology and Neuroscience Graduate Program, University of Southern California, Los Angeles, California
3. Department of Electrical Engineering, Signal and Image Processing Institute, University of Southern California, Los Angeles, California
4. Brain and Creativity Institute, University of Southern California, Los Angeles, California

Abstract: The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and in point‐light displays during fMRI scanning at 3.0 T. Participants with normal hearing viewed these stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to nonspeech and control stimuli. Group analyses showed distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus in response to speech, independent of display medium. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but distinct activations, with the speech activations lying more anterior. We suggest that this speech activation area is the temporal visual speech area (TVSA) and that it can be localized with the combination of stimuli used in this study. Hum Brain Mapp, 2010. © 2010 Wiley‐Liss, Inc.

Keywords: visual perception; speech perception; functional magnetic resonance imaging; lipreading; speechreading; phonetics; gestures; temporal lobe; frontal lobe; parietal lobe