Similar References
 Found 20 similar documents (search time: 31 ms)
1.
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.
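The Auditory Clarity manipulation above (noise-vocoding) can be sketched in code: band-pass the speech into a few channels, extract each channel's amplitude envelope, and re-impose the envelopes on band-limited noise. The function below is a minimal illustrative vocoder, not the study's actual stimulus pipeline; the band count, frequency range, and FFT-masking filter are all simplifying assumptions.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=4000.0, seed=0):
    """Simplified noise-vocoder sketch: split the spectrum into
    logarithmically spaced bands, extract each band's amplitude
    envelope, and use it to modulate band-limited noise."""
    rng = np.random.default_rng(seed)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # band-pass filtering via FFT bin masking (idealized filter)
        band = np.fft.irfft(np.fft.rfft(signal) * band_mask, n=len(signal))
        # amplitude envelope: rectify, then smooth with a ~10 ms moving average
        win = max(1, int(0.01 * fs))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # same-band noise carrier, modulated by the speech envelope
        band_noise = np.fft.irfft(np.fft.rfft(noise) * band_mask, n=len(signal))
        out += env * band_noise
    return out
```

Fewer bands yield less intelligible speech; intelligibility rises steeply as the channel count increases, which is what makes vocoding a convenient parametric clarity manipulation.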

2.
When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from "top-down" processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top-down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top-down effects, this pattern is more compatible with a purely feedforward or bottom-up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply "top-down" neural processes.
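Constructing SNR conditions like these amounts to scaling the masking noise against the speech power before mixing, since SNR in dB is 10·log10(P_speech/P_noise). The helper below is a generic sketch of that arithmetic (the function name and defaults are illustrative, not the authors' stimulus code).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db`, then add it to the speech."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # required noise power for the target SNR: snr_db = 10*log10(Ps/Pn)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled_noise
```

Negative `snr_db` values make the noise more intense than the speech, which is typically where sentence-content effects on word report are largest.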

3.
Human brain function draws on predictive mechanisms that exploit higher‐level context during lower‐level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704–716, 2016. © 2015 Wiley Periodicals, Inc.

4.
The brain networks supporting speech identification and comprehension under difficult listening conditions are not well specified. The networks hypothesized to underlie effortful listening include regions responsible for executive control. We conducted meta‐analyses of auditory neuroimaging studies to determine whether a common activation pattern of the frontal lobe supports effortful listening under different speech manipulations. Fifty‐three functional neuroimaging studies investigating speech perception were divided into three independent Activation Likelihood Estimate analyses based on the type of speech manipulation paradigm used: speech‐in‐noise (SIN, 16 studies, involving 224 participants); spectrally degraded speech using filtering techniques (15 studies, involving 270 participants); and linguistic complexity (i.e., levels of syntactic, lexical and semantic intricacy/density, 22 studies, involving 348 participants). Meta‐analysis of the SIN studies revealed higher effort was associated with activation in left inferior frontal gyrus (IFG), left inferior parietal lobule, and right insula. Studies using spectrally degraded speech demonstrated increased activation of the insula bilaterally and the left superior temporal gyrus (STG). Studies manipulating linguistic complexity showed activation in the left IFG, right middle frontal gyrus, left middle temporal gyrus and bilateral STG. Planned contrasts revealed left IFG activation in linguistic complexity studies, which differed from activation patterns observed in SIN or spectral degradation studies. Although there was no significant overlap in prefrontal activation across these three speech manipulation paradigms, SIN and spectral degradation showed overlapping regions in left and right insula. These findings provide evidence that there is regional specialization within the left IFG and that differential executive networks underlie effortful listening.
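The Activation Likelihood Estimate (ALE) method used here pools reported peak coordinates by modeling each peak as a 3-D Gaussian "modeled activation" blob and combining studies as a probabilistic union. The toy version below runs on an arbitrary voxel grid with a fixed kernel width; real ALE works in MNI space with sample-size-dependent kernels and a permutation-based significance test, none of which is reproduced here.

```python
import numpy as np

def ale_map(study_peaks, grid_shape=(20, 20, 20), fwhm_vox=3.0):
    """Toy ALE: per study, the modeled activation (MA) map is the maximum
    over Gaussian blobs centered on its reported peaks; study maps are
    combined as a probabilistic union, ALE = 1 - prod(1 - MA_study).
    Coordinates are in voxel units for simplicity."""
    sigma = fwhm_vox / 2.355  # convert FWHM to standard deviation
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in grid_shape],
                                indexing="ij"), axis=-1).astype(float)
    one_minus = np.ones(grid_shape)
    for peaks in study_peaks:                # one list of (x, y, z) per study
        ma = np.zeros(grid_shape)
        for p in peaks:
            d2 = np.sum((grid - np.asarray(p, float)) ** 2, axis=-1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
        one_minus *= (1.0 - ma)
    return 1.0 - one_minus
```

The union rule means convergence across independent studies, not a single study's many peaks, is what drives high ALE values.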

5.
Everyday communication is accompanied by visual information from several sources, including co‐speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory‐only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.

6.
Gestures are an important part of human communication. However, little is known about the neural correlates of gestures accompanying speech comprehension. The goal of this study is to investigate the neural basis of speech-gesture interaction as reflected in activation increase and decrease during observation of natural communication. Fourteen German participants watched video clips of 5 s duration depicting an actor who performed metaphoric gestures to illustrate the abstract content of spoken sentences. Furthermore, video clips of isolated gestures (without speech), isolated spoken sentences (without gestures) and gestures in the context of an unknown language (Russian) were additionally presented while functional magnetic resonance imaging (fMRI) data were acquired. Bimodal speech and gesture processing led to activation increases in the left posterior middle temporal gyrus, the left premotor cortex, the left inferior frontal gyrus, and the right superior temporal sulcus. Activation reductions during the bimodal condition were located in the left superior temporal gyrus and the left posterior insula. Gesture-related activation increases and decreases were dependent on language semantics and were not found in the unknown-language condition. Our results suggest that semantic integration processes for bimodal speech plus gesture comprehension are reflected in activation increases in the classical left hemispheric language areas. Speech-related gestures seem to enhance language comprehension during face-to-face communication.

7.
An event related potential, known as the N400, has been particularly useful in investigating language processing as it serves as a neural index for semantic prediction. There are numerous studies on the functional segregation of N400 neural sources; however, the oscillatory dynamics of functional connections among the relevant sources have remained elusive. In this study we acquired magnetoencephalography data during a classic N400 paradigm, where the semantic predictability of a fixed target noun was manipulated in simple German sentences. We conducted inter‐regional functional connectivity (FC) and time–frequency analysis on known regions of the semantic network, encompassing bilateral temporal, and prefrontal cortices. Increased FC was found in less predicted (LP) nouns compared with highly predicted (HP) nouns in three connections: (a) right inferior frontal gyrus (IFG) and right middle temporal gyrus (MTG) from 0 to 300 ms mainly within the alpha band, (b) left lateral orbitofrontal (LOF) and right IFG around 400 ms within the beta band, and (c) left superior temporal gyrus (STG) and left LOF from 300 to 700 ms in the beta and low gamma bands. Furthermore, gamma spectral power (31–70 Hz) was stronger in HP nouns than in LP nouns in left anterior temporal cortices in earlier time windows (0–200 ms). Our findings support recent theories in language comprehension, suggesting fronto‐temporal top–down connections are mainly mediated through beta oscillations while gamma band frequencies are involved in matching between prediction and input.
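Band-limited spectral power such as the gamma measure above (31–70 Hz) can be estimated from an FFT power spectrum. The following is a minimal single-window sketch; MEG studies like this one typically use wavelet or multitaper time–frequency decompositions over sliding windows instead, so treat this as the idea rather than the authors' analysis.

```python
import numpy as np

def band_power(signal, fs, f_lo=31.0, f_hi=70.0):
    """Mean spectral power within [f_lo, f_hi] Hz (defaults: the gamma
    band used in the study), from a single-taper FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()
```

Computing this per trial and per time window, then contrasting conditions (here HP vs. LP nouns), is the basic logic behind a spectral-power analysis.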

8.
In this quantitative review, we specified the anatomical basis of brain activity reported in the Temporo‐Parietal Junction (TPJ) in Theory of Mind (ToM) research. Using probabilistic brain atlases, we labeled TPJ peak coordinates reported in the literature. This was carried out for four different atlas modalities: (i) gyral parcellation, (ii) sulco‐gyral parcellation, (iii) cytoarchitectonic parcellation and (iv) connectivity‐based parcellation. In addition, our review distinguished between two ToM task types (false belief and social animations) and a nonsocial task (attention reorienting). We estimated the mean probabilities of activation for each atlas label, and found that for all three task types part of TPJ activations fell into the same areas: (i) Angular Gyrus (AG) and Lateral Occipital Cortex (LOC) in terms of a gyral atlas, (ii) AG and Superior Temporal Sulcus (STS) in terms of a sulco‐gyral atlas, (iii) areas PGa and PGp in terms of cytoarchitecture and (iv) area TPJp in terms of a connectivity‐based parcellation atlas. Besides these commonalities, we also found that individual task types showed preferential activation for particular labels. Main findings for the right hemisphere were preferential activation for false belief tasks in AG/PGa, and in Supramarginal Gyrus (SMG)/PFm for attention reorienting. Social animations showed strongest selective activation in the left hemisphere, specifically in left Middle Temporal Gyrus (MTG). We discuss how our results (i.e., identified atlas structures) can provide a new reference for describing future findings, with the aim to integrate different labels and terminologies used for studying brain activity around the TPJ. Hum Brain Mapp 38:4788–4805, 2017. © 2017 Wiley Periodicals, Inc.

9.
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities as well as carefully matched low-ambiguity sentences. Results showed ambiguity-related responses in the posterior left inferior frontal gyrus (pLIFG) and posterior left middle temporal regions. The pLIFG activations were present for both syntactic and semantic ambiguities suggesting that this region is not specialised for processing either semantic or syntactic information, but instead performs cognitive operations that are required to resolve different types of ambiguity irrespective of their linguistic nature, for example by selecting between possible interpretations or reinterpreting misparsed sentences. Syntactic ambiguities also produced activation in the posterior middle temporal gyrus. These data confirm the functional relationship between these two brain regions and their importance in constructing grammatical representations of spoken language.

10.
Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher‐level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid‐anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left‐lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right‐hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid‐to‐anterior STS activation is associated with increasing speech intelligibility, while the mid‐to‐posterior STG/STS is more sensitive to syntactic information within the speech. Hum Brain Mapp, 2010. © 2009 Wiley‐Liss, Inc.

11.
The aim of this event‐related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well‐understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions‐of‐interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language. Hum Brain Mapp, 2012. © 2011 Wiley Periodicals, Inc.

12.
Humans can understand spoken or written sentences presented at extremely fast rates of ~400 wpm, far exceeding the normal speech rate (~150 wpm). How does the brain cope with speeded language? And what processing bottlenecks eventually make language incomprehensible above a certain presentation rate? We used time-resolved fMRI to probe the brain responses to spoken and written sentences presented at five compression rates, ranging from intelligible (60-100% of the natural duration) to challenging (40%) and unintelligible (20%). The results show that cortical areas differ sharply in their activation speed and amplitude. In modality-specific sensory areas, activation varies linearly with stimulus duration. However, a large modality-independent left-hemispheric language network, including the inferior frontal gyrus (pars orbitalis and triangularis) and the superior temporal sulcus, shows a remarkably time-invariant response, followed by a sudden collapse for unintelligible stimuli. Finally, linear and nonlinear responses, reflecting a greater effort as compression increases, are seen at various prefrontal and parietal sites. We show that these profiles fit with a simple model according to which the higher stages of language processing operate at a fixed speed and thus impose a temporal bottleneck on sentence comprehension. At presentation rates faster than this internal processing speed, incoming words must be buffered, and intelligibility vanishes when buffer storage and retrieval operations are saturated. Based on their temporal and amplitude profiles, buffer regions can be identified with the left inferior frontal/anterior insula, precentral cortex, and mesial frontal cortex.

13.
14.
In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)—responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture‐speech integration, was not sensitive to the gesture‐speech relation. Together, these findings clarify the neurobiology of gesture‐speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Hum Brain Mapp 35:900–917, 2014. © 2012 Wiley Periodicals, Inc.

15.
In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates the neural basis of memory for bimodal speech and gesture representations. In this fMRI study, 12 participants were presented with video clips showing an actor performing meaningful metaphoric gestures (MG), unrelated, free gestures (FG), and no arm and hand movements (NG) accompanying sentences with an abstract content. After the fMRI session, the participants performed a recognition task. Behaviorally, the participants showed the highest hit rate for sentences accompanied by meaningful metaphoric gestures. Despite comparable old/new discrimination performances (d') for the three conditions, we obtained distinct memory-related left-hemispheric activations in the inferior frontal gyrus (IFG), the premotor cortex (BA 6), and the middle temporal gyrus (MTG), as well as significant correlations between hippocampal activation and memory performance in the metaphoric gesture condition. In contrast, unrelated speech and gesture information (FG) was processed in areas of the left occipito-temporal and cerebellar region and the right IFG just like the no-gesture condition (NG). We propose that the specific left-lateralized activation pattern for the metaphoric speech-gesture sentences reflects semantic integration of speech and gestures. These results provide novel evidence about the neural integration of abstract speech and gestures as it contributes to subsequent memory performance.

16.
The kana pick-out test has been widely used in Japan to evaluate the ability to divide attention in both adult and pediatric patients. However, the neural substrates underlying the ability to divide attention using the kana pick-out test, which requires participants to pick out individual letters (vowels) in a story while also reading for comprehension, thus requiring simultaneous allocation of attention to both activities, are still unclear. Moreover, outside of the clinical area, neuroimaging studies focused on the mechanisms of divided attention during complex story comprehension are rare. Thus, the purpose of the present study, to clarify the neural substrates of the kana pick-out test, improves our current understanding of the basic neural mechanisms of dual task performance in verbal memory function. We compared patterns of activation in the brain obtained during performance of the individual tasks of vowel identification and story comprehension, to levels of activation when participants performed the two tasks simultaneously during the kana pick-out test. We found that activation of the left dorsal inferior frontal gyrus and superior parietal lobule, and the functional connectivity between them, increased in the dual task condition compared to the two single task conditions. In contrast, activations of the left fusiform gyrus and middle temporal gyrus, which are significantly involved in picking out letters and comprehending complex sentences during story comprehension, respectively, were reduced in the dual task condition compared to the two single task conditions. These results suggest that increased activations of the dorsal inferior frontal gyrus and superior parietal lobule during dual task performance may be associated with the capacity for attentional resources, and reduced activations of the left fusiform gyrus and middle temporal gyrus may reflect the difficulty of concurrent processing of the two tasks.
In addition, the increase in synchronization between the left dorsal inferior frontal gyrus and superior parietal lobule in the dual task condition may support effective communication between these brain regions and contribute to the additional attentional processing required relative to the single task conditions, given the greater and more complex demands on voluntary attentional resources.

17.
Studies of spoken and written language suggest that the perception of sentences engages the left anterior and posterior temporal cortex and the left inferior frontal gyrus to a greater extent than nonsententially structured material, such as word lists. This study sought to determine whether the same is true when the language is gestural and perceived visually. Regional neural activity was measured using functional MRI while Deaf and hearing native signers of British Sign Language (BSL) detected semantic anomalies in well‐formed BSL sentences and when they detected nonsense signs in lists of unconnected BSL signs. Processing BSL sentences, when contrasted with signed lists, was reliably associated with greater activation in the posterior portions of the left middle and superior temporal gyri and in the left inferior frontal cortex, but not in the anterior temporal cortex, which was activated to a similar extent whether lists or sentences were processed. Further support for the specificity of these areas for processing the linguistic—rather than visuospatial—features of signed sentences came from a contrast of hearing native signers and hearing sign‐naïve participants. Hearing signers recruited the left posterior temporal and inferior frontal regions during BSL sentence processing to a greater extent than hearing nonsigners. These data suggest that these left perisylvian regions are differentially associated with sentence processing, whatever the modality of the linguistic input. Hum Brain Mapp, 2005. © 2005 Wiley‐Liss, Inc.

18.
Expectations and prior knowledge are thought to support the perceptual analysis of incoming sensory stimuli, as proposed by the predictive‐coding framework. The current fMRI study investigated the effect of prior information on brain activity during the decoding of degraded speech stimuli. When prior information enabled the comprehension of the degraded sentences, the left middle temporal gyrus and the left angular gyrus were activated, highlighting a role of these areas in meaning extraction. In contrast, the activation of the left inferior frontal gyrus (area 44/45) appeared to reflect the search for meaningful information in degraded speech material that could not be decoded because of mismatches with the prior information. Our results show that degraded sentences evoke instantaneously different percepts and activation patterns depending on the type of prior information, in line with prediction‐based accounts of perception. Hum Brain Mapp 35:61–74, 2014. © 2012 Wiley Periodicals, Inc.

19.
Fronto-temporal interactions during overt verbal initiation and suppression
The Hayling Sentence Completion Task (HSCT) is known to activate left hemisphere frontal and temporal language regions. However, the effective connectivity between frontal and temporal language regions associated with the task has yet to be examined. The aims of the study were to examine activation and effective connectivity during the HSCT using a functional magnetic resonance imaging (fMRI) paradigm in which participants made overt verbal responses. We predicted that producing an incongruent response (response suppression), compared to a congruent one (response initiation), would be associated with greater activation in the left prefrontal cortex and an increase in the effective connectivity between temporal and frontal regions. Fifteen participants were scanned while completing 80 sentence stems. The congruency and constraint of sentences varied across trials. Dynamic Causal Modeling (DCM) and Bayesian Model Selection (BMS) were used to compare a set of alternative DCMs of fronto-temporal connectivity. The HSCT activated regions in the left temporal and prefrontal cortices, and the cuneus. Response suppression was associated with greater activation in the left middle and orbital frontal gyri and the bilateral precuneus than response initiation. Left middle temporal and frontal regions identified by the conventional fMRI analyses were entered into the DCM analysis. Using a systematic BMS procedure, the optimal DCM showed that the connection from the left middle temporal gyrus, which was driven by verbal stimuli per se, was significantly increased in strength during response suppression compared to initiation. Greater effective connectivity between left temporal and prefrontal regions during response suppression may reflect the transfer of information from posterior temporal regions where semantic and lexical information is stored to prefrontal regions where it is manipulated in preparation for an appropriate response.

20.
Functional studies in schizophrenia demonstrate prominent abnormalities within the left inferior frontal gyrus (IFG) and also suggest functional connectivity abnormalities during semantic processing in a language network that includes the left IFG and superior temporal gyrus. White matter connections between regions involved in the semantic network have also been implicated in schizophrenia. However, an association between functional and anatomical connectivity disruptions within the semantic network in schizophrenia has not been established. Functional (using a levels-of-processing paradigm) as well as diffusion tensor imaging data from 10 controls and 10 patients with chronic schizophrenia were acquired and analyzed. First, semantic encoding-specific activation was estimated, showing decreased activation within the left IFG in schizophrenia. Second, functional time series were extracted from this area, and left IFG-specific functional connectivity maps were produced for each subject. In an independent analysis, tract‐based spatial statistics (TBSS) was used to compare fractional anisotropy (FA) values between groups, and to correlate these values with functional connectivity maps. Schizophrenia patients showed weaker functional connectivity within the language network that includes left IFG and left superior temporal sulcus/middle temporal gyrus. FA was reduced in several white matter regions including left inferior frontal and left internal capsule. Finally, left inferior frontal white matter FA was positively correlated with connectivity measures of the semantic network in patients, but not in controls. Our results indicate an association between anatomical and functional connectivity abnormalities within the semantic network in schizophrenia, suggesting further that the functional abnormalities observed in this disorder might be directly related to white matter disruptions. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.
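Fractional anisotropy, the white-matter measure compared between groups above, is computed voxel-wise from the three eigenvalues of the diffusion tensor: FA = sqrt(3/2) · ‖λ − mean(λ)‖ / ‖λ‖. It is 0 for perfectly isotropic diffusion and approaches 1 when diffusion is confined to a single axis, as along a coherent fiber tract. A direct implementation of the standard formula (illustrative helper, not from the study's pipeline):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where MD is the
    mean diffusivity (mean of the eigenvalues)."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()
    return np.sqrt(1.5) * np.linalg.norm(l - md) / np.linalg.norm(l)
```

TBSS then projects each subject's FA map onto a common white-matter skeleton before the voxel-wise group comparison.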


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号