Similar Literature
20 similar records found.
1.
Neuroimaging studies of auditory and visual phonological processing have revealed activation of the left inferior and middle frontal gyri. However, because of task differences in these studies (e.g., consonant discrimination versus rhyming), the extent to which this frontal activity is due to modality-specific linguistic processes or to more general task demands involved in the comparison and storage of stimuli remains unclear. An fMRI experiment investigated the functional neuroanatomical basis of phonological processing in discrimination and rhyming tasks across auditory and visual modalities. Participants made either "same/different" judgments on the final consonant or rhyme judgments on auditorily or visually presented pairs of words and pseudowords. Control tasks included "same/different" judgments on pairs of single tones or false fonts and on the final member in pairs of sequences of tones or false fonts. Although some regions produced expected modality-specific activation (i.e., left superior temporal gyrus in auditory tasks, and right lingual gyrus in visual tasks), several regions were active across modalities and tasks, including posterior inferior frontal gyrus (BA 44). Greater articulatory recoding demands for processing of pseudowords resulted in increased activation for pseudowords relative to other conditions in this frontal region. Task-specific frontal activation was observed for auditory pseudoword final consonant discrimination, likely due to increased working memory demands of selection (ventrolateral prefrontal cortex) and monitoring (mid-dorsolateral prefrontal cortex). Thus, the current study provides a systematic comparison of phonological tasks across modalities, with patterns of activation corresponding to the cognitive demands of performing phonological judgments on spoken and written stimuli.

2.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory>Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word+pseudoword>rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word>pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word>pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

3.
Studies on memory retrieval suggest a reactivation of cortical regions engaged during encoding, such that visual or auditory areas reactivate for visual or auditory memories. The content specificity and any emotion dependency of such reactivations are still unclear. Because distinct visual areas are specialized in processing distinct stimulus categories, we tested for face- and word-specific reactivations during a memory task using functional magnetic resonance imaging (fMRI). Furthermore, because visual processing and memory are both modulated by emotion, we compared reactivation for stimuli encoded in a neutral or emotionally significant context. In the learning phase, participants studied pairs of stimuli that consisted of either a scene and a face, or a scene and a word. Scenes were either neutral or negative, but did not contain faces or words. In the test phase, scenes were presented alone (one at a time), and participants indicated whether each scene had previously been paired with a face, a word, or was new. Results from the test phase showed activation in a functionally defined face-responsive region in the right fusiform gyrus, as well as in a word-responsive region in the left inferior temporal gyrus, for scenes previously paired with faces and words, respectively. Reactivation tended to be larger in both the face- and word-responsive regions when the associated scene was negative as compared to neutral. However, relative to neutral context, the recall of faces and words paired with a negative context produced smaller activations in brain regions associated with social and semantic processing, respectively, as well as poorer memory performance overall. Taken together, these results support the idea of cortical memory reactivations, even at a content-specific level, and further suggest that emotional context may produce opposite effects on reactivations in early sensory areas and more elaborate processing in higher-level cortical areas.

4.
During object manipulation, the brain integrates the visual, auditory, and haptic experience of an object into a unified percept. Previous brain imaging studies have implicated, for instance, the dorsal part of the lateral occipital complex in visuo-tactile integration and the posterior superior temporal sulcus in audio-visual integration of object-related inputs (Amedi et al., 2005). Yet it is still unclear which brain regions represent object-specific information of all three sensory modalities. To address this question, we performed two complementary functional magnetic resonance imaging experiments. In the first experiment, we identified brain regions which were consistently activated by unimodal visual, auditory, and haptic processing of manipulable objects relative to non-object control stimuli presented in the same modality. In the second experiment, we assessed regional brain activations when participants had to match object-related information that was presented simultaneously in two or all three modalities. Only a well-defined region in left fusiform gyrus (FG) showed an object-specific activation during unisensory processing in the visual, auditory, and tactile modalities. The same region was also consistently activated during multisensory matching of object-related information across all three senses. Taken together, our results suggest that this region is central to the recognition of manipulable objects. A putative role of this FG region is to unify object-specific information provided by the visual, auditory, and tactile modalities into trisensory object representations.

5.
Auditory and somatosensory responses to paired stimuli were investigated for commonality of frontal activation that may be associated with gating, using magnetoencephalography (MEG). A paired stimulus paradigm for each sensory evoked study tested right and left hemispheres independently in ten normal controls. MR-FOCUSS, a current density technique, imaged simultaneously active cortical sources. Each subject showed source localization, in the primary auditory or somatosensory cortex, for the respective stimuli following both the first (S1) and second (S2) impulses. Gating ratios for the auditory M50 response, equivalent to the P50 in EEG, were 0.54 ± 0.24 and 0.63 ± 0.52 for the right and left hemispheres, respectively. Somatosensory gating ratios were evaluated at early and late latencies because the pulse duration elicits an extended response. Early gating ratios for the right and left hemispheres were 0.69 ± 0.21 and 0.69 ± 0.41, while late ratios were 0.81 ± 0.41 and 0.80 ± 0.48. Regions of activation in the frontal cortex, beyond the primary auditory or somatosensory cortex, were mapped within 25 ms of peak S1 latencies in 9/10 subjects during auditory stimulation and in 10/10 subjects during somatosensory stimulation. Similar frontal activations were mapped within 25 ms of peak S2 latencies for 75% of auditory responses and for 100% of somatosensory responses. Comparison between modalities showed similar frontal region activations for 17/20 S1 responses and for 13/20 S2 responses. MEG offers a technique for evaluating cross-modality gating. The results suggest that similar frontal sources are simultaneously active during auditory and somatosensory habituation.
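The gating ratios reported above follow the usual convention in the sensory-gating literature: the amplitude of the response to the second stimulus (S2) divided by the amplitude of the response to the first (S1), so that values below 1 indicate suppression of the repeated stimulus. A minimal sketch of that computation is given below; the function name and the peak-amplitude values are illustrative assumptions, not data from the study.

```python
import numpy as np

def gating_ratio(s1_amplitudes, s2_amplitudes):
    """Sensory gating ratio: mean S2 peak amplitude divided by mean S1 peak
    amplitude. Values below 1 indicate suppression of the second response."""
    s1 = np.asarray(s1_amplitudes, dtype=float)
    s2 = np.asarray(s2_amplitudes, dtype=float)
    return s2.mean() / s1.mean()

# Hypothetical M50 peak source amplitudes across trials (arbitrary units).
s1_peaks = [42.0, 38.5, 45.2, 40.1]
s2_peaks = [23.1, 20.8, 26.0, 22.3]
print(f"gating ratio = {gating_ratio(s1_peaks, s2_peaks):.2f}")  # ~0.56
```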

6.
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

7.
We have investigated the neural basis of perceptual certainty using a simple discrimination paradigm. Psychophysical experiments have shown that a pair of identical electrical stimuli to the skin or a pair of auditory clicks to the ears are consistently perceived as two separate events in time when the inter-stimulus interval (ISI) is long, and as simultaneous events when the ISI is very short. The perceptual certainty of having received one or two stimuli decreases when the ISI lies between these two extremes, and this is reflected in inconsistent reporting of the percept across trials. In two fMRI experiments, 14 healthy subjects received either paired electrical pulses delivered to the forearm (ISIs = 5-110 ms) or paired auditory clicks presented binaurally (ISIs = 1-20 ms). For each subject and modality, we calculated a consistency index (CI) representing the level of perceptual certainty. The task activated the pre-SMA and anterior cingulate cortex, plus the cerebellum and the basal ganglia. Critically, activity in the right putamen was linearly dependent on the CI for both tactile and auditory discrimination, with topographically distinct effects in the two modalities. These results support a role for the human putamen in the "automatic" perception of temporal features of tactile and auditory stimuli.
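The abstract does not give the formula behind the consistency index (CI). Purely as an illustration of the idea, the sketch below scores how consistently one percept ("one stimulus" vs. "two stimuli") is reported across repeated trials at a fixed ISI, rescaled so that a 50/50 split maps to 0 and unanimous reports map to 1; the function name and the rescaling are assumptions, not the authors' definition.

```python
import numpy as np

def consistency_index(responses):
    """Illustrative consistency index for one ISI block (assumed proxy, not the
    authors' formula). `responses` holds 1 ("felt one stimulus") or 2 ("felt
    two stimuli") reports; returns 0 for a 50/50 split, 1 for unanimous reports."""
    r = np.asarray(responses)
    p_two = np.mean(r == 2)        # proportion of "two stimuli" reports
    return abs(p_two - 0.5) * 2    # rescale [0.5, 1] -> [0, 1]

print(consistency_index([2, 2, 2, 2, 2, 2]))   # 1.0 -> fully certain percept
print(consistency_index([1, 2, 1, 2, 2, 1]))   # 0.0 -> maximally inconsistent
```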

8.
One of the most consistent electrophysiological deficits reported in the schizophrenia literature is the failure to inhibit, or properly gate, the neuronal response to the second stimulus of an identical pair (i.e., sensory gating). Although animal and invasive human studies have consistently implicated the auditory cortex, prefrontal cortex and hippocampus in mediating the sensory gating response, localized activation in these structures has not always been reported with non-invasive imaging modalities. In the current experiment, event-related fMRI and a variant of the traditional gating paradigm were utilized to examine how the gating network differentially responded to the processing of pairs of identical and non-identical tones. Two single-tone conditions were also presented so that they could be used to estimate the HRF for paired stimuli, reconstructed from the actual hemodynamic responses, which served as a non-gating control condition. Results supported an emerging theory that the gating response for both paired-tone conditions was primarily mediated by auditory and prefrontal cortex, with potential contributions from the thalamus. Results also indicated that the left auditory cortex may play a preferential role in determining which stimuli should be inhibited (gated) or receive further processing due to the novelty of their information. In contrast, there was no evidence of hippocampal involvement, suggesting that future work is needed to determine what role it may play in the gating response.

9.
An important step in perceptual processing is the integration of information from different sensory modalities into a coherent percept. It has been suggested that such crossmodal binding might be achieved by transient synchronization of neurons from different modalities in the gamma-frequency range (> 30 Hz). Here we employed a crossmodal priming paradigm, modulating the semantic congruency between visual–auditory natural object stimulus pairs, during the recording of the high density electroencephalogram (EEG). Subjects performed a semantic categorization task. Analysis of the behavioral data showed a crossmodal priming effect (facilitated auditory object recognition) in response to semantically congruent stimuli. Differences in event-related potentials (ERP) were found between 250 and 350 ms, which were localized to left middle temporal gyrus (BA 21) using a distributed linear source model. Early gamma-band activity (40–50 Hz) was increased between 120 ms and 180 ms following auditory stimulus onset for semantically congruent stimulus pairs. Source reconstruction for this gamma-band response revealed a maximal increase in left middle temporal gyrus (BA 21), an area known to be related to the processing of both complex auditory stimuli and multisensory processing. The data support the hypothesis that oscillatory activity in the gamma-band reflects crossmodal semantic-matching processes in multisensory convergence sites.

10.
Schmid C, Büchel C, Rose M. NeuroImage, 2011, 55(1): 304-311.
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, better memory performance is assumed for visual compared to, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain.

11.
Associative emotional learning, which is important for the social emotional functioning of individuals and is often impaired in psychiatric illnesses, is in part mediated by dopamine and glutamate pathways in the brain. The protein DARPP-32 is involved in the regulation of dopaminergic and glutamatergic signaling. Consequently, it has been suggested that haplotypic variants of the gene PPP1R1B, which encodes DARPP-32, are associated with working memory and emotion processing. We hypothesized that PPP1R1B should have a significant influence on the network of brain regions involved in associative emotional learning that are rich in DARPP-32, namely the striatum, prefrontal cortex (comprising the medial frontal gyrus and inferior frontal gyrus (IFG)), amygdala and parahippocampal gyrus (PHG). Dynamic causal models were applied to functional MRI data to investigate how brain connectivity during an associative emotional learning task is affected by different single-nucleotide polymorphisms (SNPs) of PPP1R1B: rs879606, rs907094 and rs3764352. Compared to heterozygotes, homozygotes with GTA alleles displayed increased intrinsic connectivity between the IFG and PHG, as well as increased excitability of the PHG for negative emotional stimuli. We have also elucidated the directionality of these genetic influences. Our data suggest that homozygotes with GTA alleles recruit stronger functional connections between brain areas in order to maintain activation of these regions. Homozygotes might engage a greater degree of motivational learning and integration of information to perform the emotional learning task correctly. We conclude that PPP1R1B is associated with the neural network involved in associative emotional learning.

12.
Adams RB, Janata P. NeuroImage, 2002, 16(2): 361-377.
Knowledge about environmental objects derives from representations of multiple object features both within and across sensory modalities. While our understanding of the neural basis for visual object representation in the human and nonhuman primate brain is well advanced, a similar understanding of auditory objects is in its infancy. We used a name verification task and functional magnetic resonance imaging (fMRI) to characterize the neural circuits that are activated as human subjects match visually presented words with either simultaneously presented pictures or environmental sounds. The difficulty of the matching judgment was manipulated by varying the level of semantic detail at which the words and objects were compared. We found that blood oxygen level dependent (BOLD) signal was modulated in ventral and dorsal regions of the inferior frontal gyrus of both hemispheres during auditory and visual object categorization, potentially implicating these areas as sites for integrating polymodal object representations with concepts in semantic memory. As expected, BOLD signal increases in the fusiform gyrus varied with the semantic level of object categorization, though this effect was weak and restricted to the left hemisphere in the case of auditory objects.

13.
14.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage, 2006, 32(1): 423-431.
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, where no scanner noise was present, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17), who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal condition (AV condition). In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions in the left hemisphere during the V condition, where lip-reading performance was less successful. Results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggested that the visual speech cue could exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.

15.
The majority of working memory research has been carried out within the visual and auditory modalities, leaving it unclear how other modalities would map onto currently proposed working memory models. In this study we examined the previously uninvestigated area of olfactory working memory. Our aim was to investigate whether olfactory working memory would engage prefrontal regions known to be involved in working memory for other sensory modalities. Using positron emission tomography, we measured cerebral blood flow changes in 12 volunteers during an olfactory working memory task and a comparison visual working memory task. Our findings indicate that both olfactory and face working memory engaged dorsolateral and ventrolateral frontal cortex when the task requirements were matched; a conjunction analysis indicated overlap in the distribution of activity in the two tasks. Similarities and differences in activity were noted in parietal lobe regions, with both tasks engaging inferior parts of areas 40/7, but only visual working memory showing increased activity within left superior parietal cortex. The findings support the idea that working memory processes engage frontal cortical areas independent of the modality of input, but do not rule out the possibility of modality-specific neural populations within dorsolateral or ventrolateral cortex.

16.
Kansaku K, Hanakawa T, Wu T, Hallett M. NeuroImage, 2004, 22(2): 904-911.
Simple reaction time, a simple model of sensory-to-motor behavior, has been extensively investigated, and its role in inferring elementary mental organization has been postulated. However, little is known about the neuronal mechanisms underlying it. To elucidate the neuronal substrates, functional magnetic resonance imaging (fMRI) signals were collected during a simple reaction task paradigm using simple cues of different modalities and simple triggered movements executed by different effectors. We hypothesized that a specific neural network that characterizes simple reaction time would be activated irrespective of the input modalities and output effectors. Such a neural network was found in the right posterior superior temporal cortex, right premotor cortex, left ventral premotor cortex, cerebellar vermis, and medial frontal gyrus. The right posterior superior temporal cortex and right premotor cortex were also activated by sensory cues of different modalities in the absence of movements. The shared neural network may play a role in sensory-triggered movements.

17.
The current study examined developmental changes in activation and effective connectivity among brain regions during a phonological processing task, using fMRI. Participants, ages 9-15, were scanned while performing rhyming judgments on pairs of visually presented words. The orthographic and phonological similarity between words in the pair was independently manipulated, so that rhyming judgment could not be based on orthographic similarity. Our results show a developmental increase in activation in the dorsal part of left inferior frontal gyrus (IFG), accompanied by a decrease in the dorsal part of left superior temporal gyrus (STG). The coupling of dorsal IFG with other selected brain regions involved in the phonological decision increased with age, while the coupling of STG decreased with age. These results suggest that during development there is a shift from reliance on sensory auditory representations to reliance on phonological segmentation and covert articulation for performing rhyming judgment on visually presented words. In addition, we found a developmental increase in activation in left posterior parietal cortex that was not accompanied by a change in its connectivity with the other regions. These results suggest that maturational changes within a cortical region are not necessarily accompanied by an increase in its interactions with other regions and its contribution to the task. Our results are consistent with the idea that there is reduced reliance on primary sensory processes as task-relevant processes mature and become more efficient during development.

18.
To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e. the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual or audiovisual, and (2) steady-state modulation: the auditory and visual signals were modulated either in one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz and amplitude at 0.2 Hz). This design enabled us to investigate crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) in one sensory modality or (ii) in auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined from the auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained steady-state responses. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within, but not across, sensory modalities at the primary cortical level.
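Crossmodulation terms are just the sums and differences of the frequencies at which the two stimulus features are modulated; spectral power at those frequencies can only arise if the two driving signals interact non-linearly somewhere along the pathway. The sketch below illustrates this with the modulation rates quoted above (visual luminance at 6 Hz, auditory frequency modulation at 40 Hz); the toy signal, sampling rate and FFT settings are assumptions for illustration only.

```python
import numpy as np

# Modulation rates from the design: visual luminance at 6 Hz, auditory
# frequency modulation at 40 Hz (the 0.2 Hz amplitude modulation is omitted).
f_vis, f_aud = 6.0, 40.0

# Crossmodulation terms: a non-linear interaction of the two drives produces
# spectral power at the sum and the difference of the fundamentals.
print("crossmodulation frequencies (Hz):", [f_aud - f_vis, f_aud + f_vis])  # [34.0, 46.0]

# Toy check: a purely multiplicative interaction of the two drives concentrates
# its energy exactly at those two frequencies.
fs, dur = 1000.0, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
interaction = np.sin(2 * np.pi * f_vis * t) * np.sin(2 * np.pi * f_aud * t)
spectrum = np.abs(np.fft.rfft(interaction))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print("spectral peaks (Hz):", peaks)         # [34.0, 46.0]
```

A purely additive combination of the two drives would, by contrast, leave power only at the fundamentals (6 and 40 Hz) and their harmonics, with no energy at the crossmodulation frequencies.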

19.
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in similar brain areas as those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (B/W alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes is dependent on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).
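The superadditivity logic described here compares the bimodal response against the sum of the two unimodal responses: enhancement when AV exceeds A + V, depression when it falls below it (studies differ in the exact statistical criterion layered on top). A minimal voxel-wise sketch of that contrast is given below; the beta values and array names are hypothetical.

```python
import numpy as np

def superadditivity(beta_av, beta_a, beta_v):
    """Voxel-wise superadditivity contrast AV - (A + V): positive values mark
    crossmodal response enhancement beyond the summed unimodal responses,
    negative values mark response depression relative to that sum."""
    return np.asarray(beta_av) - (np.asarray(beta_a) + np.asarray(beta_v))

# Hypothetical GLM beta estimates for three voxels (arbitrary units).
beta_a  = np.array([1.0, 0.8, 1.2])   # auditory-only (white noise bursts)
beta_v  = np.array([0.9, 1.1, 0.7])   # visual-only (checkerboard)
beta_av = np.array([2.5, 1.5, 1.0])   # bimodal condition

print(superadditivity(beta_av, beta_a, beta_v))   # [ 0.6 -0.4 -0.9]
```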

20.
The processing of syntactic and semantic information in written sentences by native (L1) and non-native (L2) speakers was investigated in an fMRI experiment. This was done by means of a violation paradigm, in which participants read sentences containing either a syntactic, a semantic, or no violation. The results of this study were compared to those of a previous fMRI study, in which auditory sentence processing in L1 and L2 was investigated. The results indicate greater activation in several language- and motor-related brain regions for L2 speakers as compared to L1 speakers when reading sentences. The processing of syntactically incorrect sentences elicited no reliably greater activation in language areas in L2 speakers. In L1 speakers, on the other hand, syntactic processing, as compared to semantic processing, was associated with increased activation in left mid to posterior superior temporal gyrus. In response to the processing of semantically incorrect sentences, both L2 and L1 speakers demonstrated increased involvement of left inferior frontal gyrus. The results of this study were compared to a previously conducted fMRI study, which made use of identical sentence stimuli in the auditory modality. Results from the two studies are in general agreement with one another, although some differences in the response of brain areas very proximal to primary perceptual processing areas (i.e. primary auditory and visual cortex) were observed in conjunction with presentation in the different modalities. The combined results provide evidence that L1 and L2 speakers rely on the same cortical network to process language, although with a higher level of activation in some regions for L2 processing.
