Similar Articles
A total of 20 similar articles were retrieved.
1.
Detecting changes in a stream of sensory information is vital to animals and humans. While there have been several studies of automatic change detection in various sensory modalities, olfactory change detection is largely unstudied. We investigated brain regions responsive to both passive and active detection of olfactory change using fMRI. Nine healthy, right-handed, normosmic subjects (five men) were scanned in two conditions while breathing in synchrony with a metronome. In one condition, subjects mentally counted infrequent odors (Attend condition), whereas in the other condition, subjects' attention was directed elsewhere as they counted auditory tones (Ignore condition). Odors were delivered via a nasal cannula using a computer-controlled air-dilution olfactometer. Infrequently occurring olfactory stimuli evoked significant (P < .05, corrected) activity in the subgenual cingulate and in central posterior orbitofrontal cortex, but only in the Ignore condition, as confirmed by direct comparison of the Ignore session with the Attend session (P < .05, corrected). Subgenual cingulate and posterior orbital cortex may therefore play a role in detecting discrepant olfactory events while attention is otherwise engaged in another sensory modality.

2.
The analysis of auditory deviant events outside the focus of attention is a fundamental capacity of human information processing and has been studied in experiments on Mismatch Negativity (MMN) and the P3a component in evoked potential research. However, the generators contributing to these components are still under discussion. Here we assessed cortical blood flow to auditory stimulation in three conditions. Six healthy subjects were presented with standard tones, frequency deviant tones (MMN condition), and complex novel sounds (Novelty condition), while attention was directed to a nondemanding visual task. Analysis of the MMN condition contrasted with the standard condition revealed blood flow changes in the left and right superior temporal gyrus, right superior temporal sulcus, and left inferior frontal gyrus. Complex novel sounds contrasted with the standard condition activated the left superior temporal gyrus and the left inferior and middle frontal gyrus. A small subcortical activation emerged in the left parahippocampal gyrus, and an extended activation was found covering the right superior temporal gyrus. Novel sounds activated the right inferior frontal gyrus when controlling for deviance probability. In contrast to previous studies, our results indicate a left hemisphere contribution to a frontotemporal network of auditory deviance processing. Our results provide further evidence for a contribution of the frontal cortex to the processing of auditory deviance outside the focus of directed attention.

3.
Gaab N, Gaser C, Zaehle T, Jancke L, Schlaug G. NeuroImage 2003;19(4):1417-1426
Auditory functional magnetic resonance imaging tasks are challenging since the MR scanner noise can interfere with the auditory stimulation. To avoid this interference, a sparse temporal sampling method with a long repetition time (TR = 17 s) was used to explore the functional anatomy of pitch memory. Eighteen right-handed subjects listened to a sequence of sine-wave tones (4.6 s total duration) and were asked to decide (depending on a visual prompt) whether the last or second-to-last tone was the same as or different from the first tone. An alternating button press condition served as a control. Sets of 24 axial slices were acquired with a variable delay time (between 0 and 6 s) between the end of the auditory stimulation and the MR acquisition. Individual imaging time points were combined into three clusters (0-2, 3-4, and 5-6 s after the end of the auditory stimulation) for the analysis. The analysis showed a dynamic activation pattern over time which involved the superior temporal gyrus, supramarginal gyrus (SMG), posterior dorsolateral frontal regions, superior parietal regions, and dorsolateral cerebellar regions bilaterally, as well as the left inferior frontal gyrus. By regressing the performance score in the pitch memory task with task-related MR signal changes, the SMG (left > right) and the dorsolateral cerebellum (lobules V and VI, left > right) were found to be significantly correlated with good task performance. The SMG and the dorsolateral cerebellum may play a critical role in short-term storage of pitch information and the continuous pitch discrimination necessary for performing this pitch memory task.
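
A minimal sketch of the performance-regression step described above: per-subject task accuracy is regressed against task-related signal change extracted from a region such as the SMG. All numbers below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values: pitch-memory accuracy and percent
# signal change extracted from a supramarginal gyrus ROI.
accuracy = np.array([0.72, 0.85, 0.64, 0.91, 0.78, 0.88])
signal = np.array([0.31, 0.52, 0.22, 0.61, 0.40, 0.55])

# Linear regression of signal change on performance; a significant positive
# slope would mirror the performance correlation reported above.
slope, intercept, r, p, stderr = stats.linregress(accuracy, signal)
print(f"slope = {slope:.2f}, r = {r:.2f}, p = {p:.4f}")
```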

4.
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, to listen passively to speaking of the song lyrics, to covertly sing the visually presented song lyrics, to covertly speak the visually presented song lyrics, and to rest. The conjunction of the passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively attributable to stimulus-induced auditory processing or to low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale (PT)/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: orbitofrontal cortex (listening task) and subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
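
A minimal sketch of the flipped-contrast laterality test described above, assuming each subject's contrast map is an array in a left-right symmetric template space whose first axis runs left to right. The dimensions, ROI, and data are hypothetical stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, shape = 12, (64, 64, 40)           # hypothetical dimensions

# Stand-ins for per-subject contrast maps in a symmetric template.
maps = rng.normal(size=(n_subjects, *shape))
flipped = maps[:, ::-1, :, :]                  # mirror each map across the midline

# Hypothetical region of interest in one hemisphere.
roi = np.zeros(shape, dtype=bool)
roi[10:25, 20:35, 15:25] = True

orig_vals = maps[:, roi].mean(axis=1)          # mean contrast value per subject
flip_vals = flipped[:, roi].mean(axis=1)       # same voxels in the mirrored maps

# Paired t test: a significant difference indexes hemispheric lateralization.
t, p = stats.ttest_rel(orig_vals, flip_vals)
print(f"t = {t:.2f}, p = {p:.3f}")
```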

5.
Joanisse MF, Gati JS. NeuroImage 2003;19(1):64-79
Speech perception involves recovering the phonetic form of speech from a dynamic auditory signal containing both time-varying and steady-state cues. We examined the roles of inferior frontal and superior temporal cortex in processing these aspects of auditory speech and nonspeech signals. Event-related functional magnetic resonance imaging was used to record activation in superior temporal gyrus (STG) and inferior frontal gyrus (IFG) while participants discriminated pairs of either speech syllables or nonspeech tones. Speech stimuli differed in either the consonant or the vowel portion of the syllable, whereas the nonspeech signals consisted of sinewave tones differing along either a dynamic or a spectral dimension. Analyses failed to identify regions of activation that clearly contrasted the speech and nonspeech conditions. However, we did identify regions in the posterior portion of left and right STG and left IFG yielding greater activation for both speech and nonspeech conditions that involved rapid temporal discrimination, compared to speech and nonspeech conditions involving spectral discrimination. The results suggest that, when semantic and lexical factors are adequately ruled out, there is significant overlap in the brain regions involved in processing the rapid temporal characteristics of both speech and nonspeech signals.

6.
The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Here, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.
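
Sine-wave speech of the kind described above can be sketched by replacing each formant with a pure tone that tracks the formant's center frequency. The formant trajectories below are hypothetical illustrations, not the study's stimuli.

```python
import numpy as np

fs = 16000                                   # sample rate (Hz)
t = np.arange(int(fs * 0.5)) / fs            # 0.5 s of samples

# Hypothetical time-varying formant tracks (Hz) for three formants.
f1 = np.linspace(700, 300, t.size)
f2 = np.linspace(1200, 2200, t.size)
f3 = np.full(t.size, 2600.0)

def tone(freq_track):
    # Integrate instantaneous frequency to obtain phase, then synthesize.
    phase = 2.0 * np.pi * np.cumsum(freq_track) / fs
    return np.sin(phase)

# Sum of three tones: spectrotemporally speech-like while stripped of
# ordinary vocal-tract resonance -- the SWsp/SWnon manipulation above.
swsp = (tone(f1) + tone(f2) + tone(f3)) / 3.0
```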

7.
We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.

8.
Cortical regions engaged by sentence processing were mapped using functional MRI. The influence of input modality (spoken word vs. print input) and parsing difficulty (sentences containing subject-relative vs. object-relative clauses) was assessed. Auditory presentation was associated with pronounced activity at primary auditory cortex and across the superior temporal gyrus bilaterally. Printed sentences, by contrast, evoked major activity at several posterior sites in the left hemisphere, including the angular gyrus, supramarginal gyrus, and the fusiform gyrus in the occipitotemporal region. In addition, modality-independent regions were isolated, with greatest overlap seen in the inferior frontal gyrus (IFG). With respect to sentence complexity, object-relative sentences evoked heightened responses in comparison to subject-relative sentences at several left hemisphere sites, including IFG, the middle/superior temporal gyrus, and the angular gyrus. These sites showing modulation of activity as a function of sentence type, independent of input mode, arguably form the core of a cortical system essential to sentence parsing.

9.
The ability to create new meanings from combinations of words is one important function of the language system. We investigated the neural correlates of combinatorial semantic processing using fMRI. During scanning, participants performed a rating task on auditory word or pseudoword strings that differed in the presence of combinatorial and word-level semantic information. Stimuli included normal sentences composed of thematically related words that could be readily combined to produce a more complex meaning, semantically incongruent sentences in which content words were randomly replaced with other content words, pseudoword sentences, and versions of these three sentence types in which syntactic structure was removed by randomly re-ordering the words. Several regions showed greater BOLD signal for stimuli with words than for those with pseudowords, including the left angular gyrus, left superior temporal sulcus, and left inferior frontal gyrus, suggesting that these areas are involved in semantic access at the single word level. In the angular and inferior frontal gyri these differences emerged early in the course of the hemodynamic response. An effect of combinatorial semantic structure was observed in the left angular gyrus and left lateral temporal lobe, which showed greater activation for normal compared to semantically incongruent sentences. These effects appeared later in the time course of the hemodynamic response, beginning after the entire stimulus had been presented. The data indicate a complex spatiotemporal pattern of activity associated with computation of word and sentence-level semantic information, and suggest a particular role for the left angular gyrus in processing overall sentence meaning.
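
One common way to examine when in the hemodynamic response an effect emerges, as the analysis above does, is a finite-impulse-response (FIR) model that estimates one amplitude per post-onset scan. The onsets, TR, and time series below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, tr, n_lags = 200, 2.0, 8             # hypothetical scan count, TR, lags
onsets = np.arange(10.0, 390.0, 40.0)         # hypothetical stimulus onsets (s)

# FIR design matrix: one indicator regressor per post-onset scan.
X = np.zeros((n_scans, n_lags))
for onset in onsets:
    start = int(round(onset / tr))
    for lag in range(n_lags):
        if start + lag < n_scans:
            X[start + lag, lag] = 1.0

# Least-squares fit to a voxel time series yields one beta per lag, so early
# and late portions of the response can be compared directly.
y = rng.normal(size=n_scans)                  # stand-in voxel time series
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
early, late = betas[:3].mean(), betas[3:].mean()
```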

10.
Neuroimaging studies of auditory and visual phonological processing have revealed activation of the left inferior and middle frontal gyri. However, because of task differences in these studies (e.g., consonant discrimination versus rhyming), the extent to which this frontal activity is due to modality-specific linguistic processes or to more general task demands involved in the comparison and storage of stimuli remains unclear. An fMRI experiment investigated the functional neuroanatomical basis of phonological processing in discrimination and rhyming tasks across auditory and visual modalities. Participants made either "same/different" judgments on the final consonant or rhyme judgments on auditorily or visually presented pairs of words and pseudowords. Control tasks included "same/different" judgments on pairs of single tones or false fonts and on the final member in pairs of sequences of tones or false fonts. Although some regions produced expected modality-specific activation (i.e., left superior temporal gyrus in auditory tasks, and right lingual gyrus in visual tasks), several regions were active across modalities and tasks, including posterior inferior frontal gyrus (BA 44). Greater articulatory recoding demands for processing of pseudowords resulted in increased activation for pseudowords relative to other conditions in this frontal region. Task-specific frontal activation was observed for auditory pseudoword final consonant discrimination, likely due to increased working memory demands of selection (ventrolateral prefrontal cortex) and monitoring (mid-dorsolateral prefrontal cortex). Thus, the current study provides a systematic comparison of phonological tasks across modalities, with patterns of activation corresponding to the cognitive demands of performing phonological judgments on spoken and written stimuli.

11.
The human auditory cortex plays a special role in speech recognition. It is therefore necessary to clarify the functional roles of individual auditory areas. We applied functional magnetic resonance imaging (fMRI) to examine cortical responses to speech sounds, which were presented under the dichotic and diotic (binaural) listening conditions. We found two different response patterns in multiple auditory areas and language-related areas. In the auditory cortex, the medial portion of the secondary auditory area (A2), as well as a part of the planum temporale (PT) and the superior temporal gyrus and sulcus (ST), showed greater responses under the dichotic condition than under the diotic condition. This dichotic selectivity may reflect acoustic differences and attention-related factors such as spatial attention and selective attention to targets. In contrast, other parts of the auditory cortex showed comparable responses to the dichotic and diotic conditions. We found similar functional differentiation in the inferior frontal (IF) cortex. These results suggest that multiple auditory and language areas may play a pivotal role in integrating the functional differentiation for speech recognition.

12.
Callan AM, Callan DE, Masaki S. NeuroImage 2005;28(3):553-562
Left fusiform gyrus and left angular gyrus are considered to be respectively involved with visual form processing and associating visual and auditory (phonological) information in reading. However, there are a number of studies that fail to show the contribution of these regions in carrying out these aspects of reading. Considerable differences in the type of stimuli and tasks used in the various studies may account for the discrepancy in results. This functional magnetic resonance imaging (fMRI) study attempts to control aspects of experimental stimuli and tasks to specifically investigate brain regions involved with visual form processing and character-to-phonological (i.e., simple grapheme-to-phonological) conversion processing for single letters. Subjects performed a two-back identification task using known Japanese phonograms and previously unknown Korean and Thai phonograms before and after training on one of the unknown language orthographies. Japanese subjects learned either five Korean or five Thai phonograms. Brain regions related to visual form processing were assessed by comparing activity related to native (Japanese) phonograms with that of non-native (Korean and Thai) phonograms. There was no significant differential brain activity for visual form processing. Brain regions related to character-to-phonological conversion processing were assessed by comparing pre- and post-tests of trained non-native phonograms with that of native phonograms and non-trained non-native phonograms. Significant differential activation post- relative to pre-training exclusively for the trained non-native phonograms was found in left angular gyrus. In addition, psychophysiologic interaction (PPI) analysis revealed greater integration of left angular gyrus with primary visual cortex as well as with superior temporal gyrus for the trained phonograms post- relative to pre-training. The results suggest that left angular gyrus is involved with character-to-phonological conversion in letter perception.
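
A psychophysiological interaction (PPI) analysis like the one described above tests whether a seed region's coupling with other regions changes with the task condition. A minimal sketch, with hypothetical stand-ins for the seed time course and the task regressor:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 180
seed = rng.normal(size=n_scans)                   # extracted seed time course
task = np.tile(np.repeat([1.0, -1.0], 15), 6)     # e.g., trained vs. baseline blocks

# The PPI regressor is the product of the (centered) seed signal and the task.
ppi = (seed - seed.mean()) * task

# GLM including seed, task, their interaction, and an intercept; a significant
# PPI beta in a target voxel indicates condition-dependent coupling.
X = np.column_stack([seed, task, ppi, np.ones(n_scans)])
y = rng.normal(size=n_scans)                      # stand-in target time series
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```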

13.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

14.
The neural correlates of speech monitoring overlap with neural correlates of speech comprehension and production. However, it is unclear how these correlates are organized within functional connectivity networks, and how these networks interact to subserve speech monitoring. We applied spatial and temporal independent component analysis (sICA and tICA) to a functional magnetic resonance imaging (fMRI) experiment involving overt speech production, comprehension, and monitoring. sICA and tICA decompose fMRI data into spatial and temporal components, respectively, that can be interpreted as distributed estimates of functional connectivity and concurrent temporal dynamics in one or more regions of fMRI activity. Using sICA, we found multiple connectivity components that were associated with speech perception (auditory and left fronto-temporal components) and production (bilateral central sulcus and default-mode components), but not with speech monitoring. In order to further investigate if speech monitoring could be mapped in the auditory cortex as a unique temporal process, we applied tICA to voxels of the sICA auditory component. Amongst the temporal components we found a single, unique component that matched the speech monitoring temporal pattern. We used this temporal component as a new predictor for whole-brain activity and found that it correlated positively with bilateral auditory cortex, and negatively with the supplementary motor area (SMA). Psychophysiological interaction analysis of task and activity in bilateral auditory cortex and SMA showed that functional connectivity changed with task conditions. These results suggest that speech monitoring entails a dynamic coupling between different functional networks. Furthermore, we demonstrate that overt speech comprises multiple networks that are associated with specific speech-related processes. We conclude that the sequential combination of sICA and tICA is a powerful approach for the analysis of complex, overt speech tasks.
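
The sICA/tICA distinction above comes down to which dimension of the time-by-voxel data matrix is treated as containing the independent sources. A minimal sketch with scikit-learn's FastICA on hypothetical data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_time, n_voxels, n_comp = 240, 2000, 10      # hypothetical dimensions
data = rng.normal(size=(n_time, n_voxels))    # stand-in for preprocessed fMRI

# Spatial ICA: voxels are the samples, so the recovered sources are
# spatially independent maps (one per component).
spatial_maps = FastICA(n_components=n_comp, random_state=0).fit_transform(data.T).T

# Temporal ICA: time points are the samples, so the recovered sources are
# temporally independent time courses -- applicable, as above, to the voxels
# of a single sICA component.
time_courses = FastICA(n_components=n_comp, random_state=0).fit_transform(data)
```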

15.
The separation of concurrent sounds is paramount to human communication in everyday settings. The primary auditory cortex and the planum temporale are thought to be essential for both the separation of physical sound sources into perceptual objects and the comparison of those representations with previously learned acoustic events. To examine the role of these areas in speech separation, we measured brain activity using event-related functional magnetic resonance imaging (fMRI) while participants were asked to identify two phonetically different vowels presented simultaneously. The processing of brief speech sounds (200 ms in duration) activated the thalamus and superior temporal gyrus bilaterally, left anterior temporal lobe, and left inferior temporal gyrus. A comparison of fMRI signals between trials in which participants successfully identified both vowels as opposed to when only one of the two vowels was recognized revealed enhanced activity in left thalamus, Heschl's gyrus, superior temporal gyrus, and the planum temporale. Because participants successfully identified at least one of the two vowels on each trial, the difference in fMRI signal indexes the extra computational work needed to segregate and identify successfully the other concurrently presented vowel. The results support the view that auditory cortex in or near Heschl's gyrus as well as in the planum temporale are involved in sound segregation and reveal a link between left thalamo-cortical activation and the successful separation and identification of simultaneous speech sounds.

16.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006;32(1):423-431
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, in which no scanner noise is present, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) carrying out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal condition (AV condition). In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A and more so during the AV conditions but not during the V condition. Activations were observed in left Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions during the V condition, where lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.
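
The overadditivity criterion (AV > A + V) above can be written as an ordinary GLM contrast. A minimal sketch with hypothetical condition betas for a single voxel:

```python
import numpy as np

conditions = ["A", "V", "AV", "control"]
betas = np.array([1.2, 0.4, 2.1, 0.1])        # hypothetical betas for one voxel

# Contrast weights implementing AV - (A + V); a positive value indicates a
# superadditive bimodal response.
contrast = np.array([-1.0, -1.0, 1.0, 0.0])
superadditivity = contrast @ betas
print(f"AV - (A + V) = {superadditivity:.2f}")
```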

17.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that schizophrenic patients would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in schizophrenic patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population, 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.

18.
Frühholz S, Grandjean D. NeuroImage 2012;62(3):1658-1666
Vocal expressions commonly elicit activity in superior temporal and inferior frontal cortices, indicating a distributed network that decodes vocally expressed emotions. We examined the involvement of this fronto-temporal network in the decoding of angry voices during attention towards (explicit attention) or away from (implicit attention) emotional cues in voices, based on a reanalysis of previous data (Frühholz, S., Ceravolo, L., Grandjean, D., 2012. Cerebral Cortex 22, 1107-1117). The general network revealed high interconnectivity of bilateral inferior frontal gyrus (IFG) to different bilateral voice-sensitive regions in mid and posterior superior temporal gyri. Right superior temporal gyrus (STG) regions showed connectivity to the left primary auditory cortex and secondary auditory cortex (AC) as well as to high-level auditory regions. This general network revealed differences in connectivity depending on the attentional focus. Explicit attention to angry voices revealed a specific right-left STG network connecting higher-level AC. During attention to a nonemotional vocal feature, we also found a left-right STG network implicitly elicited by angry voices that also included low-level left AC. Furthermore, only during this implicit processing was there widespread interconnectivity between bilateral IFG and bilateral STG. This indicates that while implicit attention to angry voices recruits extended bilateral STG and IFG networks for the sensory and evaluative decoding of voices, explicit attention to angry voices solely involves a network of bilateral STG regions, probably for the integrative recognition of emotional cues from voices.

19.
In visual perception of emotional stimuli, low- and high-level appraisal processes have been found to engage different neural structures. Beyond emotional facial expression, emotional prosody is an important auditory cue for social interaction. Neuroimaging studies have proposed a network for emotional prosody processing that involves a right temporal input region and explicit evaluation in bilateral prefrontal areas. However, the comparison of different appraisal levels has so far relied upon using linguistic instructions during low-level processing, which might confound effects of processing level and linguistic task. In order to circumvent this problem, we examined processing of emotional prosody in meaningless speech during gender labelling (implicit, low-level appraisal) and emotion labelling (explicit, high-level appraisal). While bilateral amygdala, left superior temporal sulcus and right parietal areas showed stronger blood oxygen level-dependent (BOLD) responses during implicit processing, areas with stronger BOLD responses during explicit processing included the left inferior frontal gyrus, bilateral parietal, anterior cingulate and supplementary motor cortex. Emotional versus neutral prosody evoked BOLD responses in right superior temporal gyrus, bilateral anterior cingulate, left inferior frontal gyrus, insula and bilateral putamen. Basal ganglia and right anterior cingulate responses to emotional versus neutral prosody were particularly pronounced during explicit processing. These results are in line with an amygdala-prefrontal-cingulate network controlling different appraisal levels, and suggest a specific role of the left inferior frontal gyrus in explicit evaluation of emotional prosody. In addition to brain areas commonly related to prosody processing, our results suggest specific functions of anterior cingulate and basal ganglia in detecting emotional prosody, particularly when explicit identification is necessary.

20.
The purpose of this study was to reveal functional areas of the brain modulating processing of selective auditory or visual attention toward utterances. Regional cerebral blood flow was measured in six normal volunteers using positron emission tomography during two selective attention tasks and a control condition. The auditory task activated the auditory, inferior parietal, prefrontal, and anterior cingulate cortices. The visual task activated the visual association, inferior parietal, and prefrontal cortices. Both conditions activated the same area in the superior temporal sulcus. During the visual task, deactivation was observed in the auditory cortex. These results indicate that there exists a modality-dependent selective attention mechanism which activates or deactivates cortical areas in different ways.
