Similar Articles
20 similar articles were retrieved.
1.
2.
Knebel JF, Murray MM. NeuroImage, 2012, 59(3): 2808-2817.
Despite myriad studies, neurophysiologic mechanisms mediating illusory contour (IC) sensitivity remain controversial. Among the competing models one favors feed-forward effects within lower-tier cortices (V1/V2). Another situates IC sensitivity first within higher-tier cortices, principally lateral-occipital cortices (LOC), with later feedback effects in V1/V2. Still others postulate that LOC are sensitive to salient regions demarcated by the inducing stimuli, whereas V1/V2 effects specifically support IC sensitivity. We resolved these discordances by using misaligned line gratings, oriented either horizontally or vertically, to induce ICs. Line orientation provides an established assay of V1/V2 modulations independently of IC presence, and gratings lack salient regions. Electrical neuroimaging analyses of visual evoked potentials (VEPs) disambiguated the relative timing and localization of IC sensitivity with respect to that for grating orientation. Millisecond-by-millisecond analyses of VEPs and distributed source estimations revealed a main effect of grating orientation beginning at 65 ms post-stimulus onset within the calcarine sulcus that was followed by a main effect of IC presence beginning at 85 ms post-stimulus onset within the LOC. There was no evidence for differential processing of ICs as a function of the orientation of the grating. These results support models wherein IC sensitivity occurs first within the LOC.
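A minimal sketch of the millisecond-by-millisecond logic described above, assuming hypothetical epoched VEP averages and a 2 × 2 condition coding (grating orientation × IC presence); the published analyses used electrical neuroimaging and distributed source estimation, which this illustration does not reproduce.

```python
import numpy as np
from scipy import stats

# Hypothetical VEP averages: shape (n_subjects, n_conditions, n_channels, n_times).
# Assumed condition coding: 0 = horizontal/no-IC, 1 = horizontal/IC,
#                           2 = vertical/no-IC,   3 = vertical/IC
def pointwise_main_effects(vep, sfreq=1000.0, min_dur_ms=15, alpha=0.05):
    """Paired t-tests at every time point for the two main effects, keeping only
    effects that persist for at least `min_dur_ms` ms (a simple duration criterion
    against isolated false positives)."""
    gfp = vep.std(axis=2)                                        # global field power
    contrasts = {
        "orientation": (gfp[:, [0, 1]].mean(1), gfp[:, [2, 3]].mean(1)),
        "illusory_contour": (gfp[:, [1, 3]].mean(1), gfp[:, [0, 2]].mean(1)),
    }
    min_len = int(min_dur_ms * sfreq / 1000.0)
    significant = {}
    for name, (a, b) in contrasts.items():
        _, p = stats.ttest_rel(a, b, axis=0)                     # test across subjects
        sig = p < alpha
        keep = np.zeros_like(sig)
        run = 0
        for i, s in enumerate(sig):                              # enforce minimum duration
            run = run + 1 if s else 0
            if run >= min_len:
                keep[i - run + 1 : i + 1] = True
        significant[name] = np.flatnonzero(keep)                 # significant time samples
    return significant
```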

3.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage, 2006, 32(1): 423-431.
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, where no scanner noise was present, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17), who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered together with a control stimulus of the other modality, whereas speech cues of both modalities were delivered in the bimodal (AV) condition. In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region involved in cross-modal interaction/integration of audiovisual speech, the left posterior superior temporal sulcus (pSTS), was activated during the A condition and more so during the AV condition, but not during the V condition. During the V condition, in which lip-reading performance was less successful, activations were observed in left Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions. Results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on brain activity in the right hemisphere during cross-modal interaction in audiovisual speech perception.

4.
The feasibility of mapping transient, randomly occurring neuropsychological events using independent component analysis (ICA) was evaluated in an auditory sentence-monitoring fMRI experiment, in which prerecorded short sentences of random content were presented in varying temporal patterns. The efficacy of ICA on fMRI data with such temporal characteristics was assessed by a series of simulation studies, as well as by human activation studies. The effects of contrast-to-noise ratio level, spatially varied hemodynamic response within a brain region, time lags of the responses among brain regions, and different simulated activation locations on the ICA were investigated in the simulations. Component maps obtained from the auditory sentence-monitoring experiments in each subject using ICA showed distinct activation in bilateral auditory and language cortices, as well as in superior sensorimotor cortices, consistent with previous PET studies. The associated time courses in the activated brain regions matched well to the timing of the sentence presentation, as evidenced by the recorded button-press response signals. Methods for ICA component ordering that may rank highly the components of primary interest in such experiments were developed. The simulation results characterized the performance of ICA under various conditions and may provide useful information for experimental design and data interpretation.
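The abstract describes spatial ICA of fMRI data plus ordering of components by their relevance to the randomly timed sentence presentations. The sketch below illustrates that idea with scikit-learn's FastICA and a simple HRF-convolved stimulus regressor; the array names, HRF shape, and ordering criterion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical inputs: `bold` is a (n_timepoints, n_voxels) fMRI data matrix and
# `stim_onsets` the sentence-presentation times in seconds.
def spatial_ica_with_ordering(bold, stim_onsets, tr=2.0, n_components=20):
    """Spatial ICA of fMRI data plus a simple component ordering: rank ICs by how
    well their time courses track an HRF-convolved regressor built from the
    (randomly timed) stimulus onsets."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    # spatial ICA: voxels are treated as observations -> independent spatial maps
    maps = ica.fit_transform(bold.T).T            # (n_components, n_voxels)
    tcs = ica.mixing_                              # (n_timepoints, n_components) time courses

    # stimulus regressor: impulses at onsets convolved with a gamma-shaped HRF approximation
    n_tp = bold.shape[0]
    boxcar = np.zeros(n_tp)
    boxcar[(np.asarray(stim_onsets) / tr).astype(int)] = 1.0
    t = np.arange(0, 24, tr)
    hrf = (t ** 5) * np.exp(-t) / 120.0
    regressor = np.convolve(boxcar, hrf)[:n_tp]

    # order components by absolute correlation with the regressor
    corr = np.array([abs(np.corrcoef(tcs[:, k], regressor)[0, 1])
                     for k in range(n_components)])
    order = np.argsort(corr)[::-1]
    return maps[order], tcs[:, order], corr[order]
```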

5.
Specht K, Reul J. NeuroImage, 2003, 20(4): 1944-1954.
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between stimulus types were interpreted as an automatic effect not driven by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activations of the primary and secondary auditory cortex. The strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish between different subsystems, with activation extending from posterior to anterior for speech and speech-like information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only during the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization of verbal stimuli to the left and sounds to the right was already detectable when short stimuli were used.

6.
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

7.
Independent component analysis (ICA) is a family of unsupervised learning algorithms that have proven useful for the analysis of the electroencephalogram (EEG) and magnetoencephalogram (MEG). ICA decomposes an EEG/MEG data set into a basis of maximally temporally independent components (ICs) that are learned from the data. As with any statistic, a concern with using ICA is the degree to which the estimated ICs are reliable. An IC may not be reliable if ICA was trained on insufficient data, if ICA training was stopped prematurely or at a local minimum (for some algorithms), or if multiple global minima were present. Consequently, evidence of ICA reliability is critical for the credibility of ICA results. In this paper, we present a new algorithm for assessing the reliability of ICs based on applying ICA separately to split-halves of a data set. This algorithm improves upon existing methods in that it considers both IC scalp topographies and activations, uses a probabilistically interpretable threshold for accepting ICs as reliable, and requires applying ICA only three times per data set. As evidence of the method's validity, we show that the method can perform comparably to more time intensive bootstrap resampling and depends in a reasonable manner on the amount of training data. Finally, using the method we illustrate the importance of checking the reliability of ICs by demonstrating that IC reliability is dramatically increased by removing the mean EEG at each channel for each epoch of data rather than the mean EEG in a prestimulus baseline.
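A minimal sketch of the split-half idea described above: run ICA separately on the two halves of the data and accept an IC as reliable when its scalp topography is closely matched by a component from the other half. The published algorithm additionally matches activations and uses a probabilistically interpretable threshold; the fixed cosine-similarity cutoff, FastICA estimator, and variable names below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical input: `epochs` with shape (n_epochs, n_channels, n_times).
def splithalf_ic_reliability(epochs, n_components=None, r_thresh=0.90):
    """Run ICA separately on the two halves of the data and flag an IC as 'reliable'
    when a component in one half has a closely matching scalp topography in the other."""
    n_epochs, n_ch, n_times = epochs.shape
    half1 = epochs[: n_epochs // 2].transpose(1, 0, 2).reshape(n_ch, -1)
    half2 = epochs[n_epochs // 2 :].transpose(1, 0, 2).reshape(n_ch, -1)

    def topographies(x):
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        ica.fit(x.T)                      # samples = time points, features = channels
        return ica.mixing_                # (n_channels, n_components) scalp maps

    topo1, topo2 = topographies(half1), topographies(half2)
    # normalize maps and compute absolute cosine similarity between all pairs
    t1 = topo1 / np.linalg.norm(topo1, axis=0)
    t2 = topo2 / np.linalg.norm(topo2, axis=0)
    sim = np.abs(t1.T @ t2)               # (n_ics_half1, n_ics_half2)
    best_match = sim.max(axis=1)
    reliable = best_match >= r_thresh
    return reliable, best_match
```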

8.
Potes C, Gunduz A, Brunner P, Schalk G. NeuroImage, 2012, 61(4): 841-848.
Previous studies demonstrated that brain signals encode information about specific features of simple auditory stimuli or of general aspects of natural auditory stimuli. How brain signals represent the time course of specific features in natural auditory stimuli is not well understood. In this study, we show in eight human subjects that signals recorded from the surface of the brain (electrocorticography (ECoG)) encode information about the sound intensity of music. ECoG activity in the high gamma band recorded from the posterior part of the superior temporal gyrus as well as from an isolated area in the precentral gyrus was observed to be highly correlated with the sound intensity of music. These results not only confirm the role of auditory cortices in auditory processing but also point to an important role of premotor and motor cortices. They also encourage the use of ECoG activity to study more complex acoustic features of simple or natural auditory stimuli.
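A rough sketch of the core measurement, assuming a single ECoG channel and a precomputed sound-intensity envelope at the same sampling rate; the band limits, filter order, and smoothing window are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Hypothetical inputs: `ecog` is one electrode's signal sampled at `fs` Hz,
# `sound_intensity` the music intensity envelope resampled to the same rate.
def high_gamma_intensity_correlation(ecog, sound_intensity, fs=1200.0,
                                     band=(70.0, 170.0), smooth_sec=0.1):
    """Band-pass the ECoG signal in an assumed high-gamma range, take its Hilbert
    envelope, smooth it, and correlate it with the sound-intensity time course."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    hg = filtfilt(b, a, ecog)                       # high-gamma band signal
    env = np.abs(hilbert(hg))                       # instantaneous amplitude envelope
    win = np.ones(int(smooth_sec * fs)) / int(smooth_sec * fs)
    env_smooth = np.convolve(env, win, mode="same") # light smoothing
    r = np.corrcoef(env_smooth, sound_intensity)[0, 1]
    return r
```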

9.
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex.

10.
Objective: To use magnetic source imaging to observe central nervous system activation evoked by somatosensory stimulation in normal subjects. Methods: Ten normal volunteers were studied. The skin of the hand was stimulated with a dedicated electrical stimulator using fixed current pulses of 0.3 ms, an interstimulus interval of 0.5 s, and 1000 averaged trials. Focal cortical activity was described with equivalent current dipoles (ECDs), and each subject's head geometry was modeled as a sphere. Results: Somatosensory stimulation clearly activated the primary somatosensory cortex in the contralateral pre- and postcentral gyri as well as the superior temporal gyrus. The latency of the temporal-lobe ECD was longer than that of the primary somatosensory response, and its strength was lower. Conclusion: Somatosensory stimulation activates the contralateral primary somatosensory cortex, and the temporal lobe participates in processing somatosensory stimuli.
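A hedged sketch of an equivalent-current-dipole fit with a spherical head model using MNE-Python; this is not the authors' pipeline, and the epochs object, time window, and covariance settings are assumptions made for illustration.

```python
import mne

# `epochs` is assumed to be an mne.Epochs object holding the somatosensory evoked field
# trials (MEG); the 20-60 ms window approximates the early somatosensory response.
def fit_sef_dipole(epochs):
    evoked = epochs.average()
    noise_cov = mne.compute_covariance(epochs, tmax=0.0)        # pre-stimulus noise
    sphere = mne.make_sphere_model(r0="auto", head_radius="auto",
                                   info=evoked.info)              # spherical head model
    evoked_peak = evoked.copy().crop(0.020, 0.060)                # early SEF window
    dip, residual = mne.fit_dipole(evoked_peak, noise_cov, sphere, trans=None)
    # dip.pos, dip.amplitude, and dip.gof give location, strength, and goodness of fit
    return dip
```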

11.
A visual task for semantic access involves a number of brain regions. However, previous studies have either examined the role of each region separately using a univariate approach or analyzed a single brain network using covariance connectivity analysis. We hypothesize that these brain regions form several functional networks underpinning a word semantic-access task, with these networks engaged in different cognitive components having distinct temporal characteristics. In this paper, multivariate independent component analysis (ICA) was used to reveal these networks based on functional magnetic resonance imaging (fMRI) data acquired during a visual and an auditory word semantic judgment task. Our results demonstrated that there were three task-related independent components (ICs), corresponding to different cognitive components involved in the visual task. Furthermore, ICA separation of the auditory task data yielded results consistent with our hypothesis, regardless of input modality.

12.
Jessen S, Kotz SA. NeuroImage, 2011, 58(2): 665-674.
Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions; body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while they saw mute video clips containing emotional body language in the visual condition. In the audiovisual condition participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band in the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.
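A small sketch of two of the measures discussed above, mean amplitude in an N100 window and alpha/beta band power, computed per condition from hypothetical single-channel epoch arrays; the window, frequency bands, and sampling rate are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical epoched EEG per condition: dict of arrays shaped (n_trials, n_times),
# already averaged over a fronto-central channel cluster.
def n100_and_band_power(epochs_by_cond, fs=500.0, epoch_tmin=-0.2,
                        n1_window=(0.08, 0.12)):
    """Mean amplitude in the N100 window plus alpha (8-12 Hz) and beta (13-30 Hz)
    power, per condition (e.g. 'auditory', 'visual', 'audiovisual')."""
    results = {}
    for cond, data in epochs_by_cond.items():
        times = epoch_tmin + np.arange(data.shape[1]) / fs
        n1_mask = (times >= n1_window[0]) & (times <= n1_window[1])
        n100 = data[:, n1_mask].mean()                 # mean amplitude in N100 window
        freqs, psd = welch(data, fs=fs, nperseg=min(256, data.shape[1]), axis=1)
        psd = psd.mean(axis=0)                         # average PSD over trials
        alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()
        beta = psd[(freqs >= 13) & (freqs <= 30)].mean()
        results[cond] = {"n100": n100, "alpha": alpha, "beta": beta}
    return results
```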

13.
The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Presently, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal RH volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects resulting in distinct and overlapping components.

14.
Kang E, Lee DS, Kang H, Lee JS, Oh SH, Lee MC, Kim CS. NeuroImage, 2004, 22(3): 1173-1181.
We investigated the brain plasticity that underlies the gain of auditory sensory and/or auditory language function after cochlear implantation (CI) surgery in deaf children with early-onset deafness. This study examined both brain glucose metabolism and auditory speech learning using 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and the Central Institute for the Deaf (CID) test, respectively, both before and after CI surgery. In a within-subject analysis comparing the pre-CI and post-CI PET results, CI itself resulted in an increase in glucose metabolism in the medial visual cortex, the bilateral thalamus, and the posterior cingulate. Compared with normal-hearing controls, brain activity in the deaf children was greater in the medial visual cortex and bilateral occipito-parietal junctions after CI. Better speech perception ability was associated with increased activity in higher visual areas such as the middle occipito-temporal junction (hMT/V5) and the posterior inferior temporal region (BA 21/37) in the left hemisphere, and with decreased activity in the right inferior parieto-dorsal prefrontal region. These findings suggest that speech learning placed a greater demand on the visual and visuospatial processing subserved by the early visual cortex and parietal cortices. However, only those deaf children who successfully learned auditory language after CI made greater use of visual motion perception for mouth movements in the left hMT/V5 region and less use of somatosensory function in the right parieto-frontal region.

15.
In event-related potential (ERP) studies, the brain's response to infrequent target (oddball) stimuli elicits a sequence of physiological events, the most prominent and well studied being the P300 (or P3) complex, peaking approximately 300 ms post-stimulus for simple stimuli and slightly later for more complex stimuli. Localization of the neural generators of the human oddball response remains challenging due to the lack of a single imaging technique with good spatial and temporal resolution. Here, we use independent component analyses to fuse ERP and fMRI modalities in order to examine the dynamics of the auditory oddball response with high spatiotemporal resolution across the entire brain. Initial activations in auditory and motor planning regions are followed by auditory association cortex and motor execution regions. The P3 response is associated with brainstem, temporal lobe, and medial frontal activity, and finally a late temporal lobe "evaluative" response. We show that fusing imaging modalities with different advantages can provide new information about the brain.
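A minimal joint-ICA-style sketch of ERP-fMRI fusion, one common way such data are combined: z-score each modality, concatenate features per subject, and decompose the joint matrix so that every component carries a linked ERP time course and fMRI map. The FastICA estimator, feature layout, and variable names are assumptions; the paper's exact fusion pipeline may differ.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical per-subject feature matrices: `erp` (n_subjects, n_timepoints) holds each
# subject's ERP at a chosen channel, `fmri` (n_subjects, n_voxels) a contrast map per subject.
def fuse_erp_fmri(erp, fmri, n_components=5):
    """Joint-ICA-style fusion: z-score each modality, concatenate features side by side,
    and decompose so each component has a linked ERP part and fMRI part."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    joint = np.hstack([zscore(erp), zscore(fmri)])          # (n_subjects, n_time + n_voxels)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    # treat the concatenated features as observations so the estimated sources are joint
    # patterns that are independent across time points and voxels
    sources = ica.fit_transform(joint.T).T                  # (n_components, n_time + n_voxels)
    loadings = ica.mixing_                                   # (n_subjects, n_components)
    erp_part = sources[:, : erp.shape[1]]                    # ERP time course of each component
    fmri_part = sources[:, erp.shape[1] :]                   # fMRI spatial map of each component
    return erp_part, fmri_part, loadings
```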

16.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), plus in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

17.
The way humans comprehend narrative speech plays an important part in human development and experience. A group of 313 children aged 5-18 years participated in a large-scale functional magnetic resonance imaging (fMRI) study designed to investigate the neural correlates of auditory narrative comprehension. The results were analyzed to investigate age-related changes in brain activity within the narrative language comprehension circuitry. We found age-related differences in brain activity that may reflect either changes in local neuroplasticity (of the regions involved) in the developing brain or a more global transformation of brain activity related to neuroplasticity. To investigate this issue, Structural Equation Modeling (SEM) was applied to the results obtained from a group independent component analysis (Schmithorst, V.J., Holland, S.K., et al., 2005. Cognitive modules utilized for narrative comprehension in children: a functional magnetic resonance imaging study. NeuroImage), and the age-related differences were examined in terms of changes in path coefficients between brain regions. The group Independent Component Analysis (ICA) had identified five bilateral task-related components comprising the primary auditory cortex, the mid-superior temporal gyrus, the most posterior aspect of the superior temporal gyrus, the hippocampus, the angular gyrus, and the medial aspect of the parietal lobule (precuneus/posterior cingulate). Furthermore, a left-lateralized network (sixth component) was also identified, comprising the inferior frontal gyrus (including Broca's area), the inferior parietal lobule, and the medial temporal gyrus. The components (brain regions) for the SEM were identified based on the ICA maps, and the results are discussed in light of recent neuroimaging studies corroborating the functional segregation of Broca's and Wernicke's areas and the important role played by the right hemisphere in narrative comprehension. The classical Wernicke-Geschwind (WG) model for speech processing is expanded into a two-route model involving a direct route between Broca's and Wernicke's areas and an indirect route involving the parietal lobe.
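As a simplified stand-in for the SEM step (illustration only, not the authors' model), directed path coefficients between region or IC time courses can be estimated by regressing each target region on its hypothesized parents and comparing the standardized coefficients across age groups; the region names and model structure below are hypothetical.

```python
import numpy as np

# `timecourses` maps region names to 1-D arrays (e.g. IC time courses for one age group);
# `paths` is a dict {target: [parent1, parent2, ...]} describing the assumed model.
def path_coefficients(timecourses, paths):
    """Return standardized regression coefficients for every parent -> target path."""
    def z(x):
        return (x - x.mean()) / x.std()
    coefs = {}
    for target, parents in paths.items():
        X = np.column_stack([z(timecourses[p]) for p in parents])
        y = z(timecourses[target])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        for p, b in zip(parents, beta):
            coefs[(p, target)] = float(b)
    return coefs

# Example (hypothetical) model structure:
# paths = {"mid_STG": ["primary_auditory"], "angular_gyrus": ["mid_STG", "posterior_STG"]}
```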

18.
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

19.
We investigated cerebral processing of audiovisual speech stimuli in humans using functional magnetic resonance imaging (fMRI). Ten healthy volunteers were scanned with a 'clustered volume acquisition' paradigm at 3 T during observation of phonetically matching (e.g., visual and acoustic /y/) and conflicting (e.g., visual /a/ and acoustic /y/) audiovisual vowels. Both stimuli activated the sensory-specific auditory and visual cortices, along with the superior temporal, inferior frontal (Broca's area), premotor, and visual-parietal regions bilaterally. Phonetically conflicting vowels, contrasted with matching ones, specifically increased activity in Broca's area. Activity during phonetically matching stimuli, contrasted with conflicting ones, was not enhanced in any brain region. We suggest that the increased activity in Broca's area reflects processing of conflicting visual and acoustic phonetic inputs in partly disparate neuron populations. On the other hand, matching acoustic and visual inputs would converge on the same neurons.

20.
Wessel JR, Ullsperger M. NeuroImage, 2011, 54(3): 2105-2115.
Following the development of increasingly precise measurement instruments and fine-grained analysis tools for electroencephalographic (EEG) data, analysis of single-trial event-related EEG has considerably widened the utility of this non-invasive method for investigating brain activity. Recently, independent component analysis (ICA) has become one of the most prominent techniques for increasing the feasibility of single-trial EEG analysis. This blind source separation technique extracts statistically independent components (ICs) from the raw EEG signal. By restricting the signal analysis to those ICs representing the processes of interest, single-trial analysis becomes more flexible. Still, the criteria for inclusion or exclusion of certain ICs are largely subjective and unstandardized, as is the actual selection process itself. We present a rationale for a bottom-up, data-driven IC selection approach, using clear-cut inferential statistics on both temporal and spatial information to identify components that significantly contribute to a certain event-related brain potential (ERP). With a time range as the only necessary input, this approach considerably reduces the prior assumptions required for IC selection and promotes greater objectivity of the selection process itself. To test the validity of the approach, we present results from a simulation and re-analyze data from a previously published ERP experiment on error processing. We compare the ERP-based IC selections made by our approach with selections based on mere signal power. The comparison of ERP integrity, signal-to-noise ratio, and single-trial properties of the back-projected ICs supports the validity of the approach presented here. In addition, the functional validity of the extracted error-related EEG signal is tested by investigating whether it is predictive of subsequent behavioural adjustments.
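A minimal sketch of the general idea, not the published algorithm: back-project each IC on its own, test whether its mean contribution in the ERP time window differs reliably from zero across trials, and keep the ICs that survive a multiple-comparison correction. The matrices, the chosen channel, and the Bonferroni correction are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: `unmixing` (n_ics, n_channels), `mixing` (n_channels, n_ics),
# `epochs` (n_trials, n_channels, n_times), `window` a boolean mask over time points,
# `channel` the index of the channel at which the ERP of interest is measured.
def select_ics_for_erp(unmixing, mixing, epochs, window, channel, alpha=0.05):
    n_trials, n_ch, n_times = epochs.shape
    n_ics = unmixing.shape[0]
    pvals = np.empty(n_ics)
    for k in range(n_ics):
        # activation of IC k on every trial, then back-projection to the chosen channel
        act = np.einsum("c,tcs->ts", unmixing[k], epochs)     # (n_trials, n_times)
        backproj = mixing[channel, k] * act                   # contribution at `channel`
        contrib = backproj[:, window].mean(axis=1)            # mean in the ERP window
        _, pvals[k] = stats.ttest_1samp(contrib, 0.0)         # differs from zero?
    # Bonferroni correction across ICs (a simple, conservative choice for this sketch)
    selected = np.where(pvals < alpha / n_ics)[0]
    return selected, pvals
```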
