Similar Documents
20 similar documents found (search time: 250 ms)
1.
This study analyzed high‐density event‐related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task‐irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory‐visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross‐modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non‐linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top‐down attentional control that further modulates cross‐modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context‐based control over multisensory processing, whose influences multiplex across finer and broader time scales. Hum Brain Mapp 37:273–288, 2016. © 2015 Wiley Periodicals, Inc.

2.
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed versus focused visual attention. These neural changes were observed at early processing latencies, within 100-300 ms poststimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention.

3.
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

4.
One of the principal functions of the nervous system is to synthesize information from multiple sensory channels into a coherent behavioral and perceptual gestalt. A critical feature of this multisensory synthesis is the sorting and coupling of information derived from the same event. One of the singular features of stimuli conveying such information is their contextual or semantic congruence. Illustrating this fact, subjects are typically faster and more accurate when performing tasks that include congruent compared to incongruent cross-modal stimuli. Using functional magnetic resonance imaging, we demonstrate that activity in select brain areas is sensitive to the contextual congruence among cross-modal cues and to task difficulty. The anterior cingulate gyrus and adjacent medial prefrontal cortices showed significantly greater activity when visual and auditory stimuli were contextually congruent (i.e., matching) than when they were nonmatching. Although activity in these regions was also dependent on task difficulty, showing decreased activity with decreasing task difficulty, the activity changes associated with stimulus congruence predominated.

5.
Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls. Thirty-one children with ASD and 31 age and IQ matched TD children (average age = 12 years) were presented with simple visual (i.e., flash) and auditory (i.e., beep) stimuli of varying number. In illusory conditions, a single flash was presented with 2–4 beeps. In TD children, these conditions generally result in the perception of multiple flashes, implying a perceptual fusion across vision and audition. In the present study, children with ASD were significantly less likely to perceive the illusion relative to TD controls, suggesting that multisensory integration and cross-modal binding may be weaker in some children with ASD. These results are discussed in the context of previous findings for multisensory integration in ASD and future directions for research.
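The scoring logic behind the sound-induced flash illusion described above can be sketched in a few lines: an illusion trial pairs one flash with multiple beeps, and the illusion is counted when more than one flash is reported. This is a minimal sketch under assumed trial-record field names (`flashes`, `beeps`, `reported`), which are illustrative and not taken from the study.

```python
# Hypothetical trial records: one flash paired with varying numbers of beeps.
# Field names are illustrative assumptions, not the study's actual data format.
trials = [
    {"flashes": 1, "beeps": 2, "reported": 2},  # illusion trial, illusion perceived
    {"flashes": 1, "beeps": 2, "reported": 1},  # illusion trial, no illusion
    {"flashes": 1, "beeps": 1, "reported": 1},  # control trial (congruent)
    {"flashes": 1, "beeps": 3, "reported": 2},  # illusion trial, illusion perceived
]

# Illusory conditions: a single flash accompanied by more than one beep.
illusion_trials = [t for t in trials if t["flashes"] == 1 and t["beeps"] > 1]

# The illusion is scored whenever multiple flashes are reported on a 1-flash trial.
illusion_rate = sum(t["reported"] > 1 for t in illusion_trials) / len(illusion_trials)
```

Comparing `illusion_rate` between groups is then the between-group test of interest; a lower rate in the ASD group corresponds to the weaker cross-modal binding reported above.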

6.
The role of multisensory memories in unisensory object discrimination
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.

7.
Long latency auditory brain potentials were recorded while subjects listened to bi-syllabic words spoken with an emotional expression and concurrently viewed congruent or incongruent facial expressions. Analysis of the auditory waveforms suggests the existence of a positive deflection around 240 ms post-stimulus with a clear posterior topography (the P2b component). This potential follows the modality-specific auditory N1-P2 components and precedes the amodal N2-P3 complex. Congruent face-voice trials elicited an earlier P2b component than incongruent trials, suggesting that auditory processing is delayed in the presence of an incongruent facial context. These electrophysiological results are consistent with previous behavioural studies showing an acceleration of reaction times for rating voice expressions that are part of congruent bimodal stimulus pairs. A source localisation analysis performed on the scalp EEG during the time-window corresponding to the P2b component disclosed a single dipole solution in the anterior cingulate cortex.

8.
The synchronous occurrence of the unisensory components of a multisensory stimulus contributes to their successful merging into a coherent perceptual representation. Oscillatory gamma-band responses (GBRs, 30-80 Hz) have been linked to feature integration mechanisms and to multisensory processing, suggesting they may also be sensitive to the temporal alignment of multisensory stimulus components. Here we examined the effects on early oscillatory GBR brain activity of varying the precision of the temporal synchrony of the unisensory components of an audio-visual stimulus. Audio-visual stimuli were presented with stimulus onset asynchronies ranging from -125 to +125 ms. Randomized streams of auditory (A), visual (V), and audio-visual (AV) stimuli were presented centrally while subjects attended to either the auditory or visual modality to detect occasional targets. GBRs to auditory and visual components of multisensory AV stimuli were extracted for five subranges of asynchrony (e.g., A preceded by V by 100±25 ms, by 50±25 ms, etc.) and compared with GBRs to unisensory control stimuli. Robust multisensory interactions were observed in the early GBRs when the auditory and visual stimuli were presented with the closest synchrony. These effects were found over medial-frontal brain areas after 30-80 ms and over occipital brain areas after 60-120 ms. A second integration effect, possibly reflecting the perceptual separation of the two sensory inputs, was found over occipital areas when auditory inputs preceded visual by 100±25 ms. No significant interactions were observed for the other subranges of asynchrony. These results show that the precision of temporal synchrony can have an impact on early cross-modal interactions in human cortex.
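The subrange analysis described above amounts to binning trials by their stimulus onset asynchrony into ±25 ms windows around a set of centres. A minimal sketch, with the centres chosen to match the example subranges in the abstract (the function name and exact bin edges are assumptions for illustration):

```python
# Bin centres for SOA subranges, each spanning centre ± 25 ms.
# Centres are illustrative, mirroring the -100±25, -50±25, ... example above.
CENTRES = [-100, -50, 0, 50, 100]
HALF_WIDTH = 25


def soa_bin(soa_ms):
    """Return the subrange centre whose ±25 ms window contains soa_ms,
    or None if the asynchrony falls outside all subranges."""
    for centre in CENTRES:
        # Half-open windows [centre - 25, centre + 25) avoid double-counting
        # an SOA that lands exactly on a shared boundary.
        if centre - HALF_WIDTH <= soa_ms < centre + HALF_WIDTH:
            return centre
    return None
```

Trials assigned to the same centre are then averaged together before comparison with the unisensory controls; the half-open windows are one design choice for keeping the bins disjoint.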

9.
While most typically developing (TD) participants have a coarse-to-fine processing style, people with autism spectrum disorder (ASD) seem to be less globally and more locally biased when processing visual information. The stimulus-specific spatial frequency content might be directly relevant to determine this temporal hierarchy of visual information processing in people with and without ASD. We implemented a semantic priming task in which (in)congruent coarse and/or fine spatial information preceded target categorization. Our results indicated that adolescents with ASD made more categorization errors than TD adolescents and needed more time to process the prime stimuli. Simultaneously, however, our findings argued for a processing advantage in ASD, when the prime stimulus contains detailed spatial information and presentation time permits explicit visual processing.

10.
The successful integration of visual and auditory stimuli requires information about whether visual and auditory signals originate from corresponding places in the external world. Here we report crossmodal effects of spatially congruent and incongruent audio-visual (AV) stimulation. Visual and auditory stimuli were presented from one of four horizontal locations in external space. Seven healthy human subjects had to assess the spatial fit of a visual stimulus (i.e. a gray-scaled picture of a cartoon dog) and a simultaneously presented auditory stimulus (i.e. a barking sound). Functional magnetic resonance imaging (fMRI) revealed two distinct networks of cortical regions that processed preferentially either spatially congruent or spatially incongruent AV stimuli. Whereas earlier visual areas responded preferentially to incongruent AV stimulation, higher visual areas of the temporal and parietal cortex (left inferior temporal gyrus [ITG], right posterior superior temporal gyrus/sulcus [pSTG/STS], left intra-parietal sulcus [IPS]) and frontal regions (left pre-central gyrus [PreCG], left dorsolateral pre-frontal cortex [DLPFC]) responded preferentially to congruent AV stimulation. A position-resolved analysis revealed three robust cortical representations for each of the four visual stimulus locations in retinotopic visual regions corresponding to the representation of the horizontal meridian in area V1 and at the dorsal and ventral borders between areas V2 and V3. While these regions of interest (ROIs) did not show any significant effect of spatial congruency, we found subregions within ROIs in the right hemisphere that showed an incongruency effect (i.e. an increased fMRI signal during spatially incongruent compared to congruent AV stimulation). We interpret this finding as a correlate of spatially distributed recurrent feedback during mismatch processing: whenever a spatial mismatch is detected in multisensory regions (such as the IPS), processing resources are re-directed to low-level visual areas.

11.
The current debate on mechanisms of action understanding and recognition has re-opened the question of how perceptual and motor systems are linked. It has been proposed that the human motor system has a role in action perception; however, there is still no direct evidence that actions can modulate early neural processes associated with perception of meaningful actions. Here we show that plans for action modulate the perceptual processing of observed actions within 200 ms of stimulus onset. We examined event-related potentials to images of hand gestures presented while participants planned either a matching (congruent) or non-matching (incongruent) gesture. The N170/VPP, representing visual processing of hand gestures, was reliably altered when participants concurrently planned congruent versus incongruent actions. In a second experiment, we showed that this congruency effect was specific to action planning and not to more general semantic aspects of action representation. Our findings demonstrate that actions encoded via the motor system have a direct effect on visual processing, and thus imply a bi-directional link between action and perception in the human brain. We suggest that through forward modelling, intended actions can facilitate the encoding of sensory inputs that would be expected as a consequence of the action.

12.
Thorne JD, Debener S. Neuroreport 2008;19(5):553-557
Multisensory behavioral benefits generally occur when one modality provides improved or disambiguating information to another. Here, we show benefits when no information is apparently provided. Participants performed an auditory frequency discrimination task in which auditory stimuli were paired with uninformative visual stimuli. Visual-auditory stimulus onset asynchrony was varied from -10 ms (sound first) to 80 ms without compromising perceptual simultaneity. In most stimulus onset asynchrony conditions, response times to audiovisual pairs were significantly shorter than auditory-alone controls. This suggests a general processing advantage for multisensory stimuli over unisensory stimuli, even when only one modality is informative. Response times were shortest with an auditory delay of 65 ms, indicating an audiovisual 'perceptual optimum' that may be related to processing simultaneity.

13.
This study investigated the neural basis of the effect of gaze direction on facial expression processing in children with and without ASD, using event-related potentials (ERPs). Children with ASD (10-17 years old) and typically developing (TD) children (9-16 years old) were asked to determine the emotional expression (angry or fearful) of a facial stimulus with a direct or averted gaze, and the ERPs were recorded concurrently. In TD children, faces with a congruent expression and gaze direction in approach-avoidance motivation, such as an angry face with a direct gaze (i.e., approaching motivation) and a fearful face with an averted gaze (i.e., avoidant motivation), were recognized more accurately and elicited larger N170 amplitudes than motivationally incongruent facial stimuli (an angry face with an averted gaze and a fearful face with a direct gaze). These results demonstrated the neural basis and time course of integration of facial expression and gaze direction in TD children and its impairment in children with ASD.

14.
This literature review aims to interpret behavioural and electrophysiological studies addressing auditory processing in children and adults with autism spectrum disorder (ASD). Data have been organised according to the applied methodology (behavioural versus electrophysiological studies) and according to stimulus complexity (pure versus complex tones versus speech sounds). In line with the weak central coherence (WCC) theory of autism we aimed to investigate whether individuals with ASD show a more locally and less globally oriented processing style in the auditory modality. To avoid the possible confound of stimulus complexity, this influence was taken into account as an additional hypothesis. The review reveals that the identification and discrimination of isolated acoustic features (in particular pitch processing) is generally intact or enhanced in individuals with ASD, for pure as well as for complex tones and speech sounds. It thus appears that the local processing advantage is not influenced by stimulus complexity. Individuals with ASD are also less susceptible to global interference of speech-like material. A deficit in global auditory processing, however, is less universally confirmed. We propose that the observed pattern of auditory enhancements and deficits in ASD may be related to an atypical pattern of right hemisphere dominance. As the right and left hemisphere are relatively more specialized in spectral versus temporal auditory processing, respectively, right hemisphere dominance in ASD could provoke enhanced pitch and vowel processing, whereas left hemisphere deficiencies might explain speech perception problems and temporal processing deficits.

15.
Successful information processing requires the focusing of attention on a certain stimulus property and the simultaneous suppression of irrelevant information. The Stroop task is a useful paradigm to study such attentional top‐down control in the presence of interference. Here, we investigated the neural correlates of an auditory Stroop task using fMRI. Subjects focused either on tone pitch (relatively high or low; phonetic task) or on the meaning of a spoken word (high/low/good; semantic task), while ignoring the other stimulus feature. We differentiated between task‐related (phonetic incongruent vs. semantic incongruent) and sensory‐level interference (phonetic incongruent vs. phonetic congruent). Task‐related interference activated similar regions as in visual Stroop tasks, including the anterior cingulate cortex (ACC) and the presupplementary motor‐area (pre‐SMA). More specifically, we observed that the very caudal/posterior part of the ACC was activated and not the dorsal/anterior region. Because identical stimuli but different task demands are compared in this contrast, it reflects conflict at a relatively high processing level. A more conventional contrast between incongruent and congruent phonetic trials was associated with a different cluster in the pre‐SMA/ACC which was observed in a large number of previous studies. Finally, functional connectivity analysis revealed that activity within the regions activated in the phonetic incongruent vs. semantic incongruent contrast was more strongly interrelated during semantically vs. phonetically incongruent trials. Taken together, we found (besides activation of regions well‐known from visual Stroop tasks) activation of the very caudal and posterior part of the ACC due to task‐related interference in an auditory Stroop task. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.

16.
We studied the interactions in neural processing of auditory and visual speech by recording event-related brain potentials (ERPs). Unisensory (auditory, A; visual, V) and audiovisual (AV) vowels were presented to 11 subjects. AV vowels were phonetically either congruent (e.g., acoustic /a/ and visual /a/) or incongruent (e.g., acoustic /a/ and visual /y/). ERPs to AV stimuli and the sum of the ERPs to A and V stimuli (A+V) were compared. Similar ERPs to AV and A+V were hypothesized to indicate independent processing of A and V stimuli. Differences, on the other hand, would suggest AV interactions. Three deflections, the first peaking at about 85 ms after the A stimulus onset, were significantly larger in the ERPs to A+V than in the ERPs to both congruent and incongruent AV stimuli. We suggest that these differences reflect AV interactions in the processing of general, non-phonetic, features shared by the acoustic and visual stimulus (spatial location, coincidence in time). The first difference in the ERPs to incongruent and congruent AV vowels peaked at 155 ms from the A stimulus onset. This and two later differences are suggested to reflect interactions at the phonetic level. The early general AV interactions probably reflect modified activity in the sensory-specific cortices, whereas the later phonetic AV interactions are likely generated in the heteromodal cortices. Thus, our results suggest that sensory-specific and heteromodal brain regions participate in AV speech integration at separate latencies and are sensitive to different features of A and V speech stimuli.
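The additive-model logic behind this comparison can be sketched numerically: if auditory and visual inputs are processed independently, the ERP to the audiovisual stimulus should approximate the sum of the unisensory ERPs, so the difference wave AV − (A+V) indexes the interaction. The sketch below uses simulated data; all array sizes and values are illustrative assumptions, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial ERP epochs (trials x timepoints), nominally at 1 kHz,
# so sample index ~= milliseconds post-stimulus. Purely illustrative data.
n_trials, n_times = 100, 500
erp_a = rng.normal(0, 1, (n_trials, n_times)).mean(axis=0)   # auditory-alone average
erp_v = rng.normal(0, 1, (n_trials, n_times)).mean(axis=0)   # visual-alone average
erp_av = rng.normal(0, 1, (n_trials, n_times)).mean(axis=0)  # audiovisual average

# Additive model: under independent processing, ERP(AV) ~= ERP(A) + ERP(V).
summed = erp_a + erp_v

# Nonzero deflections in the difference wave suggest AV interaction.
interaction = erp_av - summed

# e.g., inspect the window around the ~85 ms deflection reported above
window = slice(60, 110)
peak_effect = np.abs(interaction[window]).max()
```

In practice the difference wave would be tested against zero across subjects at each latency; the 85 ms and 155 ms deflections in the abstract correspond to such significant departures from additivity.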

17.
Autism spectrum disorders (ASD) are characterized by a deficit of dorsal visual stream processing as well as the impairment of inhibitory control capability. However, the cognitive processing mechanisms of executive dysfunction have not been addressed. In the present study, the endogenous Posner paradigm task was administered to 15 children with ASD and 16 typically developing (TD) children to simultaneously investigate and compare the behavioral performance and event-related potential (ERP) measures. Children with ASD showed slower reaction times in the incongruent condition but did not differ significantly from TD children across conditions overall or in response accuracy. The ASD group also exhibited significant impairment on measures of inhibitory control. In terms of ERPs regarding early and late inhibition, there were no significant differences found with regard to N2 latency, N2 amplitude, and P3 amplitude in children with ASD relative to TD children, but the ASD group manifested prolonged latency on the P3 component to target stimuli, especially in the incongruent condition, which is indicative of slow and inefficient stimulus classification speed as compared to TD children.

18.
Conflict-related cognitive processes are critical for adapting to sudden environmental changes that confront the individual with inconsistent or ambiguous information. Thus, these processes play a crucial role to cope with daily life. Generally, conflicts tend to accumulate especially in complex and threatening situations. Therefore, the question arises how conflict-related cognitive processes are modulated by the close succession of conflicts. In the present study, we investigated the effect of interactions between different types of conflict on performance as well as on electrophysiological parameters. A task-irrelevant auditory stimulus and a task-relevant visual stimulus were presented successively. The auditory stimulus consisted of a standard or deviant tone, followed by a congruent or incongruent Stroop stimulus. After standard prestimuli, performance deteriorated for incongruent compared to congruent Stroop stimuli, which were accompanied by a widespread negativity for incongruent versus congruent stimuli in the event-related potentials (ERPs). However, after deviant prestimuli, performance was better for incongruent than for congruent Stroop stimuli and an additional early negativity in the ERP emerged with a fronto-central maximum. Our data show that deviant auditory prestimuli facilitate specifically the processing of stimulus-related conflict, providing evidence for a conflict-priming effect.

19.
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within‐modality and cross‐modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non‐McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre‐central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Hum Brain Mapp, 2010. © 2009 Wiley‐Liss, Inc.

20.
Visual skills, including numerosity estimation, are reported to be superior in autism spectrum disorders (ASD). This phenomenon is attributed to individuals with ASD processing local features, rather than the Gestalt. We examined the neural correlates of numerosity estimation in adults with and without ASD, to disentangle perceptual atypicalities from numerosity processing. Fourteen adults with ASD and matched typically developed (TD) controls estimated the number of dots (80–150) arranged either randomly (local information) or in meaningful patterns (global information) while brain activity was recorded with magnetoencephalography (MEG). Behavioral results showed no significant group difference in the errors of estimation. However, numerical estimation in ASD was more variable across numerosities than TD and was not affected by the global arrangement of the dots. At 80–120 ms, MEG analyses revealed early significant differences (TD > ASD) in source amplitudes in visual areas, followed from 120 to 400 ms by group differences in temporal, and then parietal regions. After 400 ms, a source was found in the superior frontal gyrus in TD only. Activation in temporal areas was differently sensitive to the global arrangement of dots in TD and ASD. MEG data show that individuals with autism exhibit widespread functional abnormalities. Differences in temporal regions could be linked to atypical global perception. Occipital followed by parietal and frontal differences might be driven by abnormalities in the processing and conversion of visual input into a number‐selective neural code and complex cognitive decisional stages. These results suggest overlapping atypicalities in sensory, perceptual and number‐related processing during numerosity estimation in ASD. Hum Brain Mapp 35:4362–4385, 2014. © 2014 Wiley Periodicals, Inc.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号