Similar Documents
20 similar documents found (search time: 31 ms)
1.
Object recognition benefits maximally from multimodal sensory input when stimulus presentation is noisy or degraded. Whether this advantage can be attributed specifically to the extent of overlap in object‐related information, or rather to object‐unspecific enhancement due to the mere presence of additional sensory stimulation, remains unclear. Further, the cortical processing differences driving increased multisensory integration (MSI) for degraded compared with clear information remain poorly understood. Here, two consecutive studies first compared behavioral benefits of audio‐visual overlap of object‐related information, relative to conditions where one channel carried information and the other carried noise. A hierarchical drift diffusion model indicated performance enhancement when auditory and visual object‐related information was simultaneously present for degraded stimuli. A subsequent fMRI study revealed visual dominance on a behavioral and neural level for clear stimuli, while degraded stimulus processing was mainly characterized by activation of a frontoparietal multisensory network, including the intraparietal sulcus (IPS). Connectivity analyses indicated that integration of degraded object‐related information relied on IPS input, whereas clear stimuli were integrated through direct information exchange between visual and auditory sensory cortices. These results indicate that the inverse effectiveness observed for identification of degraded relative to clear objects in behavior and brain activation might be facilitated by selective recruitment of an executive cortical network which uses the IPS as a relay mediating crossmodal sensory information exchange.
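The drift-diffusion logic behind this abstract can be illustrated with a minimal simulation (the parameter values below are hypothetical, not the study's fitted estimates): redundant audio-visual evidence is modeled as a higher drift rate, which yields faster and more accurate decisions than a degraded unimodal input.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.005, max_t=3.0,
                 n_trials=500, seed=0):
    """Euler-Maruyama simulation of a two-boundary drift-diffusion model.
    Returns (mean RT in seconds, fraction of trials hitting the upper bound)."""
    rng = np.random.default_rng(seed)
    n_steps = int(max_t / dt)
    rts, hits = [], []
    for _ in range(n_trials):
        # accumulate noisy evidence until a decision bound is crossed
        steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps)
        x = np.cumsum(steps)
        crossed = np.flatnonzero(np.abs(x) >= threshold)
        if crossed.size:
            k = crossed[0]
            rts.append((k + 1) * dt)
            hits.append(x[k] >= threshold)
        else:  # no bound reached within max_t
            rts.append(max_t)
            hits.append(False)
    return float(np.mean(rts)), float(np.mean(hits))

# Hypothetical drift rates: degraded unimodal input vs. redundant audio-visual input
rt_uni, acc_uni = simulate_ddm(drift=0.8)
rt_av, acc_av = simulate_ddm(drift=1.6)
```

With these toy parameters the bimodal condition produces shorter mean RTs and higher accuracy, the qualitative pattern the abstract reports for degraded stimuli.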

2.
In this study, we aimed to understand how whole‐brain neural networks compute sensory information integration, using the olfactory and visual systems as a model. Task‐related functional magnetic resonance imaging (fMRI) data were obtained during unimodal and bimodal sensory stimulation. Hub‐like network nodes specific to multisensory integration processing (MIP) were identified with network‐based statistics applied to region‐of‐interest connectivity matrices, implicating the following brain areas in processing the presented bimodal sensory information: the right precuneus, connected contralaterally to the supramarginal gyrus, for memory‐related imagery and phonology retrieval; and the left middle occipital gyrus, connected ipsilaterally to the inferior frontal gyrus via the inferior fronto‐occipital fasciculus, for functional aspects of working memory. Graph‐theoretical quantification of the resulting complex network topologies indicated significantly increased global efficiency and clustering coefficients in networks including aspects of MIP, reflecting simultaneously better integration and segregation. Graph‐theoretical analysis of positive and negative network correlations, which allows inferences about excitatory and inhibitory network architectures, revealed a nonsignificant but highly consistent pattern: MIP‐specific neural networks are dominated by inhibitory relationships between brain regions involved in stimulus processing.
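The two graph metrics this abstract relies on can be sketched on a toy binary, undirected network (the 5-node adjacency matrix below is illustrative, not the study's connectivity data): global efficiency is the mean inverse shortest-path length over node pairs, and the clustering coefficient measures how interconnected each node's neighbours are.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    d = shortest_path(adj, method='D', unweighted=True, directed=False)
    n = adj.shape[0]
    inv = np.where(np.isfinite(d) & (d > 0), 1.0 / d, 0.0)
    return inv.sum() / (n * (n - 1))

def mean_clustering(adj):
    """Average local clustering coefficient: for each node, the fraction of
    its neighbour pairs that are themselves connected."""
    n = adj.shape[0]
    cc = []
    for i in range(n):
        nb = np.flatnonzero(adj[i])
        k = len(nb)
        if k < 2:
            cc.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2  # edges among the neighbours
        cc.append(links / (k * (k - 1) / 2))
    return float(np.mean(cc))

# Toy network: node 0 is a hub connected to all others, plus edges 1-2 and 3-4
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]])
```

For this toy network the global efficiency is 0.8 and the mean clustering coefficient is 13/15 ≈ 0.87; denser MIP-related networks would push both values up, the "simultaneously better integration and segregation" the abstract describes.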

3.
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/‐/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event‐related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3‐like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech.

4.
Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory–visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory processing with a nonverbal auditory–visual paradigm, for which neurotypical adults show bisensory facilitation. ASD participants reported more atypical sensory symptoms overall, most prominently in the auditory modality. On the experimental task, reduced response times for bisensory compared to unisensory trials were seen in both ASD and control groups, but neither group showed significant race model violation (evidence of intermodal integration). Findings do not support impaired bisensory processing for simple nonverbal stimuli in high-functioning children with ASD.

5.
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

6.
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory–cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio–visual stimuli. Behavioral performance and cortical processing of auditory and audio–visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times for auditory and audio–visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non‐speech conditions, which was reflected by a strong visual modulation of auditory–cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio–visual conditions in central auditory implant patients is based on enhanced audio–visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206–2225, 2017. © 2017 Wiley Periodicals, Inc.

7.
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.

8.
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band‐limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6–10 and 15–30 Hz. These visual LFP responses co‐localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume‐conducted signals from the neighboring SC. Visual responses in the IC occurred later than those in the retinally driven superficial SC layers and earlier than those in the deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space.
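The imaginary-coherence control used here can be sketched on synthetic data (the signals and parameters below are toy assumptions, not the ferret recordings): because volume conduction produces zero-phase-lag coupling, which is purely real in the cross-spectrum, only phase-lagged coupling survives in the imaginary part of the coherency.

```python
import numpy as np
from scipy.signal import csd

def imaginary_coherence(x, y, fs, nperseg=256):
    """|Im(coherency)| per frequency: coupling that cannot be explained by
    zero-phase-lag (volume-conducted) mixing of the two signals."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)  # cross-spectral density
    _, sxx = csd(x, x, fs=fs, nperseg=nperseg)  # auto-spectra
    _, syy = csd(y, y, fs=fs, nperseg=nperseg)
    coherency = sxy / np.sqrt(sxx.real * syy.real)
    return f, np.abs(coherency.imag)

# Two toy LFP-like signals: a shared 20 Hz rhythm with a 90-degree phase lag
fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t - np.pi / 2) + 0.5 * rng.standard_normal(t.size)
f, icoh = imaginary_coherence(x, y, fs)
```

The 90-degree lag makes the coherency at 20 Hz almost purely imaginary, so `icoh` peaks near 20 Hz; a zero-lag copy of the same rhythm would leave the imaginary part near zero at that frequency.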

9.
Several studies have shown sensorimotor deficits in speech processing in individuals with idiopathic Parkinson's disease (PD). The underlying neural mechanisms, however, remain poorly understood. In the present event‐related potential (ERP) study, 18 individuals with PD and 18 healthy controls were exposed to frequency‐altered feedback (FAF) while producing a sustained vowel and listening to the playback of their own voice. Behavioral results revealed that individuals with PD produced significantly larger vocal compensation for pitch feedback errors than healthy controls, and exhibited a significant positive correlation between the magnitude of their vocal responses and the variability of their unaltered vocal pitch. At the cortical level, larger P2 responses were observed for individuals with PD compared with healthy controls during active vocalization due to left‐lateralized enhanced activity in the superior and inferior frontal gyrus, premotor cortex, inferior parietal lobule, and superior temporal gyrus. These two groups did not differ, however, when they passively listened to the playback of their own voice. Individuals with PD also exhibited larger P2 responses during active vocalization when compared with passive listening due to enhanced activity in the inferior frontal gyrus, precentral gyrus, postcentral gyrus, and middle temporal gyrus. This enhancement effect, however, was not observed for healthy controls. These findings provide neural evidence for the abnormal auditory–vocal integration for voice control in individuals with PD, which may be caused by their deficits in the detection and correction of errors in voice auditory feedback. Hum Brain Mapp 37:4248–4261, 2016. © 2016 Wiley Periodicals, Inc.

10.
Coordinated attention to information from multiple senses is fundamental to our ability to respond to salient environmental events, yet little is known about brain network mechanisms that guide integration of information from multiple senses. Here we investigate dynamic causal mechanisms underlying multisensory auditory–visual attention, focusing on a network of right‐hemisphere frontal–cingulate–parietal regions implicated in a wide range of tasks involving attention and cognitive control. Participants performed three ‘oddball’ attention tasks involving auditory, visual and multisensory auditory–visual stimuli during fMRI scanning. We found that the right anterior insula (rAI) demonstrated the most significant causal influences on all other frontal–cingulate–parietal regions, serving as a major causal control hub during multisensory attention. Crucially, we then tested two competing models of the role of the rAI in multisensory attention: an ‘integrated’ signaling model in which the rAI generates a common multisensory control signal associated with simultaneous attention to auditory and visual oddball stimuli versus a ‘segregated’ signaling model in which the rAI generates two segregated and independent signals in each sensory modality. We found strong support for the integrated, rather than the segregated, signaling model. Furthermore, the strength of the integrated control signal from the rAI was most pronounced on the dorsal anterior cingulate and posterior parietal cortices, two key nodes of saliency and central executive networks respectively. These results were preserved with the addition of a superior temporal sulcus region involved in multisensory processing. Our study provides new insights into the dynamic causal mechanisms by which the AI facilitates multisensory attention.

11.
Mental imagery is a complex cognitive process that resembles the experience of perceiving an object when this object is not physically present to the senses. It has been shown that, depending on the sensory nature of the object, mental imagery also involves correspondent sensory neural mechanisms. However, it remains unclear which areas of the brain subserve supramodal imagery processes that are independent of the object modality, and which brain areas are involved in modality‐specific imagery processes. Here, we conducted a functional magnetic resonance imaging study to reveal supramodal and modality‐specific networks of mental imagery for auditory and visual information. A common supramodal brain network independent of imagery modality, two separate modality‐specific networks for imagery of auditory and visual information, and a common deactivation network were identified. The supramodal network included brain areas related to attention, memory retrieval, motor preparation and semantic processing, as well as areas considered to be part of the default‐mode network and multisensory integration areas. The modality‐specific networks comprised brain areas involved in processing of respective modality‐specific sensory information. Interestingly, we found that imagery of auditory information led to a relative deactivation within the modality‐specific areas for visual imagery, and vice versa. In addition, mental imagery of both auditory and visual information widely suppressed the activity of primary sensory and motor areas, that is, the deactivation network. These findings have important implications for understanding the mechanisms that are involved in generation of mental imagery.

12.
A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc.

13.
The etymology of schizophrenia implies poor functional integration of sensory, cognitive and affective processes. Multisensory integration (MSI) is a spontaneous perceptual–cognitive process by which relevant information from multiple sensory modalities is extracted to generate a holistic experience. Deficits in MSI may hinder prompt and appropriate behavioural responses in a complex and transient environment. Despite extensive investigation of sensory, cognitive and affective processing in patients with schizophrenia, little is known about how MSI is affected in the illness. We systematically searched the PubMed electronic database and reviewed twenty-nine behavioural and neuroimaging studies examining MSI in patients with schizophrenia. The available evidence indicates impaired MSI for non-emotional stimuli in schizophrenia, especially for linguistic information. There is also evidence for altered MSI for emotional stimuli, although findings are inconsistent and may be modality-specific. Brain functional alterations in the superior temporal cortex and inferior frontal cortex appear to underlie the deficits in both non-emotional and emotional MSI. The limitations of the experimental paradigms used and directions for future research are also discussed.

14.
Coding for the degree of disorder in a temporally unfolding sensory input allows for optimized encoding of these inputs via information compression and predictive processing. Prior neuroimaging work has examined sensitivity to statistical regularities within single sensory modalities and has associated this function with the hippocampus, anterior cingulate, and lateral temporal cortex. Here we investigated to what extent sensitivity to input disorder, quantified by Markov entropy, is subserved by modality‐general or modality‐specific neural systems when participants are not required to monitor the input. Participants were presented with rapid (3.3 Hz) auditory and visual series varying over four levels of entropy, while monitoring an infrequently changing fixation cross. For visual series, sensitivity to the magnitude of disorder was found in early visual cortex, the anterior cingulate, and the intraparietal sulcus. For auditory series, sensitivity was found in inferior frontal, lateral temporal, and supplementary motor regions implicated in speech perception and sequencing. Ventral premotor and central cingulate cortices were identified as possible candidates for modality‐general uncertainty processing, exhibiting marginal sensitivity to disorder in both modalities. The right temporal pole differentiated the highest and lowest levels of disorder in both modalities, but did not show general sensitivity to the parametric manipulation of disorder. Our results indicate that neural sensitivity to input disorder relies largely on modality‐specific systems embedded in extended sensory cortices, though uncertainty‐related processing in frontal regions may be driven by both input modalities. Hum Brain Mapp 35:1111–1128, 2014. © 2013 Wiley Periodicals, Inc.
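The Markov entropy used to quantify disorder can be estimated from an observed sequence as a first-order conditional entropy, H = -Σᵢ p(i) Σⱼ p(j|i) log₂ p(j|i). A minimal sketch (the sequences below are toy examples, not the study's stimulus series):

```python
import numpy as np
from collections import Counter

def markov_entropy(seq, base=2):
    """First-order Markov entropy in bits per transition, estimated from the
    observed transition counts of a symbol sequence."""
    pairs = Counter(zip(seq[:-1], seq[1:]))  # counts of (current, next) pairs
    states = Counter(seq[:-1])               # counts of transition origins
    n = len(seq) - 1
    h = 0.0
    for (i, j), c in pairs.items():
        p_ij = c / n                  # joint probability of the transition
        p_j_given_i = c / states[i]   # conditional probability p(j | i)
        h -= p_ij * np.log(p_j_given_i) / np.log(base)
    return h

# Fully predictable alternation -> 0 bits; random binary series -> close to 1 bit
ordered = list("ABABABABABAB" * 10)
random_seq = list("".join(np.random.default_rng(0).choice(list("AB"), 200)))
```

A deterministic alternation yields zero entropy, while an unstructured binary series approaches the 1 bit/transition maximum; the four entropy levels in the study would fall between these extremes.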

15.
The timing of personal movement with respect to external events has previously been investigated using a synchronized finger‐tapping task with a sequence of auditory or visual stimuli. While visuomotor synchronization is more accurate with moving stimuli than with stationary stimuli, it remains unclear whether the same principle holds true in the auditory domain. Although the right inferior–superior parietal lobe (IPL/SPL), a center of auditory motion processing, is expected to be involved in auditory–motor synchronization with moving sounds, its functional relevance has not yet been investigated. The aim of the present study was thus to clarify whether horizontal auditory motion affects the accuracy of finger‐tapping synchronized with sounds, as well as whether the application of transcranial direct current stimulation (tDCS) to the right IPL/SPL affects this. Nineteen healthy right‐handed participants performed a task in which tapping was synchronized with both stationary sounds and sounds that created apparent horizontal motion. This task was performed before and during anodal, cathodal and sham tDCS application to the right IPL/SPL in separate sessions. The time difference between the onset of the sounds and tapping was larger with apparently moving sounds than with stationary sounds. Cathodal tDCS decreased this difference, anodal tDCS increased the variance of the difference and sham stimulation had no effect. These results supported the hypothesis that auditory motion disturbs efficient auditory–motor synchronization and that the right IPL/SPL plays an important role in tapping in synchrony with moving sounds via auditory motion processing.

16.
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event‐related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback during ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory–vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways.

17.
There is a vigorous debate as to whether visual perception and imagery share the same neuronal networks, whether the primary visual cortex is necessarily involved in visual imagery, and whether visual imagery functions are lateralized in the brain. Two patients with brain damage from closed head injury were submitted to tests of mental imagery in the visual, tactile, auditory, gustatory, olfactory and motor domains, as well as to an extensive testing of cognitive functions. A computerized mapping procedure was used to localize the site and to assess the extent of the lesions. One patient showed pure visual mental imagery deficits in the absence of imagery deficits in other sensory domains as well as in the motor domain, while the other patient showed both visual and tactile imagery deficits. Perceptual, language, and memory deficits were conspicuously absent. Computerized analysis of the lesions showed a massive involvement of the left temporal lobe in both patients and a bilateral parietal lesion in one patient. In both patients the calcarine cortex with the primary visual area was bilaterally intact. Our study indicates that: (i) visual imagery deficits can occur independently from deficits of visual perception; (ii) visual imagery deficits can occur when the primary visual cortex is intact and (iii) the left temporal lobe plays an important role in visual mental imagery.

18.
A growing body of literature demonstrates impaired multisensory integration (MSI) in patients with schizophrenia compared to non-psychiatric individuals. One of the most basic measures of MSI is intersensory facilitation of reaction times (RTs), in which bimodal targets, with cues from two sensory modalities, are detected faster than unimodal targets. This RT speeding is generally attributed to super-additive processing of multisensory targets. In order to test whether patients with schizophrenia are impaired on this basic measure of MSI, we assessed the degree of intersensory facilitation for a sample of 20 patients compared to 20 non-psychiatric individuals using a very simple target detection task. RTs were recorded for participants to detect targets that were either unimodal (auditory alone, A; visual alone, V) or bimodal (auditory + visual, AV). RT distributions to detect bimodal targets were compared with predicted RT distributions based on the summed probability distribution of each participant's RTs to visual alone and auditory alone targets. Patients with schizophrenia showed less RT facilitation when detecting bimodal targets relative to non-psychiatric individuals, even when groups were matched for unimodal RTs. Within the schizophrenia group, RT benefit was correlated with negative symptoms, such that patients with greater negative symptoms showed the least RT facilitation (r2 = 0.20, p < 0.05). Additionally, schizophrenia patients who experienced both auditory and visual hallucinations showed less multisensory benefit compared to patients who experienced only auditory hallucinations, indicating that the presence of hallucinations in two modalities may more strongly impair MSI compared to hallucinations in only one modality.
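The summed-probability comparison described here is Miller's race-model inequality: under a race with no integration, the bimodal CDF cannot exceed the sum of the unimodal CDFs, G_AV(t) ≤ G_A(t) + G_V(t). A minimal sketch on toy data (the RT distributions below are simulated assumptions, not the patient data):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av,
                         quantiles=np.linspace(0.05, 0.95, 19)):
    """Violation of Miller's race-model bound, G_AV(t) <= G_A(t) + G_V(t),
    evaluated at quantiles of the bimodal RT distribution. Positive values
    mean the bound is exceeded (evidence of intersensory integration)."""
    t = np.quantile(rt_av, quantiles)
    cdf = lambda rts, at: np.searchsorted(np.sort(rts), at, side='right') / len(rts)
    g_av = cdf(rt_av, t)
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)  # bound capped at 1
    return g_av - bound

# Toy RTs in ms: bimodal detection markedly faster than both unimodal conditions
rng = np.random.default_rng(2)
rt_a = rng.normal(450, 60, 500)   # auditory alone
rt_v = rng.normal(430, 60, 500)   # visual alone
rt_av = rng.normal(380, 60, 500)  # audio-visual
viol = race_model_violation(rt_a, rt_v, rt_av)
```

With this toy facilitation the bound is exceeded at the fast quantiles (positive violation) but not at the slow ones, the typical signature of intersensory facilitation; in the study, patients showed less of this facilitation than controls.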

19.
20.
Here we investigate brain functional connectivity in patients with visual snow syndrome (VSS). Our main objective was to understand more about the underlying pathophysiology of this neurological syndrome. Twenty‐four patients with VSS and an equal number of gender‐ and age‐matched healthy volunteers attended MRI sessions in which whole‐brain maps of functional connectivity were acquired under two conditions: at rest while watching a blank screen and during a visual paradigm consisting of a visual‐snow‐like stimulus. Eight unilateral seed regions were selected a priori based on previous observations and hypotheses; four seeds were placed in key anatomical areas of the visual pathways and the remaining were derived from a pre‐existing functional analysis. The between‐group analysis showed that patients with VSS had hyper- and hypoconnectivity between key visual areas and the rest of the brain, both in the resting state and during visual stimulation, compared with controls. We found altered connectivity internally within the visual network; between the thalamus/basal ganglia and the lingual gyrus; and between the visual motion network and both the default mode and attentional networks. Further, patients with VSS presented decreased connectivity during external sensory input within the salience network, and between V5 and the precuneus. Our results suggest that VSS is characterised by a widespread disturbance in the functional connectivity of several brain systems. This dysfunction involves the pre‐cortical and cortical visual pathways, the visual motion network, the attentional networks and finally the salience network; further, it represents evidence of ongoing alterations both at rest and during visual stimulus processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号