Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The development of the human brain continues through to early adulthood. It has been suggested that cortical plasticity during this protracted period of development shapes circuits in associative transmodal regions of the brain. Here we considered how cortical plasticity during development might contribute to the coordinated brain activity required for speech motor learning. Specifically, we examined patterns of brain functional connectivity (FC) whose strength covaried with the capacity for speech audio-motor adaptation in children ages 5–12 and in young adults of both sexes. Children and adults showed distinct patterns of the encoding of learning in the brain. Adult performance was associated with connectivity in transmodal regions that integrate auditory and somatosensory information, whereas children relied on basic somatosensory and motor circuits. A progressive reliance on transmodal regions is consistent with human cortical development and suggests that human speech motor adaptation abilities are built on cortical remodeling, which is observable in late childhood and is stabilized in adults. SIGNIFICANCE STATEMENT: A protracted period of neuroplasticity during human development is associated with extensive reorganization of associative cortex. We examined how the relationship between FC and speech motor learning capacity is reconfigured in conjunction with this cortical reorganization. Young adults and children aged 5–12 years showed distinctly different patterns. Mature brain networks related to learning included associative cortex, which integrates auditory and somatosensory feedback in speech, whereas the immature networks in children included motor regions of the brain. These patterns are consistent with the cortical reorganization that is initiated in late childhood. The results provide insight into the human biology of speech as well as into the mature neural mechanisms for multisensory integration in motor learning.
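Editor's note: a minimal Python sketch of the kind of FC-behavior analysis described in the abstract above, assuming a simple seed-based approach. All ROI names, array shapes, and data are simulated and illustrative only, not taken from the study.

import numpy as np
from scipy import stats

n_subjects, n_timepoints = 30, 240
rng = np.random.default_rng(0)

# Simulated ROI time series: (subjects, timepoints) for two regions of interest
roi_a = rng.standard_normal((n_subjects, n_timepoints))   # e.g., an auditory seed (illustrative)
roi_b = rng.standard_normal((n_subjects, n_timepoints))   # e.g., a somatosensory target (illustrative)

# Per-subject FC: Pearson correlation between the two ROI time series,
# Fisher z-transformed for group-level statistics
fc = np.array([stats.pearsonr(a, b)[0] for a, b in zip(roi_a, roi_b)])
fc_z = np.arctanh(fc)

# Behavioral measure: per-subject audio-motor adaptation magnitude (simulated)
adaptation = rng.standard_normal(n_subjects)

# Across-subject brain-behavior correlation: does FC strength covary with learning?
r, p = stats.pearsonr(fc_z, adaptation)
print(f"FC-adaptation correlation: r = {r:.2f}, p = {p:.3f}")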

2.
This fMRI study explores brain regions involved with perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual only. The brain may use concordant audio and visual information to enhance perception by integrating the information in a converging multisensory site. Consistent with response properties of multisensory integration sites, enhanced activity in middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved with the planning and execution of speech production, observed in response to visual speech presented with degraded or absent auditory stimulation, is consistent with the use of an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.

3.
Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. To date, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13–7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. Correspondingly, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short (<30 min) letter-speech sound training initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038–1055, 2017. © 2016 Wiley Periodicals, Inc.

4.
This study examined unisensory and multisensory speech perception in 8- to 17-year-old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant–vowel syllables were presented in visual-only, auditory-only, matched audiovisual, and mismatched audiovisual (“McGurk”) conditions. Participants with ASD displayed deficits in visual-only and matched audiovisual speech perception. Additionally, children with ASD reported a visual influence on heard speech in response to mismatched audiovisual syllables over a wider window of time relative to controls. Correlational analyses revealed associations between multisensory speech perception, communicative characteristics, and responses to sensory stimuli in ASD. Results suggest atypical speech perception is linked to broader behavioral characteristics of ASD.

5.
Children use information from both the auditory and visual modalities to aid in understanding speech. A dramatic illustration of this multisensory integration is the McGurk effect, an illusion in which an auditory syllable is perceived differently when it is paired with an incongruent mouth movement. However, there are significant interindividual differences in McGurk perception: some children never perceive the illusion, while others always do. Because converging evidence suggests that the posterior superior temporal sulcus (STS) is a critical site for multisensory integration, we hypothesized that activity within the STS would predict susceptibility to the McGurk effect. To test this idea, we used BOLD fMRI in 17 children aged 6-12 years to measure brain responses to the following three audiovisual stimulus categories: McGurk incongruent, non-McGurk incongruent, and congruent syllables. Two separate analysis approaches, one using independent functional localizers and another using whole-brain voxel-based regression, showed differences in the left STS between perceivers and nonperceivers. The STS of McGurk perceivers responded significantly more than that of nonperceivers to McGurk syllables, but not to other stimuli, and perceivers' hemodynamic responses in the STS were significantly prolonged. In addition to the STS, weaker differences between perceivers and nonperceivers were observed in the fusiform face area and extrastriate visual cortex. These results suggest that the STS is an important source of interindividual variability in children's audiovisual speech perception.

6.
We live in a multisensory world and one of the challenges the brain is faced with is deciding what information belongs together. Our ability to make assumptions about the relatedness of multisensory stimuli is partly based on their temporal and spatial relationships. Stimuli that are proximal in time and space are likely to be bound together by the brain and ascribed to a common external event. Using this framework we can describe multisensory processes in the context of spatial and temporal filters or windows that compute the probability of the relatedness of stimuli. Whereas numerous studies have examined the characteristics of these multisensory filters in adults and discrepancies in window size have been reported between infants and adults, virtually nothing is known about multisensory temporal processing in childhood. To examine this, we compared the ability of 10- and 11-year-olds and adults to detect audiovisual temporal asynchrony. Findings revealed striking and asymmetric age-related differences. Whereas children were able to identify asynchrony as readily as adults when visual stimuli preceded auditory cues, significant group differences were identified at moderately long stimulus onset asynchronies (150-350 ms) where the auditory stimulus was first. Results suggest that changes in audiovisual temporal perception extend beyond the first decade of life. In addition to furthering our understanding of basic multisensory developmental processes, these findings have implications for disorders (e.g., autism, dyslexia) in which emerging evidence suggests alterations in multisensory temporal function.
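Editor's note: a minimal sketch of how an audiovisual temporal window is commonly estimated from judgments across stimulus onset asynchronies (SOAs), assuming a simple Gaussian fit. The SOA grid and response proportions below are invented for illustration; they are not the study's data.

import numpy as np
from scipy.optimize import curve_fit

# Negative SOAs = auditory-leading, positive = visual-leading (ms); data are made up
soa_ms = np.array([-350, -250, -150, -50, 0, 50, 150, 250, 350], dtype=float)
p_sync = np.array([0.15, 0.35, 0.70, 0.95, 0.98, 0.96, 0.85, 0.60, 0.40])

def gaussian(soa, amp, mu, sigma):
    # Proportion of "synchronous" responses modeled as a Gaussian over SOA
    return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa_ms, p_sync, p0=[1.0, 0.0, 150.0])
print(f"peak = {amp:.2f}, center = {mu:.0f} ms, window SD = {sigma:.0f} ms")
# Fitting each group (children vs. adults) separately, or the auditory-leading and
# visual-leading sides separately, is one way to expose the asymmetry reported above.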

7.
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech.
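Editor's note: for readers unfamiliar with MMN analysis, the sketch below shows the basic computation of a difference wave (deviant minus standard ERP). Epoch counts, the measurement window, and all data are illustrative assumptions, not the study's parameters.

import numpy as np

fs = 500                              # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.6, 1 / fs)      # epoch from -100 to 600 ms

rng = np.random.default_rng(1)
standard_epochs = rng.standard_normal((200, t.size))  # (epochs, samples), one channel, simulated
deviant_epochs = rng.standard_normal((40, t.size))

erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
mmn = erp_deviant - erp_standard      # difference wave

# Mean MMN amplitude in a typical 150-250 ms window (deviant minus standard)
window = (t >= 0.15) & (t <= 0.25)
print(f"mean MMN amplitude 150-250 ms: {mmn[window].mean():.3f} (a.u.)")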

8.
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

9.
In the present study, we investigated the possibility that bimodal audiovisual stimulation of the affected hemifield can improve perception of visual events in the blind hemifield of hemianopic patients, as was previously demonstrated in neglect patients. Moreover, it has been shown that "hetero-modal" and "sensory-specific" cortices are involved in cross-modal integration. Thus, the second aim of the present study was to examine whether audiovisual integration influences visual detection in patients with different cortical lesions responsible for different kinds of visual disorders. More specifically, we investigated cross-modal, audiovisual integration in patients with visual impairment due to a visual field deficit (e.g., hemianopia) or a visuospatial attentional deficit (e.g., neglect) and in patients with both hemianopia and neglect. Patients were asked to detect visual stimuli presented alone or in combination with auditory stimuli that could be spatially aligned or not with the visual ones. The results showed an enhancement of visual detection in the cross-modal (spatially aligned) condition compared to the unimodal visual condition only in patients with hemianopia or neglect; by contrast, multisensory integration did not occur when patients presented with both deficits. These data suggest that patients with visual disorders can benefit enormously from multisensory integration. Moreover, they show a differential influence of the cortical lesion site on multisensory integration. Thus, the present results show the important adaptive significance of multisensory integration and are very promising with respect to the possibility of recovery from visual and spatial impairments.

10.
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight on these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy. SIGNIFICANCE STATEMENT: During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.
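Editor's note: a minimal sketch of the analysis logic described above, using canonical correlation analysis (CCA) to index how strongly stimulus features are encoded in multichannel EEG and comparing the audiovisual (AV) response with the sum of the unimodal responses (A+V). All data are simulated and the feature/electrode dimensions are assumptions for illustration.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n_samples, n_features, n_channels = 5000, 16, 64

stim_features = rng.standard_normal((n_samples, n_features))   # e.g., spectrogram bands (simulated)
eeg_av = rng.standard_normal((n_samples, n_channels))          # AV condition EEG (simulated)
eeg_a = rng.standard_normal((n_samples, n_channels))           # audio-only EEG (simulated)
eeg_v = rng.standard_normal((n_samples, n_channels))           # visual-only EEG (simulated)
eeg_sum = eeg_a + eeg_v                                        # additive (A+V) model

def cca_score(stim, eeg, n_components=2):
    """First canonical correlation between stimulus features and EEG."""
    cca = CCA(n_components=n_components)
    u, v = cca.fit_transform(stim, eeg)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

r_av = cca_score(stim_features, eeg_av)
r_sum = cca_score(stim_features, eeg_sum)
print(f"encoding (AV) = {r_av:.3f}, encoding (A+V) = {r_sum:.3f}")
# An AV score reliably exceeding the A+V score would be taken as evidence of
# multisensory integration at that representational stage.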

11.
Is audiovisual integration subserved by the superior colliculus in humans?
The brain effectively integrates multisensory information to enhance perception. For example, audiovisual stimuli typically yield faster responses than isolated unimodal ones (redundant signal effect, RSE). Here, we show that the audiovisual RSE is likely subserved by a neural site of integration (neural coactivation), rather than by an independent-channels mechanism such as race models. This neural site is probably the superior colliculus (SC), because an RSE explainable by neural coactivation does not occur with purple or blue stimuli, which are invisible to the SC; such an RSE only occurs for spatially and temporally coincident audiovisual stimuli, in strict adherence with the multisensory responses in the SC of the cat. These data suggest that audiovisual integration in humans occurs very early during sensory processing, in the SC.
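Editor's note: a worked sketch of the standard race-model test referred to above (Miller's inequality). If, at some time t, P(RT <= t | AV) exceeds P(RT <= t | A) + P(RT <= t | V), independent-channel race models are rejected and coactivation is inferred. The reaction-time samples below are simulated for illustration.

import numpy as np

rng = np.random.default_rng(3)
rt_a = rng.normal(320, 40, 500)    # auditory-only reaction times (ms), simulated
rt_v = rng.normal(340, 45, 500)    # visual-only reaction times (ms), simulated
rt_av = rng.normal(280, 35, 500)   # audiovisual reaction times (ms), simulated

t_grid = np.arange(150, 600, 5)

def cdf(rts, t):
    # Empirical cumulative distribution function of reaction times evaluated on t
    return np.mean(rts[:, None] <= t[None, :], axis=0)

race_bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
violation = cdf(rt_av, t_grid) - race_bound
print(f"max race-model violation: {violation.max():.3f}")
# A positive maximum violation (tested against zero across participants) is the usual
# evidence for neural coactivation rather than a race between independent channels.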

12.
Intermodal binding between affective information that is seen as well as heard triggers a mandatory process of audiovisual integration. In order to track the time course of this audiovisual binding, event-related brain potentials were recorded while subjects saw a facial expression and concurrently heard an auditory fragment. The results suggest that the combination of the two inputs occurs early in time (110 ms post-stimulus) and translates into a specific enhancement in amplitude of the auditory N1 component. These findings are compatible with previous functional neuroimaging results of audiovisual speech showing strong audiovisual interactions in auditory cortex in the form of magnetic response amplifications, as well as with electrophysiological studies demonstrating early audiovisual interactions (before 200 ms post-stimulus). Moreover, our results show that the informational content present in the two modalities plays a crucial role in triggering the intermodal binding process.

13.
Associations between obstructive sleep apnea and motor speech disorders in adults have been suggested, though little has been written about possible effects of sleep apnea on speech acquisition in children with motor speech disorders. This report details the medical and speech history of a nonverbal child with seizures and severe apraxia of speech. For 6 years, he made no functional gains in speech production, despite intensive speech therapy. After tonsillectomy for obstructive sleep apnea at age 6 years, he experienced a reduction in seizures and rapid growth in speech production. The findings support a relationship between obstructive sleep apnea and childhood apraxia of speech. The rather late diagnosis and treatment of obstructive sleep apnea, especially in light of what was such a life-altering outcome (gaining functional speech), has significant implications. Most speech sounds develop during ages 2-5 years, which is also the peak time of occurrence of adenotonsillar hypertrophy and childhood obstructive sleep apnea. Hence it is important to establish definitive diagnoses, and to consider early and more aggressive treatments for obstructive sleep apnea, in children with motor speech disorders.

14.
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory–visual vs. visual–auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory–visual) did not affect the other type (e.g. visual–auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.

15.
The role of attention in multisensory integration (MI) is presently uncertain, with some studies supporting an automatic, pre-attentive process and others suggesting possible modulation through selective attention. The goal of this functional magnetic resonance imaging study was to investigate the role of spatial attention in the processing of congruent audiovisual speech stimuli (here indexing MI). Subjects were presented with two simultaneous visual streams (speaking lips in the left and right visual hemifields) plus a single central audio stream (spoken words). In the selective attention conditions, the auditory stream was congruent with one of the two visual streams. Subjects attended to either the congruent or the incongruent visual stream, allowing the comparison of brain activity for attended vs. unattended MI while the amount of multisensory information in the environment and the overall attentional requirements were held constant. Meridian mapping and a lateralized 'speaking-lips' localizer were used to identify early visual areas and to localize regions responding to contralateral visual stimulations. Results showed that attention to the congruent audiovisual stimulus resulted in increased activation in the superior temporal sulcus, striate and extrastriate retinotopic visual cortex, and superior colliculus. These findings demonstrate that audiovisual integration and spatial attention jointly interact to influence activity in an extensive network of brain areas, including associative regions, early sensory-specific visual cortex and subcortical structures that together contribute to the perception of a fused audiovisual percept.

16.
Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and temporal numerosity. In each experiment, performance for both groups was faster and more accurate when audiovisual information was congruent rather than incongruent. Importantly, audiovisual congruency did not affect the control group more than the autism group. These results suggest that the ability to integrate between the auditory and visual sense modalities is unimpaired among high-functioning adults with autism.

17.
The ability to establish associations between visual objects and speech sounds is essential for human reading. Understanding the neural adjustments required for acquisition of these arbitrary audiovisual associations can shed light on fundamental reading mechanisms and help reveal how literacy builds on pre-existing brain circuits. To address these questions, the present longitudinal and cross-sectional MEG studies characterize the temporal and spatial neural correlates of audiovisual syllable congruency in children (age range, 4–9 years; 22 males and 20 females) learning to read. Both studies showed that during the first years of reading instruction children gradually set up audiovisual correspondences between letters and speech sounds, which can be detected within the first 400 ms of a bimodal presentation and recruit the superior portions of the left temporal cortex. These findings suggest that children progressively change the way they treat audiovisual syllables as a function of their reading experience. This reading-specific brain plasticity implies (partial) recruitment of pre-existing brain circuits for audiovisual analysis. SIGNIFICANCE STATEMENT: Linking visual and auditory linguistic representations is the basis for the development of efficient reading, while dysfunctional audiovisual letter processing predicts future reading disorders. Our developmental MEG project included a longitudinal and a cross-sectional study; both studies showed that children's audiovisual brain circuits progressively change as a function of reading experience. They also revealed an exceptional degree of neuroplasticity in audiovisual neural networks, showing that as children develop literacy, the brain progressively adapts so as to better detect new correspondences between letters and speech sounds.

18.
Acoustic speech is easier to detect in noise when the talker can be seen. This finding could be explained by integration of multisensory inputs or refinement of auditory processing from visual guidance. In two experiments, we studied two-interval forced-choice detection of an auditory 'ba' in acoustic noise, paired with various visual and tactile stimuli that were identically presented in the two observation intervals. Detection thresholds were reduced under the multisensory conditions vs. the auditory-only condition, even though the visual and/or tactile stimuli alone could not inform the correct response. Results were analysed relative to an ideal observer for which intrinsic (internal) noise and efficiency were independent contributors to detection sensitivity. Across experiments, intrinsic noise was unaffected by the multisensory stimuli, arguing against the merging (integrating) of multisensory inputs into a unitary speech signal, but sampling efficiency was increased to varying degrees, supporting refinement of knowledge about the auditory stimulus. The steepness of the psychometric functions decreased with increasing sampling efficiency, suggesting that the 'task-irrelevant' visual and tactile stimuli reduced uncertainty about the acoustic signal. Visible speech was not superior for enhancing auditory speech detection. Our results reject multisensory neuronal integration and speech-specific neural processing as explanations for the enhanced auditory speech detection under noisy conditions. Instead, they support a more rudimentary form of multisensory interaction: the otherwise task-irrelevant sensory systems inform the auditory system about when to listen.
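Editor's note: a minimal sketch of fitting a psychometric function to two-interval forced-choice detection data to obtain a threshold and slope per condition (e.g., auditory-only vs. auditory plus task-irrelevant visual/tactile stimuli). Signal levels and proportion-correct values below are invented; the efficiency/internal-noise decomposition itself requires the ideal-observer model described in the abstract and is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

signal_db = np.array([-18, -15, -12, -9, -6, -3], dtype=float)  # SNR of the 'ba' (assumed levels)
p_correct = np.array([0.52, 0.58, 0.70, 0.84, 0.93, 0.98])      # one condition, simulated

def psychometric(x, mu, sigma):
    # 2IFC: performance rises from chance (0.5) to 1.0 as a cumulative Gaussian
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, signal_db, p_correct, p0=[-10.0, 3.0])
print(f"threshold (75% correct) ~ {mu:.1f} dB, slope parameter = {sigma:.1f} dB")
# Comparing thresholds and slopes across the auditory-only and multisensory conditions
# is one way to express the detection benefit and slope change reported above.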

19.
Previous studies have suggested that audiovisual multisensory integration (MSI) may be atypical in Autism Spectrum Disorder (ASD). However, much of the research that has found an alteration in MSI in ASD has involved socio-communicative stimuli. The goal of the current study was to investigate MSI abilities in ASD using lower-level stimuli that are not socio-communicative in nature by testing susceptibility to auditory-guided visual illusions. Adolescents and adults with ASD and typically-developing (TD) individuals were shown to have similar susceptibility to a fission illusion. However, the ASD group was significantly more susceptible to the fusion illusion. Results suggest that individuals with ASD demonstrate MSI on the flash-beep illusion task but that their integration of audiovisual sensory information may be less selective than for TD individuals.

20.
Ever since Kraepelin described what we nowadays call schizophrenia as dementia praecox, cognitive dysfunction has been regarded as central to its psychopathological profile. Disturbed experience and integration of emotions are, both intuitively and experimentally, likely to be intermediates between basic, non-social cognitive disturbances and functional outcome in schizophrenia. While a number of studies have consistently shown that, as part of social cognition, recognition of emotional faces and voices is disturbed in schizophrenia, studies on multisensory integration of facial and vocal affect are rare. We investigated audiovisual integration of emotional faces and voices in three groups: schizophrenic patients, non-schizophrenic psychosis patients and mentally healthy controls, all diagnosed by means of the Schedules of Clinical Assessment in Neuropsychiatry (SCAN 2.1). We found diminished crossmodal influence of emotional faces on emotional voice categorization in schizophrenic patients, but not in non-schizophrenia psychosis patients. Results are discussed in light of recent theories on multisensory integration.
