Similar Literature
20 similar articles retrieved.
1.
The integration of multiple sensory modalities is a key aspect of brain function, allowing animals to take advantage of concurrent sources of information to make more accurate perceptual judgments. For many years, multisensory integration in the cerebral cortex was deemed to occur only in high‐level “polysensory” association areas. However, more recent studies have suggested that cross‐modal stimulation can also influence neural activity in areas traditionally considered to be unimodal. In particular, several human neuroimaging studies have reported that extrastriate areas involved in visual motion perception are also activated by auditory motion, and may integrate audiovisual motion cues. However, the exact nature and extent of the effects of auditory motion on the visual cortex have not been studied at the single-neuron level. We recorded the spiking activity of neurons in the middle temporal (MT) and medial superior temporal (MST) areas of anesthetized marmoset monkeys upon presentation of unimodal stimuli (moving auditory or visual patterns), as well as bimodal stimuli (concurrent audiovisual motion). Despite robust, direction-selective responses to visual motion, none of the sampled neurons responded to auditory motion stimuli. Moreover, concurrent moving auditory stimuli had no significant effect on the ability of single MT and MST neurons, or populations of simultaneously recorded neurons, to discriminate the direction of motion of visual stimuli (moving random dot patterns with varying levels of motion noise). Our findings do not support the hypothesis that direct interactions between MT, MST and areas low in the hierarchy of auditory areas underlie audiovisual motion integration.
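A minimal, hypothetical sketch of the kind of direction-discrimination comparison described above: ROC analysis of simulated spike counts for visual-only versus audiovisual trials. The data and function names are illustrative assumptions, not the study's actual recordings or pipeline.

```python
# Hypothetical sketch: ROC-based direction discrimination for one neuron,
# compared between visual-only and audiovisual (visual + moving sound) trials.
# Spike counts are simulated; in the study these would come from recorded MT/MST units.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def discrimination_auc(pref_counts, null_counts):
    """Area under the ROC curve for discriminating preferred vs. opposite motion."""
    labels = np.r_[np.ones_like(pref_counts), np.zeros_like(null_counts)]
    counts = np.r_[pref_counts, null_counts]
    return roc_auc_score(labels, counts)

# Simulated spike counts (one value per trial) for the two motion directions.
visual_pref = rng.poisson(20, 100)   # preferred direction, visual only
visual_null = rng.poisson(8, 100)    # opposite direction, visual only
av_pref = rng.poisson(20, 100)       # same visual motion + concurrent auditory motion
av_null = rng.poisson(8, 100)

print("visual-only AUC:", discrimination_auc(visual_pref, visual_null))
print("audiovisual AUC:", discrimination_auc(av_pref, av_null))
# A null effect of auditory motion (as reported) would show similar AUCs in both conditions.
```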

2.
This study examined unisensory and multisensory speech perception in 8- to 17-year-old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant–vowel syllables were presented in visual-only, auditory-only, matched audiovisual, and mismatched audiovisual (“McGurk”) conditions. Participants with ASD displayed deficits in visual-only and matched audiovisual speech perception. Additionally, children with ASD reported a visual influence on heard speech in response to mismatched audiovisual syllables over a wider window of time relative to controls. Correlational analyses revealed associations between multisensory speech perception, communicative characteristics, and responses to sensory stimuli in ASD. Results suggest atypical speech perception is linked to broader behavioral characteristics of ASD.

3.
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight on these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy. SIGNIFICANCE STATEMENT: During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.
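A minimal sketch of a CCA-style encoding analysis like the one described above, run on simulated EEG and stimulus features with scikit-learn's CCA. All array sizes, mixing weights, and names are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch (hypothetical data): correlate multichannel EEG with a stimulus feature
# representation (e.g., spectrogram bands) via CCA, and compare encoding strength
# for AV responses against the summed A + V responses.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_times, n_eeg, n_feat = 5000, 64, 16

stimulus = rng.standard_normal((n_times, n_feat))   # e.g., spectrogram bands over time
eeg_av  = stimulus @ rng.standard_normal((n_feat, n_eeg)) * 0.3 + rng.standard_normal((n_times, n_eeg))
eeg_sum = stimulus @ rng.standard_normal((n_feat, n_eeg)) * 0.2 + rng.standard_normal((n_times, n_eeg))

def encoding_strength(eeg, features, n_components=2):
    """First canonical correlation between EEG and stimulus features."""
    cca = CCA(n_components=n_components)
    eeg_c, feat_c = cca.fit_transform(eeg, features)
    return np.corrcoef(eeg_c[:, 0], feat_c[:, 0])[0, 1]

print("AV encoding:   ", encoding_strength(eeg_av, stimulus))
print("A + V encoding:", encoding_strength(eeg_sum, stimulus))
# Stronger encoding for AV responses than for the A + V sum is taken as
# evidence of multisensory integration at that representational stage.
```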

4.
Previous studies have suggested audiovisual multisensory integration (MSI) may be atypical in Autism Spectrum Disorder (ASD). However, much of the research that has found an alteration in MSI in ASD involved socio-communicative stimuli. The goal of the current study was to investigate MSI abilities in ASD using lower-level stimuli that are not socio-communicative in nature by testing susceptibility to auditory-guided visual illusions. Adolescents and adults with ASD and typically-developing (TD) individuals were shown to have similar susceptibility to a fission illusion. However, the ASD group was significantly more susceptible to the fusion illusion. Results suggest that individuals with ASD demonstrate MSI on the flash-beep illusion task but that their integration of audiovisual sensory information may be less selective than for TD individuals.

5.
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.

6.
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low‐level mechanisms such as dynamics and navigation. In contrast, most human‐like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation‐based model of top‐down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human–robot interaction based on recent psychological and neurological findings.

7.
Evaluation of faces is an important dimension of social relationships. A degraded sensitivity to facial perceptual cues might contribute to atypical social interactions in autism spectrum disorder (ASD). The current study investigated whether face-based social judgment is atypical in ASD and if so, whether it could be related to a degraded sensitivity to facial perceptual cues. Individuals with ASD (n = 33) and IQ- and age-matched controls (n = 38) were enrolled in this study. Watching a series of photographic or synthetic faces, they had to judge them for “kindness”. In synthetic stimuli, the amount of perceptual cues available could be either large or small. We observed that social judgment was atypical in the ASD group on photographic stimuli, but, contrary to the prediction based on the degraded sensitivity hypothesis, analyses on synthetic stimuli found a similar performance and a similar effect of the amount of perceptual cues in both groups. Further studies on perceptual differences between photographs and synthetic pictures of faces might help understand atypical social judgment in ASD.

8.
On the neuronal basis for multisensory convergence: a brief overview
For multisensory stimulation to effect perceptual and behavioral responses, information from the different sensory systems must converge on individual neurons. A great deal is already known regarding processing within the separate sensory systems, as well as about many of the integrative and perceptual/behavioral effects of multisensory processing. However, virtually nothing is known about the functional architecture that underlies multisensory convergence even though it is an integral step in this processing sequence. This paper seeks to summarize the findings pertinent to multisensory convergence, and to initiate the identification of specific convergence patterns that may underlie different multisensory perceptual and behavioral effects.

9.
Thorne JD, Debener S. Neuroreport. 2008;19(5):553-557.
Multisensory behavioral benefits generally occur when one modality provides improved or disambiguating information to another. Here, we show benefits when no information is apparently provided. Participants performed an auditory frequency discrimination task in which auditory stimuli were paired with uninformative visual stimuli. Visual-auditory stimulus onset asynchrony was varied from -10 ms (sound first) to 80 ms without compromising perceptual simultaneity. In most stimulus onset asynchrony conditions, response times to audiovisual pairs were significantly shorter than those to auditory-alone controls. This suggests a general processing advantage for multisensory stimuli over unisensory stimuli, even when only one modality is informative. Response times were shortest with an auditory delay of 65 ms, indicating an audiovisual 'perceptual optimum' that may be related to processing simultaneity.
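An illustrative sketch, with simulated response times, of how a multisensory RT benefit could be summarized per SOA condition. The SOA values follow the abstract, but the RT numbers and the built-in "optimum" are fabricated for illustration only.

```python
# Hypothetical sketch of the response-time analysis described above: mean RTs for
# audiovisual pairs at each visual-auditory SOA, compared against auditory-alone trials.
import numpy as np

rng = np.random.default_rng(2)
soas_ms = [-10, 0, 20, 40, 65, 80]   # visual-auditory stimulus onset asynchronies

# Simulated RTs (ms); real data would come from the frequency-discrimination task.
auditory_alone = rng.normal(520, 60, 200)
audiovisual = {soa: rng.normal(520 - 30 * np.exp(-((soa - 65) / 40) ** 2), 60, 200)
               for soa in soas_ms}

baseline = auditory_alone.mean()
for soa, rts in audiovisual.items():
    benefit = baseline - rts.mean()
    print(f"SOA {soa:+4d} ms: mean RT {rts.mean():6.1f} ms, benefit {benefit:5.1f} ms")
# The reported 'perceptual optimum' corresponds to the SOA with the largest RT benefit.
```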

10.
Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces and voices, but scored similarly to children without ASD on audiovisual tasks involving nonhuman stimuli (bouncing balls). Results suggest that children with ASD may use visual information for speech differently from children without ASD. Exploratory results support an inverse association between audiovisual speech processing capacities and social impairment in children with ASD.

11.
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this apparent dichotomy between extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD and 14 autistic participants matched on IQ completed a forced-choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory-first and visual-first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD.
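A hedged sketch of one way to estimate a congruent-versus-incongruent divergence latency from trial-level ERPs. The data are simulated and the pointwise t-test with a consecutive-significance criterion is an illustrative assumption, not necessarily the method used in the study above.

```python
# Illustrative sketch (simulated data): estimate the latency at which congruent and
# incongruent ERPs diverge, as the first time point where a pointwise t-test stays
# significant for a run of consecutive samples.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
fs, n_trials, n_times = 500, 40, 400          # 500 Hz sampling, 800 ms epoch

congruent = rng.standard_normal((n_trials, n_times))
incongruent = rng.standard_normal((n_trials, n_times))
incongruent[:, 150:] += 0.8                    # simulated divergence starting at 300 ms

p = ttest_ind(congruent, incongruent, axis=0).pvalue
sig = p < 0.05
run = 10                                       # require 20 ms of contiguous significance
onsets = [t for t in range(n_times - run) if sig[t:t + run].all()]
if onsets:
    print(f"estimated divergence latency: {1000 * onsets[0] / fs:.0f} ms")
```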

12.
Background: Socially assistive robots have the potential to become a powerful therapeutic tool for individuals affected by Autism Spectrum Disorder (ASD). However, to date, only a few studies have explored the efficacy of robot-assisted training embedded in structured clinical protocols. The current study aimed at investigating the beneficial effects of introducing a toy robot, as a new tool for clinicians, in the treatment plan carried out by an Italian healthcare institution. Method: In collaboration with the healthcare professionals of Piccolo Cottolengo Genovese di Don Orione, we designed a robot-mediated activity aimed at improving social skills in children with ASD. Twenty-four children with ASD (age = 5.79 ± 1.02 years, 5 females) completed the activities with the robot in a cross-over design, during a period of ten weeks. Their social skills were assessed before and after the robot intervention activities, using the Early Social Communication Scale (ESCS). Results: The combination of robot-assisted training with standard therapy was more effective than standard therapy alone in improving social skills. Specifically, after the robot-assisted training, children with ASD improved in their ability to generate and respond to behavioral requests, and in their tendency to initiate and maintain social interaction with the adult. Conclusions: Our results support the idea that robot-assisted interventions can be combined with the standard treatment plan to improve clinical outcomes.

13.
Background: Individuals with autism spectrum disorder (ASD) tend to show deficits in engaging with humans. Previous findings have shown that robot-based training improves the gestural recognition and production of children with ASD. It is not known whether social robots perform better than human therapists in teaching children with ASD. Aims: The present study aims to compare the learning outcomes in children with ASD and intellectual disabilities from robot-based intervention on gestural use to those from human-based intervention. Methods and procedures: Children aged 6 to 12 with low-functioning autism were randomly assigned to the robot group (N = 12) and human group (N = 11). In both groups, human experimenters or social robots engaged in daily life conversations and demonstrated 14 intransitive gestures to the children in a highly structured and standardized intervention protocol. Outcomes and results: Children with ASD in the human group were as likely to recognize gestures and produce them accurately as those in the robot group in both training and new conversations. Their learning outcomes were maintained for at least two weeks. Conclusions and implications: The social cues found in the human-based intervention might not influence gestural learning. It does not matter who serves as the teaching agent when the lessons are highly structured.

14.
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
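A small illustrative calculation, with made-up accuracy values, of how audiovisual enhancement can be summarized across SNRs as the difference between audiovisual and auditory-only word-recognition accuracy. The groups, SNR levels, and numbers are assumptions for the sketch, not the study's data.

```python
# Sketch: audiovisual enhancement in word recognition computed as AV minus
# auditory-only proportion correct at each signal-to-noise ratio.
import numpy as np

snrs_db = np.array([-12, -9, -6, -3, 0])

# Hypothetical proportion-correct values per group and condition.
auditory_only = {"adults":   np.array([0.20, 0.35, 0.55, 0.75, 0.90]),
                 "children": np.array([0.15, 0.28, 0.45, 0.65, 0.85])}
audiovisual   = {"adults":   np.array([0.55, 0.70, 0.85, 0.92, 0.96]),
                 "children": np.array([0.25, 0.40, 0.60, 0.75, 0.90])}

for group in ("adults", "children"):
    gain = audiovisual[group] - auditory_only[group]
    best = snrs_db[np.argmax(gain)]
    print(f"{group}: AV gain by SNR {dict(zip(snrs_db.tolist(), gain.round(2).tolist()))}, "
          f"largest at {best} dB SNR")
# Smaller gains for children would reflect the protracted development reported above.
```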

15.
Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls. Thirty-one children with ASD and 31 age- and IQ-matched TD children (average age = 12 years) were presented with simple visual (i.e., flash) and auditory (i.e., beep) stimuli of varying number. In illusory conditions, a single flash was presented with 2–4 beeps. In TD children, these conditions generally result in the perception of multiple flashes, implying a perceptual fusion across vision and audition. In the present study, children with ASD were significantly less likely to perceive the illusion relative to TD controls, suggesting that multisensory integration and cross-modal binding may be weaker in some children with ASD. These results are discussed in the context of previous findings for multisensory integration in ASD and future directions for research.
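A brief sketch, on simulated responses, of how susceptibility to the sound-induced flash illusion is typically quantified: the proportion of single-flash, multi-beep trials reported as containing more than one flash. The response probabilities below are made up for illustration.

```python
# Hedged sketch: illusion susceptibility as the fraction of illusory trials
# (1 flash + 2-4 beeps) on which more than one flash is reported.
import numpy as np

rng = np.random.default_rng(3)

def illusion_rate(reported_flashes):
    """Fraction of illusory trials reported as more than one flash."""
    reported_flashes = np.asarray(reported_flashes)
    return np.mean(reported_flashes > 1)

# Simulated per-trial reports (1 or 2 flashes) for each hypothetical group.
td_reports  = rng.choice([1, 2], size=120, p=[0.35, 0.65])   # frequent illusion
asd_reports = rng.choice([1, 2], size=120, p=[0.65, 0.35])   # weaker fusion (as reported)

print("TD illusion rate: ", illusion_rate(td_reports))
print("ASD illusion rate:", illusion_rate(asd_reports))
```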

16.
Multisensory interactions are a fundamental feature of brain organization. Principles governing multisensory processing have been established by varying stimulus location, timing and efficacy independently. Determining whether and how such principles operate when stimuli vary dynamically in their perceived distance (as when looming/receding) provides an assay for synergy among the above principles and also a means of linking multisensory interactions between rudimentary stimuli with higher-order signals used for communication and motor planning. Human participants indicated movement of looming or receding versus static stimuli that were visual, auditory, or multisensory combinations while 160-channel EEG was recorded. Multivariate EEG analyses and distributed source estimations were performed. Nonlinear interactions between looming signals were observed at early poststimulus latencies (~75 ms) in analyses of voltage waveforms, global field power, and source estimations. These looming-specific interactions positively correlated with reaction time facilitation, providing direct links between neural and performance metrics of multisensory integration. Statistical analyses of source estimations identified looming-specific interactions within the right claustrum/insula extending inferiorly into the amygdala and also within the bilateral cuneus extending into the inferior and lateral occipital cortices. Multisensory effects common to all conditions, regardless of perceived distance and congruity, followed (~115 ms) and manifested as a faster transition between temporally stable brain networks (vs summed responses to unisensory conditions). We demonstrate the early-latency, synergistic interplay between existing principles of multisensory interactions. Such findings change the manner in which to model multisensory interactions at neural and behavioral/perceptual levels. We also provide neurophysiologic backing for the notion that looming signals receive preferential treatment during perception.
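A minimal sketch of the additive-model logic mentioned above (multisensory AV response versus the summed unisensory A + V responses), computed on simulated ERPs via global field power. Channel counts and amplitudes are illustrative assumptions, not the study's data or full analysis.

```python
# Hypothetical sketch: compare the global field power (GFP) of the multisensory (AV)
# response with the GFP of the summed unisensory (A + V) responses; deviations from
# the sum index nonlinear multisensory interactions.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 160, 300           # e.g., 160-channel EEG, 300 time samples

def gfp(erp):
    """Global field power: spatial standard deviation across electrodes at each time point."""
    return erp.std(axis=0)

# Simulated average ERPs (channels x time) per condition.
erp_a  = rng.standard_normal((n_channels, n_times)) * 0.5
erp_v  = rng.standard_normal((n_channels, n_times)) * 0.5
erp_av = erp_a + erp_v + rng.standard_normal((n_channels, n_times)) * 0.2  # extra "interaction"

interaction = gfp(erp_av) - gfp(erp_a + erp_v)    # AV minus summed unisensory
print("largest GFP interaction at sample:", int(np.argmax(np.abs(interaction))))
```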

17.
Autism is a complex disorder, characterized by social, cognitive, communicative, and motor symptoms. One suggestion, proposed in the current study, to explain the spectrum of symptoms is an underlying impairment in multisensory integration (MSI) systems such as a mirror neuron-like system. The mirror neuron system, thought to play a critical role in skills such as imitation, empathy, and language, can be thought of as a multisensory system, converting sensory stimuli into motor representations. Consistent with this, we report preliminary evidence for deficits on a task thought to tap into MSI, the "bouba-kiki task", in children with ASD. The bouba-kiki effect is produced when subjects are asked to pair nonsense shapes with nonsense "words". We found that neurotypical children chose the nonsense "word" whose phonemic structure corresponded with the visual shape of the stimuli 88% of the time. This is presumably because of mirror neuron-like multisensory systems that integrate the visual shape with the corresponding motor gestures used to pronounce the nonsense word. Surprisingly, individuals with ASD chose the corresponding name only 56% of the time. The poor performance by the ASD group on this task suggests a deficit in MSI, perhaps related to impaired MSI brain systems. Though this is a behavioral study, it provides a testable hypothesis for the communication impairments in children with ASD that implicates a specific neural system and fits well with the current findings suggesting an impairment in the mirror systems in individuals with ASD.

18.
In autism spectrum disorder (ASD), atypical integration of visual depth cues may be due to flattened perceptual priors or selective fusion. The current study attempts to disentangle these explanations by psychophysically assessing within-modality integration of ordinal (occlusion) and metric (disparity) depth cues while accounting for sensitivity to stereoscopic information. Participants included 22 individuals with ASD and 23 typically developing matched controls. Although adults with ASD were found to have significantly poorer stereoacuity, they were still able to automatically integrate conflicting depth cues, lending support to the idea that priors are intact in ASD. However, dissimilarities in response speed variability between the ASD and TD groups suggest that there may be differences in the perceptual decision-making aspect of the task.

19.
This fMRI study explores brain regions involved with perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual only. The brain may use concordant audio and visual information to enhance perception by integrating the information in a converging multisensory site. Consistent with response properties of multisensory integration sites, enhanced activity in middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity found in brain regions involved with planning and execution of speech production in response to visual speech presented with degraded or absent auditory stimulation is consistent with the use of an additional pathway through which speech perception is facilitated by a process of internally simulating the intended speech act of the observed speaker.

20.
Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost-effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task-related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user’s visual observation of the robot and the robot’s auditory cues affect the user’s ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot’s internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot’s movements.
