Similar Articles
20 similar articles found.
1.
Seeing or hearing manual actions activates the mirror neuron system, i.e., specialized neurons within motor areas that fire not only when an action is performed but also when it is passively perceived. Although it has been shown that mirror neurons respond to either action-specific vision or sound, it remains a topic of debate whether and how vision and sound interact during action perception. Here we used transcranial magnetic stimulation to explore multimodal interactions in the human motor system, namely at the level of the primary motor cortex (M1). Corticomotor excitability in M1 was measured while subjects perceived unimodal visual (V), unimodal auditory (A), or multimodal (V + A) stimuli of a simple hand action. In addition, incongruent multimodal stimuli were included, in which incongruent vision or sound was presented simultaneously with the auditory or visual action stimulus. A selective response increase was observed for the congruent multimodal stimulus compared to the unimodal and incongruent multimodal stimuli. These findings speak in favour of ‘shared’ action representations in the human motor system that are evoked in a ‘modality-dependent’ way, i.e., they are elicited most robustly by the simultaneous presentation of congruent auditory and visual stimuli. Multimodality in the perception of hand movements bears functional similarities to speech perception, suggesting that multimodal convergence is a generic feature of the mirror system that applies to action perception in general.

2.
Misophonia is a common disorder characterized by the experience of strong negative emotions of anger and anxiety in response to certain everyday sounds, such as those generated by other people eating, drinking, and breathing. The commonplace nature of these “trigger” sounds makes misophonia a devastating disorder for sufferers and their families. How such innocuous sounds trigger this response is unknown. Since most trigger sounds are generated by orofacial movements (e.g., chewing) in others, we hypothesized that the mirror neuron system related to orofacial movements could underlie misophonia. We analyzed resting state fMRI (rs-fMRI) connectivity (N = 33, 16 females) and sound-evoked fMRI responses (N = 42, 29 females) in misophonia sufferers and controls. We demonstrate that, compared with controls, the misophonia group show no difference in auditory cortex responses to trigger sounds, but do show: (1) stronger rs-fMRI connectivity between both auditory and visual cortex and the ventral premotor cortex responsible for orofacial movements; (2) stronger functional connectivity between the auditory cortex and the orofacial motor area during sound perception in general; and (3) stronger activation of the orofacial motor area, specifically, in response to trigger sounds. Our results support a model of misophonia based on “hyper-mirroring” of the orofacial actions of others, with sounds being the “medium” via which the actions of others are excessively mirrored. Misophonia is therefore not an abreaction to sounds per se, but a manifestation of activity in parts of the motor system involved in producing those sounds. This new framework for understanding misophonia can explain behavioral and emotional responses and has important consequences for devising effective therapies. SIGNIFICANCE STATEMENT: Conventionally, misophonia, literally “hatred of sounds,” has been considered a disorder of sound emotion processing, in which “simple” eating and chewing sounds produced by others cause negative emotional responses. Our data provide an alternative but complementary perspective on misophonia that emphasizes the action of the trigger person rather than the sounds, which are a byproduct of that action. Sounds, in this new perspective, are only a “medium” via which the action of the triggering person is mirrored onto the listener. This change in perspective has important consequences for devising therapies and treatment methods for misophonia. It suggests that, instead of focusing on sounds, as many existing therapies do, effective therapies should target the brain representation of movement.
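The abstract does not spell out the connectivity computation; a common approach for the resting-state comparison it describes is seed-based functional connectivity, i.e., correlating the mean BOLD time series of two regions of interest and Fisher-z-transforming the result before a group test. A minimal sketch with synthetic data (ROI masks, array shapes, and group sizes are illustrative, not the paper's pipeline):

```python
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)

def roi_connectivity(bold, mask_a, mask_b):
    """Fisher-z connectivity between the mean time series of two ROIs.
    bold: (n_voxels, n_timepoints); masks: boolean, (n_voxels,)."""
    ts_a = bold[mask_a].mean(axis=0)        # mean BOLD signal in ROI A
    ts_b = bold[mask_b].mean(axis=0)        # mean BOLD signal in ROI B
    r, _ = pearsonr(ts_a, ts_b)
    return np.arctanh(r)                    # Fisher z: safe to average/test

# Toy data: 100 voxels x 200 volumes per subject; the first and last 10
# voxels stand in for auditory cortex and ventral premotor cortex.
mask_aud = np.zeros(100, dtype=bool); mask_aud[:10] = True
mask_pmv = np.zeros(100, dtype=bool); mask_pmv[-10:] = True
miso = [rng.standard_normal((100, 200)) for _ in range(16)]
ctrl = [rng.standard_normal((100, 200)) for _ in range(17)]

z_miso = [roi_connectivity(s, mask_aud, mask_pmv) for s in miso]
z_ctrl = [roi_connectivity(s, mask_aud, mask_pmv) for s in ctrl]
print(ttest_ind(z_miso, z_ctrl))            # group difference in connectivity
```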

3.
The observation of actions can lead, in some cases, to the repetition of those same actions; in other words, motor programs similar to those observed can be recruited. Since this phenomenon is expressed in the presence of another individual, it has been named social facilitation. In the present study we investigated whether observing and/or hearing eating actions facilitates eating behaviors in observing/listening pig-tailed macaques. In experiment 1, the observation of an eating roommate significantly enhanced eating behavior in the observer. Similar results were obtained (experiment 2) in response to the sound of eating actions but not to control sounds (experiment 3). We propose that the eating facilitation triggered by observing or hearing eating actions can rely on the mirror neuron system of the ventral premotor cortex, which provides a matching between the observed/heard action and the executed action. This matching system can subsequently trigger the motor programs necessary for repeating the observed/heard actions.

4.
The observation of an action modulates motor cortical outputs in specific ways, in part through mediation of the mirror neuron system. Sometimes we infer a meaning to an observed action based on integration of the actual percept with memories. Here, we conducted a series of experiments in healthy adults to investigate whether such inferred meanings can also modulate motor cortical outputs in specific ways. We show that brief observation of a neutral stimulus mimicking a hand does not significantly modulate motor cortical excitability (Study 1), although, after prolonged exposure, it can lead to a relatively nonspecific modulation (Study 2). However, when such a neutral stimulus is preceded by exposure to a hand stimulus, the latter appears to serve as a prime, perhaps lending meaning to the neutral stimulus, which then modulates motor cortical excitability in accordance with mirror neuron-driving properties (Studies 2 and 3). Overall, the results suggest that a symbolic value ascribed to an otherwise neutral stimulus can modulate motor cortical outputs, revealing the influence of top-down inputs on the mirror neuron system. These findings indicate a novel aspect of the human mirror neuron system: an otherwise neutral stimulus can acquire specific mirror neuron-driving properties in the absence of a direct association between motor practice and perception. This significant malleability in the way the mirror neuron system can code otherwise meaningless (i.e., arbitrarily associated) stimuli may contribute to coding communicative signals such as language. This may represent a mirror neuron system feature that is unique to humans.

5.
Synesthesia is a condition in which perceptual or cognitive stimuli (e.g., a written letter) trigger atypical additional percepts (e.g., the color yellow). Although these cross-modal pairings appear idiosyncratic in that they superficially differ from synesthete to synesthete, underlying patterns do exist and these can, in some circumstances, reflect the cross-modal intuitions of nonsynesthetes (e.g., higher pitch sounds tend to be “seen” in lighter colors by synesthetes and are also paired to lighter colors by nonsynesthetes in cross-modal matching tasks). We recently showed that grapheme-color synesthetes are more sensitive to sound symbolism (i.e., cross-modal sound-meaning correspondences) in natural language compared to nonsynesthetes. Accordingly, we hypothesize that sound symbolism may be a guiding force in synesthesia to dictate what types of synesthetic experiences are triggered by words. We tested this hypothesis by examining the cross-modal mappings of lexical–gustatory synesthete, JIW, for whom words trigger flavor experiences. We show that certain phonological features (e.g., front vowels) systematically trigger particular categories of taste (e.g., bitter) in his synesthesia. Some of these associations agree with sound symbolic patterns in natural language. This supports the view that synesthesia may be an exaggeration of cross-modal associations found in the general population and that sound symbolic properties of language may arise from similar mechanisms as those found in synesthesia.

6.
In order to explore the activation dynamics of the human action recognition system, we investigated electrophysiological distinctions between the brain responses to sounds produced by human finger and tongue movements. Of special interest were the questions of how early these differences may occur, and whether the neural activation at the early stages of processing involves cortical motor representations related to the generation of these sounds. For this purpose we employed a high-density EEG set-up and recorded mismatch negativity (MMN) using a recently developed multideviant paradigm, which allows acquisition of a large number of trials within a given time period. Deviant stimuli were naturally recorded finger and tongue clicks, as well as control stimuli with similar physical features but without the clear action associations (this was tested in a separate behavioural experiment). Both natural stimuli produced larger MMNs than their respective control stimuli at approximately 100 ms, indicating activation of memory traces for familiar action-related sounds. Furthermore, MMN topography at this latency differed between the brain responses to the natural finger and natural tongue sounds. Source estimation revealed the strongest sources for finger sounds in centrolateral areas of the left hemisphere, suggesting that hearing a sound related to finger actions evokes activity in motor areas associated with the dominant hand. Tongue sounds, by contrast, produced activation in more inferior brain areas. Our data suggest that motor areas in the human brain are part of neural systems subserving the early automatic recognition of action-related sounds.
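The construction of the multideviant paradigm is not detailed in the abstract; the general idea is to interleave several deviant types among repeated standards so that every deviant accrues trials within a single session. A hypothetical sketch (the stimulus labels and the no-consecutive-deviants constraint are assumptions):

```python
import random

def multideviant_sequence(n_trials, deviants, p_deviant=0.3, seed=1):
    """Interleave several deviant types among 'standard' tokens. Assumed
    constraint: no two deviants in a row, so each deviant is preceded by
    at least one standard."""
    rng = random.Random(seed)
    seq, prev_deviant = [], True            # force a standard first
    for _ in range(n_trials):
        if not prev_deviant and rng.random() < p_deviant:
            seq.append(rng.choice(deviants))
            prev_deviant = True
        else:
            seq.append('standard')
            prev_deviant = False
    return seq

seq = multideviant_sequence(2000, ['finger_click', 'tongue_click',
                                   'finger_control', 'tongue_control'])
print({label: seq.count(label) for label in set(seq)})  # trials per type
```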

7.
The observation that neurons in the motor cortex of the monkey are active both when the monkey performs a specific action and when it watches an actor executing the same action led to the mirror-system hypothesis. This hypothesis suggests that primates perceive and interpret others' actions by generating an internal motor representation (e.g., a simulation). Recent evidence suggests that humans have a similar mirror system. In this review, we focus on the essential congruence between the motor and visual properties of an action. We summarize behavioral and imaging studies in humans showing that observing others' actions can interfere with our own motor execution. We discuss a framework for understanding such an internal representation and suggest that the activity in the parietal cortex during observation of others' actions is based on the sensory-to-motor remapping properties of this region, which are necessary for fine control of our own actions.

8.
Sziklas V, Petrides M. Hippocampus. 2004;14(8):931-934.
Rats with lesions of the hippocampus, the mammillary region, or the anterior thalamic nuclei, together with normal control animals, were trained on a conditional associative learning task in which they had to learn to make one of two motor responses (i.e., turn left or right), depending on which one of two visual cues was presented. Damage to the hippocampus severely impaired performance of this task. By contrast, rats with lesions of the mammillary region or the anterior thalamic nuclei were able to acquire the task at a rate comparable to that of the normal animals. These findings demonstrate that hippocampal lesions impair the ability to form arbitrary associations between visual cues and kinesthetic responses (body turns) and, furthermore, suggest that the hippocampus does not rely on input from its major subcortical targets for learning such visual-kinesthetic associations.

9.
Deficit of auditory space perception in patients with visuospatial neglect
There have been many studies of visuospatial neglect, but fewer studies of neglect in relation to other sensory modalities. In the present study we investigated the performance of six right brain-damaged (RBD) patients with left visual neglect and six RBD patients without neglect in an auditory spatial task. Previous work on sound localisation in neglect patients adopted measures of sound localisation based on directional motor responses (e.g., pointing to sounds) or on judgements of sound position with respect to the body midline (auditory midline task). However, these measures might be influenced by non-auditory biases related to motor and egocentric components. Here we adopted a perceptual measure of sound localisation, consisting of a verbal judgement of the relative position (same or different) of two sequentially presented sounds. This task was performed in a visual and in a blindfolded condition. The results revealed that the sound localisation performance of visuospatial neglect patients was severely impaired with respect to that of the RBD controls, especially when sounds originated in contralesional hemispace. In that condition, neglect patients were always unable to discriminate the relative position of the two sounds. No difference in performance emerged as a function of the visual condition in either group. These results demonstrate a perceptual deficit of sound localisation in patients with visuospatial neglect, suggesting that the spatial deficits of these patients can arise multimodally for the same portion of external space.
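The abstract does not state how same/different accuracy was scored; a standard bias-free summary for such a task is signal-detection d′, computed from hit and false-alarm rates (shown here in the simple yes/no approximation, with a log-linear correction for extreme rates — the trial counts are hypothetical):

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Yes/no d' from raw trial counts; the log-linear correction keeps
    hit/false-alarm rates away from 0 and 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(fa)

# Example: 'different' responses on different-position vs. same-position
# trials for one (hypothetical) patient.
print(dprime(hits=18, misses=12, false_alarms=9, correct_rejections=21))
```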

10.
In the absence of visual information, our brain is able to recognize the actions of others by representing their sounds as a motor event. Previous studies have provided evidence for a somatotopic activation of the listener's motor cortex during perception of the sound of highly familiar motor acts. The present experiments studied (a) how the motor system is activated by action-related sounds that are newly acquired and (b) whether these sounds are represented with reference to extrinsic features related to action goals rather than with respect to lower-level intrinsic parameters related to the specific movements. TMS was used to measure the correspondence between auditory and motor codes in the listener's motor system. We compared corticomotor excitability in response to the presentation of auditory stimuli devoid of previous motor meaning before and after a short training period in which these stimuli were associated with voluntary actions. Novel cross-modal representations became manifest very rapidly. By disentangling the representation of the muscle from that of the action's goal, we further showed that passive listening to newly learnt action-related sounds activated a precise motor representation that depended on the variable contexts to which the individual was exposed during testing. Our results suggest that the human brain embodies a higher-order audio-visuo-motor representation of perceived actions, which is muscle-independent and corresponds to the goals of the action.

11.
Recent neurophysiological evidence has shown that sound position can be coded in multiple frames of reference in the animal brain (i.e., head-centred, eye-centred, or intermediate head/eye-centred). Here, we provide evidence for multiple coding of sound position in humans, by studying pointing to sounds in 14 right brain-damaged (RBD) patients with or without visual neglect (a visuospatial neurological disturbance typically affecting contralesional space). Patients were asked to indicate the position of free-field sounds, either with a hand-pointing or with a head-turning response. Pointing movements were performed either blindfolded or with eyes open, but no visual feedback was available about sound position or the motor response. All RBD patients showed some impairment in sound localisation, particularly for sounds towards the contralesional side. In addition, task-irrelevant vision was more detrimental for hand-pointing than for head-turning responses, but only for neglect patients. We propose that this finding reflects visual coding of sound position when the eyes are open, which extends the pathological visuospatial bias of neglect patients to sound localisation. Moreover, the absence of any modulatory effect of ambient vision when head-turning responses were adopted suggests task-dependent visual coding of sound position, in agreement with multiple frames of reference for sound localisation.

12.
Sound discrimination is essential in many species for communicating and foraging. Bats, for example, use sounds for echolocation and communication. In the bat auditory cortex there are neurons that process both sound categories, but how these neurons respond to acoustic transitions, that is, echolocation streams followed by a communication sound, remains unknown. Here, we show that the acoustic context, a leading sound sequence followed by a target sound, changes neuronal discriminability of echolocation versus communication calls in the cortex of awake bats of both sexes. Nonselective neurons that fire equally well to both echolocation and communication calls in the absence of context become category-selective when leading context is present. On the contrary, neurons that prefer communication sounds in the absence of context turn into nonselective ones when context is added. The presence of context leads to an overall response suppression, but the strength of this suppression is stimulus specific. Suppression is strongest when context and target sounds belong to the same category, e.g., echolocation followed by echolocation. A neuron model of stimulus-specific adaptation replicated our results in silico. The model predicts selectivity to communication and echolocation sounds in the inputs arriving at the auditory cortex, as well as two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs and stimulus-unspecific postsynaptic adaptation. In addition, the model predicted that context effects can last up to 1.5 s after context offset and that synaptic inputs tuned to low-frequency sounds (communication signals) have the shortest decay constant of presynaptic adaptation. SIGNIFICANCE STATEMENT: We studied cortical responses to isolated calls and call mixtures in awake bats and show that (1) two neuronal populations coexist in the bat cortex, including neurons that discriminate social from echolocation sounds well and neurons that are equally driven by these two ethologically different sound types; (2) acoustic context (i.e., other natural sounds preceding the target sound) affects natural sound selectivity in a manner that could not be predicted based on responses to isolated sounds; and (3) a computational model similar to those used for explaining stimulus-specific adaptation in rodents can account for the responses observed in the bat cortex to natural sounds. This model depends on segregated feedforward inputs, synaptic depression, and postsynaptic neuronal adaptation.
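The abstract specifies the model only at the level of its ingredients: segregated frequency-tuned inputs with presynaptic, frequency-specific synaptic depression, plus stimulus-unspecific postsynaptic adaptation. A toy sketch of that architecture (all constants and the two-channel reduction are invented for illustration):

```python
import numpy as np

def simulate_response(stim, w, tau_pre, tau_post, dt=0.01,
                      depletion=0.4, gain_post=0.5):
    """Toy stimulus-specific adaptation model.
    stim: (n_channels, n_steps) binary drive; here channel 0 stands for
    low-frequency (communication) and channel 1 for high-frequency
    (echolocation) input. tau_pre: per-channel recovery constants of
    presynaptic depression; tau_post: constant of stimulus-unspecific
    postsynaptic adaptation."""
    n_ch, n_steps = stim.shape
    a = np.ones(n_ch)                  # presynaptic resources per channel
    g = 0.0                            # postsynaptic adaptation state
    out = np.zeros(n_steps)
    for t in range(n_steps):
        drive = np.sum(w * a * stim[:, t])
        r = max(drive - gain_post * g, 0.0)
        out[t] = r
        a -= depletion * a * stim[:, t]       # deplete only the active input
        a += dt * (1.0 - a) / tau_pre         # frequency-specific recovery
        g += dt * (r - g) / tau_post          # tracks overall output
    return out

w = np.array([1.0, 1.0])
tau_pre = np.array([0.5, 1.5])         # faster recovery for low frequencies
stim = np.zeros((2, 200))
stim[1, 20:100:10] = 1.0               # context: repeated echolocation pulses
stim[1, 150] = 1.0                     # target of the same category
same = simulate_response(stim, w, tau_pre, tau_post=1.0)[150]
stim[1, 150], stim[0, 150] = 0.0, 1.0  # swap in a communication target
cross = simulate_response(stim, w, tau_pre, tau_post=1.0)[150]
print(same, cross)
```

Run as above, the same-category pair (echolocation context, echolocation target) yields the smaller target response, mirroring the reported stimulus-specific suppression.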

13.
Crucial to our everyday social functioning is an ability to interpret the behaviors of others. This process involves a rapid understanding of what a given action is, not only in a physical sense (e.g., a precision grip around the stem of a wine glass) but also in a semantic sense (e.g., an invitation to "cheers"). The functional properties of fronto-parietal mirror neurons (MNs), which respond to both observed and executed actions, have been a topic of much debate in the cognitive neuroscience literature. The controversy surrounds the role of the "mirror neuron system" in action understanding: do MNs allow us to comprehend others' actions by internally representing their behaviors, or do they simply activate a direct motor representation of the perceived act without recourse to its meaning? This review outlines evidence from both the human and primate literatures, indicating the importance of end-goals in action representations within the motor system and their predominance in influencing action plans. We integrate this evidence with recent views regarding the complex and dynamic nature of the mirror neuron system and its ability to respond to broad motor outcomes.

14.
Circumscribed lesions in the right hemisphere have been shown to impair auditory spatial functions. Given the strong crossmodal links that exist between vision and audition, in the present study we hypothesized that multisensory integration can play a specific role in recovery from spatial representational deficits. To this aim, a patient with a severe auditory localization deficit was asked to indicate verbally the spatial position at which a sound was presented. The auditory targets were presented at different spatial locations: 8, 24, 40, and 56 degrees to either side of the central fixation point. The task was performed either in a unimodal condition (i.e., only sounds were presented) or in crossmodal conditions (i.e., a visual stimulus was presented simultaneously with the auditory target). In the crossmodal conditions, the visual cue was presented either at the same spatial position as the sound or at 16 or 32 degrees of spatial disparity, nasal or temporal, from the auditory target. The results showed that a visual stimulus strongly improved the patient's ability to localize the sounds, but only when it was presented at the same spatial position as the auditory target.

15.
Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (or if) it is expressed in other auditory domains (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses [event-related potentials (ERPs)] from listeners as they rapidly categorized sounds along a speech and a music continuum (active task) or during passive listening. Behaviorally, listeners achieved sharper psychometric functions and faster identification for speech than for musical stimuli, which were perceived in a continuous mode. Behavioral results coincided with stronger ERP differentiation between prototypical and ambiguous tokens (i.e., categorical processing) for speech but not for music. Neural correlates of CP were only observed when listeners actively attended to the auditory signal. These findings were corroborated by brain-behavior associations; changes in neural activity predicted more successful CP (psychometric slopes) for active but not passively evoked ERPs. Our results demonstrate that auditory categorization is influenced by attention (active > passive) and is stronger for more familiar/overlearned stimulus domains (speech > music). In contrast to previous studies examining highly trained listeners (i.e., musicians), we infer that (i) CP skills are largely domain-specific and do not generalize to stimuli with which a listener has no immediate experience and (ii) categorical neural processing requires active engagement with the auditory stimulus.
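Psychometric slopes of the kind compared here are conventionally obtained by fitting a sigmoid to identification proportions along the continuum; a steeper slope indicates more categorical perception. A sketch with made-up response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of 'category B' responses;
    x0 is the category boundary, k the slope (steeper = more categorical)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                          # 7-step continuum
p_speech = np.array([0.02, 0.04, 0.08, 0.55, 0.93, 0.97, 0.99])
p_music = np.array([0.10, 0.22, 0.35, 0.50, 0.66, 0.80, 0.90])

(x0_sp, k_sp), _ = curve_fit(logistic, steps, p_speech, p0=[4.0, 1.0])
(x0_mu, k_mu), _ = curve_fit(logistic, steps, p_music, p0=[4.0, 1.0])
print(f"slope: speech {k_sp:.2f} vs. music {k_mu:.2f}")  # speech is steeper
```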

16.
The literature offers little information about the age at which infants with Down syndrome (DS) acquire motor skills or the percentage of infants who do so by the age of 12 months. We therefore sought to identify the difference in age, relative to typical infants, at which motor skills are acquired, and the percentage of infants with DS who acquire them in the first year of life. Infants with DS (N = 20) and typical infants (N = 25), both aged between 3 and 12 months, were evaluated monthly using the Alberta Infant Motor Scale (AIMS). In the prone position, a difference of up to 3 months was found for the acquisition of the 3rd to 16th skills. There was a difference in the percentage of infants with DS who acquired the 10th to 21st skills (from 71% to 7%). In the supine position, a difference of up to one month was found for the 3rd to 7th skills; however, 100% were able to perform these skills. In the sitting position, a difference of 1–4 months was found for the 1st to 12th skills, with the percentage ranging from 69% to 29% for the 9th to 12th. In the upright position, the difference was 2–3 months for the 3rd to 8th skills. Only 13% acquired the 8th skill, and no further skill was acquired up to the age of 12 months. The more complex the skill, the greater the difference in age between typical infants and those with DS, and the lower the percentage of DS infants who performed the skills in the prone, sitting, and upright positions. None of the DS infants were able to stand without support.

17.
Left hemisphere motor facilitation in response to manual action sounds
Previous studies indicate that the motor areas of both hemispheres are active when observing actions. Here we explored how the motor areas of each hemisphere respond to the sounds associated with actions. We used transcranial magnetic stimulation (TMS) to measure motor corticospinal excitability of hand muscles while listening to sounds. Sounds associated with bimanual actions produced greater motor corticospinal excitability than sounds associated with leg movements or control sounds. This facilitation was exclusively lateralized to the left hemisphere, the dominant hemisphere for language. These results are consistent with the hypothesis that action coding may be a precursor of language.

18.
Regular (i.e., non-random) sequences of sounds can usually be described by several, equally valid rules. Rules allowing extrapolation from one sound to the next are termed local rules; those that define relations between temporally non-adjacent sounds are termed global rules. The aim of the present study was to determine whether both local and global rules can be simultaneously extracted from a sound sequence even when attention is directed away from the auditory stimuli. The pre-attentive representation of a sequence of two alternating tones (differing only in frequency) was investigated using the mismatch negativity (MMN) auditory event-related potential. Both local- and global-rule violations of tone alternation elicited the MMN component while subjects ignored the auditory stimuli. This finding suggests that (a) pre-attentive auditory processes can extract both local and global rules from sound sequences, and (b) several regularity representations of a sound sequence are simultaneously maintained during the pre-attentive phase of auditory stimulus processing.

19.
Recent work has been mixed with respect to the notion of embodied semantics, which suggests that processing linguistic stimuli referring to motor-related concepts recruits the same sensorimotor regions of cortex involved in the execution and observation of motor acts or the objects associated with those acts. In this study, we asked whether lesions to key sensorimotor regions would preferentially impact the comprehension of stimuli associated with the use of the hand, mouth or foot. Twenty-seven patients with left-hemisphere strokes and 10 age- and education-matched controls were presented with pictures and words representing objects and actions typically associated with the use of the hand, mouth, foot or no body part at all (i.e., neutral). Picture/sound pairs were presented simultaneously, and participants were required to press a space bar only when the item pairs matched (i.e., congruent trials). We conducted two different analyses: 1) we compared task performance of patients with and without lesions in several key areas previously implicated in the putative human mirror neuron system (i.e., Brodmann areas 4/6, 1/2/3, 21 and 44/45), and 2) we conducted Voxel-based Lesion-Symptom Mapping analyses (VLSM; Bates et al., 2003) to identify additional regions associated with the processing of effector-related versus neutral stimuli. Processing of effector-related stimuli was associated with several regions across the left hemisphere, and not solely with premotor/motor or somatosensory regions. We also did not find support for a somatotopically-organized distribution of effector-specific regions. We suggest that, rather than following the strict interpretation of homuncular somatotopy for embodied semantics, these findings support theories proposing the presence of a greater motor-language network which is associated with, but not restricted to, the network responsible for action execution and observation.
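VLSM (Bates et al., 2003) compares, voxel by voxel, the behavioral scores of patients whose lesions include that voxel against those whose lesions spare it. A minimal sketch with synthetic data (lesion maps and scores are toy inputs; thresholding and multiple-comparison correction are omitted):

```python
import numpy as np
from scipy.stats import ttest_ind

def vlsm(lesion_maps, scores, min_n=4):
    """Voxel-based lesion-symptom mapping: at each voxel, t-test the
    behavioral scores of patients whose lesion spares the voxel against
    those whose lesion includes it. Returns a t-map (NaN where either
    group is too small)."""
    n_patients, n_voxels = lesion_maps.shape
    tmap = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        damaged = scores[lesion_maps[:, v] == 1]
        spared = scores[lesion_maps[:, v] == 0]
        if len(damaged) >= min_n and len(spared) >= min_n:
            tmap[v] = ttest_ind(spared, damaged).statistic
    return tmap

rng = np.random.default_rng(3)
maps = rng.integers(0, 2, size=(27, 500))    # 27 patients, 500 toy voxels
scores = rng.normal(0.8, 0.1, size=27)       # e.g., accuracy on hand items
print(np.nanmax(vlsm(maps, scores)))         # peak lesion-deficit statistic
```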

20.
Motor unit action potentials (MUPs) recorded by a monopolar needle electrode in normal and neuropathic muscles were computer-simulated. Five experienced electromyographers acted as examiners and assessed the firing sounds of these MUPs without seeing them on a display monitor. They judged whether the sounds were crisp or close enough to accept for the evaluation of MUP parameters and, when judged acceptable, whether they were neuropathic-polyphasic. The examiners recognized a motor unit (MU) sound as crisp or polyphasic when the MUP was obtained 0.15-0.2 mm from the edge of the MU territory. When the intensity of the sound decreased, they were unable to perceive it as crisp. When the intensity exceeded the saturation level of the loudspeaker output, the sound was perceived as polyphasic, even though the waveform of the MUP was not. When the frequency of the neuropathic MUP was lowered, the examiners were unable to determine whether the MUP was polyphasic. MUPs acceptable for evaluation can thus be distinguished by listening to MU sounds. The audio amplifier gain must be appropriately adjusted for each MUP amplitude in order to assess whether an individual MU sound is crisp or polyphasic before MUP parameters are measured on a display monitor.
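The closing recommendation, matching audio gain to each MUP's amplitude so that a sound is neither too faint to judge as crisp nor driven into loudspeaker saturation, amounts to per-waveform peak normalization with a saturation cap. A sketch under that reading (target levels are illustrative, not from the paper):

```python
import numpy as np

def mup_audio_gain(mup, target_peak=0.7, saturation=1.0):
    """Gain that scales a MUP waveform so its peak sits at target_peak
    (full-scale units), clamped so the output can never exceed the
    loudspeaker's saturation level."""
    peak = np.max(np.abs(mup))
    if peak == 0.0:
        return 0.0
    return min(target_peak / peak, saturation / peak)

t = np.linspace(0.0, 6.0 * np.pi, 200)
small_mup = 0.05 * np.sin(t)                 # low-amplitude MUP
large_mup = 2.50 * np.sin(t)                 # high-amplitude MUP
print(mup_audio_gain(small_mup), mup_audio_gain(large_mup))
```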
