Similar Articles
20 similar articles found (search time: 31 ms)
1.
Oscillatory signals in the human magnetoencephalogram were investigated as correlates of cortical network activity in response to sound lateralization changes. Previously, we found lateralized presentations of a monosyllabic word to elicit posterior temporo-parietal gamma-band activity, possibly reflecting synchronization of neuronal assemblies in putative auditory dorsal stream areas. In addition, beta activity was decreased over sensorimotor regions, suggesting the activation of motor networks involved in orientating. The present study investigated responses to lateralization changes of both a barking dog sound and a distorted noise to test whether beta desynchronization would depend on the sound's relevance for orientating. Eighteen adults listened passively to 900 samples of each sound in separate location mismatch paradigms with midline standards and both right- and left-lateralized deviants. Lateralized distorted noises were accompanied by enhanced spectral amplitude at 58-73 Hz over right temporo-parietal cortex. Left-lateralized barking dog sounds elicited right, and right-lateralized sounds elicited bilateral, temporo-parietal spectral amplitude increases at approximately 77 Hz. This replicated the involvement of posterior temporo-parietal areas in auditory spatial processing. Only barking dog sounds, but not distorted noises, gave rise to 30 Hz desynchronization over contralateral sensorimotor areas, parieto-frontal gamma coherence increases, and beta coherence reductions between sensorimotor and prefrontal sensors. Apparently, passive listening to lateralized natural sounds with a potential biological relevance led to an activation of motor networks involved in the automatic preparation for orientating. Parieto-frontal coherence increases may reflect enhanced coupling of networks involved in the integration of auditory spatial and motor processes.
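The band-limited spectral amplitude measure central to analyses like this one can be sketched with a short FFT computation. This is a minimal illustration only, not the authors' pipeline; the sampling rate, window choice, and test signal below are assumptions:

```python
import numpy as np

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean Fourier amplitude within the [f_lo, f_hi] Hz band (Hann-windowed)."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

# Illustrative check: a 65 Hz oscillation buried in noise stands out
# in the 58-73 Hz gamma band but not in the 15-30 Hz beta band.
fs = 600.0                                  # assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
sensor = np.sin(2 * np.pi * 65 * t) + 0.1 * rng.standard_normal(len(t))
gamma = band_amplitude(sensor, fs, 58, 73)
beta = band_amplitude(sensor, fs, 15, 30)
```

In a real MEG analysis the same band computation would be applied per sensor and per trial epoch, typically with multitaper or wavelet methods rather than a single windowed FFT.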

2.
This study addressed the extraction of long-term sound familiarity as a higher-order feature of complex, environmental sounds. Physically variable, familiar animal sounds and spectrotemporally matched, unfamiliar control sounds were presented. Participants ignored the acoustic stimuli. Infrequent deviant sounds violated the familiarity status established by the standard sounds, but no regularity on the basis of physical features. In the auditory event-related potential, deviants elicited a negative-going deflection over parietal scalp areas around 230 ms. This effect occurred for familiar deviants among unfamiliar standards and for unfamiliar deviants among familiar standards. The results indicate the establishment of an auditory regularity based on sound familiarity. This reflects the extraction of sound familiarity outside the focus of attention.

3.
Environmental sounds convey specific meanings, and the neural circuitry for their recognition may have preceded language. To dissociate semantic-mnemonic from sensory-perceptual processing of non-verbal sound stimuli, we systematically altered the inherent semantic properties of non-verbal sounds from natural and man-made sources while keeping their acoustic characteristics closely matched. We hypothesized that acoustic analysis of complex non-verbal sounds would be right-lateralized in auditory cortex regardless of meaning content and that left hemisphere regions would be engaged when a meaningful concept could be extracted. Using H2(15)O-PET imaging and SPM data analysis, we demonstrated that activation of the left superior temporal and left parahippocampal gyrus, along with left inferior frontal regions, was specifically associated with listening to meaningful sounds. In contrast, for both types of sounds, acoustic analysis was associated with activation of right auditory cortices. We conclude that left hemisphere brain regions are engaged when sounds are meaningful or intelligible.

4.
Hemispheric differences in the temporal processing of musical sounds within the primary auditory cortex were investigated using functional magnetic resonance imaging (fMRI) time series analysis on a 3.0 T system in right-handed individuals who had no formal training in music. The two hemispheres exhibited a clear-cut asymmetry in the time pattern of fMRI signals. A large transient signal component was observed in the left primary auditory cortex immediately after the onset of musical sounds, while only sustained activation, without an initial transient component, was seen in the right primary auditory cortex. The observed difference was believed to reflect differential segmentation in primary auditory cortical sound processing. Although the left primary auditory cortex processed the entire 30-s musical sound stimulus as a single event, the right primary auditory cortex had low-level processing of sounds with multiple segmentations of shorter time scales. The study indicated that musical sounds are processed as 'sounds with contents', similar to how language is processed in the left primary auditory cortex.

5.
We wished to determine whether multiple sound patterns can be simultaneously represented in the temporary auditory buffer (auditory sensory memory) when subjects have no task related to the sounds. To this end we used the mismatch negativity (MMN) event-related potential, an electric brain response elicited when a frequent sound is infrequently replaced by a different sound. The MMN response is based on the presence of the auditory sensory memory trace of the frequent sounds, which exists whether or not these sounds are in the focus of the subject's attention. Subjects watching a movie were presented with sound sequences consisting of two frequent sound patterns, each formed of four different tones, and an infrequent pattern consisting of the first two tones of one frequent pattern and the last two tones of the other. The infrequent sound pattern elicited an MMN, indicating that multiple sound patterns are formed at an early, largely automatic stage of auditory processing.
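The stimulus construction described above, two frequent four-tone patterns plus a rare hybrid deviant, can be sketched as follows. The tone labels, sequence length, and deviant count are illustrative assumptions, not values from the study:

```python
import random

# Hypothetical tone labels; the actual tone frequencies are not given in the abstract.
pattern_a = ["a1", "a2", "a3", "a4"]
pattern_b = ["b1", "b2", "b3", "b4"]
deviant = pattern_a[:2] + pattern_b[2:]      # first half of A, second half of B

def build_sequence(n_patterns=100, n_deviants=10, seed=0):
    """Stream of four-tone patterns: frequent A/B standards, rare hybrid deviants."""
    rng = random.Random(seed)
    seq = [rng.choice([pattern_a, pattern_b]) for _ in range(n_patterns)]
    for i in rng.sample(range(n_patterns), n_deviants):
        seq[i] = deviant                     # overwrite a few positions with the deviant
    return seq

sequence = build_sequence()
```

The point of the design is that the deviant violates no single-tone regularity: every tone in it also occurs in a standard, so an MMN can only arise if whole patterns are represented.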

6.
Autistics exhibit a contrasting combination of auditory behavior, with enhanced pitch processing abilities often coexisting with reduced orienting towards complex speech sounds. Based on an analogous dissociation observed in vision, we expected that autistics' auditory behavior with respect to complex sound processing may result from atypical activity in non-primary auditory cortex. We employed fMRI to explore the neural basis of complex non-social sound processing in 15 autistics and 13 non-autistics, using a factorial design in which auditory stimuli varied in spectral and temporal complexity. Spectral complexity was modulated by varying the harmonic content, whereas temporal complexity was modulated by varying frequency modulation depth. The detection task was performed similarly by autistics and non-autistics. In both groups, increasing spectral or temporal complexity was associated with activity increases in primary (Heschl's gyrus) and non-primary (anterolateral and posterior superior temporal gyrus) auditory cortex. Activity was right-lateralized for spectral and left-lateralized for temporal complexity. Increasing temporal complexity was associated with greater activity in anterolateral superior temporal gyrus in non-autistics and greater effects in Heschl's gyrus in autistics. While we observed a similar hierarchical functional organization for auditory processing in both groups, autistics exhibited diminished activity in non-primary auditory cortex and increased activity in primary auditory cortex in response to the presentation of temporally, but not spectrally, complex sounds. Greater temporal complexity effects in regions sensitive to acoustic features and reduced temporal complexity effects in regions sensitive to more abstract sound features could represent a greater focus towards perceptual aspects of speech sounds in autism.

7.
Using electrophysiology, we examined two questions in relation to musical training: whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials and voice sounds on 20% of trials; in other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never-before-heard spectrally-rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension.

8.
Deficit of auditory space perception in patients with visuospatial neglect
There have been many studies of visuospatial neglect, but fewer studies of neglect in relation to other sensory modalities. In the present study we investigated the performance of six right brain damaged (RBD) patients with left visual neglect and six RBD patients without neglect in an auditory spatial task. Previous work on sound localisation in neglect patients adopted measures of sound localisation based on directional motor responses (e.g., pointing to sounds) or judgements of sound position with respect to the body midline (auditory midline task). However, these measures might be influenced by non-auditory biases related to motor and egocentric components. Here we adopted a perceptual measure of sound localisation, consisting of a verbal judgement of the relative position (same or different) of two sequentially presented sounds. This task was performed in a visual and in a blindfolded condition. The results revealed that sound localisation performance of visuospatial neglect patients was severely impaired with respect to that of RBD controls, especially when sounds originated in contralesional hemispace. In that condition, neglect patients were always unable to discriminate the relative position of the two sounds. No difference in performance emerged as a function of the visual condition in either group. These results demonstrate a perceptual deficit of sound localisation in patients with visuospatial neglect, suggesting that the spatial deficits of these patients can arise multimodally for the same portion of external space.

9.
The recognition of environmental sounds is an important feature of higher auditory processing and essential for everyday life. The present study aimed to investigate the potential impairment of this mental function in schizophrenia. This work on immediate sound recognition is complementary to recent studies on auditory linguistic processing. Fifteen schizophrenic patients and 30 control subjects were asked to identify 43 complex environmental sounds from different categories and rate their familiarity when naïve to the sounds. In consecutive experiments, patients and control subjects rated the sounds according to emotional valence and arousal, as well as imageability. In both groups, correct identification of non-verbal sounds was highly associated with familiarity. Statistical analysis by group demonstrated a significantly higher error rate in identifying sounds in patients suffering from schizophrenia compared to healthy control subjects. In contrast, the affective recognition of the complex sounds was preserved in the schizophrenic patients. These results suggest a disturbance of higher-order, auditory mnemonic processing in schizophrenic patients in the non-linguistic domain. We discuss their abnormal responses in the context of recent theories of auditory physiological and semantic processing deficits in schizophrenia.

10.
Sensory responses to courtship signals can be altered by reproductive hormones. In seasonally-breeding female songbirds, for example, sound-induced immediate early gene expression in the auditory pathway is selective for male song over behaviourally irrelevant sounds only when plasma oestradiol reaches breeding-like levels. This selectivity has been hypothesised to be mediated by the release of monoaminergic neuromodulators in the auditory pathway. We previously showed that in oestrogen-primed female white-throated sparrows, exposure to male song induced dopamine and serotonin release in auditory regions. To mediate hormone-dependent selectivity, this release must be (i) selective for song and (ii) modulated by endocrine state. Therefore, in the present study, we addressed both questions by conducting playbacks of song or a control sound to females in a breeding-like or a nonbreeding endocrine state. We then used high-performance liquid chromatography to measure turnover of dopamine, norepinephrine and serotonin in the auditory midbrain and forebrain. We found that sound-induced turnover of dopamine and serotonin depended on endocrine state; hearing sound increased turnover in the auditory forebrain only in the birds in a breeding-like endocrine state. Contrary to our expectations, these increases occurred in response to either song or artificial tones; in other words, they were not selective for song. The selectivity of sound-induced monoamine release was thus strikingly different from that of immediate early gene responses described in previous studies. We did, however, find that constitutive monoamine release was altered by endocrine state; irrespective of whether the birds heard sound or not, turnover of serotonin in the auditory forebrain was higher in a breeding-like state than in a nonbreeding endocrine state. The results of the present study suggest that dopaminergic and serotonergic responses to song and other sounds, as well as serotonergic tone in auditory areas, could be seasonally modulated.

11.
Laine M, Kwon MS, Hämäläinen H. Neuroreport 2007;18(16):1697-1701
Automatic detection of auditory changes that violate a regular sound sequence is indexed by the mismatch negativity (MMN) component of the event-related potential. The MMN is considered to reflect an auditory sensory memory and attention switching mechanism. Our aim was to study whether the auditory MMN can be associated with visual cues that have predictive value. By using visual cues that predicted the appearance of a deviant sound in most but not all of the cases, we were able to elicit MMN not only to the deviant sounds but also to those regular sounds that were misleadingly preceded by the visual cue. This result indicates high flexibility in the human automatic auditory change detection system, as it is affected by short-term visual-auditory associative learning.

12.
To explore the neural processes underlying concurrent sound segregation, auditory evoked fields (AEFs) were measured using magnetoencephalography (MEG). To induce the segregation of two auditory objects, we manipulated harmonicity and onset synchrony. Participants were presented with complex sounds with (i) all harmonics in tune, (ii) the third harmonic mistuned by 8% of its original value, or (iii) the onset of the third harmonic delayed by 160 ms relative to the other harmonics. During one recording session, participants listened to the sounds and performed an auditory localisation task, whereas in another session they ignored the sounds and performed a visual localisation task. Active and passive listening conditions were included to evaluate the contribution of attention to sound segregation. Both cues, inharmonicity and onset asynchrony, elicited sound segregation: participants were more likely to report correctly on which side they heard the third harmonic when it was mistuned or delayed than when it was in tune with all other harmonics. AEF activity associated with concurrent sound segregation was identified over both temporal lobes. We found an early deflection at approximately 75 ms (P75m) after sound onset, probably reflecting an automatic registration of the mistuned harmonic. Subsequent deflections, the object-related negativity (ORNm) and a later displacement (P230m), seem to be more general markers of concurrent sound segregation, as they were elicited by both mistuning and delaying the third harmonic. The results indicate that the ORNm reflects relatively automatic, bottom-up sound segregation processes, whereas the P230m is more sensitive to attention, especially with inharmonicity as the cue for concurrent sound segregation.
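The in-tune and mistuned stimuli described above can be sketched by summing harmonics and shifting only the third one. The fundamental frequency, harmonic count, duration, and sampling rate below are illustrative assumptions; only the 8% mistuning figure comes from the abstract:

```python
import numpy as np

def complex_tone(f0=200.0, n_harm=6, mistune_pct=0.0, dur=0.5, fs=16000):
    """Harmonic complex tone; optionally mistunes the 3rd harmonic by mistune_pct %."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0
        if k == 3:
            f *= 1.0 + mistune_pct / 100.0   # shift only the third harmonic
        tone += np.sin(2 * np.pi * f * t)
    return tone / n_harm

in_tune = complex_tone()
mistuned = complex_tone(mistune_pct=8.0)     # 3rd harmonic at 648 Hz instead of 600 Hz
```

With these assumed parameters the mistuned partial no longer sits on the harmonic grid of the fundamental, which is what lets listeners hear it as a separate auditory object.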

13.
Horizontal sound localization relies on the extraction of binaural acoustic cues by integration of the signals from the two ears at the level of the brainstem. The present experiment was aimed at detecting the sites of binaural integration in the human brainstem using functional magnetic resonance imaging and a binaural difference paradigm, in which the responses to binaural sounds were compared with the sum of the responses to the corresponding monaural sounds. The experiment also included a moving sound condition, which was contrasted against a spectrally and energetically matched stationary sound condition to assess which of the structures that are involved in general binaural processing are specifically specialized in motion processing. The binaural difference contrast revealed a substantial binaural response suppression in the inferior colliculus in the midbrain, the medial geniculate body in the thalamus and the primary auditory cortex. The effect appears to reflect an actual reduction of the underlying activity, probably brought about by binaural inhibition or refractoriness at the level of the superior olivary complex. Whereas all structures up to and including the primary auditory cortex were activated as strongly by the stationary as by the moving sounds, non-primary auditory fields in the planum temporale responded selectively to the moving sounds. These results suggest a hierarchical organization of auditory spatial processing in which the general analysis of binaural information begins as early as the brainstem, while the representation of dynamic binaural cues relies on non-primary auditory fields in the planum temporale.
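The binaural difference contrast reduces to simple arithmetic: the response to a binaural sound is compared against the sum of the responses to the corresponding monaural sounds. A minimal sketch; the response amplitudes below are hypothetical, not data from the study:

```python
def binaural_difference(resp_binaural, resp_left, resp_right):
    """Binaural response minus the sum of the two monaural responses.

    Negative values indicate binaural response suppression, the pattern
    reported here in the inferior colliculus, medial geniculate body and
    primary auditory cortex."""
    return resp_binaural - (resp_left + resp_right)

# Hypothetical response amplitudes (arbitrary units)
suppression = binaural_difference(1.2, 0.8, 0.9)
```

A value of zero would mean the binaural response is simply the linear sum of the monaural ones; suppression below zero is the signature of binaural interaction.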

14.
It is commonly assumed that different perceptual qualities arising from sensory stimuli depend on their physical nature being transformed by specific peripheral receptors, for example colour, vibration or heat. A notable unexplained exception is that low and high repetition rates of any sound are perceived as rhythm or pitch, respectively. Using auditory discrimination learning in animals with bilateral auditory cortex ablations, we demonstrate that the perceptual quality of sounds depends on the way the brain processes stimuli rather than on their physical nature. In this context, cortical and subcortical processing steps play different roles in analysing different aspects of sounds, with the complete analysis accomplished only once information converges in the auditory cortex.

15.
It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing.

16.
The aim of the present study was to clarify whether ERPs recorded directly from the human frontal cortex contributed to the auditory N1 and mismatch negativity (MMN) elicited by changes in non-phonetic and phonetic sounds. We examined the role of prefrontal cortex in the processing of stimulus repetition and change in a 6-year-old child undergoing presurgical evaluation for epilepsy. EEG was recorded from three bilateral sub-dural electrode strips located over lateral prefrontal areas during unattended auditory stimulation. EEG epochs were averaged to obtain event-related potentials (ERPs) to repeating (standard) tones and to infrequent (deviant) shorter duration tones and complex sounds (telephone buzz). In another condition, ERPs were recorded to standard and deviant syllables, /ba/ and /da/, respectively. ERPs to vibration stimuli delivered to the fingertips were not observed at any of the sub-dural electrodes, confirming modality specificity of the auditory responses. Focal auditory ERPs consisting of P100 and N150 deflections were recorded to both tones and phonemes over the right lateral prefrontal cortex. These responses were insensitive to the serial position of the repeating sound in the stimulus train. Deviant tones evoked an MMN peaking at around 128 ms. Deviant complex sounds evoked ERPs with a similar onset latency and morphology but with an approximately two-fold increase in peak-to-peak amplitude. We conclude that right lateral prefrontal cortex (Brodmann's area 45) is involved in early stages of processing repeating sounds and sound changes.

17.
Natural and behaviorally relevant sounds are characterized by temporal modulations of their waveforms, which carry important cues for sound segmentation and communication. Still, there is little consensus as to how this temporal information is represented in auditory cortex. Here, by using functional magnetic resonance imaging (fMRI) optimized for studying the auditory system, we report the existence of a topographically ordered spatial representation of temporal sound modulation rates in human auditory cortex. We found a topographically organized sensitivity within auditory cortex to sounds with varying modulation rates, with enhanced responses to lower modulation rates (2 and 4 Hz) on lateral parts of Heschl's gyrus (HG) and to faster modulation rates (16 and 32 Hz) on medial HG. The representation of temporal modulation rates was distinct from the representation of sound frequencies (tonotopy), which was oriented roughly orthogonally. Moreover, the combination of probabilistic anatomical maps with a previously proposed functional delineation of auditory fields revealed that the distinct maps of temporal and spectral sound features both prevail within two presumed primary auditory fields, hA1 and hR. Our results reveal a topographically ordered representation of temporal sound cues in human primary auditory cortex that is complementary to maps of spectral cues. They thereby enhance our understanding of the functional parcellation and organization of auditory cortical processing.
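Amplitude-modulated tones of the kind used to probe modulation-rate sensitivity can be sketched as a carrier shaped by a slow envelope. The carrier frequency, duration, and sampling rate below are assumptions; the modulation rates (2, 4, 16 and 32 Hz) follow the abstract:

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur=1.0, fs=16000, depth=1.0):
    """Sinusoidally amplitude-modulated tone: a carrier shaped by a slow envelope."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# One stimulus per modulation rate reported in the study
stimuli = {rate: am_tone(1000, rate) for rate in (2, 4, 16, 32)}
```

In the spectrum, full-depth sinusoidal AM places sidebands at the carrier frequency plus and minus the modulation rate, so the temporal manipulation is cleanly separable from the carrier (spectral) dimension.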

18.
Noise is usually detrimental to auditory perception. However, recent psychophysical studies have shown that low levels of broadband noise may improve signal detection. Here, we measured auditory evoked fields (AEFs) while participants listened passively to low-pitched and high-pitched tones (Experiment 1) or complex sounds that included a tuned or a mistuned component that yielded the perception of concurrent sound objects (Experiment 2). In both experiments, stimuli were embedded in low or intermediate levels of Gaussian noise or presented without background noise. For each participant, the AEFs were modeled with a pair of dipoles in the superior temporal plane, and the effects of noise were examined on the resulting source waveforms. In both experiments, the N1m was larger when the stimuli were embedded in low background noise than in the no-noise control condition. Complex sounds with a mistuned component generated an object-related negativity that was larger in the low-noise condition. The results show that low-level background noise facilitates AEFs associated with sound onset and can be beneficial for sorting out concurrent sound objects. We suggest that noise-induced increases in transient evoked responses may be mediated via efferent feedback connections between the auditory cortex and lower auditory centers.

19.
Circumscribed lesions in the right hemisphere have been shown to impair auditory spatial functions. Given the strong crossmodal links that exist between vision and audition, in the present study we hypothesized that multisensory integration can play a specific role in recovery from spatial representational deficits. To this aim, a patient with a severe auditory localization deficit was asked to indicate verbally the spatial position at which a sound was presented. The auditory targets were presented at different spatial locations: 8, 24, 40 or 56 degrees to either side of the central fixation point. The task was performed either in a unimodal condition (i.e., only sounds were presented) or in crossmodal conditions (i.e., a visual stimulus was presented simultaneously with the auditory target). In the crossmodal conditions, the visual cue was presented either at the same spatial position as the sound or at 16 or 32 degrees of spatial disparity, nasal or temporal, from the auditory target. The results showed that a visual stimulus strongly improved the patient's ability to localize the sounds, but only when it was presented at the same spatial position as the auditory target.

20.
The brain organizes sound into coherent sequences, termed auditory streams. We asked whether task-irrelevant sounds would be detected as separate auditory streams in a natural listening environment that included three simultaneously active sound sources. Participants watched a movie with sound while street-noise and sequences of naturally varying footstep sounds were presented in the background. Occasional deviations in the footstep sequences elicited the mismatch negativity (MMN) event-related potential. The elicitation of MMN showed that the regular features of the footstep sequences had been registered and their violations detected, which could only occur if the footstep sequence had been detected as a separate auditory stream. Our results demonstrate that sounds are organized into auditory streams irrespective of their relevance to ongoing behavior.
