Similar Articles (20 results)
1.
OBJECTIVES: Interaural time and intensity disparities (ITD and IID), the two complementary cues to directional hearing, are known to be encoded separately in the brain stem. We address the question of whether their codes are collapsed into a single lateralization code subcortically, or whether they reach the cortex via separate channels and are processed there in different areas. METHODS: Two continuous trains of 100/s clicks were presented dichotically. At 2 s intervals, either an interaural time delay of 1 ms or an interaural level difference of 20 dB (HL) was introduced for 50 ms, shifting the intracranial sound image laterally for this brief period. Long-latency responses to these directional stimuli, which had been verified to evoke no potentials under monotic or diotic conditions, as well as to sound pips of 50 ms duration, were recorded from 124 scalp electrodes. Scalp potential and current density maps at the N1 latency were obtained from thirteen normal subjects. A 4-sphere head model with bilaterally symmetrical dipoles was used for source analysis, and a simplex algorithm preceded by a genetic algorithm was employed to solve the inverse problem. RESULTS: Inter- and intra-subject comparisons showed that the N1 responses evoked by IID and ITD stimuli, as well as by sound pips, had significantly different scalp topographies and interhemispheric dominance patterns. Significant location and orientation differences between their estimated dipole sources were also noted. CONCLUSIONS: We conclude that interaural time and intensity disparities (and thus the lateral shifts of a sound image caused by these two cues) are processed in different ways and/or in different areas of the auditory cortex.

2.
The study of spatial processing in the auditory system usually requires complex experimental setups, using arrays of speakers or speakers mounted on moving arms. These devices, while allowing precision in the presentation of the spatial attributes of sound, are complex, expensive and limited. Alternative approaches rely on virtual space sound delivery. In this paper, we describe a virtual space algorithm that enables accurate reconstruction of eardrum waveforms for arbitrary sound sources moving along arbitrary trajectories in space. A physical validation of the synthesis algorithm is performed by comparing waveforms recorded during real motion with waveforms synthesized by the algorithm. As a demonstration of possible applications of the algorithm, virtual motion stimuli are used to reproduce psychophysical results in humans and to study responses of barn owls to auditory motion stimuli.
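The core idea behind such virtual-space delivery can be illustrated with a minimal ITD/ILD renderer. This is a hedged sketch, not the paper's algorithm (which reconstructs full eardrum waveforms, presumably via measured head-related impulse responses); all function names and parameter values here are illustrative.

```python
import numpy as np

def delay(x, n):
    """Delay x by n samples, keeping the length constant."""
    return np.concatenate([np.zeros(n), x])[:len(x)]

def synthesize_binaural(x, fs, itd_s=0.0, ild_db=0.0):
    """Render mono x as a (left, right) pair; positive itd_s / ild_db
    lateralize the intracranial image toward the right ear."""
    dl = delay(x, int(round(max(itd_s, 0.0) * fs)))   # left ear lags for rightward ITD
    dr = delay(x, int(round(max(-itd_s, 0.0) * fs)))
    gl = 10.0 ** (-max(ild_db, 0.0) / 20.0)           # left ear attenuated for rightward ILD
    gr = 10.0 ** (-max(-ild_db, 0.0) / 20.0)
    return gl * dl, gr * dr

fs = 44_100
t = np.arange(int(0.05 * fs)) / fs
tone = np.sin(2 * np.pi * 1000 * t)                   # 50 ms, 1 kHz test signal
L, R = synthesize_binaural(tone, fs, itd_s=1e-3, ild_db=20.0)
```

With a 1 ms ITD and 20 dB ILD, both cues shift the image rightward: the left channel starts 44 samples late and is attenuated by a factor of 10.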

3.
《Clinical neurophysiology》2021,132(9):2110-2122
Objective: During early childhood, the development of communication skills, such as language and speech perception, relies in part on auditory system maturation. Because auditory behavioral tests engage cognition, mapping auditory maturation in the absence of cognitive influence remains a challenge. Furthermore, longitudinal investigations that capture auditory maturation within and between individuals in this age group are scarce. The goal of this study is to longitudinally measure auditory system maturation in early childhood using an objective approach.
Methods: We collected frequency-following responses (FFR) to speech in 175 children, ages 3–8 years, annually for up to five years. The FFR is an objective measure of sound encoding that predominantly reflects auditory midbrain activity. Eliciting FFRs to speech provides rich details of various aspects of sound processing, namely, neural timing, spectral coding, and response stability. We used growth curve modeling to answer three questions: 1) does sound encoding change across childhood? 2) are there individual differences in sound encoding? and 3) are there individual differences in the development of sound encoding?
Results: Subcortical auditory processing matures linearly from 3 to 8 years. With age, FFRs became faster, more robust, and more consistent. Individual differences were evident in each aspect of sound processing, while individual differences in rates of change were observed for spectral coding alone.
Conclusions: Using an objective measure and a longitudinal approach, these results suggest that subcortical auditory development continues throughout childhood and that different facets of auditory processing follow distinct developmental trajectories.
Significance: The present findings improve our understanding of auditory system development in typically-developing children, opening the door for future investigations of disordered sound processing in clinical populations.
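The growth-curve logic (does encoding change with age, and do children differ in their rate of change?) can be sketched with per-subject linear fits on invented data. Real analyses of this kind use mixed-effects growth curve models; the subject count, latencies, and slopes below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical longitudinal data: 5 subjects, an FFR latency (ms) measured
# annually from age 3 to 7, each subject with its own developmental slope.
ages = np.arange(3, 8)
true_slopes = np.array([-0.20, -0.25, -0.15, -0.30, -0.22])  # latency shrinks with age
latencies = (8.0 + true_slopes[:, None] * (ages - 3)
             + rng.normal(0, 0.02, size=(5, len(ages))))

# Per-subject ordinary least squares: a simplified stand-in for the
# mixed-effects growth curve models used in studies like this one.
slopes = np.array([np.polyfit(ages, y, 1)[0] for y in latencies])

print(slopes.round(2))        # individual rates of change
print(slopes.std().round(3))  # between-subject variability in rate of change
```

Negative slopes correspond to responses getting faster with age; the spread of the slopes is what "individual differences in development" quantifies.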

4.
《Clinical neurophysiology》2007,118(1):177-185
Objective: Mismatch negativity (MMN), a change-specific component of the auditory event-related potential (ERP), is sensitive to deficits in central auditory processing associated with many clinical conditions. The aim of this study was to obtain a comprehensive multi-dimensional profile of central auditory processing by extending the recently developed fast multi-feature MMN paradigm [Näätänen R, Pakarinen S, Rinne T, Takegata R. The mismatch negativity (MMN): towards the optimal paradigm. Clin Neurophysiol 2004;115:140–144].
Methods: MMN responses to changes in sound duration, frequency, intensity, and perceived sound-source location at six different magnitudes of deviation were recorded from healthy young adults using the multi-feature MMN paradigm. In addition, behavioural discrimination accuracy and speed were measured to examine the relationship between MMN and behavioural performance.
Results: All 24 sound changes elicited significant MMNs. MMN amplitude increased and latency decreased with increasing magnitude of sound change. Furthermore, MMN amplitude and latency predicted the subjects' accuracy and speed in detecting these deviations.
Conclusions: This new paradigm provides an extensive auditory discrimination profile for several auditory attributes at different deviation magnitudes in a minimal recording time.
Significance: The auditory discrimination profiles can offer a comprehensive view of the development, plasticity, and deficits of central auditory processing.
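The structure of a multi-feature sequence (which is what makes the paradigm fast: every other stimulus is a standard, and the deviant type cycles through the features) can be sketched as follows. The feature names and the strict round-robin order are illustrative assumptions, not the exact randomization used in the study.

```python
import itertools

# Four features at six deviation magnitudes gives the 24 distinct changes.
features = ["duration", "frequency", "intensity", "location"]
magnitudes = range(1, 7)
deviants = [f"{f}-{m}" for f in features for m in magnitudes]

def sequence(n_trials):
    """Alternate standards with deviants, cycling through deviant types."""
    cycle = itertools.cycle(deviants)
    return ["standard" if i % 2 == 0 else next(cycle)
            for i in range(n_trials)]

s = sequence(10)
```

Because each deviant is a standard with respect to every feature except the one that changed, all 24 change types can share one recording block, minimizing recording time.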

5.
Absolute pitch (AP) is the ability to recognize the pitch chroma of a tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6 (2003) 692–695). In a previous study (Kim and Knösche: Hum Brain Mapp (2016) 3486–3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, a potential site for perceptual processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process, in the context of the "dual pathway hypothesis" that assigns the ventral pathway a role in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41 (2015) 579–585). To test our conjecture on the ventral pathway, we investigated resting-state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) in musicians with varying degrees of AP. If our hypothesis is correct, RSFC via the ventral pathway should be stronger in musicians with AP, whereas no such group effect is predicted for RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we found no group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe these findings support our conjecture on the critical role of the ventral pathway in AP recognition. Hum Brain Mapp 38:3899–3916, 2017. © 2017 Wiley Periodicals, Inc.

6.
Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded prior to and immediately following sound presentation. Behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident from the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head-turn direction. Similarly, left–right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after the sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001) from a left preference during the pre-sound period (mean 58% left head turns, CI 49–66%) to a right preference in the post-sound period (mean 43% left head turns, CI 40–45%). This provides evidence of a right auditory bias in response to the presentation of the sound. This study therefore demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.

7.
The ability of the auditory system to resolve temporal information in sound is crucial for the understanding of human speech and other species-specific communication. The gap detection threshold, i.e. the shortest detectable silent interval in a sound, is commonly used to study auditory temporal resolution. Behavioral studies in humans and rats have shown that normally developing infants have higher gap detection thresholds than adults; however, the underlying neural mechanism is not fully understood. In the present study, we determined and compared neural gap detection thresholds in the primary auditory cortex of three age groups of rats: a juvenile group (postnatal day 20–30), adult group I (8–10 weeks), and adult group II (28–30 weeks). We found age-related changes in auditory temporal acuity in the auditory cortex: the proportion of cortical units with short neural gap detection thresholds (< 5 ms) was much lower in the juvenile group than in both adult groups at a constant sound level, and no significant differences in neural gap detection thresholds were found between the two adult groups. In addition, units in the auditory cortex of each group generally showed better gap detection thresholds at higher sound levels than at lower sound levels, exhibiting level-dependent temporal acuity. These results provide evidence for neural correlates of age-related changes in behavioral gap detection ability during postnatal hearing development.
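The stimulus used to probe such thresholds is simple to construct: a noise burst with a silent gap of variable duration. A hedged sketch follows; the sampling rate, durations, and gap placement are illustrative, not the study's actual parameters.

```python
import numpy as np

def gap_stimulus(fs, dur_s, gap_ms, gap_at_s):
    """Broadband noise burst of dur_s seconds with a silent gap of
    gap_ms milliseconds inserted at gap_at_s seconds. Sweeping gap_ms
    downward and asking when a response (neural or behavioral) vanishes
    estimates the gap detection threshold."""
    rng = np.random.default_rng(1)
    x = rng.standard_normal(int(dur_s * fs))
    start = int(gap_at_s * fs)
    stop = start + int(gap_ms * fs / 1000.0)
    x[start:stop] = 0.0
    return x

fs = 48_000
x = gap_stimulus(fs, dur_s=0.5, gap_ms=5.0, gap_at_s=0.25)
```

Here a 5 ms gap (the threshold region mentioned in the abstract) occupies 240 samples in the middle of a 0.5 s noise burst.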

8.
《Neurological research》2013,35(6):625-629
Abstract

Objective: To test the hypothesis that some of the abnormal sensory perceptions that characterize autism may be explained by an abnormal activation of non-classical (extra-lemniscal) sensory pathways.

Methods: Twenty-one individuals, 18–45 years of age, who were diagnosed with autism participated in the study. Sounds (clicks presented at a rate of 40 per second, 65 dB above the normal threshold) were applied through earphones. Electrical stimulation (100 μs rectangular impulses at a rate of 4 per second) was applied through electrodes placed on the skin over the median nerve at the wrist. The participants were asked to match the loudness of the sound with and without the electrical stimulation applied to the median nerve.

Results: Electrical stimulation of the median nerve at the wrist in individuals with autism could change the perceived loudness of sounds presented to one ear through an earphone, demonstrating a statistically significant abnormal cross-modal sensory interaction.

Discussion: We interpreted our results to support the hypothesis that some individuals with autism have an abnormal cross-modal interaction between the auditory and the somatosensory systems. Cross-modal interaction between senses such as hearing and the somatosensory system does not occur normally in adults. As only the non-classical (extralemniscal) ascending auditory pathways receive somatosensory input, the presence of cross-modal interaction in autistic individuals is a sign that autism is associated with abnormal involvement of the non-classical auditory pathways, implying that sensory information is processed by different populations of neurons than in non-autistic individuals.

9.
《Neural networks》1999,12(1):31-42
The barn owl is a nocturnal predator that is able to capture mice in complete darkness using only sound to localize prey. Two binaural cues are used by the barn owl to determine the spatial position of a sound source: differences in the time of arrival of sounds at the two ears for azimuth (interaural time differences, ITDs) and differences in their amplitude for elevation (interaural level differences, ILDs). Neurophysiological investigations have revealed that two different neural pathways starting from the cochlea appear to be specialized for processing ITDs and ILDs. Much evidence suggests that in the barn owl the localization of azimuth is based on a cross-correlation-like treatment of the auditory inputs at the two ears. In particular, in the external nucleus of the inferior colliculus (ICx), where cells are activated by specific values of ITD, neural activation has recently been observed to depend on some measure of the level of cross-correlation between the input auditory signals. However, it has also been observed that these neurons are less sensitive to noise than predicted by direct binaural cross-correlation. The mechanisms underlying this signal-to-noise improvement are not known. In this paper, focusing on a model of the barn owl's neural pathway to the optic tectum dedicated to the localization of azimuth, we study the mechanisms by which the ITD tuning of ICx units is achieved. By means of analytical examination and computer simulations, we show that strong analogies exist between the process by which the barn owl evaluates the azimuth of a sound source and the generalized cross-correlation algorithm, one of the most robust methods for the estimation of time delays.
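The generalized cross-correlation algorithm the abstract refers to can be sketched directly. This is the standard engineering technique (shown here in its PHAT-whitened form), not the paper's neural model; the sampling rate and ITD value are illustrative.

```python
import numpy as np

def gcc_delay(left, right, fs, phat=True):
    """Estimate how much `right` lags `left` via generalized
    cross-correlation. With phat=True the cross-spectrum is whitened
    (GCC-PHAT), trading amplitude information for robustness to noise."""
    n = len(left) + len(right)
    X = np.fft.rfft(right, n) * np.conj(np.fft.rfft(left, n))
    if phat:
        X /= np.abs(X) + 1e-12            # keep phase, discard magnitude
    cc = np.fft.irfft(X, n)
    # Rearrange the circular correlation so index 0 is the most negative lag.
    cc = np.concatenate([cc[-(len(left) - 1):], cc[:len(right)]])
    return (int(np.argmax(cc)) - (len(left) - 1)) / fs

fs = 100_000
rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
itd_samples = 5                           # a 50 microsecond ITD at 100 kHz
left = sig
right = np.concatenate([np.zeros(itd_samples), sig[:-itd_samples]])
est = gcc_delay(left, right, fs)
```

The whitening step is one way a cross-correlator can become less noise-sensitive than direct binaural cross-correlation, which is the analogy the paper draws for ICx tuning.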

10.
11.
The auditory system is unique in its ability to precisely detect the timing of perceptual events and use this information to update motor plans, a skill that is crucial for language. However, the characteristics of the auditory system that enable this temporal precision are only beginning to be understood. Previous work has shown that participants who can tap consistently to a metronome have neural responses to sound with greater phase coherence from trial to trial. We hypothesized that this relationship is driven by a link between the updating of motor output by auditory feedback and neural precision. Moreover, we hypothesized that neural phase coherence at both fast time scales (reflecting subcortical processing) and slow time scales (reflecting cortical processing) would be linked to auditory–motor timing integration. To test these hypotheses, we asked participants to synchronize to a pacing stimulus, and then changed either the tempo or the timing of the stimulus to assess whether they could rapidly adapt. Participants who could rapidly and accurately resume synchronization had neural responses to sound with greater phase coherence. However, this precise timing was limited to the time scale of 10 ms (100 Hz) or faster; neural phase coherence at slower time scales was unrelated to performance on this task. Auditory–motor adaptation therefore specifically depends upon consistent auditory processing at fast, but not slow, time scales.
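Trial-to-trial phase coherence of the kind described here is commonly quantified as inter-trial phase coherence: the length of the mean unit phasor across trials at a given frequency. The single-bin version below is a simplified sketch (real analyses typically use wavelet or multitaper estimates across many frequencies); the 100 Hz frequency echoes the fast time scale named in the abstract, but the simulated data are invented.

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at `freq`: 1 means identical phase on
    every trial, values near 0 mean random phase across trials."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))                 # FFT bin closest to freq
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

fs, n = 1000, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
consistent = np.array([np.sin(2 * np.pi * 100 * t + rng.normal(0, 0.1))
                       for _ in range(50)])       # small trial-to-trial jitter
random_phase = np.array([np.sin(2 * np.pi * 100 * t + rng.uniform(-np.pi, np.pi))
                         for _ in range(50)])     # phase scrambled per trial
```

The first set of trials yields a coherence near 1, the second near 0, which is the contrast the study relates to synchronization ability.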

12.
Sound localization and delay lines--do mammals fit the model?

13.
Orienting responses to audiovisual events have shorter reaction times and better accuracy and precision when images and sounds in the environment are aligned in space and time. How the brain constructs an integrated audiovisual percept is a computational puzzle because the auditory and visual senses are represented in different reference frames: the retina encodes visual locations with respect to the eyes, whereas sound localisation cues are referenced to the head. In the well-known ventriloquist effect, the auditory spatial percept of the ventriloquist's voice is attracted toward the synchronous visual image of the dummy, but does this visual bias on sound localisation operate in a common reference frame by correctly taking into account eye and head position? Here we studied this question by independently varying initial eye and head orientations and the amount of audiovisual spatial mismatch. Human subjects pointed head and/or gaze to auditory targets in elevation, and were instructed to ignore co-occurring visual distracters. The results demonstrate that different initial head and eye orientations are accurately and appropriately incorporated into an audiovisual response. Effectively, sounds and images are perceptually fused according to their physical locations in space, independent of an observer's point of view. Implications for neurophysiological findings and for modelling efforts that aim to reconcile sensory and motor signals for goal-directed behaviour are discussed.
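The reference-frame bookkeeping the study probes can be made concrete with a deliberately simple one-dimensional sketch: visual targets arrive eye-centered, auditory targets head-centered, and comparing them requires adding eye-in-head (and head-in-world) orientation. The additive model and all angle values are idealized assumptions for illustration only.

```python
# Angles in degrees of elevation; a 1-D additive idealization.

def visual_to_head(target_re_eye, eye_in_head):
    """Convert an eye-centered visual location to head-centered coordinates."""
    return target_re_eye + eye_in_head

def auditory_to_world(target_re_head, head_in_world):
    """Convert a head-centered auditory location to world coordinates."""
    return target_re_head + head_in_world

# Example: eyes oriented 10 deg up in the head, head tilted 5 deg up in the world.
v_head = visual_to_head(target_re_eye=20.0, eye_in_head=10.0)      # 30 deg re head
a_world = auditory_to_world(target_re_head=30.0, head_in_world=5.0)

# A visual image at 20 deg re the eyes and a sound at 30 deg re the head
# occupy the same head-centered location, so they can be fused veridically,
# which is the behaviour the study reports for human subjects.
```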

14.
Abstract

Objectives:

Surgical management of tumors in the sacropelvic region is a challenging field of spine surgery because of the region’s complex local anatomy and biomechanics. Recent developments in anesthesia and intensive care have allowed us to perform extended surgeries focused on the en bloc resection of sacropelvic tumors. Various techniques for the resection and for the reconstruction were published in the last decade.

Methods:

Sacropelvic tumor resection techniques and methods for biomechanical and soft-tissue reconstruction are reviewed in this paper.

Results:

The literature consists of case reports and case series. Several techniques have been developed for lumbopelvic stabilization after sacropelvic tumor resection, following three different reconstruction principles: spinopelvic fixation (SPF), posterior pelvic ring fixation (PRF), and anterior spinal column fixation (ACF). However, long-term follow-up data and comparative studies of the different techniques are still missing. Soft-tissue reconstruction can be performed according to an algorithm depending on the surgical approach, but relatively high complication rates are reported with all reconstruction strategies. The clinical outcome of such surgeries should ideally be evaluated in three dimensions: surgical, oncological, and functional outcome. The last and most important step of the presurgical planning procedure is a careful presentation of the surgical goals and risks to the patient, who must provide fully informed consent before surgery can proceed.

Discussion:

Sacropelvic tumors are rare conditions. In the last decade, growing evidence was published on resection and reconstruction techniques for these tumors; however, experience at most medical centers is limited due to the low numbers of cases. The formation of international expert groups and the initiation of multicenter studies are strongly encouraged to produce a high level of evidence in this special field of spine surgery.

15.
《Clinical neurophysiology》2010,121(9):1526-1539
Objective: To understand the functional roles of brain regions involved in auditory spatial localization, we recorded auditory event-related potentials (ERPs) and estimated their source generators using the dipole tracing method.
Methods: Target sound stimuli perceived as coming from two directions (−90° and +90°, where 0° was straight behind the subject in the interaural azimuthal plane) were randomly presented together with two distracter stimuli to vary the difficulty of detection. The distracter stimuli were 75° away from the target stimuli (easy task) or 45° away (difficult task).
Results: Compared with the passive listening tasks, distinct potentials appeared in the easy task at the early (110–150 ms: N1-late) time window of the ERPs and in the difficult task at the late (450–800 ms: slow wave, SW) time window. Dipoles were estimated at the posterior auditory cortex, precuneus, and thalamus for N1-late, and at the middle/inferior frontal gyrus, the anterior region of the superior temporal gyrus, and the parahippocampal gyrus for the SW in both tasks.
Conclusions: The difficulty of sound localization may affect brain function related to analyzing features of the spatial cue, eventually identifying the spatial location, and attention.
Significance: Brain regions responsible for sound localization may show different activity patterns depending on the functional role of each region.

16.
Tinnitus is characterized by an ongoing conscious perception of a sound in the absence of any external sound source. Chronic tinnitus is notoriously resistant to treatment. In the present study, the objective was to verify whether the neural generators and/or the neural tinnitus network, evaluated through EEG recordings, change over time, as previously suggested by MEG. We therefore analyzed the source-localized EEG recordings of a very homogeneous group of left-sided narrow-band noise tinnitus patients. The results indicate that the generators involved in tinnitus of recent onset seem to change over time, with increased activity in several brain areas (auditory cortex, supplementary motor area, and dorsal anterior cingulate cortex (dACC) plus insula), associated with a decrease in connectivity between the different auditory and nonauditory brain structures. An exception to this general connectivity decrease is an increase in gamma-band connectivity between the left primary and secondary auditory cortex and the left insula, and also between the auditory cortices and the right dorsolateral prefrontal cortex. These networks are both connected to the left parahippocampal area. Thus, acute and chronic tinnitus are related to differential activity and connectivity in a network comprising the auditory cortices, insula, dACC and premotor cortex.

17.
To evaluate auditory spatial cognitive function, age correlations for event-related potentials (ERPs) in response to auditory stimuli with a Doppler effect were studied in normal children. A sound with a Doppler effect is perceived as a moving audio image. A total of 99 normal subjects (age range, 4–21 years) were tested. In a task-relevant oddball paradigm, P300 responses and key-press reaction times were obtained using auditory stimuli (a fixed 1000 Hz tone and an enlarged tone with a Doppler effect). From the age of 4 years, the P300 latency for the enlarged tone with a Doppler effect shortened more rapidly with age than did the P300 latency for tone-pips, and the latencies for the different conditions became similar towards the late teens. The P300 elicited by auditory stimuli with a Doppler effect may be used to evaluate auditory spatial cognitive function in children.
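Why a Doppler-shifted tone is heard as a moving image follows from the classic Doppler relation: a source approaching at speed v is heard above, and a receding source below, its emitted frequency. The sketch below uses the textbook formula with illustrative values; it is not the stimulus-generation method of the study.

```python
def doppler_freq(f_source, v_source, c=343.0):
    """Frequency heard by a stationary listener from a source moving
    toward (+v_source) or away from (-v_source) them, in m/s;
    c is the speed of sound."""
    return f_source * c / (c - v_source)

approaching = doppler_freq(1000.0, 20.0)    # pitch rises above 1 kHz
receding = doppler_freq(1000.0, -20.0)      # pitch falls below 1 kHz
```

Sweeping v_source over time, together with a matching intensity ramp, produces the percept of an approaching ("enlarging") audio image.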

18.
OBJECTIVES: To localise the brain lesions that cause disturbances of sound lateralisation and to examine the correlation between such deficits and unilateral visuospatial neglect. METHOD: There were 29 patients with right brain damage, 15 patients with left brain damage, and 22 healthy controls, all of whom had normal auditory and binaural thresholds. A device delivered sound to the left and right ears through headphones with an interaural time difference. The amplitude (an index of the ability to detect sound image shifts from the centre) and midpoint (an index of deviation of the interaural time difference range perceived as the centre) parameters of the interaural time difference were analysed in each subject using 10 consecutive stable sawtooth waves. RESULTS: The amplitude of the interaural time difference was significantly higher in patients with right brain damage than in controls. The midpoint of the interaural time difference was significantly more deviated in patients with right brain damage than in those with left brain damage and controls (p<0.05). Patients with right brain damage with lesions affecting both the parietal lobe and the auditory pathway showed a significantly higher amplitude and a more deviated midpoint than the controls, whereas those with involvement of only the parietal lobe showed a midpoint significantly deviated from the controls (p<0.05). Abnormal sound lateralisation correlated with unilateral visuospatial neglect (p<0.05). CONCLUSIONS: The right parietal lobe plays an important part in sound lateralisation. Sound lateralisation is also influenced by lesions of the right auditory pathway, although the effect of such lesions is less than that of right parietal lobe lesions. Disturbances of sound lateralisation correlate with unilateral visuospatial neglect.

19.
The effect of passive whole-body tilt in the frontal plane on the lateralization of dichotic sound was investigated in human subjects. Pure-tone pulses (1 kHz, 100 ms duration) with various interaural time differences were presented via headphones while the subject was upright or tilted 45 or 90 degrees to the left or right. Subjects made two-alternative forced-choice (left/right) judgements on the intracranial sound image. During body tilt, the auditory median plane of the head, computed from the resulting psychometric functions, was always shifted toward the upward ear, indicating a shift of the auditory percept toward the downward ear, that is, in the direction of gravitational linear acceleration. The mean maximum magnitude of the auditory shift, obtained with 90-degree body tilt, was 25 µs. On the one hand, these findings suggest a certain influence of otolith information about body position relative to the direction of gravity on the representation of auditory space. However, in partial contradiction to previous work, which had assumed the existence of a significant 'audiogravic illusion', the very slight magnitude of the present effect instead reflects the excellent stability of the neural processing of auditory spatial cues in humans. Thus, it might be misleading to use the term 'illusion' for this quite marginal effect.

20.
The effect of passive whole-body rotation about the earth-vertical axis on the lateralization of dichotic sound was investigated in human subjects. Pure-tone pulses (1 kHz, 0.1 s duration) with various interaural time differences were presented via headphones during brief, low-amplitude rotation (angular acceleration 400 degrees/s²; maximum velocity 90 degrees/s; maximum displacement 194 degrees). Subjects made two-alternative forced-choice (left/right) judgements on the acoustic stimuli. The auditory median plane of the head was shifted opposite to the direction of rotation, indicating a shift of the intracranial auditory percept in the direction of rotation. The mean magnitude of the shift was 10.7 µs. This result demonstrates a slight but significant influence of rotation on sound lateralization, suggesting that vestibular information is taken into account by the brain for accurate localization of stationary sound sources during natural head and body motion.
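Both of the last two studies derive the auditory median plane from two-alternative forced-choice judgements: fit a psychometric function to the proportion of "right" responses across ITDs and read off the 50% point (the point of subjective equality). The brute-force logistic fit and the data below are illustrative stand-ins for the maximum-likelihood fits typically used; the numbers are not from either study.

```python
import numpy as np

def fit_pse(itds_us, p_right):
    """Fit a logistic psychometric function by grid-search least squares
    and return the 50% point: the ITD perceived as 'centre', i.e. the
    auditory median plane."""
    itds = np.asarray(itds_us, dtype=float)
    p = np.asarray(p_right, dtype=float)
    best_mu, best_err = 0.0, np.inf
    for mu in np.linspace(itds.min(), itds.max(), 201):     # candidate PSEs
        for s in np.linspace(5.0, 200.0, 100):              # candidate slopes
            pred = 1.0 / (1.0 + np.exp(-(itds - mu) / s))
            err = np.sum((pred - p) ** 2)
            if err < best_err:
                best_mu, best_err = mu, err
    return best_mu

# Hypothetical data: proportion of 'right' judgements vs ITD (microseconds),
# with the perceptual centre shifted rightward by a few tens of microseconds.
itds = [-300, -200, -100, 0, 100, 200, 300]
p_right = [0.02, 0.05, 0.20, 0.42, 0.75, 0.94, 0.99]
pse = fit_pse(itds, p_right)
```

A nonzero fitted PSE is exactly the kind of shift (25 µs under tilt, 10.7 µs under rotation) these experiments report.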
