Similar documents
20 similar documents found (search time: 15 ms)
1.
Modulations of amplitude and frequency are common features of natural sounds, and are prominent in behaviorally important communication sounds. The mammalian auditory cortex is known to contain representations of these important stimulus parameters. This study describes the distributed representations of tone frequency and modulation rate in the rat primary auditory cortex (A1). Detailed maps of auditory cortex responses to single tones and tone trains were constructed from recordings from 50-60 microelectrode penetrations introduced into each hemisphere. Recorded data demonstrated that the cortex uses a distributed coding strategy to represent both spectral and temporal information in the rat, as in other species. Just as spectral information is encoded in the firing patterns of neurons tuned to different frequencies, temporal information appears to be encoded using a set of filters covering a range of behaviorally important repetition rates. Although the average A1 repetition rate transfer function (RRTF) was low-pass with a sharp drop-off in evoked spikes per tone above 9 pulses per second (pps), individual RRTFs exhibited significant structure between 4 and 10 pps, including substantial facilitation or depression to tones presented at specific rates. No organized topography of these temporal filters could be determined.
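As a concrete illustration of the repetition-rate transfer function (RRTF) described above, the sketch below normalizes evoked spikes per tone across repetition rates and finds the highest rate still above a response criterion. The spike counts and the 50% criterion are hypothetical, chosen only to mimic the low-pass shape with a drop-off above 9 pps; they are not data from the study.

```python
import numpy as np

def rrtf(spikes_per_tone):
    """Normalize evoked spikes per tone to the best rate, giving a
    repetition-rate transfer function (RRTF) on a 0-1 scale."""
    spikes = np.asarray(spikes_per_tone, dtype=float)
    return spikes / spikes.max()

def cutoff_rate(rates_pps, rrtf_values, criterion=0.5):
    """Highest repetition rate at which the RRTF still meets the
    criterion (here, half of the maximal per-tone response)."""
    rates = np.asarray(rates_pps, dtype=float)
    above = rates[np.asarray(rrtf_values) >= criterion]
    return above.max() if above.size else None

# Hypothetical low-pass RRTF: responses fall off sharply above ~9 pps
rates = [2, 4, 6, 8, 9, 10, 12, 14]
spikes = [1.8, 1.9, 2.0, 1.9, 1.7, 0.8, 0.4, 0.2]
values = rrtf(spikes)
print(cutoff_rate(rates, values))  # prints 9.0
```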

2.
The responses of neuronal clusters to amplitude-modulated tones were studied in five auditory cortical fields of the anesthetized cat: the primary auditory field (AI), second auditory field (AII), anterior auditory field (AAF), posterior auditory field (PAF) and the ventro-posterior auditory field (VPAF). Modulation transfer functions (MTFs) for amplitude-modulated tones were obtained at 172 cortical locations. MTFs were constructed by measuring firing rate (rate-MTFs) and response synchronization (synchronization-MTFs) to sinusoidal and rectangular waveform modulation of CF-tones. The MTFs were characterized by their 'best-modulation frequency' (BMF) and a measure of their quality of 'sharpness' (Q2dB). These characteristics were compared for the five fields. Rate and synchronization MTFs for sinusoidal and rectangular modulation produced similar estimates of BMF and Q2dB. Comparison of averaged BMFs between the cortical fields revealed relatively high BMFs in AAF (mean: 31.1 Hz for synchronization to sinusoidal AM) and moderately high BMFs in AI (14.2 Hz) whereas BMFs encountered in AII, VPAF and PAF were generally low (7.0, 5.2, and 6.8 Hz). The MTFs were relatively broadly tuned (low Q2dB) in AAF and sharper in a low modulation group containing AII, PAF and VPAF. The ventro-posterior field was the most sensitive to changes in the modulation waveform. We conclude that there are significant differences between auditory cortical fields with respect to their temporal response characteristics and that the assessment of these response characteristics reveals important aspects of the functional significance of auditory cortical fields for the coding and representation of complex sounds.
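The two MTF descriptors mentioned above can be computed from a rate-MTF roughly as in the sketch below: the BMF is the modulation frequency giving the largest response, and Q2dB is the BMF divided by the bandwidth measured 2 dB below the peak. The sample MTF values are invented, and the study's exact procedure (e.g. any interpolation between tested frequencies) may differ.

```python
import numpy as np

def bmf_and_q2db(mod_freqs, response):
    """Best modulation frequency (BMF) and Q2dB sharpness of an MTF.
    Q2dB = BMF / bandwidth of the points within 2 dB of the peak."""
    f = np.asarray(mod_freqs, dtype=float)
    r = np.asarray(response, dtype=float)
    bmf = f[np.argmax(r)]
    db = 20.0 * np.log10(r / r.max())   # response re: peak, in dB
    within = f[db >= -2.0]              # frequencies within 2 dB of peak
    bw = within.max() - within.min()
    return bmf, (bmf / bw if bw > 0 else np.inf)

# Hypothetical band-pass rate-MTF peaking at 16 Hz
mod_freqs = [2, 4, 8, 16, 32, 64]          # modulation frequencies (Hz)
rates = [0.5, 0.9, 1.7, 2.0, 1.7, 0.6]     # invented firing rates
bmf, q2db = bmf_and_q2db(mod_freqs, rates)
```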

3.
It is well known that the post-natal loss of sensory input in one modality can result in crossmodal reorganization of the deprived cortical areas, but deafness fails to induce crossmodal effects in cat primary auditory cortex (A1). Because the core auditory regions (A1, and anterior auditory field AAF) are arranged as separate, parallel processors, it cannot be assumed that early deafness affects one in the same manner as the other. The present experiments were conducted to determine if crossmodal effects occur in the anterior auditory field (AAF). Using mature cats (n = 3), ototoxically deafened postnatally, single-unit recordings were made in the gyral and sulcal portions of the AAF. In contrast to the auditory responsivity found in the hearing controls, none of the neurons in early-deafened AAF were activated by auditory stimulation. Instead, the majority (78%) were activated by somatosensory cues, while fewer were driven by visual stimulation (44%; values include unisensory and bimodal neurons). Somatosensory responses could be activated from all locations on the body surface but most often occurred on the head, were often bilateral (e.g., occupied portions of both sides of the body), and were primarily excited by low-threshold hair receptors. Visual receptive fields were large, collectively represented the contralateral visual field, and exhibited conventional response properties such as movement direction and velocity preferences. These results indicate that, following post-natal deafness, both somatosensory and visual modalities participate in crossmodal reinnervation of the AAF, consistent with the growing literature that documents deafness-induced crossmodal plasticity outside A1.

4.
P Heil, R Rajan, D R Irvine. Hearing Research, 1992, 63(1-2): 135-156
The spatial distribution of neuronal responses to tones and frequency-modulated (FM) stimuli was mapped along the 'isofrequency' dimension of the primary auditory cortex (AI) of barbiturate-anesthetized cats. In each cat, electrode penetrations roughly orthogonal to the cortical surface were closely spaced (average separation approximately 130 microns) along the dorsoventral extent of a single 'isofrequency' strip in high frequency parts of AI (> 15 kHz). Characteristic frequency (CF), minimum threshold, sharpness of frequency tuning (Q10 and Q20), the dynamic range of the spike count-intensity function at CF, sensitivity to the rate of change of frequency (RCF) and to the direction of frequency modulation (DS) were determined for contralaterally-presented tone and FM stimuli. Sharpness of tuning attained maximum values at central loci along the dorsoventral 'isofrequency' axis, and values declined towards more dorsal and more ventral locations. Minimum threshold and dynamic range varied between high and low values in a similar and correlated periodic fashion. Their combined organization yielded an orderly spatial representation of response strength, relative to maximum, as a function of stimulus amplitude. The distributions of the most common forms of FM rate sensitivity (RCF response categories) and best RCF along 'isofrequency' strips were significantly non-random, although there was a considerable degree of variability between cats. FM directional preference and sensitivity appeared to be randomly distributed. Sharpness of tuning may be related to the analysis of the spectral content of an acoustic stimulus, both minimum threshold and dynamic range are related to the encoding of stimulus intensity, and measures of FM rate and directional sensitivity assess the coding of temporal changes of stimulus spectra. The topographic organizations of these neuronal parameters, which are mutually independent except for the correlated organization of minimum threshold and dynamic range, therefore suggest parallel and independent processing of these aspects of acoustic signals in AI.
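A conventional way to quantify the FM directional preference (the DS measure mentioned above) is a normalized rate difference between upward and downward sweeps. The sketch below uses a generic formulation of such an index, not necessarily the exact definition used in the study, and the firing rates are illustrative.

```python
def direction_selectivity(r_up, r_down):
    """Direction-selectivity index for FM sweeps: +1 means the unit
    responds only to upward sweeps, -1 only to downward, 0 means no
    directional preference."""
    return (r_up - r_down) / (r_up + r_down)

# Illustrative spike rates for upward vs. downward sweeps
print(direction_selectivity(30, 10))  # prints 0.5
```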

5.
The organisation and response properties of the rat auditory cortex were investigated with single and multi-unit electrophysiological recording. Two tonotopically organised 'core' fields, i.e. the primary (A1) and anterior (AAF) auditory fields, as well as three non-tonotopically organised 'belt' fields, i.e. the posterodorsal (PDB), dorsal (DB) and anterodorsal (ADB) belt fields, were identified. Compared to neurones in A1, units in AAF exhibited broader frequency tuning, as well as shorter minimum, modal and mean first spike latencies. In addition, units in AAF showed significantly higher thresholds and best SPLs, as well as broader dynamic ranges. Units in PDB, DB and ADB were characterised by strong responses to white noise and showed either poor or no responses to pure tones. The differences in response properties found between the core and belt fields may reflect a functional specificity in processing different features of auditory stimuli. The present study also combined microelectrode mapping with Nissl staining to determine if the physiological differences between A1 and AAF corresponded to cytoarchitectonically defined borders. Both A1 and AAF were located within temporal cortex 1 (Te1), with AAF occupying an anteroventral subdivision of Te1, indicating that the two neighbouring, physiologically distinct fields are cytoarchitectonically homogeneous.

6.
The head-related transfer function (HRTF) of the cat adds directionally dependent energy minima to the amplitude spectrum of complex sounds. These spectral notches are a principal cue for the localization of sound source elevation. Physiological evidence suggests that the dorsal cochlear nucleus (DCN) plays a critical role in the brainstem processing of this directional feature. Type O units in the central nucleus of the inferior colliculus (ICC) are a primary target of ascending DCN projections and, therefore, may represent midbrain specializations for the auditory processing of spectral cues for sound localization. Behavioral studies confirm a loss of sound orientation accuracy when DCN projections to the inferior colliculus are surgically lesioned. This study used simple analogs of HRTF notches to characterize single-unit response patterns in the ICC of decerebrate cats that may contribute to the directional sensitivity of the brain's spectral processing pathways. Manipulations of notch frequency and bandwidth demonstrated frequency-specific excitatory responses that have the capacity to encode HRTF-based cues for sound source location. These response patterns were limited to type O units in the ICC and have not been observed for the projection neurons of the DCN. The unique spectral integration properties of type O units suggest that DCN influences are transformed into a more selective representation of sound source location by a local convergence of wideband excitatory and frequency-tuned inhibitory inputs.
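A simple analog of an HRTF spectral notch, of the kind this study used as a stimulus, can be generated by carving an energy minimum into broadband noise. The FFT-based sketch below zeroes a frequency band outright; real stimuli would use a finite-depth, finite-slope filter, and all parameters here (sample rate, notch edges) are illustrative.

```python
import numpy as np

def notched_noise(fs, dur, notch_lo, notch_hi, seed=0):
    """Broadband Gaussian noise with a spectral notch (energy minimum)
    between notch_lo and notch_hi Hz, made by zeroing FFT bins."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[(freqs >= notch_lo) & (freqs <= notch_hi)] = 0.0  # carve the notch
    return np.fft.irfft(spec, n)

# Illustrative notch: 2-3 kHz minimum in 0.5 s of noise at 16 kHz
y = notched_noise(fs=16000, dur=0.5, notch_lo=2000, notch_hi=3000)
```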

7.
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows (a fast one on the left and a slower one on the right), modeled through the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signals might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing.

8.
The field L complex is thought to be the highest auditory centre and the principal auditory input to the song vocal nuclei. Different anatomical and functional subdivisions have been described in field L. Auditory neurons of field L are well activated by natural sounds, especially by species-specific sounds, and a complex sound code appears to exist in field L. However, until now, the spatial organization of the different functional subdivisions had been described only using artificial sounds. Here, we investigated the spatial distribution of neuronal responses in field L to species-specific songs. Starlings are a particularly appropriate species for this investigation, both because their complex vocal behaviour implies different levels of categorization and because of their neuronal responses to complex song elements. Multi-unit recordings were performed in awake wild starlings. The method of backward correlation was used to visualize the functional organization, and the neuronal responses were represented as both activity maps and correlation maps. The use of natural sounds allowed us to define several functional sub-areas with different neuronal processing. These results show that field L is involved in a more complex task than simple frequency processing.

9.
Multi-unit (MU) activity and local field potentials (LFP) were simultaneously recorded from 161 sites in the middle cortical layers of the primary auditory cortex (AI) and the anterior auditory field (AAF) in 51 cats. Responses were obtained for frequencies between 625 Hz and 40 kHz, at intensities from 75 dB SPL to threshold. We compared the response properties of MU activity and LFP triggers, in terms of characteristic frequency (CF), threshold at CF, minimum latency and frequency tuning-curve bandwidth 20 dB above threshold. On average, thresholds at CF were significantly lower for LFP events than those for MU spikes (4.6 dB for AI, and 3 dB for AAF). Minimum latencies were significantly shorter for LFP events than for MU spikes (1.5 ms in AI, and 1.7 ms in AAF). Frequency tuning curves were significantly broader for LFP events than those for MU spikes (1.0 octave in AI, and 1.3 octaves in AAF). In contrast, the CF was not significantly different between LFP events and MU spikes. The LFP results indicate that cortical neurons receive convergent sub-cortical inputs from a broad frequency range. The sharper tuning curves for MU activity compared to those of LFP events are likely the result of intracortical inhibitory processes.
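The bandwidth measure compared above (tuning-curve width 20 dB above threshold, in octaves) can be sketched as follows for a sampled tuning curve. The V-shaped threshold curve below is hypothetical, and the sketch takes no interpolation between tested frequencies, which a real analysis would likely add.

```python
import numpy as np

def bw20_octaves(freqs_hz, thresholds_db):
    """Bandwidth of a frequency tuning curve measured 20 dB above the
    minimum threshold, expressed in octaves (no interpolation between
    sampled frequencies)."""
    f = np.asarray(freqs_hz, dtype=float)
    t = np.asarray(thresholds_db, dtype=float)
    inside = f[t <= t.min() + 20.0]     # frequencies within the curve
    return float(np.log2(inside.max() / inside.min()))

# Hypothetical V-shaped tuning curve (CF = 8 kHz, threshold 10 dB SPL)
freqs = [2000, 4000, 8000, 16000, 32000]
thresholds = [45, 25, 10, 28, 50]
print(bw20_octaves(freqs, thresholds))  # prints 2.0
```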

10.
There is a complex functional organization of the central auditory system from the brainstem to primary and associative auditory cortices. Functional neuroimaging has been used to visualize and confirm the spatial distribution of brain activation in temporal areas for the processing of simple acoustic stimuli. Brain activity is much more complex for words, and different networks can be recruited when phonological, lexical and semantic levels of processing are engaged.  相似文献   

11.
We assessed the spatial-tuning properties of units in the cat's anterior auditory field (AAF) and compared them with those observed previously in the primary (A1) and posterior auditory fields (PAF). Multi-channel, silicon-substrate probes were used to record single- and multi-unit activity from the right hemispheres of alpha-chloralose-anesthetized cats. Spatial tuning was assessed using broadband noise bursts that varied in azimuth or elevation. Response latencies were slightly, though significantly, shorter in AAF than A1, and considerably shorter in both of those fields than in PAF. Compared to PAF, spike counts and latencies were more poorly modulated by changes in stimulus location in AAF and A1, particularly at higher sound pressure levels. Moreover, units in AAF and A1 demonstrated poorer level tolerance than units in PAF, with spike rates modulated as much by changes in stimulus intensity as by changes in stimulus location. Finally, spike-pattern-recognition analyses indicated that units in AAF transmitted less spatial information, on average, than did units in PAF, an observation consistent with recent evidence that PAF is necessary for sound-localization behavior, whereas AAF is not.

12.
In this study, we assessed the changes in spontaneous activity and frequency tuning by simultaneous recording of multi-units and local field potentials in primary auditory cortex (AI), anterior auditory field (AAF) and secondary auditory cortex (AII) of cats before and immediately after 30 min exposure to a loud (93-123 dB SPL) pure tone. The average difference between the pure tone and the characteristic frequency (CF) was less than one octave for 70% of the recordings. We found that the mean threshold at CF increased significantly in AI and in AAF but not in AII. The mean CF for units in AI decreased significantly, whereas no significant effect was noted in AAF and AII. The mean frequency-tuning curve bandwidth decreased significantly in AII. Spontaneous activity increased significantly in AI, did not change in AAF, and decreased significantly in AII. Inter-area neural synchrony was not affected. Multi-unit response areas and response areas based on local field potentials were usually affected similarly, in that the 'damaged area', defined as the response surface before minus the surface after the trauma, was very similar for the two measures. This suggests that the damage reflects peripheral activity changes. Enhancement of frequency response areas around CF, but at least one octave below the frequency of the traumatizing tone, was found most frequently in AAF and suggests a reduction of inhibition, likely as a result of the peripheral hearing loss.

13.
Temporal differences between the two ears are critical for spatial hearing. They can be described along axes of interaural time difference (ITD) and interaural correlation, and their processing starts in the brainstem with the convergence of monaural pathways which are tuned in frequency and which carry temporal information. In previous studies, we examined the bandwidth (BW) of frequency tuning at two stages: the auditory nerve (AN) and inferior colliculus (IC), and showed that BW depends on characteristic frequency (CF) but that there is no difference in the mean BW of these two structures when measured in a binaural, temporal framework. This suggested that there is little frequency convergence in the ITD pathway between AN and IC and that frequency selectivity determined by the cochlear filter is preserved up to the IC. Unexpectedly, we found that AN and IC neurons can be similar in CF and BW, yet responses to changes in interaural correlation in the IC were different than expected from coincidence patterns ("pseudo-binaural" responses) in the AN. To better understand this, we here examine the responses of bushy cells, which provide monaural inputs to binaural neurons. Using broadband noise, we measured BW and correlation sensitivity in the cat trapezoid body (TB), which contains the axons of bushy cells. This allowed us to compare these two metrics at three stages in the ITD pathway. We found that BWs in the TB are similar to those in the AN and IC. However, TB neurons were found to be more sensitive to changes in stimulus correlation than AN or IC neurons. This is consistent with findings that show that TB fibers are more temporally precise than AN fibers, but is surprising because it suggests that the temporal information available monaurally is not fully exploited binaurally.
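The interaural correlation manipulated in this line of work is, in signal terms, the normalized correlation between the left- and right-ear waveforms. A standard way to build stimuli with a target correlation rho is to mix a common noise source with an independent one; the sketch below (a common construction, not necessarily the study's exact procedure) generates such a pair and measures the result.

```python
import numpy as np

def interaural_correlation(left, right):
    """Normalized correlation between the two ear signals."""
    l = left - np.mean(left)
    r = right - np.mean(right)
    return float(np.dot(l, r) / np.sqrt(np.dot(l, l) * np.dot(r, r)))

def mixed_noise(rho, n, seed=0):
    """Noise pair with target interaural correlation rho, built by
    mixing a common and an independent Gaussian noise source."""
    rng = np.random.default_rng(seed)
    common, indep = rng.standard_normal((2, n))
    left = common
    right = rho * common + np.sqrt(1.0 - rho**2) * indep
    return left, right

left, right = mixed_noise(0.5, 200000)
```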

14.
May BJ. Hearing Research, 2000, 148(1-2): 74-87
The role of the dorsal cochlear nucleus (DCN) in directional hearing was evaluated by measuring sound localization behaviors before and after cats received lesions of the dorsal and intermediate acoustic striae (DAS/IAS). These lesions are presumed to disrupt spectral processing in the DCN without affecting binaural time and level difference cues that exit the cochlear nucleus via the ventral acoustic stria. Prior to DAS/IAS lesions, cats made accurate head orientation responses toward sound sources in the frontal sound field. After a unilateral DAS/IAS lesion, subjects showed increased errors in the azimuth and elevation of their responses; in addition, the final orientation of head movements tended to be more variable. Largest deficits in response elevation were observed in the hemifield that was ipsilateral to the lesion. When a second lesion was placed in the opposite DAS/IAS, increased orientation errors were observed throughout the frontal field. Nonetheless, bilaterally lesioned cats showed normal discrimination of changes in sound source location when tested with a spatial acuity task. These findings support previous interpretations that the DCN contributes to sound orientation behavior, and further suggest that the identification of absolute sound source locations and the discrimination between spatial locations involve independent auditory processing mechanisms.

15.
In recent magnetoencephalographic studies, we established a novel component of the auditory evoked field, which is elicited by a transition from noise to pitch in the absence of a change in energy. It is referred to as the 'pitch onset response'. To extend our understanding of pitch-related neural activity, we compared transient and sustained auditory evoked fields in response to a 2000-ms segment of noise and a subsequent 1000-ms segment of regular interval sound (RIS). RIS provokes the same long-term spectral representation in the auditory system as noise, but is distinguished by a definite pitch, the salience of which depends on the degree of temporal regularity. The stimuli were presented at three steps of increasing regularity and two spectral bandwidths. The auditory evoked fields were recorded from both cerebral hemispheres of twelve subjects with a 37-channel magnetoencephalographic system. Both the transient and the sustained components evoked by noise and RIS were sensitive to spectral bandwidth. Moreover, the pitch salience of the RIS systematically affected the pitch onset response, the sustained field, and the off-response. This indicates that the underlying neural generators reflect the emergence, persistence and offset of perceptual attributes derived from the temporal regularity of a sound.
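Regular interval sound of the kind described above is classically generated with an iterated delay-and-add network: each pass adds a copy of the noise delayed by the pitch period, introducing temporal regularity (and a pitch at 1/delay) while leaving the long-term spectral balance broadly noise-like. The sketch below implements that generic construction; the iteration count and gain are illustrative and need not match the study's stimulus parameters.

```python
import numpy as np

def regular_interval_sound(noise, fs, pitch_hz, n_iter=8, gain=1.0):
    """Iterated delay-and-add: each iteration adds a copy of the signal
    delayed by 1/pitch_hz samples. More iterations give stronger
    temporal regularity and hence a more salient pitch at pitch_hz."""
    d = int(round(fs / pitch_hz))       # delay in samples (pitch period)
    x = np.array(noise, dtype=float)
    for _ in range(n_iter):
        delayed = np.zeros_like(x)
        delayed[d:] = x[:-d]
        x = x + gain * delayed
    return x

fs = 20000
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                        # 1 s of white noise
ris = regular_interval_sound(noise, fs, pitch_hz=200)  # pitch near 200 Hz
```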

16.
The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4–8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
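The acoustic simulation described above is a noise vocoder: split the signal into spectral channels, extract and low-pass each channel's envelope, and use it to modulate band-limited noise. The FFT-based sketch below shows the two parameters the study varied (channel count for spectral resolution, envelope cutoff for temporal cues). It is a minimal illustration only; real CI simulations typically use time-domain filterbanks and Hilbert envelopes, and the band edges here are assumptions.

```python
import numpy as np

def vocode(signal, fs, n_channels=8, env_cutoff_hz=160):
    """Noise-vocoder sketch: more channels -> finer spectral resolution;
    higher env_cutoff_hz -> more temporal (periodicity) cues preserved."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    # Log-spaced channel edges (assumed 100 Hz to ~8 kHz analysis range)
    edges = np.geomspace(100.0, min(8000.0, fs / 2 - 1), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(band_mask, spec, 0), n)
        env = np.abs(band)                      # crude envelope (rectify)
        env_spec = np.fft.rfft(env)
        env_spec[freqs > env_cutoff_hz] = 0     # low-pass the envelope
        env = np.maximum(np.fft.irfft(env_spec, n), 0)
        # Modulate band-limited noise with the smoothed envelope
        carrier_spec = np.where(band_mask, np.fft.rfft(rng.standard_normal(n)), 0)
        out += env * np.fft.irfft(carrier_spec, n)
    return out

rng = np.random.default_rng(2)
sig = rng.standard_normal(8000)                  # stand-in for a speech token
out = vocode(sig, 16000, n_channels=4, env_cutoff_hz=160)
```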

17.
A temporal compound is a complex pattern associated with a sequence of brief (30-100 msec) acoustic events whose identity can be distinguished but whose order cannot be reported. In the present study, two frequency glides were concatenated to form a 400-msec temporal compound consisting of ten 40-msec glides. Fourteen young adults were asked to discriminate these temporal compounds in a same-different paradigm employing a dichotic probe technique. Results supported the hypotheses that same judgments of temporal compounds involve global, or right hemisphere, processing and that different judgments of temporal compounds involve analytical, or left hemisphere, processing. Event-related potential (ERP) data revealed an interaction between side attended (right or left) and type of judgment (same or different). Same stimuli presented from the left side elicited greater ERP responses than different stimuli presented from the left side; conversely, different stimuli presented from the right side elicited greater ERP responses than same stimuli presented from the right side. Reaction times showed the "fast-same" effect, consistently observed in this paradigm.

18.
Spatial hearing facilitates the perceptual organization of complex soundscapes into accurate mental representations of sound sources in the environment. Yet, the role of binaural cues in auditory scene analysis (ASA) has received relatively little attention in recent neuroscientific studies employing novel, spectro-temporally complex stimuli. This may be because a stimulation paradigm that provides binaurally derived grouping cues of sufficient spectro-temporal complexity has not yet been established for neuroscientific ASA experiments. Random-chord stereograms (RCS) are a class of auditory stimuli that exploit spectro-temporal variations in the interaural envelope correlation of noise-like sounds with interaurally coherent fine structure; they evoke salient auditory percepts that emerge only under binaural listening. Here, our aim was to assess the usability of the RCS paradigm for indexing binaural processing in the human brain. To this end, we recorded EEG responses to RCS stimuli from 12 normal-hearing subjects. The stimuli consisted of an initial 3-s noise segment with interaurally uncorrelated envelopes, followed by another 3-s segment, where envelope correlation was modulated periodically according to the RCS paradigm. Modulations were applied either across the entire stimulus bandwidth (wideband stimuli) or in temporally shifting frequency bands (ripple stimulus). Event-related potentials and inter-trial phase coherence analyses of the EEG responses showed that the introduction of the 3- or 5-Hz wideband modulations produced a prominent change-onset complex and ongoing synchronized responses to the RCS modulations. In contrast, the ripple stimulus elicited a change-onset response but no response to ongoing RCS modulation. Frequency-domain analyses revealed increased spectral power at the fundamental frequency and the first harmonic of wideband RCS modulations. RCS stimulation yields robust EEG measures of binaurally driven auditory reorganization and has potential to provide a flexible stimulation paradigm suitable for isolating binaural effects in ASA experiments.

19.
An overview is presented of auditory brainstem responses (ABRs) and middle- and long-latency auditory evoked responses recorded from clinical populations and from an experimental model, the cat. The research strategy of this program is to use auditory evoked responses as surface probes of central auditory processing functions and of substrate systems. Comparisons of ABRs between normal and mentally handicapped populations indicate specific types of abnormalities in particular subpopulations. Generator substrates of these responses have been suggested from analytical animal experiments. Middle- and long-latency responses are now beginning to be compared in normal and mentally handicapped subjects. Experimental data from the cat suggest that these responses reflect the relatively independent activation of several parallel forebrain systems which receive input from brainstem levels.

20.
Jerger S. Ear and Hearing, 2007, 28(6): 754-765
Perception concerns the identification and interpretation of sensory stimuli in our external environment. The purpose of this review is to survey contemporary views about effects of mild to severe sensorineural hearing impairment (HI) in children on perceptual processing. The review is one of a series of papers resulting from a workshop on Outcomes Research in Children with Hearing Loss sponsored by The National Institute on Deafness and Other Communication Disorders/National Institutes of Health. Children with HI exhibit heterogeneous patterns of results. In general, however, perceptual processing of the (a) auditory properties of nonspeech reveals some problems in processing spectral information, but not temporal information; (b) auditory properties of speech reveals some problems in processing temporal sequences, variation in spatial location, and voice onset times, but not in processing talker-gender, weighting acoustic cues, or covertly orienting to the spatial location of sound; (c) linguistic properties of speech reveals some problems in processing general linguistic content, semantic content, and phonological content. The normalcy/abnormalcy of results varies as a function of degree of loss and task demands. As a general rule, children with severe HI have more abnormalities than children with mild to moderate HI. Auditory linguistic properties are also generally processed more abnormally than auditory nonverbal properties. This outcome implies that childhood HI has less effect on more physical, developmentally earlier properties that are characterized by less contingent processing. Some perceptual properties that are processed in a more automatic manner by normal listeners are processed in a more controlled manner by children with HI. This outcome implies that deliberate perceptual processing in the presence of childhood HI requires extra effort and more mental resources, thus limiting the availability of processing resources for other tasks.
