Similar documents: 20 results
1.
Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus with NB activation decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

2.
Auditory cortex updates incoming information on a segment-by-segment basis for human speech and animal communication. Measuring repetition rate transfer functions (RRTFs) captures temporal responses to repetitive sounds. In this study, we used repetitive click trains to describe the spatial distribution of RRTF responses in cat anterior auditory field (AAF) and to discern potential variations in local temporal processing capacity. A majority of RRTF filters are band-pass. Temporal parameters estimated from RRTFs and corrected for characteristic frequency or latency dependencies are non-homogeneously distributed across AAF. Unlike the shallow global gradient observed in spectral receptive field parameters, transitions from loci with high to low temporal parameters are steep. Quantitative spatial analysis suggests non-uniform, circumscribed local organization for temporal pattern processing superimposed on global organization for spectral processing in cat AAF.

3.
OBJECTIVE: Harmonic complex tones consisting of four or more consecutive harmonics of a common fundamental frequency are perceived as having the pitch of the fundamental; this is referred to as the missing fundamental phenomenon (MFP). The MFP is thought to arise in the central auditory system rather than in the periphery, but it remains unclear where and how complex sounds are integrated. Using 306-channel magnetoencephalography (MEG), we investigated when and where the MFP is integrated in the auditory cortex. METHOD: From 12 healthy right-handed adult volunteers with normal hearing, six subjects were selected by MEG screening. Ears were randomly stimulated with five different complex tones consisting of the fundamental frequency tone and harmonic complex tones. The location and direction of equivalent current dipoles (ECDs) were evaluated at P50 and N100 in the right temporal lobe by MEG, and the dispersion of the ECD sources was evaluated on each subject's brain MRI. RESULTS: Stimulation with the harmonic complex tones and the fundamental frequency tone localized the P50 and N100 ECDs to the transverse temporal gyrus and the adjacent superior temporal gyrus. Although the P50 ECD sources for the harmonic complex tones and the fundamental tone varied around the transverse temporal gyrus and superior temporal gyrus, the N100 ECD sources were almost identical at the transverse temporal gyrus, demonstrating the MFP. The phenomenon was similarly observed under dichotic stimulation. CONCLUSION: These findings suggest that the MFP arises between P50 and N100 in the transverse temporal gyrus and the superior temporal gyrus, which constitute the primary auditory cortex.

4.
This study investigates the effects of spectral separation of sounds on the ability of goldfish to acquire independent information about two simultaneous complex sources. Goldfish were conditioned to a complex sound made up of two sets of repeated acoustic pulses: a high-frequency pulse with a spectral envelope centered at 625 Hz, and a low-frequency pulse type centered at 240, 305, 390, or 500 Hz. The pulses were presented with each pulse type alternating with an overall pulse repetition rate of 40 pulses per second (pps), and a 20-pps rate between identical pulses. Two control groups were conditioned to the 625-Hz pulse alone, repeated at 40 and 20 pps, respectively. All groups were tested for generalization to the 625-Hz pulse repeated alone at several rates. If the two pulse types in the complex resulted in independent auditory streams, the animals were expected to generalize to the 625-Hz pulse trains as if they were repeated at 20 pps during conditioning. It was hypothesized that as the center frequency of the low-frequency pulse approached that of the 625-Hz pulse, the alternating trains would be perceived as a single auditory stream with a repetition rate of 40 pps. The group conditioned to alternating 625- and 240-Hz pulses generalized least, with maximum generalization at 20 pps, suggesting that the animals formed at least one perceptual stream with a repetition rate of 20 pps. The other alternating pulse groups generalized to intermediate degrees. Goldfish can segregate at least one "auditory stream" from a complex mixture of sources. Segregation can be based on spectral envelope and grows more robust with growing spectral separation between the simultaneous sources. Auditory stream segregation and auditory scene analysis are shared among human listeners, European starlings, and goldfish, and may be primitive characteristics of the vertebrate sense of hearing.
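The alternating pulse-train stimulus described above can be sketched as follows. This is a minimal numpy illustration; the pulse duration, Hann windowing, and amplitudes are arbitrary choices, not the study's actual waveform parameters:

```python
import numpy as np

def pulse(center_freq, fs, n_cycles=5):
    """A brief tone pulse: a few cycles of center_freq under a Hann window."""
    n = int(fs * n_cycles / center_freq)
    t = np.arange(n) / fs
    return np.hanning(n) * np.sin(2 * np.pi * center_freq * t)

def alternating_train(f_high, f_low, rate_pps, fs, dur):
    """Pulse train at rate_pps overall, with the two pulse types
    alternating, so each individual type repeats at rate_pps / 2."""
    out = np.zeros(int(fs * dur))
    period = int(fs / rate_pps)  # samples between pulse onsets
    for i, start in enumerate(range(0, len(out), period)):
        p = pulse(f_high if i % 2 == 0 else f_low, fs)
        stop = min(start + len(p), len(out))
        out[start:stop] += p[: stop - start]
    return out

# 625-Hz and 240-Hz pulses alternating at 40 pps (each type at 20 pps)
fs = 16000
stim = alternating_train(625.0, 240.0, rate_pps=40, fs=fs, dur=0.5)
```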

5.
Wang X. Hearing Research 2007;229(1-2):81-93
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features; and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I review recent studies from our laboratory on temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex depend on stimulus optimality and context, and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

6.
Sinex DG, Sabes JH, Li H. Hearing Research 2002;168(1-2):150-162
Responses of inferior colliculus neurons to simplified stimuli that may engage mechanisms that contribute to auditory scene analysis were obtained. The stimuli were harmonic complex tones, which are heard by human listeners as single sounds, and the same tones with one component 'mistuned', which are heard as two separate sounds. The temporal discharge pattern elicited by a harmonic complex tone usually resembled the same neuron's response to a pure tone. In contrast, tones with a mistuned component elicited responses with distinctive, stereotypical temporal patterns that were not obviously related to the stimulus waveform. For a particular stimulus configuration, the discharge pattern was similar across neurons with different pure-tone frequency selectivity. A computational model that compared response envelopes across multiple narrow bands successfully reproduced the stereotypical response patterns elicited by different stimulus configurations. The results suggest that mistuning created a temporally synchronous distributed representation of the mistuned component that could be identified by higher auditory centers in the presence of the ongoing response produced by the remaining components; this kind of representation might facilitate the identification of individual sound sources in complex acoustic environments.
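A harmonic complex with one mistuned component, of the kind used in these experiments, can be generated along the following lines. This is an illustrative numpy sketch; the 8% mistuning, component count, and equal amplitudes are assumptions, not parameters taken from the study:

```python
import numpy as np

def harmonic_complex(f0, n_harm, fs, dur, mistuned=None, pct=8.0):
    """Sum of equal-amplitude harmonics of f0; optionally shift
    harmonic number `mistuned` upward in frequency by `pct` percent."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0
        if k == mistuned:
            f *= 1 + pct / 100.0  # mistune this single component
        x += np.sin(2 * np.pi * f * t)
    return x / n_harm  # scale so peak amplitude stays <= 1

fs, f0 = 24000, 200.0
in_tune = harmonic_complex(f0, 6, fs, 0.05)               # heard as one sound
mistuned = harmonic_complex(f0, 6, fs, 0.05, mistuned=3)  # heard as two sounds
```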

7.
Auditory nerve single-unit population studies have demonstrated that phase-locking plays a dominant role in the neural encoding of both the spectrum and voice pitch of speech sounds. Phase-locked neural activity underlying the scalp-recorded human frequency-following response (FFR) has also been shown to encode certain spectral features of steady-state and time-variant speech sounds as well as pitch of several complex sounds that produce time-invariant pitch percepts. By extension, it was hypothesized that the human FFR may preserve pitch-relevant information for speech sounds that elicit time-variant as well as steady-state pitch percepts. FFRs were elicited in response to the four lexical tones of Mandarin Chinese as well as to a complex auditory stimulus which was spectrally different but equivalent in fundamental frequency (f0) contour to one of the Chinese tones. Autocorrelation-based pitch extraction measures revealed that the FFR does indeed preserve pitch-relevant information for all stimuli. Phase-locked interpeak intervals closely followed f0. Spectrally different stimuli that were equivalent in f0 similarly showed robust interpeak intervals that followed f0. These FFR findings support the viability of early, population-based 'predominant interval' representations of pitch in the auditory brainstem that are based on temporal patterns of phase-locked neural activity.
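Autocorrelation-based pitch extraction of the kind described above can be sketched as follows. This is a minimal illustration, not the study's analysis pipeline; the function name, search range, and test signal are all assumptions:

```python
import numpy as np

def autocorr_f0(signal, fs, fmin=80.0, fmax=400.0):
    """Estimate f0 from the dominant peak of the normalized
    autocorrelation function within a plausible lag range."""
    x = signal - np.mean(signal)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]  # normalize so lag 0 == 1
    lo, hi = int(fs / fmax), int(fs / fmin)  # lags for the f0 search range
    lag = lo + int(np.argmax(acf[lo:hi]))
    return fs / lag

# a 150 Hz harmonic complex built from harmonics 2-4 only:
# the fundamental itself is absent from the spectrum
fs = 16000
t = np.arange(int(fs * 0.1)) / fs
tone = sum(np.sin(2 * np.pi * 150 * k * t) for k in (2, 3, 4))
f0_est = autocorr_f0(tone, fs)  # the ACF peak recovers ~150 Hz
```

The same idea scales to time-variant pitch by running the autocorrelation over short sliding windows and tracking the dominant interpeak interval frame by frame.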

8.
Recent evidence suggests that sensitivity to the temporal fine structure (TFS) of sounds is adversely affected by cochlear hearing loss. This may partly explain the difficulties experienced by people with cochlear hearing loss in understanding speech when background sounds, especially fluctuating backgrounds, are present. We describe a test for assessing sensitivity to TFS. The test can be run using any PC with a sound card. The test involves discrimination of a harmonic complex tone (H), with a fundamental frequency F0, from a tone in which all harmonics are shifted upwards by the same amount in Hertz, resulting in an inharmonic tone (I). The phases of the components are selected randomly for every stimulus. Both tones have an envelope repetition rate equal to F0, but the tones differ in their TFS. To prevent discrimination based on spectral cues, all tones are passed through a fixed bandpass filter, usually centred at 11F0. A background noise is used to mask combination tones. The results show that, for normal-hearing subjects, learning effects are small, and the effect of the level of testing is also small. The test provides a simple, quick, and robust way to measure sensitivity to TFS.
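The H and I stimuli can be sketched as follows. This is a simplified illustration under stated assumptions: the bandpass filtering around 11F0 and the background masking noise described above are omitted, and the component count, duration, and shift size are illustrative values only:

```python
import numpy as np

def complex_tone(f0, shift, fs=48000, dur=0.2, n_harm=12, rng=None):
    """Harmonic complex (shift=0) or frequency-shifted inharmonic
    variant (shift>0): every component is moved up by `shift` Hz,
    which preserves the F0 envelope rate but changes the TFS.
    Component phases are randomized on every call, as in the test."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        phase = rng.uniform(0, 2 * np.pi)
        x += np.sin(2 * np.pi * (k * f0 + shift) * t + phase)
    return x / n_harm

f0 = 100.0
h_tone = complex_tone(f0, shift=0.0)        # harmonic (H)
i_tone = complex_tone(f0, shift=0.25 * f0)  # inharmonic (I)
# both share the f0 envelope repetition rate; only the TFS differs
```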

9.
Most information about neuronal properties in primary auditory cortex (AI) has been gathered using simple artificial sounds such as pure tones and broad-band noise. These sounds are very different from the natural sounds that are processed by the auditory system in real world situations. In an attempt to bridge this gap, simple tonal stimuli and a standard set of six natural sounds were used to create models relating the responses of neuronal clusters in AI of barbiturate-anesthetized cats to the two classes of stimuli. A significant correlation was often found between the response to the separate frequency components of the natural sounds and the response to the natural sound itself. At the population level, this correlation resulted in a rate profile that represented robustly the spectral profiles of the natural sounds. There was however a significant scatter in the responses to the natural sound around the predictions based on the responses to tonal stimuli. Going the other way, in order to understand better the non-linearities in the responses to natural sounds, responses of neuronal clusters were characterized using second order Volterra kernel analysis of their responses to natural sounds. This characterization predicted reasonably well the amplitude of the response to other natural sounds, but could not reproduce the responses to tonal stimuli. Thus, second order non-linear characterizations, at least those using the Volterra kernel model, do not interpolate well between responses to tones and to natural sounds in auditory cortex.  相似文献   

10.
Studies in several mammalian species have demonstrated that auditory cortical neurons respond strongly to single frequency-modulated (FM) sweeps, and that most responses are selective for sweep direction and/or rate. In the present study, we used extracellular recordings to examine how neurons in the auditory cortices of anesthetized rats respond to continuous, periodic trains of FM sweeps (described previously by deCharms et al., Science 280 (1998) pp. 1439–1444, as moving auditory gratings). Consistent with previous observations in owl monkeys, we found that the majority of cortical neurons responded selectively to trains of either up-sweeps or down-sweeps; selectivity for down-sweeps was most common. Periodic responses were typically evoked only by sweep trains with repetition rates less than 12 sweeps per second. Directional differences in responses were dependent on repetition rate. Our results support the proposal that a combination of both spectral and temporal acoustic features determines the responses of auditory cortical neurons to sound, and add to the growing body of evidence indicating that the traditional view of the auditory cortex as a frequency analyzer is not sufficient to explain how the mammalian brain represents complex sounds.

11.
Positron emission tomography (PET) was used to investigate the neural systems involved in the central processing of different auditory stimuli. Noise, pure tones, pure-tone pulses, music and speech were presented monaurally. 15O-water PET scans were obtained during these stimulations in five normal-hearing, healthy subjects. All stimuli were compared against a baseline scan in silence. Processing of simple auditory stimuli, such as pure tones and noise, predominantly activated the left transverse temporal gyrus (Brodmann area [BA] 41), whereas sounds with discontinuous acoustic patterns, such as pure-tone pulse trains, activated parts of the auditory association area in the superior temporal gyri (BA 42) in both hemispheres. Moreover, sounds with complex spectral, intensity, and temporal structures (words, speech, music) activated spatially even more extensive associative auditory areas in both hemispheres (BA 21, 22). PET has revealed a remarkable potential to investigate early central auditory processing, and has provided evidence for the coexistence of functionally linked but individually active parallel and serial auditory networks.

12.
Krishnan A, Plack CJ. Hearing Research 2011;275(1-2):110-119
Psychoacoustic studies have shown that complex tones containing resolved harmonics evoke stronger pitches than complex tones with only unresolved harmonics. Also, unresolved harmonics presented in alternating sine and cosine (ALT) phase produce a doubling of pitch. We examine here whether the temporal pattern of phase-locked neural activity reflected in the scalp recorded human frequency following response (FFR) preserves information relevant to pitch strength, and to the doubling of pitch for ALT stimuli. Results revealed stronger neural periodicity strength for resolved stimuli, although the effect of resolvability was weak compared to the effect observed behaviorally; autocorrelation functions and FFR spectra suggest a different pattern of phase-locked neural activity for ALT stimuli with resolved and unresolved harmonics consistent with the doubling of pitch observed in our behavioral estimates; and the temporal pattern of neural activity underlying pitch encoding appears to be similar at the auditory nerve (auditory nerve model response) and the rostral brainstem level (FFR). These findings suggest that the phase-locked neural activity reflected in the scalp recorded FFR preserves neural information relevant to pitch that could serve as an electrophysiological correlate of the behavioral pitch measure. The scalp recorded FFR may provide for a non-invasive analytic tool to evaluate neural encoding of complex sounds in humans.

13.
Numerous studies have demonstrated that the frequency spectrum of sounds is represented in the neural code of single auditory nerve fibres both spatially and temporally, but few experiments have been designed to test which of these two representations of frequency is used in the discrimination of complex sounds such as speech and music. This paper reviews the roles of place and temporal coding of frequency in the nervous system as a basis for frequency discrimination of complex sounds such as those in speech. Animal studies based on frequency analysis in the cochlea have shown that the place code changes systematically as a function of sound intensity and therefore lacks the robustness required to explain pitch perception (in humans), which is nearly independent of sound intensity. Further indication that the place principle plays a minor role in discrimination of speech comes from observations that signs of impairment of the spectral analysis in the cochlea in some individuals are not associated with impairments in speech discrimination. The importance of temporal coding is supported by the observation that injuries to the auditory nerve, assumed to impair temporal coherence of the discharges of auditory nerve fibres, are associated with grave impairments in speech discrimination. These observations indicate that temporal coding of sounds is more important for discrimination of speech than place coding. The implications of these findings for the design of prostheses such as cochlear implants are discussed.

14.
Li H, Sabes JH, Sinex DG. Hearing Research 2006;220(1-2):116-125
In order to examine the effect of inhibition on processing auditory temporal information, responses of single neurons in the inferior colliculus of the chinchilla to sinusoidally amplitude-modulated (SAM) tones alone and in the presence of a steady-state tone were obtained. The carrier frequency of the SAM tone was either the characteristic frequency (CF) or a frequency in the inhibitory response area of a studied neuron. When the carrier frequency was set to the neuron's CF, neurons responded in synchrony to the SAM-tone envelope, as expected. When the carrier frequency was set to a frequency at which pure tones produced inhibition, SAM tones elicited little or no response, also as expected. However, when the same SAM tone was paired with a pure tone whose frequency was set to the neuron's CF, responses synchronized to the SAM tone envelope were obtained. These modulated responses were typically one-half cycle out-of-phase with the response to the SAM tone at CF, suggesting that they arose from cyclic inhibition and release from inhibition by the SAM tone. The results demonstrate that the representation of temporal information by inferior colliculus neurons is influenced by temporally-patterned inhibition arising from locations remote from CF.
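A SAM tone of the kind used here is straightforward to synthesize. A minimal sketch, with all parameter values illustrative rather than taken from the study:

```python
import numpy as np

def sam_tone(fc, fm, depth, fs, dur):
    """Sinusoidally amplitude-modulated tone: carrier frequency fc,
    modulation rate fm, and modulation depth in [0, 1]."""
    t = np.arange(int(fs * dur)) / fs
    env = 1 + depth * np.sin(2 * np.pi * fm * t)  # envelope repeats at fm
    return env * np.sin(2 * np.pi * fc * t)

fs = 44100
x = sam_tone(fc=4000.0, fm=100.0, depth=1.0, fs=fs, dur=0.1)
# peak amplitude approaches 1 + depth; envelope period is 1/fm seconds
```

Pairing conditions like those in the study would simply add a steady pure tone at the neuron's CF to this waveform.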

15.
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all the necessary information, functions as a "semantic processor" that deduces the task-specific meaning of sounds through learning.

16.
Acoustic signals are generally encoded in the peripheral auditory system of vertebrates by a duality scheme. For frequency components that fall within the excitatory tuning curve, individual eighth nerve fibers can encode the effective spectral energy by a spike-rate code, while simultaneously preserving the signal waveform periodicity of lower frequency components by phase-locked spike-train discharges. To explore how robust this duality of representation may be in the presence of noise, we recorded the responses of auditory fibers in the eighth nerve of the Tokay gecko to tonal stimuli when masking noise was added simultaneously. We found that their spike-rate functions reached plateau levels fairly rapidly in the presence of noise, so the ability to signal the presence of a tone by a concomitant change in firing rate was quickly lost. On the other hand, their synchronization functions maintained a high degree of phase-locked firings to the tone even in the presence of high-intensity masking noise, thus enabling a robust detection of the tonal signal. Critical ratios (CR) and critical bandwidths showed that in the frequency range where units are able to phase-lock to the tonal periodicity, the CR bands were relatively narrow and the bandwidths were independent of noise level. However, for higher frequency tones where phase-locking fails and only spike-rate codes apply, the CR bands were much wider and depended upon noise level, so that the fibers' ability to filter tones out of a noisy background degraded with increasing noise levels. The greater robustness of phase-locked temporal encoding compared with spike-rate coding demonstrates an important advantage of using lower frequency signals for communication in noisy environments.

17.
It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine-structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise.
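The CIS front end mentioned above (a bandpass filterbank, followed by per-channel envelope extraction, which then modulates interleaved electric pulse trains) can be roughly sketched as follows. This is a crude FFT-masking approximation for illustration only, not an actual implant implementation; the band edges, cutoff, and moving-average lowpass are all assumptions:

```python
import numpy as np

def cis_envelopes(x, fs, edges, env_cut=200.0):
    """Crude CIS-style front end: split the signal into bands by FFT
    masking, then half-wave rectify and lowpass (moving average) each
    band to obtain the envelopes that would modulate the pulse trains."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    win = max(1, int(fs / env_cut))  # moving-average lowpass length
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.where((freqs >= lo) & (freqs < hi), spec, 0)
        band = np.fft.irfft(band_spec, n=len(x))   # band-limited signal
        rect = np.maximum(band, 0.0)               # half-wave rectify
        env = np.convolve(rect, np.ones(win) / win, mode="same")
        envs.append(env)
    return np.array(envs)  # shape: (n_channels, n_samples)

# two-component test signal: only the lowest and highest bands carry energy
fs = 16000
t = np.arange(int(fs * 0.05)) / fs
sig = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)
edges = [200, 1000, 2000, 4000]
E = cis_envelopes(sig, fs, edges)
```

FAME-style strategies additionally encode within-band frequency modulation, which this envelope-only sketch discards; that difference is exactly what the tone-classification comparison in the study probes.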

18.
The neural basis of low pitch was investigated in the present study by recording a brainstem potential from the scalp of human subjects during presentation of complex tones which evoke a variable sensation of pitch. The potential recorded, the frequency-following response (FFR), reflects the temporal discharge activity of auditory neurons in the upper brainstem pathway. It was used as an index of neural periodicity in order to determine the extent to which the low pitch of complex tones is encoded in the temporal discharge activity of auditory brainstem neurons. A tone composed of harmonics of a common fundamental produces a sensation of pitch equal to that of the 'missing' fundamental. Such signals generate brainstem potentials which are spectrally similar to FFR recorded in response to sinusoidal signals equal in frequency to the missing fundamental. Both types of signals generate FFR which are periodic, with a frequency similar to the perceived pitch of the stimuli. It is shown that the FFR to the missing fundamental is not the result of a distortion product by recording FFR to a complex signal in the presence of low-frequency bandpass noise. Neither is the FFR the result of neural synchronization to the waveform envelope modulation pattern. This was determined by recording FFR to inharmonic and quasi-frequency-modulated signals. It was also determined that the 'existence region' for FFR to the missing fundamental lies below 2 kHz and that the most favorable spectral region for FFR to complex tones is between 0.5 and 1.0 kHz. These results are consistent with the hypothesis that far-field-recorded FFR does reflect neural activity germane to the processing of low pitch and that such pitch-relevant activity is based on the temporal discharge patterns of neurons in the upper auditory brainstem pathway.

19.
Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results using anesthetized conditions, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons responded in the awake state to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulse-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, under conditions of 254 pps rate and long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps, but were rarely the most sensitive feature. Unlike results from anesthetized conditions, temporal acuity of the most sensitive units was nearly as sharp for brief as for long leading bursts. The differences in stimulus coding across pulse rates likely originate from pulse rate-dependent variations in adaptation in the auditory nerve. Two marked differences from responses to acoustic stimulation were: first, Trail-ON responses to 4,069 pps trains encoded substantially shorter gaps than have been observed with acoustic stimuli; and second, the Lead-OFF gap coding seen for <15 ms gaps in 254 pps stimuli is not seen in responses to sounds. The current results may help to explain why moderate pulse rates around 1,000 pps are favored by many cochlear implant listeners.

20.
Rahne T. HNO 2013;61(3):202-210

Background

The task of assigning concurrent sounds to different auditory objects is known to depend on temporal and spectral cues. When tones of high and low frequencies are presented in alternation, they can be perceived as a single (integrated) melody, or as two parallel (segregated) melodic lines, according to the presentation rate and frequency distance between the sounds. At an intermediate distance or stimulation rate, the percept is ambiguous and alternates between segregated and integrated. This work studied whether an ambiguous sound organization could be modulated towards a robust integrated or a segregated percept by the synchronous presentation of visual cues.

Methods

Two interleaved sets of sounds, one high-frequency and one low-frequency, were presented with concurrent visual stimuli synchronized either to a within-set frequency pattern or to the across-set intensity pattern. Elicitation of the mismatch negativity (MMN) component of event-related brain potentials served as an index of the segregated organization; no task was performed with the sounds.

Results

MMN was elicited only when the visual pattern promoted the segregation of the sounds. Spatial analysis of the distribution of electromagnetic potentials identified four separate neuronal sources underlying the MMN response: one pair located bilaterally in temporal cortical structures and another pair in occipital areas, representing the auditory and visual origins of the MMN response evoked by the inverted triplets used in this study. The results thus demonstrate cross-modal effects of visual information on auditory object perception.
