Similar Documents
20 similar documents found (search time: 31 ms)
1.
Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale‐invariance. We recently identified that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro‐temporal density and cyclo‐temporal statistics over several orders of magnitude, corresponding to a range of water‐like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, as a neuronal population, the responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters.

2.
The brain parses the auditory environment into distinct sounds by identifying those acoustic features in the environment that have common relationships (e.g., spectral regularities) with one another and then grouping together the neuronal representations of these features. Although there is a large literature that tests how the brain tracks spectral regularities that are predictable, it is not known how the auditory system tracks spectral regularities that are not predictable and that change dynamically over time. Furthermore, the contribution of brain regions downstream of the auditory cortex to the coding of spectral regularity is unknown. Here, we addressed these two issues by recording electrocorticographic activity, while human patients listened to tone‐burst sequences with dynamically varying spectral regularities, and identified potential neuronal mechanisms of the analysis of spectral regularities throughout the brain. We found that the degree of oscillatory stimulus phase consistency (PC) in multiple neuronal‐frequency bands tracked spectral regularity. In particular, PC in the delta‐frequency band seemed to be the best indicator of spectral regularity. We also found that these regularity representations existed in multiple regions throughout cortex. This widespread reliable modulation in PC – both in neuronal‐frequency space and in cortical space – suggests that phase‐based modulations may be a general mechanism for tracking regularity in the auditory system specifically and other sensory systems more generally. Our findings also support a general role for the delta‐frequency band in processing the regularity of auditory stimuli.
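The phase-consistency (PC) measure described in this abstract is commonly computed as inter-trial phase coherence: band-pass filter each trial, extract the analytic phase, and take the resultant length of the unit phase vectors across trials. A minimal sketch under that assumption (function name, filter order, and band edges are illustrative, not the paper's exact pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_consistency(trials, fs, band=(1.0, 4.0)):
    """Inter-trial phase consistency in a frequency band (here delta).

    trials: (n_trials, n_samples) array of single-trial signals.
    Returns, per time point, the resultant length of the unit phase
    vectors across trials: 1 = perfectly consistent, ~0 = random phase.
    """
    nyq = fs / 2
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)          # zero-phase band-pass
    phases = np.angle(hilbert(filtered, axis=1))       # analytic phase
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Demo: 20 trials phase-locked to a 2 Hz rhythm vs. 20 unlocked noise trials
fs = 500
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
locked = np.tile(np.sin(2 * np.pi * 2.0 * t), (20, 1)) + 0.1 * rng.standard_normal((20, t.size))
unlocked = rng.standard_normal((20, t.size))
pc_locked = phase_consistency(locked, fs)
pc_unlocked = phase_consistency(unlocked, fs)
```

With phase-locked trials the measure approaches 1; for unlocked noise it stays near the chance level expected for 20 trials.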

3.
Previous behavioural studies in human subjects have demonstrated the importance of amplitude modulations to the process of intelligible speech perception. In functional neuroimaging studies of amplitude modulation processing, the inherent assumption is that all sounds are decomposed into simple building blocks, i.e. sinusoidal modulations. The encoding of complex and dynamic stimuli is often modelled to be the linear addition of a number of sinusoidal modulations and so, by investigating the response of the cortex to sinusoidal modulation, an experimenter can probe the same mechanisms used to encode speech. The experiment described in this paper used magnetoencephalography to measure the auditory steady-state response produced by six sounds, all modulated in amplitude at the same frequency but which formed a continuum from sinusoidal to pulsatile modulation. Analysis of the evoked response shows that the magnitude of the envelope-following response is highly non-linear, with sinusoidal amplitude modulation producing the weakest steady-state response. Conversely, the phase of the steady-state response was related to the shape of the modulation waveform, with the sinusoidal amplitude modulation producing the shortest latency relative to the other stimuli. It is shown that a point in auditory cortex produces a strong envelope following response to all stimuli on the continuum, but the timing of this response is related to the shape of the modulation waveform. The results suggest that steady-state response characteristics are determined by features of the waveform outside of the modulation domain and that the use of purely sinusoidal amplitude modulations may be misleading, especially in the context of speech encoding.  相似文献   
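The core analysis here is extracting the magnitude and phase of the response at the modulation frequency. A minimal sketch: a raised-power envelope stands in for the paper's sinusoidal-to-pulsatile continuum (that parameterization is my assumption, not the paper's exact stimuli), and a Fourier component at the modulation frequency plays the role of the steady-state measurement:

```python
import numpy as np

def am_envelope(fm, dur, fs, sharpness=1.0):
    """Modulation envelope: sharpness=1 is sinusoidal AM; larger values
    narrow the peaks toward pulsatile modulation (illustrative choice)."""
    t = np.arange(int(dur * fs)) / fs
    return ((1 + np.sin(2 * np.pi * fm * t)) / 2) ** sharpness

def component_at(x, fs, f):
    """Magnitude and phase of the Fourier component of x nearest frequency f."""
    spec = np.fft.rfft(x) / len(x)
    k = np.argmin(np.abs(np.fft.rfftfreq(len(x), 1 / fs) - f))
    return np.abs(spec[k]), np.angle(spec[k])

fs, fm = 1000, 4.0
sin_env = am_envelope(fm, 1.0, fs, sharpness=1.0)    # sinusoidal end
pulse_env = am_envelope(fm, 1.0, fs, sharpness=8.0)  # pulsatile end
mag_sin, _ = component_at(sin_env, fs, fm)
mag_pulse, _ = component_at(pulse_env, fs, fm)
```

Note that for the stimulus envelopes themselves the fundamental component shrinks as the waveform becomes pulsatile, so a stronger cortical response to pulsatile modulation cannot be explained by stimulus energy at the modulation frequency.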

4.
The rat primary auditory cortex was explored for neuronal responses to pure tones and sinusoidally amplitude-modulated (SAM) and frequency-modulated (SFM) stimuli. Units showed phase-locked responses to SAM stimulation (55%) and SFM stimulation (80%), with modulation frequencies up to 18 Hz. Tuning characteristics to the modulation frequency were mainly band-pass with best modulation frequencies (BMFs) between 4 and 15 Hz. Units with synchronized activity to SFM stimulation showed three response types with respect to the direction of the frequency modulation: 52% were selective to the upward direction, 30% to the downward direction, and 18% had no preference. Triangular frequency modulations were used to test if units were tuned to specific modulation frequencies or to specific rates of frequency change. In the vast majority of units tested the response characteristics were strongly influenced by varying the modulation frequency, whereas varying the rate of frequency change had little effect in the stimulus range used. Units that showed phase-locked responses to SAM and SFM stimulation had similar activity patterns in response to both types of stimuli. BMFs for SAM and SFM stimulation were significantly correlated. Intrinsic oscillations of up to 20 Hz could be seen in the spontaneous activity and after the stimuli independent of the stimulus type. Oscillation frequencies were significantly correlated with the BMFs of the respective units. The results are discussed in terms of a mechanism for periodicity detection based on a temporal code. This could be important for the recognition of complex acoustic signals.
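The two stimulus classes used here have standard textbook definitions: SAM multiplies a carrier by a sinusoidal envelope, and SFM integrates a sinusoidally varying instantaneous frequency into the carrier phase. A minimal sketch (parameter values are arbitrary, not the study's):

```python
import numpy as np

def sam(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone:
    carrier fc (Hz), modulation frequency fm (Hz), modulation depth 0..1."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def sfm(fc, fm, df, dur, fs):
    """Sinusoidally frequency-modulated (SFM) tone: instantaneous frequency
    swings +/- df (Hz) around fc at rate fm; phase is the running integral
    of the instantaneous frequency."""
    t = np.arange(int(dur * fs)) / fs
    inst_freq = fc + df * np.sin(2 * np.pi * fm * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

fs = 44100
x = sam(1000.0, 10.0, 0.8, 0.5, fs)   # 1 kHz carrier, 10 Hz AM, 80% depth
y = sfm(1000.0, 10.0, 200.0, 0.5, fs)  # 1 kHz carrier, 10 Hz FM, +/-200 Hz
```

A triangular-FM variant, as used in the study to dissociate modulation frequency from rate of frequency change, only requires replacing the sine in `inst_freq` with a triangle wave.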

5.
Detection of statistical irregularities, measured as a prediction error response, is fundamental to the perceptual monitoring of the environment. We studied whether prediction error response is associated with neural oscillations or asynchronous broadband activity. Electrocorticography was conducted in three male monkeys, who passively listened to the auditory roving oddball stimuli. Local field potentials (LFPs) recorded over the auditory cortex underwent spectral principal component analysis, which decoupled broadband and rhythmic components of the LFP signal. We found that the broadband component captured the prediction error response, whereas none of the rhythmic components were associated with statistical irregularities of sounds. The broadband component displayed more stochastic, asymmetrical multifractal properties than the rhythmic components, which revealed more self-similar dynamics. We thus conclude that the prediction error response is captured by neuronal populations generating asynchronous broadband activity, defined by irregular dynamic states, which, unlike oscillatory rhythms, appear to enable the neural representation of auditory prediction error response.

SIGNIFICANCE STATEMENT: This study aimed to examine the contribution of oscillatory and asynchronous components of auditory local field potentials in the generation of prediction error responses to sensory irregularities, as this has not been directly addressed in previous studies. Here, we show that mismatch negativity—an auditory prediction error response—is driven by the asynchronous broadband component of potentials recorded in the auditory cortex. This finding highlights the importance of nonoscillatory neural processes in the predictive monitoring of the environment. At a more general level, the study demonstrates that stochastic neural processes, which are often disregarded as neural noise, do have a functional role in the processing of sensory information.

6.
Natural sounds are characterized by complex patterns of sound intensity distributed across both frequency (spectral modulation) and time (temporal modulation). Perception of these patterns has been proposed to depend on a bank of modulation filters, each tuned to a unique combination of a spectral and a temporal modulation frequency. There is considerable physiological evidence for such combined spectrotemporal tuning. However, direct behavioral evidence is lacking. Here we examined the processing of spectrotemporal modulation behaviorally using a perceptual-learning paradigm. We trained human listeners for ~1 h/d for 7 d to discriminate the depth of spectral (0.5 cyc/oct; 0 Hz), temporal (0 cyc/oct; 32 Hz), or upward spectrotemporal (0.5 cyc/oct; 32 Hz) modulation. Each trained group learned more on their respective trained condition than did controls who received no training. Critically, this depth-discrimination learning did not generalize to the trained stimuli of the other groups or to downward spectrotemporal (0.5 cyc/oct; -32 Hz) modulation. Learning on discrimination also led to worsening on modulation detection, but only when the same spectrotemporal modulation was used for both tasks. Thus, these influences of training were specific to the trained combination of spectral and temporal modulation frequencies, even when the trained and untrained stimuli had one modulation frequency in common. This specificity indicates that training modified circuitry that had combined spectrotemporal tuning, and therefore that circuits with such tuning can influence perception. These results are consistent with the possibility that the auditory system analyzes sounds through filters tuned to combined spectrotemporal modulation.
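Spectrotemporal modulations of the kind described (e.g. 0.5 cyc/oct spectral, 32 Hz temporal) are typically realized as "moving ripple" stimuli: a sum of log-spaced carriers whose amplitudes follow a sinusoid in the joint spectral-temporal modulation space. A sketch under that assumption (carrier count, frequency range, and the drift-sign convention are my choices, not the study's exact parameters):

```python
import numpy as np

def ripple(dur, fs, omega=0.5, w=32.0, depth=0.9, f0=250.0, n_tones=64, octaves=4):
    """Moving spectrotemporal ripple.

    omega: spectral modulation (cyc/oct); w: temporal modulation (Hz);
    depth: modulation depth 0..1. Carriers are log-spaced over `octaves`
    octaves above f0, each with a random starting phase.
    """
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    rng = np.random.default_rng(0)
    for i in range(n_tones):
        oct_pos = i * octaves / n_tones            # carrier position in octaves above f0
        fc = f0 * 2 ** oct_pos
        # Sinusoidal envelope over the joint (time, log-frequency) plane
        amp = 1 + depth * np.sin(2 * np.pi * (w * t + omega * oct_pos))
        x += amp * np.sin(2 * np.pi * fc * t + rng.uniform(0, 2 * np.pi))
    return x / n_tones

x = ripple(0.2, 16000)  # 200 ms ripple, carriers 250-4000 Hz
```

Setting `w=0` yields a purely spectral modulation and `omega=0` a purely temporal one, matching the three training conditions; flipping the sign of `w` (or `omega`) reverses the ripple's drift direction for the downward control.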

7.
Sensory systems use adaptive strategies to code for the changing environment on different time scales. Short-term adaptation (up to 100 ms) reflects mostly synaptic suppression mechanisms after response to a stimulus. Long-term adaptation (up to a few seconds) is reflected in the habituation of neuronal responses to constant stimuli. Very long-term adaptation (several weeks) can lead to plastic changes in the cortex, most often facilitated during early development, by stimulus relevance or by behavioral states such as attention. In this study, we show that long-term adaptation with a time course of tens of minutes is detectable in anesthetized adult cat auditory cortex after a few minutes of listening to random-frequency tone pips. After the initial post-onset suppression, a slow recovery of the neuronal response strength to tones at or near their best frequency was observed for low-rate random sounds (four pips per octave per second) during stimulation. The firing rate at the end of stimulation (15 min) reached levels close to that observed during the initial onset response. The effect, visible for both spikes and, to a smaller extent, local field potentials, decreased with increasing spectro-temporal density of the sound. The spectro-temporal density of sound may therefore be of particular relevance in cortical processing. Our findings suggest that low stimulus rates may produce a specific acoustic environment that shapes the primary auditory cortex through very different processing than for spectro-temporally more dense and complex sounds.

8.
Neural representations of even temporally unstructured stimuli can show complex temporal dynamics. In many systems, neuronal population codes show 'progressive differentiation', whereby population responses to different stimuli grow further apart during a stimulus presentation. Here we analysed the response of auditory cortical populations in rats to extended tones. At onset (up to 300 ms), tone responses involved strong excitation of a large number of neurons; during sustained responses (after 500 ms) overall firing rate decreased, but most cells still showed statistically significant rate modulation. Population vector trajectories evoked by different tone frequencies expanded rapidly along an initially similar trajectory in the first tens of milliseconds after tone onset, later diverging to smaller amplitude fixed points corresponding to sustained responses. The angular difference between onset and sustained responses to the same tone was greater than between different tones in the same stimulus epoch. No clear orthogonalization of responses was found with time, and predictability of the stimulus from population activity also decreased during this period compared with onset. The question of whether population activity grew more or less sparse with time depended on the precise mathematical sense given to this term. We conclude that auditory cortical population responses to tones differ from those reported in many other systems, with progressive differentiation not seen for sustained stimuli. Sustained acoustic stimuli are typically not behaviorally salient: we hypothesize that the dynamics we observe may instead allow an animal to maintain a representation of such sounds, at low energetic cost.

9.
Temporal information in acoustic signals is important for the perception of environmental sounds, including speech. This review focuses on several aspects of temporal processing within human auditory cortex and its relevance for the processing of speech sounds. Periodic non-speech sounds, such as trains of acoustic clicks and bursts of amplitude-modulated noise or tones, can elicit different percepts depending on the pulse repetition rate or modulation frequency. Such sounds provide convenient methodological tools to study representation of timing information in the auditory system. At low repetition rates of up to 8-10 Hz, each individual stimulus (a single click or a sinusoidal amplitude modulation cycle) within the sequence is perceived as a separate event. As repetition rates increase up to and above approximately 40 Hz, these events blend together, giving rise first to the percept of flutter and then to pitch. The extent to which neural responses of human auditory cortex encode temporal features of acoustic stimuli is discussed within the context of these perceptual classes of periodic stimuli and their relationship to speech sounds. Evidence for neural coding of temporal information at the level of the core auditory cortex in humans suggests possible physiological counterparts to perceptual categorical boundaries for periodic acoustic stimuli. Temporal coding is less evident in auditory cortical fields beyond the core. Finally, data suggest hemispheric asymmetry in temporal cortical processing.

10.
Changes in modulation rate are important cues for parsing acoustic signals, such as speech. We parametrically controlled modulation rate via the correlation coefficient (r) of amplitude spectra across fixed frequency channels between adjacent time frames: broadband modulation spectra are biased toward slow modulation rates with increasing r, and vice versa. By concatenating segments with different r, acoustic changes of various directions (e.g., changes from low to high correlation coefficients, that is, random‐to‐correlated or vice versa) and sizes (e.g., changes from low to high or from medium to high correlation coefficients) can be obtained. Participants listened to sound blocks and detected changes in correlation while MEG was recorded. Evoked responses to changes in correlation demonstrated (a) an asymmetric representation of change direction: random‐to‐correlated changes produced a prominent evoked field around 180 ms, while correlated‐to‐random changes evoked an earlier response with peaks at around 70 and 120 ms, whose topographies resemble those of the canonical P50m and N100m responses, respectively, and (b) a highly non‐linear representation of correlation structure, whereby even small changes involving segments with a high correlation coefficient were much more salient than relatively large changes that did not involve segments with high correlation coefficients. Induced responses revealed phase tracking in the delta and theta frequency bands for the high correlation stimuli. The results confirm a high sensitivity for low modulation rates in human auditory cortex, both in terms of their representation and their segregation from other modulation rates.

11.
Short-latency auditory responses were obtained by cross-correlation of continuous, pseudorandom noise stimuli with averaged scalp potentials from adults with normal hearing. Responses were recorded for spectrum levels of 14-74 dB for noise bandwidths from 800 to 6000 Hz. At the lowest intensity level of broadband noise, all 10 subjects showed replicable cross-correlation functions (CCFs), which were characterized by prominent positive peaks at delays (latencies) of 5-7 msec. Male subjects exhibited longer delays than females. Delay (latency) increased with decreasing stimulus intensity. Very early responses (less than 2 msec) attributable to cochlear microphonic, which were prominent in earlier work on guinea pigs, were not well seen in these human data. CCFs for responses to band-limited stimuli and off-line derivation of band-limited CCFs for responses evoked by broadband stimuli both showed that this technique is most sensitive to frequency-following behavior at low frequencies (less than 800 Hz). However, definite phase-locked responses to even the highest passband (3100-6200 Hz) were seen. These results support the use of the CCF technique as an efficient method of frequency-specific assessment of the auditory system.
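The CCF technique reduces to cross-correlating the known noise stimulus with the recorded potential and reading the response latency off the peak delay. A minimal sketch with a simulated linear system (the 6 ms delay and noise level are invented for the demo):

```python
import numpy as np

def ccf_latency(stimulus, response, fs, max_delay_ms=20.0):
    """Cross-correlate a noise stimulus with the recorded response;
    return (delay in ms of the largest peak, the CCF itself)."""
    n = int(max_delay_ms / 1000 * fs)
    ccf = np.array([np.dot(stimulus[: len(stimulus) - d], response[d:])
                    for d in range(n)])
    return np.argmax(ccf) / fs * 1000, ccf

# Simulated recording: the "response" is the stimulus delayed by 6 ms plus noise
fs = 10000
rng = np.random.default_rng(2)
stim = rng.standard_normal(fs)                       # 1 s of pseudorandom noise
delay = int(0.006 * fs)
resp = np.concatenate([np.zeros(delay), stim[:-delay]]) + 0.5 * rng.standard_normal(fs)
lat_ms, ccf = ccf_latency(stim, resp, fs)
```

Band-limited CCFs, as derived off-line in the study, amount to band-pass filtering the stimulus (or the CCF) before the same correlation step.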

12.
Neuromagnetic studies in humans and single-unit studies in monkeys have provided conflicting views regarding the role of primary auditory cortex (A1) in pitch encoding. While the former support a topographic organization based on the pitch of complex tones, single-unit studies support the classical tonotopic organization of A1 defined by the spectral composition of the stimulus. It is unclear whether the incongruity of these findings is due to limitations of noninvasive recordings or whether the discrepancy genuinely reflects pitch representation based on population encoding. To bridge these experimental approaches, we examined neuronal ensemble responses in A1 of the awake monkey using auditory evoked potential (AEP), multiple-unit activity (MUA) and current source density (CSD) techniques. Macaque monkeys can perceive the missing fundamental of harmonic complex tones and therefore serve as suitable animal models for studying neural encoding of pitch. Pure tones and harmonic complex tones missing the fundamental frequency (f0) were presented at 60 dB SPL to the ear contralateral to the hemisphere from which recordings were obtained. Laminar response profiles in A1 reflected the spectral content rather than the pitch (missing f0) of the compound stimuli. These findings are consistent with single-unit data and indicate that the cochleotopic organization is preserved at the level of A1. Thus, it appears that pitch encoding of multi-component sounds is more complex than suggested by noninvasive studies, which are based on the assumption of a single dipole generator within the superior temporal gyrus. These results support a pattern recognition mechanism of pitch encoding based on a topographic representation of stimulus spectral composition at the level of A1.
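The missing-fundamental stimulus at the heart of this design is simple to construct: a sum of harmonics of f0 with the fundamental itself absent, so the pitch percept (at f0) and the spectral content (at the harmonics) dissociate. A minimal sketch (harmonic numbers and levels are arbitrary, not the study's):

```python
import numpy as np

def missing_f0_complex(f0, harmonics, dur, fs):
    """Harmonic complex tone with the fundamental removed: equal-amplitude
    components at n*f0 for the listed harmonic numbers, none at f0 itself."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics) / len(harmonics)

fs = 16000
x = missing_f0_complex(200.0, range(3, 9), 0.5, fs)  # harmonics 3-8 of 200 Hz

# Spectrum check: energy at the harmonics, none at the (perceived) f0
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
energy_at_f0 = spec[np.argmin(np.abs(freqs - 200.0))]
energy_at_h3 = spec[np.argmin(np.abs(freqs - 600.0))]
```

A tonotopic (cochleotopic) map would respond at 600 Hz and above for this stimulus, whereas a pitch map would respond at 200 Hz, which is the contrast the laminar profiles adjudicate.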

13.
The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of direction and speed of motion in visual stimuli. Here, we characterized how neuronal populations in hMT+ encode the speed of moving visual stimuli. We evaluated human intracranial electrocorticography (ECoG) responses elicited by square‐wave dartboard moving stimuli with different spatial and temporal frequency to investigate whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. We extracted two components from the ECoG responses: (1) the power in the high‐frequency band (HFB: 65–95 Hz) as a measure of the neuronal population spiking activity and (2) a specific spectral component that followed the frequency of the stimulus's contrast reversals (SCR responses). Our results revealed that HFB neuronal population responses to visual motion stimuli exhibit distinct and independent selectivity for spatial and temporal frequencies of the visual stimuli rather than direct speed tuning. The SCR responses did not encode the speed or the spatiotemporal frequency of the visual stimuli. We conclude that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Hum Brain Mapp 38:293–307, 2017. © 2016 Wiley Periodicals, Inc.

14.
We characterised task‐related top‐down signals in monkey auditory cortex cells by comparing single‐unit activity during passive sound exposure with neuronal activity during a predictable and unpredictable reaction‐time task for a variety of spectral‐temporally modulated broadband sounds. Although animals were not trained to attend to particular spectral or temporal sound modulations, their reaction times demonstrated clear acoustic spectral‐temporal sensitivity for unpredictable modulation onsets. Interestingly, this sensitivity was absent for predictable trials with fast manual responses, but re‐emerged for the slower reactions in these trials. Our analysis of neural activity patterns revealed a task‐related dynamic modulation of auditory cortex neurons that was locked to the animal's reaction time, but invariant to the spectral and temporal acoustic modulations. This finding suggests dissociation between acoustic and behavioral signals at the single‐unit level. We further demonstrated that single‐unit activity during task execution can be described by a multiplicative gain modulation of acoustic‐evoked activity and a task‐related top‐down signal, rather than by linear summation of these signals.

15.
Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60–140 ms post‐stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences as well as measuring the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses.

16.
Onsets are salient and important transient (i.e. dynamic) features of acoustic signals, and evoke vigorous responses from most auditory neurons, but paradoxically these onset responses have most often been analysed with respect to steady-state stimulus features, e.g. the sound pressure level (SPL). In nearly all studies concerned with the coding of differences in SPL at the two ears (interaural level differences; ILDs), which provide a major cue for the azimuthal location of high frequency sound sources, interaural onset disparities were covaried with ILD, but the possibly confounding effects of this covariation on neuronal responses have been entirely neglected. Therefore, dichotic stimulus paradigms were designed here in which onset and steady-state features were varied independently. Responses were recorded from single neurons in the inferior colliculus of rats, anaesthetized with pentobarbital and xylazine. It is demonstrated that onset responses, or the onset response components of neurons with more complex temporal response patterns, are dependent on the binaural combination of dynamic envelope features associated with conventional ILD stimulus paradigms, but not on the binaural combination of steady-state SPLs reached after the onset. In contrast, late or sustained response components appear more sensitive to the binaural combination of steady-state SPLs. These data stress the general necessity for a separate analysis of onset and late response components, with respect to different stimulus features, and suggest a need for re-evaluation of existing studies on ILD coding. The sensitivity of onset responses to the binaural combination of envelope transients, rather than to steady-state ILD, is in line with their sensitivity to other interaural envelope disparities, created by stationary or moving sounds.

17.
Acetylcholine (ACh), acting via muscarinic receptors, is known to modulate neuronal responsiveness in primary sensory neocortex. The administration of ACh to cortical neurons facilitates or suppresses responses to sensory stimuli, and these effects can endure well beyond the period of ACh application. In the present study, we sought to determine whether ACh produces a general change in sensory information processing, or whether it can specifically alter the processing of sensory stimuli with which it was "paired". To answer this question, we restricted acoustic stimulation in the presence of ACh to a single frequency, and determined single neuron frequency receptive fields in primary auditory cortex before and after this pairing. During its administration, ACh produced mostly facilitatory effects on spontaneous activity and on responses to the single frequency tone. Examination of frequency receptive fields after ACh administration revealed receptive field modifications in 56% of the cells. In half of these cases, the receptive field alterations were highly specific to the frequency of the tone previously paired with ACh. Thus ACh can produce stimulus-specific modulation of auditory information processing. An additional and unexpected finding was that the type of modulation during ACh administration did not predict the type of receptive field modification observed after ACh administration; this may be related to the physiological "context" of the same stimulus in two different conditions. The implications of these findings for learning-induced plasticity in the auditory cortex are discussed.

18.
Communication sounds across all mammals consist of multiple frequencies repeated in sequence. The onset and offset of vocalizations are potentially important cues for recognizing distinct units, such as phonemes and syllables, which are needed to perceive meaningful communication. The superior paraolivary nucleus (SPON) in the auditory brainstem has been implicated in the processing of rhythmic sounds. Here, we compared how best frequency tones (BFTs), broadband noise (BBN), and natural mouse calls elicit onset and offset spiking in the mouse SPON. The results demonstrate that onset spiking typically occurs in response to BBN, but not BFT stimulation, while spiking at the sound offset occurs for both stimulus types. This effect of stimulus bandwidth on spiking is consistent with two of the established inputs to the SPON from the octopus cells (onset spiking) and medial nucleus of the trapezoid body (offset spiking). Natural mouse calls elicit two main spiking peaks. The first spiking peak, which is weak or absent with BFT stimulation, occurs most consistently during the call envelope, while the second spiking peak occurs at the call offset. This suggests that the combined spiking activity in the SPON elicited by vocalizations reflects the entire envelope, that is, the coarse amplitude waveform. Since the output from the SPON is purely inhibitory, it is speculated that, at the level of the inferior colliculus, the broadly tuned first peak may improve the signal‐to‐noise ratio of the subsequent, more call frequency‐specific peak. Thus, the SPON may provide a dual inhibition mechanism for tracking phonetic boundaries in social‐vocal communication.

19.
Echolocating bats can recognize 3-D objects exclusively through the analysis of the reflections of their ultrasonic emissions. For objects of small size, the spectral interference pattern of the acoustic echoes encodes information about the structure of an object. For some naturally occurring objects, such as flowers, the interference pattern as well as the echo amplitude can regularly change with the object's size, and bats should be able to compensate for both of these changes for reliable, size-invariant object recognition. In this study, electrophysiological responses of units in the auditory cortex of the bat Phyllostomus discolor were investigated using extracellular recording techniques. Acoustical stimuli consisted of echoes of virtual two-front objects that varied in size. Thus, the echoes changed systematically in amplitude and spectral envelope pattern. Whereas 30% of units simply encoded echo loudness, a considerable number of units (20%) encoded a specific spectral envelope shape independent of stimulus amplitude. In addition, a small number of cortical units (3%) were found that showed response-invariance for a covariation of echo amplitude and echo spectral envelope. The response of these two classes of units could not be simply predicted from the excitatory frequency response areas. The results show that units in the bat auditory cortex exist that might serve for the recognition of characteristic object-specific spectral echo patterns created by, e.g., flowers or other objects, independent of object size or echo amplitude.

20.
This article presents a characterization of cortical responses to artificial and natural temporally patterned sounds in the bat species Carollia perspicillata, a species that produces vocalizations at rates above 50 Hz. Multi‐unit activity was recorded in three different experiments. In the first experiment, amplitude‐modulated (AM) pure tones were used as stimuli to drive auditory cortex (AC) units. AC units of both ketamine‐anesthetized and awake bats could lock their spikes to every cycle of the stimulus modulation envelope, but only if the modulation frequency was below 22 Hz. In the second experiment, two identical communication syllables were presented at variable intervals. Suppressed responses to the lagging syllable were observed, unless the second syllable followed the first one with a delay of at least 80 ms (i.e., 12.5 Hz repetition rate). In the third experiment, natural distress vocalization sequences were used as stimuli to drive AC units. Distress sequences produced by C. perspicillata contain bouts of syllables repeated at intervals of ~60 ms (16 Hz). Within each bout, syllables are repeated at intervals as short as 14 ms (~71 Hz). Cortical units could follow the slow temporal modulation flow produced by the occurrence of multisyllabic bouts, but not the fast acoustic flow created by rapid syllable repetition within the bouts. Taken together, our results indicate that even in fast vocalizing animals, such as bats, cortical neurons can only track the temporal structure of acoustic streams modulated at frequencies lower than 22 Hz.
