Similar Articles
A total of 20 similar articles were found.
1.
Auditory motion can be simulated by presenting binaural sounds with time-varying interaural intensity differences. We studied the human cortical response to both the direction and the rate of illusory motion by recording the auditory evoked magnetic fields with a 122-channel whole-head neuromagnetometer. The illusion of motion from left to right, right to left, and towards and away from the subject was produced by varying a 6-dB intensity difference between the two ears in the middle of a 600-ms tone. Both the onset and the intensity transition within the stimulus elicited clear responses in auditory cortices of both hemispheres, with the strongest responses occurring about 100 ms after the stimulus and transition onsets. The transition responses were significantly earlier and larger for fast than slow shifts and larger in the hemisphere contralateral to the increase in stimulus intensity for azimuthal shifts. Transition response amplitude varied with the direction of the simulated motion, suggesting that these responses are mediated by directionally selective cells in auditory cortex.
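For illustration only (not the authors' stimulus code), the sketch below shows one way such a stimulus could be synthesized: a 600-ms binaural tone whose interaural intensity difference swings by 6 dB around the stimulus midpoint, with the transition duration controlling the simulated motion rate. The sample rate, carrier frequency, and linear ramp shape are assumptions.

```python
# Hypothetical stimulus sketch, not taken from the study above: a binaural tone
# whose interaural intensity difference (IID) shifts by 6 dB at the midpoint of
# a 600-ms tone, simulating motion from left to right. Sample rate, carrier
# frequency, and the linear ramp shape are assumptions.
import numpy as np

def iid_motion_tone(fs=44100, dur=0.6, freq=1000.0, iid_db=6.0, shift_dur=0.05):
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * freq * t)

    # Ramp from 0 to 1 over `shift_dur` seconds, centred on the stimulus midpoint;
    # a short shift_dur gives a "fast" shift, a long one a "slow" shift.
    ramp = np.clip((t - (dur / 2 - shift_dur / 2)) / shift_dur, 0.0, 1.0)

    half = iid_db / 2
    left = 10 ** ((+half - iid_db * ramp) / 20) * tone    # +3 dB -> -3 dB
    right = 10 ** ((-half + iid_db * ramp) / 20) * tone   # -3 dB -> +3 dB
    return np.stack([left, right])                        # shape (2, n_samples)

fast = iid_motion_tone(shift_dur=0.01)   # rapid left-to-right shift
slow = iid_motion_tone(shift_dur=0.20)   # slower shift, same overall IID change
```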

2.
Role of cat primary auditory cortex for sound-localization behavior
Small lesions designed to completely destroy the cortical zone of representation of a restricted band of frequency were introduced within the primary auditory cortex (AI) in adult cats. Physiological mapping was used to guide placement of lesions. Sound-localization performance was evaluated prior to and after induction of these lesions in a seven-choice free-sound-field apparatus. All tested cats had profound contralateral hemifield deficits for the localization of brief tones at frequencies roughly corresponding to those whose representations were destroyed by the lesion. Sound-localization performance was normal at all other test frequencies. In a single adult cat, a massive lesion destroyed nearly all auditory cortex unilaterally, with only the representation of a narrow band of frequency within AI spared by the lesion. This cat had normal abilities for azimuthal sound localization across that frequency band but a profound contralateral deficit for the azimuthal localization of brief sounds at all other frequencies. Recorded sound-localization deficits were permanent. Localization of long-duration tones was not affected by a unilateral AI lesion. These studies indicate that, at least in cats, AI is necessary for normal binaural sound-localization behavior; among auditory cortical fields, AI is sufficient for normal binaural sound-localization behavior; sound-location representation is organized by frequency channel in the auditory forebrain; and AI in each hemisphere contributes to only contralateral free-sound-field location representation.

3.
Bilateral cochlear implantation aims to restore binaural hearing, important for spatial hearing, to children who are deaf. Improvements over unilateral implant use are attributed largely to the detection of interaural level differences (ILDs), but emerging evidence of impaired sound localization and binaural fusion suggests that these binaural cues are abnormally coded by the auditory system. We used multichannel electroencephalography (EEG) to assess cortical responses to ILDs in two groups: 13 children who received early bilateral cochlear implants (CIs) simultaneously, known to protect the developing auditory cortices from unilaterally driven reorganization, and 15 age-matched peers with normal hearing. EEG source analyses indicated a dominance of right auditory cortex in both groups. Expected reductions in activity to ipsilaterally weighted ILDs were evident in the right hemisphere of children with normal hearing. By contrast, cortical activity in children with CIs showed: (1) limited ILD sensitivity in either cortical hemisphere, (2) limited correlation with reliable behavioral right-left lateralization of ILDs (in 10/12 CI users), and (3) deficits in parieto-occipital areas and the cerebellum. Thus, expected cortical ILD coding develops with normal hearing but is affected by developmental deafness despite early and simultaneous bilateral implantation. Findings suggest that impoverished fidelity of ILDs in independently functioning CIs may be impeding development of cortical ILD sensitivity in children who are deaf but does not altogether limit the benefits of listening with bilateral CIs. Future efforts to provide consistent/accurate ILDs through auditory prostheses including CIs could improve binaural hearing for children with hearing loss.

4.
It is well known that, following an early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. In particular, several studies have demonstrated an extensive activation of occipital brain areas, usually regarded as essentially “visual”, when early blind subjects (EB) performed a task that requires spatial processing of sounds. However, little is known about the possible consequences of the activation of occipital areas on the function of the large cortical network known, in sighted subjects, to be involved in the processing of auditory spatial information. To address this issue, we used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right intra-parietal sulcus (rIPS) or the right dorsal extrastriate occipital cortex (rOC) at different delays in EB subjects performing a sound lateralization task. Surprisingly, TMS applied over rIPS, a region critically involved in the spatial processing of sound in sighted subjects, had no influence on task performance in EB. In contrast, TMS applied over rOC 50 ms after sound onset disrupted the spatial processing of sounds originating from the contralateral hemifield. The present study sheds new light on the reorganization of the cortical network dedicated to the spatial processing of sounds in EB by showing an early contribution of rOC and a lesser involvement of rIPS.

5.
This report maps the organization of the primary auditory cortex of the pallid bat in terms of frequency tuning, selectivity for behaviorally relevant sounds, and interaural intensity difference (IID) sensitivity. The pallid bat is unusual in that it localizes terrestrial prey by passively listening to prey-generated noise transients (1-20 kHz), while reserving high-frequency (>30 kHz) echolocation for obstacle avoidance. The functional organization of its auditory cortex reflects the need for specializations in echolocation and passive sound localization. Best frequencies were arranged tonotopically with a general increase in the caudolateral to rostromedial direction. Frequencies between 24 and 32 kHz were under-represented, resulting in hypertrophy of the representation of frequencies relevant for prey localization and echolocation. Most neurons (83%) tuned <30 kHz responded preferentially to broadband or band-pass noise over single tones. Most neurons (62%) tuned >30 kHz responded selectively or exclusively to the 60- to 30-kHz downward frequency-modulated (FM) sweep used for echolocation. Within the low-frequency region, neurons fell into two groups that occurred in two separate clusters: those selective for low- or high-frequency band-pass noise and suppressed by broadband noise, and neurons that showed no preference for band-pass noise over broadband noise. Neurons were organized in homogeneous clusters with respect to their binaural response properties. The distribution of binaural properties differed in the noise- and FM sweep-preferring regions, suggesting task-dependent differences in binaural processing. The low-frequency region was dominated by a large cluster of binaurally inhibited neurons with a smaller cluster of neurons with mixed binaural interactions. The FM sweep-selective region was dominated by neurons with mixed binaural interactions or monaural neurons. Finally, this report describes a cortical substrate for systematic representation of a spatial cue, IIDs, in the low-frequency region. This substrate may underlie a population code for sound localization based on a systematic shift in the distribution of activity across the cortex with sound source location.

6.
Auditory motion can be simulated by presenting binaural sounds with time-varying interaural time delays. Human cortical responses to the rate of auditory motion were studied by recording auditory evoked magnetic fields with a 122-channel whole-head magnetometer. Auditory motion from the center to the right and back to the center was produced by varying interaural time differences between the ears. The results showed that the N1m latencies and amplitudes were not affected by the fluctuation of the interaural time delay; however, the peak amplitude of the P2m significantly increased as a function of the fluctuation of the interaural time delay.
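As a complement to the level-based example above, the following sketch (an assumption-laden illustration, not the authors' stimulus code) imposes an interaural time difference on a noise burst by delaying one channel, and steps that delay across successive bursts to approximate motion from the midline toward one side and back. The sample rate, burst duration, and maximum ITD of 600 microseconds are assumed values.

```python
# Illustrative only: impose an interaural time difference (ITD) by delaying the
# left channel, then step the ITD across bursts to approximate centre -> right
# -> centre motion. All parameter values are assumptions.
import numpy as np

FS = 44100  # Hz, assumed

def apply_itd(mono, itd_s, fs=FS):
    """Return a (2, n) stereo array; positive itd_s delays the left ear."""
    shift = int(round(itd_s * fs))
    left = np.concatenate([np.zeros(shift), mono])[:len(mono)]
    return np.stack([left, mono])

burst = np.random.randn(int(0.05 * FS))            # 50-ms noise burst
itds = np.concatenate([np.linspace(0, 600e-6, 6),  # centre -> right
                       np.linspace(600e-6, 0, 6)]) # right -> centre
stereo = np.concatenate([apply_itd(burst, itd) for itd in itds], axis=1)
```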

7.
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

8.
Where is 'where' in the human auditory cortex?
We examine the functional characteristics of auditory cortical areas that are sensitive to spatial cues in the human brain, and determine whether they can be dissociated from parietal lobe mechanisms. Three positron emission tomography (PET) experiments were conducted using a speaker array permitting quasi free-field sound presentation within the scanner. Posterior auditory cortex responded to sounds that varied in their spatial distribution, but only when multiple complex stimuli were presented simultaneously, implicating this cortical system in disambiguation of overlapping auditory sources. We also found that the right inferior parietal cortex is specifically recruited in localization tasks, and that its activity predicts behavioral performance, consistent with its involvement in sensorimotor integration and spatial transformation. These findings clarify the functional roles of posterior auditory and parietal cortices, and help to reconcile competing models of auditory cortical organization.

9.
The aim of the present study was to evaluate how hemispherectomized subjects localize sounds in free field using residual auditory structures under monaural testing conditions. The main objective of using a monaural condition with these subjects, who lack the terminal fields of auditory projections on one side, was to evaluate how the crossed and uncrossed pathways compare, with the aim of resolving this biologically critical function. In this model, crossed and uncrossed inputs refer to auditory stimulation presented to the unobstructed ear on the contralateral and the ipsilateral side of the intact hemisphere, respectively. Three hemispherectomized subjects (Hs) and ten control subjects (Cs) were tested for their accuracy in localizing broadband noise bursts (BBNBs) of fixed intensity presented on the horizontal plane. BBNBs were delivered randomly through 16 loudspeakers mounted at 10-degree intervals on a calibrated perimeter frame located inside an anechoic chamber. Subjects had to report the apparent stimulus location by pointing to its perceived position on the perimeter. Hs were less accurate than Cs in the baseline binaural condition, confirming the finding that with a single hemisphere and/or residual (subcortical) structures they cannot analyze binaural cues to sound localization as efficiently as with two fully functional hemispheres. In the monaural condition, Hs localized poorly when they had to depend on the uncrossed input, but performed as well as or even better than the Cs with the crossed input. These findings suggest that monaural spectral cues, which constitute the only residual cue to localization under the monaural testing condition, are treated more efficiently, that is, they lead to better localization performance, when relayed to the cortex via crossed pathways than through uncrossed pathways.

10.
Functional imaging studies have shown that information relevant to sound recognition and sound localization is processed in anatomically distinct cortical networks. We have investigated the functional organization of these specialized networks by evaluating acute effects of circumscribed hemispheric lesions. Thirty patients with a primary unilateral hemispheric lesion, 15 with right-hemispheric damage (RHD) and 15 with left-hemispheric damage (LHD), were evaluated for their capacity to recognise environmental sounds, to localize sounds in space and to perceive sound motion. One patient with RHD and 2 with LHD had a selective deficit in sound recognition; 3 with RHD a selective deficit in sound localization; 2 with LHD a selective deficit in sound motion perception; 4 with RHD and 3 with LHD a combined deficit of sound localization and motion perception; 2 with RHD and 1 with LHD a combined deficit of sound recognition and motion perception; and 1 with LHD a combined deficit of sound recognition, localization and motion perception. Five patients with RHD and 6 with LHD had normal performance in all three domains. Deficient performance in sound recognition, sound localization and/or sound motion perception was always associated with a lesion that involved the shared auditory structures and the specialized What and/or Where networks, while normal performance was associated with lesions within or outside these territories. Thus, damage to regions known to be involved in auditory processing in normal subjects is necessary, but not sufficient, for a deficit to occur. Lesions of a specialized network were not always associated with the corresponding deficit. Conversely, specific deficits tended not to be associated predominantly with lesions of the corresponding network; e.g. deficits in auditory spatial tasks were observed in patients whose lesions involved to a larger extent the shared auditory structures and the specialized What network than the specialized Where network, and deficits in sound recognition in patients whose lesions involved mostly the shared auditory structures and to a varying degree the specialized What network. The human auditory cortex consists of functionally defined auditory areas, whose intrinsic organization is currently not understood. In particular, areas involved in the What and Where pathways can be conceived as: (1) specialized regions, in which lesions cause dysfunction limited to the damaged part; observed deficits should then be related to the specialization of the damaged region and their magnitude to the extent of the damage; or (2) specialized networks, in which lesions cause dysfunction that may spread over the two specialized networks; observed deficits may then not be related to the damaged region and their magnitude not proportional to the extent of the damage. Our results strongly support the network hypothesis.

11.
Attentional modulation of human auditory cortex
Attention powerfully influences auditory perception, but little is understood about the mechanisms whereby attention sharpens responses to attended sounds. We used high-resolution surface mapping techniques (using functional magnetic resonance imaging, fMRI) to examine activity in human auditory cortex during an intermodal selective attention task. Stimulus-dependent activations (SDAs), evoked by unattended sounds during demanding visual tasks, were maximal over mesial auditory cortex. They were tuned to sound frequency and location, and showed rapid adaptation to repeated sounds. Attention-related modulations (ARMs) were isolated as response enhancements that occurred when subjects performed pitch-discrimination tasks. In contrast to SDAs, ARMs were localized to lateral auditory cortex, showed broad frequency and location tuning, and increased in amplitude with sound repetition. The results suggest a functional dichotomy of auditory cortical fields: stimulus-determined mesial fields that faithfully transmit acoustic information, and attentionally labile lateral fields that analyze acoustic features of behaviorally relevant sounds.

12.
Ferrets were tested in a semicircular apparatus to determine the effects of auditory cortical lesions on their ability to localize sounds in space. They were trained to initiate trials while facing forward in the apparatus, and sounds were presented from one of two loudspeakers located in the horizontal plane. Minimum audible angles were obtained for three different positions, viz., the left hemifield, with loudspeakers centered around -60 degrees azimuth; the right hemifield, with loudspeakers centered around +60 degrees azimuth; and the midline with loudspeakers centered around 0 degrees azimuth. Animals with large bilateral lesions had severe impairments in localizing a single click in the midline test. Following complete destruction of the auditory cortex performance was only marginally above the level expected by chance even at large angles of speaker separation. Severe impairments were also found in localization of single clicks in both left and right lateral fields. In contrast, bilateral lesions restricted to the primary auditory cortex resulted in minimal impairments in midline localization. The same lesions, however, produced severe impairments in localization of single clicks in both left and right lateral fields. Large unilateral lesions that destroyed auditory cortex in one hemisphere resulted in an inability to localize single clicks in the contralateral hemifield. In contrast, no impairments were found in the midline test or in the ipsilateral hemifield. Unilateral lesions of the primary auditory cortex resulted in severe contralateral field deficits equivalent to those seen following complete unilateral destruction of auditory cortex. No deficits were seen in either the midline or the ipsilateral tests.

13.
Localization of sounds by the auditory system is based on the analysis of three sources of information: interaural level differences (ILD, caused by an attenuation of the sound as it travels to the more distant ear), interaural time differences (ITD, caused by the additional amount of time it takes for the sound to arrive at the more distant ear), and spectral cues (caused by direction-specific spectral filter properties of the pinnae). Although cortical processes of ITD and ILD analysis have been investigated in a number of psychophysiological studies, there is hitherto no evidence on the cortical processing of spectral cues for sound localization. The objective of the present experiment was to test whether it is possible to observe electrophysiological correlates of sound localization based on spectral cues. In an auditory oddball experiment, 80-ms bursts of broadband noise from varying free-field locations were presented to inattentive participants. Mismatch negativities (MMNs) were observed for pairs of standards and location deviants located symmetrically with respect to the interaural axis. As interaural time and level differences are identical for such pairs of sounds, the observed MMNs most likely reflect cognitive processes of sound localization utilizing the spectral filter properties of the pinnae. MMN latencies suggest that sound localization based on spectral cues is slower than ITD- or ILD-based localization.
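The logic of the symmetric stimulus pairs can be checked with a simple spherical-head (Woodworth) approximation, given below as a hedged illustration that is not part of the cited study: two source positions mirrored about the interaural axis yield the same ITD (and, to first order, the same ILD), so any mismatch response to such a pair must rest on spectral (pinna) cues. Head radius and speed of sound are assumed values.

```python
# Hedged illustration: Woodworth's spherical-head approximation of the ITD.
# Positions symmetric about the interaural axis (e.g., 30 deg front vs. 150 deg
# rear) share the same lateral angle and hence the same ITD, leaving only
# spectral cues to distinguish them. Head radius and sound speed are assumptions.
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SOUND_SPEED_MS = 343.0   # m/s at room temperature

def woodworth_itd(azimuth_deg):
    """ITD in seconds for a far-field source; azimuth measured from straight ahead."""
    az = math.radians(azimuth_deg)
    lateral = math.asin(abs(math.sin(az)))   # fold rear angles onto the front
    return HEAD_RADIUS_M / SOUND_SPEED_MS * (lateral + math.sin(lateral))

print(woodworth_itd(30.0), woodworth_itd(150.0))   # ~261 microseconds for both
```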

14.
To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than only sound localization. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and more intense at the ipsilateral ear (IID = -20, -30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably comparing monaural and binaural (IID = 0) stimulation. Moreover, in approximately 50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = -20) compared with equal stimulation at both ears (IID = 0). In approximately 10% of the neurons synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the cells recorded from revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering. In 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequencies in one third of the cells tested. Thus glycinergic and GABAergic inhibition is at least one source responsible for the observed interdependence of temporal structure of a sound and spatial cues.
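As an illustration of the 50% cutoff criterion mentioned above (a sketch under assumed data, not the authors' analysis code), the function below takes spike rates measured at a set of SFM modulation frequencies and interpolates the frequency at which the rate falls to half its maximum; real modulation transfer functions can be band-pass, so this handles only the upper cutoff.

```python
# Hedged sketch of a 50% rate-based MTF cutoff: find where the discharge rate
# drops to half of its maximum, interpolating linearly between tested modulation
# frequencies. The example rates are invented for illustration.
import numpy as np

def mtf_upper_cutoff(mod_freqs_hz, rates):
    rates = np.asarray(rates, dtype=float)
    half_max = rates.max() / 2.0
    last_above = np.flatnonzero(rates >= half_max).max()
    if last_above == len(rates) - 1:
        return float(mod_freqs_hz[-1])          # no cutoff within the tested range
    f0, f1 = mod_freqs_hz[last_above], mod_freqs_hz[last_above + 1]
    r0, r1 = rates[last_above], rates[last_above + 1]
    return float(f0 + (half_max - r0) * (f1 - f0) / (r1 - r0))

print(mtf_upper_cutoff([5, 10, 20, 40, 80], [10, 30, 28, 12, 3]))   # ~36 Hz
```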

15.
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the “cocktail-party” problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions.
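A minimal sketch of the left-minus-right difference waveform described above follows; array shapes, baseline correction, and channel layout are assumptions, and this is not the authors' analysis pipeline.

```python
# Hedged sketch: a per-channel difference waveform (left-target average minus
# right-target average), of the kind from which lateralized subcomponents such
# as the N2ac and LPCpc are read out. Epoch layout and units are assumptions.
import numpy as np

def left_minus_right_wave(epochs_left, epochs_right):
    """
    epochs_left, epochs_right: baseline-corrected arrays of shape
    (n_trials, n_channels, n_times). Returns (n_channels, n_times).
    """
    return epochs_left.mean(axis=0) - epochs_right.mean(axis=0)

# Example with random data standing in for recorded EEG epochs.
rng = np.random.default_rng(0)
diff = left_minus_right_wave(rng.standard_normal((120, 64, 500)),
                             rng.standard_normal((120, 64, 500)))
print(diff.shape)   # (64, 500)
```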

16.
Linguistic experience shapes auditory processing of speech sounds as indicated by the mismatch negativity (MMN). In the current study, magnetic mismatch fields (MMNm) in response to native and nonnative fricative-vowel syllables were recorded, using whole-head magnetoencephalography (MEG). Earlier difference waves were enhanced in the left hemisphere in native listeners for fricatives with rather subtle acoustic differences in frication and pronounced acoustic differences in the transition to the vowel. For nonnative subjects, difference waves were at first stronger in the right hemisphere for these contrasts. Left hemispheric MMNms only occurred later in nonnative participants. The results are discussed in relation to previous studies suggesting lateralization of speech sound processing.

17.
An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for determining sound direction, leaving the difference in response strength, i.e., spike count and/or rate, as the most likely candidate.

18.
Auditory magnetic event-related fields (ERFs) change qualitatively over the course of child development, reflecting maturation of auditory cortical areas. Clicks presented with long inter-stimulus intervals produce distinct ERF components and may be useful for characterizing immature ERF morphology in children. The present study aimed to investigate the morphology of auditory ERFs in school-age children, as well as the lateralization and repetition suppression of ERF components evoked by clicks. School-age children and adults passively listened to pairs of clicks presented to the right ear, left ear or binaurally, with 8–11 s intervals between the pairs and a 1 s interval within a pair. Adults demonstrated a typical P50m/N100m response. Unlike adults, children had two distinct components preceding the N100m: the P50m (at ~65 ms) and the P100m (at ~100 ms). The P100m dominated the child ERF, and was most prominent in response to binaural stimulation. The N100m in children was less developed than in adults and partly overlapped in time with the P100m, especially in response to monaural clicks. Strong repetition suppression was observed for the P50m in both children and adults, for the P100m in children, and for the N100m in adults. Both children and adults demonstrated right-hemispheric advantages in ERF amplitude and/or latency, which may reflect right-hemisphere dominance for preattentive arousal processes. Our results contribute to knowledge concerning the development of auditory processing and its lateralization in children and have implications for the investigation of auditory evoked fields in developmental disorders.

19.
The cortical network subserving language processing is likely to exhibit a high spatial and temporal complexity. Studies using brain imaging methods, like fMRI or PET, succeeded in identifying a number of brain structures that seem to contribute to the processing of syntactic structures, while their dynamic interaction remains unclear due to the low temporal resolution of the methods. On the other hand, ERP studies have revealed a great deal of the temporal dimension of language processing without being able to provide more than very coarse information on the localisation of the underlying generators. MEG has a temporal resolution similar to EEG combined with a better spatial resolution. In this paper, Brain Surface Current Density (BSCD) mapping in a standard brain model was used to identify statistically significant differences between the activity of certain brain regions due to syntactically correct and incorrect auditory language input. The results show that the activity in the first 600 ms after violation onset is mainly concentrated in the temporal cortex and the adjacent frontal and parietal areas of both hemispheres. The statistical analysis reveals significantly different activity mainly in both frontal and temporal cortices. For longer latencies above 250 ms, the differential activity is more prominent in the right hemisphere. These findings confirm other recent results that suggest right hemisphere involvement in auditory language processing. One interpretation might be that right hemisphere regions play an important role in repair and re-analysis processes in order to free the specialised left hemisphere language areas for processing further input.

20.
We recorded unit activity in the auditory cortex (fields A1, A2, and PAF) of anesthetized cats while presenting paired clicks with variable locations and interstimulus delays (ISDs). In human listeners, such sounds elicit the precedence effect, in which localization of the lagging sound is impaired at ISDs of approximately 10 ms or less. In the present study, neurons typically responded to the leading stimulus with a brief burst of spikes, followed by suppression lasting 100-200 ms. At an ISD of 20 ms, at which listeners report a distinct lagging sound, only 12% of units showed discrete lagging responses. Long-lasting suppression was found in all sampled cortical fields, for all leading and lagging locations, and at all sound levels. Recordings from awake cats confirmed this long-lasting suppression in the absence of anesthesia, although recovery from suppression was faster in the awake state. Despite the lack of discrete lagging responses at delays of 1-20 ms, the spike patterns of 40% of units varied systematically with ISD, suggesting that many neurons represent lagging sounds implicitly in their temporal firing patterns rather than explicitly in discrete responses. We estimated the amount of location-related information transmitted by spike patterns at delays of 1-16 ms under conditions in which we varied only the leading location or only the lagging location. Consistent with human psychophysical results, transmission of information about the leading location was high at all ISDs. Unlike listeners, however, transmission of information about the lagging location remained low, even at ISDs of 12-16 ms.
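The "transmitted information" quantity referred to above can be illustrated with a simple plug-in mutual-information estimate over a joint count table of stimulus locations and discretised response classes; this is a sketch only and does not reproduce the study's actual classifier-based analysis or bias corrections.

```python
# Hedged sketch: plug-in estimate of the mutual information (in bits) between
# sound-source location and a discretised spike-pattern response, given a table
# of joint trial counts. No bias correction; illustration only.
import numpy as np

def transmitted_information(joint_counts):
    """joint_counts[i, j] = number of trials with location i and response class j."""
    p = np.asarray(joint_counts, dtype=float)
    p /= p.sum()
    p_loc = p.sum(axis=1, keepdims=True)    # P(location)
    p_resp = p.sum(axis=0, keepdims=True)   # P(response)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (p_loc @ p_resp)[nz])))

# A perfectly diagonal confusion table over 4 locations carries log2(4) = 2 bits.
print(transmitted_information(np.eye(4) * 25))   # 2.0
```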
