Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
The localization of sounds in space is based on spatial cues that arise from the acoustical properties of the head and external ears. Individual differences in localization cue values result from variability in the shape and dimensions of these structures. We have mapped spatial response fields of high-frequency neurons in ferret primary auditory cortex using virtual sound sources based either on the animal's own ears or on the ears of other subjects. For 73% of units, the response fields measured using the animals' own ears differed significantly in shape and/or position from those obtained using spatial cues from another ferret. The observed changes correlated with individual differences in the acoustics. These data are consistent with previous reports showing that humans localize less accurately when listening to virtual sounds from other individuals. Together these findings support the notion that neural mechanisms underlying auditory space perception are calibrated by experience to the properties of the individual.
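For readers unfamiliar with the virtual acoustic space technique used above, the sketch below shows the standard synthesis step: a source signal is convolved with the left- and right-ear head-related impulse responses (HRIRs) measured for a given direction, so that playback over headphones reproduces that individual's spatial cues. This is a minimal sketch; the sampling rate, stimulus, and HRIR arrays are illustrative placeholders, not values from the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_source(sound, hrir_left, hrir_right):
    """Render a monaural sound at the direction encoded by an HRIR pair.

    Using the listener's own HRIRs preserves individual spectral and
    binaural cues; substituting another subject's HRIRs (as in the
    cross-ear condition described above) delivers foreign cues.
    """
    left = fftconvolve(sound, hrir_left)
    right = fftconvolve(sound, hrir_right)
    return np.stack([left, right])  # shape: (2, n_samples)

# Illustrative stand-ins: 100 ms of broadband noise, dummy 256-tap HRIRs.
fs = 48_000
noise = np.random.randn(int(0.1 * fs))
hrir_l = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)  # placeholder
hrir_r = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)  # placeholder
binaural = virtual_source(noise, hrir_l, hrir_r)
```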

2.
Because the inner ear is not organized spatially, sound localization relies on the neural processing of implicit acoustic cues. To determine a sound's position, the brain must learn and calibrate these cues, using accurate spatial feedback from other sensorimotor systems. Experimental evidence for such a system has been demonstrated in barn owls, but not in humans. Here, we demonstrate the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae.

3.
Sound localization in humans relies on binaural differences (azimuth cues) and monaural spectral shape information (elevation cues) and is therefore the result of a neural computational process. Despite the fact that these acoustic cues are referenced with respect to the head, accurate eye movements can be generated to sounds in complete darkness. This ability necessitates the use of eye position information. So far, however, sound localization has been investigated mainly with a fixed head position, usually straight ahead. Yet the auditory system may rely on head motor information to maintain a stable and spatially accurate representation of acoustic targets in the presence of head movements. We therefore studied the influence of changes in eye-head position on auditory-guided orienting behavior of human subjects. In the first experiment, we used a visual-auditory double-step paradigm. Subjects made saccadic gaze shifts in total darkness toward brief broadband sounds presented before an intervening eye-head movement that was evoked by an earlier visual target. The data show that the preceding displacements of both eye and head are fully accounted for, resulting in spatially accurate responses. This suggests that auditory target information may be transformed into a spatial (or body-centered) frame of reference. To further investigate this possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This is shown in a second experiment where pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, both under head-fixed and -free conditions. To test, in a third experiment, whether the perceived elevation of tones relies on a head- or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence information about head position is used under static conditions too. Interestingly, the influence of head position also depended on the tone frequency. Thus tone-evoked ocular saccades typically showed a partial compensation for changes in static head position, whereas noise-evoked eye-head saccades fully compensated for intervening changes in eye-head position. We propose that the auditory localization system combines the acoustic input with head-position information to encode targets in a spatial (or body-centered) frame of reference. In this way, accurate orienting responses may be programmed despite intervening eye-head movements. A conceptual model, based on the tonotopic organization of the auditory system, is presented that may account for our findings.
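The proposed scheme can be made concrete with a few lines of arithmetic. The sketch below is our hypothetical reading of the conceptual model, using one-dimensional angles in degrees: adding head position to the head-centered acoustic estimate yields a space-fixed target, from which intervening eye-head displacements can be subtracted when the orienting response is programmed.

```python
def gaze_motor_error(target_re_head, head_position, eye_position,
                     head_displacement=0.0, eye_displacement=0.0):
    """Sketch of the proposed spatial updating scheme (1-D angles, degrees).

    The acoustic input is head-centered; adding head position gives a
    space-fixed target, so intervening eye-head movements (as in the
    double-step paradigm) can be compensated when the saccade is made.
    """
    target_in_space = target_re_head + head_position
    new_head = head_position + head_displacement
    new_eye = eye_position + eye_displacement
    # Remaining motor error for the gaze shift after the intervening movement:
    return target_in_space - (new_head + new_eye)

# A sound 20 deg right of the head; an intervening 10-deg head movement
# should leave the response spatially accurate (10 deg of gaze shift remain).
print(gaze_motor_error(20.0, 0.0, 0.0, head_displacement=10.0))  # -> 10.0
```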

4.
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 µs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 µs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
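As a compact summary of the delay regimes reported above (not a model of the underlying mechanism), the following function maps an inter-onset delay to the percept observed for horizontally separated sources; the 0.4-ms and 10-ms boundaries are the values from this study.

```python
def precedence_percept(delay_ms):
    """Summarize the reported percept for paired horizontal sources."""
    if delay_ms < 0.4:
        return "summing localization (phantom between the sources)"
    elif delay_ms <= 10.0:
        return "localization dominance (orienting to the leading source)"
    else:
        return "above echo threshold (lagging source sometimes localized)"

for d in (0.2, 3.0, 25.0):
    print(f"{d:5.1f} ms: {precedence_percept(d)}")
```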

5.
To investigate whether the visual system is crucial for adequate calibration of acoustic localization cues, sound-localization performance of early blind humans was compared with that of sighted controls. Because a potential benefit of vision is mainly expected for targets within the two-dimensional (2D) frontal hemifield, localization was tested within this target range, while using sounds of various durations and spectral content. Subjects were instructed to point, in separate experimental sessions, either with their left arm or with their nose, in the direction of the perceived target position as accurately as possible. The experiments required the use of all available sound-localization cues such as interaural differences in phase and intensity, as well as the complex spectral shape cues provided by the pinnae. In addition, for long-duration stimuli, subjects could have had access to head motion-induced acoustic feedback. Moreover, the two pointing methods allowed us to assess different response strategies for the two groups. In an additional series, subjects were instructed to respond as quickly as possible. The results show that, in general, 2D sound-localization performance of blind subjects was indistinguishable from that of sighted subjects, both for broad-band noise and for pure tones. In the fast head-pointing task, the latency distributions of both groups were equal. These findings suggest that visual feedback is not required to calibrate the available localization cues – including the idiosyncratic and complex spectral shape cues for elevation. Instead, the localization abilities of blind people show that the putative supervising role of vision may be supported, or taken over, by other non-visual feedback systems. The results do not provide support for the hypothesis that blind people can hypercompensate for the loss of vision in the frontal hemifield by developing superior sound-localization abilities. Despite the general correspondence in localization behavior, some specific differences were apparent, both between pointing strategies and between blind and sighted subjects. Most importantly, the reconstructed origin (bias) of arm pointing was located near the shoulder for the blind subjects, whereas it was shifted and located near the cyclopean eye for the sighted subjects. The results indicate that both early blind and sighted humans adequately transform the head-centered acoustic target coordinates into the required reference frame of either motor system, but that the adopted response strategy may be specific to the subject group and pointer method.

6.
Localization of sounds by the auditory system is based on the analysis of three sources of information: interaural level differences (ILD, caused by an attenuation of the sound as it travels to the more distant ear), interaural time differences (ITD, caused by the additional amount of time it takes for the sound to arrive at the more distant ear), and spectral cues (caused by direction-specific spectral filter properties of the pinnae). Although cortical processes of ITD and ILD analysis have been investigated in a number of psychophysiological studies, there is hitherto no evidence concerning the cortical processing of spectral cues for sound localization. The objective of the present experiment was to test whether it is possible to observe electrophysiological correlates of sound localization based on spectral cues. In an auditory oddball experiment, 80-ms bursts of broadband noise from varying free-field locations were presented to inattentive participants. Mismatch negativities (MMNs) were observed for pairs of standards and location deviants located symmetrically with respect to the interaural axis. As interaural time and level differences are identical for such pairs of sounds, the observed MMNs most likely reflect cognitive processes of sound localization utilizing the spectral filter properties of the pinnae. MMN latencies suggest that sound localization based on spectral cues is slower than ITD- or ILD-based localization.
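For a concrete sense of the ITD cue mentioned above, the classical spherical-head (Woodworth) approximation, which is standard acoustics background rather than anything from this paper, gives ITD ≈ (a/c)(θ + sin θ) for head radius a, speed of sound c, and source azimuth θ:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Spherical-head ITD approximation: (a/c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / c * (theta + np.sin(theta))

# For a typical adult head this yields ~0 us at midline up to ~650 us at 90 deg.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> ITD = {woodworth_itd(az) * 1e6:6.1f} us")
```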

7.
The goal of the present study was to investigate how monaural sound localization on the horizontal plane in blind humans is affected by manipulating spectral cues. As reported in a previous study (Lessard et al. 1998), blind subjects are able to calibrate their auditory space despite their congenital lack of vision. Moreover, the performance level of half of the blind subjects was superior to that of sighted subjects under monaural listening conditions. Here, we first tested ten blind subjects and five controls in free-field (1) binaural and (2) monaural sound localization tasks. Results showed that, contrary to controls and half the blind subjects, five of the blind listeners were able to localize the sounds with one ear blocked. The blind subjects who showed good monaural localization performance were then re-tested in three additional monaural tasks in which we manipulated their ability to use spectral cues. These subjects thus localized the same sounds: (3) with acoustical paste on the pinna, (4) with high-pass sounds and unobstructed pinna and (5) with low-pass sounds and unobstructed pinna. A significant increase in localization errors was observed when their ability to use spectral cues was altered. We conclude that one of the reasons why some blind subjects show supra-normal performances might be that they more effectively utilize auditory spectral cues.

8.
Sound localization depends on multiple acoustic cues such as interaural differences in time (ITD) and level (ILD) and spectral features introduced by the pinnae. Although many neurons in the inferior colliculus (IC) are sensitive to the direction of sound sources in free field, the acoustic cues underlying this sensitivity are unknown. To approach this question, we recorded the responses of IC cells in anesthetized cats to virtual space (VS) stimuli synthesized by filtering noise through head-related transfer functions measured in one cat. These stimuli not only possess natural combinations of ITD, ILD, and spectral cues as in free field but also allow precise control over each cue. VS receptive fields were measured in the horizontal and median vertical planes. The vast majority of cells were sensitive to the azimuth of VS stimuli in the horizontal plane for low to moderate stimulus levels. Two-thirds showed a "contra-preference" receptive field, with a vigorous response on the contralateral side of an edge azimuth. The other third of receptive fields were tuned around a best azimuth. Although edge azimuths of contra-preference cells had a broad distribution, best azimuths of tuned cells were near the midline. About half the cells tested were sensitive to the elevation of VS stimuli along the median sagittal plane by showing either a peak or a trough at a particular elevation. In general receptive fields for VS stimuli were similar to those found in free-field studies of IC neurons, suggesting that VS stimulation provided the essential cues for sound localization. Binaural interactions for VS stimuli were studied by comparing responses to binaural stimulation with responses to monaural stimulation of the contralateral ear. A majority of cells showed either purely inhibitory (BI) or mixed facilitatory/inhibitory (BF&I) interactions. Others showed purely facilitatory (BF) or no interactions (monaural). Binaural interactions were correlated with azimuth sensitivity: most contra-preference cells had either BI or BF&I interactions, whereas tuned cells were usually BF. These correlations demonstrate the importance of binaural interactions for azimuth sensitivity. Nevertheless most monaural cells were azimuth-sensitive, suggesting that monaural cues also play a role. These results suggest that the azimuth of a high-frequency sound source is coded primarily by edges in azimuth receptive fields of a population of ILD-sensitive cells.
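The two receptive-field categories described above, "contra-preference" (a response edge) and "tuned" (a response peak), can be illustrated with a toy classifier. The heuristic and its threshold below are our own illustrative choices, not the criteria used in the study.

```python
import numpy as np

def classify_azimuth_profile(azimuths, rates, edge_frac=0.75):
    """Heuristic: call a profile 'tuned' if the response falls off on both
    sides of its peak, 'contra-preference' if it stays high along one
    flank. The 0.75 threshold is illustrative, not from the study."""
    peak = int(np.argmax(rates))
    high = edge_frac * rates[peak]
    left_min = rates[:peak + 1].min()
    right_min = rates[peak:].min()
    if left_min >= high or right_min >= high:
        return "contra-preference (edge-like receptive field)"
    return f"tuned (best azimuth = {azimuths[peak]} deg)"

az = np.arange(-90, 91, 15)
edge_cell = 1.0 / (1.0 + np.exp(-(az + 20) / 10.0))  # sigmoidal response edge
tuned_cell = np.exp(-(az / 25.0) ** 2)               # peak near the midline
print(classify_azimuth_profile(az, edge_cell))   # -> contra-preference
print(classify_azimuth_profile(az, tuned_cell))  # -> tuned (best azimuth = 0 deg)
```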

9.
This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband (0.5-20 kHz; BB) noises, with sound levels between 30 and 60 dB, A-weighted (dBA). To deny listeners any consistent azimuth-related head-shadow cues, stimuli were randomly interleaved. A plug immediately degraded azimuth performance, as evidenced by a sound level-dependent shift ("bias") of responses contralateral to the plug, and a level-dependent change in the slope of the stimulus-response relation ("gain"). Although the azimuth bias and gain were highly correlated, they could not be predicted from the plug's acoustic attenuation. Interestingly, listeners performed best for low-intensity stimuli at their normal-hearing side. These data demonstrate that listeners rely on monaural spectral cues for sound-source azimuth localization as soon as the binaural difference cues break down. Also the elevation response components were affected by the plug: elevation gain depended on both stimulus azimuth and on sound level and, as for azimuth, localization was best for low-intensity stimuli at the hearing side. Our results show that the neural computation of elevation incorporates a binaural weighting process that relies on the perceived, rather than the actual, sound-source azimuth. It is our conjecture that sound localization ensues from a weighting of all acoustic cues for both azimuth and elevation, in which the weights may be partially determined, and rapidly updated, by the reliability of the particular cue.
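The "gain" and "bias" terms above are simply the slope and intercept of the stimulus-response regression. The snippet below shows that computation on fabricated responses from a hypothetical plugged-ear listener whose responses are compressed (gain < 1) and shifted toward the hearing side (nonzero bias).

```python
import numpy as np

# Hypothetical target azimuths and a plugged-ear listener's responses (deg):
targets = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
responses = 0.6 * targets + 15.0 + np.random.default_rng(0).normal(0, 3, 7)

# Slope = localization gain, intercept = response bias.
gain, bias = np.polyfit(targets, responses, 1)
print(f"gain (slope) = {gain:.2f}, bias (intercept) = {bias:.1f} deg")
```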

10.
The alternation of sounds between the left and right ears induces motion perception of a static visual stimulus (SIVM: sound-induced visual motion). In that situation, binaural cues are the principal means of perceiving the locations and movements of the sounds. The present study investigated how a spectral cue – another important cue for sound localization and motion perception – contributes to the SIVM. In the experiments, two alternating sound sources aligned in the vertical plane were presented in synchrony with a static visual stimulus. We found that both the incidence of the SIVM and the magnitude of the perceived movement of the static visual stimulus increased with retinal eccentricity (1.875–30°), indicating an influence of the spectral cue on the SIVM. These findings suggest that the SIVM generalizes to the whole two-dimensional audio–visual space, and strongly imply common neural substrates for auditory and visual motion perception in the brain.

11.
We present in this paper a connectionist model that extracts interaural intensity differences (IID) from head-related transfer functions (HRTF) in the form of spectral cues to localize broadband high-frequency auditory stimuli, in both azimuth and elevation. A novel discriminative matching measure (DMM) is defined and optimized to characterize the match to this IID spectrum. The optimal DMM approach and a novel backpropagation-based fuzzy model of localization are shown to be capable of localizing sources in azimuth, using only spectral IID cues. The fuzzy neural network model is extended to include localization in elevation. The use of training data with additive noise provides robustness to input errors. Outputs are modeled as two-dimensional Gaussians that act as membership functions for the fuzzy sets of sound locations. Error back-propagation is used to train the network to correlate input patterns and the desired output patterns. The fuzzy outputs are used to estimate the location of the source by detecting Gaussians using the max-energy paradigm. The proposed model shows that HRTF-based spectral IID patterns can provide sufficient information for extracting localization cues using a connectionist paradigm. Successful recognition in the presence of additive noise in the inputs indicates that the computational framework of this model is robust to errors made in estimating the IID patterns. The localization errors for such noisy patterns at various elevations and azimuths are compared and found to be within limits of localization blurs observed in humans.
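The model's input representation, an IID spectrum, is the frequency-by-frequency level difference between the two ears' transfer functions. Below is a minimal sketch of that extraction step with placeholder impulse responses; the DMM and the fuzzy network themselves are not reproduced here.

```python
import numpy as np

def iid_spectrum(hrir_left, hrir_right, n_fft=512, eps=1e-12):
    """Interaural intensity difference (dB) per frequency bin,
    computed from a left/right HRIR pair."""
    hl = np.abs(np.fft.rfft(hrir_left, n_fft))
    hr = np.abs(np.fft.rfft(hrir_right, n_fft))
    return 20.0 * np.log10((hl + eps) / (hr + eps))

# Placeholder 256-tap impulse responses standing in for measured HRIRs.
rng = np.random.default_rng(1)
iid = iid_spectrum(rng.standard_normal(256), rng.standard_normal(256))
print(iid.shape)  # (257,): one IID value per rfft bin
```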

12.
It has been suggested that successively presented sounds that are perceived as separate auditory streams are represented by separate populations of neurons. Mostly, spectral separation in different peripheral filters has been identified as the cue for segregation. However, stream segregation based on temporal cues is also possible without spectral separation. Here we present sequences of ABA- triplet stimuli providing only temporal cues to neurons in the European starling auditory forebrain. A and B sounds (125 ms duration) were harmonic complexes (fundamentals 100, 200, or 400 Hz; center frequency and bandwidth chosen to fit the neurons' tuning characteristic) with identical amplitude spectra but different phase relations between components (cosine, alternating, or random phase) and presented at different rates. Differences in both rate responses and temporal response patterns of the neurons when stimulated with harmonic complexes with different phase relations provide the first evidence for a mechanism allowing a separate neural representation of such stimuli. Recording sites responding to frequencies >1 kHz showed enhanced rate and temporal differences compared with those responding at lower frequencies. These results demonstrate a neural correlate of streaming by temporal cues due to the variation of phase that shows striking parallels to observations in previous psychophysical studies.

13.
When two brief sounds arrive at a listener's ears nearly simultaneously from different directions, localization of the sounds is described by "the precedence effect." At inter-stimulus delays (ISDs) <5 ms, listeners typically report hearing not two sounds but a single fused sound. The reported location of the fused image depends on the ISD. At ISDs of 1-4 ms, listeners point near the leading source (localization dominance). As the ISD is decreased from 0.8 to 0 ms, the fused image shifts toward a location midway between the two sources (summing localization). When an inter-stimulus level difference (ISLD) is imposed, judgements shift toward the more intense source. Spatial hearing, including the precedence effect, is thought to depend on the auditory cortex. Therefore we tested the hypothesis that the activity of cortical neurons signals the perceived location of fused pairs of sounds. We recorded the unit responses of cortical neurons in areas A1 and A2 of anesthetized cats. Single broadband clicks were presented from various frontal locations. Paired clicks were presented with various ISDs and ISLDs from two loudspeakers located 50 degrees to the left and right of midline. Units typically responded to single clicks or paired clicks with a single burst of spikes. Artificial neural networks were trained to recognize the spike patterns elicited by single clicks from various locations. The trained networks were then used to identify the locations signaled by unit responses to paired clicks. At ISDs of 1-4 ms, unit responses typically signaled locations near that of the leading source in agreement with localization dominance. Nonetheless the responses generally exhibited a substantial undershoot; this finding, too, accorded with psychophysical measurements. As the ISD was decreased from ~0.4 to 0 ms, network estimates typically shifted from the leading location toward the midline in agreement with summing localization. Furthermore a superposed ISLD shifted network estimates toward the more intense source, reaching an asymptote at an ISLD of 15-20 dB. To allow quantitative comparison of our physiological findings to psychophysical results, we performed human psychophysical experiments and made acoustical measurements from the ears of cats and humans. After accounting for the difference in head size between cats and humans, the responses of cortical units usually agreed with the responses of human listeners, although a sizable minority of units defied psychophysical expectations.
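The decoding logic, training on responses to single clicks and reading out the location signaled by responses to paired clicks, can be sketched with a simple template matcher. Everything below (nearest-centroid decoder, toy spike-count features) is an illustrative stand-in, not the artificial neural networks used in the study.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Nearest-centroid stand-in for the paper's networks: average the
    spike-pattern vectors recorded for each single-click location."""
    return {loc: patterns[labels == loc].mean(axis=0) for loc in np.unique(labels)}

def decode(centroids, pattern):
    """Report the single-click location whose template best matches a response."""
    return min(centroids, key=lambda loc: np.linalg.norm(centroids[loc] - pattern))

# Toy training set: 16-bin spike-count vectors whose mean rate varies with azimuth.
rng = np.random.default_rng(2)
locs = np.repeat([-50, 0, 50], 20)                          # training azimuths (deg)
feats = rng.poisson(5 + locs[:, None] / 25, (60, 16)).astype(float)
templates = train_centroids(feats, locs)
print(decode(templates, feats[0]))  # decodes near -50 for a -50 deg response
```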

14.
We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to "favorable" ITDs and weakly or not at all in response to "unfavorable" ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Using concurrent noise with a spectral composition in which the neuron's excitatory frequencies are omitted reduced the maximal response similar to that obtained with concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITD might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.

15.
Previous studies have demonstrated that single neurons in the central nucleus of the inferior colliculus (ICC) are sensitive to multiple sound localization cues. We investigated the hypothesis that ICC neurons are specialized to encode multiple sound localization cues that are aligned in space (as would naturally occur from a single broadband sound source). Sound localization cues including interaural time differences (ITDs), interaural level differences (ILDs), and spectral shapes (SSs) were measured in a marmoset monkey. Virtual space methods were used to generate stimuli with aligned and misaligned combinations of cues while recording in the ICC of the same monkey. Mutual information (MI) between spike rates and stimuli for aligned versus misaligned cues was compared. Neurons with best frequencies (BFs) less than ~11 kHz mostly encoded information about a single sound localization cue, ITD or ILD depending on frequency, consistent with the dominance of ear acoustics by either ITD or ILD at those frequencies. Most neurons with BFs >11 kHz encoded information about multiple sound localization cues, usually ILD and SS, and were sensitive to their alignment. In some neurons MI between stimuli and spike responses was greater for aligned cues, while in others it was greater for misaligned cues. If SS cues were shifted to lower frequencies in the virtual space stimuli, a similar result was found for neurons with BFs <11 kHz, showing that the cue interaction reflects the spectra of the stimuli and not a specialization for representing SS cues. In general the results show that ICC neurons are sensitive to multiple localization cues if they are simultaneously present in the frequency response area of the neuron. However, the representation is diffuse in that there is not a specialization in the ICC for encoding aligned sound localization cues.
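Mutual information between spike rates and stimuli is commonly estimated from the joint histogram of stimulus-response pairs. The plug-in estimator below, with toy data, illustrates the quantity being compared for aligned versus misaligned cues; the paper's actual binning and bias-correction choices are not specified here.

```python
import numpy as np

def mutual_information_bits(stim_ids, spike_counts, n_bins=8):
    """Plug-in MI estimate (bits) from the joint stimulus/response histogram."""
    r_bins = np.digitize(spike_counts, np.histogram_bin_edges(spike_counts, n_bins))
    joint, _, _ = np.histogram2d(stim_ids, r_bins,
                                 bins=[len(np.unique(stim_ids)), n_bins + 2])
    p = joint / joint.sum()
    ps, pr = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Toy data: 4 cue configurations, firing rate depends on the configuration.
rng = np.random.default_rng(3)
stims = np.repeat(np.arange(4), 50)
rates = rng.poisson(3 + 2 * stims)
print(f"MI ~ {mutual_information_bits(stims, rates):.2f} bits")
```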

16.
A basic concept in neuroscience is to correlate specific functions with specific neuronal structures. Here a specific example is discussed to propose an alternative concept: structures may be linked to rules of processing and these rules may serve different functions in different species or at different stages of evolution. The medial superior olive (MSO), a mammalian auditory brainstem structure, has been thought to solely process interaural time differences (ITD), the main cue for localizing low frequency sounds. Recent findings, however, indicate that this is not its only function since mammals that do not hear low frequencies and do not use ITDs for sound localization also possess an MSO. Recordings from the bat MSO indicate that it processes temporal cues in the milli- and submillisecond range, based on monaural or binaural inputs. In bats, and most likely in other small mammals, this temporal processing is related to pattern recognition and echo suppression rather than sound localization. However, the underlying mechanism, coincidence detection of several inputs, creates an epiphenomenal ITD sensitivity that is of no use for small mammals like bats or ancestral mammals. Such an epiphenomenal ITD sensitivity would have been a pre-adaptation which, when mammals grew larger during evolution and when localization of low frequency sounds became a question of survival, suddenly gained relevance. This way the MSO became involved in a new function without changing its basic rules of processing.

17.
Human sound localization relies on binaural difference cues for sound-source azimuth and pinna-related spectral shape cues for sound elevation. Although the interaural timing and level difference cues are weighted to produce a percept of sound azimuth, much less is known about binaural mechanisms underlying elevation perception. This problem is particularly interesting for the frontal hemifield, where binaural inputs are of comparable strength. In this paper, localization experiments are described in which hearing for each ear was either normal, or spectrally disrupted by a mold fitted to the external ear. Head-fixed saccadic eye movements were used as a rapid and accurate indicator of perceived sound direction in azimuth and elevation. In the control condition (both ears free) azimuth and elevation components of saccadic responses were well described by a linear regression line for the entire measured range. For unilateral mold conditions, the azimuth response components did not differ from controls. The influence of the mold on elevation responses was largest on the ipsilateral side, and declined systematically with azimuth towards the side of the free ear. Near the midsagittal plane the elevation responses were clearly affected by the mold, suggesting a systematic binaural interaction in the neural computation of perceived elevation that straddles the midline. A quantitative comparison of responses from the unilateral mold, the bilateral mold and control condition provided evidence that the fusion process can be described by binaural weighted averaging. Two different conceptual schemes are discussed that could underlie the observed responses.
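The binaural weighted-averaging account can be written in one line: perceived elevation is a weighted sum of the elevation estimates derived from each ear's spectral cues, with weights that depend on sound azimuth so that each ear dominates on its own side. The sigmoid weighting below is a hypothetical choice for illustration, not the function fitted in the paper.

```python
import numpy as np

def perceived_elevation(elev_left, elev_right, azimuth_deg, slope=20.0):
    """Weighted average of per-ear elevation estimates (degrees).

    w_right rises smoothly from ~0 (far left) to ~1 (far right); the
    sigmoid slope is an illustrative assumption, not a fitted value.
    """
    w_right = 1.0 / (1.0 + np.exp(-azimuth_deg / slope))
    return (1.0 - w_right) * elev_left + w_right * elev_right

# Left ear wears a mold (its spectral estimate is degraded, say 0 deg);
# the right ear is free (correct estimate, 30 deg). The mold's influence
# is largest ipsilaterally and fades toward the free-ear side:
for az in (-60, 0, 60):
    print(az, "deg azimuth ->", round(perceived_elevation(0.0, 30.0, az), 1))
```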

18.
An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for determining sound direction, leaving the difference in response strength, i.e., spike count and/or rate, as the most likely candidate.

19.
The detection of sounds that come from a region of space recently exposed to acoustic stimulation is often slower than the detection of sounds coming from regions of space previously unexposed to acoustic stimulation. The relative increase in reaction time (RT) to targets in recently stimulated locations is generally termed "inhibition of return" (IOR). This term alludes to the possibility that spatial attention is biased against returning to recently visited locations, thus favoring the sampling of new sources of information. However, auditory IOR effects found in paradigms where subjects have to detect a first sound (cue) without making an overt response to it, and then respond as fast as possible to a second sound (target), may be due to a purely motor inhibition carried over from cue to target. Such motor inhibition has been shown to be maximal when cue and target belong to the same category, such as when they occupy the same spatial position. We have assessed the possible contribution of this motor inhibition to auditory IOR effects by having subjects respond to both cues and targets randomly presented in a right location and a left location. Reaction times to targets preceded by cues at the same location were longer than reaction times to targets preceded by cues at the opposite location (IOR effect). Compared to a condition in which subjects responded only to targets, the IOR effect was smaller, but still significant, in the double response condition, suggesting that such an effect depends on both motor inhibition and other factors, possibly related to covert spatial orienting and oculomotor control. A second experiment indicated that the IOR effect component independent of motor inhibition was slightly but significantly greater when space was relevant to the task because subjects had to report the positions of both cues and targets, compared to when space was irrelevant to the task because subjects were not required to report stimulus positions.

20.
We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40 degrees, and temporally by 3-50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source, and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials, the lagging source was significantly higher in amplitude than the lead, arising from the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward the lagging source. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head, first, in the direction of one source, and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and provide a basis for comparison with neurophysiological data.
