Similar articles
20 similar articles retrieved (search time: 46 ms)
1.
We examined the influence of dynamic visual scenes on the motion perception of subjects undergoing sinusoidal (0.45 Hz) roll swing motion at different radii. The visual scenes were presented on a flatscreen monitor with a monocular 40° field of view. There were three categories of trials: (1) trials in the dark; (2) trials where the visual scene matched the actual motion; and (3) trials where the visual scene showed swing motion at a different radius. Subjects verbally reported perceptions of head tilt and translation. When the visual and vestibular cues differed, subjects reported perceptions that were geometrically consistent with a radius between the radii of the visual scene and the actual motion. Even when sensations did not match either the visual or vestibular stimuli, reported motion perceptions were consistent with swing motions combining elements of each. Subjects were generally unable to detect cue conflicts or judge their own visual–vestibular biases, which suggests that the visual and vestibular self-motion cues are not independently accessible.

2.
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.
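The weighted-linear-sum model described in this abstract can be sketched in a few lines. The weight value below is illustrative only (it reflects the reported overall dominance of proprioception, not the study's fitted weights), and the gain scenario is a hypothetical example:

```python
# Sketch of a weighted linear sum of two travelled-distance cues,
# as described above. The weight is an illustrative assumption,
# not a value fitted in the study.

def combined_estimate(d_visual, d_proprio, w_proprio=0.65):
    """Combine visual and proprioceptive distance estimates.

    w_proprio > 0.5 encodes the reported overall dominance of
    proprioception over vision in this task.
    """
    w_visual = 1.0 - w_proprio
    return w_proprio * d_proprio + w_visual * d_visual

# Hypothetical trial: a 10 m walk under a 1.4x visual gain, so vision
# signals 14 m while proprioception signals 10 m. The combined percept
# lies between the cues, closer to the more heavily weighted one.
percept = combined_estimate(d_visual=14.0, d_proprio=10.0)
print(percept)  # 11.4 with the illustrative weights above
```

Under this model, making one cue less stable across trials would be captured by shifting `w_proprio` toward the steadier cue, which is the scaling behaviour the study reports.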

3.
Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded the activity of neurons in the dorsal medial superior temporal (MSTd) area during a task in which trained monkeys combined visual and vestibular cues near-optimally to discriminate heading. During bimodal stimulation, MSTd neurons combined visual and vestibular inputs linearly with subadditive weights. Neurons with congruent heading preferences for visual and vestibular stimuli showed improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences showed diminished sensitivity under cue combination. Responses of congruent cells were more strongly correlated with monkeys' perceptual decisions than were responses of opposite cells, suggesting that the monkey monitored the activity of congruent cells to a greater extent during cue integration. These findings show that perceptual cue integration occurs in nonhuman primates and identify a population of neurons that may form its neural basis.

4.
Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
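The "weighting each cue in proportion to its reliability" strategy referenced above is the standard inverse-variance combination rule for Gaussian cues. A minimal sketch, with illustrative numbers rather than the study's measured reliabilities:

```python
# Minimal sketch of reliability-proportional (inverse-variance) cue
# weighting for two Gaussian cues. Example values are illustrative
# assumptions, not data from the study.

def optimal_combination(mu_vis, var_vis, mu_vest, var_vest):
    """Return the optimally combined heading estimate and its variance."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    w_vest = 1.0 - w_vis
    mu = w_vis * mu_vis + w_vest * mu_vest
    # Combined variance is always <= the smaller single-cue variance.
    var = (var_vis * var_vest) / (var_vis + var_vest)
    return mu, var

# Degrading visual reliability (larger var_vis) shifts weight toward
# the vestibular cue -- the behaviour the monkeys exhibited.
mu, var = optimal_combination(mu_vis=2.0, var_vis=4.0, mu_vest=0.0, var_vest=1.0)
print(mu, var)  # 0.4 0.8
```

The prediction that the combined variance never exceeds the better single cue's variance is what makes this rule "optimal" in the minimum-variance sense.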

5.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.

6.
Four inbred strains of mice, BALB/cByJ, C3H/2Ibg, C57BL/6Ibg, and DBA/2Ibg, were tested for their learning ability in the Morris water maze. Two forms of learning were examined: cue learning, in which the mice were required to swim toward a submerged platform marked by a proximal visual cue; and place learning, in which the animals were required to use distal visual cues to find a submerged platform. C3H and BALB mice, which lack good visual acuity, were incapable of either form of learning. Both C57 and DBA mice were capable of cue learning, but DBA mice performed poorly at the place learning task. A selective impairment in place learning is typical of rats with disrupted hippocampal function. A similar impairment in DBA mice may indicate that abnormal hippocampal function exists under baseline conditions in this strain. This work was supported by AFOSR Grant 85-0369, USPHS Grant HD-07289-01, and BRSG Grants RR-07013-19 and -20 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, NIH, to the University of Colorado. The principles of animal care promulgated by the Guide for the Care and Use of Laboratory Animals, DHHS Publication No. 85-23, are observed by the University of Colorado. The University of Colorado has a letter of assurance (A1609) on file with the DHHS Office of Protection from Research Risks.

7.
Research in the vestibular field has revealed the existence of a central process, called "velocity storage", that is activated by both visual and vestibular rotation cues and is modified by gravity, but whose functional relevance during natural motion has often been questioned. In this review, we explore spatial orientation in the context of a Bayesian model of vestibular information processing. In this framework, deficiencies/ambiguities in the peripheral vestibular sensors are compensated for by central processing to more accurately estimate rotation velocity, orientation relative to gravity, and inertial motion. First, an inverse model of semicircular canal dynamics is used to reconstruct rotation velocity by integrating canal signals over time. However, its low-frequency bandwidth is limited to avoid accumulation of noise in the integrator. A second internal model uses this reconstructed rotation velocity to compute an internal estimate of tilt and inertial acceleration. The bandwidth of this second internal model is also restricted at low frequencies to avoid noise accumulation and drift of the tilt/translation estimator over time. As a result, low-frequency translation can be misinterpreted as tilt. The time constants of these two integrators (internal models) can be conceptualized as two Bayesian priors of zero rotation velocity and zero linear acceleration, respectively. The model replicates empirical observations like "velocity storage" and "frequency segregation" and explains spatial orientation (e.g., "somatogravic") illusions. Importantly, the functional significance of this network, including velocity storage, is found during short-lasting, natural head movements, rather than at low frequencies with which it has been traditionally studied.
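The band-limited integrators in this framework are, in their simplest form, leaky integrators whose time constant plays the role of the zero-velocity prior. A toy discrete-time sketch (time constants and the canal-decay constant below are illustrative, not the review's fitted values):

```python
# Toy sketch of one leaky integrator of the kind described above.
# The leak (time constant tau) prevents noise accumulation and, in
# the Bayesian reading, implements a prior of zero rotation velocity.
# All constants here are illustrative assumptions.
import math

def leaky_integrate(signal, dt, tau):
    """First-order leaky integrator: d(out)/dt = -out/tau + signal."""
    out, trace = 0.0, []
    for s in signal:
        out += dt * (-out / tau + s)
        trace.append(out)
    return trace

# During constant-velocity rotation the canal cue decays away; the
# leaky integrator's velocity estimate then decays with its own time
# constant tau rather than persisting -- velocity-storage-like behaviour.
dt, tau = 0.01, 15.0
canal = [math.exp(-t * dt / 5.0) for t in range(3000)]  # decaying canal cue
estimate = leaky_integrate(canal, dt, tau)
```

A second integrator of the same form, fed by the reconstructed velocity, would produce the tilt/translation estimate; restricting its bandwidth in the same way is what lets low-frequency translation masquerade as tilt.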

8.
The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05–0.4 Hz; A_max = 0.25°–4°; v_max = 0.08–10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects.
(4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1° and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1) and that a vestibular mechanism is doing so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds.
Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.

9.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities as well as within a modality in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible unknown target position whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position half-way through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information. We tested this in trials in which visual and haptic information was presented with temporal disparity. We found that for our sensorimotor task, temporal disparity between the visual and haptic cues had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
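For Gaussian cues and a Gaussian prior, the maximum a posteriori estimate with minimum-variance combination reduces to a precision-weighted mean. A minimal sketch of that computation (all means and variances below are illustrative assumptions, not the study's measured values):

```python
# Sketch of a MAP estimate under minimum-variance combination of
# Gaussian sources, as assumed by the model described above. The
# prior and cue parameters are illustrative assumptions.

def map_estimate(sources):
    """sources: list of (mean, variance) Gaussians, incl. the prior.

    Returns the precision-weighted mean, which is the MAP (and
    minimum-variance) estimate for Gaussian sources.
    """
    total_precision = sum(1.0 / v for _, v in sources)
    return sum(m / v for m, v in sources) / total_precision

# Hypothetical trial: a broad prior on target height centred at 0,
# plus a noisy haptic cue and a noisy visual cue of equal reliability.
target = map_estimate([(0.0, 10.0), (1.2, 2.0), (0.8, 2.0)])
print(target)  # ~0.909: pulled slightly toward the prior
```

Because the cues enter only through their means and precisions, this computation is indifferent to *when* each cue arrived, which is consistent with the study's finding that temporal disparity between the cues had no effect.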

10.
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e. an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of an inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades oriented in cue direction. We argue that microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.

11.
The aim of this study was to establish whether spatial attention triggered by bimodal exogenous cues acts differently as compared to unimodal and crossmodal exogenous cues due to crossmodal integration. In order to investigate this issue, we examined cuing effects in discrimination tasks and compared these effects in a condition wherein a visual target was preceded by both visual and auditory exogenous cues delivered simultaneously at the same side (bimodal cue), with conditions wherein the visual target was preceded by either a visual (unimodal cue) or an auditory cue (crossmodal cue). The results of two experiments revealed that cuing effects on RTs in these three conditions with an SOA of 200 ms had comparable magnitudes. Differences at a longer SOA of 600 ms (inhibition of return for bimodal cues, Experiment 1) disappeared when catch trials were included (in Experiment 2). The current data do not support an additional influence of crossmodal integration on exogenous orienting, but are well in agreement with the existence of a supramodal spatial attention module that allocates attentional resources towards stimulated locations for different sensory modalities.

12.
Summary: This paper compares the motion sensations of a subject rotated about a vertical axis for two fixed visual fields (a large peripheral field and a single central spot) and in darkness. Motion sensation is described in terms of threshold, frequency response, and subjective displacement and velocity. The perception of angular acceleration showed a significantly lower threshold and reduced latency time for the illuminated presentation. The level of illumination, however, produced no significant difference in threshold. The subjective frequency response, measured by a nulling method, showed a higher gain in the illuminated presentation, particularly at low frequencies and accelerations. With the subject rotating a pointer to maintain a fixed heading during triangular velocity stimuli, subjective displacements showed no difference for all different visual cues. Magnitude estimates of the after-rotation associated with deceleration from a constant velocity showed a quicker rising speed, larger subjective velocity and longer duration in the illuminated presentation. All the results suggest that the oculogyral illusion is principally responsible for producing a lower threshold in the illuminated presentation, although the fixed peripheral visual field tends to reduce reliance upon vestibular signals. This effect is especially apparent at lower-intensity rotation stimuli. This research was supported by NASA Ames Research Grants NSG 2012 and 2230.

13.
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system gain was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system gain were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

14.
Summary: In decerebrated, spinal-transected cats with neck and forelimbs immobilized by plaster casts, visual and proprioceptive cues were minimized when the animal was tilted. The contralateral labyrinth was acutely destroyed. The ipsilateral semicircular canals were plugged and the ipsilateral saccule extirpated, leaving the ipsilateral utricle intact. Neurons in the vestibular nuclear complex driven by electrical stimulation of the utricle were shown to be highly sensitive to static pitch. Results suggest that the observed response to static pitch was due exclusively to input from the utricle. This research was supported in part by a grant from the Wing Lung Bank Medical Research Fund and the Research Grant Committee of the Medical Faculty at the University of Hong Kong.

15.
In the Morris Water Maze (MWM), an animal learns the location of a hidden platform relative to distal visual cues in a process known as spatial learning. The visual cues used in MWM experiments are invariably salient in nature, and non-salient cues, such as subtle environmental variations, have not traditionally been considered to play a significant role. However, the role of non-salient cues in spatial navigation has not been adequately investigated experimentally. The objective of this experiment was therefore to determine the relative contribution of salient and non-salient visual cues to spatial navigation in the MWM. Animals were presented with an environment containing both types of visual cues, and were tested in three successive phases of water maze testing, each with a new platform location. Probe tests were used to assess spatial accuracy, and several cue variation trials were run in which both salient and non-salient visual cues were manipulated. It was observed that removal of the salient visual cues did not cause a significant deterioration in performance unless accompanied by disruption of the non-salient visual cues, and that spatial navigation was unimpaired when only the salient visual cues were removed from view. This suggests that during place learning in Long-Evans rats, non-salient visual cues may play a dominant role, at least when salient cue presentation is limited to four cues.

16.
We investigated the relative weighting of vestibular, optokinetic and podokinetic (foot and leg proprioceptive) cues for the perception of self-turning in an environment which was either stationary (concordant stimulation) or moving (discordant stimulation) and asked whether cue weighting changes if subjects (Ss) detect a discordance. Ss (N = 18) stood on a turntable inside an optokinetic drum and turned either passively (turntable rotating) or actively in space at constant velocities of 15, 30, or 60°/s. Sensory discordance was introduced by simultaneous rotations of the environment (drum and/or turntable) at ±{5, 10, 20, 40, 80}% of self-turning velocity. In one experiment, Ss were to detect these rotations (i.e. the sensory discordance), and in a second experiment they reported perceived angular self-displacement. Discordant optokinetic cues were better detected, and more heavily weighted for self-turning perception, than discordant podokinetic cues. Within Ss, weights did not depend on whether a discordance was detected or not. Across Ss, optokinetic weights varied over a large range and were negatively correlated with the detection scores: the more perception was influenced by discordant optokinetic cues, the poorer was the detection score; no such correlation was found among the podokinetic results. These results are interpreted in terms of a "self-referential" model that makes the following assumptions: (1) a weighted average of the available sensory cues both determines turning perception and serves as a reference to which the optokinetic cue is compared; (2) a discordance is detected if the difference between reference and optokinetic cue exceeds some threshold; (3) the threshold value corresponds to about the same multiple of sensory uncertainty in all Ss. With these assumptions the model explains the observed relation between optokinetic weight and detection score.
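The three assumptions of this "self-referential" model can be sketched directly. The weights, uncertainty, and threshold multiple below are hypothetical placeholders, not values estimated in the study:

```python
# Sketch of the three-assumption "self-referential" model described
# above: a weighted average of the cues forms both the turning percept
# and the reference, and a discordance is flagged when the reference
# differs from the optokinetic cue by more than a threshold set as a
# multiple (k) of sensory uncertainty. All parameters are illustrative.

def self_referential(v_vest, v_opto, v_podo,
                     w=(0.3, 0.45, 0.25), uncertainty=4.0, k=2.0):
    """Return (perceived turning velocity, discordance detected?)."""
    reference = w[0] * v_vest + w[1] * v_opto + w[2] * v_podo
    detected = abs(reference - v_opto) > k * uncertainty
    return reference, detected

# Hypothetical trial: active turn at 30 deg/s with the drum inflating
# the optokinetic cue by 20%. The percept is biased toward the
# optokinetic cue, and the discordance stays below threshold.
percept, flagged = self_referential(v_vest=30.0, v_opto=36.0, v_podo=30.0)
```

Note how the model reproduces the reported negative correlation: raising the optokinetic weight pulls the reference toward the optokinetic cue, shrinking the reference-vs-optokinetic difference and thus worsening discordance detection.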

17.
Rats learned a Y-maze position habit and eight successive reversals in one of three experimental conditions, each distinguished by different visual cues on the goal box doors. Following a retention test, septal lesions were produced and the animals retested. Reversal performance was best when the correct choice was associated consistently with a single visual cue. When two distinct visual cues were correlated with the correct choice in an alternating sequence, preoperative reversal performance did not differ from the condition with no overt visual cue, except for an increased amount of vicarious approaches. Septal lesions did not affect reversal performance when visual cues consistently signalled the correct choice, but produced significant decrements in the other two conditions. In the condition which lacked distinct visual cues, septal animals made more errors and vicarious approaches than in the condition which had visual cues associated with the correct choice in an alternating sequence. These results suggest that septal lesions impair the ability to use spatial/positional cues. The extent to which this deficit is expressed depends on the relevance of the visual cues provided.

18.
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated a self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.

19.
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.

20.
The fact that the sensory systems do not become functional at the same time during prenatal development raises the question of how experience in a given modality can influence functioning in other sensory modalities. The present study exposed groups of bobwhite quail embryos to augmented tactile and vestibular stimulation at times that either coincided with or followed the period of onset of function in the later-developing auditory and visual modalities. Differences in the timing of augmented prenatal stimulation led to different patterns of subsequent auditory and visual responsiveness following hatching. No effect on normal visual responsiveness to species-typical maternal cues was found when exposure to tactile and vestibular stimulation coincided with the emergence of visual function (Days 14-19), but when exposure took place after the onset of visual functioning (Days 17-22), chicks displayed enhanced responsiveness to the same maternal visual cues. When augmented tactile and vestibular stimulation coincided with the onset of auditory function (Days 9-14), embryos subsequently failed to learn a species-typical maternal call prior to hatching. However, when given exposure to the same type and amount of augmented stimulation following the onset of auditory function (Days 14-19), embryos did learn the maternal call. These findings demonstrate that augmented stimulation to earlier-emerging sensory modalities can either facilitate or interfere with perceptual responsiveness in later-developing modalities, depending on when that stimulation takes place.
