Similar Literature
20 similar articles found (search time: 537 ms)
1.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet clear how the different senses providing information about our own movements combine to yield a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited-lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). They were then asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they are combined to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails in this rotation-displacement task when a matching problem is introduced.
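The reduced variance in the gain-preserved bimodal condition is the signature of inverse-variance (maximum-likelihood) cue weighting. A minimal sketch of that standard rule, with hypothetical means and variances rather than values from the study:

```python
# Inverse-variance (maximum-likelihood) combination of a visual and a
# vestibular turn estimate. All numbers are illustrative assumptions.
import numpy as np

def ml_combine(mu_v, var_v, mu_b, var_b):
    """Weight each cue by its reliability (1/variance); the combined
    variance is lower than either unimodal variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_b)
    mu = w_v * mu_v + (1 - w_v) * mu_b
    var = (var_v * var_b) / (var_v + var_b)
    return mu, var

# Hypothetical unimodal reproductions of a 90-deg turn:
mu, var = ml_combine(mu_v=95.0, var_v=64.0, mu_b=85.0, var_b=100.0)
print(f"bimodal: {mu:.1f} deg, variance {var:.1f} deg^2")  # 91.1 deg, 39.0 deg^2
```

The combined estimate lies between the unimodal means and its variance (39.0) is below both unimodal variances (64.0 and 100.0), the same qualitative pattern the study reports.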

2.
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system gain was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system gain were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
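The abstract does not reproduce the control-system model itself; the sketch below is a generic stand-in showing how a visually driven postural loop can yield sway larger than the visual stimulus. The pendulum constant and PD gains are assumptions, not the paper's parameters.

```python
# Inverted-pendulum stance stabilized by PD feedback whose upright
# reference is biased toward a moving visual surround (a generic sketch,
# not the authors' model; all parameters are assumed).
import numpy as np

def simulate_sway(freq_hz=0.2, stim_amp_deg=0.5, t_end=120.0, dt=0.005):
    g_over_l = 10.0           # gravitational instability constant (1/s^2)
    kp, kd = 14.0, 2.0        # proportional/derivative feedback gains
    theta, omega = 0.0, 0.0   # body angle (deg) and angular velocity (deg/s)
    t = np.arange(0.0, t_end, dt)
    sway = np.empty_like(t)
    for i, ti in enumerate(t):
        surround = stim_amp_deg * np.sin(2 * np.pi * freq_hz * ti)
        torque = kp * (surround - theta) - kd * omega  # visually biased control
        omega += (g_over_l * theta + torque) * dt      # gravity destabilizes
        theta += omega * dt
        sway[i] = theta
    return np.abs(sway[t > t_end / 2]).max()           # steady-state amplitude

ratio = simulate_sway() / 0.5
print(f"sway/stimulus amplitude ratio ~= {ratio:.1f}")  # > 1: amplification
```

With these assumed gains the closed loop amplifies the stimulus severalfold, illustrating how sway can exceed the visual surround amplitude without any single sensory signal being "wrong".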

3.
In the field of motion-based simulation, it was found that a visual amplitude equal to the inertial amplitude does not always provide the best perceived match between visual and inertial motion. This result is thought to be caused by the “quality” of the motion cues delivered by the simulator motion and visual systems. This paper studies how different visual characteristics, like field of view (FoV) and size and depth cues, influence the scaling between visual and inertial motion in a simulation environment. Subjects were exposed to simulator visuals with different fields of view and different visual scenes and were asked to vary the visual amplitude until it matched the perceived inertial amplitude. This was done for motion profiles in surge, sway, and yaw. Results showed that the subjective visual amplitude was significantly affected by the FoV, visual scene, and degree of freedom. When the FoV and visual scene were closer to what one expects in the real world, the scaling between the visual and inertial cues was closer to one. For yaw motion, the subjective visual amplitudes were approximately the same as the real inertial amplitudes, whereas for sway and especially surge, the subjective visual amplitudes were higher than the inertial amplitudes. This study demonstrated that visual characteristics affect the scaling between visual and inertial motion, which leads to the hypothesis that this scaling may be a good metric for quantifying the effect of different visual properties in motion-based simulation.

4.
To compare and contrast the neural mechanisms that contribute to vestibular perception and action, we measured vestibuloocular reflexes (VOR) and perceptions of tilt and translation. We took advantage of the well-known ambiguity that the otolith organs respond to both linear acceleration and tilt with respect to gravity and investigated the mechanisms by which this ambiguity is resolved. A new motion paradigm that combined roll tilt with inter-aural translation ("Tilt&Translation") was used; subjects were sinusoidally (0.8 Hz) roll tilted but with their ears above or below the rotation axis. This paradigm provided sinusoidal roll canal cues that were the same across trials while providing otolith cues that varied linearly with ear position relative to the earth-horizontal rotation axis. We found that perceived tilt and translation depended on canal cues, with substantial roll tilt and inter-aural translation perceptions reported even when the otolith organs measured no inter-aural force. These findings match internal model predictions that rotational cues from the canals influence the neural processing of otolith cues. We also found horizontal translational VORs that varied linearly with radius; a minimal response was measured when the otolith organs transduced little or no inter-aural force. Hence, the horizontal translational VOR was dependent on otolith cues but independent of canal cues. These findings match predictions that translational VORs are elicited by simple filtering of otolith signals. We conclude that internal models govern human perception of tilt and translation at 0.8 Hz and that high-pass filtering governs the human translational VOR at this same frequency.
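The internal-model account referenced here can be captured compactly: integrate the canal-sensed rotation to track tilt, predict the gravity component of the otolith signal, and attribute the remainder to translation. The sketch below is an idealized, noise-free illustration of that computation, not the authors' implementation; the 0.8 Hz, 20-deg profile follows the paradigm, everything else is assumed.

```python
# Idealized canal-otolith internal model for the tilt/translation
# ambiguity (an illustration, not the authors' implementation).
import numpy as np

fs = 100.0                                    # sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
tilt = np.deg2rad(20) * np.sin(2 * np.pi * 0.8 * t)   # roll-tilt angle (rad)
omega = np.gradient(tilt, 1 / fs)                     # canal cue: roll velocity
g = 9.81

# Otolith cue for pure roll tilt about a head-centered axis: the
# inter-aural gravito-inertial force comes from gravity alone.
gif_ia = g * np.sin(tilt)

tilt_hat = np.cumsum(omega) / fs              # integrate the canal cue
accel_hat = gif_ia - g * np.sin(tilt_hat)     # leftover -> perceived translation

print(f"peak inter-aural GIF     : {np.abs(gif_ia).max():.2f} m/s^2")
print(f"peak inferred translation: {np.abs(accel_hat).max():.2f} m/s^2")  # ~0
```

With an accurate canal signal the inferred translation is near zero: the rotational cue lets the otolith signal be assigned to tilt, which is exactly the canal dependence the perceptual data show.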

5.
Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark, or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had been previously given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self motion when both visual and physical cues were present was more closely perceptually equivalent to the physical motion experienced than to the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self motion.

6.
Patients with phobic postural vertigo (PPV) often report a particularly increased unsteadiness when looking at moving visual scenes. Therefore, the differential effects of large-field visual motion stimulation in the roll plane on body sway during upright stance were analyzed in 23 patients with PPV, who had been selected for the integrity of their vestibular and balance systems, and in 17 healthy subjects. Visual motion stimulation induced a sensation of apparent body motion (roll vection) in all patients and normal subjects. Normal subjects showed an increased lateral sway path with a lateral shift of the center of pressure (COP) in the stimulus direction (mean 1.67 cm, SD 1.63). The patients also exhibited an increase in sway path during visual motion stimulation; however, their body sway differed from that of normals in that there was no lateral displacement of COP (mean 0.19 cm, SD 0.73). The lateral displacement of COP and the increase in RMS of body sway during visual motion stimulation were significantly greater in normals than in the patients (p < 0.05). The patients' increased body sway without COP deviation does not imply an increased risk of falling. Two explanations are conceivable for this increased body sway without body deviation in patients with PPV: (a) the patients rely more on proprioceptive and vestibular rather than on visual cues to regulate upright stance; or (b) they depend on visual, vestibular, and proprioceptive information, but the threshold at which they initiate a compensatory body sway opposite in direction to a perceived body deviation is lower than in normal subjects. The data support the second explanation.

7.
To investigate how visual and vestibular cues are integrated for the perception of gravity during passive self-motion, we measured the ability to maintain a handheld object vertical relative to gravity without visual feedback during sinusoidal roll-tilt stimulation. Visual input, either concordant or discordant with actual dynamic roll-tilt, was delivered by a head-mounted display showing the laboratory. The four visual conditions were darkness, visual-vestibular concordance, a stationary visual scene, and a visual scene 180° phase-shifted relative to actual tilt. Tilt-indication performance using a solid, cylindrical joystick was better in the presence of concordant visual input relative to the other visual conditions. In addition, we compared performance when indicating the vertical with the joystick or with a full glass of water. Subjects indicated the direction of gravity significantly better when holding the full glass of water than the joystick. Matching the inertial characteristics, including fluid properties, of the handheld object to the glass of water did not improve performance. There was no effect of visual input on tilt performance when using the glass of water to indicate gravitational vertical. The gain of object tilt motion did not change with roll-tilt amplitude and frequency (±7.5° at 0.25 Hz, ±10° at 0.16 Hz, and ±20° at 0.08 Hz); however, the phase of object tilt relative to subject tilt showed significant phase-leads at the highest frequency tested (0.25 Hz). Comparison of the object and visual effects observed suggests that the task-dependent behavior change may be due to an attentional shift and/or a shift in strategy.
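Gain and phase of the object's tilt relative to the body's sinusoidal tilt can be read off by demodulating both signals at the stimulus frequency. A sketch with synthetic signals (the 0.92 gain and 12-deg phase lead are invented for illustration):

```python
# Gain/phase extraction by complex demodulation at the stimulus frequency
# (synthetic signals; amplitudes and phase lead are invented).
import numpy as np

fs, f = 100.0, 0.25                       # sample rate, roll-tilt frequency (Hz)
t = np.arange(0, 40, 1 / fs)              # 10 full stimulus cycles
body = 7.5 * np.sin(2 * np.pi * f * t)                    # body tilt (deg)
obj = 6.9 * np.sin(2 * np.pi * f * t + np.deg2rad(12))    # object tilt (deg)

def complex_amp(sig):
    # Projection onto e^{-j2*pi*f*t}; 2*mean recovers the complex amplitude.
    return 2 * np.mean(sig * np.exp(-1j * 2 * np.pi * f * t))

c_body, c_obj = complex_amp(body), complex_amp(obj)
print(f"gain       = {abs(c_obj) / abs(c_body):.2f}")                  # 0.92
print(f"phase lead = {np.degrees(np.angle(c_obj / c_body)):.1f} deg")  # 12.0
```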

8.
Multiple sensory cues underlying the perception of translation and path
The translational linear vestibuloocular reflex compensates most accurately for high frequencies of head translation, with response magnitude decreasing with declining stimulus frequency. However, studies of the perception of translation typically report robust responses even at low frequencies or during prolonged motion. This inconsistency may reflect the incorporation of nondirectional sensory information, associated with the vibration and noise that typically accompany translation, into motion perception. We investigated the perception of passive translation in humans while dissociating nondirectional cues from actual head motion. In a cue-dissociation experiment, interaural (IA) motion was generated using either a linear sled, the mechanics of which generated noise and vibration cues that were correlated with the motion profile, or a multi-axis technique that dissociated these cues from actual motion. In a trajectory-shift experiment, IA motion was interrupted by a sudden change in direction (±30° diagonal) that produced a change in linear acceleration while maintaining sled speed and therefore mechanical (nondirectional) cues. During multi-axis cue-dissociation trials, subjects reported erroneous translation perceptions that strongly reflected the pattern of nondirectional cues, as opposed to nearly veridical percepts when motion and nondirectional cues coincided. During trajectory-shift trials, subjects' percepts were initially accurate, but erroneous following the direction change. Results suggest that nondirectional cues strongly influence the perception of linear motion, while the utility of cues directly related to translational acceleration is limited. One key implication is that "path integration" likely involves complex mechanisms that depend on nondirectional and contextual self-motion cues in support of limited and transient otolith-dependent acceleration input.

9.
We assessed the capacity of the vestibular utricle to modulate muscle sympathetic nerve activity (MSNA) during sinusoidal linear acceleration at amplitudes extending from imperceptible to clearly perceptible. Subjects (n = 16) were seated in a sealed room, eliminating visual cues, mounted on a linear motor that could deliver peak sinusoidal accelerations of 30 mG in the antero-posterior direction. Subjects sat on a padded chair with their neck and head supported vertically, thereby minimizing somatosensory cues, facing the direction of motion in the anterior direction. Each block of sinusoidal motion was applied at a time unknown to subjects and in a random order of amplitudes (1.25, 2.5, 5, 10, 20 and 30 mG), at a constant frequency of 0.2 Hz. MSNA was recorded via tungsten microelectrodes inserted into muscle fascicles of the common peroneal nerve. Subjects used a linear potentiometer aligned to the axis of motion to indicate any perceived movement, which was compared with the accelerometer signal of actual room movement. On average, 67% correct detection of movement did not occur until 6.5 mG, with correct knowledge of the direction of movement at ~10 mG. Cross-correlation analysis revealed potent sinusoidal modulation of MSNA even at accelerations subjects could not perceive (1.25–5 mG). The modulation index showed a positive linear increase with acceleration amplitude, such that the modulation was significantly higher (25.3 ± 3.7%) at 30 mG than at 1.25 mG (15.5 ± 1.2%). We conclude that selective activation of the vestibular utricle causes a pronounced modulation of MSNA, even at levels well below perceptual threshold, and provides further evidence in support of the importance of vestibulosympathetic reflexes in human cardiovascular control.
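A simple way to quantify such sinusoidal modulation is to project the nerve signal onto the stimulus sinusoid and express the recovered amplitude relative to the mean level. The sketch below does this on a synthetic record (the 15% depth and the noise level are assumptions chosen to echo the reported low-acceleration index, not the study's data):

```python
# Modulation depth of a noisy nerve-activity envelope at the motion
# frequency, via projection onto the stimulus sinusoid (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
fs, f = 20.0, 0.2                       # sample rate, motion frequency (Hz)
t = np.arange(0, 300, 1 / fs)           # five minutes of "recording"
msna = (1.0 + 0.15 * np.sin(2 * np.pi * f * t)
        + 0.8 * rng.standard_normal(t.size))   # mean 1.0, 15% modulation

amp = 2 * np.abs(np.mean((msna - msna.mean()) * np.exp(-1j * 2 * np.pi * f * t)))
print(f"modulation index ~ {100 * amp / msna.mean():.1f} %")   # ~15 %
```

Averaging over many cycles is what lets a modulation far below the single-cycle noise floor, like the subthreshold 1.25-mG effect, emerge reliably.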

10.
Successful navigation through an environment requires precise monitoring of direction and distance traveled ("path integration" or "dead reckoning"). Previous studies in blindfolded human subjects showed that velocity information arising from vestibular and somatosensory signals can be used to reproduce passive linear displacements. In these studies, visual information was excluded as a sensory cue. Yet, in our everyday life, visual information is very important and usually dominates vestibular and somatosensory cues. In the present study, we investigated whether visual signals can be used to discriminate and reproduce simulated linear displacements. In a first set of experiments, subjects viewed two sequences of linear motion and were asked in a 2AFC task to judge whether the travel distance in the second sequence was larger or shorter than in the first. Displacements in either movement sequence could be forward (f) or backward (b). Subjects were very accurate in discriminating travel distances. Average error was less than 3% and did not depend on whether displacements were in the same (ff, bb) or opposite directions (fb, bf). In a second set of experiments, subjects had to reproduce a previously seen forward motion (passive condition), either in light or in darkness, i.e., with or without visual feedback. Passive displacements had different velocity profiles (constant, sinusoidal, complex) and speeds and were performed across a textured ground plane, a 2-D plane of dots or through a 3-D cloud of dots. With visual feedback, subjects reproduced distances accurately. Accuracy did not depend on the kind of velocity profile in the passive condition. Subjects tended to reproduce distance by replicating the velocity profile of the passive displacement. Finally, in the condition without visual feedback, subjects reproduced the shape of the velocity profile, but used much higher speeds, resulting in a substantial overshoot of travel distance. Our results show that visual, vestibular, and somatosensory signals are used for path integration, following a common strategy: the use of the velocity profile during self-motion.
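Path integration over a velocity profile, and the overshoot that follows from replicating the profile's shape at a higher speed, reduces to one statement: distance is the time integral of velocity. A sketch (the profile and the 1.5x speed factor are illustrative assumptions):

```python
# Traveled distance as the time integral of a velocity profile; replicating
# the profile's shape at higher speed overshoots the distance (illustrative).
import numpy as np

dt = 0.01
t = np.arange(0, 8, dt)
passive = 1.2 * np.sin(np.pi * t / t[-1]) ** 2   # smooth forward profile (m/s)
reproduced = 1.5 * passive                       # same shape, assumed 1.5x speed

def distance(v):
    return float(np.sum(v) * dt)                 # rectangular-rule integral

print(f"passive distance    : {distance(passive):.2f} m")
print(f"reproduced distance : {distance(reproduced):.2f} m (overshoot)")
```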

11.
Accurate information about gaze direction is required to direct the hand towards visual objects in the environment. In the present experiments, we tested whether retinal inputs affect the accuracy with which healthy subjects indicate their gaze direction with the unseen index finger after voluntary saccadic eye movements. In experiment 1, subjects produced a series of back and forth saccades (about eight) of self-selected magnitudes before positioning the eyes in a self-chosen direction to the right. The saccades were produced while facing one of four possible visual scenes: (1) complete darkness, (2) a scene composed of a single light-emitting diode (LED) located at 18 degrees to the right, (3) a visually enriched scene made up of three LEDs located at 0 degrees, 18 degrees and 36 degrees to the right, or (4) a normally illuminated scene where the lights in the experimental room were turned on. Subjects were then asked to indicate their gaze direction with their unseen index finger. In the conditions where the visual scenes were composed of LEDs, subjects were instructed to foveate or not foveate one of the LEDs with their last saccade. It was therefore possible to compare subjects' accuracy when pointing in the direction of their gaze in conditions with and without foveal stimulation. The results showed that the accuracy of the pointing movements decreased when subjects produced their saccades in a dark environment or in the presence of a single LED compared to when the saccades were generated in richer visual environments. Visual stimulation of the fovea did not increase subjects' accuracy when pointing in the direction of their gaze compared to conditions where there was only stimulation of the peripheral retina. Experiment 2 tested how the retinal signals could contribute to the coding of eye position after saccadic eye movements. More specifically, we tested whether the shift in the retinal image of the environment during the saccades provided information about the reached position of the eyes. Subjects produced their series of saccades while facing a visual environment made up of three LEDs. In some trials, the whole visual scene was displaced either 4.5 degrees to the left or 3 degrees to the right during the primary saccade. These displacements created mismatches between the shift of the retinal image of the environment and the extent of gaze deviation. The displacements of the visual scene were not perceived by the subjects because they occurred near the peak velocity of the saccade (saccadic suppression phenomenon). Pointing accuracy was not affected by the unperceived shifts of the visual scene. The results of these experiments suggest that the arm motor system receives more precise information about gaze direction when there is retinal stimulation than when there is none. They also suggest that the most relevant factor in defining gaze direction is not the retinal locus of the visual stimulation (that is peripheral or foveal) but rather the amount of visual information. Finally, the results suggest an enhanced egocentric encoding of gaze direction by the retinal inputs and do not support a retinotopic model for encoding gaze direction.

12.
The role of visual orientation cues for human control of upright stance is still not well understood. We therefore investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05–0.4 Hz; Amax = 0.25°–4°; vmax = 0.08–10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1° and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1) and that a vestibular mechanism is doing so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.
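The saturation-by-threshold idea at the heart of the model can be illustrated with a far simpler static sketch than the paper's dynamic multisensory model: a visual channel pulls the body toward the scene, while a second channel corrects body excursion only beyond a dead-zone threshold. All gains and the 0.1-deg threshold below are assumptions for illustration:

```python
# Static dead-zone sketch of threshold-induced saturation of visually
# evoked sway (a drastically simplified illustration, not the paper's
# posture-control model; gains and threshold are assumed).
import numpy as np

def deadzone(x, th):
    return np.sign(x) * np.maximum(np.abs(x) - th, 0.0)

def sway_equilibrium(scene_deg, k_vis=1.0, k_corr=20.0, th=0.1):
    b = 0.0
    for _ in range(200):     # fixed-point iteration; the map is a contraction
        b = (k_vis * scene_deg + k_corr * (b - deadzone(b, th))) / (k_vis + k_corr)
    return b

for a in (0.05, 0.1, 0.25, 0.5, 1.0, 2.0):
    print(f"scene {a:4.2f} deg -> sway {sway_equilibrium(a):.3f} deg")
```

Sway tracks the stimulus up to the threshold (~0.1 deg) and then levels off, echoing the abrupt saturation seen in normal subjects on the stationary platform; raising the threshold tenfold reproduces the BSR-platform pattern.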

13.
Integration of cues from multiple sensory channels improves our ability to sense and respond to stimuli. Cues arising from a single event may arrive at the brain asynchronously, requiring them to be "bound" in time. The perceptual asynchrony between vestibular and auditory stimuli has been reported to be several times greater than that of other stimulus pairs. However, these data were collected using electrically evoked vestibular stimuli, which may not provide similar results to those obtained using actual head rotations. Here, we tested whether auditory stimuli and vestibular stimuli consisting of physiologically relevant mechanical rotations are perceived with asynchronies consistent with other sensory systems. We rotated 14 normal subjects about the earth-vertical axis over a raised-cosine trajectory (0.5 Hz, peak velocity 10 deg/s) while isolated from external noise and light. This trajectory minimized any input from extravestibular sources such as proprioception. An 800-Hz, 10-ms auditory tone was presented at stimulus onset asynchronies ranging from 200 ms before to 700 ms after the onset of motion. After each trial, subjects reported whether the stimuli were "simultaneous" or "not simultaneous." The experiment was repeated, with subjects reporting whether the tone or rotation came first. After correction for the time the rotational stimulus took to reach vestibular perceptual threshold, asynchronies spanned from −41 ms (auditory stimulus leading vestibular) to 91 ms (vestibular stimulus leading auditory). These values are significantly lower than those previously reported for stimulus pairs involving electrically evoked vestibular stimuli and are more consistent with timing relationships between pairs of non-vestibular stimuli.
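Simultaneity data of this kind are conventionally summarized by fitting the proportion of "simultaneous" reports across onset asynchronies and reading off the point of subjective simultaneity (PSS) and the window width. A sketch with invented response proportions (not the study's data):

```python
# Gaussian fit of "simultaneous" report rates vs. stimulus onset asynchrony
# (synthetic data; positive SOA = tone after motion onset).
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-200, -100, -50, 0, 50, 100, 200, 400, 700], float)  # ms
p_sim = np.array([0.10, 0.45, 0.80, 0.95, 0.90, 0.70, 0.30, 0.05, 0.02])

def gauss(x, pss, width, peak):
    return peak * np.exp(-0.5 * ((x - pss) / width) ** 2)

(pss, width, peak), _ = curve_fit(gauss, soa, p_sim, p0=(0.0, 100.0, 1.0))
print(f"PSS = {pss:.0f} ms, window SD = {width:.0f} ms")
```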

14.
To investigate the neural mechanisms that humans use to process the ambiguous force measured by the otolith organs, we measured vestibuloocular reflexes (VORs) and perceptions of tilt and translation. One primary goal was to determine if the same, or different, mechanisms contribute to vestibular perception and action. We used motion paradigms that provided identical sinusoidal inter-aural otolith cues across a broad frequency range. We accomplished this by sinusoidally tilting (20°, 0.005–0.7 Hz) subjects in roll about an earth-horizontal, head-centered, rotation axis ("Tilt") or sinusoidally accelerating (3.3 m/s², 0.005–0.7 Hz) subjects along their inter-aural axis ("Translation"). While identical inter-aural otolith cues were provided by these motion paradigms, the canal cues were substantially different because roll rotations were present during Tilt but not during Translation. We found that perception was dependent on canal cues because the reported perceptions of both roll tilt and inter-aural translation were substantially different during Translation and Tilt. These findings match internal model predictions that rotational cues from the canals influence the neural processing of otolith cues. We also found horizontal translational VORs at frequencies >0.2 Hz during both Translation and Tilt. These responses were dependent on otolith cues and match simple filtering predictions that translational VORs include contributions via simple high-pass filtering of otolith cues. More generally, these findings demonstrate that internal models govern human vestibular "perception" across a broad range of frequencies and that simple high-pass filters contribute to human horizontal translational VORs ("action") at frequencies above approximately 0.2 Hz.
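The "simple high-pass filtering" account of the translational VOR is directly expressible as a first-order filter on the otolith signal; a sketch with an assumed ~0.2 Hz corner (chosen to match the reported >0.2 Hz responses) and idealized signals:

```python
# First-order high-pass filtering of an ambiguous otolith signal: tilt-like
# low frequencies are attenuated, translation-like high frequencies pass
# (idealized sketch; corner frequency assumed at 0.2 Hz).
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0
t = np.arange(0, 40, 1 / fs)
# Same peak inter-aural force at a tilt-like and a translation-like frequency:
otolith_tilt = np.sin(2 * np.pi * 0.02 * t)
otolith_trans = np.sin(2 * np.pi * 0.7 * t)

b, a = butter(1, 0.2 / (fs / 2), btype="highpass")
for name, sig in [("0.02 Hz (tilt-like)", otolith_tilt),
                  ("0.70 Hz (translation-like)", otolith_trans)]:
    out = lfilter(b, a, sig)
    print(f"{name}: VOR drive ~ {np.abs(out[len(out)//2:]).max():.2f}")
```

The filter alone yields a negligible drive at 0.02 Hz and a near-unity drive at 0.7 Hz, matching a translational VOR that emerges only above roughly 0.2 Hz.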

15.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole-body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3° constant errors, 2.9° variable errors), but not rod-pointing judgments (3.5° constant errors, 5.9° variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected. Concordant visual-vestibular stimulation did not facilitate either response. Stimulation of the vestibular system in the absence of vision produced imprecise rod-pointing responses, while variable and constant pointing errors in the active walking condition were comparable to those obtained in the visual condition. During active self-motion, subjects made large head-pointing undershoots when visual information was not available. These results suggest that while vision provides sufficient information to identify the heading direction, it cannot, in isolation, be used to guide the motor response required to point toward or move in the direction of self-motion.
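The two error measures used here are the mean signed error (constant error, a bias) and the standard deviation of the signed errors (variable error, a precision). A sketch on synthetic pointing errors drawn to echo the visual head-pointing figures:

```python
# Constant error (mean signed error) and variable error (SD of signed
# errors) from a set of pointing responses (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
errors_deg = rng.normal(loc=0.3, scale=2.9, size=60)   # signed errors (deg)

print(f"constant error = {errors_deg.mean():+.1f} deg")
print(f"variable error = {errors_deg.std(ddof=1):.1f} deg")
```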

16.
The present experiment was designed to assess the effect of actively (deliberately) maintaining a small forward (FL) or backward (BL) body lean (about 2° ankle flexion), relative to the spontaneous direction of balance (or neutral posture, N), on postural balance. We questioned whether BL and FL stances, which impose volitional proprioceptive control of the body-on-support angle, could efficiently reduce mediolateral displacements of the centre of pressure (CoP) induced by the visual motion of a room and by darkness. Subjects (n = 15) were asked to stand upright quietly, feet together, while confronted with a large visual scene rolling to 10° on either side in peripheral vision (with surrounding vertical visual references in central vision) at 0.05 Hz. CoP displacements were recorded using a force platform. Analysis of the medio-lateral CoP root-mean-square showed that the effect of the moving room depends on the subject's postural stability performance in the eyes-open N stance condition. Two significant postural behaviours emerged. (1) The most stable subjects (G1) were not affected by the conditions of altered vision, but swayed more in BL stance than in N stance. (2) The unstable subjects (G2) exhibited (i) larger CoP displacements in altered visual conditions and a greater coupling of the CoP with the motion of the visual scene, (ii) enhanced visual dependency with postural leaning, and (iii) decreased CoP displacements when leaning forward with eyes open and a motionless scene. Interestingly, the visual quotient correlated positively with the proprioceptive quotient, indicating that the more heavily subjects relied on the visual frame of reference (FOR), the more they were influenced by body leaning. This result hence suggests a lesser ability to use body-ground proprioceptive cues efficiently. On the whole, the present findings indicate that body leaning could provide a useful means of assessing a subject's ability to use body-ground proprioceptive cues, not only to improve postural stability with the eyes open (especially during forward leaning), but also to disclose subjects' visual dependency and their associated difficulties in shifting from a visual to a proprioceptive-based FOR.

17.
The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual–inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of the comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that the data of three participants exhibited an increase in bimodal precision, consistent with the optimal integration model, while the data of the other participants did not conform to maximum-likelihood integration schemes. We suggest that either the sensory cues were not perceived as congruent, that integration might be achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
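The maximum-likelihood benchmark the study tested makes a concrete quantitative prediction: since JNDs are proportional to estimator standard deviations, the bimodal JND should follow the optimal-variance rule and fall below the better unimodal JND. A sketch with hypothetical JND values:

```python
# Optimal-integration prediction for the bimodal JND (standard formula;
# the unimodal JND values are hypothetical).
import numpy as np

def predicted_bimodal_jnd(jnd_vis, jnd_inertial):
    return np.sqrt(jnd_vis**2 * jnd_inertial**2 / (jnd_vis**2 + jnd_inertial**2))

jv, ji = 2.0, 3.0   # hypothetical unimodal JNDs for peak yaw velocity (deg/s)
print(f"predicted bimodal JND = {predicted_bimodal_jnd(jv, ji):.2f} deg/s")  # 1.66
```

Data from only three of the eight participants matched this predicted reduction.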

18.
Reaching toward a visual target involves the transformation of visual information into appropriate motor commands. Complex movements often occur either while we are moving or when objects in the world move around us, thus changing the spatial relationship between our hand and the space in which we plan to reach. This study investigated whether rotation of a wide field-of-view immersive scene produced by a virtual environment affected online visuomotor control during a double-step reaching task. A total of 20 seated healthy subjects reached for a visual target that remained stationary in space or unpredictably shifted to a second position (either to the right or left of its initial position) with different inter-stimulus intervals. Eleven subjects completed two experiments which were similar except for the duration of the target's appearance. The final target was either visible throughout the entire trial or only for a period of 200 ms. Movements were performed under two visual field conditions: the virtual scene was matched to the subject's head motion or rolled about the line of sight counterclockwise at 130°/s. Nine additional subjects completed a third experiment in which the direction of the rolling scene was manipulated (i.e., clockwise and counterclockwise). Our results showed that while all subjects were able to modify their hand trajectory in response to the target shift with both visual scenes, some of the double-step movements contained a pause prior to modifying trajectory direction. Furthermore, our findings indicated that roll motion of the scene affected both the timing and the kinematic adjustments of the reach, that is, both its planning and its execution. Changes in the proportion of trajectory types and the significantly longer pauses during the reach in the presence of roll motion suggest that background roll motion mainly interfered with the ability to update the visuomotor response to the target displacement. Furthermore, the reaching movement was affected differentially by the direction of roll motion. Subjects demonstrated a stronger effect of visual motion on movements taking place in the direction of visual roll (e.g., leftward movements during counterclockwise roll). Further investigation of the hand path revealed significant changes during roll motion in both the area and the shape of the 95% tolerance ellipses constructed from the hand position following the main movement's termination. These changes corresponded with a hand drift suggesting that subjects relied more on proprioceptive information to estimate the arm's position in space during roll motion of the visual field. We conclude that both the spatial and temporal kinematics of the reach movement were affected by the motion of the visual field, suggesting interference with the ability to simultaneously process two consecutive stimuli.
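The 95% tolerance ellipses used to characterize endpoint scatter follow from the covariance of the 2-D hand positions: the eigenvalues give the axes, and a chi-square scale gives 95% coverage for a bivariate Gaussian. A sketch on synthetic endpoints (the covariance values are invented):

```python
# Area and shape (axis ratio) of a 95% tolerance ellipse from 2-D hand
# endpoints, via the endpoint covariance (synthetic data).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
endpoints = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], size=200)

cov = np.cov(endpoints.T)
eigvals = np.linalg.eigvalsh(cov)               # ascending eigenvalues
k = chi2.ppf(0.95, df=2)                        # 95% coverage scale (2-D Gaussian)
area = np.pi * k * np.sqrt(eigvals.prod())      # ellipse area
ratio = np.sqrt(eigvals.max() / eigvals.min())  # major/minor axis ratio
print(f"95% ellipse: area = {area:.1f}, axis ratio = {ratio:.2f}")
```

Growth in the ellipse area or a drift of its center across conditions is the kind of change the authors report under roll motion of the scene.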

19.
We investigated the changes in human postural control of upright stance that occur when vestibular cues (VEST) are absent and visual and somatosensory orientation cues (VIS, SOM) are removed. Postural responses to sinusoidal tilts of a motion platform in the sagittal plane (±2°; f = 0.05, 0.1, 0.2 and 0.4 Hz) were studied in normal subjects (Ns) and patients with bilateral vestibular loss (Ps). We found that absence of VEST (Ps, visual reference) and removal of VIS (Ns, no visual reference) had little effect on stabilization of upright body posture in space. In the absence of both VEST and VIS (Ps, no visual reference), somatosensory graviception still provided some information on body orientation in space at 0.05 and 0.1 Hz. However, at the higher frequencies Ps qualitatively changed their behavior; they then tended to actively align their bodies with respect to the motion platform. The findings confirm predictions of a novel postural control model.

20.
Seven healthy individuals were recruited to examine the interaction between visual and vestibular information on locomotor trajectory during walking. Subjects wore goggles that contained either a clear lens or a prism that displaced the visual scene 20° to the left or right. A 5-s bipolar, binaural galvanic vestibular stimulus (GVS) was also applied at three times each subject's individual threshold (range 1.2–1.5 mA). Subjects stood with their eyes closed and walked forward at a casual pace. At first heel contact, subjects opened their eyes and triggered the galvanic stimulus via foot switches positioned underneath a board. Reflective markers were placed bilaterally on the shoulders, and the walking trajectory was captured using a camera mounted on the ceiling above the testing area. Twelve conditions were randomly assigned that combined four visual conditions (eyes closed, eyes open, left prism, right prism) and three GVS conditions (no GVS, GVS anode left, GVS anode right). As subjects walked forward, there was a tendency to deviate in the direction of the prisms. During GVS trials, subjects deviated towards the anode while walking, with the greatest deviations occurring with the eyes closed. However, when GVS was presented with the prisms, subjects always deviated to the side of the prisms, regardless of the position of the anode. Furthermore, the combined visual-vestibular conditions produced a larger lateral deviation than that observed in the prisms-only trials. This suggests that the nervous system examines the available sensory inputs and takes into account the most reliable and relevant sensory input.
