Similar Articles
20 similar articles found (search time: 95 ms)
1.
When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.
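A minimal sketch of the kind of linear weighting model described above, in Python; the weight values are illustrative assumptions, not the paper's fitted parameters:

    def combined_distance(d_visual, d_body, w_body):
        """Weighted linear sum of visually and body-specified distances."""
        return w_body * d_body + (1.0 - w_body) * d_visual

    # Hypothetical weights: body-based cues weighted more while walking,
    # visual cues weighted more during passive (wheelchair) transport.
    for condition, w_body in (("walking", 0.7), ("passive", 0.3)):
        d_body, d_visual = 10.0, 14.0  # incongruent trial: visual gain 1.4x
        print(condition, combined_distance(d_visual, d_body, w_body))

With these assumed weights the walking estimate (11.2) stays closer to the body-specified distance, while the passive estimate (12.8) is drawn toward the visual distance, mirroring the reported pattern.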

2.
One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.

3.
Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
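The "weighting each cue in proportion to its reliability" strategy is standardly formalized as minimum-variance (maximum-likelihood) integration of independent Gaussian cues; in generic notation (not the paper's own):

$$\hat{s} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}}, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{vest}}^2}, \qquad \sigma_{\mathrm{comb}}^2 = \frac{\sigma_{\mathrm{vis}}^2\,\sigma_{\mathrm{vest}}^2}{\sigma_{\mathrm{vis}}^2 + \sigma_{\mathrm{vest}}^2}$$

Degrading one cue (e.g., lowering visual motion coherence) raises its variance and shifts weight toward the other estimate, which is the behavioral signature the monkeys showed.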

4.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (an unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.

5.
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues. (Research supported in part by NASA Grants NSG 2032 and 2230. GLZ supported by an NIH National Research Service Award. GLZ currently at Bolt Beranek and Newman, Inc., Cambridge, MA, USA.)
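A parallel-channel model whose visual and vestibular pathways sum "in a complementary manner" amounts to a complementary filter; a minimal discrete-time sketch in Python (the crossover time constant tau is an assumed, illustrative value):

    def fuse_self_velocity(visual, vestibular, dt=0.01, tau=1.0):
        """Low-pass the visual cue, high-pass the vestibular cue, and sum;
        the two filters are complementary (their responses add to unity)."""
        alpha = dt / (tau + dt)  # first-order low-pass coefficient
        lp_vis = lp_vest = 0.0
        fused = []
        for v_vis, v_vest in zip(visual, vestibular):
            lp_vis += alpha * (v_vis - lp_vis)         # visual dominates low frequencies
            lp_vest += alpha * (v_vest - lp_vest)
            fused.append(lp_vis + (v_vest - lp_vest))  # vestibular dominates high frequencies
        return fused

This reproduces the qualitative finding that vestibular cues dominate the sensation at higher stimulus frequencies.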

6.
The interaction of different orientation senses contributing to posture control is not well understood. We therefore performed experiments in which we measured the postural responses of normal subjects and vestibular loss patients during perturbation of their stance. Subjects stood on a motion platform with their eyes closed and auditory cues masked. The perturbing stimuli consisted of either platform tilts or external torque produced by force-controlled pull of the subjects' body on a stationary platform. Furthermore, we presented trials in which these two stimuli were applied when the platform was body-sway referenced (i.e., coupled 1:1 to body position, by which ankle joint proprioceptive feedback is essentially removed). We analyzed subjects' postural responses, i.e., the excursions of their center of mass (COM) and center of pressure (COP), using a systems analysis approach. We found gain and phase of the responses to vary as a function of stimulus frequency and in relation to the absence versus presence of vestibular and proprioceptive cues. In addition, gain depended on stimulus amplitude, reflecting a non-linearity in the control. The experimental results were compared to simulation results obtained from an 'inverted pendulum' model of posture control. In the model, sensor fusion mechanisms yield internal estimates of the external stimuli, i.e., of the external torque (pull), the platform tilt and gravity. These estimates are derived from three sensor systems: ankle proprioceptors, vestibular sensors and plantar pressure sensors (somatosensory graviceptors). They are fed as global set point signals into a local control loop of the ankle joints, which is based on proprioceptive negative feedback. This local loop stabilizes the body-on-foot support, while the set point signals upgrade the loop into a body-in-space control. Amplitude non-linearity was implemented in the model in the form of central threshold mechanisms. In model simulations that combined sensor fusion and thresholds, an automatic context-specific sensory re-weighting across stimulus conditions occurred. Model parameters were identified using an optimization procedure. Results suggested that in the sway-referenced condition normal subjects altered their postural strategy by strongly weighting feedback from plantar somatosensory force sensors. Taking this strategy change into account, the model's simulation results well paralleled all experimental results across all conditions tested.
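A stripped-down version of the inverted-pendulum control loop the abstract refers to, as a Python sketch; the mass, height, and gain values are illustrative assumptions, not the parameters identified in the study:

    # Single-link stance model: gravity destabilizes, PD feedback of the
    # body-space angle (relative to a fused sensory set point) stabilizes.
    m, h, g = 70.0, 1.0, 9.81            # body mass (kg), COM height (m)
    I = m * h * h                        # point-mass moment of inertia
    Kp, Kd = 1200.0, 300.0               # feedback gains; Kp must exceed m*g*h
    dt, theta, omega = 0.001, 0.02, 0.0  # start 0.02 rad away from upright
    set_point = 0.0                      # body-in-space reference angle
    for _ in range(5000):                # simulate 5 s
        torque = -Kp * (theta - set_point) - Kd * omega  # corrective ankle torque
        domega = (m * g * h * theta + torque) / I        # toppling plus corrective terms
        omega += domega * dt
        theta += omega * dt
    print(round(theta, 4))               # decays toward the set point

In the full model described above, threshold non-linearities and sensor fusion would enter through how set_point and the feedback signals are computed from the individual sensory channels.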

7.
We examined the influence of dynamic visual scenes on the motion perception of subjects undergoing sinusoidal (0.45 Hz) roll swing motion at different radii. The visual scenes were presented on a flatscreen monitor with a monocular 40° field of view. There were three categories of trials: (1) trials in the dark; (2) trials where the visual scene matched the actual motion; and (3) trials where the visual scene showed swing motion at a different radius. Subjects verbally reported perceptions of head tilt and translation. When the visual and vestibular cues differed, subjects reported perceptions that were geometrically consistent with a radius between the radii of the visual scene and the actual motion. Even when sensations did not match either the visual or vestibular stimuli, reported motion perceptions were consistent with swing motions combining elements of each. Subjects were generally unable to detect cue conflicts or judge their own visual–vestibular biases, which suggests that the visual and vestibular self-motion cues are not independently accessible.

8.
The extent to which attending to one stimulus while ignoring another influences the integration of visual and inertial (vestibular, somatosensory, proprioceptive) stimuli is currently unknown. It is also unclear how cue integration is affected by an awareness of cue conflicts. We investigated these questions using a turn-reproduction paradigm, where participants were seated on a motion platform equipped with a projection screen and were asked to actively return a combined visual and inertial whole-body rotation around an earth-vertical axis. By introducing cue conflicts during the active return and asking the participants whether they had noticed a cue conflict, we measured the influence of each cue on the response. We found that the task instruction had a significant effect on cue weighting in the response, with a higher weight assigned to the attended modality, only when participants noticed the cue conflict. This suggests that participants used task-induced attention to reduce the influence of stimuli that conflict with the task instructions.

9.
Theoretically, visual gain has been identified as a control variable in models of isometric force. However, visual gain is typically confounded with visual angle and distance, and the relative contribution of visual gain, distance, and angle to the control of force remains unclear. This study manipulated visual gain, distance, and angle in three experiments to examine the visual information properties used to regulate the control of a constant level of isometric force. Young adults performed a flexion motion of the index finger of the dominant hand in 20-s trials under a range of parameter values of the three visual variables. The findings demonstrate that the amount and structure of the force fluctuations were organized around the variable of visual angle, rather than gain or distance. Furthermore, the amount and structure of the force fluctuations changed considerably up to 1°, with little change above a 1° visual angle. Visual angle is the critical informational variable for the visuomotor system during the control of isometric force.
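The geometric reason the three variables are confounded (standard geometry, not a result of the paper): a feedback excursion of on-screen size S, which scales with visual gain, viewed from distance D subtends the visual angle

$$\theta = 2\arctan\!\left(\frac{S}{2D}\right)$$

so changing gain or distance alone also changes the visual angle; only manipulating all three independently, as done here, can isolate the variable the visuomotor system actually uses.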

10.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities, as well as within a modality, in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible unknown target position whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position halfway through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information. We tested this in trials in which visual and haptic information was presented with temporal disparity. We found that for our sensorimotor task temporal disparity between the visual and haptic cue had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
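For Gaussian cue likelihoods and a Gaussian prior over target height, the maximum a posteriori estimate such a model computes reduces to a precision-weighted average (a standard formulation, not the authors' exact notation; $\mu_0$ and $\sigma_0^2$ denote the assumed prior over target positions):

$$\hat{s}_{\mathrm{MAP}} = \frac{\hat{s}_{\mathrm{vis}}/\sigma_{\mathrm{vis}}^2 + \hat{s}_{\mathrm{hap}}/\sigma_{\mathrm{hap}}^2 + \mu_0/\sigma_0^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{hap}}^2 + 1/\sigma_0^2}$$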

11.
The role of visual orientation cues for human control of upright stance is still not well understood. We therefore investigated stance control during motion of a visual scene as the stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05–0.4 Hz; A_max = 0.25°–4°; v_max = 0.08–10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position (the 'body sway referenced', BSR, platform condition), by which proprioceptive feedback of the ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency, in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1°, and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1), and that a vestibular mechanism does so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds on proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, while still showing the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.

12.
Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined, such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.

13.
Previous studies on the ontogeny of spatial learning report that rats younger than 19–21 days of age are incapable of learning the location of a platform relative to distal cues in the Morris water task. Here, we manipulated the spatial relationship of a cued platform to the pool and the distal visual room cues to investigate whether distal cues can control navigation among 16‐ to 24‐day‐old rats. Rats were trained to navigate to a cued platform in a rich distal cue environment. During critical test trials, the pool was shifted to a different, overlapping position and the cued platform was placed either in the same absolute location in the room or the same relative location in the pool as during training. Rats aged 17 days and older exhibited a disruption in performance when the cued platform was in the absolute location but not the relative location, indicating that rats had learned the direction of the cued platform within the distal cue environment. These observations indicate that (1) information acquired from distal room cues influences navigation as early as 17 days of age, (2) this distal cue information is preferentially used to guide navigation in a particular direction rather than to a precise place in the room, and (3) the directional nature of the influence of distal cues on navigation is invariant across development. © 2010 Wiley Periodicals, Inc. Dev Psychobiol 53: 1–12, 2011.

14.
Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded the activity of neurons in the dorsal medial superior temporal (MSTd) area during a task in which trained monkeys combined visual and vestibular cues near-optimally to discriminate heading. During bimodal stimulation, MSTd neurons combined visual and vestibular inputs linearly with subadditive weights. Neurons with congruent heading preferences for visual and vestibular stimuli showed improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences showed diminished sensitivity under cue combination. Responses of congruent cells were more strongly correlated with monkeys' perceptual decisions than were responses of opposite cells, suggesting that the monkey monitored the activity of congruent cells to a greater extent during cue integration. These findings show that perceptual cue integration occurs in nonhuman primates and identify a population of neurons that may form its neural basis.

15.
People tend to make straight and smooth hand movements when reaching for an object. These trajectory features are resistant to perturbation, and both proprioceptive as well as visual feedback may guide the adaptive updating of motor commands enforcing this regularity. How is information from the two senses combined to generate a coherent internal representation of how the arm moves? Here we show that eliminating visual feedback of hand-path deviations from the straight-line reach (constraining visual feedback of motion within a virtual, "visual channel") prevents compensation of initial direction errors induced by perturbations. Because adaptive reduction in direction errors occurred with proprioception alone, proprioceptive and visual information are not combined in this reaching task using a fixed, linear weighting scheme as reported for static tasks not requiring arm motion. A computer model can explain these findings, assuming that proprioceptive estimates of initial limb posture are used to select motor commands for a desired reach and visual feedback of hand-path errors brings proprioceptive estimates into registration with a visuocentric representation of limb position relative to its target. Simulations demonstrate that initial configuration estimation errors lead to movement direction errors as observed experimentally. Registration improves movement accuracy when veridical visual feedback is provided but is not invoked when hand-path errors are eliminated. However, the visual channel did not exclude adjustment of terminal movement features maximizing hand-path smoothness. Thus visual and proprioceptive feedback may be combined in fundamentally different ways during trajectory control and final position regulation of reaching movements.

16.
Locomotion control uses proprioceptive, visual, and vestibular signals. The vestibular contribution has been analyzed previously with galvanic vestibular stimulation (GVS), which constitutes mainly a virtual head-fixed rotation in the roll plane that causes polarity-specific deviations of gait. In this study we examined whether a visual disturbance has similar effects on gait when it acts in the same direction as GVS, i.e., when roll vection is induced by head-fixed visual roll motion stimulation. Random dot patterns were constantly rotated in roll at ±15°/s on a computer-driven binocular head-mounted display that was worn by eight healthy participants. Their gait trajectories were tracked while they walked a distance of 6 m. A stimulation effect was observed only for the first three to four steps, but not for the whole walking distance. These results are similar to the results of previous GVS studies, suggesting that in terms of the direction of action visual motion stimulations in the roll plane are similar to GVS. Both kinds of stimulation cause only initial balance responses in the roll plane but do not contribute to the steering of gait in the yaw plane.

17.
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.

18.
The roles of visual exteroception (information regarding environmental characteristics) and exproprioception (the relation of body segments to the environment) during gait adaptation are not fully understood. The purpose of this study was to determine how visual exteroception regarding obstacle characteristics provided during obstacle crossing modified foot elevation and placement with and without lower limb-obstacle visual exproprioception (manipulated with goggles). Visual exteroceptive information was provided by an obstacle cue, a second obstacle identical to the one being stepped over, which remained visible during crossing. Ten subjects walked over obstacles under four visual conditions: full vision with no obstacle height cue, full vision with an obstacle height cue, goggles with no obstacle height cue, and goggles with an obstacle height cue. Obstacle heights were 2, 10, 20 and 30 cm. The presence of goggles increased horizontal distance (the distance between foot and obstacle at foot placement), toe clearance and toe clearance variability. The presence of the obstacle height cue did not alter horizontal distance, toe clearance or toe clearance variability. These observations strengthen the argument that it is visual exproprioceptive information, not visual exteroceptive information, that is used on-line to fine-tune the lower limb trajectory during obstacle avoidance.

19.
One possible source of information regarding the distance of a fixated target is provided by the height of the object within the visual scene. It is accepted that this cue can provide ordinal information, but generally it has been assumed that the nervous system cannot extract "absolute" information from height-in-scene. In order to use height-in-scene, the nervous system would need to be sensitive to ocular position with respect to the head and to head orientation with respect to the shoulders (i.e. vertical gaze angle or VGA). We used a perturbation technique to establish whether the nervous system uses vertical gaze angle as a distance cue. Vertical gaze angle was perturbed using ophthalmic prisms with the base oriented either up or down. In experiment 1, participants were required to carry out an open-loop pointing task whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. In experiment 2, the participants reached to grasp an object under closed-loop viewing conditions whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. Experiments 1 and 2 provided clear evidence that the human nervous system uses vertical gaze angle as a distance cue. It was found that the weighting attached to VGA decreased with increasing target distance. The weighting attached to VGA was also affected by the discrepancy between the height of the target, as specified by all other distance cues, and the height indicated by the initial estimate of the position of the supporting surface. We conclude by considering the use of height-in-scene information in the perception of surface slant and highlight some of the complexities that must be involved in the computation of environmental layout.
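The geometry underlying this cue (standard, not derived in the paper): for a target on the ground plane viewed from eye height h, a vertical gaze angle γ below the horizontal specifies the absolute distance

$$D = \frac{h}{\tan\gamma}$$

which is why exploiting height-in-scene requires signals for both eye-in-head position and head-on-shoulders orientation, and why prism-induced shifts in γ bias pointing and grasping.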

20.
Potential roles of force cues in human stance control
Human stance is inherently unstable. A small deviation from upright body orientation is enough to yield a gravitational component in the ankle joint torque, which tends to accelerate the body further away from upright (‘gravitational torque’; its magnitude is related to the body-space lean angle). Therefore, to maintain a given body lean position, a corresponding compensatory torque must be generated. It is well known that subjects use kinematic sensory information on body-space lean from the vestibular system for this purpose. Less is known about kinetic cues from force/torque receptors. Previous work indicated that they are involved in compensating external contact forces, such as a pull or push acting on the body. In this study, we hypothesized that they play, in addition, a role when the vestibular estimate of the gravitational torque becomes erroneous. Reasons may be sudden changes in body mass, for instance by a load, or an impairment of the vestibular system. To test this hypothesis, we mimicked load effects on the gravitational torque in normal subjects and in patients with chronic bilateral vestibular loss (VL) with eyes closed. We added/subtracted extra torque to the gravitational torque by applying an external contact force (via cable winches and a body harness). The extra torque was referenced to body-space lean, using different proportionality factors. We investigated how it affected body-space lean responses that we evoked using sinusoidal tilts of the support surface (motion platform) with different amplitudes and frequencies (normals ±1°, ±2°, and ±4° at 0.05, 0.1, 0.2, and 0.4 Hz; patients ±1° and ±2° at 0.05 and 0.1 Hz). We found that added/subtracted extra torque scales the lean response in a systematic way, leading to an increase/decrease in lean excursion. Expressing the responses in terms of gain and phase curves, we compared the experimental findings to predictions obtained from a recently published sensory feedback model. For the trials in which the extra torque tended to endanger stance control, predictions in normals were better when the model included force cues than without these cues. This supports our notion that force cues provide an automatic ‘gravitational load compensation’ upon changes in body mass in normals. The findings in the patients support our notion that the presumed force cue mechanism furthermore provides compensation for vestibular loss. Patients showed a body-space stabilization that cannot be explained by ankle angle proprioception, but must involve graviception, most likely by force cues. Our findings suggest that force cues contribute considerably to the redundancy and robustness of the human stance control system.
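The destabilizing term the abstract describes can be written explicitly (standard single-link biomechanics, not the paper's full model): for body mass m, centre of mass height h, and lean angle θ,

$$T_g = m\,g\,h\,\sin\theta \approx m\,g\,h\,\theta \quad (\text{small } \theta)$$

so holding a lean of θ requires a compensatory ankle torque of at least mghθ; the externally added extra torque in the experiment rescales this term, mimicking a change in body mass.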
