Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.
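The weighted-linear-sum combination rule invoked above is easy to state concretely. The sketch below is illustrative only: the weight value, the target distance and the mapping from gain to cue estimates are our assumptions for demonstration, not the study's fitted parameters.

```python
def combine_cues(d_visual, d_proprio, w_proprio=0.7):
    """Weighted linear sum of two distance estimates.

    w_proprio > 0.5 mirrors the reported overall dominance of
    proprioception; the exact value here is assumed.
    """
    return w_proprio * d_proprio + (1.0 - w_proprio) * d_visual

true_distance = 10.0  # hypothetical distance walked in place, in metres
for visual_gain in (0.7, 1.0, 1.4):  # visual-gain manipulation as in Exp. 1
    d_visual = visual_gain * true_distance  # what the optic flow signals
    d_proprio = true_distance               # what the steps signal
    print(f"gain {visual_gain}: combined = {combine_cues(d_visual, d_proprio):.2f} m")
```

With a high proprioceptive weight, the combined estimate shifts only modestly as visual gain changes, which is the qualitative pattern the study reports.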

2.
Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined, such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.
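For readers unfamiliar with the maximum likelihood estimation (MLE) model referenced here, its standard textbook form is reproduced below; the notation (v = vestibular, p = proprioceptive) is ours, not the paper's.

```latex
% Standard MLE cue combination (assumed notation):
% \hat{d}_i are the unimodal estimates, \sigma_i^2 their variances.
\hat{d} = w_v \hat{d}_v + w_p \hat{d}_p, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_v^2 + 1/\sigma_p^2}, \qquad
\sigma^2_{vp} = \frac{\sigma_v^2 \, \sigma_p^2}{\sigma_v^2 + \sigma_p^2}
  \le \min\!\left(\sigma_v^2, \sigma_p^2\right)
```

The final inequality is what predicts the reduced variance of adjustments under congruent cues, and the weighted average is what the conflict conditions probe.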

3.
One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.

4.
Interactions between perceived temporal and spatial properties of external stimuli (e.g. duration and size) suggest common neural mechanisms underlying the perception of time and space. This conclusion, however, lacks support from studies in large-scale environments, showing that judgements on travelled distances and associated travel times are independent from each other. Here, we used a different approach to test whether the perception of travelled distances is influenced by the perception of time. Unlike previous studies, in which temporal and spatial judgements were related to the same experience of walking, we assessed time and distance perception in analogous, but separate versions of estimation and production tasks. In estimation tasks, participants estimated the duration of a presented sound (time) or the length of a travelled distance (space), and in production tasks, participants terminated a sound after a numerically specified duration (time) or covered a numerically specified distance (space). The results show systematic overestimation of time and underestimation of travelled distance, with the latter reflecting previously reported misperceptions of visual distance. Time and distance judgements were related within individuals for production, but not for estimation tasks. These results suggest that temporal information might constitute a probabilistic cue for path integration.

5.
Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had been previously given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was more closely perceptually equivalent to the physical motion experienced rather than the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self-motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self-motion.

6.
Successful navigation through an environment requires precise monitoring of direction and distance traveled ("path integration" or "dead reckoning"). Previous studies in blindfolded human subjects showed that velocity information arising from vestibular and somatosensory signals can be used to reproduce passive linear displacements. In these studies, visual information was excluded as a sensory cue. Yet, in our everyday life, visual information is very important and usually dominates vestibular and somatosensory cues. In the present study, we investigated whether visual signals can be used to discriminate and reproduce simulated linear displacements. In a first set of experiments, subjects viewed two sequences of linear motion and were asked in a 2AFC task to judge whether the travel distance in the second sequence was longer or shorter than in the first. Displacements in either movement sequence could be forward (f) or backward (b). Subjects were very accurate in discriminating travel distances. Average error was less than 3% and did not depend on displacements being in the same (ff, bb) or opposite directions (fb, bf). In a second set of experiments, subjects had to reproduce a previously seen forward motion (passive condition), either in light or in darkness, i.e., with or without visual feedback. Passive displacements had different velocity profiles (constant, sinusoidal, complex) and speeds and were performed across a textured ground plane, a 2-D plane of dots or through a 3-D cloud of dots. With visual feedback, subjects reproduced distances accurately. Accuracy did not depend on the kind of velocity profile in the passive condition. Subjects tended to reproduce distance by replicating the velocity profile of the passive displacement. Finally, in the condition without visual feedback, subjects reproduced the shape of the velocity profile but used much higher speeds, resulting in a substantial overshoot of travel distance. Our results show that visual, vestibular, and somatosensory signals are used for path integration, following a common strategy: the use of the velocity profile during self-motion.
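The "common strategy" identified here, replaying the velocity profile of the experienced displacement, is equivalent to path integration over a stored velocity trace. A minimal sketch follows; the sinusoidal profile and all numbers are hypothetical, and trapezoidal integration is our choice of method, not the paper's.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration, kept explicit for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical 4-s passive displacement with a sinusoidal velocity profile.
t = np.linspace(0.0, 4.0, 401)
v = 1.5 * np.sin(np.pi * t / 4.0)  # peak 1.5 m/s, starts and ends at rest

print(f"travelled distance: {trapezoid(v, t):.2f} m")  # d = integral of v dt

# Replaying the same profile shape at higher speed, as subjects did without
# visual feedback, overshoots the target distance:
print(f"reproduced at 1.5x speed: {trapezoid(1.5 * v, t):.2f} m")
```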

7.
The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05–0.4 Hz; A_max = 0.25°–4°; v_max = 0.08–10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1° and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1) and that a vestibular mechanism is doing so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.
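The "central thresholds" in these simulations act like dead-zone nonlinearities on the sensory feedback channels. The fragment below illustrates only that single element; it is not a reimplementation of the authors' multi-loop posture-control model, and the threshold value is taken from the psychophysical figures quoted above, not from the model's fitted parameters.

```python
def dead_zone(signal, threshold):
    """Pass only the portion of a feedback signal that exceeds a threshold.

    Below threshold the channel outputs zero, so small visually evoked
    body excursions go uncorrected; above it, the feedback opposes
    further sway, producing a saturation of the visual response.
    """
    if abs(signal) <= threshold:
        return 0.0
    return signal - threshold if signal > 0 else signal + threshold

# Proprioceptive channel on a stationary platform (~0.1 deg/s threshold):
print(dead_zone(0.05, 0.1))  # 0.0 -> sway below threshold is not corrected
print(dead_zone(0.30, 0.1))  # 0.2 -> sway above threshold is opposed
```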

8.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3° constant errors, 2.9° variable errors), but not rod-pointing judgments (3.5° constant errors, 5.9° variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected. Concordant visual-vestibular stimulation did not facilitate either response. Stimulation of the vestibular system in the absence of vision produced imprecise rod-pointing responses, while variable and constant pointing errors in the active walking condition were comparable to those obtained in the visual condition. During active self-motion, subjects made large head-pointing undershoots when visual information was not available. These results suggest that while vision provides sufficient information to identify the heading direction, it cannot, in isolation, be used to guide the motor response required to point toward or move in the direction of self-motion.

9.
Locomotion control uses proprioceptive, visual, and vestibular signals. The vestibular contribution has been analyzed previously with galvanic vestibular stimulation (GVS), which constitutes mainly a virtual head-fixed rotation in the roll plane that causes polarity-specific deviations of gait. In this study we examined whether a visual disturbance has similar effects on gait when it acts in the same direction as GVS, i.e., when roll vection is induced by head-fixed visual roll motion stimulation. Random dot patterns were constantly rotated in roll at ±15°/s on a computer-driven binocular head-mounted display that was worn by eight healthy participants. Their gait trajectories were tracked while they walked a distance of 6 m. A stimulation effect was observed only for the first three to four steps, but not for the whole walking distance. These results are similar to the results of previous GVS studies, suggesting that in terms of the direction of action visual motion stimulations in the roll plane are similar to GVS. Both kinds of stimulation cause only initial balance responses in the roll plane but do not contribute to the steering of gait in the yaw plane.

10.
We have recently reported that the head systematically deviates toward the future direction of the trajectory about 500 ms before attaining a turning point of 90° corner trajectories, both in light and in darkness. Here, we investigated how this anticipatory strategy is modified whilst varying visual conditions (Experiment 1) and walking speed (Experiment 2). Exp. 1 showed similar anticipatory behaviour when walking with or without vision. Exp. 2 (that varied walking speed; eyes open) showed that the head started to deviate at a constant distance rather than at a constant time to the corner. The results appear inconsistent with optic flow theories of the guidance of walking direction and might highlight the role of landmarks and/or egocentric direction in anticipatory orienting behaviour.

11.
Visual information regarding limb location can override proprioceptive information when there is conflict between the two, a phenomenon referred to as visual capture. In three experiments, we employed the "mirror illusion," in which the perceived location of one's hand is influenced by the visual information specified by the mirror reflection of the other hand, to test whether visual capture influences body-based indications of the extent of objects. Participants viewed their visible hand and its reflection in a mirror after the unseen hand was positioned at one of four locations on a tabletop. The unseen hand's location appeared to be the same distance from the mirror as the visible hand's location. After viewing the visible hand and its reflection while simultaneously performing simple finger movements with both hands, participants viewed a block and had to move their unseen hand to a position that would allow them to grasp the block between their two hands. Movements of the unseen hand relative to the visible hand were biased by the visual information, reflecting errors in moved hand position given visual-proprioceptive conflict. In Experiment 1, visual capture influenced the indications of object extent for objects within reach and aligned with the viewer's midline. Experiments 2 and 3 extended these findings to indications of extent for objects outside the viewer's reach (Experiment 2) and misaligned with the viewer's midline (Experiment 3). These results suggest that visual body information has a generalizable effect on actions used to indicate space perception that extends beyond egocentric spatial localization tasks.

12.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual, and vestibular are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.

13.
Ownership for body parts depends on multisensory integration of visual, tactile and proprioceptive signals. In a previous study, we demonstrated that vestibular signals also contribute to ownership for body parts, since vestibular stimulation increased illusory ownership for a rubber hand. However, it remained an open question whether the vestibular information acts on the visual or on the tactile input. Here, we used a non-visual variant of the rubber hand illusion, manipulating the synchrony between tactile signals from the participant's left and right hand. The results revealed a strong illusory ownership through self-reports (questionnaires) and proprioceptive drift measures. Interestingly, however, there was no influence of vestibular stimulation on illusory ownership and the proprioceptive drift. The present data suggest that vestibular signals do not interfere with the tactile-proprioceptive mechanisms underlying ownership for body parts when visual feedback from the body surface is absent.

14.
Judging distances is crucial when interacting with the environment. For short distances in action space (up to 30 m), both explicit verbal estimates and locomotor judgments are fairly accurate. For large distances, data have remained scarce. In two laboratory experiments, our observers judged distances to visual targets presented stereoscopically, either by giving a verbal estimate or by walking the distance to the target on a treadmill. While verbal judgments remained linearly scaled over the whole range of distances from 20 to 262 m, locomotor judgments fell short at distances above 100 m, indicating that observers overestimated the distance they had traveled and increasingly did so as a function of actual target distance. This pattern persisted when controlling for the potential confound of fatigue or reluctance to walk. We discuss different approaches to explain our findings and stress the importance of a differential use of distance cues. A model of leaky path integration showed a good fit with our locomotor data.
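A leaky path integrator of the kind fitted to these data accumulates distance while its state continuously decays, so the internal estimate increasingly falls short of the true distance walked, and observers compensate by walking too far. A minimal discrete-time sketch follows; the step length and leak rate are illustrative assumptions, not the fitted parameters.

```python
def leaky_estimate(step_length=0.7, n_steps=200, leak=0.002):
    """Discrete leaky integrator: x[n+1] = (1 - leak) * x[n] + step_length.

    With leak = 0 the estimate equals the true distance; with leak > 0 it
    saturates toward step_length / leak, so the relative undershoot grows
    with the distance actually walked.
    """
    x = 0.0
    for _ in range(n_steps):
        x = (1.0 - leak) * x + step_length
    return x

true_distance = 0.7 * 200  # 140 m walked
print(true_distance, round(leaky_estimate(), 1))  # 140.0 vs ~115.5
```

Because the undershoot of the internal estimate scales with distance, a walker relying on it overshoots the target by an amount that grows with target distance, matching the locomotor data described above.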

15.
Four cats labyrinthectomized shortly after birth (DELAB) exhibited the classical vestibular syndrome and recovery, while their motor development was otherwise unimpaired. As adults, they were tested for visual-vestibular substitution in a locomotor task with either orientation requirements (tilted platforms) or balance requirements (narrow platforms). Visual motion cues or static visual cues were controlled using normal or stroboscopic lighting, or darkness. Measurements of the average speed of locomotion showed that: (1) although all cats increase their speed when more visual cues become available, a marked deficit occurs in darkness only in the DELAB cats; (2) with either vestibular cues alone or static visual cues alone, cats are able to reach the same level of performance in the tilted-platform test, which suggests total visual-vestibular interchangeability in orientation; (3) DELAB cats perform very poorly in the narrow-rail test; (4) when continuous vision is allowed in the narrow-rail test, the DELABs' performance rises but does not match that of the control group; (5) the specific balance deficit of the DELAB group is thus reduced by normal continuous vision as compared to stroboscopic vision, suggesting a significant, though imperfect, substitution of visual motion cues for the missing dynamic vestibular cues; and (6) dynamic visual cues play only a minor role in most situations, when locomotor speed is high. These results support the view that both the vestibular and the visual system can subserve two distinct functions: dynamic information may stabilize stance in narrow, unstable situations during slow locomotion, and static orientation cues may mainly control the direction of displacement. Possible interactions between head positioning and body orientation in the DELAB cats are discussed.

16.
To assess the contribution of visual and vestibular information to human navigation, five blindfolded subjects were passively displaced along two sides of a triangular path using a mobile robot. Subjects were required to complete the triangle by driving the robot to the starting point either blindfolded or in full vision, in a 7×6-m and a 38×38-m room. Room dimensions exerted a significant effect on performance: in the smaller environment blindfolded responses were always too short, whereas subjects correctly reached the starting point when visual feedback was allowed. On the contrary, in the larger room subjects responded correctly while blindfolded but drove significantly farther than requested in full vision. Our data show that vestibular navigation is highly sensitive to both stored (knowledge of the environment) and current visual information.

17.
The contribution of intrinsic balance control factors to fall mechanisms has received little investigation in studies on occupational accidents. The aim of this study was to assess whether postural regulation in falling workers might have specificities in terms of sensorimotor strategies and neuromuscular responses to balance perturbations. Nine multi-fall victims (MF), 43 single-fall victims (SF) and 52 controls (C) were compared on performance measurements of static and dynamic postural control. MF and SF had the worst postural performance in both the static and slow dynamic tests, particularly in eyes-closed conditions, suggesting a high dependency on visual cues and a lower use of proprioception. Moreover, the sensorial analysis showed that MF and SF relied less on vestibular input in the development of balance strategy and had more difficulties in maintaining a correct upright stance when proprioceptive input was altered. Finally, MF showed longer latency responses to unexpected external disturbance. Overall, postural control quality increased in the order MF, SF and C. MF and SF adopted a particular sensorimotor organisation, placing them at an increased risk of falling in specific sensory environments. Strategies that rely on visual information involve cognitive processes that produce delayed and less accurate fall-avoidance responses, in contrast to adaptive strategies based on proprioceptive and vestibular information.

18.
Prolonged periods of monocular paralysis alter the physiology of the dorsal lateral geniculate nucleus (LGN), shifting the X/Y cell ratio so that X cells are encountered less frequently than Y cells. The shift in the LGN X/Y cell ratio is observed in the A-layers of both geniculates, whether the innervating eye is paralyzed or mobile. This change in the LGN has been attributed to a mechanism that is sensitive to disruptions in binocular cues. The effects of monocular paralysis in the LGN were used to demonstrate that LGN cells possess a sensitivity to binocular cues of an extraretinal and retinal source. The removal of extraretinal signals, in the form of proprioceptive feedback from the extraocular muscles of the mobile eye, by section of the ophthalmic branch of the Vth cranial nerve, resulted in an immediate and long-lasting reversal of the effects of monocular paralysis. The LGN X/Y ratio was restored to a normal value in the layers innervated by the eye with intact proprioceptive inputs as well as in the layers innervated by the eye in which proprioceptive inputs were removed. In contrast to this, the removal of proprioceptive inputs from the paralyzed eye had no effect on the LGN X/Y ratio. The removal of visual inputs from the mobile eye by section of the optic nerve resulted in an immediate, but somewhat transient reversal of the effects of monocular paralysis. Within the first 25 h after optic nerve section, the LGN X/Y ratio was restored to a normal value in the layers innervated by the eye with intact visual inputs. A transient reversal was also observed when both visual and proprioceptive inputs from the mobile eye were removed. These results are consistent with the belief that the LGN is one site in the visual pathway where proprioceptive and visual signals from the two eyes converge.

19.
Integration of discrepant visual and proprioceptive action effects puts high demands on the human information processing system. The present study aimed to examine the integration mechanisms for the motor (Exp. 1) and visual modality (Exp. 2). According to theories of common coding, we assumed that visual as well as proprioceptive information is represented within the same cognitive domain and is therefore likely to affect each other (multisensory cross talk). Thus, apart from the often-confirmed visual dominance in multisensory integration, we asked about intra- and intermodal recall of either proprioceptive or visual information and whether there were any differences between the motor and visual modality. In a replication paradigm, we perturbed the relation between hand movements and cursor movements. The task required the (intra- vs. intermodal) replication of an initially performed (seen) hand (cursor) movement in a subsequent motor (visual) replication phase. First, mechanisms of integration were found to be dependent on the output modality. Visual action effects interfered with the motor modality, but proprioceptive action effects did not have any effect on the visual modality. Second, however, intermodal integration was more susceptible to interference, and this was found to be independent of the output modality. Third, for the motor modality, the locus of perturbation (perturbation of cursor amplitude or perturbation of hand amplitude) was irrelevant, but for the visual modality, perturbation of hand amplitudes reduced the cross talk. Tool use is one field of application for these kinds of results, since the optimized integration of conflicting action effects is a precondition for using tools successfully.

20.
Recent evidence suggests that planning a reaching movement entails similar stages and common networks irrespective of whether the target location is defined through visual or proprioceptive cues. Here we test whether the transformations that convert the sensory information regarding target location into the required motor output are common for both types of reaches. To do so, we adaptively modified these sensorimotor transformations through exposure to displacing prisms and hypothesized that if they are common to both types of reaches, the aftereffects observed for reaches to visual targets would generalize to reaches to a proprioceptive target. Subjects (n = 16) were divided into two groups that differed with respect to the sensory modality of the targets (visual or proprioceptive) used in the pre- and posttests. The adaptation phase was identical for both groups and consisted of movements toward visual targets while wearing 10.5° horizontally displacing prisms. We observed large aftereffects consistent with the magnitude of the prism-induced shift when reaching toward visual targets in the posttest, but no significant aftereffects for movements toward the proprioceptive target. These results provide evidence that distinct, differentially adaptable sensorimotor transformations underlie the planning of reaches to visual and proprioceptive targets.

