Similar Articles
20 similar articles retrieved.
1.
The existence of a temporal gap between the offset of a fixation target and the onset of a peripheral target generally reduces the saccadic and manual reaction time in response to the peripheral target. Using a double-step paradigm, the present experiment investigated whether a temporal gap between the extinction of the first target and the presentation of the second target can help in reducing the time to trigger the corrective eye movements and to correct the arm trajectory towards the final target position. A gap was introduced between the extinction of the initial target and the presentation of a new, unexpected goal-target during the movement. The results replicated the gap effect for the corrective saccade to the second target, but revealed an opposite effect for the correction of the reaching movements, as the arm correction occurred later in the Gap than in the No-Gap conditions. These results suggest that the information available for the arm motor system to correct the trajectory in relation to the second target was different in the Gap and No-Gap conditions. In the No-Gap condition, the correction of reaching movements would be based on retinal errors between the first and the second targets whereas, in the Gap condition, the correction would be based on information derived from the corrective saccade-related signals to the second target.

2.
Reaching to grasp an object of interest requires a complex sensorimotor transformation involving eye, head, hand, and postural systems. We show here that discontinuities in the development of movement in these systems depend not only on age but also on task constraints. Providing external postural support allows us to examine the differential influences of the eye on the hand and the hand on the eye as the ability to isolate and coordinate each system changes with age. Children 4–6 years old had significant difficulty isolating eye movement from head or hand movement, whereas children 7–9 years old showed improved ability to isolate the eye, and by 10–15 years children became proficient in isolating hand movements from eye movements. Postural support had differential effects on the processes of initiation and execution of eye–hand movements. The addition of postural support decreased the time needed for planning the movement, especially in the youngest children, and contributed to increased speed of isolated movements, whereas it caused differential slowing of coordinated movements depending on the child’s developmental level. We suggest that the complexity of the results reflects the complexity of changing task requirements as children transition from simpler ballistic control of all systems to flexible, independent but coordinated control of multiple systems.

3.
The present study evaluated the role of eye movements for manual adaptation to reversed vision. Subjects tracked a visual target using a mouse-driven cursor. In Experiment A, they were instructed to look at the target, to look at the cursor, or to fixate straight ahead, or they received no instructions regarding eye movements (Groups T, C, F, and N, respectively). Experiment B involved Groups T and C only. In accordance with the literature, baseline manual tracking was more accurate when subjects were instructed to move their eyes rather than to fixate straight ahead. In contrast, no such benefit was observed for the adaptive improvement of tracking. We therefore concluded that transfer of information from the oculomotor to the hand motor system enhances the ongoing control of hand movements but not their adaptive modification, probably because the large computational demand of adaptation does not allow additional processing of supplementary oculomotor signals. We further found adaptation to be worse in T than in any other group. In particular, adaptation was worse in T than in C although eye movements were the same: subjects in both groups moved their eyes in close relationship with the target rather than the cursor, Group C thus disobeying our instructions. The deficient performance of Group T is therefore not related to eye movements per se, but rather to our instructions. We conclude that an independently moving target strongly attracts eye movements independent of instruction (i.e. Groups T and C), but instructions may redirect spatially selective attention (i.e. Group T vs C), and thus influence adaptation.

4.
To investigate how the sensorimotor systems of eye and hand use position, velocity, and timing information of moving targets, we conducted a series of three experiments. Subjects performed combined eye-hand catch-up movements toward visual targets that moved with step-ramp-like velocity profiles. Visual feedback of the hand was prevented by blanking the target at the onset of the hand movement. A multiple regression was used to determine the effects of position, velocity, and timing accessed before each movement on the movement amplitudes of eye and hand. The following results were obtained: (1) The predictive strategy of eye movements could be modeled by a linear regression on the basis of the position error and the target velocity. This was not the case for hand movements, for which there was a significant partial correlation between the movement amplitude and the product of target velocity and movement duration. This correlation was not observed for eye movements, suggesting that the predictive strategy of hand movements takes movement duration into account, in contrast to the strategy used in eye movements. (2) To determine whether the movement amplitudes of eye and hand depend on a categorical classification between a discrete number of movement types, we compared an experiment in which target position and velocity were distributed continuously with an experiment using only four different combinations of target position and velocity. No systematic differences between these experiments were observed. This shows that the system output is a function of continuous, interval-scaled variables rather than a function of discrete categorical variables. (3) We also analyzed the component of the movement amplitudes not explained by the regression, i.e., the residual error. The residual errors between subsequent trials were correlated more strongly for eye than for hand movements, suggesting that short-term temporal fluctuations of the predictive strategy were stronger for the eye than for the hand.
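The regression analysis described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the synthetic data, units, and coefficient values are assumptions; only the model structure (eye amplitude regressed on position error and target velocity, hand amplitude additionally on the velocity × duration product) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic trial-by-trial predictors (assumed units: deg, deg/s, s).
position_error = rng.uniform(2, 20, n)          # target position error at movement onset
target_velocity = rng.uniform(-15, 15, n)       # ramp velocity of the target
movement_duration = rng.uniform(0.15, 0.45, n)  # duration of the catch-up movement

# Hypothetical "true" amplitudes: the eye ignores duration, the hand compensates for it.
eye_amp = position_error + 0.15 * target_velocity + rng.normal(0, 1, n)
hand_amp = position_error + target_velocity * movement_duration + rng.normal(0, 1, n)

def fit(amplitude, predictors):
    """Ordinary least squares with an intercept; returns the fitted coefficients."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
    return np.round(beta, 2)

# Eye model: position error + target velocity, as in the abstract.
print("eye  coefficients:", fit(eye_amp, [position_error, target_velocity]))
# Hand model: additionally includes the velocity x duration product term.
print("hand coefficients:", fit(hand_amp, [position_error, target_velocity,
                                           target_velocity * movement_duration]))
```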

5.
The movements of both eyes and the head were recorded with search coils in unrestrained, freely moving chameleons. As a main result I found that the generation of saccades in the left and the right eye was either independent of each other or highly correlated, depending on the behavioural situation. When no prey item was fixated, disconjugate saccades were observed, in accordance with earlier observations in chameleons. During prey tracking the chameleons switched to a different oculomotor behaviour and pursued the moving prey with synchronous saccades. At higher target velocities, the tracking movement of the head was also saccadic and was synchronised with the two eyes. Binocular coupling affected only the timing of the saccades but not the metrics: the amplitudes of the synchronous saccades were usually different in the two eyes. These observations suggest the existence of two independent premotor neuronal circuits for left and right eye saccadic motor control in the chameleon. Binocular coupling in prey-tracking chameleons is probably achieved by neuronal coupling of these premotor circuits during eye–head coordination. The ability to switch between synchronous and uncoupled saccadic eye movements has not been described for any other vertebrate. This unique ability of the chameleon may help to understand the organisation of the oculomotor system of other vertebrates, since evidence for separate left eye and right eye saccade generation and position control has recently also been reported in primates.

6.
Illusory bias typically influences perception, but action towards the illusory stimulus is often unaffected. Recent studies, however, have shown that the type of information available is a predictor for the expression of action bias. In the present cyclical aiming experiment, the type of information (retinal and extra-retinal) was manipulated in order to investigate the differential contributions of different cues to both eye and hand movements. The results showed that a Müller-Lyer illusion caused very similar perturbation effects on hand and eye movement amplitudes, and this bias was mediated by the type of information available on-line. Interestingly, the impact of the illusion on goal-directed movement was smaller when information about the figure but not the hand was provided for on-line control. Saccadic information did not influence the size of the effect of a Müller-Lyer illusion on hand movements. Furthermore, the illusions did not alter the eye-hand coordination pattern. The timing of saccade termination was strongly linked to hand movement kinematics. The present results are not consistent with current dichotomous models of perception and action or of movement planning and on-line control. Rather, they suggest that the type of information available for movement planning mediates the size of the illusory effects. Overall, it has been demonstrated that movement planning and control processes are versatile operations, which have the ability to adapt to the type of information available.

7.
Studies in monkeys and humans suggest a dissociation between the visual fields near and far from the hand. In this study, we investigated visual detection and spatial discrimination in near- and far-hand fields using a flashing light-emitting diode placed near (1 cm) and/or far (40 cm) from the hand. We found that there was greater accuracy (i.e., fewer errors) in the near-hand field. Control experiments indicated that (a) the superior near-hand detection performance was not due to response strategies, (b) the hand did not serve as a spatial reference, (c) the greater accuracy in the near-hand field did not reflect within-hemisphere or within-hemispace facilitation, and (d) the effect appeared to be due essentially to viewing of the hand rather than to proprioceptive information. The results suggest there is an interconnected system for integrated (visual–tactile) coding of peripersonal space centered on body parts and comprising bimodal visuo-tactile cells.

8.
Most object manipulation tasks involve a series of actions demarcated by mechanical contact events, and gaze is typically directed to the locations of these events as the task unfolds. Here, we examined the timing of gaze shifts relative to hand movements in a task in which participants used a handle to contact sequentially five virtual objects located in a horizontal plane. This task was performed both with and without visual feedback of the handle position. We were primarily interested in whether gaze shifts, which in our task shifted from a given object to the next about 100 ms after contact, were predictive or triggered by tactile feedback related to contact. To examine this issue, we included occasional catch contacts in which the forces simulating contact between the handle and the object were removed. In most cases, removing the force did not alter the timing of gaze shifts, irrespective of whether or not vision of the handle position was present. However, in about 30% of the catch contacts, gaze shifts were delayed. This percentage corresponded to the fraction of contacts with force feedback in which gaze shifted more than 130 ms after contact. We conclude that gaze shifts are predictively controlled but timed so that the hand actions around the time of contact are captured in central vision. Furthermore, a mismatch between the expected and actual tactile information related to the contact can lead to a reorganization of gaze behavior for gaze shifts executed more than 130 ms after a contact event.
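The 130 ms criterion used in this abstract can be illustrated with a short sketch. The latency values below are fabricated placeholders, not the study's data; only the logic (count the fraction of gaze shifts occurring more than 130 ms after contact and compare it with the fraction of delayed gaze shifts on catch contacts) follows the abstract.

```python
import numpy as np

# Hypothetical gaze-shift latencies relative to contact on normal force-feedback trials (ms).
latencies_force = np.array([85, 95, 100, 105, 110, 120, 125, 140, 150, 170])
# Hypothetical flags marking whether the gaze shift was delayed on each catch contact.
delayed_on_catch = np.array([False, False, True, False, True, False, False, True, False, False])

threshold_ms = 130  # latency beyond which contact-related tactile feedback could still intervene

frac_late = np.mean(latencies_force > threshold_ms)
frac_delayed = np.mean(delayed_on_catch)

print(f"fraction of gaze shifts > {threshold_ms} ms after contact: {frac_late:.0%}")
print(f"fraction of catch contacts with delayed gaze shifts:      {frac_delayed:.0%}")
```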

9.
During a goal-directed movement of the hand to a visual target, the controlling nervous system depends on information provided by the visual system. This suggests that a coupling between these two systems is crucial. In a choice condition with two or more equivalent objects present at the same time, the question arises whether we (a) reach for the object we have selected to look at or (b) look to the object we have selected to grasp. Therefore, we examined the preference of human subjects selecting the left or the right target and its correlation with the action to be performed (eye-, arm-, or coordinated eye–arm movement) as well as with the horizontal position of the target. Two targets were presented at the same distance to the left and right of a fixation point, and the stimulus onset asynchrony (SOA) was adjusted until both targets were selected equally often. This balanced SOA was then taken as a quantitative measure of selection preference. We compared these preferences at three horizontal positions for the different movement types (eye, arm, both). The preferences of the ‘arm’ and ‘coordinated eye–arm’ movement types were correlated more strongly than the preferences of the other movement types. Thus, we look to where we have already selected to grasp. These findings provide evidence that in a coordinated movement of eyes and arm, the control of gaze is a means to an end, namely a tool to conduct the arm movement properly.
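The balanced-SOA procedure can be sketched as a simple 1-up/1-down staircase. The abstract does not specify the adaptive rule, so the step size, the simulated observer (with an assumed 20 ms rightward bias), and the convergence estimate below are illustrative assumptions only.

```python
import random

def simulated_choice(soa_ms, bias_ms=20.0, noise_ms=30.0):
    """Hypothetical observer: picks 'left' when the left target's head start (soa_ms),
    minus an intrinsic rightward bias plus decision noise, wins the race."""
    return "left" if random.gauss(soa_ms - bias_ms, noise_ms) > 0 else "right"

def balanced_soa(trials=400, step_ms=5.0):
    """1-up/1-down staircase: after each trial, shift the SOA against the chosen side,
    so the staircase hovers around the SOA at which both targets are chosen equally often."""
    soa, history = 0.0, []
    for _ in range(trials):
        choice = simulated_choice(soa)
        soa += -step_ms if choice == "left" else step_ms
        history.append(soa)
    # Average over the second half of trials as the balanced-SOA estimate.
    return sum(history[trials // 2:]) / (trials // 2)

random.seed(1)
print(f"estimated balanced SOA: {balanced_soa():.1f} ms")  # should land near the assumed 20 ms bias
```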

10.
The tendency to generate head movements during saccades varies from person to person. Head movement tendencies can be measured as subjects fixate sequences of illuminated targets, but the extent to which such measures reflect eye–head coupling during more natural behaviors is unknown. We quantified head movement tendencies in 20 normal subjects in a conventional laboratory experiment and in an outdoor setting in which the subjects directed their gaze spontaneously. In the laboratory, head movement tendencies during centrifugal saccades could be described by the eye-only range (EOR), customary ocular motor range (COMR), and the customary head orientation range (CHOR). An analogous EOR, COMR, and CHOR could be extracted from the centrifugal saccades executed in the outdoor setting. An additional six measures were introduced to describe the preferred ranges of eyes-in-head and head-on-torso manifest throughout the outdoor recording, i.e., not limited to the orientations following centrifugal saccades. These 12 measured variables could be distilled by factor analysis to one indoor and six outdoor factors. The factors reflect separable tendencies related to preferred ranges of visual search, head eccentricity, and eye eccentricity. Multiple correlations were found between the indoor and outdoor factors. The results demonstrate that there are multiple types of head movement tendencies, but some of these influence behavior across rather different experimental settings and tasks. Thus behavior in the two settings likely relies on common neural mechanisms, and the laboratory assays of head movement tendencies succeed in probing the mechanisms underlying eye–head coupling during more natural behaviors.
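The reduction of the 12 measured range variables to a small number of factors can be sketched with an off-the-shelf factor analysis. The random data, the choice of scikit-learn, and the number of factors retained (three, matching the kinds of tendencies named in the abstract rather than the seven factors actually reported) are all assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for 20 subjects x 12 head/eye range measures (EOR, COMR, CHOR, ...).
# Real data would come from the indoor and outdoor recordings; this is synthetic.
n_subjects, n_measures, n_factors = 20, 12, 3
latent = rng.normal(size=(n_subjects, n_factors))        # pretend underlying tendencies
loadings = rng.normal(size=(n_factors, n_measures))
data = latent @ loadings + 0.3 * rng.normal(size=(n_subjects, n_measures))

X = StandardScaler().fit_transform(data)                 # z-score each measure
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(X)

print("factor loadings (factors x measures):")
print(np.round(fa.components_, 2))
print("per-subject factor scores shape:", fa.transform(X).shape)
```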

11.
The ability to dissociate eye movements from head movements is essential to animals with foveas and fovea-like retinal specializations, as these species shift the eyes constantly, and moving the head with each gaze shift would be impractical and energetically wasteful. The processes by which the dissociation is effected remain unclear. We hypothesized that the dissociation is accomplished by means of a neural gate, which prevents a common gaze-shift command from reaching the neck circuitry when eye-only saccades are desired. We further hypothesized that such a gate would require a finite period to reset following opening to allow a combined eye–head saccade, and thus the probability of generating a head movement during a saccade would be augmented when a new visual target (the ‘test’ target) appeared during, or soon after, a combined eye–head saccade made to an earlier, ‘conditioning’ target. We tested human subjects using three different combinations of targets—a horizontal conditioning target followed by a horizontal test target (H/H condition), horizontal conditioning followed by vertical test (H/V), and vertical conditioning followed by horizontal test (V/H). We varied the delay between the onset of the conditioning head movement and the presentation of the test target, and determined the probability of generating a head movement to the test target as a function of target delay. As predicted, head movement probability was elevated significantly at the shortest target delays and declined thereafter. The half-life of the increase in probability averaged 740, 490, and 320 ms for the H/H, H/V, and V/H conditions, respectively. For the H/H condition, the augmentation appeared to outlast the duration of the conditioning head movement. Because the augmentation could outlast the conditioning head movement and did not depend on the head movements to the conditioning and test targets lying in the same directions, we could largely exclude the possibility that the augmentation arises from mechanical effects. These results support the existence of the hypothetical eye–head gate, and suggest ways that its constituent neurons might be identified using neurophysiological methods.
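The reported half-lives imply an exponential decline of head-movement probability with target delay. The sketch below uses the half-lives from the abstract but an assumed baseline probability and initial elevation, since the abstract does not give those values.

```python
# Half-lives of the head-movement probability elevation, from the abstract (ms).
HALF_LIVES_MS = {"H/H": 740, "H/V": 490, "V/H": 320}

def head_probability(delay_ms, half_life_ms, baseline=0.2, peak=0.8):
    """Assumed model: probability decays exponentially from `peak` toward `baseline`
    with the given half-life. Baseline and peak are illustrative values only."""
    return baseline + (peak - baseline) * 0.5 ** (delay_ms / half_life_ms)

for condition, half_life in HALF_LIVES_MS.items():
    probs = [head_probability(d, half_life) for d in (0, 250, 500, 1000)]
    print(condition, "P(head) at 0/250/500/1000 ms:", [f"{p:.2f}" for p in probs])
```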

12.
Changing the direction of the line of sight (gaze) can involve coordinated movements of the eyes and head. During gaze shifts directed along the horizontal meridian, the contribution of the eyes and head depends upon the position of the eyes in the orbits; the contribution of the head to accomplishing the overall shift in gaze declines as the eyes are increasingly deviated away from the direction of the ensuing gaze shift. Also during horizontal gaze shifts, changes in the metrics and kinematics of the saccadic (eye movement) portion of coordinated movements are correlated with the amplitude and velocity of the concurrent head movement. With increasing head contributions, saccade peak velocities decline, durations increase and velocity profiles develop two peaks. It remains unknown whether the interaction between head and eyes observed during horizontal gaze shifts also occurs during vertical gaze shifts. Yet, a full understanding of the neural control of eye–head coordination will depend upon the correlation of neural activity with features of vertical as well as horizontal movements. This report describes the metrics and kinematics of vertical gaze shifts made by head-unrestrained rhesus monkeys. Key observations include: (1) during vertical gaze shifts of a particular amplitude, relative eye and head contributions depend upon the initial vertical positions of the eyes in the orbits; (2) as head contribution increases, peak eye velocities decline, durations increase and vertical velocity profiles develop two peaks; (3) head movement metrics and kinematics are accurately predictable given knowledge only of head movement amplitude. In these ways, vertical gaze shifts were found to be qualitatively similar to horizontal gaze shifts. It seems probable that similar mechanisms mediate head–eye interactions during both horizontal and vertical movements. These observations are consistent with the hypothesis that a signal proportional to vertical head velocity reduces the gain of the vertical saccade burst generator.

13.
Anchoring, that is, a local reduction in kinematic (i.e., spatio-temporal) variability, is commonly observed in cyclical movements, often at or around reversal points. Two kinds of underpinnings of anchoring have been identified—visual and musculoskeletal—yet their relative contributions and interrelations are largely unknown. We conducted an experiment to delineate the effects of visual and musculoskeletal factors on anchoring behavior in visuo-motor tracking. Thirteen participants (reduced to 12 in the analyses) tracked a sinusoidally moving visual target signal by making flexion–extension movements about the wrist, while both visual (i.e., gaze direction) and musculoskeletal (i.e., wrist posture) factors were manipulated in a fully crossed (3 × 3) design. Anchoring was affected by both factors in the absence of any significant interactions, implying that their contributions were independent. When gaze was directed to one of the target turning points, spatial endpoint variability at this point was reduced, but not temporal endpoint variability. With the wrist in a flexed posture, spatial and temporal endpoint variability were both smaller for the flexion endpoint than for the extension endpoint, while the converse was true for tracking with the wrist extended. Differential anchoring effects were absent for a neutral wrist posture and when gaze was fixated in between the two target turning points. Detailed analyses of the tracking trajectories in terms of velocity profiles and Hooke’s portraits showed that the tracking dynamics were affected more by wrist posture than by gaze direction. The discussion focuses on the processes underlying the observed independent effects of gaze direction and wrist posture on anchoring as well as their implications for the notion of anchoring as a generic feature of sensorimotor coordination.
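The trajectory analyses mentioned in this abstract (velocity profiles, Hooke's portraits, endpoint variability) can be sketched for a synthetic tracking signal. The sampling rate, movement frequency, and noise level below are arbitrary assumptions; only the derivation (numerical differentiation, then acceleration plotted against position, plus the spread of the angle at one set of reversal points) follows standard practice.

```python
import numpy as np

fs = 200.0                                    # assumed sampling rate (Hz)
freq = 1.0                                    # assumed tracking frequency (Hz)
t = np.arange(0, 10, 1 / fs)

# Synthetic wrist angle: sinusoidal tracking plus a little measurement noise (degrees).
angle = 20 * np.sin(2 * np.pi * freq * t) + np.random.default_rng(0).normal(0, 0.05, t.size)

velocity = np.gradient(angle, 1 / fs)         # deg/s
acceleration = np.gradient(velocity, 1 / fs)  # deg/s^2

# Hooke's portrait: acceleration as a function of position. A purely harmonic movement
# yields a straight line with slope -(2*pi*freq)^2.
slope = np.polyfit(angle, acceleration, 1)[0]
print(f"fitted Hooke slope: {slope:.1f}  (harmonic prediction: {-(2 * np.pi * freq) ** 2:.1f})")

# Spatial endpoint variability: SD of the angle at positive-to-negative velocity reversals
# (i.e., one of the two turning points of the cycle).
reversals = np.where((velocity[:-1] > 0) & (velocity[1:] <= 0))[0]
print(f"SD of endpoint angles: {np.std(angle[reversals]):.3f} deg")
```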

14.
Relatively little is known about movements of the eyes, head, and hands in natural tasks. Normal behavior requires spatial and temporal coordination of the movements in more complex circumstances than are typically studied, and usually provides the opportunity for motor planning. Previous studies of natural tasks have indicated that the parameters of eye and head movements are set by global task constraints. In this experiment, we explore the temporal coordination of eye, head, and hand movements while subjects performed a simple block-copying task. The task involved fixations to gather information about the pattern, as well as visually guided hand movements to pick up and place blocks. Subjects used rhythmic patterns of eye, head, and hand movements in a fixed temporal sequence or coordinative structure. However, the pattern varied according to the immediate task context. Coordination was maintained by delaying the hand movements until the eye was available for guiding the movement. This suggests that observers maintain coordination by setting up a temporary, task-specific synergy between the eye and hand. Head movements displayed considerable flexibility and frequently diverged from the gaze change, appearing instead to be linked to the hand trajectories. This indicates that the coordination of eye and head in gaze changes is usually the consequence of a synergistic linkage rather than an obligatory one. These temporary synergies simplify the coordination problem by reducing the number of control variables, and consequently the attentional demands, necessary for the task.

15.
Rapid gaze shifts are often accomplished with coordinated movements of the eyes and head, the relative amplitude of which depends on the starting position of the eyes. The size of gaze shifts is determined by the superior colliculus (SC), but additional processing in the lower brain stem is needed to determine the relative contributions of eye and head components. Models of eye–head coordination often assume that the strength of the command sent to the head controllers is modified by a signal indicative of the eye position. Evidence in favor of this hypothesis has recently been obtained in a study of phasic electromyographic (EMG) responses to stimulation of the SC in head-restrained monkeys (Corneil et al. in J Neurophysiol 88:2000–2018, 2002b). Because the patterns of eye–head coordination are not the same in all species, and because the eye position sensitivity of phasic EMG responses has not been systematically investigated in cats, in the present study we used cats to address this issue. We stimulated electrically the intermediate and deep layers of the caudal SC in alert cats and recorded the EMG responses of neck muscles with horizontal and vertical pulling directions. Our data demonstrate that phasic, short latency EMG responses can be modulated by the eye position such that they increase as the eye occupies more and more eccentric positions in the pulling direction of the muscle tested. However, the influence of the eye position is rather modest, typically accounting for only 10–50% of the variance of EMG response amplitude. Responses evoked from several SC sites were not modulated by the eye position.

16.
Head movement frequency of children in response to a horizontal step stimulus is investigated. The aim is to determine whether there is a correlation between the age of the child and the frequency of head movements made to visual step stimuli presented at a fixed distance. Also of importance is whether there is a period of rapid change in the frequency of head movements and, if so, what factors could be influencing this change. Seventy-three participants, between the ages of 4 and 15 years, were requested to “look at a spot of light” in response to step stimuli that varied in size from 5 to 60°. Eye and head movements were recorded with a video-based eye tracker (EL-Mar 2020) equipped with a Flock of Birds head tracker. Frequency of head movements was calculated for each participant and averaged across participants for each age group. Average head movement frequency was then plotted as a function of age. The frequency and variability of head movements decrease as a function of age. This decrease is linear between the ages of 4 and 15 years (y = −1.465x + 22.58; R² = 0.4378; F = 26.48; P < 0.0001). More head movements are made in response to larger step sizes than to smaller ones for all ages. The gradual decrease in the frequency of head movements in response to step stimuli suggests that a specific environmental event, such as reading, is not the cause of the decline. Improved efficiency of eye movements could be due to pre-programmed factors related to neurological development. Alternatively, cognitive factors may be involved: children may learn that utilizing the head for gaze shifts is more energy- and time-consuming than merely using the eyes alone.
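The reported regression line can be evaluated directly to estimate the expected head-movement frequency at a given age. The sketch below simply applies y = −1.465x + 22.58 across the tested age range; the unit in which "frequency" is expressed (e.g., head movements per block of trials) is not stated in the abstract and is left unlabeled.

```python
def predicted_head_movement_frequency(age_years: float) -> float:
    """Linear fit reported in the abstract: y = -1.465 * age + 22.58 (R^2 = 0.44)."""
    return -1.465 * age_years + 22.58

# Evaluate the fit over the tested age range (4-15 years).
for age in range(4, 16):
    print(f"age {age:2d}: predicted head movement frequency = "
          f"{predicted_head_movement_frequency(age):5.2f}")
```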

17.
The accuracy of pointing movements of the hand, directed at visual targets 10° to 40° from the midline, was measured in normal human subjects. No visual feedback from the moving hand was available to the subjects. The head could be either maintained stationary (head-fixed condition) or free to move (head-free condition) during the pointing movements. It was found that the error in pointing was reduced for all targets in the head-free condition. This reduction was greater for the most eccentric target (40°). The improvement in accuracy was observed without any significant change in either the latency or the duration of eye, head or hand movements. In the head-free condition, it was found that the head was displaced in the direction of the target by an amount representing no more than 2/3 of the target amplitude. The improvement in accuracy was not influenced by the amplitude of the head movement. A model is proposed which shows how coordinated eye and head movements could improve the encoding of target position.

18.
It is now well established that the accuracy of pointing movements to visual targets is worse in the full open loop condition (FOL; the hand is never visible) than in the static closed loop condition (SCL; the hand is only visible in a static position prior to movement onset). In order to account for this result, it is generally admitted that viewing the hand in a static position (SCL) improves the movement planning process by allowing a better encoding of the initial state of the motor apparatus. Interestingly, this widespread interpretation has recently been challenged by several studies suggesting that the effect of viewing the upper limb at rest might be explained in terms of the simultaneous vision of the hand and target. This result is supported by recent studies showing that goal-directed movements involve different types of planning (egocentric versus allocentric) depending on whether the hand and target are seen simultaneously or not before movement onset. The main aim of the present study was to test whether or not the accuracy improvement observed when the hand is visible before movement onset is related, at least partially, to a better encoding of the initial state of the upper limb. To address this question, we studied experimental conditions in which subjects were instructed to point with their right index finger toward their unseen left index finger. In that situation (proprioceptive pointing), the hand and target are never visible simultaneously, and an improvement of movement accuracy in SCL, with respect to FOL, may only be explained by a better encoding of the initial state of the moving limb when vision is present. The results of this experiment showed that both the systematic and the variable errors were significantly lower in the SCL than in the FOL condition. This suggests: (1) that the effect of viewing the static hand prior to motion does not only depend on the simultaneous vision of the goal and the effector during movement planning; (2) that knowledge of the initial upper limb configuration or position is necessary to accurately plan goal-directed movements; (3) that static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture and/or hand location relative to the body; and (4) that static visual information significantly improves the representation provided by the static proprioceptive channel.

19.
Reaching to grasp an object of interest requires complex sensorimotor coordination involving eye, head, hand and trunk. While numerous studies have demonstrated deficits in each of these systems individually, little is known about how children with cerebral palsy (CP) coordinate multiple motor systems for functional tasks. Here we used kinematics, remote eye tracking and a trunk support device to examine the functional coupling of the eye, head and hand and the extent to which it was constrained by trunk postural control in 10 children with CP (6–16 years). Eye movements in children with CP were similar to typically developing (TD) peers, while hand movements were significantly slower. Postural support influenced initiation of hand movements in the youngest children (TD & CP) and execution of hand movements in children with CP differentially depending on diagnosis. Across all diagnostic categories, the most robust distinction between TD children and children with CP was in their ability to isolate eye, head and hand movements. Results of this study suggest that deficits in motor coordination for accurate reaching in children with CP may reflect coupled eye, head, and hand movements. We have previously suggested that coupled activation of effectors may be the default output for the CNS during early development.

20.
In addition to many other symptoms, Huntington’s Disease (HD) also causes an impairment of oculomotor functions. In particular, saccadic eye movements become progressively slower and more difficult to initiate; ultimately, patients are forced to resort to large head thrusts as a means to initiate gaze shifts. We wondered whether, as a precursor of this condition, head movements would already facilitate gaze shifts in early stages of the disease. We studied horizontal head movements and eye–head coordination in 29 early stage HD patients (Ps) and 24 age-matched controls (Cs). Subjects tracked random horizontal steps of visual or auditory targets while their heads were either stabilised (saccade amplitudes ≤40°) or free to move (amplitudes ≤160°). Subjects were to react either immediately (reactive mode), only after a go signal was sounded (delayed mode), or with antisaccades. Ps’ head velocity was found to depend on the age of disease onset in a similar way as their saccadic eye velocity does, being clearly reduced in early affected Ps, but increasing to normal levels in lately affected Ps. Yet, saccade and head velocity were only loosely correlated, although both exhibited a negative correlation with the severity of Ps’ genetic condition (number of Ps’ CAG repeats). Eye–head coordination turned out to be identical in Ps and Cs except for quantitative differences caused by the lower saccade and head velocities of Ps. Specifically, the timing between head and eyes and the head contribution to gaze shifts were similar in both groups. Moreover, preventing head movements did not affect the saccade latency or accuracy of Ps. Although Ps made more small involuntary head movements in this condition than Cs, these movements were not instrumental in generating saccades, since they occurred only late after saccade onset. Thus, the head manoeuvres of severely affected patients must be considered a late adaptive behaviour. Finally, the ability of both Ps and Cs to suppress immediate reactions in the delayed and antisaccade conditions diminished as target distance decreased, with failure rates in Ps being much larger than in Cs. Unlike eye and head velocity, these failure rates were not correlated with age and, by the same token, neither with the variations in head and eye velocity nor with the number of CAG repeats. Hence, the pattern of brain areas prominently affected by HD is likely to vary significantly among individuals.
