Similar Articles
20 similar articles found.
1.
In two experiments the involvement of relative and fixed coordinate systems in visuomotor transformations was examined. The experimental task required the successive performance of two movements in each trial, which had to “correspond” to different visual stimuli. One kind of visual display indicated target positions by way of different horizontal positions of a vertical line on a monitor (position mode), while the other indicated movement amplitudes by way of different lengths of a horizontal line (amplitude mode). Formal analysis of variances and covariances of successive individual movements led to the conclusion that in the position mode visuomotor transformations were based on a mixture of relative and fixed coordinate systems, while in the amplitude mode only a relative coordinate system was involved. Thus, visuomotor transformations can be characterized as mixtures of different coordinate systems, and their respective weights in the mixtures are task-dependent. Received: 18 March 1997 / Accepted: 25 September 1997
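The variance–covariance logic this conclusion rests on can be made concrete with a small simulation. The sketch below is only an illustration under assumed numbers (target positions, noise level), not the authors' formal analysis: if the second movement is planned as an amplitude relative to wherever the first one ended, the first movement's error propagates and the endpoints of successive movements covary; if both movements are planned toward fixed positions, their errors are independent and the covariance stays near zero.

```python
# Illustrative sketch (not the authors' actual model): how the covariance of
# successive movement endpoints can separate relative from fixed coordinate coding.
# Assumption: planning noise is added either to the amplitude (relative frame,
# errors accumulate) or to the target position (fixed frame, errors independent).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
targets = np.array([10.0, 20.0])   # hypothetical target positions (cm)
sigma = 0.8                        # hypothetical planning noise (cm)

# Fixed (position) coding: each endpoint aims at its own target position.
end1_fix = targets[0] + sigma * rng.standard_normal(n_trials)
end2_fix = targets[1] + sigma * rng.standard_normal(n_trials)

# Relative (amplitude) coding: the second movement is an amplitude added to
# wherever the first movement actually ended, so its error carries over.
end1_rel = targets[0] + sigma * rng.standard_normal(n_trials)
end2_rel = end1_rel + (targets[1] - targets[0]) + sigma * rng.standard_normal(n_trials)

print("fixed-frame covariance:    %.3f" % np.cov(end1_fix, end2_fix)[0, 1])  # ~0
print("relative-frame covariance: %.3f" % np.cov(end1_rel, end2_rel)[0, 1])  # ~sigma**2
```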

2.
We attempt to determine the egocentric reference frame used in directing saccades to remembered targets when landmark-based (exocentric) cues are not available. Specifically, we tested whether memory-guided saccades rely on a retina-centered frame, which must account for eye movements that intervene during the memory period (thereby accumulating error), or on a head-centered representation that requires knowledge of the position of the eyes in the head. We also examined the role of an exocentric reference frame in saccadic targeting since it would not need to account for intervening movements. We measured the precision of eye movements made by human observers to target locations held in memory for a few seconds. A variable number of saccades intervened between the visual presentation of a target and a later eye movement to its remembered location. A visual landmark that allowed for exocentric encoding of the memory target appeared in half the trials. Variable error increased slightly with a greater number of intervening saccades. The landmark aided targeting precision, but did not eliminate the increase in variable error with additional intervening saccades. We interpret these results as evidence for a representation that relies on knowledge of eye position with respect to the head and not one that relies solely on updating in a retina-centered frame. Our results allow us to set an upper bound on the standard deviation of an eye position signal available to the saccadic system during short memory periods at 1.4° for saccades of about 10°. Received: 7 February 1995 / Accepted: 4 October 1996
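A rough way to see why growing variable error argues against purely retina-centered updating is sketched below. The per-saccade and baseline noise values are placeholders (only the 1.4° figure comes from the abstract), and the additive-noise model is my simplification, not the authors' analysis: retinal updating accumulates noise with each intervening saccade, whereas reading the target out of an eye-in-head position signal predicts a roughly flat error at the precision of that signal.

```python
# Minimal sketch of the two candidate schemes (assumed noise model, not the authors' code).
import numpy as np

update_noise = 0.6     # hypothetical per-saccade updating noise (deg)
baseline_noise = 0.5   # hypothetical memory/motor noise common to both schemes (deg)
eye_pos_noise = 1.4    # upper bound on the eye-position signal SD quoted above (deg)
n_intervening = np.arange(0, 5)

# Retina-centered updating: variances add with each intervening saccade.
sd_retinal = np.sqrt(n_intervening * update_noise**2 + baseline_noise**2)
# Head-centered readout: error is set by the eye-position signal, independent of saccade count.
sd_headcentered = np.full(n_intervening.shape, eye_pos_noise)

for n, r, h in zip(n_intervening, sd_retinal, sd_headcentered):
    print(f"{n} intervening saccades: retinal SD {r:.2f} deg, head-centered SD {h:.2f} deg")
```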

3.
Coding of reaching in the cerebral cortex is based on the operation of distributed populations of parietal and frontal neurons, whose main functional characteristics reside in their combinatorial power, i.e., in the capacity for combining different information related to the spatial aspects of reaching. The tangential distribution of reach-related neurons endowed with different functional properties changes gradually in the cortex and defines, in the parieto-frontal network, trends of functional properties. These visual-to-somatic gradients imply the existence of cortical regions of functional overlaps, i.e., of combinatorial domains, where the integration of different reach-related signals occurs. Studies of early coding of reaching in the mesial parietal areas show how somatomotor information, such as that related to arm posture and movement, influences neuronal activity in the very early stages of the visuomotor transformation underlying the composition of the motor command and is not added “downstream” in the frontal cortex. This influence is probably due to re-entrant signals traveling through fronto-parietal-association connections. Together with the gradient architecture of the network and the reciprocity of cortico-cortical connections, this implies that coding of reaching cannot be regarded as a top-down, serial sequence of coordinate transformations, each performed by a given cortical area, but as a recursive process, where different signals are progressively matched and further elaborated locally, due to intrinsic cortical connections. This model of reaching is also supported by psychophysical studies stressing the parallel processing of the different relevant parameters and the “hybrid” nature of the reference frame where they are combined. The theoretical frame presented here can also offer a background for a new interpretation of a well-known visuomotor disorder due to superior parietal lesions, i.e., optic ataxia. More than a disconnection syndrome, this can now be interpreted as the consequence of the breakdown of the operations occurring in the combinatorial domains of the superior parietal segment of the parieto-frontal network.

4.
Neurophysiological and neuroimaging work has uncovered a modulatory influence of long-range lateral connections from outside of the classical receptive field on neuronal and behavioral responses to localized targets. We report two psychophysical experiments investigating visual detection of real and apparent motion in central vision with and without remote and immediate stationary references. At a particular temporal frequency (0.1–12.8 Hz), participants adjusted the amplitude of either triangle-wave (real) or square-wave (stroboscopic/apparent) oscillatory motion of a vertical bar along a straight, horizontal trajectory for the first impression of the target’s stationarity/nonstationarity (the displacement threshold). In the relative motion conditions, a stationary reference bar was positioned 23′ apart from the target; in the absolute motion conditions, the bar was absent. The thresholds were measured with a dimly-lit uniform background (13 × 13°) and either in the darkness (experiment 1) or moving-background conditions (experiment 2). For both real and apparent motion, varying the observation conditions yields three sensitivity levels: irrespective of the background, the lowest thresholds occur in the presence of an immediate reference, followed by the moderately increased thresholds obtained with a dimly-lit background alone. Equally high thresholds occur in the darkness and moving-background conditions without any visible stationary references. The results suggest that the spatial frames of reference for visual motion detection are hierarchically nested, yet independent. The findings provide support for the view that absolute motion perception should be considered relative, extending neurophysiological evidence for the existence of long-range lateral connections across the visual field.
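For readers unfamiliar with the two motion profiles, the sketch below generates a triangle-wave (smooth, "real") and a square-wave ("apparent"/stroboscopic) target trajectory. Frequency and amplitude are placeholder values, not the study's parameters.

```python
# Sketch of the two target motions contrasted above (placeholder parameters):
# a triangle wave sweeps through intermediate positions, a square wave jumps
# abruptly between the two end positions.
import numpy as np

def target_position(t, freq_hz, amplitude_arcmin, mode):
    phase = (t * freq_hz) % 1.0
    if mode == "triangle":   # smooth back-and-forth motion ("real")
        return amplitude_arcmin * (4 * np.abs(phase - 0.5) - 1)
    if mode == "square":     # alternation between extremes ("apparent"/stroboscopic)
        return amplitude_arcmin * np.where(phase < 0.5, 1.0, -1.0)
    raise ValueError(mode)

t = np.linspace(0, 1, 9)
print(target_position(t, freq_hz=2.0, amplitude_arcmin=5.0, mode="triangle"))
print(target_position(t, freq_hz=2.0, amplitude_arcmin=5.0, mode="square"))
```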

5.
In 1995, an aftereffect following treadmill running was described, in which people would inadvertently advance when attempting to run in place on solid ground with their eyes closed. Although originally induced from treadmill running, the running-in-place aftereffect is argued here to result from the absence of sensory information specifying advancement during running. In a series of experiments in which visual information was systematically manipulated, aftereffect strength (AE), measured as the proportional increase (post-test/pre-test) in forward drift while attempting to run in place with eyes closed, was found to be inversely related to the amount of geometrically correct optical flow provided during induction. In particular, experiment 1 (n=20) demonstrated that the same aftereffect was not limited to treadmill running, but could also be strongly generated by running behind a golf-cart when the eyes were closed (AE=1.93), but not when the eyes were open (AE=1.16). Conversely, experiment 2 (n=39) showed that simulating an expanding flow field, albeit crudely, during treadmill running was insufficient to eliminate the aftereffect. Reducing ambient auditory information by means of earplugs increased the total distances inadvertently advanced while attempting to run in place by a factor of two, both before and after adaptation, but did not influence the ratio of change produced by adaptation. It is concluded that the running-in-place aftereffect may result from a recalibration of visuomotor control systems that takes place even in the absence of visual input. Received: 2 April 1998 / Accepted: 16 February 1999
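A worked example of the aftereffect index used above: AE is simply the post-test forward drift divided by the pre-test drift. The drift distances below are invented for illustration only.

```python
# AE = post-test forward drift / pre-test forward drift (per the abstract).
pre_drift_m = 0.9    # hypothetical forward drift before adaptation (m)
post_drift_m = 1.74  # hypothetical forward drift after adaptation (m)
AE = post_drift_m / pre_drift_m
print(f"AE = {AE:.2f}")  # ~1.0 means no aftereffect; ~1.9 matches the eyes-closed induction
```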

6.
Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems.
Vanessa R. Simmering

7.
Recording studies in the parietal cortex have demonstrated single-unit activity in relation to sensory stimulation and during movement. We have performed three experiments to assess the effect of selective parietal lesions on sensory motor transformations. Animals were trained on two reaching tasks: reaching in the light to visual targets and reaching in the dark to targets defined by arm position. The third task assessed non-standard, non-spatial stimulus response mapping; in the conditional motor task animals were trained to either pull or turn a joystick on presentation of either a red or a blue square. We made two different lesions in the parietal cortex in two groups of monkeys. Three animals received bilateral lesions of areas 5, 7b and MIP, which have direct connections with the premotor and motor cortices. The three other animals subsequently received bilateral lesions in areas 7a, 7ab and LIP. Both groups were still able to select between movements arbitrarily associated with non-spatial cues in the conditional motor task. Removal of areas 7a, 7ab and LIP caused marked inaccuracy in reaching in the light to visual targets but had no effect on reaching in the dark. Removal of areas 5, 7b and MIP caused misreaching in the dark but had little effect on reaching in the light. The results suggest that the two divisions of the parietal cortex organize limb movements in distinct spatial coordinate systems. Area 7a/7ab/LIP is essential for spatial coordination of visual motor transformations. Area 5/7b/MIP is essential for the spatial coordination of arm movements in relation to proprioceptive and efference copy information. Neither part of the parietal lobe appears to be important for the non-standard, non-spatial transformations of response selection. Received: 5 June 1996 / Accepted: 12 February 1997

8.
The kinematics of straight reaching movements can be specified vectorially by the direction of the movement and its extent. To explore the representation in the brain of these two properties, psychophysical studies have examined learning of visuomotor transformations of either rotation or gain and their generalization. However, the neuronal substrates of such complex learning are only beginning to be addressed. As an initial step in ensuring the validity of such investigations, it must be shown that monkeys indeed learn and generalize visuomotor transformations in the same manner as humans. Here, we analyze trajectories and velocities of movements as monkeys adapt to either rotational or gain transformations. We used rotations with different signs and magnitudes, and gains with different signs, and analyzed transfer of learning to untrained movements. The results show that monkeys can adapt to both types of transformation with a time course that resembles human learning. Analysis of the aftereffects reveals that rotation is learned locally and generalizes poorly to untrained directions, whereas gain is learned more globally and can be transferred to other amplitudes. The results lend additional support to the hypothesis that reaching movements are learned locally but can be easily rescaled to other magnitudes by scaling the peak velocity. The findings also indicate that reaching movements in monkeys are planned and executed very similarly to those in humans. This validates the underlying presumption that neuronal recordings in primates can help elucidate the mechanisms of motor learning in particular and motor planning in general.
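For concreteness, the two perturbation types contrasted here can be written as operations on a planned 2-D movement vector. This is a generic sketch of visuomotor rotation and gain, not the code used in the monkey experiments.

```python
# Generic visuomotor rotation vs. gain perturbation applied to a planned movement vector.
import numpy as np

def apply_rotation(vec, angle_deg):
    """Rotate a planned 2-D movement vector by a fixed angle (visuomotor rotation)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ vec

def apply_gain(vec, gain):
    """Rescale movement extent without changing its direction (gain transformation)."""
    return gain * vec

movement = np.array([10.0, 0.0])        # planned 10-cm movement to the right
print(apply_rotation(movement, 45.0))   # direction is perturbed
print(apply_gain(movement, 1.5))        # extent is perturbed
```

The abstract's finding fits this decomposition: a rotation alters direction, which generalizes only locally to untrained directions, whereas a gain only rescales extent and so transfers readily to other amplitudes.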

9.
The principal goal of our study is to gain an insight into the representation of peripersonal space. Two different experiments were conducted in this study. In the first experiment, subjects were asked to represent principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were achieved with and without visual guidance, were considered as the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors were related to the reference planes. Errors were minimal for the sagittal and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that different planes are centrally represented and are characterized by different errors when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of reference planes was not constrained by the type of motor tasks. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing are strongly correlated. These results suggest that similar patterns of errors and arm posture correlation, for drawing and pointing, can be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.
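One plausible way to quantify the orientation error of a drawn ellipse relative to a requested reference plane is sketched below: fit a plane to the 3-D drawing samples and measure the angle between the fitted normal and the normal of the requested plane. The fitting method and the simulated 10° tilt are my assumptions, not necessarily the authors' procedure.

```python
# Plane-fit sketch for orientation error (illustrative assumptions, simulated data).
import numpy as np

def plane_orientation_error(points_xyz, requested_normal):
    """Angle (deg) between the best-fit plane of the drawing and the requested plane."""
    centered = points_xyz - points_xyz.mean(axis=0)
    # The right singular vector with the smallest singular value is the fitted plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    fitted_normal = vt[-1]
    cosang = abs(fitted_normal @ requested_normal) / np.linalg.norm(requested_normal)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical drawing: an ellipse meant to lie in the sagittal (y-z) plane,
# tilted by 10 degrees about the vertical axis.
t = np.linspace(0, 2 * np.pi, 200)
ellipse = np.column_stack([np.zeros_like(t), 15 * np.cos(t), 10 * np.sin(t)])
tilt = np.deg2rad(10)
Rz = np.array([[np.cos(tilt), -np.sin(tilt), 0],
               [np.sin(tilt),  np.cos(tilt), 0],
               [0, 0, 1]])
tilted = ellipse @ Rz.T
print(plane_orientation_error(tilted, requested_normal=np.array([1.0, 0.0, 0.0])))  # ~10 deg
```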

10.
Visual information is mapped with respect to the retina within the early stages of the visual cortex. On the other hand, the brain has to achieve a representation of object location in a coordinate system that matches the reference frame used by the motor cortex to code reaching movement in space. The mechanism of the necessary coordinate transformation between the different frames of reference from the visual to the motor system as well as its localization within the cerebral cortex is still unclear. Coordinate transformation is traditionally described as a series of elementary computations along the visuomotor cortical pathways, and the motor system is thought to receive target information in a body-centered reference frame. However, neurons along these pathways have a number of similar properties and receive common input signals, suggesting that a non-retinocentric representation of object location in space might be available for sensory and motor purposes throughout the visuomotor pathway. This paper reviews recent findings showing that elementary input signals, such as retinal and eye position signals, reach the dorsal premotor cortex. We will also compare eye position effects in the premotor cortex with those described in the posterior parietal cortex. Our main thesis is that appropriate sensory input signals are distributed across the visuomotor continuum, and could potentially allow, in parallel, the emergence of multiple and task-dependent reference frames. Received: 21 September 1998 / Accepted: 19 March 1999  相似文献   

11.
It is now well established that the accuracy of pointing movements to visual targets is worse in the full open loop condition (FOL; the hand is never visible) than in the static closed loop condition (SCL; the hand is only visible in static position prior to movement onset). In order to account for this result, it is generally admitted that viewing the hand in static position (SCL) improves the movement planning process by allowing a better encoding of the initial state of the motor apparatus. Interestingly, this widespread interpretation has recently been challenged by several studies suggesting that the effect of viewing the upper limb at rest might be explained in terms of the simultaneous vision of the hand and target. This result is supported by recent studies showing that goal-directed movements involve different types of planning (egocentric versus allocentric) depending on whether the hand and target are seen simultaneously or not before movement onset. The main aim of the present study was to test whether or not the accuracy improvement observed when the hand is visible before movement onset is related, at least partially, to a better encoding of the initial state of the upper limb. To address this question, we studied experimental conditions in which subjects were instructed to point with their right index finger toward their unseen left index finger. In that situation (proprioceptive pointing), the hand and target are never visible simultaneously, and an improvement of movement accuracy in SCL, with respect to FOL, may only be explained by a better encoding of the initial state of the moving limb when vision is present. The results of this experiment showed that both the systematic and the variable errors were significantly lower in the SCL than in the FOL condition. This suggests: (1) that the effect of viewing the static hand prior to motion does not only depend on the simultaneous vision of the goal and the effector during movement planning; (2) that knowledge of the initial upper limb configuration or position is necessary to accurately plan goal-directed movements; (3) that static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture, and/or hand location relative to the body; and (4) that static visual information significantly improves the representation provided by the static proprioceptive channel. Received: 23 July 1996 / Accepted: 13 December 1996
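The two error measures compared across the FOL and SCL conditions can be computed as follows. These are standard textbook definitions (systematic or constant error as the offset of the mean endpoint, variable error as scatter about the mean); the paper may define them somewhat differently, and the endpoint data here are simulated.

```python
# Systematic vs. variable pointing error (standard definitions, simulated endpoints).
import numpy as np

def systematic_error(endpoints, target):
    """Distance between the mean endpoint and the target position."""
    return np.linalg.norm(endpoints.mean(axis=0) - target)

def variable_error(endpoints):
    """RMS distance of endpoints from their own centroid."""
    centered = endpoints - endpoints.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

rng = np.random.default_rng(1)
target = np.array([30.0, 10.0])   # hypothetical target position (cm)
endpoints = target + np.array([1.5, -0.5]) + 0.8 * rng.standard_normal((40, 2))
print("systematic error:", systematic_error(endpoints, target))
print("variable error:  ", variable_error(endpoints))
```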

12.
On the timing of reference frames for action control
This study investigated the time course and automaticity of spatial coding of visual targets for pointing movements. To provide an allocentric reference, placeholders appeared on a touch screen either 500 ms before target onset, or simultaneously with target onset, or at movement onset, or not at all (baseline). With both blocked and randomized placeholder timing, movements to the most distant targets were only facilitated when placeholders were visible before movement onset. This result suggests that allocentric target coding is most useful during movement planning and that this visuo-spatial coding mechanism is not sensitive to strategic effects.

13.
The accuracy of reaching movements to memorized visual target locations is presumed to be determined largely by central planning processes before movement onset. If so, then the initial kinematics of a pointing movement should predict its endpoint. Our study examined this hypothesis by testing the correlation between peak acceleration, peak velocity, and movement amplitude and the correspondence between the respective spatial positions of these kinematic landmarks. Subjects made planar horizontal reaching movements to targets located at five different distances and along five radially arrayed directions without visual feedback during the movements. The spatial dispersion of the positions of peak acceleration, peak velocity, and endpoint all tended to form ellipses oriented along the movement trajectory. However, whereas the peaks of acceleration and velocity scaled strongly with movement amplitude for all of the movements made at the five target distances in any one direction, the correlations with movement amplitude were more modest for trajectories aimed at each target separately. Furthermore, the spatial variability in direction and extent of the distribution of positions of peak acceleration and peak velocity did not scale differently with target distance, whereas they did for endpoint distributions. Therefore, certain features of the final kinematics are evident in the early kinematics of the movements, as predicted by the hypothesis that they reflect planning processes. However, endpoint distributions were not completely predetermined by the initial kinematics. In contrast, multivariate analysis suggests that adjustments to movement duration help compensate for the variability of the initial kinematics to achieve the desired movement amplitude. These compensatory adjustments do not contradict the general conclusion that the systematic patterns in the spatial variability observed in this study reflect planning processes. On the contrary, and consistent with that conclusion, our results provide further evidence that direction and extent of reaching movements are planned and determined in parallel over time. Received: 23 March 1998 / Accepted: 2 September 1998
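The two analyses described above, correlating kinematic peaks with movement amplitude and characterizing endpoint scatter as an ellipse oriented along the trajectory, can be sketched as below. All numbers are simulated placeholders; only the analysis pattern follows the abstract.

```python
# Illustrative sketch (simulated data, not the study's recordings) of the two analyses.
import numpy as np

rng = np.random.default_rng(2)

# 1) Peak velocity should scale with movement amplitude across target distances.
amplitude = rng.uniform(5, 25, size=100)                       # movement extent (cm)
peak_velocity = 4.0 * amplitude + rng.normal(0, 5, size=100)   # noisy linear scaling
r = np.corrcoef(amplitude, peak_velocity)[0, 1]
print(f"amplitude vs peak velocity: r = {r:.2f}")

# 2) Endpoint scatter for one target: variability is larger along the movement
# direction (extent) than across it (direction), so the dispersion ellipse's
# major axis lies along the trajectory.
along = rng.normal(0, 1.5, size=200)      # extent variability (cm)
across = rng.normal(0, 0.5, size=200)     # direction variability (cm)
endpoints = np.column_stack([20 + along, across])   # target 20 cm straight ahead
evals, evecs = np.linalg.eigh(np.cov(endpoints.T))
major_axis = evecs[:, np.argmax(evals)]
print("endpoint-ellipse major axis:", major_axis)   # close to [1, 0] (or its negation)
```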

14.
Nine infants were tested, at the age of onset of reaching, seated on their parent’s lap and reaching for a small plastic toy. Kinematic analysis revealed that infants largely used shoulder and torso rotation to move their hands to the toy. Many changes in hand direction were observed during reaching, with later direction changes correcting for earlier directional errors. Approximately half of the infants started many reaches by bringing their hands backward or upward to a starting location that was similar across reaches. Individual infants often achieved highly similar peak speeds across their reaches. These results support the hypothesis that infants reduce the complexity of movement by using a limited number of degrees-of-freedom, which could simplify and accelerate the learning process. The proximodistal direction of maturation of the neural and muscular systems appears to restrict arm and hand movement in a way that simplifies learning to reach. Received: 27 July 1998 / Accepted: 26 March 1999

15.
The maturation of manual dexterity and other sensorimotor functions was assessed with various behavioural tests. In healthy children (age 4–5 years) and in adults, the kinematics of reaching and grasping, a bimanual task and fast repetitive tapping movements were analysed. Furthermore, a comprehensive motor function score (MOT), probing agility and balance, was evaluated. In the prehension task, the straightness of the reaching trajectories increased with age. Children opened their grip relatively wider than adults, thus grasping with a higher safety margin. The speed of both tapping and bimanual movements increased with age, and higher scores were reached in the MOT. Although the different behavioural tests sensitively indicated maturational changes, their results were generally not correlated, i.e. the outcome of a particular test could not predict the results of other tasks. Hence there is no simple and uniform relationship between different behavioural data describing maturation of sensorimotor functions. Received: 20 July 1998 / Accepted: 11 December 1998

16.
Target viewing time and velocity effects on prehension
The goal of the present study was to understand which characteristics (movement time or velocity) of target motion are important in the control and coordination of the transport and grasp-preshape components of prehensile movements during an interception task. Subjects were required to reach toward, grasp and lift an object as it entered a target area. Targets approached along a track at four velocities (500, 750, 1000 and 1250 mm/s) which were presented in two conditions. In the distance-controlled condition, targets moving at all velocities traveled the same distance. In the viewing-time-controlled condition, combinations of velocity and starting distances were performed such that the moving target was visible for 1000 ms for all trials. Analyses of kinematic data revealed that when target distance was controlled, velocity affected all transport-dependent measures; however, when viewing time was controlled, these dependent measures were no longer affected by target velocity. Thus, the use of velocity information was limited in the viewing-time-controlled condition, and subjects used other information, such as target movement time, when generating the transport component of the prehensile movement. For the grasp-preshape component, both peak aperture and peak-aperture velocity increased as target velocity increased, regardless of condition, indicating that target velocity was used to control the spatial aspects of aperture formation. However, the timing of peak aperture was affected by target velocity in the distance-controlled condition, but not in the viewing-time-controlled condition. These results provide evidence for the autonomous generation of the spatial and temporal aspects of grasp preshape. Thus, an independence between the transport and grasp-preshape phases was found, whereby the use of target velocity as a source of information for generating the transport component was limited; however, target velocity was an important source of information in the grasp-preshape phase. Received: 16 March 1998 / Accepted: 2 February 1999
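The viewing-time-controlled condition implies a simple relation between target velocity and starting distance, reconstructed below from the values given in the abstract (this is my reading of the design, not code from the study).

```python
# Starting distances that keep the moving target visible for exactly 1000 ms.
viewing_time_s = 1.0
for velocity_mm_s in (500, 750, 1000, 1250):
    start_distance_mm = velocity_mm_s * viewing_time_s
    print(f"{velocity_mm_s} mm/s -> start {start_distance_mm:.0f} mm from the target area")
```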

17.
The performances of a deafferented patient and five control subjects have been studied during a self-driven passing task in which one hand has to grasp an object transported by the other hand and in a unimanual reach-to-grasp task. The kinematics of the reach and grasp components and the scaling of the grip aperture recorded for the self-driven passing task were very similar in controls and the deafferented subject (GL). In contrast, for the unimanual task when vision was absent, GL’s coordination between reaching and grasping was delayed in space and time compared with the control subjects. In addition, frequent reopening of the grip was observed in GL during the final closure phase of the unimanual prehension task. These results support the notion that afferent proprioceptive information resulting from the reaching movement – which seemed to be used to coordinate reaching and grasping commands in the unimanual task – is no longer necessary in the self-induced passing task. Finally, for the externally driven passing task, when the object was passively transported by the experimenter, the coordination was consistently modified in all subjects; grip aperture onset was delayed, thus asserting a specific contribution of the central command or feedforward mechanisms into the anticipation of the grasp onset observed in the self-driven passing task. The origin and nature of the information necessary for building up the feedforward mechanisms remains to be elucidated. Received: 21 August 1998 / Accepted: 29 January 1999

18.
This study investigates how a change in the physical relation between objects (two-dimensional, 2-D, angles) and a subject, as well as scanning conditions, modifies the ability to discriminate small changes in 2-D shape. Subjects scanned pairs of angles (90° standard; 91°–103° comparison angles) with the right index finger of the outstretched arm, identifying the larger of each pair. When joint rotation was restricted to the shoulder, the discrimination threshold significantly increased when the angles were explored with the shoulder in a more eccentric position rather than closer to the midline (60° versus 30° to the right). This result was attributed to changes in proprioceptive sensitivity, since explorations restricted to distal joints (wrist/second metacarpophalangeal joint) showed no change with shoulder position. The results showed, moreover, that discrimination threshold was similar for distal and proximal joints when the delay between scanning the pairs of angles was long (15 s). This observation suggests that regional variations in proprioceptive acuity (proximal > distal) may reflect an adaptation to generate an invariant central representation of haptic shape. Using a shorter interscan delay (5 s), a position-dependent increase in discrimination threshold was revealed for distal explorations, an effect that disappeared when the head was turned in the direction of the unseen angle (vision occluded). We suggest that these results can be explained by the existence of two competing egocentric frames of reference with different time courses, one of short duration that is centred on the arm/hand, and a second of longer duration centred on the head. At the short delay, the reference frames interacted to distort the haptic representation when they were misaligned. This distortion was resolved at the long delay, possibly through suppression of the arm/hand-centred reference frame.

19.
Patients with unilateral neglect following right hemisphere damage may have difficulty in moving towards contralesional targets. To test the hypothesis that this impairment arises from competing motor programs triggered by irrelevant ipsilesional stimuli, we examined 16 right hemisphere patients, eight with left visual neglect and eight without, in addition to eight healthy control subjects. In experiment 1 subjects performed sequences of movements using their right hand to targets on the contralesional or ipsilesional side of the responding limb. The locations of successive targets in each sequence were either predictable or unpredictable. In separate blocks of trials, targets appeared either alone or with a simultaneous distractor located at the immediately preceding target location. Neglect patients were significantly slower to execute movements to contralesional targets, but only for unpredictable movements and in the presence of a concurrent ipsilesional distractor. In contrast, healthy controls and right hemisphere patients without neglect showed no directional asymmetries of movement execution. In experiment 2 subjects were required to interrupt a predictable, reciprocating sequence of leftward and rightward movements in order to move to an occasional, unpredictable target that occurred either in the direction opposite to that expected, or in the same direction but twice the extent. Neglect patients were significantly slower in reprogramming the direction and extent of movements towards contralesional versus ipsilesional targets, and they also made significantly more errors when executing such movements. Right hemisphere patients without neglect showed a similar bias in reprogramming direction (but not extent) for contralesional targets, whereas healthy controls showed no directional asymmetry in either condition. On the basis of these findings we propose that neglect involves a competitive bias in favour of motor programs for actions directed towards ipsilesional versus contralesional events. We suggest that programming errors and increased latencies for contralesional movements arise because the damaged right hemisphere can no longer effectively inhibit the release of inappropriate motor programs towards ipsilesional events. Received: 1 October 1996 / Accepted: 21 October 1997

20.
The cerebellar interposed nuclei are considered critical components of circuits controlling the classical conditioning of eyeblink responses in several mammalian species. The main purpose of the present experiments was to examine whether the interposed nuclei are also involved in the control of classically conditioned withdrawal responses in other skeletomuscular effector systems. To achieve this objective, a unique learning paradigm was developed to examine classically conditioned withdrawal responses in three effector systems (the eyelid, forelimb and hindlimb) in individual cats. Trained animals were injected with muscimol in the cerebellar interposed nuclei, and the effects on the three conditioned responses (CRs) were examined. Although the effects of muscimol were less dramatic than previously reported in the rabbit eyeblink preparation, the inactivation of the cerebellar nuclei affected the performance of CRs in all three effector systems. In additional experiments, animals were injected with muscimol at the sites affecting classically conditioned withdrawal responses to determine the effects of these injections on reaching and locomotion behaviors. These tests demonstrated that the same regions of the cerebellar interposed nuclei which control withdrawal reflexes are also involved in the control of limb flexion and precision placement of the paw during both locomotion and reaching tasks. The obtained data indicate that the interposed nuclei are involved in the control of ipsilateral action primitives and that inactivating the interposed nuclei affects several modes of action of these functional units. Received: 15 June 1998 / Accepted: 5 November 1998
