Similar Articles
20 similar articles found (search time: 31 ms)
1.
Recent research suggests that basal ganglia dysfunction may result in problems integrating concurrent vision and proprioception during movement. We evaluated dopaminergic system involvement in this sensorimotor process during locomotion within a large sample of Parkinson's disease (PD) patients while "On" and "Off" their dopaminergic medications (n=25), in conditions that selectively manipulated the availability of proprioception, vision or both. The present experiment focused on two main objectives: i) to examine the relative influence of visual and proprioceptive inputs on locomotion and target accuracy in patients with PD; and ii) to examine the influence of dopamine replacement therapy on sensorimotor integration while moving toward the target. All participants walked at a self-selected pace on a GAITRite carpet in two baseline conditions (light and dark), as well as four experimental darkness conditions: a) to a remembered target (i.e. proprioception only), b) to a remembered target with a light on the chest for body position awareness (proprioception plus), c) with vision of a lit target, also with a light on the chest (vision and proprioception), d) pushed in a wheelchair to a remembered target (no proprioception or vision). Final position was measured by 2-D radial error, which revealed a group-by-condition interaction, suggesting that PD patients "Off" their medications move to targets with less accuracy, but approach the accuracy of healthy participants when in the "On" state. Both PD patients and healthy participants improved their accuracy with the availability of concurrent vision and proprioception (condition c). Interestingly, our results demonstrate that PD patients "Off" performed the task with greater difficulty than when "On" medication, but only when proprioception was the sole source of feedback. Since PD patients, whether medicated or unmedicated, were even more affected when proprioception was removed (wheelchair), a memory-related explanation can be ruled out.
Our results suggest that the basal ganglia are not specifically involved in visuoproprioceptive integration; however, assimilation of proprioceptive feedback to guide an ongoing movement may be a critical function of the basal ganglia.
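The 2-D radial error used as the endpoint measure above is simply the Euclidean distance between the final stopping position and the target. A minimal sketch (the function name and coordinates are illustrative, not from the study):

```python
import math

def radial_error_2d(endpoint, target):
    """2-D radial error: Euclidean distance between a final position
    and the target, both given as (x, y) in the same units."""
    dx = endpoint[0] - target[0]
    dy = endpoint[1] - target[1]
    return math.hypot(dx, dy)

# Example: stopping 3 cm short and 4 cm to the left of the target
print(radial_error_2d((97.0, -4.0), (100.0, 0.0)))  # 5.0
```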

2.
To localize one's hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information or visual information or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each of these series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model also can explain the unexpectedly small sizes of the variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
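The weighting scheme this model describes is the standard minimum-variance (inverse-variance) combination of two cues: each estimate is weighted by its reliability, so the fused estimate lies closer to the more precise cue and has lower variance than either alone. A one-dimensional sketch that leaves out the direction-dependence of the weights (function name and all numbers are illustrative):

```python
def fuse_estimates(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (maximum-likelihood) combination of two
    position estimates. Weights are proportional to inverse variance,
    so the more reliable cue dominates, and the fused variance is
    smaller than either unimodal variance."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    w_prop = 1.0 - w_vis
    x_fused = w_vis * x_vis + w_prop * x_prop
    var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_fused, var_fused

# Vision locates the hand at 10 cm (var 1 cm^2), proprioception
# at 14 cm (var 3 cm^2): the fused estimate sits nearer to vision.
x, v = fuse_estimates(10.0, 1.0, 14.0, 3.0)
print(x, v)  # 11.0 0.75
```

Note that the fused variance (0.75) is below both unimodal variances, which is the signature of efficient integration the abstract refers to.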

3.
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject’s right index finger-tip (visual and proprioceptive encoding, V–P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V–P passive) and actively (V–P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V–P active and V–P passive conditions in the speed of apparent motion. The persistence of the visual illusion during the active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.

4.
We used electroencephalography to see how the brain deals with altered sensory processing demands in lower extremity movements. In unimodal conditions, sensory processing demands were altered with subjects performing movement to a small or large visual target, or with a small or large weight to modify proprioception. In bimodal conditions, both weight and targets needed to be met. We assessed activity over primary sensorimotor, premotor and parietal areas before and during knee movements. In unimodal conditions, the primary sensorimotor area showed the least sensitivity to the maximally increased sensory demand in both vision and proprioception, while the premotor region was most sensitive to proprioceptive demands, and the parietal region showed greatest sensitivity to visual demands. In bimodal conditions, intermediate levels of sensory processing demand maximally increased activation at premotor and parietal regions. However, when visual and proprioceptive demands were both maximal, activation decreased and was similar to that seen with the lowest level of sensory processing demand. As behavior was consistent across conditions while activation at these regions decreased, we suggest that additional brain areas, possibly high order cognitive and attentional regions, may be required to augment the function of the traditional sensorimotor network in lower extremity movements with increasingly difficult sensory processing demands.

5.
The roles of visual and somatosensory information in arm movement planning remain enigmatic. Previous studies have examined these roles by dissociating visual and somatosensory cues about limb position prior to movement onset and examining the resulting effects on movements performed in the horizontal plane. Here we examined the effects of misaligned limb position cues prior to movement onset as reaches were planned and executed along different directions in the vertical plane. Movements were planned with somatosensory and visual feedback aligned at the starting position of the reach or with visual feedback displaced horizontally (Experiment 1) or vertically (Experiment 2). As in the horizontal plane, changes in movement directions induced by misaligned feedback indicated that vision and proprioception were both generally taken into account when planning vertical plane movements. However, we also found evidence that the contributions of vision and proprioception differed across target directions and between directions of displaced visual feedback. These findings suggest that the contributions of vision and proprioception to movement planning in the vertical plane reflect the unique multisensory and biomechanical demands associated with moving against gravity.

6.
In a previous study we investigated how the CNS combines simultaneous visual and proprioceptive information about the position of the finger. We found that localization of the index finger of a seen hand was more precise (a smaller variance) than could reasonably be expected from the precision of localization on the basis of vision only and proprioception only. This suggests that, in localizing the tip of the index finger of a seen hand, the CNS may make use of more information than proprioceptive information and visual information about the fingertip. In the present study we investigate whether this additional information stems from additional sources of sensory information. In experiment 1 we tested whether seeing an entire arm instead of only the fingertip gives rise to a more precise proprioceptive and/or visual localization of that fingertip. In experiment 2 we checked whether the presence of a structured visual environment leads to a more precise proprioceptive localization of the index finger of an unseen hand. In experiment 3 we investigated whether looking in the direction of the index finger of an unseen hand improves proprioceptive localization of that finger. We found no significant effect in any of the experiments. The results refute the hypothesis that the investigated effects can explain the previously reported very precise localization of a seen hand. This suggests that localization of a seen finger is based exclusively on proprioception and on vision of the finger. The results suggest that these sensory signals may contain more information than is described by the magnitude of their variances. Received: 12 March 1998 / Accepted: 11 August 1998

7.
It has been theorized that sensorimotor processing deficits underlie Parkinson's disease (PD) motor impairments, including movement under proprioceptive control. However, it is possible that these sensorimotor processing deficits exclude tactile/proprioception sensorimotor integration: prior studies show improved movement accuracy in PD with endpoint tactile feedback, and good control in tactile-driven precision-grip tasks. To determine whether tactile/proprioceptive integration in particular is affected by PD, nine subjects with PD (off-medication, UPDRS motor = 19–42) performed an arm-matching task without visual feedback. In some trials one arm touched a static tactile cue that conflicted with dynamic proprioceptive feedback from biceps brachii muscle vibration. This sensory conflict paradigm has characterized tactile/proprioceptive integration in healthy subjects as specific to the context of tactile cue mobility assumptions and the intention to move the arm. We found that the individuals with PD had poorer arm-matching accuracy than age-matched control subjects. However, PD-group accuracy improved with tactile feedback. Furthermore, sensory conflict conditions were resolved in the same context-dependent fashion by both subject groups. We conclude that the somatosensory integration mechanisms for prioritizing tactile and proprioceptive feedback in this task are not disrupted by PD, and are not related to the observed proprioceptive deficits.

8.
The goal of this study was to determine whether the sensory nature of a target influences the roles of vision and proprioception in the planning of movement distance. Two groups of subjects made rapid, elbow extension movements, either toward a visual target or toward the index fingertip of the unseen opposite hand. Visual feedback of the reaching index fingertip was only available before movement onset. Using a virtual reality display, we randomly introduced a discrepancy between actual and virtual (cursor) fingertip location. When subjects reached toward the visual target, movement distance varied with changes in visual information about initial hand position. For the proprioceptive target, movement distance varied mostly with changes in proprioceptive information about initial position. The effect of target modality was already present at the time of peak acceleration, indicating that this effect includes feedforward processes. Our results suggest that the relative contributions of vision and proprioception to motor planning can change, depending on the modality in which task relevant information is represented.

9.
Synesthetic congruency modulates the temporal ventriloquism effect
People sometimes find it easier to judge the temporal order in which two visual stimuli have been presented if one tone is presented before the first visual stimulus and a second tone is presented after the second visual stimulus. This enhancement of people's visual temporal sensitivity has been attributed to the temporal ventriloquism of the visual stimuli toward the temporally proximate sounds, resulting in an expansion of the perceived interval between the two visual events. In the present study, we demonstrate that the synesthetic congruency between the auditory and visual stimuli (in particular, between the relative pitch of the sounds and the relative size of the visual stimuli) can modulate the magnitude of this multisensory integration effect: The auditory capture of vision is larger for pairs of auditory and visual stimuli that are synesthetically congruent than for pairs of stimuli that are synesthetically incongruent, as reflected by participants' increased sensitivity in discriminating the temporal order of the visual stimuli. These results provide the first evidence that multisensory temporal integration can be affected by the synesthetic congruency between the auditory and visual stimuli that happen to be presented.

10.
Enhanced behavioral performance mediated by multisensory stimuli has been shown using a variety of measures, including response times, orientation behaviors, and even simple stimulus detection. However, there has been little evidence for a multisensory-mediated improvement in stimulus localization. We suggest that this lack of effect may be a result of the high acuity of the visual system. To examine whether normal visual acuity may be masking any potential multisensory benefit for stimulus localization, we examined the ability of human subjects to localize visual, auditory and combined visual-auditory targets under conditions of normal and degraded vision. Under conditions of normal vision, localization precision (i.e., variability) was equivalent for visual and multisensory targets, and was significantly worse for auditory targets. In contrast, under conditions of induced myopia, visual localization performance was degraded by an average of 25%, while auditory localization performance was unaffected. However, during induced myopia, multisensory (i.e., visual-auditory) localization performance was significantly improved relative to visual performance. These results show a multisensory-mediated enhancement in human localization ability, and illustrate the cross-modal benefits that can be obtained when spatial information in one sense is compromised or ambiguous.

11.
Several perceptual studies have shown that the ability to estimate the location of the arm degrades quickly during visual occlusion. To account for this effect, it has been suggested that proprioception drifts when not continuously calibrated by vision. In the present study, we re-evaluated this hypothesis by isolating the proprioceptive component of position sense (i.e., the subjects were forced to rely exclusively on proprioception to locate their hand, which was not the case in earlier studies). Three experiments were conducted. In experiment 1, subjects were required to estimate the location of their unseen right hand, at rest, using a visual spot controlled by the left hand through a joystick. Results showed that the mean accuracy was identical whether the localization task was performed immediately after the positioning of the hand or after a 10-s delay. In experiments 2 and 3, subjects were required to point, without vision of their limb, to visual targets. These two experiments relied on the demonstration that biases in the perception of the initial hand location induced systematic variations of the movement characteristics (initial direction, final accuracy, end-point variability). For these motor tasks, the subjects did not pay attention to the initial hand location, which removed the possible occurrence of confounding cognitive strategies. Results indicated that movement characteristics were, on average, not affected when a 15-s or 20-s delay was introduced between the positioning of the arm at the starting point and the presentation of the target. When considered together, our results suggest that proprioception does not quickly drift in the absence of visual information. The potential origin of the discrepancy between our results and earlier studies is discussed.

12.
Increasing evidence suggests that the pathophysiology of movement disorders in Parkinson's disease (PD) includes deficits in sensory processing and integration. However, the exact nature of these deficits and the ability of dopamine medication to correct them have not been thoroughly examined in previous studies. For instance, it remains unclear whether PD patients have globally impaired sensorimotor integration functions or selective deficiencies in processing proprioception. We evaluated the specific deficits of PD patients in sensorimotor integration and proprioceptive processing by testing their ability to perform three-dimensional (3D) reaching movements in four conditions in which the sensory signals defining target and hand positions (visual and/or proprioceptive) varied. Ten healthy subjects and 11 PD patients, ON dopamine medication and in the OFF state, were tested. PD patients in the OFF state showed a greater mean level of 3D errors relative to controls when the only available sensory information about target and hand position came from proprioception, but this difference did not reach significance. This indicates that deficient proprioception is not an early key feature of PD. Interestingly, the inaccuracies of a number of PD subjects further increased in the ON medicated state relative to healthy controls when reaching to proprioceptively-defined targets, and this between-group difference was statistically significant. However, dopamine medication did not consistently degrade the reaching accuracy of PD patients, with both negative and positive effects on accuracy of reaching to proprioceptively-defined targets. Together, these findings indicate that dopamine replacement therapy not only did not normalize sensorimotor performance to the level of controls, but also induced deficits in the processing of proprioceptive information in some of the PD patients tested. Furthermore, the diversity of effects of medication on accuracy of reaching to proprioceptively-defined targets supports the idea that dysfunction of dopaminergic circuits within the basal ganglia is not primarily responsible for the proprioceptive processing deficits of PD patients.

13.
The vestibular system has been shown to contribute to multisensory integration by balancing conflictual sensory information. It remains unclear whether such modulation of exteroceptive (e.g., vision), proprioceptive, and interoceptive (e.g., affective touch) sensory sources is influenced by epistemically different aspects of tactile stimulation (i.e., felt from within vs. seen, vicarious touch). In the current study, we aimed to (a) replicate previous findings regarding the effects of galvanic stimulation of the right vestibular network in multisensory integration, and (b) examine vestibular contributions to multisensory integration when touch is felt but not seen (and vice versa). During artificial vestibular stimulation (LGVS, i.e., right vestibular stimulation), RGVS (i.e., bilateral stimulation), and sham (i.e., placebo stimulation), healthy participants (N = 36, Experiment 1; N = 37, Experiment 2) looked at a rubber hand while either their own unseen hand or the rubber hand were touched by affective or neutral touch. We found that (a) LGVS led to enhancement of vision over proprioception during visual only conditions (replicating our previous findings), and (b) LGVS (versus sham) favored proprioception over vision when touch was felt (Experiment 1), with the opposite results when touch was vicariously perceived via vision (Experiment 2) and with no difference between affective and neutral touch. We showed how vestibular signals modulate the weight of each sensory modality according to the context in which they are perceived and that such modulation extends to different aspects of tactile stimulation: felt and seen touch are differentially balanced in multisensory integration according to their epistemic relevance.

14.
Motor control tasks like stance or object handling require sensory feedback from proprioception, vision and touch. The distinction between tactile and proprioceptive sensors is not frequently made in dynamic motor control tasks, and if so, mostly based on signal latency. We previously found that force control tasks entail more compliant behavior than a passive, relaxed condition and by neuromuscular modeling we were able to attribute this to adaptations in proprioceptive force feedback from Golgi tendon organs. This required the assumption that both tactile and visual feedback are too slow to explain the measured adaptations in face of unpredictable force perturbations. Although this assumption was shown to hold using model simulations, so far no experimental data is available to validate it. Here we applied a systematic approach using continuous perturbations and engineering analyses to provide experimental evidence for the hypothesis that motor control adaptation in force control tasks can be achieved using proprioceptive feedback only. Varying task instruction resulted in substantial adaptations in neuromuscular behavior, which persisted after eliminating visual and/or tactile feedback by a nerve block of the nervus plantaris medialis. It is concluded that proprioception adapts dynamic human ankle motor control even in the absence of visual and tactile feedback.

15.
Different sensory systems (e.g. proprioception and vision) have a combined influence on the perception of body orientation, but the timescale over which they can be integrated remains unknown. Here we examined how visual information and neck proprioception interact in perception of the "subjective straight ahead" (SSA), as a function of time since initial stimulation. In complete darkness, healthy subjects directed a laser spot to the point felt subjectively to be exactly straight ahead of the trunk. As previously observed, left neck muscle vibration led to a disparity between subjective perception and objective position of the body midline, with SSA misplaced to the left. We found that this displacement was sustained throughout 28 min of continuous proprioceptive stimulation, provided there was no visual input. Moreover, prolonged vibration of neck muscles leads to a continuing disparity between subjective and objective body orientation even after offset of the vibration; the longer the preceding vibration, the more persistent the illusory deviation of body orientation. To examine the role of vision, one group of subjects fixated a central visual target at the start of each block of continuous neck vibration, with SSA then measured at successive intervals in darkness. The illusory deviation of SSA was eliminated whenever visual input was provided, but returned as a linear function of time when visual information was eliminated. These results reveal: the persistent effects of neck proprioception on the SSA, both during and after vibration; the influence of vision; and integration between incoming proprioceptive information and working memory traces of visual information.

16.
The saccade generator updates memorized target representations for saccades during eye and head movements. Here, we tested if proprioceptive feedback from the arm can also update handheld object locations for saccades, and what intrinsic coordinate system(s) is used in this transformation. We measured radial saccades beginning from a central light-emitting diode to 16 target locations arranged peripherally in eight directions and two eccentricities on a horizontal plane in front of subjects. Target locations were either indicated 1) by a visual flash, 2) by the subject actively moving the handheld central target to a peripheral location, 3) by the experimenter passively moving the subject's hand, or 4) through a combination of the above proprioceptive and visual stimuli. Saccade direction was relatively accurate, but subjects showed task-dependent systematic overshoots and variable errors in radial amplitude. Visually guided saccades showed the smallest overshoot, followed by saccades guided by both vision and proprioception, whereas proprioceptively guided saccades showed the largest overshoot. In most tasks, the overall distribution of saccade endpoints was shifted and expanded in a gaze- or head-centered cardinal coordinate system. However, the active proprioception task produced a tilted pattern of errors, apparently weighted toward a limb-centered coordinate system. This suggests the saccade generator receives an efference copy of the arm movement command but fails to compensate for the arm's inertia-related directional anisotropy. Thus the saccade system is able to transform hand-centered somatosensory signals into oculomotor coordinates and combine somatosensory signals with visual inputs, but it seems to have a poorly calibrated internal model of limb properties.

17.
We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.

18.
19.
Previous research has shown that reach endpoints vary with the starting position of the reaching hand and the location of the reach target in space. We examined the effect of movement direction of a proprioceptive target-hand, immediately preceding a reach, on reach endpoints to that target. Participants reached to visual, proprioceptive (left target-hand), or visual-proprioceptive targets (left target-hand illuminated for 1 s prior to reach onset) with their right hand. Six sites served as starting and final target locations (35 target movement directions in total). Reach endpoints do not vary with the movement direction of the proprioceptive target, but instead appear to be anchored to some other reference (e.g., body). We also compared reach endpoints across the single and dual modality conditions. Overall, the pattern of reaches for visual-proprioceptive targets resembled those for proprioceptive targets, while reach precision resembled those for the visual targets. We did not, however, find evidence for integration of vision and proprioception based on a maximum-likelihood estimator in these tasks.
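The maximum-likelihood benchmark mentioned above makes a concrete, testable prediction: if the two cues are optimally integrated, the bimodal variance should equal the product of the unimodal variances divided by their sum. A small sketch of computing that predicted precision (the unimodal SDs below are assumed, not taken from the study):

```python
import math

def mle_predicted_sd(sd_vis, sd_prop):
    """SD predicted for the bimodal condition under maximum-likelihood
    integration: 1/var_combined = 1/var_vis + 1/var_prop."""
    var_combined = 1.0 / (1.0 / sd_vis**2 + 1.0 / sd_prop**2)
    return math.sqrt(var_combined)

# Hypothetical unimodal SDs (cm): vision 0.6, proprioception 0.8.
# Observed bimodal SDs larger than this prediction argue against
# optimal integration, as reported in the abstract above.
print(round(mle_predicted_sd(0.6, 0.8), 3))  # 0.48
```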

20.
The purpose of this study was to determine the precision of proprioceptive localization of the hand in humans. We derived spatial probability distributions which describe the precision of localization on the basis of three different sources of information: proprioceptive information about the left hand, proprioceptive information about the right hand, and visual information. In the experiment subjects were seated at a table and had to perform three different position-matching tasks. In each task, the position of a target and the position of an indicator were available in a different combination of two of these three sources of information. From the spatial distributions of indicated positions in these three conditions, we derived spatial probability distributions for proprioceptive localization of the two hands and for visual localization. For proprioception we found that localization in the radial direction with respect to the shoulder is more precise than localization in the azimuthal direction. The distributions for proprioceptive localization also suggest that hand positions closer to the shoulder are localized more precisely than positions further away. These patterns can be understood from the geometry of the arm. In addition, the variability in the indicated positions suggests that the shoulder and elbow angles are known to the central nervous system with a precision of 0.6–1.1°. This is a considerably better precision than the values reported in studies on perception of these angles. This implies that joint angles, or quantities equivalent to them, are represented in the central nervous system more precisely than they are consciously perceived. For visual localization we found that localization in the azimuthal direction with respect to the cyclopean eye is more precise than localization in the radial direction. The precision of the perception of visual direction is of the order of 0.2–0.6°. Received: 3 July 1997 / Accepted: 27 March 1998
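The claim that these precision patterns "can be understood from the geometry of the arm" can be illustrated by propagating joint-angle noise through a planar two-link arm: small angular errors at the shoulder and elbow map to endpoint errors via the arm's Jacobian, so positional uncertainty grows with distance from the shoulder. A linearized sketch (link lengths, posture and noise level are hypothetical, not values from the study):

```python
import math

def endpoint_sd(l1, l2, shoulder_deg, elbow_deg, sd_angle_deg):
    """Approximate hand-position variability (overall SD, same units as
    l1/l2) from independent joint-angle noise on a planar two-link arm,
    using the linearized (Jacobian) mapping from angles to position."""
    s = math.radians(sd_angle_deg)
    q1, q2 = math.radians(shoulder_deg), math.radians(elbow_deg)
    # Jacobian of (x, y) = (l1*cos(q1) + l2*cos(q1+q2),
    #                       l1*sin(q1) + l2*sin(q1+q2))
    j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
    j12 = -l2 * math.sin(q1 + q2)
    j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    j22 = l2 * math.cos(q1 + q2)
    var_x = (j11 * s) ** 2 + (j12 * s) ** 2
    var_y = (j21 * s) ** 2 + (j22 * s) ** 2
    return math.sqrt(var_x + var_y)

# 30 cm upper arm and forearm, 1 degree of joint-angle noise,
# at an assumed posture (shoulder 45 degrees, elbow 60 degrees):
print(round(endpoint_sd(30.0, 30.0, 45.0, 60.0, 1.0), 2))  # 1.05
```

Because the Jacobian entries scale with the link lengths, the same angular noise produces larger endpoint spread when the hand is farther from the shoulder, consistent with the pattern reported above.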


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号