Similar Articles (20 results)
1.
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror-symmetric (/ \) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
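One compact way to see why the two tasks make different reference-frame demands is to write the target orientations explicitly. Assuming orientations are expressed as signed angles θ relative to the midsagittal plane (a convention chosen here for illustration, not stated in the abstract), the two instructions map a reference orientation as follows:

\[ \theta_{\text{parallel}} = \theta, \qquad \theta_{\text{mirror}} = -\theta \]

The mirror-symmetric setting is the reflection of the reference about the midsagittal plane, so it preserves the bar's orientation relative to each hand and arm and can in principle be reproduced egocentrically; the parallel setting is defined only in external (allocentric) coordinates. For the vertical orientation (θ = 0 in this convention) the two targets coincide, which is why that orientation required the same response in both tasks.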

2.
What humans haptically perceive as parallel is often far from physically parallel. These deviations from parallelity are highly significant and very systematic. There exists accumulating evidence, both psychophysical and neurophysiological, that what is haptically parallel is decided in a frame of reference intermediate to an allocentric and an egocentric one. The central question here concerns the nature of the egocentric frame of reference. In the literature, various kinds of egocentric reference frames are mentioned for haptic spatial tasks, such as hand-centered, arm-centered, and body-centered frames of reference. Thus far, it has not been possible to distinguish between body-centered, arm-centered, and hand-centered reference frames in our experiments, as hand and arm orientation always covaried with distance from the body-midline. In the current set of experiments the influence of body-centered and hand-centered reference frames could be dissociated. Subjects were asked to make a test bar haptically parallel to a reference bar in five different conditions, in which their hands were oriented straight ahead, rotated to the left, rotated to the right, rotated outward or rotated inward. If the reference frame is body-centered, the deviations should be independent of condition. If, on the other hand, the reference frame is hand-centered, the deviations should vary with condition. The results show that deviation size varies strongly with condition, exactly in the way predicted by the influence of a hand-centered egocentric frame of reference. Interestingly, this implies that subjects do not sufficiently take into account the orientation of their hands.
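The contrast between the two predictions can be made concrete with a toy model. The sketch below (Python; the bias weight, bias angle, and hand rotations are made-up values for illustration, not the authors' model) expresses the parallel-setting deviation as a weighted pull towards an egocentric axis that is fixed either in the body or in the hand:

# Toy model (illustrative only): deviation of the 'parallel' setting from
# veridical under a body-centered versus a hand-centered egocentric bias.
# Angles are in degrees.

def predicted_deviation(hand_rotation_deg, frame, weight=0.4, body_axis_bias_deg=40.0):
    """weight: hypothetical strength of the egocentric pull (0 = purely allocentric).
    body_axis_bias_deg: hypothetical deviation implied by a body-fixed egocentric axis."""
    if frame == "body":
        # bias axis fixed in body coordinates: rotating the hand changes nothing
        return weight * body_axis_bias_deg
    # hand-centered: the biasing axis rotates along with the hand
    return weight * (body_axis_bias_deg + hand_rotation_deg)

# straight ahead, rotated left, rotated right (hypothetical rotation angles)
for rot in (0, -45, 45):
    print(rot, predicted_deviation(rot, "body"), predicted_deviation(rot, "hand"))

A body-centered frame predicts identical deviations across conditions; a hand-centered frame predicts deviations that shift with hand rotation, which is the pattern the experiments report.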

3.
We investigated the contribution of haptic and visual information about object size to both perception and action. Kinematics of the right hand were measured while participants performed grasping actions or manual estimations under the guidance of haptic information from the left hand, binocular visual information, or both haptics and vision. The greatest uncertainty was observed with haptic information alone. Moreover, when visual and haptic sizes were congruent, performance was no different from that with vision alone. Although this gives the appearance that vision dominates, when information from the two senses was incongruent, an influence of haptic cues emerged for both tasks. Our paradigm also allowed us to demonstrate that haptic sensitivity, like visual sensitivity, scales with object size for manual estimation (consistent with Weber’s law) but not for grasping. In sum, although haptics represents a less certain source of information, haptic processing follows similar principles to vision and its contribution to perception and action becomes evident only when cross-modal information is incongruent.
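The Weber's-law claim can be stated in one line; the Weber fraction used below is a hypothetical value chosen only to make the scaling concrete, not a result of the study:

\[ \mathrm{JND}(S) \approx k \cdot S \]

Here S is the physical object size and k the Weber fraction, so the just-noticeable difference, and with it the variability of manual size estimates, grows in proportion to size: with, say, k = 0.05, a 40 mm object would yield roughly 2 mm of size-dependent variability and an 80 mm object roughly 4 mm. The contrasting finding for grasping is that the variability of the maximum grip aperture does not show this proportional growth.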

4.
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-dependent paradigms is unknown. In nine subjects we compared precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) Variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that the variability modulation is restricted to vision-dependent alignments. 2) The estimated body-longitudinal axis reflects the roll-dependent variability of perceived earth-vertical; gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses, the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision significantly decreased with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. As in gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.

5.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference on haptic spatial processing, which supposedly derives from an interaction between an allocentric and an egocentric reference frame. To this end, a haptic parallelity task served as a baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to the effects of both non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.

6.
It has been argued that representations of peripersonal space based on haptic input are systematically distorted by egocentric reference frames. Interestingly, a recent study has shown that noninformative vision (i.e., freely viewing the region above the haptic workspace) improves performance on the so-called haptic parallel-setting task, in which participants are instructed to rotate a test bar until it is parallel to a reference bar. In the present study, we made a start at identifying the different sensory integration mechanisms involved in haptic space perception by distinguishing the possible effects of orienting mechanisms from those of noninformative vision. We found that both the orienting direction of head and eyes and the availability of noninformative vision affect parallel-setting performance and that they do so independently: orienting towards a reference bar facilitated the parallel-setting of a test bar in both no-vision and noninformative vision conditions, and noninformative vision improved performance irrespective of orienting direction. These results suggest that the effects of orienting and noninformative vision on haptic space perception depend on distinct neurocognitive mechanisms, likely to be expressed in different modulations of neural activation in the multimodal parietofrontal network, thought to be concerned with multimodal representations of peripersonal space.

7.
The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.

8.
Delay improves performance on a haptic spatial matching task
Systematic deviations occur when blindfolded subjects set a test bar parallel to a reference bar in the horizontal plane using haptic information (Kappers and Koenderink 1999, Perception 28:781–795; Kappers 1999, Perception 28:1001–1012). These deviations are assumed to reflect the use of a combination of a biasing egocentric reference frame and an allocentric, more cognitive one (Kappers 2002, Acta Psychol 109:25–40). In two experiments, we have examined the effect of delay between the perception of a reference bar and the parallel setting of a test bar. In both experiments a 10-s delay improved performance. The improvement increased with a larger horizontal (left–right) distance between the bars. This improvement was interpreted as a shift from the egocentric towards the allocentric reference frame during the delay period.
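The reference-frame account invoked here is often formalized as a weighted mixture; the expression below is such a sketch (a common formalization consistent with the description, not necessarily the model of the cited papers):

\[ \theta_{\text{set}} = w\,\theta_{\text{ego}} + (1 - w)\,\theta_{\text{allo}}, \qquad 0 \le w \le 1 \]

The systematic deviation from veridical parallelity scales with the egocentric weight w, and the egocentric component itself grows with the horizontal distance between the bars. The delay result then corresponds to w decreasing during the 10-s retention interval, which reduces the deviation most where the egocentric component is largest, i.e. at the larger distances.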

9.
Robotic guidance is an engineered form of haptic-guidance training and is intended to enhance motor learning in rehabilitation, surgery, and sports. However, its benefits (and pitfalls) are still debated. Here, we investigate the effects of different presentation modes on the reproduction of a spatiotemporal movement pattern. In three different groups of participants, the movement was demonstrated in three different modalities, namely visual, haptic, and visuo-haptic. After the demonstration, participants had to reproduce the movement in two alternating recall conditions: haptic and visuo-haptic. Performance of the three groups during recall was compared with regard to spatial and dynamic movement characteristics. After haptic presentation, participants showed superior dynamic accuracy, whereas after visual presentation, participants performed better with regard to spatial accuracy. Added visual feedback during recall always led to enhanced performance, independent of the movement characteristic and the presentation modality. These findings substantiate the different benefits of different presentation modes for different movement characteristics. In particular, robotic guidance is beneficial for learning dynamic, but not spatial, movement characteristics.

10.
While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback condition, visual or haptic guidance, optimizes learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task (learning to start a tennis stroke) and (2) a tracking task (learning to follow a velocity profile). We further evaluated the effect that task difficulty and the subjects’ initial skill level have on the selection of the optimal training condition. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects’ initial skill level. Haptic guidance was especially suitable for less-skilled subjects and for particularly difficult discrete tasks, while visual feedback seemed to benefit more-skilled subjects. Additionally, haptic guidance seemed to promote learning in the time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and the subjects’ initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.

11.
This study examined the effects of binocular information on visual orientation perception and on the visuomotor control of the hand, using two tasks in which observers matched the orientation of two rods. On the perceptual matching task, they were asked to make a visual judgment of the orientation of the two rods. On the motor (preshaping) task, they were asked to use their hands to move a rod that they could not see, so as to match the orientation of the presented rod. Reducing the availability of binocular cues produced considerable disruption on the motor task, but much less disruption on the perceptual matching task. Hence, binocular information seemed to play a critical role in the motor task. These results are consistent with recent suggestions that visual perception and visually guided motor control are mediated by separate visual pathways.

12.
The principal goal of our study is to gain insight into the representation of peripersonal space. Two different experiments were conducted in this study. In the first experiment, subjects were asked to represent the principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were performed with and without visual guidance, were considered the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of the ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors were related to the reference planes: errors were minimal for the sagittal plane and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that different planes are centrally represented, and are characterized by different errors, when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, the sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of the reference planes was not constrained by the type of motor task. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing are strongly correlated. These results suggest that the similar patterns of errors and the arm posture correlations for drawing and pointing may be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.

13.
Body position relative to gravity is continuously updated to prevent falls. To do so, the brain integrates input from the otoliths, truncal graviceptors, proprioception and vision. Without visual cues, the estimated direction of gravity depends mainly on otolith input and becomes more variable with increasing roll-tilt. In contrast, the discrimination threshold for object orientation shows little modulation with varying roll orientation of the visual stimulus. When earth-stationary visual cues are provided, this retinal input may therefore be sufficient to perform self-adjustment tasks successfully, with the resulting variability being independent of whole-body roll orientation. We compared conditions with informative (earth-fixed) and non-informative (body-fixed) visual cues. If the brain uses exclusively retinal input (when it is earth-stationary) to solve the task, trial-to-trial variability will be independent of the subject’s roll orientation. Alternatively, central integration of both retinal (earth-fixed) and extra-retinal inputs will lead to increasing variability when roll-tilted. Subjects, seated on a motorized chair, were instructed to (1) align themselves parallel to an earth-fixed line oriented earth-vertical or roll-tilted 75° clockwise; (2) move a body-fixed line (aligned with the body-longitudinal axis or roll-tilted 75° counter-clockwise to it) by adjusting their body position until the line was perceived as earth-vertical. At the 75° right-ear-down position, variability increased significantly (p < 0.05) compared to upright in both paradigms, suggesting that, despite the earth-stationary retinal cues, extra-retinal input is integrated. Self-adjustments in the roll-tilted position were significantly (p < 0.01) more precise for earth-fixed cues than for body-fixed cues, underlining the importance of earth-stable visual cues when estimates of gravity become more variable with increasing whole-body roll.
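The two competing accounts translate into simple variance predictions; the decomposition below is an assumed formalization consistent with the text, not an equation from the paper:

\[ \text{retinal only:}\quad \sigma^2_{\text{adjust}}(\rho) \approx \sigma^2_{\text{ret}} \qquad\qquad \text{integrated:}\quad \sigma^2_{\text{adjust}}(\rho) \approx \sigma^2_{\text{ret}} + \sigma^2_{\text{extra-ret}}(\rho) \]

Here ρ is whole-body roll. Under the retinal-only account the adjustment variance is flat across roll angles; under the integration account it inherits an extra-retinal (otolith-driven) term that grows with roll-tilt. The observed increase in variability at 75° right-ear-down therefore favours the integration account.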

14.
It has been suggested that interference in symbolically cued bimanual reaction time tasks is caused primarily by the perceptual processing of stimuli and not by motor preparation of the required movements. Here subjects made movements of the right and left index fingers that varied in their spatial and motor congruence. Spatial congruence was manipulated by presenting symbolic cues (i.e., pairs of letters) on a computer screen cueing the required movement directions. Motor congruence was manipulated by altering hand orientation. Results showed that interference occurs at both the stage of stimulus processing and the stage of motor preparation. These effects were reflected in the latencies of the different bimanual movements with both motor incongruence and spatial incongruence causing significant increases in reaction time. However, spatially incongruent movements that were made in response to incongruent visual cues demonstrated changes in reaction time that were more than double those of movements that required simultaneous activation of nonhomologous muscles. Therefore in symbolically cued bimanual reaction-time tasks, although both motor and spatial constraints operate, there is a clear dominance of spatial incongruence on performance. While motor congruence effects are likely due to cross-facilitation in corticospinal pathways, spatial incongruence effects are probably due to interference between the mechanisms that identify incongruent stimuli and translate these cues into the appropriate movements.

15.
Effects of an orientation illusion on motor performance and motor imagery
Although the effect of visual illusions on overt actions has been an area of keen interest in research on motor performance, no study has yet examined whether illusions have similar or different effects on overt and imagined movements. Two experiments were conducted that compared the effects of an orientation illusion on an overt posture selection task and an imagined posture selection task. In Experiment 1, subjects were given a choice of grasping a bar with the thumb on either the left or the right side of the bar. In Experiment 2, subjects were instructed to only imagine grasping the bar while remaining motionless; they then reported on which side of the bar their thumb had been placed in the imagined grasp. Both the overt and the imagined selection tasks were found to be sensitive to the orientation illusion, suggesting that similar visual information is used for overt and imagined movements. The results are discussed in terms of the visual processing and representation of real and imagined actions.

16.
Visual information is mapped with respect to the retina within the early stages of the visual cortex. On the other hand, the brain has to achieve a representation of object location in a coordinate system that matches the reference frame used by the motor cortex to code reaching movement in space. The mechanism of the necessary coordinate transformation between the different frames of reference from the visual to the motor system as well as its localization within the cerebral cortex is still unclear. Coordinate transformation is traditionally described as a series of elementary computations along the visuomotor cortical pathways, and the motor system is thought to receive target information in a body-centered reference frame. However, neurons along these pathways have a number of similar properties and receive common input signals, suggesting that a non-retinocentric representation of object location in space might be available for sensory and motor purposes throughout the visuomotor pathway. This paper reviews recent findings showing that elementary input signals, such as retinal and eye position signals, reach the dorsal premotor cortex. We will also compare eye position effects in the premotor cortex with those described in the posterior parietal cortex. Our main thesis is that appropriate sensory input signals are distributed across the visuomotor continuum, and could potentially allow, in parallel, the emergence of multiple and task-dependent reference frames.
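In its simplest, one-dimensional, translation-only form, the serial account of this coordinate transformation is just the addition of postural signals; the expressions below are a textbook simplification for orientation (rotations and eye/head geometry are ignored), not the scheme proposed in the review:

\[ x_{\text{head}} = x_{\text{retina}} + x_{\text{eye in head}}, \qquad x_{\text{body}} = x_{\text{head}} + x_{\text{head on body}} \]

The review's point is that the ingredients of such transformations (retinal signals, eye position signals, and so on) are not confined to successive dedicated stages but appear jointly across parietal and premotor areas, so that several task-dependent reference frames could be constructed in parallel rather than through a single fixed serial conversion.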

17.
Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
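A minimal sketch of how CTU can enter such an account is shown below (Python). The noise magnitudes, the linear growth of CTU with eye eccentricity, and the safety-margin rule are illustrative assumptions, not the authors' fitted model; they only show how independent uncertainties combine and how grip aperture could scale with the result.

import math

sigma_visual = 3.0  # hypothetical SD (mm) of the stored visual target estimate

def sigma_ctu(eye_eccentricity_deg):
    # hypothetical: coordinate transformation uncertainty grows linearly as the
    # eyes rotate away from forward view (made-up slope, purely illustrative)
    return 0.1 * abs(eye_eccentricity_deg)

def total_sigma(eye_eccentricity_deg):
    # independent noise sources combine by summing variances
    return math.sqrt(sigma_visual ** 2 + sigma_ctu(eye_eccentricity_deg) ** 2)

def max_grip_aperture(object_size_mm, eye_eccentricity_deg, margin_per_sd=2.0):
    # common modelling assumption (not necessarily the authors' rule):
    # grip aperture = object size + a safety margin proportional to the
    # uncertainty of the remembered, remapped target
    return object_size_mm + margin_per_sd * total_sigma(eye_eccentricity_deg)

for ecc in (0, 20, 40):  # hypothetical eye-in-head eccentricities (degrees)
    print(ecc, round(max_grip_aperture(50.0, ecc), 1))

Under this sketch, larger eye eccentricity inflates the total uncertainty of the occluded, remapped target and therefore the predicted maximum grip aperture, which is the qualitative pattern the abstract describes.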

18.
When performing everyday tasks, we often move our eyes and hand together: we look where we are reaching in order to better guide the hand. This coordinated pattern, with the eye leading the hand, is presumably optimal behaviour. But eyes and hands can move to different locations if they are involved in different tasks. To find out whether this leads to optimal performance, we studied the combination of visual and haptic search. We asked ten participants to perform a combined visual and haptic search for a target that was present in both modalities, and compared their search times to those on visual-only and haptic-only search tasks. Without distractors, search times were faster for visual search than for haptic search. With many visual distractors, search times were longer for visual than for haptic search. For the combined search, performance was poorer than predicted by the optimal strategy, whereby each modality searches a different part of the display. The results are consistent with several alternative accounts, for instance with vision and touch searching independently at the same time.

19.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities, as well as within a modality, in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible, unknown target position whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position halfway through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum-variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information; we tested this in trials in which visual and haptic information was presented with a temporal disparity. We found that for our sensorimotor task the temporal disparity between the visual and haptic cues had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
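The minimum-variance combination rule that such models assume has a standard closed form; writing it out makes the prediction concrete (the notation is the conventional one, not taken from the paper, and with a flat prior the MAP estimate reduces to this maximum-likelihood combination):

\[ \hat{x} = w_v x_v + w_h x_h, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_h^2}, \quad w_h = 1 - w_v, \qquad \sigma_{\hat{x}}^2 = \frac{\sigma_v^2\,\sigma_h^2}{\sigma_v^2 + \sigma_h^2} \]

Each cue is weighted by its relative reliability (inverse variance), and the combined estimate is at least as reliable as the better single cue. The finding that temporal disparity between the cues had no effect suggests that these weights were unaffected by cue-onset asynchrony in this sensorimotor task.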

20.
Real-world scene perception can often involve more than one sensory modality. Here we investigated the visual, haptic and crossmodal recognition of scenes of familiar objects. In three experiments participants first learned a scene of objects arranged in random positions on a platform. After learning, the experimenter swapped the positions of two objects in the scene, and the task for the participant was to identify the two swapped objects. In experiment 1, we found a cost in scene recognition performance when there was a change in sensory modality and scene orientation between learning and test. The cost in crossmodal performance was not due to the participants verbally encoding the objects (experiment 2) or to differences between serial and parallel encoding of the objects during haptic and visual learning, respectively (experiment 3). Instead, our findings suggest that differences between visual and haptic representations of space may affect the recognition of scenes of objects across these modalities.
