Similar articles
20 similar articles found (search time: 15 ms)
1.
Different movement characteristics can be governed by different frames of reference. The present study serves to identify the frames of reference that govern intermanual interactions with respect to movement directions. Previous studies had shown that intermanual interactions are adjusted to task requirements during motor preparation: for parallel movements, directional coupling becomes parallel, and for symmetric movements it becomes symmetric. The timed-response procedure makes it possible to trace these adjustments as they are reflected in the intermanual correlations between left-hand and right-hand directions. In the present study the adjustments remained unchanged when all target directions were rotated laterally, indicating a critical role of hand-centered frames of reference. An additional role of a body-centered frame of reference was indicated by the finding of overall higher intermanual correlations with the rotated target configurations. Intermanual interference at long preparation intervals was absent even when eccentricities in the body-centered frame of reference differed. These findings converge with results on the frames of reference that govern intermanual interactions with respect to movement amplitudes. They suggest that both body-centered and hand-centered frames of reference contribute to the adjustment of intermanual interactions to task requirements, but that only a hand-centered frame of reference underlies the intermanual interference that remains in spite of the adjustments.

2.
Eye-hand coordination is crucial for everyday visuo-haptic object manipulation. Noninformative vision has been reported to improve haptic spatial tasks relying on world-based reference frames. The current study investigated whether the degree of visuo-haptic congruity systematically affects haptic task performance. Congruent and parametrically varied incongruent visual orientation cues were presented while participants manually explored the orientation of a reference bar stimulus. Participants were asked to haptically match this reference orientation by turning a test bar either to a parallel or mirrored orientation, depending on the instruction. While parallel matching can only be performed correctly in a world-based frame, mirror matching (in the mid-sagittal plane) can also be achieved in a body-centered frame. We found that visuo-haptic incongruence affected the size and direction of parallel but not mirror matching responses. Parallel matching did not improve when congruent visual orientation cues were provided throughout a run, and mirror matching even deteriorated. These results show that there is no positive effect of visual input on haptic performance per se. Tasks that favor a body-centered frame are immune to incongruent visual input, while such input parametrically modulates performance on world-based haptic tasks.

3.
Pointing with an unseen hand to a visual target that disappears prior to movement requires maintaining a memory representation of the target location. The target location can be transformed either into a hand-centered frame of reference during target presentation and remembered in that form, or remembered in terms of retinal and extra-retinal cues and transformed into a body-centered frame of reference before movement initiation. The main goal of the present study was to investigate whether the target is stored in memory in an eye-centered frame, a hand-centered frame, or in both frames of reference concomitantly. The task was to locate, memorize, and point to a target in a dark environment. Hand movement was not visible. During the recall delay, participants were asked to move their hand or their eyes in order to disrupt the memory representation of the target. Movement of the eyes during the recall delay was expected to disrupt an eye-centered memory representation, whereas movement of the hand was expected to disrupt a hand-centered memory representation by increasing movement variability to the target. Variability of movement amplitude and direction was examined. Results showed that participants were more variable in the directional component of the movement when required to move their hand during the recall delay. In contrast, moving the eyes caused an increase in variability only in the amplitude component of the pointing movement. Taken together, these results suggest that the direction of the movement is coded and remembered in a frame of reference linked to the arm, whereas the amplitude of the movement is remembered in an eye-centered frame of reference.

4.
The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.

5.
Delay improves performance on a haptic spatial matching task
Systematic deviations occur when blindfolded subjects set a test bar parallel to a reference bar in the horizontal plane using haptic information (Kappers and Koenderink 1999, Perception 28:781–795; Kappers 1999, Perception 28:1001–1012). These deviations are assumed to reflect the use of a combination of a biasing egocentric reference frame and an allocentric, more cognitive one (Kappers 2002, Acta Psychol 109:25–40). In two experiments, we have examined the effect of delay between the perception of a reference bar and the parallel setting of a test bar. In both experiments a 10-s delay improved performance. The improvement increased with a larger horizontal (left–right) distance between the bars. This improvement was interpreted as a shift from the egocentric towards the allocentric reference frame during the delay period.

6.
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
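The core computation described in this abstract — re-encoding hand and target positions in a common frame before taking their difference — can be illustrated with a minimal numerical sketch. This is not the authors' model; all positions, the frame names, and the helper function are hypothetical, chosen only to make the difference-vector idea concrete.

```python
import numpy as np

def to_eye_centered(p_body, eye_pos):
    """Re-express a body-centered position relative to the eye (illustrative)."""
    return p_body - eye_pos

# Hypothetical positions in a body-centered frame (arbitrary units)
eye = np.array([0.0, 30.0])
target_body = np.array([20.0, 40.0])
hand_body = np.array([5.0, 10.0])

# Encode both signals in the shared eye-centered frame, then take the
# difference to obtain the hand-to-target movement vector.
target_eye = to_eye_centered(target_body, eye)
hand_eye = to_eye_centered(hand_body, eye)
movement_vector = target_eye - hand_eye
```

Note that because the same offset is subtracted from both positions, the difference vector is the same in either frame; the abstract's point is about where noise and gain errors enter, not about the algebra itself.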

7.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.

8.
Convergent findings demonstrate that numbers can be represented according to a spatially oriented mental number line. However, it is not established whether a default organization of the mental number line exists (i.e., a left-to-right orientation) or whether its spatial arrangement is only the epiphenomenon of specific task requirements. To address this issue we performed two experiments in which subjects were required to judge laterality of hand stimuli preceded by small, medium or large numerical cues; hand stimuli were compatible with egocentric or allocentric perspectives. We found evidence of a left-to-right number–hand association in processing stimuli compatible with an egocentric perspective, whereas the reverse mapping was found with hands compatible with an allocentric perspective. These findings demonstrate that the basic left-to-right arrangement of the mental number line is defined with respect to the body-centred egocentric reference frame.

9.
This research concerns the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is first encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing in which reference frames play a primary role and selectively interact with subsequent processing of spatial relations.

10.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in self-body perspective but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

11.
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

12.
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line, and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching. Received: 25 November 1998 / Accepted: 8 July 1999

13.
We required healthy subjects to recognize visually presented images of their own or others’ hands in egocentric or allocentric perspective. Both right- and left-handers were faster at recognizing their dominant hands in egocentric perspective and others’ non-dominant hands in allocentric perspective. These findings demonstrate that body-specific information contributes to the sense of ownership, and that the “peri-dominant-hand space” is the preferred reference frame for distinguishing self from not-self body parts.

14.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was introduced between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest to the RSDp after rotation. In the test phase, displays that were spatially identical to those shown in the habituation phase and 90°-rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays, and that infants may rely more on the egocentric reference frame.

15.
The authors report the case of a woman with a right basal ganglia lesion and severe mental-rotation impairments. She had no difficulty recognizing rotated objects and had intact left-right orientation in egocentric space but was unable to map the left and right sides of external objects to her egocentric reference frame. This study indicates that the right basal ganglia may be critical components in a cortico-subcortical network involved in mental rotation. We speculate that the role of these structures is to select and maintain an appropriate motor program for performing smooth and accurate rotation. The results also have important implications for theories of object recognition by demonstrating that recognition of rotated objects can be achieved without mental rotation.

16.
Spatial transformations for eye-hand coordination
Eye-hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye-hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye-head-shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
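The "vector displacement commands, rotated by eye and head orientation" described in this abstract can be sketched in two dimensions. This is only an illustrative simplification of such a reference-frame transformation, not the authors' model; the angles, units, and function names are hypothetical, and the real eye–head–shoulder linkage is three-dimensional and nonlinear.

```python
import numpy as np

def rot(theta_deg):
    """2-D rotation matrix for a counterclockwise angle in degrees."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Hypothetical gaze-centered motor-error vector (arbitrary units)
error_gaze = np.array([10.0, 0.0])

# Hypothetical eye-in-head and head-on-shoulder orientations (degrees)
eye_angle = 15.0
head_angle = 10.0

# Rotate the gaze-centered displacement command through eye orientation,
# then head orientation, to express it in shoulder-centered coordinates.
error_shoulder = rot(head_angle) @ rot(eye_angle) @ error_gaze
```

Because both steps are pure rotations, the command's magnitude is preserved; only its direction is re-expressed for the downstream frame.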

17.
This study investigates coordinative constraints when participants execute discrete bimanual tool use actions. Participants moved two levers to targets that were presented either near the proximal parts of the levers or near the distal tips of the levers. In the first case, the tool transformation (i.e. the relationship between hand movement direction and target direction) was compatible, whereas in the second case, it was incompatible. We hypothesized that an egocentric constraint (i.e. a preference for moving the hands and tools in a mirror-symmetrical fashion) would be dominant when targets are presented near the proximal parts of the levers because in this situation, movements can be coded in terms of body-related coordinates. Furthermore, an allocentric constraint (i.e. a preference to move the hands in the same (parallel) direction in extrinsic space) was expected to be dominant when one of the targets or both are presented near the distal parts of the levers because in this condition, movements have to be coded in an external reference frame. The results show that when both targets are presented near the proximal parts of the levers, participants are faster and make fewer errors with mirror-symmetrical than with parallel movements. Furthermore, the RT mirror-symmetry advantage is eliminated when both targets are presented near the distal parts of the levers, and it is reversed when the target for one lever is presented near its distal part and the target for the other lever is presented near its proximal part. These results show that the dominance of egocentric and allocentric coordinative constraints in bimanual tool use depends on whether movements are coded in terms of body-related coordinates or in an external reference frame.

18.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.

19.
In a classic demonstration, Ernst Mach showed that the same figure could be perceived as a square or as a diamond depending on the orientation of the subject relative to gravity. This phenomenon is based on the use of a geocentric reference frame for object perception. If the central nervous system perceives an object with respect to the gravitationally defined vertical, what happens if this reference frame is removed? We investigated the Mach phenomenon in subjects placed in short-term microgravity during parabolic flight. Subjects were presented with a square with a corner pointing upwards, and asked whether they perceived it as a diamond or square with the head upright or tilted 45 degrees in roll, both in normal gravity and when free-floating in microgravity during parabolic flight. The addition of a rectangular frame around the figure was also investigated. In contrast to the normal gravity condition, with the head tilted the subjects still perceived a diamond figure in microgravity, indicating that they had switched from a geocentric to an egocentric reference frame. Also in contrast to the normal gravity condition, adding a rectangular frame around the figure did not significantly change the perception of the object in microgravity, suggesting that an intrinsic reference determined by the axis of elongation or symmetry of the object does not easily override an egocentric reference frame as it does a geocentric reference frame.

20.
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-dependent paradigms is unknown. In nine subjects we compared precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) Modulating variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that variability modulation is restricted to vision-dependent alignments. 2) Estimated body-longitudinal reflects the roll-dependent variability of perceived earth-vertical; gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses, the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision significantly decreased with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号