Similar documents
20 similar documents found (search time: 140 ms)
1.
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) made saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angles ranged between 120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
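The two coding hypotheses contrasted in this abstract make separable quantitative predictions. The following is a minimal sketch of that logic, not the authors' analysis; the distortion function and all gains are hypothetical illustration values:

```python
# Contrast the predicted direction error of a memory-guided saccade after
# a whole-body roll tilt under allocentric vs egocentric coding.
# All numeric values are hypothetical.

def subjective_vertical_error(tilt_deg, gain=0.4):
    """Hypothetical systematic error (deg) when indicating gravity at a
    given roll tilt: a nonlinear allocentric distortion growing with tilt."""
    return gain * tilt_deg * abs(tilt_deg) / 120.0

def allocentric_prediction(initial_tilt, final_tilt):
    # Target stored in a (distorted) gravity-based frame at the initial
    # tilt and read out through the distortion at the final tilt: error
    # depends on the distortion at both angles, not on rotation per se,
    # and vanishes when initial and final tilt coincide.
    return (subjective_vertical_error(final_tilt)
            - subjective_vertical_error(initial_tilt))

def egocentric_prediction(initial_tilt, final_tilt, updating_gain=0.9):
    # Target stored in body coordinates and imperfectly updated for the
    # intervening rotation: error scales with rotation magnitude.
    return (1.0 - updating_gain) * (final_tilt - initial_tilt)

for init, final in [(60, 60), (-120, 120), (0, -90)]:
    print(init, final,
          allocentric_prediction(init, final),
          egocentric_prediction(init, final))
```

Because the hypothetical distortion is nonlinear in tilt, the two predictions diverge across tilt conditions, which is the kind of dissociation the regression analysis above exploits.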

2.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

3.
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors, and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations (egocentric and allocentric frames of reference) for immediate and delayed pointing, respectively. Therefore, the CB children, like the BS children, are able to use both ego- and allocentric frames of reference.

4.
This study investigates coordinative constraints when participants execute discrete bimanual tool use actions. Participants moved two levers to targets that were either presented near the proximal parts of the levers or near the distal tips of the levers. In the first case, the tool transformation (i.e. the relationship between hand movement direction and target direction) was compatible, whereas in the second case, it was incompatible. We hypothesized that an egocentric constraint (i.e. a preference for moving the hands and tools in a mirror-symmetrical fashion) would be dominant when targets are presented near the proximal parts of the levers because in this situation, movements can be coded in terms of body-related coordinates. Furthermore, an allocentric constraint (i.e. a preference to move the hands in the same (parallel) direction in extrinsic space) was expected to be dominant when one or both of the targets are presented near the distal parts of the levers because in this condition, movements have to be coded in an external reference frame. The results show that when both targets are presented near the proximal parts of the levers, participants are faster and produce fewer errors with mirror-symmetrical than with parallel movements. Furthermore, the RT mirror-symmetry advantage is eliminated when both targets are presented near the distal parts of the levers, and it is reversed when the target for one lever is presented near its distal part and the target for the other lever is presented near its proximal part. These results show that the dominance of egocentric and allocentric coordinative constraints in bimanual tool use depends on whether movements are coded in terms of body-related coordinates or in an external reference frame.

5.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.
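The dissociation logic in this design can be made concrete with a toy coordinate sketch (display units and positions are hypothetical, not the study's apparatus):

```python
# A target's location can be expressed relative to gaze (retinotopic) or
# relative to the body midline (body-centred). Each repeat condition
# preserves the location in one frame while changing it in the other.

def retinotopic(target_x, fixation_x):
    return target_x - fixation_x      # position relative to fixation

def body_centred(target_x, midline_x=0.0):
    return target_x - midline_x       # position relative to body midline

# Original trial: fixation at -5, target at +3 (arbitrary display units).
ret0, body0 = retinotopic(3, -5), body_centred(3)

# Retinotopic repeat: fixation jumps to +5 and the target moves with it,
# preserving the gaze-relative position but not the body-relative one.
ret_target = 5 + ret0
assert retinotopic(ret_target, 5) == ret0
assert body_centred(ret_target) != body0

# Body-centred repeat: fixation jumps but the target stays at +3 on the
# display, preserving the body-relative position but not the gaze-relative.
assert body_centred(3) == body0
assert retinotopic(3, 5) != ret0
```

Priming that survives the body-centred repeat but not the retinotopic one, as reported above, implicates a body-centred frame.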

6.
Healthy humans performed arm movements in a horizontal plane, from an initial position toward remembered targets, while the movement and the targets were projected on a vertical computer monitor. We analyzed the mean error of movement endpoints and we observed two distinct systematic error patterns. The first pattern resulted in the clustering of movement endpoints toward the diagonals of the four quadrants of an imaginary circular area encompassing all target locations (oblique effect). The second pattern resulted in a tendency of movement endpoints to be closer to the body or equivalently lower than the actual target positions on the computer monitor (y-effect). Both these patterns of systematic error increased in magnitude when a time delay was imposed between target presentation and initiation of movement. In addition, the presence of a stable visual cue in the vicinity of some targets imposed a novel pattern of systematic errors, including minimal errors near the cue and a tendency for other movement endpoints within the cue quadrant to err away from the cue location. A pattern of systematic errors similar to the oblique effect has already been reported in the literature and is attributed to the subject's conceptual categorization of space. Given the properties of the errors in the present work, we discuss the possibility that such conceptual effects could be reflected in a broad variety of visuomotor tasks. Our results also provide insight into the problem of reference frames used in the execution of these aiming movements. Thus, the oblique effect could reflect a hand-centered reference frame while the y-effect could reflect a body- or eye-centered reference frame. The presence of the stable visual cue may impose an additional cue-centered (allocentric) reference frame.

7.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a bi-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric) or the sequence of movements (egocentric) or both, and retested the subjects on their ability to transfer the sequence knowledge they had acquired in training and also determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

8.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.

9.
This research is about the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is firstly encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing where reference frames play a primary role and selectively interact with subsequent processing of spatial relations.

10.
When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a “target” location at the midpoint of the stimulus. After determining the implied “target” location from only the allocentric stimuli provided, participants saccaded to an eccentric location and reached to the remembered “target” location. Irrespective of the type of stimulus, reaching errors to these implicit targets were gaze-dependent and did not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented.

11.
When programming movement, one must account for gravitational acceleration. This is particularly important when catching a falling object because the task requires a precise estimate of time-to-contact. Knowledge of gravity’s effects is intimately linked to our definition of ‘up’ and ‘down’. Both directions can be described in an allocentric reference frame, based on visual and/or gravitational cues, or in an egocentric reference frame in which the body axis is taken as vertical. To test which frame humans use to predict gravity’s effect, we asked participants to intercept virtual balls approaching from above or below with artificially controlled acceleration that could be congruent or not with gravity. To dissociate between these frames, subjects were seated upright (trunk parallel to gravity) or lying down (body axis orthogonal to the gravitational axis). We report data in line with the use of an allocentric reference frame and discuss its relevance depending on available gravity-related cues.

12.
We investigated the influence of gaze elevation on judging the possibility of passing under high obstacles during pitch body tilts, while stationary, in the absence of allocentric cues. Specifically, we aimed at studying the influence of egocentric references upon geocentric judgements. Seated subjects, placed at various body orientations, were asked to perceptually estimate the possibility of passing under a projected horizontal line while keeping their gaze on a fixation target and imagining a horizontal body displacement. The results showed a global overestimation of the possibility of passing under the line, and confirmed the influence of body orientation reported by Bringoux et al. (Exp Brain Res 185(4):673–680, 2008). More strikingly, a linear influence of gaze elevation was found on perceptual estimates. Precisely, downward gaze elevation yielded increased overestimations, and conversely upward gaze elevation yielded decreased overestimations. Furthermore, body and gaze orientation effects were independent and combined additively to yield a global egocentric influence with weights of 45% and 54%, respectively. Overall, our data suggest that multiple egocentric references can jointly affect the estimated possibility of passing under high obstacles. These results are discussed in terms of “interpenetrability” between geocentric and egocentric reference frames and clearly demonstrate that gaze elevation, like body orientation, is involved in geocentric spatial localization.

13.
The present study investigated the brain dynamics accompanying spatial navigation based on distinct reference frames. Participants preferentially using an allocentric or an egocentric reference frame navigated through virtual tunnels and reported their homing direction at the end of each trial based on their spatial representation of the passage. Task-related electroencephalographic (EEG) dynamics were analyzed based on independent component analysis (ICA) and subsequent clustering of independent components. Parietal alpha desynchronization during encoding of spatial information predicted homing performance for participants using an egocentric reference frame. In contrast, retrosplenial and occipital alpha desynchronization during retrieval covaried with homing performance of participants using an allocentric reference frame. These results support the assumption of distinct neural networks underlying the computation of distinct reference frames and reveal a direct relationship of alpha modulation in parietal and retrosplenial areas with encoding and retrieval of spatial information for homing behavior.

14.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to see whether allocentric reference frame-related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley randomly in each trial. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.

15.
What humans haptically perceive as parallel is often far from physically parallel. These deviations from parallelity are highly significant and very systematic. There exists accumulating evidence, both psychophysical and neurophysiological, that what is haptically parallel is decided in a frame of reference intermediate to an allocentric and an egocentric one. The central question here concerns the nature of the egocentric frame of reference. In the literature, various kinds of egocentric reference frames are mentioned for haptic spatial tasks, such as hand-centered, arm-centered, and body-centered frames of reference. Thus far, it has not been possible to distinguish between body-centered, arm-centered, and hand-centered reference frames in our experiments, as hand and arm orientation always covaried with distance from the body-midline. In the current set of experiments the influence of body-centered and hand-centered reference frames could be dissociated. Subjects were asked to make a test bar haptically parallel to a reference bar in five different conditions, in which their hands were oriented straight ahead, rotated to the left, rotated to the right, rotated outward or rotated inward. If the reference frame is body-centered, the deviations should be independent of condition. If, on the other hand, the reference frame is hand-centered, the deviations should vary with condition. The results show that deviation size varies strongly with condition, exactly in the way predicted by the influence of a hand-centered egocentric frame of reference. Interestingly, this implies that subjects do not sufficiently take into account the orientation of their hands.
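The hand-centered prediction tested here can be sketched as a weighted-frame model (the weighting and angles are hypothetical; this follows the common "intermediate reference frame" description rather than any fitted model from the study):

```python
# In a purely allocentric account the matched orientation equals the
# reference orientation regardless of hand posture; in a hand-centered
# account the match is biased by the difference in hand orientations.

def matched_orientation(reference_deg, ref_hand_deg, test_hand_deg, w=0.3):
    """w = 0: purely allocentric (deviation independent of condition);
    w = 1: purely hand-centered (deviation tracks hand orientation)."""
    return reference_deg + w * (test_hand_deg - ref_hand_deg)

# Same reference bar, different hand postures across conditions:
straight = matched_orientation(90, 0, 0)       # hands straight ahead
rotated  = matched_orientation(90, -30, 30)    # hands rotated outward

# Deviation size varies with condition only if w > 0, i.e. only under a
# (partly) hand-centered frame -- the pattern reported in the abstract.
assert straight == 90.0
assert rotated != 90.0
```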

16.
Errors in pointing to actual and remembered targets presented in three-dimensional (3D) space in a dark room were studied under various conditions of visual feedback. During their movements, subjects either had no vision of their arms or of the target, vision of the target but not of their arms, vision of a light-emitting diode (LED) on their moving index fingertip but not of the target, or vision of an LED on their moving index fingertip and of the target. Errors depended critically upon feedback condition. 3D errors were largest for movements to remembered targets without visual feedback, diminished with vision of the moving fingertip, and diminished further with vision of the target and vision of the finger and the target. Moreover, the different conditions differentially influenced the radial distance, azimuth, and elevation errors, indicating that subjects control motion along all three axes relatively independently. The pattern of errors suggests that the neural systems that mediate processing of actual versus remembered targets may have different capacities for integrating visual and proprioceptive information in order to program spatially directed arm movements.

17.
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

18.
Riva G. Medical Hypotheses, 2012, 78(2): 254–257
Evidence from psychology and neuroscience indicates that our spatial experience, including the bodily one, involves the integration of different sensory inputs within two different reference frames: egocentric (body as reference of first-person experience) and allocentric (body as object in the physical world). Even if functional relations between these two frames are usually limited, they influence each other during the interaction between long- and short-term memory processes in spatial cognition. If, for some reason, this process is impaired, the egocentric sensory inputs are no longer able to update the contents of the allocentric representation of the body: the subject is locked to it. In this perspective, subjects with eating disorders are locked to an allocentric representation of their body, stored in long-term memory (allocentric lock). A significant role in the locking may be played by the medial temporal lobe, and in particular by the connection between the hippocampal complex and amygdala. The differences between exogenous and endogenous causes of the lock may also explain the difference between bulimia nervosa and anorexia nervosa.

19.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advance knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed evidence of a primarily offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.

20.
We required healthy subjects to recognize visually presented images of their own or others’ hands in egocentric or allocentric perspective. Both right- and left-handers were faster at recognizing their dominant hands in egocentric perspective and others’ non-dominant hands in allocentric perspective. These findings demonstrate that body-specific information contributes to the sense of ownership, and that the “peri-dominant-hand space” is the preferred reference frame for distinguishing self from non-self body parts.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号