Similar Documents (20 results)
1.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.
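
The retinotopic/body-centred dissociation reduces to coordinate bookkeeping: a repetition is retinotopic if the target keeps its position relative to fixation, and body-centred if it keeps its position relative to the display (and hence the body midline). A minimal sketch of that logic, with hypothetical display coordinates not taken from the paper:

```python
import numpy as np

# Hypothetical display coordinates in degrees of visual angle.
# Trial 1: fixation at the display centre, target 5 deg to the right.
fixation_t1 = np.array([0.0, 0.0])
target_t1 = np.array([5.0, 0.0])

# Trial 2: fixation shifts 5 deg to the left.
fixation_t2 = np.array([-5.0, 0.0])

# Retinotopic repetition: keep the target position relative to fixation,
# so the target moves with the eyes on the display.
retinal_offset = target_t1 - fixation_t1
target_retinotopic = fixation_t2 + retinal_offset    # -> (0, 0) on the display

# Body-centred repetition: keep the target position relative to the display
# (and thus the body midline), so its retinal position changes.
target_bodycentred = target_t1                        # -> (5, 0) on the display

print("retinotopic condition, display coords:", target_retinotopic)
print("body-centred condition, display coords:", target_bodycentred)
print("body-centred condition, retinal coords:", target_bodycentred - fixation_t2)
```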

2.
This study investigates coordinative constraints when participants execute discrete bimanual tool use actions. Participants moved two levers to targets that were either presented near the proximal parts of the levers or near the distal tips of the levers. In the first case, the tool transformation (i.e. the relationship between hand movement direction and target direction) was compatible, whereas in the second case, it was incompatible. We hypothesized that an egocentric constraint (i.e. a preference for moving the hands and tools in a mirror-symmetrical fashion) would be dominant when targets are presented near the proximal parts of the levers because in this situation, movements can be coded in terms of body-related coordinates. Furthermore, an allocentric constraint (i.e. a preference to move the hands in the same (parallel) direction in extrinsic space) was expected to be dominant when one of the targets or both are presented near the distal parts of the levers because in this condition, movements have to be coded in an external reference frame. The results show that when both targets are presented near the proximal parts of the levers, participants are faster and produce fewer errors with mirror-symmetrical than with parallel movements. Furthermore, the RT mirror-symmetry advantage is eliminated when both targets are presented near the distal parts of the levers, and it is reversed when the target for one lever is presented near its distal part and the target for the other lever is presented near its proximal part. These results show that the dominance of egocentric and allocentric coordinative constraints in bimanual tool use depends on whether movements are coded in terms of body-related coordinates or in an external reference frame.
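
The compatible/incompatible tool transformation can be sketched as a sign relation between hand motion and target-relevant lever-end motion. The lever geometry below is an illustrative simplification, not the apparatus used in the study:

```python
def lever_end_direction(hand_direction: float, pivot_between: bool) -> float:
    """Direction of the target-relevant lever end given the hand movement
    direction (+1 = rightward, -1 = leftward).

    pivot_between=True models a lever whose pivot lies between hand and tip,
    so the distal tip moves opposite to the hand (incompatible mapping);
    pivot_between=False models the proximal part moving with the hand
    (compatible mapping). Illustrative simplification only.
    """
    return -hand_direction if pivot_between else hand_direction

# Compatible condition: target near the proximal part of the lever.
assert lever_end_direction(+1, pivot_between=False) == +1
# Incompatible condition: target near the distal tip.
assert lever_end_direction(+1, pivot_between=True) == -1
```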

3.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE), and advance knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.
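
A simple cue-combination model makes the proposed weighting concrete. The weights and illusion magnitude below are hypothetical; only the qualitative direction (more allocentric weight under offline control) comes from the abstract:

```python
def predicted_endpoint(ego_position: float, illusion_shift: float,
                       w_allo: float) -> float:
    """Endpoint as a weighted mix of the veridical egocentric estimate and
    the illusion-shifted allocentric estimate. w_allo in [0, 1]; illustrative."""
    allo_position = ego_position + illusion_shift  # IRE shifts the allocentric estimate
    return (1 - w_allo) * ego_position + w_allo * allo_position

target = 0.0     # true egocentric target position (cm)
ire_shift = 1.5  # hypothetical illusory shift induced by the Roelofs frame (cm)

# Online control (reliable visual feedback): low allocentric weight.
print(predicted_endpoint(target, ire_shift, w_allo=0.2))  # small illusion effect
# Offline control (no/unpredictable feedback): higher allocentric weight.
print(predicted_endpoint(target, ire_shift, w_allo=0.7))  # amplified illusion effect
```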

4.
Hay L, Redon C. Neuroscience Letters, 2006, 408(3): 194–198
Pointing movements decrease in accuracy when target information is removed before movement onset. This time effect was analyzed in relation to the spatial representation of the target location, which can be egocentric (i.e. in relation to the body) or exocentric (i.e. in relation to the external world) depending on the visual environment of the target. The accuracy of pointing movements performed without visual feedback was measured in two delay conditions: 0 and 5 s between target removal and movement onset. In each delay condition, targets were presented either in darkness (egocentric localization) or within a structured visual background (exocentric localization). The results show that pointing was more accurate when targets were presented within a visual background than in darkness. The time-related decrease in accuracy was observed in the darkness condition, whereas no delay effect was found in the presence of a visual background. Therefore, contextual factors applied to a simple pointing action might induce different spatial representations: a short-lived sensorimotor egocentric representation used in immediate action control, or a long-lived perceptual exocentric representation which drives perception and delayed action.

5.
Healthy humans performed arm movements in a horizontal plane, from an initial position toward remembered targets, while the movement and the targets were projected on a vertical computer monitor. We analyzed the mean error of movement endpoints and we observed two distinct systematic error patterns. The first pattern resulted in the clustering of movement endpoints toward the diagonals of the four quadrants of an imaginary circular area encompassing all target locations (oblique effect). The second pattern resulted in a tendency of movement endpoints to be closer to the body or equivalently lower than the actual target positions on the computer monitor (y-effect). Both these patterns of systematic error increased in magnitude when a time delay was imposed between target presentation and initiation of movement. In addition, the presence of a stable visual cue in the vicinity of some targets imposed a novel pattern of systematic errors, including minimal errors near the cue and a tendency for other movement endpoints within the cue quadrant to err away from the cue location. A pattern of systematic errors similar to the oblique effect has already been reported in the literature and is attributed to the subject's conceptual categorization of space. Given the properties of the errors in the present work, we discuss the possibility that such conceptual effects could be reflected in a broad variety of visuomotor tasks. Our results also provide insight into the problem of reference frames used in the execution of these aiming movements. Thus, the oblique effect could reflect a hand-centered reference frame while the y-effect could reflect a body or eye-centered reference frame. The presence of the stable visual cue may impose an additional cue-centered (allocentric) reference frame.
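
The two error patterns can be separated in analysis: the y-effect is a constant signed error along the y axis, while the oblique effect is an angular attraction of endpoints toward the 45° quadrant diagonals. A sketch on simulated endpoints, with all strengths and noise levels invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical targets on a unit circle, every 15 degrees.
angles = np.deg2rad(np.arange(0, 360, 15))
targets = np.c_[np.cos(angles), np.sin(angles)]

def wrap(a):
    """Wrap an angle difference to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def nearest_diagonal(theta):
    diags = np.deg2rad(np.array([45, 135, 225, 315]))
    return diags[np.argmin(np.abs(wrap(theta - diags)))]

pull = 0.3  # invented strength of angular attraction toward the diagonal
end_angles = np.array([t + pull * wrap(nearest_diagonal(t) - t) for t in angles])
endpoints = np.c_[np.cos(end_angles), np.sin(end_angles)]
endpoints[:, 1] -= 0.1                                # invented downward shift (y-effect)
endpoints += rng.normal(0, 0.02, endpoints.shape)     # motor noise

errors = endpoints - targets
print("mean signed y error (y-effect):", errors[:, 1].mean())
print("mean |angular shift| toward diagonals (oblique effect, deg):",
      np.rad2deg(np.abs(wrap(end_angles - angles))).mean())
```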

6.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a two-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric) or the sequence of movements (egocentric) or both, retested the subjects on their ability to transfer the sequence knowledge they had acquired in training, and determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.
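
The transfer logic is a 2×2 manipulation of the two frames; a compact enumeration of the outcomes reported in the abstract (the labels are paraphrases, not the authors' condition names):

```python
# (egocentric frame maintained?, allocentric frame maintained?) -> reported outcome.
transfer_outcomes = {
    (True,  True):  "trained condition (baseline)",
    (True,  False): "both implicit and explicit subjects transfer",
    (False, True):  "only explicit subjects benefit",
    (False, False): "no transfer for any subject",
}
for (ego_kept, allo_kept), outcome in transfer_outcomes.items():
    print(f"egocentric kept={ego_kept}, allocentric kept={allo_kept}: {outcome}")
```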

7.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to another's body (allocentric) reference frame. Visual perspective taking tasks are also performed in a self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged the laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge the laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

8.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.
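
Inferring reference frames from variable errors typically means examining the orientation of the endpoint scatter, e.g. whether the axis of maximum variability aligns with an egocentric sight line or with the remembered line's orientation. A minimal sketch of that covariance analysis on simulated endpoints (the 30° axis and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated repeated endpoints for one remembered target (hypothetical data):
# variability elongated along a putative egocentric (sight-line) axis at 30 deg.
axis = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
samples = (rng.normal(0, 1.0, (200, 1)) * axis   # large variance along the axis
           + rng.normal(0, 0.3, (200, 2)))       # small isotropic variance

cov = np.cov(samples.T)
eigvals, eigvecs = np.linalg.eigh(cov)
principal = eigvecs[:, np.argmax(eigvals)]       # axis of maximum variability
# Sign of the eigenvector is arbitrary; only the orientation matters.
print("principal axis (deg):", np.rad2deg(np.arctan2(principal[1], principal[0])))
```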

9.
This experiment investigated the relative extent to which different signals from the visuo-oculomotor system are used to improve accuracy of arm movements. Different visuo-oculomotor conditions were used to produce various retinal and extraretinal signals leading to a similar target amplitude: (a) fixating a central target while pointing to a peripheral visual target, (b) tracking a target through smooth pursuit movement and then pointing to the target when its excursion ceased, and (c) pointing to a target reached previously by a saccadic eye movement. The experiment was performed with a deafferented subject and control subjects. For the deafferented patient, the absence of proprioception prevented any comparison between internal representations of target and limb (through proprioception) positions during the arm movement. The deafferented patient's endpoint therefore provided a good estimate of the accuracy of the target coordinates used by the arm motor system. The deafferented subject showed relatively good accuracy by producing a saccade prior to the pointing, but large overshooting in the fixation condition and undershooting in the pursuit condition. The results suggest that the deafferented subject does use oculomotor signals to program arm movement and that signals associated with fast movements of the eyes are better for pointing accuracy than slow ramp movements. The inaccuracy of the deafferented subject when no eye movement is allowed (the condition in which the controls were the most accurate) suggests that, in this condition, a proprioceptive map is involved in which both the target and the arm are represented.

10.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest, to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90° rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.

11.
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angles ranged between 120° CCW and 120° CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90°). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
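
The decisive regression can be sketched as follows: saccade error is regressed jointly on the subjective allocentric distortion at the initial and final tilt and on the amount of intervening rotation; the allocentric account predicts substantial weights on the distortion terms and a near-zero weight on rotation. The data and coefficients below are simulated for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

distortion_initial = rng.normal(0, 20, n)  # subjective gravity error at initial tilt (deg)
distortion_final = rng.normal(0, 20, n)    # subjective gravity error at final tilt (deg)
rotation = rng.uniform(-240, 240, n)       # intervening body rotation (deg)

# Simulate errors under the allocentric hypothesis (+ noise): errors track the
# distortions, not the rotation. Coefficients are invented for illustration.
error = 0.8 * distortion_final - 0.6 * distortion_initial + rng.normal(0, 5, n)

X = np.c_[distortion_initial, distortion_final, rotation, np.ones(n)]
beta, *_ = np.linalg.lstsq(X, error, rcond=None)
print("weights [init distortion, final distortion, rotation, intercept]:",
      np.round(beta, 2))   # the rotation weight should come out near zero
```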

12.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.

13.
We attempt to determine the egocentric reference frame used in directing saccades to remembered targets when landmark-based (exocentric) cues are not available. Specifically, we tested whether memory-guided saccades rely on a retina-centered frame, which must account for eye movements that intervene during the memory period (thereby accumulating error) or on a head-centered representation that requires knowledge of the position of the eyes in the head. We also examined the role of an exocentric reference frame in saccadic targeting since it would not need to account for intervening movements. We measured the precision of eye movements made by human observers to target locations held in memory for a few seconds. A variable number of saccades intervened between the visual presentation of a target and a later eye movement to its remembered location. A visual landmark that allowed for exocentric encoding of the memory target appeared in half the trials. Variable error increased slightly with a greater number of intervening saccades. The landmark aided targeting precision, but did not eliminate the increase in variable error with additional intervening saccades. We interpret these results as evidence for a representation that relies on knowledge of eye position with respect to the head and not one that relies solely on updating in a retina-centered frame. Our results allow us to set an upper bound on the standard deviation of an eye position signal available to the saccadic system during short memory periods at 1.4° for saccades of about 10°.
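
The contrast between the two accounts is a variance-propagation argument: a purely retina-centred representation updated across n intervening saccades should accumulate variance roughly linearly in n, whereas a head-centred readout needs only one noisy eye-position estimate; the abstract reports only a slight increase with n, closer to the head-centred prediction. A sketch of the arithmetic, in which only the 1.4° bound comes from the abstract and the other σ values are invented:

```python
import math

sigma_memory = 1.0   # hypothetical baseline memory/motor noise (deg)
sigma_update = 0.5   # hypothetical per-saccade updating noise (deg)
sigma_eye = 1.4      # the paper's upper bound on the eye-position signal SD (deg)

def sd_retina_centred(n_saccades: int) -> float:
    """Retina-centred account: independent updating noise accumulates per saccade."""
    return math.sqrt(sigma_memory**2 + n_saccades * sigma_update**2)

def sd_head_centred() -> float:
    """Head-centred account: one eye-position estimate, independent of n."""
    return math.sqrt(sigma_memory**2 + sigma_eye**2)

for n in range(5):
    print(n, round(sd_retina_centred(n), 2), round(sd_head_centred(), 2))
```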

14.
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors, and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations, such as egocentric and allocentric frames of reference, for immediate and delayed pointing, respectively. Therefore, the CB children, like the BS children, are able to use both ego- and allocentric frames of reference.

15.
If a peripheral target follows an ipsilateral cue with a stimulus-onset asynchrony (SOA) of 300 ms or more, its detection is delayed compared to a contralateral-cue condition. This phenomenon, known as inhibition of return (IOR), affects responses to visual, auditory, and tactile stimuli, and is thought to provide an index of exogenous shifts of spatial attention. The present study investigated whether tactile IOR occurs in a somatotopic vs an allocentric frame of reference. In experiment 1, tactile cue and target stimuli were presented to the index and middle fingers of either hand, with the hands positioned in an uncrossed posture (SOA 500 or 1,000 ms). Speeded target detection responses were slowest for targets presented from the cued finger, and were also slower for targets presented to the adjacent finger on the cued hand than to either finger on the uncued hand. The same pattern of results was also reported when the index and middle fingers of the two hands were interleaved on the midline (experiment 2), suggesting that the gradient of tactile IOR surrounding a cued body site is modulated by the somatotopic rather than by the allocentric distance between cue and target.
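
The somatotopic-gradient conclusion can be phrased as a simple model in which the IOR cost decays with distance on the body map rather than distance in external space. The exponential form and all parameter values below are illustrative assumptions:

```python
import math

def detection_rt(somatotopic_distance: float, base_rt_ms: float = 350.0,
                 ior_cost_ms: float = 40.0, decay: float = 1.0) -> float:
    """RT to a tactile target as a function of somatotopic distance from the
    cued site (0 = cued finger). The exponential decay is an illustrative choice."""
    return base_rt_ms + ior_cost_ms * math.exp(-somatotopic_distance / decay)

# 0 = cued finger, 1 = adjacent finger on the cued hand, 2 = fingers on the other hand.
for d in (0, 1, 2):
    print(d, round(detection_rt(d), 1))  # slowest at the cued site, fastest on the uncued hand
```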

16.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.
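
The "flattening of the performance rotation curve" refers to the classic linear increase of RT with a stimulus's angular departure from its canonical orientation; joint attention reduced that slope. A sketch with hypothetical intercept and slope values:

```python
def handedness_rt(angle_deg: float, slope_ms_per_deg: float,
                  intercept_ms: float = 600.0) -> float:
    """Classic mental-rotation model: RT grows linearly with the angular
    departure (0-180 deg) from the canonical orientation. Values illustrative."""
    return intercept_ms + slope_ms_per_deg * abs(angle_deg)

for angle in (0, 60, 120, 180):
    individual = handedness_rt(angle, slope_ms_per_deg=2.0)  # default egocentric frame
    joint = handedness_rt(angle, slope_ms_per_deg=1.2)       # flattened under joint attention
    print(angle, individual, joint)
```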

17.
During trunk-assisted reaching to targets placed within arm's length, the influence of trunk motion on the hand trajectory is compensated for by changes in the arm configuration. The role of proprioception in this compensation was investigated by analyzing the movements of 2 deafferented and 12 healthy subjects. Subjects reached to remembered targets (placed ~80° ipsilateral or ~45° contralateral to the sagittal midline) with an active forward movement of the trunk produced by hip flexion. In 40% of randomly selected trials, trunk motion was mechanically blocked. No visual feedback was provided during the experiment. The hand trajectory and velocity profiles of healthy subjects remained invariant whether or not the trunk was blocked. The invariance was achieved by changes in arm interjoint coordination that, for reaches toward the ipsilateral target, started as early as 50 ms after the perturbation. Both deafferented subjects exhibited considerable, though incomplete, compensation for the effects of the perturbation. Compensation was more successful for reaches to the ipsilateral target. Both deafferented subjects showed invariance between conditions (unobstructed or blocked trunk motion) in their hand paths to the ipsilateral target, and one did to the contralateral target. For the other deafferented subject, hand paths in the two types of trials began to deviate after about 50% into the movement, because of excessive elbow extension. In movements to the ipsilateral target, when deafferented subjects compensated successfully, the changes in arm joint angles were initiated as early as 50 ms after the trunk perturbation, similar to healthy subjects. Although the deafferented subjects showed less than ideal compensatory control, they compensated to a remarkably large extent given their complete loss of proprioception. The presence of partial compensation in the absence of vision and proprioception points to the likelihood that not only proprioception but also vestibulospinal pathways help mediate this compensation.

18.
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task, only the maximum contraction correlates with the sight-line, and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.

19.
Spatial orientation is crucial when subjects have to accurately reach memorized visual targets. In previous studies, modified gravitoinertial force fields were used to affect the accuracy of pointing movements in complete darkness without visual feedback of the moving limb. Target mislocalization was put forward as one hypothesis to explain this decrease in accuracy of pointing movements. The aim of this study was to test this hypothesis by determining the accuracy of spatial localization of memorized visual targets in a perturbed gravitoinertial force field. As head orientation is involved in localization tasks and carries relevant sensory systems (visual, vestibular and neck muscle proprioceptive), we also tested the effect of head posture on the accuracy of localization. Subjects (n = 10) were seated off-axis on a rotating platform (120° s⁻¹) in complete darkness with the head fixed (head-fixed session) or free to move (head-free session). They were required to report verbally the egocentric spatial localization of memorized visual targets. They gave the perceived target location in direction (i.e. left or right) and in amplitude (in centimeters) relative to the direction they thought to be straight ahead. Results showed that the accuracy of visual localization decreased when subjects were exposed to inertial forces. Moreover, subjects localized the memorized visual targets more to the right than their actual position, that is, in the direction of the inertial forces. With further analysis, it appeared that this shift of localization was concomitant with a shift of the visual straight ahead (VSA) in the opposite direction. Thus, the modified gravitoinertial force field led to a modification in the orientation of the egocentric reference frame. Furthermore, this shift of localization increased when the head was free to move, with the head tilted in roll toward the center of rotation of the platform and turned in yaw in the same direction. It is concluded that the orientation of the egocentric reference frame was influenced by the gravitoinertial vector.
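
The perturbed force field itself follows from elementary mechanics: at angular velocity ω and seating radius r, the centripetal acceleration is ω²r, and the gravitoinertial vector tilts by atan(ω²r/g) from the gravitational vertical. A worked example; the radius is a hypothetical value, since the abstract reports only the 120° s⁻¹ rotation speed:

```python
import math

omega = math.radians(120)  # platform angular velocity: 120 deg/s converted to rad/s
r = 0.8                    # hypothetical off-axis seating radius (m); not given in the abstract
g = 9.81                   # gravitational acceleration (m/s^2)

a_centripetal = omega**2 * r  # m/s^2, directed toward the rotation axis
gia_tilt = math.degrees(math.atan2(a_centripetal, g))
print(f"centripetal acceleration: {a_centripetal:.2f} m/s^2")
print(f"gravitoinertial vector tilt from vertical: {gia_tilt:.1f} deg")
```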

20.
Many movements that people perform every day are directed at visual targets, e.g., when we press an elevator button. However, many other movements are not target-directed, but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing or copying. Here, we show a reaction-time difference between these two types of movements in four separate experiments. In Exp. 1, subjects moved their eyes freely and used direct hand movements. In Exp. 2, subjects moved their eyes freely and their movements were tool-mediated (computer mouse). In Exp. 3, subjects fixated a central target and the visual field in which visual information was presented was manipulated. Experiment 4 was identical to Exp. 3 except for the fact that visual information about targets disappeared before movement onset. In all four experiments, reaction times in the allocentric task were approximately 35 ms slower than they were in the target-directed task. We suggest that this difference in reaction time between the two tasks reflects the fact that allocentric, but not target-directed, movements recruit the ventral stream, in particular lateral occipital cortex, which increases processing time. We also observed an advantage for movements made in the lower visual field as measured by movement variability, whether or not those movements were allocentric or target-directed. This latter result, we argue, reflects the role of the dorsal visual stream in the online control of movements in both kinds of tasks.
