Similar Documents
20 similar documents found (search time: 250 ms)
1.
If a peripheral target follows an ipsilateral cue with a stimulus-onset asynchrony (SOA) of 300 ms or more, its detection is delayed compared to a contralateral-cue condition. This phenomenon, known as inhibition of return (IOR), affects responses to visual, auditory, and tactile stimuli, and is thought to provide an index of exogenous shifts of spatial attention. The present study investigated whether tactile IOR occurs in a somatotopic or an allocentric frame of reference. In experiment 1, tactile cue and target stimuli were presented to the index and middle fingers of either hand, with the hands positioned in an uncrossed posture (SOA 500 or 1,000 ms). Speeded target detection responses were slowest for targets presented to the cued finger, and were also slower for targets presented to the adjacent finger on the cued hand than to either finger on the uncued hand. The same pattern of results was also observed when the index and middle fingers of the two hands were interleaved on the midline (experiment 2), suggesting that the gradient of tactile IOR surrounding a cued body site is modulated by the somatotopic rather than by the allocentric distance between cue and target.
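The graded RT pattern described above can be sketched numerically. The detection times below are invented for illustration only (not data from the study); the sketch assumes NumPy:

```python
import numpy as np

# Hypothetical detection RTs (ms) grouped by cue-target relation,
# illustrating the somatotopic IOR gradient: slowest at the cued
# finger, intermediate at the adjacent finger on the cued hand,
# fastest on the uncued hand.
rts = {
    "cued_finger":        [412, 398, 425, 407],
    "adjacent_cued_hand": [390, 385, 402, 381],
    "uncued_hand":        [360, 355, 372, 349],
}
means = {cond: float(np.mean(v)) for cond, v in rts.items()}

# IOR gradient predicts this ordering of mean RTs
assert means["cued_finger"] > means["adjacent_cued_hand"] > means["uncued_hand"]
for cond, m in means.items():
    print(f"{cond}: {m:.1f} ms")
```

The IOR effect at each site is simply the mean RT difference from the uncued-hand baseline.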

2.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.

3.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
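The 95% confidence ellipse of variable errors mentioned above is conventionally computed from the eigen-decomposition of the 2-D error covariance matrix. A minimal sketch, using a synthetic error cloud rather than the study's data:

```python
import numpy as np

# Synthetic 2-D variable pointing errors: an elongated, rotated cloud
# standing in for one subject's scatter of endpoint errors.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, [2.0, 0.5], size=(200, 2))          # anisotropic scatter
rot = np.array([[0.8, -0.6], [0.6, 0.8]])                  # rotation matrix
errors = raw @ rot

cov = np.cov(errors, rowvar=False)                         # 2x2 covariance
eigvals, eigvecs = np.linalg.eigh(cov)                     # ascending eigenvalues
chi2_95 = 5.991                                            # chi-square 95% quantile, df=2
axes = np.sqrt(eigvals * chi2_95)                          # ellipse semi-axes
major_dir = eigvecs[:, np.argmax(eigvals)]                 # main-axis direction
angle = np.degrees(np.arctan2(major_dir[1], major_dir[0]))
print(f"semi-axes: {axes[0]:.2f}, {axes[1]:.2f}; major-axis angle: {angle:.1f} deg")
```

The orientation of `major_dir` is what such studies compare against the head/gaze direction.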

4.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast, the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.

5.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a two-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric) or the sequence of movements (egocentric) or both, and retested the subjects on their ability to transfer the sequence knowledge they had acquired in training, and also determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

6.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, the results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

7.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject's midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the lower of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround's oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds, and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the "flash-lag" effect.
Robert B. Post

8.
On the timing of reference frames for action control
This study investigated the time course and automaticity of spatial coding of visual targets for pointing movements. To provide an allocentric reference, placeholders appeared on a touch screen either 500 ms before target onset, or simultaneously with target onset, or at movement onset, or not at all (baseline). With both blocked and randomized placeholder timing, movements to the most distant targets were only facilitated when placeholders were visible before movement onset. This result suggests that allocentric target coding is most useful during movement planning and that this visuo-spatial coding mechanism is not sensitive to strategic effects.

9.
It is now well established that the accuracy of pointing movements to visual targets is worse in the full open loop condition (FOL; the hand is never visible) than in the static closed loop condition (SCL; the hand is only visible in static position prior to movement onset). In order to account for this result, it is generally admitted that viewing the hand in static position (SCL) improves the movement planning process by allowing a better encoding of the initial state of the motor apparatus. Interestingly, this widespread interpretation has recently been challenged by several studies suggesting that the effect of viewing the upper limb at rest might be explained in terms of the simultaneous vision of the hand and target. This result is supported by recent studies showing that goal-directed movements involve different types of planning (egocentric versus allocentric) depending on whether the hand and target are seen simultaneously or not before movement onset. The main aim of the present study was to test whether or not the accuracy improvement observed when the hand is visible before movement onset is related, at least partially, to a better encoding of the initial state of the upper limb. To address this question, we studied experimental conditions in which subjects were instructed to point with their right index finger toward their unseen left index finger. In that situation (proprioceptive pointing), the hand and target are never visible simultaneously, and an improvement of movement accuracy in SCL, with respect to FOL, may only be explained by a better encoding of the initial state of the moving limb when vision is present. The results of this experiment showed that both the systematic and the variable errors were significantly lower in the SCL than in the FOL condition. This suggests: (1) that the effect of viewing the static hand prior to motion does not only depend on the simultaneous vision of the goal and the effector during movement planning; (2) that knowledge of the initial upper limb configuration or position is necessary to accurately plan goal-directed movements; (3) that static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture, and/or hand location relative to the body; and (4) that static visual information significantly improves the representation provided by the static proprioceptive channel. Received: 23 July 1996 / Accepted: 13 December 1996

10.
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increases during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
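The control analysis described above, correlating fixation and pointing errors after removing the step's direct contribution from both signals, amounts to a partial correlation. A sketch with simulated 1-D signals (all parameters invented for illustration):

```python
import numpy as np

# Simulated per-trial signals: a step displacement, a shared gaze/arm
# command ("common"), and independent noise in each measure.
rng = np.random.default_rng(1)
n = 300
step = rng.normal(0, 5, n)
common = rng.normal(0, 3, n)                         # shared drive
gaze = 0.6 * step + common + rng.normal(0, 1, n)
point = 0.5 * step + common + rng.normal(0, 1, n)

def residualize(y, x):
    """Remove the linear contribution of x from y."""
    coeffs = np.polyfit(x, y, 1)
    return y - np.polyval(coeffs, x)

# Partial correlation: what remains after regressing out the step
r_partial = np.corrcoef(residualize(gaze, step),
                        residualize(point, step))[0, 1]
print(f"partial correlation (step removed): {r_partial:.2f}")
```

A high `r_partial` despite the step being regressed out mirrors the paper's argument that the gaze-pointing covariance is not a mere by-product of the step.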

11.
The present study investigated the brain dynamics accompanying spatial navigation based on distinct reference frames. Participants preferentially using an allocentric or an egocentric reference frame navigated through virtual tunnels and reported their homing direction at the end of each trial based on their spatial representation of the passage. Task-related electroencephalographic (EEG) dynamics were analyzed based on independent component analysis (ICA) and subsequent clustering of independent components. Parietal alpha desynchronization during encoding of spatial information predicted homing performance for participants using an egocentric reference frame. In contrast, retrosplenial and occipital alpha desynchronization during retrieval covaried with homing performance of participants using an allocentric reference frame. These results support the assumption of distinct neural networks underlying the computation of distinct reference frames and reveal a direct relationship of alpha modulation in parietal and retrosplenial areas with encoding and retrieval of spatial information for homing behavior.

12.
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angle ranged between -120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
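The regression logic above, testing whether saccade errors track the subjective allocentric distortion, can be sketched with a simple linear fit. The angles below are fabricated for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical per-condition values (degrees): subjective distortion of
# the perceived gravity direction at a given tilt, and the mean saccade
# directional error measured at that tilt.
distortion  = np.array([-40., -25., -10., 0., 12., 28., 45.])
saccade_err = 0.9 * distortion + np.array([3., -2., 1., 0., -1., 2., -3.])

slope, intercept = np.polyfit(distortion, saccade_err, 1)
r = np.corrcoef(distortion, saccade_err)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}")
```

A slope near 1 with a high correlation is the signature the paper takes as evidence for allocentric (gravity-based) coding; under egocentric coding the errors would instead scale with the intervening rotation.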

13.
This research is about the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is firstly encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing where reference frames play a primary role and selectively interact with subsequent processing of spatial relations.

14.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to see whether allocentric reference frame-related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley randomly in each trial. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.

15.
Inter- and intra-sensory modality matching by 8-year-old children diagnosed as having hand-eye co-ordination problems (HECP) and by a control group of children without such problems were tested using a target-location and pointing task. The task required the children to locate target pins visually (seen target), with the hand (felt target) or in combination (felt and seen target), while pointing to the located target was always carried out without vision. The most striking finding, for both the control and the HECP children, was the superiority of performance when the target had to be located visually. When combined scores for both hands were analysed, the HECP children showed inferior performance to the control children in both inter- and intra-modal matching. Analyses of the scores achieved with the preferred and non-preferred hand separately, however, demonstrated that the differences between the HECP and the control children could, in the main, be attributed to lowered performances when the non-preferred hand was used for pointing to the target. When pointing with the preferred hand, the only significant difference between the groups was when the target was visually located, the control children showing superior performance. Pointing with the non-preferred hand gave rise to significant differences, in favour of the control children, when the target was located visually, with the hand or in combination. These findings suggest that earlier studies, using only the preferred hand or a combination of the scores of both hands, might need to be qualified. Putative neurological disorders in the HECP children are invoked to account for the poor performance with the non-preferred hand. Received: 31 May 1996 / Accepted: 31 October 1996

16.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to the other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in a self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on "egocentric hand stimuli" (right hand, fingers up). In Experiment 2, participants were explicitly required to judge laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants' performance due to a conflict between motor and visual mechanisms.

17.
The effect of fatigue on finger force perception within a hand during ipsilateral finger force matching was examined. Thirteen subjects were instructed to match a reference force of an instructed finger using the same or a different finger within the hand before and after index finger fatigue. Absolute reference force targets for the index or little finger were identical during pre- and post-fatigue sessions. Fatigue was induced by a 60-s sustained maximal voluntary contraction (MVC) of the index finger. Index finger MVC decreased approximately 29%, while there was a non-significant (about 5%) decrease in the little finger MVC. The results showed that: (1) the absolute reference and matching forces of the instructed fingers were not significantly changed after fatigue, while the total forces (sum of instructed and uninstructed finger forces) were increased after fatigue. (2) The relative forces (with respect to corresponding pre- and post-fatigue MVCs) of the index finger increased significantly in both reference and matching tasks, while the relative forces of the little finger remained unchanged after fatigue. (3) Matching errors remained unchanged after fatigue when the fatigued index finger produced the reference force, while the errors increased significantly when the fatigued index finger produced the matching force. (4) Enslaving (the difference between total and instructed finger forces) increased significantly after fatigue, especially during force production by the fatigued index finger and when the little finger produced matching forces at higher force levels. (5) Enslaving significantly increased matching errors, particularly after fatigue. Taken together, our results suggest that absolute finger forces within the hand are perceived within the CNS during ipsilateral finger force matching. Perception of absolute forces of the fatigued index finger is not altered after fatigue. The ability of the fatigued index finger to reproduce little finger forces is, however, impaired to a certain degree. The impairment is likely to be attributable to altered afferent/efferent relationships of the fatigued index finger.
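The enslaving measure used above is a simple difference: total hand force minus the instructed finger's force. A toy computation with invented force values makes the definition concrete:

```python
# Enslaving, as defined in the abstract: total force minus the
# instructed finger's force. All force values (N) are hypothetical.
instructed = 12.0                     # force by the instructed finger
uninstructed = [0.8, 0.4, 0.3]        # involuntary forces, other fingers
total = instructed + sum(uninstructed)
enslaving = total - instructed        # = sum of uninstructed forces
print(f"enslaving: {enslaving:.1f} N")
```

By construction, enslaving equals the summed involuntary force of the uninstructed fingers, which is why it grows when fatigue degrades the independence of finger control.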

18.
Goal-directed actions become truly functional and skilled when they are consistent yet flexible. In manual pointing, end-effector consistency is characterized by the end position of the index fingertip, whereas flexibility in movement execution is captured by the use of abundant arm-joint configurations not affecting the index finger end position. Because adults have been shown to exploit their system's flexibility in challenging conditions, we wondered whether during middle childhood children are already able to exploit motor flexibility when demanded by the situation. We had children aged 5–10 years and adults perform pointing movements in a nonchallenging and a challenging condition. Results showed that end-effector errors and flexibility in movement execution decreased with age. Importantly, only the 9- and 10-year-olds and adults showed increased flexibility in the challenging condition. Thus, while consistency increases and flexibility decreases during mid-childhood development, from the age of nine children appear able to employ more flexibility with increasing task demands.

19.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest, to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90°-rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.

20.
The present study investigated the control of manual prehension movements in humans. Subjects grasped luminous virtual discs with the thumb and index finger, and we recorded the instantaneous grip aperture, defined as the 3-D distance between the thumb and index finger. Target size could remain constant (single-step trials) or unexpectedly change shortly after target appearance (double-step trials). In single-step responses, grip aperture varied throughout the movement in a consistent fashion. Double-step responses exhibited distinct corrective modifications, which followed the target change with a latency similar to the normal reaction time. This suggests that visual size information has fast and continuous access to the processes involved in grip formation. The grip-aperture profiles of single-step responses had a different shape when the target called for an increase than when it called for a decrease in the initial finger distance. The same asymmetry was observed for aperture corrections in double-step trials. These findings indicate that increases and decreases of grip aperture are controlled through separate processes, engaged equally by the appearance and by the size change of a target. Corrections of grip aperture in double-step trials had a higher peak velocity and reached their maximum as well as their final value earlier than the aperture profiles of single-step trials. Nevertheless, the total duration of double-step trials was prolonged. These response characteristics did not fit with any of the three corrective strategies previously proposed for double-step pointing movements, which could indicate that grasping and pointing movements are controlled by different mechanisms. However, more data are needed to substantiate this view. Received: 20 April 1998 / Accepted: 28 October 1998
