Similar Articles
20 similar articles found.
1.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a bi-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric), the sequence of movements (egocentric), or both, retested the subjects on their ability to transfer the sequence knowledge they had acquired in training, and determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

2.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.

3.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.

4.
The present study investigated the brain dynamics accompanying spatial navigation based on distinct reference frames. Participants preferentially using an allocentric or an egocentric reference frame navigated through virtual tunnels and reported their homing direction at the end of each trial based on their spatial representation of the passage. Task-related electroencephalographic (EEG) dynamics were analyzed based on independent component analysis (ICA) and subsequent clustering of independent components. Parietal alpha desynchronization during encoding of spatial information predicted homing performance for participants using an egocentric reference frame. In contrast, retrosplenial and occipital alpha desynchronization during retrieval covaried with homing performance of participants using an allocentric reference frame. These results support the assumption of distinct neural networks underlying the computation of distinct reference frames and reveal a direct relationship of alpha modulation in parietal and retrosplenial areas with encoding and retrieval of spatial information for homing behavior.

5.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest, to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90° rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.

6.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advanced knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.

7.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to see whether allocentric reference frame-related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley randomly in each trial. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.

8.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to the other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task with visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

9.
Convergent findings demonstrate that numbers can be represented according to a spatially oriented mental number line. However, it is not established whether a default organization of the mental number line exists (i.e., a left-to-right orientation) or whether its spatial arrangement is only the epiphenomenon of specific task requirements. To address this issue we performed two experiments in which subjects were required to judge laterality of hand stimuli preceded by small, medium or large numerical cues; hand stimuli were compatible with egocentric or allocentric perspectives. We found evidence of a left-to-right number–hand association in processing stimuli compatible with an egocentric perspective, whereas the reverse mapping was found with hands compatible with an allocentric perspective. These findings demonstrate that the basic left-to-right arrangement of the mental number line is defined with respect to the body-centred egocentric reference frame.

10.
The spatial location of an object can be represented in the brain with respect to different classes of reference frames, either relative to or independent of the subject's position. We used functional magnetic resonance imaging to identify regions of the healthy human brain subserving mainly egocentric or allocentric (object-based) coordinates by asking subjects to judge the location of a visual stimulus with respect to either their body or an object. A color-judgement task, matched for stimuli, difficulty, motor and oculomotor responses, was used as a control. We identified a bilateral, though mainly right-hemisphere based, fronto-parietal network involved in egocentric processing. A subset of these regions, including a much less extensive unilateral, right fronto-parietal network, was found to be active during object-based processing. The right-hemisphere lateralization and the partial superposition of the egocentric and the object-based networks is discussed in the light of neuropsychological findings in brain-damaged patients with unilateral spatial neglect and of neurophysiological studies in the monkey.

11.
Riva G. Medical Hypotheses, 2012, 78(2):254-257
Evidence from psychology and neuroscience indicates that our spatial experience, including the bodily one, involves the integration of different sensory inputs within two different reference frames: egocentric (body as reference of first-person experience) and allocentric (body as object in the physical world). Even if functional relations between these two frames are usually limited, they influence each other during the interaction between long- and short-term memory processes in spatial cognition. If, for some reason, this process is impaired, the egocentric sensory inputs are no longer able to update the contents of the allocentric representation of the body: the subject is locked to it. In the presented perspective, subjects with eating disorders are locked to an allocentric representation of their body, stored in long-term memory (allocentric lock). A significant role in the locking may be played by the medial temporal lobe, and in particular by the connection between the hippocampal complex and amygdala. The differences between exogenous and endogenous causes of the lock may also explain the difference between bulimia nervosa and anorexia nervosa.

12.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.

13.
What humans haptically perceive as parallel is often far from physically parallel. These deviations from parallelity are highly significant and very systematic. There is accumulating evidence, both psychophysical and neurophysiological, that what is haptically parallel is decided in a frame of reference intermediate to an allocentric and an egocentric one. The central question here concerns the nature of the egocentric frame of reference. In the literature, various kinds of egocentric reference frames are mentioned for haptic spatial tasks, such as hand-centered, arm-centered, and body-centered frames of reference. Thus far, it has not been possible to distinguish between body-centered, arm-centered, and hand-centered reference frames in our experiments, as hand and arm orientation always covaried with distance from the body-midline. In the current set of experiments the influence of body-centered and hand-centered reference frames could be dissociated. Subjects were asked to make a test bar haptically parallel to a reference bar in five different conditions, in which their hands were oriented straight ahead, rotated to the left, rotated to the right, rotated outward or rotated inward. If the reference frame is body-centered, the deviations should be independent of condition. If, on the other hand, the reference frame is hand-centered, the deviations should vary with condition. The results show that deviation size varies strongly with condition, exactly in the way predicted by the influence of a hand-centered egocentric frame of reference. Interestingly, this implies that subjects do not sufficiently take into account the orientation of their hands.

14.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

15.
This review examines the isotropy of the perception of spatial orientations in the haptic system. It shows the existence of an oblique effect (i.e., a better perception of vertical and horizontal orientations than oblique orientations) in a spatial plane intrinsic to the haptic system, determined by the gravitational cues and the cognitive resources and defined in a subjective frame of reference. Similar results are observed from infancy to adulthood. In 3D space, the haptic processing of orientations is also anisotropic and seems to use both egocentric and allocentric cues. Taken together, these results revealed that the haptic oblique effect occurs when the sensory motor traces associated with exploratory movement are represented more abstractly at a cognitive level.

16.
We investigated the influence of gaze elevation on judging the possibility of passing under high obstacles during pitch body tilts, while stationary, in absence of allocentric cues. Specifically, we aimed at studying the influence of egocentric references upon geocentric judgements. Seated subjects, orientated at various body orientations, were asked to perceptually estimate the possibility of passing under a projected horizontal line while keeping their gaze on a fixation target and imagining a horizontal body displacement. The results showed a global overestimation of the possibility of passing under the line, and confirmed the influence of body orientation reported by Bringoux et al. (Exp Brain Res 185(4):673–680, 2008). More strikingly, a linear influence of gaze elevation was found on perceptual estimates. Precisely, downward eye elevation yielded increased overestimations, and conversely upward gaze elevation yielded decreased overestimations. Furthermore, body and gaze orientation effects were independent and combined additively to yield a global egocentric influence with weights of 45% and 54%, respectively. Overall, our data suggest that multiple egocentric references can jointly affect the estimated possibility of passing under high obstacles. These results are discussed in terms of “interpenetrability” between geocentric and egocentric reference frames and clearly demonstrate that gaze elevation is involved, as body orientation, in geocentric spatial localization.

17.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.

18.
When programming movement, one must account for gravitational acceleration. This is particularly important when catching a falling object because the task requires a precise estimate of time-to-contact. Knowledge of gravity’s effects is intimately linked to our definition of ‘up’ and ‘down’. Both directions can be described in an allocentric reference frame, based on visual and/or gravitational cues, or in an egocentric reference frame in which the body axis is taken as vertical. To test which frame humans use to predict gravity’s effect, we asked participants to intercept virtual balls approaching from above or below with artificially controlled acceleration that could be congruent or not with gravity. To dissociate between these frames, subjects were seated upright (trunk parallel to gravity) or lying down (body axis orthogonal to the gravitational axis). We report data in line with the use of an allocentric reference frame and discuss its relevance depending on available gravity-related cues.

19.
Skill improvements may develop between practice sessions during memory consolidation. Skill enhancement within an egocentric coordinate frame develops over wake, whereas skill enhancement in an allocentric coordinate frame develops over a night of sleep. We tested whether both types of improvement could develop over two different 24-h intervals: from 8 am to 8 am or from 8 pm to 8 pm. We found that for each 24-h interval, only one type of skill improvement was seen. Despite passing through wake and a night of sleep, participants only showed skill improvements commensurate with either a night of sleep or a day awake. The nature of the off-line skill enhancement was determined by when consolidation occurred within the normal sleep–wake cycle. We conclude that motor sequence consolidation is constrained either by having critical time windows or by a competitive interaction in which improvements within one co-ordinate frame actively block improvements from developing in the alternative co-ordinate frame.

20.
Animals with medial prefrontal cortex or parietal cortex lesions and sham-operated and non-operated controls were tested for the acquisition of an adjacent arm task that accentuated the importance of egocentric spatial localization and a cheese board task that accentuated the importance of allocentric spatial localization. Results indicated that relative to controls, animals with medial prefrontal cortex lesions were impaired on the adjacent arm task but displayed facilitation on the cheese board task. In contrast, relative to controls, rats with parietal cortex lesions were impaired on the cheese board task but showed no impairment on the adjacent arm task. The data suggest a double dissociation of function between medial prefrontal cortex and parietal cortex in terms of coding of egocentric versus allocentric spatial information.
