Similar Articles
1.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to see whether allocentric reference frame-related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley randomly in each trial. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.
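The place-versus-response logic of this cross-maze design can be made concrete with a short sketch. The maze layout encoding and function names below are illustrative assumptions, not the authors' analysis code.

```python
# Sketch: dissociating allocentric place coding from egocentric response
# coding in a cross maze (assumed layout, not the study's actual code).
# From the south alley the participant faces north, so a left turn leads
# to the west alley; from the north alley the turn-to-place mapping flips.

TURN_TO_PLACE = {
    ("south", "left"):  "west",
    ("south", "right"): "east",
    ("north", "left"):  "east",
    ("north", "right"): "west",
}

def classify_strategy(prev_start, prev_turn, cur_start, cur_turn):
    """Label a pair of consecutive trials: 'place' if the same goal alley
    was revisited, 'response' if the same body-relative turn was repeated.
    Pairs starting from the same alley are non-diagnostic ('other')."""
    same_place = (TURN_TO_PLACE[(prev_start, prev_turn)]
                  == TURN_TO_PLACE[(cur_start, cur_turn)])
    same_response = prev_turn == cur_turn
    if same_place and not same_response:
        return "place"      # allocentric coding
    if same_response and not same_place:
        return "response"   # egocentric coding
    return "other"

# West chosen from the south alley, then west again from the north alley:
print(classify_strategy("south", "left", "north", "right"))  # -> place
```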

2.
Convergent findings demonstrate that numbers can be represented according to a spatially oriented mental number line. However, it is not established whether a default organization of the mental number line exists (i.e., a left-to-right orientation) or whether its spatial arrangement is only the epiphenomenon of specific task requirements. To address this issue, we performed two experiments in which subjects were required to judge the laterality of hand stimuli preceded by small, medium or large numerical cues; hand stimuli were compatible with egocentric or allocentric perspectives. We found evidence of a left-to-right number–hand association in processing stimuli compatible with an egocentric perspective, whereas the reverse mapping was found with hands compatible with an allocentric perspective. These findings demonstrate that the basic left-to-right arrangement of the mental number line is defined with respect to the body-centred egocentric reference frame.
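Such a number–hand mapping is typically quantified as a compatibility effect on reaction times. A minimal sketch with hypothetical trial data (not the study's), assuming small numbers map to the left under the default orientation:

```python
# Illustrative sketch: number-hand compatibility effect as the mean RT
# difference between incompatible and compatible cue-response pairings.
import statistics

# Each trial: (numerical cue magnitude, responding hand, reaction time in ms)
trials = [
    ("small", "left", 512), ("small", "right", 548),
    ("large", "right", 505), ("large", "left", 551),
]

def compatible(cue, hand):
    # Default left-to-right mental number line: small -> left, large -> right.
    return (cue == "small") == (hand == "left")

comp   = [rt for cue, hand, rt in trials if compatible(cue, hand)]
incomp = [rt for cue, hand, rt in trials if not compatible(cue, hand)]
effect = statistics.mean(incomp) - statistics.mean(comp)
print(f"compatibility effect: {effect:.0f} ms")  # positive -> left-to-right mapping
```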

3.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to the other's-body (allocentric) reference frame. Visual perspective taking tasks are also performed from a self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task with visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged the laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge the laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

4.
This research concerns the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is first encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing where reference frames play a primary role and selectively interact with subsequent processing of spatial relations.
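The design crosses two spatial relations (categorical/coordinate) with two reference frames (allocentric/egocentric). A minimal sketch of the four judgement types, with assumed one-dimensional screen coordinates:

```python
# Sketch of the four judgement types (assumed 1-D coordinates, not the
# authors' stimulus code). The allocentric reference is the centre of the
# horizontal bar; the egocentric reference is the observer's body midline.

def same_side(x1, x2, ref):                  # categorical relation
    return (x1 < ref) == (x2 < ref)

def same_distance(x1, x2, ref, tol=1e-6):    # coordinate relation
    return abs(abs(x1 - ref) - abs(x2 - ref)) < tol

bar_centre, body_midline = 0.0, -5.0   # hypothetical reference positions
x1, x2 = -3.0, 3.0                     # the two vertical bars

print(same_side(x1, x2, bar_centre))        # False: opposite sides (allocentric)
print(same_distance(x1, x2, bar_centre))    # True: equidistant (allocentric)
print(same_side(x1, x2, body_midline))      # True: both right of midline (egocentric)
print(same_distance(x1, x2, body_midline))  # False: 2 vs 8 units (egocentric)
```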

5.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al., Neuropsychologia, 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.
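The dissociation rests on which coordinate repeats across trials: a repeated display position keeps its body-centred coordinate but changes its retinotopic coordinate whenever fixation moves. A minimal sketch under assumed one-dimensional geometry:

```python
# Minimal sketch (assumed geometry, not the study's code): the same display
# position yields different retinotopic coordinates when fixation moves,
# but an unchanged body-centred coordinate.

def retinotopic(target_x, fixation_x):
    return target_x - fixation_x      # position relative to gaze

def body_centred(target_x, midline_x=0.0):
    return target_x - midline_x       # position relative to the body midline

target = 4.0                          # degrees, fixed on the display
for fixation in (-2.0, 2.0):          # fixation shifts between trials
    print(retinotopic(target, fixation), body_centred(target))
# -> 6.0 4.0  then  2.0 4.0: retinotopic changes, body-centred repeats
```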

6.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

7.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a two-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric), the sequence of movements (egocentric), or both, and retested the subjects on their ability to transfer the sequence knowledge they had acquired in training; we also determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

8.
The present study investigated the brain dynamics accompanying spatial navigation based on distinct reference frames. Participants preferentially using an allocentric or an egocentric reference frame navigated through virtual tunnels and reported their homing direction at the end of each trial based on their spatial representation of the passage. Task-related electroencephalographic (EEG) dynamics were analyzed based on independent component analysis (ICA) and subsequent clustering of independent components. Parietal alpha desynchronization during encoding of spatial information predicted homing performance for participants using an egocentric reference frame. In contrast, retrosplenial and occipital alpha desynchronization during retrieval covaried with homing performance of participants using an allocentric reference frame. These results support the assumption of distinct neural networks underlying the computation of distinct reference frames and reveal a direct relationship of alpha modulation in parietal and retrosplenial areas with encoding and retrieval of spatial information for homing behavior.
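The quantity behind "alpha desynchronization" is usually event-related desynchronization (ERD%), a power drop relative to baseline in the 8–12 Hz band. The sketch below computes it with generic band-pass plus Hilbert-envelope code; this is not the authors' ICA-based pipeline, and the sampling rate and toy signals are assumptions.

```python
# Hedged sketch of alpha-band event-related desynchronization (ERD%).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # Hz, assumed sampling rate

def alpha_power(x):
    """Instantaneous alpha (8-12 Hz) power via band-pass + Hilbert envelope."""
    b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x))) ** 2

def erd_percent(trial, baseline):
    """ERD% = (task - baseline) / baseline * 100; negative values mean
    desynchronization (an alpha-power drop relative to baseline)."""
    p_task = alpha_power(trial).mean()
    p_base = alpha_power(baseline).mean()
    return (p_task - p_base) / p_base * 100

rng = np.random.default_rng(0)
baseline = rng.standard_normal(FS)      # 1 s of pre-stimulus signal (toy)
trial = 0.5 * rng.standard_normal(FS)   # attenuated alpha during encoding (toy)
print(f"ERD: {erd_percent(trial, baseline):.1f}%")  # strongly negative
```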

9.
We investigated brain activity associated with recognition of appropriate action selection based on allocentric perspectives using functional magnetic resonance imaging. The participants observed video clips in which one person (responder) passed one of three objects after a request by a second person (requester). The requester was unable to see one of the three objects because it was occluded by another object. Participants were asked to judge the appropriateness of the responder's action selection based on the visual information from the requester's perspective (i.e., allocentric perspective), not the responder's perspective (i.e., egocentric perspective). The experimental factors included the congruency of request interpretation and the appropriateness of action selection. The results showed that brain regions including the right temporo-parieto-occipital (TPO) junction and the left inferior parietal lobule (IPL) were more activated when the interpretation of the requested object differed between the egocentric and allocentric perspectives than when it was the same (the incongruency effect). On the other hand, greater activation was found in the right dorsolateral prefrontal cortex (DLPFC) when the incongruency effect was compared only between the conditions of appropriate action selection (the interaction effect). These results suggest that both the TPO junction and the IPL are involved in obtaining visual information from the allocentric perspective when visual information based only on the egocentric perspective is insufficient to interpret another person's request. The right DLPFC is likely related to this process, overriding the interference of action selection based on the egocentric perspective.

10.
If a peripheral target follows an ipsilateral cue with a stimulus onset asynchrony (SOA) of 300 ms or more, its detection is delayed compared to a contralateral-cue condition. This phenomenon, known as inhibition of return (IOR), affects responses to visual, auditory, and tactile stimuli, and is thought to provide an index of exogenous shifts of spatial attention. The present study investigated whether tactile IOR occurs in a somatotopic versus an allocentric frame of reference. In experiment 1, tactile cue and target stimuli were presented to the index and middle fingers of either hand, with the hands positioned in an uncrossed posture (SOA 500 or 1,000 ms). Speeded target detection responses were slowest for targets presented from the cued finger, and were also slower for targets presented to the adjacent finger on the cued hand than to either finger on the uncued hand. The same pattern of results was also reported when the index and middle fingers of the two hands were interleaved on the midline (experiment 2), suggesting that the gradient of tactile IOR surrounding a cued body site is modulated by the somatotopic rather than by the allocentric distance between cue and target.
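The interleaved posture of experiment 2 works because it changes the spatial (allocentric) distance between fingers while leaving their somatotopic distance intact. A sketch with an assumed coding of the four fingers (not the authors' design files):

```python
# Sketch (assumed coding): somatotopic distance between cue and target
# fingers versus their allocentric (spatial) distance under the uncrossed
# and interleaved postures of experiments 1-2.

def somatotopic_distance(cue, target):
    # 0 = same finger, 1 = adjacent finger on the same hand, 2 = other hand
    if cue == target:
        return 0
    return 1 if cue[0] == target[0] else 2   # 'L_...' vs 'R_...'

POSTURES = {  # left-to-right spatial order of the four stimulated fingers
    "uncrossed":   ["L_middle", "L_index", "R_index", "R_middle"],
    "interleaved": ["L_middle", "R_index", "L_index", "R_middle"],
}

def allocentric_distance(cue, target, posture):
    order = POSTURES[posture]
    return abs(order.index(cue) - order.index(target))

for posture in POSTURES:
    print(posture,
          somatotopic_distance("L_index", "L_middle"),      # always 1
          allocentric_distance("L_index", "L_middle", posture))
# Interleaving changes the allocentric distance (1 -> 2) while the
# somatotopic distance stays fixed, so the two accounts make different
# predictions about the IOR gradient.
```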

11.
In this study, we investigated the spatial dependency of action simulation. From previous research on single-cell recordings, grasping, and crossmodal extinction, it is known that our surrounding space can be divided into a peripersonal space and an extrapersonal space. These two spaces are functionally different at both the behavioral and neuronal levels. The peripersonal space can be seen as an action space, limited to the area in which we can grasp objects without moving the object or ourselves. The extrapersonal space is the space beyond the peripersonal space. Objects situated within peripersonal space are mapped onto an egocentric reference frame. This mapping is thought to be accomplished by action simulation. To provide direct evidence of the embodied nature of this simulated motor act, we performed two experiments using two mental rotation tasks, one with stimuli of hands and one with stimuli of graspable objects. Stimuli were presented in both peripersonal and extrapersonal space. The results showed increased reaction times for biomechanically difficult-to-adopt postures compared with easier-to-adopt postures for both hand and graspable-object stimuli. Importantly, this difference was present only for stimuli presented in peripersonal space, not for stimuli presented in extrapersonal space. These results extend previous behavioral findings on the functional distinction between peripersonal and extrapersonal space by providing direct evidence for the spatial dependency of the use of action simulation. Furthermore, these results strengthen the hypothesis that objects situated within the peripersonal space are mapped onto an egocentric reference frame by action simulation.

12.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest, to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90° rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.

13.
Insights into the functional nature and neuroanatomy of spatial attention have come from research in neglect patients, but to date many conflicting results have been reported. The novelty of the current study is that we used voxel-wise analyses based on information from segmented grey and white matter tissue combined with diffusion tensor imaging to decompose the neural substrates of different neglect symptoms. Allocentric neglect was associated with damage to posterior cortical regions (posterior superior temporal sulcus; angular, middle temporal, and middle occipital gyri). In contrast, egocentric neglect was associated with more anterior cortical damage (middle frontal, postcentral, supramarginal, and superior temporal gyri) and damage within subcortical structures. Damage to the intraparietal sulcus (IPS) and the temporo-parietal junction (TPJ) was associated with both forms of neglect. Importantly, we showed that both disorders were associated with white matter lesions, suggesting damage within long association and projection pathways such as the superior longitudinal, superior fronto-occipital, inferior longitudinal, and inferior fronto-occipital fasciculi, the thalamic radiation, and the corona radiata. We conclude that distinct cortical regions control attention (a) across space (using an egocentric frame of reference) and (b) within objects (using an allocentric frame of reference), while common cortical regions (TPJ, IPS) and common white matter pathways support interactions across the different cortical regions.

14.
In two experiments, we examined the effect of selective attention at encoding on repetition priming in normal aging and in Alzheimer's disease (AD) patients, for objects presented visually (experiment 1) or haptically (experiment 2). We used a repetition priming paradigm combined with a selective attention procedure at encoding. Reliable priming was found for both young adults and healthy older participants for visually presented pictures (experiment 1) as well as for haptically presented objects (experiment 2). However, this was found only for attended and not for unattended stimuli. The results suggest that, independently of the perceptual modality, repetition priming requires attention at encoding and that perceptual facilitation is maintained in normal aging. However, AD patients did not show priming for attended stimuli, or for unattended visual or haptic objects. These findings suggest an early deficit of selective attention in AD. Results are discussed from a cognitive neuroscience perspective.

15.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.

16.
When programming movement, one must account for gravitational acceleration. This is particularly important when catching a falling object because the task requires a precise estimate of time-to-contact. Knowledge of gravity’s effects is intimately linked to our definition of ‘up’ and ‘down’. Both directions can be described in an allocentric reference frame, based on visual and/or gravitational cues, or in an egocentric reference frame in which the body axis is taken as vertical. To test which frame humans use to predict gravity’s effect, we asked participants to intercept virtual balls approaching from above or below with artificially controlled acceleration that could be congruent or not with gravity. To dissociate between these frames, subjects were seated upright (trunk parallel to gravity) or lying down (body axis orthogonal to the gravitational axis). We report data in line with the use of an allocentric reference frame and discuss its relevance depending on available gravity-related cues.
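A precise time-to-contact estimate under gravity follows from solving d = v0*t + (1/2)*a*t^2 for t. The sketch below, with assumed release parameters, contrasts the gravitational solution with a first-order constant-speed estimate to show the size of the error an internal model of gravity must correct:

```python
# Worked sketch: time-to-contact (TTC) for a ball approaching from distance
# d at initial speed v0 under acceleration a. With a = g the ball is
# gravitationally accelerated; with a = 0 a constant-speed estimate is
# recovered. The distance and speed below are assumed values.
import math

G = 9.81  # m/s^2

def time_to_contact(d, v0, a):
    """Smallest positive root of 0.5*a*t^2 + v0*t - d = 0."""
    if a == 0:
        return d / v0
    disc = v0 * v0 + 2 * a * d
    return (-v0 + math.sqrt(disc)) / a

d, v0 = 1.5, 2.0                      # metres, metres/second (assumed)
ttc_gravity = time_to_contact(d, v0, G)
ttc_naive = time_to_contact(d, v0, 0.0)
print(f"with gravity: {ttc_gravity:.3f} s, constant speed: {ttc_naive:.3f} s")
# -> roughly 0.386 s vs 0.750 s: ignoring gravity nearly doubles the estimate
```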

17.
Recent behavioural and event-related potential (ERP) studies reported cross-modal links in spatial attention between vision, audition and touch. Such links could reflect differences in hemispheric-activation levels associated with spatial attention to one side, or more abstract spatial reference frames mediating selectivity across modalities. To distinguish these hypotheses, ERPs were recorded to lateral tactile stimuli, plus visual (experiment 1) or auditory stimuli (experiment 2), while participants attended to the left or right hand to detect infrequent tactile targets, and ignored other modalities. In separate blocks, hands were either in a crossed or uncrossed posture. With uncrossed hands, visual stimuli on the tactually attended side elicited enhanced N1 and P2 components at occipital sites, and an enhanced negativity at midline electrodes, reflecting cross-modal links in spatial attention from touch to vision. Auditory stimuli at tactually attended locations elicited an enhanced negativity overlapping with the N1 component, reflecting cross-modal links from touch to audition. An analogous pattern of results arose for crossed hands, with tactile attention enhancing auditory or visual responses on the side where the attended hand now lay (i.e. in the opposite visual or auditory hemifield to that enhanced by attending the same hand when uncrossed). This suggests that cross-modal attentional links are not determined by hemispheric projections, but by common external locations. Unexpectedly, somatosensory ERPs were strongly affected by hand posture in both experiments, with attentional effects delayed and smaller for crossed hands. This may reflect the combined influence of anatomical and external spatial codes within the tactile modality, while cross-modal links depend only on the latter codes.

18.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE), and advance knowledge of visual feedback was manipulated to influence the nature of reaching control, as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advance planning of a reach trajectory is increased.

19.
Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended to visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus than when attending to an auditory stimulus. The opposite was true in the later visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or in the same region of space.
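A "simple parameterization" of this kind can be written as a small linear model: the response to an ignored stimulus is a baseline plus additive terms for same-modality attention, same-location attention, and their interaction. The regressor names and toy values below are assumptions for illustration, not the published model:

```python
# Hedged sketch: least-squares fit of an additive attention model to toy
# responses. Columns: intercept, attention in the same modality, attention
# on the same side of space, and their interaction (assumed design).
import numpy as np

X = np.array([
    [1, 0, 0, 0],   # attend other modality, other side
    [1, 1, 0, 0],   # attend same modality, other side
    [1, 0, 1, 0],   # attend other modality, same side
    [1, 1, 1, 1],   # attend same modality, same side
], dtype=float)
y = np.array([1.0, 0.6, 1.2, 0.9])   # toy % signal change to the ignored stimulus

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline", "same-modality attention",
                    "same-side attention", "interaction"], beta):
    print(f"{name:>24s}: {b:+.2f}")
```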
