Similar Articles (20 results)
1.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.
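A minimal sketch of how such a priming effect could be quantified, assuming per-trial data in a hypothetical file rt_data.csv; the column names and file are illustrative, not from the study:

    # Sketch: estimate location-priming per reference-frame condition.
    # Hypothetical columns:
    #   condition -> "retinotopic" or "body_centred"
    #   repeated  -> True if the target location repeated across trials
    #   rt_ms     -> search time in milliseconds
    import pandas as pd

    df = pd.read_csv("rt_data.csv")
    means = df.groupby(["condition", "repeated"])["rt_ms"].mean().unstack()

    # Priming effect: how much faster search is at a repeated location.
    # A reliable effect for body_centred but not retinotopic would mirror
    # the pattern reported above.
    priming = means[False] - means[True]
    print(priming)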

2.
This research is about the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is firstly encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing where reference frames play a primary role and selectively interact with subsequent processing of spatial relations.

3.
When programming movement, one must account for gravitational acceleration. This is particularly important when catching a falling object because the task requires a precise estimate of time-to-contact. Knowledge of gravity’s effects is intimately linked to our definition of ‘up’ and ‘down’. Both directions can be described in an allocentric reference frame, based on visual and/or gravitational cues, or in an egocentric reference frame in which the body axis is taken as vertical. To test which frame humans use to predict gravity’s effect, we asked participants to intercept virtual balls approaching from above or below with artificially controlled acceleration that could be congruent or not with gravity. To dissociate between these frames, subjects were seated upright (trunk parallel to gravity) or lying down (body axis orthogonal to the gravitational axis). We report data in line with the use of an allocentric reference frame and discuss its relevance depending on available gravity-related cues.
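Since interception hinges on time-to-contact under constant acceleration, the prediction reduces to solving d = v0·t + ½·a·t² for t. A sketch with illustrative numbers (not the study's parameters):

    # Time-to-contact for an object approaching over distance d with
    # initial speed v0 and constant acceleration a (smallest positive
    # root of 0.5*a*t^2 + v0*t - d = 0). Values are illustrative.
    import math

    def time_to_contact(d, v0, a):
        if a == 0:
            return d / v0
        disc = v0 ** 2 + 2 * a * d
        if disc < 0:
            raise ValueError("the object never covers the distance")
        return (-v0 + math.sqrt(disc)) / a

    d, v0, g = 1.5, 2.0, 9.81           # m, m/s, m/s^2
    print(time_to_contact(d, v0, g))    # accelerated, congruent with 1g
    print(time_to_contact(d, v0, 0.0))  # constant speed (0g condition)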

4.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to the other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in a self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged the laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge the laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

5.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

6.
Three experiments are described that investigate 4.5‐month‐old infants' spatial thinking during passive movement using a task that required no manual or visual search. In these experiments, infants habituated to a display located near one corner of a table. Before the test trial the infants were either moved to the opposite side of the table or they remained in the same position that they held during the habituation trials. Also, between the habituation trials and the test trial, the display was either surreptitiously moved to the diagonally opposite position on the table, or the display remained stationary. The results showed that infants generally dishabituated when the actual (allocentric/objective) location of the display was changed between habituation and test. However, in Experiment 3, in which infants had reduced experience moving around the testing chamber, infants dishabituated to a change in their egocentric spatial relationship to the display. The results of this experiment suggest that experience moving around the testing chamber was a prerequisite for such location constancy. Taken together, the findings presented here indicate that with enough experience, young infants become aware of key spatial relationships in their environment during passive movement. © 2010 Wiley Periodicals, Inc. Dev Psychobiol 53: 23–36, 2011.

7.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a bi-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric) or the sequence of movements (egocentric) or both, and retested the subjects on their ability to transfer the sequence knowledge they had acquired in training, and also determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

8.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advanced knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.

9.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast, the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.
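Variable-error analyses of this kind are commonly implemented as an eigendecomposition of the 3-D endpoint covariance, with the error ellipsoid's long axis compared against candidate egocentric and allocentric axes. A sketch on simulated endpoints (all values invented):

    # Principal axes of the pointing-error ellipsoid for one target.
    import numpy as np

    rng = np.random.default_rng(0)
    endpoints = rng.normal([0.30, 0.10, 0.25],    # simulated 3-D endpoints (m)
                           [0.010, 0.020, 0.005], (40, 3))

    cov = np.cov(endpoints - endpoints.mean(axis=0), rowvar=False)
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues

    # The eigenvector paired with the largest eigenvalue is the long axis
    # of the error ellipsoid; testing whether it aligns with egocentric
    # axes (e.g., the sight line) or allocentric ones (e.g., the virtual
    # line) is the kind of comparison such an analysis relies on.
    print(np.sqrt(evals), evecs[:, -1])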

10.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and an egocentric reference frame. To this end, a haptic parallelity task served as a baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.

11.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to see whether allocentric reference frame‐related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley randomly in each trial. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.
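The place/response dissociation works because, from opposite start alleys, reaching the same arm requires opposite body turns. A toy illustration of how choices can be scored (the maze geometry is the standard one; the trials are invented):

    # Arm reached by each turn, per start alley, in a cross maze.
    TURN_TO_ARM = {
        "south": {"left": "west", "right": "east"},
        "north": {"left": "east", "right": "west"},
    }

    trials = [("south", "left"), ("north", "right"), ("south", "left")]
    arms = [TURN_TO_ARM[start][turn] for start, turn in trials]
    turns = [turn for _, turn in trials]

    # Consistent arms across mixed start alleys -> allocentric place
    # coding; consistent turns instead -> egocentric response coding.
    print("arms:", arms)    # ['west', 'west', 'west']
    print("turns:", turns)  # ['left', 'right', 'left']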

12.
The present study investigated the brain dynamics accompanying spatial navigation based on distinct reference frames. Participants preferentially using an allocentric or an egocentric reference frame navigated through virtual tunnels and reported their homing direction at the end of each trial based on their spatial representation of the passage. Task-related electroencephalographic (EEG) dynamics were analyzed based on independent component analysis (ICA) and subsequent clustering of independent components. Parietal alpha desynchronization during encoding of spatial information predicted homing performance for participants using an egocentric reference frame. In contrast, retrosplenial and occipital alpha desynchronization during retrieval covaried with homing performance of participants using an allocentric reference frame. These results support the assumption of distinct neural networks underlying the computation of distinct reference frames and reveal a direct relationship of alpha modulation in parietal and retrosplenial areas with encoding and retrieval of spatial information for homing behavior.
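A hedged sketch of this kind of ICA pipeline in MNE-Python (assuming a recent version); the file name, filter band, and component count are assumptions, not the authors' settings:

    import mne
    from mne.preprocessing import ICA

    raw = mne.io.read_raw_fif("tunnel_task_raw.fif", preload=True)
    raw.filter(1.0, 40.0)                 # band-pass before decomposition

    ica = ICA(n_components=20, method="infomax", random_state=42)
    ica.fit(raw)                          # independent components

    # Component time courses; in the study, components were clustered
    # across participants, and alpha-band (~8-13 Hz) desynchronisation of
    # parietal/retrosplenial clusters was related to homing performance.
    sources = ica.get_sources(raw)
    alpha = sources.compute_psd(fmin=8.0, fmax=13.0)
    print(alpha.get_data().mean(axis=1))  # mean alpha power per component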

13.
Ten male infants, 3–4 months old, and 10 male infants, 6–7 months old, were habituated to a visual stimulus composed of both specific featural and structural information. After habituation, the magnitude of orienting (dishabituation) to changes in featural information with structure controlled, versus changes in structural information with features controlled, was used to measure the infants' processing capacity. Results indicate that both younger and older infants habituated over the habituation trials, but that younger and older infants differed significantly in dishabituation to changes in structure and feature information. The present findings support the hypothesis that feature and structure information are both independently important to visual processing in the human infant.

14.
The spatial location of an object can be represented in the brain with respect to different classes of reference frames, either relative to or independent of the subject's position. We used functional magnetic resonance imaging to identify regions of the healthy human brain subserving mainly egocentric or allocentric (object-based) coordinates by asking subjects to judge the location of a visual stimulus with respect to either their body or an object. A color-judgement task, matched for stimuli, difficulty, motor and oculomotor responses, was used as a control. We identified a bilateral, though mainly right-hemisphere based, fronto-parietal network involved in egocentric processing. A subset of these regions, including a much less extensive unilateral, right fronto-parietal network, was found to be active during object-based processing. The right-hemisphere lateralization and the partial superposition of the egocentric and the object-based networks is discussed in the light of neuropsychological findings in brain-damaged patients with unilateral spatial neglect and of neurophysiological studies in the monkey.

15.
We required healthy subjects to recognize visually presented images of their own or others’ hands in an egocentric or allocentric perspective. Both right- and left-handers were faster in recognizing their dominant hand in the egocentric perspective and others’ non-dominant hand in the allocentric perspective. These findings demonstrated that body-specific information contributes to the sense of ownership, and that the “peri-dominant-hand space” is the preferred reference frame for distinguishing self from not-self body parts.

16.
What humans haptically perceive as parallel is often far from physically parallel. These deviations from parallelity are highly significant and very systematic. There exists accumulating evidence, both psychophysical and neurophysiological, that what is haptically parallel is decided in a frame of reference intermediate between an allocentric and an egocentric one. The central question here concerns the nature of the egocentric frame of reference. In the literature, various kinds of egocentric reference frames are mentioned for haptic spatial tasks, such as hand-centered, arm-centered, and body-centered frames of reference. Thus far, it has not been possible to distinguish between body-centered, arm-centered, and hand-centered reference frames in our experiments, as hand and arm orientation always covaried with distance from the body-midline. In the current set of experiments the influence of body-centered and hand-centered reference frames could be dissociated. Subjects were asked to make a test bar haptically parallel to a reference bar in five different conditions, in which their hands were oriented straight ahead, rotated to the left, rotated to the right, rotated outward or rotated inward. If the reference frame is body-centered, the deviations should be independent of condition. If, on the other hand, the reference frame is hand-centered, the deviations should vary with condition. The results show that deviation size varies strongly with condition, exactly in the way predicted by the influence of a hand-centered egocentric frame of reference. Interestingly, this implies that subjects do not sufficiently take into account the orientation of their hands.
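The logic of the prediction can be written as a toy weighted-frame model, in the spirit of the intermediate-frame account mentioned above: the test-bar setting mixes the allocentric reference orientation with a hand-centred version of it, so deviations track hand orientation only if the hand-centred weight is non-zero. All numbers are illustrative:

    def predicted_setting(reference_deg, ref_hand_deg, test_hand_deg, k=0.3):
        # k is the weight of the hand-centred frame: k = 0 reproduces a
        # purely body-centred account (deviations independent of hand
        # orientation); k > 0 makes deviations grow with the difference
        # between the two hand orientations, as observed.
        return reference_deg + k * (test_hand_deg - ref_hand_deg)

    for test_hand_deg in (-30, 0, 30):  # hand rotated left/straight/right
        print(test_hand_deg, predicted_setting(45, 0, test_hand_deg))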

17.
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-dependent paradigms is unknown. In nine subjects we compared precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) Modulating variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that variability modulation is restricted to vision-dependent alignments. 2) The estimate of the body-longitudinal axis reflects the roll-dependent variability of perceived earth-vertical. Gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision significantly decreased with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.
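A sketch of the precision measure and the correlation test on toy data; the generative assumption (variability growing with roll angle) is ours, for illustration only:

    import numpy as np

    rng = np.random.default_rng(2)
    rolls = np.array([0, 45, 75])        # head-roll angles (deg)

    def trial_to_trial_sd(roll, n=30):
        # toy generator: adjustment-error SD grows with head roll
        return rng.normal(0, 2.0 + 0.05 * roll, n).std(ddof=1)

    grav = np.array([trial_to_trial_sd(r) for r in rolls])  # gravicentric
    ego = np.array([trial_to_trial_sd(r) for r in rolls])   # egocentric

    # Parallel increases with roll, and correlated variabilities across
    # the two tasks, are the signature of a shared otolith-driven source.
    print(grav, ego, np.corrcoef(grav, ego)[0, 1])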

18.
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angle ranged between -120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
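The regression contrast at the heart of this analysis can be sketched as follows (data simulated to show the reported pattern; in the study both predictors were measured):

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(1)
    n = 60
    distortion = rng.uniform(-40, 40, n)   # deg, subjective allocentric error
    rotation = rng.uniform(-240, 240, n)   # deg, intervening body rotation
    # simulate the reported pattern: errors track distortion, not rotation
    errors = 0.9 * distortion + rng.normal(0, 5, n)

    for name, x in (("distortion", distortion), ("rotation", rotation)):
        fit = linregress(x, errors)
        print(f"{name}: slope={fit.slope:.2f}, r^2={fit.rvalue ** 2:.2f}")

    # A steep, high-r^2 fit for distortion and a flat one for rotation is
    # the pattern that argues for allocentric (gravity-based) coding.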

19.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.
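"Flattening" of the rotation curve can be quantified as a reduced slope of response time against stimulus rotation angle; a sketch on invented numbers:

    import numpy as np

    angles = np.array([0.0, 60.0, 120.0, 180.0])            # rotation (deg)
    rt_individual = np.array([650.0, 720.0, 810.0, 900.0])  # ms, invented
    rt_joint = np.array([660.0, 700.0, 745.0, 790.0])       # ms, invented

    slope_ind = np.polyfit(angles, rt_individual, 1)[0]
    slope_joint = np.polyfit(angles, rt_joint, 1)[0]

    # ms of extra response time per degree of rotation; a flatter joint
    # curve is what a shift toward an allocentric frame predicts.
    print(f"individual: {slope_ind:.2f} ms/deg  joint: {slope_joint:.2f} ms/deg")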

20.
Analogously to the visual system, somatosensory processing may be segregated into two streams, with the body constituting either part of the action system or a perceptual object. Experimental studies with participants free from neurological disease that test this hypothesis are rare, however. The present study explored the contributions of the two putative streams to a task that requires participants to estimate the spatial properties of their own body. Two manipulations from the visuospatial literature were included. First, participants were required to point either backward towards pre-defined landmarks on their own body (egocentric reference frame) or to a forward projection of their own body (allocentric representation). Second, a manipulation of movement mode was included, requiring participants to perform pointing movements either immediately, or after a fixed delay, following instruction. Results show that accessing an allocentric representation of one’s own body results in performance changes. Specifically, the spatial bias shown to exist for body space when pointing backward at one’s own body disappears when participants are requested to mentally project their body to a pre-defined location in front space. Conversely, delayed execution of pointing movements does not result in performance changes. Altogether, these findings provide support for a constrained dual stream hypothesis of somatosensory processing and are the first to show similarities in the processing of body space and peripersonal space.
