Similar Articles
20 similar articles found (search time: 31 ms)
1.
Determining the handedness of visually presented stimuli is thought to involve two separate stages--a rapid, implicit recognition of laterality followed by a confirmatory mental rotation of the matching hand. In two studies, we explore the role of the dominant and non-dominant hands in this process. In Experiment 1, participants judged stimulus laterality with either their left or right hand held behind their back or with both hands resting in the lap. The variation in reaction times across these conditions reveals that both hands play a role in hand laterality judgments, with the hand which is not involved in the mental rotation stage causing some interference, slowing down mental rotations and making them more accurate. While this interference occurs for both lateralities in right-handed people, it occurs for the dominant hand only in left-handers. This is likely due to left-handers' greater reliance on the initial, visual recognition stage than on the later, mental rotation stage, particularly when judging hands from the non-dominant laterality. Participants' own judgments of whether the stimuli were 'self' and 'other' hands in Experiment 2 suggest a difference in strategy for hands seen from an egocentric and allocentric perspective, with a combined visuo-sensorimotor strategy for the former and a visual-only strategy for the latter. This result is discussed with reference to recent brain imaging research showing that the extrastriate body area distinguishes between bodies and body parts in egocentric and allocentric perspective.

2.
Convergent findings demonstrate that numbers can be represented according to a spatially oriented mental number line. However, it is not established whether a default organization of the mental number line exists (i.e., a left-to-right orientation) or whether its spatial arrangement is only the epiphenomenon of specific task requirements. To address this issue we performed two experiments in which subjects were required to judge laterality of hand stimuli preceded by small, medium or large numerical cues; hand stimuli were compatible with egocentric or allocentric perspectives. We found evidence of a left-to-right number–hand association in processing stimuli compatible with an egocentric perspective, whereas the reverse mapping was found with hands compatible with an allocentric perspective. These findings demonstrate that the basic left-to-right arrangement of the mental number line is defined with respect to the body-centred egocentric reference frame.
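To make the compatibility logic concrete, here is a minimal Python sketch (not the authors' analysis code): a hypothetical `compatible()` helper assigns small cues to the left and large cues to the right under the egocentric mapping, reverses the mapping for allocentric stimuli, and toy trial records are then split into compatible and incompatible sets for a mean reaction-time comparison.

```python
import statistics

def compatible(cue_magnitude, hand, perspective):
    """Small cues map to 'left' and large cues to 'right' in the egocentric
    frame; the mapping is assumed to reverse for allocentric stimuli."""
    if cue_magnitude == "small":
        side = "left"
    elif cue_magnitude == "large":
        side = "right"
    else:
        return None  # medium cues carry no spatial bias
    if perspective == "allocentric":
        side = "right" if side == "left" else "left"
    return side == hand

# Toy trial records (cue magnitude, stimulus hand, perspective, RT in ms);
# the values are illustrative, not data from the study.
trials = [
    ("small", "left", "egocentric", 540), ("small", "right", "egocentric", 585),
    ("large", "right", "egocentric", 550), ("large", "left", "egocentric", 600),
    ("small", "right", "allocentric", 555), ("large", "left", "allocentric", 560),
]

for label, flag in (("compatible", True), ("incompatible", False)):
    rts = [rt for cue, hand, persp, rt in trials if compatible(cue, hand, persp) is flag]
    print(label, round(statistics.mean(rts), 1), "ms")
```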

3.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.
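As an illustration of how the two egocentric frames come apart (a hypothetical sketch, not the authors' code), the snippet below expresses a repeated target location relative to the fixation point (retinotopic) and relative to the body midline (body-centred): when fixation shifts between prime and probe, the body-centred coordinate is preserved while the retinotopic coordinate changes.

```python
def retinotopic(target_x, fixation_x):
    """Target position expressed relative to the fixation point."""
    return target_x - fixation_x

def body_centred(target_x):
    """Target position expressed relative to the body midline (at 0 deg)."""
    return target_x

# Body-centred condition: the repeated target keeps its display position,
# but fixation shifts, so only the retinotopic coordinate changes.
prime_trial = {"target": 6.0, "fixation": 0.0}   # degrees of visual angle
probe_trial = {"target": 6.0, "fixation": 4.0}

for name, trial in (("prime", prime_trial), ("probe", probe_trial)):
    print(name,
          "| body-centred:", body_centred(trial["target"]),
          "| retinotopic:", retinotopic(trial["target"], trial["fixation"]))
# Priming that survives the fixation shift points to the body-centred frame.
```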

4.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.

5.
We investigated brain activity associated with recognition of appropriate action selection based on allocentric perspectives using functional magnetic resonance imaging. The participants observed video clips in which one person (responder) passed one of three objects after a request by a second person (requester). The requester was unable to see one of the three objects because it was occluded by another object. Participants were asked to judge the appropriateness of the responder's action selection based on the visual information from the requester's perspective (i.e., allocentric perspective), not the responder's perspective (i.e., egocentric perspective). The experimental factors included the congruency of request interpretation and the appropriateness of action selection. The results showed that brain regions including the right temporo-parieto-occipital (TPO) junction and the left inferior parietal lobule (IPL) were more activated when the interpretation of the requested object differed between the egocentric and allocentric perspectives than when it was the same (the effect of incongruency for consistency). On the other hand, greater activation was found in the right dorsolateral prefrontal cortex (DLPFC) when the incongruency effect was compared only between the conditions of appropriate action selection (the interaction effect). These results suggest that both the TPO junction and IPL are involved in obtaining visual information from the allocentric perspective when visual information based on only the egocentric perspective is insufficient to interpret another person's request. The right DLPFC likely supports this process by overriding interference from action selection based on the egocentric perspective.

6.
This study addressed the role of the medial temporal lobe regions and, more specifically, the contribution of the human hippocampus in memory for body-centered (egocentric) and environment-centered (allocentric) spatial location. Twenty-one patients with unilateral atrophy of the hippocampus secondary to long-standing epilepsy (left, n = 7; right, n = 14) and 15 normal control participants underwent 3 tasks measuring recall of egocentric or allocentric spatial location. Patients with left hippocampal sclerosis were consistently impaired in the allocentric conditions of all 3 tasks but not in the egocentric conditions. Patients with right hippocampal sclerosis were impaired to a lesser extent and in only 2 of the 3 tasks. It was concluded that hippocampal structures are crucial for allocentric, but not egocentric, spatial memory.

7.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advanced knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.
Matthew Heath

8.
In the present study, we demonstrated that observation of hand rotation had specific facilitation effects on a classical motor imagery task, the hand-laterality judgement. In Experiment 1, we found that action observation improved subjects’ performance on the hand laterality but not on the letter rotation task (stimulus specificity). In Experiment 2, we demonstrated that this facilitation was not due to mere observation of a moving hand, because it was triggered by observation of manual rotation but not of manual prehension movements (motion specificity). In Experiment 3, this stimulus- and motion-specific effect was found to be right hand-specific, compatible with left-hemispheric specialization in motor imagery but not in action observation. These data provided direct support to the idea that different simulation states, such as action observation and motor imagery, share some common mechanisms but also show specific functional differences.

9.
Several studies showed that mental rotation of body parts is interfered with by manipulation of the subjects’ posture. However, the experimental manipulations in such studies, e.g., to hold one arm flexed on one’s own chest, activated not only proprioceptive but also self-tactile information. Here, we tested the hypothesis that the combination of self-touch and proprioception is more effective than proprioception alone in interfering with motor imagery. In Experiment 1, right- and left-handers were required to perform the hand laterality task, while holding one arm (right or left) flexed with the hand in direct contact with their chest (self-touch condition, STC) or with the hand placed on a smooth wooden surface in correspondence with their chest (no self-touch condition, NoSTC); in a third neutral condition, subjects kept both arms extended (neutral posture condition, NPC). Right-handers were slower when judging hand laterality in STC with respect to NoSTC and NPC, particularly when the sensory manipulation involved their dominant arm. No posture-related effect was observed in left-handers. In Experiment 2, by applying the same sensory manipulations as above to both arms, we verified that previous results were not due to a conflict between perceived position of the two hands. These data highlighted a complex interaction between body schema and motor imagery, and underlined the role of hand dominance in shaping such interaction.

10.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.
Robert B. Post

11.
Thirty patients who had undergone either a right or left unilateral temporal lobectomy (14 RTL; 16 LTL) and 16 control participants were tested on a computerized human analogue of the Morris Water Maze. The procedure was designed to compare allocentric and egocentric spatial memory. In the allocentric condition, participants searched for a target location on the screen, guided by object cues. Between trials, participants had to walk around the screen, which disrupted egocentric memory representation. In the egocentric condition, participants remained in the same position, but the object cues were shifted between searches to prevent them from using allocentric memory. Only the RTL group was impaired on the allocentric condition, and neither the LTL nor RTL group was impaired on additional tests of spatial working memory or spatial manipulation. The results support the notion that the right anterior temporal lobe stores long-term allocentric spatial memories.

12.
This research examined the role of categorical and coordinate spatial relations and allocentric and egocentric frames of reference in processing spatial information. To this end, we asked whether spatial information is first encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or with respect to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other one after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if categorical and coordinate dimensions are primary, then a facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame rather than the type of spatial relation was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing where reference frames play a primary role and selectively interact with subsequent processing of spatial relations.
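The four judgment types can be made explicit with a small sketch (hypothetical positions and reference values, not taken from the study): "same side" is a sign comparison and "same distance" an absolute-distance comparison, each computed against either the centre of the horizontal bar (allocentric) or the body midline (egocentric).

```python
def categorical_same_side(x1, x2, reference):
    """'Same side' judgement: both bars fall on the same side of the reference."""
    return (x1 - reference) * (x2 - reference) > 0

def coordinate_same_distance(x1, x2, reference, tolerance=0.01):
    """'Same distance' judgement: both bars are equally far from the reference."""
    return abs(abs(x1 - reference) - abs(x2 - reference)) < tolerance

bar_centre, body_midline = 2.0, 0.0   # assumed allocentric and egocentric references
left_bar, right_bar = 1.0, 3.0        # positions of the two vertical bars

for frame, ref in (("allocentric", bar_centre), ("egocentric", body_midline)):
    print(frame,
          "| same side:", categorical_same_side(left_bar, right_bar, ref),
          "| same distance:", coordinate_same_distance(left_bar, right_bar, ref))
```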

13.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp), and the rest were returned to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90° rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.
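The geometry of the posture manipulation can be sketched as follows (a hypothetical illustration for the group rotated from the right-side-down to the upright position; the angles and helper names are assumptions, not from the paper): the retinal orientation of the display is its environmental orientation minus the body's roll-plane tilt, so a display that is unchanged in the environment changes by 90° on the retina after the posture change, and vice versa.

```python
def retinal_orientation(env_deg, body_roll_deg):
    """Orientation of the display on the retina, given the body's roll tilt."""
    return (env_deg - body_roll_deg) % 360

def unsigned_change(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)

habituation_env, habituation_roll = 0, 90   # habituated lying right-side down
test_roll = 0                               # this group is tested upright
tests = {"environmentally identical display": 0,
         "display rotated 90 deg with the body": -90}

hab_retinal = retinal_orientation(habituation_env, habituation_roll)
for name, env in tests.items():
    retinal = retinal_orientation(env, test_roll)
    print(name,
          "| environmental change:", unsigned_change(env, habituation_env), "deg",
          "| retinal change:", unsigned_change(retinal, hab_retinal), "deg")
# Longer looking at the retinally changed display would indicate reliance
# on the egocentric (retinal) reference frame.
```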

14.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.
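A minimal sketch of the two coding schemes described above (hypothetical coordinates, not from the experiment): the same target can be stored as a vector from the body (egocentric) or as an offset from a stable visual landmark (allocentric), and either code suffices to recover the pointing goal when its inputs are intact.

```python
import numpy as np

body_position = np.array([0.0, 0.0])        # origin of the egocentric frame
landmark = np.array([25.0, 10.0])           # a stable feature of the visual scene
target = np.array([30.0, 40.0])             # positions in cm, arbitrary workspace

egocentric_code = target - body_position    # usable even on a homogeneous background
allocentric_code = target - landmark        # only available with a structured scene

print("recovered from egocentric code: ", body_position + egocentric_code)
print("recovered from allocentric code:", landmark + allocentric_code)
```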

15.
Vestibular information helps to establish a reliable gravitational frame of reference and contributes to the adequate perception of the location of one’s own body in space. This information is likely to be required in spatial cognitive tasks. Indeed, previous studies suggest that the processing of vestibular information is involved in mental transformation tasks in healthy participants. In this study, we investigate whether patients with bilateral or unilateral vestibular loss show impaired ability to mentally transform images of bodies and body parts compared to a healthy, age-matched control group. An egocentric and an object-based mental transformation task were used. Moreover, spatial perception was assessed using a computerized version of the subjective visual vertical and the rod and frame test. Participants with bilateral vestibular loss showed impaired performance in mental transformation, especially in egocentric mental transformation, compared to participants with unilateral vestibular lesions and the control group. Performance of participants with unilateral vestibular lesions and of the control group was comparable, and no differences were found between right- and left-sided labyrinthectomized patients. A control task showed no differences between the three groups. The findings from this study substantiate that central vestibular processes are involved in imagined spatial body transformations; but interestingly, only participants with bilateral vestibular loss are affected, whereas unilateral vestibular loss does not lead to a decline in spatial imagery.

16.
Analogously to the visual system, somatosensory processing may be segregated into two streams, with the body constituting either part of the action system or a perceptual object. Experimental studies that test this hypothesis in participants free from neurological disease are rare, however. The present study explored the contributions of the two putative streams to a task that requires participants to estimate the spatial properties of their own body. Two manipulations from the visuospatial literature were included. First, participants were required to point either backward towards pre-defined landmarks on their own body (egocentric reference frame) or to a forward projection of their own body (allocentric representation). Second, a manipulation of movement mode was included, requiring participants to perform pointing movements either immediately, or after a fixed delay, following instruction. Results show that accessing an allocentric representation of one’s own body results in performance changes. Specifically, the spatial bias shown to exist for body space when pointing backward at one’s own body disappears when participants are requested to mentally project their body to a pre-defined location in front space. Conversely, delayed execution of pointing movements does not result in performance changes. Altogether, these findings provide support for a constrained dual stream hypothesis of somatosensory processing and are the first to show similarities in the processing of body space and peripersonal space.

17.
The investigation of brain areas involved in the human execution/observation matching system (EOM) has been limited to restricted motor actions when using common neuroimaging techniques such as functional magnetic resonance imaging (fMRI). A method which overcomes this limitation is functional near-infrared spectroscopy (fNIRS). In the present study, we explored the cerebral responses underlying action execution and observation during a complex everyday task. We measured brain activation of 39 participants during the performance of object-related reaching, grasping and displacing movements, namely setting and clearing a table, and observation of the same task from different perspectives. Observation of the table-setting task activated parts of a network matching those activated during execution of the task. Specifically, observation from an egocentric perspective led to a higher activation in the inferior parietal cortex than observation from an allocentric perspective, indicating that the viewpoint also influences the EOM during the observation of complex everyday tasks. Together these findings suggest that fNIRS is able to overcome the restrictions of common imaging methods by investigating the EOM with a naturalistic task.

18.
The spatial location of an object can be represented in the brain with respect to different classes of reference frames, either relative to or independent of the subject's position. We used functional magnetic resonance imaging to identify regions of the healthy human brain subserving mainly egocentric or allocentric (object-based) coordinates by asking subjects to judge the location of a visual stimulus with respect to either their body or an object. A color-judgement task, matched for stimuli, difficulty, motor and oculomotor responses, was used as a control. We identified a bilateral, though mainly right-hemisphere based, fronto-parietal network involved in egocentric processing. A subset of these regions, including a much less extensive unilateral, right fronto-parietal network, was found to be active during object-based processing. The right-hemisphere lateralization and the partial superposition of the egocentric and the object-based networks are discussed in the light of neuropsychological findings in brain-damaged patients with unilateral spatial neglect and of neurophysiological studies in the monkey.

19.
Various studies on the hand laterality judgment task, using complex sets of stimuli, have shown that the judgments during this task are dependent on bodily constraints. More specifically, these studies showed that reaction times are dependent on the participant’s posture or differ for hand pictures rotated away from or toward the mid-sagittal plane (i.e., lateral or medial rotation, respectively). These findings point to the use of a cognitive embodied process referred to as motor imagery. We hypothesize that the number of axes of rotation of the displayed stimuli during the task is a critical factor for showing engagement in a mental rotation task, with an increased number of rotational axes leading to a facilitation of motor imagery. To test this hypothesis, we used a hand laterality judgment paradigm in which we manipulated the difficulty of the task via the manipulation of the number of rotational axes of the shown stimuli. Our results showed an increased influence of bodily constraints with an increasing number of axes of rotation. More specifically, for the stimulus set containing stimuli rotated over a single axis, no influence of biomechanical constraints was present. The stimulus sets containing stimuli rotated over more than one axis of rotation did induce the use of motor imagery, as a clear influence of bodily constraints on the reaction times was found. These findings extend and refine previous findings on motor imagery, as our results show that engagement in motor imagery critically depends on the number of rotational axes used in the stimulus set.
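To illustrate what varying the number of rotational axes does to a stimulus set (a hypothetical construction, not the authors' stimulus-generation code), the sketch below builds picture-plane rotations about a single axis versus combined rotations about three orthogonal axes; the multi-axis set spans many more distinct 3-D orientations for the same set of angles.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

angles = np.deg2rad([0, 90, 180, 270])

# One axis: picture-plane rotations only.
single_axis = [rot_z(a) for a in angles]

# Three axes: every combination of picture-plane, vertical and horizontal rotations.
multi_axis = [rot_z(a) @ rot_y(b) @ rot_x(c)
              for a in angles for b in angles for c in angles]

print(len(single_axis), "orientations from a single axis of rotation")
print(len(multi_axis), "candidate orientations from three axes of rotation")
```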

20.
Normally reared hamsters, but not hamsters reared on a liquid diet, demonstrated spatial memory for the location of odor cues in an allocentric task (Experiment 1). In Experiment 2, an egocentric task, liquid-reared hamsters detected a change in the spatial location of odor cues. In Experiment 3, liquid-reared hamsters detected a change in the spatial location of two visual cues under allocentric task conditions. Female hamsters on a liquid diet retrieved their pups more often than dams on solid food, resulting in reduced exploratory opportunities for their pups during the period when olfaction mediates behavior. Hamsters in Experiment 4 experienced a direct restriction of early forays. The restricted-rearing group failed to detect a change in the spatial location of odor cues in an allocentric task. These findings suggest that restriction of early exploratory experience during a narrow period of development results in specific spatial processing deficits.
