Similar Documents
 20 similar documents found (search time: 15 ms)
1.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the lower of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the lower flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds, and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the lower flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.
Robert B. Post

2.
The spatial location of objects is processed in egocentric and allocentric reference frames, the early temporal dynamics of which have remained relatively unexplored. Previous experiments focused on ERP components related only to egocentric navigation. Thus, we designed a virtual reality experiment to test whether allocentric reference frame-related ERP modulations can also be registered. Participants collected reward objects at the end of the west and east alleys of a cross maze, and their ERPs to the feedback objects were measured. Participants made turn choices from either the south or the north alley, randomized across trials. In this way, we were able to discern place and response coding of object location. Behavioral results indicated a strong preference for using the allocentric reference frame and a preference for choosing the rewarded place in the next trial, suggesting that participants developed probabilistic expectations between places and rewards. We also found that the amplitude of the P1 was sensitive to the allocentric place of the reward object, independent of its value. We did not find evidence for egocentric response learning. These results show that early ERPs are sensitive to the location of objects during navigation in an allocentric reference frame.

3.
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, most likely because of decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks, and whether the modulation of variability is restricted to vision-dependent paradigms, is unknown. In nine subjects we compared the precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) modulating variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that variability modulation is restricted to vision-dependent alignments. 2) The estimated body-longitudinal axis reflects the roll-dependent variability of perceived earth-vertical; gravicentric cues are thereby integrated regardless of the task's reference frame. To test these two hypotheses, the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision decreased significantly with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.
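As an illustration of the shared-mechanism argument, the sketch below (Python; invented data and names, not the authors' analysis code) computes the trial-to-trial variability of repeated alignment settings per subject and paradigm and correlates the two variabilities across subjects:

```python
import numpy as np

def alignment_variability(settings_deg):
    """Trial-to-trial variability: SD of repeated alignment settings (deg)."""
    return np.std(settings_deg, ddof=1)

rng = np.random.default_rng(3)
# One simulated noise level per subject, shared by both paradigms to mimic a
# common (otolith-driven) source of variability at a large head-roll angle.
spreads = rng.uniform(1.5, 4.0, size=9)
visual = [alignment_variability(rng.normal(0, s, 20)) for s in spreads]
haptic = [alignment_variability(rng.normal(0, s, 20)) for s in spreads]
print(np.corrcoef(visual, haptic)[0, 1])  # positive r: correlated variabilities
```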

4.
We studied grip force control when catching a free-falling object with the dominant hand. An instrumented object was dropped either from the subject’s opposite hand (self-release) or from the experimenter’s hand. Following digit-object contact, triggered responses were observed in the load and grip force profiles. The peak rates of load force increase and the peak load forces produced at the time the catching fingers made contact with the object were of similar magnitude for the experimenter- and self-release conditions. However, the peak rates of grip force development and the peak grip forces were more pronounced when the object was dropped by the experimenter. These findings suggest that the prediction of the load magnitude was less precise when the object was dropped from the experimenter’s hand. In addition, a correlation analysis between maximum grip and load force rates revealed a less precise coupling between the force rates in the experimenter-release condition. The time lags between maximum force rates and maximum forces were longer for the experimenter-release than for the self-release condition. These observations may indicate a less precise temporal coupling between grip and load force profiles in the experimenter-release condition. As observed during other manipulative tasks, the coordination between grip and load forces is a prerequisite for coping with collision forces when catching free-falling objects. Grip force control during catching is both highly automatic and flexible, depending on the predictability of the task.
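A minimal sketch of the kind of peak-rate coupling analysis described above (assumed sampling rate and synthetic force traces; variable names are illustrative, not from the study):

```python
import numpy as np

def peak_rates(grip, load, fs=400.0):
    """Peak grip- and load-force rates (N/s) for one trial."""
    dgrip = np.gradient(grip, 1.0 / fs)  # time derivative of grip force
    dload = np.gradient(load, 1.0 / fs)  # time derivative of load force
    return dgrip.max(), dload.max()

def rate_coupling(trials):
    """Pearson correlation of peak grip- vs. load-force rates across trials."""
    peaks = np.array([peak_rates(g, l) for g, l in trials])
    return np.corrcoef(peaks[:, 0], peaks[:, 1])[0, 1]

# Synthetic trials; a lower r would indicate the looser coupling reported for
# the experimenter-release condition.
rng = np.random.default_rng(0)
trials = [(np.cumsum(rng.normal(0.5, 0.1, 200)),
           np.cumsum(rng.normal(0.8, 0.1, 200))) for _ in range(20)]
print(rate_coupling(trials))
```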

5.
Pointing with an unseen hand to a visual target that disappears prior to movement requires maintaining a memory representation of the target location. The target location can either be transformed into a hand-centered frame of reference during target presentation and remembered in that form, or remembered in terms of retinal and extra-retinal cues and transformed into a body-centered frame of reference before movement initiation. The main goal of the present study was to investigate whether the target is stored in memory in an eye-centered frame, a hand-centered frame, or in both frames of reference concomitantly. The task was to locate, memorize, and point to a target in a dark environment. Hand movement was not visible. During the recall delay, participants were asked to move their hand or their eyes in order to disrupt the memory representation of the target. Movement of the eyes during the recall delay was expected to disrupt an eye-centered memory representation, whereas movement of the hand was expected to disrupt a hand-centered memory representation, in both cases by increasing movement variability to the target. Variability of movement amplitude and direction was examined. Results showed that participants were more variable on the directional component of the movement when required to move their hand during the recall delay. In contrast, moving the eyes caused an increase in variability only in the amplitude component of the pointing movement. Taken together, these results suggest that the direction of the movement is coded and remembered in a frame of reference linked to the arm, whereas the amplitude of the movement is remembered in an eye-centered frame of reference.
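The amplitude/direction decomposition used here can be made concrete with a small sketch (synthetic endpoints; a simplified 2D version, not the authors' code):

```python
import numpy as np

def amp_dir_variability(endpoints, start=(0.0, 0.0)):
    """SDs of movement amplitude (radial) and direction (angular) across trials."""
    vecs = np.asarray(endpoints) - np.asarray(start)
    amplitude = np.hypot(vecs[:, 0], vecs[:, 1])    # movement extent per trial
    direction = np.arctan2(vecs[:, 1], vecs[:, 0])  # movement angle per trial (rad)
    return amplitude.std(ddof=1), np.degrees(direction.std(ddof=1))

rng = np.random.default_rng(1)
endpoints = rng.normal([10.0, 5.0], [0.4, 0.6], size=(30, 2))  # fake pointing data
amp_sd, dir_sd = amp_dir_variability(endpoints)
print(f"amplitude SD: {amp_sd:.2f} cm, direction SD: {dir_sd:.2f} deg")
```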

6.
Different movement characteristics can be governed by different frames of reference. The present study serves to identify the frames of reference that govern intermanual interactions with respect to movement directions. Previous studies had shown that intermanual interactions are adjusted to task requirements during motor preparation: for parallel movements directional coupling becomes parallel, and for symmetric movements it becomes symmetric. The timed-response procedure allows these adjustments to be traced as they are reflected in the intermanual correlations between left-hand and right-hand directions. In the present study the adjustments remained unchanged when all target directions were rotated laterally, indicating a critical role of hand-centered frames of reference. The additional role of a body-centered frame of reference was indicated by the finding of overall higher intermanual correlations with the rotated target configurations. Intermanual interference at long preparation intervals was absent even when eccentricities in the body-centered frame of reference were different. These findings converge with results on the frames of reference that govern intermanual interactions with respect to movement amplitudes. They suggest a role of both body-centered and hand-centered frames of reference for the adjustments of intermanual interactions to task requirements, but of a hand-centered frame of reference only for the intermanual interference that remains in spite of the adjustments.

7.
Neurophysiological studies suggest that the transformation of visual signals into arm movement commands does not involve a sequential recruitment of the various reach-related regions of the cerebral cortex but a largely simultaneous activation of these areas, which form a distributed and recurrent visuomotor network. However, little is known about how the reference frames used to encode reach-related variables in a given “node” of this network vary with the time taken to generate a behavioral response. Here we show that in an instructed delay reaching task, the reference frames used to encode target location in the parietal reach region (PRR) and area 5 of the posterior parietal cortex (PPC) do not evolve dynamically in time; rather the same spatial representation exists within each area from the time target-related information is first instantiated in the network until the moment of movement execution. As previously reported, target location was encoded predominantly in eye coordinates in PRR and in both eye and hand coordinates in area 5. Thus, the different computational stages of the visuomotor transformation for reaching appear to coexist simultaneously in the parietal cortex, which may facilitate the rapid adjustment of trajectories that are a hallmark of skilled reaching behavior.

8.
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
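The core computation at issue, a hand-to-target difference vector formed after both signals are brought into one frame, can be sketched as follows. This is a deliberately simplified, translation-only illustration; the gains g_t and g_h stand in for the gain elements mentioned above and are hypothetical:

```python
import numpy as np

def movement_vector(target_body, hand_body, gaze_body, g_t=1.0, g_h=1.0):
    """Hand-to-target vector in an eye-centered frame (rotations ignored)."""
    target_eye = np.asarray(target_body) - np.asarray(gaze_body)  # target re: gaze
    hand_eye = np.asarray(hand_body) - np.asarray(gaze_body)      # hand re: gaze
    return g_t * target_eye - g_h * hand_eye

# Unequal gains yield pointing errors that vary with gaze direction and initial
# hand position -- the signature the reference-frame analysis looks for.
print(movement_vector([30.0, 10.0], [5.0, -5.0], [10.0, 0.0], g_t=0.95, g_h=1.05))
```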

9.
On the timing of reference frames for action control
This study investigated the time course and automaticity of spatial coding of visual targets for pointing movements. To provide an allocentric reference, placeholders appeared on a touch screen either 500 ms before target onset, or simultaneously with target onset, or at movement onset, or not at all (baseline). With both blocked and randomized placeholder timing, movements to the most distant targets were only facilitated when placeholders were visible before movement onset. This result suggests that allocentric target coding is most useful during movement planning and that this visuo-spatial coding mechanism is not sensitive to strategic effects.

10.
The observation of someone else’s action facilitates similar actions in the observer. Such priming effects can be driven by alignment between the observer and the observed in body-centred or spatial coordinates (or both). The separate and joint contributions of these sources of priming remain to be fully characterised. Here, we compare spatial and body priming effects across the whole-body “space” by using hand and foot responses. This allows a clearer separation of body priming from spatial priming than was available from previous studies. In addition, we demonstrate two further features of these action priming effects. First, there are general interference and facilitation effects when the layout of viewed displays matches the participant’s body (e.g. hand above the foot). These effects have not been considered in previous studies. Second, by taking these layout effects into account, we identify the facilitation and interference components of spatial and body priming effects. Both types of priming effect are observed, and facilitation and interference effects occur only when both body and spatial frames of reference are working in the same direction. These findings show that, in action perception, the behaviours of others are processed simultaneously in multiple frames of reference that have complex, interacting effects (both facilitating and interfering) on the motor system of the observer.

11.
12.
The principal goal of our study is to gain insight into the representation of peripersonal space. Two different experiments were conducted. In the first experiment, subjects were asked to represent the principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were performed with and without visual guidance, were considered the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of the ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors depended on the reference plane: errors were minimal for the sagittal plane and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that the different planes are centrally represented and are characterized by different errors when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, the sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of the reference planes was not constrained by the type of motor task. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing were strongly correlated. These results suggest that similar patterns of errors and arm posture correlation, for drawing and pointing, may be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.
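One plausible way to quantify the orientation errors reported here (a sketch under assumptions, not the authors' method) is to fit a plane to each drawn trajectory by SVD and measure the angle between the fitted and requested plane normals:

```python
import numpy as np

def plane_orientation_error(points, requested_normal):
    """Angle (deg) between the best-fit plane of a 3D trajectory and a requested plane."""
    pts = np.asarray(points) - np.mean(points, axis=0)
    _, _, vt = np.linalg.svd(pts)  # rows of vt are the principal directions
    normal = vt[-1]                # least-variance direction = fitted plane normal
    cosang = abs(normal @ requested_normal) / np.linalg.norm(requested_normal)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Synthetic "sagittal" ellipse with a small out-of-plane tilt; the sagittal
# plane's normal is the left-right (x) axis.
t = np.linspace(0.0, 2.0 * np.pi, 100)
ellipse = np.c_[0.5 * np.sin(t), 10.0 * np.cos(t), 6.0 * np.sin(t)]
print(plane_orientation_error(ellipse, np.array([1.0, 0.0, 0.0])))  # ~4.8 deg
```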

13.
We review human functional neuroimaging studies that have explicitly investigated the reference frames used in different cortical regions for representing spatial locations of objects. Beyond the general distinction between “egocentric” and “allocentric” reference frames, we provide evidence for the selective involvement of the posterior parietal cortex and associated frontal regions in the specific process of egocentric localization of visual and somatosensory stimuli with respect to relevant body parts (“body referencing”). Similarly, parahippocampal and retrosplenial regions, together with specific parietal subregions such as the precuneus, are selectively involved in a specific form of allocentric representation in which object locations are encoded relative to enduring spatial features of a familiar environment (“environmental referencing”). We also present a novel functional magnetic resonance imaging study showing that these regions are selectively activated whenever a purely perceptual spatial task involves an object that maintains a stable location in space during the whole experiment, irrespective of its perceptual features and its orienting value as a landmark. This effect can be dissociated from the consequences of an explicit memory recall of landmark locations, a process that further engages the retrosplenial cortex.

14.
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching. Received: 25 November 1998 / Accepted: 8 July 1999

15.
Previous studies have shown that the manipulation of body position in space can modulate the manifestations of visual neglect. Here, we investigated in right brain-damaged (RBD) patients the possible influence of gravitational inputs on the capability to detect tactile stimuli delivered to hands positioned in ipsilesional or contralesional space. RBD patients (with or without impairments in detecting contralesional stimuli under single and double stimulation conditions) and healthy control subjects were tested in a tactile detection task in which gravitational (upright vs. supine) and hand position (anatomical vs. crossed) variables were orthogonally varied. The postural manipulation of the entire body turned out to influence the degree of tactile detection. In particular, RBD patients with tactile deficits detected a significantly higher number of left-sided stimuli in the supine posture than in the upright posture. Moreover, crossing of the hands improved the ability of RBD patients with tactile deficits to detect stimuli delivered to their left, contralesional hand. The beneficial effect of lying supine was independent of the spatial position of the hands, suggesting that the improvement of performance dependent upon entire-body posture and that dependent upon crossing the hands may rely upon separate mechanisms.

16.
Visual information is mapped with respect to the retina within the early stages of the visual cortex. On the other hand, the brain has to achieve a representation of object location in a coordinate system that matches the reference frame used by the motor cortex to code reaching movement in space. The mechanism of the necessary coordinate transformation between the different frames of reference from the visual to the motor system as well as its localization within the cerebral cortex is still unclear. Coordinate transformation is traditionally described as a series of elementary computations along the visuomotor cortical pathways, and the motor system is thought to receive target information in a body-centered reference frame. However, neurons along these pathways have a number of similar properties and receive common input signals, suggesting that a non-retinocentric representation of object location in space might be available for sensory and motor purposes throughout the visuomotor pathway. This paper reviews recent findings showing that elementary input signals, such as retinal and eye position signals, reach the dorsal premotor cortex. We will also compare eye position effects in the premotor cortex with those described in the posterior parietal cortex. Our main thesis is that appropriate sensory input signals are distributed across the visuomotor continuum, and could potentially allow, in parallel, the emergence of multiple and task-dependent reference frames. Received: 21 September 1998 / Accepted: 19 March 1999

17.
Individuals rely on various frames of reference (FORs), such as an egocentric FOR (EFOR) and intrinsic FOR (IFOR), to represent spatial information. Previous behavioral studies have shown different IFOR-IFOR (II) and EFOR-IFOR (EI) conflict effects and an effect of their interaction. However, the neural mechanism of conflict processing between two FOR-based conflicts is unclear. In the current ERP study, two FOR-based conflicts were manipulated using a two-cannon task to elucidate common and distinct brain mechanisms that underlie FOR-based conflict processing. The behavioral results showed that both conflicts exhibited longer reaction times and larger error rates in the II (180° cannon angle) and EI (target cannon pointed down) incongruent conditions than in the II (0° cannon angle) and EI (target cannon pointed up) congruent conditions and that an interaction existed between the two conflicts. The ERP results indicated that, for both conflicts, more negative N2 amplitudes and less positive P3 amplitudes occurred in the incongruent conditions than in the congruent conditions, and the interactions between the two conflicts during later P3 amplitudes were significant. Time-frequency analysis further indicated that, in the early time window, the II conflict and the EI conflict specifically modulated power in the theta bands and beta bands, respectively. In contrast, in the later time window, both conflicts modulated power in the alpha and beta bands. In summary, our findings provide insights into the potential existence of two specific early conflict monitoring systems and a general late executive control system for FOR-based conflicts.
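A band-power computation of the general kind used in such time-frequency analyses can be sketched as below (bandpass plus Hilbert envelope; the sampling rate, band edges, and signal are illustrative assumptions, not taken from the study):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(eeg, fs, low, high, order=4):
    """Instantaneous power of a single-channel signal within [low, high] Hz."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)          # zero-phase bandpass filter
    return np.abs(hilbert(filtered)) ** 2   # squared analytic-signal envelope

fs = 500.0
t = np.arange(0.0, 1.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
theta = band_power(eeg, fs, 4, 8)    # theta band (linked above to the II conflict)
beta = band_power(eeg, fs, 13, 30)   # beta band (linked above to the EI conflict)
print(theta.mean(), beta.mean())
```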

18.
In an earlier experiment we showed that selective attention plays a critical role in rabbit eye blink conditioning (Steele-Russell et al. in Exp Brain Res 173:587–602, 2006). The present experiments examine the extent to which visual recognition processes constitute a component separate from the motor learning that is also involved in conditioning. This was achieved by midline section of the optic chiasma, which disconnected the direct retinal projections via the brainstem to the cerebellar oculomotor control system. By comparing normal and chiasma-sectioned rabbits, it was possible to determine the dependence or independence of conditioning on the motor expression of the eye blink response during training. Both normal and chiasma-sectioned animals were tested using a multiple test battery to determine the effect of this redirection of the visual input pathways on conditioning. All animals were first tested for any impairment in visual capability following section of the optic chiasma. Despite the loss of 90% of retinal ganglion cell fibres, no visual impairment of either intensity or pattern vision was seen in the chiasma animals. Nor was any difference seen in nictitating membrane (NM) conditioning to an auditory signal between normal and chiasma animals. When tested for motor learning to a visual signal, however, the chiasma rabbits showed a complete lack of any NM conditioning. Yet the sensory tests of visual conditioning showed that chiasma-sectioned animals had completely normal sensory recognition learning. These results show that NM Pavlovian conditioning involves anatomically separate and independent sensory recognition and motor output components of the learning. This research was supported by S&W research grants ID# 1810 to ISR and ID# 7985 to JAC.

19.
Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems.
Vanessa R. Simmering

20.
Previous evidence based on perceptual integration and arbitrary responses suggests extensive cross-modal links in attention across the various modalities. Attention typically shifts to a common location across the modalities, despite the vast differences in their initial coding of space. An issue that remains unclear is whether or not these effects of multisensory coding occur during more natural tasks, such as grasping and manipulating three-dimensional objects. Using kinematic measures, we found strong effects of the diameter of a grasped distractor object on the aperture used to grasp a target object at both coincident and non-coincident locations. These results suggest that interference effects can occur between proprioceptive and visuomotor signals in grasping. Unlike other interference effects in cross-modal attention, these effects do not depend on the spatial relation between target and distractor, but occur within an object-based frame of reference.
