Similar Articles
 20 similar articles found (search time: 46 ms)
1.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze, so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location, the brain does not use information from the previous arm movement, such as an arm-fixed representation of the target, but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.

2.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. 
However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.
Robert B. Post

3.
Previous reports have argued that single neurons in the ventral premotor cortex of rhesus monkeys (PMv, the ventrolateral part of Brodmann's area 6) typically show spatial response fields that are independent of gaze angle. We reinvestigated this issue for PMv and also explored the adjacent prearcuate cortex (PAv, areas 12 and 45). Two rhesus monkeys were operantly conditioned to press a switch and maintain fixation on a small visual stimulus (0.2° × 0.2°) while a second visual stimulus (1° × 1° or 2° × 2°) appeared at one of several possible locations on a video screen. When the second stimulus dimmed, after an unpredictable period of 0.4–1.2 s, the monkey had to quickly release the switch to receive liquid reinforcement. By presenting stimuli at fixed screen locations and varying the location of the fixation point, we could determine whether single neurons encode stimulus location in absolute space or any other coordinate system independent of gaze. For the vast majority of neurons in both PMv (90%) and PAv (94%), the apparent response to a stimulus at a given screen location varied significantly and dramatically with gaze angle. Thus, we found little evidence for gaze-independent activity in either PMv or PAv neurons. The present result in frontal cortex resembles that in posterior parietal cortex, where both retinal image location and eye position affect responsiveness to visual stimuli.

4.
In the present study we addressed the issue of how an object is visually isolated from surrounding cues when a reaching-grasping (prehension) movement towards it is planned. Subjects were required to reach and grasp an object presented either alone or with a distractor. In five experiments, different degrees of elaboration of the distractor were induced by varying: (1) the position of the distractor (central or peripheral); (2) the time when the distractor was suppressed (immediately or delayed, with respect to stimulus presentation); and (3) the type of distractor analysis (implicit or explicit). In addition, we tested whether the possible effects of the distractor on reaching-grasping were due to the use of an allocentric reference centered on it. This was obtained by comparing the effects of the distractor with those of a stimulus, the target of a placing movement successive to the reaching-grasping. The results of the five experiments can be summarized as follows. The necessary condition for an interference effect on both the reaching and the grasping components was the central presentation of the distractor. When the information on the distractor could be immediately suppressed, an interference effect was observed only on the grasp component. In the case of delayed suppression, an effect was found on the reaching component. Finally, when an overt analysis of the distractor was required, the interference effect disappeared. Two main conclusions have been drawn from the results of the present study. First, comparison between properties of the target and surrounding cues is performed by two independent processes for reaching and grasping an object. The process for the grasp relies more on allocentric cues than that for the reach. Second, when surrounding stimuli are automatically analyzed during visual search of the target, the process of visuo-motor transformation can incorporate their features into the target. 
In contrast, overt analysis of surrounding stimuli is performed separately from that of the target. Finally, the data of the present study are discussed in support of the premotor theory of attention. Received: 31 December 1997 / Accepted: 3 June 1998

5.
Sleep supports the conversion of implicitly acquired information into explicitly available knowledge. Currently, it is unclear if awareness about the presence of regularities in the stimulus material can modulate this conversion. Forty participants were trained on a serial reaction time task (SRTT). Twenty participants were informed afterwards that there was some regularity in the underlying sequence, without giving them any specific details about this regularity (aware condition); twenty other participants were not informed (unaware condition). Ten participants in each group slept the night after training, whereas 10 remained awake. After a second night of (recovery) sleep, a generation task followed where the target positions of the trained SRTT had to be deliberately generated. Both “sleep” and “awareness” improved generation task performance, but the two factors did not interact. We conclude that whilst sleep facilitates the conversion of implicit into explicit knowledge, the effect of awareness is not specific to sleep-dependent consolidation.

6.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.

7.
The aim of this investigation was to gain further insight into control strategies used for whole body reaching tasks. Subjects were requested to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK) with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30° to the right or 20° to the left of target location. Control trials, where targets remained in place, and remembered target trials were randomly presented. We recorded movements of the hand, eye and head, while subjects stepped and reached to real or remembered target locations. Lateral, vertical and anterior–posterior (AP) hand errors, eye location deviations, and gaze direction deviations were determined relative to control trials. Final hand location errors varied by target height, lighting condition and gaze eccentricity. Lower reaches in the DARK compared to the LIGHT condition were common, and when matched with a tendency to reach above the low target, help explain more accurate reaches for this target in darkness. Anchoring the gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye locations and gaze directions, which were deemed significant predictors of final reach errors, accounting for 17–47% of final hand error variance. Results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for lateral or vertical directions. 
When combined with data regarding hand error predictions, we found that subjects' alterations in body movement in the AP direction were associated with AP adjustments in their reach, but final hand position adjustments were associated with gaze direction alterations for movements in the vertical and horizontal directions. These results support the hypothesis that gaze direction provides a control signal for hand and body movement and that this control signal is used for movement direction and not amplitude.

8.
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used 15O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which arm (left/right) used to encode target location and to reach back to the remembered location and hemispace of target location (left/right side of midsagittal plane) varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace, nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of kinesthetically guided target location and use of the same network employed to guide reaches to visual targets when reaching to kinesthetic targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.

9.
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

10.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in self-body perspective but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

11.
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.

12.
In a typical flanker task, responses to a central target (“S” or “N”) are modulated by whether the flankers are compatible (“SSSSS”) or incompatible (“NNSNN”), with increased reaction times and decreased accuracy on incompatible trials. The role of the motor system in response interference under these conditions remains unclear, however. Here we show that transcranial magnetic stimulation (TMS) of the left primary motor cortex modulates the amount of flanker interference depending on the hand used for the response. Left motor TMS delivered at 200 ms after the onset of the array increased interference from incompatible flankers (“SSNSS”) when the target response was associated with the contralateral motor response (i.e. for “N” responses with the right hand), relative to when responses were to targets using the (left) hand ipsilateral to the site of TMS. Interestingly, under identical conditions, the degree of flanker interference was reduced when the TMS pulse was applied later in time. The analyses of the TMS-induced motor evoked potentials pointed to motor activity varying in the same conditions. We discuss the implications for understanding response interference and the role of the primary motor cortex in response selection.

13.
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angle ranged between -120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.

14.
Our sense of proprioception is vital for the successful performance of most activities of daily living, and memory-based joint position matching (JPM) tasks are often utilized to quantify such proprioceptive abilities. In the present study we sought to determine if matching a remembered proprioceptive target angle was influenced significantly by the length of time given to develop a neural representation of that position. Thirteen healthy adult subjects performed active matching of passively determined elbow joint angles (amplitude = 20° or 40° extension) in the absence of vision, with either a relatively “short” (3 s) or “long” (12 s) target presentation time. In the long condition, where subjects had a greater opportunity to develop an internal representation of the target elbow joint angle, matching movements had significantly smaller variable errors and were associated with smoother matching movement trajectories of a shorter overall duration. Taken together, these findings provide an important proprioceptive corollary for previous results obtained in studies of visually-guided reaching suggesting that increased exposure to target sensory stimuli can improve the accuracy of matching performance. Further, these results appear to be of particular importance with respect to the estimation of proprioceptive function in individuals with disability, who typically have increased noise in their proprioceptive systems.

15.
Single-neuron responses in motor and premotor cortex were recorded during a movement-sequence delay task. On each trial the monkey viewed a randomly selected sequence of target lights arrayed in two-dimensional space, remembered the sequence during a delay period, and then generated a coordinated sequence of movements to the remembered targets. Of 307 neurons studied, 25% were tuned specifically for either the first or the second target, but not both. In particular, for neurons tuned during both target presentations, tuned activity related to a particular first target direction was maintained during the presentation of a second target in a different direction. During the delay period, 32% of the neurons were tuned for upcoming movement in a single direction. These delay period responses often reflected activity patterns that first developed during target presentations and may therefore act to maintain target period information during the delay. Neurons with tuned activity during both the delay and movement periods exhibited two patterns: the first exhibited tuned responses during the delay that were correlated with the tuning of first-movement responses, while the second pattern showed delay-period tuning that was better correlated with tuned responses during second movements. This indicates that, before movement, distinct neural populations are correlated with specific movements in a sequence. About half the neurons studied were not directionally tuned during the initiation, target, or delay periods, but did show systematic changes in activity during task performance. Some (34%) were exclusively tuned during movement and appear to be involved in the direct control of movement. Others (17%) showed changes in firing rate from period to period within a trial but showed no directional preference for a particular direction of movement. 
Population analyses of tuned activity during the target and delay periods indicated that accurate directional information about both first and second movements was available in the neuronal ensemble well before reaching began. These results extend the idea that both motor and premotor cortex play a role in reaching behavior other than the direct control of muscles. While some early neural responses resembled muscle activation patterns involved in maintaining fixed postures before movement, others probably relate to the sensory-to-motor transformations, information storage in short-term memory, and movement preparation required to generate accurate reaching to remembered locations in space.

16.
Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, also during movements such as that of the eye or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating, if eye movements are involved. Recent neurophysiological studies led us to infer that a transformation of visual space from retinocentric to a head-centric representation may be involved for visual objects in close proximity to the head. The first aim of this study was to investigate if there is indeed such a representation for remembered visual targets of goal-directed arm movements. Participants had to point toward an initially foveated central target after an intervening saccade. Participants made errors that reflect a bias in the visuomotor transformation that depends on eye displacement rather than any head-centred variable. The second issue addressed was if pointing toward the centre of a wide-field expanding motion pattern involves a retinal updating mechanism or a transformation to a head-centric map and if that process is distance dependent. The same pattern of pointing errors in relation to gaze displacement was found independent of depth. We conclude that for goal-directed arm movements, representation of the remembered visual targets is updated in a retinal frame, a mechanism that is actively used regardless of target distance, stimulus characteristics or the requirements of the task.

17.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.

18.
Saccadic eye movements made to remembered locations in the dark show a distinct up-shift in macaque monkey, and slight upward bias in humans (Gnadt et al. 1991). This upward bias created in the visual spatial mapping of a saccade may be translated downstream in a hand/touch movement. This error could possibly reveal (a) information about the frames of reference used in each scenario and (b) the sources of this error within the brain. This would suggest an early planning stage if they are shared, or a later stage if the errors are distinct. Methods: Eight human subjects performed touch responses to a touch screen monitor to both visual and remembered target locations. The subjects used a high-resolution touch-screen monitor, a bite bar and chin-rest for restricting head movements during responses. All target locations were 20° vectors from the central starting position in horizontal, vertical and oblique planes of motion. Results: Subjects were accurate to both visual and remembered target locations with little variance. Subject means showed no significant differences between control and memory trials; however, a distinct asymmetry was observed between cardinal and oblique planes during memory trials. Subjects consistently made errors to oblique locations during touches made to the remembered location that were not evident in control conditions. This error pattern revealed a strong hypermetric tendency for oblique planes of touches made to a remembered location.

19.
There is a significant overlap between the processes and neural substrates of spatial cognition and those subserving memory and learning. However, for procedural learning, which often is spatial in nature, we do not know how different forms of spatial knowledge, such as egocentric and allocentric frames of reference, are utilized, nor whether these frames are differentially engaged during implicit and explicit processes. To address this issue, we trained human subjects on a movement sequence presented on a bi-dimensional (2D) geometric frame. We then systematically manipulated the geometric frame (allocentric) or the sequence of movements (egocentric) or both, and retested the subjects on their ability to transfer the sequence knowledge they had acquired in training and also determined whether the subjects had learned the sequence implicitly or explicitly. None of the subjects (implicit or explicit) showed evidence of transfer when both frames of reference were changed, which suggests that spatial information is essential. Both implicit and explicit subjects transferred when the egocentric frame was maintained, indicating that this representation is common to both processes. Finally, explicit subjects were also able to benefit from the allocentric frame in transfer, which suggests that explicit procedural knowledge may have two tiers comprising egocentric and allocentric representations.

20.
Objective: To examine differences in implicit and explicit memory for positive, negative, and neutral words among college students. Methods: A word/non-word judgment task and a study–recognition paradigm were used to assess participants' implicit and explicit memory for emotional words, respectively. Results: In the implicit memory task, reaction times differed significantly across emotional word types (F=6.360, P<0.05). Follow-up analysis showed that reaction times to positive words (573.0±57.9 ms) and negative words (650.3±109.12 ms) were significantly shorter than to neutral words (671.8±101.0 ms), with no significant difference between positive and negative words. In the explicit memory task, both accuracy and reaction time differed significantly across emotional word types (F=7.353 and 15.000, P<0.05). Follow-up analysis showed that recognition accuracy for positive words (67.0%±17.9%) was significantly higher than for negative (46.3%±15.9%) and neutral words (50.3%±20.4%), while the difference between neutral and negative words was not significant. Reaction times to positive words (688.2±129.3 ms) and negative words (814.5±140.3 ms) were both shorter than to neutral words (951.8±182.0 ms), and reaction times to positive words were significantly shorter than to negative words. Conclusion: The emotional content of a stimulus affects memory, and individuals' implicit and explicit memory differ across emotional word types.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号