Similar documents
20 similar documents found (search time: 125 ms)
1.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advanced knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner in which a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.
Matthew Heath
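The weighting account in the abstract above can be illustrated with a minimal cue-combination sketch. Everything here is hypothetical, not taken from the study: the names, the 4 mm illusory shift, and the weight values are illustrative. The reach endpoint is modeled as a convex mixture of egocentric and allocentric estimates, with the allocentric weight assumed to rise when visual feedback is unpredictable.

```python
def reach_endpoint(ego_estimate, allo_estimate, w_allo):
    """Reach endpoint as a convex mixture of egocentric and allocentric
    target estimates; w_allo is the weight on the allocentric cue."""
    if not 0.0 <= w_allo <= 1.0:
        raise ValueError("w_allo must lie in [0, 1]")
    return (1.0 - w_allo) * ego_estimate + w_allo * allo_estimate

# Hypothetical numbers: the illusion shifts the allocentric estimate by
# 4 mm while the egocentric estimate stays veridical (0 mm). A larger
# allocentric weight (offline control) yields a larger endpoint bias.
offline_bias = reach_endpoint(0.0, 4.0, w_allo=0.5)  # unpredictable feedback
online_bias = reach_endpoint(0.0, 4.0, w_allo=0.1)   # reliable feedback
```

Under this sketch, the larger IRE sensitivity of offline-controlled reaches corresponds simply to a larger `w_allo`.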

2.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normals, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

3.
The purpose of this study was to examine how discrete segments of contiguous space arising from perceptual or cognitive channels are mentally concatenated. We induced and measured errors in each channel separately, then summed the psychophysical functions to accurately predict pointing to a depth specified by both together. In Experiment 1, subjects drew a line to match the visible indentation of a probe into a compressible surface. Systematic perceptual errors were induced by manipulating surface stiffness. Subjects in Experiment 2 placed the probe against a rigid surface and viewed the depth of a hidden target below it from a remote image with a metric scale. This cognitively mediated depth judgment produces systematic underestimation (Wu et al. in IEEE Trans Vis Comput Graph 11(6):684–693, 2005; confirmed here). In Experiment 3, subjects pointed to a target location detected by the indented probe and displayed remotely, requiring mental concatenation of the depth components. The model derived from the data indicated the errors in the components were passed through the integration process without additional systematic error. Experiment 4 further demonstrated that this error-free concatenation was intrinsically spatial, rather than numerical.
Bing Wu
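The additive model in the abstract above can be sketched as follows, assuming (hypothetically) linear psychophysical functions for each channel. The gains and biases are illustrative, not the study's fitted values; the point being illustrated is that the component biases pass through concatenation unchanged, so the predicted pointing depth is simply the sum of the two biased estimates.

```python
def perceived_indentation(actual_mm, gain=0.9, bias=1.0):
    """Perceptual channel: visible indentation of the probe into the
    surface (linear form and parameter values are hypothetical)."""
    return gain * actual_mm + bias

def perceived_remote_depth(actual_mm, gain=0.8, bias=-2.0):
    """Cognitive channel: depth read off a scaled remote image; the
    negative bias stands in for the systematic underestimation the
    abstract attributes to Wu et al. (2005)."""
    return gain * actual_mm + bias

def predicted_pointing_depth(indent_mm, remote_mm):
    """Error-free concatenation: the summed estimate inherits each
    component's bias but adds no systematic error of its own."""
    return perceived_indentation(indent_mm) + perceived_remote_depth(remote_mm)

# With these illustrative parameters: (9 + 1) + (16 - 2) = 24 mm.
total = predicted_pointing_depth(10.0, 20.0)
```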

4.
Subjects reached in three-dimensional space to a set of remembered targets whose position was varied randomly from trial to trial, but always fell along a "virtual" line (line condition). Targets were presented briefly, one-by-one and in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions, the target was presented in the complete absence of allocentric visual cues as to its position in space. However, because the subjects were informed prior to the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown to the subjects. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two different lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere condition and the line condition. In the former case, the errors were always accounted for by egocentric reference frames. By contrast, the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent coexisting representations.

5.
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations such as egocentric and allocentric frames of reference, for immediate and delayed pointing, respectively. Therefore, the CB like the BS children are able to use both ego- and allocentric frames of reference.

6.
The purpose of the present study was to investigate the coordination of the two effectors when one or both targets were displaced in a bimanual prehension task. Sixteen right-handed volunteers were asked to reach 20 cm to grasp and lift two cubic objects with the right and left hands. Upon initiation of the reach: (1) both objects could remain at the initial position (NN); (2) the right object could be displaced toward the subject (NJ); (3) the left object could be displaced (JN); or (4) both objects could be displaced (JJ). Generally, the results indicated that the hand moving to the perturbed object was reorganized to reach the target efficiently, but hovered so as to somewhat couple object lift for the two hands. In contrast, adjustments were seen in the velocity profiles of the hand moving to the non-perturbed target, including a premature deceleration phase and corrective movements to reach the target location. Together, these results indicate that when the perturbation of one object occurs during the performance of a bimanual prehension task, visual information is used to independently update the control process for the limb moving to the perturbed object. Additionally, interference causes the limb moving to the non-perturbed target to be inappropriately adjusted in response to the perturbation. Our results also indicated that perceptual and motor factors such as time allotted for the use of feedback and the direction of movement may play a role in the independence/dependence relationship between the hands during bimanual tasks. Furthermore, subjects’ expectations about the performance and goal of the task could have a further influence on the level of interference seen during bimanual movements. Finally, despite interference effects which caused multiple accelerations and decelerations, the hand moving to the non-perturbed target still achieved the target location in the same movement time as during control conditions.
This final result indicates the efficiency with which subjects can reorganize both limbs in the face of altered task requirements.
Andrea H. Mason

7.
We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impact the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrated that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.
Matthew Heath

8.
Although many studies have demonstrated that crossmodal exogenous orienting can lead to a facilitation of reaction times, the issue of whether exogenous spatial orienting also affects the accuracy of perceptual judgments has proved to be much more controversial. Here, we examined whether or not exogenous spatial attentional orienting would affect sensitivity in a temporal discrimination task. Participants judged which of the two target letters, presented on either the same or opposite sides, had been presented first. A spatially non-predictive tone was presented 200 ms prior to the onset of the first visual stimulus. In two experiments, we observed improved performance (i.e., a decrease in the just noticeable difference) when the target letters were presented on opposite sides and the auditory cue was presented on the side of the first visual stimulus, even when central fixation was monitored (Experiment 2). A shift in the point of subjective simultaneity was also observed in both experiments, indicating ‘prior entry’ for cued as compared to uncued first target trials. No such JND or PSS effects were observed when the auditory tone was presented after the second visual stimulus (Experiment 3), thus confirming the attentional nature of the effects observed. These findings clearly show that the crossmodal exogenous orienting of spatial attention can affect the accuracy of temporal judgments.
Valerio Santangelo
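The JND and PSS measures in the abstract above come from fitting a psychometric function to temporal-order judgments. A minimal sketch of that standard analysis (a cumulative-Gaussian fit via grid search; not the paper's exact procedure, and all names and grid ranges are illustrative) is:

```python
import math

def psychometric(soa_ms, pss_ms, sigma_ms):
    """Probability of one particular order response, modeled as a
    cumulative Gaussian of the stimulus-onset asynchrony (SOA)."""
    z = (soa_ms - pss_ms) / (sigma_ms * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def fit_pss_jnd(soas_ms, proportions):
    """Least-squares grid search for (PSS, JND); the JND is taken as the
    75%-25% half-width of the fitted curve, i.e. 0.6745 * sigma."""
    best = (float("inf"), 0.0, 1.0)
    for pss in range(-100, 101):      # candidate PSS values, in ms
        for sigma in range(1, 81):    # candidate sigma values, in ms
            err = sum((psychometric(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas_ms, proportions))
            if err < best[0]:
                best = (err, float(pss), float(sigma))
    _, pss, sigma = best
    return pss, 0.6745 * sigma

# Recover known parameters from noiseless synthetic judgments:
soas = [-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0]
props = [psychometric(s, 12.0, 25.0) for s in soas]
pss_hat, jnd_hat = fit_pss_jnd(soas, props)
```

A cueing-induced shift in `pss_hat` corresponds to the "prior entry" effect, and a decrease in `jnd_hat` to the sensitivity improvement the abstract reports.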

9.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to other's body (allocentric) reference frame. Visual perspective taking tasks are also performed in self-body perspective but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining hand laterality task and visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

10.
Behavioral studies suggest that humans intercept moving targets by maintaining a constant bearing angle (CBA). The purely feedback-driven CBA strategy has been contrasted with the strategy of predicting the eventual time and location of the future interception point. This study considers an intermediate anticipatory strategy of moving so as to produce a CBA a short duration into the future. Subjects controlled their speed of self-motion along a linear path through a simulated environment to intercept a moving target. When targets changed speed midway through the trial in Experiment 1, subjects abandoned an ineffective CBA strategy in favor of a strategy of anticipating the most likely change in target speed. In Experiment 2, targets followed paths of varying curvature. Behavior was inconsistent with both the CBA and the purely predictive strategy. To investigate the intermediate anticipatory strategy, human performance was compared with a model of interceptive behavior that, at each time-step t, produced the velocity adjustment that would minimize the change in bearing angle at time t + ∆t, taking into account the target’s behavior during that interval. Values of ∆t at which the model best fit the human data for practiced subjects varied between 0.5 and 3.5 s, suggesting that actors adopt an anticipatory strategy to keep the bearing angle constant a short time into the future.
Gabriel Jacob Diaz
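The anticipatory model in the abstract above can be sketched directly: at each step, choose the speed that makes the bearing angle at t + ∆t equal to its current value, given the target's (assumed known) motion over that interval. The geometry below is a simplified illustration with the pursuer constrained to the x-axis; function names and the example trajectory are hypothetical, not the study's simulation code.

```python
import math

def bearing(pursuer_x, target_x, target_y):
    """Bearing angle of the target from a pursuer travelling along the
    x-axis (target assumed off the path, so target_y != 0)."""
    return math.atan2(target_y, target_x - pursuer_x)

def anticipatory_speed(pursuer_x, t, target_pos, dt_ahead):
    """Speed that makes the bearing angle at t + dt_ahead equal to its
    current value, given a target trajectory target_pos(t) -> (x, y)."""
    tx0, ty0 = target_pos(t)
    theta = bearing(pursuer_x, tx0, ty0)
    tx1, ty1 = target_pos(t + dt_ahead)
    # Solve bearing(pursuer_x + v * dt_ahead, tx1, ty1) == theta for v:
    # tan(theta) = ty1 / (tx1 - new_x)  =>  new_x = tx1 - ty1 / tan(theta)
    new_x = tx1 - ty1 / math.tan(theta)
    return (new_x - pursuer_x) / dt_ahead

# A target translating parallel to the path at 2 m/s: holding the bearing
# constant requires matching its along-track speed.
v = anticipatory_speed(0.0, 0.0, lambda t: (5.0 + 2.0 * t, 10.0), 1.0)
```

With ∆t → 0 this reduces to the purely feedback-driven CBA rule; the model fits reported in the abstract correspond to ∆t between 0.5 and 3.5 s.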

11.
We investigated the effect of varying sound intensity on the audiotactile crossmodal dynamic capture effect. Participants had to discriminate the direction of a target stream (tactile, Experiment 1; auditory, Experiment 2) while trying to ignore the direction of a distractor stream presented in a different modality (auditory, Experiment 1; tactile, Experiment 2). The distractor streams could either be spatiotemporally congruent or incongruent with respect to the target stream. In half of the trials, the participants were presented with auditory stimuli at 75 dB(A) while in the other half of the trials they were presented with auditory stimuli at 82 dB(A). Participants’ performance on both tasks was significantly affected by the intensity of the sounds. Namely, the crossmodal capture of tactile motion by audition was stronger with the more intense (vs. less intense) auditory distractors (Experiment 1), whereas the capture effect exerted by the tactile distractors was stronger for less intense (than for more intense) auditory targets (Experiment 2). The crossmodal dynamic capture was larger in Experiment 1 than in Experiment 2, with a stronger congruency effect when the target streams were presented in the tactile (vs. auditory) modality. Two explanations are put forward to account for these results: an attentional biasing toward the more intense auditory stimuli, and a modulation induced by the relative perceptual weight of, respectively, the auditory and the tactile signals.
Valeria Occelli

12.
It has been widely reported that aging is accompanied by a decline in motor skill performance and, in particular, it has been shown that older subjects take longer to adapt their ongoing reach in response to a target location shift. In the present experiment, we investigated the influence of aging on the ability to perform trajectory corrections in response to a target jump, but also assessed inhibition by asking a younger and an older group of participants to either adapt or stop their ongoing movement in response to a target location change. Results showed that although older subjects took longer to initiate, execute, correct and inhibit an ongoing reach, they performed both tasks with the same level of accuracy as the younger sample. Moreover, the slowing was also observed when older subjects were asked to point to stationary targets. Our findings thus indicate that aging does not specifically influence the ability to perform or inhibit fast online corrections to target location changes, but rather produces a general slowing and increased variability of movement planning, initiation and execution to both perturbed and stationary targets. For the first time, we demonstrate that aging is not accompanied by a decrease in the inhibition of motor control.
Monika Harvey

13.
When participants reach for a target, their hand can adjust to a change in target position that occurs while their eyes are in motion (the hand’s automatic pilot) even though they are not aware of the target’s displacement (saccadic suppression of perceptual experience). However, previous studies of this effect have displayed the target without interruption, such that the new target position remains visible during the fixation that follows the saccade. Here we test whether a change in target position that begins and ends during the saccade can be used to update aiming movements. We also ask whether such information can be acquired from two targets at a time. The results showed that participants responded to single and double target jumps even when these targets were extinguished prior to saccade termination. The results imply that the hand’s automatic pilot is updated with new visual information even when the eye is in motion.
Romeo Chua

14.
The vestibular system analyses angular and linear accelerations of the head that are important information for perceiving the location of one’s own body in space. Vestibular stimulation, and in particular galvanic vestibular stimulation (GVS), which allows a systematic modification of vestibular signals, has so far mainly been used to investigate vestibular influence on sensorimotor integration in eye movements and postural control. Comparatively, only a few behavioural and imaging studies have investigated how cognition of space and body may depend on vestibular processing. This study was designed to differentiate the influence of left versus right anodal GVS compared to sham stimulation on object-based versus egocentric mental transformations. While GVS was applied, subjects made left-right judgments about pictures of a plant or a human body presented at different orientations in the roll plane. All subjects reported illusory sensations of body self-motion and/or visual field motion during GVS. Response times in the mental transformation task were increased during right but not left anodal GVS for the more difficult stimuli and the larger angles of rotation. Post-hoc analyses suggested that the interfering effect of right anodal GVS was only present in subjects who reported having imagined turning themselves to solve the mental transformation task (egocentric transformation) as compared to those subjects having imagined turning the picture in space (object-based mental transformation). We suggest that this effect relies on shared functional and cortical mechanisms in the posterior parietal cortex associated with both right anodal GVS and mental imagery. Electronic supplementary material  The online version of this article (doi:) contains supplementary material, which is available to authorized users.
Olaf Blanke (Corresponding author)

15.
The present study investigated the effect of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks, no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2–4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.
Wieske van Zoest

16.
Research demonstrates that listening to and viewing speech excites tongue and lip motor areas involved in speech production. This perceptual-motor relationship was investigated behaviourally by presenting video clips of a speaker producing vowel-consonant-vowel syllables in three conditions: visual-only, audio-only, and audiovisual. Participants identified target letters flashed over the mouth during the video, responding either manually or verbally as quickly as possible. Verbal responses were fastest when the target matched the speech stimuli in all modality conditions, yet optimal facilitation was observed when participants were presented with visual-only stimuli. Critically, no such facilitation occurred when participants were asked to identify the target manually. Our findings support previous research suggesting a close relationship between speech perception and production by demonstrating that viewing speech can ‘prime’ our motor system for subsequent speech production.
Jeffery A. Jones

17.
Spatial priming in visual search is a well-documented phenomenon. If the target of a visual search is presented at the same location in subsequent trials, the time taken to find the target at this repeated target location is significantly reduced. Previous studies did not determine which spatial reference frame is used to code the location. At least two reference frames can be distinguished: an observer-related frame of reference (egocentric) or a scene-based frame of reference (allocentric). While past studies suggest that an allocentric reference frame is more effective, we found that an egocentric reference frame is at least as effective as an allocentric one (Ball et al. Neuropsychologia 47(6):1585–1591, 2009). Our previous study did not identify which specific egocentric reference frame was used for the priming: participants could have used a retinotopic or a body-centred frame of reference. Here, we disentangled the retinotopic and body-centred reference frames. In the retinotopic condition, the position of the target stimulus, when repeated, changed with the fixation position, whereas in the body-centred condition, the position of the target stimulus remained the same relative to the display, and thus to the body-midline, but was different relative to the fixation position. We used a conjunction search task to assess the generality of our previous findings. We found that participants relied on body-centred information and not retinotopic cues. Thus, we provide further evidence that egocentric information, and specifically body-centred information, can persist for several seconds, and that these effects are not specific to either a feature or a conjunction search paradigm.

18.
Masked priming effect with canonical finger numeral configurations
Discrete numerosities can be represented by various finger configurations. The impact of counting strategies on these configurations and their possible semantic status were investigated in young adults. Experiment 1 showed that young adults named numerical finger configurations faster when they conformed to their own canonical finger-counting habits than when they did not. Experiment 2 showed that numeral finger configurations used as unconsciously presented primes speeded up numerical comparative judgements of Arabic numeral targets. Participants responded faster and made fewer errors with numerical than with non-numerical primes, and when primes and targets were congruent (i.e., leading to the same response). Moreover, this priming effect generalised to novel never consciously seen numerosities for canonical configurations but not for non-canonical ones. These results support the idea that canonical finger configurations automatically activate number semantics whereas non-canonical ones do not.
Mauro Pesenti

19.
The body schema, a constantly updated representation of the body and its parts, has been suggested to emerge from body part-specific representations which integrate tactile, visual, and proprioceptive information about the identity and posture of the body. Studies using different approaches have provided evidence for a distinct representation of the visual space ~30 cm around the upper body, and predominantly the hands, termed the peripersonal space. In humans, peripersonal space representations have often been investigated with a visual–tactile crossmodal congruency task. We used this task to test if a representation of peripersonal space exists also around the feet, and to explore possible interactions of peripersonal space representations of different body parts. In Experiment 1, tactile stimuli to the hands and feet were judged according to their elevation while visual distractors presented near the same limbs had to be ignored. Crossmodal congruency effects did not differ between the two types of limbs, suggesting a representation of peripersonal space also around the feet. In Experiment 2, tactile stimuli were presented to the hands, and visual distractors were flashed either near the participant’s foot, near a fake foot, or in distant space. Crossmodal congruency effects were larger in the real foot condition than in the two other conditions, indicating interactions between the peripersonal space representations of foot and hand. Furthermore, results of all three conditions showed that vision of the stimulated body part, compared to only proprioceptive input about its location, strongly influences crossmodal interactions for tactile perception, affirming the central role of vision in the construction of the body schema.
Tobias Schicke

20.
In order to optimally characterize full-body self-motion perception during passive translations, changes in perceived location, velocity, and acceleration must be quantified in real time and with high spatial resolution. Past methods have failed to effectively measure these critical variables. Here, we introduce continuous pointing as a novel method with several advantages over previous methods. Participants point continuously to the mentally updated location of a previously viewed target during passive, full-body movement. High-precision motion-capture data of arm angle provide a measure of a participant’s perceived location and, in turn, perceived velocity at every moment during a motion trajectory. In two experiments, linear movements were presented in the absence of vision by passively translating participants with a robotic wheelchair or an anthropomorphic robotic arm (MPI Motion Simulator). The movement profiles included constant-velocity trajectories, two successive movement intervals separated by a brief pause, and reversed-motion trajectories. Results indicate a steady decay in perceived velocity during constant-velocity travel and an attenuated response to mid-trial accelerations. Electronic supplementary material  The online version of this article (doi:) contains supplementary material, which is available to authorized users.
Jennifer L. Campos
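The continuous-pointing method in the abstract above rests on simple geometry: if the target's location relative to the travel path is known, the arm's pointing azimuth at each instant can be inverted to recover the perceived self-position, and differentiated to give perceived velocity. A hedged sketch of that inversion (coordinates, names, and the sample trace are illustrative, not the study's analysis pipeline):

```python
import math

def perceived_position(alpha, target_x, target_y):
    """Invert the pointing geometry: a participant translating along the
    x-axis who points at azimuth alpha (radians, measured from the travel
    direction) toward a target at (target_x, target_y) must believe they
    are at this x-coordinate."""
    return target_x - target_y / math.tan(alpha)

def perceived_velocity(alphas, times, target_x, target_y):
    """Finite-difference estimate of perceived self-motion velocity from
    a sampled arm-azimuth trace."""
    xs = [perceived_position(a, target_x, target_y) for a in alphas]
    return [(x1 - x0) / (t1 - t0)
            for x0, x1, t0, t1 in zip(xs, xs[1:], times, times[1:])]

# A veridical observer moving at 1 m/s past a target at (10, 5) would
# produce this azimuth trace; inverting it recovers the 1 m/s velocity.
times = [0.0, 0.5, 1.0, 1.5, 2.0]
alphas = [math.atan2(5.0, 10.0 - t) for t in times]
velocities = perceived_velocity(alphas, times, 10.0, 5.0)
```

In the actual experiments, a decaying perceived velocity during constant-velocity travel would show up here as `velocities` falling below the true speed over the course of the trial.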


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号