Similar articles
Found 20 similar articles (search time: 31 ms)
1.
We tested the hypothesis that the body self-advantage, i.e., the facilitation in discriminating one’s own versus other people’s body-effectors, is the expression of implicit, body-specific knowledge based mainly on the sensorimotor representation of one’s own body-effectors. Alternatively, the body self-advantage could rely on visual recognition of pictorial cues. According to the first hypothesis, using gray-scale pictures of body parts, the body self-advantage should emerge when self-body recognition is implicitly required and should be specific to body-effectors rather than inanimate objects. In contrast, if the self-advantage is due to mere visual–perceptual facilitation, it should be independent of whether the request is implicit or explicit (and could also extend to objects). To disentangle these hypotheses, healthy participants were implicitly or explicitly required to recognize either their own body-effectors or inanimate objects. Participants were more accurate in the implicit task with their own rather than with others’ body-effectors. In contrast, the self-advantage was not found when explicit recognition of one’s own body-effectors was required, suggesting that the body self-advantage relies upon a sensorimotor, rather than a merely visual, representation of one’s own body. Moreover, the absence of both self/other and implicit/explicit effects when processing inanimate objects underlines the differences between the body and other objects.

2.
The present study aimed to verify whether, and why, sequences of actions directed to oneself are facilitated compared to action sequences directed to conspecifics. In Experiment 1, participants reached to grasp and brought a piece of food either to their own mouth for self-feeding or to the mouth of a conspecific for feeding. In control conditions, they executed the same sequence to place the piece of food into a mouth-like aperture in a flat container placed over either their own mouth or the mouth of a conspecific. Kinematic analysis showed that the actions of reaching and bringing were faster when directed to the participant’s own body, especially for self-feeding. The data support the hypothesis that reaching to grasp and bringing to one’s own body, and in particular to one’s own mouth for self-feeding, form an automatic sequence, resulting from more frequent execution and coordination between different effectors of one’s own body, such as arm and mouth. In contrast, the same sequence directed toward a conspecific is not automatic and requires more accuracy, probably because it is guided by social intentions. This hypothesis was supported by the results of control Experiment 2, in which we compared the kinematics of reaching to grasp and placing the piece of food into the mouth of a conspecific (i.e. feeding) with those of reaching to grasp and placing the same piece of food into a mouth-like aperture in a human body shape (i.e. placing). Indeed, the entire sequence was slowed down during feeding compared to placing.

3.
We required healthy subjects to recognize visually presented images of their own or others’ hands in egocentric or allocentric perspective. Both right- and left-handers were faster at recognizing their dominant hand in egocentric perspective and others’ non-dominant hand in allocentric perspective. These findings demonstrate that body-specific information contributes to the sense of ownership, and that the “peri-dominant-hand space” is the preferred reference frame for distinguishing self from non-self body parts.

4.
Vestibular information helps to establish a reliable gravitational frame of reference and contributes to the adequate perception of the location of one’s own body in space. This information is likely to be required in spatial cognitive tasks. Indeed, previous studies suggest that the processing of vestibular information is involved in mental transformation tasks in healthy participants. In this study, we investigate whether patients with bilateral or unilateral vestibular loss show impaired ability to mentally transform images of bodies and body parts compared to a healthy, age-matched control group. An egocentric and an object-based mental transformation task were used. Moreover, spatial perception was assessed using a computerized version of the subjective visual vertical and the rod and frame test. Participants with bilateral vestibular loss showed impaired performance in mental transformation, especially in egocentric mental transformation, compared to participants with unilateral vestibular lesions and the control group. Performance of participants with unilateral vestibular lesions and the control group was comparable, and no differences were found between right- and left-sided labyrinthectomized patients. A control task showed no differences among the three groups. The findings from this study substantiate that central vestibular processes are involved in imagined spatial body transformations; interestingly, only participants with bilateral vestibular loss were affected, whereas unilateral vestibular loss did not lead to a decline in spatial imagery.

5.
Over the course of evolutionary history, humans have reached a high level of sophistication in the way they interact with the environment. One important step in this process was the introduction of tools, enabling humans to go beyond the boundaries of their physical possibilities. Here, we focus on some “low-level” aspects of sensorimotor processing that highlight how tool-use plays a causal role in shaping body representations, an essential plastic feature for efficient motor control during development and for skilful tool-use in adult life. We assess the evidence supporting the hypothesis that tools are incorporated into the body representation for action, that is, the body schema, by critically reviewing previous findings and providing new data from on-going work in our laboratory. In particular, we discuss several experiments that reveal the effects of tool-use on both the kinematics of hand movements and the localization of somatosensory stimuli on the body surface, as well as the conditions necessary for these effects to be manifested. We suggest that, overall, these findings speak in favour of genuine tool-use-dependent plasticity of the body representation for the control of action.

6.
Thirty patients who had undergone either a right or left unilateral temporal lobectomy (14 RTL; 16 LTL) and 16 control participants were tested on a computerized human analogue of the Morris Water Maze. The procedure was designed to compare allocentric and egocentric spatial memory. In the allocentric condition, participants searched for a target location on the screen, guided by object cues. Between trials, participants had to walk around the screen, which disrupted egocentric memory representation. In the egocentric condition, participants remained in the same position, but the object cues were shifted between searches to prevent them from using allocentric memory. Only the RTL group was impaired on the allocentric condition, and neither the LTL nor RTL group was impaired on additional tests of spatial working memory or spatial manipulation. The results support the notion that the right anterior temporal lobe stores long-term allocentric spatial memories.

7.
The experience of body ownership can be successfully manipulated during the rubber hand illusion using synchronous multisensory stimulation. The hypothesis that multisensory integration is both a necessary and sufficient condition for body ownership is debated. We systematically varied the appearance of the object that was stimulated in synchrony or asynchrony with the participant’s hand. A viewed object that was transformed in three stages from a plain wooden block to a wooden hand was compared to a realistic rubber hand. Introspective and behavioural results show that participants experience a sense of ownership only for the realistic prosthetic hand, suggesting that not all objects can be experienced as part of one’s body. Instead, the viewed object must fit with a reference model of the body that contains important structural information about body parts. This body model can distinguish between corporeal and non-corporeal objects, and it therefore plays a critical role in maintaining a coherent sense of one’s body.

8.
Goal-directed movements performed in a virtual environment pose serious challenges to the central nervous system because the visual and proprioceptive representations of one’s hand position are not perfectly congruent. The aim of the present study was to determine whether the vision of one’s hand or upper arm, compared with that of a cursor representing the tips of one’s index finger and thumb, optimizes the planning and modulation of one’s movement as the cursor nears the target. The participants performed manual aiming movements that differed by the source of static visual information available during movement planning and the source of dynamic information available during movement execution. The results revealed that vision of one’s hand during the movement planning phase resulted in more efficient online control processes than planning based on a virtual representation of one’s initial hand location. This held regardless of the availability of online visual feedback during movement execution. These results suggest that a more reliable estimation of the initial hand position yields a more accurate estimation of the position of the cursor/hand at any one time, and hence more accurate online control.

9.
Viewing the body affects somatosensory processing, even when entirely non-informative about stimulation. While several studies have reported effects of viewing the body on cortical processing of touch and pain, the neural locus of this modulation remains unclear. We investigated whether seeing the body modulates processing in primary somatosensory cortex (SI) by measuring short-latency somatosensory evoked-potentials (SEPs) elicited by electrical stimulation of the median nerve while participants looked directly at their stimulated hand or at a non-hand object. Vision of the body produced a clear reduction of the P27 component of the SEP recorded over contralateral parietal channels, which is known to reflect processing in SI. These results provide the first direct evidence that seeing the body modulates processing in SI and demonstrate that vision can affect even the earliest stages of cortical somatosensory processing.

10.
The posterior parietal cortex of both human and non-human primates is known to play a crucial role in the early integration of visual information with somatosensory, proprioceptive and vestibular signals. However, it is not known whether in humans this region is further capable of discriminating if a stimulus poses a threat to the body. In this functional magnetic resonance imaging (fMRI) study, we tested the hypothesis that the posterior parietal cortex of humans is capable of modulating its response to the visual processing of noxious threat representation in the absence of tactile input. During fMRI, participants watched while we "stimulated" a visible rubber hand, placed over their real hand with either a sharp (painful) or a blunt (nonpainful) probe. We found that superior and inferior parietal regions (BA5/7 and BA40) increased their activity in response to observing a painful versus nonpainful stimulus. However, this effect was only evident when the rubber hand was in a spatially congruent (vs. incongruent) position with respect to the participants' own hand. In addition, areas involved in motivational-affective coding such as mid-cingulate (BA24) and anterior insula also showed such relevance-dependent modulation, whereas premotor areas known to receive multisensory information about limb position did not. We suggest these results reveal a human anatomical-functional homologue to monkey inferior parietal areas that respond to aversive stimuli by producing nocifensive muscle and limb movements.

11.
On the timing of reference frames for action control
This study investigated the time course and automaticity of spatial coding of visual targets for pointing movements. To provide an allocentric reference, placeholders appeared on a touch screen either 500 ms before target onset, or simultaneously with target onset, or at movement onset, or not at all (baseline). With both blocked and randomized placeholder timing, movements to the most distant targets were only facilitated when placeholders were visible before movement onset. This result suggests that allocentric target coding is most useful during movement planning and that this visuo-spatial coding mechanism is not sensitive to strategic effects.

12.
It has been proposed that an internal representation of body vertical has a prominent role in spatial orientation. This study investigated the ability of human subjects to accurately locate their longitudinal body axis (an imaginary straight body midline running from head to toes) while free-floating in microgravity. Subjects were tested in-flight, as well as on ground in normal gravity in both the upright and supine orientations, to provide baseline measurements. The subjects wore a goggle device and were in total darkness. They used knobs to rotate two luminous lines until they were parallel to the subjective direction of their longitudinal body axis, in the roll (right/left) and the pitch (forward/backward) planes. Results showed that the error between the perceived and the objective direction of the longitudinal body axis was significantly larger in microgravity than in normal gravity. This error in the egocentric frame of reference is presumably due to the absence of somatosensory cues when free-floating. Mechanical pressure on the chest using an airbag reduced the error in perception of the longitudinal body axis in microgravity, thus improving spatial orientation.
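The error measure described above can be made concrete with a short sketch: the signed difference between each line setting and the objective axis direction, averaged as an absolute error per condition. This is a hypothetical illustration rather than the study's actual analysis; the function names, the wrap-around convention, and all angle values are assumptions for the example.

```python
def axis_error_deg(perceived_deg, objective_deg):
    """Signed angular error between the perceived and objective direction
    of the longitudinal body axis, wrapped to the range [-180, 180)."""
    return (perceived_deg - objective_deg + 180.0) % 360.0 - 180.0

def mean_abs_error(settings_deg, objective_deg=0.0):
    """Mean absolute error across repeated line settings (in degrees)."""
    return sum(abs(axis_error_deg(s, objective_deg)) for s in settings_deg) / len(settings_deg)

# Hypothetical roll-plane settings; the pattern (larger errors in-flight)
# mirrors the reported result, but the numbers are invented.
upright_1g   = [1.5, -2.0, 0.8, -1.2]   # normal gravity, upright
microgravity = [6.0, -8.5, 7.2, -5.5]   # free-floating, in-flight
print(mean_abs_error(upright_1g), mean_abs_error(microgravity))
```

The modulo wrap keeps errors well-defined even if a subject's setting crosses the 0°/360° boundary.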

13.
Can viewing our own body modified in size reshape the bodily representation employed for interacting with the environment? This question was addressed here by exposing participants to either an enlarged, a shrunken, or an unmodified view of their own hand in a reach-to-grasp task toward a target of fixed dimensions. When presented with a visually larger hand, participants modified the kinematics of their grasping movement by reducing maximum grip aperture. This adjustment carried over even when the hand was rendered invisible in subsequent trials, suggesting a stable modification of the bodily representation employed for the action. The effect was specific to the size of the grip aperture, leaving the other features of the reach-to-grasp movement unaffected. Reducing the visual size of the hand did not induce the opposite effect, although individual differences were found, possibly depending on the degree of each participant’s reliance on visual input. A control experiment suggested that the effect exerted by the vision of the enlarged hand could not be explained by simple global visual rescaling alone. Overall, our results suggest that visual information pertaining to the size of the body is accessed by the body schema and is prioritized over the proprioceptive input for motor control.

14.
Perceiving the external spatial location of the limbs using position sense requires that immediate proprioceptive afferent signals be combined with a stored body model specifying the size and shape of the body. Longo and Haggard (Proc Natl Acad Sci USA 107:11727–11732, 2010) developed a method to isolate and measure this body model in the case of the hand in which participants judge the perceived location in external space of several landmarks on their occluded hand. The spatial layout of judgments of different landmarks is used to construct implicit hand maps, which can then be compared with actual hand shape. Studies using this paradigm have revealed that the body model of the hand is massively distorted, in a highly stereotyped way across individuals, with large underestimation of finger length and overestimation of hand width. Previous studies using this paradigm have allowed participants to see the locations of their judgments on the occluding board. Several previous studies have demonstrated that immediate vision, even when wholly non-informative, can alter processing of somatosensory signals and alter the reference frame in which they are localised. The present study therefore investigated whether immediate vision contributes to the distortions of implicit hand maps described previously. Participants judged the external spatial location of the tips and knuckles of their occluded left hand either while being able to see where they were pointing (as in previous studies) or while blindfolded. The characteristic distortions of implicit hand maps reported previously were clearly apparent in both conditions, demonstrating that the distortions are not an artefact of immediate vision. However, there were significant differences in the magnitude of distortions in the two conditions, suggesting that vision may modulate representations of body size and shape, even when entirely non-informative.
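As a rough sketch of how such implicit-map distortions can be quantified, judged landmark positions may be compared directly with actual ones. Everything below is hypothetical: the coordinates, the function name, and the single-distance simplification are assumptions for illustration, not the paradigm's actual analysis pipeline.

```python
import math

def misestimation_pct(judged_a, judged_b, actual_a, actual_b):
    """Percent over-/underestimation of the distance between two landmarks,
    from judged vs. actual (x, y) positions; negative = underestimation."""
    judged = math.dist(judged_a, judged_b)
    actual = math.dist(actual_a, actual_b)
    return 100.0 * (judged - actual) / actual

# Hypothetical index finger (knuckle at origin, tip along y): length is
# underestimated, while a hypothetical knuckle-to-knuckle hand width is
# overestimated, matching the stereotyped pattern described above.
finger_len = misestimation_pct((0, 0), (0, 5.6), (0, 0), (0, 8.0))
hand_width = misestimation_pct((0, 0), (9.6, 0), (0, 0), (8.0, 0))
print(finger_len, hand_width)
```

With these invented coordinates the sketch reports roughly a 30% underestimation of finger length and a 20% overestimation of hand width.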

15.
Numerical magnitude is believed to be represented along a mental number line (MNL), and there is evidence to suggest that the activation of the MNL affects the perception and representation of external space. In the present study, we investigated whether a spatial motor task affects numerical processing in the auditory modality. Blindfolded participants were presented with a numerical interval bisection task, while performing a tapping task with either their left or right hand, either in the fronto-central, fronto-left, or fronto-right peripersonal space. Results showed that tapping significantly influenced the participants’ numerical bisection, with tapping in the left side of space increasing the original tendency to err leftward, and tapping to the right reducing such bias. Importantly, the effect depended on the side of space in which the tapping activity was performed, regardless of which hand was used. Tapping with either the left or right hand in the fronto-central space did not affect the participants’ bias. These findings offer novel support for the existence of bidirectional interactions between external and internal representations of space.
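A minimal sketch of the bisection measure: the bias is the signed deviation of the reported midpoint from the true numerical midpoint of the interval. The trial values and function names below are invented for illustration and do not come from the study.

```python
def bisection_bias(low, high, response):
    """Signed deviation of the reported midpoint from the true midpoint
    of a numerical interval; negative = leftward (smaller-number) bias."""
    return response - (low + high) / 2.0

def mean_bias(trials):
    """Mean bias over (low, high, response) trials."""
    return sum(bisection_bias(*t) for t in trials) / len(trials)

# Hypothetical data: left-side tapping exaggerates the leftward bias,
# right-side tapping cancels it, as in the pattern reported above.
left_tap  = [(11, 19, 14), (23, 37, 29), (54, 66, 59)]
right_tap = [(11, 19, 15), (23, 37, 30), (54, 66, 60)]
print(mean_bias(left_tap), mean_bias(right_tap))
```

Averaging the signed (rather than absolute) deviation is what lets leftward and rightward errors pull the estimate in opposite directions.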

16.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normal subjects, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

17.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to an other's-body (allocentric) reference frame. Visual perspective taking tasks are also performed in self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task with visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged the laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge the laterality of a hand embedded in a human silhouette from their own (egocentric group) or from the silhouette’s perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

18.
We determined whether uncertainty about the location of one’s hand in virtual environments limits the efficacy of online control processes. In the Non-aligned and Aligned conditions, the participant’s hand was represented by a cursor on a vertical or horizontal display, respectively. In the Natural condition, participants saw their hand. During an acquisition phase, visual feedback was either permitted or not during movement execution. To test the hypothesis (Norris et al. 2001) that reliance on visual feedback increases as the task becomes less natural (Natural < Aligned < Non-aligned), following acquisition, participants performed a transfer phase without visual feedback. During acquisition in both visual feedback conditions, movement endpoint variability increased as the task became less natural. This suggests that the orientation of the display and the representation of one’s hand by a cursor introduced uncertainty about its location, which limits the efficacy of online control processes. Contrary to the hypothesis of Norris et al. (2001), withdrawing visual feedback in transfer had a larger deleterious effect on movement accuracy as the task became less natural. This suggests that the CNS increases the weight attributed to the input that can be processed without first having to be transformed.

19.
Hay L, Redon C. Neuroscience Letters, 2006, 408(3): 194–198
Pointing movements decrease in accuracy when target information is removed before movement onset. This time effect was analyzed in relation to the spatial representation of the target location, which can be egocentric (i.e. in relation to the body) or exocentric (i.e. in relation to the external world), depending on the visual environment of the target. The accuracy of pointing movements performed without visual feedback was measured in two delay conditions: 0-s and 5-s delays between target removal and movement onset. In each delay condition, targets were presented either in darkness (egocentric localization) or within a structured visual background (exocentric localization). The results show that pointing was more accurate when targets were presented within a visual background than in darkness. The time-related decrease in accuracy was observed in the darkness condition, whereas no delay effect was found in the presence of a visual background. Therefore, contextual factors applied to a simple pointing action might induce different spatial representations: a short-lived sensorimotor egocentric representation used in immediate action control, or a long-lived perceptual exocentric representation which drives perception and delayed action.

20.
Previous research has shown that tactile-spatial information originating from the front of the body is remapped from an anatomical to an external spatial coordinate system, guided by the availability of visual information early in development. Comparably little is known about regions of space for which visual information is not typically available, such as the space behind the body. This study tests for the first time the electrophysiological correlates of the effects of proprioceptive information on tactile-attentional mechanisms in the space behind the back. Observers were blindfolded and tactually cued to detect infrequent tactile targets on either their left or right hand and to respond to them either vocally or with index finger movements. We measured event-related potentials to tactile probes on the hands in order to explore tactile-spatial attention when the hands were either held close together or far apart behind the observer’s back. Results show systematic effects of arm posture on tactile-spatial attention different from those previously found for front space. While attentional selection is typically more effective for hands placed far apart than close together in front space, we found that selection occurred more rapidly for close than far hands behind the back, during both covert attention and movement preparation tasks. This suggests that proprioceptive space may “wrap” around the body, following the hands as they extend horizontally from the front body midline to the center of the back.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号