Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
Motor imagery tasks (hand laterality judgment) are usually performed with respect to a self-body (egocentric) representation, but manipulations of stimulus features (hand orientation) can induce a shift to an other's-body (allocentric) reference frame. Visual perspective taking tasks are also performed from a self-body perspective, but a shift to an allocentric frame can be triggered by manipulations of context features (e.g., another person present in the to-be-judged scene). Combining the hand laterality task with visual perspective taking, we demonstrated that both stimulus and context features can modulate motor imagery performance. In Experiment 1, participants judged the laterality of a hand embedded in a human or non-human silhouette. Results showed that observing a human silhouette interfered with judgments on “egocentric hand stimuli” (right hand, fingers up). In Experiment 2, participants were explicitly required to judge the laterality of a hand embedded in a human silhouette from their own perspective (egocentric group) or from the silhouette's perspective (allocentric group). Consistent with previous results, the egocentric group was significantly faster than the allocentric group in judging fingers-up right hand stimuli. These findings showed that concurrent activation of egocentric and allocentric frames during mental transformation of body parts impairs participants’ performance due to a conflict between motor and visual mechanisms.

2.
Previous studies on task sharing propose that a representation of the co-actor's task share is generated when two actors share a common task. An important function of co-representation seems to lie in the anticipation of others' upcoming actions, which is essential for one's own action planning, as it enables the rapid selection of an appropriate response. We utilized measures of lateralized motor activation, the lateralized readiness potential (LRP), in a task sharing paradigm to address the questions of (1) whether the generation of a co-representation involves motor activity in the non-acting person when it is the other agent's turn to respond, and (2) whether co-representation of the other's task share is generated from one's own egocentric perspective or from the perspective of the actor (allocentric). Results showed that although it was the other agent's turn to respond, the motor system of the non-acting person was activated prior to the other's response. Furthermore, motor activity was based on egocentric spatial properties. The findings support the tight functional coupling between one's own actions and actions produced by others, suggesting that the involvement of the motor system is crucial for social interaction.

3.
Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.
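For readers unfamiliar with the measure, the "performance rotation curve" in the abstract above is response time plotted against stimulus rotation angle, and "flattening" means a shallower slope of that curve. The sketch below uses entirely synthetic data and illustrative variable names (not taken from the study) to show one conventional way such a slope comparison could be computed:

```python
# Hypothetical sketch: quantify "flattening" of the performance rotation curve
# by comparing the slope of reaction time (RT) over stimulus rotation angle
# between an individual-attention and a joint-attention condition.
# All values below are synthetic placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
angles = np.tile([0, 60, 120, 180], 50)          # rotation angles in degrees

# Synthetic RTs: steeper angle effect (egocentric mental rotation) when attending
# alone, shallower effect (allocentric frame) when attending jointly.
rt_individual = 600 + 1.5 * angles + rng.normal(0, 40, angles.size)
rt_joint      = 650 + 0.6 * angles + rng.normal(0, 40, angles.size)

fit_individual = stats.linregress(angles, rt_individual)
fit_joint      = stats.linregress(angles, rt_joint)

print(f"individual slope: {fit_individual.slope:.2f} ms/deg")
print(f"joint slope:      {fit_joint.slope:.2f} ms/deg")
# A reliably smaller slope in the joint condition corresponds to the
# "flattening" of the rotation curve reported in experiment 1.
```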

4.
Determining the handedness of visually presented stimuli is thought to involve two separate stages: a rapid, implicit recognition of laterality followed by a confirmatory mental rotation of the matching hand. In two studies, we explore the role of the dominant and non-dominant hands in this process. In Experiment 1, participants judged stimulus laterality with either their left or right hand held behind their back or with both hands resting in the lap. The variation in reaction times across these conditions reveals that both hands play a role in hand laterality judgments, with the hand that is not involved in the mental rotation stage causing some interference, slowing down mental rotations and making them more accurate. While this interference occurs for both lateralities in right-handed people, it occurs for the dominant hand only in left-handers. This is likely due to left-handers' greater reliance on the initial, visual recognition stage than on the later, mental rotation stage, particularly when judging hands from the non-dominant laterality. Participants' own judgments of whether the stimuli were 'self' or 'other' hands in Experiment 2 suggest a difference in strategy for hands seen from an egocentric and an allocentric perspective, with a combined visuo-sensorimotor strategy for the former and a visual-only strategy for the latter. This result is discussed with reference to recent brain imaging research showing that the extrastriate body area distinguishes between bodies and body parts in egocentric and allocentric perspective.

5.
Convergent findings demonstrate that numbers can be represented according to a spatially oriented mental number line. However, it is not established whether a default organization of the mental number line exists (i.e., a left-to-right orientation) or whether its spatial arrangement is only the epiphenomenon of specific task requirements. To address this issue we performed two experiments in which subjects were required to judge laterality of hand stimuli preceded by small, medium or large numerical cues; hand stimuli were compatible with egocentric or allocentric perspectives. We found evidence of a left-to-right number–hand association in processing stimuli compatible with an egocentric perspective, whereas the reverse mapping was found with hands compatible with an allocentric perspective. These findings demonstrate that the basic left-to-right arrangement of the mental number line is defined with respect to the body-centred egocentric reference frame.
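The number–hand association described above is typically quantified with a SNARC-style regression: the right-minus-left response-time difference is regressed onto numerical magnitude, and the sign of the slope indicates the direction of the mapping. A minimal sketch under that assumption, using synthetic values rather than data from the study:

```python
# Hypothetical sketch of a SNARC-style regression: for each numerical cue,
# compute dRT = RT(right-hand judgment) - RT(left-hand judgment) and regress
# it on number magnitude. A negative slope implies small numbers favour the
# left side and large numbers the right (left-to-right mapping); a positive
# slope implies the reversed mapping. Synthetic data, illustrative only.
import numpy as np
from scipy import stats

numbers  = np.array([1, 2, 3, 5, 8, 9])
rt_left  = np.array([520, 525, 530, 545, 560, 565])   # example mean RTs (ms)
rt_right = np.array([560, 555, 548, 535, 522, 518])
drt = rt_right - rt_left

fit = stats.linregress(numbers, drt)
print(f"SNARC slope: {fit.slope:.1f} ms per unit of magnitude")
# slope < 0 -> left-to-right number-hand association (egocentric-compatible pattern)
# slope > 0 -> reversed mapping (as reported for allocentric-compatible hands)
```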

6.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normal subjects, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, the results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

7.
Assessing the mental state of others by considering their perspective plays an important part in social communication. Imitation based on visual information represents a typical case of the translation of sensory input into action. Although humans are often successful in imitating complex actions, the mechanisms that underlie successful imitation are poorly understood. Earlier findings have suggested that understanding others’ minds through imitation is realized through a comparison between the representations of the self and others, involving a transformation from the egocentric perspective to the allocentric one. There are two possible strategies of transformation between the representations of the self and others. One possible scenario is that the imitator perceives and imitates others as if looking in a mirror (mirror-image imitation, where, for example, the demonstrator’s right hand corresponds to the imitator’s left hand). Alternatively, the imitator might estimate the demonstrator’s action using the anatomically congruent limb (anatomic imitation, where, for example, the demonstrator’s right hand corresponds to the imitator’s right hand). Here, we conducted a series of experiments in which the subjects imitated simple hand actions, such as pushing a button, presented from several different spatial orientations rotated at various angles. We observed that the imitators changed their strategy of imitation (mirror-image or anatomic imitation) depending on the nature of the spatial configuration. Behavioral data from this study support the hypothesis that mirror-image and anatomic imitation provide complementary systems for understanding the actions and intentions of others.

8.
Total sleep deprivation (TSD) is known to alter cognitive processes. Surprisingly little attention has been paid to its impact on social cognition. Here, we investigated whether TSD alters levels‐1 and ‐2 visual perspective‐taking abilities, i.e. the capacity to infer (a) what can be seen and (b) how it is seen from another person's visual perspective, respectively. Participants completed levels‐1 and ‐2 visual perspective‐taking tasks after a night of sleep and after a night of TSD. In these tasks, participants had to take their own (self trials) or someone else's (other trials) visual perspective in trials where both perspectives were either the same (consistent trials) or different (inconsistent trials). An instruction preceding each trial indicated the perspective to take (i.e. the relevant perspective). Results show that TSD globally degrades social performance. In the level‐1 task, TSD affects the selection of relevant over irrelevant perspectives. In the level‐2 task, the effect of TSD cannot be unequivocally explained. This implies that visual perspective taking should be viewed as partially state‐dependent, rather than a wholly static trait‐like characteristic.

9.
The investigation of brain areas involved in the human execution/observation matching system (EOM) has been limited to restricted motor actions when using common neuroimaging techniques such as functional magnetic resonance imaging (fMRI). A method that overcomes this limitation is functional near-infrared spectroscopy (fNIRS). In the present study, we explored the cerebral responses underlying action execution and observation during a complex everyday task. We measured brain activation in 39 participants during the performance of object-related reaching, grasping and displacing movements, namely setting and clearing a table, and during observation of the same task from different perspectives. Observation of the table-setting task activated parts of a network matching those activated during execution of the task. Specifically, observation from an egocentric perspective led to higher activation in the inferior parietal cortex than observation from an allocentric perspective, indicating that the viewpoint also influences the EOM during the observation of complex everyday tasks. Together these findings suggest that fNIRS is able to overcome the restrictions of common imaging methods by investigating the EOM with a naturalistic task.

10.
In advance of grasping a visual object embedded within fins-in and fins-out Müller-Lyer (ML) configurations, participants formulated a premovement grip aperture (GA) based on the size of a neutral preview object. Preview objects were smaller, veridical, or larger than the size of the to-be-grasped target object. As a result, premovement GA associated with the small and large preview objects required significant online reorganization to appropriately grasp the target object. We reasoned that such a manipulation would provide an opportunity to examine the extent to which the visuomotor system engages egocentric and/or allocentric visual cues for the online, feedback-based control of action. It was found that the online reorganization of GA was reliably influenced by the ML figures (i.e., from 20 to 80% of movement time), regardless of the size of the preview object, albeit the small and large preview objects elicited more robust illusory effects than the veridical preview object. These results counter the view that online grasping control is mediated by absolute visual information computed with respect to the observer (e.g., Glover in Behav Brain Sci 27:3-78, 2004; Milner and Goodale in The visual brain in action 1995). Instead, the impact of the ML figures suggests a level of interaction between egocentric and allocentric visual cues in online action control.
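Analyses of online grip-aperture reorganization of the kind reported above generally require time-normalizing each reach so that aperture can be compared at fixed percentages of movement time (the 20–80% window mentioned in the abstract). Below is a minimal, hypothetical sketch of such normalization, not the authors' actual pipeline; the recording rate and aperture trace are invented for illustration:

```python
# Hypothetical sketch: time-normalize a grip-aperture (GA) trace and sample it
# at 10% increments of movement time, so that illusory (Muller-Lyer) effects
# can be compared across the 20-80% window regardless of movement duration.
# The aperture trace below is synthetic and purely illustrative.
import numpy as np

def normalize_ga(ga_samples: np.ndarray, n_points: int = 11) -> np.ndarray:
    """Resample a GA trace onto 0%, 10%, ..., 100% of movement time."""
    original_t = np.linspace(0.0, 1.0, ga_samples.size)
    normalized_t = np.linspace(0.0, 1.0, n_points)
    return np.interp(normalized_t, original_t, ga_samples)

# e.g. one trial recorded at 200 Hz for a 600 ms reach (120 samples)
trial = 40 + 60 * np.sin(np.linspace(0, np.pi, 120))   # aperture in mm, synthetic
ga_normalized = normalize_ga(trial)
window_20_80 = ga_normalized[2:9]                       # 20% .. 80% of movement time
print(np.round(window_20_80, 1))
```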

11.
A great effort has been made to identify crucial cognitive markers that can be used to characterize the cognitive profile of Alzheimer's disease (AD). Because topographical disorientation is one of the earliest clinical manifestations of AD, an increasing number of studies have investigated the spatial deficits in this clinical population. In this systematic review, we specifically focused on experimental studies investigating allocentric and egocentric deficits to understand which spatial cognitive processes are differentially impaired in the different stages of the disease. First, our results highlighted that spatial deficits appear in the earliest stages of the disease. Second, a need for a more ecological assessment of spatial functions will be presented. Third, our analysis suggested that a prevalence of allocentric impairment exists. Specifically, two selected studies underlined that a more specific impairment is found in the translation between the egocentric and allocentric representations. In this perspective, the implications for future research and neurorehabilitative interventions will be discussed.

12.
Being able to imagine another person’s experience and perspective of the world is a crucial human ability and recent reports suggest that humans “embody” another’s viewpoint by mentally rotating their own body representation into the other’s orientation. Our recent Magnetoencephalography (MEG) data further confirmed this notion of embodied perspective transformations and pinpointed the right posterior temporo-parietal junction (pTPJ) as the crucial hub in a distributed network oscillating at theta frequency (3–7 Hz). In a subsequent transcranial magnetic stimulation (TMS) experiment we interfered with right pTPJ processing and observed a modulation of the embodied aspects of perspective transformations. While these results corroborated the role of right pTPJ, the notion of theta oscillations being the crucial neural code remained a correlational observation based on our MEG data. In the current study we therefore set out to confirm the importance of theta oscillations directly by means of TMS entrainment. We compared entrainment of right pTPJ at 6 Hz vs. 10 Hz and confirmed that only 6 Hz entrainment facilitated embodied perspective transformations (at 160° angular disparity) while 10 Hz slowed it down. The reverse was true at low angular disparity (60° between egocentric and target perspective) where a perspective transformation was not strictly necessary. Our results further corroborate right pTPJ involvement in embodied perspective transformations and highlight theta oscillations as a crucial neural code.

13.
The spatial location of an object can be represented in the brain with respect to different classes of reference frames, either relative to or independent of the subject's position. We used functional magnetic resonance imaging to identify regions of the healthy human brain subserving mainly egocentric or allocentric (object-based) coordinates by asking subjects to judge the location of a visual stimulus with respect to either their body or an object. A color-judgement task, matched for stimuli, difficulty, motor and oculomotor responses, was used as a control. We identified a bilateral, though mainly right-hemisphere based, fronto-parietal network involved in egocentric processing. A subset of these regions, including a much less extensive unilateral, right fronto-parietal network, was found to be active during object-based processing. The right-hemisphere lateralization and the partial superposition of the egocentric and the object-based networks are discussed in the light of neuropsychological findings in brain-damaged patients with unilateral spatial neglect and of neurophysiological studies in the monkey.

14.
The visual and vestibular systems begin functioning early in life. However, it is unclear whether young infants perceive the dynamic world based on the retinal coordinate (egocentric reference frame) or the environmental coordinate (allocentric reference frame) when they encounter incongruence between frames of reference due to changes in body position. In this study, we performed the habituation–dishabituation procedure to assess novelty detection in a visual display, and a change in body position was included between the habituation and dishabituation phases in order to test whether infants dishabituate to the change in stimulus on the retinal or environmental coordinate. Twenty infants aged 3–4 months were placed in the right-side-down position (RSDp) and habituated to an animated human-like character that walked horizontally in the environmental frame of reference. Subsequently, their body position was changed in the roll plane. Ten infants were repositioned to the upright position (UPp) and the rest, to the RSDp after rotation. In the test phase, the displays that were spatially identical to those shown in the habituation phase and 90° rotated displays were alternately presented, and visual preference was examined. The results revealed that infants looked longer at changes in the display on the retinal coordinate than at changes in the display on the environmental coordinate. This suggests that changes in body position from lying to upright produced incongruence of the egocentric and allocentric reference frames for perception of dynamic visual displays and that infants may rely more on the egocentric reference frame.

15.
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors, and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations, such as egocentric and allocentric frames of reference, for immediate and delayed pointing, respectively. Therefore, the CB children, like the BS children, are able to use both ego- and allocentric frames of reference.

16.
When programming movement, one must account for gravitational acceleration. This is particularly important when catching a falling object because the task requires a precise estimate of time-to-contact. Knowledge of gravity’s effects is intimately linked to our definition of ‘up’ and ‘down’. Both directions can be described in an allocentric reference frame, based on visual and/or gravitational cues, or in an egocentric reference frame in which the body axis is taken as vertical. To test which frame humans use to predict gravity’s effect, we asked participants to intercept virtual balls approaching from above or below with artificially controlled acceleration that could be congruent or not with gravity. To dissociate between these frames, subjects were seated upright (trunk parallel to gravity) or lying down (body axis orthogonal to the gravitational axis). We report data in line with the use of an allocentric reference frame and discuss its relevance depending on available gravity-related cues.

17.
Introduction. People often show a bias toward attributing their own actions to more positive causes (e.g., generosity) than other persons’ actions. Models of paranoia suggest links between paranoia and negative construals of others’ intentions. Research on these biases has focused on causal attributions from two explainer perspectives, the agent (the person performing the action) and the object (the person being acted on), and has omitted the observer (third-person) perspective.

Methods. This study investigated intention attributions from three perspectives (agent, object, observer). Students (n=149) took one of these perspectives and judged the intentionality, frequency, and positivity of 30 behaviours before completing the Paranoia Scale.

Results. Participants in the agent and object perspectives rated positive behaviours as more frequent and intentional than did those in the observer perspective. Participants higher in paranoia distinguished less between positive and negative behaviours, and, in the object perspective, paranoia correlated with lower perceived intentionality of positive behaviours.

Conclusions. The use of three explainer perspectives and intention attributions clarifies how attributions for actions relate to paranoid beliefs. Results suggest that people higher in paranoia make more negative judgements about other people's positive and negative intentions, especially when they are the object of the action.

18.
The location of an object in peripersonal space can be represented with respect to our body (i.e., egocentric frame of reference) or relative to contextual features and other objects (i.e., allocentric frame of reference). In the current study, we sought to determine whether the frame, or frames, of visual reference supporting motor output is influenced by reach trajectories structured to maximize visual feedback utilization (i.e., controlled online) or structured largely in advance of movement onset via central planning mechanisms (i.e., controlled offline). Reaches were directed to a target embedded in a pictorial illusion (the induced Roelofs effect: IRE) and advanced knowledge of visual feedback was manipulated to influence the nature of reaching control as reported by Zelaznik et al. (J Mot Behav 15:217–236, 1983). When vision could not be predicted in advance of movement onset, trajectories showed primary evidence of an offline mode of control (even when vision was provided) and endpoints demonstrated amplified sensitivity to the illusory (i.e., allocentric) features of the IRE. In contrast, reaches performed with reliable visual feedback evidenced a primarily online mode of control and showed increased visuomotor resistance to the IRE. These findings suggest that the manner a reaching response is structured differentially influences the weighting of allocentric and egocentric visual information. More specifically, when visual feedback is unavailable or unpredictable, the weighting of allocentric visual information for the advanced planning of a reach trajectory is increased.

19.
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.

20.
Analogously to the visual system, somatosensory processing may be segregated into two streams, with the body constituting either part of the action system or a perceptual object. Experimental studies with participants free from neurological disease that test this hypothesis are rare, however. The present study explored the contributions of the two putative streams to a task that requires participants to estimate the spatial properties of their own body. Two manipulations from the visuospatial literature were included. First, participants were required to point either backward towards pre-defined landmarks on their own body (egocentric reference frame) or to a forward projection of their own body (allocentric representation). Second, a manipulation of movement mode was included, requiring participants to perform pointing movements either immediately, or after a fixed delay, following instruction. Results show that accessing an allocentric representation of one’s own body results in performance changes. Specifically, the spatial bias shown to exist for body space when pointing backward at one’s own body disappears when participants are requested to mentally project their body to a pre-defined location in front space. Conversely, delayed execution of pointing movements does not result in performance changes. Altogether, these findings provide support for a constrained dual stream hypothesis of somatosensory processing and are the first to show similarities in the processing of body space and peripersonal space.
