Similar articles
20 similar articles found.
1.
When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.
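The linear weighting model named in this abstract can be made concrete with a short sketch. The Python snippet below is purely illustrative (the weight values and the 10 m test distance are assumed placeholders, not fitted parameters from the study): it predicts a distance estimate as a weighted sum of the body-specified distance and a visually specified distance produced by a gain of 0.7×, 1.0×, or 1.4×, mirroring the conditions described above.

```python
# Minimal sketch (not the authors' code) of a basic linear weighting model for
# travelled-distance estimates when visual and body-based cues disagree.
# Weights and the 10 m test distance are illustrative placeholders.

def estimate_distance(d_body, visual_gain, w_body=0.7, w_visual=0.3):
    """Predicted distance estimate from a weighted linear sum of cues.

    d_body      -- distance specified by body-based cues (metres)
    visual_gain -- ratio of visually specified to body-specified distance
    w_body, w_visual -- assumed cue weights (here chosen to sum to 1)
    """
    d_visual = visual_gain * d_body
    return w_body * d_body + w_visual * d_visual

if __name__ == "__main__":
    for gain in (0.7, 1.0, 1.4):
        # Higher body-based weight (e.g. walking) vs. higher visual weight
        # (e.g. passive transport), following the pattern of results above.
        walking = estimate_distance(10.0, gain, w_body=0.7, w_visual=0.3)
        passive = estimate_distance(10.0, gain, w_body=0.3, w_visual=0.7)
        print(f"gain {gain:.1f}x: walking {walking:.1f} m, passive {passive:.1f} m")
```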

2.
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.

3.
The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw-rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual–inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that the data from three participants exhibited an increase in bimodal precision, consistent with the optimal integration model, while the data from the other participants did not conform to maximum-likelihood integration schemes. We suggest that either the sensory cues were not perceived as congruent, that integration might be achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
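For reference, the maximum-likelihood (optimal) integration prediction that this abstract tests can be computed directly from the two unimodal JNDs. The sketch below assumes each JND is proportional to the standard deviation of that cue's noise; the numeric JND values are invented examples, not data from the study.

```python
# Minimal sketch of the optimal-integration prediction: cue weights are
# proportional to reliabilities (1/JND^2), and the predicted bimodal JND is
# smaller than either unimodal JND. Example numbers only.
import math

def mle_prediction(jnd_visual, jnd_inertial):
    """Return (predicted bimodal JND, visual weight) under optimal integration."""
    r_v = 1.0 / jnd_visual ** 2      # visual reliability
    r_i = 1.0 / jnd_inertial ** 2    # inertial reliability
    w_v = r_v / (r_v + r_i)          # visual weight
    jnd_bimodal = math.sqrt(1.0 / (r_v + r_i))
    return jnd_bimodal, w_v

jnd_bi, w_vis = mle_prediction(jnd_visual=8.0, jnd_inertial=6.0)  # deg/s, example
print(f"predicted bimodal JND: {jnd_bi:.2f} deg/s, visual weight: {w_vis:.2f}")
```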

4.
Visual self-motion perception during head turns
Extra-retinal information is critical in the interpretation of visual input during self-motion. Turning our eyes and head to track objects displaces the retinal image but does not affect our ability to navigate because we use extra-retinal information to compensate for these displacements. We showed observers animated displays depicting their forward motion through a scene. They perceived the simulated self-motion accurately while smoothly shifting the gaze by turning the head, but not when the same gaze shift was simulated in the display; this indicates that the visual system also uses extra-retinal information during head turns. Additional experiments compared self-motion judgments during active and passive head turns, passive rotations of the body and rotations of the body with head fixed in space. We found that accurate perception during active head turns is mediated by contributions from three extra-retinal cues: vestibular canal stimulation, neck proprioception and an efference copy of the motor command to turn the head.

5.
Humans are typically able to keep track of brief changes in their head and body orientation, even when visual and auditory cues are temporarily unavailable. Determining the magnitude of one’s displacement from a known location is one form of self-motion updating. Most research on self-motion updating during body rotations has focused on the role of a restricted set of sensory signals (primarily vestibular) available during self-motion. However, humans can and do internally represent spatial aspects of the environment, and little is known about how remembered spatial frameworks may impact angular self-motion updating. Here, we describe an experiment addressing this issue. Participants estimated the magnitude of passive, non-visual body rotations (40°–130°), using non-visual manual pointing. Prior to each rotation, participants were either allowed full vision of the testing environment, or remained blindfolded. Within-subject response precision was dramatically enhanced when the body rotations were preceded by a visual preview of the surrounding environment; constant (signed) and absolute (unsigned) error were much less affected. These results are informative for future perceptual, cognitive, and neuropsychological studies, and demonstrate the powerful role of stored spatial representations for improving the precision of angular self-motion updating.

6.
Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had been previously given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self motion when both visual and physical cues were present was more closely perceptually equivalent to the physical motion experienced rather than the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self motion.

7.
The perception of self-motion is a product of the integration of information from both visual and non-visual cues, to which the vestibular system is a central contributor. It is well documented that vestibular dysfunction leads to impaired movement and balance, dizziness and falls, and yet our knowledge of the neuronal processing of vestibular signals remains relatively sparse. In this study, high-density electroencephalographic recordings were deployed to investigate the neural processes associated with vestibular detection of changes in heading. To this end, a self-motion oddball paradigm was designed. Participants were translated linearly 7.8 cm on a motion platform using a one-second motion profile, at a 45° angle leftward or rightward of straight ahead. These headings were presented with a stimulus probability of 80–20 %. Participants responded when they detected the infrequent direction change via button-press. Event-related potentials (ERPs) were calculated in response to the standard (80 %) and target (20 %) movement directions. Statistical parametric mapping showed that ERPs to standard and target movements differed significantly from 490 to 950 ms post-stimulus. Topographic analysis showed that this difference had a typical P3 topography. Individual participant bootstrap analysis revealed that 93.3 % of participants exhibited a clear P3 component. These results indicate that a perceived change in vestibular heading can readily elicit a P3 response, wholly similar to that evoked by oddball stimuli presented in other sensory modalities. This vestibular-evoked P3 response may provide a readily and robustly detectable objective measure for the evaluation of vestibular integrity in various disease models.

8.
The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem–cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have provided new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.

9.
The integration of neck proprioceptive and vestibular inputs underlies the generation of accurate postural and motor control. Recent studies have shown that central mechanisms underlying the integration of these sensory inputs differ across species. Notably, in the rhesus monkey (Macaca mulatta), an Old World monkey, neurons in the vestibular nuclei are insensitive to passive stimulation of neck proprioceptors. In contrast, in the squirrel monkey, a New World monkey, stimulation produces robust modulation. This has led to the suggestion that there are differences in how sensory information is integrated during self-motion in Old versus New World monkeys. To test this hypothesis, we recorded from neurons in the vestibular nuclei of another species in the Macaca genus [i.e., M. fascicularis (cynomolgus monkey)]. Recordings were made from vestibular-only (VO) and position-vestibular-pause (PVP) neurons. The majority (53%) of neurons in both groups were sensitive to neck proprioceptive and vestibular stimulation during passive body-under-head and whole-body rotation, respectively. Furthermore, responses during passive rotations of the head-on-body were well predicted by the linear summation of vestibular and neck responses (which were typically antagonistic). During active head movement, the responses of VO and PVP neurons were further attenuated (relative to a model based on linear summation) for the duration of the active head movement or gaze shift, respectively. Taken together, our findings show that the brain's strategy for the central processing of sensory information can vary even within a single genus. We suggest that similar divergence may be observed in other areas in which multimodal integration occurs.
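The linear-summation prediction described in this abstract can be written out in a few lines. In the sketch below the sinusoidal modulation amplitudes (20 and −12 spikes/s) are invented example gains, chosen only to show antagonistic vestibular and neck responses partially cancelling when summed; they are not values from the study.

```python
# Minimal sketch (illustrative only): the response during passive head-on-body
# rotation is approximated by summing the response to whole-body rotation
# (vestibular) and the response to body-under-head rotation (neck), which are
# typically antagonistic.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                 # one cycle of sinusoidal rotation
vestibular = 20.0 * np.sin(2 * np.pi * t)      # example vestibular modulation (spikes/s)
neck = -12.0 * np.sin(2 * np.pi * t)           # example antagonistic neck response

predicted_head_on_body = vestibular + neck     # linear-summation prediction
print(f"peak predicted modulation: {predicted_head_on_body.max():.1f} spikes/s")
```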

10.
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated a self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.

11.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3° constant errors, 2.9° variable errors), but not rod-pointing judgments (3.5° constant errors, 5.9° variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected. Concordant visual-vestibular stimulation did not facilitate either response. Stimulation of the vestibular system in the absence of vision produced imprecise rod-pointing responses, while variable and constant pointing errors in the active walking condition were comparable to those obtained in the visual condition. During active self-motion, subjects made large head-pointing undershoots when visual information was not available. These results suggest that while vision provides sufficient information to identify the heading direction, it cannot, in isolation, be used to guide the motor response required to point toward or move in the direction of self-motion.

12.
One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.

13.
The hippocampus has long been considered critical for spatial learning and navigation. Recent theoretical models of the rodent and primate hippocampus consider spatial processing a special case of a more general memory function. These non-spatial theories of hippocampus differ from navigational theories with respect to the role of self-motion representations. The present study presents evidence for a new cell type in the CA1 area of the rat hippocampus that codes for directional heading independent of location information (i.e. the angular component of self-motion). These hippocampal head direction cells are controlled by external and idiothetic cues in a similar way as head direction cells in other brain areas and hippocampal place cells. Convergent head direction information and location information may be an essential component of a neural system that monitors behavioral sequences during navigation. Conflicts between internally generated and external cues have previously been shown to result in new hippocampal place representations, suggesting that head direction information may participate in synaptic interactions when new location codes are formed. Combined hippocampal representations of self-motion and external cues may therefore contribute to path integration as well as spatial memory processing.

14.
Ownership for body parts depends on multisensory integration of visual, tactile and proprioceptive signals. In a previous study, we demonstrated that vestibular signals also contribute to ownership for body parts, since vestibular stimulation increased illusory ownership for a rubber hand. However, it remained an open question whether the vestibular information acts on the visual or on the tactile input. Here, we used a non-visual variant of the rubber hand illusion, manipulating the synchrony between tactile signals from the participant's left and right hand. The results revealed a strong illusory ownership through self-reports (questionnaires) and proprioceptive drift measures. Interestingly, however, there was no influence of vestibular stimulation on illusory ownership and the proprioceptive drift. The present data suggest that vestibular signals do not interfere with the tactile-proprioceptive mechanisms underlying ownership for body parts when visual feedback from the body surface is absent.

15.
A subset of neurons in the rat limbic system encodes head direction (HD) by selectively discharging when the rat points its head in a preferred direction in the horizontal plane. The preferred firing direction is sensitive to the location of landmark cues, as well as idiothetic or self-motion cues (i.e., vestibular, motor efference copy, proprioception, and optic flow). Previous studies have shown that the preferred firing direction remains relatively stable (average shift ±18 degrees) after the rat walks from a familiar environment into a novel one, suggesting that without familiar landmarks, the preferred firing direction can be maintained using idiothetic cues, a process called directional path integration. This study repeated this experiment and manipulated the idiothetic cues available to the rat as it moved between the familiar and novel environment. Motor efference copy/proprioceptive cues were disrupted by passively transporting the animal between the familiar and novel environment. Darkening the room as the animal moved to the novel environment eliminated optic flow cues. HD cell preferred firing directions shifted in the novel environment by an average of 30 degrees after locomotion from the familiar environment with the room lights off; by an average of 70 degrees after passive transport from the familiar environment with the room lights on; and by an average of 67 degrees after passive transport with the room lights off. These findings are consistent with the view that motor efference copy/proprioception cues are important for maintaining the preferred firing direction of HD cells under conditions requiring path integration.

16.
Recent studies report efficient vestibular control of goal-directed arm movements during body motion. This contribution tested whether this control relies (a) on an updating process in which vestibular signals are used to update the perceived egocentric position of surrounding objects when body orientation changes, or (b) on a sensorimotor process, i.e. a transfer function between vestibular input and the arm motor output that preserves hand trajectory in space despite body rotation. Both processes were separately and specifically adapted. We then compared the respective influences of the adapted processes on the vestibular control of arm-reaching movements. The rationale was that if a given process underlies a given behavior, any adaptive modification of this process should give rise to observable modification of the behavior. The updating adaptation adapted the matching between vestibular input and perceived body displacement in the surrounding world. The sensorimotor adaptation adapted the matching between vestibular input and the arm motor output necessary to keep the hand fixed in space during body rotation. Only the sensorimotor adaptation significantly altered the vestibular control of arm-reaching movements. Our results therefore suggest that during passive self-motion, the vestibular control of arm-reaching movements essentially derives from a sensorimotor process by which arm motor output is modified on-line to preserve hand trajectory in space despite body displacement. In contrast, the updating process maintaining an up-to-date egocentric representation of visual space seems to contribute little to generating the required arm compensation during body rotations.

17.
In two experiments we investigated whether bistable visual perception is influenced by passive own-body displacements due to vestibular stimulation. For this we passively rotated our participants around the vertical (yaw) axis while they observed different rotating bistable stimuli (bodily or non-bodily) with different ambiguous motion directions. Based on previous work on multimodal effects on bistable perception, we hypothesized that vestibular stimulation should alter bistable perception and that the effects should differ for bodily versus non-bodily stimuli. In the first experiment, it was found that the rotation bias (i.e., the difference between the percentage of time that a CW or CCW rotation was perceived) was selectively modulated by vestibular stimulation: the perceived duration of the bodily stimuli was longer for the rotation direction congruent with the subject's own body rotation, whereas the opposite was true for the non-bodily stimulus (Necker cube). The results found in the second experiment extend the findings from the first experiment and show that these vestibular effects on bistable perception only occur when the axis of rotation of the bodily stimulus matches the axis of passive own-body rotation. These findings indicate that the effect of vestibular stimulation on the rotation bias depends on the stimulus that is presented and the rotation axis of the stimulus. Although most studies on vestibular processing have traditionally focused on multisensory signal integration for posture, balance, and heading direction, the present data show that vestibular self-motion influences the perception of bistable bodily stimuli, revealing the importance of vestibular mechanisms for visual consciousness.

18.
Primates are able to localize a briefly flashed target despite intervening movements of the eyes, head, or body. This ability, often referred to as updating, requires extraretinal signals related to the intervening movement. With active roll rotations of the head from an upright position it has been shown that the updating mechanism is 3-dimensional, robust, and geometrically sophisticated. Here we examine whether such a rotational updating mechanism operates during passive motion both with and without inertial cues about head/body position in space. Subjects were rotated from either an upright or supine position, about a nasal-occipital axis, briefly shown a world-fixed target, rotated back to their original position, and then asked to saccade to the remembered target location. Using this paradigm, we tested subjects' abilities to update from various tilt angles (0°, ±30°, ±45°, ±90°), to 8 target directions and 2 target eccentricities. In the upright condition, subjects accurately updated the remembered locations from all tilt angles independent of target direction or eccentricity. Slopes of directional errors versus tilt angle ranged from -0.011 to 0.15, and were significantly different from a slope of 1 (no compensation for head-in-space roll) and a slope of 0.9 (no compensation for eye-in-space roll). Because the eyes, head, and body were fixed throughout these passive movements, subjects could not use efference copies or neck proprioceptive cues to assess the amount of tilt, suggesting that vestibular signals and/or body proprioceptive cues suffice for updating. In the supine condition, where gravitational signals could not contribute, slopes ranged from 0.60 to 0.82, indicating poor updating performance. Thus information specifying the body's orientation relative to gravity is critical for maintaining spatial constancy and for distinguishing body-fixed versus world-fixed reference frames.

19.
In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.

20.
Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded the activity of neurons in the dorsal medial superior temporal (MSTd) area during a task in which trained monkeys combined visual and vestibular cues near-optimally to discriminate heading. During bimodal stimulation, MSTd neurons combined visual and vestibular inputs linearly with subadditive weights. Neurons with congruent heading preferences for visual and vestibular stimuli showed improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences showed diminished sensitivity under cue combination. Responses of congruent cells were more strongly correlated with monkeys' perceptual decisions than were responses of opposite cells, suggesting that the monkey monitored the activity of congruent cells to a greater extent during cue integration. These findings show that perceptual cue integration occurs in nonhuman primates and identify a population of neurons that may form its neural basis.
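One way to picture the subadditive linear combination reported in this abstract is to regress a neuron's bimodal responses on its two unimodal tuning curves; fitted weights that sum to less than 1 indicate subadditivity. The sketch below uses synthetic tuning curves and NumPy's least-squares solver; it is illustrative only and none of the numbers come from the study.

```python
# Minimal sketch (not the authors' analysis): fit bimodal responses as a linear
# combination of unimodal visual and vestibular responses; a weight sum below 1
# would indicate subadditive combination.
import numpy as np

rng = np.random.default_rng(0)
headings = np.linspace(-90, 90, 9)                      # example heading angles (deg)
r_visual = 30 * np.exp(-(headings - 10) ** 2 / 800)     # synthetic visual tuning curve
r_vestibular = 25 * np.exp(-(headings + 5) ** 2 / 900)  # synthetic vestibular tuning curve
r_bimodal = 0.6 * r_visual + 0.3 * r_vestibular + rng.normal(0, 1, headings.size)

# Least-squares fit: r_bimodal ~= w_vis * r_visual + w_ves * r_vestibular
X = np.column_stack([r_visual, r_vestibular])
(w_vis, w_ves), *_ = np.linalg.lstsq(X, r_bimodal, rcond=None)
print(f"w_vis = {w_vis:.2f}, w_ves = {w_ves:.2f}, sum = {w_vis + w_ves:.2f}")
```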

