Similar Documents
20 similar documents retrieved (search time: 349 ms).
1.
In contrast to vision, the neuro-anatomical substrates of vestibular perception are obscure. The vestibular apparatus provides a head angular velocity signal that allows perception of self-motion velocity. Perceived change of angular position in space can also be obtained from the vestibular head velocity signal via a process called Path Integration (so called because displacement is obtained by a mathematical temporal integration of the vestibular velocity signal). It is unknown, however, whether distinct cortical loci subserve vestibular perception of velocity versus displacement (i.e. Path Integration). Previous studies of human brain activity have not used head motion stimuli, precluding localisation of vestibular cortical areas specialised for Path Integration as distinct from velocity perception. We inferred vestibular cortical function by measuring the disrupting effect of repetitive transcranial magnetic stimulation on the performance of a displacement-dependent vestibular navigation task. Our data suggest that posterior parietal cortex is involved in encoding contralaterally directed vestibular-derived signals of perceived angular displacement; a similar effect was found for both hemispheres. We separately tested whether right posterior parietal cortex was involved in vestibular-sensed velocity perception but found no association. Overall, our data demonstrate that posterior parietal cortex is involved in human Path Integration but not velocity perception. We suggest that separate brain areas process vestibular signals of head velocity versus those involved in Path Integration.
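The temporal integration described here is straightforward to state concretely. Below is a minimal sketch of the computation, assuming a discretely sampled yaw-velocity signal; the function name and all parameters are illustrative, not taken from the study:

```python
import numpy as np

def path_integrate(omega, dt):
    """Estimate angular displacement (deg) by temporally integrating a
    sampled head angular velocity signal omega (deg/s) with step dt (s)."""
    # Cumulative trapezoidal integration: theta(t) = integral of omega dt
    increments = (omega[1:] + omega[:-1]) / 2.0 * dt
    return np.concatenate(([0.0], np.cumsum(increments)))

# Example: a 2-s, 30 deg/s constant-velocity yaw rotation sampled at 100 Hz
dt = 0.01
omega = np.full(200, 30.0)           # deg/s
theta = path_integrate(omega, dt)    # perceived angular displacement (deg)
print(round(theta[-1], 1))           # ~59.7 deg
```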

2.
Many studies provide evidence that information from different modalities is integrated following the maximum likelihood estimation (MLE) model. For instance, we recently found that visual and proprioceptive path trajectories are optimally combined (Reuschel et al. in Exp Brain Res 201:853–862, 2010). However, other studies have failed to reveal optimal integration of such dynamic information. In the present study, we aimed to generalize our previous findings to different parts of the workspace (central, ipsilateral, or contralateral) and to different types of judgments (relative vs. absolute). Participants made relative judgments by judging whether an angular path was acute or obtuse, or absolute judgments by judging whether a one-segment straight path was directed to the left or right. Trajectories were presented in the visual, proprioceptive, or combined visual–proprioceptive modality. We measured the bias and the variance of these estimates and predicted both parameters using the MLE model. In accordance with the model, participants linearly combined the unimodal angular path information, weighted by its reliability, irrespective of the side of workspace. However, the precision of bimodal estimates was not greater than that of unimodal estimates, which is inconsistent with the MLE model. For the absolute judgment task, participants' estimates were highly accurate and did not differ across modalities, so we were unable to test whether the bimodal percept resulted from a weighted average of the visual and proprioceptive input; here too, participants were no more precise in the bimodal than in the unimodal conditions, again inconsistent with the MLE model. The current findings suggest that optimal integration of visual and proprioceptive information about path trajectory applies only in some conditions.
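For reference, the MLE model invoked here combines unimodal estimates as an inverse-variance weighted average and predicts a bimodal variance below either unimodal variance; the latter is the prediction this abstract reports as violated. A minimal sketch with illustrative numbers:

```python
def mle_combine(est_v, var_v, est_p, var_p):
    """Inverse-variance (MLE) weighted average of a visual and a
    proprioceptive estimate, plus the predicted bimodal variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)    # reliability-based weight
    est_vp = w_v * est_v + (1 - w_v) * est_p       # weighted linear combination
    var_vp = (var_v * var_p) / (var_v + var_p)     # predicted reduced variance
    return est_vp, var_vp

# Example: a reliable visual estimate and a noisier proprioceptive one
est, var = mle_combine(est_v=95.0, var_v=4.0, est_p=105.0, var_p=16.0)
print(est, var)   # 97.0, 3.2 -> bimodal variance below both unimodal variances
```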

3.
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and it integrates flow-field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important for solidifying perception. However, to understand the information-processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion that area MST is involved in the computation of heading.

4.
The extent to which attending to one stimulus while ignoring another influences the integration of visual and inertial (vestibular, somatosensory, proprioceptive) stimuli is currently unknown. It is also unclear how cue integration is affected by awareness of cue conflicts. We investigated these questions using a turn-reproduction paradigm in which participants, seated on a motion platform equipped with a projection screen, were asked to actively return a combined visual and inertial whole-body rotation around an earth-vertical axis. By introducing cue conflicts during the active return and asking participants whether they had noticed a conflict, we measured the influence of each cue on the response. We found that the task instruction had a significant effect on cue weighting, with a higher weight assigned to the attended modality, but only when participants noticed the cue conflict. This suggests that participants used task-induced attention to reduce the influence of stimuli that conflicted with the task instructions.

5.
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented on either side of a central fixation (±45°), and participants were asked to identify which target had occurred first. In some trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. Cue and target stimuli were presented at exactly the same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For visual targets, the intramodal cue elicited the largest bias and the crossmodal cue the smallest; the bias elicited by the bimodal cue fell between the two, with significant differences between all cue types. The pattern for auditory targets was similar apart from a scaling factor and greater variance, so the differences between cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.

6.
7.
Normal visual input plays a dominant role during locomotion. Functionally, it can assist the central nervous system in overcoming the destabilizing effect of abnormal or perturbed vestibular information. However, a recent study has shown a directional effect of transmastoidal galvanic vestibular stimulation (GVS) on gait trajectory when visual information is unreliable. The purpose of this study was to investigate how inputs from the visual and vestibular systems are weighted to optimize locomotor performance under impoverished visual conditions during goal-directed locomotion. For unimodal stimulation, visual input was manipulated using displacing prisms that shifted the perceived target location 14° horizontally to the right or left. In addition, GVS (0.8 mA) was applied to manipulate vestibular information during bimodal stimulation conditions. Two bimodal stimulation conditions were defined by the polarity of the galvanic current (anode on the side congruent or incongruent with the prismatic deviation). Center of mass (CoM) displacement, head and trunk yaw angles, and trunk roll angles were computed to analyze the global output as well as segmental coordination as participants walked towards the target. Although performance was primarily guided by visual information, both congruent and incongruent GVS significantly altered CoM displacement. Similarly, the basic pattern of segmental responses during steering was maintained, but the magnitude of the responses was altered. Spatio-temporal analysis demonstrated that during bimodal stimulation, the effect of GVS on global output tapered off as participants approached the target. The results suggest a dynamic visual–vestibular interaction in which the gain of the vestibular input is initially upregulated in the presence of insufficient or impoverished visual information; there is then a gradual habituation, and the visual information, although insufficient, primarily dominates during goal-directed locomotion. The experimental trajectories resembled mathematically simulated trajectories with a decaying GVS gain rather than a constant gain, further supporting the dynamic nature of sensory integration.
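The contrast between constant and decaying GVS gain can be illustrated with a toy simulation; the exponential gain function and all parameters below are assumptions for illustration, not the authors' model:

```python
import numpy as np

# Toy comparison of a constant vs. an exponentially decaying GVS gain
# during a 10-s goal-directed walk. All parameters are illustrative.
dt, steps = 0.01, 1000
t = np.arange(steps) * dt
gvs_drift = 0.05                      # lateral drift rate induced by GVS (m/s)

def lateral_deviation(gain):
    # Integrate the gain-weighted drift to get the lateral CoM trajectory
    return np.cumsum(gain * gvs_drift * dt)

constant = lateral_deviation(np.ones(steps))
decaying = lateral_deviation(np.exp(-t / 2.0))   # gain tapers off en route

print(constant[-1], decaying[-1])     # ~0.50 m vs. ~0.10 m final deviation
```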

8.
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. In this study, therefore, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled-distance information was provided through visual cues alone, proprioceptive cues alone, or both cues combined. In the combined-cue condition, the relationship between the two cues was manipulated by changing either the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided the more stable information across trials. Specifically, when the visual gain was the one being manipulated, proprioceptive weights were higher than when the proprioceptive gain was manipulated. These results reveal interesting characteristics of cue weighting within the context of unfolding spatio-temporal cue dynamics.

9.
In two experiments, we investigated whether bistable visual perception is influenced by passive own-body displacements due to vestibular stimulation. To this end, we passively rotated participants around the vertical (yaw) axis while they observed different rotating bistable stimuli (bodily or non-bodily) with ambiguous motion directions. Based on previous work on multimodal effects on bistable perception, we hypothesized that vestibular stimulation should alter bistable perception and that the effects should differ for bodily versus non-bodily stimuli. In the first experiment, the rotation bias (i.e., the difference between the percentage of time that a clockwise versus counterclockwise rotation was perceived) was selectively modulated by vestibular stimulation: the perceived duration of the bodily stimuli was longer for the rotation direction congruent with the subject's own body rotation, whereas the opposite was true for the non-bodily stimulus (a Necker cube). The second experiment extended these findings, showing that these vestibular effects on bistable perception occur only when the rotation axis of the bodily stimulus matches the axis of passive own-body rotation. These findings indicate that the effect of vestibular stimulation on the rotation bias depends on the stimulus presented and its rotation axis. Although most studies of vestibular processing have traditionally focused on multisensory signal integration for posture, balance, and heading direction, the present data show that vestibular self-motion influences the perception of bistable bodily stimuli, revealing the importance of vestibular mechanisms for visual consciousness.

10.
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvement in the same task was found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.

11.
This study asked whether individual differences in the influence of vision on postural stability can predict the strength of subsequently induced visual illusions of self-motion (vection). We first measured spontaneous postural sway while subjects stood erect for 60 s, with eyes open and with eyes closed. We then showed subjects two types of self-motion display: radially expanding optic flow (simulating constant-velocity forward self-motion) and vertically oscillating, radially expanding optic flow (simulating constant-velocity forward self-motion combined with vertical head oscillation). As expected, subjects swayed more with their eyes closed than open and experienced more compelling illusions of self-motion with vertically oscillating (as opposed to smooth) radial flow. The extent to which participants relied on vision for postural stability, measured as the ratio of sway with eyes closed to sway with eyes open, was found to predict vection strength. However, this was only the case for displays representing smooth self-motion; for oscillating displays, other factors, such as visual–vestibular interactions, may be more important.

12.
Kaneko F, Yasojima T, Kizuka T. Neuroscience. 2007;149(4):976-984
The present study aimed to clarify whether a kinesthetic illusion arises under our experimental condition (a visual stimulus) and whether corticomotor excitability changes in parallel with the illusion. The visual stimulus was a movie in which someone else's limb was being moved. The computer screen showing the movie was positioned over the appropriate portion of the subject's forearm, so that the performer's hand appeared as if it were the subject's own hand (illusion condition). The experience of kinesthetic illusion under this condition was verified by interview using a visual analog scale. Healthy male subjects participated in this experiment. Transcranial magnetic stimulation was applied to induce motor-evoked potentials (MEPs) in the first dorsal interosseous and abductor digiti minimi muscles. As control conditions, each subject was instructed to watch the same display as in the illusion condition but with his own stationary hand in full view (non-illusion), and to watch a display of non-biological movement (moving text) (sham). The results showed significant facilitation of MEPs in the illusion condition compared with the control conditions when the index finger in the movie was abducting, although not when it was adducting. MEPs in the abductor digiti minimi showed no change during either abduction or adduction of the little finger. The present study demonstrated that a kinesthetic illusion of movement can be created by a video of a moving index finger, and that inputs to the corticomotor pathways during this illusion facilitated corticomotor excitability. The excitatory effect of the illusion depended on the movement direction of the index finger.

13.
Recent findings of vestibular responses in part of the visual cortex, the dorsal medial superior temporal area (MSTd), indicate that vestibular signals might contribute to cortical processes that mediate the perception of self-motion. We tested this hypothesis in monkeys trained to perform a fine heading discrimination task solely on the basis of inertial motion cues. The sensitivity of the neuronal responses was typically lower than psychophysical performance, and only the most sensitive neurons rivaled behavioral performance. Responses recorded in MSTd were significantly correlated with perceptual decisions, and the correlations were strongest for the most sensitive neurons. These results support a functional link between MSTd and heading perception based on inertial motion cues. These cues seem to be mainly of vestibular origin, as labyrinthectomy produced a marked elevation of psychophysical thresholds and abolished MSTd responses. This study provides evidence linking single-unit activity to spatial perception mediated by vestibular signals, and supports the idea that the role of MSTd in self-motion perception extends beyond optic flow processing.
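Correlations between single-neuron firing and perceptual decisions of this kind are commonly quantified with a choice-probability (ROC) analysis. The sketch below applies that analysis to simulated firing rates; the data and analysis details are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def choice_probability(pref, null):
    """Area under the ROC curve: probability that the firing rate on a
    randomly drawn 'preferred-choice' trial exceeds that on a
    'null-choice' trial; 0.5 indicates no relation to choice."""
    ranks = np.concatenate([pref, null]).argsort().argsort() + 1
    u = ranks[:len(pref)].sum() - len(pref) * (len(pref) + 1) / 2
    return u / (len(pref) * len(null))   # Mann-Whitney U scaled to [0, 1]

# Simulated firing rates sorted by the animal's choice on each trial
rng = np.random.default_rng(1)
rates_pref = rng.normal(22, 5, 100)   # trials ending in 'preferred' choices
rates_null = rng.normal(18, 5, 100)   # trials ending in 'null' choices
print(choice_probability(rates_pref, rates_null))   # > 0.5: choice-related
```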

14.
The perception of self-motion is a product of the integration of information from both visual and non-visual cues, to which the vestibular system is a central contributor. It is well documented that vestibular dysfunction leads to impaired movement and balance, dizziness and falls, yet our knowledge of the neuronal processing of vestibular signals remains relatively sparse. In this study, high-density electroencephalographic recordings were used to investigate the neural processes associated with vestibular detection of changes in heading. To this end, a self-motion oddball paradigm was designed. Participants were translated linearly 7.8 cm on a motion platform using a one-second motion profile, at a 45° angle leftward or rightward of straight ahead. These headings were presented with stimulus probabilities of 80% and 20%. Participants responded via button press when they detected the infrequent direction change. Event-related potentials (ERPs) were calculated in response to the standard (80%) and target (20%) movement directions. Statistical parametric mapping showed that ERPs to standard and target movements differed significantly from 490 to 950 ms post-stimulus. Topographic analysis showed that this difference had a typical P3 topography. Individual-participant bootstrap analysis revealed that 93.3% of participants exhibited a clear P3 component. These results indicate that a perceived change in vestibular heading can readily elicit a P3 response, wholly similar to that evoked by oddball stimuli presented in other sensory modalities. This vestibular-evoked P3 response may provide a readily and robustly detectable objective measure for evaluating vestibular integrity in various disease models.
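A minimal sketch of the kind of oddball ERP averaging described above, using simulated epochs; epoch counts, sampling rate, and the injected deflection are illustrative assumptions, not the authors' data or pipeline:

```python
import numpy as np

# Average epochs per condition, then form the target-minus-standard
# difference wave in which an oddball P3 should emerge.
fs, n_samples = 500, 1000                         # 500 Hz, 2-s epochs
rng = np.random.default_rng(0)
std_epochs = rng.normal(0, 1, (160, n_samples))   # ~80 % standards
tgt_epochs = rng.normal(0, 1, (40, n_samples))    # ~20 % targets
tgt_epochs[:, 245:475] += 2.0                     # toy P3-like bump, 490-950 ms

difference_wave = tgt_epochs.mean(axis=0) - std_epochs.mean(axis=0)
peak_ms = difference_wave.argmax() / fs * 1000
print(f"difference-wave peak at ~{peak_ms:.0f} ms post-stimulus")
```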

15.
Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to point continuously toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive-movement condition and walking through space, the pattern of pointing behavior demonstrated accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants adjusted the pointer at a constant rate, irrespective of changes in their rate of movement relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined such that the variance of the adjustments was generally reduced. The results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.

16.
The vestibular system detects the velocity of the head even in complete darkness and thus contributes to spatial orientation. However, during vestibular estimation of the distance of linear passive self-motion in darkness, healthy human subjects rely mainly on time, and they also replicate stimulus duration when required to reproduce a previous self-rotation. We therefore hypothesized that the perception of vestibular-sensed motion duration is embedded within the encoding of motion kinetics. The ability to estimate time during passive self-motion in darkness was examined with a self-rotation reproduction paradigm. Subjects were required to replicate, through self-driven transport, the plateau velocity (30, 60 and 90°/s) and duration (2, 3 and 4 s) of a previously imposed whole-body rotation (trapezoidal velocity profile) in complete darkness; the rotating-chair position was recorded (500 Hz) throughout each trial. The results showed that the peak velocity, but not the duration, of the plateau phase of the imposed rotation was accurately reproduced. Suspecting that the velocity instruction had impaired duration reproduction, we added a control experiment requiring subjects to reproduce two successive identical rotations separated by a momentary motion interruption (MMI) of the same duration as the preceding plateau phase. MMI duration was faithfully reproduced, whereas that of the plateau phase was hypometric (i.e., the reproduced duration was shorter than the plateau), suggesting that subjective time is shorter during vestibular stimulation. Furthermore, the accurate reproduction of the whole motion duration, which was not required, indicates an automatic process and confirms that vestibular duration perception is embedded within motion kinetics.

17.
1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibulo-ocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal parameter values of the two models were calculated from the measured responses of four subjects to trials of SP and the visually enhanced VOR. Once optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model in which the final eye-movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both involving passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake," a sudden stop of the subject's head during CEHT while the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT (the "delayed target onset" paradigm): as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected in experiments employing these stimuli were compared quantitatively with the predictions of the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, however, gaze velocity did not completely match target velocity; this occurred only approximately 100 ms after brake onset. The model predicted the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, may assume a lower value; consequently, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR, and this reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for dynamic modulation of VOR gain in differences between responses to the onset and offset of head rotation in trials of the visually enhanced VOR. (ABSTRACT TRUNCATED AT 400 WORDS)
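The superposition hypothesis and the VOR-gain explanation in point 3 can be sketched in a few lines; the gains and velocities below are illustrative assumptions, not the fitted model parameters:

```python
def eye_in_orbit_velocity(sp_command, head_vel, vor_gain):
    """Final eye-movement command = SP signal + VOR signal (linear sum)."""
    return sp_command - vor_gain * head_vel

# Sustained CEHT: the SP command just cancels the VOR, so the eye is
# stationary in the orbit and gaze moves with the head and target.
head_vel = target_vel = 30.0      # deg/s, target moves with the chair
vor_gain = 0.7                    # dynamically down-modulated VOR gain
sp_command = vor_gain * head_vel  # SP sized to cancel the lower-gain VOR
print(eye_in_orbit_velocity(sp_command, head_vel, vor_gain))   # 0.0 deg/s

# Chair brake: head velocity drops to zero while the SP signal persists,
# so the eye initially tracks at only vor_gain * target velocity (~70 %).
print(eye_in_orbit_velocity(sp_command, 0.0, vor_gain))        # 21.0 deg/s
```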

18.
Humans are typically able to keep track of brief changes in their head and body orientation, even when visual and auditory cues are temporarily unavailable. Determining the magnitude of one's displacement from a known location is one form of self-motion updating. Most research on self-motion updating during body rotations has focused on the role of a restricted set of sensory signals (primarily vestibular) available during self-motion. However, humans can and do internally represent spatial aspects of the environment, and little is known about how remembered spatial frameworks may affect angular self-motion updating. Here, we describe an experiment addressing this issue. Participants estimated the magnitude of passive, non-visual body rotations (40°–130°) using non-visual manual pointing. Prior to each rotation, participants were either allowed full vision of the testing environment or remained blindfolded. Within-subject response precision was dramatically enhanced when the body rotations were preceded by a visual preview of the surrounding environment; constant (signed) and absolute (unsigned) errors were much less affected. These results are informative for future perceptual, cognitive, and neuropsychological studies, and demonstrate the powerful role of stored spatial representations in improving the precision of angular self-motion updating.

19.
Assessing the intentions, direction, and velocity of others is necessary for most daily tasks, and such information is often made available by both visual and auditory motion cues. It is therefore not surprising that we are highly adept at perceiving human motion. Here, we explore the multisensory integration of cues to the walking speed of biological motion. After testing for audiovisual asynchronies (visual signals led auditory ones by 30 ms, within simultaneity temporal windows of 76.4 ms), in the main experiment visual, auditory, and bimodal stimuli were compared to a standard audiovisual walker in a velocity discrimination task. The variance reduction in the results conformed to optimal integration of congruent bimodal stimuli across all subjects. Interestingly, perceptual judgements remained close to optimal for stimuli at the smallest level of incongruence. Comparison of slopes allows us to estimate an integration window of about 60 ms, which is smaller than that reported for audiovisual speech.

20.
Successful navigation through an environment requires precise monitoring of the direction and distance traveled ("path integration" or "dead reckoning"). Previous studies in blindfolded human subjects showed that velocity information arising from vestibular and somatosensory signals can be used to reproduce passive linear displacements. In those studies, visual information was excluded as a sensory cue. Yet in everyday life visual information is very important and usually dominates vestibular and somatosensory cues. In the present study, we investigated whether visual signals can be used to discriminate and reproduce simulated linear displacements. In a first set of experiments, subjects viewed two sequences of linear motion and were asked, in a two-alternative forced-choice (2AFC) task, to judge whether the travel distance in the second sequence was larger or shorter than in the first. Displacements in either movement sequence could be forward (f) or backward (b). Subjects were very accurate in discriminating travel distances: the average error was less than 3% and did not depend on whether displacements were in the same (ff, bb) or opposite directions (fb, bf). In a second set of experiments, subjects had to reproduce a previously seen forward motion (passive condition), either in light or in darkness, i.e., with or without visual feedback. Passive displacements had different velocity profiles (constant, sinusoidal, complex) and speeds, and were performed across a textured ground plane, a 2-D plane of dots, or through a 3-D cloud of dots. With visual feedback, subjects reproduced distances accurately, and accuracy did not depend on the velocity profile of the passive condition. Subjects tended to reproduce distance by replicating the velocity profile of the passive displacement. Finally, without visual feedback, subjects reproduced the shape of the velocity profile but used much higher speeds, resulting in a substantial overshoot of travel distance. Our results show that visual, vestibular, and somatosensory signals are used for path integration following a common strategy: the use of the velocity profile during self-motion.
