Similar Articles
Found 20 similar articles (search time: 562 ms)
1.
We examined the influence of dynamic visual scenes on the motion perception of subjects undergoing sinusoidal (0.45 Hz) roll swing motion at different radii. The visual scenes were presented on a flatscreen monitor with a monocular 40° field of view. There were three categories of trials: (1) trials in the dark; (2) trials where the visual scene matched the actual motion; and (3) trials where the visual scene showed swing motion at a different radius. Subjects verbally reported perceptions of head tilt and translation. When the visual and vestibular cues differed, subjects reported perceptions that were geometrically consistent with a radius between the radii of the visual scene and the actual motion. Even when sensations did not match either the visual or vestibular stimuli, reported motion perceptions were consistent with swing motions combining elements of each. Subjects were generally unable to detect cue conflicts or judge their own visual–vestibular biases, which suggests that the visual and vestibular self-motion cues are not independently accessible.

2.
When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.
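The basic linear weighting model referred to above can be sketched as follows. The function name and the weight value are illustrative assumptions, not parameters reported in the study:

```python
def combine_distance_estimates(d_visual, d_body, w_body=0.7):
    """Weighted linear sum of visual and body-based travelled-distance cues.

    w_body is the relative weight given to body-based (proprioceptive and
    vestibular) information; the study found this influence higher during
    active walking and lower during passive wheelchair movement.
    """
    return w_body * d_body + (1.0 - w_body) * d_visual

# congruent condition (1.0x): both cues specify 10 m -> estimate ~10 m
# incongruent condition (1.4x): visual cue 14 m, body cues 10 m
# -> estimate ~11.2 m, lying between the two single-cue values
```

With any weight strictly between 0 and 1, the combined estimate always falls between the two cue values, matching the "consistent combined effect" reported.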

3.
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system gain was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system gain were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

4.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual, and vestibular are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.

5.
Perception of the relative orientation of the self and objects in the environment requires integration of visual and vestibular sensory information, and an internal representation of the body's orientation. Parkinson's disease (PD) patients are more visually dependent than controls, implicating the basal ganglia in using visual orientation cues. We examined the relative roles of visual and non-visual cues to orientation in PD using two different measures: the subjective visual vertical (SVV) and the perceptual upright (PU). We tested twelve PD patients (nine both on- and off-medication), and thirteen age-matched controls. Visual, vestibular and body cues were manipulated using a polarized visual room presented in various orientations while observers were upright or lying right-side-down. Relative to age-matched controls, patients with PD showed more influence of visual cues for the SVV but were more influenced by the direction of gravity for the PU. Increased SVV visual dependence corresponded with equal decreases of the contributions of body sense and gravity. Increased PU gravitational dependence corresponded mainly with a decreased contribution of body sense. Curiously, however, both of these effects were significant only when patients were medicated. Increased SVV visual dependence was highest for PD patients with left-side initial motor symptoms. PD patients both on and off medication were more variable than controls when making judgments. Our results suggest that (i) PD patients are not more visually dependent in general; rather, increased visual dependence is task specific and varies with initial onset side, (ii) PD patients may rely more on vestibular information for some perceptual tasks, which is reflected in relying less on the internal representation of the body, and (iii) these effects are only present when PD patients are taking dopaminergic medication.

6.
Successful navigation through an environment requires precise monitoring of direction and distance traveled ("path integration" or "dead reckoning"). Previous studies in blindfolded human subjects showed that velocity information arising from vestibular and somatosensory signals can be used to reproduce passive linear displacements. In these studies, visual information was excluded as sensory cue. Yet, in our everyday life, visual information is very important and usually dominates vestibular and somatosensory cues. In the present study, we investigated whether visual signals can be used to discriminate and reproduce simulated linear displacements. In a first set of experiments, subjects viewed two sequences of linear motion and were asked in a 2AFC task to judge whether the travel distance in the second sequence was larger or shorter than in the first. Displacements in either movement sequence could be forward (f) or backward (b). Subjects were very accurate in discriminating travel distances. Average error was less than 3% and did not depend on displacements being into the same (ff, bb) or opposite direction (fb, bf). In a second set of experiments, subjects had to reproduce a previously seen forward motion (passive condition), either in light or in darkness, i.e., with or without visual feedback. Passive displacements had different velocity profiles (constant, sinusoidal, complex) and speeds and were performed across a textured ground plane, a 2-D plane of dots or through a 3-D cloud of dots. With visual feedback, subjects reproduced distances accurately. Accuracy did not depend on the kind of velocity profile in the passive condition. Subjects tended to reproduce distance by replicating the velocity profile of the passive displacement. Finally, in the condition without visual feedback, subjects reproduced the shape of the velocity profile, but used much higher speeds, resulting in a substantial overshoot of travel distance.
Our results show that visual, vestibular, and somatosensory signals are used for path integration, following a common strategy: the use of the velocity profile during self-motion. Received: 3 June 1998 / Accepted: 15 February 1999

7.
Vestibular input is required for accurate locomotion in the dark, yet blind subjects' vestibular function is unexplored. Such investigation may also identify visually dependent aspects of vestibular function. We assessed vestibular function perceptually in six congenitally blind (and 12 sighted) subjects. Cupula deflection by a transient angular, horizontal acceleration generates a related vestibular nerve signal that declines exponentially with time constant approximately 4-7 s, which is prolonged to 15 s in the evoked vestibular-ocular reflex by the brain stem "velocity storage." We measured perceptual velocity storage in blind subjects following velocity steps (overall perceptual vestibular time constant, experiment 1) and found it to be significantly shorter (5.34 s; range: 2.39-8.58 s) than in control, sighted subjects (15.8 s; P < 0.001). Vestibular navigation was assessed by subjects steering a motorized Bárány-chair in response to imposed angular displacements in a path-reversal task, "go-back-to-start" (GBS: experiment 2); and a path-completion task, "complete-the-circle" (CTC: experiment 3). GBS performances (comparing response vs. stimulus displacement regression slopes and r²) were equal between groups (P > 0.05), but the blind showed worse CTC performance (P < 0.05). Two blind individuals showed ultrashort perceptual time constants, high lifetime physical activity scores and superior CTC performances; we speculate that these factors may be inter-related. In summary, the vestibular velocity storage as measured perceptually is visually dependent. Early blindness does not affect path reversal performance but is associated with worse path completion, a task requiring an absolute spatial strategy. Although congenitally blind subjects are overall less able to utilize spatial mechanisms during vestibular navigation, prior extensive physical spatial activity may enhance vestibular navigation.

8.
Using vestibular sensors to maintain visual stability during changes in head tilt, crucial when panoramic cues are not available, presents a computational challenge. Reliance on the otoliths requires a neural strategy for resolving their tilt/translation ambiguity, such as canal-otolith interaction or frequency segregation. The canal signal is subject to bandwidth limitations. In this study, we assessed the relative contribution of canal and otolith signals and investigated how they might be processed and combined. The experimental approach was to explore conditions with and without otolith contributions in a frequency range with various degrees of canal activation. We tested the perceptual stability of visual line orientation in six human subjects during passive sinusoidal roll tilt in the dark at frequencies from 0.05 to 0.4 Hz (30 degrees peak to peak). Because subjects were constantly monitoring spatial motion of a visual line in the frontal plane, the paradigm required moment-to-moment updating for ongoing ego motion. Their task was to judge the total spatial sway of the line when it rotated sinusoidally at various amplitudes. From the responses we determined how the line had to be rotated to be perceived as stable in space. Tests were taken both with (subject upright) and without (subject supine) gravity cues. Analysis of these data showed that the compensation for body rotation in the computation of line orientation in space, although always incomplete, depended on vestibular rotation frequency and on the availability of gravity cues. In the supine condition, the compensation for ego motion showed a steep increase with frequency, compatible with an integrated canal signal. The improvement of performance in the upright condition, afforded by graviceptive cues from the otoliths, showed low-pass characteristics. Simulations showed that a linear combination of an integrated canal signal and a gravity-based signal can account for these results.
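A linear combination of an integrated canal signal and a gravity-based signal, of the kind the simulations above refer to, can be sketched as a discrete-time complementary filter. The function name, time step, and weight `k` are illustrative assumptions, not fitted parameters from the study:

```python
def complementary_tilt(omega, tilt_grav, dt, k=0.98):
    """Combine an integrated canal (roll-velocity) signal with a
    graviceptive (otolith) tilt signal.

    omega: sequence of roll velocities (rad/s) from the canals
    tilt_grav: sequence of tilt angles (rad) from the otoliths
    k: weight on the integrated canal path; (1 - k) low-passes the
       gravity cue, giving it the low-pass character described above
    """
    tilt = tilt_grav[0]  # start from the gravity estimate
    out = []
    for w, g in zip(omega, tilt_grav):
        # integrate the canal velocity, then pull toward the gravity cue
        tilt = k * (tilt + w * dt) + (1 - k) * g
        out.append(tilt)
    return out

# constant 0.3 rad tilt signalled by the otoliths, no canal rotation:
est = complementary_tilt([0.0] * 200, [0.3] * 200, dt=0.01)
```

In the supine condition (no gravity cue) only the integrated canal term remains, which is why compensation there rises steeply with frequency.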

9.
Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined, such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.
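The maximum likelihood estimation model mentioned above reduces, for two Gaussian cues, to reliability-weighted averaging. A minimal sketch with an illustrative function name and example values (not data from the study):

```python
def mle_combine(x1, var1, x2, var2):
    """Maximum-likelihood combination of two independent Gaussian cues.

    Each cue is weighted by its inverse variance; the combined variance
    is smaller than either single-cue variance, matching the reduced
    variance of adjustments observed when the cues were congruent.
    """
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    x_comb = w1 * x1 + (1.0 - w1) * x2
    var_comb = 1.0 / (1.0 / var1 + 1.0 / var2)
    return x_comb, var_comb

# hypothetical proprioceptive (10, var 4) and vestibular (12, var 4)
# self-motion estimates -> combined estimate 11.0 with variance 2.0
```

Under cue conflict the same weights predict an intermediate, weighted-average estimate, which is the behavior the authors report.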

10.
The principal visual cue to self-motion (egomotion) is optic flow, which is specified in terms of local 2D velocities in the retinal image without reference to depth cues. However, in general, points near the center of expansion of natural flow fields are distant, whereas those in the periphery are closer, creating gradients of horizontal binocular disparity. To assess whether the brain combines disparity gradients with optic flow when encoding egomotion, stereoscopic gradients were applied to expanding dot patterns presented to observers during functional MRI scanning. The gradients were radially symmetrical, disparity changing as a function of eccentricity. The depth cues were either consistent with egomotion (peripheral dots perceived as near and central dots perceived as far) or inconsistent (the reverse gradient, central dots near, peripheral dots far). The BOLD activity generated by these stimuli was compared in a range of predefined visual regions in 13 participants with good stereoacuity. Visual area V6, in the parieto-occipital sulcus, showed a unique pattern of results, responding well to all optic flow patterns but much more strongly when they were paired with consistent rather than inconsistent or zero-disparity gradients. Of the other areas examined, a region of the precuneus and parietoinsular vestibular cortex also differentiate between consistent and inconsistent gradients, but with weak or suppressive responses. V3A, V7, MT, and ventral intraparietal area responded more strongly in the presence of a depth gradient but were indifferent to its depth-flow congruence. The results suggest that depth and flow cues are integrated in V6 to improve estimation of egomotion.

11.
The influence that the perceived size of visual targets has on the characteristics of pointing movements was investigated in the present study. A size-contrast illusion, known as the Ebbinghaus or Titchener circles, was employed. In this illusion, a target circle surrounded by several smaller circles is perceived to be larger than a target circle of the same physical size surrounded by several larger circles. Movement times of open-loop pointing responses directed to the perceptually smaller target circle were significantly longer than the movement times of pointing responses directed to the perceptually larger target circle. The extent of this difference was similar to that observed when pointing responses were directed at physically different-sized target circles that were not surrounded by other circles. In addition, when the perceptually smaller circle was enlarged so that it appeared to be the same size as the perceptually larger circle, the movement times became equivalent. This evidence supports the contention that the relative rather than the absolute size of the target has a major impact on the control and execution of pointing movements. Such a conclusion contradicts those made previously concerning grasping movements made under similar conditions and implies that pointing responses are more directly influenced by visual perceptual processing than grasping responses. Received: 3 September 1998 / Accepted: 15 December 1998

12.
Recent findings of vestibular responses in part of the visual cortex, the dorsal medial superior temporal area (MSTd), indicate that vestibular signals might contribute to cortical processes that mediate the perception of self-motion. We tested this hypothesis in monkeys trained to perform a fine heading discrimination task solely on the basis of inertial motion cues. The sensitivity of the neuronal responses was typically lower than that of psychophysical performance, and only the most sensitive neurons rivaled behavioral performance. Responses recorded in MSTd were significantly correlated with perceptual decisions, and the correlations were strongest for the most sensitive neurons. These results support a functional link between MSTd and heading perception based on inertial motion cues. These cues seem mainly to be of vestibular origin, as labyrinthectomy produced a marked elevation of psychophysical thresholds and abolished MSTd responses. This study provides evidence that links single-unit activity to spatial perception mediated by vestibular signals, and supports the idea that the role of MSTd in self-motion perception extends beyond optic flow processing.

13.
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues. Research supported in part by NASA Grants NSG 2032 and 2230. GLZ supported by an NIH National Research Service Award; GLZ currently at Bolt Beranek and Newman, Inc., Cambridge, MA, USA.
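A complementary two-channel structure of the kind proposed above can be illustrated with first-order transfer functions: a low-pass visual path and a high-pass vestibular path that sum to unity. The time constant and function name are arbitrary assumptions for illustration, not the fitted model:

```python
import cmath

def channel_gains(f, tau=1.0):
    """Frequency response of a complementary two-channel model.

    f: frequency in Hz; tau: shared time constant in seconds (assumed).
    Returns (visual, vestibular) complex gains at frequency f; the visual
    channel dominates at low frequency, the vestibular channel at high
    frequency, and the two always sum to 1.
    """
    s = 2j * cmath.pi * f
    visual = 1 / (tau * s + 1)            # low-pass path
    vestibular = tau * s / (tau * s + 1)  # complementary high-pass path
    return visual, vestibular

v_lo, ves_lo = channel_gains(0.01)   # visual gain near 1, vestibular near 0
v_hi, ves_hi = channel_gains(10.0)   # vestibular gain near 1, visual near 0
```

Because the two paths share a denominator, the summed estimate has unit gain at all frequencies, which is the defining property of a complementary combination.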

14.
Previous studies have generally considered heading perception to be a visual task. However, since judgments of heading direction are required only during self-motion, there are several other relevant senses which could provide supplementary and, in some cases, necessary information to make accurate and precise judgments of the direction of self-motion. We assessed the contributions of several of these senses using tasks chosen to reflect the reference system used by each sensory modality. Head-pointing and rod-pointing tasks were performed in which subjects aligned either the head or an unseen pointer with the direction of motion during whole body linear motion. Passive visual and vestibular stimulation was generated by accelerating subjects at sub- or supravestibular thresholds down a linear track. The motor-kinesthetic system was stimulated by having subjects actively walk along the track. A helmet-mounted optical system, fixed either on the cart used to provide passive visual or vestibular information or on the walker used in the active walking conditions, provided a stereoscopic display of an optical flow field. Subjects could be positioned at any orientation relative to the heading, and heading judgments were obtained using unimodal visual, vestibular, or walking cues, or combined visual-vestibular and visual-walking cues. Vision alone resulted in reasonably precise and accurate head-pointing judgments (0.3° constant errors, 2.9° variable errors), but not rod-pointing judgments (3.5° constant errors, 5.9° variable errors). Concordant visual-walking stimulation slightly decreased the variable errors and reduced constant pointing errors to close to zero, while head-pointing errors were unaffected. Concordant visual-vestibular stimulation did not facilitate either response. 
Stimulation of the vestibular system in the absence of vision produced imprecise rod-pointing responses, while variable and constant pointing errors in the active walking condition were comparable to those obtained in the visual condition. During active self-motion, subjects made large head-pointing undershoots when visual information was not available. These results suggest that while vision provides sufficient information to identify the heading direction, it cannot, in isolation, be used to guide the motor response required to point toward or move in the direction of self-motion.

15.
The purpose of this study was to investigate adaptive changes in the activity of vestibular nuclei neurons unilaterally deprived of their primary afferent inputs when influenced by visual motion cues. These neuronal changes might account for the established role that vision plays in the compensation for posturo-kinetic deficits after the loss of vestibular inputs. Neuronal recordings were made in alert, non-paralysed cats that had undergone unilateral vestibular nerve sections. The unit responses collected in both Deiters' nuclei were compared to those previously recorded in intact cats. We analysed the extracellular activity of Deiters' nucleus neurons, as well as the optokinetic reflex (OKR) evoked during sinusoidal translation of a whole-field optokinetic stimulus in the vertical plane. In intact cats, we found the unit firing rate closely correlated with the visual surround translation velocity, and the relationship between the discharge rate and the motion frequency was tuned around an optimal frequency. The maximum firing rate modulation was generally below the 0.25 Hz stimulus frequency; unit responses were weak or even absent above 0.25 Hz. From the 4th day to the end of the 3rd week after ipsilateral deafferentation, a majority of cells was found to display maximum discharge modulation during vertical visual stimulation at 0.50 Hz, and even at 0.75 Hz, indicating that the frequency bandwidth of the visually induced responses of deafferented vestibular nuclei neurons had been extended. Consequently, the frequency-dependent attenuation in the sensitivity of vestibular neurons to visual inputs was much less pronounced. After the first 3 weeks postlesion, the unit response characteristics were very similar to those observed prior to the deafferentation. On the nucleus contralateral to the neurectomy, the maximum modulation of most cells was tuned to the low frequencies of optokinetic stimulation, as also seen prior to the lesion. 
We found, however, a subgroup of cells displaying well-developed responses above 0.50 Hz. Under all experimental conditions, the neuronal response phase still remained closely correlated with the motion velocity of the vertical sinusoidal visual pattern. We hypothesize that Deiters' neurons deprived of their primary afferents may transiently acquire the ability to code fast head movements on the basis of visual messages, thus compensating, at least partially, for the loss of dynamic vestibular inputs during the early stages of the recovery process. Since the overall vertical OKR gain was not significantly altered within the 0.0125 Hz–1 Hz range of stimulation after the unilateral neurectomy, it can be postulated that the increased sensitivity of deafferented vestibular neurons to visual motion cues was accounted for by plasticity mechanisms operating within the deafferented Deiters' nucleus. The neuroplasticity mechanisms underlying this rapid and temporary increase in neuronal sensitivity are discussed.

16.
Four cats labyrinthectomized shortly after birth (DELAB) exhibited the classical vestibular syndrome and recovery, while their motor development was otherwise unimpaired. As adults, they were tested for visual-vestibular substitution in a locomotor task with either orientation requirements (tilted platforms) or balance requirements (narrow platforms). Visual motion cues or static visual cues were controlled using normal or stroboscopic lighting, or darkness. Measurements of the average speed of locomotion showed that: although all cats increase their speed when more visual cues become available, a marked deficit occurs in darkness only in the DELAB cats; with either vestibular cues alone or static visual cues alone, cats are able to reach the same level of performance in the tilted platform test, which suggests a total visual-vestibular interchangeability in orientation; DELAB cats perform very poorly in the narrow rail test; when continuous vision is allowed in the narrow rail test, the DELAB cats' performance rises but does not match that of the control group; a specific deficit in balance for the DELAB group is thus reduced by normal continuous vision as compared to stroboscopic vision, suggesting a significant, though imperfect, substitution of visual motion cues for the missing dynamic vestibular cues; and dynamic visual cues play only a minor role in most situations, when locomotory speed is high. These results support the view that both the vestibular and the visual system can subserve two distinct functions: dynamic information may stabilize stance in narrow, unstable situations during slow locomotion, and static orientation cues may mainly control the direction of displacement. Possible interactions between head positioning and body orientation in the DELAB cats are discussed.

17.
Multisensory interactions between haptics and vision remain poorly understood. Previous studies have shown that shapes, such as letters of the alphabet, when drawn on the skin, are differently perceived dependent upon which body part is stimulated and on how the stimulated body part, such as the hand, is positioned. Another line of research within this area has investigated multisensory interactions. Tactile perceptions, for example, have the potential to disambiguate visually perceived information. While the former studies focused on explicit reports about tactile perception, the latter studies relied on fully aligned multisensory stimulus dimensions. In this study, we investigated to what extent rotating tactile stimulations on the hand affect directional visual motion judgments implicitly and without any spatial stimulus alignment. We show that directional tactile cues and ambiguous visual motion cues are integrated, thus biasing the judgment of visually perceived motion. We further show that the direction of the tactile influence depends on the position and orientation of the stimulated part of the hand relative to a head-centered frame of reference. Finally, we also show that the time course of the cue integration is very versatile. Overall, the results imply immediate directional cue integration within a head-centered frame of reference.

18.
Visual self-motion perception during head turns
Extra-retinal information is critical in the interpretation of visual input during self-motion. Turning our eyes and head to track objects displaces the retinal image but does not affect our ability to navigate because we use extra-retinal information to compensate for these displacements. We showed observers animated displays depicting their forward motion through a scene. They perceived the simulated self-motion accurately while smoothly shifting the gaze by turning the head, but not when the same gaze shift was simulated in the display; this indicates that the visual system also uses extra-retinal information during head turns. Additional experiments compared self-motion judgments during active and passive head turns, passive rotations of the body and rotations of the body with head fixed in space. We found that accurate perception during active head turns is mediated by contributions from three extra-retinal cues: vestibular canal stimulation, neck proprioception and an efference copy of the motor command to turn the head.

19.
The contribution of cervical and vestibular cues in signaling the changes in target-trunk relative positions during self-motion was investigated. Normal subjects (Ss) were shown an LED flashed in the peripheral visual field in a dark room. Ss were then passively rotated about the vertical axis in one of three different conditions: (1) head chair-fixed (vestibular condition); (2) head earth-fixed (relaxed neck condition); and (3) head earth-fixed, but with the Ss actively attempting to turn it (activated neck condition). The Ss were then required to indicate, with their unseen index finger, the position of the previously flashed target. It was found that pointing at the memorized target was similarly accurate in the relaxed neck condition and in the activated neck condition. In the vestibular condition, pointing accuracy dropped significantly. These results suggest that neck proprioceptive signals are more effective than vestibular ones in signaling relative changes in the position of stationary objects with respect to the body during head-trunk motion. The finding that cervically mediated estimates were unchanged during active contraction of the neck muscles suggests that efference copy signals may help interpret the change in the afferent signals caused by voluntary neck muscle activation. Received: 18 February 1997 / Accepted: 16 March 1998

20.
The task of parceling perceived visual motion into self- and object motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern connectivity among them to investigate the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left hemisphere cluster is involved in mediating the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects’ psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. 
Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
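The relative-motion-only baseline that the authors found inconsistent with observer performance can be sketched as follows: flag the object whose image velocity deviates most from the motion common to all objects, with no use of depth or eccentricity. Names and values here are illustrative, not from the study:

```python
def detect_moving_object(velocities):
    """Relative-motion-only baseline: return the index of the object whose
    2-D image velocity deviates most from the mean motion of all objects
    (a crude stand-in for the self-motion component). Sketch only; the
    study found such a model inconsistent with observer performance.
    """
    n = len(velocities)
    mean_vx = sum(v[0] for v in velocities) / n
    mean_vy = sum(v[1] for v in velocities) / n

    def deviation(v):
        return (v[0] - mean_vx) ** 2 + (v[1] - mean_vy) ** 2

    return max(range(n), key=lambda i: deviation(velocities[i]))

# eight distractors share the same image motion; the target at index 3
# carries an additional independent component
flows = [(1.0, 0.0)] * 8
flows.insert(3, (1.0, 0.8))
# detect_moving_object(flows) -> 3
```

The authors' point is that such a model ignores where objects sit in the scene, whereas human detection exploited eccentricity and depth context.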


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)