Similar Documents
20 similar documents found.
1.
In the field of motion-based simulation, it was found that a visual amplitude equal to the inertial amplitude does not always provide the best perceived match between visual and inertial motion. This result is thought to be caused by the “quality” of the motion cues delivered by the simulator motion and visual systems. This paper studies how different visual characteristics, like field of view (FoV) and size and depth cues, influence the scaling between visual and inertial motion in a simulation environment. Subjects were exposed to simulator visuals with different fields of view and different visual scenes and were asked to vary the visual amplitude until it matched the perceived inertial amplitude. This was done for motion profiles in surge, sway, and yaw. Results showed that the subjective visual amplitude was significantly affected by the FoV, visual scene, and degree of freedom. When the FoV and visual scene were closer to what one expects in the real world, the scaling between the visual and inertial cues was closer to one. For yaw motion, the subjective visual amplitudes were approximately the same as the real inertial amplitudes, whereas for sway and especially surge, the subjective visual amplitudes were higher than the inertial amplitudes. This study demonstrated that visual characteristics affect the scaling between visual and inertial motion, which leads to the hypothesis that this scaling may be a good metric to quantify the effect of different visual properties in motion-based simulation.
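The scaling metric described above is simply the ratio of the visual amplitude a subject selects as matching to the physical inertial amplitude, computed per degree of freedom. A minimal sketch, using illustrative numbers (not data from the study) that follow the reported pattern of yaw near 1 and surge/sway above 1:

```python
# Visual-to-inertial scaling metric: matched visual amplitude divided by
# the physical inertial amplitude. A ratio of 1 means visual and inertial
# motion are perceived as equal in magnitude. Values are illustrative.

inertial_amplitude = 1.0  # normalized platform motion amplitude
matched_visual = {"yaw": 1.02, "sway": 1.25, "surge": 1.60}

gains = {dof: v / inertial_amplitude for dof, v in matched_visual.items()}
print(gains)  # yaw close to 1; sway and surge gains above 1
```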

2.
The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw-rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual–inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that the data of three participants exhibited an increase in bimodal precision, consistent with the optimal integration model, while data from the other participants did not conform to maximum-likelihood integration schemes. We suggest that either the sensory cues were not perceived as congruent, that integration might be achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
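The maximum-likelihood (optimal) integration model tested above makes a concrete quantitative prediction: each cue is weighted by its reliability (inverse variance), and the bimodal JND should be lower than either unimodal JND. A minimal sketch of those standard formulas; the JND values passed in are illustrative, not measurements from the study:

```python
import numpy as np

def mle_combined_jnd(jnd_visual, jnd_inertial):
    """Predicted bimodal JND under maximum-likelihood integration:
    sigma_vi^2 = (sigma_v^2 * sigma_i^2) / (sigma_v^2 + sigma_i^2)."""
    var_v, var_i = jnd_visual**2, jnd_inertial**2
    return np.sqrt(var_v * var_i / (var_v + var_i))

def mle_weights(jnd_visual, jnd_inertial):
    """Cue weights: each cue weighted by its relative reliability
    (inverse variance); the weights sum to 1."""
    r_v, r_i = 1.0 / jnd_visual**2, 1.0 / jnd_inertial**2
    return r_v / (r_v + r_i), r_i / (r_v + r_i)

# Equal unimodal precision predicts a 1/sqrt(2) reduction in JND,
# and the more precise cue dominates the weighted average.
print(mle_combined_jnd(2.0, 2.0))  # ~1.414, i.e. 2/sqrt(2)
print(mle_weights(1.0, 2.0))       # visual weighted 4x inertial: ~(0.8, 0.2)
```

Participants whose bimodal JND fails to drop below the best unimodal JND, as for five of the eight here, are inconsistent with this prediction.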

3.
In order to navigate efficiently, animals can benefit from internal representations of their moment-to-moment orientation. Head-direction (HD) cells are neurons that discharge maximally when the head of a rat is oriented in a specific ("preferred") direction in the horizontal plane, independently from position or ongoing behavior. This directional selectivity depends on environmental and inertial cues. However, the mechanisms by which these cues are integrated remain unknown. This study examines the relative influence of visual, inertial and substratal cues on the preferred directions of HD cells when cue conflicts are produced in the presence of the rats. Twenty-nine anterior dorsal thalamic (ATN) and 19 postsubicular (PoS) HD cells were recorded from 7 rats performing a foraging task in a cylinder (76 cm in diameter, 60 cm high) with a white card attached to its inner wall. Changes in preferred directions were measured after the wall or the floor of the cylinder was rotated separately or together in the same direction by 45 degrees, 90 degrees or 180 degrees, either clockwise or counterclockwise. Linear regression analyses showed that the preferred directions of the HD cells in both structures shifted by approximately 90% of the angle of rotation of the wall, whether rotated alone or together with the floor (r²>0.87, P<0.001). Rotations of the floor alone did not trigger significant shifts in preferred directions. These results indicate that visual cues exerted a strong but incomplete control over the preferred directions of the neurons, while inertial cues had a small but significant influence, and substratal cues were of no consequence.
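The cue-control analysis described above reduces to regressing each cell's preferred-direction shift against the imposed cue rotation: a slope ("gain") near 1 means full cue control, near 0 means none. A sketch with fabricated illustrative shifts (not the study's recordings), using a least-squares slope through the origin:

```python
import numpy as np

# Regress observed preferred-direction shifts on imposed cue rotations.
# A gain of ~0.9 corresponds to the "strong but incomplete" visual
# control reported above. The data below are fabricated for illustration.

rotations = np.array([45, 90, 180, -45, -90, -180], dtype=float)  # degrees
shifts = np.array([40, 82, 163, -42, -80, -160], dtype=float)     # degrees

# Least-squares slope through the origin: gain = sum(x*y) / sum(x*x)
gain = np.sum(rotations * shifts) / np.sum(rotations * rotations)
print(round(gain, 2))  # 0.9
```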

4.
Humans exploit a range of visual depth cues to estimate three-dimensional structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on discriminating the outputs from cue-specific mechanisms or on fusing signals into a common representation. Although fusion is computationally attractive, it poses a substantial challenge, requiring the integration of quantitatively different signals. We used functional magnetic resonance imaging (fMRI) to provide evidence that dorsal visual area V3B/KO meets this challenge. Specifically, we found that fMRI responses are more discriminable when two cues (binocular disparity and relative motion) concurrently signal depth, and that information provided by one cue is diagnostic of depth indicated by the other. This suggests a cortical node important when perceiving depth, and highlights computations based on fusion in the dorsal stream.

5.
We investigated the tactile cuing of visual spatial attention using spatially-informative (75% valid) and spatially-noninformative (25% valid) tactile cues. The participants performed a visual change detection task following the presentation of a tactile spatial cue on their back whose location corresponded to one of the four visual quadrants on a computer monitor. The participants were explicitly instructed to use the spatially-informative tactile cues but to ignore the spatially-noninformative cues. In addition to reaction time data, participants’ eye-gaze was monitored as a measure of overt visual attention. The results showed that the spatially-informative tactile cues resulted in initial saccades toward the cued visual quadrants, and significantly reduced the visual change detection latencies. When spatially-noninformative tactile cues were used, the participants were largely successful at ignoring them as indicated by a saccade distribution that was independent of the quadrant that was cued, as well as the lack of a significant change in search time as compared to the baseline measure of no tactile cuing. The eye-gaze data revealed that the participants could not always completely ignore the spatially-noninformative tactile cues. Our results suggest that the tactile cuing of visual attention is natural but not automatic when the tactile cue and visual target are not collocated spatially, and that it takes effort to ignore the cues even when they are known to provide no useful information. In addition, our results confirm previous findings that spatially-informative tactile cues are especially effective at directing overt visual attention to locations that are not typically monitored visually, such as the bottom of a computer screen or the rearview mirror in an automobile.

6.
The deployment of attention during temporal integration was investigated with event-related potentials. Attentional selection of an integrated percept and an actual singleton were examined. Integration performance was related to modulations of the N2pc, N2, and P3 components. Singleton localization performance was reflected in N2pc and P3 only. Of note, the singleton N2pc developed and subsided earlier than the integration N2pc. The singleton P3 seemed to develop in a single deflection, while the integration P3 showed two more distinct deflections. Physical stimulus differences could not explain these results. The N2pc and N2 modulations showed that attending to an integrated percept is not slower per se, but does differ from attending to a singleton. Integrated percepts furthermore have special correlates in late stages of perception (i.e., the P3). These differences are linked to the unique demand to combine and represent successive stimuli.

7.
Subordinate-level category learning recruits neural resources associated with perceptual expertise, including the N250 component of the ERP, a posterolateral negative wave maximal between 230 and 330 ms. The N250 is a relatively late visual ERP and could plausibly be driven by attention to the features of categorized objects. Indeed, it has a latency and scalp distribution similar to the selection negativity (SN), an ERP component long known to be sensitive to attentional selection of target features. To clarify sensitivity of the N250 to attention and to more generally investigate the effect of category learning on attentional modulation of learned features, we independently manipulated subordinate-level category learning and target detection in a speeded paradigm designed to optimally elicit the SN and accompanying frontal selection positivity (FSP). Participants first practiced categorizing a set of artificial animal stimuli and then performed a speeded target detection task on trained and untrained stimuli while ERPs were recorded. SN and FSP were roughly linearly related to the number of target features in the stimulus. Trained stimuli elicited a significantly larger N250 than untrained stimuli. The SN and N250 effects were additive, with all levels of target similarity equally affected by training, and had different time courses. Training had little effect on the FSP. The results suggest that (a) the N250 and SN have different sources, and (b) at the very least, the learning-induced N250 indexes a different attentional subprocess from the target-induced SN and could be driven by a different cognitive process altogether.

8.
Can driver steering behaviors, such as a lane change, be executed without visual feedback? In a recent study with a fixed-base driving simulator, drivers failed to execute the return phase of a lane change when steering without vision, resulting in systematic final heading errors biased in the direction of the lane change. Here we challenge the generality of that finding. Suppose that, when asked to perform a lane (position) change, drivers fail to recognize that a heading change is required to make a lateral position change. However, given an explicit path, the necessary heading changes become apparent. Here we demonstrate that when heading requirements are made explicit, drivers appropriately implement the return phase. More importantly, by using an electric vehicle outfitted with a portable virtual reality system, we also show that valid inertial information (i.e., vestibular and somatosensory cues) enables accurate steering behavior when vision is absent. Thus, the failure to properly execute a lane change in a driving simulator without a moving base does not present a fundamental problem for feed-forward driving behavior.

9.
This study analyzed the spatial memory capacities of rats in darkness with visual and/or olfactory cues through ontogeny. Tests were conducted with the homing board, where rats had to find the correct escape hole. Four age groups (24 days, 48 days, 3-6 months, and 12 months) were trained in 3 conditions: (a) 3 identical light cues; (b) 5 different olfactory cues; and (c) both types of cues, followed by removal of the olfactory cues. Results indicate that immature rats first take into account olfactory information but are unable to orient with only the help of discrete visual cues. Olfaction enables the use of visual information by 48-day-old rats. Visual information predominantly supports spatial cognition in adult and 12-month-old rats. Results point to cooperation between vision and olfaction for place navigation during ontogeny in rats.

10.
11.
12.
Event-related potentials (ERPs) were recorded while subjects viewed visually presented words, some of which occurred twice. Each trial consisted of two colored letter strings, the requirement being to attend to and make a word/nonword discrimination for one of the strings. Attention was manipulated by color in Experiment 1, and color and a precue were used in Experiment 2. As in previous ERP studies of word repetition, a positive offset to repeated words developed when both first and second presentations were the focus of attention. In Experiment 2, ERPs showed evidence of positive-going repetition effects in all conditions in which at least one of the two presentations of the repeated word was attended. In the visual modality, the positive-going ERP repetition effect occurs only when at least one of the two presentations of a repeated item is the object of attention, which suggests that one or more of the processes reflected by the effect is capacity limited.

13.
Adult male mice, isolated for 14 days, were tested for aggressiveness against standard castrate opponents under five conditions: (1) the home cage; (2) visual cage (visual characteristics similar to the home cage); (3) olfactory + visual cage (olfactory and visual characteristics similar to the home cage); (4) olfactory cage (a completely strange cage with olfactory characteristics similar to the home cage); and (5) strange cage (a completely strange cage). The results show a significant increase in aggressive behaviour with progressively more familiar environments. It is also shown that familiar olfactory cues have a greater enhancing effect on aggression than do familiar visual cues. The results are explained in terms of the conflict between attack and competing behaviours.

14.
Parkinson's disease (PD) patients and normal controls (NCs) were administered a series of visual attention tasks. The dimensional integration task required integration of information from 2 stimulus dimensions. The selective attention task required selective attention to 1 stimulus dimension while ignoring the other stimulus dimension. Both integral- and separable-dimension stimuli were examined. A series of quantitative models of attentional processing was applied to each participant's data. The results suggest that (a) PD patients were not impaired in integrating information from 2 stimulus dimensions, (b) PD patients were impaired in selective attention, (c) selective attention deficits in PD patients were not due to perceptual interference, and (d) PD patients were affected by manipulations of stimulus integrality and separability in much the same way as were NCs.

15.
The time course of shifting visual spatial attention to flickering stimuli in the left and right visual hemifield was investigated. The goal was to test whether an instructive peripheral salient cue located close to the newly to-be-attended location triggers faster shifts per se compared to a central cue. Besides behavioural data, an objective electrophysiological measure, the steady-state visual evoked potential (SSVEP), was used to measure the time course of visual pathway facilitation in the human brain for centrally and peripherally cued shifts of spatial attention. Results revealed that both spatial cues resulted in identical time courses of shifts of covert spatial attention. This was true with respect to behavioural data and SSVEP amplitude. Results support the notion that a salient peripheral spatial cue does not automatically produce faster shifts of spatial attention to the to-be-attended location when this cue is informative and embedded in an ongoing stimulation.

16.
Vection is the illusion of self-motion in the absence of real physical movement. The aim of the present study was to analyze how multisensory inputs (visual and auditory) contribute to the perception of vection. Participants were seated in a stationary position in front of a large, curved projection display and were exposed to a virtual scene that constantly rotated around the yaw-axis, simulating a 360° rotation. The virtual scene contained either only visual, only auditory, or a combination of visual and auditory cues. Additionally, simulated rotation speed (90°/s vs. 60°/s) and the number of sound sources (1 vs. 3) were varied for all three stimulus conditions. All participants were exposed to every condition in a randomized order. Data specific to vection latency, vection strength, the severity of motion sickness (MS), and postural steadiness were collected. Results revealed reduced vection onset latencies and increased vection strength when auditory cues were added to the visual stimuli, whereas MS and postural steadiness were not affected by the presence of auditory cues. Half of the participants reported experiencing auditorily induced vection, although the sensation was rather weak and less robust than visually induced vection. Results demonstrate that the combination of visual and auditory cues can enhance the sensation of vection.

17.
The observation of figure-ground selectivity in neurons of the visual cortex shows that these neurons can be influenced by the image context far beyond the classical receptive field. To clarify the nature of the context integration mechanism, we studied the latencies of neural edge signals, comparing the emergence of context-dependent definition of border ownership with the onset of local edge definition (contrast polarity; stereoscopic depth order). Single-neuron activity was recorded in areas V1 and V2 of Macaca mulatta under behaviorally induced fixation. Whereas local edge definition emerged immediately (<13 ms) after the edge onset response, the context-dependent signal was delayed by about 30 ms. To see if the context influence was mediated by horizontal fibers within cortex, we measured the latencies of border ownership signals for two conditions in which the relevant context information was located at different distances from the receptive field and compared the latency difference with the difference predicted from horizontal signal propagation. The prediction was based on the increase in cortical distance, computed from the mapping of the test stimuli in the cortex, and the known conduction velocities of horizontal fibers. The measured latencies increased with cortical distance, but much less than predicted by the horizontal propagation hypothesis. Probability calculations showed that an explanation of the context influence by horizontal signal propagation alone is highly unlikely, whereas mechanisms involving back projections from other extrastriate areas are plausible.
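The horizontal-propagation prediction above is back-of-envelope arithmetic: the extra border-ownership latency for a more distant context should equal the extra cortical distance divided by the conduction velocity of horizontal fibers. A sketch under assumed illustrative values (the distance and velocity below are not the paper's measurements; horizontal fibers are typically quoted at roughly 0.1–0.3 m/s):

```python
# Predicted extra delay if context signals travel via horizontal fibers:
# time = distance / velocity. Note 1 m/s is numerically 1 mm/ms, so
# dividing mm by (m/s) yields ms directly.

def predicted_latency_increase_ms(extra_distance_mm, velocity_m_per_s):
    """Extra delay (ms) to cover extra_distance_mm at the given velocity."""
    return extra_distance_mm / velocity_m_per_s  # mm / (mm/ms) = ms

# An extra 6 mm of cortex at 0.2 m/s predicts ~30 ms of added latency,
# far more than the modest increase the recordings actually showed.
print(round(predicted_latency_increase_ms(6.0, 0.2)))  # 30
```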

18.
There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is interaction between exogenous auditory and visual capture. Participants performed an orthogonal cueing task in which the visual target was preceded by both a peripheral visual and auditory cue. When both cues were presented at chance level, visual and auditory capture was observed. However, when the validity of the visual cue was increased to 80% only visual capture and no auditory capture was observed. Furthermore, a highly predictive (80% valid) auditory cue was not able to prevent visual capture. These results demonstrate that crossmodal auditory capture does not occur when a competing predictive visual event is presented and is therefore not a fully automatic process.

19.
Vibration on localised areas of skin can be used to signal spatial orientation, multi-directional motion and also to guide arm and hand movements. This study investigated the possibility that vibration at loci on the skin might also be used to cue gaze direction. Eight subjects made eye or (head + eye) gaze saccades in the dark cued by vibration stimulation at discrete loci spaced on a horizontal contour across the chest. Saccade and gaze amplitudes, latencies, and directions were analysed. In the first experiment, performed without training, subjects could only use vibration cues to direct their gaze in cardinal directions and gross quadrature. There was a high variability in the relationship between locus on the trunk and gaze direction in space, both within and between subjects. Saccade latencies ranged from 377 to 433 ms and were related to the loci of vibration; the further from the body midline the quicker the response. Since the association of skin loci with gaze direction did not appear intuitive, a sub-group of four subjects were retested after intensive training with feedback until they attained criterion on midline ≡ 0° and 15 cm (to right/left of midline) ≡ 45° gaze shifts right and left. Training gave a moderate improvement in directional specificity of gaze to a particular locus on the skin. Gaze direction was linearly rescaled with respect to skin loci but variability and saccade latencies remained high. The uncertainty in the relationship between vibration locus and gaze direction and the prolonged latencies of responses indicate circuitous neuronal processing. There appears to be no pre-existing stimulus-response compatibility mapping between loci on the skin and gaze direction. Vibrotactile cues on the skin of the trunk only serve as a gross indication of visual direction in space.

20.
Visual information regarding obstacle position and size is used for planning and controlling adaptive gait. However, the manner in which visual cues in the environment are used in the control of gait is not fully known. This research examined the effect of obstacle position cues on the lead and trail limb trajectories during obstacle avoidance with and without visual information of the lower limbs and obstacle (termed visual exproprioception). Eight subjects stepped over obstacles under four visual conditions: full vision without obstacle position cues, full vision with position cues, goggles without position cues and goggles with position cues. Goggles obstructed visual exproprioception of the lower limbs and the obstacle. Position cues (2 m tall) were placed beside the obstacle to provide visual cues regarding obstacle position. Obstacle heights were 2, 10, 20 and 30 cm. When wearing goggles and without position cues, a majority of the dependent measures (horizontal distance, toe clearance and lead stride length) increased for the 10, 20 and 30 cm obstacles. Therefore lower limb–obstacle visual exproprioception was important for the control of both limbs, even though with normal vision the trail limb was not visible during obstacle clearance. When wearing goggles, the presence of position cues, which provided on-line visual exproprioception of the self relative to the obstacle position in the anterior–posterior direction, returned lead and trail foot placements to full vision values. Lead toe clearance was not affected by the position cues, trail clearance decreased but was greater than values observed during full vision. Therefore, visual exproprioception of obstacle location, provided by visual cues in the environment, was more relevant than visual exproprioception of the lower limbs for controlling lead and trail foot placement.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号