Similar Documents
1.
Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded the activity of neurons in the dorsal medial superior temporal (MSTd) area during a task in which trained monkeys combined visual and vestibular cues near-optimally to discriminate heading. During bimodal stimulation, MSTd neurons combined visual and vestibular inputs linearly with subadditive weights. Neurons with congruent heading preferences for visual and vestibular stimuli showed improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences showed diminished sensitivity under cue combination. Responses of congruent cells were more strongly correlated with monkeys' perceptual decisions than were responses of opposite cells, suggesting that the monkey monitored the activity of congruent cells to a greater extent during cue integration. These findings show that perceptual cue integration occurs in nonhuman primates and identify a population of neurons that may form its neural basis.

2.
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.

3.
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel-channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues. (Research supported in part by NASA Grants NSG 2032 and 2230. GLZ supported by an NIH National Research Service Award. GLZ currently at Bolt Beranek and Newman, Inc., Cambridge, MA, USA.)

4.
We investigated the relative weighting of vestibular, optokinetic and podokinetic (foot and leg proprioceptive) cues for the perception of self-turning in an environment which was either stationary (concordant stimulation) or moving (discordant stimulation) and asked whether cue weighting changes if subjects (Ss) detect a discordance. Ss (N = 18) stood on a turntable inside an optokinetic drum and turned either passively (turntable rotating) or actively in space at constant velocities of 15, 30, or 60°/s. Sensory discordance was introduced by simultaneous rotations of the environment (drum and/or turntable) at ±{5, 10, 20, 40, 80}% of self-turning velocity. In one experiment, Ss were to detect these rotations (i.e. the sensory discordance), and in a second experiment they reported perceived angular self-displacement. Discordant optokinetic cues were better detected, and more heavily weighted for self-turning perception, than discordant podokinetic cues. Within Ss, weights did not depend on whether a discordance was detected or not. Across Ss, optokinetic weights varied over a large range and were negatively correlated with the detection scores: the more perception was influenced by discordant optokinetic cues, the poorer was the detection score; no such correlation was found among the podokinetic results. These results are interpreted in terms of a "self-referential" model that makes the following assumptions: (1) a weighted average of the available sensory cues both determines turning perception and serves as a reference to which the optokinetic cue is compared; (2) a discordance is detected if the difference between reference and optokinetic cue exceeds some threshold; (3) the threshold value corresponds to about the same multiple of sensory uncertainty in all Ss. With these assumptions the model explains the observed relation between optokinetic weight and detection score.
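The three assumptions of the "self-referential" model described above lend themselves to a compact sketch: perception is a weighted average of all cues, that same average serves as the reference, and a discordance is flagged when the optokinetic cue departs from the reference by more than a threshold. The cue values, weights, and threshold below are hypothetical illustrations, not fitted parameters from the study.

```python
import numpy as np

def perceived_turn_and_detection(cues, weights, optokinetic_idx, threshold):
    """Sketch of the self-referential cue-conflict model.

    The weighted average of all cues both determines the turning percept
    and acts as the reference against which the optokinetic cue is
    compared; a discordance is detected when the difference exceeds the
    threshold.
    """
    cues = np.asarray(cues, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    perceived = float(np.dot(weights, cues))   # weighted-average percept
    discordant = bool(abs(cues[optokinetic_idx] - perceived) > threshold)
    return perceived, discordant

# Hypothetical example: vestibular 30°/s, optokinetic 36°/s, podokinetic 30°/s
perceived, detected = perceived_turn_and_detection(
    [30.0, 36.0, 30.0], weights=[0.4, 0.3, 0.3], optokinetic_idx=1, threshold=3.0)
# perceived = 31.8°/s; the 4.2°/s optokinetic discrepancy exceeds the threshold
```

Note how the model couples weight and detectability: the more weight the optokinetic cue carries, the closer the reference sits to it, and the harder the discordance is to detect — matching the negative correlation reported across subjects.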

5.
Reflexively orienting toward a peripheral cue can influence subsequent responses to a target, depending on when and where the cue and target appear relative to each other. At short delays between the cue and target [cue-target onset asynchrony (CTOA)], subjects are faster to respond when they appear at the same location, an effect referred to as reflexive attentional capture. At longer CTOAs, subjects are slower to respond when the two appear at the same location, an effect referred to as inhibition of return (IOR). Recent evidence suggests that these phenomena originate from sensory interactions between the cue- and target-related responses. The capture of attention originates from a strong target-related response, derived from the overlap of the cue- and target-related activities, whereas IOR corresponds to a weaker target-aligned response. If such interactions are responsible, then modifying their nature should impact the neuronal and behavioral outcome. Monkeys performed a cue-target saccade task featuring visual and auditory cues while neural activity was recorded from the superior colliculus (SC). Compared with visual stimuli, auditory responses are weaker and occur earlier, thereby decreasing the likelihood of interactions between these signals. Similar to previous studies, visual stimuli evoked reflexive attentional capture at a short CTOA (60 ms) and IOR at longer CTOAs (160 and 610 ms) with corresponding changes in the target-aligned activity in the SC. Auditory cues used in this study failed to elicit either a behavioral effect or modification of SC activity at any CTOA, supporting the hypothesis that reflexive orienting is mediated by sensory interactions between the cue and target stimuli.

6.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities as well as within a modality in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible unknown target position whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position half-way through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information. We tested this in trials in which visual and haptic information was presented with temporal disparity. We found that for our sensorimotor task temporal disparity between the visual and haptic cues had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
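The minimum-variance combination rule invoked by this model (and by the weighted-linear-sum accounts in several other abstracts here) can be sketched in a few lines: each cue is weighted by its inverse variance, so the more reliable cue dominates and the combined estimate is never noisier than the best single cue. The cue values and variances below are hypothetical illustrations, not values from the study.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Minimum-variance (inverse-variance weighted) cue combination.

    Weights are w_i = (1/var_i) / sum_j (1/var_j); the combined variance
    is 1 / sum_j (1/var_j), which is always <= the smallest single-cue
    variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = float(1.0 / precisions.sum())
    return combined, combined_var, weights

# Hypothetical example: visual cue at 10.0 (variance 4.0),
# haptic cue at 12.0 (variance 1.0)
est, var, w = combine_cues([10.0, 12.0], [4.0, 1.0])
# The more reliable haptic cue gets weight 0.8, so est = 11.6, var = 0.8
```

Under a flat prior, this inverse-variance weighted estimate coincides with the maximum a posteriori estimate the model computes; the reduction in combined variance is the signature of near-optimal integration that these studies test behaviorally.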

7.
We explored functional differences between the supplementary and presupplementary motor areas (SMA and pre-SMA, respectively) systematically with respect to multiple behavioral factors, ranging from the retrieval and processing of associative visual signals to the planning and execution of target-reaching movement. We analyzed neuronal activity while monkeys performed a behavioral task in which two visual instruction cues were given successively with a delay: one cue instructed the location of the reach target, and the other instructed arm use (right or left). After a second delay, the monkey received a motor-set cue to be prepared to make the reaching movement as instructed. Finally, after a GO signal, it reached for the instructed target with the instructed arm. We found the following apparent differences in activity: 1) neuronal activity preceding the appearance of visual cues was more frequent in the pre-SMA; 2) a majority of pre-SMA neurons, but many fewer SMA neurons, responded to the first or second cue, reflecting what was shown or instructed; 3) in addition, pre-SMA neurons often reflected information combining the instructions in the first and second cues; 4) during the motor-set period, pre-SMA neurons preferentially reflected the location of the target, while SMA neurons mainly reflected which arm to use; and 5) when executing the movement, a majority of SMA neurons increased their activity and were largely selective for the use of either the ipsilateral or contralateral arm. In contrast, the activity of pre-SMA neurons tended to be suppressed. These findings point to the functional specialization of the two areas, with respect to receiving associative cues, information processing, motor behavior planning, and movement execution.

8.
We examined neuronal activity in the dorsal and ventral premotor cortex (PMd and PMv, respectively) to explore the role of each motor area in processing visual signals for action planning. We recorded neuronal activity while monkeys performed a behavioral task during which two visual instruction cues were given successively with an intervening delay. One cue instructed the location of the target to be reached, and the other indicated which arm was to be used. We found that the properties of neuronal activity in the PMd and PMv differed in many respects. After the first cue was given, PMv responses mostly reflected the spatial position of the visual cue. In contrast, PMd responses also reflected what the visual cue instructed, such as which arm to use or which target to reach. After the second cue was given, PMv neurons initially responded to the cue's visuospatial features and later reflected what the two visual cues instructed, progressively increasing information about the target location. In contrast, the majority of PMd neurons responded to the second cue with activity reflecting a combination of information supplied by the first and second cues. Such activity, already reflecting a forthcoming action, appeared with short latencies (<400 ms) and persisted throughout the delay period. In addition, both the PMv and PMd showed bilateral representation of visuospatial information and motor-target or effector information. These results further elucidate the functional specialization of the PMd and PMv during the processing of visual information for action planning.

9.
Recent findings of vestibular responses in part of the visual cortex, the dorsal medial superior temporal area (MSTd), indicate that vestibular signals might contribute to cortical processes that mediate the perception of self-motion. We tested this hypothesis in monkeys trained to perform a fine heading discrimination task solely on the basis of inertial motion cues. The sensitivity of the neuronal responses was typically lower than that of psychophysical performance, and only the most sensitive neurons rivaled behavioral performance. Responses recorded in MSTd were significantly correlated with perceptual decisions, and the correlations were strongest for the most sensitive neurons. These results support a functional link between MSTd and heading perception based on inertial motion cues. These cues seem mainly to be of vestibular origin, as labyrinthectomy produced a marked elevation of psychophysical thresholds and abolished MSTd responses. This study provides evidence that links single-unit activity to spatial perception mediated by vestibular signals, and supports the idea that the role of MSTd in self-motion perception extends beyond optic flow processing.

10.
Many previous studies have focused on how humans combine inputs provided by different modalities for the same physical property. However, it is not yet very clear how different senses providing information about our own movements combine in order to provide a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited lifetime star-field rotations), with the visual scene turning 1.5 times faster when combined (unnoticed conflict). Then they were asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow that of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue that it could be combined with during the encoding. Therefore, turns in each modality, visual, and vestibular are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that with both visual and vestibular cues available, these combine in order to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.

11.
To keep a stable internal representation of the visual world as our eyes, head, and body move around, humans and monkeys must continuously adjust neural maps of visual space using extraretinal sensory or motor cues. When such movements include translation, the amount of body displacement must be weighted differently in the updating of far versus near targets. Using a memory-saccade task, we have investigated whether nonhuman primates can benefit from this geometry when passively moved sideways. We report that monkeys made appropriate memory saccades, taking into account not only the amplitude and nature (rotation vs. translation) of the movement, but also the distance of the memorized target: i.e., the amplitude of memory saccades was larger for near versus far targets. The scaling by viewing distance, however, was less than geometrically required, such that memory saccades consistently undershot near targets. Such a less-than-ideal scaling of memory saccades is reminiscent of the viewing distance-dependent properties of the vestibuloocular reflex. We propose that a similar viewing distance-dependent vestibular signal is used as an extraretinal compensation for the visuomotor consequences of the geometry of motion parallax by scaling both memory saccades and reflexive eye movements during motion through space.

12.
We examined the cellular activity in the rostral cingulate motor area (CMAr) with respect to multiple behavioral factors that ranged from the retrieval and processing of associative visual signals to the planning and execution of instructed actions. We analyzed the neuronal activity in monkeys while they performed a behavioral task in which 2 visual instruction cues were given successively with an intervening delay. One cue instructed the location of the target to be reached; the other cue instructed which arm was to be used. After a second delay, the monkey received a motor-set cue to be prepared to initiate the motor task in accordance with instructions. Finally, after a GO signal, the monkey reached for the instructed target with the instructed arm. We found that the activity of neurons in the CMAr changed profoundly throughout the behavioral task, which suggested that the CMAr participated in each of the behavioral processing steps. However, the neuronal activity was only modestly selective for the spatial location of the visual signal. We also found that selectivity for the instructional information delivered with the signals (target location and arm use) was modest. Furthermore, during the motor-set and movement periods, few CMAr neurons exhibited selectivity for such motor parameters as the location of the target or the arm to be used. The abundance and robustness of the neuronal activity within the CMAr that reflected each step of the behavioral task and the modest selectivity of the same cells for sensorimotor parameters are strikingly different from the preponderance of selectivity that we have observed in other frontal areas. Based on these results, we propose that the CMAr participates in monitoring individual behavioral events to keep track of the progress of required behavioral tasks. On the other hand, CMAr activity during motor planning may reflect the emergence of a general intention for action.

13.
In behavioral tasks, previous research has found that advanced Spanish learners of Dutch rely on duration cues to distinguish Dutch vowels, while Dutch listeners rely on spectral cues. This study tested whether language-specific cue weighting is reflected in preattentive processing. The mismatch negativity (MMN) of Dutch and Spanish participants was examined in response to spectral and duration cues in Dutch vowels. The MMN at frontal and mid sites was weaker and peaked later at Fz for Spanish than for Dutch listeners for the spectrally cued contrasts, whereas both groups responded similarly to the duration cue. In line with overt categorization behavior, these MMN data indicate that preattentive cue weighting depends on the listeners' language experience.

14.
Ablation of entorhinal/perirhinal cortices prevents learning associations between visual stimuli used as cues in reward schedules and the schedule state. Single neurons in perirhinal cortex are sensitive to associations between the cues and the reward schedules. To investigate whether neurons in the entorhinal cortex have similar sensitivities, we recorded single neuronal activity from two rhesus monkeys while the monkeys performed a visually cued reward schedule task. When the cue was related to the reward schedules, the monkeys made progressively fewer errors as the schedule state became closer to the reward state, showing that the monkeys were sensitive to the cue and the schedule state. Of 75 neurons recorded in the entorhinal cortex during task performance, about 30% responded. About half of these responded after cue presentation. When the relation of the cue to the reward schedules was random, the cue-related responses disappeared or lost their selectivity for schedule states. The responses of the entorhinal cortex neurons are similar to responses of perirhinal cortex neurons in that they are selective for the associative relationships between cues and reward schedules. However, they are particularly selective for the first trial of a new schedule, in contrast to perirhinal cortex where responsivity to all schedule states is seen. A different subpopulation of entorhinal neurons responded to the reward, unlike perirhinal neurons which respond solely to the cue. These results indicate that the entorhinal signals carry associative relationships between the visual cues and reward schedules, and between rewards and reward schedules, that are not simply derived from perirhinal cortex by feed-forward serial processing.

15.
When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.

16.
Medullary dorsal horn neurons with trigeminal sensory properties have been previously shown to have additional responses associated with cues relevant to the successful execution of a behavioral task. These "task-related" responses were evoked by environmental cues but were independent of the specific stimulus parameters. We have examined further the characteristics of task-related responses in medullary dorsal horn neurons of three monkeys. Single-unit activity was recorded while the monkeys were performing behavioral tasks that required them to discriminate thermal or visual stimuli for a liquid reward. Forty-five percent (34/75) of the medullary dorsal horn neurons studied exhibited task-related activity that was significantly correlated with the stereotypical behavioral events that occurred during the tasks. Similar events occurring outside of the task produced no response. In addition to the task-related activity of these medullary dorsal horn neurons, responses to mechanical and/or thermal stimuli presented within the neuron's receptive field were demonstrated in 28 of 34 cases. These sensory responses also were evoked by the same stimuli presented outside of the behavioral task. Fifteen of the neurons with task-related responses could be activated antidromically from thalamic stimulating electrodes. Task-related responses were categorized according to their relationship to the three phases of the behavioral trial: trial initiation, trial continuation, and trial termination. Although an individual task-related response was associated with a single behavioral event, most medullary dorsal horn neurons (30/34) exhibited a reproducible pattern of task-related responses that occurred during more than one phase of the trial. Trial-initiation task-related responses were subdivided depending on their correlation with specific events that occurred within that phase of the trial. 
One-third of the 18 excitatory trial-initiation responses were associated with the visual stimulus that cued the monkey to begin the trial; the remaining two-thirds were associated with the monkey's press of the button that actually initiated the trial. Trial-continuation task-related responses (observed while the monkey waited for a thermal stimulus that triggered a rewarded motor response) were shown to be independent of the actual temperature of the thermal stimulus. In addition these trial-continuation task-related responses were also noted during trials without a thermal stimulus, in which the trigger cue was the onset of a light (in a visual task).(ABSTRACT TRUNCATED AT 400 WORDS)

17.
We report here that shape-from-shading stimuli evoked a long-latency contextual pop-out response in V1 and V2 neurons of macaque monkeys, particularly after the monkeys had used the stimuli in a behavioral task. The magnitudes of the pop-out responses were correlated to the monkeys' behavioral performance, suggesting that these signals are neural correlates of perceptual pop-out saliency. The signals changed with the animal's behavioral adaptation to stimulus contingencies, indicating that perceptual saliency is also a function of experience and behavioral relevance. The evidence that higher-order stimulus attributes and task experience can influence early visual processing supports the notion that perceptual computation is an interactive and plastic process involving multiple cortical areas.

18.
This study aimed to identify neural mechanisms that underlie perceptual learning in a visual-discrimination task. We trained two monkeys (Macaca mulatta) to determine the direction of visual motion while we recorded from their middle temporal area (MT), which in trained monkeys represents motion information that is used to solve the task, and lateral intraparietal area (LIP), which represents the transformation of motion information into a saccadic choice. During training, improved behavioral sensitivity to weak motion signals was accompanied by changes in motion-driven responses of neurons in LIP, but not in MT. The time course and magnitude of the changes in LIP correlated with the changes in behavioral sensitivity throughout training. Thus, for this task, perceptual learning does not appear to involve improvements in how sensory information is represented in the brain, but rather how the sensory representation is interpreted to form the decision that guides behavior.

19.
This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband (0.5-20 kHz; BB) noises, with sound levels between 30 and 60 dB, A-weighted (dBA). To deny listeners any consistent azimuth-related head-shadow cues, stimuli were randomly interleaved. A plug immediately degraded azimuth performance, as evidenced by a sound level-dependent shift ("bias") of responses contralateral to the plug, and a level-dependent change in the slope of the stimulus-response relation ("gain"). Although the azimuth bias and gain were highly correlated, they could not be predicted from the plug's acoustic attenuation. Interestingly, listeners performed best for low-intensity stimuli at their normal-hearing side. These data demonstrate that listeners rely on monaural spectral cues for sound-source azimuth localization as soon as the binaural difference cues break down. The elevation response components were also affected by the plug: elevation gain depended on both stimulus azimuth and on sound level and, as for azimuth, localization was best for low-intensity stimuli at the hearing side. Our results show that the neural computation of elevation incorporates a binaural weighting process that relies on the perceived, rather than the actual, sound-source azimuth. It is our conjecture that sound localization ensues from a weighting of all acoustic cues for both azimuth and elevation, in which the weights may be partially determined, and rapidly updated, by the reliability of the particular cue.

20.
We examined the influence of dynamic visual scenes on the motion perception of subjects undergoing sinusoidal (0.45 Hz) roll swing motion at different radii. The visual scenes were presented on a flatscreen monitor with a monocular 40° field of view. There were three categories of trials: (1) trials in the dark; (2) trials where the visual scene matched the actual motion; and (3) trials where the visual scene showed swing motion at a different radius. Subjects verbally reported perceptions of head tilt and translation. When the visual and vestibular cues differed, subjects reported perceptions that were geometrically consistent with a radius between the radii of the visual scene and the actual motion. Even when sensations did not match either the visual or vestibular stimuli, reported motion perceptions were consistent with swing motions combining elements of each. Subjects were generally unable to detect cue conflicts or judge their own visual–vestibular biases, which suggests that the visual and vestibular self-motion cues are not independently accessible.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号