Similar Documents
20 similar documents found.
1.
We examined the effect of temporal context on the discrimination of intervals marked by auditory, visual and tactile stimuli. Subjects were asked to compare the duration of an interval immediately preceded by an irrelevant “distractor” stimulus with that of an interval with no distractor. For short interval durations, the presence of the distractor greatly affected the apparent duration of the test stimulus: short distractors caused the test interval to appear shorter, and vice versa. For very short reference durations (≤100 ms), the contextual effects were large, changing perceived duration by up to a factor of two. The effect of distractors diminished steadily for longer reference durations, falling to zero for durations greater than 500 ms. We found similar results for intervals defined by visual flashes, auditory tones and brief finger vibrations, all falling to zero effect at 500 ms. Under appropriate conditions, there were strong cross-modal interactions, particularly from audition to vision. We also measured Weber fractions for duration discrimination and showed that, under the conditions of this experiment, Weber fractions decreased steadily with duration, following a square-root law, similarly for all three modalities. The magnitude of the distractors' effect on apparent duration correlated well with the Weber fraction, showing that when duration discrimination was relatively more precise, the context dependency was smaller. The results were well fit by a simple Bayesian model combining noisy estimates of duration with the action of a resonance-like mechanism that tended to regularize the sound sequence intervals.
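The square-root law and the Bayesian account summarized above can be made concrete with a small simulation. The sketch below is a minimal illustration, not the authors' fitted model: it assumes sensory noise with SD k·√d (so the Weber fraction falls as 1/√d) and a contextual prior whose width grows linearly with duration, which makes the contextual pull fade for long intervals; all parameter values are arbitrary.

```python
import numpy as np

def perceived_duration(d_test, d_context, k=1.0, c=0.1):
    """Precision-weighted (Bayesian) fusion of a noisy duration estimate
    with a contextual prior set by the distractor interval.

    Illustrative assumptions, not the paper's fitted model:
      - sensory SD follows the square-root law: sigma_sense = k * sqrt(d),
        so the Weber fraction sigma/d falls off as 1/sqrt(d);
      - the contextual prior broadens linearly with duration
        (sigma_prior = c * d), so its relative pull fades for long intervals.
    """
    var_sense = (k * np.sqrt(d_test)) ** 2  # likelihood variance (ms^2)
    var_prior = (c * d_test) ** 2           # prior variance (ms^2)
    w_context = var_sense / (var_sense + var_prior)
    return d_test + w_context * (d_context - d_test)

# A 50-ms distractor strongly compresses a 100-ms test interval...
print(perceived_duration(100, 50))    # -> 75.0
# ...but a comparable context barely moves a 600-ms interval,
# mirroring the vanishing context effect beyond 500 ms.
print(perceived_duration(600, 550))   # -> ~592.9
```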

2.
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately for attention directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies, an occipital selection negativity for attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
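The abstract only says the unisensory contributions were "extracted by signal processing techniques". A common approach in this literature, shown below purely as an assumed illustration (the function name and subtraction logic are mine, not necessarily the authors' method), is to remove the time-shifted unisensory auditory ERP from the multisensory waveform to isolate the visual contribution at a given SOA:

```python
import numpy as np

def extract_visual_erp(erp_av, erp_a, soa_ms, fs=500):
    """Hypothetical illustration: isolate the visual contribution to a
    multisensory ERP by subtracting the unisensory auditory response,
    shifted to the auditory onset within the AV pair.

    erp_av, erp_a : 1-D arrays time-locked to visual onset (same length)
    soa_ms        : auditory onset relative to visual onset (ms)
    fs            : sampling rate (Hz)
    """
    shift = int(round(soa_ms * fs / 1000.0))
    a_shifted = np.roll(erp_a, shift)
    # zero out the samples wrapped around by np.roll
    if shift > 0:
        a_shifted[:shift] = 0.0
    elif shift < 0:
        a_shifted[shift:] = 0.0
    return erp_av - a_shifted
```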

3.
Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended to visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus than when attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or in the same region of space.

4.
A frequent approach to studying interactions of the auditory and the visual system is to measure event-related potentials (ERPs) to auditory, visual, and auditory-visual stimuli (A, V, AV). A nonzero result of the AV − (A + V) comparison indicates that the sensory systems interact at a specific processing stage. Two possible biases weaken the conclusions drawn with this approach. First, subtracting two ERPs from one requires that A, V, and AV do not share any common activity. We have shown before (Gondan and Röder in Brain Res 1073–1074:389–397, 2006) that the problem of common activity can be avoided by using an additional tactile stimulus (T) and evaluating the ERP difference (T + TAV) − (TA + TV). A second possible confound is the modality shift effect (MSE): for example, the auditory N1 is increased if an auditory stimulus follows a visual stimulus, whereas it is smaller if the modality is unchanged (ipsimodal stimulus). Bimodal stimuli might be affected less by MSEs because at least one component always matches the preceding trial. Consequently, an apparent amplitude modulation of the N1 would be observed in AV. We tested the influence of MSEs on auditory-visual interactions by comparing the results of AV − (A + V) using (a) all stimuli and (b) only ipsimodal stimuli. (a) and (b) differed around 150 ms, indicating that AV − (A + V) is indeed affected by the MSE. We then formally and empirically demonstrate that (T + TAV) − (TA + TV) is robust against possible biases due to the MSE.
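The logic of the two contrasts can be spelled out with a simple additive decomposition. Writing each ERP as the sum of unisensory components (a, v, t), interaction terms I, and common activity γ present in every stimulus-evoked response (a standard modeling assumption, sketched here rather than quoted from the paper):

```latex
\begin{align*}
\mathrm{AV} - (\mathrm{A} + \mathrm{V})
  &= (a + v + I_{AV} + \gamma) - (a + \gamma) - (v + \gamma)
   = I_{AV} - \gamma,\\
(\mathrm{T} + \mathrm{TAV}) - (\mathrm{TA} + \mathrm{TV})
  &= (t + \gamma) + (t + a + v + I_{TA} + I_{TV} + I_{AV} + I_{TAV} + \gamma)\\
  &\quad - (t + a + I_{TA} + \gamma) - (t + v + I_{TV} + \gamma)
   = I_{AV} + I_{TAV}.
\end{align*}
```

The first contrast leaves the confound −γ in the difference, whereas in the second all common activity and the purely tactile interaction terms cancel, leaving only the audio-visual (and residual trimodal) interaction.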

5.
The temporal integration of stimuli in different sensory modalities plays a crucial role in multisensory processing. Previous studies using temporal-order judgments to determine the point of subjective simultaneity (PSS) with multisensory stimulation yielded conflicting results on modality-specific delays. While it is known that the relative intensities of stimuli from different sensory modalities affect their perceived temporal order, we hypothesized that some of these discrepancies might be explained by a previously overlooked confounding factor, namely the duration of the stimulus. We therefore studied the influence of both factors on the PSS in a spatial audiovisual temporal-order task. In addition to confirming previous results on the role of stimulus intensity, we report that varying the duration of an audiovisual stimulus pair also affects the perceived temporal order of the auditory and visual stimulus components. Although individual PSS values varied from negative to positive across participants, we found a systematic shift of PSS values in all participants toward a common attractor value with increasing stimulus duration. This resulted in a stabilization of PSS values with increasing stimulus duration, indicative of a mechanism that compensates for individual imbalances between sensory modalities, which might arise from attentional biases toward one modality at short stimulus durations.

6.
Saccades to combined audiovisual stimuli often have reduced saccadic reaction times (SRTs) compared with those to unimodal stimuli. Neurons in the intermediate/deep layers of the superior colliculus (dSC) are capable of integrating converging sensory inputs to influence the time to saccade initiation. To identify how neural processing in the dSC contributes to reducing SRTs to audiovisual stimuli, we recorded activity from dSC neurons while monkeys generated saccades to visual or audiovisual stimuli. To evoke crossmodal interactions of varying strength, we used auditory and visual stimuli of different intensities, presented either in spatial alignment or to opposite hemifields. Spatially aligned audiovisual stimuli evoked the shortest SRTs. For low-intensity stimuli, the response to the auditory component of the aligned audiovisual target increased the activity preceding the response to the visual component, accelerating the onset of the visual response and facilitating the generation of shorter-latency saccades. For high-intensity stimuli, the auditory and visual responses occurred much closer together in time, so there was little opportunity for the auditory stimulus to influence pre-visual activity. Instead, the reduction in SRT for high-intensity, aligned audiovisual stimuli was correlated with increased premotor activity (activity after the visual burst but preceding the saccade-aligned burst). These data link changes in neural activity related to stimulus modality with changes in behavior. They further demonstrate that crossmodal interactions are not limited to the initial sensory activity but can also influence premotor activity in the SC.

7.
The integration of visual and auditory inputs in the human brain occurs only if the components are perceived in temporal proximity, that is, when the intermodal time difference falls within the so-called subjective synchrony range. We used the midpoint of this range to estimate the point of subjective simultaneity (PSS). We measured the PSS for audio-visual (AV) stimuli in a synchrony judgment task, in which subjects had to judge a given AV stimulus using three response categories (audio first, synchronous, video first). The relevant stimulus manipulation was the duration of the auditory and visual components. Results for unimodal auditory and visual stimuli have shown that the perceived onset shifts to relatively later positions with increasing stimulus duration. These unimodal shifts should be reflected in changing PSS values when AV stimuli with different durations of the auditory and visual components are used. The results for 17 subjects indeed showed a significant shift of the PSS for different duration combinations of the stimulus components. Because the shifts were approximately equal for duration changes in either component, no net shift of the PSS was observed as long as the durations of the two components were equal. This result indicates the need to appropriately account for unimodal timing effects when quantifying intermodal synchrony perception.
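Since the PSS is defined above as the midpoint of the subjective synchrony range, it can be estimated directly from the distribution of "synchronous" responses. The sketch below is an assumed linear-interpolation recipe; the abstract does not specify the authors' actual fitting procedure:

```python
import numpy as np

def pss_from_synchrony_judgments(soas, p_sync):
    """Estimate the point of subjective simultaneity as the midpoint of
    the subjective synchrony range, taken here as the span between the
    two SOAs where 'synchronous' responses fall to half their peak rate.

    soas   : sorted array of audio-visual onset asynchronies (ms)
    p_sync : proportion of 'synchronous' judgments at each SOA
    """
    half = p_sync.max() / 2.0
    above = np.where(p_sync >= half)[0]
    lo, hi = above[0], above[-1]

    # linear interpolation to the half-height crossing between indices i, j
    def cross(i, j):
        return soas[i] + (half - p_sync[i]) * (soas[j] - soas[i]) / (p_sync[j] - p_sync[i])

    left = soas[lo] if lo == 0 else cross(lo - 1, lo)
    right = soas[hi] if hi == len(soas) - 1 else cross(hi, hi + 1)
    return (left + right) / 2.0

soas = np.array([-240, -160, -80, 0, 80, 160, 240])  # audio-lead negative (ms)
p    = np.array([0.05, 0.2, 0.7, 0.9, 0.8, 0.3, 0.1])
print(pss_from_synchrony_judgments(soas, p))         # -> 8.0 ms
```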

8.
Single-unit recordings from motor cortex (area 4) were obtained in two monkeys trained to perform simple flexion and extension movements of the arm in response to somesthetic, visual, and auditory signals. All neurons tested showed movement-related responses that were identical for equivalent movements irrespective of the modality of the triggering stimulus. Progressively longer reaction times were always associated with progressively longer latencies of unit responses. When visual and auditory stimuli were presented simultaneously, the intensity and the duration of both motor and unitary responses remained unchanged as if only one stimulus (auditory) had been given. When the auditory stimulus was appropriately delayed with respect to the visual one, shortening of motor reaction time was observed with a corresponding shortening of the latency of unit responses. In addition to movement-related responses, some neurons showed sensory-related responses mainly to the somesthetic stimulus (37%) and more rarely to the auditory (11%) and visual stimuli (3%). These "sensory" responses preceded and were independent of the movement-related responses; they showed no obvious correlation with the reaction time. Whenever tested, the somatosensory responses persisted after extinction of the motor responses. These findings suggest that, in our experimental conditions, area 4 neurons of the monkey are not involved in the early processing of sensory information required for the initiation of simple, triggered movements. Rather, they appear to generate signals that are mainly related to the characteristics of the motor responses.

9.
Eyeblink performance parameters were investigated in subjects engaged in a series of duration discrimination tasks differing in modality (visual vs. auditory) and presentation schedule (fixed vs. variable). Visual tasks were associated with slower blink rates and shorter blink durations than auditory tasks. Sensitivity measures suggested that this difference might be due, in part, to the greater difficulty of the visual tasks. Blink latency declined within and across tasks and was longer for target stimuli that were followed by responses. Since the target stimuli were the short-duration stimuli, the latter effect could be a compound of two opposing effects: the first, related to the response, tends to delay the blink on target trials, while the second, related to decision processes, tends to increase latencies on nontarget trials. Schedule of stimulus presentation did not affect the dependent measures as predicted. RT was unaffected by either of the experimental variables.

10.
This article investigated both the ability of naive human subjects to learn interval production and the properties of learning generalization across modalities and interval durations that varied systematically from the over-trained interval. Human subjects trained on a 450-, 650-, or 850-ms single-interval production task, using auditory stimuli to define the intervals, showed a significant decrease in performance variability with intensive training. This learning generalized to the visual modality and to non-trained durations following a Gaussian transfer pattern. However, the learning carryover followed different rules depending on the duration of the trained interval: (1) the dispersion of the generalization curve increased as a function of the trained interval, (2) the generalization pattern was tilted to the right in the visual condition, and (3) the transfer magnitude for 650 ms was less prominent than for the other two intervals. These findings suggest the existence of neural circuits that are tuned to specific time lengths and that show different temporal processing properties depending on their preferred interval duration.
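The transfer pattern described above can be summarized in a small function: a Gaussian generalization curve centered on the trained interval, whose dispersion grows with that interval and whose peak can be tilted rightward for the visual condition. The gain, spread, and tilt values below are arbitrary placeholders, not the paper's estimates:

```python
import numpy as np

def generalization(d, d_trained, gain=0.4, spread_frac=0.3, tilt=0.0):
    """Gaussian transfer of timing learning to non-trained durations.

    d           : probe interval duration (ms)
    d_trained   : over-trained duration (ms)
    spread_frac : curve dispersion grows with the trained interval
    tilt        : rightward shift of the curve peak (ms), e.g. for the
                  visual-modality transfer reported above
    """
    sigma = spread_frac * d_trained  # dispersion scales with trained interval
    return gain * np.exp(-0.5 * ((d - d_trained - tilt) / sigma) ** 2)

# Maximal transfer at the trained duration; a tilted, broader curve for
# visual probes around a longer trained interval:
print(generalization(650, 650))             # -> 0.4
print(generalization(850, 650, tilt=50.0))  # -> ~0.30
```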

11.
Sensory dominance in combinations of audio, visual and haptic stimuli
Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component, a 'visual dominance' effect. The current study investigated the sensory dominance phenomenon further in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect in bi-sensory trials of combined haptic–visual stimuli, but no bias toward either sensory modality in bi-sensory trials of haptic–auditory stimuli. When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding to only two corresponding sensory signals than errors of responding to only a single sensory modality; however, there were no biases toward either sensory modality (or sensory pair) in the distribution of the two error types (i.e., responding only to a single stimulus or only to a pair of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, this dominance is limited to bi-sensory combinations in which the visual signal is paired with a single other stimulus. In a tri-sensory combination, when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than that of missing only one signal, and the visual dominance therefore disappears.
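The closing observation has a simple probabilistic reading. If each of the three signals were missed independently with probability p (a back-of-the-envelope assumption, not the authors' analysis), then

```latex
P(\text{miss exactly one}) = 3p(1-p)^2, \qquad
P(\text{miss exactly two}) = 3p^2(1-p), \qquad
\frac{P(\text{one})}{P(\text{two})} = \frac{1-p}{p}.
```

For small p the ratio (1-p)/p is large, e.g. 9:1 at p = 0.1, consistent with misses of two signals being much rarer than misses of one.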

12.
Our perception of time is affected by the modality in which it is conveyed. Moreover, certain temporal phenomena appear to exist in only one modality. The perception of temporal regularity or structure (e.g., the ‘beat’) in rhythmic patterns is one such phenomenon: visual beat perception is rare. This modality specificity of beat perception is puzzling, as the durations that comprise rhythmic patterns are much longer than the limits of visual temporal resolution. Moreover, the optimization that beat perception provides for memory of auditory sequences should be equally relevant to visual sequences. Why, then, does beat perception appear to be modality-specific? One possibility is that the nature of the visual stimulus plays a role. Previous studies have usually used brief stimuli (e.g., light flashes) to present visual rhythms. In the current study, a rotating line that appeared sequentially in different spatial orientations was used to present a visual rhythm. Discrimination accuracy for visual rhythms and auditory rhythms was compared for different types of rhythms. The rhythms either had a regular temporal structure that has previously been shown to induce beat perception in the auditory modality, or they had an irregular temporal structure without beat-inducing qualities. Overall, the visual rhythms were discriminated more poorly than the auditory rhythms. The beat-based structure, however, increased accuracy for visual as well as auditory rhythms. These results indicate that beat perception can occur in the visual modality and can improve performance on a temporal discrimination task when certain types of stimuli are used.

13.
To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by the sudden onset of events in the auditory modality. Relatively loud auditory white-noise bursts were presented to subjects at random, long inter-trial intervals. The noise bursts were either presented alone or paired with a visual stimulus, with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions (auditory alone, visual alone, and paired visual/auditory) were randomly intermixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them, without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout each experimental session. Compared with ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onsets can thus be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

14.
The time estimation paradigm allows the recording of anticipatory attention for an upcoming stimulus unconfounded by any anticipatory motor activity. Three seconds after a warning signal (WS), subjects have to press a button. A button press within a time window from 2,850 ms to 3,150 ms after the WS is considered ‘correct’; a movement prior to 2,850 ms after the WS is labelled ‘too early’, and a movement after 3,150 ms is labelled ‘too late’. Two seconds after the button press, a Knowledge of Results (KR) stimulus is presented, informing the subject about the correctness of the response. The Stimulus Preceding Negativity (SPN) is a slow wave recorded prior to the presentation of the KR stimulus. The SPN has a right-hemisphere preponderance and is based on activity in a network in which the prefrontal cortex, the insula of Reil and the parietal cortex are crucial. In the present study we asked two questions: (1) does the SPN show modality specificity, and (2) does the use of verbal KR stimuli influence the right-hemisphere preponderance? Auditory and visual stimuli were presented, in a verbal mode and in a non-verbal mode. SPN amplitudes prior to visual stimuli were larger over the visual cortex than those prior to auditory stimuli. SPN amplitudes prior to auditory stimuli were larger over the frontal areas than those prior to visual stimuli. The use of verbal stimuli did not influence the right-hemisphere preponderance. We concluded that, apart from the supramodal effect of KR stimuli in general, there is first a modality-specific activation of the relevant sensory cortical areas. The supramodal network underlying attention for and use of KR information is activated either from different sensory areas or from language-processing cortical areas.

15.
The present study aimed to investigate the effect of reward and stimulus modality of feedback stimuli on the stimulus-preceding negativity. A time estimation task was performed, and (a) the motivational level (reward and no-reward) and (b) the stimulus modality (auditory and visual) of feedback stimuli were manipulated. The results demonstrated that the stimulus-preceding negativity was larger in the reward than in the no-reward condition, especially at the right frontal and the left occipito-temporal areas. Moreover, the stimulus-preceding negativity prior to visual feedback stimuli was larger over the occipital areas than in the auditory condition. In contrast, at the prefrontal areas, the amplitude prior to auditory feedback stimuli was larger than in the visual condition. Our results revealed that the prefeedback stimulus-preceding negativity was independently influenced by stimulus modality and motivation.

16.
The purpose of the present study was to determine whether preweanling rats respond differentially to the intensity and energy source of a stimulus. Previous studies have suggested that infants process compound stimuli based on net stimulus intensity regardless of the energy source of the compound's elements, but more direct tests of the infant's response to the stimulus attributes of intensity and energy source have been needed. This response was tested for auditory and visual stimuli that had been equated (Experiment 1) in terms of perceived intensity (low and high). The intensity level and energy source of the stimuli were varied independently within nonassociative (Experiment 2) and associative (Experiment 3) procedures. The overall results indicate that stimuli of low perceived intensity were processed in terms of their intensity, whereas high-intensity stimuli were processed on the basis of energy source. These results are relevant to contemporary issues of cognitive development in humans and animals. © 1998 John Wiley & Sons, Inc. Dev Psychobiol 32: 199–214, 1998

17.
Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory non-target. Steenken et al. (Brain Res 1220:150–156, 2008) presented an additional three-second white-noise masker in the background. Increasing the masker level had opposite effects on SRTs in spatially coincident versus disparate stimulus configurations: saccadic responses to coincident visual–auditory stimuli were slowed down, whereas saccadic responses to disparate stimuli were speeded up. Here we show that the time-window-of-integration model accounts for this observation through variation of a perceivable-distance parameter in the second stage of the model, whose value does not depend on the stimulus onset asynchrony between target and non-target.
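The time-window-of-integration (TWIN) model referenced here has a two-stage structure that is easy to simulate. The Monte Carlo sketch below is illustrative only: the exponential stage-1 distributions and all parameter values are assumptions, and the perceivable-distance effect is reduced to the sign and size of a second-stage delta.

```python
import numpy as np

rng = np.random.default_rng(0)

def twin_mean_srt(n=100_000, window=200.0, delta=30.0,
                  mu_v=60.0, mu_a=50.0, base=120.0):
    """Monte Carlo sketch of the time-window-of-integration (TWIN) model.

    Stage 1: a peripheral race between the visual target and the auditory
    non-target; integration occurs only if the auditory process finishes
    first and the visual process arrives within the time window.
    Stage 2: a base second-stage time, shortened when integration occurs
    (delta > 0) or lengthened (delta < 0). Parameter values are
    illustrative, not fitted ones.
    """
    v = rng.exponential(mu_v, n)   # visual peripheral processing time (ms)
    a = rng.exponential(mu_a, n)   # auditory peripheral processing time (ms)
    integrated = (a < v) & (v - a < window)
    srt = v + base - delta * integrated
    return srt.mean()

print(twin_mean_srt(delta=30.0))   # coincident pair: facilitation
print(twin_mean_srt(delta=-20.0))  # disparate pair: inhibition
```

In this sketch, a masker that changes the perceivable distance between target and non-target need only move delta, shrinking the facilitation for coincident pairs and the inhibition for disparate ones, while the stage-1 race (and hence the dependence on SOA) is untouched.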

18.
Although a number of studies support an arousal-theory interpretation of differential sensation-seeking behavior, the conditions under which arousal correlates of this personality dimension are manifest are as yet unclear. Both theory and research suggest that among the external factors affecting the differential arousal response are stimulus relevance and stimulus intensity. The present study assessed the impact of these factors on both psychophysiological and behavioral responses. Subjects were preselected to represent the extremes of the sensation-seeking dimension, then exposed to auditory and visual presentations of a series of sexual and aggressive stimuli of systematically varied intensity. High sensation seekers gave larger-amplitude SCRs to violent stimuli and larger initial responses to sexual stimuli presented visually, while verbal response intensities showed the opposite pattern. Overall, the results supported an arousal-theory interpretation of sensation seeking, but suggested that the probability and magnitude of group differences is a somewhat complex function of stimulus intensity, stimulus modality, and perhaps other factors not yet assessed.

19.
The effects of intensity, duration and modality of a warning signal on tendon (T) reflexes evoked during the initial phase of a 4-s preparatory period were investigated. Reflexes were evoked simultaneously in both legs, from 0 to 350 msec after warning signal onset in steps of 50 msec. The required response was a plantar flexion of the right foot. A facilitation of reflexes was seen within 150 msec after warning signal onset, with a somewhat longer latency for visual than for auditory signals. An effect of intensity was found in the auditory modality only, where the louder of two warning signals yielded a clear peak at 100 msec while the softer stimulus caused no significant departure of Achilles tendon reflexes from baseline. The time course of facilitation in the auditory modality was also influenced by warning signal duration, although this effect was only marginally significant. There were no effects of physical warning signal parameters on reaction time. A comparison with an experiment in which non-signal stimuli were presented alone pointed to aspects of the preparatory process that were manifest at the spinal level as early as 200 msec after warning signal onset.

20.
Fu S, Fan S, Chen L. Psychophysiology, 2003, 40(5): 770-775
The present study investigated whether there is a visual counterpart of the auditory mismatch negativity. Event-related potentials were recorded while subjects performed a spatial frequency discrimination task. "Match" and "nonmatch" stimuli were specifically categorized according to whether the second stimulus had the same orientation as the first stimulus in each trial. Nonmatch stimuli elicited larger occipital P84 and occipital and temporal N192 components than match stimuli, indicating the existence of involuntary processing in the visual modality. Moreover, the amplitude of the change-related N192 was larger at the short stimulus onset asynchrony (SOA) than at the long SOA, suggesting that the visual modality involves a mismatch process similar to that of the auditory modality. The underlying neural representation (i.e., the visual memory trace) seems to develop more easily and decay faster than its auditory counterpart.
