Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The body schema, a constantly updated representation of the body and its parts, has been suggested to emerge from body part-specific representations that integrate tactile, visual, and proprioceptive information about the identity and posture of the body. Studies using different approaches have provided evidence for a distinct representation of the visual space ~30 cm around the upper body, and predominantly the hands, termed the peripersonal space. In humans, peripersonal space representations have often been investigated with a visual–tactile crossmodal congruency task. We used this task to test whether a representation of peripersonal space also exists around the feet, and to explore possible interactions between the peripersonal space representations of different body parts. In Experiment 1, tactile stimuli to the hands and feet were judged according to their elevation while visual distractors presented near the same limbs had to be ignored. Crossmodal congruency effects did not differ between the two types of limbs, suggesting a representation of peripersonal space around the feet as well. In Experiment 2, tactile stimuli were presented to the hands, and visual distractors were flashed either near the participant’s foot, near a fake foot, or in distant space. Crossmodal congruency effects were larger in the real foot condition than in the two other conditions, indicating interactions between the peripersonal space representations of foot and hand. Furthermore, results of all three conditions showed that vision of the stimulated body part, compared to only proprioceptive input about its location, strongly influences crossmodal interactions in tactile perception, affirming the central role of vision in the construction of the body schema.
Tobias Schicke
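For readers unfamiliar with the measure, the crossmodal congruency effect (CCE) used in this line of work is the mean reaction-time cost of incongruent relative to congruent visual distractors. Below is a minimal Python sketch of that computation; the RT values and condition labels are hypothetical, not data from the study.

import numpy as np

def crossmodal_congruency_effect(rt_congruent, rt_incongruent):
    # CCE: mean RT on incongruent trials minus mean RT on congruent trials.
    # Positive values indicate interference from incongruent distractors.
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Hypothetical RTs (ms) for tactile elevation judgements at each limb
rt = {
    "hand": {"congruent": [412, 398, 430], "incongruent": [498, 512, 475]},
    "foot": {"congruent": [405, 421, 417], "incongruent": [490, 503, 488]},
}
for limb, data in rt.items():
    cce = crossmodal_congruency_effect(data["congruent"], data["incongruent"])
    print(f"{limb}: CCE = {cce:.1f} ms")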

2.
Seeing one’s own body (either directly or indirectly) can influence visuotactile crossmodal interactions. Previously, it has been shown that even viewing a simple line drawing of a hand can modulate such crossmodal interactions, as if viewing the picture of a hand somehow primes the representation of one’s own hand. However, factors other than the sight of a symbolic picture of a hand may have modulated the crossmodal interactions reported in previous research. In the present study, we examined the crossmodal modulatory effects of viewing five different visual images (a photograph of a hand, a line drawing of a hand, a line drawing of a car, a U-shape, and an ellipse) on tactile performance. Participants made speeded discrimination responses regarding the location of brief vibrotactile targets presented to either the tip or base of their left index finger, while trying to ignore visual distractors presented to either the left or right of central fixation. We compared the visuotactile congruency effects elicited when the five different visual images were presented superimposed over the visual distractors. Participants’ tactile discrimination performance was modulated to a significantly greater extent by viewing the photograph of a hand than by viewing the outline drawing of a hand. No such crossmodal congruency effects were observed in any of the other conditions. These results therefore suggest that visuotactile interactions are specifically modulated by the image of a hand rather than just by any simple orientation cues that the image of a hand may provide.
Yuka Igarashi

3.
In this study we investigated the effect of the directional congruency of tactile, visual, or bimodal visuotactile apparent motion distractors on the perception of auditory apparent motion. Participants had to judge the direction in which an auditory apparent motion stream moved (left-to-right or right-to-left) while trying to ignore one of a range of distractor stimuli, including unimodal tactile or visual, bimodal visuotactile, and crossmodal (i.e., composed of one visual and one tactile stimulus) distractors. Significant crossmodal dynamic capture effects (i.e., better performance when the target and distractor stimuli moved in the same direction rather than in opposite directions) were demonstrated in all conditions. Bimodal distractors elicited more crossmodal dynamic capture than unimodal distractors, thus providing the first empirical demonstration of the effect of information presented simultaneously in two irrelevant sensory modalities on the perception of motion in a third (target) sensory modality. The results of a second experiment demonstrated that the capture effect reported in the crossmodal distractor condition was most probably attributable to the combined effect of the individual static distractors (i.e., to ventriloquism) rather than to any emergent property of crossmodal apparent motion.

4.
Research has shown that people fail to report the presence of the auditory component of suprathreshold audiovisual targets significantly more often than they fail to detect the visual component in speeded response tasks. Here, we investigated whether this phenomenon, known as the “Colavita effect”, also affects people’s perception of visuotactile stimuli. In Experiments 1 and 2, participants made speeded detection/discrimination responses to unimodal visual, unimodal tactile, and bimodal (visual and tactile) stimuli. A significant Colavita visual dominance effect was observed (i.e., participants failed to respond to touch far more often than they failed to respond to vision on the bimodal trials). This dominance of vision over touch was significantly larger when the stimuli were presented from the same position than when they were presented from different positions (Experiment 3), and still occurred even when the subjective intensities of the visual and tactile stimuli had been matched (Experiment 4), thus ruling out a simple intensity-based account of the results. These results suggest that the Colavita visual dominance effect (over touch) may result from a competition between the neural representations of the two stimuli for access to consciousness and/or the recruitment of attentional resources.
Alberto Gallace
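The Colavita effect in studies of this kind is typically quantified from the bimodal trials alone, as the proportion of trials on which only the visual component was reported minus the proportion on which only the tactile (or auditory) component was reported. A minimal sketch, using hypothetical response counts rather than the study's data:

from collections import Counter

def colavita_index(bimodal_responses):
    # Positive values indicate visual dominance: more missed touches than
    # missed visual stimuli on bimodal trials.
    counts = Counter(bimodal_responses)
    n = len(bimodal_responses)
    return (counts["visual_only"] - counts["tactile_only"]) / n

# Hypothetical bimodal-trial outcomes
responses = ["both"] * 80 + ["visual_only"] * 15 + ["tactile_only"] * 5
print(f"Colavita index: {colavita_index(responses):+.2f}")  # prints +0.10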

5.
The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90–100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect.
Robert D. McIntosh
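The dip analysis described here can be approximated by realigning each saccadic latency to distractor onset and comparing the resulting latency distribution against a no-distractor baseline; saccadic inhibition shows up as a transient negative deflection roughly 70–100 ms after the distractor. The sketch below is illustrative only (bin width, latency distributions, and the 30% delayed-saccade assumption are invented, not taken from the paper).

import numpy as np

def inhibition_timecourse(lat_distractor, lat_baseline, distractor_delay,
                          bins=np.arange(-100, 300, 10)):
    # Express saccade latencies relative to distractor onset, then subtract
    # the baseline (no-distractor) density. Negative values mark the dip in
    # saccadic frequency that defines saccadic inhibition.
    h_d, _ = np.histogram(np.asarray(lat_distractor) - distractor_delay,
                          bins=bins, density=True)
    h_b, _ = np.histogram(np.asarray(lat_baseline) - distractor_delay,
                          bins=bins, density=True)
    return bins[:-1], h_d - h_b

rng = np.random.default_rng(0)
baseline = rng.normal(180, 30, 5000)              # hypothetical latencies (ms)
delayed = rng.random(5000) < 0.3                  # ~30% of saccades inhibited
distract = np.where(delayed, rng.normal(240, 25, 5000),
                    rng.normal(180, 30, 5000))
t, diff = inhibition_timecourse(distract, baseline, distractor_delay=50)
print(t[np.argmin(diff)])  # approximate dip time relative to distractor onset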

6.
Although many studies have demonstrated that crossmodal exogenous orienting can lead to a facilitation of reaction times, the issue of whether exogenous spatial orienting also affects the accuracy of perceptual judgments has proved much more controversial. Here, we examined whether exogenous spatial attentional orienting would affect sensitivity in a temporal discrimination task. Participants judged which of two target letters, presented on the same or opposite sides, had been presented first. A spatially non-predictive tone was presented 200 ms prior to the onset of the first visual stimulus. In two experiments, we observed improved performance (i.e., a decrease in the just noticeable difference) when the target letters were presented on opposite sides and the auditory cue was presented on the side of the first visual stimulus, even when central fixation was monitored (Experiment 2). A shift in the point of subjective simultaneity was also observed in both experiments, indicating ‘prior entry’ for cued as compared to uncued first target trials. No such JND or PSS effects were observed when the auditory tone was presented after the second visual stimulus (Experiment 3), thus confirming the attentional nature of the effects observed. These findings clearly show that the crossmodal exogenous orienting of spatial attention can affect the accuracy of temporal judgments.
Valerio Santangelo
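The JND and PSS measures reported here are standard psychometric quantities: a cumulative Gaussian is fitted to the proportion of responses favouring one stimulus as a function of SOA, the PSS is the 50% point, and the JND is derived from the slope. A minimal sketch with made-up data (the SOAs and proportions below are illustrative, not the study's):

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical TOJ data: SOA in ms (positive = cued-side letter first) and
# proportion of "cued side first" responses
soa = np.array([-90, -60, -30, 0, 30, 60, 90])
p_first = np.array([0.08, 0.18, 0.35, 0.58, 0.79, 0.92, 0.97])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soa, p_first, p0=(0.0, 40.0))
jnd = sigma * norm.ppf(0.75)  # SOA separating the 50% and 75% points
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")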

7.
Semantic congruency and the Colavita visual dominance effect
Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates the effect. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated. That is, the auditory and visual stimuli could refer to the same or different objects (in Experiments 1 and 2) or audiovisual speech events (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it exerted a significant effect on certain other aspects of participants’ performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.
Camille Koppen

8.
Recognizing a natural object requires one to pool information from various sensory modalities, and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating the relationships between sensory modalities, specifically the semantic content of, and the spatial alignment between, auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time. The only spatial effect observed was a stimulus–response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of information about the auditory and visual targets. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect (i.e. longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) was observed only when the distractor was auditory. When the distractor was visual, semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli may produce large multimodal integration effects, and reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.
Clara Suied
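The benchmark for "shorter than predicted by independent processing" is commonly Miller's race-model inequality, which bounds the bimodal RT distribution by the sum of the two unimodal distributions: F_AV(t) <= F_A(t) + F_V(t). The abstract does not specify the authors' exact test, so the following is a generic sketch with simulated RTs:

import numpy as np

def race_model_violation(rt_audio, rt_visual, rt_bimodal, t_grid):
    # Empirical CDF of bimodal RTs minus the race-model bound; positive
    # values indicate facilitation beyond independent processing.
    f = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(f(rt_audio, t_grid) + f(rt_visual, t_grid), 1.0)
    return f(rt_bimodal, t_grid) - bound

rng = np.random.default_rng(0)
rt_a = rng.normal(520, 60, 200)   # hypothetical unimodal auditory RTs (ms)
rt_v = rng.normal(500, 60, 200)   # hypothetical unimodal visual RTs (ms)
rt_av = rng.normal(420, 50, 200)  # hypothetical bimodal RTs, strongly facilitated
t = np.arange(300, 700, 10)
print(f"max violation: {race_model_violation(rt_a, rt_v, rt_av, t).max():+.3f}")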

9.
Little is known about cross-modal interaction in complex object recognition. The factors influencing this interaction were investigated using simultaneous presentation of pictures and vocalizations of animals. In separate blocks, the task was to identify either the visual or the auditory stimulus, ignoring the other modality. The pictures and the sounds were congruent (same animal), incongruent (different animals) or neutral (animal paired with a meaningless stimulus). Performance in congruent trials was better than in incongruent trials, regardless of whether subjects attended the visual or the auditory stimuli, but the effect was larger in the latter case. This asymmetry persisted when a long delay was added between the stimulus and the response. Thus, the asymmetry cannot be explained by a lack of processing time for the auditory stimulus. However, the asymmetry was eliminated when low-contrast visual stimuli were used. These findings suggest that when visual stimulation is highly informative, it affects auditory recognition more than auditory stimulation affects visual recognition. Nevertheless, this modality dominance is not rigid; it is highly influenced by the quality of the presented information.
Shlomit Yuval-Greenberg

10.
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect the internal representation of auditory location.
Francesco Pavani
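Sensitivity (d′) in a same/different task of this kind is computed from the hit rate on "different" trials and the false-alarm rate on "same" trials. A minimal sketch with hypothetical counts (not the study's data):

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # z(hit rate) - z(false-alarm rate), with a 1/(2N) correction applied
    # to rates of exactly 0 or 1.
    n_sig = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
    far = min(max(false_alarms / n_noise, 1 / (2 * n_noise)),
              1 - 1 / (2 * n_noise))
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts: 50 "different" (signal) and 50 "same" (noise) trials
print(f"fixed gaze:   d' = {d_prime(42, 8, 12, 38):.2f}")
print(f"with saccade: d' = {d_prime(33, 17, 18, 32):.2f}")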

11.
Masked priming effect with canonical finger numeral configurations
Discrete numerosities can be represented by various finger configurations. The impact of counting strategies on these configurations and their possible semantic status were investigated in young adults. Experiment 1 showed that young adults named numerical finger configurations faster when they conformed to their own canonical finger-counting habits than when they did not. Experiment 2 showed that numeral finger configurations used as unconsciously presented primes speeded up numerical comparative judgements of Arabic numeral targets. Participants responded faster and made fewer errors with numerical than with non-numerical primes, and when primes and targets were congruent (i.e., leading to the same response). Moreover, this priming effect generalised to novel, never consciously seen, numerosities for canonical configurations but not for non-canonical ones. These results support the idea that canonical finger configurations automatically activate number semantics whereas non-canonical ones do not.
Mauro Pesenti

12.
The present study investigated the contribution of the presence of a visual signal at the saccade goal to saccade trajectory deviations, and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task, in which a visual target was present at the saccade goal, was compared to performance in anti- and memory-guided saccade tasks, in which no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can ultimately be explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2–4 showed that saccade deviation did not change systematically as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.
Wieske van Zoest

13.
Localizing and reacting to tactile events on our skin requires coordination between primary somatotopic projections and an external representation of space. Previous research has attributed an important role to early visual experience in shaping this mapping. Here, we addressed the role played by immediately available visual information about body posture. We asked participants to determine the temporal order of two successive tactile events delivered to the hands while they adopted a crossed or an uncrossed-hands posture. As previously found, hand-crossing led to a dramatic impairment in tactile localization, a phenomenon attributed to a mismatch between somatotopic and externally-based frames of reference. In the present study, however, participants watched a pair of rubber hands that were placed either in a crossed or uncrossed posture (congruent or incongruent with the posture of their own hands). The results showed that the crossed-hands deficit can be significantly ameliorated by the sight of uncrossed rubber hands (Experiment 1). Moreover, this visual modulation seemed to depend critically on the degree to which the visual information about the rubber hands can be attributed to one’s own actions, in a process revealing short-term adaptation (Experiment 2).
Salvador Soto-Faraco

14.
Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained participants in an SRT task with visual-only, tactile-only, or bimodal (visual and tactile) stimulus presentation. Sequence performance for the bimodal and visual-only training groups was similar, and both performed better than the tactile-only training group. In a subsequent transfer phase, participants from all three training groups were tested in conditions with visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual-only and bimodal training groups was again highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Additionally, comparison across identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile-only group during training probably reflected not a difference in sequence learning but rather a difference in the expression of sequence knowledge.
Elger L. Abrahamse

15.
In a focused attention task, saccadic reaction time (SRT) to a visual target stimulus (an LED) was measured with an auditory (white noise burst) or tactile (vibration applied to the palm) non-target presented in an ipsi- or contralateral position relative to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from −500 ms (non-target prior to target) to 0 ms, but the effect was larger for ipsi- than for contralateral presentation within an SOA range of −200 to 0 ms. The time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000, 2004) is extended here to separate the spatially unspecific warning effect of the non-target from a spatially specific, genuine multisensory integration effect.
Hans Colonius
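As a rough intuition for the TWIN model: a first stage races the peripheral processing of target and non-target; crossmodal integration occurs only when the non-target wins and the target terminates within a fixed time window, in which case second-stage processing is shortened. A Monte-Carlo sketch under assumed parameters (the rates, window width, and facilitation amount below are illustrative choices, not estimates from the paper):

import numpy as np

rng = np.random.default_rng(1)

def twin_mean_srt(soa, n=100_000, mean_t=60.0, mean_nt=80.0,
                  window=200.0, second_stage=180.0, delta=30.0):
    # First stage: exponential peripheral processing times for the visual
    # target and the auditory/tactile non-target (shifted by SOA).
    t = rng.exponential(mean_t, n)
    nt = rng.exponential(mean_nt, n) + soa
    # Integration iff the non-target finishes first and the target finishes
    # within the time window; integration shortens stage two by delta.
    integrated = (nt < t) & (t < nt + window)
    return (t + second_stage - delta * integrated).mean()

for soa in (-500, -200, -100, 0):
    print(f"SOA {soa:>5} ms: mean SRT = {twin_mean_srt(soa):.1f} ms")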

16.
The temporal perception of simple auditory and visual stimuli can be modulated by exposure to asynchronous audiovisual speech. For instance, research using the temporal order judgment (TOJ) task has shown that exposure to temporally misaligned audiovisual speech signals can induce temporal adaptation that influences the TOJs of other (simpler) audiovisual events (Navarra et al. (2005) Cognit Brain Res 25:499–507). Given that TOJ and simultaneity judgment (SJ) tasks appear to reflect different underlying mechanisms, we investigated whether adaptation to asynchronous speech inputs would also influence SJ task performance. Participants judged whether a light flash and a noise burst, presented at varying stimulus onset asynchronies, were simultaneous or not, or else discriminated which of the two sensory events appeared to have occurred first. While performing these tasks, participants monitored a continuous speech stream for target words that were presented either in synchrony or with the audio channel lagging 300 ms behind the video channel. We found that the sensitivity of participants’ TOJ and SJ responses was reduced when the background speech stream was desynchronized. A significant modulation of the point of subjective simultaneity (PSS) was also observed in the SJ task but, interestingly, not in the TOJ task, supporting previous claims that TOJ and SJ tasks may tap somewhat different aspects of temporal perception.
Argiro Vatakis

17.
The simultaneous presentation of a visual and an auditory stimulus can lead to a decrease in people’s ability to perceive or respond to the auditory stimulus. In this study, we investigate the effect that threat has upon this phenomenon, known as the Colavita visual dominance effect. Participants performed two blocks of trials containing 40% visual, 40% auditory, and 20% bimodal trials. The first block of trials was identical for all participants, while in the second block, either the visual stimulus (visual threat condition), auditory stimulus (auditory threat condition), or neither stimulus (control condition) was fear-conditioned using aversive electrocutaneous stimuli. We predicted that, when compared with the control condition, this visual dominance effect would increase in the visual threat condition and decrease in the auditory threat condition. This hypothesis was partially supported by the data. In particular, the results showed that the fear-conditioning of the visual stimulus significantly increased the visual dominance effect relative to the control condition. However, the fear-conditioning of the auditory stimulus did not reduce the visual dominance effect but instead increased it slightly. These findings are discussed in terms of the role that attention and arousal play in the dominance of vision over audition.
Stefaan Van Damme

18.
We report two experiments designed to assess the consequences of posture change on audiotactile spatiotemporal interactions. In Experiment 1, participants had to discriminate the direction of an auditory stream (consisting of the sequential presentation of two tones from different spatial positions) while attempting to ignore a task-irrelevant tactile stream (consisting of the sequential presentation of two vibrations, one to each of the participant's hands). The tactile stream presented to the participants' hands was either spatiotemporally congruent or incongruent with respect to the sounds. A significant decrease in performance in incongruent trials compared with congruent trials was demonstrated when the participants adopted an uncrossed-hands posture but not when their hands were crossed over the midline. In Experiment 2, we investigated the ability of participants to discriminate the direction of two sequentially presented tactile stimuli (one presented to each hand) as a function of the presence of congruent vs incongruent auditory distractors. Here, the crossmodal effect was stronger in the crossed-hands posture than in the uncrossed-hands posture. These results demonstrate the reciprocal nature of audiotactile interactions in spatiotemporal processing, and highlight the important role played by body posture in modulating such crossmodal interactions.

19.
Research demonstrates that listening to and viewing speech excites tongue and lip motor areas involved in speech production. This perceptual-motor relationship was investigated behaviourally by presenting video clips of a speaker producing vowel-consonant-vowel syllables in three conditions: visual-only, audio-only, and audiovisual. Participants identified target letters flashed over the mouth during the video, responding either manually or verbally as quickly as possible. Verbal responses were fastest when the target matched the speech stimuli in all modality conditions, yet the greatest facilitation was observed when participants were presented with visual-only stimuli. Critically, no such facilitation occurred when participants were asked to identify the target manually. Our findings support previous research suggesting a close relationship between speech perception and production by demonstrating that viewing speech can ‘prime’ our motor system for subsequent speech production.
Jeffery A. Jones

20.
In this study we investigated audiotactile spatial interactions in the region behind the head. In Experiment 1, participants made unspeeded temporal order judgments (TOJs) regarding pairs of auditory and tactile stimuli presented at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli. Electrocutaneous stimuli were presented to the left or right earlobe while auditory stimuli were presented from just behind the participant's head on either the same or opposite side. Participants responded significantly more accurately when the stimuli were presented from different sides rather than from the same side. In Experiment 2, we used a distractor interference task to show that speeded left/right discrimination responses to electrocutaneous targets were also modulated by the spatial congruency of auditory distractors presented behind the head. Performance was worse (i.e. response latencies were slower and error rates higher) when the auditory distractors were presented on the opposite side to the electrocutaneous target than when they were presented on the same side. This crossmodal distractor interference effect was larger when white noise distractors were presented close to the head (20 cm) than when they were presented far from the head (70 cm). By contrast, pure tone distractors elicited a smaller crossmodal distractor interference effect overall, and showed no modulation as a function of distance. Taken together, these results suggest that the spatial modulation of audiotactile interactions occurs predominantly for complex auditory stimuli (for example, white noise) originating from the region close to the back of the head.
