Similar Articles
1.
We explored mechanisms of cross-modal priming between the visual and haptic modalities, specifically the mechanism of visual-to-haptic transfer (Experiment 1) and of haptic-to-visual transfer (Experiment 2). In Experiment 1, three experimental groups, presented with visual prime stimuli (novel three-line patterns), were asked to form visual images matching only the shape, haptic images matching only the shape, or haptic images matching both the shape and texture of haptic targets. Priming occurred only when the induced haptic images of the prime stimuli coincided with the actual texture of the haptic targets. In Experiment 2, two experimental groups, presented with haptic prime stimuli, were asked to form visual images matching only the shape, or visual images matching both the shape and material (i.e., monochromatic contrast between foreground and background), of visual targets. Priming occurred in all experimental conditions and in the control group. Thus, both shape and material representations contributed significantly to the visual-to-haptic transfer. By contrast, only shape representation played a significant role in the haptic-to-visual transfer.

2.
The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual–inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of the comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that data from three participants exhibited an increase in bimodal precision, consistent with the optimal integration model, whereas data from the other participants did not conform to maximum-likelihood integration schemes. We suggest either that the sensory cues were not perceived as congruent, that integration was achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
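The maximum-likelihood (optimal) integration model referred to in this abstract makes a standard quantitative prediction: if the unimodal estimates carry independent Gaussian noise and JNDs are proportional to the underlying standard deviations, the bimodal JND should fall below either unimodal JND, with each cue weighted inversely to its variance. A minimal sketch of that prediction follows; the function names and JND values are illustrative, not data from the study.

```python
import numpy as np

def predicted_bimodal_jnd(jnd_visual, jnd_inertial):
    # Optimal (maximum-likelihood) combination of two independent Gaussian cues:
    # variances combine as sigma_vi^2 = sigma_v^2 * sigma_i^2 / (sigma_v^2 + sigma_i^2),
    # and JNDs are assumed proportional to the standard deviations.
    return np.sqrt((jnd_visual**2 * jnd_inertial**2) /
                   (jnd_visual**2 + jnd_inertial**2))

def cue_weights(jnd_visual, jnd_inertial):
    # Each cue is weighted in proportion to its relative reliability (inverse variance).
    w_visual = jnd_inertial**2 / (jnd_visual**2 + jnd_inertial**2)
    return w_visual, 1.0 - w_visual

# Hypothetical unimodal JNDs (deg/s of peak yaw velocity) for one observer.
jnd_v, jnd_i = 4.0, 6.0
print(predicted_bimodal_jnd(jnd_v, jnd_i))  # about 3.33, below both unimodal JNDs
print(cue_weights(jnd_v, jnd_i))            # about (0.69, 0.31): the more precise cue dominates
```

Only three of the eight participants showed the predicted drop in bimodal JND, which is why the authors consider alternatives such as fixed-weight combination.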

3.
We examined the development of visual cue integration in a desktop working-memory task using boxes with different visual action cues (opening actions) and perceptual surface cues (colours, monochromatic textures, or images of faces). Children had to recall which box held a hidden toy, based on (a) the action cue, (b) the surface cue, or (c) a conjunction of the two. Results from three experiments show a set of asymmetries in children's integration of action and surface cues. The 18–24-month-olds disregarded colour in conjunction judgements with action; 30–36-month-olds used colour but disregarded texture. Images of faces were not disregarded at either age. We suggest that 18–24-month-olds' disregard of colour, seen previously in reorientation tasks (Hermer & Spelke, 1994), may represent a general phenomenon, likened to uneven integration between the dorsal and ventral streams in early development.

4.
This study analyzed the spatial memory capacities of rats in darkness with visual and/or olfactory cues through ontogeny. Tests were conducted on the homing board, where rats had to find the correct escape hole. Four age groups (24 days, 48 days, 3–6 months, and 12 months) were trained in three conditions: (a) three identical light cues; (b) five different olfactory cues; and (c) both types of cues, followed by removal of the olfactory cues. Results indicate that immature rats first take olfactory information into account but are unable to orient with the help of discrete visual cues alone. Olfaction enables the use of visual information by 48-day-old rats. Visual information predominantly supports spatial cognition in adult and 12-month-old rats. The results point to cooperation between vision and olfaction for place navigation during ontogeny in rats.

5.
Vection is the illusion of self-motion in the absence of real physical movement. The aim of the present study was to analyze how multisensory inputs (visual and auditory) contribute to the perception of vection. Participants were seated in a stationary position in front of a large, curved projection display and were exposed to a virtual scene that constantly rotated around the yaw-axis, simulating a 360° rotation. The virtual scene contained either only visual, only auditory, or a combination of visual and auditory cues. Additionally, simulated rotation speed (90°/s vs. 60°/s) and the number of sound sources (1 vs. 3) were varied for all three stimulus conditions. All participants were exposed to every condition in a randomized order. Data specific to vection latency, vection strength, the severity of motion sickness (MS), and postural steadiness were collected. Results revealed reduced vection onset latencies and increased vection strength when auditory cues were added to the visual stimuli, whereas MS and postural steadiness were not affected by the presence of auditory cues. Half of the participants reported experiencing auditorily induced vection, although the sensation was rather weak and less robust than visually induced vection. Results demonstrate that the combination of visual and auditory cues can enhance the sensation of vection.

6.
Recent models of the visual system in primates suggest that the mechanisms underlying visual perception and visuomotor control are implemented in separate functional streams in the cerebral cortex. However, a little-studied perceptual illusion demonstrates that a motor-related signal representing arm position can contribute to the visual perception of size. The illusion consists of an illusory size change in an afterimage of the hand when the hand is moved towards or away from the subject. The motor signal necessary for the illusion could be specified by feedforward and/or feedback sources (i.e. efference copy and/or proprioception/kinesthesis). We investigated the nature of this signal by measuring the illusion's magnitude when subjects moved their own arm (active condition, feedforward and feedback information available) and when arm movement was under the control of the experimenter (passive condition, feedback information available). Active and passive movements produced equivalent illusory size changes in the afterimages. However, the illusion was not obtained when an afterimage of the subject's hand was formed prior to movement of the other hand from a very similar location in space. This evidence shows that proprioceptive/kinesthetic feedback was sufficient to drive the illusion and suggests that a specific three-dimensional registration of proprioceptive input and the initial afterimage is necessary for the illusion to occur.

7.
Our perception of the world's three-dimensional (3D) structure is critical for object recognition, navigation and planning actions. To accomplish this, the brain combines different types of visual information about depth structure, but at present, the neural architecture mediating this combination remains largely unknown. Here, we report neuroimaging correlates of human 3D shape perception from the combination of two depth cues. We measured fMRI responses while observers judged the 3D structure of two sequentially presented images of slanted planes defined by binocular disparity and perspective. We compared the behavioral and fMRI responses evoked by changes in one or both of the depth cues. fMRI responses in extrastriate areas (hMT+/V5 and lateral occipital complex), rather than responses in early retinotopic areas, reflected differences in perceived 3D shape, suggesting 'combined-cue' representations in higher visual areas. These findings provide insight into the neural circuits engaged when the human brain combines different information sources for unified 3D visual perception.

8.
The influence of temporal and spatial context during haptic roughness perception was investigated in two experiments. Subjects examined embossed dot patterns of varying average dot distance. A two-alternative forced-choice procedure was used to measure discrimination thresholds and biases. In Experiment 1, subjects had to discriminate between two stimuli that were presented simultaneously to adjacent fingers, after adaptation of one of these fingers. The results showed that adaptation to a rough surface decreased the perceived roughness of a surface subsequently scanned with the adapted finger, whereas adaptation to a smooth surface increased the perceived roughness (i.e. a contrast aftereffect). In Experiment 2, subjects discriminated between successive test stimuli while the adjacent finger was stimulated simultaneously. The results showed that the perceived roughness of the test stimulus shifted towards the roughness of the adjacent stimulus (i.e. an assimilation effect). These contextual effects are explained by the structure of cortical receptive fields. Analogies with comparable effects in the visual system are discussed.
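Two-alternative forced-choice data of this kind are typically summarized by fitting a psychometric function, from which the bias (the shift of the point of subjective equality) and the discrimination threshold are read off. The sketch below assumes a cumulative-Gaussian model and uses invented example data; it is not the analysis pipeline of this particular study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    # Probability of judging the comparison surface as rougher than the reference,
    # modeled as a cumulative Gaussian centered on the point of subjective equality (PSE).
    return norm.cdf(x, loc=pse, scale=sigma)

# Invented example data: comparison dot spacing (mm) vs. proportion of "rougher" responses.
spacing = np.array([1.5, 1.7, 1.9, 2.1, 2.3, 2.5])
p_rougher = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])
reference_spacing = 2.0

(pse, sigma), _ = curve_fit(psychometric, spacing, p_rougher, p0=[2.0, 0.3])

bias = pse - reference_spacing        # contextual shift of perceived roughness
threshold = sigma * norm.ppf(0.75)    # spacing change lifting performance from 50% to 75%
print(bias, threshold)
```

A non-zero bias of this kind is what the abstract refers to as the contrast aftereffect (Experiment 1) and the assimilation effect (Experiment 2).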

9.
Our ability to recognize and manipulate objects relies on our haptic sense of the objects' geometry. But little is known about the acuity of haptic perception compared with other senses such as sight and hearing. Here, we determined how accurately humans could sense various geometric features of objects across the workspace. Subjects gripped the handle of a robot arm that was programmed to keep the hand inside a planar region with straight or curved boundaries. With eyes closed, subjects moved the manipulandum along this virtual wall and judged its curvature or direction. We mapped their sensitivity in different parts of the workspace. We also tested subjects' ability to discriminate between boundaries with different degrees of curvature, to sense the rate of change of curvature, and to detect the elongation or flattening of ellipses. We found that subjects' estimates of the curvature of their hand path were close to veridical and did not change across the workspace, though they did vary somewhat with hand path direction. Subjects were less accurate at judging the direction of the hand path in an egocentric frame of reference, and were slightly poorer at discriminating between arcs of different curvature than at detecting absolute curvature. They also consistently mistook flattened ellipses and paths of decreasing curvature (inward spirals) for circles, and mistook arcs of true circles for arcs of tall ellipses or outward spirals. Nevertheless, the sensitivity of haptic perception compared well with that of spatial vision in other studies. Furthermore, subjects detected curvature and directional deviations much smaller than those that actually arise in most reaching movements. These findings suggest that our haptic sense is acute enough to guide and train motor systems and to form accurate representations of shapes.

10.
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
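The parallel-channel model described here, with a low-frequency visual pathway and a vestibular pathway dominating at higher frequencies, is the classic complementary-filter arrangement. The sketch below is an illustrative first-order reconstruction under that assumption, not the authors' published model; the time constant and the example signals are arbitrary choices.

```python
import numpy as np

def complementary_velocity_estimate(visual, vestibular, dt, tau=2.0):
    # Visual channel: first-order low-pass filter (dominates at low frequencies).
    # Vestibular channel: matching first-order high-pass filter (dominates at high
    # frequencies). The two transfer functions sum to unity, so the channels are
    # complementary.
    alpha = dt / (tau + dt)
    estimate = np.zeros(len(visual))
    low = 0.0
    high = 0.0
    prev_vestibular = vestibular[0]
    for k in range(len(visual)):
        low = low + alpha * (visual[k] - low)
        high = (1.0 - alpha) * (high + vestibular[k] - prev_vestibular)
        prev_vestibular = vestibular[k]
        estimate[k] = low + high
    return estimate

# Arbitrary example: constant visual field rotation plus a noisy vestibular signal.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
visual = np.full(len(t), 30.0)                     # deg/s, from the moving visual field
vestibular = 30.0 + 2.0 * np.random.randn(len(t))  # deg/s, from inertial sensing
estimate = complementary_velocity_estimate(visual, vestibular, dt)
```

The non-linear extension mentioned in the abstract would additionally make the channel weights depend on how well the visual and vestibular signals agree.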

11.
Animal studies suggest a relationship between activation of the cholinergic system and neural synchronization, which in turn has been suggested to mediate feature binding. We investigated whether suppressing cholinergic activity through moderate alcohol consumption in healthy humans affects behavioral measures of feature binding in visual perception and across perception and action. Indeed, evidence of the binding of shape and color, and of shape and location, of visual objects disappeared after alcohol consumption, whereas bindings between object features and the manual response were unaffected.

12.
13.
Perceptual objects often comprise visual and auditory signatures that arrive simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, situations of sensory dominance of one modality at the expense of another impair performance. In the present study, magnetoencephalography recordings of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. As participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. As task instructions changed to incorporate cross-modal conflict, a process whereby auditory and visual components of cross-modal stimuli were compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend into cingulate, temporal, and prefrontal cortices.

14.
The present study investigated the contributions of object weight, haptic size, and density to the accurate perception of heaviness or lightness when discriminating differences in weight between pairs of cubes under cue conflicts such as that produced by the size-weight illusion. Fifteen subjects, with visual input blocked and relying only on the input gained by grasping the cubes with their fingertips, attempted to discriminate differences in weight between the two cubes presented on each trial. Three sets of seven cubes of various weights (0.10–0.74 N) were used, one set each of copper (CP), aluminum (AL), and plastic (PL). All of the cubes were covered with smooth, thin vinyl to eliminate possible input concerning density or material per se, and screens were strategically placed to eliminate any visual cues. One hundred and ninety-six trials with 37 combinations were pseudorandomly presented in the following conditions: PL versus AL, AL versus CP, and CP versus PL. Trials comprised 2 × 3 combinations on the basis of density (98 trials for higher and 98 for lower conditions) and weight (84 ascending trials for heavier, 28 for identical, and 84 descending for lighter conditions). A response was regarded as correct when it accurately identified the weight relationship between the first and second cube. Subjects identified the weight relationship fairly accurately when density and weight both increased for the second cube (95.6% of trials) and when both decreased (94.6%). These results were markedly better than those obtained previously under constant-density conditions, suggesting that changes in density may aid the perception of heaviness and lightness as much as weight does. Whenever the two cues conflicted in direction, however, accuracy fell dramatically, to 33.6% for lower density/ascending weight and to 22.7% for higher density/descending weight. These results indicate the possibility that two different cues contribute to the perception of heaviness and lightness. Cue conflict such as the size-weight illusion naturally occurs when discriminating weight between objects; the present results, however, suggest that a person may perceive heaviness on the basis of the well-regulated relations between changes of density, size, and weight. The way in which these two cues are related through haptic size is discussed.

15.
We tested the hypothesis that speed cues are used to haptically identify changes in the curvature of the hand's trajectory. Subjects grasped the handle of a robotically controlled manipulandum that was moved in the horizontal plane along various elliptical arcs following one of three different speed profiles. One profile traced a circular arc at constant speed, whereas the other two profiles were constant-speed for ellipses whose aspect ratios differed from unity. A two-alternative forced-choice procedure was used to identify the ellipse that was sensed as circular in each of the three experimental conditions. In unconstrained movements, speed varies with the radius of curvature; if speed cues are used to identify curvature during passive movements, subjects' responses should therefore be biased towards the ellipse traced at a constant speed. The results did not support this hypothesis, indicating that speed cues are not a major contributor to the haptic sensing of shape.

16.
When a brief visual cue is presented and followed by a static bar stimulus, the bar is perceived to be drawn rapidly away from the cue end toward the uncued end (the "line-motion illusion"). Previous research has reported that this illusion is observed with the use of lateral auditory or tactile cues. The present study revealed that the same illusion can be observed when both the cue and the line are presented in the tactile modality (Experiment 1) and when the visual cue is presented prior to the tactile line (Experiments 2 and 3). These results suggest that this illusion is not limited to the visual modality. The implications of the findings for the supramodal nature and possible sources of the effect are discussed.

17.
Single unit activity was recorded from principal cells in the A-laminae of the cat dorsal lateral geniculate nucleus (dLGN). A steady state pattern of afferent activation was induced by presenting a continuously drifting square wave grating of constant spatial frequency to the eye (the dominant eye) that provided the excitatory input to the recorded cell. Intermittently, a second grating stimulus was presented to the other, nondominant, eye. In most neurones nondominant eye stimulation led to inhibition of relay cell responses. The latency of this suppressive effect was unusually long (up to 1 s) and its intensity and duration depended critically on the similarity between the gratings that were presented to the two eyes. Typically suppression was strongest when the gratings differed in orientation, direction of movement and contrast and when the nondominant eye stimulus was moving rather than stationary. Ablation of visual cortex abolished these long latency and feature-dependent interferences. We conclude that the visual cortex and the corticothalamic projections are involved in the mediation of these interocular interactions. We interpret our results as support for the hypothesis that corticothalamic feedback modifies thalamic transmission as a function of the congruency between ongoing cortical activation patterns and afferent retinal signals.

18.
Two experiments were conducted to examine the effect of temperature on force perception. The objective of the first experiment was to quantify the change in skin temperature of the finger as a function of contact force, in order to characterize how much temperature changes under normal contact conditions. The decrease in temperature ranged from 2.3 to 4.2°C as the force increased from 0.1 to 6 N, averaging 3.2°C across the nine force levels studied. The changes in temperature as a function of force were well above threshold, which suggests that thermal cues could be used to discriminate between contact forces if other sources of sensory information were absent. The second experiment examined whether the perceived magnitude of forces (1–8 N) generated by the index finger changed as a function of the temperature of the contact surface against which the force was produced. A contralateral force-matching procedure was used to evaluate force perception. The results indicated that the perceived magnitude of finger forces did not change as a function of the temperature of the reference contact surface which varied from 22 to 38°C. These results provide further support for the centrally generated theory of force perception and indicate that the thermal intensification of tactually perceived weight does not occur when forces are actively generated.

19.
This study investigated whether and how force cues play a role in the haptic perception of length. We assumed that introducing a dynamic disruption during haptic exploration, generated by a haptic display, would lead to a systematic bias in the estimation of a virtual length. Two types of “opposition” disruption (“elastic” and “viscous”) were tested in Experiments 1 and 2, and two types of “traction” disruption (“fluid” and “full”) in Experiments 3 and 4. In all experiments, blindfolded adults were asked to compare two lengths of virtual rods explored with the right index finger. Results revealed an underestimation of the length with elastic and viscous opposition disruptions and an overestimation with fluid and full-traction disruptions. No systematic bias in the estimation was observed in the “control” sessions, in which the active exploration of the segment was “normal” (i.e. not disrupted). These results suggest that the forces produced during exploratory movements are used as a relevant cue in haptic length estimation.
