2.
Physiological and behavioral studies in cat have shown that corticotectal influences play important roles in the information-processing capabilities of superior colliculus (SC) neurons. While corticotectal inputs from the anterior ectosylvian sulcus (AES) play a comparatively small role in the unimodal responses of SC neurons, they are particularly important in rendering these neurons capable of integrating information from different sensory modalities (e.g., visual and auditory). The present experiments examined the behavioral consequences of depriving SC neurons of AES inputs, and thereby compromising their ability to integrate visual and auditory information. Selective deactivation of a variety of other cortical areas (posterolateral lateral suprasylvian cortex, PLLS; primary auditory cortex, AI; or primary visual cortex, 17/18) served as controls. Cats were trained in a perimetry device to ignore a brief, low-intensity auditory stimulus but to orient toward and approach a near-threshold visual stimulus (a light-emitting diode, LED) to obtain food. The LED was presented at different eccentricities either alone (unimodal) or combined with the auditory stimulus (multisensory). Subsequent deactivation of the AES, with focal injections of a local anesthetic, had no effect on responses to unimodal cues regardless of their location. However, it profoundly, though reversibly, altered orientation and approach to multisensory stimuli in contralateral space. The characteristic enhancement of these responses observed when an auditory cue was presented in spatial correspondence with the visual stimulus was significantly degraded. Similarly, the inhibitory effect of a spatially disparate auditory cue was significantly ameliorated. The observed effects were specific to AES deactivation, as similar effects were not obtained with deactivation of PLLS, AI or 17/18, or saline injections into the AES.
These observations are consistent with postulates that specific cortical-midbrain interactions are essential for the synthesis of multisensory information in the SC, and for the orientation and localization behaviors that depend on this synthesis.
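The response enhancement and degradation described above are conventionally quantified with the multisensory enhancement index (percent change of the multisensory response relative to the best unisensory response, after Meredith and Stein). A minimal sketch; the function name and the firing rates are illustrative assumptions, not data from this study:

```python
def enhancement_index(multisensory_rate, best_unisensory_rate):
    """Percent change of the multisensory response relative to the best
    unisensory response: 100 * (CM - SMmax) / SMmax."""
    return 100.0 * (multisensory_rate - best_unisensory_rate) / best_unisensory_rate

# Hypothetical mean evoked responses (impulses per trial).
visual, auditory, combined = 4.0, 2.0, 9.0
best = max(visual, auditory)
print(enhancement_index(combined, best))  # 125.0 -> superadditive enhancement
```

A positive index marks enhancement by spatially coincident cues; a negative index captures the depression produced by spatially disparate cues, as in the behavioral results above.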
3.
The results of this study show that the different receptive fields of multisensory neurons in the cortex of the cat anterior ectosylvian sulcus (AES) were in spatial register, and it is this register that determined the manner in which these neurons integrated multiple sensory stimuli. The functional properties of multisensory neurons in AES cortex bore fundamental similarities to those in other cortical and subcortical structures. These constancies in the principles of multisensory integration are likely to provide a basis for spatial coherence in information processing throughout the nervous system.
4.
For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD.
5.
《Social neuroscience》2013,8(2):148-162
Self-face recognition is crucial for sense of identity and self-awareness. Finding self-face recognition disorders mainly in neurological and psychiatric diseases suggests that modifying sense of identity in a simple, rapid way remains a "holy grail" for cognitive neuroscience. By touching the face of subjects who were viewing simultaneous touches on a partner's face, we induced a novel illusion of personal identity that we call "enfacement": The partner's facial features became incorporated into the representation of the participant's own face. Subjects reported that morphed images of themselves and their partner contained more self than other only after synchronous, but not asynchronous, stroking. Therefore, we modified self-face recognition by means of a simple psychophysical manipulation. While accommodating gradual change in one's own face is an important form of representational plasticity that may help maintain identity over time, the surprisingly rapid changes induced by our procedure suggest that sense of facial identity may be more malleable than previously believed. "Enfacement" correlated positively with the participant's empathic traits and with the physical attractiveness the participants attributed to their partners. Thus, personality variables modulate enfacement, which may represent a marker of the tendency to be social and may be absent in subjects with defective empathy.
6.
Research examining multisensory integration suggests that the correspondence of stimulus characteristics across modalities (cross-modal correspondence) can have a dramatic influence on both neurophysiological and perceptual responses to multimodal stimulation. The current study extends prior research by examining the cross-modal correspondence of amplitude modulation rate for simultaneous acoustic and vibrotactile stimulation using EEG and perceptual measures of sensitivity to amplitude modulation. To achieve this, psychophysical thresholds and steady-state responses (SSRs) were measured for acoustic and vibrotactile amplitude modulated (AM) stimulation for 21 and 40 Hz AM rates as a function of the cross-modal correspondence. The study design included three primary conditions to determine whether the changes in the SSR and psychophysical thresholds were due to the cross-modal temporal correspondence of amplitude modulated stimuli: NONE (AM in one modality only), SAME (the same AM rate for each modality) and DIFF (different AM rates for each modality). The results of the psychophysical analysis showed that AM detection thresholds for the simultaneous AM conditions (i.e., SAME and DIFF) were significantly higher (i.e., lower sensitivity) than AM detection thresholds for the stimulation of a single modality (i.e., NONE). SSR results showed significant effects of SAME and DIFF conditions on SSR activity. The different pattern of results for perceptual and SSR measures of cross-modal correspondence of AM rate indicates a dissociation between entrained cortical activity (i.e., SSR) and perception.
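Amplitude-modulated stimuli of the kind used in such designs can be synthesized directly. A sketch of sinusoidal AM at the study's 21 and 40 Hz modulation rates; the carrier frequencies, sampling rate, and modulation depth are illustrative assumptions:

```python
import numpy as np

def am_stimulus(carrier_hz, am_hz, duration_s, fs=44100, depth=1.0):
    """Sinusoidal carrier multiplied by a sinusoidal amplitude envelope.
    The envelope (1 + depth * sin(2*pi*am_hz*t)) / (1 + depth) keeps the
    peak amplitude at 1 regardless of modulation depth."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = (1.0 + depth * np.sin(2.0 * np.pi * am_hz * t)) / (1.0 + depth)
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

# Illustrative carriers: a 1 kHz tone (acoustic) and a 250 Hz sinusoid
# (vibrotactile), each modulated at one of the two AM rates in the study.
acoustic_21 = am_stimulus(1000.0, 21.0, 1.0)  # SAME condition: pair with tactile at 21 Hz
tactile_40 = am_stimulus(250.0, 40.0, 1.0)    # DIFF condition: pair with acoustic at 21 Hz
```

Driving each modality at a distinct AM rate (the DIFF condition) also tags its steady-state response at a distinct frequency in the EEG spectrum, which is what lets the SSRs of the two channels be separated.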
7.
Identifying with a body is central to being a conscious self. The now classic "rubber hand illusion" demonstrates that the experience of body-ownership can be modulated by manipulating the timing of exteroceptive (visual and tactile) body-related feedback. Moreover, the strength of this modulation is related to individual differences in sensitivity to internal bodily signals (interoception). However the interaction of exteroceptive and interoceptive signals in determining the experience of body-ownership within an individual remains poorly understood. Here, we demonstrate that this depends on the online integration of exteroceptive and interoceptive signals by implementing an innovative "cardiac rubber hand illusion" that combined computer-generated augmented-reality with feedback of interoceptive (cardiac) information. We show that both subjective and objective measures of virtual-hand ownership are enhanced by cardio-visual feedback in-time with the actual heartbeat, as compared to asynchronous feedback. We further show that these measures correlate with individual differences in interoceptive sensitivity, and are also modulated by the integration of proprioceptive signals instantiated using real-time visual remapping of finger movements to the virtual hand. Our results demonstrate that interoceptive signals directly influence the experience of body ownership via multisensory integration, and they lend support to models of conscious selfhood based on interoceptive predictive coding.
8.
A frequent approach to study interactions of the auditory and the visual system is to measure event-related potentials (ERPs) to auditory, visual, and auditory-visual stimuli (A, V, AV). A nonzero result of the AV − (A + V) comparison indicates that the sensory systems interact at a specific processing stage. Two possible biases weaken the conclusions drawn by this approach: first, subtracting two ERPs from one requires that A, V, and AV do not share any common activity. We have shown before (Gondan and Röder in Brain Res 1073–1074:389–397, 2006) that the problem of common activity can be avoided using an additional tactile stimulus (T) and evaluating the ERP difference (T + TAV) − (TA + TV). A second possible confound is the modality shift effect (MSE): for example, the auditory N1 is increased if an auditory stimulus follows a visual stimulus, whereas it is smaller if the modality is unchanged (ipsimodal stimulus). Bimodal stimuli might be affected less by MSEs because at least one component always matches the preceding trial. Consequently, an apparent amplitude modulation of the N1 would be observed in AV. We tested the influence of MSEs on auditory-visual interactions by comparing the results of AV − (A + V) using (a) all stimuli and using (b) only ipsimodal stimuli. (a) and (b) differed around 150 ms; this indicates that AV − (A + V) is indeed affected by the MSE. We then formally and empirically demonstrate that (T + TAV) − (TA + TV) is robust against possible biases due to the MSE.
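The tactile-controlled comparison reduces to simple array arithmetic over trial-averaged ERPs. A sketch with hypothetical, purely additive waveforms, for which the interaction term is exactly zero; the array shapes and the dictionary layout are assumptions for illustration:

```python
import numpy as np

def interaction_wave(erp):
    """Tactile-controlled interaction term (T + TAV) - (TA + TV), where
    `erp` maps condition name -> trial-averaged waveform (channels x samples).
    Activity common to all stimuli appears once with each sign and cancels,
    and every term contains a tactile component, so modality-shift effects
    on the tactile response are balanced across the two sides."""
    return (erp["T"] + erp["TAV"]) - (erp["TA"] + erp["TV"])

# Hypothetical, purely additive responses: the interaction term vanishes.
rng = np.random.default_rng(0)
a, v, t = (rng.standard_normal((64, 500)) for _ in range(3))
erp = {"T": t, "TA": t + a, "TV": t + v, "TAV": t + a + v}
assert np.allclose(interaction_wave(erp), 0.0)
```

Any nonzero residue in this difference wave therefore reflects a genuine auditory-visual interaction rather than common activity or the MSE, which is the logic of the comparison described above.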
9.
Multimodal reference frame for the planning of vertical arms movements
In this study we investigated the reference frames used to plan arm movements. Specifically, we asked whether the body axis, visual cues and graviception can each play a role in defining "up" and "down" in the planning and execution of movements along the vertical axis. Horizontal and vertical pointing movements were tested in two postures (upright and reclined) and two visual conditions (with and without vision) to identify possible effects of each of these cues on kinematics of movement. Movements were recorded using an optical 3D tracking system and analysis was conducted on velocity profiles of the hand. Despite a major effect of gravity, our analysis shows an effect of the movement direction with respect to the body axis when subjects were reclined with eyes closed. These results suggest that our CNS takes into account multimodal information about the vertical in order to compute an optimal motor command that anticipates the effects of gravity.
10.
The goal of this study was to determine whether the sensory nature of a target influences the roles of vision and proprioception in the planning of movement distance. Two groups of subjects made rapid, elbow extension movements, either toward a visual target or toward the index fingertip of the unseen opposite hand. Visual feedback of the reaching index fingertip was only available before movement onset. Using a virtual reality display, we randomly introduced a discrepancy between actual and virtual (cursor) fingertip location. When subjects reached toward the visual target, movement distance varied with changes in visual information about initial hand position. For the proprioceptive target, movement distance varied mostly with changes in proprioceptive information about initial position. The effect of target modality was already present at the time of peak acceleration, indicating that this effect includes feedforward processes. Our results suggest that the relative contributions of vision and proprioception to motor planning can change, depending on the modality in which task relevant information is represented.
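One common way to sketch this modality-dependent weighting is a weighted average of the visual and proprioceptive estimates of initial hand position, with the visual weight raised for the visual target and lowered for the proprioceptive one. The weights and positions below are illustrative assumptions, not fitted values from the study:

```python
def planned_distance(target_pos, visual_hand, proprio_hand, w_vision):
    """Planned movement distance when the estimate of initial hand position
    is a weighted combination of visual and proprioceptive information."""
    initial = w_vision * visual_hand + (1.0 - w_vision) * proprio_hand
    return target_pos - initial

# Hypothetical 2 cm discrepancy between the seen (cursor) and felt fingertip.
target, seen, felt = 30.0, 2.0, 0.0
d_visual = planned_distance(target, seen, felt, w_vision=0.8)   # ~28.4 cm
d_proprio = planned_distance(target, seen, felt, w_vision=0.2)  # ~29.6 cm
```

Under this sketch, the same cursor offset shifts the planned distance four times as much when vision is weighted heavily as when proprioception dominates, mirroring the reported dependence on target modality.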
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号