1.
A general approach to controlling non-linear uncertain systems is to apply a pre-computed nominal optimal control and use a pre-computed LQG compensator to generate control corrections from the on-line measured data. If the non-linear model on which the optimal control and LQG compensator design is based is of sufficient quality, and the LQG compensator is designed appropriately, the closed-loop control system is approximately optimal. This paper contributes to the selection and computation of the time-varying LQG weighting and noise matrices, which determine the LQG compensator design. It is argued that the noise matrices may be taken to be time-invariant and diagonal. Three key considerations concerning the selection of the time-varying LQG weighting matrices are turned into a concrete computational scheme, reducing the selection of these matrices to the choice of three scalar design parameters, each weighting one consideration. Although the three considerations seem straightforward, they may oppose one another. Furthermore, they usually result in time-varying weighting matrices that are indefinite, rather than positive (semi-)definite as required for the LQG design. The computational scheme presented in this paper addresses and resolves both problems. The benefits of our optimal closed-loop control system design are demonstrated in two numerical examples and evaluated using Monte Carlo simulation. Copyright © 2005 John Wiley & Sons, Ltd.
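The paper's computational scheme is not given in the abstract, but the underlying LQG machinery is standard and can be sketched. The Python fragment below is a minimal sketch only: all system matrices are hypothetical, and the paper's time-varying scheme and three scalar design parameters are not reproduced. It computes textbook time-invariant LQG gains with SciPy, plus one simple eigenvalue-clipping repair for an indefinite weighting matrix of the kind the abstract mentions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def project_psd(M, eps=0.0):
    """Clip negative eigenvalues: one simple way to turn an indefinite
    symmetric weighting matrix into the positive semidefinite one LQG needs."""
    Ms = 0.5 * (M + M.T)
    w, U = np.linalg.eigh(Ms)
    return U @ np.diag(np.clip(w, eps, None)) @ U.T

def lqg_gains(A, B, C, Q, R, W, V):
    """Standard time-invariant LQG design for dx = Ax + Bu + w, y = Cx + v."""
    # LQR part: solve A'P + PA - P B R^-1 B' P + Q = 0, then u = -K x_hat.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    # Kalman part (the dual Riccati equation); estimator gain L = S C' V^-1.
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)
    return K, L

# Hypothetical second-order plant; the noise covariances W and V are taken
# time-invariant and diagonal, as the abstract argues is admissible.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = project_psd(np.diag([1.0, -0.1]))   # indefinite weight repaired first
R = np.array([[0.1]])
W, V = 0.01 * np.eye(2), np.array([[0.05]])
K, L = lqg_gains(A, B, C, Q, R, W, V)
```

The compensator then runs the Kalman filter x̂' = Ax̂ + Bu + L(y − Cx̂) with u = −Kx̂; in the paper's setting the weights and the resulting gains would instead be recomputed along the nominal trajectory.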
2.
BACKGROUND: Oxytocin is known to reduce anxiety and stress in social interactions, as well as to modulate approach behavior. Recent studies suggest that the amygdala might be the primary neuronal basis for these effects. METHODS: In a functional magnetic resonance imaging study using a double-blind, placebo-controlled within-subject design, we measured neural responses to fearful, angry, and happy facial expressions after intranasal application of 24 IU oxytocin compared with placebo. RESULTS: Oxytocin reduced right-sided amygdala responses to all three face categories, even when the emotional content of the presented face was not evaluated explicitly. Exploratory whole-brain analysis revealed modulatory effects in prefrontal and temporal areas as well as in the brainstem. CONCLUSIONS: The results suggest a modulatory role of oxytocin on amygdala responses to facial expressions irrespective of their valence. Reduced amygdala activity to positive and negative stimuli might reflect reduced uncertainty about the predictive value of a social stimulus, thereby facilitating social approach behavior.
3.
The cerebellum has extensive connections with the frontal lobes, and cerebellar injury has been reported to induce frontal-executive cognitive dysfunction and blunting of affect. We examined a patient with idiopathic cerebellar degeneration whose impaired family relationships were attributed to an “emotional disconnection.” Examination revealed ataxia, dysmetria, and adiadochokinesia, more severe on the left, together with frontal-executive dysfunction; memory and other cognitive functions were normal. Testing of emotional communication included assessments of emotional semantic knowledge, emotional prosody, and emotional facial expressions. Comprehension was normal, but expression was severely impaired. Cerebellar dysfunction can thus cause a deficit in facial and prosodic emotional communication.
4.
Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotion recognition tasks could reflect problems with emotional processing, motor processing, or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on the two tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks: in the first, they identified the facial emotion presented in a photograph; in the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm for inducing implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task, P < 0.001; left/right judgment task, P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r² = 0.523). Thus, participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin, or at least contribute to, the difficulty that facial pain patients have with emotion recognition, and that further research testing this proposal is warranted.
5.
Research on facial expressions in individuals with Down syndrome (DS) has previously been conducted using photographs. Our goal was to examine the effect of motion on the perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA) viewed photographs and video clips of four facial expressions: happy, sad, mad, and scared. The odds of accurate identification of facial expressions were 2.7 times greater for video clips than for photographs, and the odds of accurately identifying expressions of mad and scared in particular were greater for video clips. The odds of accurate identification of mad and sad expressions were greater for adults, but did not differ between adults with DS and children. Adults with DS demonstrated the lowest accuracy for recognition of scared. These results support the importance of motion cues in evaluating the social skills of individuals with DS.
6.
Background: A large body of research on facial emotion recognition in autism spectrum disorders (ASD) exists and has reported deficits in ASD compared to controls, particularly for negative basic emotions. However, these studies have largely used static, high-intensity stimuli. The current study investigated facial emotion recognition across three levels of expression intensity from videos, examining accuracy rates to assess impairments in facial emotion recognition and error patterns ('confusions') to explore potential underlying factors.
Method: Twelve individuals with ASD (9 M/3 F; M(age) = 17.3) and 12 matched controls (9 M/3 F; M(age) = 16.9) completed a facial emotion recognition task comprising nine emotion categories (anger, disgust, fear, sadness, surprise, happiness, contempt, embarrassment, pride) and neutral, each expressed by 12 encoders at low, intermediate, and high intensity.
Results: An overall facial emotion recognition deficit was found for the ASD group compared to controls, as well as deficits in recognising individual negative emotions at varying expression intensities. Compared to controls, the ASD group showed significantly more, albeit typical, confusions between emotion categories (at high intensity), and significantly more confusions of emotions as 'neutral' (at low intensity).
Conclusions: The facial emotion recognition deficits identified in ASD, particularly for negative emotions, are in line with previous studies using other types of stimuli. Error analysis showed that individuals with ASD had difficulty detecting emotional information in the face (sensitivity) at low intensity and correctly identifying emotional information (specificity) at high intensity. These results suggest different underlying mechanisms for the facial emotion recognition deficits at low vs. high expression intensity.
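The error-pattern analysis described above is, in essence, a confusion-matrix tabulation. A minimal sketch in Python (the per-trial data are entirely hypothetical, and scikit-learn is assumed available) of how category confusions and the "confused as neutral" rate can be computed:

```python
from sklearn.metrics import confusion_matrix

labels = ["anger", "disgust", "fear", "sadness", "surprise",
          "happiness", "contempt", "embarrassment", "pride", "neutral"]

# Hypothetical per-trial records: emotion presented vs. emotion reported.
y_true = ["fear", "anger", "sadness", "neutral", "fear", "disgust"]
y_pred = ["neutral", "anger", "neutral", "neutral", "fear", "anger"]

# Rows = presented category, columns = reported category.
cm = confusion_matrix(y_true, y_pred, labels=labels)

# 'Confusions of emotions as neutral': emotional trials answered "neutral".
emo_rows = [i for i, lab in enumerate(labels) if lab != "neutral"]
neutral_col = labels.index("neutral")
as_neutral = cm[emo_rows, neutral_col].sum() / cm[emo_rows, :].sum()
print(f"emotional trials judged neutral: {as_neutral:.0%}")  # 40% here
```

In the study's design, this rate would be computed separately at low, intermediate, and high intensity to yield the pattern the authors report.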
7.
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults who had been learning British Sign Language (BSL) for 1–3 years produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same conventionalized handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
8.
Patients with corticobasal degeneration (CBD) frequently develop orofacial apraxia, but little is known about CBD's influence on emotional facial processing. We describe a patient who developed a facial apraxia, including an impaired ability to voluntarily generate facial expressions, with relative sparing of spontaneous emotional expressions. Her ability to interpret the facial expressions of others was also severely impaired. Despite these deficits, the patient had normal affect and normal speech, including expressive and receptive emotional prosody. As patients with corticobasal degeneration are known to manifest both orofacial apraxia and visuospatial dysfunction, this patient's expressive and receptive deficits may be independent manifestations of the same underlying disease process. Alternatively, these functions may share a common neuroanatomic substrate that degenerates in CBD.
9.
Social Neuroscience, 2013, 8(5), 448–461
The recognition of emotional facial expressions is an important means of adjusting behavior in social interactions. As facial expressions differ widely in their duration and degree of expressiveness, they often manifest as short and transient expressions below the level of awareness. In this combined behavioral and fMRI study, we aimed to examine whether emotional facial expressions that are not consciously accessible (subliminal) influence empathic judgments, and which brain activations are related to this. We hypothesized that subliminal emotional facial expressions, masked with neutral expressions of the same faces, induce empathic processing similar to that of consciously accessible (supraliminal) facial expressions. Our behavioral data from 23 healthy subjects showed that subliminal emotional facial expressions of 40 ms duration affect judgments of the subsequent neutral facial expressions. The fMRI study in 12 healthy subjects found that supra- and subliminal emotional facial expressions both engaged a widespread network of brain areas, including the fusiform gyrus, the temporo-parietal junction, and the inferior, dorsolateral, and medial frontal cortex. Compared with subliminal facial expressions, supraliminal facial expressions led to greater activation of the left occipital and fusiform face areas. We conclude that masked subliminal emotional information is suited to triggering processing in brain areas that have been implicated in empathy and, thereby, in social encounters.
10.
Social Neuroscience, 2013, 8(1), 30–39
Previous research has demonstrated that young infants' neural processing of novel objects is enhanced by a fearful face gazing toward the object. The current event-related potential (ERP) study addresses the question of whether this effect is driven by the particular threat value of a fearful expression or whether a positive emotion can elicit a similar response. Three-month-old infants' brain responses were measured while they were presented with happy and neutral faces gazing toward simultaneously presented objects (Experiment 1) or away from them (Experiment 2); the objects were then presented again without the face. While infants showed an increased neural response to happy relative to neutral faces looking toward objects, they did not differentiate between happy and neutral faces gazing away from the objects. Furthermore, infants showed no differential response to the objects alone in Experiment 1. In Experiment 2, however, infants responded to objects from the neutral-face condition with an increased negative central component (Nc), indicating increased attention. The current results confirm previous findings that infants allocate increased attention to an emotional face when it directs eye gaze toward an object in the environment. However, a happy expression does not increase subsequent processing of the gaze-cued object. The findings are discussed in terms of early social-cognitive development.