Article Search
Results by access type:
  Paid full text: 3698 articles
  Free: 368 articles
  Free (domestic): 59 articles
Results by subject area:
  Otorhinolaryngology: 146 articles
  Pediatrics: 296 articles
  Obstetrics and Gynecology: 13 articles
  Basic Medical Sciences: 420 articles
  Stomatology: 23 articles
  Clinical Medicine: 563 articles
  Internal Medicine: 209 articles
  Dermatology: 3 articles
  Neurology: 1116 articles
  Special Medicine: 53 articles
  Surgery: 97 articles
  General/Multidisciplinary: 492 articles
  General Theory: 4 articles
  Preventive Medicine: 485 articles
  Ophthalmology: 10 articles
  Pharmacy: 85 articles
  (unlabeled): 4 articles
  Traditional Chinese Medicine: 82 articles
  Oncology: 24 articles
Results by publication year:
  2024: 16 articles
  2023: 91 articles
  2022: 82 articles
  2021: 191 articles
  2020: 158 articles
  2019: 161 articles
  2018: 176 articles
  2017: 169 articles
  2016: 161 articles
  2015: 173 articles
  2014: 233 articles
  2013: 526 articles
  2012: 217 articles
  2011: 189 articles
  2010: 132 articles
  2009: 162 articles
  2008: 161 articles
  2007: 152 articles
  2006: 163 articles
  2005: 126 articles
  2004: 98 articles
  2003: 91 articles
  2002: 75 articles
  2001: 60 articles
  2000: 66 articles
  1999: 31 articles
  1998: 34 articles
  1997: 35 articles
  1996: 27 articles
  1995: 22 articles
  1994: 19 articles
  1993: 19 articles
  1992: 8 articles
  1991: 15 articles
  1990: 10 articles
  1989: 10 articles
  1988: 9 articles
  1987: 6 articles
  1986: 3 articles
  1985: 9 articles
  1984: 7 articles
  1983: 7 articles
  1982: 7 articles
  1981: 3 articles
  1980: 4 articles
  1977: 1 article
  1976: 1 article
  1975: 2 articles
  1974: 1 article
  1973: 5 articles
A total of 4125 results were retrieved (search time: 437 ms).
101.
102.
Context: Goals-of-care discussions are an important quality metric in palliative care. However, these discussions are often documented as free text in diverse locations, making them difficult to identify efficiently in the electronic health record (EHR).
Objectives: To develop, train, and test an automated approach to identifying goals-of-care discussions in the EHR using natural language processing (NLP) and machine learning (ML).
Methods: From the EHRs of an academic health system, we collected a purposive sample of 3183 notes (1435 inpatient and 1748 outpatient) from 1426 patients with serious illness over 2008–2016, and manually reviewed each note for documentation of goals-of-care discussions. Separately, we developed a program to identify notes containing such documentation using NLP and supervised ML. We estimated the performance characteristics of the NLP/ML program across 100 pairs of randomly partitioned training and test sets, and repeated these methods for inpatient-only and outpatient-only subsets.
Results: Of the 3183 notes, 689 contained documentation of goals-of-care discussions. The mean sensitivity of the NLP/ML program was 82.3% (SD 3.2%) and the mean specificity was 97.4% (SD 0.7%). The NLP/ML results had a median positive likelihood ratio of 32.2 (IQR 27.5–39.2) and a median negative likelihood ratio of 0.18 (IQR 0.16–0.20). Performance was better in inpatient-only samples than in outpatient-only samples.
Conclusion: Using NLP and ML techniques, we developed a novel approach to identifying goals-of-care discussions in the EHR. NLP and ML represent a potential approach to measuring goals-of-care discussions as a research outcome and quality metric.
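The abstract above reports the evaluation protocol (100 random train/test partitions; sensitivity, specificity, and likelihood ratios) but not the underlying classifier. The following is a minimal, hypothetical Python sketch of such a pipeline, assuming a TF-IDF bag-of-words representation with logistic regression; the function name and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: the study's exact NLP/ML pipeline is not
# specified in the abstract. This assumes TF-IDF features + logistic
# regression, evaluated over repeated random train/test partitions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def evaluate_splits(notes, labels, n_splits=100):
    """Estimate sensitivity/specificity across random partitions."""
    sens, spec = [], []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            notes, labels, test_size=0.3, stratify=labels, random_state=seed)
        vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(vec.fit_transform(X_tr), y_tr)
        y_hat = clf.predict(vec.transform(X_te))
        tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    sens, spec = np.array(sens), np.array(spec)
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec, as reported above.
    lr_pos = sens / np.clip(1 - spec, 1e-9, None)
    lr_neg = (1 - sens) / spec
    return sens.mean(), spec.mean(), np.median(lr_pos), np.median(lr_neg)
```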
103.
104.
Sign language (SL) conveys linguistic information using gestures instead of sounds. Here, we apply a meta-analytic estimation approach to neuroimaging studies (N = 23; subjects = 316) and ask whether SL comprehension in deaf signers relies on the same primarily left-hemispheric cortical network implicated in spoken and written language (SWL) comprehension in hearing speakers. We show that: (a) SL recruits bilateral fronto-temporo-occipital regions with strong left-lateralization in the posterior inferior frontal gyrus known as Broca's area, mirroring functional asymmetries observed for SWL; (b) within this SL network, Broca's area constitutes a hub which attributes abstract linguistic information to gestures; and (c) SL-specific voxels in Broca's area are also crucially involved in SWL, as confirmed by meta-analytic connectivity modeling using an independent large-scale neuroimaging database. This strongly suggests that the human brain evolved a lateralized language network with a supramodal hub in Broca's area which computes linguistic information independent of speech.
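For readers unfamiliar with coordinate-based meta-analysis, the sketch below illustrates the core idea behind activation likelihood estimation (ALE) in simplified form: each reported activation focus is modeled as a 3-D Gaussian, per-study maps are taken as the maximum over kernels, and studies are combined as a probabilistic union. The grid size, kernel width, and example foci are arbitrary didactic assumptions; this is not necessarily the authors' method or tooling.

```python
# Didactic, simplified ALE-style meta-analysis on a coarse voxel grid.
import numpy as np

GRID = (40, 48, 40)   # illustrative brain grid (voxels)
SIGMA = 2.0           # illustrative kernel width (voxels)

def focus_map(foci_vox):
    """Per-study modeled-activation map: max over Gaussian kernels at foci."""
    zz, yy, xx = np.meshgrid(*[np.arange(n) for n in GRID], indexing="ij")
    ma = np.zeros(GRID)
    for (i, j, k) in foci_vox:
        d2 = (zz - i) ** 2 + (yy - j) ** 2 + (xx - k) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA ** 2)))
    return ma

def ale_map(studies):
    """Combine studies as a probabilistic union: ALE = 1 - prod(1 - MA)."""
    out = np.ones(GRID)
    for foci in studies:
        out *= 1.0 - focus_map(foci)
    return 1.0 - out

# Two hypothetical studies reporting nearby foci:
studies = [[(20, 30, 12)], [(21, 29, 13), (22, 31, 12)]]
print(ale_map(studies).max())
```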
105.
Background: Novel data mining methods, such as natural language processing (NLP) applied to electronic health records (EHRs), may be used to screen for and detect individuals at risk for psychosis.
Method: The study included all patients receiving a first index diagnosis of nonorganic and nonpsychotic mental disorder within the South London and Maudsley (SLaM) NHS Foundation Trust between January 1, 2008, and July 28, 2018. Least Absolute Shrinkage and Selection Operator (LASSO)-regularized Cox regression was used to refine and externally validate a refined version of a five-item, individualized, transdiagnostic, clinically based risk calculator developed previously (Harrell's C = 0.79) and piloted for implementation. The refined version included 14 additional NLP predictors: tearfulness, poor appetite, weight loss, insomnia, cannabis, cocaine, guilt, irritability, delusions, hopelessness, disturbed sleep, poor insight, agitation, and paranoia.
Results: A total of 92,151 patients with a first index diagnosis of nonorganic and nonpsychotic mental disorder within the SLaM Trust were included in the derivation (n = 28,297) or external validation (n = 63,854) data sets. Mean age was 33.6 years, 50.7% were women, and 67.0% were of white race/ethnicity. Mean follow-up was 1590 days. The overall 6-year risk of psychosis in secondary mental health care was 3.4 (95% CI, 3.3–3.6). External validation indicated strong performance on unseen data (Harrell's C = 0.85, 95% CI 0.84–0.86), an increase of 0.06 over the original model.
Conclusions: Using NLP on EHRs can considerably enhance the prognostic accuracy of psychosis risk calculators. This can help identify patients at risk of psychosis who require assessment and specialized care, facilitating earlier detection and potentially improving patient outcomes.
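The LASSO-regularized Cox regression and external validation via Harrell's C described above can be sketched with the lifelines library. This is a hypothetical illustration: the column names (days_to_event, psychosis), the penalty strength, and the data layout are assumptions, not the study's actual code or features.

```python
# Hypothetical sketch: assumes pandas DataFrames with a follow-up-time
# column, an event indicator, and binary NLP-derived predictor columns
# (e.g. "insomnia", "paranoia") alongside the original calculator items.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def fit_and_validate(derivation: pd.DataFrame, validation: pd.DataFrame):
    """LASSO-regularized Cox model with external validation (Harrell's C)."""
    # l1_ratio=1.0 makes the elastic-net penalty a pure LASSO penalty.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
    cph.fit(derivation, duration_col="days_to_event", event_col="psychosis")
    # Harrell's C on unseen data: concordance between predicted risk and
    # observed times. Higher partial hazard means shorter expected survival,
    # so the score is negated for concordance_index.
    risk = cph.predict_partial_hazard(validation)
    c = concordance_index(validation["days_to_event"], -risk,
                          validation["psychosis"])
    return cph, c
```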
106.
Ververi-Brady syndrome (VBS, #617982) is a rare developmental disorder, and loss-of-function variants in QRICH1 have been implicated in its etiology. Furthermore, a recognizable phenotype has been proposed, comprising delayed speech, learning difficulties, and dysmorphic signs. Here, we present four unrelated individuals: one with a known nonsense variant (c.1954C>T; p.(Arg652*)) and three with novel de novo QRICH1 variants. The novel variants include two frameshift mutations (c.832_833del; p.(Ser278Leufs*25) and c.1812_1813delTG; p.(Glu605Glyfs*25)) and, interestingly, one missense mutation (c.2207G>A; p.(Ser736Asn)), expanding the mutational spectrum. Enlargement of the cohort by these four individuals contributes to the delineation of the VBS phenotype and suggests expressive speech delay, moderate motor delay, learning difficulties/mild intellectual disability, mild microcephaly, short stature, and notable social behavior deficits as clinical hallmarks. In addition, one patient presented with nephroblastoma. The possible involvement of QRICH1 in pediatric cancer makes careful surveillance a key priority for the outcome of these patients. Further research and larger cohorts are warranted to characterize the genetic architecture and the phenotypic spectrum in more detail.
107.
Natural use of language involves at least two individuals. Some studies have focused on the interaction between senders in communicative situations and on how knowledge about the speaker can bias language comprehension. However, the mere effect of a face as a social context on language processing remains unknown. In the present study, we used event-related potentials (ERPs) to investigate the semantic and morphosyntactic processing of speech in the presence of a photographic portrait of the speaker. In Experiment 1, we show that the amplitude of the N400, a component related to semantic comprehension, increased when sentences were processed within this minimal social context compared to a scrambled-face control condition. Hence, the semantic neural processing of speech is sensitive to the concomitant perception of a picture of the speaker's face, even when it is irrelevant to the content of the sentences. Moreover, a late posterior negativity effect was found in response to the presentation of the speaker's face compared to control stimuli. In contrast, in Experiment 2, we found that morphosyntactic processing, as reflected in left anterior negativity and P600 effects, is not notably affected by the presence of the speaker's portrait. Overall, these findings suggest that the mere presence of the speaker's image triggers a minimal communicative context, increasing the processing resources devoted to language comprehension at the semantic level.
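As a concrete illustration of the ERP measure involved, the hypothetical MNE-Python sketch below computes a per-trial mean amplitude in a typical N400 window (300–500 ms) over centro-parietal channels and compares two conditions. The file name, condition labels, channel picks, and the single-subject t-test are all simplifying assumptions, not the study's actual analysis.

```python
# Hypothetical single-subject N400 mean-amplitude comparison with MNE-Python.
import mne
from scipy import stats

epochs = mne.read_epochs("speaker_face-epo.fif")  # hypothetical file

def mean_amplitude(epochs, tmin=0.30, tmax=0.50, picks=("Cz", "CPz", "Pz")):
    """Per-trial mean voltage in the N400 window over centro-parietal sites."""
    data = epochs.copy().pick(list(picks)).crop(tmin, tmax).get_data()
    return data.mean(axis=(1, 2))  # average over channels and time points

face = mean_amplitude(epochs["face"])            # hypothetical condition label
scrambled = mean_amplitude(epochs["scrambled"])  # hypothetical condition label
t, p = stats.ttest_ind(face, scrambled)
print(f"N400 mean amplitude: t = {t:.2f}, p = {p:.3f}")
```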
108.
109.
There has long been interest in why languages are shaped the way they are and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults who had been learning British Sign Language (BSL) for 1–3 years produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same conventionalized handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they, too, were able to understand classifiers with above-chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
110.
When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, in which the hands show how an imagined object is held or grasped in action, or an instrument strategy, in which the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants with commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly preferred the handling strategy in both the picture and video conditions; nevertheless, they used more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use handling forms. The finding that both gesturers and signers tend to alternate strategies depending on whether the stimuli are pictures or videos suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits this distinction for grammatical purposes, to distinguish nouns and verbs related to tool use.