Search results: 1,944 records in total (1,744 subscription full text, 171 free, 29 free within China), distributed across 17 medical subject categories and publication years 1973–2024. Items 71–80 are shown below.
71.
Objective. Scandinavian guidelines recommend checking for middle-ear effusion (MEE) after acute otitis media. The study aim was to determine whether nurses without otoscopic experience can reliably exclude MEE with tympanometry or spectral gradient acoustic reflectometry (SG-AR) at asymptomatic visits. Design. Three nurses were taught to perform examinations with tympanometry and SG-AR. Pneumatic otoscopy by the study physician served as the diagnostic standard. Setting. Study clinic at primary health care level. Patients. A total of 156 children aged 6–35 months. Main outcome measures. Predictive values (with 95% confidence intervals) for tympanometry and SG-AR, and clinical usefulness, i.e. the proportion of visits at which nurses obtained an MEE-excluding test result from both ears of the child. Results. At 196 visits, the negative predictive value of type A and C1 tympanograms (tympanometric peak pressure > −200 daPa) was 95% (91–97%). Based on type A and C1 tympanograms, the nurse could exclude MEE at 81/196 (41%) of visits. The negative predictive value of an SG-AR level 1 result was 86% (79–91%). Based on SG-AR level 1 results, the nurse could exclude MEE at 29/196 (15%) of visits. Conclusion. Tympanograms with tympanometric peak pressure > −200 daPa (types A and C1) obtained by nurses are reliable for excluding MEE. However, such results were obtained at fewer than half of the asymptomatic visits, so the usefulness of MEE exclusion by nurses depends on the clinical setting.
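As a worked illustration of the arithmetic behind such figures, the sketch below computes a negative predictive value and an approximate 95% confidence interval; the counts and the choice of the Wilson score interval are assumptions for illustration, not details taken from the study.

```python
import math

def npv_with_wilson_ci(true_negatives, false_negatives, z=1.96):
    """Negative predictive value TN / (TN + FN) with an approximate 95% Wilson score interval."""
    n = true_negatives + false_negatives
    p = true_negatives / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, centre - half, centre + half

# Hypothetical counts for illustration only (not the study data):
npv, lo, hi = npv_with_wilson_ci(true_negatives=154, false_negatives=8)
print(f"NPV = {npv:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```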
72.
Objective. To measure optic disc and macular parameters in healthy controls and patients with early-to-moderate primary open-angle glaucoma (POAG) using spectral-domain optical coherence tomography (SD-OCT), and to analyse the factors influencing these parameters. Methods. Forty patients (40 eyes) with early-to-moderate POAG seen in our department between September 2015 and August 2018 were enrolled, 20 aged 20–39 years and 20 aged 60–79 years, together with 40 healthy controls (40 eyes) seen during the same period, likewise 20 aged 20–39 years and 20 aged 60–79 years. All subjects underwent a comprehensive ophthalmic examination, and SD-OCT was used to measure peripapillary retinal nerve fibre layer (pRNFL) thickness, mean macular ganglion cell layer plus inner plexiform layer (GCL-IPL) thickness, minimum macular GCL-IPL thickness, and mean overall macular thickness. Results. In both the healthy control group and the early-to-moderate POAG group, mean macular GCL-IPL thickness and minimum GCL-IPL thickness decreased with age (P<0.05 and P<0.01, respectively), whereas mean overall macular thickness showed no appreciable change with age in either group. In the healthy controls, age had little effect on pRNFL thickness, with no significant difference between the 20–39-year and 60–79-year subgroups (P>0.05). In the early-to-moderate POAG group, mean, superior, inferior, and temporal pRNFL thicknesses were thinner than in age-matched controls in both age subgroups, and these thicknesses were thinner in the 60–79-year subgroup than in the 20–39-year subgroup (all P<0.01); nasal pRNFL thickness in the POAG group did not differ significantly between the two age subgroups (P>0.05) or from age-matched controls (P>0.05). Conclusion. pRNFL thickness measured by SD-OCT is associated with POAG and may serve as an index for its early diagnosis.
73.
74.
75.
76.
Objective. To evaluate virtual monoenergetic images at different energy levels in the cortical phase of renal spectral CT enhancement against iodine concentration values, in order to determine the optimal monoenergetic level. Methods. Fifty patients with normal renal function who underwent contrast-enhanced abdominal spectral CT were analysed retrospectively. Iodine concentration and CT attenuation values on monoenergetic images at different energy levels were measured on cortical-phase images, and their correlations and the coefficients of variation of the CT values were analysed. Results. Correlation analysis showed that cortical-phase renal cortex CT values at 40, 50, 60, 70, 80, 90, and 100 keV correlated with iodine concentration with coefficients of 0.994, 0.994, 0.993, 0.987, 0.976, 0.960, and 0.938, respectively (all P<0.001). Comparison of the correlation coefficients showed that those at 40 and 50 keV (P=0.007) and at 60 keV (P=0.030) were significantly larger than that at 70 keV, with no significant difference between the 40/50 keV and 60 keV coefficients (P=0.590). The coefficients of variation of renal cortex CT values at 40, 50, and 60 keV were 0.21, 0.20, and 0.19, respectively. Conclusion. 60 keV is the optimal monoenergetic level for the cortical phase of renal spectral CT enhancement.
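For readers who want to reproduce this type of analysis, the sketch below computes the Pearson correlation between monoenergetic CT values and iodine concentration, and the coefficient of variation of the CT values, for each energy level; all numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for illustration: iodine concentration (mg/mL) and
# renal cortex CT values (HU) at two monoenergetic levels for a handful of patients.
iodine = np.array([2.1, 2.8, 3.0, 3.4, 3.9, 4.5])
ct_values = {
    40: np.array([310.0, 395.0, 420.0, 470.0, 540.0, 610.0]),
    60: np.array([150.0, 190.0, 205.0, 228.0, 262.0, 300.0]),
}

for kev, hu in ct_values.items():
    r, p = stats.pearsonr(hu, iodine)        # correlation with iodine concentration
    cv = hu.std(ddof=1) / hu.mean()          # coefficient of variation of the CT values
    print(f"{kev} keV: r = {r:.3f} (p = {p:.3g}), CV = {cv:.2f}")
```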
77.
In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth.

Every day, multiple decisions are made based on input and suggestions from several sources, either algorithms or advisers, of unknown reliability. Investment companies handle their portfolios by combining reports from several analysts, each providing recommendations on buying, selling, or holding multiple stocks (1, 2). Central banks combine surveys of several professional forecasters to monitor rates of inflation, real gross domestic product growth, and unemployment (3–6). Biologists study the genomic binding locations of proteins by combining or ranking the predictions of several peak detection algorithms applied to large-scale genomics data (7). Physician tumor boards convene a number of experts from different disciplines to discuss patients whose diseases pose diagnostic and therapeutic challenges (8). Peer-review panels discuss multiple grant applications and make recommendations to fund or reject them (9). The examples above describe scenarios in which several human advisers or algorithms provide their predictions or answers to a list of queries or questions. A key challenge is to improve decision making by combining these multiple predictions of unknown reliability. Automating this process of combining multiple predictors is an active field of research in decision science (cci.mit.edu/research), medicine (10), business (refs. 11 and 12 and www.kaggle.com/competitions), and government (www.iarpa.gov/Programs/ia/ACE/ace.html and www.goodjudgmentproject.com), as well as in statistics and machine learning.

Such scenarios, whereby advisers of unknown reliability provide potentially conflicting opinions, or propose to take opposite actions, raise several interesting questions. How should the decision maker proceed to identify who, among the advisers, is the most reliable? Moreover, is it possible for the decision maker to cleverly combine the collection of answers from all of the advisers and provide even more accurate answers?

In statistical terms, the first question corresponds to the problem of estimating prediction performances of preconstructed classifiers (e.g., the advisers) in the absence of class labels. Namely, each classifier was constructed independently on a potentially different training dataset (e.g., each adviser trained on his/her own using possibly different sources of information), yet they are all being applied to the same new test data (e.g., list of queries) for which labels are not available, either because they are expensive to obtain or because they will only be available in the future, after the decision has been made. In addition, the accuracy of each classifier on its own training data is unknown. This scenario is markedly different from the standard supervised setting in machine learning and statistics. There, classifiers are typically trained on the same labeled data and can be ranked, for example, by comparing their empirical accuracy on a common labeled validation set. In this paper we show that under standard assumptions of independence between classifier errors their unknown performances can still be ranked even in the absence of labeled data.

The second question raised above corresponds to the problem of combining predictions of preconstructed classifiers to form a metaclassifier with improved prediction performance. This problem arises in many fields, including combination of forecasts in decision science and crowdsourcing in machine learning, which have each derived different approaches to address it. If we had external knowledge or historical data to assess the reliability of the available classifiers we could use well-established solutions relying on panels of experts or forecast combinations (11–14). In our problem such knowledge is not always available and thus these solutions are in general not applicable. The oldest solution that does not require additional information is majority voting, whereby the predicted class label is determined by a rule of majority, with all advisers assigned the same weight. More recently, iterative likelihood maximization procedures, pioneered by Dawid and Skene (15), have been proposed, in particular in crowdsourcing applications (16–23). Owing to the nonconvexity of the likelihood function, these techniques often converge only to a local, rather than global, maximum and require careful initialization. Furthermore, there are typically no guarantees on the quality of the resulting solution.

In this paper we address these questions via a spectral analysis that yields four major insights:
  1. Under standard assumptions of independence between classifier errors, in the limit of an infinite test set, the off-diagonal entries of the population covariance matrix of the classifiers correspond to a rank-one matrix.
  2. The entries of the leading eigenvector of this rank-one matrix are proportional to the balanced accuracies of the classifiers. Thus, a spectral decomposition of this rank-one matrix provides a computationally efficient approach to rank the performances of an ensemble of classifiers.
  3. A linear approximation of the maximum likelihood estimator yields an ensemble learner whose weights are proportional to the entries of this eigenvector. This represents an efficient, easily constructed, unsupervised ensemble learner, which we term Spectral Meta-Learner (SML).
  4. An interest group of conspiring classifiers (a cartel) that maliciously attempts to veer the overall ensemble solution away from the (unknown) ground truth leads to a rank-two covariance matrix. Furthermore, in contrast to majority voting, SML is robust to the presence of a small-enough cartel whose members are unknown.
In addition, we demonstrate the advantages of spectral approaches based on these insights, using both simulated and real-world datasets. When the independence assumptions hold approximately, SML is typically better than most classifiers in the ensemble and their majority vote, achieving results comparable to the maximum likelihood estimator (MLE). Empirically, we find SML to be a better starting point for computing the MLE, one that consistently leads to improved performance. Finally, spectral approaches are also robust to cartels and therefore helpful in analyzing surveys where a biased subgroup of advisers (a cartel) may have corrupted the data.
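A minimal, unofficial Python sketch of the SML idea follows (not the authors' implementation): estimate the rank-one off-diagonal covariance structure of the ±1 predictions, take the leading eigenvector as per-classifier weights, and combine predictions with a weighted vote. Zeroing the covariance diagonal is a simplification; the paper handles the unknown diagonal entries more carefully.

```python
import numpy as np

def spectral_meta_learner(predictions):
    """Unsupervised ensemble weighting via the leading eigenvector of the
    classifiers' covariance matrix (a sketch of the SML idea, not the
    authors' reference code).

    predictions: (m, n) array of +/-1 labels from m classifiers on n instances.
    Returns the weighted-vote labels and the per-classifier weights.
    """
    q = np.cov(predictions)                  # m x m sample covariance

    # Under conditional independence the off-diagonal of q is rank one,
    # q_ij ≈ v_i v_j (i != j). Estimate v from the leading eigenvector,
    # ignoring the (noisy) diagonal by zeroing it first.
    q_off = q - np.diag(np.diag(q))
    eigvals, eigvecs = np.linalg.eigh(q_off)
    v = eigvecs[:, np.argmax(eigvals)]

    # Fix the global sign: entries of v are proportional to balanced accuracy
    # minus 1/2, so most classifiers (assumed better than random) should get
    # positive weight.
    if v.sum() < 0:
        v = -v

    scores = v @ predictions                 # weighted vote per instance
    return np.sign(scores), v

# Toy usage with simulated classifiers of varying accuracy:
rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=500)
accs = [0.9, 0.8, 0.7, 0.6, 0.55]
preds = np.array([np.where(rng.random(truth.size) < a, truth, -truth) for a in accs])
labels, weights = spectral_meta_learner(preds)
print("ensemble accuracy:", (labels == truth).mean())
```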
78.
79.
Introduction. The aim of this study was to clarify the interpretation of sensory-motor rhythm (SMR; 13–15 Hz) and beta (16–20 Hz) changes with respect to attention states.

Method. For this purpose, EEG was recorded from 11 participants during (a) a multiple object tracking task (MOT), which required externally directed attention; (b) the retention phase of a visuo-spatial memory task (VSM), which required internally directed attention and avoidance of sensory distraction; and (c) the waiting intervals between trials, which constituted a no-task-imposed control condition. The 2 active tasks were consecutively presented at 2 difficulty levels (i.e., easy and hard). Two analyses of variance were conducted on EEG log spectral amplitudes in the alpha (8–12 Hz), SMR, and beta bands from F3, F4, C3, C4 and P3, P4.

Results. The first analysis compared the MOT to the VSM by difficulty level and revealed a significant task effect (p < .0005) but no effect of difficulty: externally directed attention (MOT) resulted in lower values than internally directed attention (VSM) in all three bands. The second analysis averaged the difficulty levels together and added the no-task-imposed reference condition. It again showed a significant task effect that did not interact with site, hemisphere, or, more importantly, band. Post hoc tests revealed that both MOT and VSM produced significantly smaller means than the no-task-imposed condition. This pattern of log-amplitude means, together with the lack of any interaction of task with the other factors, indicates that task-induced attention reduces EEG power in the same proportion across the 3 bands and the 6 channels studied.

Conclusions. These results contradict a frequent interpretation concerning the relationship between the brain's ability to increase low-beta activity in neurofeedback programs and improved sustained-attention capacities.
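To make the dependent measure concrete, here is a hedged sketch of how log spectral amplitudes in the alpha, SMR and beta bands could be computed for a single EEG channel with Welch's method; the sampling rate, window length and the use of the square root of the PSD as "amplitude" are assumptions rather than details reported in the study.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands as defined in the abstract.
BANDS = {"alpha": (8, 12), "SMR": (13, 15), "beta": (16, 20)}

def log_band_amplitudes(eeg, fs=256.0):
    """Return log10 mean spectral amplitude per band for a 1-D EEG signal."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))      # 2-s Welch windows
    amps = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        amps[name] = np.log10(np.sqrt(psd[mask]).mean())      # amplitude = sqrt(power)
    return amps

# Toy signal: white noise standing in for a 60-s recording at one channel, 256 Hz.
rng = np.random.default_rng(1)
signal = rng.standard_normal(256 * 60)
print(log_band_amplitudes(signal))
```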
80.
Objective. To explore the use of the detrended fluctuation analysis (DFA) scaling exponent of the awake electroencephalogram (EEG) as a new alternative biomarker of neurobehavioural impairment and sleepiness in obstructive sleep apnea (OSA). Methods. Eight patients with moderate–severe OSA and nine non-OSA controls underwent a 40-h extended wakefulness challenge, with resting awake EEG, neurobehavioural performance (driving simulator and psychomotor vigilance task) and subjective sleepiness recorded every 2 h. The DFA scaling exponent and power spectra of the EEG were calculated at each time point, and their correlations with sleepiness and performance were quantified. Results. DFA scaling exponent and power spectral biomarkers correlated significantly with simultaneously tested performance and self-rated sleepiness across the testing period in both OSA patients and controls. The baseline (8 am) DFA scaling exponent, but not power spectra, was a marker of impaired simulated driving after 24 h of extended wakefulness in OSA (r = 0.738, p = 0.037). OSA patients had a higher scaling exponent and higher delta power during wakefulness than controls. Conclusions. The DFA scaling exponent of the awake EEG performed as well as conventional power spectra as a marker of impaired performance and sleepiness resulting from sleep loss. Significance. DFA may potentially identify patients at risk of neurobehavioural impairment and help assess treatment effectiveness.
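For orientation, a generic textbook implementation of the DFA scaling exponent is sketched below; the window sizes and the linear detrending order are assumptions, and this is not the authors' exact pipeline. White noise yields an exponent near 0.5, while values approaching or exceeding 1 indicate stronger long-range temporal correlations.

```python
import numpy as np

def dfa_exponent(x, scales=None, order=1):
    """Detrended fluctuation analysis scaling exponent of a 1-D signal
    (a generic DFA sketch, not the study's exact pipeline)."""
    x = np.asarray(x, dtype=float)
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 20).astype(int))
    y = np.cumsum(x - x.mean())              # integrated (profile) series
    fluctuations = []
    for n in scales:
        n_windows = len(y) // n
        windows = y[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # Remove a polynomial trend from each window, then take the RMS residual.
        rms = []
        for w in windows:
            trend = np.polyval(np.polyfit(t, w, order), t)
            rms.append(np.sqrt(np.mean((w - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(n) versus log n.
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

# White noise should give an exponent close to 0.5.
rng = np.random.default_rng(2)
print(dfa_exponent(rng.standard_normal(10_000)))
```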