951.
Umbilical cord blood transplantation: current practice and future innovations (cited 10 times: 0 self-citations, 10 by others)
As a source of hematopoietic stem cells (HSCs), umbilical cord blood (UCB) has the advantages of speed of availability, tolerance of 1-2 antigen HLA mismatch, and a low incidence of severe GVHD. Thus, UCB represents a highly convenient HSC source that may significantly extend the HSC donor pool. UCB transplantation (UCBT) has become a standard practice in the treatment of pediatric malignancies. Now, UCBT is being investigated in adults using both conventional and non-myeloablative preparative regimens. As graft cell dose is the major factor determining hematopoietic recovery and survival in unrelated-donor (URD) UCBT, methods to increase cell dose, such as multiple-unit transplant and ex vivo expansion, are being pursued. This review outlines the current status of UCBT with emphasis on current and future innovations.
952.
Screening for bipolar disorder in the community (cited 7 times: 0 self-citations, 7 by others)
BACKGROUND: Our goal was to estimate the rate of positive screens for bipolar I and bipolar II disorders in the general population of the United States. METHOD: The Mood Disorder Questionnaire (MDQ), a validated screening instrument for bipolar I and II disorders, was sent to a sample of 127,800 people selected to represent the U.S. adult population by demographic variables. 85,358 subjects (66.8% response rate) who were 18 years of age or older returned the survey and had usable data. Of the nonrespondents, 3,404 subjects demographically matched to the 2000 U.S. Census data completed a telephone interview to estimate nonresponse bias. RESULTS: The overall positive screen rate for bipolar I and II disorders, weighted to match the 2000 U.S. Census demographics, was 3.4%. When adjusted for the nonresponse bias, the rate rose to 3.7%. Only 19.8% of the individuals with positive screens for bipolar I or II disorders reported that they had previously received a diagnosis of bipolar disorder from a physician, whereas 31.2% reported receiving a diagnosis of unipolar depression. An additional 49.0% reported receiving no diagnosis of either bipolar disorder or unipolar depression. Positive screens were more frequent in young adults and low-income households. The rates of migraine, allergies, asthma, and alcohol and drug abuse were substantially higher among those with positive screens. CONCLUSION: The positive MDQ screen rate of 3.7% suggests that nearly 4% of American adults may suffer from bipolar I and II disorders. Young adults and individuals with lower income are at greater risk for this largely underdiagnosed disorder.
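The nonresponse adjustment described above amounts to a weighted combination of the respondent rate and an estimated nonrespondent rate from the telephone follow-up. A minimal sketch; the 4.3% nonrespondent rate is a hypothetical value chosen so the illustration reproduces the reported 3.4% to 3.7% adjustment, not a figure from the study:

```python
def nonresponse_adjusted_rate(p_resp, p_nonresp, response_rate):
    """Weight the respondent and (estimated) nonrespondent positive-screen
    rates by their share of the target sample."""
    return p_resp * response_rate + p_nonresp * (1.0 - response_rate)

# Respondents screened positive at 3.4% with a 66.8% response rate (from the
# abstract); a hypothetical nonrespondent rate of ~4.3% reproduces the
# adjusted 3.7% figure.
adjusted = nonresponse_adjusted_rate(0.034, 0.043, 0.668)
print(round(adjusted, 3))  # 0.037
```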
953.
Cadaveric-donor organ recovery at a hospital-independent facility (cited 2 times: 0 self-citations, 2 by others)
BACKGROUND: Of the many logistic issues addressed throughout the cadaveric organ donation process, timely access to the operating theater for surgical recovery of organs and tissues can be one of the most problematic. Delay in recovery adds to cost, risks organ viability, and compounds donor family anguish, compromising donation consent. METHODS: From March 1 to November 30, 2001, 25 cadaveric donors were selected and successfully transferred from local donor critical care units to an off-site facility that was constructed, equipped, and staffed to allow surgical recovery of organs and tissues. The recovery process and outcome results were compared with 42 consecutive, hospital-based organ recoveries within the Mid-American Transplant Services (MTS) organ procurement organization region. RESULTS: Twenty-five MTS-facility and 42 hospital organ recoveries were successfully conducted with no technical losses and satisfactory function in all 206 transplanted organs. From the MTS donor group, 7 hearts, 4 lungs, 21 livers, 28 kidneys, and 5 pancreases were successfully transplanted. Statistically significant in the MTS group were higher donor age (44.1 vs. 30.2 years), shorter total donor management time (539 vs. 718 min), reduced delay in start of surgery (25 vs. 77 min), shorter cold ischemia time for recovered pancreases (355 vs. 630 min), and reduced mean cost per donor ($10,636 vs. $12,918). There was no significant difference in race, gender, cause of death, vasopressor requirements, organs per donor recovered (3.12 vs. 3.62) or transplanted (2.60 vs. 3.36), rate of tissue recoveries (68% vs. 67%), total operating room time (207 vs. 200 min), or cold ischemia time (excluding pancreas). CONCLUSIONS: Cadaveric-donor multiorgan and tissue recovery at this hospital-independent facility was successfully accomplished in a manner indistinguishable from conventional hospital organ and tissue recovery. The intended objectives of improved access to the operating theater were realized, along with the added benefits of significant cost savings and convenience to hospital personnel and surgical recovery teams.
954.
Background. A variety of rotary blood pumps are under development worldwide to serve as chronic ventricular assist devices (VADs). Historically VADs have been associated with thrombotic and thromboembolic complications, yet the ability to evaluate the thrombotic process in preclinical device testing has been limited.

Methods. We have developed and applied flow cytometric assays for activated platelets, platelet microaggregates, and platelet life span and consumption to calves implanted with an axial flow VAD and calves undergoing a sham surgical procedure.

Results. Surgical sham calves had significant increases in circulating activated platelets (p < 0.05) that resolved within 17 days, and no increases in circulating platelet microaggregates. Calves with uneventful VAD implant periods had early transient elevations in platelet microaggregates and prolonged elevations in activated platelets that did not recover to preoperative values during the study. Daily platelet consumption in VAD implanted calves was increased by 20% ± 3%. Calves with thrombotic deposition within the VAD and elevated thromboembolism observed at autopsy experienced increases in circulating activated platelets and microaggregates at the end of the implant period when VAD flow decreased.

Conclusions. This study demonstrates the ability of flow cytometry-based platelet assays to differentiate VAD implant operations from VAD support, and suggests that differences exist between uneventful VAD support and support with complications. These techniques should have value in evaluating other cardiovascular devices undergoing preclinical testing and provide insight into the temporal impact of these devices on the hemostatic system.

955.
Recently, inflammation has received considerable attention in the pathogenesis of both type 2 diabetes and atherosclerosis. The interleukin-1 receptor antagonist (IL-1ra) is a major modulator of the interleukin-1 pro-inflammatory pathway. We studied the relationship between a variable number tandem repeat (VNTR) polymorphism in intron 2 of the IL-1ra gene (IL1RN) and coronary artery disease (CAD) in 787 consecutive patients, with and without type 2 diabetes, admitted for suspected CAD. According to the current criteria of the American Diabetes Association, 250 patients had type 2 diabetes. In this group of patients, allele 2 carriers (n = 108) had an increased prevalence of CAD compared with noncarriers (85.2 vs. 73.2%), a difference that remained significant in a multivariate logistic regression model (odds ratio 2.2, 95% CI 1.1-4.3, P = 0.02). No association of CAD with allele 2 carrier status was present among nondiabetic patients (n = 537). Enzyme-linked immunosorbent assays showed decreased baseline plasma levels of IL-1ra in patients with type 2 diabetes, which may in part explain the role of the IL1RN VNTR in these patients.
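For illustration, a crude (unadjusted) odds ratio with a Wald confidence interval can be reconstructed from the reported percentages. A sketch; the 2x2 counts below are approximations back-calculated from the abstract, and the study's reported 2.2 (95% CI 1.1-4.3) is covariate-adjusted, so the crude value differs slightly:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table
        exposed:   a cases, b non-cases
        unexposed: c cases, d non-cases
    with a Wald confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Approximate counts: 108 allele 2 carriers, 85.2% with CAD -> 92 cases,
# 16 non-cases; 142 noncarriers, 73.2% with CAD -> 104 cases, 38 non-cases.
or_, lo, hi = odds_ratio_ci(92, 16, 104, 38)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # crude OR ~ 2.1
```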
956.
Between April 1996 and December 1999, 76 tibial shaft fractures were treated at the Department of Trauma Surgery of the Justus-Liebig-University in Giessen, Germany, and the Department of Orthopedic Surgery of the University of Louisville, USA, with a newly developed, unreamed, solid, small-diameter tibial nail interlocked "biorigidly" with screws seated in grooves of the nail. Sixty-nine patients were reviewed with a minimum follow-up of 16 months. In 65 patients, the fractures united without exchange nailing, although four of these fractures showed delayed healing. In four further cases, non-union occurred, one of which was associated with the only nail breakage, located at a distal interlocking groove. One patient developed a late medullary infection, which has not recurred following treatment. Among the 358 implanted interlocking screws, no implant failure was observed. This first clinical experience suggests that, particularly because of the low rate of material fatigue, the biorigid nail is an alternative to other implants for unreamed intramedullary nailing of the tibia.
957.
During the 2001 AMIA Annual Symposium, the Anesthesia, Critical Care, and Emergency Medicine Working Group hosted the Roundtable on Bioterrorism Detection. Sixty-four people attended the roundtable discussion, during which several researchers discussed public health surveillance systems designed to enhance early detection of bioterrorism events. These systems make secondary use of existing clinical, laboratory, paramedical, and pharmacy data or facilitate electronic case reporting by clinicians. This paper combines case reports of six existing systems with discussion of some common techniques and approaches. The purpose of the roundtable discussion was to foster communication among researchers and promote progress by 1) sharing information about systems, including origins, current capabilities, stages of deployment, and architectures; 2) sharing lessons learned during the development and implementation of systems; and 3) exploring cooperation projects, including the sharing of software and data. A mailing list server for these ongoing efforts may be found at http://bt.cirg.washington.edu.

Bioterrorism has quickly become a new and frightening part of life in America. A host of potential agents, with varying degrees of virulence and a confusing array of nonspecific symptoms, are now household words. The field of medical and public health informatics has long concerned itself with developing methods to represent, store, and analyze data that describe the complexities of individual and population-based health.1 Now, informatics tools such as knowledge representation, controlled vocabularies, heterogeneous databases, security and confidentiality, clinical decision support, data mining, and data visualization are being applied with a new urgency to the task of early detection of intentional outbreaks of disease.

In November 2001, as part of the activities of the Anesthesia, Critical Care, and Emergency Medicine Working Group, investigators from several research groups took part in the "Roundtable on Bioterrorism Detection" at the AMIA Annual Symposium. The session was subtitled "Information System-based Sentinel Surveillance." These researchers, and others, are developing public health surveillance systems that make secondary use of data gathered during normal clinical workflow or that facilitate electronic case reporting by clinicians. These surveillance strategies are intended to enhance early detection of changes in the health of the community. This paper combines brief case reports of a number of existing systems with a discussion of some commonly employed techniques and approaches.

Several bioterrorism-related posters and papers were presented at the Symposium.2–7 A handful of systems, all in active development, are currently deployed. The utility of these systems in detecting bioterrorism events is unproven, and it is hoped that their full capabilities will never need to be tested directly. However, the value of monitoring and aggregating disease indicators across a population is clear, if intuitive, and such surveillance has a strong precedent in public health practice.8–10

There are strategies for indirectly measuring the performance of these systems and for improving their diagnostic accuracy and timeliness, even in the absence of bioterrorism cases. These strategies include measuring the accuracy of detection of components of case definitions, as opposed to detection of outbreaks. Other strategies involve the detection of surrogate diseases, such as influenza, whose symptoms are similar to the initial symptoms of inhalational anthrax. Espino et al.4 showed a 44% sensitivity and 97% specificity in detection of cases of acute respiratory illness, a common symptom prodrome of many illnesses spread by bio-aerosol agents. A companion study3 showed that time-series analysis of such cases in a population could detect an outbreak of influenza. McClung et al.11 found relatively similar sensitivity and specificity (37% and 97%, respectively) in a system detecting asthma visits, based on chief complaint on presentation to an emergency room.

A number of federal and other agencies have funded the work on these surveillance systems. These include the Centers for Disease Control and Prevention (CDC), through State Bioterrorism Preparedness grants, the Health Alert Network program, and cooperative agreements; the Agency for Healthcare Research and Quality (AHRQ); the Defense Advanced Research Projects Agency (DARPA); the National Library of Medicine (NLM), both directly through grant funding and indirectly through support of NLM Fellowships in Informatics and Integrated Advanced Information Management System sites; and state and local public health agencies using CDC and other funds.
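The sensitivity and specificity figures quoted above come from comparing system detections against a gold-standard case definition. A minimal sketch with hypothetical counts chosen to mirror the ~44%/97% operating point reported by Espino et al.:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Detection accuracy of a surveillance classifier against a
    gold-standard case definition."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts, not data from the cited studies:
# 100 true cases of acute respiratory illness (44 detected) and
# 1000 non-cases (30 falsely flagged).
sens, spec = sensitivity_specificity(tp=44, fn=56, tn=970, fp=30)
print(sens, spec)  # 0.44 0.97
```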
958.
Data from the US population-based Third National Health and Nutrition Examination Survey, conducted from 1988 to 1994, were used to estimate the population prevalence, prevalence odds ratios, and attributable fractions for the association of chronic obstructive pulmonary disease (COPD) with employment by industry and occupation. The aim was to identify industries and occupations at increased risk of COPD. COPD was defined as forced expiratory volume in 1 second (FEV1)/forced vital capacity <70% and FEV1 <80% predicted. The authors used SUDAAN software (Research Triangle Institute, Research Triangle Park, North Carolina) to estimate the weighted population prevalence and odds ratios using 9,823 subjects aged 30-75 years who underwent lung function tests. Odds ratios for COPD, adjusted for age, smoking status, pack-years of smoking, body mass index, education, and socioeconomic status, were increased for the following industries: rubber, plastics, and leather manufacturing; utilities; office building services; textile mill products manufacturing; the armed forces; food products manufacturing; repair services and gas stations; agriculture; sales; construction; transportation and trucking; personal services; and health care. Occupations associated with increased odds ratios for COPD were freight, stock, and material handlers; records processing and distribution clerks; sales; transportation-related occupations; machine operators; construction trades; and waitresses. The fraction of COPD attributable to work was estimated as 19.2% overall and 31.1% among never smokers.
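Population attributable fractions like the 19.2% above are conventionally computed with Levin's formula from exposure prevalence and relative risk. A sketch; the inputs are hypothetical round numbers for illustration, not estimates from this study:

```python
def levin_attributable_fraction(exposure_prevalence, relative_risk):
    """Population attributable fraction via Levin's formula:
    PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    x = exposure_prevalence * (relative_risk - 1.0)
    return x / (x + 1.0)

# Hypothetical inputs: if 40% of the population works in at-risk
# industries/occupations with a pooled relative risk of ~1.6, the
# work-attributable fraction of COPD comes out near 19%.
paf = levin_attributable_fraction(0.40, 1.6)
print(round(paf, 3))  # 0.194
```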
959.
OBJECTIVE: To highlight the unique challenges of evaluative research on practice behavior change in the "real world" settings of contemporary managed-care organizations, using the experience of the Pediatric Asthma Care PORT (Patient Outcomes Research Team). STUDY SETTING: The Pediatric Asthma Care PORT is a five-year initiative funded by the Agency for Healthcare Research and Quality to study strategies for asthma care improvement in three managed-care plans in Chicago, Seattle, and Boston. At its core is a randomized trial of two care improvement strategies compared with usual care: (1) a targeted physician education program using practice-based Peer Leaders (PL) as change agents; (2) adding to the PL intervention a "Planned Asthma Care Intervention" incorporating joint "asthma check-ups" by nurse-physician teams. During the trial, each of the participating organizations viewed asthma care improvement as an immediate priority and had its own corporate improvement programs underway. DATA COLLECTION: Investigators at each health plan described the organizational and implementation challenges of conducting the PAC PORT randomized trial. These experiences were reviewed for common themes and "lessons" that might be useful to investigators planning interventional research in similar care-delivery settings. CONCLUSIONS: Randomized trials in "real world" settings represent the most robust design available to test care improvement strategies. In complex, rapidly changing managed-care organizations, blinding is not feasible, corporate initiatives may complicate implementation, and the assumption that a "usual care" arm will remain static is highly likely to be mistaken. Investigators must be prepared to use innovative strategies to maintain the integrity of the study design, including: continuous improvement within the intervention arms, comanagement by researchers and health plan managers of condition-related quality improvement initiatives, procedures for avoiding respondent burden in health plan enrollees, and anticipation and minimization of risks from experimental-arm contamination and major organizational change. With attention to these delivery-system issues, as well as the usual design features of randomized trials, we believe managed-care organizations can serve as important laboratories to test care improvement strategies.
960.
OBJECTIVE: To describe initial testing of the Assessment of Chronic Illness Care (ACIC), a practical quality-improvement tool to help organizations evaluate the strengths and weaknesses of their delivery of care for chronic illness in six areas: community linkages, self-management support, decision support, delivery system design, information systems, and organization of care. DATA SOURCES: (1) Pre-post, self-report ACIC data from organizational teams enrolled in 13-month quality-improvement collaboratives focused on care for chronic illness; (2) independent faculty ratings of team progress at the end of the collaborative. STUDY DESIGN: Teams completed the ACIC at the beginning and end of the collaborative using a consensus format that produced average ratings of their system's approach to delivering care for the targeted chronic condition. Average ACIC subscale scores (ranging from 0 to 11, with 11 representing optimal care) for teams across all four collaboratives were obtained to indicate how teams rated their care for chronic illness before beginning improvement work. Paired t-tests were used to evaluate the sensitivity of the ACIC in detecting system improvements for teams in two (of four) collaboratives focused on care for diabetes and congestive heart failure (CHF). Pearson correlations between the ACIC subscale scores and a faculty rating of team performance were also obtained. RESULTS: Average baseline scores across all teams enrolled at the beginning of the collaboratives ranged from 4.36 (information systems) to 6.42 (organization of care), indicating basic to good care for chronic illness. All six ACIC subscale scores were responsive to the system improvements that diabetes and CHF teams made over the course of the collaboratives. The most substantial improvements were seen in decision support, delivery system design, and information systems. CHF teams had particularly high scores in self-management support at the completion of the collaborative. Pearson correlations between the ACIC subscales and the faculty rating ranged from 0.28 to 0.52. CONCLUSION: These results and feedback from teams suggest that the ACIC is responsive to health care quality-improvement efforts and may be a useful tool to guide quality improvement in chronic illness care and to track progress over time.
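The two statistics used in this evaluation, a paired t-test on pre/post ACIC subscale scores and a Pearson correlation against faculty ratings, can be sketched with standard-library Python; the five teams' scores below are hypothetical, not study data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def paired_t(pre, post):
    """Paired t statistic: mean within-team change divided by its
    standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical pre/post ACIC subscale scores (0-11 scale) for five teams.
pre = [4.4, 5.1, 4.8, 6.0, 5.5]
post = [6.2, 6.8, 6.1, 7.5, 7.0]
print(round(paired_t(pre, post), 2))
print(round(pearson_r(pre, post), 2))
```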