1.
With the advent of computerized databases, medical data has become easy to accumulate; however, effective use of this data continues to pose significant problems. In other circumstances, smoothing algorithms have been used to uncover non-obvious correlations, trends and relationships in noisy data. We have applied four such algorithms to a large dataset of postoperative blood replacement in cardiopulmonary bypass patients. When applied to this dataset, one of the algorithms proved surprisingly effective. It confirmed several previously observed correlations, and also provided an additional series of counterintuitive and apparently unrelated associations. These associations have been explored in an accompanying paper.
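The abstract does not name the four smoothing algorithms; as an illustration of the general idea of smoothing a noisy clinical series, a centered moving average can be sketched as follows (the function name and window size are assumptions, not from the paper):

```python
def moving_average(values, window=5):
    """Smooth a noisy series with a centered moving average.

    Hypothetical illustration: the paper does not specify its four
    algorithms, so this is just one common smoother. The window is
    clipped at the ends of the series rather than padded.
    """
    n = len(values)
    half = window // 2
    smoothed = []
    for i in range(n):
        lo = max(0, i - half)          # clip window at the left edge
        hi = min(n, i + half + 1)      # clip window at the right edge
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed
```

A constant series is left unchanged, while alternating noise is pulled toward its local mean.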
2.
The dissemination of Electronic Health Records (EHRs) can be highly beneficial for a range of medical studies, spanning from clinical trials to epidemic control studies, but it must be performed in a way that preserves patients’ privacy. This is not straightforward, because the disseminated data need to be protected against several privacy threats, while remaining useful for subsequent analysis tasks. In this work, we present a survey of algorithms that have been proposed for publishing structured patient data in a privacy-preserving way. We review more than 45 algorithms, derive insights on their operation, and highlight their advantages and disadvantages. We also provide a discussion of some promising directions for future research in this area.
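Many of the surveyed algorithms target guarantees such as k-anonymity: every combination of quasi-identifier values (e.g., ZIP prefix, age range) must occur in at least k published records. A minimal sketch of checking this property (illustrative only; the surveyed algorithms achieve it through generalization and suppression, which are not shown here):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check whether a table satisfies k-anonymity: every combination
    of quasi-identifier values must appear in at least k records.

    `records` is a list of dicts; `quasi_identifiers` names the
    columns an attacker could link to external data.
    """
    counts = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(c >= k for c in counts.values())
```

A publishing algorithm would coarsen the quasi-identifier values until this check passes, trading precision for privacy.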
3.
Today, most medical images are stored as a set of single-frame composite Digital Imaging and Communications in Medicine (DICOM) objects that contain the four levels of the DICOM information model—patient, study, series, and instance. Although DICOM addresses most of the issues related to medical image archiving, it has some limitations. Replicating the header information with each DICOM object increases the study size and the parsing overhead. Multi-frame DICOM (MFD) was developed to address this, among other issues. The MFD combines all DICOM objects belonging to a series into a single DICOM object. Hence, the series-level attributes are normalized, and the amount of header data repetition is reduced. In this paper, multi-series DICOM (MSD) is introduced as a potential extension to the DICOM standard that allows faster parsing, transmission, and storage of studies. MSD extends the MFD de-duplication of series-level attributes to study-level attributes. A single DICOM object that stores the whole study is proposed. An efficient algorithm, called the one-pass de-duplication algorithm, was developed to find and eliminate the replicated data elements within the study. A group of experiments was conducted to evaluate the performance of MSD and the one-pass de-duplication algorithm. The experiments show that MSD significantly reduces the amount of data repetition and decreases the time required to read and parse DICOM studies. MSD is one possible solution that addresses the DICOM limitations regarding header information repetition.
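The paper's one-pass de-duplication algorithm is not reproduced here, but its core idea, hoisting attributes that are identical across all objects in a study into a single shared header, can be sketched as follows (a hypothetical simplification in which DICOM tags are modeled as plain dictionary keys):

```python
def deduplicate_study(instances):
    """Hoist attributes shared by every instance in a study into one
    shared header (a simplified sketch of study-level de-duplication;
    not the paper's exact algorithm).

    `instances` is a list of dicts mapping DICOM tag -> value.
    Returns (shared, per_instance): `shared` holds attributes whose
    value is identical in all instances; `per_instance` holds the rest.
    """
    shared = dict(instances[0])
    for inst in instances[1:]:          # single pass over the instances
        for tag in list(shared):
            if inst.get(tag) != shared[tag]:
                del shared[tag]         # value differs -> keep per instance
    per_instance = [
        {t: v for t, v in inst.items() if t not in shared}
        for inst in instances
    ]
    return shared, per_instance
```

Storing `shared` once per study is what removes the header repetition that inflates conventional single-frame studies.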
4.
Background: Increasing evidence is becoming available on the aetiology and management of fevers in Asia; the importance of these fevers has increased with the decline in the incidence of malaria.
Aims: To conduct a narrative review of the epidemiology and management of fevers in South and South-East Asia and to highlight gaps in our knowledge that impair evidence-based health policy decisions.
Sources: A narrative review of papers published since 2012 on developments in fever epidemiology, diagnosis and treatment in South and South-East Asia. The papers that the authors felt were pivotal, from their personal perspectives, are discussed.
Content: We identified 100 studies. Among the 30 studies (30%), including both children and adults, that investigated three or more pathogens, the most frequently reported fever aetiology was dengue (reported by 15, 50%), followed by leptospirosis (eight, 27%), scrub typhus (seven, 23%) and Salmonella serovar Typhi (six, 20%). Among four studies investigating three or more pathogens in children, dengue and Staphylococcus aureus were the most frequent, followed by non-typhoidal Salmonella spp., Streptococcus pneumoniae, Salmonella serovar Typhi, and Orientia tsutsugamushi. Increased awareness is needed that rickettsial pathogens are common but do not respond to cephalosporins, and that alternative therapies, such as tetracyclines, are required.
Implications: Many key gaps remain, and consensus guidelines for study design are needed to aid comparative understanding of the epidemiology of fevers. More investment in developing accurate and affordable diagnostic tests for rural Asia, and independent evaluation of those already on the market, are needed. Treatment algorithms, including simple biomarker assays, appropriate for empirical therapy of fevers in different areas of rural Asia should be a major aim of fever research. Enhanced antimicrobial resistance (AMR) surveillance and openly accessible databases of geography-specific AMR data would inform policy on empirical and specific therapy. More investment in innovative strategies facilitating infectious disease surveillance in remote rural communities would be an important component of poverty reduction and improving public health.
5.
Background: Until recently, most testing algorithms in the United States (US) used Western blot (WB) as the supplemental test. CDC has proposed an algorithm for HIV diagnosis that includes an initial screen with a combination antigen/antibody 4th-generation immunoassay (IA), followed by an HIV-1/2 discriminatory IA of initially reactive-IA specimens. Discordant results in the proposed algorithm are resolved by nucleic acid amplification testing (NAAT).
Objectives: To evaluate the results obtained with the CDC-proposed laboratory-based algorithm using specimens from men who have sex with men (MSM) obtained in five metropolitan statistical areas (MSAs).
Study design: Specimens from 992 MSM from five MSAs participating in the CDC's National HIV Behavioral Surveillance System in 2011 were tested at local facilities and at CDC. The five MSAs used algorithms of various screening assays and specimen types, with WB as the supplemental test. At CDC, serum/plasma specimens were screened with the 4th-generation IA, and the Multispot HIV-1/HIV-2 discriminatory assay was used as the supplemental test. NAAT was used to resolve discordant results and to identify acute HIV infections among specimens that screened non-reactive and were therefore missed by the proposed algorithm. Performance of the proposed algorithm was compared to the site-specific WB-based algorithms.
Results: The proposed algorithm detected 254 infections. The WB-based algorithms detected 19 fewer infections: 4 by oral fluid (OF) rapid testing and 15 by WB supplemental testing (12 OF and 3 blood). One acute infection was identified by NAAT among the screened-non-reactive specimens.
Conclusions: The proposed algorithm identified more infections than the WB-based algorithms in a high-risk MSM population. OF testing was associated with most of the discordant results between algorithms. HIV testing with the proposed algorithm can increase diagnosis of infected individuals, including early infections.
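The decision flow of the proposed algorithm (4th-generation IA screen, then discriminatory IA, then NAAT to resolve discordant results) can be sketched as follows; the function name and result labels are illustrative, not CDC wording:

```python
def classify_specimen(ia_reactive, discriminatory_result, naat_positive=False):
    """Sketch of the proposed laboratory algorithm's decision flow
    (simplified; labels are illustrative, not official terminology).

    ia_reactive: 4th-generation antigen/antibody IA result (bool)
    discriminatory_result: 'HIV-1', 'HIV-2', or 'negative'
    naat_positive: NAAT result, consulted only for discordant specimens
    """
    if not ia_reactive:
        return "negative"                       # screen non-reactive
    if discriminatory_result in ("HIV-1", "HIV-2"):
        return f"{discriminatory_result} infection"
    # IA reactive but discriminatory IA negative: discordant -> NAAT
    if naat_positive:
        return "acute HIV-1 infection"
    return "negative (IA false-reactive)"
```

The discordant branch is where the proposed algorithm picks up acute infections that a WB-based algorithm would miss.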
6.
7.
Annales d'endocrinologie (2022), 83(6): 440-453
The SFE-AFCE-SFMN 2022 consensus deals with the management of thyroid nodules, a condition that is a frequent reason for consultation in endocrinology. In more than 90% of cases, patients are euthyroid with benign and non-progressive nodules that do not warrant specific treatment. The clinician's objective is to detect malignant thyroid nodules at risk of recurrence and death, toxic nodules responsible for hyperthyroidism or compressive nodules warranting treatment. The diagnosis and treatment of thyroid nodules requires close collaboration between endocrinologists, nuclear medicine physicians and surgeons but also involves other specialists. Therefore, this consensus statement was established jointly by 3 societies, the French Society of Endocrinology (SFE), the French Association of Endocrine Surgery (AFCE) and the French Society of Nuclear Medicine (SFMN); the various working groups included experts from other specialties (pathologists, radiologists, pediatricians, biologists, etc.). This specific text is a summary chapter taking up the recommendations from specific sections and presenting algorithms for the exploration and management of thyroid nodules.
8.
Artificial intelligence (AI) demonstrated by machines is based on techniques such as reinforcement learning and revolves around the use of algorithms. The purpose of this review was to summarize the concepts, scope, applications, and limitations of AI in major gastrointestinal surgery. This is a narrative review of the available literature on the key capabilities of AI, intended to help anesthesiologists, surgeons, and other physicians understand and critically evaluate ongoing and new AI applications in perioperative management. AI uses large available databases ("big data") to formulate an algorithm. Analysis of other data based on these algorithms can help in early diagnosis, accurate risk assessment, intraoperative management, automated drug delivery, and prediction of anesthetic and surgical complications and postoperative outcomes, and can thus lead to effective perioperative management as well as reduce the cost of treatment. Perioperative physicians, anesthesiologists, and surgeons are well positioned to help integrate AI into modern surgical practice. We all need to partner and collaborate with data scientists to collect and analyze data across all phases of perioperative care to provide clinical scenarios and context. Careful implementation and use of AI, along with real-time human interpretation, will revolutionize perioperative care and is the way forward in the future perioperative management of major surgery.
9.
Our aim was to develop a generic Open Source MRI perfusion analysis tool for quantitative parameter mapping to be used in a clinical workflow, together with methods for quality management of perfusion data. We implemented a classic, pixel-by-pixel deconvolution approach to quantify T1-weighted contrast-enhanced dynamic MR imaging (DCE-MRI) perfusion data as an OsiriX plug-in. It features parallel computing capabilities and an automated reporting scheme for quality management. Furthermore, by our implementation design, it is easily extendable to other perfusion algorithms. Obtained results are saved as DICOM objects and directly added to the patient study. The plug-in was evaluated on ten MR perfusion data sets of the prostate and a calibration data set by comparing the obtained parametric maps (plasma flow, volume of distribution, and mean transit time) to a widely used reference implementation in IDL. For all data, parametric maps could be calculated, and the plug-in worked correctly and stably. On average, deviations of 0.032 ± 0.02 ml/100 ml/min for the plasma flow, 0.004 ± 0.0007 ml/100 ml for the volume of distribution, and 0.037 ± 0.03 s for the mean transit time were observed between our implementation and the reference implementation. By using computer hardware with eight CPU cores, calculation time could be reduced by a factor of 2.5. We successfully developed an Open Source OsiriX plug-in for T1-DCE-MRI perfusion analysis in a routine, quality-managed clinical environment. Using model-free deconvolution, it allows for perfusion analysis in various clinical applications. With our plug-in, information about measured physiological processes can be obtained and transferred into clinical practice.
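The abstract does not detail the plug-in's exact deconvolution and regularization scheme; one common model-free approach is truncated-SVD deconvolution of the tissue curve against the arterial input function (AIF), from which plasma flow, volume of distribution, and mean transit time follow. A sketch under these assumptions (function name and cutoff are illustrative):

```python
import numpy as np

def deconvolve_tissue_curve(aif, tissue, dt, rel_cutoff=0.1):
    """Model-free deconvolution via truncated SVD (one possible
    scheme; not necessarily the plug-in's exact implementation).

    Solves tissue = dt * (A @ residue), where A is the lower-triangular
    Toeplitz convolution matrix built from the AIF. Plasma flow is the
    peak of the residue function, its integral is the volume of
    distribution, and their ratio gives the mean transit time.
    """
    n = len(aif)
    A = dt * np.array(
        [[aif[i - j] if i >= j else 0.0 for j in range(n)]
         for i in range(n)]
    )
    U, s, Vt = np.linalg.svd(A)
    # Truncate small singular values to stabilize the inversion.
    s_inv = np.where(s > rel_cutoff * s[0], 1.0 / s, 0.0)
    residue = Vt.T @ (s_inv * (U.T @ tissue))
    plasma_flow = residue.max()
    volume = residue.sum() * dt
    mtt = volume / plasma_flow if plasma_flow > 0 else 0.0
    return residue, plasma_flow, volume, mtt
```

When the AIF is an ideal bolus (a unit impulse), the convolution matrix is the identity and the recovered residue function equals the tissue curve.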
10.
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and is potentially a major source of treatment delivery error. To optimize this critical, yet observer-driven, process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source of pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
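The abstract does not list the specific agreement metrics TaCTICS computes; the Dice similarity coefficient is a standard choice for quantitative interobserver ROI comparison and can be sketched as follows (an illustrative metric, not necessarily the platform's own):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary contour masks,
    a common interobserver agreement metric.

    Masks are sets of voxel coordinates inside each observer's ROI.
    Returns 1.0 for identical contours and 0.0 for disjoint ones.
    """
    if not mask_a and not mask_b:
        return 1.0  # two empty contours agree trivially
    overlap = len(mask_a & mask_b)
    return 2.0 * overlap / (len(mask_a) + len(mask_b))
```

Aggregating such pairwise scores across observers is one way to derive the kind of quality metrics the platform was built to pilot.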