51 results found (search time: 281 ms)
41.
Background: In Wisconsin, COVID-19 case interview forms contain free-text fields that need to be mined to identify potential outbreaks for targeted policy making. We developed an automated pipeline that ingests the free text into a pretrained neural language model to identify businesses and facilities as outbreaks. Objective: We aimed to examine the precision and recall of our natural language processing pipeline against existing outbreaks and potentially new clusters. Methods: Data on cases of COVID-19 were extracted from the Wisconsin Electronic Disease Surveillance System (WEDSS) for Dane County between July 1, 2020, and June 30, 2021. Features from the case interview forms were fed into a Bidirectional Encoder Representations from Transformers (BERT) model fine-tuned for named entity recognition (NER). We also developed a novel location-mapping tool to provide addresses for the recognized entities. Precision and recall were measured against manually verified outbreaks and valid addresses in WEDSS. Results: There were 46,798 cases of COVID-19, with 4,183,273 total BERT tokens and 15,051 unique tokens. The recall and precision of the NER tool were 0.67 (95% CI 0.66-0.68) and 0.55 (95% CI 0.54-0.57), respectively. For the location-mapping tool, the recall and precision were both 0.93 (95% CI 0.92-0.95). Across monthly intervals, the NER tool identified more potential clusters than were verified in WEDSS. Conclusions: We developed a novel pipeline of tools that identified existing outbreaks and novel clusters with associated addresses. Our pipeline ingests data from a statewide database and may be deployed to assist local health departments with targeted interventions.
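The evaluation above reports precision and recall with 95% confidence intervals. A minimal sketch of such an evaluation, assuming set-based exact matching of predicted against verified records and a normal-approximation interval (the paper's actual matching criteria are not reproduced here):

```python
import math

def precision_recall(predicted, verified):
    """Score predicted entity/address records against manually
    verified ones; a true positive is an exact set match."""
    tp = len(predicted & verified)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(verified) if verified else 0.0
    return precision, recall

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion
    p estimated from n items."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```

With four predictions of which two match three verified records, precision is 0.5 and recall is 2/3; the interval narrows as n grows.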
42.
In medicine, text is a primary information carrier, but its unstructured nature makes it difficult to analyze in bulk by computer. Natural language processing (NLP) bridges natural language and machine-processable representations, and with the recent progress of deep learning it has been widely applied to text processing. Named entity recognition (NER), a subtask of NLP, plays an important role in knowledge base construction, information extraction, and related tasks. Focusing on the application of NER to medical and pharmaceutical text, this paper reviews the current mainstream NER research methods and the main data sources, highlights the advantages of deep learning for entity recognition in the medical domain, and provides a reference for related research.
43.
Objective: To extract symptom named entities from clinical case records of traditional Chinese medicine (TCM). Methods: Clinical lung-cancer case records from renowned senior TCM physicians were sequence-labeled; a conditional random field (CRF) was trained on the annotated samples, tested with 10-fold cross-validation, and evaluated with multi-class metrics. Results: The micro-averaged precision, recall, and F1 (P, R, F1) of the CRF model were 0.9233 ± 0.0063, 0.9222 ± 0.0062, and 0.9211 ± 0.0062; the macro-averaged metrics were 0.8822 ± 0.0126, 0.8322 ± 0.0215, and 0.8556 ± 0.0151. The highest-weighted disease-location terms were, in descending order, "背" (back), "胸" (chest), "口" (mouth), "腰" (lower back), and "鼻" (nose); the highest-weighted symptom terms were "咳" (cough), "痛" (pain), "痰" (phlegm), "酸" (soreness), and "闷" (tightness). Conclusion: A CRF-based clinical information extraction model produces results consistent with TCM syndrome-differentiation theory and can effectively recognize symptom named entities in TCM clinical case records.
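The study reports both micro- and macro-averaged scores. A minimal sketch of the two averaging schemes, assuming per-class true positive/false positive/false negative counts are already available:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def micro_macro(counts):
    """counts: {label: (tp, fp, fn)} per entity class.

    Micro-averaging pools counts across classes (frequent classes
    dominate); macro-averaging averages the per-class scores, so
    rare classes weigh equally.
    """
    micro = prf(sum(c[0] for c in counts.values()),
                sum(c[1] for c in counts.values()),
                sum(c[2] for c in counts.values()))
    per_class = [prf(*c) for c in counts.values()]
    n = len(per_class)
    macro = tuple(sum(m[i] for m in per_class) / n for i in range(3))
    return micro, macro
```

The gap between the two averages in the paper (micro ≈ 0.92 vs macro ≈ 0.86) indicates weaker performance on the rarer entity classes.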
44.
Recognizing named entities in electronic medical records (EMRs) is important for building and mining large clinical databases that support clinical decision-making, yet research on Chinese EMRs remains relatively scarce. After comparing existing entity recognition methods and models, we adopted a conditional random field (CRF) machine learning approach to automatically recognize three entity types common in Chinese EMRs: diseases, clinical symptoms, and surgical procedures. First, by analyzing the characteristics of EMR data, we selected linguistic symbols, part of speech, word-formation features, word boundaries, and context as the feature set. Next, we built and annotated a small corpus of EMRs randomly sampled from multiple departments of a clinical hospital. Finally, we ran three controlled experiments with CRF++, a toolkit implementing the CRF algorithm. By analyzing step by step how the features in the set affect automatic CRF recognition, we propose basic rules for CRF feature selection and template design in the Chinese EMR setting. The method performed well in the controlled experiments, with best F1 scores of 92.67%, 93.76%, and 95.06% for the three entity types.
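The feature set described above (token, part of speech, context window) can be illustrated as a per-token feature extractor of the kind fed to a CRF. A simplified sketch; the paper's full CRF++ templates also cover word-formation and boundary features, which are omitted here:

```python
def token_features(tokens, pos_tags, i):
    """Feature dict for token i, combining the token itself, its
    part of speech, a digit flag, and a +/-1 context window."""
    return {
        "token": tokens[i],
        "pos": pos_tags[i],
        "is_digit": tokens[i].isdigit(),
        "prev_token": tokens[i - 1] if i > 0 else "<BOS>",
        "next_token": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
    }
```

In CRF++ itself the same idea is expressed as template lines such as `U01:%x[-1,0]` (previous token) rather than Python dicts.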
45.
Objective: The goal of this study is to explore transformer-based models (eg, Bidirectional Encoder Representations from Transformers [BERT]) for clinical concept extraction and to develop an open-source package with pretrained clinical models that facilitates concept extraction and other downstream natural language processing (NLP) tasks in the medical domain. Methods: We systematically explored 4 widely used transformer-based architectures, including BERT, RoBERTa, ALBERT, and ELECTRA, for extracting various types of clinical concepts using 3 public datasets from the 2010 and 2012 i2b2 challenges and the 2018 n2c2 challenge. We examined general transformer models pretrained on general English corpora as well as clinical transformer models pretrained on a clinical corpus, and compared them with a long short-term memory conditional random fields (LSTM-CRFs) model as a baseline. Furthermore, we integrated the 4 clinical transformer-based models into an open-source package. Results and Conclusion: The RoBERTa-MIMIC model achieved state-of-the-art performance on the 3 public clinical concept extraction datasets, with F1-scores of 0.8994, 0.8053, and 0.8907, respectively. Compared to the baseline LSTM-CRFs model, RoBERTa-MIMIC improved the F1-score by approximately 4% and 6% on the 2010 and 2012 i2b2 datasets. This study demonstrated the efficiency of transformer-based models for clinical concept extraction, and our methods and systems can be applied to other clinical tasks. The clinical transformer package with 4 pretrained clinical models is publicly available at https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER. We believe this package will improve current practice on clinical concept extraction and other tasks in the medical domain.
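Transformer NER systems like those above typically emit one BIO label per token, which must then be decoded into concept spans before scoring. A minimal, model-agnostic sketch of that decoding step (not code from the ClinicalTransformerNER package):

```python
def bio_to_spans(labels):
    """Convert a BIO label sequence (one label per token) into
    (entity_type, start, end) spans, with `end` exclusive."""
    spans, start, etype = [], None, None
    for i, lab in enumerate(labels + ["O"]):  # trailing "O" flushes the last span
        starts_new = lab.startswith("B-") or (lab.startswith("I-") and etype != lab[2:])
        if lab == "O" or starts_new:
            if etype is not None:             # close the span in progress
                spans.append((etype, start, i))
                etype = None
            if starts_new:                    # open a new span (a stray I- also starts one)
                start, etype = i, lab[2:]
    return spans
```

Exact-match F1 on the i2b2/n2c2 tasks is then computed by comparing these decoded spans against the gold spans.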
46.
Deep learning applied to whole-slide histopathology images (WSIs) has the potential to enhance precision oncology and alleviate the workload of experts. However, developing these models necessitates large amounts of data with ground truth labels, which can be both time-consuming and expensive to obtain. Pathology reports are typically unstructured or poorly structured texts, and efforts to implement structured reporting templates have been unsuccessful, as these efforts lead to perceived extra workload. In this study, we hypothesised that large language models (LLMs), such as the generative pre-trained transformer 4 (GPT-4), can extract structured data from unstructured plain language reports using a zero-shot approach without requiring any re-training. We tested this hypothesis by utilising GPT-4 to extract information from histopathological reports, focusing on two extensive sets of pathology reports for colorectal cancer and glioblastoma. We found a high concordance between LLM-generated structured data and human-generated structured data. Consequently, LLMs could potentially be employed routinely to extract ground truth data for machine learning from unstructured pathology reports in the future. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
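A zero-shot extraction setup of this kind has two parts: a prompt that pins the model to a fixed schema, and a parser that validates the reply. A minimal sketch with hypothetical field names and the LLM call left out (the study's actual prompts and schema are not reproduced here):

```python
import json

SCHEMA_FIELDS = ["diagnosis", "tumor_site", "grade"]  # illustrative fields, not the paper's

def build_prompt(report_text):
    """Zero-shot prompt asking the model to return only JSON
    matching a fixed schema, with null for unstated fields."""
    return (
        "Extract the following fields from the pathology report and "
        f"return ONLY a JSON object with keys {SCHEMA_FIELDS}. "
        "Use null for any field not stated in the report.\n\n"
        f"Report:\n{report_text}"
    )

def parse_structured(llm_output):
    """Validate the model reply against the schema, filling missing
    keys with None so downstream rows have a fixed set of columns."""
    data = json.loads(llm_output)
    return {k: data.get(k) for k in SCHEMA_FIELDS}
```

Fixing the schema in both the prompt and the parser is what makes the free-text reports line up into rows for downstream machine learning.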
47.
This paper examines the problem of non-standardized naming of Chinese livestock and poultry breeds and puts forward new recommendations.
48.
Objective: For ischemic stroke, a disease with high incidence and poor prognosis, to mine text from patients' discharge summaries using natural language processing and, with the Python programming language, convert the unstructured text into a structured database suitable for subsequent statistical analysis. Methods: Using discharge summaries of ischemic stroke patients, we built a named entity recognition model combining the knowledge-enhanced semantic representation model ERNIE, a neural network, and a conditional random field (CRF) to recognize five types of medical entities: diseases, drugs, operations, imaging examinations, and symptoms; the extracted entities formed a semi-structured database. To further derive structured data from the semi-structured database, we built an ERNIE-based Siamese text-similarity matching model, evaluated by accuracy, and used the best model as a covariate extractor. Results: The overall F1 of the named entity recognition model was 90.27% (diseases 88.41%, drugs 91.03%, imaging examinations 87.71%, operations 87.07%, symptoms 96.59%). The overall accuracy of the text-similarity matching model was 99.11%. Conclusion: Natural language processing enabled a complete workflow from fully unstructured data to semi-structured and then structured data, greatly improving the efficiency of database construction compared with manually reading records and extracting their contents by hand.
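The final matching step maps free-text entity mentions onto a fixed list of covariates. As a stand-in for the ERNIE-based Siamese similarity model, a sketch using plain string similarity shows the shape of that step (the canonical names below are illustrative, not from the paper):

```python
from difflib import SequenceMatcher

CANONICAL = ["高血压", "糖尿病", "心房颤动"]  # hypothetical covariate names

def best_match(mention, candidates=CANONICAL, threshold=0.5):
    """Map an extracted entity mention to the closest canonical
    covariate name, or None if nothing is similar enough.
    SequenceMatcher stands in for the learned similarity model."""
    scored = [(SequenceMatcher(None, mention, c).ratio(), c) for c in candidates]
    score, name = max(scored)
    return name if score >= threshold else None
```

In the paper this matcher is learned (a Siamese ERNIE encoder scored by accuracy), which handles paraphrases that character overlap alone would miss.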
49.
Objective: To address the risk of patient privacy disclosure when medical data are published or shared, and the low efficiency of manual de-identification, this paper proposes an algorithm combining rules and machine learning to effectively remove patients' private information from electronic medical records. Methods: Following the US Health Insurance Portability and Accountability Act (HIPAA) and the expression conventions of Chinese electronic medical records, private data were divided into three categories: numbers, dates, and named entities. Regular expressions were used to recognize numeric and date data, and a hidden Markov model (HMM) was introduced to recognize named entities. Discharge summaries from Shanghai Sixth People's Hospital were used as test data, and the recall and precision of private-data recognition were measured with the hold-out method. Results: The model achieved an overall recall above 90%; recall for numeric and date data exceeded 96%, and recognition of Chinese person names outperformed single-annotator identification. Conclusion: The combined rule and machine learning model effectively identifies patients' private data and facilitates the sharing of medical data.
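The rule-based half of such a pipeline is a pair of regular-expression passes. A minimal sketch with illustrative patterns for Chinese EMR text; the paper's actual rule set, built around the HIPAA identifier categories, is broader:

```python
import re

# Dates like 2020年3月5日 or 2020-3-5; long digit runs such as IDs and phone numbers.
DATE_RE = re.compile(r"\d{4}年\d{1,2}月\d{1,2}日|\d{4}-\d{1,2}-\d{1,2}")
NUMBER_RE = re.compile(r"\d{6,}")

def deidentify(text):
    """Rule-based pass: mask dates first, then residual long numbers.
    Named entities such as person names would be handled separately
    by the HMM component described above."""
    text = DATE_RE.sub("[DATE]", text)
    text = NUMBER_RE.sub("[ID]", text)
    return text
```

Masking dates before numbers keeps the number rule from consuming date fragments and losing the more specific `[DATE]` tag.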
50.
Objective: Clinical notes contain an abundance of important but not readily accessible information about patients. Systems that automatically extract this information rely on large amounts of training data, for which limited resources exist. Furthermore, such systems are developed disjointly, meaning that no information can be shared among task-specific systems. This bottleneck unnecessarily complicates practical application, reduces the performance capabilities of each individual solution, and incurs the engineering debt of managing multiple information extraction systems. Materials and Methods: We address these challenges by developing Multitask-Clinical BERT: a single deep learning model that simultaneously performs 8 clinical tasks spanning entity extraction, personal health information identification, language entailment, and similarity, by sharing representations among tasks. Results: We compare the performance of our multitasking information extraction system to state-of-the-art BERT sequential fine-tuning baselines. We observe a slight but consistent performance degradation in MT-Clinical BERT relative to sequential fine-tuning. Discussion: These results suggest that learning a general clinical text representation capable of supporting multiple tasks comes at the cost of losing the ability to exploit dataset- or note-specific properties, compared to a single task-specific model. Conclusions: Our single system performs competitively with state-of-the-art task-specific systems while also benefiting from large computational savings at inference.