17 similar documents found
1.
Informatics for integrating biology and the bedside (i2b2) seeks to provide the instrumentation for using the informational by-products of health care, and the biological materials accumulated through the delivery of health care, to conduct discovery research and to study the healthcare system in vivo. This complements existing efforts such as prospective cohort studies or trials outside the delivery of routine health care. i2b2 has been used to generate genome-wide studies at less than one tenth the cost and one tenth the time of conventionally performed studies, as well as to identify important risks from commonly used medications. i2b2 has been adopted by over 60 academic health centers internationally.
2.
Toni Farley Jeff Kiefer Preston Lee Daniel Von Hoff Jeffrey M Trent Charles Colbourn Spyro Mousses 《J Am Med Inform Assoc》2013,20(1):128-133
Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
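As a concrete illustration of the hypergraph idea in entry 2, a minimal sketch in Python follows: a single hyperedge can link an arbitrary set of nodes, so one relationship can span a patient, a molecular finding, and a treatment at once. All node and edge names here are hypothetical, not drawn from the paper.

```python
from collections import defaultdict

class Hypergraph:
    """Minimal hypergraph: each hyperedge links an arbitrary set of nodes."""
    def __init__(self):
        self.edges = {}                    # edge id -> set of nodes
        self.incidence = defaultdict(set)  # node -> set of edge ids

    def add_edge(self, edge_id, nodes):
        self.edges[edge_id] = set(nodes)
        for n in nodes:
            self.incidence[n].add(edge_id)

    def neighbors(self, node):
        """All nodes co-occurring with `node` in any hyperedge."""
        out = set()
        for e in self.incidence[node]:
            out |= self.edges[e]
        out.discard(node)
        return out

# Illustrative multi-way clinical relationships (hypothetical data):
hg = Hypergraph()
hg.add_edge("obs1", {"patient:42", "variant:BRAF_V600E", "phenotype:melanoma"})
hg.add_edge("rx1", {"patient:42", "drug:vemurafenib", "variant:BRAF_V600E"})
print(sorted(hg.neighbors("variant:BRAF_V600E")))
# → ['drug:vemurafenib', 'patient:42', 'phenotype:melanoma']
```

A binary-edge graph would need a reified intermediate node to express the same three-way observation; the hyperedge captures it directly, which is the representational advantage the authors argue for.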
3.
Objectives
(a) To determine the extent and range of errors and issues in the Systematised Nomenclature of Medicine – Clinical Terms (SNOMED CT) hierarchies as they affect two practical projects. (b) To determine the origin of issues raised and propose methods to address them.
Methods
The hierarchies for concepts in the Core Problem List Subset published by the Unified Medical Language System were examined for their appropriateness in two applications. Anomalies were traced to their source to determine whether they were simple local errors, systematic inferences propagated by SNOMED's classification process, or the result of problems with SNOMED's schemas. Conclusions were confirmed by showing that altering the root cause and reclassifying had the intended effects, and not others.
Main results
Major problems were encountered, involving concepts central to medicine including myocardial infarction, diabetes, and hypertension. Most of the issues raised were systematic. Some exposed fundamental errors in SNOMED's schemas, particularly with regard to anatomy. In many cases, the root cause could only be identified and corrected with the aid of a classifier.
Limitations
This is a preliminary ‘experiment of opportunity.’ The results are not exhaustive; nor is consensus on all points definitive.
Conclusions
The SNOMED CT hierarchies cannot be relied upon in their present state in our applications. However, systematic quality assurance and correction are possible and practical, but require sound techniques analogous to those of software engineering, combining lexical and semantic methods. Until this is done, anyone using SNOMED codes should exercise caution. Errors in the hierarchies, or attempts to compensate for them, are likely to compromise interoperability and meaningful use.
4.
5.
Objective
To evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.
Design and measurements
The authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.
Results
The Nuance Gen and Nuance Med systems yielded WERs of 68.1% and 67.4%, respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the SRI system's performance improved by a relative 36%, to a final WER of 26.7%.
Conclusion
Without modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems.
6.
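The WER figures quoted in entry 5 follow the NIST definition: word-level edit distance (substitutions, deletions, insertions) divided by the number of reference words. A minimal sketch, using a made-up clinical question as input:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("the") plus two errors on "metformin" → 3 errors / 6 words:
print(wer("what is the dose of metformin", "what is dose of met forming"))  # → 0.5
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why unadapted systems on out-of-domain speech can score as badly as the 68.1% reported here.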
Philip R O Payne Rebecca D Jackson Thomas M Best Tara B Borlawsky Albert M Lai Stephen James Metin N Gurcan 《J Am Med Inform Assoc》2012,19(6):1110-1114
The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms.
7.
Richard N Shiffman George Michel Richard M Rosenfeld Caryn Davidson 《J Am Med Inform Assoc》2012,19(1):94-101
Objective
To demonstrate the feasibility of capturing the knowledge required to create guideline recommendations in a systematic, structured, manner using a software assistant. Practice guidelines constitute an important modality that can reduce the delivery of inappropriate care and support the introduction of new knowledge into clinical practice. However, many guideline recommendations are vague and underspecified, lack any linkage to supporting evidence or documentation of how they were developed, and prove to be difficult to transform into systems that influence the behavior of care providers.
Methods
The BRIDGE-Wiz application (Building Recommendations In a Developer's Guideline Editor) uses a wizard approach to address the questions: (1) under what circumstances? (2) who? (3) ought (with what level of obligation?) (4) to do what? (5) to whom? (6) how and why? Controlled natural language was applied to create and populate a template for recommendation statements.
Results
The application was used by five national panels to develop guidelines. In general, panelists agreed that the software helped to formalize a process for authoring guideline recommendations and deemed the application usable and useful.
Discussion
Use of BRIDGE-Wiz promotes clarity of recommendations by limiting verb choices, building active voice recommendations, incorporating decidability and executability checks, and limiting Boolean connectors. It enhances transparency by incorporating systematic appraisal of evidence quality, benefits, and harms. BRIDGE-Wiz promotes implementability by providing a pseudocode rule, suggesting deontic modals, and limiting the use of ‘consider’.
Conclusion
Users found that BRIDGE-Wiz facilitates the development of clear, transparent, and implementable guideline recommendations.
8.
9.
Aaron E Carroll Paul G Biondich Vibha Anand Tamara M Dugan Meena E Sheley Shawn Z Xu Stephen M Downs 《J Am Med Inform Assoc》2011,18(4):485-490
Objective
The Child Health Improvement through Computer Automation (CHICA) system is a decision-support and electronic-medical-record system for pediatric health maintenance and disease management. The purpose of this study was to explore CHICA's ability to screen patients for disorders that have validated screening criteria—specifically tuberculosis (TB) and iron-deficiency anemia.
Design
Children between 0 and 11 years were randomized by the CHICA system. In the intervention group, parents were asked about TB and iron-deficiency risk, and physicians received a tailored prompt. In the control group, no screens were performed, and the physician received a generic prompt about these disorders.
Results
1123 participants were randomized to the control group and 1116 participants to the intervention group. Significantly more people reported positive risk factors for iron-deficiency anemia in the intervention group (17.5% vs 3.1%, OR 6.6, 95% CI 4.5 to 9.5). In general, far fewer parents reported risk factors for TB than for iron-deficiency anemia. Again, there were significantly higher detection rates of positive risk factors in the intervention group (1.8% vs 0.8%, OR 2.3, 95% CI 1.0 to 5.0).
Limitations
It is possible that screening produced more positive screens without improving outcomes. However, the guidelines are based on studies that found the questions the authors used to be sensitive and specific, and there is no reason to believe that parents misunderstood them.
Conclusions
Many screening tests are risk-based, not universal, leaving physicians to determine who should have a further workup. This can be a time-consuming process. The authors demonstrated that the CHICA system performs well in assessing risk automatically for TB and iron-deficiency anemia.
10.
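The odds ratios reported in entry 9 can be reproduced from the quoted percentages with the standard two-proportion computation; this sketch uses only the published rates, not the authors' raw counts:

```python
def odds_ratio(p1, p2):
    """Odds ratio between two proportions: (p1/(1-p1)) / (p2/(1-p2))."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Reported positive-screen rates, intervention vs control:
print(round(odds_ratio(0.175, 0.031), 1))  # iron-deficiency anemia → 6.6
print(round(odds_ratio(0.018, 0.008), 1))  # tuberculosis → 2.3
```

Both values match the ORs quoted in the abstract, a quick consistency check on the reported results.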
As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness, and they depend on the quality of network service as much as on bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible, or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use.
11.
Charlene Weir Nanci McLeskey Cherie Brunker Denise Brooks Mark A Supiano 《J Am Med Inform Assoc》2011,18(6):827-834
Objective
The evidence base for information technology (IT) has been criticized, especially with the current emphasis on translational science. The purpose of this paper is to present an analysis of the role of IT in the implementation of a geriatric education and quality improvement (QI) intervention.
Design
A mixed-method three-group comparative design was used. The PRECEDE/PROCEED implementation model was used to qualitatively identify key factors in the implementation process. These results were further explored in a quantitative analysis.
Method
Thirty-three primary care clinics at three institutions (Intermountain Healthcare, VA Salt Lake City Health Care System, and University of Utah) participated. The program consisted of an onsite didactic session, QI planning, and 6 months of intense implementation support.
Results
Completion rate was 82% with an average improvement rate of 21%. Important predisposing factors for success included an established electronic record and a culture of quality. The reinforcing and enabling factors included free continuing medical education credits, feedback, IT access, and flexible support. The relationship between IT and QI emerged as a central factor. Quantitative analysis found significant differences between institutions for pre–post changes even after the number and category of implementation strategies had been controlled for.
Conclusions
The analysis illustrates the complex dependence between IT interventions, institutional characteristics, and implementation practices. Access to IT tools and data by individual clinicians may be a key factor for the success of QI projects. Institutions vary widely in the degree of access to IT tools and support. This article suggests that more attention be paid to the relationship between QI and IT departments.
12.
Stuart J Nelson Kelly Zeng John Kilbourne Tammy Powell Robin Moore 《J Am Med Inform Assoc》2011,18(4):441-448
Objective
In the 6 years since the National Library of Medicine began monthly releases of RxNorm, RxNorm has become a central resource for communicating about clinical drugs and supporting interoperation between drug vocabularies.
Materials and methods
Built on the idea of a normalized name for a medication at a given level of abstraction, RxNorm provides a set of names and relationships based on 11 different external source vocabularies. The standard model enables decision support to take place for a variety of uses at the appropriate level of abstraction. With the incorporation of National Drug File Reference Terminology (NDF-RT) from the Veterans Administration, even more sophisticated decision support has become possible.
Discussion
While related products such as RxTerms, RxNav, MyMedicationList, and MyRxPad have been recognized as helpful for various uses, tasks such as identifying exactly what is and is not on the market remain a challenge.
13.
Wright A Pang J Feblowitz JC Maloney FL Wilcox AR McLoughlin KS Ramelson H Schneider L Bates DW 《J Am Med Inform Assoc》2012,19(4):555-561
Background
Accurate clinical problem lists are critical for patient care, clinical decision support, population reporting, quality improvement, and research. However, problem lists are often incomplete or out of date.
Objective
To determine whether a clinical alerting system, which uses inference rules to notify providers of undocumented problems, improves problem list documentation.
Study Design and Methods
Inference rules for 17 conditions were constructed, and an electronic health record-based intervention to improve problem documentation was evaluated. A cluster randomized trial was conducted at 11 participating clinics affiliated with a large academic medical center, totaling 28 primary care clinical areas, with 14 receiving the intervention and 14 serving as controls. The intervention was a clinical alert directed to the provider that suggested adding a problem to the electronic problem list based on inference rules. The primary outcome measure was acceptance of the alert. The number of study problems added in each arm was also assessed as a pre-specified secondary outcome. Data were collected during 6-month pre-intervention (11/2009–5/2010) and intervention (5/2010–11/2010) periods.
Results
17 043 alerts were presented, of which 41.1% were accepted. In the intervention arm, providers documented significantly more study problems (adjusted OR=3.4, p<0.001), with an absolute difference of 6277 additional problems. In the intervention group, 70.4% of all study problems were added via the problem list alerts. Significant increases in problem notation were observed for 13 of 17 conditions.
Conclusion
Problem inference alerts significantly increase notation of important patient problems in primary care, which in turn has the potential to facilitate quality improvement.
Trial Registration
ClinicalTrials.gov: NCT01105923
14.
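The inference-rule mechanism described in entry 13 can be sketched as data-driven rules mapping evidence in a record to a suggested problem-list entry; the two rules and thresholds below are illustrative, not the study's actual 17 rules:

```python
# Each rule pairs a suggested problem with a predicate over the record.
# Conditions, drug names, and thresholds are illustrative examples only.
RULES = [
    {"problem": "Diabetes mellitus type 2",
     "when": lambda rec: "metformin" in rec["meds"]
                         or rec["labs"].get("hba1c", 0) >= 6.5},
    {"problem": "Hypothyroidism",
     "when": lambda rec: "levothyroxine" in rec["meds"]},
]

def suggest_problems(record):
    """Return problems inferred from the record but absent from its problem list."""
    return [r["problem"] for r in RULES
            if r["when"](record) and r["problem"] not in record["problems"]]

# A record with a diabetes drug and lab but no documented diabetes problem:
record = {"meds": {"metformin"}, "labs": {"hba1c": 7.2}, "problems": set()}
print(suggest_problems(record))  # → ['Diabetes mellitus type 2']
```

In the study, a suggestion like this was surfaced to the provider as an alert, and the 41.1% acceptance rate measures how often providers agreed with the inference.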
Objective
Information extraction and classification of clinical data are current challenges in natural language processing. This paper presents a cascaded method to deal with three different extractions and classifications in clinical data: concept annotation, assertion classification and relation classification.
Materials and Methods
A pipeline system was developed for clinical natural language processing that includes a proofreading process, with gold-standard reflexive validation and correction. The information extraction system is a combination of a machine learning approach and a rule-based approach. The outputs of this system are used for evaluation in all three tiers of the fourth i2b2/VA shared-task and workshop challenge.
Results
Overall concept classification attained an F-score of 83.3% against a baseline of 77.0%; the optimal F-score for assertions about the concepts was 92.4%; and the relation classifier attained 72.6% for relationships between clinical concepts against a baseline of 71.0%. Micro-average results on the challenge test set were 81.79%, 91.90%, and 70.18%, respectively.
Discussion
The multi-task challenge requires distributing time and workload across the individual tasks, so an overall performance evaluation spanning all three tasks is more informative than treating each task assessment as independent. The simplicity of the model developed in this work should be contrasted with the very large feature spaces of other participants in the challenge, who achieved only slightly better performance. When comparing results, there is a need to penalize model complexity, as defined in message minimalisation theory.
Conclusion
A complete pipeline system is presented for constructing language processing models that can handle multiple practical detection tasks over the language structures of clinical records.
15.
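The micro-averaged figures in entry 14 pool true positives, false positives, and false negatives across all classes before computing precision and recall, rather than averaging per-class scores. A sketch with toy counts (illustrative numbers, not the challenge data):

```python
def micro_f1(per_class_counts):
    """Micro-averaged F1 from per-class (tp, fp, fn) counts."""
    tp = sum(c[0] for c in per_class_counts)
    fp = sum(c[1] for c in per_class_counts)
    fn = sum(c[2] for c in per_class_counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts for three concept classes:
print(round(micro_f1([(90, 10, 20), (40, 5, 10), (70, 15, 10)]), 3))  # → 0.851
```

Because the counts are pooled, frequent classes dominate the micro-average, which is why shared tasks with skewed class distributions (like clinical concept extraction) report it alongside per-class scores.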
Objective
Uncovering the dominant molecular deregulation among the multitude of pathways implicated in aggressive prostate cancer is essential to intelligently developing targeted therapies. Paradoxically, published prostate cancer gene expression signatures of poor prognosis share little overlap and thus do not reveal shared mechanisms. The authors hypothesize that, by analyzing gene signatures with quantitative models of protein–protein interactions, key pathways will be elucidated and shown to be shared.
Design
The authors statistically prioritized common interactors between established cancer genes and genes from each prostate cancer signature of poor prognosis independently via a previously validated single protein analysis of network (SPAN) methodology. Additionally, they computationally identified pathways among the aggregated interactors across signatures and validated them using a similarity metric and patient survival.
Measurement
Using an information-theoretic metric, the authors assessed the mechanistic similarity of the interactor signature. Its prognostic ability was assessed in an independent cohort of 198 patients with high-Gleason prostate cancer using Kaplan–Meier analysis.
Results
Of the 13 prostate cancer signatures that were evaluated, eight interacted significantly with established cancer genes (false discovery rate <5%) and generated a 42-gene interactor signature that showed the highest mechanistic similarity (p<0.0001). Via parameter-free unsupervised classification, the interactor signature dichotomized the independent prostate cancer cohort with a significant survival difference (p=0.009). Interpretation of the network not only recapitulated phosphatidylinositol-3 kinase/NF-κB signaling, but also highlighted less well established relevant pathways such as the Janus kinase 2 cascade.
Conclusions
SPAN methodology provides a robust means of abstracting disparate prostate cancer gene expression signatures into clinically useful, prioritized pathways as well as informative mechanistic pathways.
16.
A preliminary study of diagnostic rules for peripheral lung cancer based on data mining techniques
Objective: To investigate clinical and imaging diagnostic rules for peripheral lung cancer using data mining techniques, to explore new approaches to diagnosing peripheral lung cancer, and to provide early technical and knowledge support for computer-aided detection (CAD). Methods: 58 cases of peripheral lung cancer confirmed by clinical pathology were collected. After the clinical and CT-finding attributes were identified and standardized, the data were imported into a database. The data were analyzed comparatively using Association Rules (AR) from the knowledge discovery process, and the Rough Set (RS) reduction algorithm and Genetic Algorithm (GA) of the general-purpose data analysis tool ROSETTA. Results: The genetic classification algorithm of ROSETTA generated approximately 5000 diagnosis rules; the RS reduction algorithm (Johnson's algorithm) generated 51; and the AR algorithm generated 123. All three data mining methods identified gender, age, cough, location, lobulation sign, shape, and ground-glass density as the main attributes for diagnosing peripheral lung cancer. Conclusion: The diagnosis rules for peripheral lung cancer produced by the three data mining techniques are consistent with clinical diagnostic rules, and they can also be used to build the knowledge base of an expert system. This study demonstrates the potential value of data mining technology in clinical imaging diagnosis and differential diagnosis.
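The Association Rules mining in entry 16 rests on two standard quantities, support and confidence, computed over the case records. A sketch over toy CT-finding records (illustrative, not the study's 58 cases):

```python
def support(cases, items):
    """Fraction of cases containing every item in `items`."""
    items = set(items)
    return sum(items <= case for case in cases) / len(cases)

def confidence(cases, antecedent, consequent):
    """Estimated P(consequent | antecedent) over the cases."""
    return support(cases, set(antecedent) | set(consequent)) / support(cases, antecedent)

# Toy CT-finding records, one set of attributes per case:
cases = [
    {"lobulation", "spiculation", "malignant"},
    {"lobulation", "malignant"},
    {"ground_glass", "benign"},
    {"lobulation", "spiculation", "malignant"},
]
print(support(cases, {"lobulation"}))                        # → 0.75
print(confidence(cases, {"lobulation"}, {"malignant"}))      # → 1.0
```

An AR miner such as Apriori enumerates all rules whose support and confidence exceed chosen thresholds; the 123 AR rules reported in the study would be the survivors of exactly this kind of filtering.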
17.
Love JS Wright A Simon SR Jenter CA Soran CS Volk LA Bates DW Poon EG 《J Am Med Inform Assoc》2012,19(4):610-614