Similar Documents
20 similar documents found.
1.
As more and more healthcare institutions adopt Electronic Health Record (EHR) systems to manage patient information, and given the demand for patient data in clinical research, research institutions have also begun to use EHR systems as a data source for clinical studies. The EHRCR (Electronic Health Records/Clinical Research) project was launched in December 2006 by the Health Level Seven Technical Committee (HL7 TC) and the European Institute for Health Records (EuroRec) to study the functionality that future EHR systems should provide in order to support clinical research, along with the related systems, networks, and business processes. This article introduces the project's latest findings as a reference for the development of the electronic health record field in China.

2.
Objective: Due to a complex set of processes involved with the recording of health information in the Electronic Health Records (EHRs), the truthfulness of EHR diagnosis records is questionable. We present a computational approach to estimate the probability that a single diagnosis record in the EHR reflects the true disease. Materials and Methods: Using EHR data on 18 diseases from the Mass General Brigham (MGB) Biobank, we develop generative classifiers on a small set of disease-agnostic features from EHRs that aim to represent Patients, pRoviders, and their Interactions within the healthcare SysteM (PRISM features). Results: We demonstrate that PRISM features and the generative PRISM classifiers are potent for estimating disease probabilities and exhibit generalizable and transferable distributional characteristics across diseases and patient populations. The joint probabilities we learn about diseases through the PRISM features via PRISM generative models are transferable and generalizable to multiple diseases. Discussion: The Generative Transfer Learning (GTL) approach with PRISM classifiers enables the scalable validation of computable phenotypes in EHRs without the need for domain-specific knowledge about specific disease processes. Conclusion: Probabilities computed from the generative PRISM classifier can enhance and accelerate applied Machine Learning research and discoveries with EHR data.
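The generative-classifier idea in this abstract can be illustrated with a toy Gaussian naive-Bayes model over two hypothetical disease-agnostic features (say, diagnosis-code count and encounter count per patient). The feature choice, data, and model form below are illustrative assumptions, not the paper's actual PRISM implementation:

```python
import math

# Toy Gaussian naive-Bayes-style generative classifier over two
# hypothetical "PRISM-like" features (diagnosis-code count, encounter
# count). Features and data are illustrative, not from the study.

def fit_gaussian(values):
    """Return (mean, variance) of one feature, with a small variance floor."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, max(var, 1e-6)

def log_gaussian(x, mean, var):
    """Log-density of a univariate Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def train(cases, controls):
    """Learn per-class feature distributions and log class priors."""
    total = len(cases) + len(controls)
    model = {}
    for label, rows in (("case", cases), ("control", controls)):
        feats = list(zip(*rows))
        model[label] = {
            "prior": math.log(len(rows) / total),
            "params": [fit_gaussian(list(f)) for f in feats],
        }
    return model

def disease_probability(model, x):
    """P(case | x) from the generative model via Bayes' rule."""
    scores = {}
    for label, m in model.items():
        scores[label] = m["prior"] + sum(
            log_gaussian(xi, mu, var) for xi, (mu, var) in zip(x, m["params"])
        )
    mx = max(scores.values())  # normalize in log space for stability
    exp = {k: math.exp(v - mx) for k, v in scores.items()}
    return exp["case"] / sum(exp.values())

# Toy data: cases tend to have more codes/encounters than controls.
cases = [(40, 12), (35, 10), (50, 15), (42, 11)]
controls = [(5, 2), (8, 3), (6, 2), (9, 4)]
model = train(cases, controls)
p = disease_probability(model, (45, 13))
```

Because the model is generative, the same fitted per-class distributions can be inspected and reused across settings, which is the property the abstract highlights as transferability.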

3.
Objective: Integrated, real-time data are crucial to evaluate translational efforts to accelerate innovation into care. Too often, however, needed data are fragmented in disparate systems. The South Carolina Clinical & Translational Research Institute at the Medical University of South Carolina (MUSC) developed and implemented a universal study identifier, the Research Master Identifier (RMID), for tracking research studies across disparate systems, and a data warehouse-inspired model, the Research Integrated Network of Systems (RINS), for integrating data from those systems. Materials and Methods: In 2017, MUSC began requiring the use of RMIDs in informatics systems that support human subject studies. We developed a web-based tool to create RMIDs and application programming interfaces to synchronize research records and visualize linkages to protocols across systems. Selected data from these disparate systems were extracted and merged nightly into an enterprise data mart, and performance dashboards were created to monitor key translational processes. Results: Within 4 years, 5513 RMIDs were created. Of these, 726 (13%) bridged systems needed to evaluate research study performance, and 982 (18%) linked to the electronic health record, enabling patient-level reporting. Discussion: Barriers posed by data fragmentation to assessment of program impact have largely been eliminated at MUSC through the requirement for an RMID, its distribution via RINS to disparate systems, and the mapping of system-level data to a single integrated data mart. Conclusion: By applying data warehousing principles to federate data at the “study” level, the RINS project reduced data fragmentation and promoted research systems integration.

4.
Objective: Clinical decision support (CDS) is essential for delivery of high-quality, cost-effective, and safe healthcare. The authors sought to evaluate the CDS capabilities across electronic health record (EHR) systems. Methods: We evaluated the CDS implementation capabilities of 8 Office of the National Coordinator for Health Information Technology Authorized Certification Body (ONC-ACB)-certified EHRs. Within each EHR, the authors attempted to implement 3 user-defined rules that utilized the various data and logic elements expected of typical EHRs and that represented clinically important evidence-based care. The rules were: 1) if a patient has amiodarone on his or her active medication list and does not have a thyroid-stimulating hormone (TSH) result recorded in the last 12 months, suggest ordering a TSH; 2) if a patient has a hemoglobin A1c result >7% and does not have diabetes on his or her problem list, suggest adding diabetes to the problem list; and 3) if a patient has coronary artery disease on his or her problem list and does not have aspirin on the active medication list, suggest ordering aspirin. Results: Most evaluated EHRs lacked some CDS capabilities; 5 EHRs were able to implement all 3 rules, and the remaining 3 EHRs were unable to implement any of the rules. One of these did not allow users to customize CDS rules at all. The most frequently found shortcomings included the inability to use laboratory test results in rules, limit rules by time, use advanced Boolean logic, perform actions from the alert interface, and adequately test rules. Conclusion: Significant improvements in EHR certification and implementation procedures are necessary.
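To make the rule logic concrete, the first rule can be expressed as a short function. The record fields and their shapes below are assumptions for the sketch only; each certified EHR exposes such data through its own vendor-specific interfaces:

```python
from datetime import date, timedelta

def amiodarone_tsh_alert(active_meds, tsh_results, today):
    """Rule 1 from the abstract: suggest a TSH order if amiodarone is on
    the active medication list and no TSH result is recorded in the last
    12 months. tsh_results is a list of (result_date, value) pairs."""
    if "amiodarone" not in {m.lower() for m in active_meds}:
        return None
    cutoff = today - timedelta(days=365)
    if any(result_date >= cutoff for result_date, _value in tsh_results):
        return None
    return "Suggest ordering a thyroid-stimulating hormone (TSH) test"

# Hypothetical patient: on amiodarone, last TSH more than a year ago.
alert = amiodarone_tsh_alert(
    active_meds=["Amiodarone", "Metoprolol"],
    tsh_results=[(date(2020, 1, 15), 2.1)],
    today=date(2023, 6, 1),
)
```

The rule combines a medication-list lookup with a time-limited laboratory query, which are exactly the two capabilities (using laboratory results in rules and limiting rules by time) that the evaluated EHRs most often lacked.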

5.
Objective: Respiratory support status is critical in understanding patient status, but electronic health record data are often scattered, incomplete, and contradictory. Further, there has been limited work on standardizing representations for respiratory support. The objective of this work was to (1) propose a practical terminology system for respiratory support methods; (2) develop (meta-)heuristics for constructing respiratory support episodes; and (3) evaluate the utility of respiratory support information for mortality prediction. Materials and Methods: All analyses were performed using electronic health record data for adult patients tested for COVID-19 and admitted through the emergency department at a large, Midwestern healthcare system between March 1, 2020 and April 1, 2021. Logistic regression and XGBoost models were trained with and without respiratory support information, and performance metrics were compared. The importance of respiratory-support-based features was explored using absolute coefficient values for logistic regression and SHapley Additive exPlanations values for the XGBoost model. Results: The proposed terminology system for respiratory support methods is as follows: Low-Flow Oxygen Therapy (LFOT), High-Flow Oxygen Therapy (HFOT), Non-Invasive Mechanical Ventilation (NIMV), Invasive Mechanical Ventilation (IMV), and ExtraCorporeal Membrane Oxygenation (ECMO). The addition of respiratory support information significantly improved mortality prediction (logistic regression area under the receiver operating characteristic curve, median [IQR], from 0.855 [0.852–0.855] to 0.881 [0.876–0.884]; area under the precision-recall curve from 0.262 [0.245–0.268] to 0.319 [0.313–0.325]; both P < 0.01). The proposed generalizable, interpretable, and episodic representation had commensurate performance compared to alternate representations despite its loss of granularity. Respiratory support features were among the most important in both models. Conclusion: Respiratory support information is critical in understanding patient status and can facilitate downstream analyses.
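The five-level terminology lends itself to an ordinal encoding. The sketch below shows one plausible way, an assumption here rather than a detail from the abstract, to turn episodic respiratory support data into a single model feature:

```python
# Ordinal encoding of the proposed respiratory support terminology.
# The encoding and the "most intensive level" feature are illustrative
# assumptions, not the paper's actual feature engineering.
SUPPORT_LEVELS = ["NONE", "LFOT", "HFOT", "NIMV", "IMV", "ECMO"]
LEVEL_INDEX = {name: i for i, name in enumerate(SUPPORT_LEVELS)}

def max_support_level(episodes):
    """Most intensive support method across a patient's episodes."""
    if not episodes:
        return "NONE"
    return max(episodes, key=lambda e: LEVEL_INDEX[e])

# A hypothetical patient escalated from low-flow to high-flow oxygen.
feature = LEVEL_INDEX[max_support_level(["LFOT", "HFOT", "LFOT"])]
```

An ordinal feature like this preserves the clinically meaningful ordering of escalation while remaining interpretable, consistent with the episodic representation's goal of trading granularity for generalizability.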

6.
Standardization of Electronic Health Records for Casualties in Public Emergencies
With the aim of providing standardized support for the treatment and rehabilitation of casualties in public emergencies, and building on an analysis of electronic health record standardization efforts at home and abroad, this study organizes and categorizes the health data items of emergency casualties, constructs an architecture for their electronic health records, and analyzes and explains each of its components.

7.
The New York City Clinical Data Research Network (NYC-CDRN), funded by the Patient-Centered Outcomes Research Institute (PCORI), brings together 22 organizations including seven independent health systems to enable patient-centered clinical research, support a national network, and facilitate learning healthcare systems. The NYC-CDRN includes a robust, collaborative governance and organizational infrastructure, which takes advantage of its participants’ experience, expertise, and history of collaboration. The technical design will employ an information model to document and manage the collection and transformation of clinical data, local institutional staging areas to transform and validate data, a centralized data processing facility to aggregate and share data, and use of common standards and tools. We strive to ensure that our project is patient-centered; nurtures collaboration among all stakeholders; develops scalable solutions facilitating growth and connections; chooses simple, elegant solutions wherever possible; and explores ways to streamline the administrative and regulatory approval process across sites.

8.
Objective: The capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale, marrying technology, process, and policy, is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges. Materials and methods: The National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11–12 April 2013. Results: The workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges. Conclusions: The intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a ‘new science of learning systems’, create an important agenda for informatics and related disciplines.

9.
Objective: Large clinical databases are increasingly used for research and quality improvement. We describe an approach to data quality assessment from the General Medicine Inpatient Initiative (GEMINI), which collects and standardizes administrative and clinical data from hospitals. Methods: The GEMINI database contained 245 559 patient admissions at 7 hospitals in Ontario, Canada from 2010 to 2017. We performed 7 computational data quality checks and iteratively re-extracted data from hospitals to correct problems. Thereafter, GEMINI data were compared to data that were manually abstracted from the hospital’s electronic medical record for 23 419 selected data points on a sample of 7488 patients. Results: Computational checks flagged 103 potential data quality issues, which were either corrected or documented to inform future analysis. For example, we identified the inclusion of canceled radiology tests, a time shift of transfusion data, and the mistaken processing of the chemical symbol for sodium (“Na”) as a missing value. Manual validation identified 1 important data quality issue that was not detected by computational checks: transfusion dates and times at 1 site were unreliable. Apart from that single issue, across all data tables, GEMINI data had high overall accuracy (98%–100%), sensitivity (95%–100%), specificity (99%–100%), positive predictive value (93%–100%), and negative predictive value (99%–100%) compared to the gold standard. Discussion and Conclusion: Computational data quality checks with iterative re-extraction facilitated reliable data collection from hospitals but missed 1 critical quality issue. Combining computational and manual approaches may be optimal for assessing the quality of large multisite clinical databases.
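A computational check of the kind described, here targeting the sodium-symbol pitfall, can be sketched in a few lines. The data and check logic are illustrative, not GEMINI's actual implementation:

```python
# Flag raw laboratory values equal to the literal string "Na" (the
# chemical symbol for sodium), which generic parsers may silently
# coerce to a missing value. Sample data are illustrative.
def flag_na_symbol(raw_values):
    """Return indices where a value is the string 'Na'/'NA' rather than a true null."""
    return [i for i, v in enumerate(raw_values)
            if isinstance(v, str) and v.strip().lower() == "na"]

raw_test_names = ["Na", "K", None, "NA ", "Cl"]
flags = flag_na_symbol(raw_test_names)
```

Running disease-agnostic checks like this before analysis is cheap and repeatable, which is why the abstract recommends combining them with targeted manual validation rather than relying on either alone.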

10.
Objective: To review and evaluate available software tools for electronic health record–driven phenotype authoring in order to identify gaps and needs for future development. Materials and Methods: Candidate phenotype authoring tools were identified through (1) a literature search in four publication databases (PubMed, Embase, Web of Science, and Scopus) and (2) a web search. A collection of tools was compiled and reviewed after the searches. A survey was designed and distributed to the developers of the reviewed tools to discover their functionalities and features. Results: Twenty-four different phenotype authoring tools were identified and reviewed. Developers of 16 of these identified tools completed the evaluation survey (67% response rate). The surveyed tools showed commonalities but also varied in their capabilities in algorithm representation, logic functions, data support and software extensibility, search functions, user interface, and data outputs. Discussion: Positive trends identified in the evaluation included: algorithms can be represented in both computable and human-readable formats, and most tools offer a web interface for easy access. However, issues were also identified: many tools lacked advanced logic functions for authoring complex algorithms, the ability to construct queries that leveraged unstructured data was not widely implemented, and many tools had limited support for plug-ins or external analytic software. Conclusions: Existing phenotype authoring tools could enable clinical researchers to work with electronic health record data more efficiently, but gaps still exist in terms of the functionalities of such tools. The present work can serve as a reference point for the future development of similar tools.

11.
Objective: Substance use screening in adolescence is unstandardized and often documented in clinical notes rather than in structured electronic health records (EHRs). The objective of this study was to integrate logic rules with state-of-the-art natural language processing (NLP) and machine learning technologies to detect substance use information from both structured and unstructured EHR data. Materials and Methods: Pediatric patients (10-20 years of age) with any encounter between July 1, 2012, and October 31, 2017, were included (n = 3890 patients; 19 478 encounters). EHR data were extracted at each encounter, manually reviewed for substance use (alcohol, tobacco, marijuana, opiate, any use), and coded as lifetime use, current use, or family use. Logic rules mapped structured EHR indicators to screening results. A knowledge-based NLP system and a deep learning model detected substance use information from unstructured clinical narratives. System performance was evaluated using positive predictive value, sensitivity, negative predictive value, specificity, and area under the receiver-operating characteristic curve (AUC). Results: The dataset included 17 235 structured indicators and 27 141 clinical narratives. Manual review of clinical narratives captured 94.0% of positive screening results, while structured EHR data captured 22.0%. Logic rules detected screening results from structured data with sensitivity of 1.0 and specificity of 0.99. The knowledge-based system detected substance use information from clinical narratives with 0.86, 0.79, and 0.88 for AUC, sensitivity, and specificity, respectively. The deep learning model further improved detection capacity, achieving 0.88, 0.81, and 0.85 for AUC, sensitivity, and specificity, respectively. Finally, integrating predictions from structured and unstructured data achieved high detection capacity across all cases (0.96, 0.85, and 0.87 for AUC, sensitivity, and specificity, respectively). Conclusions: It is feasible to detect substance use screening and results among pediatric patients using logic rules, NLP, and machine learning technologies.
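One simple way to integrate the two sources is to treat a firing structured-data rule as definitive (its sensitivity and specificity were near-perfect) and fall back to the narrative model otherwise. This combination scheme and its threshold are assumptions for illustration, not the study's exact integration method:

```python
# Combine a structured-data screening rule with an NLP model score.
# The rule-overrides-model scheme and the 0.5 threshold are illustrative.
def combined_screen(rule_positive, nlp_probability, threshold=0.5):
    """Screen positive if the structured rule fires; otherwise defer to the model."""
    if rule_positive:
        return True
    return nlp_probability >= threshold

results = [
    combined_screen(True, 0.10),   # rule fires: positive regardless of model
    combined_screen(False, 0.80),  # model above threshold: positive
    combined_screen(False, 0.20),  # neither source positive: negative
]
```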

12.
Regional healthcare platforms collect clinical data from hospitals in specific areas for the purpose of healthcare management, and reusing these data for clinical research is a common requirement. However, this reuse faces challenges such as inconsistent terminology in electronic health records (EHRs) and the complexity of data quality and data formats on regional healthcare platforms. In this paper, we propose a methodology and process for constructing large-scale cohorts, which form the basis for studying causality and comparative effectiveness in epidemiology. We first constructed a Chinese terminology knowledge graph to deal with the diversity of vocabularies on the regional platform. Second, we built special disease case repositories (e.g., a heart failure repository) that use the graph to search for related patients and to normalize the data. Based on the requirements of a clinical study that aimed to explore the effect of statin use on 180-day readmission in patients with heart failure, we built a large-scale retrospective cohort of 29 647 heart failure cases from the heart failure repository. After propensity score matching, a study group (n = 6346) and a control group (n = 6346) with comparable clinical characteristics were obtained. Logistic regression analysis showed that taking statins was negatively correlated with 180-day readmission in heart failure patients. This paper presents the workflow and an application example of big data mining based on regional EHR data.
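The propensity-score-matching step can be illustrated with a toy greedy 1:1 nearest-neighbor match. The caliper, scores, and greedy strategy are illustrative assumptions, since the paper does not specify its matching algorithm:

```python
# Toy 1:1 greedy nearest-neighbor propensity score match, assuming
# propensity scores (probability of statin exposure) have already been
# estimated, e.g., by logistic regression. Caliper and data are illustrative.
def greedy_match(treated, controls, caliper=0.05):
    """Pair each treated score with the closest unused control within the caliper."""
    unused = dict(enumerate(controls))
    pairs = []
    for t_idx, t_score in enumerate(treated):
        if not unused:
            break
        c_idx = min(unused, key=lambda i: abs(unused[i] - t_score))
        if abs(unused[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del unused[c_idx]
    return pairs

treated_scores = [0.31, 0.62, 0.90]
control_scores = [0.30, 0.58, 0.10, 0.64]
pairs = greedy_match(treated_scores, control_scores)
```

Here the third treated patient (score 0.90) finds no control within the caliper and is left unmatched, which mirrors why matched cohorts (n = 6346 per arm) are smaller than the full repository.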

13.
Objective: Real-world data (RWD), defined as routinely collected healthcare data, can be a potential catalyst for addressing challenges faced in clinical trials. We performed a scoping review of database-specific RWD applications within clinical trial contexts, synthesizing prominent uses and themes. Materials and Methods: Querying 3 biomedical literature databases, research articles using electronic health records, administrative claims databases, or clinical registries either within a clinical trial or in tandem with methodology related to clinical trials were included. Articles were required to use at least 1 US RWD source. All abstract screening, full-text screening, and data extraction were performed by 1 reviewer. Two reviewers independently verified all decisions. Results: Of the 2020 articles screened, 89 qualified: 59 articles used electronic health records, 29 used administrative claims, and 26 used registries. Our synthesis was driven by the general life cycle of a clinical trial, culminating in 3 major themes: trial process tasks (51 articles), dissemination strategies (6), and generalizability assessments (34). Despite a diverse set of diseases studied, <10% of trials using RWD for trial process tasks evaluated medications or procedures (5/51). All articles highlighted data-related challenges, such as missing values. Discussion: Database-specific RWD have been occasionally leveraged for various clinical trial tasks. We observed underuse of RWD within conducted medication or procedure trials, though this is subject to the confounder of implicit reporting of RWD use. Conclusion: Enhanced incorporation of RWD should be further explored for medication or procedure trials, including better understanding of how to handle related data quality issues to facilitate RWD use.

14.
Objective: There are signals of clinicians’ expert and knowledge-driven behaviors within clinical information systems (CIS) that can be exploited to support clinical prediction. We describe the development of the Healthcare Process Modeling Framework to Phenotype Clinician Behaviors for Exploiting the Signal Gain of Clinical Expertise (HPM-ExpertSignals). Materials and Methods: We employed an iterative framework development approach that combined data-driven modeling and simulation testing to define and refine a process for phenotyping clinician behaviors. Our framework was developed and evaluated based on the Communicating Narrative Concerns Entered by Registered Nurses (CONCERN) predictive model to detect and leverage signals of clinician expertise for prediction of patient trajectories. Results: Seven themes, identified during development and simulation testing of the CONCERN model, informed framework development. The HPM-ExpertSignals conceptual framework includes a 3-step modeling technique: (1) identify patterns of clinical behaviors from user interaction with the CIS; (2) interpret patterns as proxies of an individual’s decisions, knowledge, and expertise; and (3) use patterns in predictive models for associations with outcomes. The CONCERN model differentiated at-risk patients earlier than other early warning scores, lending confidence to the HPM-ExpertSignals framework. Discussion: The HPM-ExpertSignals framework moves beyond transactional data analytics to model clinical knowledge, decision making, and CIS interactions, which can support predictive modeling with a focus on the rapid and frequent patient surveillance cycle. Conclusions: We propose this framework as an approach to embed clinicians’ knowledge-driven behaviors in predictions and inferences, facilitating capture of healthcare processes that are activated independently of, and sometimes well before, apparent physiological changes.

15.
Objective: This study sought to evaluate whether synthetic data derived from a national coronavirus disease 2019 (COVID-19) dataset could be used for geospatial and temporal epidemic analyses. Materials and Methods: Using an original dataset (n = 1 854 968 severe acute respiratory syndrome coronavirus 2 tests) and its synthetic derivative, we compared key indicators of COVID-19 community spread through analysis of aggregate and zip code-level epidemic curves, patient characteristics and outcomes, distribution of tests by zip code, and indicator counts stratified by month and zip code. Similarity between the data was statistically and qualitatively evaluated. Results: In general, synthetic data closely matched original data for epidemic curves, patient characteristics, and outcomes. Synthetic data suppressed labels of zip codes with few total tests (mean = 2.9 ± 2.4; max = 16 tests; 66% reduction of unique zip codes). Epidemic curves and monthly indicator counts were similar between synthetic and original data in a random sample of the most tested zip codes (top 1%; n = 171) and for all unsuppressed zip codes (n = 5819), respectively. In small sample sizes, synthetic data utility was notably decreased. Discussion: Analyses at the population level and of densely tested zip codes (which contained most of the data) were similar between original and synthetically derived datasets. Analyses of sparsely tested populations were less similar and had more data suppression. Conclusion: In general, synthetic data were successfully used to analyze geospatial and temporal trends. Analyses using small sample sizes or populations were limited, in part due to purposeful data label suppression, an attribute disclosure countermeasure. Users should consider data fitness for use in these cases.

16.

Background

The electronic medical record (EMR)/electronic health record (EHR) is becoming an integral component of many primary-care outpatient practices. Before implementing an EMR/EHR system, primary-care practices should have an understanding of the potential benefits and limitations.

Objective

The objective of this study was to systematically review the recent literature around the impact of the EMR/EHR within primary-care outpatient practices.

Materials and methods

Searches of Medline, EMBASE, CINAHL, ABI Inform, and Cochrane Library were conducted to identify articles published between January 1998 and January 2010. The gray literature and reference lists of included articles were also searched. 30 studies met inclusion criteria.

Results and discussion

The EMR/EHR appears to have structural and process benefits, but the impact on clinical outcomes is less clear. Using Donabedian's framework, five articles focused on the impact on healthcare structure, 21 explored healthcare process issues, and four focused on health-related outcomes.

17.
Electronic case reporting (eCR) is the automated generation and transmission of case reports from electronic health records to public health agencies for review and action. These reports (electronic initial case reports, or eICRs) adhere to recommended exchange and terminology standards. eCR is a partnership of the Centers for Disease Control and Prevention (CDC), the Association of Public Health Laboratories (APHL), and the Council of State and Territorial Epidemiologists (CSTE). The Minnesota Department of Health (MDH) received eICRs for COVID-19 from April 2020 (3 sites, manual process), automated eCR implementation in August 2020 (7 sites), and onboarded approximately 1780 clinical units in 460 sites across 6 integrated healthcare systems (through March 2022). Approximately 20 000 eICRs per month were reported to MDH during high-volume timeframes. With increasing provider and health system implementation, the proportion of COVID-19 cases with an eICR increased to 30% (March 2022). Evaluation of data quality for select demographic variables (gender, race, ethnicity, email, phone, language) across the 6 reporting health systems revealed a high proportion of completeness (>80%) for half of the variables and less complete data for the rest (ethnicity, email, language), along with low ethnicity completeness (<50%) for one health system. At present, eCR implementation at MDH includes only one EHR vendor. Next steps will focus on onboarding other EHRs, additional eICR data extraction and utilization, detailed analysis, outreach to address data quality issues, and expansion to other reportable conditions.

18.

Objective

To identify key principles for establishing a national clinical decision support (CDS) knowledge sharing framework.

Materials and methods

As part of an initiative by the US Office of the National Coordinator for Health IT (ONC) to establish a framework for national CDS knowledge sharing, key stakeholders were identified. Stakeholders' viewpoints were obtained through surveys and in-depth interviews, and findings and relevant insights were summarized. Based on these insights, key principles were formulated for establishing a national CDS knowledge sharing framework.

Results

Nineteen key stakeholders were recruited, including six executives from electronic health record system vendors, seven executives from knowledge content producers, three executives from healthcare provider organizations, and three additional experts in clinical informatics. Based on these stakeholders' insights, five key principles were identified for effectively sharing CDS knowledge nationally. These principles are (1) prioritize and support the creation and maintenance of a national CDS knowledge sharing framework; (2) facilitate the development of high-value content and tooling, preferably in an open-source manner; (3) accelerate the development or licensing of required, pragmatic standards; (4) acknowledge and address medicolegal liability concerns; and (5) establish a self-sustaining business model.

Discussion

Based on the principles identified, a roadmap for national CDS knowledge sharing was developed through the ONC's Advancing CDS initiative.

Conclusion

The study findings may serve as a useful guide for ongoing activities by the ONC and others to establish a national framework for sharing CDS knowledge and improving clinical care.

19.
The development and implementation of clinical decision support (CDS) that trains itself and adapts its algorithms based on new data—here referred to as Adaptive CDS—present unique challenges and considerations. Although Adaptive CDS represents an expected progression from earlier work, the activities needed to appropriately manage and support the establishment and evolution of Adaptive CDS require new, coordinated initiatives and oversight that do not currently exist. In this AMIA position paper, the authors describe current and emerging challenges to the safe use of Adaptive CDS and lay out recommendations for the effective management and monitoring of Adaptive CDS.
