Similar Articles

20 similar articles found.
1.
Objective

To develop and operate a cloud-based federated system for managing, analyzing, and sharing patient data for research purposes, while allowing each resource that shares patient data to operate its component under its own governance rules. The federated system is called the Biomedical Research Hub (BRH).

Materials and Methods

The BRH is a cloud-based federated system built over a core set of software services called framework services. BRH framework services include authentication and authorization, services for generating and assessing findable, accessible, interoperable, and reusable (FAIR) data, and services for importing and exporting bulk clinical data. The BRH includes data resources, operated by different entities, that provide data, and workspaces that can access and analyze data from one or more of those data resources.

Results

The BRH contains multiple data commons that in aggregate provide access to over 6 PB of research data from over 400 000 research participants.

Discussion and Conclusion

With the growing acceptance of public cloud computing platforms for biomedical research, and the growing use of opaque persistent digital identifiers for datasets, data objects, and other entities, there is now a foundation for systems that federate data from multiple independently operated data resources that expose FAIR application programming interfaces, each using a separate data model. Applications can be built that access data from one or more of the data resources.

2.
Objective

Obtaining electronic patient data, especially from electronic health record (EHR) systems, for clinical and translational research is difficult. Multiple research informatics systems exist, but navigating the numerous applications can be challenging for scientists. This article describes the Architecture for Research Computing in Health (ARCH), our institution's approach for matching investigators with tools and services for obtaining electronic patient data.

Materials and Methods

Supporting the spectrum of studies from populations to individuals, ARCH delivers a breadth of scientific functions—including but not limited to cohort discovery, electronic data capture, and multi-institutional data sharing—that manifest in specific systems, such as i2b2, REDCap, and PCORnet. Through a consultative process, ARCH staff align investigators with tools with respect to study design, data sources, and cost. Although most ARCH services are available free of charge, advanced engagements require a fee for service.

Results

Since 2016 at Weill Cornell Medicine, ARCH has supported over 1200 unique investigators through more than 4177 consultations. Notably, ARCH infrastructure enabled critical coronavirus disease 2019 response activities for research and patient care.

Discussion

ARCH has provided a technical, regulatory, financial, and educational framework to support the biomedical research enterprise with electronic patient data. Collaboration among informaticians, biostatisticians, and clinicians has been critical to rapid generation and analysis of EHR data.

Conclusion

As a suite of tools and services, ARCH helps match investigators with informatics systems to reduce time to science. ARCH has facilitated research at Weill Cornell Medicine and may provide a model for informatics and research leaders seeking to support scientists elsewhere.

3.
Institutions must decide how to manage the use of clinical data to support research while ensuring appropriate protections are in place. Questions about data use and sharing often go beyond what the Health Insurance Portability and Accountability Act of 1996 (HIPAA) considers. In this article, we describe our institution's governance model and approach. Common questions we consider include: (1) Is a request limited to the minimum data necessary to carry the research forward? (2) What plans are there for sharing data externally? (3) What impact will the proposed use of data have on patients and the institution? In 2020, 302 of the 319 requests reviewed were approved. The majority of requests were approved in less than 2 weeks, with few or no stipulations. For the remaining requests, the governance committee works with researchers to find solutions that meet their needs while also addressing our collective goal of protecting patients.

4.
5.
Objective

Integrated, real-time data are crucial for evaluating translational efforts to accelerate innovation into care. Too often, however, needed data are fragmented across disparate systems. The South Carolina Clinical & Translational Research Institute at the Medical University of South Carolina (MUSC) developed and implemented a universal study identifier—the Research Master Identifier (RMID)—for tracking research studies across disparate systems, and a data warehouse-inspired model—the Research Integrated Network of Systems (RINS)—for integrating data from those systems.

Materials and Methods

In 2017, MUSC began requiring the use of RMIDs in informatics systems that support human subject studies. We developed a web-based tool to create RMIDs and application programming interfaces to synchronize research records and visualize linkages to protocols across systems. Selected data from these disparate systems were extracted and merged nightly into an enterprise data mart, and performance dashboards were created to monitor key translational processes.

Results

Within 4 years, 5513 RMIDs were created. Of these, 726 (13%) bridged systems needed to evaluate research study performance, and 982 (18%) linked to the electronic health records, enabling patient-level reporting.

Discussion

Barriers posed by data fragmentation to assessment of program impact have largely been eliminated at MUSC through the requirement for an RMID, its distribution via RINS to disparate systems, and the mapping of system-level data to a single integrated data mart.

Conclusion

By applying data warehousing principles to federate data at the "study" level, the RINS project reduced data fragmentation and promoted research systems integration.
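The study-level linkage described above can be sketched as a merge keyed on a shared identifier. This is a minimal illustration, assuming hypothetical system names and field names; the actual RINS schema is not described in the abstract.

```python
# Hypothetical sketch: federating study records from disparate systems
# by a shared Research Master Identifier (RMID). All field names and
# example systems are illustrative, not the MUSC implementation.

def merge_by_rmid(*system_extracts):
    """Merge per-system record dicts into one study-level record per RMID."""
    merged = {}
    for extract in system_extracts:
        for record in extract:
            rmid = record["rmid"]
            merged.setdefault(rmid, {"rmid": rmid}).update(
                {k: v for k, v in record.items() if k != "rmid"}
            )
    return merged

# Two assumed source extracts (eg, an IRB system and a trial-management system)
irb = [{"rmid": "R0001", "protocol": "IRB-2017-042"}]
ctms = [{"rmid": "R0001", "enrollment": 120}]

studies = merge_by_rmid(irb, ctms)
print(studies["R0001"])  # both systems' fields appear under one identifier
```

Because every system carries the same RMID, the merge needs no probabilistic record linkage; that design choice is what the abstract credits for eliminating fragmentation.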

6.
Design of a Hospital Decision Support System Based on Data Warehousing and Data Mining
The research and development of hospital information systems (HIS) is the fundamental route to the digital hospital. A decision support system based on data warehousing and data mining technology is a higher-level exploitation of HIS data. This article first introduces the technical characteristics of data warehousing, data mining, and decision support systems; on that basis, it proposes an architecture for a hospital decision support system based on data warehousing and data mining, and analyzes the advantages of this architecture.

7.
Objective

In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited dataset in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations.

Materials and Methods

We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using 4 federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, several DQ heuristics were discovered in our centralized context, both within the pipeline and during downstream project-based analysis. Feedback to the sites led to many local and centralized DQ improvements.

Results

Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source Common Data Model conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) demonstrated issues through these heuristics, and these 37 sites improved after receiving feedback.

Discussion

We encountered site-to-site differences in DQ that would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for DQ improvement that will support improved research analytics locally and in aggregate.

Conclusion

By combining rapid, continual assessment of DQ with a large volume of multisite data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
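A centralized DQ heuristic of the kind described can be sketched as a simple cross-site completeness check. The field name and threshold below are illustrative assumptions, not the actual N3C checks.

```python
# Illustrative data-quality heuristic: flag contributing sites whose
# rate of missing values for a demographic field exceeds a threshold.
# Field names ("sex") and the 10% threshold are assumptions for the
# sketch; the real N3C heuristics are described only at a high level.

def flag_sites(site_records, field, threshold=0.10):
    """Return sites where the fraction of missing `field` values exceeds threshold."""
    flagged = []
    for site, rows in site_records.items():
        if not rows:
            continue
        missing = sum(1 for r in rows if r.get(field) in (None, ""))
        if missing / len(rows) > threshold:
            flagged.append(site)
    return flagged

sites = {
    "site_a": [{"sex": "F"}, {"sex": "M"}, {"sex": None}],  # 33% missing
    "site_b": [{"sex": "F"}, {"sex": "M"}],                 # 0% missing
}
print(flag_sites(sites, "sex"))  # ['site_a']
```

Running the same check against the pooled data is what a centralized context adds: a site's missingness can be compared with the benchmark across all 56 partners, which federated checks alone cannot see.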

8.
Objective

To synthesize the data quality (DQ) dimensions and assessment methods applied to real-world data, especially electronic health records, through a systematic scoping review, and to assess the practice of DQ assessment in the national Patient-centered Clinical Research Network (PCORnet).

Materials and Methods

We started with 3 widely cited DQ publications—2 reviews, from Chan et al (2010) and Weiskopf et al (2013a), and 1 DQ framework, from Kahn et al (2016)—and expanded our review systematically to cover relevant articles published up to February 2020. We extracted DQ dimensions and assessment methods from these studies, mapped their relationships, and organized a synthesized summary of existing DQ dimensions and assessment methods. We then reviewed the data checks employed by PCORnet and mapped them to the synthesized DQ dimensions and methods.

Results

We analyzed a total of 3 reviews, 20 DQ frameworks, and 226 DQ studies, and extracted 14 DQ dimensions and 10 assessment methods. We found that completeness, concordance, and correctness/accuracy were commonly assessed. Element presence, validity checks, and conformance were commonly used DQ assessment methods and were the main focus of the PCORnet data checks.

Discussion

Definitions of DQ dimensions and methods were not consistent in the literature, and DQ assessment practice was unevenly distributed (eg, usability and ease of use were rarely discussed). Challenges in DQ assessment, given the complex and heterogeneous nature of real-world data, remain.

Conclusion

The practice of DQ assessment is still limited in scope. Future work is warranted to generate understandable, executable, and reusable DQ measures.

9.

Objective

To review the published, peer-reviewed literature on clinical research data warehouse governance in distributed research networks (DRNs).

Materials and Methods

Medline, PubMed, EMBASE, CINAHL, and INSPEC were searched for relevant documents published through July 31, 2013 using a systematic approach. Only documents relating to DRNs in the USA were included. Documents were analyzed using a classification framework consisting of 10 facets to identify themes.

Results

6641 documents were retrieved. After screening for duplicates and relevance, 38 were included in the final review. A peer-reviewed literature on data warehouse governance is emerging, but is still sparse. Peer-reviewed publications on UK research network governance were more prevalent, although not reviewed for this analysis. All 10 classification facets were used, with some documents falling into two or more classifications. No document addressed costs associated with governance.

Discussion

Even though DRNs are emerging as vehicles for research and public health surveillance, understanding of DRN data governance policies and procedures is limited. This is expected to change as more DRN projects disseminate their governance approaches as publicly available toolkits and peer-reviewed publications.

Conclusions

While peer-reviewed, US-based DRN data warehouse governance publications have increased, DRN developers and administrators are encouraged to publish information about these programs.

10.
The 21st Century Cures Act, passed in 2016, and the Final Rules it called for create a roadmap for enabling patient access to their electronic health information. The set of data to be made available, as determined by the Office of the National Coordinator for Health IT through the US Core Data for Interoperability (USCDI) expansion process, will shape the value created by this improved data liquidity. In this commentary, we examine the potential for significant value creation from the USCDI in the context of clinical bioinformatics research and advocate for the research community's involvement in the USCDI process to propel this value creation forward. We also describe 1 mechanism—using existing required APIs for full data export capabilities—that could pragmatically enable this value creation with minimal additional technical lift beyond the current regulatory requirements.

11.
As more and more medical institutions adopt electronic health record (EHR) systems to manage patient data, and given the need for patient data in clinical research, research institutions have also begun to use EHR systems as a data source for clinical research. The EHRCR (Electronic Health Records/Clinical Research) project was launched in December 2006 by the HL7 Technical Committee (Health Level Seven Technical Committee, HL7 TC) and the European Institute for Health Records (EuroRec) to study the functions that future EHR systems should provide to support clinical research, together with the related systems, networks, and business processes. This article introduces the project's latest findings as a reference for the development of the electronic health record field in China.

12.
Recently, an important public debate emerged about the digital afterlife of personal data stored in the cloud. This debate also draws attention to the importance of transparent management of electronic health record (EHR) data of deceased patients. In this perspective paper, we look at legal and regulatory policies for EHR data post mortem. We analyze observational research situations using EHR data that do not require institutional review board approval. We propose the creation of a deceased subject integrated data repository (dsIDR) as an effective tool for piloting certain types of research projects. We highlight several dsIDR challenges: proving death status, obtaining informed consent, obtaining data from payers and healthcare providers, and involving next of kin.

13.
Research on the Application of Data Warehousing in Hospitals
Objective: To query and analyze hospital information comprehensively and at multiple levels, providing information retrieval, data analysis, and decision support for all categories of hospital personnel. Methods: Data warehouse (DW) technology was used to extract and model data from the "Military No. 1" hospital information system and to build multidimensional data cubes. Results: A hospital DW architecture was established, and the average hospitalization cost and average length of stay of fully recovered patients in each department were analyzed. Conclusion: DW technology is a further direction of development for hospital information systems and will provide the most useful information for hospital decision support.
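The department-level averages in the results above amount to a group-by aggregation over discharge records. The record layout below is an assumption for illustration, not the "Military No. 1" schema.

```python
# Sketch of the multidimensional aggregation the study describes:
# average inpatient cost and average length of stay per department.
# The field names (dept, cost, days) are illustrative assumptions.

from collections import defaultdict

def dept_averages(discharges):
    """Return {department: (avg_cost, avg_length_of_stay)}."""
    totals = defaultdict(lambda: [0.0, 0, 0])  # [cost_sum, days_sum, count]
    for d in discharges:
        t = totals[d["dept"]]
        t[0] += d["cost"]
        t[1] += d["days"]
        t[2] += 1
    return {dept: (c / n, days / n) for dept, (c, days, n) in totals.items()}

records = [
    {"dept": "cardiology", "cost": 12000.0, "days": 10},
    {"dept": "cardiology", "cost": 8000.0,  "days": 6},
]
print(dept_averages(records))  # {'cardiology': (10000.0, 8.0)}
```

In a real data warehouse this aggregation would be precomputed along the cube's dimensions (department, time, diagnosis) rather than recomputed per query.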

14.
15.
Objective

The electronic health record (EHR) data deluge makes data retrieval more difficult, escalating cognitive load and exacerbating clinician burnout. New auto-summarization techniques are needed. The goal of this study was to determine whether problem-oriented view (POV) auto-summaries improve data retrieval workflows. We hypothesized that POV users would perform tasks faster, make fewer errors, be more satisfied with EHR use, and experience less cognitive load than users of the standard view (SV).

Methods

Simple data retrieval tasks were performed in an EHR simulation environment, using a randomized block design. In the control group (SV), subjects retrieved lab results and medications by navigating to the corresponding sections of the electronic record. In the intervention group (POV), subjects clicked on the name of the problem and immediately saw lab results and medications relevant to that problem.

Results

With POV, mean completion time was faster (173 seconds for POV vs 205 seconds for SV; P < .0001), the error rate was lower (3.4% for POV vs 7.7% for SV; P = .0010), user satisfaction was greater (System Usability Scale score 58.5 for POV vs 41.3 for SV; P < .0001), and cognitive task load was lower (NASA Task Load Index score 0.72 for POV vs 0.99 for SV; P < .0001).

Discussion

The study demonstrates that a problem-based auto-summary has a positive impact on 4 aspects of EHR data retrieval, including cognitive load.

Conclusion

EHRs have brought on a data deluge, with increased cognitive load and physician burnout. To mitigate these effects, further development and implementation of auto-summarization functionality and the requisite knowledge base are needed.

16.
Objective

Successful technology implementations frequently involve individuals who serve as mediators between end users, management, and technology developers. The goal of this project was to evaluate the structure and activities of such mediators in a large-scale electronic health record implementation.

Materials and Methods

Field notes from observations taken during an implementation beginning in November 2017 were analyzed qualitatively using a thematic analysis framework to examine the relationship between specific types of mediators and the type and level of support provided to end users.

Results

We found that support personnel possessing both contextual knowledge of the institution's workflow and training in the new technology were the most successful in mediating adoption and use. Those lacking context in either the technology or the institutional workflow often faced barriers in communication, trust, and active problem solving.

Conclusions

These findings suggest that institutional investment, prior to implementation, in technology training and in explicit programs to foster mediation skills—including roles for professionals with career development opportunities—can ease the pain of system transition.

17.
Objective

This study sought to evaluate whether synthetic data derived from a national coronavirus disease 2019 (COVID-19) dataset could be used for geospatial and temporal epidemic analyses.

Materials and Methods

Using an original dataset (n = 1 854 968 severe acute respiratory syndrome coronavirus 2 tests) and its synthetic derivative, we compared key indicators of COVID-19 community spread through analysis of aggregate and zip code-level epidemic curves, patient characteristics and outcomes, distribution of tests by zip code, and indicator counts stratified by month and zip code. Similarity between the datasets was evaluated statistically and qualitatively.

Results

In general, the synthetic data closely matched the original data for epidemic curves, patient characteristics, and outcomes. The synthetic data suppressed labels of zip codes with few total tests (mean = 2.9 ± 2.4; max = 16 tests; 66% reduction of unique zip codes). Epidemic curves and monthly indicator counts were similar between synthetic and original data in a random sample of the most tested zip codes (top 1%; n = 171) and for all unsuppressed zip codes (n = 5819), respectively. In small sample sizes, synthetic data utility was notably decreased.

Discussion

Analyses at the population level and of densely tested zip codes (which contained most of the data) were similar between the original and synthetically derived datasets. Analyses of sparsely tested populations were less similar and involved more data suppression.

Conclusion

In general, the synthetic data were successfully used to analyze geospatial and temporal trends. Analyses using small sample sizes or populations were limited, in part due to purposeful data label suppression—an attribute disclosure countermeasure. Users should consider data fitness for use in such cases.
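Distributional similarity between an original and a synthetic sample can be quantified with a two-sample Kolmogorov-Smirnov statistic. This is a minimal pure-Python sketch of that statistic, offered as background for the abstract's comparison; it is not the study's actual evaluation procedure.

```python
# Minimal two-sample Kolmogorov-Smirnov statistic: the maximum absolute
# difference between the two empirical cumulative distribution functions.
# A value near 0 indicates the two samples are distributed similarly.

import bisect

def ks_statistic(sample_a, sample_b):
    """Return max |ECDF_a(x) - ECDF_b(x)| over all observed values x."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of observations <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

original  = [1, 2, 2, 3, 4, 5]   # illustrative per-day test counts
synthetic = [1, 2, 3, 3, 4, 5]
print(ks_statistic(original, synthetic))  # 0.1666... : close distributions
```

In practice one would also compute a p-value (eg, via the asymptotic KS distribution) rather than inspect the raw statistic alone.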

18.
Objective

The goals of this study were to harmonize data from electronic health records (EHRs) into common units and to impute units that were missing.

Materials and Methods

We used the National COVID Cohort Collaborative (N3C) table of laboratory measurement data—over 3.1 billion patient records and over 19 000 unique measurement concepts in the Observational Medical Outcomes Partnership (OMOP) common-data-model format from 55 data partners. We grouped ontologically similar OMOP concepts together for 52 variables relevant to COVID-19 research, and developed a unit-harmonization pipeline comprising (1) selecting a canonical unit for each measurement variable, (2) arriving at a formula for conversion, (3) obtaining clinical review of each formula, (4) applying the formula to convert data values in each unit into the target canonical unit, and (5) removing any harmonized value that fell outside the accepted value range for the variable. For data with missing units across all the results of a lab test for a data partner, we compared values with the pooled values of all data partners, using the Kolmogorov-Smirnov test.

Results

Of the concepts without missing values, we harmonized 88.1% of the values and imputed units for 78.2% of records where units were absent (41% of contributors' records lacked units).

Discussion

The harmonization and inference methods developed here can serve as a resource for initiatives aiming to extract insight from heterogeneous EHR collections. Unique properties of centralized data are harnessed to enable unit inference.

Conclusion

The pipeline we developed for the pooled N3C data enables use of measurements that would otherwise be unavailable for analysis.
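Steps 4 and 5 of the pipeline above (convert to the canonical unit, then drop implausible values) can be sketched as follows. The mg/dL-to-mmol/L conversion for glucose is standard chemistry, but the variable set and accepted range here are illustrative assumptions, not the N3C configuration.

```python
# Sketch of unit harmonization: convert each value into the variable's
# canonical unit, then remove harmonized values outside an accepted range.
# The range bounds and the single-variable table are assumptions.

CANONICAL = {"glucose": "mmol/L"}
CONVERSIONS = {
    # (variable, source unit) -> conversion to canonical unit
    ("glucose", "mg/dL"): lambda v: v / 18.016,  # molar mass of glucose
}
ACCEPTED_RANGE = {"glucose": (0.5, 60.0)}  # plausible mmol/L bounds (assumed)

def harmonize(variable, value, unit):
    """Return value in the canonical unit, or None if unconvertible or implausible."""
    if unit != CANONICAL[variable]:
        convert = CONVERSIONS.get((variable, unit))
        if convert is None:
            return None  # no clinically reviewed formula for this unit
        value = convert(value)
    lo, hi = ACCEPTED_RANGE[variable]
    return value if lo <= value <= hi else None

print(harmonize("glucose", 90.0, "mg/dL"))    # ~5.0 mmol/L
print(harmonize("glucose", 9000.0, "mg/dL"))  # None: outside accepted range
```

The clinical review in step 3 matters precisely because each `CONVERSIONS` entry encodes a claim about physiology and laboratory practice, not just arithmetic.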

19.
Objective

We identified challenges and solutions in using electronic health record (EHR) systems for the design and conduct of pragmatic research.

Materials and Methods

Since 2012, the Health Care Systems Research Collaboratory has served as the resource coordinating center for 21 pragmatic clinical trial (PCT) demonstration projects. The EHR Core working group invited these demonstration projects to complete a written semistructured survey and used an inductive approach to review responses and identify EHR-related challenges and suggested EHR enhancements.

Results

We received survey responses from 20 projects and identified 21 challenges that fell into 6 broad themes: (1) inadequate collection of patient-reported outcome data, (2) lack of structured data collection, (3) data standardization, (4) resources to support customization of EHRs, (5) difficulties aggregating data across sites, and (6) accessing EHR data.

Discussion

Based on these findings, we formulated 6 prerequisites for PCTs that would enable the conduct of pragmatic research: (1) integrate the collection of patient-centered data into EHR systems, (2) facilitate structured research data collection by leveraging standard EHR functions, usable interfaces, and standard workflows, (3) support the creation of high-quality research data by using standards, (4) ensure adequate IT staff to support embedded research, (5) create aggregate, multi-data-type resources for multisite trials, and (6) create reusable and automated queries.

Conclusion

We hope this collection of specific EHR challenges and research needs will drive health system leaders, policymakers, and EHR designers to act on these suggestions and improve our national capacity for generating real-world evidence.

20.