Similar Articles
20 similar articles found (search time: 15 ms)
1.
The New York City Clinical Data Research Network (NYC-CDRN), funded by the Patient-Centered Outcomes Research Institute (PCORI), brings together 22 organizations including seven independent health systems to enable patient-centered clinical research, support a national network, and facilitate learning healthcare systems. The NYC-CDRN includes a robust, collaborative governance and organizational infrastructure, which takes advantage of its participants’ experience, expertise, and history of collaboration. The technical design will employ an information model to document and manage the collection and transformation of clinical data, local institutional staging areas to transform and validate data, a centralized data processing facility to aggregate and share data, and use of common standards and tools. We strive to ensure that our project is patient-centered; nurtures collaboration among all stakeholders; develops scalable solutions facilitating growth and connections; chooses simple, elegant solutions wherever possible; and explores ways to streamline the administrative and regulatory approval process across sites.

2.
Objective: Real-world data (RWD), defined as routinely collected healthcare data, can be a potential catalyst for addressing challenges faced in clinical trials. We performed a scoping review of database-specific RWD applications within clinical trial contexts, synthesizing prominent uses and themes.
Materials and Methods: Querying 3 biomedical literature databases, research articles using electronic health records, administrative claims databases, or clinical registries either within a clinical trial or in tandem with methodology related to clinical trials were included. Articles were required to use at least 1 US RWD source. All abstract screening, full-text screening, and data extraction were performed by 1 reviewer. Two reviewers independently verified all decisions.
Results: Of 2020 screened articles, 89 qualified: 59 articles used electronic health records, 29 used administrative claims, and 26 used registries. Our synthesis was driven by the general life cycle of a clinical trial, culminating in 3 major themes: trial process tasks (51 articles); dissemination strategies (6); and generalizability assessments (34). Despite a diverse set of diseases studied, <10% of trials using RWD for trial process tasks evaluated medications or procedures (5/51). All articles highlighted data-related challenges, such as missing values.
Discussion: Database-specific RWD have been occasionally leveraged for various clinical trial tasks. We observed underuse of RWD within conducted medication or procedure trials, though this is subject to the confounder of implicit reporting of RWD use.
Conclusion: Enhanced incorporation of RWD should be further explored for medication or procedure trials, including better understanding of how to handle related data quality issues to facilitate RWD use.

3.
Objectives: This systematic review aims to provide further insights into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of information necessary to enable external validation by other investigators.
Materials and Methods: We searched Embase, Medline, Web of Science, Cochrane Library, and Google Scholar to identify studies that developed 1 or more multivariable prognostic prediction models using electronic health record (EHR) data published in the period 2009–2019.
Results: We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models. The percentage of models externally validated in the same paper remained at around 10%. Throughout 2009–2019, for both the target population and the outcome definitions, code lists were provided for less than 20% of the models. For about half of the models that were developed using regression analysis, the final model was not completely presented.
Discussion: Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented.
Conclusion: Improvement in the reporting of information necessary to enable external validation by other investigators is still urgently needed to increase clinical adoption of developed models.

4.
The Patient-Centered Outcomes Research Institute (PCORI) has launched PCORnet, a major initiative to support an effective, sustainable national research infrastructure that will advance the use of electronic health data in comparative effectiveness research (CER) and other types of research. In December 2013, PCORI's board of governors funded 11 clinical data research networks (CDRNs) and 18 patient-powered research networks (PPRNs) for a period of 18 months. CDRNs are based on the electronic health records and other electronic sources of very large populations receiving healthcare within integrated or networked delivery systems. PPRNs are built primarily by communities of motivated patients, forming partnerships with researchers. These patients intend to participate in clinical research by generating questions, sharing data, volunteering for interventional trials, and interpreting and disseminating results. Rapidly building a new national resource to facilitate large-scale, patient-centered CER is associated with a number of technical, regulatory, and organizational challenges, which are described here.

5.
Objective: Large clinical databases are increasingly used for research and quality improvement. We describe an approach to data quality assessment from the General Medicine Inpatient Initiative (GEMINI), which collects and standardizes administrative and clinical data from hospitals.
Methods: The GEMINI database contained 245 559 patient admissions at 7 hospitals in Ontario, Canada from 2010 to 2017. We performed 7 computational data quality checks and iteratively re-extracted data from hospitals to correct problems. Thereafter, GEMINI data were compared to data manually abstracted from the hospitals’ electronic medical records for 23 419 selected data points on a sample of 7488 patients.
Results: Computational checks flagged 103 potential data quality issues, which were either corrected or documented to inform future analysis. For example, we identified the inclusion of canceled radiology tests, a time shift of transfusion data, and the mistaken processing of the chemical symbol for sodium (“Na”) as a missing value. Manual validation identified 1 important data quality issue that was not detected by computational checks: transfusion dates and times at 1 site were unreliable. Apart from that single issue, across all data tables, GEMINI data had high overall accuracy (98%–100%), sensitivity (95%–100%), specificity (99%–100%), positive predictive value (93%–100%), and negative predictive value (99%–100%) compared to the gold standard.
Discussion and Conclusion: Computational data quality checks with iterative re-extraction facilitated reliable data collection from hospitals but missed 1 critical quality issue. Combining computational and manual approaches may be optimal for assessing the quality of large multisite clinical databases.
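The agreement metrics reported above (accuracy, sensitivity, specificity, PPV, and NPV against a manually abstracted gold standard) can be sketched in a few lines. This is a minimal illustration with hypothetical values, not GEMINI code or data:

```python
def agreement_metrics(db_values, gold_values):
    """Compare boolean data points from a clinical database against a
    manually abstracted gold standard, as in multisite DQ validation."""
    tp = fp = tn = fn = 0
    for db, gold in zip(db_values, gold_values):
        if db and gold:
            tp += 1          # present in both
        elif db and not gold:
            fp += 1          # in database only
        elif not db and not gold:
            tn += 1          # absent from both
        else:
            fn += 1          # in gold standard only
    return {
        "accuracy": (tp + tn) / len(db_values),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical data points: did the database record a transfusion
# for each admission, versus the manual chart review?
db   = [True, True, True, False, False, False, True, False]
gold = [True, True, True, False, False, False, False, True]
m = agreement_metrics(db, gold)
print(m["accuracy"])  # 0.75
```

In practice these counts would be computed per data table and per site, which is how a single unreliable site (like the transfusion-timestamp issue above) can hide inside otherwise high aggregate numbers.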

6.
Objective: To develop an algorithm for building longitudinal medication dose datasets using information extracted from clinical notes in electronic health records (EHRs).
Materials and Methods: We developed an algorithm that converts medication information extracted using natural language processing (NLP) into a usable format and builds longitudinal medication dose datasets. We evaluated the algorithm on 2 medications extracted from clinical notes of Vanderbilt’s EHR and externally validated the algorithm using clinical notes from the MIMIC-III clinical care database.
Results: For the evaluation using Vanderbilt’s EHR data, the performance of our algorithm was excellent; F1-measures were ≥0.98 for both dose intake and daily dose. For the external validation using MIMIC-III, the algorithm achieved F1-measures ≥0.85 for dose intake and ≥0.82 for daily dose.
Discussion: Our algorithm addresses the challenge of building longitudinal medication dose data using information extracted from clinical notes. Overall performance was excellent, but the algorithm can perform poorly when incorrect information is extracted by NLP systems. Although it performed reasonably well when applied to the external data source, its performance was worse due to differences in the way drug information was written. The algorithm is implemented in the R package “EHR,” and the extracted data from Vanderbilt’s EHR along with the gold standards are provided so that users can reproduce the results and help improve the algorithm.
Conclusion: Our algorithm for building longitudinal dose data provides a straightforward way to use EHR data for medication-based studies. The external validation results suggest its potential applicability to other systems.
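The general idea of turning NLP-extracted dose mentions into a longitudinal daily-dose dataset can be sketched as follows. This is a simplified Python illustration, not the algorithm from the R "EHR" package; the tuple layout and example mentions are hypothetical:

```python
from collections import defaultdict

def build_longitudinal_doses(mentions):
    """Aggregate extracted dose mentions into a per-date daily dose.

    Each mention is (note_date, strength_mg_per_intake, intakes_per_day),
    e.g. a note saying "tacrolimus 2 mg twice daily" would yield
    (date, 2.0, 2). Returns {date: total_daily_dose_mg}.
    """
    daily = defaultdict(float)
    for date, strength, freq in mentions:
        # dose intake = strength; daily dose = strength x frequency
        daily[date] += strength * freq
    return dict(daily)

# Hypothetical mentions extracted from one patient's notes
mentions = [
    ("2015-03-01", 2.0, 2),  # "2 mg twice daily"
    ("2015-03-01", 1.0, 1),  # additional "1 mg at bedtime"
    ("2015-03-08", 3.0, 2),  # dose increased the following week
]
print(build_longitudinal_doses(mentions))
# {'2015-03-01': 5.0, '2015-03-08': 6.0}
```

The hard part the paper addresses, elided here, is resolving conflicting or incorrectly extracted mentions before aggregation, which is exactly where NLP errors propagate into the dose dataset.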

7.
Objective: The aim of this study was to collect and synthesize evidence regarding data quality problems encountered when working with variables related to social determinants of health (SDoH).
Materials and Methods: We conducted a systematic review of the literature on social determinants research and data quality and then iteratively identified themes in the literature using a content analysis process.
Results: The most commonly represented quality issue associated with SDoH data is plausibility (n = 31, 41%). Factors related to race and ethnicity have the largest body of literature (n = 40, 53%). The first theme, noted in 62% (n = 47) of articles, is that bias or validity issues often result from data quality problems. The most frequently identified validity issue is misclassification bias (n = 23, 30%). The second theme is that many of the articles suggest methods for mitigating the issues resulting from poor social determinants data quality. We grouped these into 5 suggestions: avoid complete case analysis, impute data, rely on multiple sources, use validated software tools, and select addresses thoughtfully.
Discussion: The type of data quality problem varies depending on the variable, and each problem is associated with particular forms of analytical error. Problems encountered with the quality of SDoH data are rarely distributed randomly. Data from Hispanic patients are more prone to issues with plausibility and misclassification than data from other racial/ethnic groups.
Conclusion: Consideration of data quality and evidence-based quality improvement methods may help prevent bias and improve the validity of research conducted with SDoH data.

8.
Objectives: To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality.
Materials and Methods: Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers.
Results: Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation.
Conclusions: Use of virtualized, open-source, secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes.

9.
Objective: As a long-standing Clinical and Translational Science Awards (CTSA) Program hub, the University of Pittsburgh and the University of Pittsburgh Medical Center (UPMC) developed and implemented a modern research data warehouse (RDW) to efficiently provision electronic patient data for clinical and translational research.
Materials and Methods: We designed and implemented an RDW named Neptune to serve the specific needs of our CTSA. Neptune uses an atomic design in which data are stored at a high level of granularity as represented in source systems. Neptune contains robust patient identity management tailored for research; integrates patient data from multiple sources, including electronic health records (EHRs), health plans, and research studies; and includes knowledge for mapping to standard terminologies.
Results: Neptune contains data for more than 5 million patients, longitudinally organized as a Health Insurance Portability and Accountability Act (HIPAA) Limited Data Set with dates, and includes structured EHR data, clinical documents, health insurance claims, and research data. Neptune is used as a source of patient data for hundreds of institutional review board-approved research projects by local investigators and for national projects.
Discussion: The design of Neptune was heavily influenced by the large size of UPMC, the varied data sources, and the rich partnership between the University and the healthcare system. It includes several unique aspects, including the physical warehouse straddling the University and UPMC networks and management under a HIPAA Business Associates Agreement.
Conclusion: We describe the design and implementation of an RDW at a large academic healthcare system that uses a distinctive atomic design in which data are stored at a high level of granularity.

10.
Objective: To assess the effectiveness of computer-aided clinical decision support systems (CDSS) in improving antibiotic prescribing in primary care.
Methods: A literature search using Medline (via PubMed) and Embase was conducted up to November 2013. Randomized controlled trials (RCTs) and cluster randomized trials (CRTs) that evaluated the effects of CDSS aimed at improving antibiotic prescribing practice in an ambulatory primary care setting were included for review. Two investigators independently extracted data about study design and quality, participant characteristics, interventions, and outcomes.
Results: Seven studies (4 CRTs, 3 RCTs) met our inclusion criteria. All studies were performed in the USA. Proportions of eligible patient visits that triggered CDSS use varied substantially between intervention arms of studies (range 2.8–62.8%). Five of the seven trials showed marginal to moderate statistically significant effects of CDSS in improving antibiotic prescribing behavior. CDSS that automatically provided decision support were more likely to improve prescribing practice than systems that had to be actively initiated by healthcare providers.
Conclusions: CDSS show promising effectiveness in improving antibiotic prescribing behavior in primary care. The magnitude of effects, compared to no intervention, appeared similar to that of other moderately effective single interventions directed at primary care providers. Additional research is warranted to determine the CDSS characteristics crucial to achieving high adoption by providers as a prerequisite of clinically relevant improvement in antibiotic prescribing.

11.
12.
Objective: Data quality (DQ) must be consistently defined in context. The attributes, metadata, and context of longitudinal real-world data (RWD) have not been formalized for quality improvement across the data production and curation life cycle. We sought to complete a literature review on DQ assessment frameworks, indicators, and tools for research, public health, service, and quality improvement across the data life cycle.
Materials and Methods: The review followed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Databases from the health, physical, and social sciences were used: Cinahl, Embase, Scopus, ProQuest, Emcare, PsycINFO, Compendex, and Inspec. Embase was used instead of PubMed (an interface to search MEDLINE) because it includes all MeSH (Medical Subject Headings) terms and journals in MEDLINE as well as additional unique journals and conference abstracts. A combined data life cycle and quality framework guided the search of published and gray literature for DQ frameworks, indicators, and tools. At least 2 authors independently identified articles for inclusion and extracted and categorized DQ concepts and constructs. All authors discussed findings iteratively until consensus was reached.
Results: The 120 included articles yielded concepts related to contextual (data source, custodian, and user) and technical (interoperability) factors across the data life cycle. Contextual DQ subcategories included relevance, usability, accessibility, timeliness, and trust. Well-tested computable DQ indicators and assessment tools were also found.
Conclusions: A DQ assessment framework that covers intrinsic, technical, and contextual categories across the data life cycle enables assessment and management of RWD repositories to ensure fitness for purpose. Balancing security, privacy, and FAIR principles requires trust and reciprocity, transparent governance, and organizational cultures that value good documentation.

13.
Objective: In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited dataset in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations.
Materials and Methods: We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using 4 federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, several DQ heuristics were discovered in our centralized context, both within the pipeline and during downstream project-based analysis. Feedback to the sites led to many local and centralized DQ improvements.
Results: Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source Common Data Model conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) demonstrated issues through these heuristics, and these 37 sites showed improvement after receiving feedback.
Discussion: We encountered site-to-site differences in DQ that would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for DQ improvement that will support improved research analytics locally and in aggregate.
Conclusion: By combining rapid, continual assessment of DQ with a large volume of multisite data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
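A centralized DQ heuristic of the kind described, checking each contributing site against simple conformance and plausibility rules, might look like the sketch below. The thresholds, field names, and site statistics are hypothetical, not N3C's actual checks:

```python
def flag_sites(site_stats, max_missing=0.05, expected_sex=("F", "M")):
    """Apply two simple DQ heuristics across contributing sites:
    (1) the missing-value rate for a key field exceeds a threshold;
    (2) an expected demographic category is entirely absent.
    Returns {site_id: [issue descriptions]} for sites with findings.
    """
    issues = {}
    for site, stats in site_stats.items():
        found = []
        if stats["missing_rate"] > max_missing:
            found.append(
                f"missing rate {stats['missing_rate']:.0%} "
                f"exceeds {max_missing:.0%}"
            )
        for sex in expected_sex:
            if stats["sex_counts"].get(sex, 0) == 0:
                found.append(f"no patients recorded with sex={sex}")
        if found:
            issues[site] = found
    return issues

# Hypothetical per-site summary statistics from a central pipeline
stats = {
    "site_a": {"missing_rate": 0.02, "sex_counts": {"F": 510, "M": 490}},
    "site_b": {"missing_rate": 0.12, "sex_counts": {"F": 300, "M": 0}},
}
print(flag_sites(stats))
```

Running checks like this centrally, rather than asking each site to self-report, is what surfaces the site-to-site differences the abstract highlights: a site cannot easily see that its own demographic distribution is an outlier.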

14.
Application of electronic medical records in clinical quality control (total citations: 4; self-citations: 1; citations by others: 3)
Clinical quality management is one of the typical applications of electronic medical records. This article describes the functional framework and application advantages of clinical quality management, and discusses the methods, measures, and effects of strengthening medical quality control, in particular electronic medical record quality control, in a networked environment.

15.
Objective: This systematic review aims to assess how information from unstructured text is used to develop and validate clinical prognostic prediction models. We summarize the prediction problems and methodological landscape and determine whether using text data in addition to more commonly used structured data improves prediction performance.
Materials and Methods: We searched Embase, MEDLINE, Web of Science, and Google Scholar to identify studies that developed prognostic prediction models using information extracted from unstructured text in a data-driven manner, published between January 2005 and March 2021. Data items were extracted and analyzed, and a meta-analysis of model performance was carried out to assess the added value of text to structured-data models.
Results: We identified 126 studies that described 145 clinical prediction problems. Combining text and structured data improved model performance compared with using only text or only structured data. In these studies, a wide variety of dense and sparse numeric text representations were combined with both deep learning and more traditional machine learning methods. External validation, public availability, and attention to the explainability of the developed models were limited.
Conclusion: In most studies, using unstructured text in addition to structured data was found beneficial for developing prognostic prediction models. Text data are a source of valuable information for prediction model development and should not be neglected. We suggest a future focus on explainability and external validation of the developed models, promoting robust and trustworthy prediction models in clinical practice.

16.
Taking the oncology department of a representative hospital as a case, this article analyzes indicators from the department's outpatient and inpatient data for 2005–2009, including treatment outcomes of discharged patients, medical workload, and discharged patients' sex, age, disease types, and geographic origin, as well as the use of the distinctive strengths of traditional Chinese medicine, to show the department's development over recent years. The results are then discussed, emphasizing that the oncology department, as a key specialty of the State Administration of Traditional Chinese Medicine, has indeed played a leading role in the hospital and even nationwide, while also noting that the department still needs to strengthen its long-term planning and its emphasis on the distinctive advantages of traditional Chinese medicine. The article provides a reference for the government and the hospital itself in evaluating the achievements of key specialty construction and in formulating future work plans.

17.
Practice and exploration of clinical teaching quality in health vocational colleges (total citations: 1; self-citations: 0; citations by others: 1)
Clinical practice teaching is a critical stage of medical education. Health vocational colleges should, in light of their own characteristics and the needs of regional health development, strengthen the construction of clinical teaching bases, create new models of clinical education, promote communication and coordination between medical institutions and schools, improve the clinical teaching management network, and continuously raise the quality of clinical practice teaching.

18.
Suggestions for improving the reporting quality of clinical research on traditional Chinese medicine (total citations: 9; self-citations: 2; citations by others: 9)
The rise of evidence-based medicine has attracted great attention internationally and has quickly become the standard for clinical research. This imposes strict requirements not only on the design of randomized controlled trials, such as the use of scientific randomization methods and appropriate control groups, but also on the accurate and complete reporting of clinical research results. Although clinical research on traditional Chinese medicine in China has made remarkable progress in recent years, most Chinese-language literature on traditional Chinese medicine falls short of international standards because of deficiencies in reporting quality. As global demand for information on traditional Chinese medicine research grows, improving the reporting quality of research results is imperative. This requires improving every section of a paper: the introduction or background, materials and methods, results, discussion, and conclusions. Journal editors, as the key gatekeepers of quality, should raise review standards and encourage more high-quality submissions.

19.
Objective: To review and evaluate available software tools for electronic health record-driven phenotype authoring in order to identify gaps and needs for future development.
Materials and Methods: Candidate phenotype authoring tools were identified through (1) a literature search in four publication databases (PubMed, Embase, Web of Science, and Scopus) and (2) a web search. A collection of tools was compiled and reviewed after the searches. A survey was designed and distributed to the developers of the reviewed tools to discover their functionalities and features.
Results: Twenty-four different phenotype authoring tools were identified and reviewed. Developers of 16 of these tools completed the evaluation survey (67% response rate). The surveyed tools showed commonalities but also varied in their capabilities in algorithm representation, logic functions, data support and software extensibility, search functions, user interface, and data outputs.
Discussion: Positive trends identified in the evaluation included that algorithms can be represented in both computable and human-readable formats, and that most tools offer a web interface for easy access. However, issues were also identified: many tools lacked advanced logic functions for authoring complex algorithms; the ability to construct queries that leveraged unstructured data was not widely implemented; and many tools had limited support for plug-ins or external analytic software.
Conclusions: Existing phenotype authoring tools could enable clinical researchers to work with electronic health record data more efficiently, but gaps still exist in the functionalities of such tools. The present work can serve as a reference point for the future development of similar tools.

20.
Objective: The electronic health record (EHR) data deluge makes data retrieval more difficult, escalating cognitive load and exacerbating clinician burnout. New auto-summarization techniques are needed. The goal of this study was to determine whether problem-oriented view (POV) auto-summaries improve data retrieval workflows. We hypothesized that POV users would perform tasks faster, make fewer errors, be more satisfied with EHR use, and experience less cognitive load compared with users of the standard view (SV).
Methods: Simple data retrieval tasks were performed in an EHR simulation environment using a randomized block design. In the control group (SV), subjects retrieved lab results and medications by navigating to the corresponding sections of the electronic record. In the intervention group (POV), subjects clicked on the name of the problem and immediately saw lab results and medications relevant to that problem.
Results: With POV, mean completion time was faster (173 seconds for POV vs 205 seconds for SV; P < .0001), the error rate was lower (3.4% for POV vs 7.7% for SV; P = .0010), user satisfaction was greater (System Usability Scale score 58.5 for POV vs 41.3 for SV; P < .0001), and cognitive task load was less (NASA Task Load Index score 0.72 for POV vs 0.99 for SV; P < .0001).
Discussion: The study demonstrates that a problem-based auto-summary has a positive impact on 4 aspects of EHR data retrieval, including cognitive load.
Conclusion: EHRs have brought on a data deluge, with increased cognitive load and physician burnout. To mitigate these effects, further development and implementation of auto-summarization functionality and the requisite knowledge base are needed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号