Similar Documents
20 similar documents found
1.
2.
Objective: We used a computerized order entry system–integrated function referred to as “void” to identify erroneous orders (ie, “void” orders). Using voided orders, we aimed to (1) identify the nature and characteristics of medication ordering errors, (2) investigate the risk factors associated with these errors, and (3) explore potential strategies to mitigate those risk factors.
Materials and Methods: We collected data on voided orders using clinician interviews and surveys within 24 hours of the voided order, as well as chart reviews. Interviews were informed by the human factors–based SEIPS (Systems Engineering Initiative for Patient Safety) model to characterize the work systems–based risk factors contributing to ordering errors; chart reviews were used to establish whether a voided order was a true medication ordering error and to ascertain its impact on patient safety.
Results: During the 16-month study period (August 25, 2017, to December 31, 2018), 1074 medication orders were voided; 842 voided orders were true medication errors (positive predictive value = 78.3 ± 1.2%). A total of 22% (n = 190) of the medication ordering errors reached the patient, with at least a single administration, without causing patient harm. Interviews were conducted on 355 voided orders (33% response rate). Errors were not uniquely associated with a single risk factor; the causal contributors were multifactorial, arising from a combination of technological, cognitive, environmental, social, and organizational factors.
Conclusions: The void function offers a practical, standardized method to create a rich database of medication ordering errors. We highlight implications of using the void function for future research, practice, and learning opportunities.
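The reported positive predictive value and its margin follow directly from the counts in the abstract. A quick sketch, assuming the ± figure is a normal-approximation standard error (the interval method is not stated in the abstract):

```python
import math

def ppv_with_se(true_positives: int, flagged: int) -> tuple[float, float]:
    # PPV of a screening signal (here: voided orders that proved to be true
    # medication errors) with a normal-approximation standard error.
    p = true_positives / flagged
    se = math.sqrt(p * (1 - p) / flagged)
    return p, se

# Counts from the abstract: 842 true errors among 1074 voided orders.
p, se = ppv_with_se(842, 1074)
print(f"PPV = {100 * p:.1f}% +/- {100 * se:.1f}%")  # prints "PPV = 78.4% +/- 1.3%"
```

Rounding and the exact interval method account for the small difference from the reported 78.3 ± 1.2%.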

3.
Objective: To develop prediction models for intensive care unit (ICU) vs non-ICU level-of-care need within 24 hours of inpatient admission for emergency department (ED) patients using electronic health record data.
Materials and Methods: Using records of 41 654 ED visits to a tertiary academic center from 2015 to 2019, we tested 4 algorithms—feed-forward neural networks, regularized regression, random forests, and gradient-boosted trees—to predict ICU vs non-ICU level-of-care need within 24 hours and at the 24th hour following admission. Simple-feature models included patient demographics, Emergency Severity Index (ESI), and vital sign summaries. Complex-feature models added all vital signs, lab results, and counts of diagnosis, imaging, procedure, medication, and lab orders.
Results: The best-performing model, a gradient-boosted tree using the full feature set, achieved an AUROC of 0.88 (95% CI, 0.87–0.89) and an AUPRC of 0.65 (95% CI, 0.63–0.68) for predicting ICU care need within 24 hours of admission. The logistic regression model using ESI achieved an AUROC of 0.67 (95% CI, 0.65–0.70) and an AUPRC of 0.37 (95% CI, 0.35–0.40). At a discrimination threshold of 0.6, the positive predictive value, negative predictive value, sensitivity, and specificity were 85%, 89%, 30%, and 99%, respectively. Vital signs were the most important predictors.
Discussion and Conclusions: Undertriaging admitted ED patients who subsequently require ICU care is common and associated with poorer outcomes. Machine learning models using readily available electronic health record data predict subsequent need for ICU admission with good discrimination, substantially better than the benchmark ESI system. The results could be used in a multitiered clinical decision-support system to improve ED triage.
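The threshold-based operating point quoted in the Results can be illustrated with a small sketch; the labels and scores below are toy values, not study data:

```python
def threshold_metrics(y_true, y_score, threshold=0.6):
    # Confusion-matrix metrics at a fixed discrimination threshold, the
    # trade-off quoted in the Results (high specificity, low sensitivity).
    tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s < threshold)
    return {
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 3 true ICU cases, 4 non-ICU cases.
metrics = threshold_metrics(
    [1, 1, 1, 0, 0, 0, 0],
    [0.9, 0.7, 0.4, 0.65, 0.2, 0.1, 0.3],
)
```

Raising the threshold trades sensitivity for specificity, which is how a triage tool keeps false alarms rare at the cost of missing some ICU-bound patients.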

4.
Objective: After a clinical prediction model is deployed, subsequently collected data can be used to fine-tune its predictions and adapt to temporal shifts. Because model updating carries risks of over-updating and overfitting, we study online methods with performance guarantees.
Materials and Methods: We introduce 2 procedures for continual recalibration or revision of an underlying prediction model: Bayesian logistic regression (BLR) and a Markov variant that explicitly models distribution shifts (MarBLR). We perform empirical evaluation via simulations and a real-world study predicting chronic obstructive pulmonary disease (COPD) risk. We derive “Type I and II” regret bounds, which guarantee that the procedures are noninferior to a static model and competitive with an oracle logistic reviser in terms of average loss.
Results: Both procedures consistently outperformed the static model and other online logistic revision methods. In simulations, the average estimated calibration index (aECI) of the original model was 0.828 (95% CI, 0.818–0.938). Online recalibration using BLR and MarBLR improved the aECI toward the ideal value of zero, attaining 0.265 (95% CI, 0.230–0.300) and 0.241 (95% CI, 0.216–0.266), respectively. When performing more extensive logistic model revisions, BLR and MarBLR increased the average area under the receiver-operating characteristic curve (aAUC) from 0.767 (95% CI, 0.765–0.769) to 0.800 (95% CI, 0.798–0.802) and 0.799 (95% CI, 0.797–0.801), respectively, in stationary settings and protected against substantial model decay. In the COPD study, BLR and MarBLR dynamically combined the original model with a continually refitted gradient-boosted tree to achieve aAUCs of 0.924 (95% CI, 0.913–0.935) and 0.925 (95% CI, 0.914–0.935), compared to the static model’s aAUC of 0.904 (95% CI, 0.892–0.916).
Discussion: Despite its simplicity, BLR is highly competitive with MarBLR. MarBLR outperforms BLR when its prior better reflects the data.
Conclusions: BLR and MarBLR can improve the transportability of clinical prediction models and maintain their performance over time.
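A minimal sketch of the recalibration idea: learn an intercept and slope on the logit of the base model's output, updated one labeled outcome at a time. This uses plain stochastic gradient descent rather than the Bayesian posterior updating of BLR/MarBLR, so it illustrates online logistic recalibration in general, not the paper's procedure:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineRecalibrator:
    # Learn an intercept a and slope b on logit(p_base) by stochastic
    # gradient descent on the log-loss, one observation at a time.
    def __init__(self, lr=0.02):
        self.a, self.b, self.lr = 0.0, 1.0, lr

    def predict(self, p_base):
        return sigmoid(self.a + self.b * logit(p_base))

    def update(self, p_base, y):
        err = self.predict(p_base) - y   # gradient of log-loss w.r.t. z
        self.a -= self.lr * err
        self.b -= self.lr * err * logit(p_base)

# The base model says 0.8 where the observed event rate is only 0.5;
# streaming updates pull the recalibrated output toward 0.5.
rc = OnlineRecalibrator()
for _ in range(500):
    rc.update(0.8, 1)
    rc.update(0.8, 0)
```

With a = 0 and b = 1 the recalibrator starts out reproducing the base model, which mirrors the noninferiority intuition: absent evidence of miscalibration, the update should leave the original predictions essentially unchanged.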

5.
Objective: To compare Cox models, machine learning (ML), and ensemble models combining both approaches for prediction of stroke risk in a prospective study of Chinese adults.
Materials and Methods: We evaluated models for stroke risk at varying intervals of follow-up (<9 years, 0–3 years, 3–6 years, 6–9 years) in 503 842 adults without prior history of stroke recruited from 10 areas in China in 2004–2008. Inputs included sociodemographic factors, diet, medical history, physical activity, and physical measurements. We compared the discrimination and calibration of Cox regression, logistic regression, support vector machines, random survival forests, gradient-boosted trees (GBT), and multilayer perceptrons, benchmarking performance against the 2017 Framingham Stroke Risk Profile. We then developed an ensemble approach to identify individuals at high risk of stroke (>10% predicted 9-year stroke risk) by selectively applying either a GBT or a Cox model based on individual-level characteristics.
Results: For 9-year stroke risk prediction, GBT provided the best discrimination (AUROC: 0.833 in men, 0.836 in women) and calibration, with consistent results in each interval of follow-up. The ensemble approach yielded incrementally higher accuracy (men: 76%, women: 80%), specificity (men: 76%, women: 81%), and positive predictive value (men: 26%, women: 24%) compared to any of the single-model approaches.
Discussion and Conclusion: Among several approaches, an ensemble model combining both GBT and Cox models achieved the best performance for identifying individuals at high risk of stroke in a contemporary study of Chinese adults. The results highlight the potential value of expanding the use of ML in clinical practice.
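The selective-application ensemble can be sketched as a simple per-individual router. The routing rule and risk functions below are hypothetical stand-ins for the fitted GBT and Cox models, not the study's actual models:

```python
def ensemble_predict(person, cox_model, gbt_model, use_gbt, threshold=0.10):
    # Route each individual to either the GBT or the Cox risk model based
    # on individual-level characteristics, then flag >10% predicted
    # 9-year stroke risk as high risk.
    model = gbt_model if use_gbt(person) else cox_model
    risk = model(person)
    return risk, risk > threshold

# Hypothetical stand-ins: constant-risk models and an age-based router.
cox = lambda p: 0.05
gbt = lambda p: 0.15
route_to_gbt = lambda p: p["age"] >= 60

high = ensemble_predict({"age": 65}, cox, gbt, route_to_gbt)   # (0.15, True)
low = ensemble_predict({"age": 40}, cox, gbt, route_to_gbt)    # (0.05, False)
```

The design choice is that each model only scores the subpopulation where it is strongest, which is how the ensemble can beat either model applied uniformly.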

6.
7.
Objective: Large clinical databases are increasingly used for research and quality improvement. We describe an approach to data quality assessment from the General Medicine Inpatient Initiative (GEMINI), which collects and standardizes administrative and clinical data from hospitals.
Methods: The GEMINI database contained 245 559 patient admissions at 7 hospitals in Ontario, Canada from 2010 to 2017. We performed 7 computational data quality checks and iteratively re-extracted data from hospitals to correct problems. Thereafter, GEMINI data were compared to data manually abstracted from each hospital’s electronic medical record for 23 419 selected data points on a sample of 7488 patients.
Results: Computational checks flagged 103 potential data quality issues, which were either corrected or documented to inform future analysis. For example, we identified the inclusion of canceled radiology tests, a time shift of transfusion data, and mistaken processing of the chemical symbol for sodium (“Na”) as a missing value. Manual validation identified 1 important data quality issue that was not detected by computational checks: transfusion dates and times at 1 site were unreliable. Apart from that single issue, across all data tables, GEMINI data had high overall accuracy (98%–100%), sensitivity (95%–100%), specificity (99%–100%), positive predictive value (93%–100%), and negative predictive value (99%–100%) compared to the gold standard.
Discussion and Conclusion: Computational data quality checks with iterative re-extraction facilitated reliable data collection from hospitals but missed 1 critical quality issue. Combining computational and manual approaches may be optimal for assessing the quality of large multisite clinical databases.
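The sodium (“Na”) issue is an instance of a classic pitfall: case-insensitive missing-value matching. A minimal sketch of the bug and a stricter parser; the token lists are illustrative, not GEMINI's actual configuration:

```python
# Naive missing-value handling: a case-insensitive NA token list silently
# swallows the chemical symbol "Na" (sodium). Token lists are illustrative.
NAIVE_NA_TOKENS = {"", "na", "n/a", "null", "none"}
STRICT_NA_TOKENS = {"", "N/A", "NULL"}

def parse_naive(value: str):
    return None if value.strip().lower() in NAIVE_NA_TOKENS else value

def parse_strict(value: str):
    # Safer: match missing-value tokens exactly (case-sensitive allowlist).
    return None if value.strip() in STRICT_NA_TOKENS else value

assert parse_naive("Na") is None       # sodium lost as a missing value
assert parse_strict("Na") == "Na"      # sodium preserved
assert parse_strict("N/A") is None     # genuine missing still detected
```

A computational check for this class of bug is a per-field missingness rate: a test-code column that is "missing" far more often than its neighbors is a parsing suspect.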

8.
Background: The 21st Century Cures Act mandates patients’ access to their electronic health record (EHR) notes. To our knowledge, no previous work has systematically invited patients to proactively report diagnostic concerns while documenting and tracking their diagnostic experiences through EHR-based clinician note review.
Objective: To test whether patients can identify concerns about their diagnosis through structured evaluation of their online visit notes.
Methods: In a large integrated health system, patients aged 18–85 years actively using the patient portal and seen between October 2019 and February 2020 were invited to respond to an online questionnaire if an EHR algorithm detected any recent unexpected return visit following an initial primary care consultation (an “at-risk” visit). We developed and tested an instrument (Safer Dx Patient Instrument) to help patients identify concerns related to several dimensions of the diagnostic process based on note review and recall of recent “at-risk” visits. Additional questions assessed patients’ trust in their providers and their general feelings about the visit. The primary outcome was a self-reported diagnostic concern. Multivariate logistic regression tested whether the primary outcome was predicted by instrument variables.
Results: Of 293 566 visits, the algorithm identified 1282 eligible patients, of whom 486 responded. After applying exclusion criteria, 418 patients were included in the analysis. Fifty-one patients (12.2%) identified a diagnostic concern. Patients were more likely to report a concern if they disagreed with the statements “the care plan the provider developed for me addressed all my medical concerns” (odds ratio [OR], 2.65; 95% confidence interval [CI], 1.45–4.87) and “I trust the provider that I saw during my visit” (OR, 2.10; 95% CI, 1.19–3.71), and agreed with the statement “I did not have a good feeling about my visit” (OR, 1.48; 95% CI, 1.09–2.01).
Conclusion: Patients can identify diagnostic concerns based on a proactive online structured evaluation of visit notes. This surveillance strategy could potentially improve transparency in the diagnostic process.

9.
10.
Objective: Federated learning (FL) allows multiple distributed data holders to collaboratively learn a shared model without data sharing. However, individual health system data are heterogeneous. “Personalized” FL variations have been developed to counter data heterogeneity, but few have been evaluated using real-world healthcare data. The purpose of this study is to investigate the performance of a single-site versus a 3-client federated model using a previously described Coronavirus Disease 2019 (COVID-19) diagnostic model. Additionally, to investigate the effect of system heterogeneity, we evaluate the performance of 4 FL variations.
Materials and Methods: We leverage an FL healthcare collaborative including data from 5 international healthcare systems (US and Europe) encompassing 42 hospitals. We implemented a COVID-19 computer vision diagnosis system using the Federated Averaging (FedAvg) algorithm implemented on Clara Train SDK 4.0. To study the effect of data heterogeneity, training data were pooled from 3 systems locally and federation was simulated. We compared a centralized/pooled model versus FedAvg and 3 personalized FL variations (FedProx, FedBN, and FedAMP).
Results: We observed comparable model performance with respect to internal validation (local model: AUROC 0.94 vs FedAvg: 0.95, P = .5) and improved model generalizability with the FedAvg model (P < .05). When investigating the effects of model heterogeneity, we observed poor performance with FedAvg on internal validation compared to personalized FL algorithms, although FedAvg had improved generalizability compared to personalized FL algorithms. On average, FedBN had the best rank performance on internal and external validation.
Conclusion: FedAvg can significantly improve the generalization of the model compared to personalized FL algorithms, although at the cost of poorer internal validity. Personalized FL may offer an opportunity to develop algorithms that are both internally and externally valid.
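The aggregation step of Federated Averaging itself is simple: the server replaces the global parameters with a dataset-size-weighted mean of the client parameters. A minimal sketch with flat parameter lists (real implementations, such as the Clara Train setup used here, average per-layer tensors over many communication rounds):

```python
def fed_avg(client_params, client_sizes):
    # One FedAvg aggregation round: a mean of client parameter vectors
    # weighted by local dataset size.
    total = sum(client_sizes)
    n = len(client_params[0])
    return [
        sum(params[i] * size for params, size in zip(client_params, client_sizes)) / total
        for i in range(n)
    ]

# 3 simulated clients with 2 parameters each; the third holds twice the data.
global_params = fed_avg(
    [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]],
    [100, 100, 200],
)
# global_params -> [2.0, 2.5]
```

Personalized variants modify exactly this step: FedProx regularizes local training toward the global parameters, and FedBN excludes batch-normalization layers from the average so each site keeps its own feature statistics.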

11.
12.
Background: Existing clinical prediction models for in vitro fertilization are based on fresh oocyte cycles, and there is no prediction model to evaluate the probability of successful thawing of cryopreserved mature oocytes. This research aims to identify and study the characteristics of pre-oocyte-retrieval patients that can affect the pregnancy outcomes of emergency oocyte freeze-thaw cycles.
Methods: Data were collected from the Reproductive Center, Peking University Third Hospital of China. A multivariable logistic regression model was used to derive the nomogram. Nomogram model performance was assessed by examining discrimination and calibration in the development and validation cohorts. Discriminatory ability was assessed using the area under the receiver operating characteristic curve (AUC), and calibration was assessed using the Hosmer–Lemeshow goodness-of-fit test and calibration plots.
Results: The predictors in the model of “no transferable embryo cycles” were female age (odds ratio [OR] = 1.099, 95% confidence interval [CI] = 1.003–1.205, P = 0.0440), duration of infertility (OR = 1.140, 95% CI = 1.018–1.276, P = 0.0240), basal follicle-stimulating hormone (FSH) level (OR = 1.205, 95% CI = 1.051–1.382, P = 0.0084), basal estradiol (E2) level (OR = 1.006, 95% CI = 1.001–1.010, P = 0.0120), and sperm from microdissection testicular sperm extraction (MESA) (OR = 7.741, 95% CI = 2.905–20.632, P < 0.0010). Upon assessing predictive ability, the AUC for the “no transferable embryo cycles” model was 0.799 (95% CI: 0.722–0.875, P < 0.0010). The Hosmer–Lemeshow test (P = 0.7210) and calibration curve showed good calibration for the prediction of no transferable embryo cycles. The predictors of cumulative live birth were the number of follicles on the day of human chorionic gonadotropin (hCG) administration (OR = 1.088, 95% CI = 1.030–1.149, P = 0.0020) and endometriosis (OR = 0.172, 95% CI = 0.035–0.853, P = 0.0310). The AUC for the “cumulative live birth” model was 0.724 (95% CI: 0.647–0.801, P < 0.0010). The Hosmer–Lemeshow test (P = 0.5620) and calibration curve showed good calibration for the prediction of cumulative live birth.
Conclusions: The predictors in the final multivariate logistic regression models found to be significantly associated with poor pregnancy outcomes were increasing female age, duration of infertility, high basal FSH and E2 levels, endometriosis, sperm from MESA, and a low number of follicles with a diameter >10 mm on the day of hCG administration.
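Each reported odds ratio maps back to a logistic regression coefficient via its natural log, and a nomogram is a graphical rendering of the resulting linear predictor. A sketch of that relationship; the age coefficient below is derived from the reported OR of 1.099, but the intercept is invented purely for illustration:

```python
import math

def predict_risk(coefs, intercept, x):
    # Logistic (nomogram-style) model: linear predictor -> probability.
    z = intercept + sum(coefs[k] * x[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-z))

# A reported odds ratio maps back to a coefficient via its natural log.
beta_age = math.log(1.099)          # OR 1.099 per year of female age
intercept = -4.0                    # hypothetical value, illustration only

risk_35 = predict_risk({"age": beta_age}, intercept, {"age": 35})
risk_40 = predict_risk({"age": beta_age}, intercept, {"age": 40})
# risk increases with age because the OR exceeds 1
```

An OR below 1, such as 0.172 for endometriosis in the cumulative live birth model, corresponds to a negative coefficient and thus lowers the predicted probability.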

13.
14.
15.
Objectives: To assess the fairness and bias of a previously validated machine learning opioid misuse classifier.
Materials and Methods: Two experiments were conducted with the classifier’s original (n = 1000) and external validation (n = 53 974) datasets from 2 health systems. Bias was assessed by testing for differences in type II error rates across racial/ethnic subgroups (Black, Hispanic/Latinx, White, Other) using bootstrapped 95% confidence intervals. A local surrogate model was estimated to interpret the classifier’s predictions by race and averaged globally from the datasets. Subgroup analyses and post-hoc recalibrations were conducted to attempt to mitigate biased metrics.
Results: We identified bias in the false negative rate (FNR = 0.32) of the Black subgroup compared to the FNR (0.17) of the White subgroup. Top features included “heroin” and “substance abuse” across subgroups. Post-hoc recalibrations eliminated bias in FNR with minimal changes in other subgroup error metrics. The Black FNR subgroup had higher risk scores for readmission and mortality than the White FNR subgroup, and a higher mortality risk score than the Black true positive subgroup (P < .05).
Discussion: The Black FNR subgroup had the greatest severity of disease and risk for poor outcomes. Similar features were present between subgroups for predicting opioid misuse, but inequities were present. Post-hoc mitigation techniques mitigated bias in the type II error rate without creating substantial type I error rates. From model design through deployment, bias and data disadvantages should be systematically addressed.
Conclusion: Standardized, transparent bias assessments are needed to improve trustworthiness in clinical machine learning models.
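The core bias metric here, a subgroup gap in type II error rate, can be computed directly; a bias audit then asks whether the gap's bootstrapped confidence interval excludes zero. A sketch with the bootstrap omitted and toy data in place of the study's:

```python
def fnr(y_true, y_pred):
    # False negative rate: fraction of true positives the classifier misses.
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return fn / positives

def fnr_gap(y_true, y_pred, groups, group_a, group_b):
    # Type II error-rate difference between two subgroups.
    in_a = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group_a]
    in_b = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group_b]
    return fnr(*zip(*in_a)) - fnr(*zip(*in_b))

# Toy audit data: one missed case in group "B", none in group "W".
gap = fnr_gap(
    [1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1],
    ["B", "B", "B", "W", "W", "W"],
    "B", "W",
)
```

Post-hoc recalibration, as used in the study, lowers the decision threshold for the disadvantaged subgroup until the FNR gap closes, at the risk of raising that subgroup's type I error rate.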

16.
Objective: Like most real-world data, electronic health record (EHR)–derived data from oncology patients typically exhibit wide interpatient variability in terms of available data elements. This interpatient variability leads to missing data and can present critical challenges in developing and implementing predictive models to underlie clinical decision support for patient-specific oncology care. Here, we sought to develop a novel ensemble approach to addressing missing data that we term the “meta-model” and to apply the meta-model to patient-specific cancer prognosis.
Materials and Methods: Using real-world data, we developed a suite of individual random survival forest models to predict survival in patients with advanced lung cancer, colorectal cancer, and breast cancer. Individual models varied by the predictor data used. We combined the models for each cancer type into a meta-model that predicted survival for each patient using a weighted mean of the individual models for which the patient had all requisite predictors.
Results: The meta-model significantly outperformed many of the individual models and performed similarly to the best-performing individual models. Comparisons of the meta-model to a more traditional imputation-based method of addressing missing data supported the meta-model’s utility.
Conclusions: We developed a novel machine learning–based strategy to underlie clinical decision support and predict survival in cancer patients despite missing data. The meta-model may more generally provide a tool for addressing missing data across a variety of clinical prediction problems. Moreover, the meta-model may address other challenges in clinical predictive modeling, including model extensibility and integration of predictive algorithms trained across different institutions and datasets.
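The meta-model's missing-data handling can be sketched as follows: each component model declares its required predictors, and a patient's prediction averages only the models the patient qualifies for, so missing data excludes a model rather than the patient. The component models and weights below are placeholders, not the study's fitted random survival forests:

```python
def meta_predict(patient, models, weights):
    # Weighted mean over only those component models whose required
    # predictors the patient actually has. Each model is a pair of
    # (required_fields, predict_fn).
    usable = [
        (w, predict(patient))
        for (required, predict), w in zip(models, weights)
        if all(field in patient for field in required)
    ]
    if not usable:
        raise ValueError("patient matches no component model")
    total = sum(w for w, _ in usable)
    return sum(w * pred for w, pred in usable) / total

# Placeholder component models (constant risks), not fitted survival forests.
models = [
    ({"age"}, lambda p: 0.2),
    ({"age", "lab"}, lambda p: 0.6),
]
sparse = meta_predict({"age": 70}, models, [1.0, 1.0])            # 0.2
rich = meta_predict({"age": 70, "lab": 5.0}, models, [1.0, 1.0])  # 0.4
```

Unlike imputation, no synthetic values enter the prediction: a patient with sparse data simply falls back to the models built for sparse data.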

17.
The OneFlorida Data Trust is a centralized research patient data repository created and managed by the OneFlorida Clinical Research Consortium (“OneFlorida”). It comprises structured electronic health record (EHR), administrative claims, tumor registry, death, and other data on 17.2 million individuals who received healthcare in Florida between January 2012 and the present. Ten healthcare systems in Miami, Orlando, Tampa, Jacksonville, Tallahassee, Gainesville, and rural areas of Florida contribute EHR data, covering the major metropolitan regions in Florida. Deduplication of patients is accomplished via privacy-preserving entity resolution (precision 0.97–0.99, recall 0.75), thereby linking patients’ EHR, claims, and death data. Another unique feature is the establishment of mother-baby relationships via Florida vital statistics data. Research usage has been significant, including major studies launched in the National Patient-Centered Clinical Research Network (“PCORnet”), where OneFlorida is 1 of 9 clinical research networks. The Data Trust’s robust, centralized, statewide data are a valuable and relatively unique research resource.

18.
Recently, in accordance with the notice issued by the China Association for Science and Technology, the Ministry of Finance, the Ministry of Education, the Ministry of Science and Technology, the National Press and Publication Administration, the Chinese Academy of Sciences, and the Chinese Academy of Engineering on organizing project applications for the China Science and Technology Journal Excellence Action Plan, and with the plan's evaluation rules, a total of 285 projects were selected for the China Science and Technology Journal Excellence Action Plan following open application, eligibility review, presentation and defense, expert committee re-examination, and public announcement of the results.

19.
20.
Background: Joint dislocations significantly impact public health. However, a comprehensive study on the incidence, distribution, and risk factors for joint dislocations in China is lacking. We conducted the China National Joint Dislocation Study, a part of the China National Fracture Study (which obtained the national incidence and risk factors for traumatic fractures), to investigate the incidence and risk factors for joint dislocations.
Methods: For this national retrospective epidemiological study, 512,187 participants were recruited using stratified random sampling and the probability-proportional-to-size method from January 19 to May 16, 2015. Participants who sustained joint dislocations of the trunk, arms, or legs (skull, sternum, and ribs excluded) in 2014 were personally interviewed to obtain data on age, educational background, ethnic origin, occupation, geographic region, and degree of urbanization. Joint dislocation incidence was calculated by age, sex, body site, and demographic factors. Risk factors for different groups were examined using multiple logistic regression.
Results: One hundred and nineteen participants sustained 121 joint dislocations in 2014. The population-weighted incidence rate of joint dislocations of the trunk, arms, or legs was 0.22 (95% confidence interval [CI]: 0.16, 0.27) per 1000 population in 2014 (men, 0.27 [0.20, 0.34]; women, 0.16 [0.10, 0.23]). For all ages, previous dislocation history (men: OR 42.33, 95% CI: 12.03–148.90; women: OR 54.43, 95% CI: 17.37–170.50) and alcohol consumption (men: OR 3.50, 95% CI: 1.49–8.22; women: OR 2.65, 95% CI: 1.08–6.50) were risk factors for joint dislocation. Sleeping less than 7 h/day was a risk factor for men. Compared with children, women aged ≥15 years (15–64 years: OR 0.16, 95% CI: 0.04–0.61; ≥65 years: OR 0.06, 95% CI: 0.01–0.41) were less likely to sustain joint dislocations. Women with more than three children were at higher dislocation risk than women without children (OR 6.92, 95% CI: 1.18–40.78).
Conclusions: These up-to-date data on joint dislocation incidence, distribution, and risk factors can serve as a reference for national healthcare, prevention, and management in China. Specific strategies for decreasing alcohol consumption and encouraging adequate sleep should be developed to prevent or reduce dislocation incidents.
Trial Registration: Chinese Clinical Trial Registry, ChiCTR-EPR-15005878.
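The crude incidence calculation underlying the Results is straightforward; the published 0.22 per 1000 is additionally population-weighted to national demographics, which this sketch omits:

```python
def incidence_per_1000(cases, population):
    # Crude incidence rate per 1000 population (no demographic weighting).
    return 1000.0 * cases / population

# Figures from the abstract: 119 participants with dislocations
# among 512,187 surveyed.
rate = incidence_per_1000(119, 512187)   # ~0.23 per 1000, crude
```

The gap between the crude 0.23 and the weighted 0.22 reflects reweighting of the sample's strata to match the national population.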
