Similar Articles
1.
Background: It is crucial to accurately differentiate glioma recurrence from pseudoprogression, which have entirely different prognoses and require different treatment strategies. This study aimed to assess the diagnostic accuracy of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as a tool for distinguishing glioma recurrence from pseudoprogression. Methods: According to prespecified inclusion and exclusion criteria, related studies published up to May 1, 2019, were systematically searched in several databases, including PubMed, Embase, the Cochrane Library, and Chinese biomedical databases. The Quality Assessment of Diagnostic Accuracy Studies tool was applied to evaluate the quality of the included studies. Using the “mada” package in R, the heterogeneity, overall sensitivity, specificity, and diagnostic odds ratio were calculated. Funnel plots were used to visualize and assess publication bias. The area under the summary receiver operating characteristic (SROC) curve was computed to summarize the diagnostic efficiency of DCE-MRI. Results: A total of 11 studies covering 616 patients were included in the meta-analysis. The pooled sensitivity, specificity, and diagnostic odds ratio were 0.792 (95% confidence interval [CI] 0.707–0.857), 0.779 (95% CI 0.715–0.832), and 16.219 (97.5% CI 9.123–28.833), respectively. The area under the SROC curve was 0.846. Most of the included studies showed high sensitivities (>0.6) and low false-positive rates (<0.5) on the SROC plot, suggesting that the results are reliable. The funnel plot, however, suggested the existence of publication bias. Conclusions: Although DCE-MRI is not a perfect diagnostic tool for distinguishing glioma recurrence from pseudoprogression, it can improve diagnostic accuracy. Hence, further investigations combining DCE-MRI with other imaging modalities are required to establish an efficient diagnostic method for glioma patients.
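The quantities pooled in this meta-analysis all derive from each study's 2x2 table. As a minimal sketch (not the "mada" pooling model itself, which uses a bivariate random-effects approach), the per-study inputs can be computed like this:

```python
def diagnostic_summary(tp, fp, fn, tn):
    # Sensitivity, specificity, and diagnostic odds ratio from one
    # study's 2x2 table -- the per-study inputs that a meta-analysis pools.
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    return sens, spec, dor
```

For example, a study with 40 true positives, 10 false positives, 10 false negatives, and 40 true negatives has sensitivity 0.8, specificity 0.8, and a diagnostic odds ratio of 16, close to the pooled values reported above.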

2.
3.
Objective: Drawing causal estimates from observational data is problematic because datasets often contain underlying bias (eg, discrimination in treatment assignment). To examine causal effects, it is important to evaluate what-if scenarios—the so-called “counterfactuals.” We propose a novel deep learning architecture for propensity score matching and counterfactual prediction—the deep propensity network using a sparse autoencoder (DPN-SA)—to tackle the problems of high dimensionality, nonlinear/nonparallel treatment assignment, and residual confounding when estimating treatment effects. Materials and Methods: We used 2 randomized prospective datasets: a semisynthetic one with nonlinear/nonparallel treatment selection bias and simulated counterfactual outcomes from the Infant Health and Development Program, and a real-world dataset from LaLonde’s employment training program. We compared different configurations of the DPN-SA against logistic regression and LASSO as well as deep counterfactual networks with propensity dropout (DCN-PD). Models’ performances were assessed in terms of average treatment effect, mean squared error in the precision of estimating heterogeneous effects, and average treatment effect on the treated, over multiple training/test runs. Results: The DPN-SA outperformed logistic regression and LASSO by 36%–63%, and DCN-PD by 6%–10%, across all datasets. All deep learning architectures yielded average treatment effects close to the true ones with low variance. Results were also robust to noise injection and the addition of correlated variables. Code is publicly available at https://github.com/Shantanu48114860/DPN-SAz. Discussion and Conclusion: Deep sparse autoencoders are particularly suited for treatment effect estimation studies using electronic health records because they can handle high-dimensional covariate sets, large sample sizes, and complex heterogeneity in treatment assignments.  相似文献
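The core idea behind propensity score matching, which the DPN-SA builds on, can be sketched without any deep learning: pair each treated unit with the control whose propensity score is closest, then average the outcome differences. This is a toy 1-nearest-neighbor sketch, not the paper's architecture, and the data are invented:

```python
def att_by_matching(treated, control):
    # treated, control: lists of (propensity_score, outcome) pairs.
    # For each treated unit, take the control with the closest score
    # (1-nearest-neighbor matching with replacement) and average the
    # outcome differences to estimate the average treatment effect
    # on the treated (ATT).
    diffs = []
    for ps_t, y_t in treated:
        ps_c, y_c = min(control, key=lambda c: abs(c[0] - ps_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)
```

With treated units [(0.8, 5), (0.6, 4)] and controls [(0.79, 3), (0.61, 2), (0.2, 0)], each treated unit matches the control with the nearest score, giving an ATT of 2.0.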

4.

Objective

This study evaluated a computerized method for extracting numeric clinical measurements related to diabetes care from free text in electronic patient records (EPR) of general practitioners.

Design and Measurements

Accuracy of this number-oriented approach was compared to manual chart abstraction. Audits measured performance in clinical practice for two commonly used electronic record systems.

Results

Numeric measurements embedded within free text of the EPRs constituted 80% of relevant measurements. For 11 of 13 clinical measurements, the study extraction method was 94%–100% sensitive with a positive predictive value (PPV) of 85%–100%. Post-processing increased sensitivity by several percentage points and improved PPV to 100%. Application in clinical practice involved processing times averaging 7.8 minutes per 100 patients to extract all relevant data.

Conclusion

The study method converted numeric clinical information to structured data with high accuracy, and enabled research and quality-of-care assessments for practices lacking structured data entry.
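A number-oriented extraction approach like the one evaluated here can be illustrated with a short regular expression. This is a hypothetical sketch: the measurement name, the tolerance for intervening characters, and the decimal-comma handling are all assumptions, not the study's actual patterns:

```python
import re

# Hypothetical pattern: the term "HbA1c", up to 10 intervening
# non-digit characters (colon, spaces, etc.), then a 1-2 digit value
# with an optional decimal digit, using either "." or "," (common in
# European GP notes) as the separator.
HBA1C = re.compile(r"\bHbA1c\b\D{0,10}?(\d{1,2}(?:[.,]\d)?)", re.IGNORECASE)

def extract_hba1c(note_text):
    """Return all HbA1c values found in free text, as floats."""
    return [float(m.group(1).replace(",", ".")) for m in HBA1C.finditer(note_text)]
```

For a note like "control good, HbA1c 7,2 today; prior hba1c: 8.1", the function returns [7.2, 8.1]. Real systems would also need unit checks and the post-processing step described above to reach 100% PPV.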

5.
Objective: Despite broad electronic health record (EHR) adoption in U.S. hospitals, there is concern that an “advanced use” digital divide exists between critical access hospitals (CAHs) and non-CAHs. We measured EHR adoption and advanced use over time to analyze changes in the divide. Materials and Methods: We used 2008 to 2018 American Hospital Association Information Technology survey data to update national EHR adoption statistics. We stratified EHR adoption by CAH status and measured advanced use in both the patient engagement (PE) and clinical data analytics (CDA) domains. We used a linear probability regression for each domain with year–CAH interactions to measure temporal changes in the relationship between CAH status and advanced use. Results: In 2018, 98.3% of hospitals had adopted EHRs; there were no differences by CAH status. A total of 58.7% and 55.6% of hospitals adopted advanced PE and CDA functions, respectively. In both domains, CAHs were less likely to be advanced users: 46.6% demonstrated advanced use for PE and 32.0% for CDA. Since 2015, the advanced use divide has persisted for PE and widened for CDA. Discussion: EHR adoption among hospitals is essentially ubiquitous; however, CAHs still lag behind in advanced use functions critical to improving care quality. This may be rooted in different advanced use needs among CAH patients and lack of access to technical expertise. Conclusions: The advanced use divide prevents CAH patients from benefitting from a fully digitized healthcare system. To close the widening gap in CDA, policymakers should consider partnering with vendors to develop implementation guides and standards for functions like dashboards and high-risk patient identification algorithms to better support CAH adoption.  相似文献

6.
Objective: Diagnostic errors are major contributors to preventable patient harm. We validated the use of an electronic health record (EHR)-based trigger (e-trigger) to measure missed opportunities in stroke diagnosis in emergency departments (EDs). Methods: Using two frameworks, the Safer Dx Trigger Tools Framework and the Symptom–Disease Pair Analysis of Diagnostic Error Framework, we applied a symptom–disease pair-based e-trigger to identify patients hospitalized for stroke who, in the preceding 30 days, had been discharged from the ED with a benign headache or dizziness diagnosis. The algorithm was applied to the national Veterans Affairs Corporate Data Warehouse for patients seen between 1/1/2016 and 12/31/2017. Trained reviewers evaluated medical records for the presence or absence of missed opportunities in stroke diagnosis and for stroke-related red flags, risk factors, neurological examination findings, and clinical interventions. Reviewers also estimated the quality of clinical documentation at the index ED visit. Results: We applied the e-trigger to 7,752,326 unique patients and identified 46,931 stroke-related admissions, of which 398 records were flagged as trigger-positive and reviewed. Of these, 124 had missed opportunities (positive predictive value for “missed” = 31.2%), 93 (23.4%) had no missed opportunity (non-missed), 162 (40.7%) were miscoded, and 19 (4.7%) were inconclusive. Reviewer agreement was high (87.3%, Cohen’s kappa = 0.81). Compared with the non-missed group, the missed group had more stroke risk factors (mean 3.2 vs 2.6), more red flags (mean 0.5 vs 0.2), and a higher rate of inadequate documentation (66.9% vs 28.0%). Conclusion: In a large national EHR repository, a symptom–disease pair-based e-trigger identified missed diagnoses of stroke with a modest positive predictive value, underscoring the need for chart review validation procedures to identify diagnostic errors in large data sets.  相似文献
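The e-trigger's core logic is a temporal join between two record sets. A minimal sketch, with the benign-diagnosis set and record layout assumed for illustration (the actual implementation runs against the VA data warehouse, not Python tuples):

```python
from datetime import date, timedelta

BENIGN_DX = {"headache", "dizziness"}  # assumed benign ED discharge diagnoses

def flag_trigger_positive(ed_visits, stroke_admissions, window_days=30):
    """Return patient ids with a stroke admission within `window_days`
    after an ED discharge carrying a benign headache/dizziness code.
    ed_visits: (patient_id, visit_date, diagnosis) tuples.
    stroke_admissions: (patient_id, admit_date) tuples."""
    flagged = set()
    for pid, admit_date in stroke_admissions:
        for vid, visit_date, dx in ed_visits:
            if (vid == pid and dx in BENIGN_DX
                    and timedelta(0) < admit_date - visit_date
                    <= timedelta(days=window_days)):
                flagged.add(pid)
    return flagged
```

As the abstract's 31.2% positive predictive value shows, records flagged this way still require manual chart review before being counted as true missed opportunities.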

7.
Objective: Our study estimates the prevalence and predictors of wearable device adoption and data sharing with healthcare providers in a nationally representative sample. Materials and Methods: Data were obtained from the 2019 Health Information National Trends Survey. We conducted multivariable logistic regression to examine predictors of device adoption and data sharing. Results: The sample contained 4159 individuals, 29.9% of whom had adopted a wearable device in 2019. Among adopters, 46.3% had shared data with their provider. Individuals with diabetes (odds ratio [OR], 2.39; 95% CI, 1.66–3.45; P < .0001), hypertension (OR, 2.80; 95% CI, 2.12–3.70; P < .0001), and multiple chronic conditions (OR, 1.55; 95% CI, 1.03–2.32; P < .0001) had significantly higher odds of wearable device adoption. Individuals with a usual source of care (OR, 2.44; 95% CI, 1.95–3.04; P < .0001), diabetes (OR, 1.66; 95% CI, 1.32–2.08; P < .0001), and hypertension (OR, 1.78; 95% CI, 1.44–2.20; P < .0001) had significantly higher odds of sharing data with providers. Discussion: A third of individuals adopted a wearable medical device, and nearly 50% of individuals who owned a device shared data with a provider in 2019. Patients with certain conditions, such as diabetes and hypertension, were more likely to adopt devices and share data with providers. Social determinants of health, such as income and usual source of care, affected wearable device adoption and data sharing, similarly to other consumer health technologies. Conclusions: Wearable device adoption and data sharing with providers may be more common than prior studies have reported; however, digital disparities were noted. Studies are needed that test implementation strategies to expand wearable device use and data sharing into care delivery.  相似文献
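Odds ratios like those above are often misread as risk ratios. A small helper makes the conversion explicit: given a baseline probability, applying an odds ratio shifts the odds, not the probability directly. The baseline value below is illustrative only:

```python
def shift_probability(p_base, odds_ratio):
    """Probability after applying an odds ratio to a baseline probability:
    convert to odds, multiply, convert back."""
    odds = p_base / (1 - p_base) * odds_ratio
    return odds / (1 + odds)
```

For instance, starting from the sample-wide 29.9% adoption rate, an odds ratio of 2.39 (the diabetes estimate above) corresponds to an adoption probability of roughly 0.50, not 0.299 x 2.39.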

8.
Background: Compared with human leukocyte antigen (HLA)-matched sibling donor (MSD) transplantation, it remains unclear whether haploidentical donor (HID) transplantation has a superior graft-versus-leukemia (GVL) effect for Philadelphia-negative (Ph–) high-risk B-cell acute lymphoblastic leukemia (B-ALL). This study aimed to compare the GVL effect of HID and MSD transplantation for Ph– high-risk B-ALL. Methods: The study population came from two prospective multicenter trials (NCT01883180, NCT02673008). Immunosuppressant withdrawal and prophylactic or pre-emptive donor lymphocyte infusion (DLI) were administered in patients without active graft-versus-host disease (GVHD) to prevent relapse. All patients with measurable residual disease (MRD) positivity post-transplantation (post-MRD+) or non-remission (NR) pre-transplantation received prophylactic/pre-emptive interventions. The primary endpoint was the incidence of post-MRD+. Results: A total of 335 patients with Ph– high-risk B-ALL were enrolled, including 145 and 190 in the HID and MSD groups, respectively. The 3-year cumulative incidence of post-MRD+ was 27.2% (95% confidence interval [CI]: 20.2%–34.7%) and 42.6% (35.5%–49.6%) in the HID and MSD groups, respectively (P = 0.003). A total of 156 patients received DLI, including 60 (41.4%) and 96 (50.5%) in the HID and MSD groups, respectively (P = 0.096). The 3-year cumulative incidence of relapse was 18.6% (95% CI: 12.7%–25.4%) and 25.9% (19.9%–32.3%) in the two groups, respectively (P = 0.116). The 3-year overall survival (OS) was 67.4% (95% CI: 59.1%–74.4%) and 61.6% (54.2%–68.1%; P = 0.382), leukemia-free survival (LFS) was 63.4% (95% CI: 55.0%–70.7%) and 58.2% (50.8%–64.9%; P = 0.429), and GVHD-free/relapse-free survival (GRFS) was 51.7% (95% CI: 43.3%–59.5%) and 37.8% (30.9%–44.6%; P = 0.041), respectively, in the HID and MSD groups. Conclusion: HID transplantation was associated with a lower incidence of post-MRD+ than MSD transplantation, suggesting that HID transplantation may exert a stronger GVL effect than MSD transplantation in patients with Ph– high-risk B-ALL. Trial registration: ClinicalTrials.gov: NCT01883180, NCT02673008.  相似文献
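The survival endpoints above (OS, LFS, GRFS) are typically estimated with the Kaplan–Meier product-limit method. This toy estimator ignores ties and competing risks (so 1 - S(t) only approximates the cumulative-incidence quantities reported for relapse and post-MRD+); it is a stand-in for the trial's actual estimators:

```python
def kaplan_meier(observations):
    """observations: list of (time, event) with event=1 for the event of
    interest and 0 for censoring. Returns {time: survival probability}
    via the product-limit formula S(t) = prod((n_i - d_i) / n_i)."""
    s = 1.0
    surv = {}
    at_risk = len(observations)
    for t, event in sorted(observations):
        if event:
            s *= (at_risk - 1) / at_risk
        surv[t] = s
        at_risk -= 1
    return surv
```

With three subjects observed at times 1 (event), 2 (censored), and 3 (event), survival drops to 2/3 at t=1, stays at 2/3 through the censoring at t=2, and falls to 0 at t=3.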

9.
Objective: The aim of this study was to collect and synthesize evidence regarding data quality problems encountered when working with variables related to social determinants of health (SDoH). Materials and Methods: We conducted a systematic review of the literature on social determinants research and data quality and then iteratively identified themes in the literature using a content analysis process. Results: The most commonly represented quality issue associated with SDoH data is plausibility (n = 31, 41%). Factors related to race and ethnicity have the largest body of literature (n = 40, 53%). The first theme, noted in 62% (n = 47) of articles, is that bias or validity issues often result from data quality problems. The most frequently identified validity issue is misclassification bias (n = 23, 30%). The second theme is that many of the articles suggest methods for mitigating the issues resulting from poor social determinants data quality. We grouped these into 5 suggestions: avoid complete case analysis, impute data, rely on multiple sources, use validated software tools, and select addresses thoughtfully. Discussion: The type of data quality problem varies depending on the variable, and each problem is associated with particular forms of analytical error. Problems encountered with the quality of SDoH data are rarely distributed randomly. Data from Hispanic patients are more prone to issues with plausibility and misclassification than data from other racial/ethnic groups. Conclusion: Consideration of data quality and evidence-based quality improvement methods may help prevent bias and improve the validity of research conducted with SDoH data.  相似文献
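Plausibility, the most common issue identified above, is usually operationalized as range checks against known-valid bounds. A minimal sketch; the field names and bounds here are hypothetical examples, not values from the review:

```python
# Hypothetical plausible ranges; real checks would come from a
# curated reference such as a data-quality framework or clinical limits.
PLAUSIBLE_RANGES = {"age": (0, 120), "bmi": (10, 80)}

def plausibility_flags(record):
    """Return the field names in a record whose values fall outside
    their plausible range (missing fields are not flagged)."""
    return [field for field, (lo, hi) in PLAUSIBLE_RANGES.items()
            if field in record and not lo <= record[field] <= hi]
```

A record with age 130 and BMI 25 would be flagged only on age; downstream analysis can then exclude, impute, or investigate the flagged values rather than silently absorbing them.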

10.
Objective: To synthesize data quality (DQ) dimensions and assessment methods for real-world data, especially electronic health records, through a systematic scoping review, and to assess the practice of DQ assessment in the national Patient-Centered Clinical Research Network (PCORnet). Materials and Methods: We started with 3 widely cited DQ publications—2 reviews from Chan et al (2010) and Weiskopf et al (2013a) and 1 DQ framework from Kahn et al (2016)—and expanded our review systematically to cover relevant articles published up to February 2020. We extracted DQ dimensions and assessment methods from these studies, mapped their relationships, and organized a synthesized summarization of existing DQ dimensions and assessment methods. We reviewed the data checks employed by PCORnet and mapped them to the synthesized DQ dimensions and methods. Results: We analyzed a total of 3 reviews, 20 DQ frameworks, and 226 DQ studies and extracted 14 DQ dimensions and 10 assessment methods. We found that completeness, concordance, and correctness/accuracy were commonly assessed. Element presence, validity checks, and conformance were commonly used DQ assessment methods and were the main focus of the PCORnet data checks. Discussion: Definitions of DQ dimensions and methods were not consistent in the literature, and DQ assessment practice was not evenly distributed (eg, usability and ease-of-use were rarely discussed). Challenges in DQ assessment, given the complex and heterogeneous nature of real-world data, remain. Conclusion: The practice of DQ assessment is still limited in scope. Future work is warranted to generate understandable, executable, and reusable DQ measures.  相似文献
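"Element presence" checks, one of the common assessment methods named above, reduce to verifying that required fields are populated. A minimal sketch with an assumed required-field list (real networks like PCORnet define these in their common data model, not in code like this):

```python
REQUIRED = ["patient_id", "birth_date", "sex"]  # assumed required elements

def completeness(records):
    """Fraction of records in which every required element is present
    and non-empty -- an 'element presence' style check."""
    ok = sum(all(r.get(f) not in (None, "") for f in REQUIRED)
             for r in records)
    return ok / len(records)
```

Two records, one of which has an empty birth_date, yield a completeness of 0.5; conformance and validity checks would add type and value-set constraints on top of this.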

11.
INTRODUCTION: Topical corticosteroids (TCS) are commonly used in dermatology for their anti-inflammatory action. The recent development of the TOPICOP© (Topical Corticosteroid Phobia) scale to assess steroid phobia has made the quantification and comparison of steroid phobia easier. The objective of this study was to assess the degree of steroid phobia at our institute and identify sources from which patients obtain information regarding TCS. METHODS: A cross-sectional survey of dermatology patients was performed, regardless of steroid use. The TOPICOP scale was used for the survey. Sources from which patients obtained information were identified, and patients' level of trust in these sources was assessed. RESULTS: A total of 186 surveys were analysed. The median domain TOPICOP subscores were 38.9% (interquartile range [IQR] 27.8%–50.0%, standard deviation [SD] 24.4%) for knowledge and beliefs, 44.4% (IQR 33.3%–66.7%, SD 24.4%) for fears, and 55.6% (IQR 33.3%–66.7%, SD 27.2%) for behaviour. The median global TOPICOP score was 44.4% (IQR 33.3%–55.6%, SD 17.6%). Female gender was associated with higher behaviour, fear, and global TOPICOP scores. There was no difference in the scores based on disease condition, steroid use, age, or education. Dermatologists were the most common source of information on topical steroids, and trust was highest in dermatologists. CONCLUSION: The prevalence of steroid phobia in our dermatology outpatient setting was moderately high, with gender differences. Dermatologists were the most common source of information on TCS, and it was heartening to note that trust was also highest in dermatologists. Strategies to target steroid phobia should take these factors into account.  相似文献
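The TOPICOP scores above are expressed as percentages of the maximum attainable score. Assuming Likert-type items each scored 0 to 3 (an assumption about the instrument, for illustration only), the normalization is:

```python
def topicop_percent(item_scores, max_per_item=3):
    """Express a TOPICOP (sub)score as a percentage of its maximum,
    assuming each item is scored 0..max_per_item."""
    return 100 * sum(item_scores) / (max_per_item * len(item_scores))
```

A patient answering the maximum on every item scores 100%, and one answering 0 and 3 on a two-item subscale scores 50%, which is how subscores from domains with different item counts become comparable.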

12.
Objective: After deploying a clinical prediction model, subsequently collected data can be used to fine-tune its predictions and adapt to temporal shifts. Because model updating carries risks of over-updating/fitting, we study online methods with performance guarantees. Materials and Methods: We introduce 2 procedures for continual recalibration or revision of an underlying prediction model: Bayesian logistic regression (BLR) and a Markov variant that explicitly models distribution shifts (MarBLR). We perform empirical evaluation via simulations and a real-world study predicting chronic obstructive pulmonary disease (COPD) risk. We derive “Type I and II” regret bounds, which guarantee the procedures are noninferior to a static model and competitive with an oracle logistic reviser in terms of the average loss. Results: Both procedures consistently outperformed the static model and other online logistic revision methods. In simulations, the average estimated calibration index (aECI) of the original model was 0.828 (95%CI, 0.818–0.938). Online recalibration using BLR and MarBLR improved the aECI towards the ideal value of zero, attaining 0.265 (95%CI, 0.230–0.300) and 0.241 (95%CI, 0.216–0.266), respectively. When performing more extensive logistic model revisions, BLR and MarBLR increased the average area under the receiver-operating characteristic curve (aAUC) from 0.767 (95%CI, 0.765–0.769) to 0.800 (95%CI, 0.798–0.802) and 0.799 (95%CI, 0.797–0.801), respectively, in stationary settings and protected against substantial model decay. In the COPD study, BLR and MarBLR dynamically combined the original model with a continually refitted gradient boosted tree to achieve aAUCs of 0.924 (95%CI, 0.913–0.935) and 0.925 (95%CI, 0.914–0.935), compared to the static model’s aAUC of 0.904 (95%CI, 0.892–0.916). Discussion: Despite its simplicity, BLR is highly competitive with MarBLR. MarBLR outperforms BLR when its prior better reflects the data. Conclusions: BLR and MarBLR can improve the transportability of clinical prediction models and maintain their performance over time.  相似文献
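Logistic recalibration, the simpler of the two update modes studied here, fits an intercept and slope on the logit of the original model's predictions. This sketch uses plain stochastic gradient ascent as a non-Bayesian stand-in for BLR (no prior, no regret guarantees), purely to illustrate the recalibration idea:

```python
import math

def recalibrate_online(preds, outcomes, lr=0.05):
    """Online logistic recalibration: learn intercept a and slope b on
    z = logit(p) so that sigmoid(a + b*z) better matches outcomes.
    A plain SGD stand-in for the Bayesian procedure described above."""
    a, b = 0.0, 1.0  # identity recalibration to start
    for p, y in zip(preds, outcomes):
        z = math.log(p / (1 - p))
        q = 1 / (1 + math.exp(-(a + b * z)))
        # gradient of the log-likelihood for one observation
        a += lr * (y - q)
        b += lr * (y - q) * z
    return a, b
```

Fed a stream where the base model predicts 0.9 but the outcome is always 0, the procedure pulls the intercept negative, shrinking the overconfident predictions, without refitting the underlying model.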

13.
The OneFlorida Data Trust is a centralized research patient data repository created and managed by the OneFlorida Clinical Research Consortium (“OneFlorida”). It comprises structured electronic health record (EHR), administrative claims, tumor registry, death, and other data on 17.2 million individuals who received healthcare in Florida between January 2012 and the present. Ten healthcare systems in Miami, Orlando, Tampa, Jacksonville, Tallahassee, Gainesville, and rural areas of Florida contribute EHR data, covering the major metropolitan regions in Florida. Deduplication of patients is accomplished via privacy-preserving entity resolution (precision 0.97–0.99, recall 0.75), thereby linking patients’ EHR, claims, and death data. Another unique feature is the establishment of mother-baby relationships via Florida vital statistics data. Research usage has been significant, including major studies launched in the National Patient-Centered Clinical Research Network (“PCORnet”), where OneFlorida is 1 of 9 clinical research networks. The Data Trust’s robust, centralized, statewide data are a valuable and relatively unique research resource.
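Privacy-preserving entity resolution, used above to deduplicate patients across sites, commonly starts from salted hashes of identifiers so raw PII never leaves a site. This is a deliberately simplified sketch: the salt and field choices are placeholders, and exact-match hashing alone cannot tolerate typos or name variants, which is one reason recall (0.75) trails precision in practice:

```python
import hashlib

def blind_token(name, dob, salt="SHARED_SECRET"):
    """Salted SHA-256 token over normalized identifiers, so sites can
    compare tokens without exchanging raw names or birth dates.
    The salt must be shared by all sites and kept from the matcher."""
    key = f"{salt}|{name.strip().lower()}|{dob}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```

Two sites tokenizing "Ann Smith, 1980-01-02" independently produce the same 64-character token, so the linkage service can match the records while seeing neither the name nor the date of birth.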

14.
Background: Non-communicable chronic diseases have become the leading causes of disease burden worldwide. The trends and burden of “metabolic associated fatty liver disease” (MAFLD) are unknown. We aimed to investigate the cardiovascular and renal burdens in adults with MAFLD and non-alcoholic fatty liver disease (NAFLD). Methods: Nationally representative data on 19,617 non-pregnant adults aged ≥20 years were analyzed from the cross-sectional US National Health and Nutrition Examination Survey periods 1999 to 2002, 2003 to 2006, 2007 to 2010, and 2011 to 2016. MAFLD was defined by the presence of hepatic steatosis plus general overweight/obesity, type 2 diabetes mellitus, or evidence of metabolic dysregulation. Results: The prevalence of MAFLD increased from 28.4% (95% confidence interval 26.3–30.6) in 1999 to 2002 to 35.8% (33.8–37.9) in 2011 to 2016. In 2011 to 2016, among adults with MAFLD, 49.0% (45.8–52.2) had hypertension, 57.8% (55.2–60.4) had dyslipidemia, 26.4% (23.9–28.9) had diabetes mellitus, 88.7% (87.0–80.1) had central obesity, and 18.5% (16.3–20.8) were current smokers. The 10-year cardiovascular risk ranged from 10.5% to 13.1%; 19.7% (17.6–21.9) had chronic kidney disease (CKD). Across the four periods, adults with MAFLD showed an increase in obesity; an increase in treatment to lower blood pressure (BP), lipids, and hemoglobin A1c; and an increase in goal achievement for BP and lipids, but not in goal achievement for glycemic control in diabetes mellitus. Patients showed a decreasing 10-year cardiovascular risk over time but no change in the prevalence of CKD, myocardial infarction, or stroke. Although participants with NAFLD and those with MAFLD had a comparable prevalence of cardiovascular disease and CKD, the prevalence of MAFLD was significantly higher than that of NAFLD. Conclusions: From 1999 to 2016, cardiovascular and renal risks and diseases became highly prevalent in adults with MAFLD. The absolute cardiorenal burden may be greater for MAFLD than for NAFLD. These data call for early identification and risk stratification of MAFLD and close collaboration between endocrinologists and hepatologists.  相似文献
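The MAFLD case definition stated in the Methods maps directly onto a boolean rule. A minimal sketch; the boolean inputs are assumed to be pre-derived from imaging and clinical data, which is where the real analytic work lies:

```python
def is_mafld(hepatic_steatosis, overweight_or_obese, t2dm, metabolic_dysregulation):
    """MAFLD per the definition above: hepatic steatosis plus at least
    one of general overweight/obesity, type 2 diabetes mellitus, or
    evidence of metabolic dysregulation."""
    return hepatic_steatosis and (overweight_or_obese or t2dm
                                  or metabolic_dysregulation)
```

Note how the definition differs from NAFLD: steatosis without any metabolic criterion does not qualify, and (unlike NAFLD) alcohol intake is not an exclusion.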

15.
Objective: We investigated the progression of healthcare cybersecurity over 2014–2019 as measured by external risk ratings and examined the relationship between hospital data breaches and cybersecurity ratings. Materials and Methods: Using Fortune 1000 firms as a benchmark, time trends in hospital cybersecurity ratings were compared using linear regression. The relationship between hospital data breaches and cybersecurity ratings was modeled using logistic regression. Hospital breach data were collected from the US Department of Health and Human Services, and cybersecurity ratings were provided by BitSight. The resulting study sample yielded 3528 hospital-year observations. Results: In aggregate, hospitals had significantly lower cybersecurity ratings than Fortune 1000 firms; however, hospitals have closed the gap in recent years. We also found that hospitals with low security ratings had a significantly higher risk of a data breach, with the probability of a breach in a given year ranging from 14% to 33%. Discussion: Recent cyberattacks in healthcare continue to illustrate the need to better secure information systems. While hospitals have reduced cyber risk over the past decade, they remain statistically more vulnerable than Fortune 1000 firms to botnets, spam, and malware. Conclusion: Policymakers should continue encouraging acute-care hospitals to proactively invest in security controls that reduce cyber risk. Best practices from other sectors, such as financial services, could provide useful guides and benchmarks for improvement.  相似文献

16.
Background: The 21st Century Cures Act mandates patients’ access to their electronic health record (EHR) notes. To our knowledge, no previous work has systematically invited patients to proactively report diagnostic concerns while documenting and tracking their diagnostic experiences through EHR-based clinician note review. Objective: To test whether patients can identify concerns about their diagnosis through structured evaluation of their online visit notes. Methods: In a large integrated health system, patients aged 18–85 years actively using the patient portal and seen between October 2019 and February 2020 were invited to respond to an online questionnaire if an EHR algorithm detected any recent unexpected return visit following an initial primary care consultation (an “at-risk” visit). We developed and tested an instrument (the Safer Dx Patient Instrument) to help patients identify concerns related to several dimensions of the diagnostic process, based on note review and recall of recent “at-risk” visits. Additional questions assessed patients’ trust in their providers and their general feelings about the visit. The primary outcome was a self-reported diagnostic concern. Multivariate logistic regression tested whether the primary outcome was predicted by instrument variables. Results: Of 293 566 visits, the algorithm identified 1282 eligible patients, of whom 486 responded. After applying exclusion criteria, 418 patients were included in the analysis. Fifty-one patients (12.2%) identified a diagnostic concern. Patients were more likely to report a concern if they disagreed with the statements “the care plan the provider developed for me addressed all my medical concerns” (odds ratio [OR], 2.65; 95% confidence interval [CI], 1.45–4.87) and “I trust the provider that I saw during my visit” (OR, 2.10; 95% CI, 1.19–3.71) and agreed with the statement “I did not have a good feeling about my visit” (OR, 1.48; 95% CI, 1.09–2.01). Conclusion: Patients can identify diagnostic concerns based on a proactive online structured evaluation of visit notes. This surveillance strategy could potentially improve transparency in the diagnostic process.  相似文献

17.
Objective: In Japan, policies to ensure employment for persons aged 65 and older are being implemented. To facilitate the employment of older registered nurses working in hospitals, understanding among registered nurses younger than 65 is necessary. We investigated the factors associated with acceptance of the employment of older registered nurses among registered nurses younger than 65. Materials and Methods: The subjects were female registered nurses younger than 65 working in 34 hospitals in Mie Prefecture. We distributed anonymous self-administered questionnaires and conducted factor analyses of respondents’ opinions on the employment of both “Registered nurses aged 65–69” and “Registered nurses aged 70–74”. Multiple regression analysis was conducted to examine the associations between “Acceptance of employing registered nurses aged 65–69” and “Opinions on the employment of registered nurses aged 65–69” (Statistical model 1), and likewise between “Acceptance of employing registered nurses aged 70–74” and “Opinions on the employment of registered nurses aged 70–74” (Statistical model 2). Results: The factor analyses extracted the same factors for both “Registered nurses aged 65–69” and “Registered nurses aged 70–74”: “Health and job performance”, “Utilization of the knowledge and experience of older registered nurses”, “Reducing the workload burden of registered nurses”, and “Manners of older registered nurses”. In the multiple regression analyses, “Health and job performance”, “Utilization of the knowledge and experience of older registered nurses”, and “Reducing the workload burden of registered nurses” were significantly associated with “Acceptance of employing registered nurses aged 65–69” (Statistical model 1). The same 3 factors were also significantly associated with “Acceptance of employing registered nurses aged 70–74” (Statistical model 2). Conclusion: Hospital managers must pay careful attention to these 3 factors.  相似文献

18.
Objective: Glycemic control is an important component of critical care. We present a data-driven method for predicting intensive care unit (ICU) patients’ response to glycemic control protocols while accounting for patient heterogeneity and variations in care. Materials and Methods: Using electronic medical records (EMRs) of 18 961 ICU admissions from the MIMIC-III dataset, including 318 574 blood glucose measurements, we train and validate a gradient boosted tree machine learning (ML) algorithm to forecast patient blood glucose and a 95% prediction interval at 2-hour intervals. The model takes as input irregular multivariate time series data on recent in-patient medical history and glycemic control, including previous blood glucose, nutrition, and insulin dosing. Results: Our forecasting model using routinely collected EMRs achieves performance comparable to previous models developed in planned research studies using continuous blood glucose monitoring. Model error, expressed as mean absolute percentage error, is 16.5%–16.8%, with Clarke error grid analysis demonstrating that 97% of predictions would be clinically acceptable. The 95% prediction intervals achieve near-intended coverage at 93%–94%. Discussion: ML algorithms built on observational data sources, such as EMRs, present a promising approach to personalization and automation of glycemic control in critical care. Future research may benefit from applying a combination of methodologies and data sources to develop robust methodologies that account for the variations seen in ICU patients and the difficulty in detecting the extremes of observed blood glucose values. Conclusion: We demonstrate that EMRs can be used to train ML algorithms that may be suitable for incorporation into ICU decision support systems.  相似文献
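The two headline metrics above, mean absolute percentage error and prediction-interval coverage, are straightforward to compute once forecasts and intervals are in hand. A minimal sketch with invented values (glucose values are never zero, so the division in MAPE is safe in this setting):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, the error metric reported above."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def interval_coverage(actual, lower, upper):
    """Fraction of observations falling inside their prediction interval;
    a well-calibrated 95% interval should score close to 0.95."""
    hits = sum(lo <= a <= hi for a, lo, hi in zip(actual, lower, upper))
    return hits / len(actual)
```

With actual values [100, 200] and forecasts [90, 220], MAPE is 10%; the reported 93%–94% coverage for nominal 95% intervals indicates the intervals are only slightly too narrow.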

19.
Objective: Like most real-world data, electronic health record (EHR)–derived data from oncology patients typically exhibit wide interpatient variability in terms of available data elements. This interpatient variability leads to missing data and can present critical challenges in developing and implementing predictive models to underlie clinical decision support for patient-specific oncology care. Here, we sought to develop a novel ensemble approach to addressing missing data that we term the “meta-model” and to apply the meta-model to patient-specific cancer prognosis. Materials and Methods: Using real-world data, we developed a suite of individual random survival forest models to predict survival in patients with advanced lung cancer, colorectal cancer, and breast cancer. Individual models varied by the predictor data used. We combined the models for each cancer type into a meta-model that predicted survival for each patient using a weighted mean of the individual models for which the patient had all requisite predictors. Results: The meta-model significantly outperformed many of the individual models and performed similarly to the best performing individual models. Comparisons of the meta-model to a more traditional imputation-based method of addressing missing data supported the meta-model’s utility. Conclusions: We developed a novel machine learning–based strategy to underlie clinical decision support and predict survival in cancer patients, despite missing data. The meta-model may more generally provide a tool for addressing missing data across a variety of clinical prediction problems. Moreover, the meta-model may address other challenges in clinical predictive modeling, including model extensibility and integration of predictive algorithms trained across different institutions and datasets.  相似文献
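The meta-model's combination step, a weighted mean over only those individual models whose required predictors are present for a given patient, can be sketched in a few lines. The model registry and weights here are invented stand-ins for the paper's random survival forests:

```python
def meta_predict(patient, models, weights):
    """Weighted mean over the individual models whose required
    predictors are all present for this patient.
    models: {name: (required_feature_set, predict_fn)}.
    weights: {name: weight}, eg, from validation performance."""
    usable = [(weights[name], predict(patient))
              for name, (required, predict) in models.items()
              if required <= patient.keys()]  # all requisite predictors present
    total = sum(w for w, _ in usable)
    return sum(w * p for w, p in usable) / total
```

A patient missing feature "y" is scored only by models that do not require "y"; no imputation is needed, which is exactly the contrast with the traditional approach the authors compare against.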

20.
Objective: We utilized a computerized order entry system–integrated function referred to as “void” to identify erroneous orders (ie, “void” orders). Using voided orders, we aimed to (1) identify the nature and characteristics of medication ordering errors, (2) investigate the risk factors associated with medication ordering errors, and (3) explore potential strategies to mitigate these risk factors. Materials and Methods: We collected data on voided orders using clinician interviews and surveys within 24 hours of the voided order, and using chart reviews. Interviews were informed by the human factors–based SEIPS (Systems Engineering Initiative for Patient Safety) model to characterize the work system–based risk factors contributing to ordering errors; chart reviews were used to establish whether a voided order was a true medication ordering error and to ascertain its impact on patient safety. Results: During the 16-month study period (August 25, 2017, to December 31, 2018), 1074 medication orders were voided; 842 voided orders were true medication errors (positive predictive value = 78.3 ± 1.2%). A total of 22% (n=190) of the medication ordering errors reached the patient, with at least a single administration, but did not cause patient harm. Interviews were conducted on 355 voided orders (33% response rate). Errors were not uniquely associated with a single risk factor; rather, the causal contributors of medication ordering errors were multifactorial, arising from a combination of technological, cognitive, environmental, social, and organizational factors. Conclusions: The void function offers a practical, standardized method to create a rich database of medication ordering errors. We highlight implications of utilizing the void function for future research, practice, and learning opportunities.  相似文献

