Similar Articles (20 results)
1.
Objective: During the coronavirus disease 2019 (COVID-19) pandemic, federally qualified health centers rapidly mobilized to provide SARS-CoV-2 testing, COVID-19 care, and vaccination to populations at increased risk for COVID-19 morbidity and mortality. We describe the development of a reusable public health data analytics system that repurposes clinical data to evaluate the health burden, disparities, and impact of COVID-19 on populations served by health centers.
Materials and Methods: The Multistate Data Strategy engaged project partners to assess public health readiness and COVID-19 data challenges. An infrastructure for data capture and sharing procedures between health centers and public health agencies was developed to support existing capabilities and data capacities to respond to the pandemic.
Results: Between August 2020 and March 2021, project partners evaluated their data capture and sharing capabilities and reported challenges and preliminary data. Major interoperability challenges included poorly aligned federal, state, and local reporting requirements; lack of unique patient identifiers; lack of access to pharmacy, claims, and laboratory data; missing data; and proprietary data standards and extraction methods.
Discussion: Efforts to access and align project partners' existing health systems data infrastructure in the context of the pandemic highlighted complex interoperability challenges. These challenges remain significant barriers to real-time data analytics and to efforts to improve health outcomes and mitigate inequities through data-driven responses.
Conclusion: The reusable public health data analytics system created in the Multistate Data Strategy can be adapted and scaled for other health center networks to facilitate data aggregation and dashboards for public health, organizational planning, and quality improvement, and can inform local, state, and national COVID-19 response efforts.

2.
Objective: To evaluate the predictive performance of a susceptible-exposed-infectious-recovered (SEIR) epidemic dynamics model for the course of the coronavirus disease 2019 (COVID-19) outbreak, and to provide guidance for an effective epidemic response.
Methods: Epidemic data published by the National Health Commission of the People's Republic of China were retrieved automatically with a Python web crawler. The SEIR model was improved to automatically correct the COVID-19 basic reproduction number (R0), and the model was used to forecast epidemic trends in Hubei Province, China, and in South Korea.
Results: The model predicted that the epidemic in Hubei would peak on February 21, 2020, with about 50 000 active confirmed cases (February 19); active cases were projected to fall below 30 000 by March 4 and the outbreak to end around May 10. Official data released by the National Health Commission showed an actual peak of 53 371 confirmed cases. The model placed the peak of the South Korean outbreak on March 7, with the outbreak ending by late April.
Conclusion: The improved SEIR model produced fairly accurate predictions in the early phase of the COVID-19 epidemic. Timely, effective, and forceful government intervention during the outbreak clearly altered its course. Epidemics in other East Asian countries such as South Korea were still rising in March, suggesting that China needed to guard against the risk of imported cases.
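The SEIR mechanism used in entry 2 can be sketched in a few lines of Python. This is a minimal discrete-time (daily Euler step) illustration with made-up parameters; it is not the paper's calibrated model and omits its automatic R0 correction:

```python
# Minimal discrete-time SEIR sketch (illustrative parameters only).
def seir(n, e0, i0, r0_basic, incubation=5.2, infectious=2.9, days=120):
    """Simulate SEIR with daily Euler steps; returns a list of (S, E, I, R)."""
    sigma = 1.0 / incubation          # E -> I rate
    gamma = 1.0 / infectious          # I -> R rate
    beta = r0_basic * gamma           # transmission rate implied by R0
    s, e, i, r = n - e0 - i0, e0, i0, 0.0
    history = [(s, e, i, r)]
    for _ in range(days):
        new_exposed = beta * s * i / n
        new_infectious = sigma * e
        new_recovered = gamma * i
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

# Toy run: population roughly Hubei-sized, hypothetical seed counts.
hist = seir(n=59_000_000, e0=4000, i0=1000, r0_basic=2.6)
peak_day = max(range(len(hist)), key=lambda d: hist[d][2])
```

Because each step only moves people between compartments, the total population is conserved, which is a useful sanity check on any implementation.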

3.
Objective: This work investigates how reinforcement learning and deep learning models can facilitate the near-optimal redistribution of medical equipment in order to bolster public health responses to future crises similar to the COVID-19 pandemic.
Materials and Methods: The system presented is simulated with disease impact statistics from the Institute for Health Metrics and Evaluation, the Centers for Disease Control and Prevention, and the Census Bureau. We present a robust pipeline for data preprocessing, future demand inference, and a redistribution algorithm that can be adopted across broad scales and applications.
Results: The reinforcement learning redistribution algorithm demonstrates performance optimality ranging from 93% to 95%. Performance improves consistently with the number of random states participating in exchange, demonstrating average shortage reductions of 78.74 ± 30.8% in simulations with 5 states to 93.50 ± 0.003% with 50 states.
Conclusions: These findings bolster confidence that reinforcement learning techniques can reliably guide resource allocation for future public health emergencies.

4.
Objective: To simulate and analyze total combat attrition, its spatio-temporal distribution, and the composition of the resulting casualty stream.
Methods: A combat-process simulation and attrition-prediction model was built using system dynamics. Macro-level attrition figures from that model were imported into an agent-based model, where they were disaggregated and assigned wound information according to specified proportions.
Results: The system-dynamics attrition model can incorporate the specific combat mission, the factors influencing the operation, and both sides' weapon lethality and protection levels. Causal loops and stock-flow relations were constructed for the engagement, converting the damage inflicted on each class of red and blue targets into attrition figures. The macro attrition data were then extracted and, by mapping target damage levels to categories of combat wounds, each individual casualty was assigned a specific simulated wound profile, completing the conversion from an attrition stream to a casualty stream.
Conclusion: The system-dynamics attrition prediction model and the agent-based casualty generation model can scientifically estimate the spatio-temporal distribution and structure of attrition in combat.
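The two-stage idea in entry 4 — a stock-and-flow attrition model feeding an individual wound assignment — can be caricatured in Python. Everything below (the Lanchester-style kill rates, the wound-mix proportions) is hypothetical and purely illustrative of the structure, not the paper's model:

```python
# Stage 1: a toy stock-and-flow attrition loop (Lanchester-style: each
# side's losses are proportional to the opposing force's strength).
def simulate_attrition(red, blue, red_kill_rate, blue_kill_rate, steps):
    red_casualties = []
    for _ in range(steps):
        red_loss = min(red, blue_kill_rate * blue)
        blue_loss = min(blue, red_kill_rate * red)
        red -= red_loss
        blue -= blue_loss
        red_casualties.append(red_loss)
    return red, blue, red_casualties

# Stage 2: split the casualty stream into wound categories by assumed
# (hypothetical) proportions, mimicking the attrition-to-casualty mapping.
WOUND_MIX = {"gunshot": 0.35, "blast": 0.40, "burn": 0.10, "other": 0.15}

def casualties_to_wounds(casualty_stream):
    total = sum(casualty_stream)
    return {k: round(total * p) for k, p in WOUND_MIX.items()}

red, blue, stream = simulate_attrition(10_000, 8_000, 0.002, 0.003, steps=24)
wounds = casualties_to_wounds(stream)
```

The real model drives the loss rates from target-damage calculations rather than fixed coefficients, but the stock-flow-then-disaggregate shape is the same.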

5.
Objective: To compare the accuracy of computer versus physician predictions of hospitalization and to explore the potential synergies of hybrid physician–computer models.
Materials and Methods: A single-center prospective observational study in a tertiary pediatric hospital in Boston, Massachusetts, United States. Nine emergency department (ED) attending physicians participated in the study. Physicians predicted the likelihood of admission for patients in the ED whose hospitalization disposition had not yet been decided. In parallel, a random-forest computer model was developed to predict hospitalizations from the ED, based on data available within the first hour of the ED encounter. The model was tested on the same cohort of patients evaluated by the participating physicians.
Results: 198 pediatric patients were considered for inclusion. Six patients were excluded due to incomplete or erroneous physician forms. Of the 192 included patients, 54 (28%) were admitted and 138 (72%) were discharged. The positive predictive value for the prediction of admission was 66% for the clinicians, 73% for the computer model, and 86% for a hybrid model combining the two. To predict admission, physicians relied more heavily on the clinical appearance of the patient, while the computer model relied more heavily on technical data-driven features, such as the rate of prior admissions or distance traveled to hospital.
Discussion: Computer-generated predictions of patient disposition were more accurate than clinician-generated predictions. A hybrid prediction model improved accuracy over both individual predictions, highlighting the complementary and synergistic effects of both approaches.
Conclusion: The integration of computer and clinician predictions can yield improved predictive performance.
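One simple way to realize the hybrid physician-computer model of entry 5 is to average the two predicted admission probabilities and score the result with the positive predictive value. The weighting, threshold, and toy cohort below are illustrative assumptions, not the study's actual method:

```python
# Hedged sketch of a hybrid clinician-computer prediction: a weighted
# average of the two admission probabilities, thresholded to a decision.
def hybrid_prediction(p_clinician, p_model, w_model=0.5, threshold=0.5):
    p = w_model * p_model + (1 - w_model) * p_clinician
    return p, p >= threshold

def positive_predictive_value(predictions, outcomes):
    """PPV = true positives / all positive predictions."""
    tp = sum(1 for p, o in zip(predictions, outcomes) if p and o)
    pp = sum(1 for p in predictions if p)
    return tp / pp if pp else float("nan")

# Toy cohort: (clinician prob, model prob, actually admitted).
cohort = [(0.9, 0.8, True), (0.2, 0.7, True), (0.6, 0.3, False),
          (0.1, 0.2, False), (0.8, 0.9, True)]
preds = [hybrid_prediction(pc, pm)[1] for pc, pm, _ in cohort]
ppv = positive_predictive_value(preds, [o for _, _, o in cohort])
```

A real hybrid would tune the weight on held-out data; equal weighting is just the simplest starting point.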

6.
Objective: Accurate risk prediction is important for evaluating early medical treatment effects and improving health care quality. Existing methods are usually designed for dynamic medical data, which require long-term observations. Meanwhile, important personalized static information is ignored due to the underlying uncertainty and unquantifiable ambiguity. An early risk prediction method that can adaptively integrate both static and dynamic health data is urgently needed.
Materials and Methods: Data were from 6367 patients with peptic ulcer bleeding treated between 2007 and 2016. This article develops a novel End-to-end Importance-Aware Personalized Deep Learning Approach (eiPDLA) to achieve accurate early clinical risk prediction. Specifically, eiPDLA introduces a long short-term memory network with temporal attention to learn sequential dependencies from time-stamped records, and simultaneously incorporates a residual network with correlation attention to capture their relationship with static medical data. Furthermore, a new multi-residual multi-scale network with an importance-aware mechanism is designed to adaptively fuse the learned multisource features, automatically assigning larger weights to important features while weakening the influence of less important ones.
Results: Extensive experiments on a real-world dataset illustrate that our method significantly outperforms state-of-the-art methods for early risk prediction under various settings (eg, achieving an AUC of 0.944 when predicting risk 1 year in advance). Case studies indicate that the achieved prediction results are highly interpretable.
Conclusion: These results reflect the importance of combining static and dynamic health data, mining their relationship, and incorporating an importance-aware mechanism to automatically identify important features. Accurate early risk prediction buys doctors precious time to design effective treatments and improve clinical outcomes.
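The importance-aware fusion in entry 6 boils down to weighting feature sources before combining them. A bare-bones sketch with fixed (rather than learned) importance scores, using softmax to turn scores into weights:

```python
import math

# Minimal sketch of importance-aware fusion: score each feature source,
# softmax the scores into weights, form a weighted combination. Scores
# are fixed here for illustration; in the paper they are learned.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(feature_vectors, importance_scores):
    """Weighted sum of equal-length feature vectors."""
    weights = softmax(importance_scores)
    dim = len(feature_vectors[0])
    fused = [sum(w * v[i] for w, v in zip(weights, feature_vectors))
             for i in range(dim)]
    return fused, weights

static_feats = [0.2, 0.9, 0.1]    # e.g. encoded demographics (toy values)
dynamic_feats = [0.7, 0.1, 0.5]   # e.g. sequence-model summary (toy values)
fused, weights = fuse([static_feats, dynamic_feats],
                      importance_scores=[1.0, 2.0])
```

With a higher score on the dynamic source, its features dominate the fused vector, which is exactly the "larger weights to important features" behaviour described above.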

7.
Objective: Like most real-world data, electronic health record (EHR)–derived data from oncology patients typically exhibits wide interpatient variability in terms of available data elements. This interpatient variability leads to missing data and can present critical challenges in developing and implementing predictive models to underlie clinical decision support for patient-specific oncology care. Here, we sought to develop a novel ensemble approach to addressing missing data that we term the “meta-model” and apply the meta-model to patient-specific cancer prognosis.
Materials and Methods: Using real-world data, we developed a suite of individual random survival forest models to predict survival in patients with advanced lung cancer, colorectal cancer, and breast cancer. Individual models varied by the predictor data used. We combined models for each cancer type into a meta-model that predicted survival for each patient using a weighted mean of the individual models for which the patient had all requisite predictors.
Results: The meta-model significantly outperformed many of the individual models and performed similarly to the best performing individual models. Comparisons of the meta-model to a more traditional imputation-based method of addressing missing data supported the meta-model’s utility.
Conclusions: We developed a novel machine learning–based strategy to underlie clinical decision support and predict survival in cancer patients, despite missing data. The meta-model may more generally provide a tool for addressing missing data across a variety of clinical prediction problems. Moreover, the meta-model may address other challenges in clinical predictive modeling including model extensibility and integration of predictive algorithms trained across different institutions and datasets.
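The meta-model of entry 7 is easy to express directly: keep, for each individual model, its required predictor set and a weight, and average only the models a given patient qualifies for. The models and weights below are hypothetical stand-ins, not the paper's survival forests:

```python
# Sketch of the meta-model idea: each individual model declares a required
# predictor set and a weight; a patient's prediction is the weighted mean
# of the models whose requirements the patient satisfies.
def meta_predict(patient, models):
    usable = [m for m in models if m["requires"] <= patient.keys()]
    if not usable:
        return None                       # no model applies to this patient
    total_w = sum(m["weight"] for m in usable)
    return sum(m["weight"] * m["predict"](patient) for m in usable) / total_w

# Hypothetical stand-in models (toy linear risk scores, not survival forests).
models = [
    {"requires": {"age"}, "weight": 1.0,
     "predict": lambda p: 0.02 * p["age"] / 10},
    {"requires": {"age", "stage"}, "weight": 2.0,
     "predict": lambda p: 0.1 * p["stage"]},
]

full = meta_predict({"age": 70, "stage": 3}, models)   # both models usable
partial = meta_predict({"age": 70}, models)            # "stage" missing
```

The appeal over imputation is visible even in the toy: the patient missing `stage` still gets a prediction, from the subset of models that never needed it.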

8.
Objectives: Hand, foot and mouth disease (HFMD) is a widespread infectious disease that causes a significant disease burden on society. To achieve early intervention and to prevent outbreaks of disease, we propose a novel warning model that can accurately predict the incidence of HFMD.
Methods: We propose a spatial-temporal graph convolutional network (STGCN) that combines spatial factors for surrounding cities with historical incidence over a certain time period to predict the future occurrence of HFMD in Guangdong and Shandong between 2011 and 2019. The 2011–2018 data served as the training and verification set, while data from 2019 served as the prediction set. Six important parameters were selected and verified in this model and the deviation was displayed by the root mean square error and the mean absolute error.
Results: As the first application using a STGCN for disease forecasting, we succeeded in accurately predicting the incidence of HFMD over a 12-week period at the prefecture level, especially for cities of significant concern.
Conclusions: This model provides a novel approach for infectious disease prediction and may help health administrative departments implement effective control measures up to 3 months in advance, which may significantly reduce the morbidity associated with HFMD in the future.
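The spatial half of the STGCN in entry 8 amounts to propagating incidence along a city adjacency graph. A plain-Python caricature of one propagation step — row-normalized neighbour averaging, with none of the learned convolution weights or temporal layers of the real network:

```python
# One spatial propagation step: average each city's incidence with its
# neighbours' (self-loop included), so nearby cities inform each other.
def neighbour_smooth(adjacency, incidence):
    """adjacency[i] lists the neighbour indices of city i."""
    out = []
    for i in range(len(incidence)):
        neigh = adjacency[i] + [i]        # include the city itself
        out.append(sum(incidence[j] for j in neigh) / len(neigh))
    return out

# Toy 4-city line graph: 0 - 1 - 2 - 3, with two hot spots at the ends.
adj = [[1], [0, 2], [1, 3], [2]]
incidence = [10.0, 0.0, 0.0, 40.0]
smoothed = neighbour_smooth(adj, incidence)
```

Stacking such steps (with learnable weights) is what lets the full model exploit incidence in surrounding cities, as the abstract describes.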

9.
Objective: This case study illustrates the use of natural language processing for identifying administrative task categories, prevalence, and shifts necessitated by a major event (the COVID-19 [coronavirus disease 2019] pandemic) from user-generated data stored as free text in a task management system for a multisite mental health practice with 40 clinicians and 13 administrative staff members.
Materials and Methods: Structural topic modeling was applied on 7079 task sequences from 13 administrative users of a Health Insurance Portability and Accountability Act–compliant task management platform. Context was obtained through interviews with an expert panel.
Results: Ten task definitions spanning 3 major categories were identified, and their prevalence estimated. Significant shifts in task prevalence due to the pandemic were detected for tasks like billing inquiries to insurers, appointment cancellations, patient balances, and new patient follow-up.
Conclusions: Structural topic modeling effectively detects task categories, prevalence, and shifts, providing opportunities for healthcare providers to reconsider staff roles and to optimize workflows and resource allocation.

10.
Objective: Risk prediction models are widely used to inform evidence-based clinical decision making. However, few models developed from single cohorts can perform consistently well at population level where diverse prognoses exist (such as the SARS-CoV-2 [severe acute respiratory syndrome coronavirus 2] pandemic). This study aims at tackling this challenge by synergizing prediction models from the literature using ensemble learning.
Materials and Methods: In this study, we selected and reimplemented 7 prediction models for COVID-19 (coronavirus disease 2019) that were derived from diverse cohorts and used different implementation techniques. A novel ensemble learning framework was proposed to synergize them for realizing personalized predictions for individual patients. Four diverse international cohorts (2 from the United Kingdom and 2 from China; N = 5394) were used to validate all 8 models on discrimination, calibration, and clinical usefulness.
Results: Results showed that individual prediction models could perform well on some cohorts while poorly on others. Conversely, the ensemble model achieved the best performances consistently on all metrics quantifying discrimination, calibration, and clinical usefulness. Performance disparities were observed in cohorts from the 2 countries: all models achieved better performances on the China cohorts.
Discussion: When individual models were learned from complementary cohorts, the synergized model had the potential to achieve better performances than any individual model. Results indicate that blood parameters and physiological measurements might have better predictive powers when collected early, which remains to be confirmed by further studies.
Conclusions: Combining a diverse set of individual prediction models, the ensemble method can synergize a robust and well-performing model by choosing the most competent ones for individual patients.

11.
Objective: To develop a computer model to predict patients with nonalcoholic steatohepatitis (NASH) using machine learning (ML).
Materials and Methods: This retrospective study utilized two databases: a) the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) nonalcoholic fatty liver disease (NAFLD) adult database (2004-2009), and b) the Optum® de-identified Electronic Health Record dataset (2007-2018), a real-world dataset representative of common electronic health records in the United States. We developed an ML model to predict NASH, using confirmed NASH and non-NASH based on liver histology results in the NIDDK dataset to train the model.
Results: Models were trained and tested on NIDDK NAFLD data (704 patients) and the best-performing models evaluated on Optum data (~3,000,000 patients). An eXtreme Gradient Boosting model (XGBoost) consisting of 14 features exhibited high performance as measured by area under the curve (0.82), sensitivity (81%), and precision (81%) in predicting NASH. Slightly reduced performance was observed with an abbreviated feature set of 5 variables (0.79, 80%, 80%, respectively). The full model demonstrated good performance (AUC 0.76) to predict NASH in Optum data.
Discussion: The proposed model, named NASHmap, is the first ML model developed with confirmed NASH and non-NASH cases as determined through liver biopsy and validated on a large, real-world patient dataset. Both the 14- and 5-feature versions exhibit high performance.
Conclusion: The NASHmap model is a convenient and high performing tool that could be used to identify patients likely to have NASH in clinical settings, allowing better patient management and optimal allocation of clinical resources.

12.
13.
Background: Mathematical modelling of the coronavirus disease 2019 (COVID-19) pandemic has been attempted by a wide range of researchers since the very first cases in India. Initial analysis of available models revealed large variations in scope, assumptions, predictions, course, effect of interventions, effect on health-care services, and so on. A rapid review was therefore conducted for narrative synthesis and to assess the correlation between predicted and actual case counts in India.
Methods: A comprehensive, two-step search strategy was adopted, in which databases such as Medline, Google Scholar, medRxiv, and bioRxiv were searched. This was followed by hand searching for articles and contacting known modellers for unpublished models. Data from the included studies were extracted by two investigators independently and checked by a third researcher.
Results: Based on the literature search, 30 articles were included in this review. As narrative synthesis, data from the studies were summarized in terms of assumptions, model used, predictions, main recommendations, and findings. The Pearson's correlation coefficient (r) between predicted and actual values (n = 20) was 0.7 (p = 0.002) with R2 = 0.49. For Susceptible, Infected, Recovered (SIR) models and their variants (n = 16), r was 0.65 (p = 0.02). The correlation for long-term predictions could not be assessed due to paucity of information.
Conclusion: The review has shown the importance of assumptions, a strong correlation for short-term projections, but uncertainty for long-term predictions. Short-term predictions may thus be revised as more and more data become available. The assumptions, too, need to expand and firm up as the pandemic evolves.
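The headline statistic of entry 13 — Pearson's r between predicted and actual values, with R2 being its square — is straightforward to reproduce. The case counts below are toy data, not the review's:

```python
import math

# Pearson correlation coefficient from first principles.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy predicted-vs-actual case counts (illustrative only).
predicted = [100, 200, 400, 800, 1500]
actual = [120, 180, 450, 700, 1600]
r = pearson_r(predicted, actual)
r_squared = r ** 2
```

R2 = 0.49 in the review means only about half the variance in actual counts was explained by the short-term predictions, which is why the authors stress revising forecasts as data accumulate.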

14.
Background: In India, cardiovascular diseases (CVDs) cause substantial mortality because they are often not diagnosed in their early stages. Machine learning (ML) algorithms can be used to build an efficient and economical prediction system for early diagnosis of CVDs in India.
Methods: A total of 1670 anonymized medical records were collected from a tertiary hospital in South India. Seventy percent of the collected data were used to train the prediction system. Five state-of-the-art ML algorithms (k-Nearest Neighbours, Naïve Bayes, Logistic Regression, AdaBoost, and Random Forest [RF]) were applied using the Python programming language to develop the prediction system. Performance was evaluated on the remaining 30% of the data. The prediction system was later deployed in the cloud for easy accessibility via the Internet.
Results: ML effectively predicted the risk of heart disease. The best-performing (RF) prediction system correctly classified 470 out of 501 medical records, attaining a diagnostic accuracy of 93.8%. Sensitivity and specificity were 92.8% and 94.6%, respectively. The prediction system attained a positive predictive value of 94% and a negative predictive value of 93.6%. The prediction model developed in this study can be accessed at http://das.southeastasia.cloudapp.azure.com/predict/
Conclusions: The ML-based prediction system developed in this study performs well in the early diagnosis of CVDs and can be accessed via the Internet. This study offers promising results, suggesting potential use of an ML-based heart disease prediction system as a screening tool in primary healthcare centres in India, where such diseases would otherwise go undetected.
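All of the metrics reported in entry 14 fall out of a 2×2 confusion matrix. The counts below are reconstructed to be approximately consistent with the abstract's figures (501 test records, 470 classified correctly); they are not taken from the paper itself:

```python
# Diagnostic metrics from a 2x2 confusion matrix.
def diagnostic_metrics(tp, fn, fp, tn):
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts chosen to match the reported 501 records / 470 correct.
m = diagnostic_metrics(tp=218, fn=17, fp=14, tn=252)
```

With these counts, accuracy is 470/501 ≈ 93.8% and sensitivity 218/235 ≈ 92.8%, matching the abstract; the remaining metrics land within rounding of the reported values.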

15.
Objective: Cause of death is used as an important outcome of clinical research; however, access to cause-of-death data is limited. This study aimed to develop and validate a machine-learning model that predicts the cause of death from the patient’s last medical checkup.
Materials and Methods: To classify the mortality status and each individual cause of death, we used a stacking ensemble method. The prediction outcomes were all-cause mortality, 8 leading causes of death in South Korea, and other causes. The clinical data of study populations were extracted from the national claims (n = 174 747) and electronic health records (n = 729 065) and were used for model development and external validation. Moreover, we imputed the cause of death from the data of 3 US claims databases (n = 994 518, 995 372, and 407 604, respectively). All databases were formatted to the Observational Medical Outcomes Partnership Common Data Model.
Results: The generalized area under the receiver operating characteristic curve (AUROC) of the model predicting the cause of death within 60 days was 0.9511. Moreover, the AUROC of the external validation was 0.8887. Among the causes of death imputed in the Medicare Supplemental database, 11.32% of deaths were due to malignant neoplastic disease.
Discussion: This study showed the potential of machine-learning models as a new alternative to address the lack of access to cause-of-death data. All processes were disclosed to maintain transparency, and the model was easily applicable to other institutions.
Conclusion: A machine-learning model with competent performance was developed to predict cause of death.

16.
Objective: To build a prostate cancer (PCa) risk prediction model based on common clinical indicators, to provide a theoretical basis for the diagnosis and treatment of PCa, and to evaluate the value of artificial intelligence (AI) technology under healthcare data platforms.
Methods: After preprocessing of the data from the Population Health Data Archive, smoothly clipped absolute deviation (SCAD) was used to select features. Random forest (RF), support vector machine (SVM), back propagation neural network (BP), and convolutional neural network (CNN) were used to predict the risk of PCa; BP and CNN were applied to data augmented with SMOTE. The performance of the models was compared using the area under the curve (AUC) of the receiver operating characteristic curve. After the optimal model was selected, Shiny was used to develop an online calculator for PCa risk prediction based on the predictive indicators.
Results: Inorganic phosphorus, triglycerides, and calcium were closely related to PCa, in addition to the volume of fragmented tissue and free prostate-specific antigen (PSA). Among the four models, RF performed best in predicting PCa (accuracy: 96.80%; AUC: 0.975, 95% CI: 0.964-0.986), followed by BP (accuracy: 85.36%; AUC: 0.892, 95% CI: 0.849-0.934) and SVM (accuracy: 82.67%; AUC: 0.824, 95% CI: 0.805-0.844). CNN performed worst (accuracy: 72.37%; AUC: 0.724, 95% CI: 0.670-0.779). An online platform for PCa risk prediction was developed based on the RF model and the predictive indicators.
Conclusions: This study demonstrates the value of traditional machine learning and deep learning models for disease risk prediction on a healthcare data platform, and proposes a new approach to PCa risk prediction for patients suspected of PCa who have undergone core needle biopsy. The online calculator may also enhance the practicability of AI prediction technology and facilitate medical diagnosis.
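Entry 16 balances its classes with SMOTE before training the BP and CNN models. The core of SMOTE — synthesizing minority-class samples by interpolating between minority neighbours — can be sketched as follows; note that for brevity this uses random partner selection rather than the k-nearest-neighbour selection of the real algorithm:

```python
import random

# SMOTE-style oversampling sketch: each synthetic sample lies on the line
# segment between two existing minority samples.
def smote_like(minority, n_new, rng):
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)    # random partner, not k-NN
        lam = rng.random()                # interpolation factor in [0, 1)
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

rng = random.Random(0)                    # seeded for reproducibility
minority = [[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]]
new_samples = smote_like(minority, n_new=4, rng=rng)
```

Because each synthetic point is a convex combination of two real minority points, it always stays inside the minority class's bounding region rather than duplicating records outright.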

17.
Objective: To rapidly develop, validate, and implement a novel real-time mortality score for the COVID-19 pandemic that improves upon sequential organ failure assessment (SOFA) for decision support for a Crisis Standards of Care team.
Materials and Methods: We developed, verified, and deployed a stacked generalization model to predict mortality using data available in the electronic health record (EHR) by combining 5 previously validated scores and additional novel variables reported to be associated with COVID-19-specific mortality. We verified the model with prospectively collected data from 12 hospitals in Colorado between March 2020 and July 2020. We compared the area under the receiver operator curve (AUROC) for the new model to the SOFA score and the Charlson Comorbidity Index.
Results: The prospective cohort included 27 296 encounters, of which 1358 (5.0%) were positive for SARS-CoV-2, 4494 (16.5%) required intensive care unit care, 1480 (5.4%) required mechanical ventilation, and 717 (2.6%) ended in death. The Charlson Comorbidity Index and SOFA scores predicted mortality with an AUROC of 0.72 and 0.90, respectively. Our novel score predicted mortality with AUROC 0.94. In the subset of patients with COVID-19, the stacked model predicted mortality with AUROC 0.90, whereas SOFA had AUROC of 0.85.
Discussion: Stacked regression allows a flexible, updatable, live-implementable, ethically defensible predictive analytics tool for decision support that begins with validated models and includes only novel information that improves prediction.
Conclusion: We developed and validated an accurate in-hospital mortality prediction score in a live EHR for automatic and continuous calculation using a novel model that improved upon SOFA.

18.
Objective: During the first 9 months of the coronavirus disease 2019 (COVID-19) pandemic, many emergency departments (EDs) experimented with telehealth applications to reduce virus exposure, decrease visit volume, and conserve personal protective equipment. We interviewed ED leaders who implemented telehealth programs to inform responses to the ongoing COVID-19 pandemic and future emergencies.
Materials and Methods: From September to November 2020, we conducted semi-structured interviews with ED leaders across the United States. We identified EDs with pandemic-related telehealth programs through literature review and snowball sampling. Maximum variation sampling was used to capture a range of experiences. We used standard qualitative analysis techniques, consisting of both inductive and deductive approaches to identify and characterize themes.
Results: We completed 15 interviews with ED leaders in 10 states. From March to November 2020, participants experimented with more than a dozen different types of telehealth applications including tele-isolation, tele-triage, tele-consultation, virtual postdischarge assessment, acute care in the home, and tele-palliative care. Prior experience with telehealth was key for implementation of new applications. Most new telehealth applications turned out to be temporary because they were no longer needed to support the response. The leading barriers to telehealth implementation during the pandemic included technology challenges and the need for “hands-on” implementation support in the ED.
Conclusions: In response to the COVID-19 pandemic, EDs rapidly implemented many telehealth innovations. Their experiences can inform future responses.

19.
Objective: To conduct an informatics analysis of the National Evaluation System for Health Technology Coordinating Center test case on cardiac ablation catheters, and to demonstrate the role of informatics approaches in assessing the feasibility of capturing, from electronic health records and other health information technology systems in a multicenter evaluation, real-world data identified by unique device identifiers (UDIs) that are fit for purpose for label extensions for 2 cardiac ablation catheters.
Materials and Methods: We focused on the data capture and transformation and the data quality maturity model specified in the National Evaluation System for Health Technology Coordinating Center data quality framework. The informatics analysis included 4 elements: the use of UDIs for identifying device exposure data, the use of standardized codes for defining computable phenotypes, the use of natural language processing for capturing unstructured data elements from clinical data systems, and the use of common data models for standardizing data collection and analyses.
Results: We found that, with UDI implementation at 3 health systems, the target device exposure data could be effectively identified, particularly for brand-specific devices. Computable phenotypes for study outcomes could be defined using codes; however, ablation registries, natural language processing tools, and chart reviews were required to validate the data quality of the phenotypes. Common data model implementation status varied across sites. The maturity level of the key informatics technologies was highly aligned with the data quality maturity model.
Conclusions: We demonstrated that informatics approaches can feasibly be used to capture safety and effectiveness outcomes in real-world data for use in medical device studies supporting label extensions.

20.
Objective: We propose an interpretable disease prediction model that efficiently fuses multiple types of patient records using a self-attentive fusion encoder. We assessed the model's performance in predicting cardiovascular disease events, given the records of a general patient population.
Materials and Methods: We extracted 7 981 cases and 67 623 controls from the sample cohort database and nationwide healthcare claims data of South Korea. Among the information provided, our model used the sequential records of medical codes and patient characteristics, such as demographic profiles and the most recent health examination results. These two types of patient records were combined in our self-attentive fusion module, whereas previously dominant methods aggregated them by simple concatenation. Prediction performance was compared to state-of-the-art recurrent neural network-based approaches and other widely used machine learning approaches.
Results: Our model outperformed all the other compared methods in predicting cardiovascular disease events. It achieved an area under the curve of 0.839, while the other compared methods achieved between 0.741 and 0.830. Moreover, our model consistently outperformed the other methods in a more challenging setting in which we tested the model's ability to draw inferences from more nonobvious, diverse factors.
Discussion: We interpreted the attention weights provided by our model as the relative importance of each time step in the sequence. We showed that our model reveals the informative parts of a patient's history by measuring these attention weights.
Conclusion: We suggest an interpretable disease prediction model that efficiently fuses heterogeneous patient records and demonstrates superior disease prediction performance.
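The self-attentive fusion in entry 20 rests on ordinary scaled dot-product attention, whose weights are exactly the per-time-step importances the authors interpret. A single-query, plain-Python version with toy visit embeddings and no learned projections:

```python
import math

# Scaled dot-product attention for a single query vector.
def attention_weights(query, keys):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, keys, values):
    w = attention_weights(query, keys)
    dim = len(values[0])
    context = [sum(wi * v[i] for wi, v in zip(w, values)) for i in range(dim)]
    return context, w

# Three toy visit embeddings; the query resembles the first visit most,
# so that time step should receive the largest attention weight.
visits = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
context, weights = attend([1.0, 0.0], visits, visits)
```

Reading off `weights` as "relative importance of each time step" is precisely the interpretability argument made in the abstract's discussion.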
