Aim: Skin tears are traumatic wounds characterised by separation of the skin layers. Severity evaluation is important in the management of skin tears. To support the assessment and management of skin tears, this study aimed to develop an algorithm that estimates the category of the Skin Tear Audit Research (STAR) classification system from digital images via machine learning. This was achieved by introducing shape features that represent the complicated shapes of skin tears.
Methods: A skin tear image was separated into small segments, and features of each segment were estimated. The segments were then classified into different classes by machine learning algorithms, namely support vector machine and random forest. Their performance in classifying wound segments and STAR categories was evaluated on 31 images using leave-one-out cross-validation.
Results: The support vector machine achieved accuracies of 74% and 69% in classifying wound segments and STAR categories, respectively. The corresponding accuracies using random forest were 71% and 63%.
Conclusion: Machine learning algorithms proved capable of classifying categories of skin tears. This could aid nurses in managing skin tears, even if they are not specialised in wound care.
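The evaluation setup described above (support vector machine and random forest compared under leave-one-out cross-validation) can be sketched as follows. This is a minimal illustration with synthetic stand-in features and labels, not the study's segment features or its reported accuracy figures:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(31, 5))      # 31 "images", 5 hypothetical segment features each
y = rng.integers(0, 3, size=31)   # 3 illustrative STAR-like category labels

# Leave-one-out CV: each image is held out once, matching the small-sample design
loo = LeaveOneOut()
results = {}
for name, clf in [("SVM", SVC()), ("RF", RandomForestClassifier(random_state=0))]:
    results[name] = cross_val_score(clf, X, y, cv=loo).mean()
    print(f"{name} LOOCV accuracy: {results[name]:.2f}")
```

With only 31 samples, leave-one-out is a natural choice because it maximises the training data available in each fold.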
Objective: To develop a prediction model for survival of patients with coronary artery disease (CAD) using health conditions beyond cardiovascular risk factors, including maximal exercise capacity, through the application of machine learning (ML) techniques.
Methods: Data from a retrospective cohort linking clinical, administrative, and vital status databases from 1995 to 2016 were analysed. Inclusion criteria were age 18 years or older, diagnosis of CAD, referral to a cardiac rehabilitation program, and available baseline exercise test results. The primary outcome was death from any cause. Feature selection was performed using supervised and unsupervised ML techniques. The final prognostic model used the survival tree (ST) algorithm.
Results: From the cohort of 13,362 patients (60±11 years; 2400 [18%] women), 1577 died during a median follow-up of 8 years (interquartile range, 4 to 13 years), with an estimated survival of 67% up to 21 years. Feature selection revealed age and peak metabolic equivalents (METs) as the features with the greatest importance for mortality prediction. Using these 2 features, the ST generated a long-term prediction with a C-index of 0.729 by splitting patients into 8 clusters with different survival probabilities (P<.001). The ST root node was split at a peak METs value of 6.15 (≤6.15 vs >6.15), and each patient subgroup was further split by age or other peak METs cut points.
Conclusion: Applying ML techniques, age and maximal exercise capacity accurately predicted mortality in patients with CAD and outperformed variables commonly used for decision-making in clinical practice. A novel and simple prognostic model was established, and maximal exercise capacity was further suggested to be one of the most powerful predictors of mortality in CAD.
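The reported root split implies a simple, clinically readable decision rule. The sketch below is hypothetical: only the peak-METs threshold of 6.15 is stated in the abstract, so the age cut point of 65 and the subgroup labels are illustrative assumptions, not the tree's actual 8 clusters:

```python
def risk_cluster(peak_mets: float, age: float) -> str:
    """Toy version of a survival-tree partition: root split on peak METs
    (<= 6.15 vs > 6.15, as reported), then an assumed age split at 65."""
    if peak_mets <= 6.15:
        return "low-fitness/older" if age > 65 else "low-fitness/younger"
    return "high-fitness/older" if age > 65 else "high-fitness/younger"

print(risk_cluster(5.0, 70))   # sedentary older patient
print(risk_cluster(8.0, 50))   # fit younger patient
```

The appeal of a survival tree in this setting is exactly this transparency: each patient falls into one leaf with an attached survival curve, rather than receiving an opaque risk score.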
Background and aims: While low-density lipoprotein cholesterol (LDL-C) is a good predictor of atherosclerotic cardiovascular disease, apolipoprotein B (ApoB) is superior when the two markers are discordant. We aimed to determine the impact of adiposity, diet and inflammation on ApoB and LDL-C discordance.
Methods and results: Machine learning (ML) and structural equation models (SEMs) were applied to the National Health and Nutrition Examination Survey to investigate cardiometabolic and dietary factors when LDL-C and ApoB are concordant/discordant. Mendelian randomisation (MR) determined whether adiposity and inflammation exposures were causal of elevated/decreased LDL-C and/or ApoB. ML showed body mass index (BMI), dietary saturated fatty acids (SFA), dietary fibre, serum C-reactive protein (CRP) and uric acid were the most strongly associated variables (R2 = 0.70) in those with low LDL-C and high ApoB. SEMs revealed that fibre (b = −0.42, p = 0.001) and SFA (b = 0.28, p = 0.014) had significant associations with our outcome (the joint effect of ApoB and LDL-C). BMI (b = 0.65, p = 0.001), fibre (b = −0.24, p = 0.014) and SFA (b = 0.26, p = 0.032) had significant associations with CRP. MR analysis showed that genetically higher body fat percentage had a significant causal effect on ApoB (inverse-variance weighted (IVW) beta: 0.172, p = 0.0001) but not on LDL-C (IVW beta: 0.006, p = 0.845).
Conclusion: Our data show that increased discordance between ApoB and LDL-C is associated with cardiometabolic, clinical and dietary abnormalities and that body fat percentage is causal of elevated ApoB.
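The inverse-variance-weighted (IVW) MR estimator mentioned above combines per-SNP Wald ratios (SNP–outcome effect divided by SNP–exposure effect), weighting each ratio by its precision. The sketch below uses made-up SNP effect sizes, not the study's actual genetic instruments:

```python
import numpy as np

# Hypothetical per-SNP summary statistics (illustrative values only)
bx = np.array([0.10, 0.08, 0.12, 0.09])        # SNP -> body fat % effect
by = np.array([0.017, 0.014, 0.021, 0.015])    # SNP -> ApoB effect
se_by = np.array([0.004, 0.005, 0.004, 0.006]) # standard errors of by

# Fixed-effect IVW estimate: weighted regression of by on bx through the origin
beta_ivw = np.sum(bx * by / se_by**2) / np.sum(bx**2 / se_by**2)
print("IVW causal estimate:", round(beta_ivw, 4))
```

This form is algebraically equivalent to a precision-weighted average of the Wald ratios by/bx, which is why a handful of well-powered instruments can dominate the estimate.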
Introduction: Identification of optimal drug doses and drug combinations is crucial for optimized treatment of tuberculosis.
Areas covered: An unprecedented level of research activity involving multiple approaches is seeking to improve tuberculosis treatment. This report reviews the quantitative methods currently used on clinical data sets to identify drug exposure targets and optimal drug combinations for tuberculosis treatment. A high-level summary of the methods, including the strengths and weaknesses of each method and potential methodological improvements, is presented. Methods incorporating data generated from multiple sources, such as in vitro and clinical studies, and their potential to provide better estimates of pharmacokinetic/pharmacodynamic (PK/PD) targets, are discussed. PK/PD relationships identified are compared between different studies and data analysis methods.
Expert opinion: The relationships between drug exposures and tuberculosis treatment outcomes are complex and require analytical methods capable of handling the multidimensional nature of the relationships. The choice of a method is guided by its complexity, interpretability of results, and type of data available.
Non-linear exposure-outcome relationships such as between body mass index (BMI) and mortality are common. They are best explored as continuous functions using individual participant data from multiple studies. We explore two two-stage methods for meta-analysis of such relationships, where the confounder-adjusted relationship is first estimated in a non-linear regression model in each study, then combined across studies. The “metacurve” approach combines the estimated curves using multiple meta-analyses of the relative effect between a given exposure level and a reference level. The “mvmeta” approach combines the estimated model parameters in a single multivariate meta-analysis. Both methods allow the exposure-outcome relationship to differ across studies. Using theoretical arguments, we show that the methods differ most when covariate distributions differ across studies; using simulated data, we show that mvmeta gains precision but metacurve is more robust to model mis-specification. We then compare the two methods using data from the Emerging Risk Factors Collaboration on BMI, coronary heart disease events, and all-cause mortality (>80 cohorts, >18 000 events). For each outcome, we model BMI using fractional polynomials of degree 2 in each study, with adjustment for confounders. For metacurve, the powers defining the fractional polynomials may be study-specific or common across studies. For coronary heart disease, metacurve with common powers and mvmeta correctly identify a small increase in risk in the lowest levels of BMI, but metacurve with study-specific powers does not. For all-cause mortality, all methods identify a steep U-shape. The metacurve and mvmeta methods perform well in combining complex exposure-disease relationships across studies.
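The mvmeta approach can be sketched in two stages: fit the non-linear model per study, then pool the coefficient vectors in a multivariate meta-analysis. The sketch below makes simplifying assumptions: a fixed-effect combination (the actual method allows between-study heterogeneity), fractional-polynomial powers (1, 2) shared across studies, and synthetic data with a known U-shaped relationship:

```python
import numpy as np

rng = np.random.default_rng(1)
betas, covs = [], []
for _ in range(3):                                        # three synthetic "studies"
    x = rng.uniform(18, 40, 200)                          # BMI-like exposure
    y = 0.02 * (x - 27) ** 2 + rng.normal(0, 0.5, 200)    # U-shaped outcome
    # Stage 1: per-study fit of y ~ b0 + b1*x + b2*x^2 (FP powers 1 and 2)
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    V = sigma2 * np.linalg.inv(X.T @ X)
    betas.append(b[1:])                                   # keep (linear, quadratic)
    covs.append(V[1:, 1:])

# Stage 2: fixed-effect multivariate combination with inverse-variance weights
W = [np.linalg.inv(V) for V in covs]
pooled = np.linalg.solve(sum(W), sum(w @ b for w, b in zip(W, betas)))
print("pooled (linear, quadratic) coefficients:", np.round(pooled, 3))
```

Pooling the whole coefficient vector with its covariance matrix, rather than pooling the curve point by point, is what distinguishes mvmeta from the metacurve approach.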
Missing covariates in regression analysis are a pervasive problem in medical, social, and economic research. We study empirical-likelihood confidence regions for unconstrained and constrained regression parameters in a nonignorable covariate-missing data problem. For an assumed conditional mean regression model, we assume that some covariates are fully observed but other covariates are missing for some subjects. By exploiting a probability model of missingness and a working conditional score model from a semiparametric perspective, we build a system of unbiased estimating equations in which the number of equations exceeds the number of unknown parameters. Based on the proposed estimating equations, we introduce unconstrained and constrained empirical-likelihood ratio statistics to construct empirical-likelihood confidence regions for the underlying regression parameters without and with constraints. We establish the asymptotic distributions of the proposed empirical-likelihood ratio statistics. Simulation results show that the proposed empirical-likelihood methods have better finite-sample performance than competing methods in terms of coverage probability and interval length. Finally, we apply the proposed empirical-likelihood methods to the analysis of a data set from the US National Health and Nutrition Examination Survey.
Accelerated failure time (AFT) models allowing for random effects are linear mixed models under the log-transformation of survival time with censoring and describe dependence in correlated survival data. It is well known that AFT models are useful alternatives to frailty models. To the best of our knowledge, however, there is no literature on variable selection methods for such AFT models. In this paper, we propose a simple but unified variable-selection procedure for fixed effects in AFT random-effect models using penalized h-likelihood (HL). We consider four penalty functions (i.e., least absolute shrinkage and selection operator (LASSO), adaptive LASSO, smoothly clipped absolute deviation (SCAD), and HL). We show that the proposed method can be easily implemented via a slight modification to existing h-likelihood estimation procedures. We further demonstrate that the proposed method can be easily extended to AFT models with multilevel (or nested) structures. Simulation studies show that the procedure using the adaptive LASSO, SCAD, or HL penalty performs well. In particular, we find via the simulation results that the variable selection method with the HL penalty provides a higher probability of choosing the true model than the other three methods. The usefulness of the new method is illustrated using two actual datasets from multicenter clinical trials.
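The full h-likelihood machinery with censoring and random effects is beyond a short example, but the core idea of penalized selection of fixed effects can be illustrated with an ordinary LASSO on log-transformed, uncensored survival times. Everything below is synthetic, and the censoring handling, random effects, and SCAD/HL penalties of the proposed method are omitted:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 200, 8
X = rng.normal(size=(n, p))
true_beta = np.array([1.0, -0.8, 0, 0, 0, 0, 0, 0])   # only 2 active covariates
log_t = X @ true_beta + rng.normal(0, 0.3, n)         # log survival time (no censoring)

# L1 penalty shrinks coefficients of inactive covariates exactly to zero,
# performing estimation and variable selection in one step
fit = Lasso(alpha=0.1).fit(X, log_t)
selected = np.flatnonzero(fit.coef_ != 0)
print("selected covariates:", selected)
```

In the paper's setting the same penalty is added to the h-likelihood instead of a least-squares loss, which is what lets the approach handle censoring and nested random effects while keeping this one-step selection property.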