Similar Articles
20 similar articles found (search time: 31 ms)
1.
OBJECTIVE: Ideally, clinical prediction models are generalizable to other patient groups. Unfortunately, they regularly perform worse when validated in new patients and are then often redeveloped. While the original prediction model has usually been developed on a large data set, redevelopment then often occurs on the smaller validation set. Recently, methods to update existing prediction models with the data of new patients have been proposed. We used an existing model that preoperatively predicts the risk of severe postoperative pain (SPP) to compare five updating methods. STUDY DESIGN AND SETTING: The model was tested and updated with a set of 752 new patients (274 [36%] with SPP). We studied the discrimination (ability to distinguish between patients with and without SPP) and calibration (agreement between the predicted risks and observed frequencies of SPP) of the five updated models in 283 other patients (100 [35%] with SPP). RESULTS: Simple recalibration methods improved the calibration to a similar extent as revision methods that made more extensive adjustments to the original model. Discrimination could not be improved by any of the methods. CONCLUSION: When performance is poor in new patients, updating methods can be applied to adjust the model rather than developing a new one.
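The simplest of the updating methods this abstract compares, recalibration, can be sketched numerically. The following is a hypothetical illustration on simulated data, not the authors' implementation: it assumes the original model is available only through its linear predictor (the logit of the predicted risk), and the function names are illustrative.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Unpenalized logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

def recalibrate(lp, y):
    """Update an existing model on new patients without full redevelopment:
    re-estimate only the intercept and calibration slope for the original
    linear predictor lp."""
    X = np.column_stack([np.ones_like(lp), lp])
    intercept, slope = fit_logistic(X, y)
    return intercept, slope

# Simulated "new patient" sample in which the original model is miscalibrated
# (true intercept 0.5, true slope 0.7 applied to the original linear predictor)
rng = np.random.default_rng(0)
lp = rng.normal(size=750)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 0.7 * lp))))
intercept, slope = recalibrate(lp, y)
print(f"calibration intercept {intercept:.2f}, slope {slope:.2f}")
```

A slope below 1 indicates predictions that are too extreme in the new setting; the more extensive revision methods in the abstract would additionally re-estimate each covariate's coefficient.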

2.
Objective: To assess the transportability of an existing diagnostic questionnaire model for sensitization to laboratory animal (LA) allergens. Study Design and Setting: The model was externally validated in 414 Canadian animal health apprentices. Several approaches were used: (1) no adjustment; (2) recalibration of the intercept of the model; (3) re-estimation of the intercept and the regression coefficients of the predictors; and (4) model revision, excluding existing predictor(s) and/or including new predictor(s). Bootstrapping was applied after the third and fourth approaches. Calibration was assessed graphically and with the Hosmer–Lemeshow (HL) test. Discriminative properties were determined by the area under the receiver operating characteristic curve (ROC area). Results: When applied without adjustment, the model's discriminative ability was adequate (ROC area 0.74 vs. the original ROC area of 0.76), but calibration was poor (HL test P < 0.001). The other methods yielded models with good calibration (P > 0.10) and reasonable discrimination (ROC area between 0.73 and 0.75). The refitted and revised model showed good internal validity (correction factor from the bootstrapping procedure was more than 0.90). Conclusion: Once updated, the diagnostic model is valid and can be applied with reasonable performance in an animal health apprentice setting.

3.
Prediction models fitted with logistic regression often show poor performance when applied in populations other than the development population. Model updating may improve predictions. Previously suggested methods vary in their extensiveness of updating the model. We aim to define a strategy in selecting an appropriate update method that considers the balance between the amount of evidence for updating in the new patient sample and the danger of overfitting. We consider recalibration in the large (re‐estimation of model intercept); recalibration (re‐estimation of intercept and slope) and model revision (re‐estimation of all coefficients) as update methods. We propose a closed testing procedure that allows the extensiveness of the updating to increase progressively from a minimum (the original model) to a maximum (a completely revised model). The procedure involves multiple testing while approximately maintaining the chosen type I error rate. We illustrate this approach with three clinical examples: patients with prostate cancer, traumatic brain injury and children presenting with fever. The need for updating the prostate cancer model was completely driven by a different model intercept in the update sample (adjustment: 2.58). Separate testing of model revision against the original model showed statistically significant results, but led to overfitting (calibration slope at internal validation = 0.86). The closed testing procedure selected recalibration in the large as update method, without overfitting. The advantage of the closed testing procedure was confirmed by the other two examples. We conclude that the proposed closed testing procedure may be useful in selecting appropriate update methods for previously developed prediction models. Copyright © 2016 John Wiley & Sons, Ltd.
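The idea of stepping from the original model toward a full revision can be sketched with likelihood-ratio tests between the nested update models. This is a simplified sequential variant on simulated data, not the paper's exact closed testing procedure; all names and the data-generating process are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def loglik(y, p):
    eps = 1e-12
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fit(X, y, offset, n_iter=30):
    """Unpenalized logistic regression via Newton-Raphson with a fixed offset."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(offset + X @ beta)))
        beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
    p = 1 / (1 + np.exp(-(offset + X @ beta)))
    return beta, loglik(y, p)

rng = np.random.default_rng(1)
n = 600
Xfull = rng.normal(size=(n, 3))
lp = Xfull @ np.array([0.8, 0.5, -0.4])             # "original" linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(1.0 + lp))))  # new sample: only the intercept differs

one = np.ones((n, 1))
zero = np.zeros(n)
ll0 = loglik(y, 1 / (1 + np.exp(-lp)))                       # original model (0 free params)
_, ll1 = fit(one, y, offset=lp)                              # recalibration in the large
_, ll2 = fit(np.column_stack([one, lp]), y, offset=zero)     # recalibration
_, ll3 = fit(np.column_stack([one, Xfull]), y, offset=zero)  # model revision

# Step up from the original model only while each extension is significant
steps = [("original", ll0, 0), ("intercept", ll1, 1),
         ("recalibration", ll2, 2), ("revision", ll3, 4)]
chosen = steps[0]
for prev, cur in zip(steps, steps[1:]):
    pval = chi2.sf(2 * (cur[1] - prev[1]), df=cur[2] - prev[2])
    if pval < 0.05:
        chosen = cur
    else:
        break
print("selected update:", chosen[0])
```

Because the simulated update sample differs from the original model only in its intercept, the procedure should stop at one of the recalibration steps rather than selecting the fully revised (and overfitting-prone) model.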

4.
Calibration, that is, whether observed outcomes agree with predicted risks, is important when evaluating risk prediction models. For dichotomous outcomes, several tools exist to assess different aspects of model calibration, such as calibration‐in‐the‐large, logistic recalibration, and (non‐)parametric calibration plots. We aim to extend these tools to prediction models for polytomous outcomes. We focus on models developed using multinomial logistic regression (MLR): outcome Y with k categories is predicted using k − 1 equations comparing each category i (i = 2, …, k) with reference category 1 using a set of predictors, resulting in k − 1 linear predictors. We propose a multinomial logistic recalibration framework that involves an MLR fit where Y is predicted using the k − 1 linear predictors from the prediction model. A non‐parametric alternative may use vector splines for the effects of the linear predictors. The parametric and non‐parametric frameworks can be used to generate multinomial calibration plots. Further, the parametric framework can be used for the estimation and statistical testing of calibration intercepts and slopes. Two illustrative case studies are presented, one on the diagnosis of malignancy of ovarian tumors and one on residual mass diagnosis in testicular cancer patients treated with cisplatin‐based chemotherapy. The risk prediction models were developed on data from 2037 and 544 patients and externally validated on 1107 and 550 patients, respectively. We conclude that calibration tools can be extended to polytomous outcomes. The polytomous calibration plots are particularly informative through the visual summary of the calibration performance. Copyright © 2014 John Wiley & Sons, Ltd.
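The parametric part of such a framework (refitting an MLR on the k − 1 development linear predictors) can be sketched as follows. This is a hypothetical illustration on simulated data, not the paper's exact framework: scikit-learn's softmax parameterization is symmetric rather than reference-coded, so its coefficients are not directly the calibration intercepts and slopes, and the vector-spline alternative is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 800
# Hypothetical k-1 = 2 linear predictors from a development MLR model
# (categories 1 and 2, each vs. reference category 0)
lp = rng.normal(size=(n, 2))

# The true outcome process differs from the development model: shifted
# intercepts and shrunken slopes, i.e. the model is miscalibrated here
true_logits = np.column_stack([np.zeros(n),
                               0.3 + 0.6 * lp[:, 0],
                               -0.2 + 0.9 * lp[:, 1]])
true_p = np.exp(true_logits) / np.exp(true_logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in true_p])

def log_loss(p, y):
    return -np.mean(np.log(p[np.arange(len(y)), y]))

# The original model's predictions: logits equal to its linear predictors
orig_logits = np.column_stack([np.zeros(n), lp])
orig_p = np.exp(orig_logits) / np.exp(orig_logits).sum(axis=1, keepdims=True)

# Parametric multinomial recalibration: refit an MLR in the validation
# data using only the k-1 development linear predictors as covariates
# (large C makes the fit effectively unpenalized)
recal = LogisticRegression(C=1e6, max_iter=2000).fit(lp, y)
recal_p = recal.predict_proba(lp)
print(f"log loss: original {log_loss(orig_p, y):.3f}, "
      f"recalibrated {log_loss(recal_p, y):.3f}")
```

Because the original model is a member of the recalibration family, the refitted model's in-sample log loss can only improve on it; the gap quantifies the miscalibration.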

5.
The discriminative ability of risk models for dichotomous outcomes is often evaluated with the concordance index (c-index). However, many medical prediction problems are polytomous, meaning that more than two outcome categories need to be predicted. Unfortunately, such problems are often dichotomized in prediction research. We present a perspective on the evaluation of discriminative ability of polytomous risk models, which may encourage researchers to consider polytomous prediction models more often. First, we suggest a "discrimination plot" as a tool to visualize the model's discriminative ability. Second, we discuss the use of one overall polytomous c-index versus a set of dichotomous measures to summarize the performance of the model. Third, we address several aspects to consider when constructing a polytomous c-index. These involve the assessment of concordance in pairs versus sets of patients, weighting by outcome prevalence, the value related to models with random performance, the reduction to the dichotomous c-index for dichotomous problems, and interpretation. We illustrate these issues on case studies dealing with ovarian cancer (four outcome categories) and testicular cancer (three categories). We recommend the use of a discrimination plot together with an overall c-index such as the Polytomous Discrimination Index. If the overall c-index suggests that the model has relevant discriminative ability, pairwise c-indexes for each pair of outcome categories are informative. For pairwise c-indexes we recommend the "conditional-risk" method, which is consistent with the analytical approach of the multinomial logistic regression used to develop polytomous risk models.
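The recommended conditional-risk pairwise c-index can be sketched directly: restrict to patients in the two categories of interest and rank them by the conditional probability of one category given the pair. A hypothetical illustration on simulated predicted probabilities; function names are illustrative.

```python
import numpy as np

def c_index(risk, y):
    """Concordance for a dichotomous outcome: P(risk_case > risk_control),
    with ties counted as 1/2."""
    cases, controls = risk[y == 1], risk[y == 0]
    diff = cases[:, None] - controls[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(cases) * len(controls))

def pairwise_c(probs, y, i, j):
    """'Conditional-risk' pairwise c-index: keep only patients in
    categories i and j and rank them by p_i / (p_i + p_j)."""
    mask = (y == i) | (y == j)
    cond = probs[mask, i] / (probs[mask, i] + probs[mask, j])
    return c_index(cond, (y[mask] == i).astype(int))

# Hypothetical predicted probabilities for 3 outcome categories, with
# outcomes drawn from those probabilities (a perfectly calibrated model)
rng = np.random.default_rng(3)
n = 300
probs = rng.dirichlet([2.0, 2.0, 2.0], size=n)
y = np.array([rng.choice(3, p=p) for p in probs])

c01 = pairwise_c(probs, y, 0, 1)
print(f"pairwise c-index (category 0 vs 1): {c01:.3f}")
```

A convenient property of the conditional-risk construction is symmetry: the pairwise c-index for (i, j) equals that for (j, i), because reversing the pair flips both the risks and the case/control labels.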

6.
OBJECTIVE: Physicians commonly consider the presence of all differential diagnoses simultaneously. Polytomous logistic regression modeling allows for simultaneous estimation of the probability of multiple diagnoses. We discuss and (empirically) illustrate the value of this method for diagnostic research. STUDY DESIGN AND SETTING: We used data from a study on the diagnosis of residual retroperitoneal mass histology in patients presenting with nonseminomatous testicular germ cell tumor. The differential diagnoses include benign tissue, mature teratoma, and viable cancer. Probabilities of each diagnosis were estimated with a polytomous logistic regression model and compared with the probabilities estimated from two consecutive dichotomous logistic regression models. RESULTS: We provide interpretations of the odds ratios derived from the polytomous regression model and present a simple score chart to facilitate calculation of predicted probabilities from the polytomous model. For both modeling methods, we show the calibration plots and areas under the receiver operating characteristic (ROC) curve comparing each diagnostic outcome category with the other two. The ROC areas for benign tissue, mature teratoma, and viable cancer were similar for both modeling methods: 0.83 (95% confidence interval [CI]=0.80-0.85) vs. 0.83 (95% CI=0.80-0.85), 0.78 (95% CI=0.75-0.81) vs. 0.78 (95% CI=0.75-0.81), and 0.66 (95% CI=0.61-0.71) vs. 0.64 (95% CI=0.59-0.69), for polytomous and dichotomous regression models, respectively. CONCLUSION: Polytomous logistic regression is a useful technique to simultaneously model predicted probabilities of multiple diagnostic outcome categories. The performance of a polytomous prediction model can be assessed similarly to a dichotomous logistic regression model, and predictions by a polytomous model can be made with a user-friendly method.
Because the simultaneous consideration of multiple (differential) conditions serves clinical practice better than consideration of only one target condition, polytomous logistic regression could be applied more often in diagnostic research.

7.
Objective: To compare the psychometric properties of scales to measure activities of daily living, constructed with different scaling methods, and to check whether the most complex scales have higher discriminatory capacity. Method: Sample of elderly people from the Spanish Survey on Disability, Personal Autonomy and Dependency. We used 14 items that measured activities of daily living. Five scaling methods were applied: Sum and Rasch (each for both dichotomous and polytomous items) and Guttman (dichotomous). We evaluated discriminatory capacity (relative precision [RP]) and the area under the curve (AUC). Results: All methods showed high Pearson correlations among them (0.765-0.993). They had similar discriminatory power when comparing the extreme categories of individuals with no disability versus those with severe limitations (RP: 0.93-1.00). The polytomous Sum procedure showed the highest AUC (0.934; 95% confidence interval [95%CI]: 0.928-0.939) and Guttman the lowest (0.853; 95%CI: 0.845-0.861). Conclusions: Polytomous items have greater reliability than dichotomous ones. The simplest method (Sum) and the most complex (Rasch) are equally valid. The Guttman method showed the worst discriminatory capacity.

8.
Objectives: Accurately predicting hospital mortality is necessary to measure and compare patient care. External validation of predictive models is required to truly prove their utility. This study assessed the Kaiser Permanente inpatient risk adjustment methodology for hospital mortality in a patient population distinct from that used for its derivation. Study Design and Setting: Retrospective cohort study at two hospitals in Ottawa, Canada, involving all inpatients admitted between January 1998 and April 2002 (n = 188,724). Statistical models for inpatient mortality were derived on a random half of the cohort and validated on the other half. Results: Inpatient mortality was 3.3%. The model using original parameter estimates had excellent discrimination (c-statistic 0.894, 95% confidence interval [CI] 0.891–0.898) but poor calibration. Using data-based parameter estimates, discrimination was excellent (c-statistic 0.915, 95% CI 0.912–0.918) and remained so when patient comorbidity was expressed in the model using the Elixhauser Index (0.901, 0.898–0.904) or the Charlson Index (0.894, 0.891–0.897). These models accurately predicted the risk of hospital death. Conclusion: The Kaiser Permanente inpatient risk adjustment methodology is a valid model for predicting hospital mortality risk. It performed equally well regardless of the method used to summarize patient comorbidity.

9.
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability: each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random-effects meta-analysis methods. I² statistics and prediction interval widths quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how the performance of prediction models can be assessed in settings with multicenter data from different time periods.
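The meta-analytic pooling step described here can be sketched with the DerSimonian–Laird random-effects estimator. The center-specific c-statistics and standard errors below are hypothetical inputs, not the study's data; in practice they would come from the leave-one-hospital-out validation loop.

```python
import numpy as np

# Hypothetical hospital-specific validation results from 10 centers
rng = np.random.default_rng(4)
c = rng.normal(0.75, 0.02, size=10)   # hospital-specific c-statistics
se = np.full(10, 0.03)                # assumed standard errors

# DerSimonian-Laird random-effects pooling
w = 1 / se**2
c_fixed = np.sum(w * c) / np.sum(w)
Q = np.sum(w * (c - c_fixed)**2)      # Cochran's Q
df = len(c) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
c_pooled = np.sum(w_re * c) / np.sum(w_re)

# I^2: share of total variation attributable to between-center heterogeneity
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Approximate 95% prediction interval for the c-statistic in a new center
half = 1.96 * np.sqrt(tau2 + 1 / np.sum(w_re))
pi = (c_pooled - half, c_pooled + half)
print(f"pooled c = {c_pooled:.3f}, I^2 = {i2:.0f}%, "
      f"95% PI = ({pi[0]:.3f}, {pi[1]:.3f})")
```

A wide prediction interval (or large I²) signals limited geographic transportability even when the pooled c-statistic looks adequate.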

10.
Objective: To establish the association between prior knee-pain consultations and early diagnosis of knee osteoarthritis (OA) using weighted cumulative exposure (WCE) models. Study Design and Setting: Data were from an electronic health care record (EHR) database (Consultations in Primary Care Archive). WCE functions for modeling the cumulative effect of time-varying knee-pain consultations weighted by recency were derived as a predictive tool in a population-based case-control sample and validated in a prospective cohort sample. Two WCE functions ([i] weighting of the importance of past consultations determined a priori; [ii] flexible spline-based estimation) were comprehensively compared with two simpler models ([iii] time since most recent consultation; [iv] total number of past consultations) on model goodness of fit, discrimination, and calibration in both the derivation and validation phases. Results: People with the most recent and most frequent knee-pain consultations were more likely to have high WCE scores, which were associated with increased risk of a knee OA diagnosis in both the derivation and validation phases. Better model goodness of fit, discrimination, and calibration were observed for the flexible spline-based WCE models. Conclusion: WCE functions can be used to model prediagnostic symptoms within routine EHR data and provide novel low-cost predictive tools contributing to early diagnosis.

11.
Objective: Early identification of older people at risk of falling is the cornerstone of fall prevention. Many fall prediction tools exist, but their external validity is lacking, and external validation is a prerequisite for application in clinical practice. Models developed with electronic health record (EHR) data are especially challenging because of the uncontrolled nature of routinely collected data. We aimed to externally validate our previously developed and published prediction model for falls, using a large cohort of community-dwelling older people derived from primary care EHR data. Design: Retrospective analysis of a prospective cohort drawn from EHR data. Setting and Participants: Pseudonymized EHR data were collected from individuals aged ≥65 years who were enlisted in any of the 59 participating general practices between 2015 and 2020 in the Netherlands. Methods: Ten predictors were defined and obtained using the same methods as in the development study. The outcome was a 1-year fall, obtained from free text. Both reproducibility and transportability were evaluated. Model performance was assessed in terms of discrimination, using the area under the receiver operating characteristic curve (ROC-AUC), and in terms of calibration, using calibration-in-the-large, the calibration slope, and calibration plots. Results: Among 39,342 older people, 5124 (13.4%) fell during the 1-year follow-up. The characteristics of the validation and development cohorts were similar. ROC-AUCs of the validation and development cohorts were 0.690 and 0.705, respectively. Calibration-in-the-large and the calibration slope were 0.012 and 0.878, respectively. Calibration plots revealed overprediction for high-risk groups in a small number of individuals. Conclusions and Implications: Our previously developed prediction model for falls demonstrated good external validity by reproducing its predictive performance in the validation cohort. Implementation of this model in the primary care setting could be considered after impact assessment.

12.
Objectives: Readmission to acute care from the inpatient rehabilitation facility (IRF) setting is potentially preventable and an important target of quality improvement and cost savings. The objective of this study was to develop a risk calculator to predict 30-day all-cause readmissions from the IRF setting. Design: Retrospective database analysis using the Uniform Data System for Medical Rehabilitation (UDSMR) from 2015 through 2019. Setting and Participants: In total, 956 US inpatient rehabilitation facilities and 1,849,768 IRF discharges comprising patients from 14 impairment groups. Methods: Logistic regression models were developed to calculate risk-standardized 30-day all-cause hospital readmission rates for patients admitted to an IRF. Models for each impairment group were assessed using 12 common clinical and demographic variables, and all but 4 models included various special variables. Models were assessed for discrimination (c-statistics), calibration (calibration plots), and internal validation (bootstrapping). A readmission risk scoring system was created for each impairment group population and was graphically validated. Results: The mean age of the cohort was 68.7 (15.2) years, 50.7% were women, and 78.3% were Caucasian. Medicare was the primary payer for 73.1% of the study population. The final models for each impairment group included between 4 and 13 predictor variables. Model c-statistics ranged from 0.65 to 0.70. Calibration was good for most models up to a readmission risk of 30%. Internal validation of the models using bootstrap samples revealed little bias. Point systems for determining the risk of 30-day readmission were developed for each impairment group. Conclusions and Implications: Multivariable risk factor algorithms based on administrative data were developed to assess 30-day readmission risk for patients admitted to an IRF. This report represents the development of a readmission risk calculator for the IRF setting, which could be instrumental in identifying populations at high risk for readmission and targeting resources across a diverse group of IRF impairment groups.

13.
Objective: Frailty state progression is common among older adults, so it is necessary to identify predictors in order to implement individualized interventions. We aimed to develop and validate a nomogram to predict frailty progression in community-living older adults. Design: Prospective cohort study. Setting and Participants: A total of 3170 Chinese community-living people aged ≥60 years were randomly assigned to a training set or a validation set at a ratio of 6:4. Methods: Candidate predictors (demographic, lifestyle, and medical characteristics) were used to predict frailty state progression, measured with the Fried frailty phenotype at a 4-year follow-up, and multivariate logistic regression analysis was conducted to develop a nomogram, which was validated internally with 1000 bootstrap resamples and externally with the validation set. The C index and calibration plots were used to assess the discrimination and calibration of the nomogram, respectively. Results: After a follow-up period of 4 years, 64.1% (917/1430) of the participants in the robust group and 26.0% (453/1740) in the prefrail group experienced frailty progression, including 9.1% and 21.0%, respectively, who progressed to frailty. Predictors in the final nomogram were age, marital status, physical exercise, baseline frailty state, and diabetes. Based on this nomogram, an online calculator was also developed for ease of use. Discriminative ability was good in the training set (C index = 0.861) and was validated using both the internal bootstrap method (C index = 0.861) and the external validation set (C index = 0.853). The calibration plots showed good agreement in both the training and validation sets. Conclusions and Implications: An easy-to-use nomogram with good apparent performance was developed using 5 readily available variables to help physicians and public health practitioners identify older adults at high risk for frailty progression and implement medical interventions.
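The internal bootstrap validation used here (and in several of the other studies above) follows Harrell's optimism-correction recipe: refit the model on each bootstrap resample, measure how much better it looks on its own resample than on the original data, and subtract the average optimism from the apparent performance. A minimal sketch on simulated data, not the study's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated development data: 2 informative predictors, 3 noise predictors
rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 5))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.8, 0.5, 0, 0, 0])))))

model = LogisticRegression(max_iter=1000).fit(X, y)
c_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: apparent performance on the resample minus
# the same refitted model's performance back on the original data
optimism = []
for _ in range(100):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    c_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    c_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(c_boot - c_orig)

c_corrected = c_apparent - np.mean(optimism)
print(f"apparent c = {c_apparent:.3f}, corrected c = {c_corrected:.3f}")
```

The corrected c-statistic estimates performance in new patients from the same population; a small apparent-to-corrected gap (as in this abstract's 0.861 vs 0.861) indicates little overfitting.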

14.
Objective: Fall prevention is important in many hospitals. Current fall-risk-screening tools have limited predictive accuracy for older inpatients specifically, and their administration can be time-consuming. A reliable and easy-to-administer tool is desirable to identify older inpatients at higher fall risk. We aimed to develop and internally validate a prognostic prediction model for inpatient falls in older patients. Design: Retrospective analysis of a large cohort drawn from hospital electronic health record data. Setting and Participants: Older patients (≥70 years) admitted to a university medical center (2016 until 2021). Methods: The outcome was an inpatient fall (≥24 hours after admission). Two prediction models were developed using regularized logistic regression in 5 imputed data sets: one without predictors indicating missing values (Model-without) and one with these additional missing-value indicators (Model-with). We internally validated our whole model development strategy using 10-fold stratified cross-validation. The models were evaluated using discrimination (area under the receiver operating characteristic curve) and calibration (plot assessment). We determined whether the areas under the receiver operating characteristic curves (AUCs) of the models were significantly different using the DeLong test. Results: Our data set included 21,286 admissions. In total, 470 (2.2%) had a fall after 24 hours of admission. Model-without had 12 predictors and Model-with 13, of which 4 were indicators of missing values. The AUCs of Model-without and Model-with were 0.676 (95% CI 0.646-0.707) and 0.695 (95% CI 0.667-0.724), and the difference was statistically significant (P = .013). Calibration was good for both models. Conclusions and Implications: Both models showed good calibration and fair discrimination, with Model-with performing better. Our models showed competitive performance against well-established fall-risk-screening tools and have the advantage of being based on routinely collected data. This may substantially reduce the burden on nurses compared with nonautomatic fall-risk-screening tools.

15.
Objectives: To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. Study Design and Setting: The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items as candidate predictors. First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validity and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). Results: The prediction model contained the predictors nasoconjunctival symptoms; asthma symptoms; shortness of breath and wheeze; work-related upper and lower respiratory symptoms; and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. Conclusion: A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.

16.
Annals of Epidemiology, 2014, 24(7): 532-537
Purpose: We compared the Johns Hopkins Aggregated Diagnosis Groups (ADGs), which are derived using inpatient and outpatient records, with the hospital-record-derived Charlson and Elixhauser comorbidity indices for predicting outcomes in human immunodeficiency virus (HIV)-infected patients. Methods: We used a validated algorithm to identify HIV-infected adults (n = 14,313) in Ontario, Canada, and randomly divided the sample into derivation and validation samples 100 times. The primary outcome was all-cause mortality within 1 year; secondary outcomes included hospital admission and all-cause mortality within 1–2 years. Results: The ADG, Elixhauser, and Charlson methods had comparable discriminative performance for predicting 1-year mortality, with median c-statistics of 0.785, 0.767, and 0.788, respectively, across the 100 validation samples. All methods had lower predictive accuracy for all-cause mortality within 1–2 years. For hospital admission, the ADG method had greater discriminative performance than either the Elixhauser or Charlson method, with median c-statistics of 0.727, 0.678, and 0.668, respectively. All models displayed poor calibration for each outcome. Conclusions: In patients with HIV, the ADG, Charlson, and Elixhauser methods are comparable for predicting 1-year mortality. However, poor calibration limits the use of these methods for provider profiling and clinical application.

17.
OBJECTIVE: To compare polytomous and dichotomous logistic regression analyses in diagnosing serious bacterial infections (SBIs) in children with fever without apparent source (FWS). STUDY DESIGN AND SETTING: We analyzed data on 595 children aged 1-36 months who attended the emergency department with fever without source. Outcome categories were SBI, subdivided into pneumonia and other-SBI (OSBI), and non-SBI. Potential predictors were selected based on previous studies and the literature. Four models were developed: a polytomous model, estimating probabilities for the three diagnostic categories simultaneously; two sequential dichotomous models, which differed in variable selection, discriminating SBI from non-SBI in step 1 and pneumonia from OSBI in step 2; and model 4, in which each outcome category was opposed to the other two. The models were compared with respect to the area under the receiver operating characteristic curve (AUC) for each of the three outcome categories and with respect to variable selection. RESULTS: Small differences were found in the variables selected in the polytomous and dichotomous models. The AUCs of the three outcome categories were similar for each modeling strategy. CONCLUSION: A polytomous logistic regression analysis did not outperform sequential and single applications of dichotomous logistic regression analyses in diagnosing SBIs in children with FWS.

18.
Objective: Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe the implications for model development and external validation of predictions. Study Design and Setting: We present results based on simulated data sets. Results: A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern, which implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Conclusion: Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration.
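The two weaker levels of calibration named in this abstract can be estimated directly in a validation sample: mean calibration as the calibration intercept with the slope fixed at 1, and weak calibration as the calibration intercept and slope fitted together. A hypothetical illustration on simulated, deliberately overconfident predictions (not the paper's simulations):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def fit_logistic(X, y, offset=None, n_iter=30):
    """Unpenalized logistic regression via Newton-Raphson, with an
    optional fixed offset added to the linear predictor."""
    off = np.zeros(len(y)) if offset is None else offset
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(off + X @ beta)))
        beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
    return beta

# Simulated validation sample: predicted risks are too extreme
# (true calibration slope 0.6, true intercept 0)
rng = np.random.default_rng(6)
n = 2000
p_hat = rng.uniform(0.05, 0.95, n)
y = rng.binomial(1, 1 / (1 + np.exp(-0.6 * logit(p_hat))))

lp = logit(p_hat)
ones = np.ones((n, 1))
# Mean calibration: intercept only, slope fixed at 1 (lp as offset)
a_mean = fit_logistic(ones, y, offset=lp)[0]
# Weak calibration: calibration intercept and slope fitted jointly
a_weak, b_weak = fit_logistic(np.column_stack([ones, lp]), y)
print(f"mean-calibration intercept {a_mean:.2f}; "
      f"weak calibration: intercept {a_weak:.2f}, slope {b_weak:.2f}")
```

Here mean calibration can look fine (intercept near 0) while the calibration slope well below 1 reveals overfitted, too-extreme predictions; assessing moderate calibration would additionally require a smoothed calibration plot.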

19.
20.
In patients with community-acquired pneumonia (CAP), prediction rules based on individual predicted mortalities are frequently used to support decision-making for inpatient vs. outpatient management. We studied the accuracy, and the need for recalibration, of three risk prediction scores in a tertiary-care university hospital emergency department setting in Switzerland. We pooled data from patients with CAP enrolled in two randomized controlled trials. We compared expected mortality from the original pneumonia severity index (PSI), CURB65, and CRB65 scores against observed mortality (calibration) and recalibrated the scores by fitting the intercept alpha and the calibration slope beta from our calibration model. Each of the original models underestimated the observed 30-day mortality of 11% in 371 patients admitted to the emergency department with CAP (8.4%, 5.5%, and 5.0% for the PSI, CURB65, and CRB65 scores, respectively). In particular, we observed relevant mortality within the low-risk classes of the original models (2.6%, 5.3%, and 3.7% for PSI classes I-III, CURB65 classes 0-1, and CRB65 class 0, respectively). Recalibration of the original risk models corrected the miscalibration. After recalibration, however, only PSI class I was sensitive enough to identify patients with a low risk (i.e., <1%) of mortality suitable for outpatient management. In our tertiary-care setting with mostly referred inpatients, CAP risk scores substantially underestimated observed mortality, misclassifying patients with relevant risks of death as suitable for outpatient management. Before CAP risk scores are implemented in a clinical setting, the need for recalibration and the accuracy of low-risk reclassification should be studied in order to adhere to discharge guidelines and guarantee patient safety.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)