Similar articles
20 similar articles were retrieved.
1.
NHS England recently mandated that the National Early Warning Score of vital signs be used in all acute hospital trusts in the UK despite limited validation in the postoperative setting. We undertook a multicentre UK study of 13,631 patients discharged from intensive care after risk-stratified cardiac surgery in four centres, all of which used VitalPAC™ to electronically collect postoperative National Early Warning Score vital signs. We analysed 540,127 sets of vital signs to generate a logistic score, the discrimination of which we compared with the national additive score for the composite outcome of: in-hospital death; cardiac arrest; or unplanned intensive care admission. There were 578 patients (4.2%) with an outcome that followed 4300 sets of observations (0.8%) in the preceding 24 h: 499 out of 578 (86%) patients had unplanned re-admissions to intensive care. Discrimination by the logistic score was significantly better than the additive score. Respective areas (95%CI) under the receiver-operating characteristic curve with 24-h and 6-h vital signs were: 0.779 (0.771–0.786) vs. 0.754 (0.746–0.761), p < 0.001; and 0.841 (0.829–0.853) vs. 0.813 (0.800–0.825), p < 0.001, respectively. Our proposed logistic Early Warning Score was better than the current National Early Warning Score at discriminating patients who had an event after cardiac surgery from those who did not.
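For readers who want to see the mechanics behind the comparison reported above, the sketch below shows one way to contrast a regression-based ("logistic") early warning score with an additive total using the area under the receiver-operating characteristic curve. The synthetic data, variable names and model are illustrative assumptions only, not the study's analysis.

```python
# Minimal sketch (not the study's code): fit a logistic model on vital signs,
# then compare its ROC AUC with that of an additive aggregate score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for vital-sign observation sets and an additive score total
vitals = rng.normal(size=(n, 6))              # e.g. HR, RR, SBP, SpO2, temp, consciousness
news_additive = rng.integers(0, 15, size=n)   # aggregate additive score
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * vitals[:, 0] + 0.1 * news_additive - 3))))

# "Logistic score": predicted probability from a regression on the raw vitals.
# (Fitting and scoring on the same data is optimistic; a held-out split or
# external cohort would be used in a real validation.)
logit = LogisticRegression(max_iter=1000).fit(vitals, outcome)
p_logistic = logit.predict_proba(vitals)[:, 1]

auc_logistic = roc_auc_score(outcome, p_logistic)      # discrimination of the logistic score
auc_additive = roc_auc_score(outcome, news_additive)   # discrimination of the additive score
print(f"AUC logistic {auc_logistic:.3f} vs additive {auc_additive:.3f}")
```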

2.
Pre-operative risk stratification is a key part of the care pathway for emergency bowel surgery, as it facilitates the identification of high-risk patients. Several novel risk scores have recently been published that are designed to identify patients who are frail or significantly unwell. They can also be calculated pre-operatively from routinely collected clinical data. This study aimed to investigate the ability of these scores to predict 30-day mortality after emergency bowel surgery. A single-centre cohort study was performed using our local data from the National Emergency Laparotomy Audit database. Further data were extracted from electronic hospital records (n = 1508). The National Early Warning Score, Laboratory Decision Tree Early Warning Score and Hospital Frailty Risk Score were then calculated. The most abnormal National or Laboratory Decision Tree Early Warning Score in the 24 or 72 h before surgery was used in analysis. Individual scores were reasonable predictors of mortality (c-statistic 0.699–0.740) but all were poorly calibrated. A National Early Warning Score ≥ 4 was associated with a high overall mortality rate (> 10%). A logistic regression model was developed using age, National Early Warning Score, Laboratory Decision Tree Early Warning Score and Hospital Frailty Risk Score as predictor variables, and its performance compared with other established risk models. The model demonstrated good discrimination and calibration (c-statistic 0.827) but was marginally outperformed by the National Emergency Laparotomy Audit score (c-statistic 0.861). All other models compared performed less well (c-statistics 0.734–0.808). Pre-operative patient vital signs, blood tests and markers of frailty can be used to accurately predict the risk of 30-day mortality after emergency bowel surgery.
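The combined model described above (age plus the three scores as predictors of 30-day mortality, summarised by a c-statistic) can be sketched as follows. The column names, simulated data and cross-validated AUC are illustrative assumptions, not the study's actual model or calibration analysis.

```python
# Sketch of a combined pre-operative risk model: logistic regression on age,
# NEWS, LDT-EWS and the Hospital Frailty Risk Score, with the c-statistic
# (ROC AUC) on out-of-fold predictions as the discrimination measure.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "age": rng.normal(70, 12, n),
    "news": rng.integers(0, 15, n),     # worst NEWS before surgery (assumed scale)
    "ldtews": rng.integers(0, 30, n),   # Laboratory Decision Tree EWS (assumed scale)
    "hfrs": rng.gamma(2.0, 3.0, n),     # Hospital Frailty Risk Score (assumed scale)
})
risk = 1 / (1 + np.exp(-(-9 + 0.05 * df.age + 0.2 * df.news + 0.05 * df.ldtews + 0.05 * df.hfrs)))
df["died_30d"] = rng.binomial(1, risk)

X, y = df[["age", "news", "ldtews", "hfrs"]], df["died_30d"]
# Out-of-fold predictions give a less optimistic c-statistic than refitting on all data
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba")[:, 1]
print(f"c-statistic: {roc_auc_score(y, pred):.3f}")
```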

3.
This study updates assessment of post‐transplant outcomes in IgAN patients in the modern era of immunosuppression. Using UNOS/OPTN data, patients ≥18 yr of age with first kidney transplant (1/1/1999 to 12/31/2008) were analyzed. Multivariable Cox regression models and propensity score‐based matching techniques were used to estimate hazard ratios (HRs) for death‐censored allograft survival (DCGS) and patient survival in IgAN compared to non‐IgAN. Results of multivariable regression were stratified by donor type (living vs. deceased). A total of 107,747 recipients were included (4589 with IgAN and 103,158 with non‐IgAN). Adjusted HR for DCGS showed no significant difference between IgAN and non‐IgAN. IgAN had higher patient survival compared to non‐IgAN (HR 0.54, 95% CI 0.47–0.62, p < 0.0001 for deceased donors; HR 0.42, 95% CI 0.33–0.54, p < 0.0001 for living donors). Propensity score‐matched analysis was similar, with no significant difference in DCGS between matched groups and higher patient survival in IgAN patients compared to the non‐IgAN group (HR 0.54, 95% CI 0.47–0.63; p‐value <0.0001). IgAN patients with first kidney transplant have superior patient survival and similar graft survival compared to non‐IgAN recipients. Results can be used in prognostication and informed decision‐making about kidney transplantation in patients with IgAN.

4.
Parental hip fracture (HF) is associated with increased risk of offspring major osteoporotic fractures (MOFs; comprising hip, forearm, clinical spine or humerus fracture). Whether other sites of parental fracture should be used for fracture risk assessment is uncertain. The current study tested the association between objectively‐verified parental non‐hip MOF and offspring incident MOF. Using population‐based administrative healthcare data for the province of Manitoba, Canada, we identified 255,512 offspring with linkage to at least one parent (238,054 mothers and 209,423 fathers). Parental non‐hip MOF (1984–2014) and offspring MOF (1997–2014) were ascertained with validated case definitions. Time‐dependent multivariable Cox proportional hazards regression models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (95% CIs). During a median of 12 years of offspring follow‐up, we identified 7045 incident MOF among offspring (3.7% and 2.5% for offspring with and without a parental non‐hip MOF, p < 0.001). Maternal non‐hip MOF (HR 1.27; 95% CI, 1.19 to 1.35), paternal non‐hip MOF (HR 1.33; 95% CI, 1.20 to 1.48), and any parental non‐hip MOF (HR 1.28; 95% CI, 1.21 to 1.36) were significantly associated with offspring MOF after adjusting for covariates. The risk of MOF was even greater for offspring with both maternal and paternal non‐hip MOF (adjusted HR 1.61; 95% CI, 1.27 to 2.02). All HRs were similar for male and female offspring (all p for interaction >0.1). Risks associated with parental HF only (adjusted HR 1.26; 95% CI, 1.13 to 1.40) and non‐hip MOF only (adjusted HR 1.26; 95% CI, 1.18 to 1.34) were the same. The strength of association between any parental non‐hip MOF and offspring MOF decreased with older parental age at non‐hip MOF (p for trend = 0.028). In summary, parental non‐hip MOF confers an increased risk for offspring MOF, but the strength of the relationship decreases with older parental age at fracture. © 2016 American Society for Bone and Mineral Research.

5.
The aim was to evaluate the relationship between maintenance immunosuppression, subclinical tubulo‐interstitial inflammation and interstitial fibrosis/tubular atrophy (IF/TA) in surveillance biopsies performed in low immunological risk renal transplants at two transplant centers. The Barcelona cohort consisted of 109 early and 66 late biopsies in patients receiving high tacrolimus (TAC‐C0 target at 1‐year 6–10 ng/ml) and reduced MMF dose (500 mg bid at 1‐year). The Oslo cohort consisted of 262 early and 237 late biopsies performed in patients treated with low TAC‐C0 (target 3–7 ng/ml) and standard MMF dose (750 mg bid). Subclinical inflammation, adjusted for confounders, was associated with low TAC‐C0 in the early (OR: 0.75, 95% CI: 0.61–0.92; P = 0.006) and late biopsies (OR: 0.69, 95% CI: 0.50–0.95; P = 0.023) from Barcelona. In the Oslo cohort, it was associated with low MMF in early biopsies (OR: 0.90, 95% CI: 0.83–0.98; P = 0.0101) and with low TAC‐C0 in late biopsies (OR: 0.77, 95% CI: 0.61–0.97; P = 0.0286). MMF dose was significantly reduced in Oslo between early and late biopsies. IF/TA was not associated with TAC‐C0 or MMF dose in the multivariate analysis. Our data suggest that in TAC‐ and MMF‐based regimens, TAC‐C0 levels are associated with subclinical inflammation in patients receiving reduced MMF dose.  相似文献   

6.

Aim

The aim of this study was to investigate the efficacy of sacral neuromodulation (SNM) in the treatment of faecal incontinence and concomitant urinary incontinence in women with a history of obstetric anal sphincter injury (OASIS).

Method

In this prospective study, consecutive women with faecal incontinence following OASIS accepted for SNM were screened for concomitant urinary incontinence. The primary outcome was the change in urinary incontinence score on the International Consultation on Incontinence Questionnaire for Urinary Incontinence, Short Form (ICIQ‐UI‐SF), between baseline and 12 months. Secondary outcomes included the change in St Mark's score, sexual function and quality of life, change in grade of urinary incontinence and disappearance of urgency.

Results

From March 2012 to September 2014, 39 women with combined faecal incontinence and urinary incontinence received SNM. Thirty‐seven women were available for analysis after 12 months. The mean reduction in the ICIQ‐UI‐SF score between baseline and 12 months was 5.8 (95% CI 3.7–8.0, p < 0.001). ICIQ‐UI‐SF was reduced in 29 (78%) women, urinary incontinence resolved in 13/37 (35%, 95% CI 20%–50%) patients, and urgency disappeared in 14/33 (42%, 95% CI 26%–59%). The mean reduction in the St Mark's score was 10.6 (95% CI 8.6–12.7, p < 0.001). Disease‐specific quality of life, the EuroQol 5‐dimension visual analogue scale (EQ‐5D VAS) and several areas of sexual function changed significantly for the better.

Conclusion

More than three‐quarters of the women with combined faecal and urinary incontinence following OASIS reported a successful outcome with reduction in ICIQ‐UI‐SF at 12 months after SNM.

7.
Excellent outcomes have been demonstrated among select HIV‐positive kidney transplant (KT) recipients with well‐controlled infection, but to date, no national study has explored outcomes among HIV+ KT recipients by antiretroviral therapy (ART) regimen. Intercontinental Marketing Services (IMS) pharmacy fills (1/1/01–10/1/12) were linked with Scientific Registry of Transplant Recipients (SRTR) data. A total of 332 recipients with pre‐ and posttransplantation fills were characterized by ART at the time of transplantation as protease inhibitor (PI) or non–PI‐based ART (88 PI vs. 244 non‐PI). Cox proportional hazards models were adjusted for recipient and donor characteristics. Comparing recipients by ART regimen, there were no significant differences in age, race, or HCV status. Recipients on PI‐based regimens were significantly more likely to have an Estimated Post Transplant Survival (EPTS) score of >20% (70.9% vs. 56.3%, p = 0.02) than those on non‐PI regimens. On adjusted analyses, PI‐based regimens were associated with a 1.8‐fold increased risk of allograft loss (adjusted hazard ratio [aHR] 1.84, 95% confidence interval [CI] 1.22–2.77, p = 0.003), with the greatest risk observed in the first posttransplantation year (aHR 4.48, 95% CI 1.75–11.48, p = 0.002), and a 1.9‐fold increased risk of death as compared to non‐PI regimens (aHR 1.91, 95% CI 1.02–3.59, p = 0.05). These results suggest that whenever possible, recipients should be converted to a non‐PI regimen prior to kidney transplantation.  相似文献   

8.
To assess the effects of non‐steroidal antiandrogen monotherapy compared with luteinizing hormone‐releasing hormone agonists or surgical castration monotherapy for treating advanced hormone‐sensitive stages of prostate cancer. We searched the Cochrane Prostatic Diseases and Urologic Cancers Group Specialized Register (PROSTATE), the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Web of Science with Conference Proceedings, three trial registries and abstracts from three major conferences to 23 December 2013, together with reference lists, and contacted selected experts in the field and manufacturers. We included randomized controlled trials comparing non‐steroidal antiandrogen monotherapy with medical or surgical castration monotherapy for men in advanced hormone‐sensitive stages of prostate cancer. Two review authors independently examined full‐text reports, identified relevant studies, assessed the eligibility of studies for inclusion, extracted data and assessed risk of bias as well as quality of evidence according to the GRADE working group guidelines. We used Review Manager 5.2 for data synthesis and the fixed‐effect model as primary analysis (when heterogeneity was low, with I² < 50%); we used a random‐effects model when confronted with substantial or considerable heterogeneity (when I² ≥ 50%). A total of 11 studies involving 3060 randomly assigned participants were included in the present review. Use of non‐steroidal antiandrogens resulted in lower overall survival times (hazard ratio [HR] 1.24, 95% confidence interval [CI] 1.05–1.48, six studies, 2712 participants) and greater clinical progression (1 year: risk ratio [RR] 1.25, 95% CI 1.08–1.45, five studies, 2067 participants; 70 weeks: RR 1.26, 95% CI 1.08–1.45, six studies, 2373 participants; 2 years: RR 1.14, 95% CI 1.04–1.25, three studies, 1336 participants), as well as treatment failure (1 year: RR 1.19, 95% CI 1.02–1.38, four studies, 1539 participants; 70 weeks: RR 1.27, 95% CI 1.05–1.52, five studies, 1845 participants; 2 years: RR 1.14, 95% CI 1.05–1.24, two studies, 808 participants), compared with medical or surgical castration. The quality of evidence for overall survival, clinical progression and treatment failure was rated as moderate according to GRADE. Use of non‐steroidal antiandrogens increased the risk for treatment discontinuation as a result of adverse events (RR 1.82, 95% CI 1.13–2.94, eight studies, 1559 participants), including events such as breast pain (RR 22.97, 95% CI 14.79–35.67, eight studies, 2670 participants) and gynaecomastia (RR 8.43, 95% CI 3.19–22.28, nine studies, 2774 participants). The risk of other adverse events, such as hot flushes (RR 0.23, 95% CI 0.19–0.27, nine studies, 2774 participants), was decreased when non‐steroidal antiandrogens were used. The quality of evidence for breast pain, gynaecomastia and hot flushes was rated as moderate according to GRADE. The effects of non‐steroidal antiandrogens on cancer‐specific survival and biochemical progression remained unclear. Non‐steroidal antiandrogen monotherapy compared with medical or surgical castration monotherapy for advanced prostate cancer is less effective in terms of overall survival, clinical progression, treatment failure and treatment discontinuation resulting from adverse events.
Evidence quality was rated as moderate according to GRADE; therefore, further research is likely to have an important impact on results for patients with advanced but non‐metastatic prostate cancer treated with non‐steroidal antiandrogen monotherapy.
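The review's analysis rule described above — inverse-variance fixed-effect pooling when heterogeneity is low (I² < 50%), switching to a random-effects model otherwise — can be illustrated with the following sketch. The per-study effect sizes are invented and the review itself used Review Manager 5.2; this is only a minimal illustration of the pooling arithmetic, using the DerSimonian–Laird estimate for the between-study variance.

```python
# Minimal sketch of fixed-effect vs random-effects pooling of log hazard ratios,
# with the model chosen by the I^2 threshold used in the review above.
import numpy as np

def pool_log_hr(log_hr, se):
    """Pool per-study log hazard ratios; returns (pooled log HR, I^2, model used)."""
    log_hr, se = np.asarray(log_hr, float), np.asarray(se, float)
    w = 1 / se**2                                     # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * log_hr) / np.sum(w)
    q = np.sum(w * (log_hr - fixed) ** 2)             # Cochran's Q
    k = len(log_hr)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    if i2 < 50:
        return fixed, i2, "fixed-effect"
    # DerSimonian-Laird between-study variance, then random-effects weights
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    return np.sum(w_re * log_hr) / np.sum(w_re), i2, "random-effects"

log_hr = np.log([1.31, 1.18, 1.25, 1.10, 1.40, 1.22])   # illustrative per-study HRs
se = np.array([0.10, 0.12, 0.09, 0.15, 0.20, 0.11])     # illustrative standard errors
pooled, i2, model = pool_log_hr(log_hr, se)
print(f"{model}: HR {np.exp(pooled):.2f}, I^2 {i2:.0f}%")
```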

9.
Anatomical, neurological and behavioural research has suggested differences between the brains of right‐ and non‐right‐handed individuals, including differences in brain structure, electroencephalogram patterns, explicit memory and sleep architecture. Some studies have also found decreased longevity in left‐handed individuals. We therefore aimed to determine whether handedness independently affects the relationship between volatile anaesthetic concentration and the bispectral index, the incidence of definite or possible intra‐operative awareness with explicit recall, or postoperative mortality. We studied 5585 patients in this secondary analysis of data collected in a multicentre clinical trial. There were 4992 (89.4%) right‐handed and 593 (10.6%) non‐right‐handed patients. Handedness was not associated with (a) an alteration in anaesthetic sensitivity in terms of the relationship between the bispectral index and volatile anaesthetic concentration (estimated effect on the regression relationship −0.52 parallel shift; 95% CI −1.27 to 0.23, p = 0.17); (b) the incidence of intra‐operative awareness with 26/4992 (0.52%) right‐handed vs 1/593 (0.17%) non‐right‐handed (difference = 0.35%; 95% CI −0.45 to 0.63%; p = 0.35); or (c) postoperative mortality rates (90‐day relative risk for non‐right‐handedness 1.19, 95% CI 0.76–1.86; p = 0.45). Thus, no change in anaesthetic management is indicated for non‐right‐handed patients.

10.
Background

Obstetric early warning systems are recommended for monitoring hospitalised pregnant and postnatal women. We decided to compare: (i) vital sign values used to define physiological normality; (ii) symptoms and signs used to escalate care; (iii) type of chart used; and (iv) presence of explicit instructions for escalating care.

Methods

One-hundred-and-twenty obstetric early warning charts and escalation protocols were obtained from consultant-led maternity units in the UK and Channel Islands. These data were extracted: values used to determine normality for each maternal vital sign; chart colour-coding; instructions following early warning system triggering; other criteria used as triggers.

Results

There was considerable variation in the charts, warning systems and escalation protocols. Of 120 charts, 89.2% used colour; 69.2% used colour-coded escalation systems. Forty-one (34.2%) systems required the calculation of weighted scores. Seventy-five discrete combinations of ‘normal’ vital sign ranges were found, the most common being: heart rate=50–99 beats/min; respiratory rate=11–20 breaths/min; blood pressure, systolic=100–149 mmHg, diastolic ≤89 mmHg; SpO2=95–100%; temperature=36.0–37.9°C; and Alert-Voice-Pain-Unresponsive assessment=Alert. Most charts (90.8%) provided instructions about who to contact following triggering, but only 41.7% gave instructions about subsequent observation frequency.

Conclusion

The wide range of ‘normal’ vital sign values in different systems suggests a lack of equity in the processes for detecting deterioration and escalating care in hospitalised pregnant and postnatal women. Agreement regarding ‘normal’ vital sign ranges is urgently required and would assist the development of a standardised obstetric early warning system and chart.
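As an illustration of how the most common 'normal' ranges quoted above could be encoded as a trigger check, the sketch below flags any vital sign outside those ranges. The ranges come from the abstract; the function, its behaviour and the AVPU handling are assumptions, not any unit's actual escalation protocol.

```python
# Sketch: encode the most common 'normal' maternal vital-sign ranges and
# return which observations fall outside them (i.e. would trigger escalation).
NORMAL_RANGES = {
    "heart_rate":       (50, 99),      # beats/min
    "respiratory_rate": (11, 20),      # breaths/min
    "systolic_bp":      (100, 149),    # mmHg
    "diastolic_bp":     (None, 89),    # mmHg, upper limit only
    "spo2":             (95, 100),     # %
    "temperature":      (36.0, 37.9),  # degrees C
}

def triggers(obs: dict) -> list[str]:
    """Return the vital signs outside the 'normal' ranges (plus any non-Alert AVPU)."""
    out = []
    for name, (lo, hi) in NORMAL_RANGES.items():
        value = obs[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            out.append(name)
    if obs.get("avpu", "Alert") != "Alert":
        out.append("avpu")
    return out

print(triggers({"heart_rate": 112, "respiratory_rate": 18, "systolic_bp": 96,
                "diastolic_bp": 60, "spo2": 97, "temperature": 37.2, "avpu": "Alert"}))
# -> ['heart_rate', 'systolic_bp']
```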

11.
Our aim was to prospectively determine the predictive capabilities of SEPSIS‐1 and SEPSIS‐3 definitions in the emergency departments and general wards. Patients with a National Early Warning Score (NEWS) of 3 or above and suspected or proven infection were enrolled over a 24‐h period in 13 Welsh hospitals. The primary outcome measure was mortality within 30 days. Out of the 5422 patients screened, 431 fulfilled inclusion criteria and 380 (88%) were recruited. Using the SEPSIS‐1 definition, 212 patients had sepsis. When using the SEPSIS‐3 definitions with Sequential Organ Failure Assessment (SOFA) score ≥ 2, there were 272 septic patients, whereas with quickSOFA score ≥ 2, 50 patients were identified. For the prediction of the primary outcome, SEPSIS‐1 criteria had a sensitivity (95%CI) of 65% (54–75%) and specificity of 47% (41–53%); SEPSIS‐3 criteria had a sensitivity of 86% (76–92%) and specificity of 32% (27–38%). The SEPSIS‐3 and SEPSIS‐1 definitions were associated with hazard ratios (95%CI) of 2.7 (1.5–5.6) and 1.6 (1.3–2.5), respectively. Scoring system discrimination evaluated by receiver operating characteristic curves was highest for the Sequential Organ Failure Assessment score (0.69 (95%CI 0.63–0.76)), followed by NEWS (0.58 (0.51–0.66)) (p < 0.001). Systemic inflammatory response syndrome criteria (0.55 (0.49–0.61)) and quickSOFA score (0.56 (0.49–0.64)) could not predict outcome. The SEPSIS‐3 definition identified patients with the highest risk. The Sequential Organ Failure Assessment score and NEWS were better predictors of poor outcome. The Sequential Organ Failure Assessment score appeared to be the best tool for identifying patients with high risk of death and sepsis‐induced organ dysfunction.
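The sensitivity and specificity figures quoted above follow directly from a 2 × 2 table of definition-positive/negative against 30-day mortality; the sketch below shows that arithmetic. The counts are invented (chosen only so the output lands near the SEPSIS-3 row) and are not the study's data.

```python
# Minimal sketch: sensitivity and specificity of a sepsis definition derived
# from a 2x2 table against the primary outcome (30-day mortality).
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # outcome-positive patients flagged by the definition
    specificity = tn / (tn + fp)   # outcome-negative patients not flagged
    return sensitivity, specificity

# Hypothetical counts for a definition applied to 380 patients
sens, spec = sens_spec(tp=77, fn=13, tn=93, fp=197)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```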

12.
We systematically reviewed 31 adult randomised clinical trials of the i‐gel® vs laryngeal mask airway. The mean (95% CI) leak pressure difference and relative risk (95% CI) of insertion on the first attempt were similar: 0.40 (−1.23 to 2.02) cmH2O and 0.98 (0.95–1.01), respectively. The mean (95% CI) insertion time and the relative risk (95% CI) of sore throat were less with the i‐gel: by 1.46 (0.33–2.60) s, p = 0.01, and 0.59 (0.38–0.90), p = 0.02, respectively. The relative risk of poor fibreoptic view through the i‐gel was 0.29 (0.16–0.54), p < 0.0001. All outcomes displayed substantial heterogeneity, I² ≥ 75%. Subgroup analyses did not decrease heterogeneity, but suggested that insertion of the i‐gel was faster than for first‐generation laryngeal mask airways and that the i‐gel leak pressure was higher than first‐generation, but lower than second‐generation, laryngeal mask airways. A less frequent sore throat was the main clinical advantage of the i‐gel.

13.
Vocal cord paralysis (VCP) may complicate thoracic surgery and is associated with increased morbidity and mortality. Among lung transplant (LTx) recipients, chronic pulmonary aspiration can contribute to chronic allograft dysfunction (CLAD). We herein assessed the unknown incidence and clinical impact of VCP in a large LTx cohort. All first‐time bilateral LTx recipients transplanted between January 2010 and June 2015 were included in a single‐centre retrospective analysis. Bronchoscopy reports were assessed for VCP. Patients exhibiting VCP were compared to propensity score‐matched negative controls regarding CLAD onset and graft survival, and secondary end‐points including inpatient duration and complications, and lower respiratory tract infections (LRTI) within 24 months. In total, 583/713 (82%) patients were included in the analysis. A total of 52 (8.9%) exhibited VCP, which was transient in 34/52 patients (65%), recovering after a median of 6 months (IQR 2–12). Compared to 268 controls, 3‐year graft survival and CLAD‐free survival were non‐inferior in VCP [HR 0.74 (95% CI 0.35–1.57) and HR 0.74 (95% CI 0.39–1.41), respectively]. Duration of hospitalization was similar and no differences in LRTI rates or airway complications were observed. Lower pre‐Tx BMI increased risk for VCP [HR 0.88 (95% CI 0.79–0.99)]. Overall, VCP did not adversely affect graft and CLAD‐free survival or secondary outcomes including LRTIs and hospitalizations.

14.
Purpose

The purpose of this study was to identify computed tomography (CT) features associated with early recurrence of sigmoid volvulus (SV) after a first uncomplicated episode and to develop a score for early SV recurrence risk stratification.

Materials and methods

A total of 95 patients (59 men, 36 women; mean age, 72 ± 15 [SD] years; age range: 57–87 years) who underwent abdominal CT examination for a first uncomplicated SV episode from January 1st 2006 to July 31st 2020 in two French University Hospitals were retrospectively included. An SV recurrence occurring within six months was defined as early SV recurrence. CT findings associated with SV were searched for using univariable analysis. CT features associated with early recurrence were computed into a multivariable logistic regression model that was further used to build a score to stratify SV recurrence risk. Kaplan-Meier curves were built to evaluate recurrence-free survival.

Results

Early SV recurrence occurred in 53 patients (56%). At multivariable analysis, left lateral section volume < 150 cm3 and maximal colon distension > 10 cm were associated with early SV recurrence (odds ratio [OR] = 4.62; 95% CI: 1.77–13.33; P = 0.002 and OR = 4.43; 95% CI: 1.63–13.63; P = 0.005, respectively), and an early SV recurrence score with 1 point attributed to each of these two variables was built. Early SV recurrence was observed in 26%, 54% and 89% of patients with scores of 0, 1 and 2, respectively (P < 0.001).

Conclusion

A simple CT score allows stratification of early SV recurrence risk after a first episode and helps to select patients who would not benefit from prophylactic colonic surgery because of a low SV recurrence risk.
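The two-item score described above translates directly into a small function: one point for left lateral section volume < 150 cm3 and one point for maximal colon distension > 10 cm, with the observed early-recurrence rates attached per score. The code is an illustrative encoding of the published rule, not a validated clinical tool.

```python
# Sketch of the two-item CT score for early sigmoid volvulus recurrence.
OBSERVED_EARLY_RECURRENCE = {0: 0.26, 1: 0.54, 2: 0.89}  # rates reported in the study cohort

def sv_recurrence_score(left_lateral_volume_cm3: float, max_colon_distension_cm: float) -> int:
    score = 0
    if left_lateral_volume_cm3 < 150:   # 1 point: small left lateral section volume
        score += 1
    if max_colon_distension_cm > 10:    # 1 point: marked colon distension
        score += 1
    return score

s = sv_recurrence_score(left_lateral_volume_cm3=120, max_colon_distension_cm=11.5)
print(f"score {s}: observed early recurrence {OBSERVED_EARLY_RECURRENCE[s]:.0%}")
```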

15.
European Urology 2020;77(1):101–109
Background

Vesical Imaging Reporting and Data System (VI-RADS) score is adopted to provide preoperative bladder cancer (BCa) staging. Repeated transurethral resection of bladder tumor (Re-TURBT) is recommended in most of high-risk non–muscle-invasive bladder cancers (HR-NMIBCs) due to possibility of persistent/understaged disease after initial TURBT. No diagnostic tools able to improve patient’s stratification for such recommendation exist.

Objective

To (1) prospectively validate VI-RADS for discriminating between NMIBC and muscle-invasive bladder cancer (MIBC) at TURBT, and (2) evaluate the accuracy of VI-RADS for identifying HR-NMIBC patients who could avoid Re-TURBT and detecting those at higher risk for understaging after TURBT.

Design, setting, and participants

Patients with BCa suspicion were offered multiparametric magnetic resonance imaging (mpMRI) before TURBT. According to VI-RADS, a cutoff of ≥3 to define MIBC was assumed. TURBT reports were compared with preoperative VI-RADS scores to assess accuracy of mpMRI for discriminating between NMIBC and MIBC. HR-NMIBC Re-TURBT reports were compared with preoperatively recorded VI-RADS scores to assess mpMRI accuracy in predicting Re-TURBT outcomes.

Intervention

Multiparametric MRI of the bladder before TURBT.

Outcome measurements and statistical analysis

Sensitivity, specificity, positive (PPV) and negative (NPV) predictive values were calculated for mpMRI performance in patients undergoing TURBT and for HR-NMIBC patients candidate for Re-TURBT. Performance of mpMRI was assessed by receiver operating characteristic curve analysis. Kappa (κ) statistics was used to estimate inter- and intrareader variability.

Results and limitations

A total of 231 patients were enrolled. Multiparametric MRI showed sensitivity, specificity, PPV, and NPV for discriminating NMIBC from MIBC at initial TURBT of 91.9% (95% confidence interval [CI]: 82.2–97.3), 91.1% (95% CI: 85.8–94.9), 77.5% (95% CI: 65.8–86.7), and 97.1% (95% CI: 93.3–99.1), respectively. The area under the curve (AUC) was 0.94 (95% CI: 0.91–0.97). Among HR-NMIBC patients (n = 114), mpMRI before TURBT showed sensitivity, specificity, PPV, and NPV of 85% (95% CI: 62.1–96.8), 93.6% (95% CI: 86.6–97.6), 74.5% (95% CI: 52.4–90.1), and 96.6% (95% CI: 90.5–99.3), respectively, to identify patients with MIBC at Re-TURBT. The AUC was 0.93 (95% CI: 0.87–0.97).

Conclusions

VI-RADS is accurate for discriminating between NMIBC and MIBC. Within HR-NMIBC cases, VI-RADS could, in future, improve the selection of patients who are candidates for Re-TURBT.

Patient summary

We investigated the accuracy of the Vesical Imaging Reporting and Data System (VI-RADS) score to assess bladder cancer staging before transurethral resection of bladder tumors, and we explored the performance of the VI-RADS score as a future preoperative predictive tool for the selection of high-risk non–muscle-invasive bladder cancer patients who are candidates for undergoing early repeated transurethral resection of the primary tumor site.
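The diagnostic indices reported above (sensitivity, specificity, PPV and NPV at a VI-RADS cutoff of ≥3) are all derived from one 2 × 2 table; the sketch below shows that derivation on simulated scores. The prevalence and score distribution are assumptions, not the trial data.

```python
# Sketch: diagnostic indices for a VI-RADS >= 3 cutoff against muscle invasion at TURBT.
import numpy as np

def diagnostic_indices(vi_rads: np.ndarray, mibc: np.ndarray, cutoff: int = 3) -> dict:
    positive = vi_rads >= cutoff
    tp = int(np.sum(positive & (mibc == 1)))
    fp = int(np.sum(positive & (mibc == 0)))
    fn = int(np.sum(~positive & (mibc == 1)))
    tn = int(np.sum(~positive & (mibc == 0)))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

rng = np.random.default_rng(2)
mibc = rng.binomial(1, 0.25, 231)                                 # hypothetical prevalence
vi_rads = np.clip(1 + 2 * mibc + rng.integers(0, 3, 231), 1, 5)   # scores loosely tracking invasion
print(diagnostic_indices(vi_rads, mibc))
```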

16.
Aotearoa New Zealand uses a single early warning score (EWS) across all public and private hospitals to detect adult inpatient physiological deterioration. This combines the aggregate weighted scoring of the UK National Early Warning Score with single parameter activation from Australian medical emergency team systems. We conducted a retrospective analysis of a large vital sign dataset to validate the predictive performance of the New Zealand EWS in discriminating between patients at risk of serious adverse events and compared this with the UK EWS. We also compared predictive performance for patients admitted under medical vs. surgical specialties. A total of 1,738,787 aggregate scores (13,910,296 individual vital signs) were obtained from 102,394 hospital admissions to six hospitals within the Canterbury District Health Board of New Zealand's South Island. Predictive performance of each scoring system was determined using area under the receiver operating characteristic curve. Analysis showed that the New Zealand EWS is equivalent to the UK EWS in predicting patients at risk of serious adverse events (cardiac arrest, death and/or unanticipated ICU admission). Area under the receiver operating characteristic curve for both EWSs for any adverse outcome was 0.874 (95%CI 0.871–0.878) and 0.874 (95%CI 0.870–0.877), respectively. Both EWSs showed superior predictive value for cardiac arrest and/or death in patients admitted under surgical rather than medical specialties. Our study is the first validation of the New Zealand EWS in predicting serious adverse events in a broad dataset and supports previous work showing the UK EWS has superior predictive performance in surgical rather than medical patients.  相似文献   

17.
De novo donor‐specific antibodies (dnDSAs) have been associated with reduced graft survival. Tacrolimus (TAC)–based regimens are the most common among immunosuppressive approaches used in clinical practice today, yet an optimal therapeutic dose to prevent dnDSAs has not been established. We evaluated mean TAC C0 (tacrolimus trough concentration) and TAC time in therapeutic range for the risk of dnDSAs in a cohort of 538 patients in the first year after kidney transplantation. A mean TAC C0 < 8 ng/mL was associated with dnDSAs by 6 months (odds ratio [OR] 2.51, 95% confidence interval [CI] 1.32–4.79, P = .005) and by 12 months (OR 2.32, 95% CI 1.30–4.15, P = .004), and there was a graded increase in risk with lower mean TAC C0. TAC time in the therapeutic range of <60% was associated with dnDSAs (OR 2.05, 95% CI 1.28–3.30, P = .003) and acute rejection (hazard ratio [HR] 4.18, 95% CI 2.31–7.58, P < .001) by 12 months and death‐censored graft loss by 5 years (HR 3.12, 95% CI 1.53–6.37, P = .002). TAC minimization may come at a cost of higher rates of dnDSAs, and TAC time in therapeutic range may be a valuable strategy to stratify patients at increased risk of adverse outcomes.
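"Time in therapeutic range" as used above can be computed in several ways; the sketch below uses simple linear interpolation between trough measurements (a Rosendaal-style approach), which is an assumption — the abstract does not specify the study's exact method — and the target range, dates and levels are illustrative.

```python
# Sketch: fraction of follow-up time with interpolated TAC trough (C0) inside a target range.
def time_in_range(days: list[int], levels: list[float], lo: float = 5.0, hi: float = 8.0) -> float:
    """Fraction of follow-up time with linearly interpolated TAC C0 inside [lo, hi]."""
    in_range = total = 0.0
    for (d0, c0), (d1, c1) in zip(zip(days, levels), zip(days[1:], levels[1:])):
        span = d1 - d0
        total += span
        # walk the interval in 1-day steps and interpolate the trough level at each midpoint
        for step in range(span):
            c = c0 + (c1 - c0) * (step + 0.5) / span
            in_range += 1.0 if lo <= c <= hi else 0.0
    return in_range / total if total else 0.0

days   = [0, 14, 30, 60, 90, 180, 365]          # days post-transplant with a trough measurement
levels = [9.5, 8.2, 7.1, 6.0, 4.2, 6.5, 7.0]    # ng/mL
print(f"TAC time in therapeutic range: {time_in_range(days, levels):.0%}")
```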

18.
Background: Allowing spontaneous respiration after cardiac surgery eliminates complications related to mechanical ventilation and optimizes cardiopulmonary interaction. Epidural analgesia has been proposed to promote early extubation after cardiac surgery. Objective: To identify the characteristics of patients with epidural analgesia and safety profiles with respect to the timing of extubation following cardiac surgery. Design and method: A retrospective chart review of patients who underwent cardiac surgery during a 5‐year period. Demographic, procedural, and perioperative variables were analyzed to investigate factors that affect the timing of extubation. Results: A total of 750 records were reviewed. The patients’ median age was 12 months, and 52% were infants (<1 year). Cardiopulmonary bypass was used in 75% of the patients. The study population was classified into three groups according to the timing of extubation: 66% were extubated in the operating room or upon arrival at the PICU (Immediate), 15% were extubated within 24 h (mean, 10.8 h; 95% CI, 9.0–12.6) (Early), and 19% were extubated after 24 h (Delayed). For the Immediate and Early groups, multivariate logistic regression identified young age, increased cross‐clamp time, and inotrope score as independent risk factors for the need for mechanical ventilation. Postextubation respiratory acidosis (mean PaCO2, 50 mmHg; 95% CI, 49–51) was well tolerated by all patients. There were no neurologic complications related to the epidural technique. Conclusion: Epidural analgesia in children undergoing cardiac surgery provides stable analgesia without complications in our experience.

19.
Communicating non‐urgent, urgent and frank emergency requests for assistance between anaesthetists in theatre often requires a ‘go‐between’ – frequently a non‐anaesthetic healthcare professional – to transmit information. We compared the currently recommended situation, background, assessment, recommendation (SBAR) tool with a newly devised Traffic Lights tool (‘red alert’, ‘amber assist’ and ‘green query’) in a simulation study to assess communication quality using 12 validated clinical scenarios of varying urgency. Compared with SBAR, the Traffic Lights tool was used more consistently (‘very clear’ or ‘clear’: Traffic Lights 94% vs SBAR 69%); transferred information better (two or three pieces of information correctly transferred: Traffic Lights 85% vs SBAR 44%); and was judged to lead to greater clarity (all p < 0.0001). Message delivery time was significantly reduced (Traffic Lights 20.5 s vs SBAR 45.5 s, median (95% CI) difference 25 (19–30) s, p < 0.001). Users rated the Traffic Lights system as significantly more useful than SBAR, with 96% of participants preferring the Traffic Lights tool. Results were independent of go‐between training. We recommend the adoption of this communication tool as standard practice for anaesthetic teams.

20.
Wrist fractures are common in postmenopausal women and are associated with functional decline. Fracture patterns after wrist fracture are unclear. The goal of this study was to determine the frequency and types of fractures that occur after a wrist fracture among postmenopausal women. We carried out a post hoc analysis of data from the Women's Health Initiative Observational Study and Clinical Trials (1993–2010), conducted at 40 US clinical centers. Participants were postmenopausal women aged 50 to 79 years at baseline. Mean follow‐up duration was 11.8 years. Main measures included incident wrist, clinical spine, humerus, upper extremity, lower extremity, hip, and total non‐wrist fractures and bone mineral density (BMD) in a subset. Among women who experienced wrist fracture, 15.5% subsequently experienced non‐wrist fracture. The hazard for non‐wrist fractures was higher among women who had experienced previous wrist fracture than among women who had not experienced wrist fracture: non‐wrist fracture overall (hazard ratio [HR] = 1.40, 95% confidence interval [CI] 1.33–1.48), spine (HR = 1.48, 95% CI 1.32–1.66), humerus (HR = 1.78, 95% CI 1.57–2.02), upper extremity (non‐wrist) (HR = 1.88, 95% CI 1.70–2.07), lower extremity (non‐hip) (HR = 1.36, 95% CI 1.26–1.48), and hip (HR = 1.50, 95% CI 1.32–1.71) fracture. Associations persisted after adjustment for BMD, physical activity, and other risk factors. Risk of non‐wrist fracture was higher in women who were younger when they experienced wrist fracture (interaction p value 0.02). Associations between incident wrist fracture and subsequent non‐wrist fracture did not vary by baseline BMD category (normal, low bone density, osteoporosis). A wrist fracture is associated with increased risk of subsequent hip, vertebral, upper extremity, and lower extremity fractures. There may be substantial missed opportunity for intervention in the large number of women who present with wrist fractures. © 2015 American Society for Bone and Mineral Research.

