Similar Documents (20 results)
1.

Background

Epinephrine is recommended for the treatment of non-shockable out of hospital cardiac arrest (OHCA) to obtain return of spontaneous circulation (ROSC). Epinephrine efficiency and safety remain under debate.

Objective

We describe the association between the cumulative dose of epinephrine and failure to achieve ROSC during the first 30 min of advanced life support (ALS).

Methodology

A retrospective observational cohort study using the Paris SAMU 75 registry, including all non-traumatic OHCA. All OHCA patients receiving epinephrine during the first 30 min of ALS were enrolled. The cumulative epinephrine dose given from the start of ALS until ROSC was retrieved from medical reports.

Results

Among 1532 patients with OHCA, 776 (51%) had an initial non-shockable rhythm. Fifty-four patients were excluded for missing data. The mean cumulative dose of epinephrine was 10 ± 4 mg in patients who failed to achieve ROSC (ROSC−) and 4 ± 3 mg (p = 0.04) in those who achieved ROSC. ROC curve analysis indicated a cut-off of 7 mg total cumulative epinephrine associated with ROSC− (AUC = 0.89 [0.86–0.92]). Using propensity score analysis including age, sex and no-flow duration, the association with ROSC− remained significant only for epinephrine > 7 mg (p ≤ 10⁻³, OR [95% CI] = 1.53 [1.42–1.65]).

Conclusion

An association between the total cumulative epinephrine dose administered during OHCA resuscitation and ROSC− was observed, with a threshold of 7 mg best identifying patients with refractory OHCA. We suggest using this threshold to guide the termination of ALS and the early decision to implement extracorporeal life support or organ harvesting within the first 30 min of ALS.

2.
3.

Introduction

The relationship between time of day and the clinical outcomes of patients with out-of-hospital cardiac arrest (OHCA) remains inconclusive. We undertook a meta-analysis to assess the available evidence on the relationship between nighttime and prognosis for patients with OHCA.

Materials and methods

PubMed and EMBASE were searched through June 20, 2018, to identify all studies assessing the relationship between nighttime and prognosis for patients with OHCA. Random-effects models were used to estimate odds ratios (ORs) with 95% confidence intervals (CIs).
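As an illustration of the random-effects pooling referenced above, the sketch below implements the DerSimonian–Laird estimator for combining study-level odds ratios; the per-study odds ratios and standard errors are hypothetical placeholders, not data from the eight included OHCA studies.

```python
import math

# Hypothetical per-study odds ratios and standard errors of log(OR);
# illustrative placeholders only, not the actual OHCA studies.
studies = [
    (1.10, 0.12),
    (1.35, 0.20),
    (1.22, 0.08),
    (1.50, 0.25),
]

y = [math.log(or_) for or_, _ in studies]  # log odds ratios
v = [se ** 2 for _, se in studies]         # within-study variances

# Fixed-effect weights and Cochran's Q (heterogeneity statistic)
w = [1.0 / vi for vi in v]
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1

# DerSimonian-Laird between-study variance and I^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooled OR with 95% CI
w_re = [1.0 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
lo, hi = math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re)

print(f"Pooled OR = {math.exp(y_re):.2f}, 95% CI {lo:.2f}-{hi:.2f}, I^2 = {i2:.1f}%")
```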

Results

Eight observational studies met the inclusion criteria. Meta-analysis of 8 studies showed that compared with nighttime, the daytime OHCA patients had higher 1-month/in-hospital survival (OR, 1.25; 95% CI, 1.15–1.37; P = 0.00), with high heterogeneity among the studies (I² = 82.8%, P = 0.00).

Conclusions

Patients who experienced OHCA during the nighttime had lower 1-month/in-hospital survival than those with daytime OHCA. In addition to arrest event and pre-hospital care factors, patients' comorbidity and hospital-based care may also be responsible for lower survival at night.

4.

Objective

To evaluate the cost-effectiveness of structured activities of daily living (ADL) retraining during posttraumatic amnesia (PTA) plus treatment as usual (TAU) vs TAU alone for inpatient rehabilitation following severe traumatic brain injury (TBI).

Design

Trial-based economic evaluation from a health-system perspective.

Setting

Inpatient rehabilitation center.

Participants

Participants (N=104) admitted to rehabilitation and in PTA for >7 days following severe TBI.

Interventions

Structured ADL retraining during PTA plus TAU vs TAU alone. Structured ADL retraining was manualized to minimize the risk of agitation and maximize functional improvement, following principles of errorless and procedural learning and targeting individualized therapy goals. TAU included physiotherapy and/or speech therapy during PTA plus ADL retraining after PTA emergence.

Main Outcome Measures

FIM total scores at baseline, PTA emergence, hospital discharge, and final follow-up (2 months postdischarge). FIM total scores were calculated as the sum of 5 FIM motor self-care items and a FIM meal-preparation item.

Results

Structured ADL retraining during PTA significantly increased functional independence at PTA emergence (mean difference: 4.90, SE: 1.4, 95% confidence interval [CI]: 1.5, 8.3) and hospital discharge (mean difference: 5.22, SE: 1.4, 95% CI: 1.8, 8.7). Even in our most pessimistic scenario, structured ADL retraining was cost-saving as compared to TAU (mean: -$7762; 95% CI: -$8105, -$7419). Together, these results imply that structured ADL retraining dominates (less costly but no less effective) TAU when effectiveness is evaluated at PTA emergence and hospital discharge.

Conclusions

Structured ADL retraining during PTA yields net cost-savings to the health system and offers a cost-effective means of increasing functional independence at PTA emergence and hospital discharge.

5.

Objective

Both slow gait speed (GS) and higher levels of frailty are associated with adverse outcomes in community-dwelling older people. However, these measures are not routinely used to stratify risk status in the hospital setting. Here we assessed their predictive validity in older inpatients.

Design

A prospective cohort study.

Setting

Inpatient rehabilitation wards of a tertiary hospital.

Participants

Adults 65 years and older (N=258).

Interventions

A frailty index (FI) was calculated from routinely collected data and GS was determined from a timed 10-meter walk test.

Main Outcome Measures

Adverse outcomes were longer length of stay (≥75th percentile), poor discharge outcome (discharge to a higher level of care or inpatient mortality), and inpatient delirium and falls.

Results

Mean age ± SD was 79±8 years and 54% were women. Mean FI ± SD on admission was 0.42±0.13, and an FI could be derived in all participants. Mean GS ± SD was 0.26±0.33 m/sec. Those unable to complete a timed walk on admission (50%) were allocated a GS of 0. There was a weak but significant inverse relationship between FI and GS (correlation coefficient −0.396). Both parameters were significantly associated with longer length of stay (P<.001), poor discharge outcome (P≤.001), and delirium (P<.05). The prevalence of adverse outcomes was highest in the cohort who were more frail and unable to mobilize at admission to rehabilitation.

Conclusions

FI and GS each showed predictive validity for adverse outcomes. In a geriatric rehabilitation setting, they measure different aspects of vulnerability and combining the 2 may add value in identifying patients most at risk.

6.

Objectives

To assess the difference in survival and neurological outcomes between endotracheal tube (ETT) intubation and supraglottic airway (SGA) devices used during out-of-hospital cardiac arrest (OHCA).

Methods

A systematic search of five databases was performed by two independent reviewers up to September 2018. Included studies reported on (1) OHCA or cardiopulmonary resuscitation, and (2) endotracheal intubation versus supraglottic airway device insertion. Exclusion criteria were (1) simulation studies, (2) studies that selectively included/excluded patients, and (3) in-hospital cardiac arrest. Odds ratios (ORs) with random-effects modelling were used. Primary outcomes were (1) return of spontaneous circulation (ROSC), (2) survival to hospital admission, (3) survival to hospital discharge, and (4) discharge in a neurologically intact state.

Results

Twenty-nine studies (n = 539,146) showed that overall, ETT use resulted in a heterogeneous, but significant increase in ROSC (OR = 1.44; 95% CI = 1.27 to 1.63; I² = 91%; p < 0.00001) and survival to admission (OR = 1.36; 95% CI = 1.12 to 1.66; I² = 91%; p = 0.002). There was no significant difference in survival to discharge or neurological outcome (p > 0.0125). On sensitivity analysis of RCTs, there was no significant difference in ROSC, survival to admission, survival to discharge or neurological outcome (p > 0.0125). On analysis of automated chest compression, without heterogeneity, ETT provided a significant increase in ROSC (OR = 1.55; 95% CI = 1.20 to 2.00; I² = 0%; p = 0.0009) and survival to admission (OR = 2.16; 95% CI = 1.54 to 3.02; I² = 0%; p < 0.00001).

Conclusions

The overall heterogeneous survival benefit with ETT was not replicated in the low-risk RCTs, which showed no significant difference in survival or neurological outcome. In the presence of automated chest compressions, ETT intubation may confer a survival benefit.

7.
8.

Objective

This study assessed the association between the timing of first epinephrine administration (EA) and the neurological outcomes following out-of-hospital cardiac arrests (OHCAs) with both initial shockable and non-shockable rhythms.

Methods

This was a post-hoc analysis of a multicenter prospective cohort study (SOS-KANTO 2012), which registered OHCA patients in the Kanto region of Japan from January 2012 to March 2013. We included consecutive adult OHCA patients who received epinephrine. The primary outcome was a favorable 1-month neurological outcome, defined as cerebral performance category (CPC) 1 or 2. Secondary outcomes were 1-month survival and return of spontaneous circulation (ROSC) after hospital arrival. Multivariable logistic regression analysis was used to determine the association between each minute of delay from the emergency call to first EA, in either the prehospital or in-hospital setting, and outcomes.

Results

Of the 16,452 patients, 9344 were eligible for our analyses. In univariable analysis, the delay in EA was associated with decreased favorable neurological outcomes only when the initial rhythm was a non-shockable rhythm. In multivariable analyses, delay in EA was associated with decreased ROSC (adjusted odds ratio [OR] for one minute delay, 0.97; 95% confidence interval [CI], 0.96–0.98) and 1-month survival (adjusted OR, 0.95; 95% CI, 0.92–0.97) when the initial rhythm was a non-shockable rhythm, whereas during a shockable rhythm, delay in EA was not associated with decreased ROSC and 1-month survival.

Conclusions

When assessing the effectiveness of epinephrine for OHCA, the time-dependent effects of epinephrine should be considered. Early EA guided by the underlying pathophysiology also warrants consideration.

9.

Objective

To determine whether prehospital point-of-care lactate (pLA) is associated with mortality, admission, and duration of hospital stay.

Design

A retrospective clinical audit, where elevated lactate was defined as ≥2 mmol/L.

Setting

The ambulance service and primary referral hospital in the Australian Capital Territory from 1st July 2014 to 30th June 2015.

Participants

Adult patients (≥18 years) who had pLA measured and were transported to the primary referral hospital.

Main outcome measures

Mortality, admission, and duration of hospital stay.

Results

Two hundred fifty-three patients with a median pLA of 2.5 mmol/L (interquartile range [IQR]: 1.5–3.7) were analysed. Overall mortality was 8.3%; 68% were admitted to the hospital and 8.3% to the intensive care unit (ICU). pLA was non-significantly higher in those who died compared with survivors (3.5 [IQR: 2.75–5.85] vs 2.4 [1.5–3.6]; W = 1631.5; p = 0.053). pLA was higher in those admitted to the hospital (2.9 [1.9–3.9] vs 2.0 [1.4–3.1]; W = 5094.5, p = 0.001) and to the ICU (3.2 [2.4–5.7] vs 2.4 [1.5–3.6]; W = 1578.5; p = 0.008). There was no relationship between pLA and duration of stay. Considered as a screening tool at a cut-off of 2.5 mmol/L, pLA had a positive likelihood ratio of 1.61 for mortality and 1.44 for ICU admission; the odds ratio for mortality was 3.76 (95% confidence interval = 1.30, 13.89).

Conclusions

Elevated prehospital lactate was associated with significantly increased ICU and hospital admissions. There may be value in pLA as a screening tool.

10.

Objective

Early identification of shock allows for timely resuscitation. Previous studies note the utility of bedside calculations such as the shock index (SI) and quick sepsis-related organ failure assessment (qSOFA) to detect occult shock. Respiratory rate may also be an important marker of occult shock. The goal of our study was to evaluate whether using a modified SI with respiratory rate would improve identification of emergency department sepsis patients admitted to an ICU or stepdown unit.

Methods

A prospective, observational cohort study of the respiratory adjusted shock index (RASI), defined as HR/SBP × RR/10, was conducted. RASI was calculated from triage vital signs and compared to serum lactate. Primary outcome was admission to a higher level of care defined as ICU or stepdown unit. A multivariable logistic regression model including RASI, SI, lactate, age and sex was performed with disposition as the outcome variable. Areas under the curve (AUC) were calculated to detect occult shock and level of care for RASI, SI, and qSOFA.
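As a concrete illustration of the bedside calculation described above, the snippet below computes the classic shock index and the RASI from triage vital signs; the HR/SBP × RR/10 formula comes from the abstract, while the example vital-sign values are hypothetical.

```python
def shock_index(heart_rate: float, systolic_bp: float) -> float:
    """Classic shock index: heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp


def rasi(heart_rate: float, systolic_bp: float, resp_rate: float) -> float:
    """Respiratory adjusted shock index: HR/SBP x RR/10, as defined in the abstract."""
    return shock_index(heart_rate, systolic_bp) * resp_rate / 10.0


# Hypothetical triage vital signs, not data from the study cohort
hr, sbp, rr = 112, 95, 28
print(f"SI   = {shock_index(hr, sbp):.2f}")   # 1.18
print(f"RASI = {rasi(hr, sbp, rr):.2f}")      # 3.30
```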

Results

408 patients were enrolled; 360 were included in the analysis. Regression analysis revealed that lactate (OR 1.55, z = 4.38, p < 0.0001) and RASI (OR 2.27, z = 3.03, p < 0.002) were predictive of the need for a higher level of care. The AUCs for RASI, SI, and qSOFA to detect occult shock were 0.71, 0.6, and 0.61, respectively. RASI also had a significant AUC for predicting level of care (0.75), compared with SI (0.64) and qSOFA (0.62).

Conclusions

RASI may have utility as a rapid bedside tool for predicting critical illness in sepsis patients.

11.

Background

Experimental and animal studies have suggested that vasopressin may have a favorable survival profile during CPR. This meta-analysis aimed to determine the efficacy of vasopressin in adult cardiac arrest patients.

Methodology

Meta-analysis of randomized controlled trials (RCTs) comparing a vasopressin-containing regimen with an epinephrine-only regimen during CPR in the adult cardiac arrest population.

Results

A total of 6120 patients from 10 RCTs were included in this meta-analysis. Vasopressin use during CPR had no beneficial impact in an unselected population on ROSC [OR 1.19, 95% CI 0.93, 1.52], survival to hospital discharge [OR 1.13, 95% CI 0.89, 1.43], survival to hospital admission [OR 1.12, 95% CI 0.99, 1.27] or favorable neurological outcome [OR 1.02, 95% CI 0.75, 1.38]. ROSC in the in-hospital cardiac arrest setting [OR 2.20, 95% CI 1.08, 4.47] was higher in patients receiving vasopressin. Subgroup analyses revealed an equal or higher chance of ROSC [OR 2.15, 95% CI 1.00, 4.61], a higher likelihood of survival to hospital discharge [OR 2.39, 95% CI 1.34, 4.27] and a favorable neurological outcome [OR 2.58, 95% CI 1.39, 4.79] when vasopressin was used as 4–5 repeated boluses titrated to the desired effect during CPR.

Conclusion

ROSC in in-hospital cardiac arrest patients was significantly better when vasopressin was used. A subgroup analysis of this meta-analysis found that ROSC, survival to hospital admission and discharge, and favorable neurological outcome may be better when vasopressin is used as 4–5 repeated boluses titrated to the desired effect; however, no overall benefit was noted in the unselected cardiac arrest population.

12.

Introduction

Increased use of computed tomography (CT) during injury-related Emergency Department (ED) visits has been reported, despite increased awareness of CT radiation exposure risks. We investigated national trends in the use of chest CT during injury-related ED visits between 2012 and 2015.

Methods

Analyzing injury-related ED visits from the 2012–2015 United States (U.S.) National Hospital Ambulatory Medical Care Survey (NHAMCS), we determined the percentage of visits that had a chest CT and the diagnostic yield of these chest CTs for clinically-significant findings. We used survey-weighted multivariable logistic regression to determine which patient and visit characteristics were associated with chest CT use.

Results

Injury-related visits accounted for 30% of the 135 million yearly ED visits represented in NHAMCS. Of these visits, 817,480 (2%) received a chest CT over the study period. The diagnostic yield was 3.88%. Chest CT utilization did not change significantly, from a rate of 1.73% in 2012 to a rate of 2.31% in 2015 (p = 0.14). Multivariable logistic regression demonstrated increased odds of chest CT for patients seen by residents versus attendings (adjusted odds ratio [AOR] 2.08, 95% confidence interval [CI] 1.41–3.08). Patients aged 18–59 and 60+ had higher adjusted odds of receiving a chest CT than those <18 years (AOR 5.75, CI 3.44–9.61 and AOR 9.81, CI 5.90–16.33, respectively).

Conclusions

Overall chest CT utilization showed an increasing trend from 2012 to 2015, but the change was not statistically significant.

13.

Objective

We sought to evaluate the effectiveness of the “Timed Up and Go” (TUG) and the Chair test as screening tools in the Emergency Department (ED), stratified by sex.

Methods

This prospective cohort study was conducted at a Level 1 Trauma center. After consent, subjects performed the TUG and the Chair test. Subjects were contacted for phone follow-up and asked to self-report interim falling.

Results

Data from 192 subjects were analyzed. At baseline, 71.4% (n = 137) screened positive for increased falls risk based on the TUG evaluation, and 77.1% (n = 148) scored below average on the Chair test. There were no differences by patient sex. By the six-month evaluation, 51 (26.6%) study participants reported at least one fall. Females reported a non-significant higher prevalence of falls compared with males (29.7% versus 22.2%, p = 0.24). The TUG test had a sensitivity of 70.6% (95% CI: 56.2%–82.5%), a specificity of 28.4% (95% CI: 21.1%–36.6%), a positive predictive value (PPV) of 26.3% (95% CI: 19.1%–34.5%) and a negative predictive value (NPV) of 72.7% (95% CI: 59.0%–83.9%). Similar results were observed with the Chair test: a sensitivity of 78.4% (95% CI: 64.7%–88.7%), a specificity of 23.4% (95% CI: 16.7%–31.3%), a PPV of 27.0% (95% CI: 20.1%–34.9%) and an NPV of 75.0% (95% CI: 59.7%–86.8%). No significant differences were observed between sexes.
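The screening metrics quoted above all follow from a single 2×2 table of TUG result versus reported falls at follow-up. The sketch below recomputes them from counts back-calculated from the reported rates (roughly 36 true positives, 101 false positives, 15 false negatives and 40 true negatives); treat these counts as an approximate reconstruction for illustration, not the study's raw data.

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }


# Counts back-calculated from the reported TUG rates (approximate, illustrative)
metrics = screening_metrics(tp=36, fp=101, fn=15, tn=40)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
# sensitivity ~70.6%, specificity ~28.4%, PPV ~26.3%, NPV ~72.7%
```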

Conclusions

There were no significant sex-specific differences in TUG or Chair test screening performance. Neither test performed well as a screening tool for future falls among elderly patients in the ED setting.

14.

Objective

The ABEM ConCert Examination is a summative examination that ABEM-certified physicians are required to pass once in every 10-year cycle to maintain certification. This study was undertaken to identify practice settings of emergency physicians, and to determine if there was a difference in performance on the 2017 ConCert between physicians of differing practice types and settings.

Methods

This was a mixed-methods cross-sectional study using a post-examination survey and test performance data. All physicians taking the 2017 ConCert Examination who completed three survey questions pertaining to practice type, practice location, and teaching were included. These three questions address different aspects of academia: self-identification, an academic setting, and whether the physician teaches.

Results

Among 2796 test administrations of the 2017 ConCert Examination, 2693 (96.3%) completed the three survey questions about practice environment. The majority (N = 2054; 76.3%) self-identified as primarily being a community physician, 528 (19.6%) as academic, and 111 (4.1%) as other. The average ConCert Examination score for community physicians was 83.5 (95% CI, 83.3–83.8); the academic group was 84.8 (95% CI, 84.3–85.3); and the other group was 82.3 (95% CI, 81.1–83.6). After controlling for initial ability as measured by the Qualifying Examination score, there was no significant difference in performance between academic and community physicians (p = .10).

Conclusions

Academic emergency physicians and community emergency physicians scored similarly on the ConCert. Working at a community teaching hospital was associated with higher examination performance. Teaching medical learners, especially non-emergency medicine residents, was also associated with better examination performance.

15.

Context

Identifying factors that affect terminally ill patients' preferences for and actual place of death may assist patients to die wherever they wish.

Objective

The objective of this study was to investigate factors associated with preferred and actual place of death for cancer patients in Johannesburg, South Africa.

Methods

In a prospective cohort study at a tertiary hospital in Johannesburg, South Africa, adult patients with advanced cancer and their caregivers were enrolled from 2016 to 2018. Study nurses interviewed the patients at enrollment and conducted postmortem interviews with the caregivers.

Results

Of 324 patients enrolled, 191 died during follow-up. Preferred place of death was home for 127 (66.4%) and a facility for 64 (33.5%) patients; 91 (47.6%) patients died in their preferred setting, with a kappa value of congruence of 0.016 (95% CI = −0.107, 0.139). Factors associated with congruence were increasing age (odds ratio [OR]: 1.03, 95% CI: 1.00–1.05), use of morphine (OR: 1.87, 95% CI: 1.04–3.36), and wanting to die at home (OR: 0.44, 95% CI: 0.24–0.82). Dying at home was associated with increasing age (OR 1.03, 95% CI 1.00–1.05) and with the patient wishing to have family and/or friends present at death (OR 6.73, 95% CI 2.97–15.30).

Conclusion

Most patients preferred to die at home, but most died in hospital and fewer than half died in their preferred setting. Further research on modifiable factors, such as effective communication, access to palliative care and morphine, may ensure that more cancer patients in South Africa die wherever they wish.

16.

Purpose

It is critical to engage ED providers in antimicrobial stewardship programs (ASP). Emergency medicine pharmacists (EMPs) play an important role in ASP by working with providers to choose empiric antimicrobials. This study aimed to determine the impact of an EMP on appropriate empiric antibiotic prescribing for community-acquired pneumonia (CAP) and intra-abdominal infections (CA-IAI).

Methods

A retrospective cohort study was conducted evaluating adult patients admitted with CAP or CA-IAI. The primary outcome of this study was to compare guideline-concordant empiric antibiotic prescribing when an EMP was present vs. absent. We also aimed to compare the impact of an EMP in an early-ASP vs. established-ASP.

Results

320 patients were included in the study (EMP n = 185, no-EMP n = 135). Overall empiric antibiotic prescribing was more likely to be guideline-concordant when an EMP was present (78% vs. 61%, p = 0.001); this was true for both the CAP (95% vs. 79%, p = 0.005) and CA-IAI subgroups (62% vs. 44%, p = 0.025). Total guideline-concordant prescribing significantly increased between the early-ASP and established-ASP (60% vs. 82.5%, p < 0.001) and was more likely when an EMP was present (early-ASP: 68.3% vs. 45.8%, p = 0.005; established-ASP: 90.5% vs. 73.7%, p = 0.005). Patients receiving guideline-concordant antibiotics in the ED continued appropriate therapy upon admission 82.5% of the time, vs. 18.8% if the ED antibiotic was inappropriate (p < 0.001).

Conclusion

The presence of an EMP significantly improved guideline-concordant empiric antibiotic prescribing for CAP and CA-IAI in both an early and established ASP. Inpatient orders were more likely to be guideline-concordant if appropriate therapy was ordered in the ED.

17.

Objective

We studied the impact four new urgent care centers (UCCs) had on a hospital emergency department (ED) in terms of overall census and proportion of low acuity diagnoses from 2009 to 2016. We hypothesized that low acuity medical problems frequently seen in UCCs would decrease in the ED population. Since Medicaid was not accepted at these UCCs, we also studied the Medicaid vs non-Medicaid discharged populations to see if there were some differences related to access to urgent care.

Methods

We conducted a retrospective review of computerized billing data. We included all patients from 2009 to 2016 who were seen in the ED. We used the Cochran-Armitage Trend Test to examine trends over time.

Results

As hypothesized, the proportion of ED patients with a diagnosis of pharyngitis decreased significantly over this time period, from 1% to 0.6% (p < 0.0001). The rate of bronchitis in the total ED population also decreased significantly (0.5% to 0.13%, p < 0.0001). When we looked at the discharged patients with and without Medicaid, we found that significantly more Medicaid than non-Medicaid patients presented with pharyngitis to the ED, with an increasing trend from 2009 to 2016 (OR = 2.33, p < 0.0001). The overall census of the ED rose over the period 2009 to 2016 (80,478 to 85,278/year). Overall admission rates decreased significantly, from 36.9% to 34.5% (p < 0.0001).

Conclusion

With the introduction of four new urgent care centers (UCCs) within 5 miles of the hospital, the ED diagnoses of pharyngitis and bronchitis, two of the most common diagnoses seen in UCCs, decreased significantly. Significantly more Medicaid discharged patients presented to the ED with pharyngitis than in the non-Medicaid discharged group, likely because Medicaid patients had no access to UCCs.

18.

Purpose

In 2015, approximately 13,436 snowboarding or skiing injuries occurred in children younger than 15. We describe injury patterns of pediatric snow sport participants based on age, activity at the time of injury, and use of protective equipment.

Methods

A retrospective analysis was performed of 10- to 17-year-old patients with snow-sport-related injuries treated at a Level-1 trauma center from 2005 to 2015. Participants were divided into two groups, 10–13 years (middle school, MS) and 14–17 years (high school, HS), and compared using chi-square tests, Student's t-tests, and multivariable logistic regression.

Results

We identified 235 patients. The HS group had a higher proportion of females than the MS group (17.5% vs. 7.4%, p = 0.03), but the groups were otherwise similar. Helmet use was significantly lower in the HS group (51.6% vs. 76.5%, p < 0.01). MS students were more likely to suffer any head injury (aOR 4.66, 95% CI: 1.70–12.8), closed head injury (aOR 3.69, 95% CI: 1.37–9.99), or loss of consciousness (aOR 5.56, 95% CI: 1.76–17.6) after 4 pm. HS students engaging in jumps or tricks had 2.79 times the risk of any head injury (aOR 2.79, 95% CI: 1.18–6.57) compared with peers who did not. HS students had an increased risk of solid organ injury when helmeted (aOR 4.86, 95% CI: 1.30–18.2).

Conclusions

Injured high-school snow sports participants were less likely to wear helmets and more likely to have solid organ injuries when helmeted than middle-schoolers. Additionally, high-schoolers with head injuries were more likely to sustain these injuries while engaging in jumps or tricks. Injury prevention in this vulnerable population deserves further study.

Level of evidence

Level III (Retrospective Comparative Study).

19.

Background

Candida albicans germ tube antibody (CAGTA) may be helpful as a marker for the diagnosis of invasive candidiasis (IC). However, the performance has been variable. We conducted a meta-analysis to assess the diagnostic accuracy of this assay for diagnosing IC.

Method

We searched MEDLINE, EMBASE, Cochrane Collaboration databases, reference lists of retrieved studies, and review articles. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and a summary receiver-operating characteristic curve of CAGTA for diagnosing IC were pooled using meta-analysis.

Results

A total of 976 patients (262 with proven or probable IC), included in 7 studies, were analyzed. The pooled sensitivity, specificity, positive and negative likelihood ratios, diagnostic odds ratio, and area under the curve were 66% (95% confidence interval [95% CI], 59% to 73%), 76% (95% CI, 58% to 88%), 2.8 (95% CI, 1.5 to 5.8), 0.44 (95% CI, 0.34 to 0.57), 6 (95% CI, 3 to 5), and 0.68 (95% CI, 0.64 to 0.72), respectively. Heterogeneity of specificity was significant.

Conclusion

The diagnostic accuracy of the CAGTA assay is moderate for IC. Since the CAGTA assay is neither perfectly sensitive nor specific for IC, its results should be interpreted alongside other biomarkers and clinical findings.

20.

Background

We seek to determine if experienced emergency medicine physicians can accurately predict the likelihood of admission for patients at the time of triage. Such predictions, if proven to be accurate, could decrease the time spent in the ED for patients who will ultimately be admitted by hastening downstream workflow.

Methods

This is a prospective cohort study of experienced physicians at a large urban hospital. Physicians were asked to predict the likelihood of admission for patients based only on the information available in the EMR at the time of triage. Physicians also predicted the service to which the patients would be admitted and provided a confidence level for each prediction. Measures of predictive accuracy were calculated, including sensitivity, specificity, and area under the receiver operating characteristic curve.

Results

35 physicians evaluated 398 patient charts and made predictions. Sensitivity of determining admission for the entire cohort was 51.8%. The specificity was 89.1%. For those predictions made with a confidence level of >90%, sensitivity was 61.5% and specificity was 95.7%. Among physicians correctly predicting admission, the admitting service was predicted accurately 88.6% of the time.

Conclusion

Physicians performed poorly at predicting which patients would be admitted at the time of triage, even when they were confident in their predictions. Conversely, physicians accurately predicted who would be discharged. Physicians predicted with reasonable accuracy the service to which patients were ultimately admitted. Further research and operational assessment need to be performed to determine whether these predictions can help improve ED efficiency.
