Similar Literature
20 similar documents found (search time: 15 ms)
1.
Thies KC, Sep D, Derksen R. Resuscitation 2006;68(3):359-363
INTRODUCTION: Recent accidents with helicopter emergency medical service (HEMS) aircraft raise the question of how safe HEMS in Germany is and how accidents could be prevented. MATERIALS AND METHODS: We surveyed all German HEMS-programmes and reviewed the data of the German Aviation Authority regarding accidents with HEMS. RESULTS: An average German HEMS-programme encounters one accident leading to at least severe damage or loss of the helicopter in 26 operating years, one accident resulting in casualties in 65 operating years, and one fatal accident in 111 operating years. The major causes of accidents were obstacle strikes during landing at the scene. Flying in bad weather conditions and lack of discipline were other factors contributing to HEMS accidents. CONCLUSION: HEMS safety could be improved by special training programmes for pilots and HEMS crew members to address the factors listed above. Safety training for doctors is recommended, but we did not find support for the notion of changing the doctor's legal status from that of a passenger to that of a HEMS crew member.

2.
3.
OBJECTIVE: To evaluate the association between trauma team activation according to well-established protocols and patient survival. METHODS: Single-centre registry study of data collected prospectively from trauma patients (those treated in a trauma resuscitation room, who died, or who were admitted to ICU) at a tertiary referral trauma centre Emergency Department (ED) in Hong Kong. A 10-point protocol was used to activate rapid trauma team response to the ED. The main outcome measures were mortality, need for ICU care, or operation within 6 h of injury. RESULTS: Between 1 January 2001 and 31 December 2005, 2539 consecutive trauma patients were included in our trauma registry, of whom 674 (mean age 43 years, S.D. 22; 71% male; 94% blunt trauma) met trauma call criteria. Four hundred and eighty-two (72%) correctly triggered a trauma call, and 192 (28%) were not called ('undercall'). Patients were less likely to have a trauma call despite meeting criteria if they were aged over 64 years, had sustained a fall, had a respiratory rate <10 or >29 per minute, a systolic blood pressure between 60 and 89 mm Hg, or a GCS of 9-13. In a sub-group with moderately poor probability of survival (probability of survival, P(s), 0.5-0.75), the odds ratio for mortality in the undercall group compared with the trauma call group was 7.6 (95% CI, 1.1-33.0). CONCLUSIONS: In our institution, undercalls account for 28% of patients who meet trauma call criteria, and in patients with moderately poor probability of survival, undercall is associated with decreased survival. Although trauma team activation does not guarantee better survival, better compliance with trauma team activation protocols optimises processes of care and may translate into improved survival.

4.
The term 'idiom of distress' is used to describe culturally specific experiences of suffering. Most studies of idioms of distress have been conducted with small groups, making comparison of symptom profiles difficult. Female undergraduate and graduate students in Japan (n = 50) and Korea (n = 61) completed the Beck Depression Inventory (BDI) and 7-day daily reports of their experiences of 46 somatic symptoms. Between-culture comparisons revealed that BDI scores did not differ; however, the Korean women had significantly higher somatic distress means than the Japanese women. Despite the higher Korean distress mean, regression analysis showed that somatic distress explained 30% of the variance in BDI score for the Japanese but only 22% of the variance for the Koreans. Within-culture comparisons showed that both high-BDI Japanese and Koreans had 19 somatic distress symptoms with significantly higher means than their low-BDI counterparts; 11 of these symptoms were shared by the two groups. Multidimensional scaling matrices were used to compare symptom proximities and revealed cultural differences. The problems with using broad racial categories in clinical research, the clinical significance of these findings, and the implications for psychiatric nursing assessment and practice are discussed.

5.
Objectives: To assess the ability of a medical acuity screening protocol to accurately classify deconditioned patients at risk for medical disruptions of acute rehabilitation. Design: Prospective comparison of 2 equivalent samples of consecutive admissions, 2 months apart, each divided into medically stable and unstable groups before admission. Setting: Acute rehabilitation unit, community hospital. Participants: 30 consecutive adult admissions to acute rehabilitation for deconditioning between September 25, 2000, and November 29, 2000; and 31 between February 1, 2001, and April 30, 2001. Interventions: Deconditioned rehabilitation candidates were screened for medical instability (protocol available), grouped as medically stable or unstable, reviewed with a physiatrist, and tracked prospectively. First-sample findings were discussed with admitting physiatrists in multidisciplinary teams before the second sample. Main Outcome Measures: Planned completion of acute rehabilitation versus unplanned discharge due to acute medical setback, and length of rehabilitation stay. Results: Admission medical stability was associated with subsequent acute medical setbacks (Fisher exact test, P=.004). Medically unstable admissions had 6:1 odds of a medical disruption. Predictive success was associated with days from acute hospital to rehabilitation admission (Fisher exact test, P=.01). All predictive errors occurred with patients admitted to rehabilitation in less than 18 days from hospital admission. Medical disruptions of acute rehabilitation declined from the first (38.7%) to the second sample (16.7%). Conclusions: Data-driven team consensus about preadmission medical acuity screening preceded a decline in medical disruptions of acute rehabilitation among deconditioned patients.

6.
OBJECTIVE: To evaluate immediate life support (ILS) training in a primary care setting. METHODS: A 12-month pre/post quasi-experimental and qualitative evaluation of ILS training across the counties of Devon and Cornwall (UK). Data were collected via feedback forms, pre/post course knowledge and skills tests, and focus group interviews with key stakeholders. RESULTS: One hundred and seventy-three professionals from 10 courses took part in the evaluation, with a response rate of 93%. Feedback on the course was overwhelmingly positive. A significant improvement in both skills (p ≤ 0.001) and knowledge (p ≤ 0.001) was shown. However, a proportion of participants had a decline in knowledge by the end of the course. Those attending ILS had a significantly higher knowledge score at the start of the course (p = 0.002) than a group attending a BLS course, indicating that the preparatory course manual had been beneficial. Knowledge did not decline significantly by 6 months but skills did (p = 0.02), although they remained higher than pre-course levels (p ≤ 0.001). Knowledge (p = 0.008) and skill (p ≤ 0.002) retention following the ILS course was significantly higher than in the BLS course sub-group, indicating the added value of ILS. The focus groups raised a number of themes relating to release of staff; funding issues; and the observed and reported effects of assessment inequity, mainly relating to 'failure to fail' and 'dove and hawk' approaches. CONCLUSION: The course leads to a significant increase in skills and knowledge with good knowledge retention. Skill decline is significant, which raises questions about the practice of practitioners who are not updated regularly. Issues of funding, staff resources, and assessment ethics and strategy need to be addressed.

7.
A large proportion of deaths in the Western World are caused by ischaemic heart disease. Among these patients, a majority die outside hospital of sudden cardiac death. The prognosis for these patients is, in general, poor. However, a significant proportion are admitted to a hospital ward alive. The proportion of patients who survive the hospital phase of an out-of-hospital cardiac arrest varies considerably. Several treatment strategies are applicable during the post-resuscitation care phase, but the level of evidence is weak for most of them. Four treatments are recommended for selected patients based on relatively good clinical evidence: therapeutic hypothermia, beta-blockers, coronary artery bypass grafting, and an implantable cardioverter defibrillator. The patient's cerebral function might influence implementation of the latter two alternatives. There is some evidence for revascularisation treatment in patients with suspected myocardial infarction. On pathophysiological grounds, an early coronary angiogram is a reasonable alternative. Further randomised clinical trials of other post-resuscitation therapies are essential.

8.
Objective: To determine whether rehabilitation length of stay (LOS) is associated with discharge motor function for persons with spinal cord injury (SCI). Design: Longitudinal. Setting: Spinal Cord Injury Model Systems center. Participants: 920 persons with traumatic, complete SCI enrolled in the Spinal Cord Injury National Database, with levels of injury (LOI) at C5, C6, C7, and T1-5; and inpatient rehabilitation discharge dates between 1989 and 1992 (“early”) and 1999 and 2002 (“late”). Interventions: Not applicable. Main Outcome Measures: FIM™ instrument at rehabilitation discharge. Results: For all LOI groups, the late group had a shorter LOS than the early group, with the largest difference in the C7 group: 107 days (early) versus 59 days (late). FIM motor scores at rehabilitation discharge also differed significantly for the C5, C7, and T1-5 LOI groups. For each of these LOIs, the late group was discharged with lower FIM motor scores; the largest difference was again noted for the C7 group, which had FIM motor scores of 51.9 (early) versus 40.7 (late). Conclusions: Decreased inpatient rehabilitation LOS was associated with decreased function at rehabilitation discharge. Persons with C7-level SCI were the most affected group; this group had the largest decrease in LOS and motor FIM score.

9.

Objective

For automated external defibrillators (AEDs) to be practical for broad public use, responders must be able to use them safely and effectively. This study's objective was to determine whether untrained laypersons could accurately follow the visual and voice prompt instructions of an AED.

Methods

Each of four different AED models (AED1, AED2, AED3, and AED4) was randomly assigned to a different group of 16 untrained volunteers in a simulated cardiac arrest. Four usability indicators were observed: 1) number of volunteers able to apply the pads to the manikin skin, 2) appropriate pad positioning, 3) time from room entry to shock delivery, and 4) safety in terms of touching the patient during shock delivery.

Results

Some of the 64 volunteers who participated in the study failed to open the pad packaging or remove the lining, or placed the pads on top of clothing. Fifty percent of AED2 pads and 44% of AED3 pads were not placed directly on the manikin skin, whereas 100% of AED1 and AED4 pads were. Adjacent pad displacements that potentially could affect defibrillation efficacy were observed in 6% of AED1, 11% of AED2, 0% of AED3, and 56% of AED4 usages. Time to deliver a shock was within 3.5 minutes for all AEDs, although the median times for AED1 and AED4 were the shortest at 1.6 and 1.7 minutes, respectively. No significant volunteer contact with the manikin occurred during shock delivery.

Conclusions

This study demonstrated that the AED user interface significantly influences the ability of untrained caregivers to appropriately place pads and quickly deliver a shock. Grossly inappropriate pad placement and failure to place AED pads directly on the skin may be correctable with improvements in the AED instruction user interface.

10.
Objective: To reexamine recent stroke-acupuncture studies using “relative improvement” on outcome measures, as opposed to simple endpoint raw score, as the primary clinical outcome. Data Sources: Using the recent Sze meta-analysis published in Stroke (2002) as an organizing schema, clinical trials involving stroke patients by Gosman-Hedstrom (1998), Johansson (1993, 2001), Sallstrom (1996), Sze (2002), and Wong (1999) were reexamined. Study Selection: Studies were evaluated using the criterion of “relative improvement” as the primary outcome measure. The studies focused on the use of acupuncture compared with various control conditions across studies, including sham acupuncture, shallow acupuncture, transcutaneous electroneural stimulation, and subliminal electric stimulation. Data Extraction: Relative improvement on standardized measures used in the studies reviewed, focusing on disability (eg, FIM™ instrument) and impairment (eg, Fugl-Meyer Assessment). Data Synthesis: Evidence that acupuncture is effective in stroke rehabilitation is stronger when using relative improvement as a criterion than when using endpoint raw scores only—the procedure used by most recent researchers. Conclusions: Acupuncture may be helpful as an adjunct rehabilitation treatment. A number of methodologic issues need to be addressed in future research, including the most appropriate definition of a stroke clinical outcome, and the timeframe within which acupuncture effects are observed.

11.

Objective

Use of ambulances for nonemergency and routine transportation is thought to be a serious and growing problem. Third-party payers frequently refuse payment when ambulance use is deemed inappropriate. The authors attempted to determine whether ambulance transports for which payment was denied had in fact been appropriate.

Methods

Consecutive ambulance run forms for transports in which payment was denied by the state Medicaid carriers, along with the corresponding emergency department (ED) charts, were reviewed. Medical risk was evaluated by using the Evaluation and Management (E&M) Level of Care for the ED visit. Appropriateness of ambulance transport was evaluated by extracting the final diagnosis and the most serious written (and worked-up) diagnosis in the differential. If either diagnosis could benefit from treatment in an ambulance or from rapid transport to a hospital, the transport was defined as appropriate.

Results

A total of 146 run forms and 104 corresponding charts were evaluated. Ambulance transport was appropriate in 63 (61%; 95% confidence interval, 51%-70%). Risk was minimal for two transports, low for two, moderate for 62, and high for 38 cases. Final diagnoses included several life-threatening ones.

Conclusion

In this population of patients for whom payment of their ambulance bill was denied, a high percentage of corresponding ED visits were for potentially serious medical problems.

12.
Objective: To evaluate the effect of severe left ventricular dysfunction on the improvement of functional capacity (FC) in cardiac patients undergoing phase 2 of a cardiac rehabilitation program (CRP). Design: Retrospective cohort study. Setting: Hospital-based CRP. Participants: 199 male cardiac patients. Group 1 (n=169) had a left ventricular ejection fraction (LVEF) >30% (age, 66.2±9.8y); group 2 (n=30) had a LVEF ≤30% (age, 69.0±8.1y). Intervention: 10 weeks, thrice weekly, of phase 2 CRP, consisting of 60 minutes of supervised exercise to reach the target heart rate determined by the Karvonen method. Main Outcome Measures: We measured FC before and after completion of the CRP, with the improvement expressed as a percentage of FC before the CRP. The FC results were compared using the Student t test. Results: FC in both patient groups improved after the CRP. In group 1 patients, FC increased from 5.6±2.3 metabolic equivalents (METS) before the CRP to 7.5±2.6 METS after the CRP (P<.01). In group 2 patients, FC increased from 4.7±2.1 METS before the CRP to 6.2±2.2 METS after the CRP (P<.01). Before the CRP, group 2 patients had significantly lower FC compared with group 1 patients (P<.05). Similarly, after the CRP, the FC of group 2 patients remained lower than that of group 1 patients (P<.05). However, the percentage of improvement for group 1 patients (40.6%±34.3%) did not differ significantly from the percentage of improvement for group 2 patients (39.9%±34.5%). Conclusions: The CRP improved FC of all cardiac patients, including those with severe left ventricular dysfunction. Patients with severe left ventricular dysfunction have lower FC before and after the CRP. However, the FC of these patients improved to the same degree as that of patients with better left ventricular function. These findings are important in designing strategies for the CRP in patients with severely impaired LVEF.
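The target heart rate in the intervention above was set by the Karvonen (heart-rate reserve) method. A minimal sketch of the standard formula, THR = HRrest + intensity × (HRmax − HRrest), follows; the numeric values are illustrative only and are not taken from the study:

```python
def karvonen_target_hr(hr_rest, hr_max, intensity):
    """Karvonen method: target HR = resting HR plus a fraction
    of the heart-rate reserve (max HR minus resting HR)."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be a fraction between 0 and 1")
    return hr_rest + intensity * (hr_max - hr_rest)

# Hypothetical patient: resting HR 70, maximum HR 155,
# training at 60% of heart-rate reserve.
print(karvonen_target_hr(70, 155, 0.60))  # 121.0
```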

13.
Initially, a set of guidelines for the use of ultrasound contrast agents was published in 2004, dealing only with liver applications. A second edition of the guidelines in 2008 reflected changes in the available contrast agents and updated the guidelines for the liver, as well as adding some non-liver applications. Time has moved on, and the need for international guidelines on the use of CEUS in the liver has become apparent. The present document describes the third iteration of recommendations for the hepatic use of contrast enhanced ultrasound (CEUS) using contrast-specific imaging techniques. This joint WFUMB-EFSUMB initiative has involved experts from leading ultrasound societies worldwide. These liver CEUS guidelines are simultaneously published in the official journals of both organizing federations (i.e., Ultrasound in Medicine and Biology for WFUMB and Ultraschall in der Medizin/European Journal of Ultrasound for EFSUMB). These guidelines and recommendations provide general advice on the use of all currently clinically available ultrasound contrast agents (UCA). They are intended to create standard protocols for the use and administration of UCA in liver applications on an international basis and improve the management of patients worldwide.

14.
Objective: To determine the efficacy of fluoroscopic caudal epidural steroid injections (ESIs) as a conservative treatment in patients with presumably chronic lumbar diskogenic pain. Design: Retrospective follow-up study. Setting: Physiatric interventional spine practice in a large, urban, academic institution. Participants: 97 patients from chart review meeting inclusion criteria: (1) predominately axial low back pain of >3 months in duration, (2) failure of conservative treatment (ie, nonsteroidal anti-inflammatory drugs, physical therapy), (3) clinical presentation and magnetic resonance imaging findings consistent with central lumbar disk protrusion and/or degeneration at L4-5 or L5-S1 without stenosis. Intervention: At least 1 fluoroscopically guided caudal ESI. Main Outcome Measures: Successful outcome required all of the following: a pre-post ESI change in Roland-Morris Disability Questionnaire score of ≥2 points, a decrease in visual numeric pain scale rating of >50%, and a North American Spine Society patient satisfaction score of 1 to 2. Results: Only 19 patients (23%) were determined to have a successful long-term (>2y) outcome and 65 (77%) were deemed failures. Average follow-up was 28.6±15.6 months. Successes were found to differ significantly from failures in preinjection pain scores (8.53 vs 9.09, P=.04) and patient satisfaction (P<.001). There were no significant differences between diagnostic groups (disk herniations=64, degenerative disk without herniation=33). Overall patient satisfaction was 45%. Conclusions: At more than 2 years of follow-up, the efficacy of fluoroscopically guided caudal ESI in patients with chronic lumbar diskogenic pain is limited. Patient satisfaction exceeded the reported rate of efficacy. Patients responding to injection had significantly lower preinjection pain scores. Despite these low success rates, fluoroscopic caudal ESIs remain a viable alternative to more invasive treatments for roughly 1 in 4 patients from this population, and the offer of at least a single injection should remain part of the treatment algorithm.

15.
16.
17.

Background

Cerebral regional oxygen saturation (rSO2) can be measured immediately and noninvasively just after arrival at the hospital and may be useful for evaluating the futility of resuscitation for a patient with out-of-hospital cardiopulmonary arrest (OHCA). We examined how well cerebral rSO2 performs as an indicator of the futility of resuscitation.

Methods

This study was a single-center, prospective, observational analysis of a cohort of consecutive adult OHCA patients who were transported to the University of Tokyo Hospital from October 1, 2012, to September 30, 2013, and whose cerebral rSO2 values were measured.

Results

During the study period, 69 adult OHCA patients were enrolled. Of the 54 patients with initial lower cerebral rSO2 values of 26% or less, 47 patients failed to achieve return of spontaneous circulation (ROSC) in the receiver operating characteristic curve analysis (optimal cutoff, 26%; sensitivity, 88.7%; specificity, 56.3%; positive predictive value, 87.0%; negative predictive value, 60.0%; area under the curve [AUC], 0.714; P = .0033). The AUC for the initial lower cerebral rSO2 value was greater than that for blood pH (AUC, 0.620; P = .1687) or lactate values (AUC, 0.627; P = .1081) measured upon arrival at the hospital as well as that for initial higher (AUC, 0.650; P = .1788) or average (AUC, 0.677; P = .0235) cerebral rSO2 values. The adjusted odds ratio of the initial lower cerebral rSO2 values of 26% or less for ROSC was 0.11 (95% confidence interval, 0.01-0.63; P = .0129).

Conclusions

Initial lower cerebral rSO2 just after arrival at the hospital, as a static indicator, is associated with non-ROSC. However, an initially lower cerebral rSO2 alone does not yield diagnostic performance sufficient for evaluating the futility of resuscitation.
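The predictive values reported above follow from a standard 2×2 diagnostic table. A minimal sketch, assuming the cell counts implied by the abstract (69 patients, 54 test-positive at rSO2 ≤ 26%, 47 of those non-ROSC); the FN/TN split of the remaining 15 patients is reconstructed from the reported sensitivity and NPV, so it is an inference rather than a figure stated directly:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics, as fractions."""
    return {
        "sensitivity": tp / (tp + fn),   # TP among all non-ROSC
        "specificity": tn / (tn + fp),   # TN among all ROSC
        "ppv": tp / (tp + fp),           # non-ROSC among test-positive
        "npv": tn / (tn + fn),           # ROSC among test-negative
    }

# Cell counts inferred from the abstract: TP=47, FP=7, FN=6, TN=9.
m = diagnostic_metrics(tp=47, fp=7, fn=6, tn=9)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```

These counts reproduce the abstract's reported values (sensitivity 88.7%, specificity 56.3%, PPV 87.0%, NPV 60.0%) to within rounding.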

18.

Background

It is unclear whether the prehospital termination of resuscitation (TOR) rule is applicable in specific situations such as in areas extremely dense with hospitals.

Objectives

The objective of the study is to assess whether the prehospital TOR rule is applicable in the emergency medical services system in Japan, specifically, in an area dense with hospitals in Tokyo.

Methods

This study was a retrospective, observational analysis of a cohort of adult out-of-hospital cardiopulmonary arrest (OHCA) patients who were transported to the University of Tokyo Hospital from April 1, 2009, to March 31, 2011.

Results

During the study period, 189 adult OHCA patients were enrolled. Of the 189 patients, 108 patients met the prehospital TOR rule. The outcomes were significantly worse in the prehospital TOR rule–positive group than in the prehospital TOR–negative group, with 0.9% vs 11.1% of patients, respectively, surviving until discharge (relative risk [RR], 1.11; 95% confidence interval [CI], 1.03-1.21; P = .0020) and 0.0% vs 7.4% of patients, respectively, discharged with a favorable neurologic outcome (RR, 1.08; 95% CI, 1.02-1.15; P = .0040). The prehospital TOR rule had a positive predictive value (PPV) of 99.1% (95% CI, 96.3-99.8) and a specificity of 90.0% (95% CI, 60.5-98.2) for death and a PPV of 100.0% (95% CI, 97.9-100.0) and a specificity of 100.0% (95% CI, 61.7-100.0) for an unfavorable neurologic outcome.

Conclusions

This study suggested that the prehospital TOR rule predicted unfavorable outcomes even in an area dense with hospitals in Tokyo and might be helpful for identifying the OHCA patients for whom resuscitation efforts would be fruitless.

19.
20.