Similar Literature (20 records)
1.
Context  Only 1% to 8% of adults with out-of-hospital cardiac arrest survive to hospital discharge. Objective  To compare resuscitation outcomes before and after an urban emergency medical services (EMS) system switched from manual cardiopulmonary resuscitation (CPR) to load-distributing band (LDB) CPR. Design, Setting, and Patients  A phased, observational cohort evaluation with intention-to-treat analysis of 783 adults with out-of-hospital, nontraumatic cardiac arrest. A total of 499 patients were included in the manual CPR phase (January 1, 2001, to March 31, 2003) and 284 patients in the LDB-CPR phase (December 20, 2003, to March 31, 2005); of these patients, the LDB device was applied in 210 patients. Intervention  Urban EMS system change from manual CPR to LDB-CPR. Main Outcome Measures  Return of spontaneous circulation (ROSC), with secondary outcome measures of survival to hospital admission and hospital discharge, and neurological outcome at discharge. Results  Patients in the manual CPR and LDB-CPR phases were comparable except for a faster response time interval (mean difference, 26 seconds) and more EMS-witnessed arrests (18.7% vs 12.6%) with LDB. Rates for ROSC and survival were increased with LDB-CPR compared with manual CPR (for ROSC, 34.5%; 95% confidence interval [CI], 29.2%-40.3% vs 20.2%; 95% CI, 16.9%-24.0%; adjusted odds ratio [OR], 1.94; 95% CI, 1.38-2.72; for survival to hospital admission, 20.9%; 95% CI, 16.6%-26.1% vs 11.1%; 95% CI, 8.6%-14.2%; adjusted OR, 1.88; 95% CI, 1.23-2.86; and for survival to hospital discharge, 9.7%; 95% CI, 6.7%-13.8% vs 2.9%; 95% CI, 1.7%-4.8%; adjusted OR, 2.27; 95% CI, 1.11-4.77). In secondary analysis of the 210 patients in whom the LDB device was applied, 38 patients (18.1%) survived to hospital admission (95% CI, 13.4%-23.9%) and 12 patients (5.7%) survived to hospital discharge (95% CI, 3.0%-9.3%). Among patients in the manual CPR and LDB-CPR groups who survived to hospital discharge, there was no significant difference between groups in Cerebral Performance Category (P = .36) or Overall Performance Category (P = .40). The number needed to treat for the adjusted outcome survival to discharge was 15 (95% CI, 9-33). Conclusion  Compared with resuscitation using manual CPR, a resuscitation strategy using LDB-CPR on EMS ambulances is associated with improved survival to hospital discharge in adults with out-of-hospital nontraumatic cardiac arrest.
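The number needed to treat reported above follows from the absolute difference in survival. A minimal Python sketch (illustrative, not from the study; the paper's NNT of 15 was computed on the adjusted outcome, so the crude arithmetic happens to agree here):

```python
# Number needed to treat (NNT) from two survival proportions.
# Values are the crude survival-to-discharge rates in the abstract.

def number_needed_to_treat(p_treated: float, p_control: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    return 1.0 / (p_treated - p_control)

nnt = number_needed_to_treat(0.097, 0.029)  # LDB-CPR vs manual CPR
print(round(nnt))  # -> 15
```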

2.
Context  Chlamydial and gonococcal infections are important causes of pelvic inflammatory disease, ectopic pregnancy, and infertility. Although screening for Chlamydia trachomatis is widely recommended among young adult women, little information is available regarding the prevalence of chlamydial and gonococcal infections in the general young adult population. Objective  To determine the prevalence of chlamydial and gonococcal infections in a nationally representative sample of young adults living in the United States. Design, Setting, and Participants  Cross-sectional analyses of a prospective cohort study of a nationally representative sample of 14 322 young adults aged 18 to 26 years. In-home interviews were conducted across the United States for Wave III of The National Longitudinal Study of Adolescent Health (Add Health) from April 2, 2001, to May 9, 2002. This study sample represented 66.3% of the original 18 924 participants in Wave I of Add Health. First-void urine specimens using ligase chain reaction assay were available for 12 548 (87.6%) of the Wave III participants. Main Outcome Measures  Prevalences of chlamydial and gonococcal infections in the general young adult population, and by age, self-reported race/ethnicity, and geographic region of current residence. Results  Overall prevalence of chlamydial infection was 4.19% (95% confidence interval [CI], 3.48%-4.90%). Women (4.74%; 95% CI, 3.93%-5.71%) were more likely to be infected than men (3.67%; 95% CI, 2.93%-4.58%; prevalence ratio, 1.29; 95% CI, 1.03-1.63). The prevalence of chlamydial infection was highest among black women (13.95%; 95% CI, 11.25%-17.18%) and black men (11.12%; 95% CI, 8.51%-14.42%); lowest prevalences were among Asian men (1.14%; 95% CI, 0.40%-3.21%), white men (1.38%; 95% CI, 0.93%-2.03%), and white women (2.52%; 95% CI, 1.90%-3.34%). Prevalence of chlamydial infection was highest in the south (5.39%; 95% CI, 4.24%-6.83%) and lowest in the northeast (2.39%; 95% CI, 1.56%-3.65%). Overall prevalence of gonorrhea was 0.43% (95% CI, 0.29%-0.63%). Among black men and women, the prevalence was 2.13% (95% CI, 1.46%-3.10%) and among white young adults, 0.10% (95% CI, 0.03%-0.27%). Prevalence of coinfection with both chlamydial and gonococcal infections was 0.30% (95% CI, 0.18%-0.49%). Conclusions  The prevalence of chlamydial infection is high among young adults in the United States. Substantial racial/ethnic disparities are present in the prevalence of both chlamydial and gonococcal infections.
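The sex comparison above is summarized as a prevalence ratio. The sketch below shows the standard log-scale Wald interval for a ratio of two proportions; the counts are hypothetical values back-calculated only to match the reported prevalences, and the paper's survey-weighted interval differs:

```python
import math

def prevalence_ratio_ci(x1, n1, x2, n2, z=1.96):
    """Point estimate and log-scale Wald CI for a ratio of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    ratio = p1 / p2
    se_log = math.sqrt((1 - p1) / x1 + (1 - p2) / x2)
    return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Hypothetical counts chosen only to reproduce the reported prevalences
# (women 4.74%, men 3.67%).
print(prevalence_ratio_ci(x1=297, n1=6266, x2=230, n2=6268))  # ratio ~1.29
```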

3.
Seward JF, Zhang JX, Maupin TJ, Mascola L, Jumaan AO. JAMA. 2004;292(6):704-708
Context  Limited data are available on the contagiousness of vaccinated varicella cases. Objectives  To describe secondary attack rates within households according to disease history and vaccination status of the primary case and household contacts and to estimate varicella vaccine effectiveness. Design, Setting, and Patients  Population-based, active varicella surveillance project in a community of approximately 320 000 in Los Angeles County, California, during 1997 and 2001. Varicella cases were reported by child care centers, private and public schools, and health care clinicians and were investigated to collect demographic, clinical, medical, and vaccination data. Information on household contacts' age, varicella history, and vaccination status was collected. Main Outcome Measures  Varicella secondary attack rate among household contacts; vaccine effectiveness using secondary attack rates in unvaccinated and vaccinated children and adolescents. Results  A total of 6316 varicella cases were reported. Among children and adolescents aged 1 to 14 years, secondary attack rates varied according to age and by disease and vaccination status of the primary case and exposed household contacts. Among contacts aged 1 to 14 years exposed to unvaccinated cases, the secondary attack rate was 71.5% if they were unvaccinated and 15.1% if they were vaccinated (risk ratio [RR], 0.21; 95% confidence interval [CI], 0.15-0.30). Overall, vaccinated cases were half as contagious as unvaccinated cases. However, vaccinated cases with 50 lesions or more were as contagious as unvaccinated cases, whereas those with fewer than 50 lesions were only one third as contagious (secondary attack rate, 23.4%; RR, 0.32 [95% CI, 0.19-0.53]). Vaccine effectiveness for prevention of all disease was 78.9% (95% CI, 69.7%-85.3%); moderate disease, 92% (50-500 lesions) and 100% (clinician visit); and severe disease, 100%. Conclusions  Under conditions of intense exposure, varicella vaccine was highly effective in preventing moderate and severe disease and about 80% effective in preventing all disease. Breakthrough varicella cases in household settings were half as contagious as unvaccinated persons with varicella, although contagiousness varied with numbers of lesions.
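Vaccine effectiveness here is the usual 1 minus the risk ratio of secondary attack rates. A short sketch with the abstract's values recovers the reported figure:

```python
# Vaccine effectiveness from household secondary attack rates (SARs):
# VE = 1 - RR, where RR = SAR(vaccinated) / SAR(unvaccinated).

sar_unvaccinated = 0.715  # 71.5% among unvaccinated contacts
sar_vaccinated = 0.151    # 15.1% among vaccinated contacts

rr = sar_vaccinated / sar_unvaccinated
print(f"RR = {rr:.2f}, VE = {1 - rr:.1%}")  # RR = 0.21, VE = 78.9%
```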

4.
Alam NH, Yunus M, Faruque AS, Gyr N, Sattar S, Parvin S, Ahmed JU, Salam MA, Sack DA. JAMA. 2006;296(5):567-573
Context  In May 2002, the World Health Organization and the United Nations Children's Fund recommended that the formulation of oral rehydration solution (ORS) for treatment of patients with diarrhea be changed to one with a reduced osmolarity and that safety of the new formulation, particularly development of symptomatic hyponatremia, be monitored. Objective  To measure the rates of symptomatic hyponatremia during treatment of patients with diarrhea with reduced osmolarity ORS. Design, Settings, and Patients  A phase 4 trial conducted at the Dhaka hospital (December 1, 2002-November 30, 2003) and Matlab hospital (February 2, 2003-January 31, 2004) of the International Centre for Diarrhoeal Disease Research, Bangladesh: Centre for Health and Population Research, Dhaka, Bangladesh. All patients admitted with uncomplicated watery diarrhea were treated with the newly recommended ORS and monitored. Patients developing neurological symptoms (seizure or altered consciousness) were transferred to the special care ward for treatment and investigated to identify the cause of the symptoms. Patient records of the Dhaka hospital were reviewed during the previous year when the old ORS formulation was used. Intervention  Reduced osmolarity ORS. Main Outcome Measure  Incidence rate of symptomatic hyponatremia in a 1-year period. Results  A total of 53 280 patients, including 22 536 children younger than 60 months, were monitored at the Dhaka and Matlab hospitals. Twenty-four patients, none older than 36 months, developed seizures or altered consciousness associated with hyponatremia, with an overall incidence rate of 0.05% (95% confidence interval [CI], 0.03%-0.07%) at the Dhaka hospital and 0.03% (95% CI, 0.01%-0.09%) at the Matlab hospital. During the previous year, 47 patients at the Dhaka hospital had symptoms associated with hyponatremia, for an estimated incidence rate of 0.10% (95% CI, 0.07%-0.13%). The reduction in the rates was statistically significant (odds ratio, 0.50; 95% CI, 0.29-0.85; P = .009). Conclusion  The risk of symptoms associated with hyponatremia in patients treated with the reduced osmolarity ORS is minimal and did not increase with the change in formulation.
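With 24 events among more than 50 000 monitored patients, the confidence intervals hinge on exact binomial methods. The sketch below (an assumption; the paper does not state its CI method) computes a Clopper-Pearson interval for the pooled rate:

```python
from scipy.stats import binomtest

# Exact (Clopper-Pearson) CI for a rare event rate. The 24 cases over
# 53 280 monitored patients pool both hospitals, so this rate is
# illustrative; the paper reports separate per-hospital rates.
ci = binomtest(k=24, n=53_280).proportion_ci(confidence_level=0.95,
                                             method="exact")
print(f"rate = {24 / 53_280:.3%}, 95% CI = ({ci.low:.3%}, {ci.high:.3%})")
```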

5.
Prevalence of chronic kidney disease in the United States
Coresh J, Selvin E, Stevens LA, Manzi J, Kusek JW, Eggers P, Van Lente F, Levey AS. JAMA. 2007;298(17):2038-2047
Context  The prevalence and incidence of kidney failure treated by dialysis and transplantation in the United States have increased from 1988 to 2004. Whether there have been changes in the prevalence of earlier stages of chronic kidney disease (CKD) during this period is uncertain. Objective  To update the estimated prevalence of CKD in the United States. Design, Setting, and Participants  Cross-sectional analysis of the most recent National Health and Nutrition Examination Surveys (NHANES 1988-1994 and NHANES 1999-2004), a nationally representative sample of noninstitutionalized adults aged 20 years or older in 1988-1994 (n = 15 488) and 1999-2004 (n = 13 233). Main Outcome Measures  Chronic kidney disease prevalence was determined based on persistent albuminuria and decreased estimated glomerular filtration rate (GFR). Persistence of microalbuminuria (>30 mg/g) was estimated from repeat visit data in NHANES 1988-1994. The GFR was estimated using the abbreviated Modification of Diet in Renal Disease Study equation reexpressed to standard serum creatinine. Results  The prevalence of both albuminuria and decreased GFR increased from 1988-1994 to 1999-2004. The prevalence of CKD stages 1 to 4 increased from 10.0% (95% confidence interval [CI], 9.2%-10.9%) in 1988-1994 to 13.1% (95% CI, 12.0%-14.1%) in 1999-2004 with a prevalence ratio of 1.3 (95% CI, 1.2-1.4). The prevalence estimates of CKD stages in 1988-1994 and 1999-2004, respectively, were 1.7% (95% CI, 1.3%-2.2%) and 1.8% (95% CI, 1.4%-2.3%) for stage 1; 2.7% (95% CI, 2.2%-3.2%) and 3.2% (95% CI, 2.6%-3.9%) for stage 2; 5.4% (95% CI, 4.9%-6.0%) and 7.7% (95% CI, 7.0%-8.4%) for stage 3; and 0.21% (95% CI, 0.15%-0.27%) and 0.35% (95% CI, 0.25%-0.45%) for stage 4. A higher prevalence of diagnosed diabetes and hypertension and higher body mass index explained the entire increase in prevalence of albuminuria but only part of the increase in the prevalence of decreased GFR. Estimation of GFR from serum creatinine has limited precision and a change in mean serum creatinine accounted for some of the increased prevalence of CKD. Conclusions  The prevalence of CKD in the United States in 1999-2004 is higher than it was in 1988-1994. This increase is partly explained by the increasing prevalence of diabetes and hypertension and raises concerns about future increased incidence of kidney failure and other complications of CKD.
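GFR was estimated with the abbreviated MDRD Study equation re-expressed for standardized serum creatinine. A sketch of the commonly cited IDMS-traceable form follows; the coefficients come from the general literature rather than this abstract, so verify them before any real use:

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    """Abbreviated MDRD Study equation re-expressed for
    IDMS-standardized serum creatinine (mL/min per 1.73 m^2)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Example: a 60-year-old white woman with creatinine 1.1 mg/dL lands
# in stage 3 (eGFR 30-59 mL/min per 1.73 m^2).
print(f"{egfr_mdrd(1.1, 60, female=True, black=False):.0f}")  # ~51
```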

6.
Context  Violence-related behaviors such as fighting and weapon carrying are associated with serious physical and psychosocial consequences for adolescents. Objective  To measure trends in nonfatal violent behaviors among adolescents in the United States between 1991 and 1997. Design, Setting, and Participants  Nationally representative data from the 1991, 1993, 1995, and 1997 Youth Risk Behavior Surveys were analyzed to describe the percentage of students in grades 9 through 12 who engaged in behaviors related to violence. Overall response rates for each of these years were 68%, 70%, 60%, and 69%, respectively. To assess the statistical significance of time trends for these variables, logistic regression analyses were conducted that controlled for sex, grade, and race or ethnicity and simultaneously assessed linear and higher-order effects. Main Outcome Measures  Self-reported weapon carrying, physical fighting, fighting-related injuries, feeling unsafe, and damaged or stolen property. Results  Between 1991 and 1997, the percentage of students in a physical fight decreased 14%, from 42.5% (95% confidence interval [CI], 40.1%-44.9%) to 36.6% (95% CI, 34.6%-38.6%); the percentage of students injured in a physical fight decreased 20%, from 4.4% (95% CI, 3.6%-5.2%) to 3.5% (95% CI, 2.9%-4.1%); and the percentage of students who carried a weapon decreased 30%, from 26.1% (95% CI, 23.8%-28.4%) to 18.3% (95% CI, 16.5%-20.1%). Between 1993 and 1997, the percentage of students who carried a gun decreased 25%, from 7.9% (95% CI, 6.6%-9.2%) to 5.9% (95% CI, 5.1%-6.7%); the percentage of students in a physical fight on school property decreased 9%, from 16.2% (95% CI, 15.0%-17.4%) to 14.8% (95% CI, 13.5%-16.1%); and the percentage of students who carried a weapon on school property decreased 28%, from 11.8% (95% CI, 10.4%-13.2%) to 8.5% (95% CI, 7.0%-10.0%). All of these changes represent significant linear decreases. Conclusions  Declines in fighting and weapon carrying among US adolescents between 1991 and 1997 are encouraging and consistent with declines in homicide, nonfatal victimization, and school crime rates. Further research should explore why behaviors related to interpersonal violence are decreasing and what types of interventions are most effective.
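The trend analysis fits logistic regressions with linear and higher-order time terms while controlling for demographics. A minimal statsmodels sketch of that structure on simulated data (variable names and data are illustrative, and the real analysis also accounted for the complex survey design):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data: one row per student.
rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "fought": rng.integers(0, 2, n),               # 1 = in a physical fight
    "year": rng.choice([1991, 1993, 1995, 1997], n),
    "male": rng.integers(0, 2, n),
    "grade": rng.choice([9, 10, 11, 12], n),
})
df["t"] = df["year"] - 1991  # time since the first survey

# Linear plus quadratic time terms assess linear and higher-order trends,
# controlling for sex and grade (race/ethnicity omitted for brevity).
fit = smf.logit("fought ~ t + I(t**2) + male + C(grade)", data=df).fit(disp=0)
print(fit.params)
```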

7.
Landrigan CP, Barger LK, Cade BE, Ayas NT, Czeisler CA. JAMA. 2006;296(9):1063-1070
Context  Sleep deprivation is associated with increased risk of serious medical errors and motor vehicle crashes among interns. The Accreditation Council for Graduate Medical Education (ACGME) introduced duty-hour standards in 2003 to reduce work hours. Objective  To estimate compliance with the ACGME duty-hour standards among interns. Design, Setting, and Participants  National prospective cohort study with monthly Web-based survey assessment of intern work and sleep hours using a validated instrument, conducted preimplementation (July 2002 through May 2003) and postimplementation (July 2003 through May 2004) of ACGME standards. Participants were 4015 of the approximately 37 253 interns in US residency programs in all specialties during this time; they completed 29 477 reports of their work and sleep hours. Main Outcome Measure  Overall and monthly rates of compliance with the ACGME standards. Results  Postimplementation, 1068 (83.6%; 95% confidence interval [CI], 81.4%-85.5%) of 1278 interns reported work hours in violation of the standards during 1 or more months. Working shifts greater than 30 consecutive hours was reported by 67.4% (95% CI, 64.8%-70.0%). Averaged over 4 weeks, 43.0% (95% CI, 40.3%-45.7%) reported working more than 80 hours weekly, and 43.7% (95% CI, 41.0%-46.5%) reported not having 1 day in 7 free of work duties. Violations were reported during 3765 (44.0%; 95% CI, 43.0%-45.1%) of the 8553 intern-months assessed postimplementation (including vacation and ambulatory rotations), and during 2660 (61.5%; 95% CI, 60.0%-62.9%) of 4327 intern-months during which interns worked exclusively in inpatient settings. Postimplementation, 29.0% (95% CI, 28.7%-29.7%) of reported work weeks were more than 80 hours per week, 12.1% (95% CI, 11.8%-12.6%) were 90 or more hours per week, and 3.9% (95% CI, 3.7%-4.2%) were 100 or more hours per week. Comparing preimplementation to postimplementation responses, reported mean work duration decreased 5.8% from 70.7 (95% CI, 70.5-70.9) hours to 66.6 (95% CI, 66.3-66.9) hours per week (P<.001), and reported mean sleep duration increased 6.1% (22 minutes) from 5.91 (95% CI, 5.88-5.94) hours to 6.27 (95% CI, 6.23-6.31) hours per night (P<.001). However, reported mean sleep during extended shifts decreased 4.5%, from 2.69 (95% CI, 2.66-2.73) hours to 2.57 (95% CI, 2.52-2.62) hours (P<.001). Conclusion  In the first year following implementation of the ACGME duty-hour standards, interns commonly reported noncompliance with these requirements.

8.
Windish DM, Huot SJ, Green ML. JAMA. 2007;298(9):1010-1022
Context  Physicians depend on the medical literature to keep current with clinical information. Little is known about residents' ability to understand statistical methods or how to appropriately interpret research outcomes. Objective  To evaluate residents' understanding of biostatistics and interpretation of research results. Design, Setting, and Participants  Multiprogram cross-sectional survey of internal medicine residents. Main Outcome Measure  Percentage of questions correct on a biostatistics/study design multiple-choice knowledge test. Results  The survey was completed by 277 of 367 residents (75.5%) in 11 residency programs. The overall mean percentage correct on statistical knowledge and interpretation of results was 41.4% (95% confidence interval [CI], 39.7%-43.3%) vs 71.5% (95% CI, 57.5%-85.5%) for fellows and general medicine faculty with research training (P < .001). Higher scores in residents were associated with additional advanced degrees (50.0% [95% CI, 44.5%-55.5%] vs 40.1% [95% CI, 38.3%-42.0%]; P < .001); prior biostatistics training (45.2% [95% CI, 42.7%-47.8%] vs 37.9% [95% CI, 35.4%-40.3%]; P = .001); enrollment in a university-based training program (43.0% [95% CI, 41.0%-45.1%] vs 36.3% [95% CI, 32.6%-40.0%]; P = .002); and male sex (44.0% [95% CI, 41.4%-46.7%] vs 38.8% [95% CI, 36.4%-41.1%]; P = .004). On individual knowledge questions, 81.6% correctly interpreted a relative risk. Residents were less likely to know how to interpret an adjusted odds ratio from a multivariate regression analysis (37.4%) or the results of a Kaplan-Meier analysis (10.5%). Seventy-five percent indicated they did not understand all of the statistics they encountered in journal articles, but 95% felt it was important to understand these concepts to be an intelligent reader of the literature. Conclusions  Most residents in this study lacked the knowledge in biostatistics needed to interpret many of the results in published clinical research. Residency programs should include more effective biostatistics training in their curricula to successfully prepare residents for this important lifelong learning skill.
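One concept the surveyed residents found hardest, the adjusted odds ratio, can be translated into an approximate relative risk when the baseline risk is known. A sketch of the Zhang-Yu conversion (a general formula from the literature, not something used in this paper):

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Zhang-Yu approximation (JAMA 1998):
    RR = OR / (1 - p0 + p0 * OR), with p0 the risk in the reference group."""
    p0 = baseline_risk
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a common outcome (p0 = 0.30), an adjusted OR of 2.0 corresponds
# to a much smaller relative risk:
print(f"{or_to_rr(2.0, 0.30):.2f}")  # -> 1.54
```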

9.
Context  Severe acute respiratory syndrome (SARS) is an emerging infectious disease that first manifested in humans in China in November 2002 and has subsequently spread worldwide. Objectives  To describe the clinical characteristics and short-term outcomes of SARS in the first large group of patients in North America; to describe how these patients were treated and the variables associated with poor outcome. Design, Setting, and Patients  Retrospective case series involving 144 adult patients admitted to 10 academic and community hospitals in the greater Toronto, Ontario, area between March 7 and April 10, 2003, with a diagnosis of suspected or probable SARS. Patients were included if they had fever, a known exposure to SARS, and respiratory symptoms or infiltrates observed on chest radiograph. Patients were excluded if an alternative diagnosis was determined. Main Outcome Measures  Location of exposure to SARS; features of the history, physical examination, and laboratory tests at admission to the hospital; and 21-day outcomes such as death or intensive care unit (ICU) admission with or without mechanical ventilation. Results  Of the 144 patients, 111 (77%) were exposed to SARS in the hospital setting. Features of the clinical examination most commonly found in these patients at admission were self-reported fever (99%), documented elevated temperature (85%), nonproductive cough (69%), myalgia (49%), and dyspnea (42%). Common laboratory features included elevated lactate dehydrogenase (87%), hypocalcemia (60%), and lymphopenia (54%). Only 2% of patients had rhinorrhea. A total of 126 patients (88%) were treated with ribavirin, although its use was associated with significant toxicity, including hemolysis (in 76%) and a decrease in hemoglobin of 2 g/dL or more (in 49%). Twenty-nine patients (20%) were admitted to the ICU with or without mechanical ventilation, and 8 patients died (21-day mortality, 6.5%; 95% confidence interval [CI], 1.9%-11.8%). Multivariable analysis showed that the presence of diabetes (relative risk [RR], 3.1; 95% CI, 1.4-7.2; P = .01) or other comorbid conditions (RR, 2.5; 95% CI, 1.1-5.8; P = .03) were independently associated with poor outcome (death, ICU admission, or mechanical ventilation). Conclusions  The majority of cases in the SARS outbreak in the greater Toronto area were related to hospital exposure. In the event that contact history becomes unreliable, several features of the clinical presentation will be useful in raising the suspicion of SARS. Although SARS is associated with significant morbidity and mortality, especially in patients with diabetes or other comorbid conditions, the vast majority (93.5%) of patients in our cohort survived.

10.
Follow-up testing among children with elevated screening blood lead levels
Kemper AR, Cohn LM, Fant KE, Dombkowski KJ, Hudson SR. JAMA. 2005;293(18):2232-2237
Context  Follow-up testing after an abnormal screening blood lead level is a key component of lead poisoning prevention. Objectives  To measure the proportion of children with elevated screening lead levels who have follow-up testing and to determine factors associated with such care. Design, Setting, and Participants  Retrospective, observational cohort study of 3682 Michigan Medicaid-enrolled children aged 6 years or younger who had a screening blood lead level of at least 10 µg/dL (0.48 µmol/L) between January 1, 2002, and June 30, 2003. Main Outcome Measure  Testing within 180 days of an elevated screening lead level. Results  Follow-up testing was received by 53.9% (95% confidence interval [CI], 52.2%-55.5%) of the children. In multivariate analysis adjusting for age, screening blood lead level results, and local health department catchment area, the relative risk of follow-up testing was lower for Hispanic or nonwhite children than for white children (0.91; 95% CI, 0.87-0.94), for children living in urban compared with rural areas (0.92; 95% CI, 0.89-0.96), and for children living in high- compared with low-risk lead areas (0.94; 95% CI, 0.92-0.96). Among children who did not have follow-up testing, 58.6% (95% CI, 56.3%-61.0%) had at least 1 medical encounter in the 6-month period after the elevated screening blood lead level, including encounters for evaluation and management (39.3%; 95% CI, 36.9%-41.6%) or preventive care (13.2%; 95% CI, 11.6%-14.8%). Conclusions  The rate of follow-up testing after an abnormal screening blood lead level was low, and children with increased likelihood of lead poisoning were less likely to receive follow-up testing. At least half of the children had a missed opportunity for follow-up testing. The observed disparities of care may increase the burden of cognitive impairment among at-risk children.

11.
Prevalence of HPV infection among females in the United States
Context  Human papillomavirus (HPV) infection is estimated to be the most common sexually transmitted infection. Baseline population prevalence data for HPV infection in the United States before widespread availability of a prophylactic HPV vaccine would be useful. Objective  To determine the prevalence of HPV among females in the United States. Design, Setting, and Participants  The National Health and Nutrition Examination Survey (NHANES) uses a representative sample of the US noninstitutionalized civilian population. Females aged 14 to 59 years who were interviewed at home for NHANES 2003-2004 were examined in a mobile examination center and provided a self-collected vaginal swab specimen. Swabs were analyzed for HPV DNA by L1 consensus polymerase chain reaction followed by type-specific hybridization. Demographic and sexual behavior information was obtained from all participants. Main Outcome Measures  HPV prevalence by polymerase chain reaction. Results  The overall HPV prevalence was 26.8% (95% confidence interval [CI], 23.3%-30.9%) among US females aged 14 to 59 years (n = 1921). HPV prevalence was 24.5% (95% CI, 19.6%-30.5%) among females aged 14 to 19 years, 44.8% (95% CI, 36.3%-55.3%) among women aged 20 to 24 years, 27.4% (95% CI, 21.9%-34.2%) among women aged 25 to 29 years, 27.5% (95% CI, 20.8%-36.4%) among women aged 30 to 39 years, 25.2% (95% CI, 19.7%-32.2%) among women aged 40 to 49 years, and 19.6% (95% CI, 14.3%-26.8%) among women aged 50 to 59 years. There was a statistically significant trend for increasing HPV prevalence with each year of age from 14 to 24 years (P<.001), followed by a gradual decline in prevalence through 59 years (P = .06). HPV vaccine types 6 and 11 (low-risk types) and 16 and 18 (high-risk types) were detected in 3.4% of female participants; HPV-6 was detected in 1.3% (95% CI, 0.8%-2.3%), HPV-11 in 0.1% (95% CI, 0.03%-0.3%), HPV-16 in 1.5% (95% CI, 0.9%-2.6%), and HPV-18 in 0.8% (95% CI, 0.4%-1.5%) of female participants. Independent risk factors for HPV detection were age, marital status, and increasing numbers of lifetime and recent sex partners. Conclusions  HPV is common among females in the United States. Our data indicate that the burden of prevalent HPV infection among females was greater than previous estimates and was highest among those aged 20 to 24 years. However, the prevalence of HPV vaccine types was relatively low.

12.
Hu FB, Li TY, Colditz GA, Willett WC, Manson JE. JAMA. 2003;289(14):1785-1791
Context  Current public health campaigns to reduce obesity and type 2 diabetes have largely focused on increasing exercise, but have paid little attention to the reduction of sedentary behaviors. Objective  To examine the relationship between various sedentary behaviors, especially prolonged television (TV) watching, and risk of obesity and type 2 diabetes in women. Design, Setting, and Participants  Prospective cohort study conducted from 1992 to 1998 among women from 11 states in the Nurses' Health Study. The obesity analysis included 50 277 women who had a body mass index (BMI) of less than 30 and were free from diagnosed cardiovascular disease, diabetes, or cancer and completed questions on physical activity and sedentary behaviors at baseline. The diabetes analysis included 68 497 women who at baseline were free from diagnosed diabetes mellitus, cardiovascular disease, or cancer. Main Outcome Measures  Onset of obesity and type 2 diabetes mellitus. Results  During 6 years of follow-up, 3757 (7.5%) of 50 277 women who had a BMI of less than 30 in 1992 became obese (BMI ≥30). Overall, we documented 1515 new cases of type 2 diabetes. Time spent watching TV was positively associated with risk of obesity and type 2 diabetes. In the multivariate analyses adjusting for age, smoking, exercise levels, dietary factors, and other covariates, each 2-h/d increment in TV watching was associated with a 23% (95% confidence interval [CI], 17%-30%) increase in obesity and a 14% (95% CI, 5%-23%) increase in risk of diabetes; each 2-h/d increment in sitting at work was associated with a 5% (95% CI, 0%-10%) increase in obesity and a 7% (95% CI, 0%-16%) increase in diabetes. In contrast, each 2-h/d increment in standing or walking around at home was associated with a 9% (95% CI, 6%-12%) reduction in obesity and a 12% (95% CI, 7%-16%) reduction in diabetes. Each 1 hour per day of brisk walking was associated with a 24% (95% CI, 19%-29%) reduction in obesity and a 34% (95% CI, 27%-41%) reduction in diabetes. We estimated that in our cohort, 30% (95% CI, 24%-36%) of new cases of obesity and 43% (95% CI, 32%-52%) of new cases of diabetes could be prevented by adopting a relatively active lifestyle (<10 h/wk of TV watching and ≥30 min/d of brisk walking). Conclusions  Independent of exercise levels, sedentary behaviors, especially TV watching, were associated with significantly elevated risk of obesity and type 2 diabetes, whereas even light to moderate activity was associated with substantially lower risk. This study emphasizes the importance of reducing prolonged TV watching and other sedentary behaviors for preventing obesity and diabetes.
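The estimate that 30% of new obesity cases were preventable is a population attributable risk calculation. A generic sketch of Levin's formula with hypothetical inputs; the paper's figure came from its multivariate model rather than this simple form:

```python
def population_attributable_risk(p_exposed: float, rr: float) -> float:
    """Levin's formula: PAR = Pe*(RR - 1) / (1 + Pe*(RR - 1))."""
    excess = p_exposed * (rr - 1)
    return excess / (1 + excess)

# Hypothetical inputs: 60% of the cohort sedentary, RR 1.9 for obesity.
print(f"{population_attributable_risk(0.60, 1.9):.0%}")  # -> 35%
```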

13.
Wright TC, Denny L, Kuhn L, Pollack A, Lorincz A. JAMA. 2000;283(1):81-86
Context  More than half of the women diagnosed as having cervical cancer in the United States have not been screened within the last 3 years, despite many having had contact with the health care system. In many other regions of the world, there is only limited access to cervical cancer screening. Objective  To determine whether testing of self-collected vaginal swabs for human papillomavirus (HPV) DNA can be used to screen for cervical disease in women aged 35 years and older. Design  Cross-sectional observational study comparing Papanicolaou smears with HPV DNA testing of self-collected vaginal swabs. Setting  Outpatient clinics in a periurban settlement outside of Cape Town, South Africa, between January 1998 and April 1999. Participants  Screening was performed on 1415 previously unscreened black South African women aged 35 to 65 years. Intervention  Women self-collected a vaginal swab for HPV testing in the clinic and were then screened using 4 different tests: Papanicolaou smear, direct visual inspection of the cervix after the application of 5% acetic acid, cervicography, and HPV DNA testing of a clinician-obtained cervical sample. Women with abnormal results on any of the screening tests were referred for colposcopy. Main Outcome Measure  Biopsy-confirmed high-grade cervical squamous intraepithelial lesions or invasive cancer. Results  High-grade squamous intraepithelial lesions were identified in 47 (3.4%) of 1365 women adequately assessed, and there were 9 cases of invasive cancer. Of women with high-grade disease, 66.1% (95% confidence interval [CI], 52.1%-77.8%) had high-risk HPV detected in self-collected vaginal samples, and 67.9% (95% CI, 53.9%-79.4%) had an abnormal Papanicolaou smear (P = .78). The false-positive rates for HPV DNA testing of self-collected vaginal samples and Papanicolaou smears were 17.1% (95% CI, 15.1%-19.3%) and 12.3% (95% CI, 10.5%-14.2%), respectively (P<.001). A high-risk type of HPV DNA was detected in 83.9% (95% CI, 71.2%-91.9%) of women with high-grade disease and 15.5% (95% CI, 13.6%-17.7%) of women with no evidence of cervical disease using a clinician-obtained cervical sample. Conclusions  These results indicate that HPV testing of self-collected vaginal swabs is less specific than but as sensitive as Papanicolaou smears for detecting high-grade cervical disease in women aged 35 years and older, and HPV testing offers an important new way to increase screening in settings where cytology is not readily performed.
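Comparisons of screening tests like this reduce to sensitivity and false-positive rate computed from a 2x2 table against the colposcopy/biopsy gold standard. A sketch with hypothetical counts scaled to match the abstract's percentages:

```python
def screening_stats(tp: int, fn: int, fp: int, tn: int):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, 1 - specificity  # last value = FPR

# Hypothetical counts consistent with the abstract: 56 women with
# high-grade disease, 37 HPV-positive on the self-collected swab;
# 224 false positives among 1309 women without high-grade disease.
sens, spec, fpr = screening_stats(tp=37, fn=19, fp=224, tn=1085)
print(f"sensitivity {sens:.1%}, FPR {fpr:.1%}")  # ~66.1%, ~17.1%
```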

14.
Critically ill patients with severe acute respiratory syndrome
Context  Severe acute respiratory syndrome (SARS) is a newly recognized infectious disease capable of causing severe respiratory failure. Objective  To determine the epidemiological features, course, and outcomes of patients with SARS-related critical illness. Design, Setting, and Patients  Retrospective case series of 38 adult patients with SARS-related critical illness admitted to 13 intensive care units (ICUs) in the Toronto area between the onset of the outbreak and April 15, 2003. Data were collected daily during the first 7 days in the ICUs, and patients were followed up for 28 days. Main Outcome Measures  The primary outcome was mortality at 28 days after ICU admission. Secondary outcomes included rate of SARS-related critical illness, number of tertiary care ICUs and staff placed under quarantine, and number of health care workers (HCWs) contracting SARS secondary to ICU-acquired transmission. Results  Of 196 patients with SARS, 38 (19%) became critically ill, 7 (18%) of whom were HCWs. The median (interquartile range [IQR]) age of the 38 patients was 57.4 (39.0-69.6) years. The median (IQR) duration between initial symptoms and admission to the ICU was 8 (5-10) days. Twenty-nine (76%) required mechanical ventilation and 10 of these (34%) experienced barotrauma. Mortality at 28 days was 13 (34%) of 38 patients and for those requiring mechanical ventilation, mortality was 13 (45%) of 29. Six patients (16%) remained mechanically ventilated at 28 days. Two of these patients had died by 8 weeks' follow-up. Patients who died were more often older, had preexisting diabetes mellitus, and on admission to hospital were more likely to have bilateral radiographic infiltrates. Transmission of SARS in 6 study ICUs led to closure of 73 medical-surgical ICU beds. In 2 university ICUs, 164 HCWs were quarantined and 16 (10%) developed SARS. Conclusions  Critical illness was common among patients with SARS. Affected patients had primarily single-organ respiratory failure, and half of mechanically ventilated patients died. The SARS outbreak greatly strained regional critical care resources.

15.
Context  Although acute renal failure (ARF) is believed to be common in the setting of critical illness and is associated with a high risk of death, little is known about its epidemiology and outcome or how these vary in different regions of the world. Objectives  To determine the period prevalence of ARF in intensive care unit (ICU) patients in multiple countries; to characterize differences in etiology, illness severity, and clinical practice; and to determine the impact of these differences on patient outcomes. Design, Setting, and Patients  Prospective observational study of ICU patients who either were treated with renal replacement therapy (RRT) or fulfilled at least 1 of the predefined criteria for ARF from September 2000 to December 2001 at 54 hospitals in 23 countries. Main Outcome Measures  Occurrence of ARF, factors contributing to etiology, illness severity, treatment, need for renal support after hospital discharge, and hospital mortality. Results  Of 29 269 critically ill patients admitted during the study period, 1738 (5.7%; 95% confidence interval [CI], 5.5%-6.0%) had ARF during their ICU stay, including 1260 who were treated with RRT. The most common contributing factor to ARF was septic shock (47.5%; 95% CI, 45.2%-49.5%). Approximately 30% of patients had preadmission renal dysfunction. Overall hospital mortality was 60.3% (95% CI, 58.0%-62.6%). Dialysis dependence at hospital discharge was 13.8% (95% CI, 11.2%-16.3%) for survivors. Independent risk factors for hospital mortality included use of vasopressors (odds ratio [OR], 1.95; 95% CI, 1.50-2.55; P<.001), mechanical ventilation (OR, 2.11; 95% CI, 1.58-2.82; P<.001), septic shock (OR, 1.36; 95% CI, 1.03-1.79; P = .03), cardiogenic shock (OR, 1.41; 95% CI, 1.05-1.90; P = .02), and hepatorenal syndrome (OR, 1.87; 95% CI, 1.07-3.28; P = .03). Conclusion  In this multinational study, the period prevalence of ARF requiring RRT in the ICU was between 5% and 6% and was associated with a high hospital mortality rate.

16.
Context  Breast augmentation is not associated with an increased risk of breast cancer; however, implants may interfere with the detection of breast cancer thereby delaying cancer diagnosis in women with augmentation. Objective  To determine whether mammography accuracy and tumor characteristics are different for women with and without augmentation. Design, Setting, and Participants  A prospective cohort of 137 women with augmentation and 685 women without augmentation diagnosed with breast cancer between January 1, 1995, and October 15, 2002, matched (1:5) by age, race/ethnicity, previous mammography screening, and mammography registry, and 10 533 women with augmentation and 974 915 women without augmentation and without breast cancer among 7 mammography registries in Denver, Colo; Lebanon, NH; Albuquerque, NM; Chapel Hill, NC; San Francisco, Calif; Seattle, Wash; and Burlington, Vt. Main Outcome Measures  Comparison between women with and without augmentation of mammography performance measures and cancer characteristics, including invasive carcinoma or ductal carcinoma in situ, tumor stage, nodal status, size, grade, and estrogen-receptor status. Results  Among asymptomatic women, the sensitivity of screening mammography based on the final assessment was lower in women with breast augmentation vs women without (45.0% [95% confidence interval {CI}, 29.3%-61.5%] vs 66.8% [95% CI, 60.4%-72.8%]; P = .008), and specificity was slightly higher in women with augmentation (97.7% [95% CI, 97.4%-98.0%] vs 96.7% [95% CI, 96.6%-96.7%]; P<.001). Among symptomatic women, both sensitivity and specificity were lower for women with augmentation compared with women without but these differences were not significant. Tumors were of similar stage, size, estrogen-receptor status, and nodal status but tended to be lower grade (P = .052) for women with breast augmentation vs without. Conclusions  Breast augmentation decreases the sensitivity of screening mammography among asymptomatic women but does not increase the false-positive rate. Despite the lower accuracy of mammography in women with augmentation, the prognostic characteristics of tumors are not influenced by augmentation.

17.
Context  Bare-metal stenting with abciximab pretreatment is currently considered a reasonable reperfusion strategy for acute ST-segment elevation myocardial infarction (STEMI). Sirolimus-eluting stents significantly reduce the need for target-vessel revascularization (TVR) vs bare-metal stents but substantially increase procedural costs. At current European list prices, the use of tirofiban instead of abciximab would absorb the difference in cost between stenting with sirolimus-eluting vs bare-metal stents. Objective  To evaluate the clinical and angiographic impact of single high-dose bolus tirofiban plus sirolimus-eluting stenting vs abciximab plus bare-metal stenting in patients with STEMI. Design, Setting, and Patients  Prospective, single-blind, randomized controlled study (Single High Dose Bolus Tirofiban and Sirolimus Eluting Stent vs Abciximab and Bare Metal Stent in Myocardial Infarction [STRATEGY]) of 175 patients (median age, 63 [interquartile range, 55-72] years) presenting to a single referral center in Italy with STEMI or presumed new left bundle-branch block and randomized between March 6, 2003, and April 23, 2004. Intervention  Single high-dose bolus tirofiban regimen plus sirolimus-eluting stenting (n = 87) vs standard-dose abciximab plus bare-metal stenting (n = 88). Main Outcome Measures  The primary end point was a composite of death, nonfatal myocardial infarction, stroke, or binary restenosis at 8 months. Secondary outcomes included freedom, at day 30 and month 8, from major cardiac or cerebrovascular adverse events (composite of death, reinfarction, stroke, and repeat TVR). Results  Cumulatively, 14 of 74 patients (19%; 95% confidence interval [CI], 10%-28%) in the tirofiban plus sirolimus-eluting stent group and 37 of 74 patients (50%; 95% CI, 44%-56%) in the abciximab plus bare-metal stent group reached the primary end point (hazard ratio, 0.33; 95% CI, 0.18-0.60; P<.001 [P<.001 by Fisher exact test]). The cumulative incidence of death, reinfarction, stroke, or TVR was significantly lower in the tirofiban plus sirolimus-eluting stent group (18%) vs the abciximab plus bare-metal stent group (32%) (hazard ratio, 0.53; 95% CI, 0.28-0.92; P = .04), predominantly reflecting a reduction in the need for TVR. Binary restenosis was present in 6 of 67 (9%; 95% CI, 2%-16%) and 24 of 66 (36%; 95% CI, 26%-46%) patients in the tirofiban plus sirolimus-eluting stent and abciximab plus bare-metal stent groups, respectively (P = .002). Conclusion  Tirofiban-supported sirolimus-eluting stenting of infarcted arteries holds promise for improving outcomes while limiting health care expenditure in patients with myocardial infarction undergoing primary intervention.
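The primary end point comparison (14/74 vs 37/74) can be checked with a Fisher exact test. A scipy sketch using the counts given in the abstract:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows are treatment arms, columns are
# [reached primary end point, did not reach it].
table = [[14, 74 - 14],   # tirofiban + sirolimus-eluting stent
         [37, 74 - 37]]   # abciximab + bare-metal stent
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.5f}")  # p well below .001
```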

18.
Context  Although proponents argue that specialty cardiac hospitals provide high-quality cost-efficient care, strong financial incentives for physicians at these facilities could result in greater procedure utilization. Objective  To determine whether the opening of cardiac hospitals was associated with increasing population-based rates of coronary revascularization. Design, Setting, and Patients  In a study of Medicare beneficiaries from 1995 through 2003, we calculated annual population-based rates for total revascularization (coronary artery bypass graft [CABG] plus percutaneous coronary intervention [PCI]), CABG, and PCI. Hospital referral regions (HRRs) were used to categorize health care markets into those where (1) cardiac hospitals opened (n = 13), (2) new cardiac programs opened at general hospitals (n = 142), and (3) no new programs opened (n = 151). Main Outcome Measures  Rates of change in total revascularization, CABG, and PCI using multivariable linear regression models with generalized estimating equations. Results  Overall, rates of change for total revascularization were higher in HRRs after cardiac hospitals opened when compared with HRRs where new cardiac programs opened at general hospitals and HRRs with no new programs (P<.001 for both comparisons). Four years after their opening, the relative increase in adjusted rates was more than 2-fold higher in HRRs where cardiac hospitals opened (19.2% [95% confidence interval {CI}, 6.1%-32.2%], P<.001) when compared with HRRs where new cardiac programs opened at general hospitals (6.5% [95% CI, 3.2%-9.9%], P<.001) and HRRs with no new programs (7.4% [95% CI, 3.2%-11.5%], P<.001). These findings were consistent when rates for CABG and PCI were considered separately. For PCI, this growth appeared largely driven by increased utilization among patients without acute myocardial infarction (42.1% [95% CI, 21.4%-62.9%], P<.001). Conclusion  The opening of a cardiac hospital within an HRR is associated with increasing population-based rates of coronary revascularization in Medicare beneficiaries.

19.
Quality of cardiopulmonary resuscitation during out-of-hospital cardiac arrest
Context  Cardiopulmonary resuscitation (CPR) guidelines recommend target values for compressions, ventilations, and CPR-free intervals allowed for rhythm analysis and defibrillation. There is little information on adherence to these guidelines during advanced cardiac life support in the field. Objective  To measure the quality of out-of-hospital CPR performed by ambulance personnel, as measured by adherence to CPR guidelines. Design and Setting  Case series of 176 adult patients with out-of-hospital cardiac arrest treated by paramedics and nurse anesthetists in Stockholm, Sweden, London, England, and Akershus, Norway, between March 2002 and October 2003. The defibrillators recorded chest compressions via a sternal pad fitted with an accelerometer and ventilations by changes in thoracic impedance between the defibrillator pads, in addition to standard event and electrocardiographic recordings. Main Outcome Measure  Adherence to international guidelines for CPR. Results  Chest compressions were not given 48% (95% CI, 45%-51%) of the time without spontaneous circulation; this percentage was 38% (95% CI, 36%-41%) when subtracting the time necessary for electrocardiographic analysis and defibrillation. Combining these data with a mean compression rate of 121/min (95% CI, 118-124/min) when compressions were given resulted in a mean compression rate of 64/min (95% CI, 61-67/min). Mean compression depth was 34 mm (95% CI, 33-35 mm), 28% (95% CI, 24%-32%) of the compressions had a depth of 38 mm to 51 mm (guidelines recommendation), and the compression part of the duty cycle was 42% (95% CI, 41%-42%). A mean of 11 (95% CI, 11-12) ventilations were given per minute. Sixty-one patients (35%) had return of spontaneous circulation, and 5 of 6 patients discharged alive from the hospital had normal neurological outcomes. Conclusions  In this study of CPR during out-of-hospital cardiac arrest, chest compressions were not delivered half of the time, and most compressions were too shallow. Electrocardiographic analysis and defibrillation accounted for only small parts of intervals without chest compressions.
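The mean delivered compression rate is the instantaneous rate multiplied by the fraction of arrest time during which compressions were actually given. A quick arithmetic check with the abstract's numbers (the paper's 64/min reflects unrounded inputs):

```python
# Effective compression rate = instantaneous rate x hands-on fraction.
hands_on_fraction = 1 - 0.48   # compressions absent 48% of the time
rate_while_compressing = 121   # per minute, when compressions were given

print(f"{rate_while_compressing * hands_on_fraction:.0f}/min")  # ~63
```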

20.
Context  Adverse drug events are common and often preventable causes of medical injuries. However, timely, nationally representative information on outpatient adverse drug events is limited. Objective  To describe the frequency and characteristics of adverse drug events that lead to emergency department visits in the United States. Design, Setting, and Participants  Active surveillance from January 1, 2004, through December 31, 2005, through the National Electronic Injury Surveillance System–Cooperative Adverse Drug Event Surveillance project. Main Outcome Measures  National estimates of the numbers, population rates, and severity (measured by hospitalization) of individuals with adverse drug events treated in emergency departments. Results  Over the 2-year study period, 21 298 adverse drug event cases were reported, producing weighted annual estimates of 701 547 individuals (95% confidence interval [CI], 509 642-893 452) or 2.4 individuals per 1000 population (95% CI, 1.7-3.0) treated in emergency departments. Of these cases, 3487 individuals required hospitalization (annual estimate, 117 318 [16.7%]; 95% CI, 13.1%-20.3%). Adverse drug events accounted for 2.5% (95% CI, 2.0%-3.1%) of estimated emergency department visits for all unintentional injuries and 6.7% (95% CI, 4.7%-8.7%) of those leading to hospitalization and accounted for 0.6% of estimated emergency department visits for all causes. Individuals aged 65 years or older were more likely than younger individuals to sustain adverse drug events (annual estimate, 4.9 vs 2.0 per 1000; rate ratio [RR], 2.4; 95% CI, 1.8-3.0) and more likely to require hospitalization (annual estimate, 1.6 vs 0.23 per 1000; RR, 6.8; 95% CI, 4.3-9.2). Drugs for which regular outpatient monitoring is used to prevent acute toxicity accounted for 41.5% of estimated hospitalizations overall (1381 cases; 95% CI, 30.9%-52.1%) and 54.4% of estimated hospitalizations among individuals aged 65 years or older (829 cases; 95% CI, 45.0%-63.7%). Conclusions  Adverse drug events among outpatients that lead to emergency department visits are an important cause of morbidity in the United States, particularly among individuals aged 65 years or older. Ongoing, population-based surveillance can help monitor these events and target prevention strategies.
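National estimates in this surveillance design come from summing case-level sampling weights rather than counting raw cases. A toy sketch of the idea; the weights below are randomly generated and purely hypothetical, sized only to mimic the scale of the 21 298 cases collected over 2 years:

```python
import numpy as np

# Each surveillance case carries a hospital sampling weight; the national
# estimate is the sum of weights (here divided by 2 for an annual figure).
rng = np.random.default_rng(1)
weights = rng.uniform(20, 110, size=21_298)  # hypothetical case weights
annual_estimate = weights.sum() / 2
print(f"~{annual_estimate:,.0f} individuals per year")
```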
