Similar Literature
20 similar articles found.
1.
Windish DM  Huot SJ  Green ML. JAMA. 2007;298(9):1010-1022.
Context  Physicians depend on the medical literature to keep current with clinical information. Little is known about residents' ability to understand statistical methods or how to appropriately interpret research outcomes. Objective  To evaluate residents' understanding of biostatistics and interpretation of research results. Design, Setting, and Participants  Multiprogram cross-sectional survey of internal medicine residents. Main Outcome Measure  Percentage of questions correct on a biostatistics/study design multiple-choice knowledge test. Results  The survey was completed by 277 of 367 residents (75.5%) in 11 residency programs. The overall mean percentage correct on statistical knowledge and interpretation of results was 41.4% (95% confidence interval [CI], 39.7%-43.3%) vs 71.5% (95% CI, 57.5%-85.5%) for fellows and general medicine faculty with research training (P < .001). Higher scores in residents were associated with additional advanced degrees (50.0% [95% CI, 44.5%-55.5%] vs 40.1% [95% CI, 38.3%-42.0%]; P < .001); prior biostatistics training (45.2% [95% CI, 42.7%-47.8%] vs 37.9% [95% CI, 35.4%-40.3%]; P = .001); enrollment in a university-based training program (43.0% [95% CI, 41.0%-45.1%] vs 36.3% [95% CI, 32.6%-40.0%]; P = .002); and male sex (44.0% [95% CI, 41.4%-46.7%] vs 38.8% [95% CI, 36.4%-41.1%]; P = .004). On individual knowledge questions, 81.6% correctly interpreted a relative risk. Residents were less likely to know how to interpret an adjusted odds ratio from a multivariate regression analysis (37.4%) or the results of a Kaplan-Meier analysis (10.5%). Seventy-five percent indicated they did not understand all of the statistics they encountered in journal articles, but 95% felt it was important to understand these concepts to be an intelligent reader of the literature. Conclusions  Most residents in this study lacked the knowledge in biostatistics needed to interpret many of the results in published clinical research. Residency programs should include more effective biostatistics training in their curricula to successfully prepare residents for this important lifelong learning skill.   相似文献   
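The abstract above highlights how residents handled relative risks versus adjusted odds ratios. As an illustration only (the counts below are hypothetical, not data from the survey), a minimal Python sketch of how the two measures are computed from the same 2x2 table:

def relative_risk(events_exposed, total_exposed, events_control, total_control):
    # Risk ratio: probability of the event in the exposed group over the control group.
    return (events_exposed / total_exposed) / (events_control / total_control)

def odds_ratio(events_exposed, total_exposed, events_control, total_control):
    # Odds ratio: odds of the event in the exposed group over the control group.
    odds_exposed = events_exposed / (total_exposed - events_exposed)
    odds_control = events_control / (total_control - events_control)
    return odds_exposed / odds_control

# Hypothetical trial: 30/100 events with treatment vs 15/100 with control.
print(relative_risk(30, 100, 15, 100))  # 2.0
print(odds_ratio(30, 100, 15, 100))     # about 2.43; the OR exceeds the RR when events are common

The two measures agree only when the outcome is rare, which is one reason adjusted odds ratios from regression models are easy to misread as risk ratios.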

2.
Biochemical diagnosis of pheochromocytoma: which test is best?   (Cited by: 33, self-citations: 0, other citations: 33)
Context  Diagnosis of pheochromocytoma depends on biochemical evidence of catecholamine production by the tumor. However, the best test to establish the diagnosis has not been determined. Objective  To determine the biochemical test or combination of tests that provides the best method for diagnosis of pheochromocytoma. Design, Setting, and Participants  Multicenter cohort study of patients tested for pheochromocytoma at 4 referral centers between 1994 and 2001. The analysis included 214 patients in whom the diagnosis of pheochromocytoma was confirmed and 644 patients who were determined to not have the tumor. Main Outcome Measures  Test sensitivity and specificity, receiver operating characteristic curves, and positive and negative predictive values at different pretest prevalences using plasma free metanephrines, plasma catecholamines, urinary catecholamines, urinary total and fractionated metanephrines, and urinary vanillylmandelic acid. Results  Sensitivities of plasma free metanephrines (99% [95% confidence interval {CI}, 96%-100%]) and urinary fractionated metanephrines (97% [95% CI, 92%-99%]) were higher than those for plasma catecholamines (84% [95% CI, 78%-89%]), urinary catecholamines (86% [95% CI, 80%-91%]), urinary total metanephrines (77% [95% CI, 68%-85%]), and urinary vanillylmandelic acid (64% [95% CI, 55%-71%]). Specificity was highest for urinary vanillylmandelic acid (95% [95% CI, 93%-97%]) and urinary total metanephrines (93% [95% CI, 89%-97%]); intermediate for plasma free metanephrines (89% [95% CI, 87%-92%]), urinary catecholamines (88% [95% CI, 85%-91%]), and plasma catecholamines (81% [95% CI, 78%-84%]); and lowest for urinary fractionated metanephrines (69% [95% CI, 64%-72%]). Sensitivity and specificity values at different upper reference limits were highest for plasma free metanephrines using receiver operating characteristic curves. Combining different tests did not improve the diagnostic yield beyond that of a single test of plasma free metanephrines. Conclusion  Plasma free metanephrines provide the best test for excluding or confirming pheochromocytoma and should be the test of first choice for diagnosis of the tumor.   相似文献   
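Since the study reports predictive values "at different pretest prevalences", a minimal Python sketch of that calculation may help; the sensitivity and specificity are the point estimates quoted above for plasma free metanephrines, while the 0.5% pretest prevalence is an assumed figure chosen only for illustration:

def predictive_values(sensitivity, specificity, prevalence):
    # Expected fractions of the tested population in each cell of the 2x2 table.
    true_pos = sensitivity * prevalence
    false_neg = (1 - sensitivity) * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.99, specificity=0.89, prevalence=0.005)
print(f"PPV {ppv:.1%}, NPV {npv:.2%}")  # roughly 4.3% and 99.99%

Even a highly sensitive test yields a modest positive predictive value when the condition is rare, which is why a negative plasma free metanephrine result is most useful for excluding the tumor.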

3.
Context  Only 1% to 8% of adults with out-of-hospital cardiac arrest survive to hospital discharge. Objective  To compare resuscitation outcomes before and after an urban emergency medical services (EMS) system switched from manual cardiopulmonary resuscitation (CPR) to load-distributing band (LDB) CPR. Design, Setting, and Patients  A phased, observational cohort evaluation with intention-to-treat analysis of 783 adults with out-of-hospital, nontraumatic cardiac arrest. A total of 499 patients were included in the manual CPR phase (January 1, 2001, to March 31, 2003) and 284 patients in the LDB-CPR phase (December 20, 2003, to March 31, 2005); of these patients, the LDB device was applied in 210 patients. Intervention  Urban EMS system change from manual CPR to LDB-CPR. Main Outcome Measures  Return of spontaneous circulation (ROSC), with secondary outcome measures of survival to hospital admission and hospital discharge, and neurological outcome at discharge. Results  Patients in the manual CPR and LDB-CPR phases were comparable except for a faster response time interval (mean difference, 26 seconds) and more EMS-witnessed arrests (18.7% vs 12.6%) with LDB. Rates for ROSC and survival were increased with LDB-CPR compared with manual CPR (for ROSC, 34.5%; 95% confidence interval [CI], 29.2%-40.3% vs 20.2%; 95% CI, 16.9%-24.0%; adjusted odds ratio [OR], 1.94; 95% CI, 1.38-2.72; for survival to hospital admission, 20.9%; 95% CI, 16.6%-26.1% vs 11.1%; 95% CI, 8.6%-14.2%; adjusted OR, 1.88; 95% CI, 1.23-2.86; and for survival to hospital discharge, 9.7%; 95% CI, 6.7%-13.8% vs 2.9%; 95% CI, 1.7%-4.8%; adjusted OR, 2.27; 95% CI, 1.11-4.77). In secondary analysis of the 210 patients in whom the LDB device was applied, 38 patients (18.1%) survived to hospital admission (95% CI, 13.4%-23.9%) and 12 patients (5.7%) survived to hospital discharge (95% CI, 3.0%-9.3%). Among patients in the manual CPR and LDB-CPR groups who survived to hospital discharge, there was no significant difference between groups in Cerebral Performance Category (P = .36) or Overall Performance Category (P = .40). The number needed to treat for the adjusted outcome survival to discharge was 15 (95% CI, 9-33). Conclusion  Compared with resuscitation using manual CPR, a resuscitation strategy using LDB-CPR on EMS ambulances is associated with improved survival to hospital discharge in adults with out-of-hospital nontraumatic cardiac arrest.   相似文献   
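The number needed to treat quoted above can be sanity-checked from the absolute risk difference. Below is a minimal Python sketch using the unadjusted survival-to-discharge rates (9.7% vs 2.9%); the article's NNT of 15 refers to the adjusted outcome, so the crude value here is only an approximation:

def number_needed_to_treat(risk_with_intervention, risk_without):
    # NNT is the reciprocal of the absolute risk difference (here "risk" is the
    # probability of the good outcome, survival to hospital discharge).
    return 1.0 / abs(risk_with_intervention - risk_without)

print(round(number_needed_to_treat(0.097, 0.029), 1))  # about 14.7, i.e. roughly 15 patients per additional survivor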

4.
Context  Patients evaluated at emergency departments often present with nonemergency conditions that can be treated in other clinical settings. High-deductible health plans have been promoted as a means of reducing overutilization but could also be related to worse outcomes if patients defer necessary care. Objectives  To determine the relationship between transition to a high-deductible health plan and emergency department use for low- and high-severity conditions and to examine changes in subsequent hospitalizations. Design, Setting, and Participants  Analysis of emergency department visits and subsequent hospitalizations among 8724 individuals for 1 year before and after their employers mandated a switch from a traditional health maintenance organization plan to a high-deductible health plan, compared with 59 557 contemporaneous controls who remained in the traditional plan. All persons were aged 1 to 64 years and insured by a Massachusetts health plan between March 1, 2001, and June 30, 2005. Main Outcome Measures  Rates of first and repeat emergency department visits classified as low, indeterminate, or high severity during the baseline and follow-up periods, as well as rates of inpatient admission after emergency department visits. Results  Between the baseline and follow-up periods, emergency department visits among members who switched to high-deductible coverage decreased from 197.5 to 178.1 per 1000 members, while visits among controls remained at approximately 220 per 1000 (–10.0% adjusted difference in difference; 95% confidence interval [CI], –16.6% to –2.8%; P = .007). The high-deductible plan was not associated with a change in the rate of first visits occurring during the study period (–4.1% adjusted difference in difference; 95% CI, –11.8% to 4.3%). Repeat visits in the high-deductible group decreased from 334.6 to 255.3 visits per 1000 members and increased from 321.1 to 334.4 per 1000 members in controls (–24.9% difference in difference; 95% CI, –37.5% to –9.7%; P = .002). Low-severity repeat emergency department visits decreased in the high-deductible group from 142.5 to 92.1 per 1000 members and increased in controls from 128.0 to 132.5 visits per 1000 members (–36.4% adjusted difference in difference; 95% CI, –51.1% to –17.2%; P<.001), whereas a small decrease in high-severity visits in the high-deductible group could not be excluded. The percentage of patients admitted from the emergency department in the high-deductible group decreased from 11.8 % to 10.9% and increased from 11.9% to 13.6% among controls (–24.7% adjusted difference in difference; 95% CI, –41.0% to –3.9%; P = .02). Conclusions  Traditional health plan members who switched to high-deductible coverage visited the emergency department less frequently than controls, with reductions occurring primarily in repeat visits for conditions that were not classified as high severity, and had decreases in the rate of hospitalizations from the emergency department. Further research is needed to determine long-term health care utilization patterns under high-deductible coverage and to assess risks and benefits related to clinical outcomes.   相似文献   
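To make the difference-in-difference figures above concrete, here is a minimal Python sketch of the crude calculation for overall visit rates; it uses the unadjusted rates quoted in the abstract (and treats the control rate as flat at about 220 per 1000), so it only approximates the adjusted -10.0% estimate:

def percent_change(before, after):
    return (after - before) / before * 100.0

hdhp_change = percent_change(197.5, 178.1)     # high-deductible group, about -9.8%
control_change = percent_change(220.0, 220.0)  # controls held roughly constant, about 0%
print(f"crude difference in difference: {hdhp_change - control_change:.1f}%")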

5.
Survival from in-hospital cardiac arrest during nights and weekends   (Cited by: 4, self-citations: 0, other citations: 4)
Mary Ann Peberdy, MD; Joseph P. Ornato, MD; G. Luke Larkin, MD, MSPH, MS; R. Scott Braithwaite, MD; T. Michael Kashner, PhD, JD; Scott M. Carey; Peter A. Meaney, MD, MPH; Liyi Cen, MS; Vinay M. Nadkarni, MD, MS; Amy H. Praestgaard, MS; Robert A. Berg, MD; for the National Registry of Cardiopulmonary Resuscitation Investigators

JAMA. 2008;299(7):785-792.

Context  Occurrence of in-hospital cardiac arrest and survival patterns have not been characterized by time of day or day of week. Patient physiology and process of care for in-hospital cardiac arrest may be different at night and on weekends because of hospital factors unrelated to patient, event, or location variables.

Objective  To determine whether outcomes after in-hospital cardiac arrest differ during nights and weekends compared with days/evenings and weekdays.

Design and Setting  We examined survival from cardiac arrest in hourly time segments, defining day/evening as 7:00 AM to 10:59 PM, night as 11:00 PM to 6:59 AM, and weekend as 11:00 PM on Friday to 6:59 AM on Monday, in 86 748 adult, consecutive in-hospital cardiac arrest events in the National Registry of Cardiopulmonary Resuscitation obtained from 507 medical/surgical participating hospitals from January 1, 2000, through February 1, 2007.

Main Outcome Measures  The primary outcome of survival to discharge and secondary outcomes of survival of the event, 24-hour survival, and favorable neurological outcome were compared using odds ratios and multivariable logistic regression analysis. Point estimates of survival outcomes are reported as percentages with 95% confidence intervals (95% CIs).

Results  A total of 58 593 cases of in-hospital cardiac arrest occurred during day/evening hours (including 43 483 on weekdays and 15 110 on weekends), and 28 155 cases occurred during night hours (including 20 365 on weekdays and 7790 on weekends). Rates of survival to discharge (14.7% [95% CI, 14.3%-15.1%] vs 19.8% [95% CI, 19.5%-20.1%]), return of spontaneous circulation for longer than 20 minutes (44.7% [95% CI, 44.1%-45.3%] vs 51.1% [95% CI, 50.7%-51.5%]), survival at 24 hours (28.9% [95% CI, 28.4%-29.4%] vs 35.4% [95% CI, 35.0%-35.8%]), and favorable neurological outcomes (11.0% [95% CI, 10.6%-11.4%] vs 15.2% [95% CI, 14.9%-15.5%]) were substantially lower during the night compared with day/evening (all P values < .001). The first documented rhythm at night was more frequently asystole (39.6% [95% CI, 39.0%-40.2%] vs 33.5% [95% CI, 33.2%-33.9%], P < .001) and less frequently ventricular fibrillation (19.8% [95% CI, 19.3%-20.2%] vs 22.9% [95% CI, 22.6%-23.2%], P < .001). Among in-hospital cardiac arrests occurring during day/evening hours, survival was higher on weekdays (20.6% [95% CI, 20.3%-21%]) than on weekends (17.4% [95% CI, 16.8%-18%]; odds ratio, 1.15 [95% CI, 1.09-1.22]), whereas among in-hospital cardiac arrests occurring during night hours, survival to discharge was similar on weekdays (14.6% [95% CI, 14.1%-15.2%]) and on weekends (14.8% [95% CI, 14.1%-15.2%]; odds ratio, 1.02 [95% CI, 0.94-1.11]).

Conclusion  Survival rates from in-hospital cardiac arrest are lower during nights and weekends, even when adjusted for potentially confounding patient, event, and hospital characteristics.



6.
Prevalence of chronic kidney disease in the United States   (Cited by: 19, self-citations: 0, other citations: 19)
Coresh J  Selvin E  Stevens LA  Manzi J  Kusek JW  Eggers P  Van Lente F  Levey AS. JAMA. 2007;298(17):2038-2047.
Context  The prevalence and incidence of kidney failure treated by dialysis and transplantation in the United States have increased from 1988 to 2004. Whether there have been changes in the prevalence of earlier stages of chronic kidney disease (CKD) during this period is uncertain. Objective  To update the estimated prevalence of CKD in the United States. Design, Setting, and Participants  Cross-sectional analysis of the most recent National Health and Nutrition Examination Surveys (NHANES 1988-1994 and NHANES 1999-2004), a nationally representative sample of noninstitutionalized adults aged 20 years or older in 1988-1994 (n = 15 488) and 1999-2004 (n = 13 233). Main Outcome Measures  Chronic kidney disease prevalence was determined based on persistent albuminuria and decreased estimated glomerular filtration rate (GFR). Persistence of microalbuminuria (>30 mg/g) was estimated from repeat visit data in NHANES 1988-1994. The GFR was estimated using the abbreviated Modification of Diet in Renal Disease Study equation reexpressed to standard serum creatinine. Results  The prevalence of both albuminuria and decreased GFR increased from 1988-1994 to 1999-2004. The prevalence of CKD stages 1 to 4 increased from 10.0% (95% confidence interval [CI], 9.2%-10.9%) in 1988-1994 to 13.1% (95% CI, 12.0%-14.1%) in 1999-2004 with a prevalence ratio of 1.3 (95% CI, 1.2-1.4). The prevalence estimates of CKD stages in 1988-1994 and 1999-2004, respectively, were 1.7% (95% CI, 1.3%-2.2%) and 1.8% (95% CI, 1.4%-2.3%) for stage 1; 2.7% (95% CI, 2.2%-3.2%) and 3.2% (95% CI, 2.6%-3.9%) for stage 2; 5.4% (95% CI, 4.9%-6.0%) and 7.7% (95% CI, 7.0%-8.4%) for stage 3; and 0.21% (95% CI, 0.15%-0.27%) and 0.35% (0.25%-0.45%) for stage 4. A higher prevalence of diagnosed diabetes and hypertension and higher body mass index explained the entire increase in prevalence of albuminuria but only part of the increase in the prevalence of decreased GFR. Estimation of GFR from serum creatinine has limited precision and a change in mean serum creatinine accounted for some of the increased prevalence of CKD. Conclusions  The prevalence of CKD in the United States in 1999-2004 is higher than it was in 1988-1994. This increase is partly explained by the increasing prevalence of diabetes and hypertension and raises concerns about future increased incidence of kidney failure and other complications of CKD.   相似文献   

7.
Follow-up testing among children with elevated screening blood lead levels   (Cited by: 2, self-citations: 0, other citations: 2)
Kemper AR  Cohn LM  Fant KE  Dombkowski KJ  Hudson SR. JAMA. 2005;293(18):2232-2237.
Context  Follow-up testing after an abnormal screening blood lead level is a key component of lead poisoning prevention. Objectives  To measure the proportion of children with elevated screening lead levels who have follow-up testing and to determine factors associated with such care. Design, Setting, and Participants  Retrospective, observational cohort study of 3682 Michigan Medicaid-enrolled children aged 6 years or younger who had a screening blood lead level of at least 10 µg/dL (0.48 µmol/L) between January 1, 2002, and June 30, 2003. Main Outcome Measure  Testing within 180 days of an elevated screening lead level. Results  Follow-up testing was received by 53.9% (95% confidence interval [CI], 52.2%-55.5%) of the children. In multivariate analysis adjusting for age, screening blood lead level results, and local health department catchment area, the relative risk of follow-up testing was lower for Hispanic or nonwhite children than for white children (0.91; 95% CI, 0.87-0.94), for children living in urban compared with rural areas (0.92; 95% CI, 0.89-0.96), and for children living in high- compared with low-risk lead areas (0.94; 95% CI, 0.92-0.96). Among children who did not have follow-up testing, 58.6% (95% CI, 56.3%-61.0%) had at least 1 medical encounter in the 6-month period after the elevated screening blood lead level, including encounters for evaluation and management (39.3%; 95% CI, 36.9%-41.6%) or preventive care (13.2%; 95% CI, 11.6%-14.8%). Conclusions  The rate of follow-up testing after an abnormal screening blood lead level was low, and children with increased likelihood of lead poisoning were less likely to receive follow-up testing. At least half of the children had a missed opportunity for follow-up testing. The observed disparities of care may increase the burden of cognitive impairment among at-risk children.   相似文献   

8.
Jennifer Keiser, PhD; Jürg Utzinger, PhD

JAMA. 2008;299(16):1937-1948.

Context  More than a quarter of the human population is likely infected with soil-transmitted helminths (Ascaris lumbricoides, hookworm, and Trichuris trichiura) in highly endemic areas. Preventive chemotherapy is the mainstay of control, but only 4 drugs are available: albendazole, mebendazole, levamisole, and pyrantel pamoate.

Objective  To assess the efficacy of single-dose oral albendazole, mebendazole, levamisole, and pyrantel pamoate against A lumbricoides, hookworm, and T trichiura infections.

Data Sources  A systematic search of PubMed, ISI Web of Science, ScienceDirect, the World Health Organization library database, and the Cochrane Central Register of Controlled Trials (1960 to August 2007).

Study Selection  From 168 studies, 20 randomized controlled trials were included.

Data Extraction and Data Synthesis  Information on study year and country, sample size, age of study population, mean infection intensity before treatment, diagnostic method used, time between evaluations before and after treatment, cure rate (the percentage of individuals who became helminth egg negative following treatment with an anthelminthic drug), egg reduction rate, adverse events, and trial quality was extracted. Relative risk, including a 95% confidence interval (CI), was used to measure the effect of the drugs on the risk of infection prevalence with a random-effects model.

Results  Single-dose oral albendazole, mebendazole, and pyrantel pamoate for infection with A lumbricoides resulted in cure rates of 88% (95% CI, 79%-93%; 557 patients), 95% (95% CI, 91%-97%; 309 patients), and 88% (95% CI, 79%-93%; 131 patients), respectively. Cure rates for infection with T trichiura following treatment with single-dose oral albendazole and mebendazole were 28% (95% CI, 13%-39%; 735 patients) and 36% (95% CI, 16%-51%; 685 patients), respectively. The efficacy of single-dose oral albendazole, mebendazole, and pyrantel pamoate against hookworm infections was 72% (95% CI, 59%-81%; 742 patients), 15% (95% CI, 1%-27%; 853 patients), and 31% (95% CI, 19%-42%; 152 patients), respectively. No pooled relative risks could be calculated for pyrantel pamoate against T trichiura and levamisole for any of the parasites investigated.

Conclusions  Single-dose oral albendazole, mebendazole, and pyrantel pamoate show high cure rates against A lumbricoides. For hookworm infection, albendazole was more efficacious than mebendazole and pyrantel pamoate. Treatment of T trichiura with single oral doses of current anthelminthics is unsatisfactory. New anthelminthics are urgently needed.
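Because the review pools effects as relative risks with 95% confidence intervals, a minimal Python sketch of that effect measure for a single hypothetical trial may be useful (the counts are invented for illustration; the random-effects pooling across trials is not shown):

import math

def relative_risk_ci(events_treated, n_treated, events_control, n_control, z=1.96):
    # Relative risk of remaining infected after treatment vs control,
    # with a Wald confidence interval computed on the log scale.
    rr = (events_treated / n_treated) / (events_control / n_control)
    se_log_rr = math.sqrt(1/events_treated - 1/n_treated + 1/events_control - 1/n_control)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical trial: 12/100 participants still egg-positive after the drug vs 60/100 after placebo.
rr, lo, hi = relative_risk_ci(12, 100, 60, 100)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # about 0.20 (0.11-0.35)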



9.
Topiramate for treating alcohol dependence: a randomized controlled trial   (Cited by: 3, self-citations: 0, other citations: 3)
Context  Hypothetically, topiramate can improve drinking outcomes among alcohol-dependent individuals by reducing alcohol's reinforcing effects through facilitation of γ-aminobutyric acid function and inhibition of glutaminergic pathways in the corticomesolimbic system. Objective  To determine if topiramate is a safe and efficacious treatment for alcohol dependence. Design, Setting, and Participants  Double-blind, randomized, placebo-controlled, 14-week trial of 371 men and women aged 18 to 65 years diagnosed with alcohol dependence, conducted between January 27, 2004, and August 4, 2006, at 17 US sites. Interventions  Up to 300 mg/d of topiramate (n = 183) or placebo (n = 188), along with a weekly compliance enhancement intervention. Main Outcome Measures  Primary efficacy variable was self-reported percentage of heavy drinking days. Secondary outcomes included other self-reported drinking measures (percentage of days abstinent and drinks per drinking day) along with the laboratory measure of alcohol consumption (plasma γ-glutamyltransferase). Results  Treating all dropouts as relapse to baseline, topiramate was more efficacious than placebo at reducing the percentage of heavy drinking days from baseline to week 14 (mean difference, 8.44%; 95% confidence interval, 3.07%-13.80%; P = .002). Prespecified mixed-model analysis also showed that topiramate compared with placebo decreased the percentage of heavy drinking days (mean difference, 16.19%; 95% confidence interval, 10.79%-21.60%; P < .001) and all other drinking outcomes (P < .001 for all comparisons). Adverse events that were more common with topiramate vs placebo, respectively, included paresthesia (50.8% vs 10.6%), taste perversion (23.0% vs 4.8%), anorexia (19.7% vs 6.9%), and difficulty with concentration (14.8% vs 3.2%). Conclusion  Topiramate is a promising treatment for alcohol dependence. Trial Registration  clinicaltrials.gov Identifier: NCT00210925

10.
Charlie S. Wang, MD; J. Mark FitzGerald, MB, DM; Michael Schulzer, MD, PhD; Edwin Mak; Najib T. Ayas, MD, MPH

JAMA. 2005;294(15):1944-1956.

Context  Dyspnea is a common complaint in the emergency department where physicians must accurately make a rapid diagnosis.

Objective  To assess the usefulness of history, symptoms, and signs along with routine diagnostic studies (chest radiograph, electrocardiogram, and serum B-type natriuretic peptide [BNP]) that differentiate heart failure from other causes of dyspnea in the emergency department.

Data Sources  We searched MEDLINE (1966-July 2005) and the reference lists from retrieved articles, previous reviews, and physical examination textbooks.

Study Selection  We retained 22 studies of various findings for diagnosing heart failure in adult patients presenting with dyspnea to the emergency department.

Data Extraction  Two authors independently abstracted data (sensitivity, specificity, and likelihood ratios [LRs]) and assessed methodological quality.

Data Synthesis  Many features increased the probability of heart failure, with the best feature for each category being the presence of (1) past history of heart failure (positive LR = 5.8; 95% confidence interval [CI], 4.1-8.0); (2) the symptom of paroxysmal nocturnal dyspnea (positive LR = 2.6; 95% CI, 1.5-4.5); (3) the sign of the third heart sound (S3) gallop (positive LR = 11; 95% CI, 4.9-25.0); (4) the chest radiograph showing pulmonary venous congestion (positive LR = 12.0; 95% CI, 6.8-21.0); and (5) electrocardiogram showing atrial fibrillation (positive LR = 3.8; 95% CI, 1.7-8.8). The features that best decreased the probability of heart failure were the absence of (1) past history of heart failure (negative LR = 0.45; 95% CI, 0.38-0.53); (2) the symptom of dyspnea on exertion (negative LR = 0.48; 95% CI, 0.35-0.67); (3) rales (negative LR = 0.51; 95% CI, 0.37-0.70); (4) the chest radiograph showing cardiomegaly (negative LR = 0.33; 95% CI, 0.23-0.48); and (5) any electrocardiogram abnormality (negative LR = 0.64; 95% CI, 0.47-0.88). A low serum BNP proved to be the most useful test (serum B-type natriuretic peptide <100 pg/mL; negative LR = 0.11; 95% CI, 0.07-0.16).

Conclusions  For dyspneic adult emergency department patients, a directed history, physical examination, chest radiograph, and electrocardiography should be performed. If the suspicion of heart failure remains, obtaining a serum BNP level may be helpful, especially for excluding heart failure.
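As a companion to the likelihood ratios above, here is a minimal Python sketch of how an LR updates a pretest probability through odds; the negative LR of 0.11 is the value reported for BNP < 100 pg/mL, while the 50% pretest probability is an assumed figure used only to illustrate the arithmetic:

def post_test_probability(pretest_probability, likelihood_ratio):
    # Convert probability to odds, apply the likelihood ratio, convert back to probability.
    pretest_odds = pretest_probability / (1 - pretest_probability)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(f"{post_test_probability(0.50, 0.11):.1%}")  # about 9.9%: heart failure becomes much less likely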



11.
Pang X  Zhu Z  Xu F  Guo J  Gong X  Liu D  Liu Z  Chin DP  Feikin DR. JAMA. 2003;290(24):3215-3221.
Context  Beijing, China, experienced the world's largest outbreak of severe acute respiratory syndrome (SARS) beginning in March 2003, with the outbreak resolving rapidly, within 6 weeks of its peak in late April. Little is known about the control measures implemented during this outbreak. Objective  To describe and evaluate the measures undertaken to control the SARS outbreak. Design, Setting, and Participants  Data were reviewed from standardized surveillance forms from SARS cases (2521 probable cases) and their close contacts observed in Beijing between March 5, 2003, and May 29, 2003. Procedures implemented by health authorities were investigated through review of official documents and discussions with public health officials. Main Outcome Measures  Timeline of major control measures; number of cases and quarantined close contacts and attack rates, with changes in infection control measures, management, and triage of suspected cases; and time lag between illness onset and hospitalization with information dissemination. Results  Health care worker training in use of personal protective equipment and management of patients with SARS and establishing fever clinics and designated SARS wards in hospitals predated the steepest decline in cases. During the outbreak, 30 178 persons were quarantined. Among 2195 quarantined close contacts in 5 districts, the attack rate was 6.3% (95% confidence interval [CI], 5.3%-7.3%), with a range of 15.4% (95% CI, 11.5%-19.2%) among spouses to 0.36% (95% CI, 0%-0.77%) among work and school contacts. The attack rate among quarantined household members increased with age from 5.0% (95% CI, 0%-10.5%) in children younger than 10 years to 27.6% (95% CI, 18.2%-37.0%) in adults aged 60 to 69 years. Among almost 14 million people screened for fever at the airport, train stations, and roadside checkpoints, only 12 were found to have probable SARS. The national and municipal governments held 13 press conferences about SARS. The time lag between illness onset and hospitalization decreased from a median of 5 to 6 days on or before April 20, 2003, the day the outbreak was announced to the public, to 2 days after April 20 (P<.001). Conclusions  The rapid resolution of the SARS outbreak was multifactorial, involving improvements in management and triage in hospitals and communities of patients with suspected SARS and the dissemination of information to health care workers and the public.   相似文献   

12.
Context  Although acute renal failure (ARF) is believed to be common in the setting of critical illness and is associated with a high risk of death, little is known about its epidemiology and outcome or how these vary in different regions of the world. Objectives  To determine the period prevalence of ARF in intensive care unit (ICU) patients in multiple countries; to characterize differences in etiology, illness severity, and clinical practice; and to determine the impact of these differences on patient outcomes. Design, Setting, and Patients  Prospective observational study of ICU patients who either were treated with renal replacement therapy (RRT) or fulfilled at least 1 of the predefined criteria for ARF from September 2000 to December 2001 at 54 hospitals in 23 countries. Main Outcome Measures  Occurrence of ARF, factors contributing to etiology, illness severity, treatment, need for renal support after hospital discharge, and hospital mortality. Results  Of 29 269 critically ill patients admitted during the study period, 1738 (5.7%; 95% confidence interval [CI], 5.5%-6.0%) had ARF during their ICU stay, including 1260 who were treated with RRT. The most common contributing factor to ARF was septic shock (47.5%; 95% CI, 45.2%-49.5%). Approximately 30% of patients had preadmission renal dysfunction. Overall hospital mortality was 60.3% (95% CI, 58.0%-62.6%). Dialysis dependence at hospital discharge was 13.8% (95% CI, 11.2%-16.3%) for survivors. Independent risk factors for hospital mortality included use of vasopressors (odds ratio [OR], 1.95; 95% CI, 1.50-2.55; P<.001), mechanical ventilation (OR, 2.11; 95% CI, 1.58-2.82; P<.001), septic shock (OR, 1.36; 95% CI, 1.03-1.79; P = .03), cardiogenic shock (OR, 1.41; 95% CI, 1.05-1.90; P = .02), and hepatorenal syndrome (OR, 1.87; 95% CI, 1.07-3.28; P = .03). Conclusion  In this multinational study, the period prevalence of ARF requiring RRT in the ICU was between 5% and 6% and was associated with a high hospital mortality rate.   相似文献   

13.
Context  Chlamydial and gonococcal infections are important causes of pelvic inflammatory disease, ectopic pregnancy, and infertility. Although screening for Chlamydia trachomatis is widely recommended among young adult women, little information is available regarding the prevalence of chlamydial and gonococcal infections in the general young adult population. Objective  To determine the prevalence of chlamydial and gonococcal infections in a nationally representative sample of young adults living in the United States. Design, Setting, and Participants  Cross-sectional analyses of a prospective cohort study of a nationally representative sample of 14 322 young adults aged 18 to 26 years. In-home interviews were conducted across the United States for Wave III of The National Longitudinal Study of Adolescent Health (Add Health) from April 2, 2001, to May 9, 2002. This study sample represented 66.3% of the original 18 924 participants in Wave I of Add Health. First-void urine specimens using ligase chain reaction assay were available for 12 548 (87.6%) of the Wave III participants. Main Outcome Measures  Prevalences of chlamydial and gonococcal infections in the general young adult population, and by age, self-reported race/ethnicity, and geographic region of current residence. Results  Overall prevalence of chlamydial infection was 4.19% (95% confidence interval [CI], 3.48%-4.90%). Women (4.74%; 95% CI, 3.93%-5.71%) were more likely to be infected than men (3.67%; 95% CI, 2.93%-4.58%; prevalence ratio, 1.29; 95% CI, 1.03-1.63). The prevalence of chlamydial infection was highest among black women (13.95%; 95% CI, 11.25%-17.18%) and black men (11.12%; 95% CI, 8.51%-14.42%); lowest prevalences were among Asian men (1.14%; 95% CI, 0.40%-3.21%), white men (1.38%; 95% CI, 0.93%-2.03%), and white women (2.52%; 95% CI, 1.90%-3.34%). Prevalence of chlamydial infection was highest in the south (5.39%; 95% CI, 4.24%-6.83%) and lowest in the northeast (2.39%; 95% CI, 1.56%-3.65%). Overall prevalence of gonorrhea was 0.43% (95% CI, 0.29%-0.63%). Among black men and women, the prevalence was 2.13% (95% CI, 1.46%-3.10%) and among white young adults, 0.10% (95% CI, 0.03%-0.27%). Prevalence of coinfection with both chlamydial and gonococcal infections was 0.30% (95% CI, 0.18%-0.49%). Conclusions  The prevalence of chlamydial infection is high among young adults in the United States. Substantial racial/ethnic disparities are present in the prevalence of both chlamydial and gonococcal infections.

14.
Context  Violence-related behaviors such as fighting and weapon carrying are associated with serious physical and psychosocial consequences for adolescents. Objective  To measure trends in nonfatal violent behaviors among adolescents in the United States between 1991 and 1997. Design, Setting, and Participants  Nationally representative data from the 1991, 1993, 1995, and 1997 Youth Risk Behavior Surveys were analyzed to describe the percentage of students in grades 9 through 12 who engaged in behaviors related to violence. Overall response rates for each of these years were 68%, 70%, 60%, and 69%, respectively. To assess the statistical significance of time trends for these variables, logistic regression analyses were conducted that controlled for sex, grade, and race or ethnicity and simultaneously assessed linear and higher-order effects. Main Outcome Measures  Self-reported weapon carrying, physical fighting, fighting-related injuries, feeling unsafe, and damaged or stolen property. Results  Between 1991 and 1997, the percentage of students in a physical fight decreased 14%, from 42.5% (95% confidence interval [CI], 40.1%-44.9%) to 36.6% (95% CI, 34.6%-38.6%); the percentage of students injured in a physical fight decreased 20%, from 4.4% (95% CI, 3.6%-5.2%) to 3.5% (95% CI, 2.9%-4.1%); and the percentage of students who carried a weapon decreased 30%, from 26.1% (95% CI, 23.8%-28.4%) to 18.3% (95% CI, 16.5%-20.1%). Between 1993 and 1997, the percentage of students who carried a gun decreased 25%, from 7.9% (95% CI, 6.6%-9.2%) to 5.9% (95% CI, 5.1%-6.7%); the percentage of students in a physical fight on school property decreased 9%, from 16.2% (95% CI, 15.0%-17.4%) to 14.8% (95% CI, 13.5%-16.1%); and the percentage of students who carried a weapon on school property decreased 28%, from 11.8% (95% CI, 10.4%-13.2%) to 8.5% (95% CI, 7.0%-10.0%). All of these changes represent significant linear decreases. Conclusions  Declines in fighting and weapon carrying among US adolescents between 1991 and 1997 are encouraging and consistent with declines in homicide, nonfatal victimization, and school crime rates. Further research should explore why behaviors related to interpersonal violence are decreasing and what types of interventions are most effective.   相似文献   

15.
Nathens AB  Jurkovich GJ  Cummings P  Rivara FP  Maier RV. JAMA. 2000;283(15):1990-1994.
Context  Despite calls for wider national implementation of an integrated approach to trauma care, the effectiveness of this approach at a regional or state level remains unproven. Objective  To determine whether implementation of an organized system of trauma care reduces mortality due to motor vehicle crashes. Design  Cross-sectional time-series analysis of crash mortality data collected for 1979 through 1995 from the Fatality Analysis Reporting System. Setting  All 50 US states and the District of Columbia. Subjects  All front-seat passenger vehicle occupants aged 15 to 74 years. Main Outcome Measures  Rates of death due to motor vehicle crashes compared before and after implementation of an organized trauma care system. Estimates are based on within-state comparisons adjusted for national trends in crash mortality. Results  Ten years following initial trauma system implementation, mortality due to traffic crashes began to decline; about 15 years following trauma system implementation, mortality was reduced by 8% (95% confidence interval [CI], 3%-12%) after adjusting for secular trends in crash mortality, age, and the introduction of traffic safety laws. Implementation of primary enforcement of restraint laws and laws deterring drunk driving resulted in reductions in crash mortality of 13% (95% CI, 11%-16%) and 5% (95% CI, 3%-7%), respectively, while relaxation of state speed limits increased mortality by 7% (95% CI, 3%-10%). Conclusions  Our data indicate that implementation of an organized system of trauma care reduces crash mortality. The effect does not appear for 10 years, a finding consistent with the maturation and development of trauma triage protocols, interhospital transfer agreements, organization of trauma centers, and ongoing quality assurance.   相似文献   

16.
The changing relationship of obesity and disability, 1988-2004   (Cited by: 1, self-citations: 0, other citations: 1)
Alley DE  Chang VW. JAMA. 2007;298(17):2020-2027.
Context  Recent studies suggest that the obese population may have been growing healthier since the 1960s, as indicated by a decrease in mortality and cardiovascular risk factors. However, whether these improvements have conferred decreased risk for disability is unknown. The obese population may be living longer with better-controlled risk factors but paradoxically experiencing more disability. Objective  To determine whether the association between obesity and disability has changed over time. Design, Setting, and Participants  Adults aged 60 years and older (N = 9928) with measured body mass index from 2 waves of the nationally representative National Health and Nutrition Examination Surveys (NHANES III [1988-1994] and NHANES 1999-2004). Main Outcome Measures  Reports of much difficulty or inability to perform tasks in 2 disability domains: functional limitations (walking one-fourth mile, walking up 10 steps, stooping, lifting 10 lb, walking between rooms, and standing from an armless chair) and activities of daily living (ADL) limitations (transferring, eating, and dressing). Results  Among obese individuals, the prevalence of functional impairment increased 5.4% (from 36.8%-42.2%; P = .03) between the 2 surveys, and ADL impairment did not change. At time 1 (1988-1994), the odds of functional impairment for obese individuals were 1.78 times greater than for normal-weight individuals (95% confidence interval [CI], 1.47-2.16). At time 2 (1999-2004), this odds ratio increased to 2.75 (95% CI, 2.39-3.17), because the odds of functional impairment increased by 43% (OR 1.43; 95% CI, 1.18-1.75) among obese individuals during this period, but did not change among nonobese individuals. With respect to ADL impairment, odds for obese individuals were not significantly greater than for individuals with normal weight (OR, 1.31; 95% CI, 0.92-1.88) at time 1, but increased to 2.05 (95% CI, 1.45-2.88) at time 2. This was because the odds of ADL impairment did not change for obese individuals but decreased by 34% among nonobese individuals (OR, 0.66; 95% CI, 0.50-0.88). Conclusions  Recent cardiovascular improvements have not been accompanied by reduced disability within the obese older population. Rather, obese participants surveyed during 1999-2004 were more likely to report functional impairments than obese participants surveyed during 1988-1994, and reductions in ADL impairment observed for nonobese older individuals did not occur in those who were obese. Over time, declines in obesity-related mortality, along with a younger age at onset of obesity, could lead to an increased burden of disability within the obese older population.   相似文献   

17.
Roy M. Soetikno, MD, MS; Tonya Kaltenbach, MD, MS; Robert V. Rouse, MD; Walter Park, MD; Anamika Maheshwari, MD; Tohru Sato, MD; Suzanne Matsui, MD; Shai Friedland, MD, MS

JAMA. 2008;299(9):1027-1035.

Context  Colorectal cancer is the second leading cause of cancer death in the United States. Prevention has focused on the detection and removal of polypoid neoplasms. Data are limited on the significance of nonpolypoid colorectal neoplasms (NP-CRNs).

Objectives  To determine the prevalence of NP-CRNs in a veterans hospital population and to characterize their association with colorectal cancer.

Design, Setting, and Patients  Cross-sectional study at a veterans hospital in California with 1819 patients undergoing elective colonoscopy from July 2003 to June 2004.

Main Outcome Measures  Endoscopic appearance, location, size, histology, and depth of invasion of neoplasms.

Results  The overall prevalence of NP-CRNs was 9.35% (95% confidence interval [95% CI], 8.05%-10.78%; n = 170). The prevalence of NP-CRNs in the subpopulations for screening, surveillance, and symptoms was 5.84% (95% CI, 4.13%-8.00%; n = 36), 15.44% (95% CI, 12.76%-18.44%; n = 101), and 6.01% (95% CI, 4.17%-8.34%; n = 33), respectively. The overall prevalence of NP-CRNs with in situ or submucosal invasive carcinoma was 0.82% (95% CI, 0.46%-1.36%; n = 15); in the screening population, the prevalence was 0.32% (95% CI, 0.04%-1.17%; n = 2). Overall, NP-CRNs were more likely to contain carcinoma (odds ratio, 9.78; 95% CI, 3.93-24.4) than polypoid lesions, irrespective of the size. The positive size-adjusted association of NP-CRNs with in situ or submucosal invasive carcinoma was also observed in subpopulations for screening (odds ratio, 2.01; 95% CI, 0.27-15.3) and surveillance (odds ratio, 63.7; 95% CI, 9.41-431). The depressed type had the highest risk (33%). Nonpolypoid colorectal neoplasms containing carcinoma were smaller in diameter as compared with the polypoid ones (mean [SD] diameter, 15.9 [10.2] mm vs 19.2 [9.6] mm, respectively). The procedure times did not change appreciably as compared with historical controls.

Conclusion  In this group of veteran patients, NP-CRNs were relatively common lesions diagnosed during routine colonoscopy and had a greater association with carcinoma compared with polypoid neoplasms, irrespective of size.



18.
Context  Despite more than 2 decades of outcomes research after very preterm birth, clinicians remain uncertain about the extent to which neonatal morbidities predict poor long-term outcomes of extremely low-birth-weight (ELBW) infants. Objective  To determine the individual and combined prognostic effects of bronchopulmonary dysplasia (BPD), ultrasonographic signs of brain injury, and severe retinopathy of prematurity (ROP) on 18-month outcomes of ELBW infants. Design  Inception cohort assembled for the Trial of Indomethacin Prophylaxis in Preterms (TIPP). Setting and Participants  A total of 910 infants with birth weights of 500 to 999 g who were admitted to 1 of 32 neonatal intensive care units in Canada, the United States, Australia, New Zealand, and Hong Kong between 1996 and 1998 and who survived to a postmenstrual age of 36 weeks. Main Outcome Measures  Combined end point of death or survival to 18 months with 1 or more of cerebral palsy, cognitive delay, severe hearing loss, and bilateral blindness. Results  Each of the neonatal morbidities was similarly and independently correlated with a poor 18-month outcome. Odds ratios were 2.4 (95% confidence interval [CI], 1.8-3.2) for BPD, 3.7 (95% CI, 2.6-5.3) for brain injury, and 3.1 (95% CI, 1.9-5.0) for severe ROP. In children who were free of BPD, brain injury, and severe ROP the rate of poor long-term outcomes was 18% (95% CI, 14%-22%). Corresponding rates with any 1, any 2, and all 3 neonatal morbidities were 42% (95% CI, 37%-47%), 62% (95% CI, 53%-70%), and 88% (64%-99%), respectively. Conclusion  In ELBW infants who survive to a postmenstrual age of 36 weeks, a simple count of 3 common neonatal morbidities strongly predicts the risk of later death or neurosensory impairment.   相似文献   

19.
Prevalence of HPV infection among females in the United States   (Cited by: 15, self-citations: 1, other citations: 14)
Context  Human papillomavirus (HPV) infection is estimated to be the most common sexually transmitted infection. Baseline population prevalence data for HPV infection in the United States before widespread availability of a prophylactic HPV vaccine would be useful. Objective  To determine the prevalence of HPV among females in the United States. Design, Setting, and Participants  The National Health and Nutrition Examination Survey (NHANES) uses a representative sample of the US noninstitutionalized civilian population. Females aged 14 to 59 years who were interviewed at home for NHANES 2003-2004 were examined in a mobile examination center and provided a self-collected vaginal swab specimen. Swabs were analyzed for HPV DNA by L1 consensus polymerase chain reaction followed by type-specific hybridization. Demographic and sexual behavior information was obtained from all participants. Main Outcome Measures  HPV prevalence by polymerase chain reaction. Results  The overall HPV prevalence was 26.8% (95% confidence interval [CI], 23.3%-30.9%) among US females aged 14 to 59 years (n = 1921). HPV prevalence was 24.5% (95% CI, 19.6%-30.5%) among females aged 14 to 19 years, 44.8% (95% CI, 36.3%-55.3%) among women aged 20 to 24 years, 27.4% (95% CI, 21.9%-34.2%) among women aged 25 to 29 years, 27.5% (95% CI, 20.8%-36.4%) among women aged 30 to 39 years, 25.2% (95% CI, 19.7%-32.2%) among women aged 40 to 49 years, and 19.6% (95% CI, 14.3%-26.8%) among women aged 50 to 59 years. There was a statistically significant trend for increasing HPV prevalence with each year of age from 14 to 24 years (P<.001), followed by a gradual decline in prevalence through 59 years (P = .06). HPV vaccine types 6 and 11 (low-risk types) and 16 and 18 (high-risk types) were detected in 3.4% of female participants; HPV-6 was detected in 1.3% (95% CI, 0.8%-2.3%), HPV-11 in 0.1% (95% CI, 0.03%-0.3%), HPV-16 in 1.5% (95% CI, 0.9%-2.6%), and HPV-18 in 0.8% (95% CI, 0.4%-1.5%) of female participants. Independent risk factors for HPV detection were age, marital status, and increasing numbers of lifetime and recent sex partners. Conclusions  HPV is common among females in the United States. Our data indicate that the burden of prevalent HPV infection among females was greater than previous estimates and was highest among those aged 20 to 24 years. However, the prevalence of HPV vaccine types was relatively low.   相似文献   

20.
Kirsten Johnson, MD, MPH; Jana Asher, MSc; Stephanie Rosborough, MD, MPH; Amisha Raja, MA, PsyD; Rajesh Panjabi, MD, MPH; Charles Beadling, MD; Lynn Lawry, MD, MSPH, MSc

JAMA. 2008;300(6):676-690.

Context  Liberia's wars since 1989 have cost tens of thousands of lives and left many people mentally and physically traumatized.

Objectives  To assess the prevalence and impact of war-related psychosocial trauma, including information on participation in the Liberian civil wars, exposure to sexual violence, social functioning, and mental health.

Design, Setting, and Participants  A cross-sectional, population-based, multistage random cluster survey of 1666 adults aged 18 years or older using structured interviews and questionnaires, conducted during a 3-week period in May 2008 in Liberia.

Main Outcome Measures  Symptoms of major depressive disorder (MDD) and posttraumatic stress disorder (PTSD), social functioning, exposure to sexual violence, and health and mental health needs among Liberian adults who witnessed or participated in the conflicts during the last 2 decades.

Results  In the Liberian adult household–based population, 40% (95% confidence interval [CI], 36%-45%; n = 672/1659) met symptom criteria for MDD, 44% (95% CI, 38%-49%; n = 718/1661) met symptom criteria for PTSD, and 8% (95% CI, 5%-10%; n = 133/1666) met criteria for social dysfunction. Thirty-three percent of respondents (549/1666) reported having served time with fighting forces, and 33.2% of former combatant respondents (182/549) were female. Former combatants experienced higher rates of exposure to sexual violence than noncombatants: among females, 42.3% (95% CI, 35.4%-49.1%) vs 9.2% (95% CI, 6.7%-11.7%), respectively; among males, 32.6% (95% CI, 27.6%-37.6%) vs 7.4% (95% CI, 4.5%-10.4%). The rates of symptoms of PTSD, MDD, and suicidal ideation were higher among former combatants than noncombatants and among those who experienced sexual violence vs those who did not. The prevalence of PTSD symptoms among female former combatants who experienced sexual violence (74%; 95% CI, 63%-84%) was higher than among those who did not experience sexual violence (44%; 95% CI, 33%-53%). The prevalence of PTSD symptoms among male former combatants who experienced sexual violence was higher (81%; 95% CI, 74%-87%) than among male former combatants who did not experience sexual violence (46%; 95% CI, 39%-52%). Male former combatants who experienced sexual violence also reported higher rates of symptoms of depression and suicidal ideation. Both former combatants and noncombatants experienced inadequate access to health care (33.0% [95% CI, 22.6%-43.4%] and 30.1% [95% CI, 18.7%-41.6%], respectively).

Conclusions  Former combatants in Liberia were not exclusively male. Both female and male former combatants who experienced sexual violence had worse mental health outcomes than noncombatants and other former combatants who did not experience exposure to sexual violence.


