Similar Articles (20 results)
1.
Context  Atrial tachyarrhythmias after cardiac surgery are associated with adverse outcomes and increased costs. Previous trials of amiodarone prophylaxis, while promising, were relatively small and yielded conflicting results. Objective  To determine whether a brief perioperative course of oral amiodarone is an effective and safe prophylaxis for atrial tachyarrhythmias after cardiac surgery overall and in important subgroups. Design, Setting, and Patients  Double-blind randomized controlled trial of 601 patients listed for nonemergent coronary artery bypass graft (CABG) surgery and/or valve replacement/repair surgery between February 1, 1999, and September 26, 2003, at a tertiary care hospital. The patients were followed up for 1 year. Intervention  Oral amiodarone (10 mg/kg daily) or placebo administered 6 days prior to surgery through 6 days after surgery (13 days). Randomization was stratified for subgroups defined by age, type of surgery, and use of preoperative β-blockers. Main Outcome Measure  Incidence of atrial tachyarrhythmias lasting 5 minutes or longer that prompted therapy by the sixth postoperative day. 
Results  Atrial tachyarrhythmias occurred in fewer amiodarone patients (48/299; 16.1%) than in placebo patients (89/302; 29.5%) overall (hazard ratio [HR], 0.52; 95% confidence interval [CI], 0.34-0.69; P<.001); in patients younger than 65 years (19 [11.2%] vs 36 [21.1%]; HR, 0.51 [95% CI, 0.28-0.94]; P = .02); in patients aged 65 years or older (28 [21.7%] vs 54 [41.2%]; HR, 0.45 [95% CI, 0.27-0.75]; P<.001); in patients who had CABG surgery only (22 [11.3%] vs 46 [23.6%]; HR, 0.45 [95% CI, 0.26-0.79]; P = .002); in patients who had valve replacement/repair surgery with or without CABG surgery (25 [23.8%] vs 44 [44.1%]; HR, 0.51 [95% CI, 0.31-0.84]; P = .008); in patients who received preoperative β-blocker therapy (27 [15.3%] vs 42 [25.0%]; HR, 0.58 [95% CI, 0.34-0.99]; P = .03); and in patients who did not receive preoperative β-blocker therapy (20 [16.3%] vs 48 [35.8%]; HR, 0.40 [95% CI, 0.22-0.71]; P<.001), respectively. Postoperative sustained ventricular tachyarrhythmias occurred less frequently in amiodarone patients (1/299; 0.3%) than in placebo patients (8/302; 2.6%) (P = .04). Dosage reductions of blinded therapy were more common in amiodarone patients (34/299; 11.4%) than in placebo patients (16/302; 5.3%) (P = .008). There were no differences in serious postoperative complications, in-hospital mortality, or readmission to the hospital within 6 months of discharge or in 1-year mortality. Conclusion  Oral amiodarone prophylaxis of atrial tachyarrhythmias after cardiac surgery is effective and may be safe overall and in important patient subgroups. Clinical Trials Registration  ClinicalTrials.gov Identifier: NCT00251706
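As a rough consistency check on figures like these, an unadjusted risk ratio and its Wald confidence interval can be recomputed from the raw event counts (48/299 amiodarone vs 89/302 placebo). The sketch below is illustrative only: it yields a crude risk ratio, not the time-to-event hazard ratio the trial reports, so the numbers differ somewhat from the published HR.

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Unadjusted risk ratio with a Wald 95% CI.

    a/n1: events/total in the treatment group
    c/n2: events/total in the control group
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Event counts from the amiodarone trial above: 48/299 vs 89/302
rr, lo, hi = risk_ratio_ci(48, 299, 89, 302)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR 0.54 (95% CI, 0.40-0.74)
```

The crude RR of about 0.54 lands close to the reported HR of 0.52; the wider interval reflects the simpler method, which ignores censoring and follow-up time.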

2.
Survival from in-hospital cardiac arrest during nights and weekends
Mary Ann Peberdy, MD; Joseph P. Ornato, MD; G. Luke Larkin, MD, MSPH, MS; R. Scott Braithwaite, MD; T. Michael Kashner, PhD, JD; Scott M. Carey; Peter A. Meaney, MD, MPH; Liyi Cen, MS; Vinay M. Nadkarni, MD, MS; Amy H. Praestgaard, MS; Robert A. Berg, MD; for the National Registry of Cardiopulmonary Resuscitation Investigators

JAMA. 2008;299(7):785-792.

Context  Occurrence of in-hospital cardiac arrest and survival patterns have not been characterized by time of day or day of week. Patient physiology and process of care for in-hospital cardiac arrest may be different at night and on weekends because of hospital factors unrelated to patient, event, or location variables.

Objective  To determine whether outcomes after in-hospital cardiac arrest differ during nights and weekends compared with days/evenings and weekdays.

Design and Setting  We examined survival from cardiac arrest in hourly time segments, defining day/evening as 7:00 AM to 10:59 PM, night as 11:00 PM to 6:59 AM, and weekend as 11:00 PM on Friday to 6:59 AM on Monday, in 86 748 adult, consecutive in-hospital cardiac arrest events in the National Registry of Cardiopulmonary Resuscitation obtained from 507 medical/surgical participating hospitals from January 1, 2000, through February 1, 2007.

Main Outcome Measures  The primary outcome of survival to discharge and secondary outcomes of survival of the event, 24-hour survival, and favorable neurological outcome were compared using odds ratios and multivariable logistic regression analysis. Point estimates of survival outcomes are reported as percentages with 95% confidence intervals (95% CIs).

Results  A total of 58 593 cases of in-hospital cardiac arrest occurred during day/evening hours (including 43 483 on weekdays and 15 110 on weekends), and 28 155 cases occurred during night hours (including 20 365 on weekdays and 7790 on weekends). Rates of survival to discharge (14.7% [95% CI, 14.3%-15.1%] vs 19.8% [95% CI, 19.5%-20.1%]), return of spontaneous circulation for longer than 20 minutes (44.7% [95% CI, 44.1%-45.3%] vs 51.1% [95% CI, 50.7%-51.5%]), survival at 24 hours (28.9% [95% CI, 28.4%-29.4%] vs 35.4% [95% CI, 35.0%-35.8%]), and favorable neurological outcomes (11.0% [95% CI, 10.6%-11.4%] vs 15.2% [95% CI, 14.9%-15.5%]) were substantially lower during the night compared with day/evening (all P values < .001). The first documented rhythm at night was more frequently asystole (39.6% [95% CI, 39.0%-40.2%] vs 33.5% [95% CI, 33.2%-33.9%], P < .001) and less frequently ventricular fibrillation (19.8% [95% CI, 19.3%-20.2%] vs 22.9% [95% CI, 22.6%-23.2%], P < .001). Among in-hospital cardiac arrests occurring during day/evening hours, survival was higher on weekdays (20.6% [95% CI, 20.3%-21%]) than on weekends (17.4% [95% CI, 16.8%-18%]; odds ratio, 1.15 [95% CI, 1.09-1.22]), whereas among in-hospital cardiac arrests occurring during night hours, survival to discharge was similar on weekdays (14.6% [95% CI, 14.1%-15.2%]) and on weekends (14.8% [95% CI, 14.1%-15.2%]; odds ratio, 1.02 [95% CI, 0.94-1.11]).

Conclusion  Survival rates from in-hospital cardiac arrest are lower during nights and weekends, even when adjusted for potentially confounding patient, event, and hospital characteristics.



3.
Context  Information on the school-age functioning and special health care needs of extremely low-birth-weight (ELBW, <1000 g) children is necessary to plan for medical and educational services. Objective  To examine neurosensory, developmental, and medical conditions together with the associated functional limitations and special health care needs of ELBW children compared with normal-birth-weight (NBW) term-born children (controls). Design, Setting, and Participants  A follow-up study at age 8 years of a cohort of 219 ELBW children born 1992 to 1995 (92% of survivors) and 176 NBW controls of similar sociodemographic status conducted in Cleveland, Ohio. Main Outcome Measures  Parent Questionnaire for Identifying Children with Chronic Conditions of 12 months or more and categorization of specific medical diagnoses and developmental disabilities based on examination of the children. Results  In logistic regression analyses adjusting for sociodemographic status and sex, ELBW children had significantly more chronic conditions than NBW controls, including functional limitations (64% vs 20%, respectively; odds ratio [OR], 8.1; 95% confidence interval [CI], 5.0-13.1; P<.001), compensatory dependency needs (48% vs 23%, respectively; OR, 3.0; 95% CI, 1.9-4.7; P<.001), and services above those routinely required by children (65% vs 27%, respectively; OR, 5.4; 95% CI, 3.4-8.5; P<.001). These differences remained significant when the 36 ELBW children with neurosensory impairments were excluded. 
Specific diagnoses and disabilities for ELBW vs NBW children included cerebral palsy (14% vs 0%, respectively; P<.001), asthma (21% vs 9%; OR, 3.0; 95% CI, 1.6-5.6; P = .001), vision of less than 20/200 (10% vs 3%; OR, 3.1; 95% CI, 1.2-7.8; P = .02), low IQ of less than 85 (38% vs 14%; OR, 4.5; 95% CI, 2.7-7.7; P<.001), limited academic skills (37% vs 15%; OR, 4.2; 95% CI, 2.5-7.3; P<.001), poor motor skills (47% vs 10%; OR, 7.8; 95% CI, 4.5-13.6; P<.001), and poor adaptive functioning (69% vs 34%; OR, 6.5; 95% CI, 4.0-10.6; P<.001). Conclusion  The ELBW survivors in school at age 8 years who were born in the 1990s have considerable long-term health and educational needs.
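For orientation, an odds ratio like those above can be approximated from the reported percentages alone. This minimal sketch shows only the arithmetic; it cannot reproduce the published values exactly, because those are adjusted for sociodemographic status and sex in a logistic regression.

```python
def odds_ratio(p_exposed, p_control):
    """Unadjusted odds ratio from two proportions (0-1)."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_control = p_control / (1 - p_control)
    return odds_exposed / odds_control

# Functional limitations: 64% of ELBW children vs 20% of NBW controls
print(round(odds_ratio(0.64, 0.20), 1))  # 7.1, vs the adjusted OR of 8.1
```

The gap between the crude 7.1 and the published 8.1 is what covariate adjustment contributes here.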

4.
Beverly B. Green, MD, MPH; Andrea J. Cook, PhD; James D. Ralston, MD, MPH; Paul A. Fishman, PhD; Sheryl L. Catz, PhD; James Carlson, PharmD; David Carrell, PhD; Lynda Tyll, RN, MS; Eric B. Larson, MD, MPH; Robert S. Thompson, MD

JAMA. 2008;299(24):2857-2867.

Context  Treating hypertension decreases mortality and disability from cardiovascular disease, but most hypertension remains inadequately controlled.

Objective  To determine if a new model of care that uses patient Web services, home blood pressure (BP) monitoring, and pharmacist-assisted care improves BP control.

Design, Setting, and Participants  A 3-group randomized controlled trial, the Electronic Communications and Home Blood Pressure Monitoring study was based on the Chronic Care Model. The trial was conducted at an integrated group practice in Washington state, enrolling 778 participants aged 25 to 75 years with uncontrolled essential hypertension and Internet access. Care was delivered over a secure patient Web site from June 2005 to December 2007.

Interventions  Participants were randomly assigned to usual care, home BP monitoring and secure patient Web site training only, or home BP monitoring and secure patient Web site training plus pharmacist care management delivered through Web communications.

Main Outcome Measures  Percentage of patients with controlled BP (<140/90 mm Hg) and changes in systolic and diastolic BP at 12 months.

Results  Of 778 patients, 730 (94%) completed the 1-year follow-up visit. Patients assigned to the home BP monitoring and Web training only group had a nonsignificant increase in the percentage of patients with controlled BP (<140/90 mm Hg) compared with usual care (36% [95% confidence interval {CI}, 30%-42%] vs 31% [95% CI, 25%-37%]; P = .21). Adding Web-based pharmacist care to home BP monitoring and Web training significantly increased the percentage of patients with controlled BP (56%; 95% CI, 49%-62%) compared with usual care (P < .001) and home BP monitoring and Web training only (P < .001). Systolic BP was decreased stepwise from usual care to home BP monitoring and Web training only to home BP monitoring and Web training plus pharmacist care. Diastolic BP was decreased only in the pharmacist care group compared with both the usual care and home BP monitoring and Web training only groups. Compared with usual care, the patients who had baseline systolic BP of 160 mm Hg or higher and received home BP monitoring and Web training plus pharmacist care had a greater net reduction in systolic BP (–13.2 mm Hg [95% CI, –19.2 to –7.1]; P < .001) and diastolic BP (–4.6 mm Hg [95% CI, –8.0 to –1.2]; P < .001), and improved BP control (relative risk, 3.32 [95% CI, 1.86 to 5.94]; P<.001).

Conclusion  Pharmacist care management delivered through secure patient Web communications improved BP control in patients with hypertension.

Trial Registration  clinicaltrials.gov Identifier: NCT00158639



5.
Context  Certificate of need regulations were enacted to control health care costs by limiting unnecessary expansion of services. While many states have repealed certificate of need regulations in recent years, few analyses have examined relationships between certificate of need regulations and outcomes of care. Objective  To compare rates of coronary revascularization and mortality after acute myocardial infarction in states with and without certificate of need regulations. Design, Setting, and Participants  Retrospective cohort study of 1 139 792 Medicare beneficiaries aged 68 years or older with AMI who were admitted to 4587 US hospitals during 2000-2003. Main Outcome Measures  Thirty-day risk-adjusted rates of coronary revascularization with either coronary artery bypass graft surgery or percutaneous coronary intervention and 30-day all-cause mortality. Results  The 624 421 patients in states with certificate of need regulations were less likely to be admitted to hospitals with coronary revascularization services (321 573 [51.5%] vs 323 695 [62.8%]; P<.001) or to undergo revascularization at the admitting hospital (163 120 [26.1%] vs 163 877 [31.8%]; P<.001) than patients in states without certificates of need but were more likely to undergo revascularization at a transfer hospital (73 379 [11.7%] vs 45 907 [8.9%]; P<.001). Adjusting for demographic and clinical risk factors, patients in states with highly and moderately stringent certificate of need regulations, respectively, were less likely to undergo revascularization within the first 2 days (adjusted hazard ratios, 0.68; 95% confidence interval [CI], 0.54-0.87; P = .002 and 0.80; 95% CI, 0.71-0.90; P<.001) relative to patients in states without certificates of need, although no differences in the likelihood of revascularization were observed during days 3 through 30. 
Unadjusted 30-day mortality was similar in states with and without certificates of need (109 304 [17.5%] vs 90 104 [17.5%]; P = .76), as was adjusted mortality (odds ratio, 1.00; 95% CI, 0.97-1.03; P = .90). Conclusions  Patients with acute myocardial infarction were less likely to be admitted to hospitals offering coronary revascularization and to undergo early revascularization in states with certificate of need regulations. However, differences in the availability and use of revascularization therapies were not associated with mortality.

6.
Obesity, weight gain, and the risk of kidney stones
Taylor EN, Stampfer MJ, Curhan GC. JAMA. 2005;293(4):455-462.
Context  Larger body size may result in increased urinary excretion of calcium, oxalate, and uric acid, thereby increasing the risk for calcium-containing kidney stones. It is unclear if obesity increases the risk of stone formation, and it is not known if weight gain influences risk. Objective  To determine if weight, weight gain, body mass index (BMI), and waist circumference are associated with kidney stone formation. Design, Setting, and Participants  A prospective study of 3 large cohorts: the Health Professionals Follow-up Study (N = 45 988 men; age range at baseline, 40-75 years), the Nurses’ Health Study I (N = 93 758 older women; age range at baseline, 34-59 years), and the Nurses’ Health Study II (N = 101 877 younger women; age range at baseline, 27-44 years). Main Outcome Measures  Incidence of symptomatic kidney stones. Results  We documented 4827 incident kidney stones over a combined 46 years of follow-up. After adjusting for age, dietary factors, fluid intake, and thiazide use, the relative risk (RR) for stone formation in men weighing more than 220 lb (100.0 kg) vs men less than 150 lb (68.2 kg) was 1.44 (95% confidence interval [CI], 1.11-1.86; P = .002 for trend). In older and younger women, RRs for these weight categories were 1.89 (95% CI, 1.52-2.36; P<.001 for trend) and 1.92 (95% CI, 1.59-2.31; P<.001 for trend), respectively. The RR in men who gained more than 35 lb (15.9 kg) since age 21 years vs men whose weight did not change was 1.39 (95% CI, 1.14-1.70; P = .001 for trend). Corresponding RRs for the same categories of weight gain since age 18 years in older and younger women were 1.70 (95% CI, 1.40-2.05; P<.001 for trend) and 1.82 (95% CI, 1.50-2.21; P<.001 for trend). Body mass index was associated with the risk of kidney stone formation: the RR for men with a BMI of 30 or greater vs those with a BMI of 21 to 22.9 was 1.33 (95% CI, 1.08-1.63; P<.001 for trend). 
Corresponding RRs for the same categories of BMI in older and younger women were 1.90 (95% CI, 1.61-2.25; P<.001 for trend) and 2.09 (95% CI, 1.77-2.48; P<.001 for trend). Waist circumference was also positively associated with risk in men (P = .002 for trend) and in older and younger women (P<.001 for trend for both). Conclusions  Obesity and weight gain increase the risk of kidney stone formation. The magnitude of the increased risk may be greater in women than in men.

7.
Context  Although reperfusion therapy, aspirin, β-blockers, and angiotensin-converting enzyme inhibitors reduce mortality when used early in patients with acute myocardial infarction (MI), mortality and morbidity remain high. No antithrombotic or newer antiplatelet drug has been shown to reduce mortality in acute MI. Objective  To evaluate the effects of reviparin, a low-molecular-weight heparin, when initiated early and given for 7 days in addition to usual therapy on the primary composite outcome of death, myocardial reinfarction, or strokes at 7 and 30 days. Design, Setting, and Patients  A randomized, double-blind, placebo-controlled trial (Clinical Trial of Reviparin and Metabolic Modulation in Acute Myocardial Infarction Treatment Evaluation [CREATE]) of 15 570 patients with ST-segment elevation or new left bundle-branch block, presenting within 12 hours of symptom onset at 341 hospitals in India and China from July 2001 through July 2004. Intervention  Reviparin or placebo subcutaneously twice daily for 7 days. Main Outcome Measure  Primary composite outcome of death, myocardial reinfarction, or stroke at 7 and 30 days. Results  The primary composite outcome was significantly reduced from 854 (11.0%) of 7790 patients in the placebo group to 745 (9.6%) of 7780 in the reviparin group (hazard ratio [HR], 0.87; 95% CI, 0.79-0.96; P = .005). These benefits persisted at 30 days (1056 [13.6%] vs 921 [11.8%] patients; HR, 0.87; 95% CI, 0.79-0.95; P = .001) with significant reductions in 30-day mortality (877 [11.3%] vs 766 [9.8%]; HR, 0.87; 95% CI, 0.79-0.96; P = .005) and reinfarction (199 [2.6%] vs 154 [2.0%]; HR, 0.77; 95% CI, 0.62-0.95; P = .01), and no significant differences in strokes (64 [0.8%] vs 80 [1.0%]; P = .19). 
Reviparin treatment was significantly better when it was initiated very early after symptom onset at 7 days (<2 hours: HR, 0.70; 95% CI, 0.52-0.96; P = .03; 30/1000 events prevented; 2 to <4 hours: HR, 0.81; 95% CI, 0.67-0.98; P = .03; 21/1000 events prevented; 4 to <8 hours: HR, 0.85; 95% CI, 0.73-0.99; P = .05; 16/1000 events prevented; and 8 hours or longer: HR, 1.06; 95% CI, 0.86-1.30; P = .58; P = .04 for trend). There was an increase in life-threatening bleeding at 7 days with reviparin vs placebo (17 [0.2%] vs 7 [0.1%], respectively; P = .07), but the absolute excess was small (1 more per 1000) vs reductions in the primary outcome (18 fewer per 1000) or mortality (15 fewer per 1000). Conclusions  In patients with acute ST-segment elevation or new left bundle-branch block MI, reviparin reduces mortality and reinfarction, without a substantive increase in overall stroke rates. There is a small absolute excess of life-threatening bleeding but the benefits outweigh the risks.
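The "events prevented per 1000" figures quoted above are absolute risk differences. A minimal sketch of that arithmetic, using the 30-day event rates from the abstract (13.6% placebo vs 11.8% reviparin); the number needed to treat is my added illustration, not a value reported in the abstract:

```python
def events_prevented(p_control, p_treatment):
    """Absolute risk reduction expressed per 1000 treated, plus the
    number needed to treat (NNT) to prevent one event."""
    arr = p_control - p_treatment  # absolute risk reduction
    return round(arr * 1000), round(1 / arr)

# 30-day primary outcome: 13.6% with placebo vs 11.8% with reviparin
per_1000, nnt = events_prevented(0.136, 0.118)
print(per_1000, nnt)  # 18 56
```

Applying the same function to the 30-day mortality rates (11.3% vs 9.8%) recovers the abstract's "15 fewer per 1000" figure.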

8.
Contemporary clinical profile and outcome of prosthetic valve endocarditis
Context  Prosthetic valve endocarditis (PVE) is associated with significant mortality and morbidity. The contemporary clinical profile and outcome of PVE are not well defined. Objectives  To describe the prevalence, clinical characteristics, and outcome of PVE, with attention to health care–associated infection, and to determine prognostic factors associated with in-hospital mortality. Design, Setting, and Participants  Prospective, observational cohort study conducted at 61 medical centers in 28 countries, including 556 patients with definite PVE as defined by Duke University diagnostic criteria who were enrolled in the International Collaboration on Endocarditis-Prospective Cohort Study from June 2000 to August 2005. Main Outcome Measure  In-hospital mortality. Results  Definite PVE was present in 556 (20.1%) of 2670 patients with infective endocarditis. Staphylococcus aureus was the most common causative organism (128 patients [23.0%]), followed by coagulase-negative staphylococci (94 patients [16.9%]). Health care–associated PVE was present in 203 (36.5%) of the overall cohort. Seventy-one percent of health care–associated PVE occurred within the first year of valve implantation, and the majority of cases were diagnosed after the early (60-day) period. Surgery was performed in 272 (48.9%) patients during the index hospitalization. 
In-hospital death occurred in 127 (22.8%) patients and was predicted by older age, health care–associated infection (62/203 [30.5%]; adjusted odds ratio [OR], 1.62; 95% confidence interval [CI], 1.08-2.44; P = .02), S aureus infection (44/128 [34.4%]; adjusted OR, 1.73; 95% CI, 1.01-2.95; P = .05), and complications of PVE, including heart failure (60/183 [32.8%]; adjusted OR, 2.33; 95% CI, 1.62-3.34; P<.001), stroke (34/101 [33.7%]; adjusted OR, 2.25; 95% CI, 1.25-4.03; P = .007), intracardiac abscess (47/144 [32.6%]; adjusted OR, 1.86; 95% CI, 1.10-3.15; P = .02), and persistent bacteremia (27/49 [55.1%]; adjusted OR, 4.29; 95% CI, 1.99-9.22; P<.001). Conclusions  Prosthetic valve endocarditis accounts for a high percentage of all cases of infective endocarditis in many regions of the world. Staphylococcus aureus is now the leading cause of PVE. Health care–associated infection significantly influences the clinical characteristics and outcome of PVE. Complications of PVE strongly predict in-hospital mortality, which remains high despite prompt diagnosis and the frequent use of surgical intervention.

9.
Context  Although acute renal failure (ARF) is believed to be common in the setting of critical illness and is associated with a high risk of death, little is known about its epidemiology and outcome or how these vary in different regions of the world. Objectives  To determine the period prevalence of ARF in intensive care unit (ICU) patients in multiple countries; to characterize differences in etiology, illness severity, and clinical practice; and to determine the impact of these differences on patient outcomes. Design, Setting, and Patients  Prospective observational study of ICU patients who either were treated with renal replacement therapy (RRT) or fulfilled at least 1 of the predefined criteria for ARF from September 2000 to December 2001 at 54 hospitals in 23 countries. Main Outcome Measures  Occurrence of ARF, factors contributing to etiology, illness severity, treatment, need for renal support after hospital discharge, and hospital mortality. Results  Of 29 269 critically ill patients admitted during the study period, 1738 (5.7%; 95% confidence interval [CI], 5.5%-6.0%) had ARF during their ICU stay, including 1260 who were treated with RRT. The most common contributing factor to ARF was septic shock (47.5%; 95% CI, 45.2%-49.5%). Approximately 30% of patients had preadmission renal dysfunction. Overall hospital mortality was 60.3% (95% CI, 58.0%-62.6%). Dialysis dependence at hospital discharge was 13.8% (95% CI, 11.2%-16.3%) for survivors. Independent risk factors for hospital mortality included use of vasopressors (odds ratio [OR], 1.95; 95% CI, 1.50-2.55; P<.001), mechanical ventilation (OR, 2.11; 95% CI, 1.58-2.82; P<.001), septic shock (OR, 1.36; 95% CI, 1.03-1.79; P = .03), cardiogenic shock (OR, 1.41; 95% CI, 1.05-1.90; P = .02), and hepatorenal syndrome (OR, 1.87; 95% CI, 1.07-3.28; P = .03). 
Conclusion  In this multinational study, the period prevalence of ARF requiring RRT in the ICU was between 5% and 6% and was associated with a high hospital mortality rate.

10.
Context  Combination therapy is now widely advocated as first-line treatment for uncomplicated malaria in Africa. However, it is not clear which treatment regimens are optimal or how to best assess comparative efficacies in highly endemic areas. Objective  To compare the efficacy and safety of 3 leading combination therapies for the treatment of uncomplicated malaria. Design, Setting, and Participants  Single-blind randomized clinical trial, conducted between November 2004 and June 2006, of treatment for all episodes of uncomplicated malaria in children in an urban community in Kampala, Uganda. A total of 601 healthy children (aged 1-10 years) were randomly selected and were followed up for 13 to 19 months, receiving all medical care at the study clinic. Interventions  Study participants were randomized to receive 1 of 3 combination therapies (amodiaquine plus sulfadoxine-pyrimethamine, amodiaquine plus artesunate, or artemether-lumefantrine) when diagnosed with their first episode of uncomplicated malaria. The same assigned treatment was given for all subsequent episodes. Main Outcome Measure  28-Day risk of parasitological failure (unadjusted and adjusted by genotyping to distinguish recrudescence from new infection) for each episode of uncomplicated malaria treated with study drugs. Results  Of enrolled children, 329 of 601 were diagnosed with at least 1 episode of uncomplicated malaria, and 687 episodes of Plasmodium falciparum malaria were treated with study drugs. The 28-day risk of treatment failure (unadjusted by genotyping) for individual episodes of malaria were 26.1% (95% CI, 21.1%-32.1%) for amodiaquine plus sulfadoxine-pyrimethamine, 17.4% (95% CI, 13.1%-23.1%) for amodiaquine plus artesunate, and 6.7% (95% CI, 3.9%-11.2%) for artemether-lumefantrine (P<.05 for all pairwise comparisons). 
When only recrudescent treatment failures were considered, the risks of failure were 14.1% (95% CI, 10.3%-19.2%), 4.6% (95% CI, 2.5%-8.3%), and 1.0% (95% CI, 0.3%-4.0%) for the same order of study drugs, respectively (P≤.008 for all pairwise comparisons, except amodiaquine plus artesunate vs artemether-lumefantrine, P = .05). There were no deaths or cases of severe malaria. Significant reductions in anemia (9.3% [95% CI, 7.0%-12.0%] at enrollment vs 0.6% [95% CI, 0.1%-2.2%] during the last 2 months of follow-up; P<.001) and asymptomatic parasitemia (18.6% [95% CI, 15.5%-22.1%] at enrollment vs 2.3% [95% CI, 1.5%-3.5%] during the last 2 months of follow-up; P<.001) were observed according to routine testing. Conclusions  Artemether-lumefantrine was the most efficacious treatment for uncomplicated malaria in the study population. With all study regimens, the provision of prompt and reasonably effective facility-based treatment was associated with good outcomes in long-term health measures. Trial Registration  isrctn.org Identifier: ISRCTN37517549

11.
Context  Although liberalization of donor criteria could expand the donor pool, the use of certain "marginal donors," such as those who are hepatitis C virus (HCV) positive, is controversial. Little is known about the effect of donor HCV positivity on survival in cardiac transplantation. Objectives  To examine the association between donor HCV positivity and survival among heart transplant recipients and to determine the effects of recipient age and recipient HCV status on this association. Design, Setting, and Participants  A multicenter cohort study was performed using the US Scientific Registry of Transplant Recipients. Adult heart transplant patients who received their transplants between April 1, 1994, and July 31, 2003, were eligible for inclusion. Main Outcome Measure  All-cause mortality. Results  Of 10 915 patients meeting entry criteria, 261 received an HCV-positive donor heart. Mortality was higher among recipients of HCV-positive donor hearts at 1 year (16.9% vs 8.2%; P<.001), 5 years (41.8% vs 18.5%; P<.001), and 10 years (50.6% vs 24.3%; P<.001). Using Kaplan-Meier methods, 1-, 5-, and 10-year survival rates were 83%, 53%, and 25%, and 92%, 77%, and 53% for recipients of HCV-positive and HCV-negative donor hearts, respectively (P<.001, log-rank test). Recipients of HCV-positive donor hearts were more likely to die of liver disease and coronary vasculopathy. After propensity matching, the overall hazard ratio (HR) associated with receipt of an HCV-positive donor heart was 2.10 (95% confidence interval [CI], 1.60-2.75). Stratified analyses showed that HRs did not vary by recipient HCV status or by recipient age (for recipients aged 18-39 years: HR, 1.75 [95% CI, 0.70-4.40]; for recipients aged 40-59 years: HR, 2.23 [95% CI, 1.42-3.52]; and for recipients aged 60 years and older: HR, 2.07 [95% CI, 1.32-3.27]; overall P value for interaction, >.10). 
Conclusions  Receipt of a heart from an HCV-positive donor is associated with decreased survival in heart transplant recipients. This association appears to be independent of recipient HCV status and age. Preferential allocation of HCV-positive donors to HCV-positive recipients and/or older recipients is not warranted.

12.
Context  Past studies of the efficacy of hip protectors to prevent hip fracture in nursing home residents have had conflicting results, possibly due to potential biases from clustered randomization designs and modest adherence to intervention. Objective  To determine whether an energy-absorbing and energy-dispersing hip protector would reduce the risk of hip fracture when worn by nursing home residents. Design, Setting, and Participants  Multicenter, randomized controlled clinical trial in which 37 nursing homes were randomly assigned to having residents wear a 1-sided hip protector on the left or right hip. Participants were 1042 nursing home residents (mean [SD] age, 85 [7] years; 79% women) who consented and adhered to the hip protector use during a 2-week run-in period and were enrolled. Participating facilities were in greater Boston, Massachusetts, St Louis, Missouri, and Baltimore, Maryland from October 2002 to October 2004. Mean duration of participation for nursing home residents was 7.8 months. None were withdrawn because of adverse effects. Intervention(s)  Undergarments with a 1-sided hip protector made of a 0.32-cm outer layer of polyethylene (2.7 kg/m3) backed by a hard high-density polyethylene shield (0.95 cm) that was backed by 0.9 kg/m3 of 1.27-kg ethylene vinyl acetate foam. Each facility was visited 3 times per week to assess adherence and provide staff support. Main Outcome Measure  Adjudicated hip fracture occurrences on padded vs unpadded hips. Results  After a 20-month follow-up (676 person-years of observation), the study was terminated due to a lack of efficacy. The incidence rate of hip fracture on protected vs unprotected hips did not differ (3.1%; 95% confidence interval [CI], 1.8%-4.4% vs 2.5%; 95% CI, 1.3%-3.7%; P = .70). 
For the 334 nursing home residents with greater than 80% adherence to hip protector use, the incidence rate of hip fracture on protected vs unprotected hips did not differ (5.3%; 95% CI, 2.6%-8.8% vs 3.5%; 95% CI, 1.3%-5.7%; P = .42). Overall adherence was 73.8%. Conclusions  In this clinical trial of an energy-absorbing/shunting hip protector conducted in US nursing homes, we were unable to detect a protective effect on the risk of hip fracture, despite good adherence to protocol. These results add to the increasing body of evidence that hip protectors, as currently designed, are not effective for preventing hip fracture among nursing home residents. Trial Registration  clinicaltrials.gov Identifier: NCT00058864

13.
Context  Even though the strong association between physical inactivity and ill health is well documented, 60% of the population is inadequately active or completely inactive. Traditional methods of prescribing exercise have not proven effective for increasing and maintaining a program of regular physical activity. Objective  To compare the 24-month intervention effects of a lifestyle physical activity program with traditional structured exercise on improving physical activity, cardiorespiratory fitness, and cardiovascular disease risk factors. Design  Randomized clinical trial conducted from August 1, 1993, through July 31, 1997. Participants  Sedentary men (n = 116) and women (n = 119) with self-reported physical activity of less than 36 and 34 kcal/kg per day, respectively. Interventions  Six months of intensive and 18 months of maintenance intervention on either a lifestyle physical activity or a traditional structured exercise program. Main Outcome Measures  Primary outcomes were physical activity assessed by the 7-Day Physical Activity Recall and peak oxygen consumption (VO2peak) by a maximal exercise treadmill test. Secondary outcomes were plasma lipid and lipoprotein cholesterol concentrations, blood pressure, and body composition. All measures were obtained at baseline and at 6 and 24 months. Results  Both the lifestyle and structured activity groups had significant and comparable improvements in physical activity and cardiorespiratory fitness from baseline to 24 months. Adjusted mean changes (95% confidence intervals [CIs]) were 0.84 (95% CI, 0.42-1.25 kcal/kg per day; P<.001) and 0.69 (95% CI, 0.25-1.12 kcal/kg per day; P = .002) for activity, and 0.77 (95% CI, 0.18-1.36 mL/kg per minute; P = .01) and 1.34 (95% CI, 0.72-1.96 mL/kg per minute; P<.001) for VO2peak for the lifestyle and structured activity groups, respectively. 
There were significant and comparable reductions in systolic blood pressure (-3.63 [95% CI, -5.54 to -1.72 mm Hg; P<.001] and -3.26 [95% CI, -5.26 to -1.25 mm Hg; P = .002]) and diastolic blood pressure (-5.38 [95% CI, -6.90 to -3.86 mm Hg; P<.001] and -5.14 [95% CI, -6.73 to -3.54 mm Hg; P<.001]) for the lifestyle and structured activity groups, respectively. Neither group significantly changed their weight (-0.05 [95% CI, -1.05 to 0.96 kg; P = .93] and 0.69 [95% CI, -0.37 to 1.74 kg; P = .20]), but each group significantly reduced their percentage of body fat (-2.39% [95% CI, -2.92% to -1.85%; P<.001] and -1.85% [95% CI, -2.41% to -1.28%; P<.001]) in the lifestyle and structured activity groups, respectively. Conclusions  In previously sedentary healthy adults, a lifestyle physical activity intervention is as effective as a structured exercise program in improving physical activity, cardiorespiratory fitness, and blood pressure.

14.
Context  Plasmodium falciparum appears to have a particular propensity to involve the brain but the burden, risk factors, and full extent of neurological involvement have not been systematically described. Objectives  To determine the incidence and describe the clinical phenotypes and outcomes of neurological involvement in African children with acute falciparum malaria. Design, Setting, and Patients  A review of records of all children younger than 14 years admitted to a Kenyan district hospital with malaria from January 1992 through December 2004. Neurological involvement was defined as convulsive seizures, agitation, prostration, or impaired consciousness or coma. Main Outcome Measures  The incidence, pattern, and outcome of neurological involvement. Results  Of 58 239 children admitted, 19 560 (33.6%) had malaria as the primary clinical diagnosis. Neurological involvement was observed in 9313 children (47.6%) and manifested as seizures (6563/17 517 [37.5%]), agitation (316/11 193 [2.8%]), prostration (3223/15 643 [20.6%]), and impaired consciousness or coma (2129/16 080 [13.2%]). In children younger than 5 years, the mean annual incidence of admissions with malaria was 2694 per 100 000 persons and the incidence of malaria with neurological involvement was 1156 per 100 000 persons. However, readmissions may have led to a 10% overestimate in incidence. Children with neurological involvement were older (median, 26 [interquartile range {IQR}, 15-41] vs 21 [IQR, 10-40] months; P<.001), had a shorter duration of illness (median, 2 [IQR, 1-3] vs 3 [IQR, 2-3] days; P<.001), and a higher geometric mean parasite density (42.0 [95% confidence interval {CI}, 40.0-44.1] vs 30.4 [95% CI, 29.0-31.8] × 10³/µL; P<.001).
Factors independently associated with neurological involvement included past history of seizures (adjusted odds ratio [AOR], 3.50; 95% CI, 2.78-4.42), fever lasting 2 days or less (AOR, 2.02; 95% CI, 1.64-2.49), delayed capillary refill time (AOR, 3.66; 95% CI, 2.40-5.56), metabolic acidosis (AOR, 1.55; 95% CI, 1.29-1.87), and hypoglycemia (AOR, 2.11; 95% CI, 1.31-3.37). Mortality was higher in patients with neurological involvement (4.4% [95% CI, 4.2%-5.1%] vs 1.3% [95% CI, 1.1%-1.5%]; P<.001). At discharge, 159 (2.2%) of 7281 patients had neurological deficits. Conclusions  Neurological involvement is common in children in Kenya with acute falciparum malaria, and is associated with metabolic derangements, impaired perfusion, parasitemia, and increased mortality and neurological sequelae. This study suggests that falciparum malaria exposes many African children to brain insults.

15.
Ost D  Tepper J  Mihara H  Lander O  Heinzer R  Fein A 《JAMA》2005,294(6):706-715
David Ost, MD; Josh Tepper, MD; Hanako Mihara, MD, MPH; Owen Lander, MD; Raphael Heinzer, MD; Alan Fein, MD

Context  Patients with venous thromboembolism (VTE) are susceptible to recurrent events, but whether prolonging anticoagulation is warranted in patients with VTE remains controversial.

Objective  To review the available evidence and quantify the risks and benefits of extending the duration of anticoagulation in patients with VTE.

Data Sources  PubMed, EMBase Pharmacology, the Cochrane database, clinical trial Web sites, and a hand search of reference lists.

Study Selection  Included studies were randomized controlled trials with results published from 1969 through 2004 and evaluating the duration of anticoagulation in patients with VTE that measured recurrent VTE. Excluded studies were those enrolling only pure populations of high-risk patients. Two independent reviewers assessed each article for inclusion and exclusion criteria, with adjudication by a third reviewer in cases of disagreement. Fifteen of 67 studies were included in the analysis.

Data Extraction  Two independent reviewers performed data extraction using a standardized form, with adjudication by the remainder of the investigators in cases of disagreement. Data regarding recurrent VTE, major bleeding, person-time at risk, and study quality were extracted.

Data Synthesis  If patients in the long-term therapy group remained receiving anticoagulation, the risk of recurrent VTE with long- vs short-term therapy was reduced (weighted incidence rate, 0.020 vs 0.126 events/person-year; rate difference, –0.106 [95% confidence interval {CI}, –0.145 to –0.067]; P<.001; pooled incidence rate ratio [IRR], 0.21 [95% CI, 0.14 to 0.31]; P<.001). If anticoagulation in the long-term therapy group was discontinued, the risk reduction was less pronounced (weighted incidence rate, 0.052 vs 0.072 events/person-year; rate difference, –0.020 [95% CI, –0.039 to –0.001]; P = .04; pooled IRR, 0.69 [95% CI, 0.53 to 0.91]; P = .009). The risk of major bleeding with long- vs short-term therapy was similar (weighted incidence rate, 0.011 vs 0.006 events/person-year; rate difference, 0.005 [95% CI, –0.002 to 0.011]; P = .14; pooled IRR, 1.80 [95% CI, 0.72 to 4.51]; P = .21).
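The rate arithmetic in the synthesis above can be illustrated with a short sketch. The event counts and person-years below are hypothetical, chosen only to reproduce the reported weighted incidence rates of 0.020 and 0.126 events/person-year; note that the study's pooled IRR of 0.21 comes from weighting individual trials, which this crude single-stratum ratio does not attempt.

```python
# Illustration (not the authors' pooling method): how incidence rates per
# person-year combine into a rate difference and incidence rate ratio (IRR).
# All counts below are hypothetical.
def incidence_rate(events, person_years):
    """Events per person-year of follow-up."""
    return events / person_years

# Hypothetical arm totals: 10 recurrences over 500 person-years (long-term
# therapy) vs 63 recurrences over 500 person-years (short-term therapy).
rate_long = incidence_rate(10, 500)    # 0.020 events/person-year
rate_short = incidence_rate(63, 500)   # 0.126 events/person-year

rate_difference = rate_long - rate_short   # -0.106 events/person-year
crude_irr = rate_long / rate_short         # ~0.16 for these made-up counts

print(f"rate difference: {rate_difference:.3f} events/person-year")
print(f"crude IRR: {crude_irr:.2f}")
```

The rate difference of −0.106 events/person-year matches the abstract; a pooled IRR generally differs from a crude ratio like this because each trial contributes with its own weight.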

Conclusions  Patients who receive extended anticoagulation are protected from recurrent VTE while receiving long-term therapy. The clinical benefit is maintained after anticoagulation is discontinued, but the magnitude of the benefit is less pronounced.

16.
Context  Transmission of human immunodeficiency virus type 1 (HIV-1) is known to occur through breastfeeding, but the magnitude of risk has not been precisely defined. Whether breast milk HIV-1 transmission risk exceeds the potential risk of formula-associated diarrheal mortality in developing countries is unknown. Objectives  To determine the frequency of breast milk transmission of HIV-1 and to compare mortality rates and HIV-1–free survival in breastfed and formula-fed infants. Design and Setting  Randomized clinical trial conducted from November 1992 to July 1998 in antenatal clinics in Nairobi, Kenya, with a median follow-up period of 24 months. Participants  Of 425 HIV-1–seropositive, antiretroviral-naive pregnant women enrolled, 401 mother-infant pairs were included in the analysis of trial end points. Interventions  Mother-infant pairs were randomized to breastfeeding (n = 212) vs formula feeding arms (n = 213). Main Outcome Measures  Infant HIV-1 infection and death during the first 2 years of life, compared between the 2 intervention groups. Results  Compliance with the assigned feeding modality was 96% in the breastfeeding arm and 70% in the formula arm (P<.001). Median duration of breastfeeding was 17 months. Of the 401 infants included in the analysis, 94% were followed up to HIV-1 infection or mortality end points: 83% for the HIV-1 infection end point and 93% for the mortality end point. The cumulative probability of HIV-1 infection at 24 months was 36.7% (95% confidence interval [CI], 29.4%-44.0%) in the breastfeeding arm and 20.5% (95% CI, 14.0%-27.0%) in the formula arm (P = .001). The estimated rate of breast milk transmission was 16.2% (95% CI, 6.5%-25.9%). Forty-four percent of HIV-1 infection in the breastfeeding arm was attributable to breast milk. Most breast milk transmission occurred early, with 75% of the risk difference between the 2 arms occurring by 6 months, although transmission continued throughout the duration of exposure.
The 2-year mortality rates in both arms were similar (breastfeeding arm, 24.4% [95% CI, 18.2%-30.7%] vs formula feeding arm, 20.0% [95% CI, 14.4%-25.6%]; P = .30). The rate of HIV-1–free survival at 2 years was significantly lower in the breastfeeding arm than in the formula feeding arm (58.0% vs 70.0%, respectively; P = .02). Conclusions  The frequency of breast milk transmission of HIV-1 was 16.2% in this randomized clinical trial, and the majority of infections occurred early during breastfeeding. The use of breast milk substitutes prevented 44% of infant infections and was associated with significantly improved HIV-1–free survival.
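The 16.2% transmission estimate and the 44% attributable fraction reported in this abstract follow directly from the two cumulative infection probabilities; a minimal check of that arithmetic:

```python
# Consistency check on the abstract's figures (not a re-analysis): the
# breast-milk transmission estimate is the difference in cumulative
# infection probability between arms, and the attributable fraction is
# that difference relative to the breastfeeding-arm risk.
risk_breastfed = 0.367   # cumulative HIV-1 infection at 24 mo, breastfeeding arm
risk_formula = 0.205     # cumulative HIV-1 infection at 24 mo, formula arm

breast_milk_transmission = risk_breastfed - risk_formula        # 0.162 -> 16.2%
attributable_fraction = breast_milk_transmission / risk_breastfed  # ~0.44 -> 44%

print(f"breast-milk transmission: {breast_milk_transmission:.1%}")
print(f"attributable fraction: {attributable_fraction:.0%}")
```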

17.
Janet Lee, MS; Lisa A. Croen, PhD; Kendall H. Backstrand, BA; Cathleen K. Yoshida, MS; Louis H. Henning, BA; Camilla Lindan, MD; Donna M. Ferriero, MD; Heather J. Fullerton, MD; A. J. Barkovich, MD; Yvonne W. Wu, MD, MPH

JAMA. 2005;293:723-729.

Context  Perinatal arterial ischemic stroke (PAS) is a common cause of hemiplegic cerebral palsy. Risk factors for this condition have not been clearly defined.

Objective  To determine maternal and infant characteristics associated with PAS.

Design, Setting, and Patients  Case-control study nested within the cohort of all 199 176 infants born from 1997 through 2002 in the Kaiser Permanente Medical Care Program, a managed care organization providing care for more than 3 million residents of northern California. Case patients were confirmed by review of brain imaging and medical records (n = 40). Three controls per case were randomly selected from the study population.

Main Outcome Measure  Association of maternal and infant complications with risk of PAS.

Results  The population prevalence of PAS was 20 per 100 000 live births. The majority (85%) of infants with PAS were delivered at term. The following prepartum and intrapartum factors were more common among case than control infants: primiparity (73% vs 44%, P = .002), fetal heart rate abnormality (46% vs 14%, P<.001), emergency cesarean delivery (35% vs 13%, P = .002), chorioamnionitis (27% vs 11%, P = .03), prolonged rupture of membranes (26% vs 7%, P = .002), prolonged second stage of labor (25% vs 4%, P<.001), vacuum extraction (24% vs 11%, P = .04), cord abnormality (22% vs 6%, P = .01), preeclampsia (19% vs 5%, P = .01), and oligohydramnios (14% vs 3%, P = .01). Risk factors independently associated with PAS on multivariate analysis were history of infertility (odds ratio [OR], 7.5; 95% confidence interval [CI], 1.3-45.0), preeclampsia (OR, 5.3; 95% CI, 1.3-22.0), prolonged rupture of membranes (OR, 3.8; 95% CI, 1.1-12.8), and chorioamnionitis (OR, 3.4; 95% CI, 1.1-10.5). The rate of PAS increased dramatically when multiple risk factors were present.

Conclusions  Perinatal arterial ischemic stroke in infants is associated with several independent maternal risk factors. How these complications, along with their potential effects on the placenta and fetus, may play a role in causing perinatal stroke deserves further study.

18.
Context  Hospice care may improve the quality of end-of-life care for nursing home residents, but hospice is underutilized by this population, at least in part because physicians are not aware of their patients’ preferences. Objective  To determine whether it is possible to increase hospice utilization and improve the quality of end-of-life care by identifying residents whose goals and preferences are consistent with hospice care. Design, Setting, and Participants  Randomized controlled trial (December 2003-December 2004) of nursing home residents and their surrogate decision makers (N=205) in 3 US nursing homes. Intervention  A structured interview identified residents whose goals for care, treatment preferences, and palliative care needs made them appropriate for hospice care. These residents’ physicians were notified and asked to authorize a hospice informational visit. Main Outcome Measures  The primary outcome measures were (1) hospice enrollment within 30 days of the intervention and (2) families’ ratings of the quality of care for residents who died during the 6-month follow-up period. Results  Of the 205 residents in the study sample, 107 were randomly assigned to receive the intervention, and 98 received usual care. Intervention residents were more likely than usual care residents to enroll in hospice within 30 days (21/107 [20%] vs 1/98 [1%]; P<.001 [Fisher exact test]) and to enroll in hospice during the follow-up period (27/107 [25%] vs 6/98 [6%]; P<.001). Intervention residents had fewer acute care admissions (mean: 0.28 vs 0.49; P = .04 [Wilcoxon rank sum test]) and spent fewer days in an acute care setting (mean: 1.2 vs 3.0; P = .03 [Wilcoxon rank sum test]). Families of intervention residents rated the resident’s care more highly than did families of usual care residents (mean on a scale of 1-5: 4.1 vs 2.5; P = .04 [Wilcoxon rank sum test]).
Conclusion  A simple communication intervention can increase rates of hospice referral and improve families’ ratings of end-of-life care, and may also decrease utilization of acute care resources.

19.
Context  Single-site studies suggest that a 2-week program of constraint-induced movement therapy (CIMT) for patients more than 1 year after stroke who maintain some hand and wrist movement can improve upper extremity function that persists for at least 1 year. Objective  To compare the effects of a 2-week multisite program of CIMT vs usual and customary care on improvement in upper extremity function among patients who had a first stroke within the previous 3 to 9 months. Design and Setting  The Extremity Constraint Induced Therapy Evaluation (EXCITE) trial, a prospective, single-blind, randomized, multisite clinical trial conducted at 7 US academic institutions between January 2001 and January 2003. Participants  Two hundred twenty-two individuals with predominantly ischemic stroke. Interventions  Participants were assigned to receive either CIMT (n = 106; wearing a restraining mitt on the less-affected hand while engaging in repetitive task practice and behavioral shaping with the hemiplegic hand) or usual and customary care (n = 116; ranging from no treatment after concluding formal rehabilitation to pharmacologic or physiotherapeutic interventions); patients were stratified by sex, prestroke dominant side, side of stroke, and level of paretic arm function. Main Outcome Measures  The Wolf Motor Function Test (WMFT), a measure of laboratory time and strength-based ability and quality of movement (functional ability), and the Motor Activity Log (MAL), a measure of how well and how often 30 common daily activities are performed. 
Results  From baseline to 12 months, the CIMT group showed greater improvements than the control group in both the WMFT Performance Time (decrease in mean time from 19.3 seconds to 9.3 seconds [52% reduction] vs from 24.0 seconds to 17.7 seconds [26% reduction]; between-group difference, 34% [95% confidence interval {CI}, 12%-51%]; P<.001) and in the MAL Amount of Use (on a 0-5 scale, increase from 1.21 to 2.13 vs from 1.15 to 1.65; between-group difference, 0.43 [95% CI, 0.05-0.80]; P<.001) and MAL Quality of Movement (on a 0-5 scale, increase from 1.26 to 2.23 vs 1.18 to 1.66; between-group difference, 0.48 [95% CI, 0.13-0.84]; P<.001). The CIMT group achieved a decrease of 19.5 in self-perceived hand function difficulty (Stroke Impact Scale hand domain) vs a decrease of 10.1 for the control group (between-group difference, 9.42 [95% CI, 0.27-18.57]; P = .05). Conclusion  Among patients who had a stroke within the previous 3 to 9 months, CIMT produced statistically significant and clinically relevant improvements in arm motor function that persisted for at least 1 year. Trial Registration  clinicaltrials.gov Identifier: NCT00057018
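The bracketed percent reductions in WMFT Performance Time follow from the group mean times reported in this abstract; a minimal check of that arithmetic:

```python
# Consistency check on the abstract's figures: percent reduction in mean
# WMFT Performance Time from baseline to 12 months for each group.
def percent_reduction(before, after):
    """Reduction from `before` to `after`, as a percentage of `before`."""
    return (before - after) / before * 100

cimt = percent_reduction(19.3, 9.3)      # ~52%, CIMT group
control = percent_reduction(24.0, 17.7)  # ~26%, usual-care group

print(f"CIMT: {cimt:.0f}% reduction; control: {control:.0f}% reduction")
```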

20.
Context  The effect of antihypertensive drugs on cardiovascular events in patients with coronary artery disease (CAD) and normal blood pressure remains uncertain. Objective  To compare the effects of amlodipine or enalapril vs placebo on cardiovascular events in patients with CAD. Design, Setting, and Participants  Double-blind, randomized, multicenter, 24-month trial (enrollment April 1999-April 2002) comparing amlodipine or enalapril with placebo in 1991 patients with angiographically documented CAD (>20% stenosis by coronary angiography) and diastolic blood pressure <100 mm Hg. A substudy of 274 patients measured atherosclerosis progression by intravascular ultrasound (IVUS). Interventions  Patients were randomized to receive amlodipine, 10 mg; enalapril, 20 mg; or placebo. IVUS was performed at baseline and study completion. Main Outcome Measures  The primary efficacy parameter was incidence of cardiovascular events for amlodipine vs placebo. Other outcomes included comparisons of amlodipine vs enalapril and enalapril vs placebo. Events included cardiovascular death, nonfatal myocardial infarction, resuscitated cardiac arrest, coronary revascularization, hospitalization for angina pectoris, hospitalization for congestive heart failure, fatal or nonfatal stroke or transient ischemic attack, and new diagnosis of peripheral vascular disease. The IVUS end point was change in percent atheroma volume. Results  Baseline blood pressure averaged 129/78 mm Hg for all patients; it increased by 0.7/0.6 mm Hg in the placebo group and decreased by 4.8/2.5 mm Hg and 4.9/2.4 mm Hg in the amlodipine and enalapril groups, respectively (P<.001 for both vs placebo). Cardiovascular events occurred in 151 (23.1%) placebo-treated patients, in 110 (16.6%) amlodipine-treated patients (hazard ratio [HR], 0.69; 95% CI, 0.54-0.88 [P = .003]), and in 136 (20.2%) enalapril-treated patients (HR, 0.85; 95% CI, 0.67-1.07 [P = .16]).
Primary end point comparison for enalapril vs amlodipine was not significant (HR, 0.81; 95% CI, 0.63-1.04 [P = .10]). The IVUS substudy showed a trend toward less progression of atherosclerosis in the amlodipine group vs placebo (P = .12), with significantly less progression in the subgroup with systolic blood pressures greater than the mean (P = .02). Compared with baseline, IVUS showed progression in the placebo group (P<.001), a trend toward progression in the enalapril group (P = .08), and no progression in the amlodipine group (P = .31). For the amlodipine group, correlation between blood pressure reduction and progression was r = 0.19, P = .07. Conclusions  Administration of amlodipine to patients with CAD and normal blood pressure resulted in reduced adverse cardiovascular events. Directionally similar, but smaller and nonsignificant, treatment effects were observed with enalapril. For amlodipine, IVUS showed evidence of slowing of atherosclerosis progression.
