1.
Brady K  Pearlstein T  Asnis GM  Baker D  Rothbaum B  Sikes CR  Farfel GM 《JAMA》2000,283(14):1837-1844
Context  Despite the high prevalence, chronicity, and associated comorbidity of posttraumatic stress disorder (PTSD) in the community, few placebo-controlled studies have evaluated the efficacy of pharmacotherapy for this disorder. Objective  To determine if treatment with sertraline hydrochloride effectively diminishes symptoms of PTSD of moderate to marked severity. Design  Twelve-week, double-blind, placebo-controlled trial preceded by a 2-week, single-blind placebo lead-in period, conducted between May 1996 and June 1997. Setting  Outpatient psychiatric clinics in 8 academic medical centers and 6 clinical research centers. Patients  A total of 187 outpatients with a Diagnostic and Statistical Manual of Mental Disorders, Revised Third Edition diagnosis of PTSD and a Clinician Administered PTSD Scale Part 2 (CAPS-2) minimum total severity score of at least 50 at baseline (mean age, 40 years; mean duration of illness, 12 years; 73% were women; and 61.5% experienced physical or sexual assault). Intervention  Patients were randomized to acute treatment with sertraline hydrochloride in flexible daily dosages of 50 to 200 mg/d, following 1 week at 25 mg/d (n=94); or placebo (n=93). Main Outcome Measures  Baseline-to-end-point changes in CAPS-2 total severity score, Impact of Event Scale total score (IES), and Clinical Global Impression–Severity (CGI-S), and CGI-Improvement (CGI-I) ratings, compared by treatment vs placebo groups. Results  Sertraline treatment yielded significantly greater improvement than placebo on 3 of the 4 primary outcome measures (mean change from baseline to end point for CAPS-2 total score, -33.0 vs -23.2 [P=.02], and for CGI-S, -1.2 vs -0.8 [P=.01]; mean CGI-I score at end point, 2.5 vs 3.0 [P=.02]), with the fourth measure, the IES total score, showing a trend toward significance (mean change from baseline to end point, -16.2 vs -12.1; P=.07). 
Using a conservative last-observation-carried-forward analysis, treatment with sertraline resulted in a responder rate of 53% at study end point compared with 32% for placebo (P=.008, with responder defined as >30% reduction from baseline in CAPS-2 total severity score and a CGI-I score of 1 [very much improved] or 2 [much improved]). Significant (P<.05) efficacy was evident for sertraline from week 2 on the CAPS-2 total severity score. Sertraline had significant efficacy vs placebo on the CAPS-2 PTSD symptom clusters of avoidance/numbing (P=.02) and increased arousal (P=.03) but not on reexperiencing/intrusion (P=.14). Sertraline was well tolerated, with insomnia the only adverse effect reported significantly more often than placebo (16.0% vs 4.3%; P=.01). Conclusions  Our data suggest that sertraline is a safe, well-tolerated, and effective treatment for PTSD.
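The trial's responder definition combines two criteria. As an illustrative sketch only (the function and argument names are our own, not from the paper), the criterion can be expressed as:

```python
def is_responder(baseline_caps2, endpoint_caps2, cgi_i):
    """Illustrative check of the responder definition reported in the trial:
    >30% reduction from baseline in CAPS-2 total severity score AND a
    CGI-I rating of 1 (very much improved) or 2 (much improved)."""
    reduction = (baseline_caps2 - endpoint_caps2) / baseline_caps2
    return reduction > 0.30 and cgi_i in (1, 2)

# A patient dropping from CAPS-2 80 to 50 (37.5% reduction) with CGI-I 2
# meets both criteria; a 25% reduction, or a CGI-I of 3, does not.
```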

2.
Context  Practice guidelines play an important role in medicine. Methodological principles have been formulated to guide their development. Objective  To determine whether practice guidelines in peer-reviewed medical literature adhered to established methodological standards for practice guidelines. Design  Structured review of guidelines published from 1985 through June 1997 identified by a MEDLINE search. Main Outcome Measures  Mean number of standards met based on a 25-item instrument and frequency of adherence. Results  We evaluated 279 guidelines, published from 1985 through June 1997, produced by 69 different developers. Mean overall adherence to standards by each guideline was 43.1% (10.77/25). Mean (SD) adherence to methodological standards on guideline development and format was 51.1% (25.3%); on identification and summary of evidence, 33.6% (29.9%); and on the formulation of recommendations, 46% (45%). Mean adherence to standards by each guideline improved from 36.9% (9.2/25) in 1985 to 50.4% (12.6/25) in 1997 (P<.001). However, there was little improvement over time in adherence to standards on identification and summary of evidence from 34.6% prior to 1990 to 36.1% after 1995 (P=.11). There was no difference in the mean number of standards satisfied by guidelines produced by subspecialty medical societies, general medical societies, or government agencies (P=.55). Guideline length was positively correlated with adherence to methodological standards (P=.001). Conclusion  Guidelines published in the peer-reviewed medical literature during the past decade do not adhere well to established methodological standards. While all areas of guideline development need improvement, greatest improvement is needed in the identification, evaluation, and synthesis of the scientific evidence.

3.
Context.— Increasing the number of minority physicians is a long-standing goal of professional associations and government. Objective.— To determine the effectiveness of an intensive summer educational program for minority college students and recent graduates on the probability of acceptance to medical school. Design.— Nonconcurrent prospective cohort study based on data from medical school applications, Medical College Admission Tests, and the Association of American Medical Colleges Student and Applicant Information Management System. Setting.— Eight US medical schools or consortia of medical schools. Participants.— Underrepresented minority (black, Mexican American, mainland Puerto Rican, and American Indian) applicants to US allopathic medical schools in 1997 (N=3830), 1996 (N=4654), and 1992 (N=3447). Intervention.— The Minority Medical Education Program (MMEP), a 6-week, residential summer educational program focused on training in the sciences and improvement of writing, verbal reasoning, studying, test taking, and presentation skills. Main Outcome Measure.— Probability of acceptance to at least 1 medical school. Results.— In the 1997 medical school application cohort, 223 (49.3%) of 452 MMEP participants were accepted compared with 1406 (41.6%) of 3378 minority nonparticipants (P=.002). Positive and significant program effects were also found in the 1996 (P=.01) and 1992 (P=.005) cohorts and in multivariate analysis after adjusting for nonprogrammatic factors likely to influence acceptance (P<.001). Program effects were also observed in students who participated in the MMEP early in college as well as those who participated later and among those with relatively high as well as low grades and test scores. Conclusions.— The MMEP enhanced the probability of medical school acceptance among its participants. Intensive summer education is a strategy that may help improve diversity in the physician workforce.

4.
Bellini LM  Baime M  Shea JA 《JAMA》2002,287(23):3143-3146
Context  Internship is a time of great transition, during which mood disturbances are common. However, variations in mood and empathy levels throughout the internship year have not been investigated. Objective  To examine mood patterns and changes in empathy among internal medicine residents over the course of the internship year. Design  Cohort study of interns involving completion of survey instruments at 4 points: time 1 (June 2000; Profile of Mood States [POMS] and Interpersonal Reactivity Index [IRI]), times 2 and 3 (November 2000 and February 2001; POMS), and time 4 (June 2001; POMS and IRI). Setting  Internal medicine residency program at a university-based medical center. Participants  Sixty-one interns. Main Outcome Measures  Baseline scores of mood states and empathy; trends in mood states and empathy over the internship year. Results  Response rates for time 1 were 98%; for time 2, 72%; for time 3, 79%; and for time 4, 79%. Results of the POMS revealed that physicians starting their internship exhibit less tension, depression, anger, fatigue, and confusion and have more vigor than general adult and college student populations (P<.001 for all). Results of the IRI showed better baseline scores for perspective taking (P<.001) and empathic concern (P = .007) and lower scores for personal distress (P<.001) among interns compared with norms. Five months into internship, however, POMS scores revealed significant increases in the depression-dejection (P<.001), anger-hostility (P<.001), and fatigue-inertia (P<.001) scales, as well as an increase in IRI personal distress level (P<.001). These increases corresponded with decreases in the POMS vigor-activity scores (P<.001) and IRI empathic concern measures (P = .005). Changes persisted throughout the internship period. Conclusions  We found that, in this sample, enthusiasm at the beginning of internship soon gave way to depression, anger, and fatigue. 
Future research should determine whether these changes persist beyond internship.

5.
Context  Alcohol dependence treatment may include medications, behavioral therapies, or both. It is unknown how combining these treatments may impact their effectiveness, especially in the context of primary care and other nonspecialty settings. Objectives  To evaluate the efficacy of medication, behavioral therapies, and their combinations for treatment of alcohol dependence and to evaluate placebo effect on overall outcome. Design, Setting, and Participants  Randomized controlled trial conducted January 2001-January 2004 among 1383 recently alcohol-abstinent volunteers (median age, 44 years) from 11 US academic sites with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, diagnoses of primary alcohol dependence. Interventions  Eight groups of patients received medical management with 16 weeks of naltrexone (100 mg/d) or acamprosate (3 g/d), both, and/or both placebos, with or without a combined behavioral intervention (CBI). A ninth group received CBI only (no pills). Patients were also evaluated for up to 1 year after treatment. Main Outcome Measures  Percent days abstinent from alcohol and time to first heavy drinking day. Results  All groups showed substantial reduction in drinking. During treatment, patients receiving naltrexone plus medical management (n = 302), CBI plus medical management and placebos (n = 305), or both naltrexone and CBI plus medical management (n = 309) had higher percent days abstinent (80.6, 79.2, and 77.1, respectively) than the 75.1 in those receiving placebos and medical management only (n = 305), a significant naltrexone x behavioral intervention interaction (P = .009). Naltrexone also reduced risk of a heavy drinking day (hazard ratio, 0.72; 97.5% CI, 0.53-0.98; P = .02) over time, most evident in those receiving medical management but not CBI. Acamprosate showed no significant effect on drinking vs placebo, either by itself or with any combination of naltrexone, CBI, or both. 
During treatment, those receiving CBI without pills or medical management (n = 157) had lower percent days abstinent (66.6) than those receiving placebo plus medical management alone (n = 153) or placebo plus medical management and CBI (n = 156) (73.8 and 79.8, respectively; P<.001). One year after treatment, these between-group effects were similar but no longer significant. Conclusions  Patients receiving medical management with naltrexone, CBI, or both fared better on drinking outcomes, whereas acamprosate showed no evidence of efficacy, with or without CBI. No combination produced better efficacy than naltrexone or CBI alone in the presence of medical management. Placebo pills and meeting with a health care professional had a positive effect above that of CBI during treatment. Naltrexone with medical management could be delivered in health care settings, thus serving alcohol-dependent patients who might otherwise not receive treatment. Trial Registration  clinicaltrials.gov Identifier: NCT00006206

6.
Baernstein A  Liss HK  Carney PA  Elmore JG 《JAMA》2007,298(9):1038-1045
Context  Evidence-based medical education requires rigorous studies appraising educational efficacy. Objectives  To assess trends over time in methods used to evaluate undergraduate medical education interventions and to identify whether participation of medical education departments or centers is associated with more rigorous methods. Data Sources  The PubMed, Cochrane Controlled Trials Registry, Campbell Collaboration, and ERIC databases (January 1966–March 2007) were searched using terms equivalent to students, medical and education, medical crossed with all relevant study designs. Study Selection  We selected publications in all languages from every fifth year, plus the most recent 12 months, that evaluated an educational intervention for undergraduate medical students. Four hundred seventy-two publications met criteria for review. Data Extraction  Data were abstracted on number of participants; types of comparison groups; whether outcomes assessed were objective, subjective, and/or validated; timing of outcome assessments; funding; and participation of medical education departments and centers. Ten percent of publications were independently abstracted by 2 authors to assess validity of the data abstraction. Results  The annual number of publications increased over time from 1 (1969-1970) to 147 (2006-2007). In the most recent year, there was a mean of 145 medical student participants; 9 (6%) recruited participants from multiple institutions; 80 (54%) used comparison groups; 37 (25%) used randomized control groups; 91 (62%) had objective outcomes; 23 (16%) had validated outcomes; 35 (24%) assessed an outcome more than 1 month later; 21 (14%) estimated statistical power; and 66 (45%) reported funding. 
In 2006-2007, medical education department or center participation, reported in 46 (31%) of the recent publications, was associated only with enrolling more medical student participants (P = .04); for all studies from 1969 to 2007, it was associated only with measuring an objective outcome (P = .048). Between 1969 and 2007, the percentage of publications reporting statistical power and funding increased; percentages did not change for other study features. Conclusions  The annual number of published studies of undergraduate medical education interventions demonstrating methodological rigor has been increasing. However, considerable opportunities for improvement remain.

7.
Context  Asphyxia is the most common cause of death after avalanche burial. A device that allows a person to breathe air contained in snow by diverting expired carbon dioxide (CO2) away from a 500-cm3 artificial inspiratory air pocket may improve chances of survival in avalanche burial. Objective  To determine the duration of adequate oxygenation and ventilation during burial in dense snow while breathing with vs without the artificial air pocket device. Design  Field study of physiologic respiratory measures during snow burial with and without the device from December 1998 to March 1999. Study burials were terminated at the subject's request, when oxygen saturation as measured by pulse oximetry (SpO2) dropped to less than 84%, or after 60 minutes elapsed. Setting  Mountainous outdoor site at 2385 m elevation, with an average barometric pressure of 573 mm Hg. Participants  Six male and 2 female volunteers (mean age, 34.6 years; range, 28-39 years). Main Outcome Measures  Burial time, SpO2, partial pressure of end-tidal CO2 (ETCO2), partial pressure of inspiratory CO2(PICO2), respiratory rate, and heart rate at baseline (in open atmosphere) and during snow burial while breathing with the device and without the device but with a 500-cm3 air pocket in the snow. Results  Mean burial time was 58 minutes (range, 45-60 minutes) with the device and 10 minutes (range, 5-14 minutes) without it (P=.001). A mean baseline SpO2 of 96% (range, 90%-99%) decreased to 90% (range, 77%-96%) in those buried with the device (P=.01) and to 84% (range, 79%-92%) in the control burials (P=.02). Only 1 subject buried with the device, but 6 control subjects buried without the device, decreased SpO2 to less than 88% (P=.005). A mean baseline ETCO2 of 32 mm Hg (range, 27-38 mm Hg) increased to 45 mm Hg (range, 32-53 mm Hg) in the burials with the device (P=.02) and to 54 mm Hg (range, 44-63 mm Hg) in the control burials (P=.02). 
A mean baseline PICO2 of 2 mm Hg (range, 0-3 mm Hg) increased to 32 mm Hg (range, 20-44 mm Hg) in the burials with the device (P=.01) and to 44 mm Hg (range, 37-50 mm Hg) in the control burials (P=.02). Respiratory and heart rates did not change in burials with the device but significantly increased in control burials. Conclusions  In our study, although hypercapnia developed, breathing with the device during snow burial considerably extended duration of adequate oxygenation compared with breathing with an air pocket in the snow. Further study will be needed to determine whether the device improves survival during avalanche burial.

8.
Context  Observational studies have shown that psychosocial factors are associated with increased risk for cardiovascular morbidity and mortality, but the effects of behavioral interventions on psychosocial and medical end points remain uncertain. Objective  To determine the effect of 2 behavioral programs, aerobic exercise training and stress management training, with routine medical care on psychosocial functioning and markers of cardiovascular risk. Design, Setting, and Patients  Randomized controlled trial of 134 patients (92 male and 42 female; aged 40-84 years) with stable ischemic heart disease (IHD) and exercise-induced myocardial ischemia. Conducted from January 1999 to February 2003. Interventions  Routine medical care (usual care); usual care plus supervised aerobic exercise training for 35 minutes 3 times per week for 16 weeks; usual care plus weekly 1.5-hour stress management training for 16 weeks. Main Outcome Measures  Self-reported measures of general distress (General Health Questionnaire [GHQ]) and depression (Beck Depression Inventory [BDI]); left ventricular ejection fraction (LVEF) and wall motion abnormalities (WMA); flow-mediated dilation; and cardiac autonomic control (heart rate variability during deep breathing and baroreflex sensitivity). Results  Patients in the exercise and stress management groups had lower mean (SE) BDI scores (exercise: 8.2 [0.6]; stress management: 8.2 [0.6]) vs usual care (10.1 [0.6]; P = .02); reduced distress by GHQ scores (exercise: 56.3 [0.9]; stress management: 56.8 [0.9]) vs usual care (53.6 [0.9]; P = .02); and smaller reductions in LVEF during mental stress testing (exercise: –0.54% [0.44%]; stress management: –0.34% [0.45%]) vs usual care (–1.69% [0.46%]; P = .03). 
Exercise and stress management were associated with lower mean (SE) WMA rating scores (exercise: 0.20 [0.07]; stress management: 0.10 [0.07]) in a subset of patients with significant stress-induced WMA at baseline vs usual care (0.36 [0.07]; P = .02). Patients in the exercise and stress management groups had greater mean (SE) improvements in flow-mediated dilation (exercise: 5.6% [0.45%]; stress management: 5.2% [0.47%]) vs usual care patients (4.1% [0.48%]; P = .03). In a subgroup, those receiving stress management showed improved mean (SE) baroreflex sensitivity (8.2 [0.8] ms/mm Hg) vs usual care (5.1 [0.9] ms/mm Hg; P = .02) and significant increases in heart rate variability (193.7 [19.6] ms) vs usual care (132.1 [21.5] ms; P = .04). Conclusion  For patients with stable IHD, exercise and stress management training reduced emotional distress and improved markers of cardiovascular risk more than usual medical care alone.

9.
Context  Depression is a common condition associated with significant morbidity in adolescents. Few depressed adolescents receive effective treatment for depression in primary care settings. Objective  To evaluate the effectiveness of a quality improvement intervention aimed at increasing access to evidence-based treatments for depression (particularly cognitive-behavior therapy and antidepressant medication), relative to usual care, among adolescents in primary care practices. Design, Setting, and Participants  Randomized controlled trial conducted between 1999 and 2003 enrolling 418 primary care patients with current depressive symptoms, aged 13 through 21 years, from 5 health care organizations purposively selected to include managed care, public sector, and academic medical center clinics in the United States. Intervention  Usual care (n = 207) or 6-month quality improvement intervention (n = 211) including expert leader teams at each site, care managers who supported primary care clinicians in evaluating and managing patients’ depression, training for care managers in manualized cognitive-behavior therapy for depression, and patient and clinician choice regarding treatment modality. Participating clinicians also received education regarding depression evaluation, management, and pharmacological and psychosocial treatment. Main Outcome Measures  Depressive symptoms assessed by Center for Epidemiological Studies-Depression Scale (CES-D) score. Secondary outcomes were mental health–related quality of life assessed by Mental Health Summary Score (MCS-12) and satisfaction with mental health care assessed using a 5-point scale. 
Results  Six months after baseline assessments, intervention patients, compared with usual care patients, reported significantly fewer depressive symptoms (mean [SD] CES-D scores, 19.0 [11.9] vs 21.4 [13.1]; P = .02), higher mental health–related quality of life (mean [SD] MCS-12 scores, 44.6 [11.3] vs 42.8 [12.9]; P = .03), and greater satisfaction with mental health care (mean [SD] scores, 3.8 [0.9] vs 3.5 [1.0]; P = .004). Intervention patients also reported significantly higher rates of mental health care (32.1% vs 17.2%, P<.001) and psychotherapy or counseling (32.0% vs 21.2%, P = .007). Conclusions  A 6-month quality improvement intervention aimed at improving access to evidence-based depression treatments through primary care was significantly more effective than usual care for depressed adolescents from diverse primary care practices. The greater uptake of counseling vs medication under the intervention reinforces the importance of practice interventions that include resources to enable evidence-based psychotherapy for depressed adolescents.

10.
Discussion of medical errors in morbidity and mortality conferences
Pierluissi E  Fischer MA  Campbell AR  Landefeld CS 《JAMA》2003,290(21):2838-2842
Context  Morbidity and mortality conferences in residency programs are intended to discuss adverse events and errors with a goal to improve patient care. Little is known about whether residency training programs are accomplishing this goal. Objective  To determine the frequency at which morbidity and mortality conference case presentations include adverse events and errors and whether the errors are discussed and attributed to a particular cause. Design, Setting, and Participants  Prospective survey conducted by trained physician observers from July 2000 through April 2001 on 332 morbidity and mortality conference case presentations and discussions in internal medicine (n = 100) and surgery (n = 232) at 4 US academic hospitals. Main Outcome Measures  Frequencies of presentation of adverse events and errors, discussion of errors, and attribution of errors. Results  In internal medicine morbidity and mortality conferences, case presentations and discussions were 3 times longer than in surgery conferences (34.1 minutes vs 11.7 minutes; P = .001), more time was spent listening to invited speakers (43.1% vs 0%; P<.001), and less time was spent in audience discussion (15.2% vs 36.6%; P<.001). Fewer internal medicine case presentations included adverse events (37 [37%] vs 166 surgery case presentations [72%]; P<.001) or errors causing an adverse event (18 [18%] vs 98 [42%], respectively; P = .001). When an error caused an adverse event, the error was discussed as an error less often in internal medicine (10 errors [48%] vs 85 errors in surgery [77%]; P = .02). Errors were attributed to a particular cause less often in medicine than in surgery conferences (8 [38%] of 21 medicine errors vs 88 [79%] of 112 surgery errors; P<.001). In discussions of cases with errors, conference leaders in both internal medicine and surgery infrequently used explicit language to signal that an error was being discussed and infrequently acknowledged having made an error. 
Conclusions  Our findings call into question whether adverse events and errors are routinely discussed in internal medicine training programs. Although adverse events and errors were discussed frequently in surgery cases, teachers in both surgery and internal medicine missed opportunities to model recognition of error and to use explicit language in error discussion by acknowledging their personal experiences with error.

11.
Influence of a Child's Sex on Medulloblastoma Outcome
Context.— Aggressive treatment of medulloblastoma, the most common pediatric brain tumor, has not improved survival. Identifying better prognostic indicators may warrant less morbid therapy. Objective.— To investigate the role of sex on outcome of medulloblastoma. Design.— Retrospective study of significant factors for survival with a median follow-up of 82 months. Setting.— University medical center. Patients.— A total of 109 consecutive pediatric patients treated for primary medulloblastoma from 1970 to 1995 with surgery and postoperative radiotherapy and, after 1979, chemotherapy. Main Outcome Measures.— Factors independently associated with survival. Results.— The final multivariate model predicting improved survival included sex (hazard ratio, 0.52; 95% confidence interval [CI], 0.29-0.92; P=.03; favoring female), metastases at presentation (hazard ratio, 2.01; 95% CI, 1.14-3.52; P=.02), and extent of surgical resection (hazard ratio, 0.60; 95% CI, 0.34-1.04; P=.07; favoring greater resection). The overall 5-year freedom from progression was 40%, and survival was 49%. Radiotherapy dose (P=.72) and chemotherapy (P=.90) did not significantly affect disease outcome. Conclusions.— The sex of the child was an important predictor of survival in medulloblastoma; girls had a much better outcome. The difference in survival between sexes should be evaluated in prospective clinical trials.

12.
Context  Infertile men with obstructive azoospermia may have mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, many of which are rare in classic cystic fibrosis and not evaluated in most routine mutation screening. Objective  To assess how often CFTR mutations or sequence alterations undetected by routine screening are detected with more extensive screening in obstructive azoospermia. Design  Routine screening for the 31 most common CFTR mutations associated with the CF phenotype in white populations, testing for the 5-thymidine variant of the polythymidine tract of intron 8 (IVS8-5T) by allele-specific oligonucleotide hybridization, and screening of all exons through multiplex heteroduplex shift analysis followed by direct DNA sequencing. Setting  Male infertility clinic of a Canadian university-affiliated hospital. Subjects  Of 198 men with obstructive (n=149) or nonobstructive (n=49; control group) azoospermia, 64 had congenital bilateral absence of the vas deferens (CBAVD), 10 had congenital unilateral absence of the vas deferens (CUAVD), and 75 had epididymal obstruction (56/75 were idiopathic). Main Outcome Measure  Frequency of mutations found by routine and nonroutine tests in men with obstructive vs nonobstructive azoospermia. Results  Frequency of mutations and the IVS8-5T variant in the nonobstructive azoospermia group (controls) (2% and 5.1% allele frequency, respectively) did not differ significantly from that in the general population (2% and 5.2%, respectively). In the CBAVD group, 72 mutations were found by DNA sequencing and IVS8-5T testing (47 and 25, respectively; P<.001 and P=.002 vs controls) vs 39 by the routine panel (P<.001 vs controls). In the idiopathic epididymal obstruction group, 24 mutations were found by DNA sequencing and IVS8-5T testing (12 each; P=.01 and P=.14 vs controls) vs 5 by the routine panel (P=.33 vs controls). 
In the CUAVD group, 2 mutations were found by routine testing (P=.07 vs controls) vs 4 (2 each, respectively; P=.07 and P=.40 vs controls) by DNA sequencing and IVS8-5T testing. The routine panel did not identify 33 (46%) of 72, 2 (50%) of 4, and 19 (79%) of 24 detectable CFTR mutations and IVS8-5T in the CBAVD, CUAVD, and idiopathic epididymal obstruction groups, respectively. Conclusions  Routine testing for CFTR mutations may miss mild or rare gene alterations. The barrier to conception for men with obstructive infertility has been overcome by assisted reproductive technologies, thus raising the concern of iatrogenically transmitting pathogenic CFTR mutations to the progeny.

13.
Testa MA  Simonson DC 《JAMA》1998,280(17):1490-1496
Context.— Although the long-term health benefits of good glycemic control in patients with diabetes are well documented, shorter-term quality of life (QOL) and economic savings generally have been reported to be minimal or absent. Objective.— To examine short-term outcomes of glycemic control in type 2 diabetes mellitus (DM). Design.— Double-blind, randomized, placebo-controlled, parallel trial. Setting.— Sixty-two sites in the United States. Participants.— A total of 569 male and female volunteers with type 2 DM. Intervention.— After a 3-week, single-blind placebo-washout period, participants were randomized to diet and titration with either 5 to 20 mg of glipizide gastrointestinal therapeutic system (GITS) (n=377) or placebo (n=192) for 12 weeks. Main Outcome Measures.— Change from baseline in glucose and hemoglobin A1c (HbA1c) levels and symptom distress, QOL, and health economic indicators by questionnaires and diaries. Results.— After 12 weeks, mean (±SE) HbA1c and fasting blood glucose levels decreased with active therapy (glipizide GITS) vs placebo (7.5%±0.1% vs 9.3%±0.1% and 7.0±0.1 mmol/L [126±2 mg/dL] vs 9.3±0.2 mmol/L [168±4 mg/dL], respectively; P<.001). Quality-of-life treatment differences (SD units) for symptom distress (+0.59; P<.001), general perceived health (+0.36; P=.004), cognitive functioning (+0.34; P=.005), and the overall visual analog scale (VAS) (+0.24; P=.04) were significantly more favorable for active therapy. Subscales of acuity (+0.38; P=.002), VAS emotional health (+0.35; P =.003), general health (+0.27; P =.01), sleep (+0.26; P =.04), depression (+0.25; P =.05), disorientation and detachment (+0.23; P =.05), and vitality (+0.22; P =.04) were most affected. 
Favorable health economic outcomes for glipizide GITS included higher retained employment (97% vs 85%; P<.001), greater productive capacity (99% vs 87%; P<.001), less absenteeism (losses=$24 vs $115 per worker per month; P<.001), fewer bed-days (losses=$1539 vs $1843 per 1000 person-days; P=.05), and fewer restricted-activity days (losses=$2660 vs $4275 per 1000 person-days; P=.01). Conclusions.— Improved glycemic control of type 2 DM is associated with substantial short-term symptomatic, QOL, and health economic benefits.

14.
Medical Care Delivery at the 1996 Olympic Games
Context.— Mass gatherings like the 1996 Olympic Games require medical services for large populations assembled under unusual circumstances. Objective.— To examine delivery of medical services and to provide data for planning future events. Design.— Observational cohort study, with review of medical records at Olympics medical facilities. Setting.— One large multipurpose clinic and 128 medical aid stations operating at Olympics-sponsored sites in the vicinity of Atlanta, Ga. Participants.— A total of 10 715 patients, including 1804 athletes, 890 officials, 480 Olympic dignitaries, 3280 volunteers, 3482 spectators, and 779 others who received medical care from a physician at an Olympic medical station. Main Outcome Measures.— Number of injuries and cases of heat-related illness among participant categories, medical use rates among participants with official Games credentials, and use rates per 10000 persons attending athletic competitions. Results.— Injuries, accounting for 35% of all medical visits, were more common among athletes (51.9% of their visits, P<.001) than among other groups; injuries accounted for 31.4% of visits among all other groups combined. Spectators and volunteers accounted for most (88.9%, P<.001) of the 1059 visits for heat-related illness. Rates of medical encounters treated by a physician were highest for athletes (16.2 per 100 persons, P<.001) and lowest for volunteers (2.0 per 100). The overall physician treatment rate was 4.2 per 10000 in attendance (range, 1.6-30.1 per 10000). A total of 432 patients were transferred to hospitals. Conclusions.— Organizers used these data during the Games to monitor the health of participants and to redirect medical and other resources to areas of increased need. These data should be useful for planning medical services for future mass gatherings.

15.
Context.— Adverse drug events (ADEs) are a significant and costly cause of injury during hospitalization. Objectives.— To evaluate the efficacy of 2 interventions for preventing nonintercepted serious medication errors, defined as those that either resulted in or had potential to result in an ADE and were not intercepted before reaching the patient. Design.— Before-after comparison between phase 1 (baseline) and phase 2 (after intervention was implemented) and, within phase 2, a randomized comparison between physician computer order entry (POE) and the combination of POE plus a team intervention. Setting.— Large tertiary care hospital. Participants.— For the comparison of phases 1 and 2, all patients admitted to a stratified random sample of 6 medical and surgical units in a tertiary care hospital over a 6-month period, and for the randomized comparison during phase 2, all patients admitted to the same units and 2 randomly selected additional units over a subsequent 9-month period. Interventions.— A physician computer order entry system (POE) for all units and a team-based intervention that included changing the role of pharmacists, implemented for half the units. Main Outcome Measure.— Nonintercepted serious medication errors. Results.— Comparing identical units between phases 1 and 2, nonintercepted serious medication errors decreased 55%, from 10.7 events per 1000 patient-days to 4.86 events per 1000 (P=.01). The decline occurred for all stages of the medication-use process. Preventable ADEs declined 17% from 4.69 to 3.88 (P=.37), while nonintercepted potential ADEs declined 84% from 5.99 to 0.98 per 1000 patient-days (P=.002). When POE alone was compared with POE plus the team intervention, the team intervention conferred no additional benefit over POE.
Conclusions.— Physician computer order entry decreased the rate of nonintercepted serious medication errors by more than half, although this decrease was larger for potential ADEs than for errors that actually resulted in an ADE.
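The percentage declines quoted in these results follow directly from the before/after event rates; a quick arithmetic check (the helper name is mine, not from the study):

```python
def pct_decline(before: float, after: float) -> int:
    """Percentage decline from a baseline rate, rounded to the nearest point."""
    return round((before - after) / before * 100)

# Rates are events per 1000 patient-days, taken from the abstract:
pct_decline(10.7, 4.86)  # nonintercepted serious medication errors -> 55
pct_decline(4.69, 3.88)  # preventable ADEs -> 17
pct_decline(5.99, 0.98)  # nonintercepted potential ADEs -> 84
```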

16.
Context  Little is known about potential long-term health effects of bioterrorism-related Bacillus anthracis infection. Objective  To describe the relationship between anthrax infection and persistent somatic symptoms among adults surviving bioterrorism-related anthrax disease approximately 1 year after illness onset in 2001. Design, Setting, and Participants  Cross-sectional study of 15 of 16 adult survivors from September through December 2002 using a clinical interview, a medical review-of-systems questionnaire, 2 standardized self-administered questionnaires, and a review of available medical records. Main Outcome Measures  Health complaints summarized by the body system affected and by symptom categories; psychological distress measured by the Revised 90-Item Symptom Checklist; and health-related quality-of-life indices by the Medical Outcomes Study 36-Item Short-Form Health Survey (version 2). Results  The anthrax survivors reported symptoms affecting multiple body systems, significantly greater overall psychological distress (P<.001), and significantly reduced health-related quality-of-life indices compared with US referent populations. Eight survivors (53%) had not returned to work since their infection. Comparing disease manifestations, inhalational survivors reported significantly lower overall physical health than cutaneous survivors (mean scores, 30 vs 41; P = .02). Available medical records could not explain the persisting health complaints. Conclusion  The anthrax survivors continued to report significant health problems and poor life adjustment 1 year after onset of bioterrorism-related anthrax disease.

17.
Context  Understanding why some terminally ill patients desire a hastened death has become an important issue in palliative care and the debate regarding legalization of assisted suicide. Objectives  To assess the prevalence of desire for hastened death among terminally ill cancer patients and to identify factors corresponding to desire for hastened death. Design  Prospective survey conducted in a 200-bed palliative care hospital in New York, NY. Patients  Ninety-two terminally ill cancer patients (60% female; 70% white; mean age, 65.9 years) admitted between June 1998 and January 1999 for end-of-life care who passed a cognitive screening test and provided sufficient data to permit analysis. Main Outcome Measure  Scores on the Schedule of Attitudes Toward Hastened Death (SAHD), a self-report measure assessing desire for hastened death among individuals with life-threatening medical illness. Results  Sixteen patients (17%) were classified as having a high desire for hastened death based on the SAHD and 15 (16%) of 89 patients met criteria for a current major depressive episode. Desire for hastened death was significantly associated with a clinical diagnosis of depression (P = .001) as well as with measures of depressive symptom severity (P<.001) and hopelessness (P<.001). In multivariate analyses, depression (P = .003) and hopelessness (P<.001) provided independent and unique contributions to the prediction of desire for hastened death, while social support (P = .05) and physical functioning (P = .02) added significant but smaller contributions. Conclusions  Desire for hastened death among terminally ill cancer patients is not uncommon. Depression and hopelessness are the strongest predictors of desire for hastened death in this population and provide independent and unique contributions. 
Interventions addressing depression, hopelessness, and social support appear to be important aspects of adequate palliative care, particularly as it relates to desire for hastened death.

18.
Context  Although a quarter of US women undergo elective hysterectomy before menopause, controlled trials that evaluate the benefits and harms are lacking. Objective  To compare the effect of hysterectomy vs expanded medical treatment on health-related quality of life. Design, Setting, and Participants  A multicenter, randomized controlled trial (August 1997–December 2000) of 63 premenopausal women, aged 30 to 50 years, with abnormal uterine bleeding for a median of 4 years who were dissatisfied with medical treatments, including medroxyprogesterone acetate. The participants, who were patients at gynecology clinics and affiliated practices of 4 US academic medical centers, were followed up for 2 years. Interventions  Participants were randomly assigned to undergo hysterectomy or expanded medical treatment with estrogen and/or progesterone and/or a prostaglandin synthetase inhibitor. The hysterectomy route and medical regimen were determined by the participating gynecologist. Main Outcome Measures  The primary outcome was mental health measured by the Mental Component Summary (MCS) of the 36-Item Short-Form Health Survey (SF-36). Secondary outcomes included physical health measured by the Physical Component Summary (PCS), symptom resolution and satisfaction, body image, and sexual functioning, as well as other aspects of mental health and general health perceptions. Results  At 6 months, women in the hysterectomy group had greater improvement in MCS scores than women in the medicine group (8 vs 2, P = .04). They also had greater improvement in symptom resolution (75 vs 29, P<.001), symptom satisfaction (44 vs 7, P<.001), interference with sex (41 vs 22, P = .003), sexual desire (21 vs 3, P = .01), health distress (33 vs 13, P = .009), sleep problems (13 vs 1, P = .03), overall health (12 vs 2, P = .006), and satisfaction with health (31 vs 14, P = .01). 
By the end of the study, 17 (53%) of the women in the medicine group had requested and received hysterectomy, and these women reported improvements in quality-of-life outcomes during the 2 years that were similar to those reported by women randomized to the hysterectomy group. Women who continued medical treatment also reported some improvements (P<.001 for within-group change in many outcomes), with the result that most differences between randomized groups at the end of the study were no longer statistically significant in the intention-to-treat analysis. Conclusions  Among women with abnormal uterine bleeding and dissatisfaction with medroxyprogesterone, hysterectomy was superior to expanded medical treatment for improving health-related quality of life at 6 months. With longer follow-up, half the women randomized to medicine elected to undergo hysterectomy, with similar and lasting quality-of-life improvements; those who continued medical treatment also reported some improvements.

19.
Burke AP  Farb A  Malcom GT  Liang Y  Smialek JE  Virmani R 《JAMA》1999,281(10):921-926
Context  Exertion has been reported to acutely increase the risk of sudden coronary death, but the underlying mechanisms are unclear. Objective  To determine the frequency of plaque rupture in sudden deaths related to exertion compared with sudden deaths not related to exertion. Design  Autopsy survey. Coronary arteries were perfusion fixed, and segments with more than 50% luminal narrowing were examined histologically. Ruptured plaques were defined as intraplaque hemorrhage with disruption of the fibrous cap and luminal thrombus. Exertion before death was determined by the investigator of the death. Setting  Medical examiner's office. Patients  A total of 141 men with severe coronary artery disease who died suddenly, including 116 whose deaths occurred at rest (mean [SD] age, 51 [11] years) and 25 who died during strenuous activity or emotional stress (age, 49 [9] years). Main Outcome Measures  The frequency and morphology of plaque rupture were compared in men dying at rest vs those dying during exertion. Independent associations of risk factors (total cholesterol, high-density lipoprotein cholesterol, glycosylated hemoglobin, cigarette smoking), in addition to acute exertion, with plaque rupture were determined. Results  The mean (SD) number of vulnerable plaques in the coronary arteries of men in the exertional-death group was 1.6 (1.5) and in the at-rest group was 0.9 (1.2) (P=.03). The culprit lesion was a ruptured plaque in 17 (68%) of 25 men dying during exertion vs 27 (23%) of 116 men dying at rest (P<.001). Hemorrhage into the plaque occurred in 18 (72%) of 25 men in the exertional-death group and 47 (41%) of 116 men in the rest group (P=.007). Histological evidence of acute myocardial infarction was present in 0 of 25 in the exertion group and in 15 (13%) of 116 in the rest group.
Men dying during exertion had a significantly higher mean (SD) total cholesterol–high-density lipoprotein cholesterol ratio (8.2 [3.0]) than those dying at rest (6.2 [2.7]; P=.002), and the majority (21/25) were not conditioned. In multivariate analysis, both exertion (P=.002) and total cholesterol–high-density lipoprotein cholesterol ratio (P=.002) were associated with acute plaque rupture, independent of age and other cardiac risk factors. Conclusion  In men with severe coronary artery disease, sudden death related to exertion was associated with acute plaque rupture.
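The group percentages in this abstract can be reproduced from the raw autopsy counts; a minimal check (the helper name is an assumption of mine):

```python
def pct(count: int, group_size: int) -> int:
    """Whole-number percentage of a count within a group."""
    return round(count / group_size * 100)

# Counts from the abstract:
pct(17, 25)   # culprit plaque rupture, exertion group -> 68
pct(27, 116)  # culprit plaque rupture, at-rest group -> 23
pct(18, 25)   # intraplaque hemorrhage, exertion group -> 72
pct(47, 116)  # intraplaque hemorrhage, at-rest group -> 41
```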

20.
Context  Hematopoietic cell transplantation (HCT) is an effective and widely used treatment for hematologic malignancies. The rate and predictors of physical and emotional recovery after HCT have not been adequately defined in prospective long-term studies. Objective  To examine the course of recovery and return to work after HCT. Design, Setting, and Patients  Prospective, longitudinal cohort study at a US academic center specializing in HCT. Function was assessed from pretransplantation to 5-year follow-up for 319 adults who had myeloablative HCT for treatment of leukemia or lymphoma and spoke English. Of the 99 long-term survivors who had no recurrent malignancy, 94 completed 5-year follow-up. Main Outcome Measures  Physical limitations, return to work, depression, and distress related to treatment or disease were evaluated before transplantation, at 90 days, and at 1, 3, and 5 years after HCT. Results  Physical recovery occurred earlier than psychological or work recovery. Only 21 patients (19%) recovered on all outcomes at 1 year. The proportion without major limitations increased to 63% (n = 57) by 5 years. Among survivors without recurrent malignancy, 84% (n = 74) returned to full-time work by 5 years. Patients with slower physical recovery had higher medical risk and were more depressed before HCT (P≤.001). Patients with chronic graft-vs-host disease (P = .01), with less social support before HCT (P = .001), and women (P<.001) were more depressed after transplantation. Transplant-related distress was slower to resolve for allogeneic transplant recipients and those with less social support before HCT (P≤.01). Patients who had more experience with cancer treatment before beginning HCT had more rapid recovery from depression (P = .04) and treatment-related distress (P = .009). Conclusions  Full recovery after HCT is a 3- to 5-year process.
Recovery might be accelerated by more effective interventions to increase work-related capabilities, improve social support, and manage depression.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号