Similar articles
1.

Objective

To study gender differences in management and outcome in patients with non‐ST‐elevation acute coronary syndrome.

Design, setting and patients

Cohort study of 53 781 consecutive patients (37% women) from the Register of Information and Knowledge about Swedish Heart Intensive care Admissions (RIKS‐HIA), with a diagnosis of either unstable angina pectoris or non‐ST‐elevation myocardial infarction. All patients were admitted to intensive coronary care units in Sweden, between 1998 and 2002, and followed for 1 year.

Main outcome measures

Treatment intensity and in‐hospital, 30‐day and 1‐year mortality.

Results

Women were older (73 vs 69 years, p<0.001) and more likely to have a history of hypertension and diabetes, but less likely to have a history of myocardial infarction or revascularisation. After adjustment, there were no major differences in acute pharmacological treatment or prophylactic medication at discharge. Revascularisation was, however, even after adjustment, performed more often in men (OR 1.15; 95% CI, 1.09 to 1.21). After adjustment, there was no significant difference in in‐hospital (OR 1.03; 95% CI, 0.94 to 1.13) or 30‐day (OR 1.07; 95% CI, 0.99 to 1.15) mortality, but at 1 year being male was associated with higher mortality (OR 1.12; 95% CI, 1.06 to 1.19).

Conclusion

Although women are somewhat less intensively treated, especially regarding invasive procedures, after adjustment for differences in background characteristics, they have better long‐term outcomes than men.Since the beginning of the 1990s there have been numerous studies on gender differences in management of acute coronary syndromes (ACS). Many earlier studies,1,2,3,4,5,6,7,8 but not all,9 found that women were treated less intensively in the acute phase. In some of the studies, after adjustment for age, comorbidity and severity of the disease, most of the differences disappeared.6,7 There is also conflicting evidence on gender differences in evidence‐based treatment at discharge.1,3,5,6,8,10,11After acute myocardial infarction (AMI), a higher short‐term mortality in women is documented in several studies.2,5,6,7,12,13,14 After adjustment for age and comorbidity some difference has usually,2,5,12,13 but not always,11,14 remained. On the other hand, most studies assessing long‐term outcome have found no difference between the genders, or a better outcome in women, at least after adjustment.7,10,13,14 Earlier studies focusing on gender differences in outcome after an acute coronary syndrome have usually studied patients with AMI, including both ST‐elevation myocardial infarction and non‐ST‐elevation myocardial infarction (NSTEMI).2,5,6,7,12,13,14 However, the pathophysiology and initial management differs between these two conditions,15 as does outcome according to gender.11,16 In patients with NSTEMI or unstable angina pectoris (UAP), women seem to have an equal or better outcome, after adjustment for age and comorbidity.1,4,8,11,16,17 Studies on differences between genders, in treatment and outcome, in real life, contemporary, non‐ST‐elevation acute coronary syndrome (NSTE ACS) populations, large enough to make necessary adjustments for confounders, are lacking.The aim of this study was to assess gender differences in background characteristics, management and outcome in a real‐life intensive coronary care unit (ICCU) population, with NSTE ACS.  相似文献   

2.

Background

Poor prognosis in heart failure (HF) patients with diabetes is often attributed to increased co‐morbidity and advanced disease. Further, this effect may be worse in women.

Objective

To determine whether the effect of diabetes on outcomes and the sex‐related variation persisted in a propensity score‐matched HF population, and whether the sex‐related variation was a function of age.

Methods

Of the 7788 HF patients in the Digitalis Investigation Group trial, 2218 had a history of diabetes. Propensity score for diabetes was calculated for each patient using a non‐parsimonious logistic regression model incorporating all measured baseline covariates, and was used to match 2056 (93%) diabetic patients with 2056 non‐diabetic patients.
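
The matching step described above can be illustrated with a minimal sketch: a propensity score from a logistic model followed by greedy 1:1 nearest-neighbour matching. The covariates and data below are simulated placeholders, not the trial's actual non-parsimonious model over all measured baseline covariates.

```python
# Minimal sketch of 1:1 propensity-score matching on simulated data.
# The real analysis used a non-parsimonious logistic model over all measured
# baseline covariates; here only a few illustrative covariates are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Hypothetical baseline covariates (age, ejection fraction, NYHA class).
X = np.column_stack([
    rng.normal(65, 10, n),       # age
    rng.normal(32, 8, n),        # ejection fraction
    rng.integers(1, 5, n),       # NYHA class
])
diabetes = rng.binomial(1, 0.28, n)  # exposure of interest

# Propensity score: modelled probability of diabetes given baseline covariates.
ps = LogisticRegression(max_iter=1000).fit(X, diabetes).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the propensity score.
treated = np.where(diabetes == 1)[0]
controls = list(np.where(diabetes == 0)[0])
pairs = []
for t in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)

print(f"matched {len(pairs)} diabetic/non-diabetic pairs")
```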

Results

All‐cause mortality occurred in 135 (25%) and 216 (39%) women without and with diabetes (adjusted HR = 1.67; 95% CI = 1.34 to 2.08; p<0.001). Among men, 535 (36%) and 609 (41%) patients without and with diabetes died from all causes (adjusted HR = 1.21; 95% CI = 1.07 to 1.36; p = 0.002). Sex–diabetes interaction (overall adjusted p<0.001) was only significant in patients ⩾65 years (15% absolute risk increase in women; multivariable p for interaction = 0.005), but not in younger patients (2% increase in women; p for interaction = 0.173). Risk‐adjusted HR (95% CI) for all‐cause hospitalisation for women and men were 1.49 (1.28 to 1.72) and 1.21 (1.11 to 1.32), respectively, also with significant sex–diabetes interaction (p = 0.011).

Conclusions

Diabetes‐associated increases in morbidity and mortality in chronic HF were more pronounced in women, and these sex‐related differences in outcomes were primarily observed in elderly patients. Diabetes is common in heart failure (HF) and is associated with poor outcomes.1,2 HF patients with diabetes are sicker and have a higher burden of co‐morbidity than those without diabetes.1,2 Diabetes is also associated with activation of the renin–angiotensin–aldosterone and sympathetic nervous systems.3,4 There is mounting evidence that diabetes adversely affects collagen production in fibroblasts and calcium homeostasis in cardiac myocytes.5,6 However, it is not clear to what extent the diabetes‐associated poor outcomes in HF are due to the direct effects of diabetes. Although outcome‐based multivariable risk adjustment models can account for these confounding covariates to some extent, concerns for residual bias limit interpretation of these results.7 To address this concern, propensity score matching can be used to assemble cohorts of patients with and without an exposure who would be well balanced in all measured baseline covariates.8,9,10 More importantly, as investigators remain blinded during the design phase of a randomised clinical trial, this process of bias reduction and study cohort assembly can be done without any knowledge or use of the outcomes data, and the magnitude of bias reduction may be objectively assessed using standardised differences.7,9,10,11 Data from patients with coronary artery disease and elderly patients hospitalised with systolic HF suggest that the effect of diabetes might be worse in women than in men.12,13,14 However, little is known about the sex‐related variation in the effect of diabetes on outcomes in a more stable younger ambulatory patient population with mild to moderate systolic and diastolic HF. It is also unknown if this sex‐related difference in the effect of diabetes on HF is a function of age. The purpose of this study was therefore to determine the effect of diabetes on mortality and hospitalisation in propensity score‐matched ambulatory HF patients and to determine if the effect varies by sex and if the sex‐related differences vary by age.

3.

Objective

Myocardial scintigraphy and/or conventional angiography (CA) are often performed before cardiac surgery in an attempt to identify unsuspected coronary artery disease which might result in significant cardiac morbidity and mortality. Multidetector CT coronary angiography (MDCTCA) has a recognised high negative predictive value and may provide a non‐invasive alternative in this subset of patients. The aim of this study was to evaluate the clinical value of MDCTCA as a preoperative screening test in candidates for non‐coronary cardiac surgery.

Methods

132 patients underwent MDCTCA (Somatom Sensation 16 Cardiac, Siemens) in the assessment of the cardiac risk profile before surgery. Coronary arteries were screened for ⩾50% stenosis. Patients without significant stenosis (Group 1) underwent surgery without any adjunctive screening tests while all patients with coronary lesions ⩾50% at MDCTCA (Group 2) underwent CA.

Results

16 patients (12.1%) were excluded due to poor image quality. 72 patients without significant coronary stenosis at MDCTCA underwent surgery. 30 out of 36 patients with significant (⩾50%) coronary stenosis at MDCTCA and CA underwent adjunctive bypass surgery or coronary angioplasty. In 8 patients, MDCTCA overestimated the severity of the coronary lesions (>50% MDCTCA, <50% CA). No severe cardiovascular perioperative events such as myocardial ischaemia, myocardial infarction or cardiac failure occurred in any patient in Group 1.
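
For readers unfamiliar with the screening metrics implied here, the short sketch below shows how positive and negative predictive values are derived from a 2×2 table. The counts are hypothetical, since MDCTCA-negative patients in this study were not angiographically verified.

```python
# Illustrative PPV/NPV calculation for a screening test such as MDCTCA.
# The 2x2 counts are hypothetical and not taken from the study above.
true_pos, false_pos = 28, 8    # MDCTCA >= 50% stenosis: confirmed / not confirmed by CA
false_neg, true_neg = 2, 70    # MDCTCA < 50% stenosis: missed / correctly negative

ppv = true_pos / (true_pos + false_pos)
npv = true_neg / (true_neg + false_neg)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```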

Conclusions

MDCTCA seems to be effective as a preoperative screening test prior to non‐coronary cardiac surgery. In this era of cost containment and optimal care of patients, MDCTCA is able to provide coronary vessel and ventricular function evaluation and may become the method of choice for the assessment of a cardiovascular risk profile prior to major surgery.Since its introduction in the 1960s,1 conventional coronary angiography (CA) has been considered the gold standard for the diagnosis of coronary artery disease because of its high contrast, temporal and spatial resolution.2,3,4 In the past few years, we have witnessed a considerable increase in diagnostic and interventional procedures. Despite the high degree of accuracy (73–89%) of non‐invasive diagnostic tests such as exercise ECG, myocardial scintigraphy and stress‐echocardiography in detecting myocardial ischaemia,5 about 20% of patients undergoing CA due to a positive result of these non‐invasive tests, had no evidence of coronary lesions.6,7Multidetector CT (MDCT), introduced into clinical practice in 2000, has demonstrated excellent technical characteristics for coronary artery evaluation. Results in the literature show a high degree of diagnostic accuracy in detecting significant coronary lesions and, particularly, an excellent capability of excluding them, due to negative predictive values ranging from 96 to 99%.8,9,10,11,12,13,14,15,16,17,18Patients who are candidates for major non‐coronary cardiac or vascular surgery, such as heart valve replacement, aortic aneurysm and aortic dissection, require a complete assessment of potentially dangerous co‐morbidities. There is a 5 to 10% perioperative cardiac morbidity rate during vascular surgery, even in patients at low risk for coronary disease.19 According to Paul et al.20 there is a 17% risk of severe multivessel disease in low clinical risk asymptomatic patients undergoing vascular surgery. American College of Cardiology/American Heart Association (ACC/AHA) guidelines for preoperative evaluation before major surgery recommend stratification of ischaemic heart disease with clinical and non‐invasive tests.19,20,21,22 The diagnostic accuracy is 68 to 77% for exercise ECG and 73 to 85% for stress‐echocardiography. Myocardial scintigraphy provides an accuracy of 87–89% in patients with normal resting ECG, with a radiation exposure ranging from 4.6 to 20 mSv,23 almost equivalent to MDCT coronary angiography (MDCTCA). However, for certain high‐risk patients, ACC guidelines suggest proceeding directly with coronary angiography rather than performing a non‐invasive test. Therefore, in clinical practice, CA is often performed before major vascular or cardiac surgery. Considering that millions of surgical procedures are probably performed every year worldwide (eg, 95 000 heart valve replacements/year in the USA),6,7 several hundred thousand negative CAs are still performed.After some years of validation studies comparing MDCTCA with CA, studies on clinical utility are now warranted to demonstrate whether and how this technique can change and improve the current management of patients. The purpose of this study is to evaluate the clinical impact of MDCTCA as a preoperative screening test for cardiac risk assessment in patients who are candidates for major non‐coronary cardiac surgery and who are asymptomatic for ischaemic heart disease.  相似文献   

4.

Objective

To investigate risk factors for non‐Hodgkin''s lymphoma (NHL) and analyse NHL subtypes and characteristics in patients with systemic lupus erythematosus (SLE).

Methods

A national SLE cohort identified through SLE discharge diagnoses in the Swedish hospital discharge register during 1964 to 1995 (n = 6438) was linked to the national cancer register. A nested case control study on SLE patients who developed NHL during this observation period was performed with SLE patients without malignancy as controls. Medical records from cases and controls were reviewed. Tissue specimens on which the lymphoma diagnosis was based were retrieved and reclassified according to the WHO classification. NHLs of the subtype diffuse large B cell lymphoma (DLBCL) were subject to additional immunohistochemical staining using antibodies against bcl‐6, CD10 and IRF‐4 for further subclassification into germinal centre (GC) or non‐GC subtypes.

Results

16 patients with SLE had NHL, and the DLBCL subtype dominated (10 cases). The 5‐year overall survival and mean age at NHL diagnosis were comparable with NHL in the general population—50% and 61 years, respectively. Cyclophosphamide or azathioprine use did not elevate lymphoma risk, but the risk was elevated if haematological or sicca symptoms, or pulmonary involvement was present in the SLE disease. Two patients had DLBCL‐GC subtype and an excellent prognosis.

Conclusions

NHL in this national SLE cohort was predominated by the aggressive DLBCL subtype. The prognosis of NHL was comparable with that of the general lymphoma population. There were no indications of treatment‐induced lymphomas. Molecular subtyping could be a helpful tool to predict prognosis also in SLE patients with DLBCL.Evidence of an increased risk to develop haematological malignancy, and especially non‐Hodgkin''s lymphoma (NHL) in autoimmune diseases, has been gathered since the 1970s. First studies of Sjögren''s syndrome,1 then rheumatoid arthritis (RA)2 and now in the last decade studies from uni‐/multicentre SLE cohorts3,4,5,6,7,8 and national SLE cohorts9,10 have consistently shown a markedly increased risk of NHLs. As for NHL subtype, knowledge is more limited. RA and SLE share several disease manifestations like arthritis and “extra‐articular manifestations” such as serositis, sicca symptoms and interstitial inflammatory lung disease. In RA, a pronounced over‐representation of diffuse large B cell lymphoma (DLBCL) has been reported from a large population‐based cohort.11 This lymphoma subtype was also the most frequent in an international multicentre study with lupus patients.12 In Sjögren''s syndrome, approximately 85% are MALT lymphomas,13 although a recent study from a mono‐centre primary Sjögren''s syndrome cohort—with patients fulfilling the American–European Consensus Group criteria14—showed a predominance of DLBCL.15The pathophysiological mechanisms for the enhanced risk of developing NHL in patients with chronic inflammatory diseases are still not fully understood. Similarities of a variety of immunological disturbances that characterise both rheumatic conditions and lymphomas have been suggested as a linkage between these disorders as well as a possible potentiation of immunosuppressive drugs or certain viral infections, especially Epstein–Barr virus (EBV).5,16Recently, advances in molecular characterisation have enabled more detailed subclassification of lymphomas based on the molecular expression of the tumour cells. For DLBCL, two prognostic groups have been identified among DLBCL in the general lymphoma population depending on the resemblance of gene expression profile with normal germinal centre (GC) or activated B cells by using global gene expression profiling17,18 and immunophenotyping of tumour cells.19,20 The GC DLBC lymphomas had a significantly better survival than those with non‐GC subtype.17,18,19,20 No such subtyping has been reported in SLE patients.In a previous register study of a population‐based national Swedish SLE cohort, a threefold increased risk of lymphoma was found.10 This nested case‐control study focuses on those SLE patients that developed NHL. Information on clinical manifestations and pharmacological (cytotoxic) treatment of the SLE disease was retrieved from patient records. The lymphomas were re‐examined and reclassified, and DLBCLs were further divided into GC or non‐GC subtypes by immunohistochemistry. The presence of EBV in the lymphomas was also analysed.  相似文献   

5.

Background

Gender differences in management and outcomes have been reported in acute coronary syndrome (ACS).

Objectives

To assess such gender differences in a Swiss national registry.

Methods

20 290 patients with ACS enrolled in the AMIS Plus Registry from January 1997 to March 2006 by 68 hospitals were included in a prospective observational study. Data on patients'' characteristics, diagnoses, procedures, complications and outcomes were recorded. Odds ratios (ORs) of in‐hospital mortality were calculated using logistic regression models.

Results

5633 (28%) patients were female and 14 657 (72%) male. Female patients were older than men (mean (SD) age 70.9 (12.1) vs 63.4 (12.9) years; p<0.001), had more comorbidities and came to hospital later. They underwent percutaneous coronary intervention (PCI) less frequently (OR = 0.65; 95% CI 0.61 to 0.69) and their unadjusted in‐hospital mortality was higher overall (10.7% vs 6.3%; p<0.001) and in those who underwent PCI (3.0% vs 4.2%; p = 0.018). Mortality differences between women and men disappeared after adjustments for other predictors (adjusted OR (aOR) for women vs men: 1.09; 95% CI 0.95 to 1.25), except in women aged 51–60 years (aOR = 1.78; 95% CI 1.04 to 3.04). However, even after adjustments, female gender remained significantly associated with a lower probability of undergoing PCI (OR = 0.70; 95% CI 0.64 to 0.76).
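
As a rough illustration of how adjusted odds ratios of this kind are obtained, the sketch below fits a logistic model of PCI use on sex plus two stand-in covariates and reads the OR off the exponentiated coefficient. The data and covariates are simulated, not the AMIS Plus registry variables.

```python
# Sketch of an adjusted odds ratio from a logistic model on simulated data.
# 'age' and 'delay' are hypothetical stand-ins for the registry's confounders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
female = rng.binomial(1, 0.28, n)
age = rng.normal(65, 12, n) + 5 * female        # women older, as in the registry
delay = rng.exponential(3, n) + 0.5 * female    # hours from symptom onset to admission
logit = 1.0 - 0.35 * female - 0.03 * (age - 65) - 0.05 * delay
pci = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, age, delay]))
fit = sm.Logit(pci, X).fit(disp=0)
or_female = np.exp(fit.params[1])               # adjusted OR for female vs male
ci = np.exp(fit.conf_int()[1])
print(f"adjusted OR (female vs male) = {or_female:.2f} "
      f"(95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```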

Conclusions

The analysis showed gender differences in baseline characteristics and in the rate of PCI in patients admitted for ACS in Swiss hospitals between 1997 and 2006. Reasons for the significant underuse of PCI in women, and a slightly higher in‐hospital mortality in the 51–60 year age group, need to be investigated further.Coronary artery disease and, in particular, acute coronary syndrome (ACS), is the leading cause of mortality and morbidity in the Western world, in both women and men.The benefits of reperfusion treatment for patients with ACS have been well established and it has become standard treatment for both women and men with ST‐segment elevation acute coronary syndrome (STE‐ACS); however, there is variation in the method of reperfusion chosen, and in which patients are considered eligible.1 Controversies also exist about the type and the time of reperfusion and about its outcomes in patients presenting with unstable angina or non‐ST‐segment elevation (NSTE‐ACS).It has also been shown that women with acute myocardial infarction (AMI) are less likely than men to undergo reperfusion treatment,2,3 and that there is a lack of awareness of risk among women.4 In addition, there are conflicting data from randomised trials about the benefit of early invasive treatment in women.5,6,7 Differences in survival between men and women reported in some studies may not only reflect gender bias in management, but also differences in coronary anatomy, age and comorbidities. In the CADILLAC Trial, women had higher mortality than men after interventional treatment for AMI, which the authors attributed to smaller body surface area and more comorbidities.3 On the contrary, other authors have suggested that the higher mortality seen in women after an AMI might be explained by less aggressive treatment,8 and if women had access to the same quality of care as men, their survival would be the same.9 Finally, the results of outcome studies in unselected patients suggest that gender is not an independent predictor of mortality after percutaneous coronary intervention (PCI)2,10 and that improvement in prognosis associated with reperfusion treatment is independent from it.10,11,12,13 The data of 3100 female patients enrolled in the Euro Heart Survey ACS showed that female gender in the “real world” was not independently associated with worse in‐hospital mortality, irrespective of the type of ACS.14 The authors interestingly emphasised the need to evaluate outcomes of ACS in surveys or registries, rather than from data derived from clinical trials.14 This suggestion, however, did not solve the controversy since, in the New York angioplasty registry, in‐hospital mortality for female patients undergoing angioplasty after having reached hospital within 6 hours was 9.04% vs 4.42% for male (p<0.001) for the years 1993–6.15Thus, the aim of this study was to assess outcomes in unselected female and male patients admitted between 1997 and 2006 for ACS in Swiss hospitals and to put these results in the perspective of their baseline characteristics, comorbidities and management.  相似文献   

6.

Objective

Progression of neointimal stent coverage (NSC) and changes in thrombus were evaluated serially by coronary angioscopy for up to 2 years after sirolimus‐eluting stent (SES) implantation.

Methods

Serial angioscopic observations were performed in 20 segments of 20 patients at baseline, at 6 months and at 2 years after SES implantation. NSC was classified as follows: 0, uncovered struts; 1, visible struts through thin neointima; or 2, no visible struts. In each patient, maximum and minimum NSC was evaluated. Existence of thrombus was also examined.

Results

The maximum NSC increased from 6 months to 2 years (mean (SD) 1.2 (0.4) vs 1.8 (0.4), respectively, p = 0.005), while the minimum NSC did not change (0.7 (0.5) vs 0.8 (0.4), respectively, p = 0.25). The prevalence of patients with uncovered struts did not decrease from 6 months to 2 years (35% vs 20%, respectively, p = 0.29). Although there were no thrombus‐related adverse events, new thrombus formation was found in 5% of 6‐month, and in 20% of 2‐year follow‐up evaluations. The prevalence of thrombus inside the SES at baseline, 6 months and 2 years was similar (40%, 40% and 30%, respectively; p = NS).

Conclusions

Neointimal growth inside the SES progressed heterogeneously. Uncovered struts persisted in 20% of the patients for up to 2 years and subclinical thrombus formation was not uncommon. Recently, the occurrence of late stent thrombosis (LST) after drug‐eluting stent implantation has become a major clinical concern.1,2,3 A long‐term follow‐up study demonstrated that LST occurs at a constant rate of 0.6% a year for up to 3 years after drug‐eluting stent implantation.3 Pathological investigation showed that delayed arterial healing, characterised by incomplete endothelialisation and persistence of fibrin, has a key role in the occurrence of LST.4,5 Moreover, a powerful predictor of LST is the existence of uncovered struts without endothelialisation.5 We therefore suggested that the uncovered struts of a sirolimus‐eluting stent (SES) remain for an extended period of time. Coronary angioscopy provides direct visualisation of the lumen and detailed information on the condition of neointimal stent coverage (NSC) and thrombus.6,7,8 This imaging modality has the advantage of allowing the identification of an intracoronary thrombus.8 Presently, no long‐term angioscopic follow‐up data after SES implantation are available. Here we present our findings from angioscopic examination, focusing on the long‐term serial changes in the NSC, especially the uncovered stent struts, and the presence of thrombus inside the SES.

7.

Objective

Progression of neointimal stent coverage (NSC) and changes in thrombus were evaluated serially by coronary angioscopy for up to 2 years after sirolimus‐eluting stent (SES) implantation.

Design

Serial angioscopic observations were performed in 20 segments of 20 patients at baseline, and at 6 months and 2 years after SES implantation. NSC was classified as follows: 0, uncovered struts; 1, visible struts through thin neointima; or 2, no visible struts. In each patient, maximum and minimum NSC was evaluated. Existence of thrombus was also examined.

Results

The maximum NSC increased from 6 months to 2 years (1.2 (0.4) vs 1.8 (0.4), respectively, p = 0.005), while the minimum NSC did not change (0.7 (0.5) vs 0.8 (0.4), respectively, p = 0.25). The prevalence of patients with uncovered struts did not decrease from 6 months to 2 years (35% vs 20%, respectively, p = 0.29). Although there were no thrombus‐related adverse events, new thrombus formation was found in one patient (5%) at the 6‐month, and in four patients (20%) at the 2‐year follow‐up evaluations. Frequencies of thrombus inside the SES at baseline, 6 months and 2 years did not differ one from another (40%, 40% and 30%, respectively; p = NS).

Conclusions

Neointimal growth inside the SES progressed heterogeneously. Uncovered struts persisted in 20% of the patients for up to 2 years and subclinical thrombus formation was not a rare phenomenon. Recently, the occurrence of late stent thrombosis (LST) after drug‐eluting stent implantation has become a major clinical concern.1,2,3 A long‐term follow‐up study demonstrated that LST occurs at a constant rate of 0.6% a year for up to 3 years after drug‐eluting stent implantation.3 Pathological investigation shows that delayed arterial healing, characterised by incomplete endothelialisation and persistence of fibrin, has a key role in the occurrence of LST.4,5 Moreover, a powerful predictor of LST is the existence of uncovered struts without endothelialisation.5 We therefore proposed the hypothesis that the uncovered struts of a sirolimus‐eluting stent (SES) remain for an extended period of time. Coronary angioscopy provides direct visualisation of the lumen and detailed information on the condition of neointimal stent coverage (NSC) and thrombus.6,7,8 This imaging modality has the advantage of allowing the identification of an intracoronary thrombus.8 Presently, no long‐term angioscopic follow‐up data after SES implantation are available. We present here our findings from angioscopic examination, focusing on the long‐term serial changes in the NSC, especially the uncovered stent struts, and the presence of thrombus inside the SES.

8.

Objective

To analyse the short and long term outcome of endoscopic stent treatment after bile duct injury (BDI), and to determine the effect of multiple stent treatment.

Design, setting and patients

A retrospective cohort study was performed in a tertiary referral centre to analyse the outcome of endoscopic stenting in 67 patients with cystic duct leakage, 26 patients with common bile duct leakage and 110 patients with a bile duct stricture.

Main outcome measures

Long term outcome and independent predictors for successful stent treatment.

Results

Overall success in patients with cystic duct leakage was 97%. In patients with common bile duct leakage, stent related complications occurred in 3.8% (n = 1). The overall success rate was 89% (n = 23). In patients with a bile duct stricture, stent related complications occurred in 33% (n = 36) and the overall success rate was 74% (n = 81). After a mean follow up of 4.5 years, liver function tests did not identify “occult” bile duct strictures. Independent predictors for outcome were the number of stents inserted during the first procedure (OR 3.2 per stent; 95% CI 1.3 to 8.4), injuries classified as Bismuth III (OR 0.12; 95% CI 0.02 to 0.91) and IV (OR 0.04; CI 0.003 to 0.52) and endoscopic stenting before referral (OR 0.24; CI 0.06 to 0.88). Introduction of sequential insertion of multiple stents did not improve outcome (before 77% vs after 66%, p = 0.25), but more patients reported stent related pain (before 11% vs after 28%, p = 0.02).

Conclusions

In patients with postoperative bile duct leakage and/or strictures, endoscopic stent treatment should be regarded as the primary treatment of choice because of its safety and favourable long term outcome. Apart from the early insertion of more than one stent, the benefit from sequential insertion of multiple stents did not become readily apparent from this series. Bile duct injury (BDI) occurs in 0.2 to 1.4% of patients following laparoscopic cholecystectomy and is a severe surgical complication.1,2,3,4 BDI related morbidity is illustrated by increased hospital stay, poor long term quality of life and high rates of malpractice litigation.5,6,7,8 Although surgical reconstruction, mainly a hepaticojejunostomy, is a procedure associated with low mortality and low morbidity if performed in a tertiary centre, this is only indicated in selected patients with BDI; a population‐based study from the USA demonstrated the detrimental effect of BDI on survival in patients who underwent surgical reconstruction.9 The majority of biliary injuries, including cystic duct leakage, common bile duct (CBD) leakage or bile duct strictures, can be treated successfully in 70–95% of the patients by means of endoscopic or radiological interventions.10,11,12,13,14,15,16 It has been suggested that endoscopic treatment is associated with an increased risk of re‐stenosis and biliary cirrhosis followed by end‐stage liver disease. However, reliable data about the long term outcome of endoscopic management of BDI are scarce and predictive factors for successful outcome are unreported. Several years ago reports from uncontrolled studies indicated that a more aggressive type of dilation treatment in patients with bile duct strictures, based on the sequential insertion of multiple stents, may be associated with a more favourable outcome, and this treatment policy has been adopted in our clinic since the end of 2001.17,18,19,20 The purpose of this study was to analyse the short and long term outcome of stent treatment in BDI patients (including liver function tests after long term follow up) and to determine factors that are predictive for successful outcome in patients who are stented for a bile duct stricture. In addition, the outcomes of patients treated before and after the introduction of sequential insertion of multiple stents were compared.

9.

Aims

To evaluate the effect of a disease management programme for patients with coronary heart disease (CHD) and chronic heart failure (CHF) in primary care.

Methods

A cluster randomised controlled trial of 1316 patients with CHD and CHF from 20 primary care practices in the UK was carried out. Care in the intervention practices was delivered by specialist nurses trained in the management of patients with CHD and CHF. Usual care was delivered by the primary healthcare team in the control practices.

Results

At follow up, significantly more patients with a history of myocardial infarction in the intervention group were prescribed a beta‐blocker compared to the control group (adjusted OR 1.43, 95% CI 1.19 to 1.99). Significantly more patients with CHD in the intervention group had adequate management of their blood pressure (<140/85 mm Hg) (OR 1.61, 95% CI 1.22 to 2.13) and their cholesterol (<5 mmol/l) (OR 1.58, 95% CI 1.05 to 2.37) compared to those in the control group. Significantly more patients with an unconfirmed diagnosis of CHF had a diagnosis of left ventricular systolic dysfunction confirmed (OR 4.69, 95% CI 1.88 to 11.66) or excluded (OR 3.80, 95% CI 1.50 to 9.64) in the intervention group compared to the control group. There were significant improvements in some quality‐of‐life measures in patients with CHD in the intervention group.

Conclusions

Disease management programmes can lead to improvements in the care of patients with CHD and presumed CHF in primary care.Cardiovascular diseases including coronary heart disease (CHD) and chronic heart failure (CHF) are the main cause of morbidity and mortality in most European countries.1 Mortality from cardiovascular disease has declined over the last 30 years, a trend which has been attributed to secondary prevention therapies.2,3 However, European surveys have shown considerable potential for improved levels of secondary prevention in people with established CHD.4 Studies in primary care, where most of these patients are managed, have also reported considerable potential to further increase secondary prevention through medical and lifestyle interventions.5,6 “Medical” measures include aspirin therapy and blood pressure and lipid control, while “lifestyle” measures include increased exercise, dietary modification and smoking cessation.5 CHF is also a highly prevalent, chronic condition with high mortality and morbidity. It is increasing in prevalence and the public health burden from CHF is therefore likely to rise substantially over the next 10 years.7 The quality of life of patients with CHF is worse than for most chronic conditions managed in primary care and five‐year survival is worse than for many malignant conditions.8 However, appropriate treatment, including inhibitors of the renin‐angiotensin‐aldosterone system and beta‐blockers, has the potential to reduce hospitalisation and mortality in these patients.9,10 The task of implementing a comprehensive package of effective measures for large numbers of patients has been described as daunting.5 It is therefore important to develop implementation strategies that are practical and effective. Many patients with CHF are incorrectly diagnosed and inadequately treated in primary care11 and obstacles to appropriate primary care management include lack of knowledge, fear of complications with pharmacological treatments, lack of time and limited facilities for investigations.12,13Systematic reviews indicate that secondary prevention programmes improve the process of care, reduce admissions to hospital and enhance quality of life or functional status in patients with CHD.14 Similarly, systematic reviews of disease management programmes in CHF suggest that specialised, multidisciplinary follow‐up can reduce hospitalisation and may lead to cost saving.15,16,17 However, all the CHF trials included in these systematic reviews were conducted in highly specialised centres and recruited patients following discharge after hospitalisation. The applicability of the available CHF management programmes to countries with a primary care‐based healthcare system has therefore recently been questioned.18To achieve improved secondary prevention of CHD and CHF, primary care will need to adopt a systematic approach. Although disease management clinics for the management of CHD in primary care can improve patients'' outcomes,5 there are no such studies in the management of patients with CHF. Since the majority of patients with CHF will also have CHD,19 we investigated the effect of a disease management programme for patients with either or both conditions in primary care.  相似文献   

10.

Objective

S100A12 is a pro‐inflammatory protein that is secreted by granulocytes. S100A12 serum levels increase during inflammatory bowel disease (IBD). We performed the first study analysing faecal S100A12 in adults with signs of intestinal inflammation.

Methods

Faecal S100A12 was determined by ELISA in faecal specimens of 171 consecutive patients and 24 healthy controls. Patients either suffered from infectious gastroenteritis confirmed by stool analysis (65 bacterial, 23 viral) or underwent endoscopic and histological investigation (32 with Crohn''s disease, 27 with ulcerative colitis, and 24 with irritable bowel syndrome; IBS). Intestinal S100A12 expression was analysed in biopsies obtained from all patients. Faecal calprotectin was used as an additional non‐invasive surrogate marker.

Results

Faecal S100A12 was significantly higher in patients with active IBD (2.45 ± 1.15 mg/kg) compared with healthy controls (0.006 ± 0.03 mg/kg; p<0.001) or patients with IBS (0.05 ± 0.11 mg/kg; p<0.001). Faecal S100A12 distinguished active IBD from healthy controls with a sensitivity of 86% and a specificity of 100%. We also found excellent sensitivity of 86% and specificity of 96% for distinguishing IBD from IBS. Faecal S100A12 was also elevated in bacterial enteritis but not in viral gastroenteritis. Faecal S100A12 correlated better with intestinal inflammation than faecal calprotectin or other biomarkers.
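
A minimal sketch of how sensitivity and specificity at a faecal S100A12 cut-off would be computed is shown below; the simulated concentrations and the 0.5 mg/kg threshold are invented for illustration and are not the study's values.

```python
# Illustrative sensitivity/specificity calculation for a faecal marker cut-off.
# Group sizes mirror the cohort (59 active IBD, 24 IBS), but all values and the
# 0.5 mg/kg threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
ibd = rng.lognormal(mean=0.7, sigma=0.6, size=59)    # simulated active IBD patients
ibs = rng.lognormal(mean=-3.0, sigma=0.8, size=24)   # simulated IBS patients

cutoff = 0.5  # mg/kg, hypothetical decision threshold
tp = np.sum(ibd >= cutoff)   # IBD correctly flagged
fn = np.sum(ibd < cutoff)
tn = np.sum(ibs < cutoff)    # IBS correctly ruled out
fp = np.sum(ibs >= cutoff)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```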

Conclusions

Faecal S100A12 is a novel non‐invasive marker distinguishing IBD from IBS or healthy individuals with a high sensitivity and specificity. Furthermore, S100A12 reflects inflammatory activity of chronic IBD. As a marker for neutrophil activation, faecal S100A12 may significantly improve our arsenal of non‐invasive biomarkers of intestinal inflammation.The etiology of inflammatory bowel disease (IBD) consisting of ulcerative colitis and Crohn''s disease involves complex interactions among susceptibility genes, the environment, and the immune system. These interactions lead to a cascade of events that involve the activation of neutrophils, production of proinflammatory mediators, and tissue damage.1 As intestinal symptoms are a frequent cause of referrals to gastroenterologists, it is crucial to differentiate between non‐inflammatory irritable bowel syndrome (IBS) and IBD. To date, there is a lack of biological markers to determine intestinal inflammation.2,3 Therefore, invasive procedures are required to confirm the diagnosis of IBD. Furthermore, the natural history of chronic IBD is characterised by an unpredictable variation in the degree of inflammation. Biological markers are needed to confirm remission, detect early relapses or local reactivation, and to monitor anti‐inflammatory therapies reliably. Whereas serum markers of inflammation are still not very helpful,3,4,5 assays that determine intestinal inflammation by detecting neutrophil‐derived products in stool show great potential.6An important mechanism in the initiation and perturbation of inflammation in IBD is the activation of innate immune mechanisms.7,8 Among the factors released by infiltrating neutrophils are proteins of the S100 family.9,10 One example is calprotectin, which is detectable in the serum and stool during intestinal inflammation.11 Calprotectin was initially described as a protein of 36 kDa, but was later characterised as a complex of two distinct S100 proteins, S100A8 and S100A9.12,13,14 In recent years, calprotectin has been proposed as a faecal marker of gut inflammation reflecting the degree of phagocyte activation.6,15,16,17,18,19,20 Unfortunately, variation in faecal calprotectin assays still impedes the routine use of this marker as a sole parameter in clinical practice. The observed variation may be caused by the broad expression pattern of calprotectin, which is found in granulocytes as well as monocytes and is also inducible in epithelial cells.21,22 In this context, the elevation in lactose intolerance is notable.16,17,23S100A12 is more restricted to granulocytes. It is secreted by activated neutrophils and is abundant in the intestinal mucosa of patients with IBD.9,24 Overexpression at the site of inflammation and correlation with disease activity in a variety of inflammatory disorders underscore the role of this granulocytic protein as a proinflammatory molecule.25 The binding of S100A12 to the receptor for advanced glycation endproducts (RAGE) leads to the long‐term activation of nuclear factor kappa B, which promotes inflammation.26 In mouse models of colitis, blocking the interaction of S100A12 with RAGE has been proved to attenuate inflammation. 
Data on murine models of colitis as well as human IBD point to an important role for S100A12 during the pathogenesis of these disorders.9,26,27In a previous study, we demonstrated that S100A12 is overexpressed during chronic active IBD and serves as a useful serum marker for disease activity in patients with IBD.9 De Jong et al.28 recently reported that S100A12 can be detected in the stool of children with Crohn''s disease. The aim of our present study was thus to analyse S100A12 in stool samples as well as its expression in the intestinal tissue of patients with confirmed IBD or IBS and in the stool of a normal control group. We correlated faecal S100A12 levels with endoscopic and histological findings in the same patients and investigated the diagnostic accuracy of S100A12 to detect intestinal inflammation.  相似文献   

11.

Objectives

To derive age and sex specific estimates of transition rates from advanced adenomas to colorectal cancer by combining data of a nationwide screening colonoscopy registry and national data on colorectal cancer (CRC) incidence.

Design

Registry based study.

Setting

National screening colonoscopy programme in Germany.

Patients

Participants of screening colonoscopy in 2003 and 2004 (n = 840 149).

Main outcome measures

Advanced adenoma prevalence, colorectal cancer incidence, annual and 10 year cumulative risk of developing CRC among carriers of advanced adenomas according to sex and age (range 55–80+ years).

Results

The age gradient is much stronger for CRC incidence than for advanced adenoma prevalence. As a result, projected annual transition rates from advanced adenomas to CRC strongly increase with age (from 2.6% in age group 55–59 years to 5.6% in age group ⩾80 years among women, and from 2.6% in age group 55–59 years to 5.1% in age group ⩾80 years among men). Projections of 10 year cumulative risk increase from 25.4% at age 55 years to 42.9% at age 80 years in women, and from 25.2% at age 55 years to 39.7% at age 80 years in men.
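
Assuming, as the registry design implies, that an age-specific annual transition rate can be compounded over 10 years to give a cumulative risk, the projection might look like the sketch below. The annual rates used here are placeholders, not the published estimates.

```python
# Sketch of projecting a 10-year cumulative advanced-adenoma -> CRC risk from
# age-band annual transition rates. The rates below are placeholders; the study
# derived them by combining age/sex-specific CRC incidence with advanced-adenoma
# prevalence from the screening registry.
annual_rates = {55: 0.026, 60: 0.029, 65: 0.033, 70: 0.038, 75: 0.045, 80: 0.056}

def annual_rate(age):
    """Return the rate of the 5-year band containing `age` (capped at 80+)."""
    band = max(55, min(80, 5 * (age // 5)))
    return annual_rates[band]

def cumulative_risk(start_age, years=10):
    """Cumulative risk = 1 - product of yearly 'no transition' probabilities."""
    survival = 1.0
    for k in range(years):
        survival *= 1.0 - annual_rate(start_age + k)
    return 1.0 - survival

print(f"10-year risk from age 55: {cumulative_risk(55):.1%}")
print(f"10-year risk from age 80: {cumulative_risk(80):.1%}")
```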

Conclusions

Advanced adenoma transition rates are similar in both sexes, but there is a strong age gradient for both sexes. Our estimates of transition rates in older age groups are in line with previous estimates derived from small case series in the pre‐colonoscopy era independent of age. However, our projections for younger age groups are considerably lower. These findings may have important implications for the design of CRC screening programmes.Most colorectal cancers (CRCs) develop from adenomas, among which “advanced” adenomas are considered to be the clinically relevant precursors of CRC. The natural history of colorectal adenomas is a decisive factor for the design of CRC screening measures and their cost effectiveness. Since advanced adenomas need to be removed once they are detected, any direct observation of their natural history would be unethical. Thus, available estimates for the progression of adenomas mostly stem from radiological surveillance data or from autopsy series collected prior to the colonoscopy era.1,2,3,4,5,6,7,8,9 However, these data are rather vague as they were derived from small and potentially selective samples. For example, a major source for the estimation of adenoma transition rates has been a retrospective review of Mayo Clinic records from 226 patients with colonic polyps ⩾10 mm in diameter in whom periodic radiographical examination of the colon was performed, and in whom 21 invasive carcinomas were identified during a mean follow up of 9 years.9 Despite the undoubted usefulness of available data sources from the pre‐colonoscopy era, sample size limitations did not allow reliable estimates of adenoma transition rates according to key factors, such as age and sex. Accordingly, a common estimate of these transition rates for all ages and both sexes has generally been assumed in previous studies on effectiveness and cost effectiveness of CRC screening.10,11,12,13,14,15,16,17 Given that sensitivity analyses in these studies showed that the advanced adenoma–carcinoma transition rate represents a very influential parameter,10,11,13,18 its variation by age and sex might have a large impact on relative effectiveness and cost effectiveness of various screening schemes.The aim of this paper was to estimate risk for developing CRC according to age and sex among carriers of advanced adenomas by combining data from a large national colonoscopy screening database and national data on CRC incidence in Germany.  相似文献   

12.

Objective

Socioeconomic status (SES) is inversely associated with coronary heart disease (CHD) risk. Cumulative pathogen burden may also predict future CHD. The hypothesis was tested that lower SES is associated with a greater pathogen burden, and that pathogen burden accounts in part for SES differences in cardiovascular risk factors.

Methods

This was a cross‐sectional observational study involving the clinical examination of 451 men and women aged 51–72 without CHD, recruited from the Whitehall II epidemiological cohort. SES was defined by grade of employment, and pathogen burden by summing positive serostatus for Chlamydia pneumoniae, cytomegalovirus and herpes simplex virus 1. Cardiovascular risk factors were also assessed.

Results

Pathogen burden averaged 1.94 (SD 0.93) in the lower grade group, compared with 1.64 (0.97) and 1.64 (0.93) in the intermediate and higher grade groups (p = 0.011). Pathogen burden was associated with a higher body mass index, waist/hip ratio, blood pressure and incidence of diabetes. There were SES differences in waist/hip ratio, high‐density lipoprotein‐cholesterol, fasting glucose, glycated haemoglobin, lung function, smoking and diabetes. The SES gradient in these cardiovascular risk factors was unchanged when pathogen burden was taken into account statistically.

Conclusions

Although serological signs of infection with common pathogens are more frequent in lower SES groups, their distribution across the social gradient does not match the linear increases in CHD risk present across higher, intermediate and lower SES groups. Additionally, pathogen burden does not appear to mediate SES differences in cardiovascular risk profiles.There is a socioeconomic gradient in coronary heart disease (CHD) mortality and cardiovascular disease risk in the USA, UK and many other countries.1,2 Lower socioeconomic status (SES) is associated with a range of cardiovascular risk factors including smoking, adverse lipid profiles, abdominal adiposity, glucose intolerance and inflammatory markers.3,4,5,6,7 Both early life SES and adult socioeconomic position appear to contribute to the social gradient.5,8A history of infection may contribute to cardiovascular disease risk by stimulating sustained vascular inflammation. Evidence concerning the relevance of individual pathogens is mixed, but the cumulative pathogen burden, defined by positive serostatus for a range of pathogens, has been associated with coronary artery disease and carotid atherosclerosis in case–control9,10,11 and longitudinal cohort studies.12,13,14 Pathogen burden is also related to cardiovascular risk markers such as endothelial dysfunction,15 low high‐density lipoprotein (HDL)‐cholesterol16 and insulin resistance,17 in some but not all studies.9,18It is plausible that pathogen burden could contribute to SES differences in cardiovascular disease risk. Exposure to infection is greater in lower SES groups, particularly in early life,19 and childhood infection is associated with endothelial dysfunction.20 We therefore tested the hypothesis that lower SES is associated with greater cumulative pathogen burden in healthy middle‐aged and older adults. Seropositivity was measured for three pathogens, Chlamydia pneumoniae, cytomegalovirus (CMV) and herpes simplex virus 1 (HSV‐1), that have been associated with cardiovascular disease risk,21,22,23 and have contributed to studies of cumulative pathogen burden.10,12,14,15 We also determined whether variations in pathogen burden accounted for SES differences in cardiovascular risk factors.  相似文献   

13.

Objective

Treatment delays may result in different clinical outcomes in patients with ST‐segment elevation myocardial infarction (STEMI) who receive fibrinolytic therapy vs primary percutaneous coronary intervention (PCI). The aim of this analysis was to examine how treatment delays relate to 6‐month mortality in reperfusion‐treated patients enrolled in the Global Registry of Acute Coronary Events (GRACE).

Design

Prospective, observational cohort study.

Setting

106 hospitals in 14 countries.

Patients

3959 patients who presented with STEMI within 6 h of symptom onset and received reperfusion with either a fibrin‐specific fibrinolytic drug or primary PCI.

Main outcome measures

6‐month mortality.

Methods

Multivariable logistic regression was used to assess the relationship between outcomes and treatment delay separately in each cohort, with time modelled with a quadratic term after adjusting for covariates from the GRACE risk score.
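
A hedged sketch of the modelling approach named above, logistic regression of 6-month mortality on treatment delay with a quadratic delay term, is shown below. The data are simulated and the GRACE risk-score covariates are collapsed into a single hypothetical 'risk' column.

```python
# Sketch of modelling mortality against treatment delay with a quadratic term.
# All data are simulated; 'risk' is a stand-in for the GRACE risk-score covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000
delay = rng.uniform(30, 150, n)                  # door-to-treatment time, minutes
risk = rng.normal(0, 1, n)                       # stand-in for baseline risk
true_logit = -4.0 + 0.010 * delay + 0.00002 * delay**2 + 0.8 * risk
death = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([delay, delay**2, risk]))
fit = sm.Logit(death, X).fit(disp=0)

def predicted_mortality(d, r=0.0):
    """Model-based mortality at delay d minutes and average baseline risk."""
    eta = fit.params @ np.array([1.0, d, d**2, r])
    return 1 / (1 + np.exp(-eta))

# Absolute mortality increase per extra 10 minutes of delay, evaluated at 60 min.
print(f"{100 * (predicted_mortality(70) - predicted_mortality(60)):.2f}% per 10 min")
```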

Results

A total of 1786 (45.1%) patients received fibrinolytic therapy, and 2173 (54.9%) underwent primary PCI. After multivariable adjustment, longer treatment delays were associated with a higher 6‐month mortality in both fibrinolytic therapy and primary PCI patients (p<0.001 for both cohorts). For patients who received fibrinolytic therapy, 6‐month mortality increased by 0.30% per 10‐min delay in door‐to‐needle time between 30 and 60 min compared with 0.18% per 10‐min delay in door‐to‐balloon time between 90 and 150 min for patients undergoing primary PCI.

Conclusions

Treatment delays in reperfusion therapy are associated with higher 6‐month mortality, but this relationship may be even more critical in patients receiving fibrinolytic therapy. Treatment delays in the delivery of fibrinolytic therapy and primary percutaneous coronary intervention (PCI) are associated with increased rates of mortality in patients with ST‐segment elevation myocardial infarction (STEMI).1,2 However, there is controversy as to whether treatment delays in primary PCI are less important than those in fibrinolytic therapy, especially when fibrin‐specific agents are utilised.3 This is important because a differential effect of treatment delays on outcomes may influence the selection between these two reperfusion strategies.3,4,5,6,7 Accordingly, using data from the ongoing, multinational Global Registry of Acute Coronary Events (GRACE), we examined how treatment delays relate to 6‐month mortality in patients with STEMI who received fibrinolytic therapy with a fibrin‐specific agent or primary PCI. GRACE provides an ideal resource for such an investigation because it includes patients who received both types of reperfusion strategies, and the data for each strategy are collected under identical circumstances.

14.

Introduction

Latent tuberculosis infection (LTBI) is detected with the tuberculin skin test (TST) before anti‐TNF therapy. We aimed to investigate in vitro blood assays with TB‐specific antigens (CFP‐10, ESAT‐6), in immune‐mediated inflammatory diseases (IMID) for LTBI screening.

Patients and methods

Sixty‐eight IMID patients with (n = 35) or without (n = 33) LTBI according to clinico‐radiographic findings or TST results (10 mm cutoff value) underwent cell proliferation assessed by thymidine incorporation and PKH‐26 dilution assays, and IFNγ‐release enzyme‐linked immunosorbent spot (ELISPOT) assays with TB‐specific antigens.

Results

In vitro blood assays gave higher positive results in patients with LTBI than in those without (p<0.05), with some variations between tests. Among the 13 patients with LTBI diagnosed independently of TST results, 5 had a negative TST (38.5%) and only 2 a negative blood assay result (15.4%). The 5 LTBI patients with negative TST results all had positive blood assay results. Ten patients without LTBI but with intermediate TST results (6–10 mm) had results no different from those of patients with a TST result ⩽5 mm (p>0.3) and lower results than those with LTBI (p<0.05) on CFP‐10+ESAT‐6 ELISPOT and CFP‐10 proliferation assays.

Conclusion

Anti‐TB blood assays are beneficial for LTBI diagnosis in IMID. Compared with TST, they show a better sensitivity, as seen by positive results in 5 patients with certain LTBI and negative TST, and better specificity, as seen by negative results in most patients with an intermediate TST as the only criterion of LTBI. In the absence of clinico‐radiographic findings for LTBI, blood assays could replace TST for the antibiotherapy decision before anti‐TNF. TNFα blocker agents are approved for the treatment of immune‐mediated inflammatory diseases (IMID) and provide marked clinical benefit. However, they can reactivate tuberculosis (TB) infection in patients previously exposed to TB bacilli.1,2 The presence of quiescent mycobacteria defines latent TB infection (LTBI).3,4 Thus, screening for LTBI is necessary before initiating therapy with TNF blockers.5 However, to date, no perfect gold standard exists for detecting LTBI, and the tuberculin skin test (TST) remains largely used. The recommendations for detecting LTBI differ worldwide.3,6,7 In France, recommendations were established in 2002 by the RATIO (Research Axed on Tolerance of Biotherapies) study group for the Agence Française de Sécurité Sanitaire des Produits de Santé.8,9 Patients are considered to have LTBI requiring treatment with prophylactic antibiotics before starting anti‐TNFα therapy if they had previous TB with no adequate treatment, tuberculosis primo‐infection, residual nodular tuberculous lesions larger than 1 cm3 or old lesions suggesting TB diagnosis (parenchymatous abnormalities or pleural thickening) as seen on chest radiography, or weals larger than 10 mm in diameter in response to the TST. Adequate anti‐TB treatment was defined as treatment initiated after 1970, lasting at least 6 months and including at least 2 months with the combination rifampicin–pyrazinamide. The threshold of 10 mm for the TST result was established in 2002 in France because vaccination with bacille Calmette–Guérin (BCG) was mandated in France and nearly 100% of the population has been vaccinated. Nevertheless, after July 2005, the threshold was decreased to 5 mm, as in most other countries.10 The TST is the current method to detect LTBI but has numerous drawbacks. Indeed, the TST requires a return visit for reading the test result. It has a poor specificity, since previous BCG vaccination and environmental mycobacterial exposure can result in false‐positive results in all subjects.6,11,12 This poor specificity can lead to unnecessary treatment with antibiotics, with a significant risk of drug toxicity.13,14,15 On the other hand, TST in IMID may often give a more negative reaction than in the general population, mainly because of the disease or immunosuppressive drug use.16,17 This poor sensitivity can lead to false‐negative results, with a subsequent risk of TB reactivation with anti‐TNF therapy. The identification of genes in the Mycobacterium tuberculosis genome that are absent in BCG and most environmental mycobacteria offers an opportunity to develop more specific tests to investigate Mycobacterium tuberculosis (M. tuberculosis) infection, particularly LTBI.18 Culture filtrate protein‐10 (CFP‐10) and early secretory antigen target‐6 (ESAT‐6) are two such gene products that are strong targets of the cellular immune response in TB patients.
In vivo‐specific T‐cell based assay investigating interferon gamma (IFNγ) release or T‐cell proliferation in the presence of these specific mycobacterial antigens could be useful in screening for LTBI before anti‐TNF therapy. New IFNγ‐based ex vivo assays involving CFP‐10 and ESAT‐6 (T‐SPOT TB, Oxford Immunotec, Abingdon, UK) and QuantiFERON TB Gold (QFT‐G; Cellestis, Carnegie, Australia) allow for diagnosis of active TB, recent primo‐infection or LTBI.12 These tests seem to be more accurate than the TST for this purpose in the general population.12 To date, the performance of the commercial assays in detecting LTBI in patients with IMID receiving immunosuppressive drugs has not been demonstrated, and the frequency of indeterminate results is still debated.19,20,21We aimed to investigate the performance of homemade anti‐CFP‐10 and anti‐ESAT‐6 proliferative and enzyme‐linked immunosorbent spot (ELISPOT) assays in detecting LTBI in patients with IMID before anti‐TNFα therapy. We analysed two subgroups of patients: those with confirmed LTBI independent of TST result, and those with LTBI based exclusively on a positive TST result between 6 and 10 mm.  相似文献   

15.
16.

Objectives

To evaluate inter‐observer agreement for microscopic measurement of inflammation in synovial tissue using manual quantitative, semiquantitative and computerised digital image analysis.

Methods

Paired serial sections of synovial tissue, obtained at arthroscopic biopsy of the knee from patients with rheumatoid arthritis (RA), were stained immunohistochemically for T lymphocyte (CD3) and macrophage (CD68) markers. Manual quantitative and semiquantitative scores for sub‐lining layer CD3+ and CD68+ cell infiltration were independently derived in 6 international centres. Three centres derived scores using computerised digital image analysis. Inter‐observer agreement was evaluated using Spearman''s Rho and intraclass correlation coefficients (ICCs).
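
The two agreement statistics named above can be illustrated with a short sketch: Spearman's rho between two observers and a two-way random-effects ICC computed from a synthetic score matrix. The exact ICC variant used by the centres is not stated, so the ICC(2,1) formula below is an assumption, and the scores are simulated.

```python
# Sketch of inter-observer agreement: Spearman's rho and ICC(2,1) on synthetic
# scores. Rows = tissue sections, columns = observing centres (all values simulated).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
true_score = rng.uniform(0, 4, 12)                          # 12 paired sections
scores = true_score[:, None] + rng.normal(0, 0.4, (12, 6))  # 6 observers, with noise

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = y.shape
    grand = y.mean()
    row_mean = y.mean(axis=1)
    col_mean = y.mean(axis=0)
    msr = k * np.sum((row_mean - grand) ** 2) / (n - 1)            # between-subject MS
    msc = n * np.sum((col_mean - grand) ** 2) / (k - 1)            # between-rater MS
    sse = np.sum((y - row_mean[:, None] - col_mean[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                                # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rho, _ = spearmanr(scores[:, 0], scores[:, 1])   # agreement between two centres
print(f"Spearman rho (centre 1 vs 2) = {rho:.2f}, ICC(2,1) = {icc_2_1(scores):.2f}")
```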

Results

Paired tissue sections from 12 patients were selected for evaluation. Satisfactory inter‐observer agreement was demonstrated for all 3 methods of analysis. Using manual methods, ICCs for measurement of CD3+ and CD68+ cell infiltration were 0.73 and 0.73 for quantitative analysis and 0.83 and 0.78 for semiquantitative analysis, respectively. Corresponding ICCs of 0.79 and 0.58 were observed for the use of digital image analysis. All ICCs were significant at levels of p<0.0001. At each participating centre, use of computerised image analysis produced results that correlated strongly and significantly with those obtained using manual measurement.

Conclusion

Strong inter‐observer agreement was demonstrated for microscopic measurement of synovial inflammation in RA using manual quantitative, semiquantitative and computerised digital methods of analysis. This further supports the development of these methods as outcome measures in RA.

Microscopic measurement of inflammation in synovial tissue is employed globally by centres working in the field of arthritis research.1 Adequate and comparable synovial tissue can be safely obtained using blind‐needle biopsy or rheumatological arthroscopy.2,3,4 In the acquired samples, various parameters may be examined, including cell populations, vascularity, cytokines and adhesion molecules. In rheumatoid arthritis (RA), many of these have been found to relate to disease activity, severity and outcome, and to change after treatment with corticosteroids, disease‐modifying antirheumatic drugs (DMARDs) and biological therapy.5,6,7,8,9,10,11,12,13,14,15

Several analysis techniques have been employed to measure these parameters. Semiquantitative analysis is a relatively quick method and therefore may facilitate examination of large quantities of tissue.7 Quantitative analysis is time‐consuming but more sensitive to change in individual patients than semiquantitative scoring.16 Previous studies have shown that these methods can reflect overall joint inflammation when applied to relatively limited amounts of synovial tissue, even though inflammation may differ widely between individual sites in a single joint.17,18,19 Computerised digital image analysis has been applied more recently in this area and has been shown to correlate well with conventional methods of measurement.20,21,22

This multi‐centre study was undertaken to standardise and validate these methods by evaluating inter‐observer agreement between multiple examiners in the measurement of selected parameters of inflammation in RA synovial tissue by manual quantitative, semiquantitative and computerised image analysis.
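For readers less familiar with the agreement statistic used above, the sketch below computes a single‐rater, absolute‐agreement intraclass correlation coefficient (the ICC(2,1) form) from a subjects × raters matrix of scores. The choice of ICC variant and the example scores are assumptions made for illustration; the abstract does not specify which ICC form the centres used.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    `ratings` is an (n_subjects, k_raters) array of scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical sub-lining CD68+ cell counts: 12 biopsies scored by 3 centres
rng = np.random.default_rng(0)
true_scores = rng.integers(50, 400, size=(12, 1))
scores = true_scores + rng.normal(0, 25, size=(12, 3))   # add rater noise
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```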

17.

Objectives

To assess the effects of intravenous magnesium on conversion of acute onset atrial fibrillation to sinus rhythm, on reduction of the ventricular response, and on the risk of bradycardia.

Design and data sources

Randomised controlled trials evaluating intravenous magnesium to treat acute onset atrial fibrillation from MEDLINE (1966 to 2006), EMBASE (1990 to 2006) and Cochrane Controlled Trials Register without language restrictions.

Review methods

Two researchers independently performed the literature search and data extraction.

Results

10 randomised controlled trials, including a total of 515 patients with acute onset atrial fibrillation, were considered. Intravenous magnesium was not effective in converting acute onset atrial fibrillation to sinus rhythm when compared to placebo or an alternative antiarrhythmic drug. When compared to placebo, adding intravenous magnesium to digoxin increased the proportion of patients with a ventricular response <100 beats/min (58.8% vs 32.6%; OR 3.2, 95% CI 1.93 to 5.42; p<0.001). When compared to calcium antagonists or amiodarone, intravenous magnesium was less effective in reducing the ventricular response (21.4% vs 58.5%; OR 0.19, 95% CI 0.09 to 0.44; p<0.001) but also less likely to induce significant bradycardia or atrioventricular block (0% vs 9.2%; OR 0.13, 95% CI 0.02 to 0.76; p = 0.02). The use of intravenous magnesium was associated with transient minor symptoms of flushing, tingling and dizziness in about 17% of the patients (OR 14.5, 95% CI 3.7 to 56.7; p<0.001).

Conclusions

Adding intravenous magnesium to digoxin reduces a fast ventricular response in acute onset atrial fibrillation. Both the effect of intravenous magnesium on the ventricular rate and its cardiovascular side effects are smaller than those of calcium antagonists or amiodarone. Intravenous magnesium can be considered a safe adjunct to digoxin for controlling the ventricular response in atrial fibrillation.

Atrial fibrillation is the commonest cardiac arrhythmia in clinical practice. It affects an estimated 2.2 million adults in the USA and has an estimated incidence of 1.0 per 1000 person‐years in the UK.1,2 Atrial fibrillation is associated with significant morbidity and mortality: patients in atrial fibrillation have a fivefold increased risk of thromboembolic stroke and a twofold increased risk of death compared with the general population.3,4 Atrial fibrillation can also cause tachycardia‐induced heart failure if the rapid ventricular response is sustained for a prolonged period of time.5

Most patients in acute atrial fibrillation have no significant haemodynamic instability and, as such, pharmacological therapy is usually the initial treatment of choice. A variety of pharmacological agents can be used, either to control the rapid ventricular response or to convert the arrhythmia to sinus rhythm, with variable results. The agents evaluated include digoxin, beta‐blockers, calcium antagonists, flecainide, propafenone, ibutilide and amiodarone.6 However, in patients with impaired left ventricular function, digoxin or amiodarone is the pharmacological agent of choice because of their minimal negative inotropic effects.6

Magnesium has many significant physiological and pharmacological effects on different organ systems. The mechanisms of its action include calcium antagonism, regulation of energy transfer and membrane stabilisation.7 Intravenous magnesium has a high therapeutic‐to‐toxic ratio and minimal negative inotropic effects.8,9 It can reduce automaticity,10 atrioventricular nodal conduction,11,12 polymorphic ventricular tachycardia due to a prolonged QT interval and digoxin‐induced arrhythmias.7,8,13 Prophylactic use of intravenous magnesium can also reduce the occurrence of atrial fibrillation after cardiac surgery.14 However, there are no large randomised controlled studies or meta‐analyses that evaluate intravenous magnesium as an antiarrhythmic agent in the setting of acute onset atrial fibrillation.

Rhythm control by pharmacological agents is often most effective when the drug is initiated within 10 days of onset of atrial fibrillation.15 We hypothesised that intravenous magnesium could be an effective antiarrhythmic agent in patients with acute onset atrial fibrillation. In this meta‐analysis we assessed the potential beneficial and harmful effects of intravenous magnesium, compared with placebo or an alternative antiarrhythmic agent, in the setting of acute onset atrial fibrillation (<7 days). The end‐points assessed included rhythm control, ventricular response <100 beats/min, bradycardia, hypotension and other side effects.
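The pooled odds ratios quoted in the results above come from standard meta‐analytic pooling of trial‐level 2×2 tables. As a minimal sketch, the code below pools study odds ratios with a fixed‐effect inverse‐variance (Woolf) method; the trial counts are hypothetical, and the review itself may have used a different pooling model, so this is purely illustrative.

```python
import math

def pooled_odds_ratio(tables):
    """Fixed-effect (inverse-variance) pooling of odds ratios from 2x2 tables.

    Each table is (events_treated, n_treated, events_control, n_control).
    """
    weights, log_ors = [], []
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))   # study log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of the log OR
        weights.append(1 / var)
        log_ors.append(log_or)
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

# Hypothetical trials: (rate-controlled on Mg + digoxin, n, rate-controlled on digoxin alone, n)
trials = [(20, 34, 11, 33), (18, 30, 10, 31), (22, 36, 12, 35)]
or_, (lo, hi) = pooled_odds_ratio(trials)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```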

18.
19.
Havemann BD, Henderson CA, El-Serag HB. Gut 2007;56(12):1654–1664.

Background and aim

Gastro‐oesophageal reflux disease (GORD) has been linked to a number of extra‐oesophageal symptoms and disorders, primarily in the respiratory tract. This systematic review aimed to provide an estimate of the strength and direction of the association between GORD and asthma.

Methods

Studies that assessed the prevalence or incidence of GORD in individuals with asthma, or of asthma in individuals with GORD, were identified in Medline and EMBASE via a systematic search strategy.

Results

Twenty‐eight studies met the selection criteria. The sample size weighted average prevalence of GORD symptoms in asthma patients was 59.2%, whereas in controls it was 38.1%. In patients with asthma, the average prevalence of abnormal oesophageal pH, oesophagitis and hiatal hernia was 50.9%, 37.3% and 51.2%, respectively. The average prevalence of asthma in individuals with GORD was 4.6%, whereas in controls it was 3.9%. Pooling the odds ratios gave an overall ratio of 5.5 (95% CI 1.9–15.8) for studies reporting the prevalence of GORD symptoms in individuals with asthma, and 2.3 (95% CI 1.8–2.8) for those studies measuring the prevalence of asthma in GORD. One longitudinal study showed a significant association between a diagnosis of asthma and a subsequent diagnosis of GORD (relative risk 1.5; 95% CI 1.2–1.8), whereas the two studies that assessed whether GORD precedes asthma gave inconsistent results. The severity–response relationship was examined in only nine studies, with inconsistent findings.

Conclusions

This systematic review indicates that there is a significant association between GORD and asthma, but a paucity of data on the direction of causality.

Gastro‐oesophageal reflux disease (GORD) develops when the reflux of stomach contents into the oesophagus causes chronic troublesome symptoms or complications.1 The most recognisable symptoms of GORD are heartburn and acid regurgitation, but the reflux of noxious material may have wider‐reaching effects. In addition to the well‐established oesophageal complications associated with the disease,2 GORD is believed to lead to extra‐oesophageal symptoms and complications, primarily in the respiratory tract.3 An association between GORD and asthma has been accepted for many years, and has been the focus of numerous studies and reviews.4,5 Asthma could arise as a result of acid reflux via two possible mechanisms: damage to the pulmonary tree after direct exposure to acid refluxate (reflux theory); or through bronchial constriction as a result of the stimulation of vagal nerve endings in the oesophagus (reflex theory).6 In addition, cough and increased respiratory effort may exacerbate GORD by bringing about an increased pressure gradient across the lower oesophageal sphincter.7 This could have particular relevance in patients with hiatus hernia, as gastro‐oesophageal junction competence is compromised by hiatus hernia during intra‐abdominal pressure increases.8

The aim of this systematic review is to provide a realistic estimate of the strength and direction of the association between GORD and asthma in adults. Despite the large number of publications examining the clinical and epidemiological nature of this association, ambiguity remains. For example, estimates of the prevalence of GORD in individuals with asthma vary from 30% to 90%.9 A particular challenge is that the prevalence of GORD has been measured in a number of different ways in the literature. First, symptom frequency and/or severity have been used as a measure of disease. This is a patient‐focused method that can be used in large population‐based surveys, but a definitive symptom cutoff point for disease has not yet been established. At least weekly heartburn and/or acid regurgitation is known to impair quality of life,10 and this definition has been used in a recent systematic review,11 which reported that 10–20% of the population in the western world have GORD. Oesophageal pH monitoring is a more objective way of measuring abnormal acid reflux, but its diagnostic accuracy is modest.12,13 Endoscopy is an objective way of examining for the presence of oesophagitis, but it cannot distinguish microscopic changes in the oesophageal mucosa that may underlie symptoms in some individuals. Erosive oesophagitis is present in approximately 20–40% of individuals with GORD.14,15,16

We have therefore chosen to review all of these different methodologies to gain a realistic picture of the association between the two diseases. We examined studies that assess the prevalence or incidence of GORD in individuals with asthma, and the prevalence or incidence of asthma in individuals with GORD. We have employed an epidemiological framework for causality that assesses the strength of association, the consistency of association, the temporal association between GORD and asthma, and finally, the severity–response association between the two diseases.
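The "sample size weighted average prevalence" figures reported in the results above are, in effect, pooled proportions. The sketch below shows the calculation under the assumption that each study's prevalence is weighted by its sample size; the study counts used here are hypothetical.

```python
def weighted_prevalence(studies):
    """Sample-size-weighted average prevalence across studies.

    Each study is a (cases, sample_size) pair; weighting each study's
    prevalence by its sample size reduces to pooling the raw counts.
    """
    total_cases = sum(cases for cases, _ in studies)
    total_n = sum(n for _, n in studies)
    return total_cases / total_n

# Hypothetical studies of GORD symptoms in asthma patients: (cases, n)
asthma_studies = [(120, 200), (90, 150), (300, 520)]
print(f"weighted average prevalence = {weighted_prevalence(asthma_studies):.1%}")
```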

20.

Objective

Approximately 2.8% of pregnancies are Ro/La antibody positive, and 3–15% of fetuses develop complete heart block (CHB). First‐degree atrioventricular heart block (1° AVB) is reported in up to a third of Ro/La fetuses, but as most have a normal postnatal ECG this may reflect inadequacies of Doppler measurement techniques.

Methods

Comparison was made between mechanical (mPR) and electrical (ePR) intervals obtained prospectively using Doppler and non‐invasive fetal ECG (fECG) in 52 consecutive Ro/La pregnancies in 46 women carrying 54 fetuses, in an observational study at a fetal medicine unit. A total of 121 mPR and 37 ePR intervals were recorded in 49 Ro/La fetuses. Five fetuses were referred with CHB and excluded. ePR was measured successfully in 35/37 (94%) and mPR in all cases. 1° AVB was defined as a PR interval above the 95% CI. Logistic regression was used to predict abnormal final fetal rhythm from the first mPR or ePR.

Results

The ePR model gave 66.7% sensitivity (6 of 8 final abnormal fetal rhythm cases were predicted correctly in fetuses >20 weeks) and 96.2% specificity. The mPR model gave 44.4% sensitivity (4 of 9 cases) and 88.5% specificity. Z scores for ePR (zPR) were calculated from 199 normal fetuses. The area under the receiver operating characteristic (ROC) curve was 0.88 (95% CI, 0.754 to 1.007). A zPR cut‐off of 1.65 gave a sensitivity of 87.5% for prolonged ePR intervals and a specificity of 95% for normal ePR intervals.

Conclusion

zPR is better than mPR at differentiating between normal and prolonged PR intervals, suggesting that fECG is the diagnostic tool of choice for investigating the natural history and therapy of conduction abnormalities in Ro/La pregnancies.

Anti‐Ro or La antibody positive pregnancies (Ro/La) have been found in about 2.8% of the pregnant population. The most serious consequence of transplacental transfer of these antibodies to the fetus is complete heart block (CHB), which affects 3% of Ro/La pregnancies, with the risk rising to 15% in a subsequent pregnancy.1,2,3,4 In addition to the morbidity and mortality associated with pacing procedures in young infants,5,6 progressive myocardial fibrosis has been reported, which affects long‐term cardiac function and may necessitate transplantation.7,8,9 The alloimmune process is thought to affect more than the 3% of fetuses who develop CHB: PR interval prolongation (first‐degree atrioventricular heart block, or 1° AVB) has been described in up to a third of fetuses in studies using Doppler methods of measurement.10 As most babies have a normal outcome, with progression to CHB described in only a small proportion, there is some debate over whether these findings represent a transient biological response to the presence of antibodies or whether they may be due to inadequacies of the existing measurement techniques.11

A simple and robust method of monitoring the PR interval in affected pregnancies is required to enable studies to assess the extent and determinants of progression to CHB in Ro/La pregnancies. Such a tool would also permit the assessment of different treatment strategies designed to halt this progression and to minimise later myocardial damage.12

We report the use of a novel technique, non‐invasive fetal ECG (fECG), in a comparison of Doppler mechanical PR (mPR) and electrical PR (ePR) measurements and their ability to predict final rhythm in 52 consecutive Ro/La pregnancies studied prospectively.
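The zPR values reported above are standard z scores of a measured ePR interval relative to a normal reference population, with 1.65 (roughly the one‐sided 95th centile of a normal distribution) used as the cut‐off for prolongation. The sketch below illustrates the calculation; the reference mean and SD are invented for illustration and are not the study's reference data, which would normally also account for gestational age.

```python
import numpy as np

def pr_z_score(epr_ms, normal_eprs_ms):
    """Z score of a fetal ePR interval relative to a normal reference sample."""
    mean = np.mean(normal_eprs_ms)
    sd = np.std(normal_eprs_ms, ddof=1)
    return (epr_ms - mean) / sd

# Hypothetical normal reference sample (ms); the study used 199 normal fetuses
rng = np.random.default_rng(1)
normal_reference = rng.normal(120, 8, size=199)

measured_epr = 138.0   # hypothetical measured ePR interval in ms
z = pr_z_score(measured_epr, normal_reference)
print(f"zPR = {z:.2f} -> {'prolonged' if z > 1.65 else 'normal'} (cut-off 1.65)")
```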
