Similar articles
 20 similar articles found
1.

Objective

To study gender differences in management and outcome in patients with non‐ST‐elevation acute coronary syndrome.

Design, setting and patients

Cohort study of 53 781 consecutive patients (37% women) from the Register of Information and Knowledge about Swedish Heart Intensive care Admissions (RIKS‐HIA), with a diagnosis of either unstable angina pectoris or non‐ST‐elevation myocardial infarction. All patients were admitted to intensive coronary care units in Sweden, between 1998 and 2002, and followed for 1 year.

Main outcome measures

Treatment intensity and in‐hospital, 30‐day and 1‐year mortality.

Results

Women were older (73 vs 69 years, p<0.001) and more likely to have a history of hypertension and diabetes, but less likely to have a history of myocardial infarction or revascularisation. After adjustment, there were no major differences in acute pharmacological treatment or prophylactic medication at discharge. Revascularisation was, however, even after adjustment, performed more often in men (OR 1.15; 95% CI 1.09 to 1.21). After adjustment, there was no significant difference in in‐hospital (OR 1.03; 95% CI 0.94 to 1.13) or 30‐day (OR 1.07; 95% CI 0.99 to 1.15) mortality, but at 1 year male sex was associated with higher mortality (OR 1.12; 95% CI 1.06 to 1.19).
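The odds ratios above come from covariate-adjusted logistic regression models fitted to the registry data. As a point of reference, the sketch below shows how an unadjusted OR and its Wald 95% CI are computed from a 2×2 table; the counts are hypothetical, not RIKS‐HIA data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases in the exposed group,
    c/d = cases/non-cases in the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only.
print(odds_ratio_ci(a=120, b=880, c=100, d=900))
```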

Conclusion

Although women are somewhat less intensively treated, especially regarding invasive procedures, after adjustment for differences in background characteristics they have better long‐term outcomes than men.

Since the beginning of the 1990s there have been numerous studies on gender differences in the management of acute coronary syndromes (ACS). Many earlier studies,1,2,3,4,5,6,7,8 but not all,9 found that women were treated less intensively in the acute phase. In some of the studies, after adjustment for age, comorbidity and severity of the disease, most of the differences disappeared.6,7 There is also conflicting evidence on gender differences in evidence‐based treatment at discharge.1,3,5,6,8,10,11 After acute myocardial infarction (AMI), a higher short‐term mortality in women has been documented in several studies.2,5,6,7,12,13,14 After adjustment for age and comorbidity some difference has usually,2,5,12,13 but not always,11,14 remained. On the other hand, most studies assessing long‐term outcome have found no difference between the genders, or a better outcome in women, at least after adjustment.7,10,13,14 Earlier studies focusing on gender differences in outcome after an acute coronary syndrome have usually studied patients with AMI, including both ST‐elevation myocardial infarction and non‐ST‐elevation myocardial infarction (NSTEMI).2,5,6,7,12,13,14 However, the pathophysiology and initial management differ between these two conditions,15 as does outcome according to gender.11,16 In patients with NSTEMI or unstable angina pectoris (UAP), women seem to have an equal or better outcome after adjustment for age and comorbidity.1,4,8,11,16,17 Studies of gender differences in treatment and outcome in real‐life, contemporary, non‐ST‐elevation acute coronary syndrome (NSTE ACS) populations large enough to allow the necessary adjustment for confounders are lacking.

The aim of this study was to assess gender differences in background characteristics, management and outcome in a real‐life intensive coronary care unit (ICCU) population with NSTE ACS.

2.

Objective

The aim of the study was to compare time trends in mortality rates and treatment patterns between patients with and without diabetes, based on the Swedish register of coronary care (Register of Information and Knowledge about Swedish Heart Intensive Care Admissions [RIKS‐HIA]).

Methods

Mortality after myocardial infarction is high in patients with diabetes, who also seem to receive less evidence‐based treatment. Mortality rates and treatment in 1995–1998 and 1999–2002 were studied in 70 882 patients (age <80 years) with a first registry‐recorded acute myocardial infarction, 14 873 of whom had diabetes, following adjustment for differences in clinical and other parameters.

Results

One‐year mortality rates decreased from 1995 to 2002 from 16.6% to 12.1% in patients without diabetes and from 29.7% to 19.7% in those with diabetes. Patients with diabetes had an adjusted relative 1‐year mortality risk of 1.44 (95% CI 1.36 to 1.52) in 1995–1998 and 1.31 (95% CI 1.24 to 1.38) in 1999–2002. Despite improved pre‐admission and in‐hospital treatment, patients with diabetes were less often offered acute reperfusion therapy (adjusted OR 0.85, 95% CI 0.80 to 0.90), acute revascularisation (adjusted OR 0.78, 95% CI 0.69 to 0.87), revascularisation within 14 days (OR 0.80, 95% CI 0.75 to 0.85), aspirin (OR 0.90, 95% CI 0.84 to 0.98) or lipid‐lowering treatment at discharge (OR 0.81, 95% CI 0.77 to 0.86).

Conclusion

Despite a clear improvement in treatment and in survival after myocardial infarction among patients with diabetes, their mortality rate remains higher than in patients without diabetes. Part of the excess mortality may be explained by co‐morbidities and diabetes itself, but a lack of application of evidence‐based treatment also contributes, underlining the importance of improved management of diabetic patients.

Patients with diabetes have higher short‐ and long‐term mortality rates after acute myocardial infarction (MI) than those without diabetes.1,2,3,4 This pattern has remained even after the introduction of modern therapeutic principles.5,6,7 According to US mortality rate trends, diabetic patients have not experienced a mortality rate reduction similar to that seen in non‐diabetic patients.8,9,10 Less use of evidence‐based treatment has been suggested as an important explanation.4,10,11,12,13,14 The systematic use of such therapy should decrease the hospital mortality rate in diabetic patients so that it approaches that in those without diabetes.15

The Register of Information and Knowledge about Swedish Heart Intensive Care Admissions (RIKS‐HIA), covering almost all Swedish patients with MI, offers detailed information on treatment patterns and prognosis in unselected patients with and without diabetes. The aim of this study is to analyse time trends in treatment patterns and prognosis in order to see whether management has improved.

3.

Background

Poor prognosis in heart failure (HF) patients with diabetes is often attributed to increased co‐morbidity and advanced disease. Further, this effect may be worse in women.

Objective

To determine whether the effect of diabetes on outcomes and the sex‐related variation persisted in a propensity score‐matched HF population, and whether the sex‐related variation was a function of age.

Methods

Of the 7788 HF patients in the Digitalis Investigation Group trial, 2218 had a history of diabetes. Propensity score for diabetes was calculated for each patient using a non‐parsimonious logistic regression model incorporating all measured baseline covariates, and was used to match 2056 (93%) diabetic patients with 2056 non‐diabetic patients.
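The matching itself used a non‐parsimonious logistic model over all measured baseline covariates. The sketch below illustrates the general recipe (propensity scores from logistic regression, then greedy 1:1 nearest‐neighbour matching) on synthetic data; the covariates, seed and matching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the trial data: 1000 patients, 5 baseline
# covariates, and a "diabetes" flag correlated with the first covariate.
n = 1000
X = rng.normal(size=(n, 5))
diabetes = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

# Step 1: propensity score = P(diabetes | baseline covariates),
# here from a plain logistic regression on all covariates.
ps = LogisticRegression(max_iter=1000).fit(X, diabetes).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the propensity score.
treated = np.flatnonzero(diabetes == 1)
pool = set(np.flatnonzero(diabetes == 0))
pairs = []
for i in treated:
    if not pool:
        break
    j = min(pool, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    pool.remove(j)

print(f"matched {len(pairs)} diabetic/non-diabetic pairs")
```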

Results

All‐cause mortality occurred in 135 (25%) and 216 (39%) women without and with diabetes (adjusted HR = 1.67; 95% CI = 1.34 to 2.08; p<0.001). Among men, 535 (36%) and 609 (41%) patients without and with diabetes died from all causes (adjusted HR = 1.21; 95% CI = 1.07 to 1.36; p = 0.002). Sex–diabetes interaction (overall adjusted p<0.001) was only significant in patients ⩾65 years (15% absolute risk increase in women; multivariable p for interaction = 0.005), but not in younger patients (2% increase in women; p for interaction = 0.173). Risk‐adjusted HR (95% CI) for all‐cause hospitalisation for women and men were 1.49 (1.28 to 1.72) and 1.21 (1.11 to 1.32), respectively, also with significant sex–diabetes interaction (p = 0.011).

Conclusions

Diabetes‐associated increases in morbidity and mortality in chronic HF were more pronounced in women, and these sex‐related differences in outcomes were primarily observed in elderly patients.

Diabetes is common in heart failure (HF) and is associated with poor outcomes.1,2 HF patients with diabetes are sicker and have a higher burden of co‐morbidity than those without diabetes.1,2 Diabetes is also associated with activation of the renin–angiotensin–aldosterone and sympathetic nervous systems.3,4 There is mounting evidence that diabetes adversely affects collagen production in fibroblasts and calcium homeostasis in cardiac myocytes.5,6 However, it is not clear to what extent the diabetes‐associated poor outcomes in HF are due to the direct effects of diabetes.

Although outcome‐based multivariable risk adjustment models can account for these confounding covariates to some extent, concerns about residual bias limit interpretation of these results.7 To address this concern, propensity score matching can be used to assemble cohorts of patients with and without an exposure that are well balanced in all measured baseline covariates.8,9,10 More importantly, as investigators remain blinded during the design phase of a randomised clinical trial, this process of bias reduction and study cohort assembly can be done without any knowledge or use of the outcomes data, and the magnitude of bias reduction may be objectively assessed using standardised differences.7,9,10,11

Data from patients with coronary artery disease and elderly patients hospitalised with systolic HF suggest that the effect of diabetes might be worse in women than in men.12,13,14 However, little is known about sex‐related variation in the effect of diabetes on outcomes in a more stable, younger, ambulatory patient population with mild to moderate systolic and diastolic HF. It is also unknown whether this sex‐related difference in the effect of diabetes on HF is a function of age. The purpose of this study was therefore to determine the effect of diabetes on mortality and hospitalisation in propensity score‐matched ambulatory HF patients, and to determine whether the effect varies by sex and whether the sex‐related differences vary by age.

4.

Objective

S100A12 is a pro‐inflammatory protein that is secreted by granulocytes. S100A12 serum levels increase during inflammatory bowel disease (IBD). We performed the first study analysing faecal S100A12 in adults with signs of intestinal inflammation.

Methods

Faecal S100A12 was determined by ELISA in faecal specimens of 171 consecutive patients and 24 healthy controls. Patients either suffered from infectious gastroenteritis confirmed by stool analysis (65 bacterial, 23 viral) or underwent endoscopic and histological investigation (32 with Crohn's disease, 27 with ulcerative colitis, and 24 with irritable bowel syndrome; IBS). Intestinal S100A12 expression was analysed in biopsies obtained from all patients. Faecal calprotectin was used as an additional non‐invasive surrogate marker.

Results

Faecal S100A12 was significantly higher in patients with active IBD (2.45 ± 1.15 mg/kg) compared with healthy controls (0.006 ± 0.03 mg/kg; p<0.001) or patients with IBS (0.05 ± 0.11 mg/kg; p<0.001). Faecal S100A12 distinguished active IBD from healthy controls with a sensitivity of 86% and a specificity of 100%. We also found excellent sensitivity of 86% and specificity of 96% for distinguishing IBD from IBS. Faecal S100A12 was also elevated in bacterial enteritis but not in viral gastroenteritis. Faecal S100A12 correlated better with intestinal inflammation than faecal calprotectin or other biomarkers.

Conclusions

Faecal S100A12 is a novel non‐invasive marker distinguishing IBD from IBS or healthy individuals with a high sensitivity and specificity. Furthermore, S100A12 reflects inflammatory activity of chronic IBD. As a marker for neutrophil activation, faecal S100A12 may significantly improve our arsenal of non‐invasive biomarkers of intestinal inflammation.

The aetiology of inflammatory bowel disease (IBD), consisting of ulcerative colitis and Crohn's disease, involves complex interactions among susceptibility genes, the environment, and the immune system. These interactions lead to a cascade of events that involve the activation of neutrophils, production of proinflammatory mediators, and tissue damage.1 As intestinal symptoms are a frequent cause of referrals to gastroenterologists, it is crucial to differentiate between non‐inflammatory irritable bowel syndrome (IBS) and IBD. To date, there is a lack of biological markers to determine intestinal inflammation.2,3 Therefore, invasive procedures are required to confirm the diagnosis of IBD. Furthermore, the natural history of chronic IBD is characterised by an unpredictable variation in the degree of inflammation. Biological markers are needed to confirm remission, detect early relapses or local reactivation, and to monitor anti‐inflammatory therapies reliably. Whereas serum markers of inflammation are still not very helpful,3,4,5 assays that determine intestinal inflammation by detecting neutrophil‐derived products in stool show great potential.6

An important mechanism in the initiation and perturbation of inflammation in IBD is the activation of innate immune mechanisms.7,8 Among the factors released by infiltrating neutrophils are proteins of the S100 family.9,10 One example is calprotectin, which is detectable in the serum and stool during intestinal inflammation.11 Calprotectin was initially described as a protein of 36 kDa, but was later characterised as a complex of two distinct S100 proteins, S100A8 and S100A9.12,13,14 In recent years, calprotectin has been proposed as a faecal marker of gut inflammation reflecting the degree of phagocyte activation.6,15,16,17,18,19,20 Unfortunately, variation in faecal calprotectin assays still impedes the routine use of this marker as a sole parameter in clinical practice. The observed variation may be caused by the broad expression pattern of calprotectin, which is found in granulocytes as well as monocytes and is also inducible in epithelial cells.21,22 In this context, the elevation in lactose intolerance is notable.16,17,23

S100A12 is more restricted to granulocytes. It is secreted by activated neutrophils and is abundant in the intestinal mucosa of patients with IBD.9,24 Overexpression at the site of inflammation and correlation with disease activity in a variety of inflammatory disorders underscore the role of this granulocytic protein as a proinflammatory molecule.25 The binding of S100A12 to the receptor for advanced glycation endproducts (RAGE) leads to the long‐term activation of nuclear factor kappa B, which promotes inflammation.26 In mouse models of colitis, blocking the interaction of S100A12 with RAGE has been shown to attenuate inflammation. Data on murine models of colitis as well as human IBD point to an important role for S100A12 in the pathogenesis of these disorders.9,26,27

In a previous study, we demonstrated that S100A12 is overexpressed during chronic active IBD and serves as a useful serum marker for disease activity in patients with IBD.9 De Jong et al.28 recently reported that S100A12 can be detected in the stool of children with Crohn's disease. The aim of our present study was thus to analyse S100A12 in stool samples, as well as its expression in the intestinal tissue, of patients with confirmed IBD or IBS, and in the stool of a normal control group. We correlated faecal S100A12 levels with endoscopic and histological findings in the same patients and investigated the diagnostic accuracy of S100A12 to detect intestinal inflammation.

5.

Background

Gender differences in management and outcomes have been reported in acute coronary syndrome (ACS).

Objectives

To assess such gender differences in a Swiss national registry.

Methods

20 290 patients with ACS enrolled in the AMIS Plus Registry from January 1997 to March 2006 by 68 hospitals were included in a prospective observational study. Data on patients' characteristics, diagnoses, procedures, complications and outcomes were recorded. Odds ratios (ORs) of in‐hospital mortality were calculated using logistic regression models.
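A minimal sketch of the kind of logistic model behind such adjusted ORs: fit in‐hospital death against sex plus covariates and exponentiate the coefficients. All data and coefficients below are synthetic, for illustration only; statsmodels is one common choice, not necessarily the registry's software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic illustration: female sex, age and one comorbidity flag as
# predictors of in-hospital death. All values are invented.
n = 5000
female = rng.binomial(1, 0.28, n)
age = rng.normal(65, 12, n)
comorbid = rng.binomial(1, 0.30, n)
lin = -6 + 0.05 * age + 0.5 * comorbid + 0.05 * female
died = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([female, age, comorbid]))
fit = sm.Logit(died, X).fit(disp=0)

# exp(beta) is the adjusted odds ratio for each predictor.
print("adjusted ORs (female, age, comorbid):", np.exp(fit.params[1:]).round(2))
print("95% CIs:", np.exp(fit.conf_int()[1:]).round(2))
```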

Results

5633 (28%) patients were female and 14 657 (72%) male. Female patients were older than men (mean (SD) age 70.9 (12.1) vs 63.4 (12.9) years; p<0.001), had more comorbidities and came to hospital later. They underwent percutaneous coronary intervention (PCI) less frequently (OR = 0.65; 95% CI 0.61 to 0.69) and their unadjusted in‐hospital mortality was higher overall (10.7% vs 6.3%; p<0.001) and in those who underwent PCI (4.2% vs 3.0%; p = 0.018). Mortality differences between women and men disappeared after adjustment for other predictors (adjusted OR (aOR) for women vs men: 1.09; 95% CI 0.95 to 1.25), except in women aged 51–60 years (aOR = 1.78; 95% CI 1.04 to 3.04). However, even after adjustment, female gender remained significantly associated with a lower probability of undergoing PCI (OR = 0.70; 95% CI 0.64 to 0.76).

Conclusions

The analysis showed gender differences in baseline characteristics and in the rate of PCI in patients admitted for ACS to Swiss hospitals between 1997 and 2006. Reasons for the significant underuse of PCI in women, and for the slightly higher in‐hospital mortality in the 51–60 year age group, need to be investigated further.

Coronary artery disease and, in particular, acute coronary syndrome (ACS) is the leading cause of mortality and morbidity in the Western world, in both women and men. The benefits of reperfusion treatment for patients with ACS have been well established, and it has become standard treatment for both women and men with ST‐segment elevation acute coronary syndrome (STE‐ACS); however, there is variation in the method of reperfusion chosen, and in which patients are considered eligible.1 Controversies also exist about the type and timing of reperfusion, and about its outcomes, in patients presenting with unstable angina or non‐ST‐segment elevation ACS (NSTE‐ACS).

It has also been shown that women with acute myocardial infarction (AMI) are less likely than men to undergo reperfusion treatment,2,3 and that there is a lack of awareness of risk among women.4 In addition, there are conflicting data from randomised trials about the benefit of early invasive treatment in women.5,6,7 Differences in survival between men and women reported in some studies may reflect not only gender bias in management, but also differences in coronary anatomy, age and comorbidities. In the CADILLAC Trial, women had higher mortality than men after interventional treatment for AMI, which the authors attributed to smaller body surface area and more comorbidities.3 On the contrary, other authors have suggested that the higher mortality seen in women after an AMI might be explained by less aggressive treatment,8 and that if women had access to the same quality of care as men, their survival would be the same.9 Finally, the results of outcome studies in unselected patients suggest that gender is not an independent predictor of mortality after percutaneous coronary intervention (PCI)2,10 and that the improvement in prognosis associated with reperfusion treatment is independent of it.10,11,12,13 The data of 3100 female patients enrolled in the Euro Heart Survey ACS showed that female gender in the “real world” was not independently associated with worse in‐hospital mortality, irrespective of the type of ACS.14 The authors interestingly emphasised the need to evaluate outcomes of ACS in surveys or registries, rather than from data derived from clinical trials.14 This suggestion, however, did not settle the controversy since, in the New York angioplasty registry, in‐hospital mortality for female patients undergoing angioplasty after having reached hospital within 6 hours was 9.04% vs 4.42% for male patients (p<0.001) for the years 1993–6.15

Thus, the aim of this study was to assess outcomes in unselected female and male patients admitted between 1997 and 2006 for ACS to Swiss hospitals, and to put these results in the perspective of their baseline characteristics, comorbidities and management.

6.

Objective

To investigate risk factors for non‐Hodgkin's lymphoma (NHL) and analyse NHL subtypes and characteristics in patients with systemic lupus erythematosus (SLE).

Methods

A national SLE cohort identified through SLE discharge diagnoses in the Swedish hospital discharge register during 1964 to 1995 (n = 6438) was linked to the national cancer register. A nested case control study on SLE patients who developed NHL during this observation period was performed with SLE patients without malignancy as controls. Medical records from cases and controls were reviewed. Tissue specimens on which the lymphoma diagnosis was based were retrieved and reclassified according to the WHO classification. NHLs of the subtype diffuse large B cell lymphoma (DLBCL) were subject to additional immunohistochemical staining using antibodies against bcl‐6, CD10 and IRF‐4 for further subclassification into germinal centre (GC) or non‐GC subtypes.

Results

16 patients with SLE had NHL, and the DLBCL subtype dominated (10 cases). The 5‐year overall survival and mean age at NHL diagnosis were comparable with NHL in the general population—50% and 61 years, respectively. Cyclophosphamide or azathioprine use did not elevate lymphoma risk, but the risk was elevated if haematological or sicca symptoms, or pulmonary involvement was present in the SLE disease. Two patients had DLBCL‐GC subtype and an excellent prognosis.

Conclusions

NHL in this national SLE cohort was dominated by the aggressive DLBCL subtype. The prognosis of NHL was comparable with that of the general lymphoma population. There were no indications of treatment‐induced lymphomas. Molecular subtyping could be a helpful tool to predict prognosis also in SLE patients with DLBCL.

Evidence of an increased risk of developing haematological malignancy, and especially non‐Hodgkin's lymphoma (NHL), in autoimmune diseases has accumulated since the 1970s. First studies of Sjögren's syndrome,1 then rheumatoid arthritis (RA)2 and, in the last decade, studies from uni‐/multicentre SLE cohorts3,4,5,6,7,8 and national SLE cohorts9,10 have consistently shown a markedly increased risk of NHL. As for NHL subtype, knowledge is more limited. RA and SLE share several disease manifestations, such as arthritis, and “extra‐articular manifestations” such as serositis, sicca symptoms and interstitial inflammatory lung disease. In RA, a pronounced over‐representation of diffuse large B cell lymphoma (DLBCL) has been reported from a large population‐based cohort.11 This lymphoma subtype was also the most frequent in an international multicentre study of lupus patients.12 In Sjögren's syndrome, approximately 85% are MALT lymphomas,13 although a recent study from a single‐centre primary Sjögren's syndrome cohort—with patients fulfilling the American–European Consensus Group criteria14—showed a predominance of DLBCL.15

The pathophysiological mechanisms underlying the increased risk of NHL in patients with chronic inflammatory diseases are still not fully understood. Immunological disturbances shared by rheumatic conditions and lymphomas have been suggested as a link between these disorders, as has a possible potentiating role of immunosuppressive drugs or certain viral infections, especially Epstein–Barr virus (EBV).5,16

Recently, advances in molecular characterisation have enabled more detailed subclassification of lymphomas based on the molecular expression of the tumour cells. For DLBCL, two prognostic groups have been identified in the general lymphoma population, depending on whether the gene expression profile resembles normal germinal centre (GC) or activated B cells, using global gene expression profiling17,18 and immunophenotyping of tumour cells.19,20 The GC DLBC lymphomas had significantly better survival than those of non‐GC subtype.17,18,19,20 No such subtyping has been reported in SLE patients.

In a previous register study of a population‐based national Swedish SLE cohort, a threefold increased risk of lymphoma was found.10 This nested case‐control study focuses on those SLE patients who developed NHL. Information on clinical manifestations and pharmacological (cytotoxic) treatment of the SLE disease was retrieved from patient records. The lymphomas were re‐examined and reclassified, and DLBCLs were further divided into GC and non‐GC subtypes by immunohistochemistry. The presence of EBV in the lymphomas was also analysed.

7.

Objective

Progression of neointimal stent coverage (NSC) and changes in thrombus were evaluated serially by coronary angioscopy for up to 2 years after sirolimus‐eluting stent (SES) implantation.

Design

Serial angioscopic observations were performed in 20 segments of 20 patients at baseline, and at 6 months and 2 years after SES implantation. NSC was classified as follows: 0, uncovered struts; 1, visible struts through thin neointima; or 2, no visible struts. In each patient, maximum and minimum NSC was evaluated. Existence of thrombus was also examined.

Results

The maximum NSC increased from 6 months to 2 years (1.2 (0.4) vs 1.8 (0.4), respectively, p = 0.005), while the minimum NSC did not change (0.7 (0.5) vs 0.8 (0.4), respectively, p = 0.25). The prevalence of patients with uncovered struts did not decrease from 6 months to 2 years (35% vs 20%, respectively, p = 0.29). Although there were no thrombus‐related adverse events, new thrombus formation was found in one patient (5%) at the 6‐month and in four patients (20%) at the 2‐year follow‐up evaluations. The frequency of thrombus inside the SES was similar at baseline, 6 months and 2 years (40%, 40% and 30%, respectively; p = NS).

Conclusions

Neointimal growth inside the SES progressed heterogeneously. Uncovered struts persisted in 20% of the patients for up to 2 years, and subclinical thrombus formation was not a rare phenomenon.

Recently, the occurrence of late stent thrombosis (LST) after drug‐eluting stent implantation has become a major clinical concern.1,2,3 A long‐term follow‐up study demonstrated that LST occurs at a constant rate of 0.6% a year for up to 3 years after drug‐eluting stent implantation.3 Pathological investigation shows that delayed arterial healing, characterised by incomplete endothelialisation and persistence of fibrin, has a key role in the occurrence of LST.4,5 Moreover, a powerful predictor of LST is the existence of uncovered struts without endothelialisation.5 We therefore proposed the hypothesis that the uncovered struts of a sirolimus‐eluting stent (SES) remain for an extended period of time.

Coronary angioscopy provides a direct visualisation of the lumen and detailed information on the condition of neointimal stent coverage (NSC) and thrombus.6,7,8 This imaging modality has the advantage of allowing the identification of an intracoronary thrombus.8 Presently, no long‐term angioscopic follow‐up data after SES implantation are available. We herein present our findings as derived from angioscopic examination, focusing on the long‐term serial changes in the NSC, especially the uncovered stent struts, and the presence of thrombus inside the SES.

8.

Objective

Progression of neointimal stent coverage (NSC) and changes in thrombus were evaluated serially by coronary angioscopy for up to 2 years after sirolimus‐eluting stent (SES) implantation.

Methods

Serial angioscopic observations were performed in 20 segments of 20 patients at baseline, at 6 months and at 2 years after SES implantation. NSC was classified as follows: 0, uncovered struts; 1, visible struts through thin neointima; or 2, no visible struts. In each patient, maximum and minimum NSC was evaluated. Existence of thrombus was also examined.

Results

The maximum NSC increased from 6 months to 2 years (mean (SD) 1.2 (0.4) vs 1.8 (0.4), respectively, p = 0.005), while the minimum NSC did not change (0.7 (0.5) vs 0.8 (0.4), respectively, p = 0.25). The prevalence of patients with uncovered struts did not decrease from 6 months to 2 years (35% vs 20%, respectively, p = 0.29). Although there were no thrombus‐related adverse events, new thrombus formation was found in 5% of patients at the 6‐month and in 20% at the 2‐year follow‐up evaluations. The prevalence of thrombus inside the SES at baseline, 6 months and 2 years was similar (40%, 40% and 30%, respectively; p = NS).

Conclusions

Neointimal growth inside the SES progressed heterogeneously. Uncovered struts persisted in 20% of the patients for up to 2 years, and subclinical thrombus formation was not uncommon.

Recently, the occurrence of late stent thrombosis (LST) after drug‐eluting stent implantation has become a major clinical concern.1,2,3 A long‐term follow‐up study demonstrated that LST occurs at a constant rate of 0.6% a year for up to 3 years after drug‐eluting stent implantation.3 Pathological investigation showed that delayed arterial healing, characterised by incomplete endothelialisation and persistence of fibrin, has a key role in the occurrence of LST.4,5 Moreover, a powerful predictor of LST is the existence of uncovered struts without endothelialisation.5 We therefore suggested that the uncovered struts of a sirolimus‐eluting stent (SES) remain for an extended period of time.

Coronary angioscopy provides direct visualisation of the lumen and detailed information on the condition of neointimal stent coverage (NSC) and thrombus.6,7,8 This imaging modality has the advantage of allowing the identification of an intracoronary thrombus.8 Presently, no long‐term angioscopic follow‐up data after SES implantation are available. Here we present our findings from angioscopic examination, focusing on the long‐term serial changes in the NSC, especially the uncovered stent struts, and the presence of thrombus inside the SES.

9.

Objective

Approximately 2.8% of pregnancies are Ro/La antibody positive, and 3–15% of these fetuses develop complete heart block (CHB). First‐degree atrioventricular heart block (1° AVB) is reported in a third of Ro/La fetuses but, as most have a normal postnatal ECG, this may reflect inadequacies of Doppler measurement techniques.

Methods

Comparison was made between mechanical (mPR) and electrical (ePR) intervals obtained prospectively using Doppler and non‐invasive fetal ECG (fECG) in 52 consecutive Ro/La pregnancies in 46 women carrying 54 fetuses, in an observational study at a fetal medicine unit. 121 mPR and 37 ePR intervals were recorded in 49 Ro/La fetuses. Five were referred with CHB and excluded. ePR was measured successfully in 35/37 (94%) and mPR was measured in all cases. 1° AVB was defined as PR >95% CI. Logistic regression predicted abnormal final fetal rhythm from first mPR or ePR.

Results

The ePR model gave 66.7% sensitivity (6 of 8 final abnormal fetal rhythm cases were predicted correctly in fetuses >20 weeks) and 96.2% specificity. mPR gave 44.4% sensitivity (4 of 9 cases) and 88.5% specificity. Z scores for ePR (zPR) were calculated from 199 normal fetuses. The area under the receiver operator characteristic (ROC) curve was 0.88 (95% CI, 0.754 to 1.007). A cut‐off of 1.65 gave a sensitivity of 87.5% and specificity of 95% for those with prolonged and normal ePR intervals, respectively.
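A brief sketch of the zPR idea: standardise each fetus's ePR against the reference distribution and apply the 1.65 cut‐off, with the ROC AUC summarising discrimination. The values and group sizes below are invented for illustration and do not reproduce the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Reference ePR intervals (ms) from normal fetuses; values are invented.
reference = rng.normal(120, 8, 199)
mu, sd = reference.mean(), reference.std(ddof=1)

# Study fetuses: 0 = normal rhythm, 1 = prolonged PR (also invented).
epr = np.concatenate([rng.normal(120, 8, 40), rng.normal(140, 8, 8)])
truth = np.r_[np.zeros(40), np.ones(8)]

z = (epr - mu) / sd          # zPR: ePR standardised to the reference range

print("AUC:", round(float(roc_auc_score(truth, z)), 2))
prolonged = z > 1.65         # cut-off quoted in the abstract
sensitivity = prolonged[truth == 1].mean()
specificity = (~prolonged)[truth == 0].mean()
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```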

Conclusion

zPR is better than mPR at differentiating between normal and prolonged PR intervals, suggesting that fECG is the diagnostic tool of choice to investigate the natural history and therapy of conduction abnormalities in Ro/La pregnancies.

Anti‐Ro or La antibody positive pregnancies (Ro/La) have been found in about 2.8% of the pregnant population. The most serious consequence of transplacental transfer of these antibodies to the fetus is complete heart block (CHB), which affects 3% of Ro/La pregnancies, with the risk rising to 15% in a subsequent pregnancy.1,2,3,4 In addition to the morbidity and mortality associated with pacing procedures in young infants,5,6 progressive myocardial fibrosis has been reported, which affects long‐term cardiac function and may necessitate transplantation.7,8,9 The alloimmune process is thought to exert its effect in more than the 3% of fetuses affected by CHB, with PR interval prolongation (first‐degree atrioventricular heart block or 1° AVB) described in up to a third of fetuses in studies using Doppler methods of measurement.10 As most babies have a normal outcome, with progression to CHB described in only a small proportion, there is some debate over whether these findings represent a transient biological response to the presence of antibodies or whether they may be due to inadequacies of the existing measurement techniques.11

A simple and robust method of monitoring the PR interval in affected pregnancies is required to enable studies to assess the extent and determinants of progression to CHB in Ro/La pregnancies. Such a tool would also permit the assessment of different treatment strategies designed to halt this progression and to minimise later myocardial damage.12

We report the use of a novel technique, non‐invasive fetal ECG (fECG), in a comparison of Doppler mechanical PR (mPR) and electrical PR (ePR) measurements and their ability to predict final rhythm in 52 consecutive Ro/La pregnancies studied prospectively.

10.

Background

A high diagnostic accuracy of 64‐slice CT coronary angiography (CTCA) has been reported in selected patients with stable angina pectoris, but only scant information is available in patients with non‐ST elevation acute coronary syndrome (ACS).

Objectives

To study the diagnostic performance of 64‐slice CTCA in patients with non‐ST elevation ACS.

Patients and methods

64‐slice CTCA was performed in 104 patients (mean (SD) age 59 (9) years) with non‐ST elevation ACS. Two independent, blinded observers assessed all coronary arteries for stenosis, using conventional quantitative angiography as a reference. Coronary lesions with ⩾50% luminal narrowing were classified as significant.

Results

Conventional coronary angiography demonstrated the absence of significant disease in 15% (16/104) of patients, and the presence of single‐vessel disease in 40% (42/104) and multivessel disease in 44% (46/104) of patients. Sensitivity for detecting significant coronary stenoses on a patient‐by‐patient analysis was 100% (88/88; 95% CI 95 to 100), specificity 75% (12/16; 95% CI 47 to 92), and positive and negative predictive values were 96% (88/92; 95% CI 89 to 99) and 100% (12/12; 95% CI 70 to 100), respectively.
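These patient-level figures follow directly from the reported counts (88 true positives, 4 false positives, 12 true negatives, no false negatives); the small helper below makes the arithmetic explicit.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all diseased
        "specificity": tn / (tn + fp),  # true negatives / all disease-free
        "ppv": tp / (tp + fp),          # positives that are truly diseased
        "npv": tn / (tn + fn),          # negatives that are truly disease-free
    }

# Patient-level counts reconstructed from the abstract:
# 88 true positives, 4 false positives, 12 true negatives, 0 false negatives.
print(diagnostic_metrics(tp=88, fp=4, tn=12, fn=0))
# sensitivity 1.0 (100%), specificity 0.75, ppv ~0.96, npv 1.0
```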

Conclusion

64‐slice CTCA has a high sensitivity to detect significant coronary stenoses, and is reliable for excluding the presence of significant coronary artery disease in patients who present with a non‐ST elevation ACS.

Patients with a non‐ST elevation acute coronary syndrome (ACS) are usually stratified into high and low risk for progression to myocardial infarction or death on the basis of their clinical presentation, ECG changes, biomarkers, electrical or haemodynamic instability, and presence of diabetes mellitus.1 An invasive management strategy, including conventional coronary angiography (CCA) and revascularisation, is recommended in high‐risk patients, whereas a conservative strategy with ischaemia‐guided revascularisation is recommended in low‐risk patients.1,2,3 We investigated the feasibility and diagnostic accuracy of 64‐slice CT coronary angiography (CTCA) in 104 patients with non‐ST elevation ACS as a first step to evaluate the potential decision‐making role of CT in this patient cohort.

11.

Aims

To evaluate the effect of a disease management programme for patients with coronary heart disease (CHD) and chronic heart failure (CHF) in primary care.

Methods

A cluster randomised controlled trial of 1316 patients with CHD and CHF from 20 primary care practices in the UK was carried out. Care in the intervention practices was delivered by specialist nurses trained in the management of patients with CHD and CHF. Usual care was delivered by the primary healthcare team in the control practices.

Results

At follow up, significantly more patients with a history of myocardial infarction in the intervention group were prescribed a beta‐blocker compared to the control group (adjusted OR 1.43, 95% CI 1.19 to 1.99). Significantly more patients with CHD in the intervention group had adequate management of their blood pressure (<140/85 mm Hg) (OR 1.61, 95% CI 1.22 to 2.13) and their cholesterol (<5 mmol/l) (OR 1.58, 95% CI 1.05 to 2.37) compared to those in the control group. Significantly more patients with an unconfirmed diagnosis of CHF had a diagnosis of left ventricular systolic dysfunction confirmed (OR 4.69, 95% CI 1.88 to 11.66) or excluded (OR 3.80, 95% CI 1.50 to 9.64) in the intervention group compared to the control group. There were significant improvements in some quality‐of‐life measures in patients with CHD in the intervention group.

Conclusions

Disease management programmes can lead to improvements in the care of patients with CHD and presumed CHF in primary care.

Cardiovascular diseases, including coronary heart disease (CHD) and chronic heart failure (CHF), are the main cause of morbidity and mortality in most European countries.1 Mortality from cardiovascular disease has declined over the last 30 years, a trend that has been attributed to secondary prevention therapies.2,3 However, European surveys have shown considerable potential for improved levels of secondary prevention in people with established CHD.4 Studies in primary care, where most of these patients are managed, have also reported considerable potential to further increase secondary prevention through medical and lifestyle interventions.5,6 “Medical” measures include aspirin therapy and blood pressure and lipid control, while “lifestyle” measures include increased exercise, dietary modification and smoking cessation.5 CHF is also a highly prevalent, chronic condition with high mortality and morbidity. It is increasing in prevalence, and the public health burden from CHF is therefore likely to rise substantially over the next 10 years.7 The quality of life of patients with CHF is worse than for most chronic conditions managed in primary care, and five‐year survival is worse than for many malignant conditions.8 However, appropriate treatment, including inhibitors of the renin‐angiotensin‐aldosterone system and beta‐blockers, has the potential to reduce hospitalisation and mortality in these patients.9,10 The task of implementing a comprehensive package of effective measures for large numbers of patients has been described as daunting.5 It is therefore important to develop implementation strategies that are practical and effective. Many patients with CHF are incorrectly diagnosed and inadequately treated in primary care,11 and obstacles to appropriate primary care management include lack of knowledge, fear of complications with pharmacological treatments, lack of time and limited facilities for investigations.12,13

Systematic reviews indicate that secondary prevention programmes improve the process of care, reduce admissions to hospital and enhance quality of life or functional status in patients with CHD.14 Similarly, systematic reviews of disease management programmes in CHF suggest that specialised, multidisciplinary follow‐up can reduce hospitalisation and may lead to cost savings.15,16,17 However, all the CHF trials included in these systematic reviews were conducted in highly specialised centres and recruited patients following discharge after hospitalisation. The applicability of the available CHF management programmes to countries with a primary care‐based healthcare system has therefore recently been questioned.18

To achieve improved secondary prevention of CHD and CHF, primary care will need to adopt a systematic approach. Although disease management clinics for CHD in primary care can improve patients' outcomes,5 there are no such studies in the management of patients with CHF. Since the majority of patients with CHF will also have CHD,19 we investigated the effect of a disease management programme for patients with either or both conditions in primary care.

12.
13.
See article on page 1577

Diabetes mellitus (diabetes), in particular type 2 diabetes, constitutes one of the largest emerging threats to health in the 21st century. It is estimated that by 2030 as many as 360 million people worldwide will be affected.1

The cause of death in those with diabetes is dominated by coronary heart disease, accompanied by increased rates of stroke and peripheral vascular disease: so‐called macrovascular complications. At least two‐thirds of deaths are attributable to these cardiovascular diseases and their sequelae.2 A true picture of the extent of macrovascular complications is obscured, however, by inaccurate death certification3 and diagnostic criteria4 based on the development of microvascular complications (retinopathy, neuropathy, nephropathy). The Euro Heart Survey5 demonstrated that if one applied oral glucose tolerance tests to those presenting with all forms of acute coronary syndrome, two‐thirds display impaired glucose regulation.

In the USA, although the overall mortality rate associated with coronary heart disease has declined over the past 20 years, this trend has not been reflected in a decline of mortality rates in patients with diabetes.6 Results published in this issue of Heart from the Swedish registry on coronary care (RIKS‐HIA) confirm this trend (see article on page 1577).7 After a myocardial infarction (MI), patients with diabetes had an increased mortality rate compared with non‐diabetic patients. Moreover, despite the existence of treatments that may benefit diabetic patients disproportionately, the relative hazard of mortality in diabetic patients improved little between the periods 1995–8 and 1999–2002 compared with non‐diabetic patients. These observations give us a glimpse of the vast epidemic that is approaching, if not already upon us.

14.
15.

Objective

To determine if an aggressive approach to coronary revascularisation with oversized balloons is counterproductive, we studied the effect of increasing balloon‐to‐artery (B:A) ratio on neointimal hyperplasia following primary stent placement using a non‐atherosclerotic porcine coronary overstretch model.

Methods

60 vessels in 33 Yorkshire swine were randomly assigned to one of five B:A ratios between 1.0:1 and 1.4:1. Intravascular ultrasound (IVUS) imaging was performed before bare‐metal stent placement to accurately determine vessel size, after stent placement, and at 28 days.

Results

The mean (SD) prestent vessel diameter was 3.05 (0.31) mm. In‐stent neointimal volume, in‐stent volume stenosis and cross‐sectional area stenosis at the stent minimum lumen diameter increased significantly with increasing achieved B:A ratio (multilevel regression test for slope, p<0.001, p = 0.002 and p<0.001, respectively) and were independent of vessel size. Even minor vessel overstretch at an achieved B:A ratio of 1.1:1 resulted in significant neointimal hyperplasia. Larger B:A ratios were also associated with more neointima beyond the stent edges (p = 0.008). For vessels from the same animal, the neointimal response at a given B:A ratio depended on the animal treated.
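Because several stented vessels come from the same pig, the analysis uses multilevel regression rather than treating vessels as independent. Below is a minimal sketch of that idea with a per-animal random intercept, on synthetic data; the model form, numbers and seed are illustrative assumptions, not the study's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic stand-in: 60 stented vessels nested in pigs (about 2 per
# animal), each vessel with a randomised balloon-to-artery (B:A) ratio.
animals = np.repeat(np.arange(33), 2)[:60]
ba = rng.choice([1.0, 1.1, 1.2, 1.3, 1.4], size=60)
animal_effect = rng.normal(0, 0.5, 33)[animals]   # shared per-pig effect
neointima = 1.0 + 2.0 * (ba - 1.0) + animal_effect + rng.normal(0, 0.3, 60)

df = pd.DataFrame({"animal": animals, "ba": ba, "neointima": neointima})

# A random intercept per animal models the within-pig correlation; the
# fixed slope on B:A ratio is the dose-response test for the trend.
fit = smf.mixedlm("neointima ~ ba", df, groups=df["animal"]).fit()
print(fit.summary())
```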

Conclusions

In a porcine model of IVUS‐guided coronary primary stent placement, vessel overexpansion is counterproductive. Neointimal hyperplasia at 28 days is strongly associated with increasing B:A ratio. In addition, vessels do not respond independently of each other when multiple stents are placed within the same animal using a range of B:A ratios.

The trauma of coronary stent placement triggers an inflammatory reaction that may culminate in migration and proliferation of cells within the lumen of the vessel, a process termed neointimal hyperplasia. Approximately 10–20% of patients treated with a non‐drug‐eluting, or bare‐metal, stent experience symptomatic restenosis and require revascularisation within 1 year.1,2,3,4,5,6

The non‐atherosclerotic, porcine in vivo coronary artery injury model is the preferred model for studying neointimal hyperplasia because swine coronary artery size and anatomy closely resemble human coronary vasculature.7 This model remains relevant due to the significant delay in neointimal hyperplasia and the requirement for prolonged animal observation associated with the use of drug‐eluting stents. In a previous study using this model, neointimal hyperplasia correlated with the degree of vascular injury using a single balloon‐to‐artery (B:A) ratio.8 The purpose of our study was to use intravascular ultrasound (IVUS) in the porcine overstretch model to determine the extent of neointimal hyperplasia after bare‐metal stent placement using a range of pre‐established and randomised B:A ratios. In addition, we examined the within‐animal correlation between neointimal growth and achieved B:A ratio to determine whether data from arteries within the same animal are independent of each other.

16.

Introduction

Latent tuberculosis infection (LTBI) is detected with the tuberculin skin test (TST) before anti‐TNF therapy. We aimed to investigate in vitro blood assays with TB‐specific antigens (CFP‐10, ESAT‐6) in immune‐mediated inflammatory diseases (IMID) for LTBI screening.

Patients and methods

Sixty‐eight IMID patients with (n = 35) or without (n = 33) LTBI according to clinico‐radiographic findings or TST results (10 mm cutoff value) underwent cell proliferation assessed by thymidine incorporation and PKH‐26 dilution assays, and IFNγ‐release enzyme‐linked immunosorbent spot (ELISPOT) assays with TB‐specific antigens.

Results

In vitro blood assays more often gave positive results in patients with LTBI than in those without (p<0.05), with some variation between tests. Among the 13 patients with LTBI diagnosed independently of TST results, 5 had a negative TST (38.5%) and only 2 a negative blood assay result (15.4%). The 5 LTBI patients with negative TST results all had positive blood assay results. Ten patients without LTBI but with intermediate TST results (6–10 mm) had results no different from patients with a TST result ⩽5 mm (p>0.3), and lower results than those with LTBI (p<0.05), on CFP‐10+ESAT‐6 ELISPOT and CFP‐10 proliferation assays.

Conclusion

Anti‐TB blood assays are beneficial for LTBI diagnosis in IMID. Compared with TST, they show better sensitivity, as seen by positive results in 5 patients with certain LTBI and negative TST, and better specificity, as seen by negative results in most patients with an intermediate TST as the only criterion of LTBI. In the absence of clinico‐radiographic findings for LTBI, blood assays could replace TST for the antibiotherapy decision before anti‐TNF therapy.

TNFα blocker agents are approved for the treatment of immune‐mediated inflammatory diseases (IMID) and provide marked clinical benefit. However, they can reactivate tuberculosis (TB) infection in patients previously exposed to TB bacilli.1,2 The presence of quiescent mycobacteria defines latent TB infection (LTBI).3,4 Thus, screening for LTBI is necessary before initiating therapy with TNF blockers.5 However, to date, no perfect gold standard exists for detecting LTBI, and the tuberculin skin test (TST) remains widely used. The recommendations for detecting LTBI differ worldwide.3,6,7 In France, recommendations were established in 2002 by the RATIO (Research Axed on Tolerance of Biotherapies) study group for the Agence Française de Sécurité Sanitaire des Produits de Santé.8,9 Patients are considered to have LTBI requiring treatment with prophylactic antibiotics before starting anti‐TNFα therapy if they had previous TB with no adequate treatment, tuberculosis primo‐infection, residual nodular tuberculous lesions larger than 1 cm3, old lesions suggesting a TB diagnosis (parenchymatous abnormalities or pleural thickening) on chest radiography, or weals larger than 10 mm in diameter in response to the TST. Adequate anti‐TB treatment was defined as treatment initiated after 1970, lasting at least 6 months and including at least 2 months with the combination rifampicin–pyrazinamide. The 10 mm threshold for the TST result was established in 2002 in France because vaccination with bacille Calmette–Guérin (BCG) was mandatory in France and nearly 100% of the population had been vaccinated. Nevertheless, after July 2005, the threshold was decreased to 5 mm, as in most other countries.10

The TST is the current method to detect LTBI but has numerous drawbacks. It requires a return visit for reading the test result. It has poor specificity, since previous BCG vaccination and environmental mycobacterial exposure can result in false‐positive results.6,11,12 This poor specificity can lead to unnecessary treatment with antibiotics, with a significant risk of drug toxicity.13,14,15 On the other hand, the TST in IMID may often give a more negative reaction than in the general population, mainly because of the disease or immunosuppressive drug use.16,17 This poor sensitivity can lead to false‐negative results, with a subsequent risk of TB reactivation with anti‐TNF therapy.

The identification of genes in the Mycobacterium tuberculosis genome that are absent in BCG and most environmental mycobacteria offers an opportunity to develop more specific tests to investigate M tuberculosis infection, particularly LTBI.18 Culture filtrate protein‐10 (CFP‐10) and early secretory antigen target‐6 (ESAT‐6) are two such gene products that are strong targets of the cellular immune response in TB patients. Specific T‐cell based assays investigating interferon gamma (IFNγ) release or T‐cell proliferation in the presence of these mycobacterial antigens could be useful in screening for LTBI before anti‐TNF therapy. New IFNγ‐based ex vivo assays involving CFP‐10 and ESAT‐6 (T‐SPOT TB, Oxford Immunotec, Abingdon, UK) and QuantiFERON TB Gold (QFT‐G; Cellestis, Carnegie, Australia) allow for diagnosis of active TB, recent primo‐infection or LTBI.12 These tests seem to be more accurate than the TST for this purpose in the general population.12 To date, the performance of the commercial assays in detecting LTBI in patients with IMID receiving immunosuppressive drugs has not been demonstrated, and the frequency of indeterminate results is still debated.19,20,21

We aimed to investigate the performance of in‐house anti‐CFP‐10 and anti‐ESAT‐6 proliferative and enzyme‐linked immunosorbent spot (ELISPOT) assays in detecting LTBI in patients with IMID before anti‐TNFα therapy. We analysed two subgroups of patients: those with confirmed LTBI independent of TST result, and those with LTBI based exclusively on a positive TST result between 6 and 10 mm.

17.

Objective

To analyse the short and long term outcome of endoscopic stent treatment after bile duct injury (BDI), and to determine the effect of multiple stent treatment.

Design, setting and patients

A retrospective cohort study was performed in a tertiary referral centre to analyse the outcome of endoscopic stenting in 67 patients with cystic duct leakage, 26 patients with common bile duct leakage and 110 patients with a bile duct stricture.

Main outcome measures

Long term outcome and independent predictors for successful stent treatment.

Results

Overall success in patients with cystic duct leakage was 97%. In patients with common bile duct leakage, stent related complications occurred in 3.8% (n = 1). The overall success rate was 89% (n = 23). In patients with a bile duct stricture, stent related complications occurred in 33% (n = 36) and the overall success rate was 74% (n = 81). After a mean follow up of 4.5 years, liver function tests did not identify “occult” bile duct strictures. Independent predictors for outcome were the number of stents inserted during the first procedure (OR 3.2 per stent; 95% CI 1.3 to 8.4), injuries classified as Bismuth III (OR 0.12; 95% CI 0.02 to 0.91) and IV (OR 0.04; CI 0.003 to 0.52) and endoscopic stenting before referral (OR 0.24; CI 0.06 to 0.88). Introduction of sequential insertion of multiple stents did not improve outcome (before 77% vs after 66%, p = 0.25), but more patients reported stent related pain (before 11% vs after 28%, p = 0.02).

Conclusions

In patients with postoperative bile duct leakage and/or strictures, endoscopic stent treatment should be regarded as the primary treatment of choice because of its safety and favourable long term outcome. Apart from the early insertion of more than one stent, the benefit of sequential insertion of multiple stents did not become readily apparent from this series.

Bile duct injury (BDI) occurs in 0.2 to 1.4% of patients following laparoscopic cholecystectomy and is a severe surgical complication.1,2,3,4 BDI related morbidity is illustrated by increased hospital stay, poor long term quality of life and high rates of malpractice litigation.5,6,7,8 Although surgical reconstruction, mainly a hepaticojejunostomy, is a procedure associated with low mortality and low morbidity if performed in a tertiary centre, it is indicated only in selected patients with BDI; a population‐based study from the USA demonstrated the detrimental effect of BDI on survival in patients who underwent surgical reconstruction.9 The majority of biliary injuries, including cystic duct leakage, common bile duct (CBD) leakage or bile duct strictures, can be treated successfully in 70–95% of patients by means of endoscopic or radiological interventions.10,11,12,13,14,15,16

It has been suggested that endoscopic treatment is associated with an increased risk of re‐stenosis and biliary cirrhosis followed by end‐stage liver disease. However, reliable data about the long term outcome of endoscopic management of BDI are scarce, and predictive factors for successful outcome are unreported. Several years ago, reports from uncontrolled studies indicated that a more aggressive type of dilation treatment in patients with bile duct strictures, based on the sequential insertion of multiple stents, may be associated with a more favourable outcome, and this treatment policy has been adopted in our clinic since the end of 2001.17,18,19,20

The purpose of this study was to analyse the short and long term outcome of stent treatment in BDI patients (including liver function tests after long term follow up) and to determine factors that are predictive of successful outcome in patients who are stented for a bile duct stricture. In addition, the outcomes of patients treated before and after the introduction of sequential insertion of multiple stents were compared.

18.

Objectives

To assess the effects of intravenous magnesium on converting acute onset atrial fibrillation to sinus rhythm, reducing ventricular response and risk of bradycardia.

Design and data sources

Randomised controlled trials evaluating intravenous magnesium to treat acute onset atrial fibrillation from MEDLINE (1966 to 2006), EMBASE (1990 to 2006) and Cochrane Controlled Trials Register without language restrictions.

Review methods

Two researchers independently performed the literature search and data extraction.

Results

10 randomised controlled trials, including a total of 515 patients with acute onset atrial fibrillation, were considered. Intravenous magnesium was not effective in converting acute onset atrial fibrillation to sinus rhythm when compared to placebo or an alternative antiarrhythmic drug. When compared to placebo, adding intravenous magnesium to digoxin increased the proportion of patients with a ventricular response <100 beats/min (58.8% vs 32.6%; OR 3.2, 95% CI 1.93 to 5.42; p<0.001). When compared to calcium antagonists or amiodarone, intravenous magnesium was less effective in reducing the ventricular response (21.4% vs 58.5%; OR 0.19, 95% CI 0.09 to 0.44; p<0.001) but also less likely to induce significant bradycardia or atrioventricular block (0% vs 9.2%; OR 0.13, 95% CI 0.02 to 0.76; p = 0.02). The use of intravenous magnesium was associated with transient minor symptoms of flushing, tingling and dizziness in about 17% of the patients (OR 14.5, 95% CI 3.7 to 56.7; p<0.001).
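The pooled ORs above come from combining per-trial 2×2 tables meta-analytically. The sketch below shows inverse-variance fixed-effect pooling on the log-OR scale; the trial counts are hypothetical, and the review's actual pooling method is not specified in the abstract.

```python
import math

def pooled_or(tables):
    """Fixed-effect (inverse-variance) pooling of per-trial odds ratios.
    Each table is (a, b, c, d): events / non-events on magnesium,
    events / non-events on the comparator."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)  # weight = 1 / var(ln OR)
        num += w * log_or
        den += w
    mean, se = num / den, math.sqrt(1.0 / den)
    return tuple(math.exp(v) for v in (mean, mean - 1.96 * se, mean + 1.96 * se))

# Hypothetical per-trial counts for illustration only.
print(pooled_or([(20, 14, 11, 23), (15, 10, 8, 20), (30, 21, 16, 33)]))  # OR, 95% CI
```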

Conclusions

Adding intravenous magnesium to digoxin reduces a fast ventricular response in acute onset atrial fibrillation. The effect of intravenous magnesium on the ventricular rate, and its cardiovascular side effects, are less significant than those of calcium antagonists or amiodarone. Intravenous magnesium can be considered a safe adjunct to digoxin for controlling the ventricular response in atrial fibrillation.

Atrial fibrillation is the commonest cardiac arrhythmia in clinical practice. Atrial fibrillation affects an estimated 2.2 million adults in the USA and has an estimated incidence of 1.0 per 1000 person‐years in the UK.1,2 Atrial fibrillation is associated with significant morbidity and mortality. Patients in atrial fibrillation have a fivefold increased risk of thromboembolic stroke and a twofold increased risk of death when compared to the general population.3,4 Atrial fibrillation can also cause tachycardia‐induced heart failure if the rapid ventricular response is sustained for a prolonged period of time.5

Most patients in acute atrial fibrillation have no significant haemodynamic instability and, as such, pharmacological therapy is usually the initial treatment of choice. A variety of pharmacological agents can be used, either to control the rapid ventricular response or to convert the arrhythmia to sinus rhythm, with variable results. The agents evaluated include digoxin, beta‐blockers, calcium antagonists, flecainide, propafenone, ibutilide, and amiodarone.6 However, in patients with impaired left ventricular function, digoxin or amiodarone is the pharmacological agent of choice because of their minimal negative inotropic effects.6

Magnesium has many significant physiological and pharmacological effects on different organ systems. The mechanisms of its action include calcium antagonism, regulation of energy transfer and membrane stabilisation.7 Intravenous magnesium has a high therapeutic‐to‐toxic ratio and minimal negative inotropic effects.8,9 Intravenous magnesium can reduce automaticity,10 atrioventricular nodal conduction,11,12 polymorphic ventricular tachycardia due to prolonged QT interval, and digoxin‐induced arrhythmias.7,8,13 Prophylactic use of intravenous magnesium can also reduce the occurrence of atrial fibrillation after cardiac surgery.14 However, there are no large randomised controlled studies or meta‐analyses that evaluate intravenous magnesium as an antiarrhythmic agent in the setting of acute onset atrial fibrillation.

Rhythm control by pharmacological agents is often most effective when the drug is initiated within 10 days of the onset of atrial fibrillation.15 We hypothesised that intravenous magnesium could be an effective antiarrhythmic agent in patients with acute onset atrial fibrillation. In this meta‐analysis we assessed the potential beneficial and harmful effects of intravenous magnesium, compared to placebo or an alternative antiarrhythmic agent, in the setting of acute onset atrial fibrillation (<7 days). The end‐points assessed included rhythm control, ventricular response <100 beats/min, bradycardia, hypotension, and other side effects.

19.

Objectives

(1) To investigate whether inflammatory synovial tissues from patients with rheumatoid arthritis (RA) express endothelial protein C receptor (EPCR) and (2) to determine the major cell type(s) that EPCR is associated with and whether EPCR functions to mediate the effects of activated protein C (APC) on these cells.

Methods

EPCR, CD68 and PC/APC in synovial tissues were detected by immunostaining and in situ PCR. Monocytes were isolated from the peripheral blood of patients with RA and treated with APC, lipopolysaccharide (LPS) and/or the EPCR blocking antibody RCR252. Cells and supernatants were collected for RT‐PCR, western blotting, enzyme‐linked immunosorbent assay and chemotaxis assay.

Results

EPCR was expressed by both osteoarthritis (OA) and RA synovial tissues but was markedly increased in RA synovium. EPCR was colocalised with PC/APC mostly on CD68 positive cells in synovium. In RA monocytes, APC upregulated EPCR expression and reduced monocyte chemoattractant protein‐1‐induced chemotaxis of monocytes by approximately 50%. APC also completely suppressed LPS‐stimulated NF‐κB activation and attenuated TNF‐α protein by more than 40% in RA monocytes. The inhibitory effects of APC were reversed by RCR252, indicating that EPCR is required.

Conclusions

Our results demonstrate for the first time that EPCR is expressed by synovial tissues, particularly in RA, where it colocalises with PC/APC on monocytes/macrophages. In addition, APC inhibits the migration and activation of RA monocytes via EPCR. These inhibitory effects on RA monocytes suggest that the PC pathway may have a beneficial therapeutic effect in RA.

Rheumatoid arthritis (RA) is a chronic autoimmune disease with persistent inflammation of multiple synovial joints, which results in progressive destruction of bone and cartilage.1,2 It is characterised by the infiltration of inflammatory cells (neutrophils, monocytes and lymphocytes) into the synovial compartment and the production of inflammatory mediators. In RA, monocytes migrate into the synovium and become activated macrophages, which secrete significant amounts of inflammatory cytokines, such as interleukin (IL)‐1 and tumour necrosis factor (TNF)‐α, and proteases, which are important in initiating, propagating and maintaining synovial inflammation.3 Macrophages can also differentiate into dendritic cells and osteoclasts,4 the latter being recognised as the key cellular effectors of pathological bone erosion in arthritis.5 In rheumatoid synovial sections, most synovial lining cells are highly activated macrophage‐like cells functioning as antigen‐processing and antigen‐presenting cells to T lymphocytes.6 Macrophages are thus critically involved in the pathogenesis of RA, not only by producing a variety of pro‐inflammatory cytokines and chemokines, but also by contributing to cartilage and bone destruction.

Activated protein C (APC) is a 61‐kDa serine protease derived from its vitamin K‐dependent plasma precursor, protein C (PC). Activation of PC occurs on the endothelial cell surface and is triggered by a complex formed between thrombin and thrombomodulin. The conversion to APC is augmented in the presence of its specific receptor, endothelial protein C receptor (EPCR),7 which is expressed on the surface of endothelial cells, keratinocytes8 and some leucocytes, including eosinophils, neutrophils and monocytes.9

APC acts as an anticoagulant by neutralising the procoagulant activities of factors Va and VIIIa and inhibiting thrombin generation. In addition, APC exerts significant anti‐inflammatory properties, associated with a decrease in pro‐inflammatory mediators and a reduction of leucocyte recruitment.10 Many anti‐inflammatory properties of APC are mediated through EPCR, which itself can independently exert anti‐inflammatory effects.11,12,13 For example, severe EPCR deficiency adversely affects the survival and cardiac function of mice challenged by endotoxin infusion.13 Baboons treated with an antibody that blocks PC binding to EPCR respond lethally to normally sublethal concentrations of E coli and exhibit disseminated intravascular coagulation, intense neutrophil influx into the tissues and elevation of inflammatory cytokines, indicating that EPCR provides a critical step in the host defence against E coli.12 Overexpression of EPCR protects transgenic mice from endotoxin‐induced injury.14 In addition, recent findings suggest that EPCR is required for embryo survival and development.15,16,17

PC/APC is elevated in RA synovial fluid and synovial joints, where it colocalises with MMP‐2.18 However, whether EPCR is present in the inflamed joint is unknown. The purpose of this study was: (1) to determine whether inflammatory (RA) synovial tissue expresses EPCR and, if so, whether these levels are higher than in non‐inflammatory OA synovial tissue; and (2) to elucidate the major cell type(s) EPCR is associated with and whether it functions to mediate the effects of APC on these cells.

20.

Objectives

To derive age‐ and sex‐specific estimates of transition rates from advanced adenomas to colorectal cancer by combining data from a nationwide screening colonoscopy registry with national data on colorectal cancer (CRC) incidence.

Design

Registry based study.

Setting

National screening colonoscopy programme in Germany.

Patients

Participants of screening colonoscopy in 2003 and 2004 (n = 840 149).

Main outcome measures

Advanced adenoma prevalence, colorectal cancer incidence, and annual and 10 year cumulative risk of developing CRC among carriers of advanced adenomas, according to sex and age (range 55–80+ years).

Results

The age gradient is much stronger for CRC incidence than for advanced adenoma prevalence. As a result, projected annual transition rates from advanced adenomas to CRC strongly increase with age (from 2.6% in age group 55–59 years to 5.6% in age group ⩾80 years among women, and from 2.6% in age group 55–59 years to 5.1% in age group ⩾80 years among men). Projections of 10 year cumulative risk increase from 25.4% at age 55 years to 42.9% at age 80 years in women, and from 25.2% at age 55 years to 39.7% at age 80 years in men.
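As a rough consistency check (our own back‐calculation, not a figure from the study): if the annual transition rate p were constant over a decade, the 10 year cumulative risk would be

\[ 1-(1-p)^{10}, \qquad \text{e.g.}\quad 1-(1-0.026)^{10} \approx 0.23, \]

slightly below the reported 25.4% at age 55 in women, which is consistent with age‐specific annual rates rising across the 10 year projection window rather than staying constant.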

Conclusions

Advanced adenoma transition rates are similar in both sexes, but show a strong age gradient. Our estimates of transition rates in older age groups are in line with previous, age‐independent estimates derived from small case series in the pre‐colonoscopy era; our projections for younger age groups, however, are considerably lower. These findings may have important implications for the design of CRC screening programmes.

Most colorectal cancers (CRCs) develop from adenomas, among which “advanced” adenomas are considered to be the clinically relevant precursors of CRC. The natural history of colorectal adenomas is a decisive factor for the design of CRC screening measures and their cost effectiveness. Since advanced adenomas need to be removed once they are detected, any direct observation of their natural history would be unethical. Thus, available estimates for the progression of adenomas mostly stem from radiological surveillance data or from autopsy series collected prior to the colonoscopy era.1,2,3,4,5,6,7,8,9 However, these data are rather vague, as they were derived from small and potentially selective samples. For example, a major source for the estimation of adenoma transition rates has been a retrospective review of Mayo Clinic records from 226 patients with colonic polyps ⩾10 mm in diameter, in whom periodic radiographical examination of the colon was performed and 21 invasive carcinomas were identified during a mean follow up of 9 years.9 Despite the undoubted usefulness of the available data sources from the pre‐colonoscopy era, sample size limitations did not allow reliable estimates of adenoma transition rates according to key factors such as age and sex. Accordingly, a common estimate of these transition rates for all ages and both sexes has generally been assumed in previous studies on the effectiveness and cost effectiveness of CRC screening.10,11,12,13,14,15,16,17 Given that sensitivity analyses in these studies showed the advanced adenoma–carcinoma transition rate to be a very influential parameter,10,11,13,18 its variation by age and sex might have a large impact on the relative effectiveness and cost effectiveness of various screening schemes.

The aim of this paper was to estimate the risk of developing CRC according to age and sex among carriers of advanced adenomas, by combining data from a large national colonoscopy screening database with national data on CRC incidence in Germany.
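The logic behind such projections can be sketched in one line (our reading of the registry‐plus‐incidence design; the symbols are ours, for illustration, and the study's exact estimator may differ in detail). If essentially all CRCs arise from advanced adenomas, then within an age and sex stratum the annual transition rate is approximately the national CRC incidence divided by the advanced adenoma prevalence:

\[ p_{\text{transition}}(\text{age},\text{sex}) \approx \frac{I_{\mathrm{CRC}}(\text{age},\text{sex})}{P_{\text{adv. adenoma}}(\text{age},\text{sex})}. \]

A numerator that rises steeply with age over a denominator that rises only mildly then yields exactly the increasing transition rates reported in the results above.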
