Similar articles (20 results)
1.

Objective

To study gender differences in management and outcome in patients with non‐ST‐elevation acute coronary syndrome.

Design, setting and patients

Cohort study of 53 781 consecutive patients (37% women) from the Register of Information and Knowledge about Swedish Heart Intensive care Admissions (RIKS‐HIA), with a diagnosis of either unstable angina pectoris or non‐ST‐elevation myocardial infarction. All patients were admitted to intensive coronary care units in Sweden, between 1998 and 2002, and followed for 1 year.

Main outcome measures

Treatment intensity and in‐hospital, 30‐day and 1‐year mortality.

Results

Women were older (73 vs 69 years, p<0.001) and more likely to have a history of hypertension and diabetes, but less likely to have a history of myocardial infarction or revascularisation. After adjustment, there were no major differences in acute pharmacological treatment or prophylactic medication at discharge. Revascularisation was, however, performed more often in men even after adjustment (OR 1.15; 95% CI, 1.09 to 1.21). After adjustment, there was no significant difference in in‐hospital (OR 1.03; 95% CI, 0.94 to 1.13) or 30‐day (OR 1.07; 95% CI, 0.99 to 1.15) mortality, but at 1 year being male was associated with higher mortality (OR 1.12; 95% CI, 1.06 to 1.19).

Conclusion

Although women are somewhat less intensively treated, especially regarding invasive procedures, after adjustment for differences in background characteristics, they have better long‐term outcomes than men.Since the beginning of the 1990s there have been numerous studies on gender differences in management of acute coronary syndromes (ACS). Many earlier studies,1,2,3,4,5,6,7,8 but not all,9 found that women were treated less intensively in the acute phase. In some of the studies, after adjustment for age, comorbidity and severity of the disease, most of the differences disappeared.6,7 There is also conflicting evidence on gender differences in evidence‐based treatment at discharge.1,3,5,6,8,10,11After acute myocardial infarction (AMI), a higher short‐term mortality in women is documented in several studies.2,5,6,7,12,13,14 After adjustment for age and comorbidity some difference has usually,2,5,12,13 but not always,11,14 remained. On the other hand, most studies assessing long‐term outcome have found no difference between the genders, or a better outcome in women, at least after adjustment.7,10,13,14 Earlier studies focusing on gender differences in outcome after an acute coronary syndrome have usually studied patients with AMI, including both ST‐elevation myocardial infarction and non‐ST‐elevation myocardial infarction (NSTEMI).2,5,6,7,12,13,14 However, the pathophysiology and initial management differs between these two conditions,15 as does outcome according to gender.11,16 In patients with NSTEMI or unstable angina pectoris (UAP), women seem to have an equal or better outcome, after adjustment for age and comorbidity.1,4,8,11,16,17 Studies on differences between genders, in treatment and outcome, in real life, contemporary, non‐ST‐elevation acute coronary syndrome (NSTE ACS) populations, large enough to make necessary adjustments for confounders, are lacking.The aim of this study was to assess gender differences in background characteristics, management and outcome in a real‐life intensive coronary care unit (ICCU) population, with NSTE ACS.  相似文献   
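
Figures such as "OR 1.15; 95% CI, 1.09 to 1.21" above come from multivariable logistic regression: the outcome is modelled against sex plus the adjustment covariates, and the coefficient and its confidence bounds are exponentiated. The sketch below is a minimal illustration on synthetic data; the variable names, covariate set and effect sizes are assumptions, not the RIKS‐HIA analysis.

```python
# Minimal sketch (synthetic data, assumed effect sizes): how an "adjusted OR"
# for an outcome by sex is typically obtained with logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.normal(71, 10, n).round(),
    "diabetes": rng.integers(0, 2, n),
})
# Simulated 1-year mortality: age and diabetes raise risk; a small male effect is built in.
logit_p = -6.0 + 0.05 * df["age"] + 0.5 * df["diabetes"] + 0.12 * df["male"]
df["died_1y"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

model = smf.logit("died_1y ~ male + age + diabetes", data=df).fit(disp=False)
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(or_table.loc[["male"]])  # adjusted OR for male sex with its 95% CI
```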

2.
Can CD4+ and CD8+ “memory” T cells that are generated and maintained in the context of low-level virus persistence protect, in the absence of antibody, against a repeat challenge with the same pathogen? Although immune T cells exert effective, long-term control of a persistent γ-herpesvirus (γHV68) in Ig–/– μMT mice, subsequent exposure to a high dose of the same virus leads to further low-level replication in the lung. This lytic phase in the respiratory tract is dealt with effectively by the recall of memory T cells induced by a γHV68 recombinant (M3LacZ) that does not express the viral M3 chemokine binding protein. At least for the CD8+ response, greater numbers of memory T cells confer enhanced protection in the M3LacZ-immune mice. However, neither WT γHV68 nor the minimally persistent M3LacZ primes the T cell response to the extent that a WT γHV68 challenge fails to establish latency in the μMT mice. Memory CD4+ and CD8+ T cells thus act together to limit γHV68 infection but are unable to provide absolute protection against a high-dose, homologous challenge.Along-term debate in the immunology community concerns the importance of antigen persistence for maintaining T cell memory (13). The discussion is often confused by different interpretations of the term “memory” (4). If we look at memory simply as the capacity to maintain antigen-specific, “resting” T cell numbers indefinitely, then it seems that the continued presence of the particular MHC class I or class II glycoprotein plus peptide (epitope) is certainly not required (2, 3, 5, 6). However, if memory is used in the sense of the protective immunity that might be the focus of a candidate T cell-based vaccine, then the case that continued (or sporadic) reexposure to the inducing antigen is advantageous could well have merit (7).Acutely activated T cells deal very effectively with a homologous virus challenge (8). On the other hand, the time taken to recall “resting” memory allows an invading organism to become established (9), although the pathogen may either be cleared more rapidly or (for an agent that persists) be held to a lower “set point” (10). Although cytotoxic T cells can be induced very rapidly in vivo (11, 12), their localization to (for example) the respiratory mucosa may be delayed by the need for further activation and proliferation in the lymphoid tissue (13). Could protection be made more immediate by achieving a continuing state of enhanced lymphocyte turnover (14) and activation?The herpesviruses (HVs) provide a natural system for analyzing immunity in the context of controlled virus persistence (15, 16). Vaccination strategies with the γHVs, like Kaposi''s sarcoma virus (HHV8) and Epstein–Barr virus (EBV), can be investigated (1722) with the murine γ-herpesvirus 68 (γHV68), a virus that has high level sequence homology with HHV8 and a pathogenesis similar to that of EBV (1719). Respiratory exposure of C57BL/6J (B6) mice to γHV68 induces transient, lytic infection of the respiratory tract and latency in B lymphocytes and macrophages (20, 21). Virus is not normally detected by plaque assay of lung homogenates for >10–12 days after the initial challenge. Genetically disrupted μMT mice that lack both B cells and antibody (Ig–/–) also clear γHV68 from the lung and show little evidence of latency by infectious center assay (22). However, a more sensitive limiting dilution analysis (LDA) showed that γHV68 persists in macrophages and, perhaps, in other cells from Ig–/– mice after both i.p. and intranasal (i.n.) 
challenge (21, 23).The present experiments ask whether the combination of CD4+ and CD8+ T cell memory (24) in mice infected once with γHV68 (25, 26) protects against superinfection with the same virus. Antibody is, of course, likely to neutralize the majority of input virus in this circumstance (2729). Therefore, the experiments were done with Ig–/– μMT mice (30) that had been infected with either WT γHV68 or with a mutant virus (M3LacZ) that causes a normal, lytic infection in the lung, but a much lower level of latency in the lymphoid tissue of Ig+/+ controls (22). It is also the case that the protective capacity of immune CD4+ and CD8+ T cells that are maintained where there is the possibility of continued, low level antigen challenge has not (to our knowledge) been analyzed previously for any virus system.  相似文献   
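
The limiting dilution analysis (LDA) mentioned above is usually read through a single-hit Poisson model: if latently infected cells are distributed randomly among replicate wells, the fraction of negative wells at an input dose of c cells per well is exp(-f*c), where f is the frequency of infected cells. A minimal sketch with invented well counts (not data from this study) follows.

```python
# Single-hit Poisson model behind limiting dilution analysis (LDA).
# Well counts below are invented for illustration only.
import numpy as np

cells_per_well = np.array([1e3, 3e3, 1e4, 3e4, 1e5])   # input cell dose per well
wells_total = np.array([24, 24, 24, 24, 24])
wells_negative = np.array([23, 21, 17, 9, 2])           # wells with no virus reactivation

frac_neg = wells_negative / wells_total
# ln(frac_neg) = -f * cells_per_well  ->  least squares fit through the origin
f_hat = -np.sum(cells_per_well * np.log(frac_neg)) / np.sum(cells_per_well ** 2)
print(f"estimated frequency: about 1 latently infected cell per {1 / f_hat:,.0f} cells")
# Equivalently, the dose at which about 37% of wells stay negative corresponds to 1/f.
```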

3.

Objective

Socioeconomic status (SES) is inversely associated with coronary heart disease (CHD) risk. Cumulative pathogen burden may also predict future CHD. The hypothesis was tested that lower SES is associated with a greater pathogen burden, and that pathogen burden accounts in part for SES differences in cardiovascular risk factors.

Methods

This was a cross‐sectional observational study involving the clinical examination of 451 men and women aged 51–72 without CHD, recruited from the Whitehall II epidemiological cohort. SES was defined by grade of employment, and pathogen burden by summing positive serostatus for Chlamydia pneumoniae, cytomegalovirus and herpes simplex virus 1. Cardiovascular risk factors were also assessed.

Results

Pathogen burden averaged 1.94 (SD 0.93) in the lower grade group, compared with 1.64 (0.97) and 1.64 (0.93) in the intermediate and higher grade groups (p = 0.011). Pathogen burden was associated with a higher body mass index, waist/hip ratio, blood pressure and incidence of diabetes. There were SES differences in waist/hip ratio, high‐density lipoprotein‐cholesterol, fasting glucose, glycated haemoglobin, lung function, smoking and diabetes. The SES gradient in these cardiovascular risk factors was unchanged when pathogen burden was taken into account statistically.

Conclusions

Although serological signs of infection with common pathogens are more frequent in lower SES groups, their distribution across the social gradient does not match the linear increases in CHD risk present across higher, intermediate and lower SES groups. Additionally, pathogen burden does not appear to mediate SES differences in cardiovascular risk profiles.There is a socioeconomic gradient in coronary heart disease (CHD) mortality and cardiovascular disease risk in the USA, UK and many other countries.1,2 Lower socioeconomic status (SES) is associated with a range of cardiovascular risk factors including smoking, adverse lipid profiles, abdominal adiposity, glucose intolerance and inflammatory markers.3,4,5,6,7 Both early life SES and adult socioeconomic position appear to contribute to the social gradient.5,8A history of infection may contribute to cardiovascular disease risk by stimulating sustained vascular inflammation. Evidence concerning the relevance of individual pathogens is mixed, but the cumulative pathogen burden, defined by positive serostatus for a range of pathogens, has been associated with coronary artery disease and carotid atherosclerosis in case–control9,10,11 and longitudinal cohort studies.12,13,14 Pathogen burden is also related to cardiovascular risk markers such as endothelial dysfunction,15 low high‐density lipoprotein (HDL)‐cholesterol16 and insulin resistance,17 in some but not all studies.9,18It is plausible that pathogen burden could contribute to SES differences in cardiovascular disease risk. Exposure to infection is greater in lower SES groups, particularly in early life,19 and childhood infection is associated with endothelial dysfunction.20 We therefore tested the hypothesis that lower SES is associated with greater cumulative pathogen burden in healthy middle‐aged and older adults. Seropositivity was measured for three pathogens, Chlamydia pneumoniae, cytomegalovirus (CMV) and herpes simplex virus 1 (HSV‐1), that have been associated with cardiovascular disease risk,21,22,23 and have contributed to studies of cumulative pathogen burden.10,12,14,15 We also determined whether variations in pathogen burden accounted for SES differences in cardiovascular risk factors.  相似文献   
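
In this design, pathogen burden is simply the count of positive serologies (0 to 3), and the mediation question reduces to whether the SES coefficient for a risk factor shrinks once burden is added to the model. The sketch below illustrates that check on synthetic data with assumed effect sizes; it is not the Whitehall II dataset.

```python
# Illustrative sketch (synthetic data): pathogen burden as a 0-3 serostatus count,
# and a crude mediation check: does adjusting for burden attenuate the SES effect?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 451
grade = rng.integers(0, 3, n)                               # 0 higher, 1 intermediate, 2 lower SES
sero = rng.random((n, 3)) < (0.5 + 0.1 * grade[:, None])    # C pneumoniae, CMV, HSV-1 (assumed rates)
burden = sero.sum(axis=1)
whr = 0.85 + 0.02 * grade + 0.005 * burden + rng.normal(0, 0.05, n)  # waist/hip ratio, assumed model
df = pd.DataFrame({"grade": grade, "burden": burden, "whr": whr})

m_unadjusted = smf.ols("whr ~ grade", data=df).fit()
m_adjusted = smf.ols("whr ~ grade + burden", data=df).fit()
print("SES coefficient, unadjusted:         ", round(m_unadjusted.params["grade"], 4))
print("SES coefficient, adjusted for burden:", round(m_adjusted.params["grade"], 4))
```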

4.

Objectives

To evaluate inter‐observer agreement for microscopic measurement of inflammation in synovial tissue using manual quantitative, semiquantitative and computerised digital image analysis.

Methods

Paired serial sections of synovial tissue, obtained at arthroscopic biopsy of the knee from patients with rheumatoid arthritis (RA), were stained immunohistochemically for T lymphocyte (CD3) and macrophage (CD68) markers. Manual quantitative and semiquantitative scores for sub‐lining layer CD3+ and CD68+ cell infiltration were independently derived in 6 international centres. Three centres derived scores using computerised digital image analysis. Inter‐observer agreement was evaluated using Spearman''s Rho and intraclass correlation coefficients (ICCs).

Results

Paired tissue sections from 12 patients were selected for evaluation. Satisfactory inter‐observer agreement was demonstrated for all 3 methods of analysis. Using manual methods, ICCs for measurement of CD3+ and CD68+ cell infiltration were 0.73 and 0.73 for quantitative analysis and 0.83 and 0.78 for semiquantitative analysis, respectively. Corresponding ICCs of 0.79 and 0.58 were observed for the use of digital image analysis. All ICCs were significant at levels of p<0.0001. At each participating centre, use of computerised image analysis produced results that correlated strongly and significantly with those obtained using manual measurement.

Conclusion

Strong inter‐observer agreement was demonstrated for microscopic measurement of synovial inflammation in RA using manual quantitative, semiquantitative and computerised digital methods of analysis. This further supports the development of these methods as outcome measures in RA.Microscopic measurement of inflammation in synovial tissue is employed globally by centres working in the field of arthritis research.1 Adequate and comparable synovial tissue can be safely obtained using blind‐needle biopsy or rheumatological arthroscopy.2,3,4 In the acquired samples, various parameters may be examined, including cell populations, vascularity, cytokines and adhesion molecules. In rheumatoid arthritis (RA), many of these have been found to relate to disease activity, severity, outcome, and to exhibit a change after treatment with corticosteroids, disease‐modifying antirheumatic drugs (DMARDs) and biological therapy.5,6,7,8,9,10,11,12,13,14,15Several analysis techniques have been employed to measure these parameters. Semiquantitative analysis is a relatively quick method and therefore may facilitate examining large quantities of tissue.7 Quantitative analysis is time‐consuming but more sensitive than semiquantitative scoring to change in individual patients.16 It has been shown in previous studies that these methods can reflect overall joint inflammation when applied to relatively limited amounts of synovial tissue, even though inflammation may differ widely between individual sites in a single joint.17,18,19 Computerised digital image analysis has been applied more recently in this area and has been shown to correlate well with conventional methods of measurement.20,21,22This multi‐centre study was undertaken to standardise and validate the methods mentioned previously by evaluating inter‐observer agreement between multiple examiners in the measurement of selected parameters of inflammation in RA synovial tissue by manual quantitative, semiquantitative and computerised image analysis.  相似文献   
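
The intraclass correlation coefficients above are agreement statistics computed from a subjects-by-raters score matrix. Below is a minimal sketch of one common variant, the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss, run on invented scores for 12 sections and 6 observers; the study may have used a different ICC form.

```python
# Sketch of Shrout & Fleiss ICC(2,1): two-way random effects, absolute agreement,
# single rater. The score matrix below is invented, not the study data.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_raters) matrix of scores."""
    n, k = x.shape
    grand = x.mean()
    ss_total = np.sum((x - grand) ** 2)
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(2)
true_level = rng.uniform(0, 100, 12)                        # 12 tissue sections
scores = true_level[:, None] + rng.normal(0, 10, (12, 6))   # 6 observers with measurement noise
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```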

5.

Objectives

To derive age and sex specific estimates of transition rates from advanced adenomas to colorectal cancer by combining data of a nationwide screening colonoscopy registry and national data on colorectal cancer (CRC) incidence.

Design

Registry based study.

Setting

National screening colonoscopy programme in Germany.

Patients

Participants of screening colonoscopy in 2003 and 2004 (n = 840 149).

Main outcome measures

Advanced adenoma prevalence, colorectal cancer incidence, and annual and 10 year cumulative risk of developing CRC among carriers of advanced adenomas, according to sex and age (range 55–80+ years).

Results

The age gradient is much stronger for CRC incidence than for advanced adenoma prevalence. As a result, projected annual transition rates from advanced adenomas to CRC strongly increase with age (from 2.6% in age group 55–59 years to 5.6% in age group ⩾80 years among women, and from 2.6% in age group 55–59 years to 5.1% in age group ⩾80 years among men). Projections of 10 year cumulative risk increase from 25.4% at age 55 years to 42.9% at age 80 years in women, and from 25.2% at age 55 years to 39.7% at age 80 years in men.

Conclusions

Advanced adenoma transition rates are similar in both sexes, but there is a strong age gradient for both sexes. Our estimates of transition rates in older age groups are in line with previous estimates derived from small case series in the pre‐colonoscopy era independent of age. However, our projections for younger age groups are considerably lower. These findings may have important implications for the design of CRC screening programmes.Most colorectal cancers (CRCs) develop from adenomas, among which “advanced” adenomas are considered to be the clinically relevant precursors of CRC. The natural history of colorectal adenomas is a decisive factor for the design of CRC screening measures and their cost effectiveness. Since advanced adenomas need to be removed once they are detected, any direct observation of their natural history would be unethical. Thus, available estimates for the progression of adenomas mostly stem from radiological surveillance data or from autopsy series collected prior to the colonoscopy era.1,2,3,4,5,6,7,8,9 However, these data are rather vague as they were derived from small and potentially selective samples. For example, a major source for the estimation of adenoma transition rates has been a retrospective review of Mayo Clinic records from 226 patients with colonic polyps ⩾10 mm in diameter in whom periodic radiographical examination of the colon was performed, and in whom 21 invasive carcinomas were identified during a mean follow up of 9 years.9 Despite the undoubted usefulness of available data sources from the pre‐colonoscopy era, sample size limitations did not allow reliable estimates of adenoma transition rates according to key factors, such as age and sex. Accordingly, a common estimate of these transition rates for all ages and both sexes has generally been assumed in previous studies on effectiveness and cost effectiveness of CRC screening.10,11,12,13,14,15,16,17 Given that sensitivity analyses in these studies showed that the advanced adenoma–carcinoma transition rate represents a very influential parameter,10,11,13,18 its variation by age and sex might have a large impact on relative effectiveness and cost effectiveness of various screening schemes.The aim of this paper was to estimate risk for developing CRC according to age and sex among carriers of advanced adenomas by combining data from a large national colonoscopy screening database and national data on CRC incidence in Germany.  相似文献   
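
The projection has two steps: an annual transition rate approximated as age- and sex-specific CRC incidence divided by advanced-adenoma prevalence, and a 10-year cumulative risk obtained by compounding the annual rates over the ages a carrier passes through. The sketch below uses placeholder rates of roughly the reported magnitude, not the registry estimates.

```python
# Sketch of the projection logic: compound age-specific annual transition rates
# into a 10-year cumulative risk, and approximate an annual rate as
# incidence / prevalence. All numbers are placeholders, not registry estimates.
def cumulative_risk(annual_rates):
    """1 minus the probability of escaping transition in every year."""
    escape = 1.0
    for p in annual_rates:
        escape *= (1.0 - p)
    return 1.0 - escape

# Hypothetical annual advanced adenoma -> CRC transition rates for ages 55..64.
annual = [0.026, 0.026, 0.027, 0.028, 0.029, 0.030, 0.031, 0.032, 0.033, 0.034]
print(f"10-year cumulative risk from age 55: {cumulative_risk(annual):.1%}")

# Incidence/prevalence approximation for a single age group (placeholder values):
crc_incidence_per_year = 0.0013     # annual CRC incidence in the age group
advanced_adenoma_prevalence = 0.05  # prevalence of advanced adenomas
print(f"implied annual transition rate: {crc_incidence_per_year / advanced_adenoma_prevalence:.1%}")
```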

6.

Objective

Myocardial scintigraphy and/or conventional angiography (CA) are often performed before cardiac surgery in an attempt to identify unsuspected coronary artery disease which might result in significant cardiac morbidity and mortality. Multidetector CT coronary angiography (MDCTCA) has a recognised high negative predictive value and may provide a non‐invasive alternative in this subset of patients. The aim of this study was to evaluate the clinical value of MDCTCA as a preoperative screening test in candidates for non‐coronary cardiac surgery.

Methods

132 patients underwent MDCTCA (Somatom Sensation 16 Cardiac, Siemens) in the assessment of the cardiac risk profile before surgery. Coronary arteries were screened for ⩾50% stenosis. Patients without significant stenosis (Group 1) underwent surgery without any adjunctive screening tests while all patients with coronary lesions ⩾50% at MDCTCA (Group 2) underwent CA.

Results

16 patients (12.1%) were excluded due to poor image quality. 72 patients without significant coronary stenosis at MDCTCA underwent surgery. 30 out of 36 patients with significant (⩾50%) coronary stenosis at MDCTCA and CA underwent adjunctive bypass surgery or coronary angioplasty. In 8 patients, MDCTCA overestimated the severity of the coronary lesions (>50% MDCTCA, <50% CA). No severe cardiovascular perioperative events such as myocardial ischaemia, myocardial infarction or cardiac failure occurred in any patient in Group 1.

Conclusions

MDCTCA seems to be effective as a preoperative screening test prior to non‐coronary cardiac surgery. In this era of cost containment and optimal care of patients, MDCTCA is able to provide coronary vessel and ventricular function evaluation and may become the method of choice for the assessment of a cardiovascular risk profile prior to major surgery.Since its introduction in the 1960s,1 conventional coronary angiography (CA) has been considered the gold standard for the diagnosis of coronary artery disease because of its high contrast, temporal and spatial resolution.2,3,4 In the past few years, we have witnessed a considerable increase in diagnostic and interventional procedures. Despite the high degree of accuracy (73–89%) of non‐invasive diagnostic tests such as exercise ECG, myocardial scintigraphy and stress‐echocardiography in detecting myocardial ischaemia,5 about 20% of patients undergoing CA due to a positive result of these non‐invasive tests, had no evidence of coronary lesions.6,7Multidetector CT (MDCT), introduced into clinical practice in 2000, has demonstrated excellent technical characteristics for coronary artery evaluation. Results in the literature show a high degree of diagnostic accuracy in detecting significant coronary lesions and, particularly, an excellent capability of excluding them, due to negative predictive values ranging from 96 to 99%.8,9,10,11,12,13,14,15,16,17,18Patients who are candidates for major non‐coronary cardiac or vascular surgery, such as heart valve replacement, aortic aneurysm and aortic dissection, require a complete assessment of potentially dangerous co‐morbidities. There is a 5 to 10% perioperative cardiac morbidity rate during vascular surgery, even in patients at low risk for coronary disease.19 According to Paul et al.20 there is a 17% risk of severe multivessel disease in low clinical risk asymptomatic patients undergoing vascular surgery. American College of Cardiology/American Heart Association (ACC/AHA) guidelines for preoperative evaluation before major surgery recommend stratification of ischaemic heart disease with clinical and non‐invasive tests.19,20,21,22 The diagnostic accuracy is 68 to 77% for exercise ECG and 73 to 85% for stress‐echocardiography. Myocardial scintigraphy provides an accuracy of 87–89% in patients with normal resting ECG, with a radiation exposure ranging from 4.6 to 20 mSv,23 almost equivalent to MDCT coronary angiography (MDCTCA). However, for certain high‐risk patients, ACC guidelines suggest proceeding directly with coronary angiography rather than performing a non‐invasive test. Therefore, in clinical practice, CA is often performed before major vascular or cardiac surgery. Considering that millions of surgical procedures are probably performed every year worldwide (eg, 95 000 heart valve replacements/year in the USA),6,7 several hundred thousand negative CAs are still performed.After some years of validation studies comparing MDCTCA with CA, studies on clinical utility are now warranted to demonstrate whether and how this technique can change and improve the current management of patients. The purpose of this study is to evaluate the clinical impact of MDCTCA as a preoperative screening test for cardiac risk assessment in patients who are candidates for major non‐coronary cardiac surgery and who are asymptomatic for ischaemic heart disease.  相似文献   

7.
8.

Objective

The aim of the study was to compare time‐trends in mortality rates and treatment patterns between patients with and without diabetes based on the Swedish register of coronary care (Register of Information and Knowledge about Swedish Heart Intensive Care Admission [RIKS‐HIA]).

Methods

Post‐myocardial infarction mortality rate is high in diabetic patients, who seem to receive less evidence‐based treatment. Mortality rates and treatment in 1995–1998 and 1999–2002 were studied in 70 882 patients (age <80 years) with a first registry‐recorded acute myocardial infarction, 14 873 of whom had diabetes, following adjustment for differences in clinical and other parameters.

Results

One‐year mortality rates decreased from 1995 to 2002 from 16.6% to 12.1% in patients without diabetes and from 29.7% to 19.7%, respectively, in those with diabetes. Patients with diabetes had an adjusted relative 1‐year mortality risk of 1.44 (95% CI 1.36 to 1.52) in 1995–1998 and 1.31 (95% CI 1.24 to 1.38) in 1999–2002. Despite improved pre‐admission and in‐hospital treatment, diabetic patients were less often offered acute reperfusion therapy (adjusted OR 0.85, 95% CI 0.80 to 0.90), acute revascularisation (adjusted OR 0.78, 95% CI 0.69 to 0.87) or revascularisation within 14 days (OR 0.80, 95% CI 0.75 to 0.85), aspirin (OR 0.90, 95% CI 0.84 to 0.98) and lipid‐lowering treatment at discharge (OR 0.81, 95% CI 0.77 to 0.86).

Conclusion

Despite a clear improvement in the treatment and myocardial infarction survival rate in patients with diabetes, mortality rate remains higher than in patients without diabetes. Part of the excess mortality may be explained by co‐morbidities and diabetes itself, but a lack of application of evidence‐based treatment also contributes, underlining the importance of the improved management of diabetic patients.Patients with diabetes have higher short‐ and long‐term mortality rates after acute myocardial infarction (MI) than those without diabetes.1,2,3,4 This pattern has remained even after the introduction of modern therapeutic principles.5,6,7 According to US mortality rate trends diabetic patients have not experienced a similar mortality rate reduction as that seen in non‐diabetic patients.8,9,10 Less use of evidence‐based treatment has been suggested as an important explanation.4,10,11,12,13,14 The systematic use of such therapy should decrease hospital mortality rate in diabetic patients so that it approaches that in those without diabetes.15The Register of Information and Knowledge about Swedish Heart Intensive Care Admissions (RIKS‐HIA), covering almost all Swedish patients with MI, offers detailed information on treatment patterns and prognosis in unselected patients with and without diabetes. The aim of this study is to analyse time trends in treatment patterns and prognosis in order to see whether management has improved.  相似文献   

9.

Objectives

Several studies have revealed increased bone mineral density (BMD) in patients with knee or hip osteoarthritis, but few studies have addressed this issue in hand osteoarthritis (HOA). The aims of this study were to compare BMD levels and frequency of osteoporosis between female patients with HOA, rheumatoid arthritis (RA) and controls aged 50–70 years, and to explore possible relationships between BMD and disease characteristics in patients with HOA.

Methods

190 HOA and 194 RA patients were recruited from the respective disease registers in Oslo, and 122 controls were selected from the population register of Oslo. All participants underwent BMD measurements of femoral neck, total hip and lumbar spine (dual‐energy x ray absorptiometry), interview, clinical joint examination and completed self‐reported questionnaires.

Results

Age‐, weight‐ and height‐adjusted BMD values were significantly higher in HOA versus RA and controls, the latter only significant for femoral neck and lumbar spine. The frequency of osteoporosis was not significantly different between HOA and controls, but significantly lower in HOA versus RA. Adjusted BMD values did not differ between HOA patients with and without knee OA, and significant associations between BMD levels and symptom duration or disease measures were not observed.

Conclusion

HOA patients have a higher BMD than population‐based controls, and this seems not to be limited to patients with involvement of larger joints. The lack of correlation between BMD and disease duration or severity does not support the hypothesis that higher BMD is a consequence of the disease itself.Osteoporosis is recognised as a frequent complication to rheumatoid arthritis (RA).1 Osteoarthritis (OA) is the most frequent rheumatic joint disease, and contrary to the situation in RA, several studies have revealed increased bone mineral density (BMD) in patients with knee or hip OA, even if the results have not been consistent in all studies.2,3,4,5,6,7,8,9,10 The hand is a frequent site of peripheral joint involvement in OA. However, a limited number of studies have addressed the issue of osteoporosis in hand osteoarthritis (HOA), and the results from these few studies have been inconsistent regarding levels of BMD compared with controls.8,9,10,11,12,13,14,15OA has generally been considered as a cartilage disease characterised by slow progressive degeneration of articular cartilage due to “wear and tear” mechanisms. However, there is increasing evidence that abnormalities in the subchondral bone and systemic factors may contribute to the pathophysiological process. Studies of subchondral bone have revealed alterations in microstructure including increased BMD. This local increase in BMD in OA joints may be a consequence of reduced shock absorption in joints with degenerated cartilage,5 or on the contrary, thickening and stiffening of the subchondral bone with increased BMD may lead to development of OA.16 However, elevated BMD levels at sites remote from the arthritic process cannot be explained by local biomechanical factors, and the question of whether primary OA rather is a systemic bone disease has been raised.17 Systemic changes in subchondral bone could be explained by genetic factors, hormonal influences, vitamin D concentrations, growth factors or activity of bone‐forming cells.15,18,19,20,21 Better knowledge about the relationship between BMD and HOA may contribute to the understanding of the pathogenesis of OA.Disease registers of patients with RA22 and HOA23 have been established in the city of Oslo. We have previously compared BMD levels in a cohort of RA patients from the Oslo RA Register (ORAR) and healthy controls.24 The current study was designed to compare levels of BMD and the frequency of osteoporosis between patients with HOA, RA and controls. A second aim was to explore possible relationships between BMD levels and disease characteristics in patients with HOA.  相似文献   

10.

Background and aims

Crohn''s disease is a life‐long form of inflammatory bowel disease (IBD) mediated by mucosal immune abnormalities. Understanding of the pathogenesis is limited because it is based on data from adults with chronic Crohn''s disease. We investigated mucosal T‐cell immunoregulatory events in children with early Crohn''s disease.

Methods

Mucosal biopsies and T‐cell clones were derived from children experiencing the first attack of Crohn''s disease, children with long‐standing Crohn''s disease, infectious colitis, and children without gut inflammation.

Results

As in acute infectious colitis, interleukin (IL) 12 induced T cells from early Crohn''s disease to acquire a strongly polarised T helper (Th) type 1 response characterised by high IFN‐γ production and IL12Rβ2 chain expression. Th1 polarisation was not induced in clones from late Crohn''s disease. Mucosal levels of IL12p40 and IL12Rβ2 messenger RNA were significantly higher in children with early than late Crohn''s disease. These results demonstrate that susceptibility to IL12‐mediated modulation is strongly dependent on the stage of Crohn''s disease.

Conclusions

At the onset of Crohn''s disease mucosal T cells appear to mount a typical Th1 response that resembles an acute infectious process, and is lost with progression to late Crohn''s disease. This suggests that mucosal T‐cell immunoregulation varies with the course of human IBD. Patients with the initial manifestations of IBD may represent an ideal population in which immunomodulation may have optimal therapeutic efficacy.Both forms of inflammatory bowel disease (IBD), Crohn''s disease and ulcerative colitis are life‐long conditions whose initial clinical manifestations appear in the first decades of life. The incidence of IBD is increasing worldwide, and recent epidemiological data show that the diagnosis of IBD is increasingly more frequent in children as the age of onset is decreasing,1,2 with an equal incidence among all ethnic groups.3 Investigation of IBD has largely relied on studies of adult patients with established disease or animal models of acute IBD,4 making it difficult to reconcile data from humans with chronic disease with data from animals with acute disease. Pathogenic events may vary during the course of chronic gut inflammation,5 but claims that new‐onset and long‐standing IBD may be different, or that IBD in children is distinct from IBD in adults are so far unsubstantiated.6 The investigation of human IBD at the earliest possible time, as in children with the first clinical manifestations of Crohn''s disease, could obviate the many confounding variables associated with chronicity. In this population early mechanisms of gut inflammation could be examined before masking or modification by disease evolution or therapy. In addition, unique immunological reactivities associated with specific types of inflammation may be uncovered, as demonstrated in children with Crohn''s disease.7In addition to the type of triggering agents and the intrinsic properties of the affected tissue, the long‐term outcome of an inflammatory response is strongly influenced by the local cytokine milieu. This milieu changes with time, and different mediators are involved in the inductive versus the effector phase of an immune response, which eventually acquires distinctive T helper (Th) type 1, Th2, Th17 or alternative profiles.8,9 On the basis of this paradigm, evidence indicates that Crohn''s disease is a Th1/Th17‐like condition,10,11 whereas ulcerative colitis appears to represent an atypical Th2 condition.12 These conclusions are largely based on T‐cell cytokine profiles from adults with long‐standing disease and numerous animal models of IBD.13 There is some evidence that cytokine levels may vary with the evolution of human IBD,5 but whether T cells produce quantitatively or qualitatively different cytokine profiles in the early compared with the late stages of IBD is unclear. More importantly, no information is available on cytokines produced by mucosal T cells at the time of disease onset. In addition to the type and quantity of antigen, and how antigen‐presenting cells handle the antigen(s), the outcome of an inflammatory process also depends on the cytokine make‐up at the beginning of such a process. 
Therefore, it seems essential to define as early as possible in the course of Crohn''s disease the response of mucosal T cells to the immunoregulatory cytokines that condition T helper cell differentiation.14 This could reveal whether the cytokine profiles found in late Crohn''s disease reflect a fixed response set in motion at the initiation of disease, or represent an evolutionary response to the progression of IBD. This fundamental question cannot be answered using T cells from resected tissues because children with early Crohn''s disease seldom undergo an operation. Therefore, with the caveat that the chosen approach may not be entirely representative of the intestinal immune response because other compartments such as the mesenteric lymph nodes are not sampled, we studied T cells derived from colonoscopic mucosal biopsies from a unique population of children with the very first attack of Crohn''s disease. The production of interleukin (IL) 2, IFN‐γ, IL4, and IL10, as well as IL12 receptor (IL12R) β1 and β2 chain expression were measured in T‐cell clones exposed to the immunomodulatory effects of IL12 and IL4. In addition, total levels of IL12 and IL12Rβ2 chain messenger RNA were measured in the biopsies. The results show that differences in cytokine production and susceptibility to immune modulation occur exclusively in early Crohn''s disease. IL12 conditioning of mucosal T cells from children with early Crohn''s disease induces high levels of IFN‐γ similar to those produced by IL12‐stimulated T cells from children with acute infectious colitis. In contrast, mucosal T cells of children with late Crohn''s disease fail to upregulate IFN‐γ production in response to IL12. In addition, the expression level of the IL12Rβ2 chain is significantly higher in T cells from children with early Crohn''s disease and infectious colitis than children with late IBD, and tissue IL12 and IL12Rβ2 chain mRNA were also higher in early than late Crohn''s disease. Therefore, at the beginning of IBD, mucosal T cells mount a Th1 response that resembles an acute infectious process, and is lost with progression to late disease.  相似文献   

11.
12.

Aims

To evaluate the effect of a disease management programme for patients with coronary heart disease (CHD) and chronic heart failure (CHF) in primary care.

Methods

A cluster randomised controlled trial of 1316 patients with CHD and CHF from 20 primary care practices in the UK was carried out. Care in the intervention practices was delivered by specialist nurses trained in the management of patients with CHD and CHF. Usual care was delivered by the primary healthcare team in the control practices.

Results

At follow up, significantly more patients with a history of myocardial infarction in the intervention group were prescribed a beta‐blocker compared to the control group (adjusted OR 1.43, 95% CI 1.19 to 1.99). Significantly more patients with CHD in the intervention group had adequate management of their blood pressure (<140/85 mm Hg) (OR 1.61, 95% CI 1.22 to 2.13) and their cholesterol (<5 mmol/l) (OR 1.58, 95% CI 1.05 to 2.37) compared to those in the control group. Significantly more patients with an unconfirmed diagnosis of CHF had a diagnosis of left ventricular systolic dysfunction confirmed (OR 4.69, 95% CI 1.88 to 11.66) or excluded (OR 3.80, 95% CI 1.50 to 9.64) in the intervention group compared to the control group. There were significant improvements in some quality‐of‐life measures in patients with CHD in the intervention group.

Conclusions

Disease management programmes can lead to improvements in the care of patients with CHD and presumed CHF in primary care.Cardiovascular diseases including coronary heart disease (CHD) and chronic heart failure (CHF) are the main cause of morbidity and mortality in most European countries.1 Mortality from cardiovascular disease has declined over the last 30 years, a trend which has been attributed to secondary prevention therapies.2,3 However, European surveys have shown considerable potential for improved levels of secondary prevention in people with established CHD.4 Studies in primary care, where most of these patients are managed, have also reported considerable potential to further increase secondary prevention through medical and lifestyle interventions.5,6 “Medical” measures include aspirin therapy and blood pressure and lipid control, while “lifestyle” measures include increased exercise, dietary modification and smoking cessation.5 CHF is also a highly prevalent, chronic condition with high mortality and morbidity. It is increasing in prevalence and the public health burden from CHF is therefore likely to rise substantially over the next 10 years.7 The quality of life of patients with CHF is worse than for most chronic conditions managed in primary care and five‐year survival is worse than for many malignant conditions.8 However, appropriate treatment, including inhibitors of the renin‐angiotensin‐aldosterone system and beta‐blockers, has the potential to reduce hospitalisation and mortality in these patients.9,10 The task of implementing a comprehensive package of effective measures for large numbers of patients has been described as daunting.5 It is therefore important to develop implementation strategies that are practical and effective. Many patients with CHF are incorrectly diagnosed and inadequately treated in primary care11 and obstacles to appropriate primary care management include lack of knowledge, fear of complications with pharmacological treatments, lack of time and limited facilities for investigations.12,13Systematic reviews indicate that secondary prevention programmes improve the process of care, reduce admissions to hospital and enhance quality of life or functional status in patients with CHD.14 Similarly, systematic reviews of disease management programmes in CHF suggest that specialised, multidisciplinary follow‐up can reduce hospitalisation and may lead to cost saving.15,16,17 However, all the CHF trials included in these systematic reviews were conducted in highly specialised centres and recruited patients following discharge after hospitalisation. The applicability of the available CHF management programmes to countries with a primary care‐based healthcare system has therefore recently been questioned.18To achieve improved secondary prevention of CHD and CHF, primary care will need to adopt a systematic approach. Although disease management clinics for the management of CHD in primary care can improve patients'' outcomes,5 there are no such studies in the management of patients with CHF. Since the majority of patients with CHF will also have CHD,19 we investigated the effect of a disease management programme for patients with either or both conditions in primary care.  相似文献   

13.

Objective

To analyse the short and long term outcome of endoscopic stent treatment after bile duct injury (BDI), and to determine the effect of multiple stent treatment.

Design, setting and patients

A retrospective cohort study was performed in a tertiary referral centre to analyse the outcome of endoscopic stenting in 67 patients with cystic duct leakage, 26 patients with common bile duct leakage and 110 patients with a bile duct stricture.

Main outcome measures

Long term outcome and independent predictors for successful stent treatment.

Results

Overall success in patients with cystic duct leakage was 97%. In patients with common bile duct leakage, stent related complications occurred in 3.8% (n = 1). The overall success rate was 89% (n = 23). In patients with a bile duct stricture, stent related complications occurred in 33% (n = 36) and the overall success rate was 74% (n = 81). After a mean follow up of 4.5 years, liver function tests did not identify “occult” bile duct strictures. Independent predictors for outcome were the number of stents inserted during the first procedure (OR 3.2 per stent; 95% CI 1.3 to 8.4), injuries classified as Bismuth III (OR 0.12; 95% CI 0.02 to 0.91) and IV (OR 0.04; CI 0.003 to 0.52) and endoscopic stenting before referral (OR 0.24; CI 0.06 to 0.88). Introduction of sequential insertion of multiple stents did not improve outcome (before 77% vs after 66%, p = 0.25), but more patients reported stent related pain (before 11% vs after 28%, p = 0.02).

Conclusions

In patients with a postoperative bile duct leakage and/or strictures, endoscopic stent treatment should be regarded as the choice of primary treatment because of safety and favourable long term outcome. Apart from the early insertion of more than one stent, the benefit from sequential insertion of multiple stents did not become readily apparent from this series.Bile duct injury (BDI) occurs in 0.2 to 1.4% of patients following laparoscopic cholecystectomy and is a severe surgical complication.1,2,3,4 BDI related morbidity is illustrated by increased hospital stay, poor long term quality of life and high rates of malpractice litigation.5,6,7,8 Although surgical reconstruction, mainly a hepaticojejunostomy, is a procedure associated with low mortality and low morbidity if performed in a tertiary centre, this is only indicated in selected patients with BDI; a population‐based study from the USA demonstrated the detrimental effect of BDI on survival in patients who underwent surgical reconstruction.9 The majority of biliary injuries, including cystic duct leakage, common bile duct (CBD) leakage or bile duct strictures, can be treated successfully in 70–95% of the patients by means of endoscopic or radiological interventions.10,11,12,13,14,15,16It has been suggested that endoscopic treatment is associated with an increased risk of re‐stenosis and biliary cirrhosis followed by end‐stage liver disease. However, reliable data about the long term outcome of endoscopic management of BDI are scarce and predicting factors for successful outcome are unreported. Several years ago reports from uncontrolled studies indicated that a more aggressive type of dilation treatment in patients with bile duct strictures, based on the sequential insertion of multiple stents, may be associated with a more favourable outcome and this treatment policy has been adapted in our clinic since the end of 2001.17,18,19,20The purpose of this study was to analyse the short and long term outcome of stent treatment in BDI patients (including liver function test after long term follow up) and to determine factors that are predictive for successful outcome in patients who are stented for a bile duct stricture. In addition, the outcomes of patients treated before and after the introduction of sequential insertion of multiple stents were compared.  相似文献   

14.

Introduction

Latent tuberculosis infection (LTBI) is detected with the tuberculin skin test (TST) before anti‐TNF therapy. We aimed to investigate in vitro blood assays with TB‐specific antigens (CFP‐10, ESAT‐6), in immune‐mediated inflammatory diseases (IMID) for LTBI screening.

Patients and methods

Sixty‐eight IMID patients with (n = 35) or without (n = 33) LTBI according to clinico‐radiographic findings or TST results (10 mm cutoff value) underwent cell proliferation assessed by thymidine incorporation and PKH‐26 dilution assays, and IFNγ‐release enzyme‐linked immunosorbent spot (ELISPOT) assays with TB‐specific antigens.

Results

In vitro blood assays gave higher rates of positive results in patients with LTBI than in those without (p<0.05), with some variation between tests. Among the 13 patients with LTBI diagnosed independently of TST results, 5 had a negative TST (38.5%) and only 2 had a negative blood assay result (15.4%). The 5 LTBI patients with negative TST results all had positive blood assay results. Ten patients without LTBI but with intermediate TST results (6–10 mm) had results no different from those of patients with a TST result ⩽5 mm (p>0.3), and lower results than those with LTBI (p<0.05), on CFP‐10+ESAT‐6 ELISPOT and CFP‐10 proliferation assays.

Conclusion

Anti‐TB blood assays are beneficial for LTBI diagnosis in IMID. Compared with TST, they show a better sensitivity, as seen by positive results in 5 patients with certain LTBI and negative TST, and better specificity, as seen by negative results in most patients with intermediate TST as the only criteria of LTBI. In the absence of clinico‐radiographic findings for LTBI, blood assays could replace TST for antibiotherapy decision before anti‐TNF.TNFα blocker agents are approved for the treatment of immune‐mediated inflammatory diseases (IMID) and provide marked clinical benefit. However, they can reactivate tuberculosis (TB) infection in patients previously exposed to TB bacilli.1,2 The presence of quiescent mycobacteria defines latent TB infection (LTBI).3,4 Thus, screening for LTBI is necessary before initiating therapy with TNF blockers.5 However, to date, no perfect gold standard exists for detecting LTBI, and tuberculin skin test (TST) remains largely used. The recommendations for detecting LTBI differ worldwide.3,6,7 In France, recommendations were established in 2002 by the RATIO (Research Axed on Tolerance of Biotherapies) study group for the Agence Française de Sécurité Sanitaire des Produits de Santé.8,9 Patients are considered to have LTBI requiring treatment with prophylactic antibiotics before starting anti‐TNFα therapy if they had previous TB with no adequate treatment, tuberculosis primo‐infection, residual nodular tuberculous lesions larger than 1 cm3 or old lesions suggesting TB diagnosis (parenchymatous abnormalities or pleural thickening) as seen on chest radiography or weals larger than 10 mm in diameter in response to the TST. Adequate anti‐TB treatment was defined as treatment initiated after 1970, lasting at least 6 months and including at least 2 months with the combination rifampicin–pyrazinamide. The choice of the threshold of 10 mm for the TST result was established in 2002 in France since the programme of vaccination with bacille Calmette–Guérin (BCG) was mandated in France, and nearly 100% of the population has been vaccinated. Nevertheless, after July 2005, the threshold was decreased to 5 mm as in most of all other countries.10The TST is the current method to detect LTBI but has numerous drawbacks. Indeed, the TST requires a return visit for reading the test result. It has a poor specificity, since previous BCG vaccination and environmental mycobacterial exposure can result in false‐positive results in all subjects.6,11,12 This poor specificity can lead to unnecessary treatment with antibiotics, with a significant risk of drug toxicity.13,14,15 On the other hand, TST in IMID may often give a more negative reaction than in the general population, mainly because of the disease or immunosuppressive drug use.16,17 This poor sensitivity can lead to false‐negative results, with a subsequent risk of TB reactivation with anti‐TNF therapy.The identification of genes in the mycobacterium TB genome that are absent in BCG and most environmental mycobacteria offers an opportunity to develop more specific tests to investigate Mycobacterium tuberculosis (M. tuberculosis) infection, particularly LTBI.18 Culture fibrate protein‐10 (CFP‐10) and early secretory antigen target‐6 (ESAT‐6) are two such gene products that are strong targets of the cellular immune response in TB patients. 
In vivo‐specific T‐cell based assay investigating interferon gamma (IFNγ) release or T‐cell proliferation in the presence of these specific mycobacterial antigens could be useful in screening for LTBI before anti‐TNF therapy. New IFNγ‐based ex vivo assays involving CFP‐10 and ESAT‐6 (T‐SPOT TB, Oxford Immunotec, Abingdon, UK) and QuantiFERON TB Gold (QFT‐G; Cellestis, Carnegie, Australia) allow for diagnosis of active TB, recent primo‐infection or LTBI.12 These tests seem to be more accurate than the TST for this purpose in the general population.12 To date, the performance of the commercial assays in detecting LTBI in patients with IMID receiving immunosuppressive drugs has not been demonstrated, and the frequency of indeterminate results is still debated.19,20,21We aimed to investigate the performance of homemade anti‐CFP‐10 and anti‐ESAT‐6 proliferative and enzyme‐linked immunosorbent spot (ELISPOT) assays in detecting LTBI in patients with IMID before anti‐TNFα therapy. We analysed two subgroups of patients: those with confirmed LTBI independent of TST result, and those with LTBI based exclusively on a positive TST result between 6 and 10 mm.  相似文献   
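
The sensitivity comparison implied above can be made explicit: of the 13 patients with LTBI diagnosed independently of the TST, the TST missed 5 and the blood assays missed 2. A small worked check follows; specificity would be computed analogously from the group without LTBI (true negatives over all patients without LTBI).

```python
# Worked check of the sensitivities implied by the reported counts
# (13 LTBI cases diagnosed independently of TST; TST negative in 5, blood assays negative in 2).
def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

ltbi_cases = 13
tst_sensitivity = sensitivity(ltbi_cases - 5, 5)       # TST missed 5 of 13
blood_sensitivity = sensitivity(ltbi_cases - 2, 2)     # blood assays missed 2 of 13
print(f"TST sensitivity:         {tst_sensitivity:.1%}")   # about 61.5%
print(f"blood assay sensitivity: {blood_sensitivity:.1%}")  # about 84.6%
```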

15.
16.
Background/Aim:The efficacy of flexible spectral imaging color enhancement (FICE) ch. 1 (F1) for the detection of ulcerative lesions and angioectasias in the small intestine with capsule endoscopy (CE) has been reported. In the present study, we evaluated whether F1 could detect incremental findings in patients with no findings in a standard review mode.Results:F1 detected five significant lesions in three patients with overt OGIB; three erosions, one aphtha, and one angioectasia. For nonsignificant lesions, F1 detected 12 red mucosas and 16 red spots. Moreover, 29 patients with 71 findings were considered false positives.Conclusion:F1 detected incremental significant findings in a small percentage of patients with no findings in the standard review mode. In addition, F1 showed many false-positive findings. The incremental effect of a repeated review by F1 in patients with no findings in the first review is limited.Key Words: Capsule endoscopy, flexible spectral imaging color enhancement, obscure gastrointestinal bleeding, repeat review, small intestinal lesionFlexible spectral imaging color enhancement (FICE), an image-enhanced endoscopy (IEE) technique, has been used widely in gastroscopy and colonoscopy.[1,2] FICE depends on optical filters and the use of spectral estimation technology to reconstruct images at different wavelengths based on images from white-light endoscopy. It has been reported that it improves the visualization of both neoplastic and non-neoplastic lesions in gastroscopy and colonoscopy.[3,4,5]Capsule endoscopy (CE) has become an important examination of the small intestine.[6] The efficacy of CE for small intestinal diseases has been reported.[7,8,9,10,11] CE has demonstrated efficacy for patients with obscure gastrointestinal bleeding (OGIB). It can detect various kinds of disease states such as tumors, polyps, angioectasias, ulcers, and erosions.Rapid 6.5 (Given Imaging Ltd., Yoqneam, Israel), a CE reading system, includes FICE.[12] Several studies have shown the effects of FICE in CE.[13,14,15,16,17,18,19,20] In a previous study, we found that FICE Ch. 1 (F1) detected a larger number of ulcerative lesions and angioectasias in the small intestine compared to a standard review.[16] However, little is known about the impact of FICE on CE in patients, with no findings in the standard mode CE. In the present study, we investigated whether F1 could detect incremental findings in patients with no findings in the standard review mode.  相似文献   

17.

Objective

To investigate risk factors for non‐Hodgkin''s lymphoma (NHL) and analyse NHL subtypes and characteristics in patients with systemic lupus erythematosus (SLE).

Methods

A national SLE cohort identified through SLE discharge diagnoses in the Swedish hospital discharge register during 1964 to 1995 (n = 6438) was linked to the national cancer register. A nested case control study on SLE patients who developed NHL during this observation period was performed with SLE patients without malignancy as controls. Medical records from cases and controls were reviewed. Tissue specimens on which the lymphoma diagnosis was based were retrieved and reclassified according to the WHO classification. NHLs of the subtype diffuse large B cell lymphoma (DLBCL) were subject to additional immunohistochemical staining using antibodies against bcl‐6, CD10 and IRF‐4 for further subclassification into germinal centre (GC) or non‐GC subtypes.

Results

16 patients with SLE had NHL, and the DLBCL subtype dominated (10 cases). The 5‐year overall survival and mean age at NHL diagnosis were comparable with NHL in the general population—50% and 61 years, respectively. Cyclophosphamide or azathioprine use did not elevate lymphoma risk, but the risk was elevated if haematological or sicca symptoms, or pulmonary involvement was present in the SLE disease. Two patients had DLBCL‐GC subtype and an excellent prognosis.

Conclusions

NHL in this national SLE cohort was predominated by the aggressive DLBCL subtype. The prognosis of NHL was comparable with that of the general lymphoma population. There were no indications of treatment‐induced lymphomas. Molecular subtyping could be a helpful tool to predict prognosis also in SLE patients with DLBCL.Evidence of an increased risk to develop haematological malignancy, and especially non‐Hodgkin''s lymphoma (NHL) in autoimmune diseases, has been gathered since the 1970s. First studies of Sjögren''s syndrome,1 then rheumatoid arthritis (RA)2 and now in the last decade studies from uni‐/multicentre SLE cohorts3,4,5,6,7,8 and national SLE cohorts9,10 have consistently shown a markedly increased risk of NHLs. As for NHL subtype, knowledge is more limited. RA and SLE share several disease manifestations like arthritis and “extra‐articular manifestations” such as serositis, sicca symptoms and interstitial inflammatory lung disease. In RA, a pronounced over‐representation of diffuse large B cell lymphoma (DLBCL) has been reported from a large population‐based cohort.11 This lymphoma subtype was also the most frequent in an international multicentre study with lupus patients.12 In Sjögren''s syndrome, approximately 85% are MALT lymphomas,13 although a recent study from a mono‐centre primary Sjögren''s syndrome cohort—with patients fulfilling the American–European Consensus Group criteria14—showed a predominance of DLBCL.15The pathophysiological mechanisms for the enhanced risk of developing NHL in patients with chronic inflammatory diseases are still not fully understood. Similarities of a variety of immunological disturbances that characterise both rheumatic conditions and lymphomas have been suggested as a linkage between these disorders as well as a possible potentiation of immunosuppressive drugs or certain viral infections, especially Epstein–Barr virus (EBV).5,16Recently, advances in molecular characterisation have enabled more detailed subclassification of lymphomas based on the molecular expression of the tumour cells. For DLBCL, two prognostic groups have been identified among DLBCL in the general lymphoma population depending on the resemblance of gene expression profile with normal germinal centre (GC) or activated B cells by using global gene expression profiling17,18 and immunophenotyping of tumour cells.19,20 The GC DLBC lymphomas had a significantly better survival than those with non‐GC subtype.17,18,19,20 No such subtyping has been reported in SLE patients.In a previous register study of a population‐based national Swedish SLE cohort, a threefold increased risk of lymphoma was found.10 This nested case‐control study focuses on those SLE patients that developed NHL. Information on clinical manifestations and pharmacological (cytotoxic) treatment of the SLE disease was retrieved from patient records. The lymphomas were re‐examined and reclassified, and DLBCLs were further divided into GC or non‐GC subtypes by immunohistochemistry. The presence of EBV in the lymphomas was also analysed.  相似文献   

18.
Background/Aim: Over the past two decades, several advances have been made in the management of patients with hepatocellular carcinoma (HCC) and portal vein tumor thrombosis (PVTT). Yttrium-90 (90Y) radioembolization has recently emerged as a treatment option for patients with HCC and PVTT, but its outcomes have not yet been systematically evaluated. We aimed to assess the safety and effectiveness of 90Y radioembolization for HCC and PVTT by performing a systematic review of clinical trials, clinical studies, and conference abstracts that qualified for analysis. Results: A total of 14 clinical studies and three conference abstracts including 722 patients qualified for the analysis. The median length of follow-up was 7.2 months, the median time to progression was 5.6 months, and the median disease control rate was 74.3%. Radiological response data were reported in five studies; the median reported proportions of patients with complete response, partial response, stable disease, and progressive disease were 3.2%, 16.5%, 31.3%, and 28%, respectively. The median overall survival (OS) was 9.7 months for all patients; it was 12.1 and 6.1 months for Child-Pugh class A and B patients, and 6.1 and 13.4 months for patients with main and branch PVTT, respectively. The common toxicities were fatigue, nausea/vomiting, and abdominal pain, most of which required no medical intervention. Conclusions: 90Y radioembolization is a safe and effective treatment for HCC and PVTT. Key Words: Hepatocellular carcinoma, portal vein tumor thrombosis, radioembolization, toxicity, yttrium-90. Portal vein tumor thrombosis (PVTT) occurs in a substantial proportion of hepatocellular carcinoma (HCC) patients, approximately 10%–40% at diagnosis.[1,2] PVTT has a profound adverse effect on prognosis: the median survival time of patients with unresectable HCC and PVTT is significantly reduced (2–4 months) compared with those without PVTT (10–24 months).[1,3] The presence of PVTT also limits the treatment options, with HCC treatment guidelines often considering PVTT a contraindication to transplantation, curative resection, and transarterial chemoembolization (TACE).[4,5] Although PVTT poses a challenging treatment dilemma,[1] many treatments of HCC with PVTT have been reported, including surgical resection,[2] TACE,[4,5,6,7,8,9] external beam radiotherapy,[10] gamma-knife radiosurgery,[4] TACE combined with endovascular implantation of an iodine-125 seed strand,[11] and transarterial radioembolization.[12] However, the optimal treatment for patients with HCC and PVTT remains controversial.[7] Yttrium-90 (90Y) radioembolization is a locoregional, liver-directed therapy that involves transcatheter delivery of particles embedded with the radioisotope 90Y. In addition to obliterating the arterial blood supply, the 90Y delivers a 50–150 Gy radiation dose to the tumor tissue, causing necrosis of both the HCC and the PVTT.[13] 90Y radioembolization has been reported to be a safe and effective treatment for patients with HCC and PVTT.[6] However, there is still a need to systematically evaluate the outcomes of this treatment modality. The purpose of this study was to comprehensively review the safety and effectiveness of 90Y radioembolization for HCC and PVTT.  相似文献
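A minor methodological note: the pooled figures quoted above are medians of study-level values rather than patient-level pooled estimates. The snippet below is a minimal sketch of that pooling step using hypothetical placeholder values (not data from the review), purely to make the aggregation explicit.

```python
# Minimal sketch of pooling study-level outcomes as medians, as the review describes.
# The per-study values below are hypothetical placeholders, NOT data from the review.
from statistics import median

disease_control_rates = [66.0, 70.5, 75.0, 78.2, 81.0]  # % per study (hypothetical)
time_to_progression_months = [4.8, 5.7, 6.3]            # months per study (hypothetical)

print(f"pooled disease control rate: {median(disease_control_rates):.1f}%")
print(f"pooled time to progression:  {median(time_to_progression_months):.1f} months")
```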

19.
20.
Infections with antibiotic-resistant bacteria (ARB) in hospitalized patients are becoming increasingly frequent despite extensive infection-control efforts. Infections with ARB are most common in the intensive care units of tertiary-care hospitals, but the underlying cause of the increases may be a steady increase in the number of asymptomatic carriers entering hospitals. Carriers may shed ARB for years but remain undetected, transmitting ARB to others as they move among hospitals, long-term care facilities, and the community. We apply structured population models to explore the dynamics of ARB, addressing the following questions: (i) What is the relationship between the proportion of carriers admitted to a hospital, transmission, and the risk of infection with ARB? (ii) How do frequently hospitalized patients contribute to epidemics of ARB? (iii) How do transmission in the community, long-term care facilities, and hospitals interact to determine the proportion of the population that is carrying ARB? We offer an explanation for why ARB epidemics have fast and slow phases and why resistance may continue to increase despite infection-control efforts. To successfully manage ARB at tertiary-care hospitals, regional coordination of infection control may be necessary, including tracking asymptomatic carriers through health-care systems.Nosocomial infections with antibiotic-resistant bacteria (ARB) occur with alarming frequency (1), and new multidrug-resistant bacteria continue to emerge, including vancomycin-resistant enterococci (VRE), methicillin-resistant Staphylococcus aureus (MRSA), two recent but isolated cases of vancomycin-resistant S. aureus (2, 3), and multiple-drug resistance in Gram-negative bacteria. In response, hospitals have used a variety of infection-control measures, some of which are costly and difficult to implement (4). Despite efforts to reduce transmission of ARB within hospitals, the incidence of VRE, MRSA, and other antibiotic-resistant infections continues to increase (1).An important distinction in the epidemiology of ARB is made between infection and colonization. Infection is characterized by serious illness when ARB contaminate wounds, the bloodstream, or other tissues. In contrast, colonization with ARB may occur in the gut, nasal cavities, or other body surfaces. Colonizing bacteria may persist for years without causing disease or harming their hosts (5, 6); we call such individuals carriers. These carriers increase colonization pressure; the number of patients who are shedding increases the risk that another patient becomes a carrier for ARB or acquires a resistant infection (7). Hospitals that reduce the incidence of resistance (the number of new cases) may see no reduction in overall prevalence (the fraction of patients with ARB), because these hospitals admit an increasing number of ARB carriers (8, 9). Patients infected with ARB generally remain hospitalized until the symptoms are cured, but they may continue to carry and shed ARB for months or years.We show that the prevalence of ARB in hospitals approaches equilibrium rapidly because of the rapid turnover of patients; the average length of stay (LOS) is ≈5 days (10, 11). 
Moreover, prevalence changes rapidly in response to changes in hospital infection control (11-17), so slow and steady increases in resistance must be due to something else, such as increases in the proportion of carriers admitted from the catchment population of a hospital, defined as the population from which patients are drawn, including long-term care facilities (LTCFs), other hospitals, and the community (5, 18). The health-care institutions that serve a common catchment population vary substantially in their relative size, transmission rates, and average LOS. How do increases in the number of ARB carriers in the catchment population contribute to increases in infections by ARB in the hospital, and what can be done about it?Mathematical models play an important role in understanding the spread of ARB (19). We have built on existing theory for the transmission dynamics of ARB developed for simple, well-mixed populations (12, 20), but we are focused on phenomena that occur at a large spatial scale, ignoring competition between sensitive and resistant bacteria and the biological cost of resistance (21) and the relationship between antibiotic use and the prevalence of ARB (11, 22, 23). We have developed mathematical models with multiple institutions connected by patient movement; such models are called “structured” populations or “metapopulations.” Thus, we are developing epidemiological models (20, 21, 24) focused specifically on the consequences of persistent colonization and population structure, applying existing theory for structured populations (20, 24-27).  相似文献   
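The fast and slow phases described above lend themselves to a compact compartmental illustration. The sketch below is not the authors' structured-population model; it is a minimal two-compartment toy (one hospital, one catchment community) with assumed parameter values, intended only to show why hospital prevalence equilibrates within weeks (rapid patient turnover, LOS ≈ 5 days) while carriage in the catchment population, and hence admission prevalence, drifts upward over years.

```python
# A minimal, illustrative two-compartment sketch (NOT the model from the study above):
# ARB carrier prevalence in a hospital coupled to its catchment community.
# All parameter values are assumptions chosen only to exhibit the fast hospital /
# slow community time scales discussed in the abstract.
from scipy.integrate import solve_ivp

# Assumed rates (per day)
BETA_H = 0.30     # transmission rate inside the hospital
BETA_C = 0.001    # much weaker transmission in the community and LTCFs
MU     = 1 / 5    # discharge rate, i.e. an average LOS of about 5 days
ADMIT  = 0.0005   # per-capita admission rate from the catchment population
DECOL  = 1 / 365  # loss of carriage outside the hospital (roughly one year)

def arb(t, y):
    """y = (h, c): fraction of carriers in the hospital and in the catchment."""
    h, c = y
    # Hospital: within-hospital transmission plus patient turnover
    # (admissions arrive with prevalence c, discharges leave with prevalence h).
    dh = BETA_H * h * (1 - h) + MU * (c - h)
    # Catchment: weak transmission, returning carriers, and slow loss of carriage.
    dc = BETA_C * c * (1 - c) + ADMIT * (h - c) - DECOL * c
    return [dh, dc]

sol = solve_ivp(arb, (0, 3650), [0.0, 0.01], dense_output=True)
for day in (30, 365, 3650):
    h, c = sol.sol(day)
    print(f"day {day:5d}: hospital prevalence {h:.3f}, community prevalence {c:.3f}")
```

With these assumed numbers, the hospital compartment settles near its conditional equilibrium within the first month, while catchment prevalence rises much more slowly over years, which is the qualitative point of the fast/slow distinction rather than a quantitative prediction.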
