Similar Literature
20 similar documents found.
1.
2.
Background/Aim:

Over the past two decades, several advances have been made in the management of patients with hepatocellular carcinoma (HCC) and portal vein tumor thrombosis (PVTT). Yttrium-90 (90Y) radioembolization has recently become a treatment option for patients with HCC and PVTT, but its outcomes still need to be evaluated systematically. We aimed to assess the safety and effectiveness of 90Y radioembolization for HCC and PVTT through a systematic review of clinical trials, clinical studies, and conference abstracts that qualified for analysis.

Results:

A total of 14 clinical studies and three conference abstracts, comprising 722 patients, qualified for the analysis. The median length of follow-up was 7.2 months, the median time to progression was 5.6 months, and the median disease control rate was 74.3%. Radiological response data were reported in five studies; the median reported proportions of patients with complete response, partial response, stable disease, and progressive disease were 3.2%, 16.5%, 31.3%, and 28%, respectively. Median survival was 9.7 months for all patients; median overall survival (OS) was 12.1 months for Child-Pugh class A patients and 6.1 months for class B patients, and 6.1 and 13.4 months for patients with main and branch PVTT, respectively. The most common toxicities were fatigue, nausea/vomiting, and abdominal pain, most of which required no medical intervention.

Conclusions:

90Y radioembolization is a safe and effective treatment for HCC and PVTT.

Key Words: Hepatocellular carcinoma, portal vein tumor thrombosis, radioembolization, toxicity, yttrium-90

Portal vein tumor thrombosis (PVTT) occurs in a substantial proportion of hepatocellular carcinoma (HCC) patients, affecting approximately 10%–40% at diagnosis.[1,2] PVTT has a profound adverse effect on prognosis: the median survival of patients with unresectable HCC and PVTT (2–4 months) is significantly shorter than that of patients without PVTT (10–24 months).[1,3] The presence of PVTT also limits treatment options, as HCC treatment guidelines often consider PVTT a contraindication to transplantation, curative resection, and transarterial chemoembolization (TACE).[4,5] Although PVTT poses a challenging treatment dilemma,[1] many treatments for HCC with PVTT have been reported, including surgery,[2] TACE,[4,5,6,7,8,9] external beam radiotherapy,[10] gamma-knife radiosurgery,[4] TACE combined with endovascular implantation of an iodine-125 seed strand,[11] and transarterial radioembolization.[12] However, the optimal treatment for patients with HCC and PVTT remains controversial.[7] Yttrium-90 (90Y) radioembolization is a locoregional liver-directed therapy that involves transcatheter delivery of particles embedded with the radioisotope 90Y. In addition to obliterating the arterial blood supply, the 90Y delivers a 50–150 Gy radiation dose to the tumor tissue, causing necrosis of both HCC and PVTT.[13] 90Y radioembolization has been reported to be a safe and effective treatment for patients with HCC and PVTT,[6] but its outcomes have not yet been systematically evaluated. The purpose of this study was to comprehensively review the safety and effectiveness of 90Y radioembolization for HCC and PVTT.

3.
Background/Aim:

To elucidate colorectal cancer (CRC) disease patterns, demographics, characteristics, stage at presentation, metastases, and survival rates of patients, particularly those with liver metastases, at our center, as the first such report from the Kingdom of Saudi Arabia.

Results:

A total of 427 cases of CRC were identified, with a mean age at diagnosis of 55.47 ± 12.85 years; 96% of tumors were resected. Stage II was predominant at presentation, followed by both stage III and stage IV, with the remainder being stage I. One hundred patients had distant metastases, and in 54 of them the liver was the only site. Mean survival was 3.0 years. Overall survival rates for CRC patients with liver metastases who underwent resection were 30% at 2 years and 17% at 5 years, and mean survival in this group was 1.4 years.

Conclusions:

Both the mean survival and the 5-year survival rate of our CRC patients with resectable liver metastases are lower than the global averages. This discrepancy is likely due to late diagnosis rather than more aggressive disease.

Key Words: Colon cancer, metastasis, Saudi Arabia, survival

Colorectal cancer (CRC) is the third most common malignancy worldwide.[1] It affects roughly 30–50 people per 100,000 individuals in the USA and Europe,[2] and roughly half of these patients develop metastases during the course of the disease.[3,4] The liver is the predominant site of metastasis in CRC, and 25% of patients present with metastases at the time of diagnosis (synchronous).[4,5,6] Over the course of the disease, in particular after resection of the primary tumor, approximately half of all CRC patients develop liver metastases.[4,5,6] CRC rates are markedly lower in Africa and the Middle East, with an estimated incidence of 3–11 per 100,000 individuals.[7] In the Kingdom of Saudi Arabia (KSA) specifically, the incidence of CRC increased from 6.6 per 100,000 individuals in 2003[8] to over 12 per 100,000 individuals in 2008.[9] In a recent meta-analysis by Kanakas et al., the global 3- and 5-year mean survival rates of CRC were estimated to be 57.6% and 40.3%, respectively;[10] however, 5-year survival rates in the USA are estimated to be greater than 65%.[11] In comparison, the 5-year survival rate of CRC in the KSA is reported to be 44.6%.[12] Survival rates for CRC have improved tremendously owing to early detection, application of total mesorectal excision, and the addition of radiochemotherapy.[13,14] With respect to CRC patients with liver metastases, the estimated global mean 5-year survival is 38%, although this figure varies dramatically from region to region.[10] The improvement in survival for metastatic CRC is largely due to the aggressive surgical approach of resecting all metastases and the addition of effective systemic chemotherapy.[15] Despite increasing data on CRC incidence and survival in the KSA, information relating to the leading cause of death from this disease, namely liver metastasis, is currently scarce.[16] Therefore, the aim of this study was to elucidate CRC disease patterns and survival rates, particularly of patients with liver metastases, in the KSA population and to compare these figures with the global averages.

4.
Background/Aim:

The efficacy of flexible spectral imaging color enhancement (FICE) channel 1 (F1) for the detection of ulcerative lesions and angioectasias in the small intestine with capsule endoscopy (CE) has been reported. In the present study, we evaluated whether F1 could detect additional findings in patients with no findings in the standard review mode.

Results:

F1 detected five significant lesions in three patients with overt OGIB: three erosions, one aphtha, and one angioectasia. Among nonsignificant lesions, F1 detected 12 areas of red mucosa and 16 red spots. In addition, 71 findings in 29 patients were considered false positives.

Conclusion:

F1 detected additional significant findings in a small percentage of patients with no findings in the standard review mode, but it also produced many false-positive findings. The incremental benefit of a repeated review with F1 in patients with no findings at the first review is limited.

Key Words: Capsule endoscopy, flexible spectral imaging color enhancement, obscure gastrointestinal bleeding, repeat review, small intestinal lesion

Flexible spectral imaging color enhancement (FICE), an image-enhanced endoscopy (IEE) technique, has been used widely in gastroscopy and colonoscopy.[1,2] FICE relies on optical filters and spectral estimation technology to reconstruct images at different wavelengths from white-light endoscopy images, and it has been reported to improve the visualization of both neoplastic and non-neoplastic lesions in gastroscopy and colonoscopy.[3,4,5] Capsule endoscopy (CE) has become an important examination modality for the small intestine,[6] and its efficacy for small intestinal diseases has been reported.[7,8,9,10,11] CE has demonstrated efficacy in patients with obscure gastrointestinal bleeding (OGIB) and can detect various disease states such as tumors, polyps, angioectasias, ulcers, and erosions. Rapid 6.5 (Given Imaging Ltd., Yoqneam, Israel), a CE reading system, includes FICE.[12] Several studies have examined the effects of FICE in CE.[13,14,15,16,17,18,19,20] In a previous study, we found that FICE channel 1 (F1) detected a larger number of ulcerative lesions and angioectasias in the small intestine than a standard review.[16] However, little is known about the impact of FICE in patients with no findings on standard-mode CE review. In the present study, we therefore investigated whether F1 could detect additional findings in patients with no findings in the standard review mode.

5.

Background/Aims:

This study aimed to examine whether UHRF-1 and p53 overexpression is a prognostic marker for gastric cancer.

Patients and Methods:

Sixty-four patients with gastric cancer (study group) and 23 patients with gastritis (control group) were evaluated. Immunohistochemistry was used to examine expression of UHRF-1 and p53 in gastric cancers and a control group diagnosed with gastritis.

Results:

The median age in the study group was 63 years (range, 18–83 years). UHRF-1 was positive in 15 (23%) patients with gastric cancer and in five (21.7%) patients with gastritis (P = 0.559); however, the level of UHRF-1 expression was higher in gastric cancer than in gastritis (P = 0.046). Thirty-seven (61%) patients with gastric cancer, but only one patient with gastritis, were p53 positive (P < 0.001). After a median follow-up of 12 months (range, 1–110), the 2-year overall survival rates were 55% and 30% in p53-negative and p53-positive patients, respectively (P = 0.084), and 45% and 53% in UHRF-1-negative and UHRF-1-positive patients, respectively (P = 0.132).

Conclusion:

In this study, UHRF-1 and p53 were not prognostic factors for gastric cancer, but they may have diagnostic value for differentiating between gastric cancer and gastritis.

Key Words: Gastric cancer, p53 genes, prognosis, survival, UHRF-1 protein

Gastric cancer is one of the most frequent cancers in the world for both men and women. It is the fourth most frequent cancer and the second leading cause of cancer-related death worldwide, and nearly two-thirds of cases occur in developing countries.[1] Despite innovations in treatment modalities, gastric cancer remains a deadly disease.[2] It is more frequent in Eastern populations and in Turkey than in European populations, and it is the second leading cause of cancer death in men and the third in women in Turkey.[3] Unfortunately, diagnostic and prognostic markers for this frequent and deadly tumor are insufficient. Epigenetic silencing of tumor suppressor genes is a hallmark of human cancers, affecting multiple cellular pathways such as cell signaling, adhesion and invasion, cell cycle, angiogenesis, DNA repair, and apoptosis.[4,5] Epigenetic alterations contribute significantly to the development and progression of gastric cancer.[6] Ubiquitin-like, containing PHD and RING finger domains 1 (UHRF1) is a recently discovered gene reported to help maintain DNA methylation by recruiting DNA methyltransferase 1 (DNMT1) to hemimethylated DNA.[7] UHRF1 is a putative oncogenic factor and is overexpressed in numerous cancers.[8,9,10,11,12] Data on the role of UHRF-1 in gastric carcinoma are scarce in the literature. p53 is a well-known tumor suppressor gene, and mutant p53 is found in many human cancers. In gastric carcinomas, the reported frequency of p53 expression varies from 25% to 60%,[13,14,15,16] and its prognostic role remains controversial.[17] In this study, p53 and UHRF-1 expression were investigated in gastric carcinoma and in a control group, with the aim of assessing the diagnostic and prognostic value of these markers for gastric carcinoma.
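The 2-year overall survival figures above are the kind of estimate usually obtained from Kaplan–Meier curves compared with a log-rank test. The sketch below illustrates that general approach in Python; the lifelines package, the column names, and the toy data are assumptions for illustration, not material from the study.

```python
# Hypothetical sketch: Kaplan-Meier 2-year overall survival by p53 status,
# compared with a log-rank test. Data and column names are invented.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [5, 14, 26, 30, 8, 11, 24, 40, 3, 19],   # follow-up time
    "died":   [1, 1, 0, 0, 1, 1, 0, 0, 1, 1],           # 1 = death observed
    "p53":    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],           # 1 = p53 positive
})

kmf = KaplanMeierFitter()
for status, group in df.groupby("p53"):
    kmf.fit(group["months"], group["died"], label=f"p53={status}")
    print(f"p53={status}: estimated 2-year OS =", kmf.predict(24))

pos, neg = df[df.p53 == 1], df[df.p53 == 0]
result = logrank_test(pos["months"], neg["months"], pos["died"], neg["died"])
print("log-rank p value =", result.p_value)
```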

6.
7.
Background/Aim:

Increasing resistance of Helicobacter pylori to antimicrobials has necessitated the development of new regimens and the modification of existing ones. The present study compared the efficacy of two bismuth-containing quadruple regimens, one including clarithromycin (C) and the other metronidazole (M), and standard triple therapy.

Results:

In the per-protocol analysis, the eradication rates were 64.7% (95% confidence interval 60.4–68.7) with triple therapy (n = 504), 95.4% (95% confidence interval 91.5–99.4) in bismuth group C (n = 501), and 93.9% (95% confidence interval 89.7–98.7) in bismuth group M (n = 505). The eradication rates were similar in the two bismuth groups (P > 0.05) but significantly higher than with triple therapy (P < 0.05).

Conclusion:

Both bismuth-containing quadruple therapies achieved high eradication rates, whereas triple therapy proved ineffective. Moreover, clarithromycin may also be used as a component of bismuth-containing quadruple therapy.

Key Words: Bismuth, clarithromycin, eradication, Helicobacter pylori, metronidazole

Helicobacter pylori infection is a worldwide problem: an estimated 80% of the population in developing countries and 20%–50% in developed countries carry this pathogen.[1,2,3] The ultimate clinical manifestations of H. pylori infection include gastric and duodenal ulcer, gastric mucosa-associated lymphoid tissue lymphoma, and adenocarcinoma.[4,5] H. pylori eradication remains a challenge for physicians, since no first-line regimen is able to cure the infection in all treated patients, owing to antibiotic resistance. The efficacy of standard triple therapy has declined in recent years and now falls below the 80% target originally set for it.[5,6,7,8] The background rate of clarithromycin resistance is critically important, as its presence negatively affects the efficacy of standard triple therapy.[9] For this reason, the Maastricht IV consensus report recommends bismuth-containing quadruple therapies as first-line empirical treatment in areas of high clarithromycin resistance (>15%–20%).[8] It is known that resistance to metronidazole can be partially overcome by increasing the dose and duration of treatment.[10] This multicenter study compared two bismuth-containing quadruple therapies, one including clarithromycin (C) and the other metronidazole (M), with triple therapy for H. pylori eradication in dyspeptic patients.
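The eradication rates above are reported with 95% confidence intervals for a binomial proportion. As a rough illustration of how such an interval is computed (not the study's own analysis, and with the eradication count assumed for the example), one could write:

```python
# Hypothetical sketch: 95% confidence interval for an eradication rate.
# The counts below are assumed; the abstract reports only rates and CIs.
from statsmodels.stats.proportion import proportion_confint

n_per_protocol = 501      # e.g. bismuth group C
n_eradicated = 478        # assumed number of successful eradications

rate = n_eradicated / n_per_protocol
low, high = proportion_confint(n_eradicated, n_per_protocol, alpha=0.05, method="wilson")
print(f"eradication rate {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
```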

8.

Objective

To study gender differences in management and outcome in patients with non‐ST‐elevation acute coronary syndrome.

Design, setting and patients

Cohort study of 53 781 consecutive patients (37% women) from the Register of Information and Knowledge about Swedish Heart Intensive care Admissions (RIKS‐HIA), with a diagnosis of either unstable angina pectoris or non‐ST‐elevation myocardial infarction. All patients were admitted to intensive coronary care units in Sweden, between 1998 and 2002, and followed for 1 year.

Main outcome measures

Treatment intensity and in‐hospital, 30‐day and 1‐year mortality.

Results

Women were older (73 vs 69 years, p<0.001) and more likely to have a history of hypertension and diabetes, but less likely to have a history of myocardial infarction or revascularisation. After adjustment, there were no major differences in acute pharmacological treatment or prophylactic medication at discharge. Revascularisation was, however, performed more often in men even after adjustment (OR 1.15; 95% CI 1.09 to 1.21). After adjustment, there was no significant difference in in-hospital (OR 1.03; 95% CI 0.94 to 1.13) or 30-day (OR 1.07; 95% CI 0.99 to 1.15) mortality, but at 1 year male sex was associated with higher mortality (OR 1.12; 95% CI 1.06 to 1.19).

Conclusion

Although women are somewhat less intensively treated, especially with regard to invasive procedures, after adjustment for differences in background characteristics they have better long-term outcomes than men.

Since the beginning of the 1990s there have been numerous studies on gender differences in the management of acute coronary syndromes (ACS). Many earlier studies,1,2,3,4,5,6,7,8 but not all,9 found that women were treated less intensively in the acute phase. In some of these studies, most of the differences disappeared after adjustment for age, comorbidity and severity of disease.6,7 There is also conflicting evidence on gender differences in evidence-based treatment at discharge.1,3,5,6,8,10,11 After acute myocardial infarction (AMI), higher short-term mortality in women has been documented in several studies.2,5,6,7,12,13,14 After adjustment for age and comorbidity, some difference has usually,2,5,12,13 but not always,11,14 remained. On the other hand, most studies assessing long-term outcome have found no difference between the genders, or a better outcome in women, at least after adjustment.7,10,13,14 Earlier studies of gender differences in outcome after an acute coronary syndrome have usually examined patients with AMI, including both ST-elevation myocardial infarction and non-ST-elevation myocardial infarction (NSTEMI).2,5,6,7,12,13,14 However, the pathophysiology and initial management of these two conditions differ,15 as does outcome by gender.11,16 In patients with NSTEMI or unstable angina pectoris (UAP), women seem to have an equal or better outcome after adjustment for age and comorbidity.1,4,8,11,16,17 Studies of gender differences in treatment and outcome in real-life, contemporary non-ST-elevation acute coronary syndrome (NSTE ACS) populations large enough to allow the necessary adjustment for confounders are lacking. The aim of this study was to assess gender differences in background characteristics, management and outcome in a real-life intensive coronary care unit (ICCU) population with NSTE ACS.
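Adjusted odds ratios such as those quoted above (e.g. OR 1.15 for revascularisation in men) are typically derived from multivariable logistic regression. The sketch below shows the general pattern using statsmodels; the file name, column names and covariate set are assumptions for illustration and do not reproduce the RIKS-HIA analysis.

```python
# Hypothetical sketch: adjusted odds ratio with 95% CI from logistic regression.
# File and column names (male, age, diabetes, hypertension, revasc) are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nste_acs_cohort.csv")   # one row per patient (assumed file)

model = smf.logit("revasc ~ male + age + diabetes + hypertension", data=df).fit()

odds_ratios = np.exp(model.params)        # exponentiated coefficients = ORs
ci = np.exp(model.conf_int())             # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```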

9.
10.
Infections with antibiotic-resistant bacteria (ARB) in hospitalized patients are becoming increasingly frequent despite extensive infection-control efforts. Infections with ARB are most common in the intensive care units of tertiary-care hospitals, but the underlying cause of the increases may be a steady increase in the number of asymptomatic carriers entering hospitals. Carriers may shed ARB for years but remain undetected, transmitting ARB to others as they move among hospitals, long-term care facilities, and the community. We apply structured population models to explore the dynamics of ARB, addressing the following questions: (i) What is the relationship between the proportion of carriers admitted to a hospital, transmission, and the risk of infection with ARB? (ii) How do frequently hospitalized patients contribute to epidemics of ARB? (iii) How do transmission in the community, long-term care facilities, and hospitals interact to determine the proportion of the population that is carrying ARB? We offer an explanation for why ARB epidemics have fast and slow phases and why resistance may continue to increase despite infection-control efforts. To successfully manage ARB at tertiary-care hospitals, regional coordination of infection control may be necessary, including tracking asymptomatic carriers through health-care systems.

Nosocomial infections with antibiotic-resistant bacteria (ARB) occur with alarming frequency (1), and new multidrug-resistant bacteria continue to emerge, including vancomycin-resistant enterococci (VRE), methicillin-resistant Staphylococcus aureus (MRSA), two recent but isolated cases of vancomycin-resistant S. aureus (2, 3), and multiple-drug resistance in Gram-negative bacteria. In response, hospitals have used a variety of infection-control measures, some of which are costly and difficult to implement (4). Despite efforts to reduce transmission of ARB within hospitals, the incidence of VRE, MRSA, and other antibiotic-resistant infections continues to increase (1). An important distinction in the epidemiology of ARB is that between infection and colonization. Infection is characterized by serious illness when ARB contaminate wounds, the bloodstream, or other tissues. In contrast, colonization with ARB may occur in the gut, nasal cavities, or on other body surfaces. Colonizing bacteria may persist for years without causing disease or harming their hosts (5, 6); we call such individuals carriers. Carriers increase colonization pressure: the more patients who are shedding ARB, the greater the risk that another patient becomes a carrier or acquires a resistant infection (7). Hospitals that reduce the incidence of resistance (the number of new cases) may see no reduction in overall prevalence (the fraction of patients with ARB), because these hospitals admit an increasing number of ARB carriers (8, 9). Patients infected with ARB generally remain hospitalized until their symptoms are cured, but they may continue to carry and shed ARB for months or years. We show that the prevalence of ARB in hospitals approaches equilibrium rapidly because of the rapid turnover of patients; the average length of stay (LOS) is ≈5 days (10, 11). Moreover, prevalence changes rapidly in response to changes in hospital infection control (11-17), so slow and steady increases in resistance must be due to something else, such as increases in the proportion of carriers admitted from the catchment population of a hospital, defined as the population from which patients are drawn, including long-term care facilities (LTCFs), other hospitals, and the community (5, 18). The health-care institutions that serve a common catchment population vary substantially in their relative size, transmission rates, and average LOS. How do increases in the number of ARB carriers in the catchment population contribute to increases in ARB infections in the hospital, and what can be done about it? Mathematical models play an important role in understanding the spread of ARB (19). We build on existing theory for the transmission dynamics of ARB developed for simple, well-mixed populations (12, 20), but we focus on phenomena that occur at a large spatial scale, ignoring competition between sensitive and resistant bacteria, the biological cost of resistance (21), and the relationship between antibiotic use and the prevalence of ARB (11, 22, 23). We have developed mathematical models of multiple institutions connected by patient movement; such models are called "structured" populations or "metapopulations." Thus, we are developing epidemiological models (20, 21, 24) focused specifically on the consequences of persistent colonization and population structure, applying existing theory for structured populations (20, 24-27).
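To make the structured-population idea concrete, the sketch below couples carrier prevalence in a single hospital to its catchment community through admission and discharge. It is a minimal two-patch illustration of the modelling approach described above, not the authors' model, and every parameter value is an assumption.

```python
# Minimal two-patch sketch of ARB carriage: a hospital coupled to its catchment
# community by admission and discharge. Parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

beta_h = 0.40      # per-day transmission rate inside the hospital
beta_c = 0.01      # per-day transmission rate in the community
gamma = 1 / 365    # per-day loss of carriage (carriage lasting ~1 year)
los = 5.0          # average hospital length of stay in days
frac_beds = 0.002  # hospital beds as a fraction of the catchment population

def rhs(t, y):
    yh, yc = y     # fraction colonized in the hospital and in the community
    dyh = beta_h * yh * (1 - yh) - gamma * yh + (yc - yh) / los
    dyc = beta_c * yc * (1 - yc) - gamma * yc + frac_beds * (yh - yc) / los
    return [dyh, dyc]

sol = solve_ivp(rhs, (0, 3650), [0.05, 0.01], t_eval=np.linspace(0, 3650, 11))
for t, yh, yc in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"day {t:5.0f}: hospital prevalence {yh:.3f}, community {yc:.3f}")
```

Because patient turnover (1/los) is fast relative to loss of carriage, hospital prevalence settles quickly onto a level set largely by the community carrier fraction, which drifts slowly; this is one way to picture the fast and slow phases described above.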

11.
12.
13.
14.

Background/Aims:

Tumor recurrence after curative therapy is common for patients with hepatocellular carcinoma (HCC). As fibrosis and chronic inflammation contribute to the progression of HCC, we aimed to identify the predictive value of inflammatory and fibrosis markers for HCC recurrence after curative therapy using radiofrequency ablation (RFA).

Materials and Methods:

We retrospectively reviewed the records of patients with HCC treated with RFA between October 2005 and September 2013. The median duration of follow-up was 40 months (4–95 months). Inflammatory and fibrosis markers and demographic and clinical data were analyzed by Cox proportional hazards model using univariate and multivariate analyses and longitudinal analysis.

Results:

A total of 98 patients were included for analysis. There were 54 cases of HCC recurrence (55.1%). The aspartate aminotransferase-to-platelet ratio index (APRI; 2.3 ± 1.8 vs. 1.3 ± 1.4, P = 0.018) was significantly higher in the recurrence group than in the recurrence-free group. In multivariate analysis, APRI (hazard ratio, 2.64; confidence interval, 1.488–4.714; P = 0.001) was an independent risk factor for tumor recurrence. In particular, patients with APRI >1.38 showed a higher recurrence rate than patients with APRI ≤1.38 (P < 0.001). Longitudinal analysis showed persistently higher APRI values when assessed 12 months after RFA in patients who developed recurrence during follow-up than those who remained recurrence-free.

Conclusions:

These findings show that a high APRI value is associated with HCC recurrence after RFA. APRI could therefore play an important role in predicting HCC recurrence after RFA.

Key Words: Aspartate aminotransferase-to-platelet ratio index, hepatocellular carcinoma, radiofrequency ablation

Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide, and its incidence is increasing in Asia and in the United States.[1] Its development is linked to chronic liver disease resulting from chronic infection with hepatitis B virus (HBV) or hepatitis C virus (HCV), alcohol consumption, or metabolic syndrome. The underlying liver parenchyma is rarely normal, showing various histological changes including inflammation and fibrosis leading to cirrhosis. These histological changes of the underlying liver, together with the multicentric origin of the tumors, limit the possibility of curative treatments such as radiofrequency ablation (RFA), partial liver resection, and liver transplantation.[2,3] The cumulative 5-year recurrence rate is approximately 67–80%.[4,5] Furthermore, most patients with HCC are not candidates for surgical resection because of poor hepatic reserve. From this perspective, several minimally invasive techniques have been studied, such as RFA and ethanol or acetic acid injection; ethanol injection has gradually been replaced by RFA, which is now the most widely used percutaneous treatment.[6] Recurrence of HCC after RFA occurs frequently, as a result of either local tumor progression (LTP) or intrahepatic distant recurrence (IDR). LTP can occur along the peripheral margin of the ablative lesion when the primary tumor has not been completely controlled by RFA. In contrast, IDR is thought to result from the multicentric origin of HCC[7] and may be related more to systemic factors, such as fibrosis and chronic inflammatory processes, than to local factors.[8] In addition, some noninvasive inflammatory and fibrosis markers may reflect the severity of fibrosis and inflammation in the background liver tissue.[9,10,11] Hence, to determine the appropriate treatment modality and the optimal follow-up interval, it is important to identify surrogate markers that can predict HCC recurrence after RFA. The aim of this study was to evaluate the correlation between inflammatory and fibrosis markers and HCC recurrence after RFA.
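For reference, APRI is conventionally calculated as (AST / upper limit of normal AST) x 100 / platelet count (10^9/L). The snippet below is a small illustration of that standard formula together with the 1.38 cut-off used in the study; the input values are invented.

```python
# Standard APRI formula; the input values below are illustrative, not study data.
def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_10e9_l: float) -> float:
    """Aspartate aminotransferase-to-platelet ratio index."""
    return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_10e9_l

# Example: AST 76 IU/L, upper limit of normal 40 IU/L, platelets 95 x 10^9/L
score = apri(76, 40, 95)
print(f"APRI = {score:.2f}; above the 1.38 cut-off: {score > 1.38}")
```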

15.

Objective

Socioeconomic status (SES) is inversely associated with coronary heart disease (CHD) risk. Cumulative pathogen burden may also predict future CHD. The hypothesis was tested that lower SES is associated with a greater pathogen burden, and that pathogen burden accounts in part for SES differences in cardiovascular risk factors.

Methods

This was a cross‐sectional observational study involving the clinical examination of 451 men and women aged 51–72 without CHD, recruited from the Whitehall II epidemiological cohort. SES was defined by grade of employment, and pathogen burden by summing positive serostatus for Chlamydia pneumoniae, cytomegalovirus and herpes simplex virus 1. Cardiovascular risk factors were also assessed.

Results

Pathogen burden averaged 1.94 (SD 0.93) in the lower grade group, compared with 1.64 (0.97) and 1.64 (0.93) in the intermediate and higher grade groups (p = 0.011). Pathogen burden was associated with a higher body mass index, waist/hip ratio and blood pressure, and with a higher incidence of diabetes. There were SES differences in waist/hip ratio, high-density lipoprotein-cholesterol, fasting glucose, glycated haemoglobin, lung function, smoking and diabetes. The SES gradient in these cardiovascular risk factors was unchanged when pathogen burden was taken into account statistically.

Conclusions

Although serological signs of infection with common pathogens are more frequent in lower SES groups, their distribution across the social gradient does not match the linear increase in CHD risk from higher to intermediate to lower SES groups. In addition, pathogen burden does not appear to mediate SES differences in cardiovascular risk profiles.

There is a socioeconomic gradient in coronary heart disease (CHD) mortality and cardiovascular disease risk in the USA, UK and many other countries.1,2 Lower socioeconomic status (SES) is associated with a range of cardiovascular risk factors including smoking, adverse lipid profiles, abdominal adiposity, glucose intolerance and inflammatory markers.3,4,5,6,7 Both early life SES and adult socioeconomic position appear to contribute to the social gradient.5,8 A history of infection may contribute to cardiovascular disease risk by stimulating sustained vascular inflammation. Evidence concerning the relevance of individual pathogens is mixed, but cumulative pathogen burden, defined by positive serostatus for a range of pathogens, has been associated with coronary artery disease and carotid atherosclerosis in case-control9,10,11 and longitudinal cohort studies.12,13,14 Pathogen burden is also related to cardiovascular risk markers such as endothelial dysfunction,15 low high-density lipoprotein (HDL)-cholesterol16 and insulin resistance,17 in some but not all studies.9,18 It is plausible that pathogen burden could contribute to SES differences in cardiovascular disease risk. Exposure to infection is greater in lower SES groups, particularly in early life,19 and childhood infection is associated with endothelial dysfunction.20 We therefore tested the hypothesis that lower SES is associated with a greater cumulative pathogen burden in healthy middle-aged and older adults. Seropositivity was measured for three pathogens, Chlamydia pneumoniae, cytomegalovirus (CMV) and herpes simplex virus 1 (HSV-1), that have been associated with cardiovascular disease risk21,22,23 and have been included in studies of cumulative pathogen burden.10,12,14,15 We also determined whether variations in pathogen burden accounted for SES differences in cardiovascular risk factors.
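Pathogen burden as defined above is simply the count of positive serostatus results across the three pathogens (0 to 3). A minimal illustration is given below; the column names and values are invented and are not the Whitehall II data.

```python
# Hypothetical sketch: pathogen burden = number of seropositive results (0-3),
# summarised by employment grade. Column names and data are invented.
import pandas as pd

df = pd.DataFrame({
    "grade":        ["higher", "higher", "intermediate", "lower", "lower", "lower"],
    "c_pneumoniae": [1, 0, 1, 1, 1, 0],
    "cmv":          [0, 1, 1, 1, 1, 1],
    "hsv1":         [1, 0, 0, 1, 0, 1],
})

df["pathogen_burden"] = df[["c_pneumoniae", "cmv", "hsv1"]].sum(axis=1)
print(df.groupby("grade")["pathogen_burden"].agg(["mean", "std"]))
```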

16.

Objectives

Several studies have revealed increased bone mineral density (BMD) in patients with knee or hip osteoarthritis, but few studies have addressed this issue in hand osteoarthritis (HOA). The aims of this study were to compare BMD levels and frequency of osteoporosis between female patients with HOA, rheumatoid arthritis (RA) and controls aged 50–70 years, and to explore possible relationships between BMD and disease characteristics in patients with HOA.

Methods

190 HOA and 194 RA patients were recruited from the respective disease registers in Oslo, and 122 controls were selected from the population register of Oslo. All participants underwent BMD measurements of the femoral neck, total hip and lumbar spine (dual-energy x-ray absorptiometry), an interview and a clinical joint examination, and completed self-reported questionnaires.

Results

Age-, weight- and height-adjusted BMD values were significantly higher in HOA than in RA and controls, although the latter comparison was significant only for the femoral neck and lumbar spine. The frequency of osteoporosis was not significantly different between HOA and controls, but was significantly lower in HOA than in RA. Adjusted BMD values did not differ between HOA patients with and without knee OA, and no significant associations were observed between BMD levels and symptom duration or disease measures.

Conclusion

HOA patients have a higher BMD than population-based controls, and this does not seem to be limited to patients with involvement of larger joints. The lack of correlation between BMD and disease duration or severity does not support the hypothesis that higher BMD is a consequence of the disease itself.

Osteoporosis is recognised as a frequent complication of rheumatoid arthritis (RA).1 Osteoarthritis (OA) is the most frequent rheumatic joint disease and, in contrast to the situation in RA, several studies have revealed increased bone mineral density (BMD) in patients with knee or hip OA, although the results have not been consistent across studies.2,3,4,5,6,7,8,9,10 The hand is a frequent site of peripheral joint involvement in OA. However, only a limited number of studies have addressed the issue of osteoporosis in hand osteoarthritis (HOA), and their results have been inconsistent regarding BMD levels compared with controls.8,9,10,11,12,13,14,15 OA has generally been considered a cartilage disease characterised by slow, progressive degeneration of articular cartilage due to "wear and tear" mechanisms. However, there is increasing evidence that abnormalities in the subchondral bone and systemic factors may contribute to the pathophysiological process. Studies of subchondral bone have revealed alterations in microstructure, including increased BMD. This local increase in BMD in OA joints may be a consequence of reduced shock absorption in joints with degenerated cartilage,5 or, conversely, thickening and stiffening of the subchondral bone with increased BMD may lead to the development of OA.16 However, elevated BMD levels at sites remote from the arthritic process cannot be explained by local biomechanical factors, and the question has been raised of whether primary OA is instead a systemic bone disease.17 Systemic changes in subchondral bone could be explained by genetic factors, hormonal influences, vitamin D concentrations, growth factors or the activity of bone-forming cells.15,18,19,20,21 Better knowledge of the relationship between BMD and HOA may contribute to the understanding of the pathogenesis of OA. Disease registers of patients with RA22 and HOA23 have been established in the city of Oslo. We have previously compared BMD levels in a cohort of RA patients from the Oslo RA Register (ORAR) and healthy controls.24 The current study was designed to compare BMD levels and the frequency of osteoporosis between patients with HOA, patients with RA and controls. A second aim was to explore possible relationships between BMD levels and disease characteristics in patients with HOA.

17.
Can CD4+ and CD8+ "memory" T cells that are generated and maintained in the context of low-level virus persistence protect, in the absence of antibody, against a repeat challenge with the same pathogen? Although immune T cells exert effective, long-term control of a persistent γ-herpesvirus (γHV68) in Ig–/– μMT mice, subsequent exposure to a high dose of the same virus leads to further low-level replication in the lung. This lytic phase in the respiratory tract is dealt with effectively by the recall of memory T cells induced by a γHV68 recombinant (M3LacZ) that does not express the viral M3 chemokine binding protein. At least for the CD8+ response, greater numbers of memory T cells confer enhanced protection in the M3LacZ-immune mice. However, neither WT γHV68 nor the minimally persistent M3LacZ primes the T cell response to the extent that a WT γHV68 challenge fails to establish latency in the μMT mice. Memory CD4+ and CD8+ T cells thus act together to limit γHV68 infection but are unable to provide absolute protection against a high-dose, homologous challenge.

A long-term debate in the immunology community concerns the importance of antigen persistence for maintaining T cell memory (1–3). The discussion is often confused by different interpretations of the term "memory" (4). If we look at memory simply as the capacity to maintain antigen-specific, "resting" T cell numbers indefinitely, then it seems that the continued presence of the particular MHC class I or class II glycoprotein plus peptide (epitope) is certainly not required (2, 3, 5, 6). However, if memory is used in the sense of the protective immunity that might be the focus of a candidate T cell-based vaccine, then the case that continued (or sporadic) reexposure to the inducing antigen is advantageous could well have merit (7). Acutely activated T cells deal very effectively with a homologous virus challenge (8). On the other hand, the time taken to recall "resting" memory allows an invading organism to become established (9), although the pathogen may either be cleared more rapidly or (for an agent that persists) be held to a lower "set point" (10). Although cytotoxic T cells can be induced very rapidly in vivo (11, 12), their localization to (for example) the respiratory mucosa may be delayed by the need for further activation and proliferation in the lymphoid tissue (13). Could protection be made more immediate by achieving a continuing state of enhanced lymphocyte turnover (14) and activation? The herpesviruses (HVs) provide a natural system for analyzing immunity in the context of controlled virus persistence (15, 16). Vaccination strategies for the γHVs, such as Kaposi's sarcoma virus (HHV8) and Epstein–Barr virus (EBV), can be investigated (17–22) with the murine γ-herpesvirus 68 (γHV68), a virus that has high-level sequence homology with HHV8 and a pathogenesis similar to that of EBV (17–19). Respiratory exposure of C57BL/6J (B6) mice to γHV68 induces transient, lytic infection of the respiratory tract and latency in B lymphocytes and macrophages (20, 21). Virus is not normally detected by plaque assay of lung homogenates for >10–12 days after the initial challenge. Genetically disrupted μMT mice that lack both B cells and antibody (Ig–/–) also clear γHV68 from the lung and show little evidence of latency by infectious center assay (22). However, a more sensitive limiting dilution analysis (LDA) showed that γHV68 persists in macrophages and, perhaps, in other cells from Ig–/– mice after both i.p. and intranasal (i.n.) challenge (21, 23).

The present experiments ask whether the combination of CD4+ and CD8+ T cell memory (24) in mice infected once with γHV68 (25, 26) protects against superinfection with the same virus. Antibody is, of course, likely to neutralize the majority of input virus in this circumstance (27–29). The experiments were therefore done with Ig–/– μMT mice (30) that had been infected with either WT γHV68 or a mutant virus (M3LacZ) that causes a normal, lytic infection in the lung but a much lower level of latency in the lymphoid tissue of Ig+/+ controls (22). In addition, the protective capacity of immune CD4+ and CD8+ T cells maintained under the possibility of continued, low-level antigen challenge has not (to our knowledge) been analyzed previously for any virus system.

18.

Objective

S100A12 is a pro‐inflammatory protein that is secreted by granulocytes. S100A12 serum levels increase during inflammatory bowel disease (IBD). We performed the first study analysing faecal S100A12 in adults with signs of intestinal inflammation.

Methods

Faecal S100A12 was determined by ELISA in faecal specimens of 171 consecutive patients and 24 healthy controls. Patients either suffered from infectious gastroenteritis confirmed by stool analysis (65 bacterial, 23 viral) or underwent endoscopic and histological investigation (32 with Crohn's disease, 27 with ulcerative colitis, and 24 with irritable bowel syndrome; IBS). Intestinal S100A12 expression was analysed in biopsies obtained from all patients. Faecal calprotectin was used as an additional non-invasive surrogate marker.

Results

Faecal S100A12 was significantly higher in patients with active IBD (2.45 ± 1.15 mg/kg) compared with healthy controls (0.006 ± 0.03 mg/kg; p<0.001) or patients with IBS (0.05 ± 0.11 mg/kg; p<0.001). Faecal S100A12 distinguished active IBD from healthy controls with a sensitivity of 86% and a specificity of 100%. We also found excellent sensitivity of 86% and specificity of 96% for distinguishing IBD from IBS. Faecal S100A12 was also elevated in bacterial enteritis but not in viral gastroenteritis. Faecal S100A12 correlated better with intestinal inflammation than faecal calprotectin or other biomarkers.

Conclusions

Faecal S100A12 is a novel non-invasive marker that distinguishes IBD from IBS and from healthy individuals with high sensitivity and specificity. Furthermore, S100A12 reflects the inflammatory activity of chronic IBD. As a marker of neutrophil activation, faecal S100A12 may significantly improve our arsenal of non-invasive biomarkers of intestinal inflammation.

The etiology of inflammatory bowel disease (IBD), which comprises ulcerative colitis and Crohn's disease, involves complex interactions among susceptibility genes, the environment, and the immune system. These interactions lead to a cascade of events that involve the activation of neutrophils, production of proinflammatory mediators, and tissue damage.1 As intestinal symptoms are a frequent cause of referral to gastroenterologists, it is crucial to differentiate between non-inflammatory irritable bowel syndrome (IBS) and IBD. To date, there is a lack of biological markers for determining intestinal inflammation.2,3 Invasive procedures are therefore required to confirm the diagnosis of IBD. Furthermore, the natural history of chronic IBD is characterised by unpredictable variation in the degree of inflammation. Biological markers are needed to confirm remission, detect early relapses or local reactivation, and reliably monitor anti-inflammatory therapies. Whereas serum markers of inflammation are still not very helpful,3,4,5 assays that determine intestinal inflammation by detecting neutrophil-derived products in stool show great potential.6 An important mechanism in the initiation and perpetuation of inflammation in IBD is the activation of innate immune mechanisms.7,8 Among the factors released by infiltrating neutrophils are proteins of the S100 family.9,10 One example is calprotectin, which is detectable in serum and stool during intestinal inflammation.11 Calprotectin was initially described as a protein of 36 kDa, but was later characterised as a complex of two distinct S100 proteins, S100A8 and S100A9.12,13,14 In recent years, calprotectin has been proposed as a faecal marker of gut inflammation reflecting the degree of phagocyte activation.6,15,16,17,18,19,20 Unfortunately, variation in faecal calprotectin assays still impedes the routine use of this marker as a sole parameter in clinical practice. The observed variation may be caused by the broad expression pattern of calprotectin, which is found in granulocytes as well as monocytes and is also inducible in epithelial cells.21,22 In this context, its elevation in lactose intolerance is notable.16,17,23 S100A12 is more restricted to granulocytes. It is secreted by activated neutrophils and is abundant in the intestinal mucosa of patients with IBD.9,24 Overexpression at the site of inflammation and correlation with disease activity in a variety of inflammatory disorders underscore the role of this granulocytic protein as a proinflammatory molecule.25 The binding of S100A12 to the receptor for advanced glycation endproducts (RAGE) leads to long-term activation of nuclear factor kappa B, which promotes inflammation.26 In mouse models of colitis, blocking the interaction of S100A12 with RAGE has been shown to attenuate inflammation. Data from murine models of colitis as well as human IBD point to an important role for S100A12 in the pathogenesis of these disorders.9,26,27 In a previous study, we demonstrated that S100A12 is overexpressed during chronic active IBD and serves as a useful serum marker of disease activity in patients with IBD.9 De Jong et al.28 recently reported that S100A12 can be detected in the stool of children with Crohn's disease. The aim of our present study was thus to analyse S100A12 in stool samples, as well as its expression in the intestinal tissue of patients with confirmed IBD or IBS, and in the stool of a normal control group. We correlated faecal S100A12 levels with endoscopic and histological findings in the same patients and investigated the diagnostic accuracy of S100A12 for detecting intestinal inflammation.
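The sensitivity and specificity values quoted above come from dichotomising faecal S100A12 at a cut-off and cross-tabulating the result against the endoscopic and histological diagnosis. The sketch below illustrates that calculation with an assumed cut-off and invented measurements, since the study's actual threshold is not given in this excerpt.

```python
# Hypothetical sketch: sensitivity and specificity of faecal S100A12 for IBD vs IBS
# at an assumed cut-off. Measurements and threshold are invented, not study data.
import numpy as np

s100a12_mg_kg = np.array([2.8, 1.9, 0.4, 3.1, 0.02, 0.07, 0.6, 0.01])
has_ibd       = np.array([1,   1,   1,   1,   0,    0,    0,   0], dtype=bool)

cutoff = 0.5                        # assumed decision threshold in mg/kg
test_positive = s100a12_mg_kg > cutoff

tp = np.sum(test_positive & has_ibd)
fn = np.sum(~test_positive & has_ibd)
tn = np.sum(~test_positive & ~has_ibd)
fp = np.sum(test_positive & ~has_ibd)

print(f"sensitivity = {tp / (tp + fn):.0%}, specificity = {tn / (tn + fp):.0%}")
```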

19.
20.

Objectives

To evaluate inter‐observer agreement for microscopic measurement of inflammation in synovial tissue using manual quantitative, semiquantitative and computerised digital image analysis.

Methods

Paired serial sections of synovial tissue, obtained at arthroscopic biopsy of the knee from patients with rheumatoid arthritis (RA), were stained immunohistochemically for T lymphocyte (CD3) and macrophage (CD68) markers. Manual quantitative and semiquantitative scores for sub-lining layer CD3+ and CD68+ cell infiltration were derived independently in six international centres. Three centres also derived scores using computerised digital image analysis. Inter-observer agreement was evaluated using Spearman's rho and intraclass correlation coefficients (ICCs).

Results

Paired tissue sections from 12 patients were selected for evaluation. Satisfactory inter-observer agreement was demonstrated for all three methods of analysis. Using manual methods, ICCs for measurement of CD3+ and CD68+ cell infiltration were 0.73 and 0.73 for quantitative analysis, and 0.83 and 0.78 for semiquantitative analysis, respectively. Corresponding ICCs of 0.79 and 0.58 were observed for digital image analysis. All ICCs were significant at the p<0.0001 level. At each participating centre, computerised image analysis produced results that correlated strongly and significantly with those obtained by manual measurement.

Conclusion

Strong inter-observer agreement was demonstrated for microscopic measurement of synovial inflammation in RA using manual quantitative, semiquantitative and computerised digital methods of analysis. This further supports the development of these methods as outcome measures in RA.

Microscopic measurement of inflammation in synovial tissue is employed globally by centres working in the field of arthritis research.1 Adequate and comparable synovial tissue can be obtained safely using blind-needle biopsy or rheumatological arthroscopy.2,3,4 In the acquired samples, various parameters may be examined, including cell populations, vascularity, cytokines and adhesion molecules. In rheumatoid arthritis (RA), many of these have been found to relate to disease activity, severity and outcome, and to change after treatment with corticosteroids, disease-modifying antirheumatic drugs (DMARDs) and biological therapy.5,6,7,8,9,10,11,12,13,14,15 Several analysis techniques have been employed to measure these parameters. Semiquantitative analysis is relatively quick and may therefore facilitate the examination of large quantities of tissue.7 Quantitative analysis is time-consuming but more sensitive than semiquantitative scoring to change in individual patients.16 Previous studies have shown that these methods can reflect overall joint inflammation when applied to relatively limited amounts of synovial tissue, even though inflammation may differ widely between individual sites within a single joint.17,18,19 Computerised digital image analysis has been applied more recently in this area and has been shown to correlate well with conventional methods of measurement.20,21,22 This multi-centre study was undertaken to standardise and validate these methods by evaluating inter-observer agreement between multiple examiners in the measurement of selected parameters of inflammation in RA synovial tissue by manual quantitative, semiquantitative and computerised image analysis.
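The intraclass correlation coefficients reported above quantify absolute agreement between centres. One common choice is the two-way random-effects, single-measure ICC(2,1) of Shrout and Fleiss, sketched below for a subjects x raters score matrix; the example matrix is invented, and the exact ICC model used in the study is not specified in this excerpt.

```python
# Hypothetical sketch: two-way random-effects, single-measure ICC(2,1)
# computed from a (subjects x raters) score matrix. Example data are invented.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    n, k = scores.shape                       # n subjects, k raters
    grand = scores.mean()
    ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_total = np.sum((scores - grand) ** 2)
    ms_err = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# 12 biopsies scored semiquantitatively (0-4) by 3 centres (invented values)
scores = np.array([
    [0, 1, 0], [2, 2, 3], [4, 3, 4], [1, 1, 1],
    [3, 3, 2], [0, 0, 1], [2, 3, 2], [4, 4, 4],
    [1, 2, 1], [3, 4, 3], [2, 2, 2], [0, 1, 1],
])
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```

For the rank-based comparison mentioned in the methods, Spearman's rho between any two centres' scores can be obtained with scipy.stats.spearmanr on the corresponding columns.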
