Similar articles
20 similar articles found.
1.

Aim

To investigate the inhibitory effects of triptolide (TPL) combined with 5‐fluorouracil (5‐FU) on colon carcinoma HT‐29 cells in vitro and in vivo, and their side effects.

Methods

HT‐29 cells were cultured in RPMI 1640 medium. The single and combined effects of TPL and 5‐FU on HT‐29 cells were examined by MTT assay and flow cytometry. The combined effects were evaluated by the median‐effect principle. A tumour xenograft model was established in nude mice. TPL 0.25 mg/kg/day and 5‐FU 12 mg/kg/day, either in combination or alone, were injected into the mice, and the inhibitory effects and side effects were observed.

Results

TPL and 5‐FU, either combined or alone, significantly inhibited the proliferation of HT‐29 cells and induced marked apoptosis. The mean (SD) growth inhibition rate reached 94.92 (2.76)% and the apoptotic rate at 48 h reached 41.71 (1.38)%. The combined effects were synergistic (CI<1) at lower concentrations. TPL or 5‐FU alone significantly inhibited the growth of tumour xenografts, with inhibition rates of 78.53% and 84.16%; the drugs in combination had a greater effect, the tumour inhibition rate reaching 96.78%. During the course of chemotherapy, no obvious side effect was observed.

Conclusion

The combined effects of TPL and 5‐FU on the growth of colon carcinoma in vitro and in vivo were superior to the effects when the agents were used individually. TPL combined with 5‐FU had synergistic effects at lower concentrations and promoted apoptosis, but did not increase the side effects of chemotherapy.

Colon carcinoma is one of the most common malignant diseases worldwide.1,2 Generally, approximately half of all patients with colon cancer can be cured with surgical resection of the primary tumour, while the remainder will eventually succumb to predominant distant disease. Metastasis may already have occurred before the primary tumour can be detected. This characteristic of the disease has prevented any remarkable improvement in cure rates in spite of advances in surgical techniques. In order to improve the prognosis, chemotherapy is often used in a variety of clinical situations. The toxicity of these chemotherapeutic agents to normal tissues has been one of the major obstacles to successful cancer chemotherapy. Therefore, combined treatments with several chemotherapy regimens or even chemopreventive medicine are often used not only to enhance the treatment effect, but also to reduce the toxicity of these drugs.

Over the past 40 years, 5‐fluorouracil (5‐FU) has been the major chemotherapeutic agent for treating colorectal carcinoma; however, response rates have been around 20–35%, with median overall survival no more than 1 year.3,4,5 Finding new anti‐cancer drugs with a high therapeutic effect that can be used in combination with existing agents may therefore provide an important way forward in the treatment of colorectal carcinoma.

Triptolide (TPL) is a diterpenoid triepoxide derived from the herb Tripterygium wilfordii that has been used as a natural medicine in China for many years. TPL exerts both anti‐inflammatory and antifertility activities through its ability to inhibit the proliferation of both activated monocytes and spermatocytes.6,7,8,9 Several reports have indicated that TPL also inhibits the proliferation of cancer cells in vitro, reduces the growth of some tumours and sensitises them to chemotherapy.10,11,12,13,14 In addition, clinical trials in China showed that TPL could achieve a total remission rate of 71% in mononucleocytic leukaemia and 87% in granulocytic leukaemia, which was more effective than any other chemotherapeutic agent currently available.13,15

In this study, we examined the effects of TPL combined with 5‐FU on colon carcinoma in vitro and in vivo.
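Where the abstract above reports synergy as CI<1 under the median‐effect principle, the quantity in question is the Chou–Talalay combination index. A minimal Python sketch of that calculation; the dose–effect parameters (Dm, m) and doses below are hypothetical, not the study's fitted values:

```python
def median_effect_dose(fa, Dm, m):
    """Dose giving fraction affected `fa` under the median-effect equation
    fa/(1-fa) = (D/Dm)^m, where Dm is the median-effect dose (IC50)
    and m is the slope of the median-effect plot."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, Dm1, m1, Dm2, m2):
    """Chou-Talalay CI for a combination (d1, d2) producing effect `fa`:
    CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    Dx1 = median_effect_dose(fa, Dm1, m1)  # dose of drug 1 alone for effect fa
    Dx2 = median_effect_dose(fa, Dm2, m2)  # dose of drug 2 alone for effect fa
    return d1 / Dx1 + d2 / Dx2

# Hypothetical parameters, purely illustrative:
ci = combination_index(fa=0.5, d1=5.0, d2=1.0, Dm1=20.0, m1=1.2, Dm2=4.0, m2=0.9)
print(f"CI = {ci:.2f} -> {'synergistic' if ci < 1 else 'not synergistic'}")
```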

2.

Background

P‐POSSUM (Physiological and Operative Severity Score for the enumeration of Mortality and morbidity) predicts mortality and morbidity in general surgical patients providing an adjunct to surgical audit. O‐POSSUM was designed specifically to predict mortality and morbidity in patients undergoing oesophagogastric surgery.

Aim

To compare P‐POSSUM and O‐POSSUM in predicting surgical mortality in patients undergoing elective oesophagogastric cancer resections.

Methods

Elective oesophagogastric cancer resections in a district general hospital from 1990 to 2002 were scored by the P‐POSSUM and O‐POSSUM methods. Observed mortality rates were compared with predicted mortality rates in six risk groups for each model using the Hosmer–Lemeshow goodness‐of‐fit test. The power to discriminate between patients who died and those who survived was assessed using the area under the receiver operating characteristic (ROC) curve.

Results

313 patients underwent oesophagogastric resections. 32 died within 30 days (10.2%). P‐POSSUM predicted 36 deaths (χ2 = 15.19, df = 6, p = 0.019, Hosmer–Lemeshow goodness‐of‐fit test), giving a standardised mortality ratio (SMR) of 0.89. O‐POSSUM predicted 49 deaths (χ2 = 16.51, df = 6, p = 0.011), giving an SMR of 0.65. The area under the ROC curve was 0.68 (95% confidence interval 0.59 to 0.76) for P‐POSSUM and 0.61 (95% confidence interval 0.50 to 0.72) for O‐POSSUM.

Conclusion

Neither model accurately predicted the risk of postoperative death. P‐POSSUM provided a better fit to observed results than O‐POSSUM, which overpredicted total mortality. P‐POSSUM also had superior discriminatory power.

Oesophagogastric cancers continue to be a major cause of cancer mortality. Scotland currently has one of the highest incidences of oesophageal cancers in Europe and within the UK.1 Surgical resection continues to be the mainstay of treatment for oesophagogastric cancers. Postoperative mortality following oesophagogastric resections is significant and varies between 1.4% and 23%.2,3 In the UK, mortality for oesophagogastric cancer is higher than in the rest of Europe.4 In Scotland, patients with oesophagogastric cancers have a poor prognosis in comparison with other European countries.1 Postoperative mortality has continued to decline over the last decade, mainly owing to improved case selection, specialised provision of services and multidisciplinary involvement.

Preoperative risk assessment and informed consent play a vital role in the management of oesophagogastric cancers. It is essential for both the patient and the surgeon to have a preoperative assessment of the probability of success of a major surgical procedure. This should take into account the surgeon's performance, hospital performance, the physiological status of the patient, and multidisciplinary involvement including interventional radiologists. This will enable fully informed consent to be obtained from the patient. In addition, it will identify patients who are at high risk from the operative procedure. In this group, preventive measures can be instituted and postoperative complications may be predicted, enabling early recognition and institution of appropriate treatment, which may result in a better outcome.

Predicting postoperative mortality and risk assessment before surgery continue to be a challenge. Over the last decade the Physiological and Operative Severity Score for the enumeration of Mortality and morbidity (POSSUM),5 and its modifications such as P‐POSSUM,6,7 have been used in general surgery and allied specialities to predict postoperative mortality with varying degrees of success. POSSUM and P‐POSSUM both use a four‐grade, 12‐factor Physiological Score and a six‐factor Operative Severity Score to predict operative mortality. These scoring systems, when used appropriately, can be useful in providing an estimate of postoperative mortality for an individual patient.7

O‐POSSUM was derived to provide a dedicated scoring system to predict postoperative mortality specifically for oesophageal and gastric surgery.8 This system was based on the methods used by POSSUM and P‐POSSUM, the primary end point being in‐hospital mortality. In O‐POSSUM, the risk factors were selected on the basis of their clinical relevance. Operative blood loss and number of procedures, which describe the structure and process of care, were excluded from multivariate analysis.

The aim of our study was to compare the predictive accuracy of P‐POSSUM and O‐POSSUM in patients undergoing elective oesophagogastric resections for cancer.
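To make the standardised mortality ratios above concrete: P‐POSSUM converts each patient's Physiological and Operative Severity Scores into a predicted death risk via a logistic equation, the risks are summed to give expected deaths, and SMR = observed/expected. A minimal Python sketch using the P‐POSSUM equation as published by Prytherch et al; the cohort scores and observed deaths below are hypothetical, not the study's data:

```python
import math

def p_possum_risk(phys_score: int, op_score: int) -> float:
    """Predicted death risk from the published P-POSSUM logistic equation:
    ln(R/(1-R)) = -9.065 + 0.1692*PS + 0.1550*OS."""
    logit = -9.065 + 0.1692 * phys_score + 0.1550 * op_score
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical (PS, OS) pairs -- illustrative only.
cohort = [(20, 22), (14, 17), (31, 24), (18, 20)]
expected = sum(p_possum_risk(ps, os_) for ps, os_ in cohort)
observed = 1  # hypothetical observed deaths
print(f"expected deaths = {expected:.2f}, SMR = {observed / expected:.2f}")
```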

3.

Objective

To examine the effects of comorbidity and hospital care on mortality in patients with elevated cardiac troponin T.

Design

Observational study.

Setting

A large university hospital with on‐site diagnostic cardiac catheter laboratory.

Patients

All hospitalised patients with elevated cardiac troponin T level (⩾0.01 μg/l) over an 8‐week period.

Main outcome measures

6‐month all‐cause mortality.

Results

Among 313 patients with elevated cardiac troponin T, 195 had acute coronary syndrome and 118 had other conditions. Multivariate analysis showed that among patients with acute coronary syndrome, increasing comorbidity score (odds ratio (OR) 1.23 per point increase, 95% confidence interval (CI) 1.00 to 1.51; p = 0.048), age (OR 1.08 per year, 95% CI 1.04 to 1.13; p<0.001), raised troponin T level (OR 2.22 per 10‐fold increase, 95% CI 1.27 to 3.89; p = 0.005), and ST depression (OR 3.12, 95% CI 1.38 to 7.03; p = 0.006) were independent adverse predictors, while cardiologist care (OR 0.22, 95% CI 0.09 to 0.51; p<0.001) was associated with a better survival. Increasing troponin T level (OR 3.33 per 10‐fold increase, 95% CI 1.24 to 8.91; p = 0.017) was found to predict a worse prognosis among patients without acute coronary syndrome, and cardiologist care did not affect outcome in this group. Among hospital survivors with acute coronary syndrome, increasing comorbidity score, age and a lack of cardiologist care were independently associated with lesser use of effective medications.

Conclusions

Comorbidity was associated with a higher 6‐month mortality in patients having acute coronary syndrome, and lesser use of effective medicines among hospital survivors. Cardiologist care was associated with better 6‐month survival in patients with acute coronary syndrome, but not in those without acute coronary syndrome.

Prognostic indices including the original Charlson's comorbidity index1 have shown that comorbidity was important in determining the short and long term outcome in patients with various medical conditions, including those with acute myocardial infarction.2,3,4,5 Among patients admitted to hospital with suspected acute coronary syndrome, an abnormally raised cardiac troponin level can be found in patients with, and also without, acute coronary syndrome.6,7 An increasing cardiac troponin level was associated with increasing mortality in patients with acute coronary syndrome,8 and also in those without acute coronary syndrome.9 Despite the availability of international management guidelines, care provided for patients with acute coronary syndrome varied in hospitals with or without interventional facilities, and was affected by whether patients received cardiologist care.10 We examined the effects of comorbid diseases, including a validated comorbidity index,11 and hospital care on the 6‐month outcome among patients with elevated cardiac troponin T, caused by acute coronary syndrome and other conditions.
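The abstract's "OR per 10‐fold increase" for troponin T implies the marker was entered into the multivariate logistic model on a log10 scale, so that exponentiating its coefficient gives the odds ratio per 10‐fold rise. A minimal Python sketch on simulated data (all values illustrative; the statsmodels package is assumed available):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Simulated data, illustrative only: troponin spans orders of magnitude,
# so it enters the model as log10(troponin).
troponin = 10 ** rng.uniform(-2, 1, n)   # ug/l
age = rng.uniform(40, 90, n)
true_logit = -8 + 0.08 * age + 0.8 * np.log10(troponin)
died = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([age, np.log10(troponin)]))
fit = sm.Logit(died, X).fit(disp=0)
# exp(coefficient on log10 troponin) = odds ratio per 10-fold increase.
print(f"OR per 10-fold troponin increase: {np.exp(fit.params[2]):.2f}")
```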

4.
5.

Background

There is a shortage of reports on what potential recipients of implantable cardioverter–defibrillators (ICDs) need to be informed about and what role they can and want to play in the decision‐making process when it comes to whether or not to implant an ICD.

Aims

To explore how patients with heart failure and previous episodes of malignant arrhythmia experience and view their role in the decision to initiate ICD treatment.

Patients and methods

A qualitative content analysis of semistructured interviews was used. The study population consisted of 31 outpatients with moderate heart failure at the time of their first ICD implantation.

Setting

The study was performed at Sahlgrenska University Hospital, Göteborg, Sweden.

Results

None of the respondents had discussed the alternative option of receiving treatment with anti‐arrhythmic drugs, the estimated risk of a fatal arrhythmia, or the expected time of survival from heart failure in itself. Even so, very little criticism was directed at the lack of information or the lack of participation in the decision‐making process. The respondents felt that they had to rely on the doctors'' recommendation when it comes to such a complex and important decision. None of them regretted implantation of the ICD.

Conclusions

The respondents were confronted by a matter of fact: they needed an ICD and were given an offer they could not refuse, simply because life was precious to them. Being able to give well‐informed consent seemed to be a matter of lesser importance to them.

Treatment with automatic implantable cardioverter–defibrillators (ICDs) has been shown to be more effective than medical treatment in preventing sudden cardiac death among patients who have survived life‐threatening ventricular arrhythmias (ventricular tachycardia/fibrillation).1 As a result, international guidelines recommend the ICD as the treatment of choice for these patients.2,3,4 The risk of fatal arrhythmias recurring in the absence of a clear reversible cause ranges from 30% to 50% at the 2‐year follow‐up.4

Approximately one‐third of the patients who receive an ICD experience heart failure.1 Chronic heart failure is a common syndrome caused by reduced cardiac function that leads to the failure of the heart to pump blood. This in turn gives rise to disabling symptoms including breathlessness and fatigue. Depending on the degree of impaired exercise tolerance in daily life, patients are classified according to the New York Heart Association (NYHA) into one of four classes (I–IV). This is a serious condition with high mortality. Approximately half the patients who develop severe heart failure corresponding to NYHA class IV will die within a year. Even in the milder or moderate stages of heart failure, the 5‐year mortality is almost 50%. However, it is difficult to estimate the short‐term prognosis owing to varying clinical courses. About one‐third of patients will die suddenly and unexpectedly, one‐third will die suddenly in conjunction with a period of deterioration or a myocardial infarction, and one‐third will die following a progressive deterioration in heart failure symptoms.3,5

Patients with reduced ventricular function (ejection fraction <35–40%) and advanced heart failure (NYHA ⩾III) obtain the greatest benefit from ICD treatment in terms of survival.1 In one study, mortality was reduced by 29% in patients treated with ICDs over a period of 3 years compared with those who received the best medical treatment for the prevention of arrhythmias (anti‐arrhythmics).6 Alternatively, it could be said that though the device would save these patients from a sudden, dramatic, painless and somewhat premature death due to arrhythmia, thereby leading to a limited prolongation of life, the price paid for this effect could also be a more painful end, due to symptoms of progressive heart failure and the potential negative effects related to the treatment itself.

6.

Background

Biliary complications continue to be an important determinant of the recipient's survival rate after orthotopic liver transplantation (OLT). The objective of this study was to evaluate the incidence of early biliary complications in OLT in the presence or absence of a T‐tube.

Methods

This retrospective study, based on inpatient data, focused on the relationship between T‐tube placement and early biliary complications of 84 patients after OLT, from November 2002 to June 2005. Patients were divided into two groups based on whether or not a T‐tube was used following bile duct reconstruction: T‐tube group (group I, n = 33); non‐T‐tube group (group II, n = 51).

Results

45.2% of OLT recipients had a malignant neoplasm. There were no significant differences in the demographic characteristics or operative data between the two groups. Overall, early biliary tract complications developed in 19.0% (16/84) of patients. The rates of early biliary complications were 30.3% (10/33) and 11.8% (6/51) in groups I and II, respectively (p = 0.035). Biliary complications directly caused by T‐tube placement occurred in 12.1% (4/33) of patients in group I. Overall, the percentages of recipients with early biliary complications whose primary disease was a malignant neoplasm, chronic viral cirrhosis, fulminant liver failure or another condition were 6.2%, 37.5%, 43.8% and 12.5%, respectively.

Conclusion

This study suggests that the use of a T‐tube in Chinese patients undergoing OLT causes a higher incidence of early biliary complications. Most of the early biliary complications occurred in chronic viral cirrhosis and fulminant liver failure recipients.

Biliary reconstruction is a major cause of morbidity associated with orthotopic liver transplantation (OLT).1,2 Currently, there are two or three popular reconstruction methods available for surgeons, based on their experience. Most centres' reports showed that non‐T‐tube reconstruction may be associated with fewer complications and could be more cost effective.3,4,5,6,7,8,9 According to these reports, cholangitis, fistula, dislodgement, obstruction and peritonitis are complications directly related to the T‐tube, accounting for 60% of all postoperative biliary problems.9 However, to date, there have been no formal reports on this problem published in China. Furthermore, Chinese recipients are unique in that the main cause of end stage liver disease differs from that in other countries. The aim of this study was to compare early biliary tract complications in the presence or absence of a T‐tube and to assess our centre's experience.
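The group comparison above (10/33 vs 6/51, p = 0.035) is a standard test of two proportions on a 2×2 table; the abstract does not say which test or continuity correction was used, so the Python sketch below simply runs both a chi‐square test and a Fisher exact test on the reported counts (scipy assumed available):

```python
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table from the abstract: early biliary complications yes/no,
# T-tube group (10 of 33) vs non-T-tube group (6 of 51).
table = [[10, 33 - 10],
         [6, 51 - 6]]

chi2, p_chi2, dof, _ = chi2_contingency(table)  # Yates-corrected by default
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```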

7.

Objective

To evaluate feasibility of the guidelines of the Groupe Francophone de Réanimation et Urgence Pédiatriques (French‐speaking group of paediatric intensive and emergency care; GFRUP) for limitation of treatments in the paediatric intensive care unit (PICU).

Design

A 2‐year prospective survey.

Setting

A 12‐bed PICU at the Hôpital Jeanne de Flandre, Lille, France.

Patients

Patients were included when limitation of treatments was expected.

Results

Of 967 children admitted, 55 were included, with a median delay of 2 days. They were younger than the others (24 v 60 months), had a higher paediatric risk of mortality (PRISM) score (14 v 4), and a higher paediatric overall performance category (POPC) score at admission (2 v 1); all p<0.002. 34 children (50% of total deaths) died. A limitation decision was made without a meeting for 7 children who died: 6 received do‐not‐resuscitate orders (DNROs) and 1 a withholding decision. Decision‐making meetings were organised for 31 children, and the following decisions were made: 12 DNROs (6 deaths and 6 survivals), 4 withholding (1 death and 3 survivals), 14 withdrawing (14 deaths) and 1 continuing treatment (survival). After limitation, 21 children (31% of total deaths) died and 10 survived (POPC score 4). 13 procedures were interrupted because of death and 11 because of clinical improvement (POPC score 4). Parents' opinions were obtained after 4 family conferences (for a total of 110 min), 3 days after inclusion. The first meeting was planned for 6 days after inclusion and held on the 7th day after inclusion; 80% of parents were immediately informed of the decision, which was implemented after half a day.

Conclusions

The GFRUP procedure was applicable in most cases. The main difficulties were anticipating the correct date for the meeting and involving nurses in the procedure. Children for whom the procedure was interrupted because of clinical improvement, and who survived in poor condition without a formal decision, pointed to the need for medical criteria for questioning, which should systematically lead to a formal decision‐making process.

In developed countries, ⩾70% of children die in hospital, mainly in paediatric intensive care units (PICUs).1,2 Decisions on forgoing life‐sustaining treatment are made for 30–40% of dying children.3,4,5

Although formal guidelines in the English language for withholding or withdrawing treatment in critically ill children have been available since the 1990s, recommendations in French were lacking until recently.6,7,8 Because of this lack, and because several studies have shown that French‐speaking doctors in intensive care units did not follow US guidelines,9 the French‐speaking intensive care group organised a workshop including PICU nurses and doctors, parents of patients, palliative care specialists, philosophers and people who had conducted ethics research. This group worked from 1999 to 2000 and its conclusions were published in July 2002 as a book that was disseminated to all French PICUs.10 Recently, French paediatric guidelines were derived directly from this text and validated by the ethics commission of the French Paediatric Society; the proposed procedure is summarised in box 1.11 Contrary to English guidelines, which regard parents as the most appropriate bearers of decisional authority, French guidelines are more doctor centred, recommending that parents choose their level of participation, without shifting the weight of responsibility for the decision onto them.

The purpose of this study was to evaluate the feasibility of the procedure, to record the related medical and paramedical time, and to point out ethical problems that could be implied by the procedure itself.

8.

Objectives

Obesity is an increasing problem in the UK and bariatric surgery is likely to increase in volume in the future. While substantial weight loss is the primary outcome following bariatric surgery, the effect on obesity‐related morbidity, mortality and quality of life (QOL) is equally important. This study reports on weight loss, QOL and health outcomes following laparoscopic adjustable gastric banding (LAGB) in a low volume bariatric centre (<20 cases/year) and presents the first assessment of factors relating to QOL produced from a UK based surgical practice.

Study design

Questionnaire based study of patients who had LAGB. Each patient's initial body mass index (BMI), QOL and comorbidities were recorded. Change in these parameters was measured, including excess weight loss and output from both the Moorehead–Ardelt QOL questionnaire and the Bariatric Analysis and Reporting Outcome System (BAROS).

Results

Eighty‐one patients (14 males, 67 females) answered the questionnaire. More than 50% excess weight loss was recorded in 52/81 patients (64%). Sixty‐four patients (79%) reported improvement in their QOL including self‐esteem, physical activity, social involvement, and ability to work. Seventy‐one patients had initial obesity related comorbidity. In 61 of these patients (86%) their comorbidities resolved or improved. Minor port site related complications were recorded in nine patients while two patients had removal of the band because of infection.

Conclusion

LAGB is a safe method of bariatric surgery. It can achieve satisfactory weight loss with significant improvement in QOL and comorbidity.

Laparoscopic adjustable gastric banding (LAGB) is a minimally invasive bariatric procedure that has been widely used since its introduction in 1993.1 Its popularity is in part due to the relative safety of the technique compared with other bariatric procedures. It involves minimal dissection around the gastro‐oesophageal junction without the need for any surgical reconfiguration of normal anatomy.2,3 Furthermore, as a restrictive bariatric procedure it avoids the risks of malabsorption and can be adjusted according to the progress of the patient.4

Reduction in weight is the most commonly reported outcome measure following a bariatric procedure.5 However, the effects of weight loss on obesity‐related morbidity, mortality and quality of life (QOL) following the bariatric procedure are all important measures to report.6,7 Oria and Moorehead addressed these outcome measures when they introduced the Bariatric Analysis and Reporting Outcome System (BAROS) in 1998,6 which has been used and validated in subsequent trials.7,8 These measures are the primary goal of obesity treatment, and therefore the most relevant health outcomes for assessing treatment effects.

Previous studies reporting on these measures have usually come from specialised high volume bariatric centres.7,9 This may explain the limited number of reports from the UK, where bariatric surgery is usually integrated into surgical practice within relatively small volume centres. However, with the increasing demand for bariatric surgery, reports from such low volume centres are of crucial importance, as they may provide guidance on whether to support low volume bariatric surgical practice or to move towards larger scale, more specialised centres of bariatric excellence.

This study reports on the outcomes of morbidly obese patients who had LAGB in a low volume bariatric centre (<20 cases/year) in terms of QOL, change in comorbidities and weight loss. It also allows definition of the relationship between the quantity of weight lost and the degree of improvement in health outcomes.
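The ">50% excess weight loss" criterion above is the standard bariatric %EWL metric: weight lost expressed as a percentage of the excess above an ideal weight. The abstract does not state which ideal‐weight convention was used; the Python sketch below assumes the common choice of the weight corresponding to BMI 25, and the patient values are hypothetical:

```python
def percent_excess_weight_loss(preop_kg, current_kg, height_m, ideal_bmi=25.0):
    """%EWL as commonly reported in bariatric surgery: weight lost as a
    percentage of excess weight above an ideal weight (here taken as the
    weight at BMI 25; other conventions exist)."""
    ideal_kg = ideal_bmi * height_m ** 2
    excess_kg = preop_kg - ideal_kg
    return 100.0 * (preop_kg - current_kg) / excess_kg

# Hypothetical patient: 130 kg pre-op, 103 kg at follow-up, 1.65 m tall.
print(f"{percent_excess_weight_loss(130, 103, 1.65):.0f}% EWL")
```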

9.

Aim

To assess the glucose tolerance of South Asian and Caucasian women with previous gestational diabetes mellitus (GDM).

Method

A retrospective follow‐up study of 189 women diagnosed with GDM between 1995 and 2001. Glucose tolerance was reassessed by oral glucose tolerance test at a mean of 4.38 years after pregnancy.

Results

South Asian women comprised 65% of the GDM population. Diabetes developed in 36.9% of the population, affecting more South Asian (48.6%) than Caucasian women (25.0%). Women developing diabetes were older at follow‐up (mean (SD) 38.8 (5.7) vs 35.9 (5.6) years; p<0.05) and had been heavier (body mass index 31.4 (6.3) vs 27.7 (6.7) kg/m2; p<0.05), more hyperglycaemic (G0 6.5 (1.7) vs 5.2 (1.1) mmol/l; p<0.01; G120 11.4 (3.3) vs 9.6 (1.8) mmol/l; p<0.01; HbA1c 6.4 (1.0) vs 5.6 (0.7); p<0.01) and more likely to require insulin during pregnancy (88.1% vs 34.0%; p<0.01). Future diabetes was associated with and predicted by HbA1c taken at GDM diagnosis in both South Asian (odds ratio 4.09, 95% confidence interval 1.35 to 12.40; p<0.05) and Caucasian women (OR 9.15, 95% CI 1.91 to 43.87; p<0.01), as well as by the previously reported risk factors of increasing age at follow‐up, pregnancy weight, increasing hyperglycaemia and insulin requirement during pregnancy.

Conclusion

GDM represents a significant risk factor for the future development of diabetes mellitus regardless of ethnicity. Glycated haemoglobin values at GDM diagnosis have value in predicting future diabetes mellitus.

Gestational diabetes mellitus (GDM) is defined as abnormal carbohydrate tolerance that is diagnosed or first recognised in pregnancy1 and affects approximately 5% of pregnancies.2 However, the prevalence depends on the population studied and the diagnostic criteria used,3 with an increased frequency of GDM when less stringent diagnostic criteria are used and in ethnic groups who traditionally have a higher rate of type 2 diabetes.4,5,6 Differences in the prevalence of GDM reflect the background susceptibility of individual ethnic groups2,7 and possibly a different stage within the natural history of diabetes at the time of pregnancy.8

Previous GDM confers an increased risk of subsequent diabetes mellitus, such that 50% of women will have diabetes mellitus after 10 years.9,10 Several antenatal and maternal factors have been shown to predict this,11,12,13 and identification of these during the screening of women with GDM may lead to more effective targeting of strategies for primary prevention of diabetes in local populations.3,14 Glycated haemoglobin (HbA1c), while convenient to measure, has little sensitivity in making the diagnosis of GDM15 and has been little studied as a risk marker for predicting future diabetes.

A number of studies have suggested that diabetes following GDM develops more rapidly in non‐Caucasian groups.5,16,17 A recent meta‐analysis, however, suggested that differences between the ethnic groups studied could largely be explained by standardising diagnostic criteria, duration of follow‐up and patient retention.18 The Leicestershire population includes a significant minority of women from the Indian subcontinent, who have higher rates of glucose intolerance both in and out of pregnancy.19 This study examined the development of glucose intolerance and its pregnancy associations in this ethnically mixed population.

10.

Objective

To evaluate the classical and non‐classical cardiovascular risk factors that affect patency of native arteriovenous fistulas (AVFs) in end stage renal disease (ESRD) patients who are undergoing regular haemodialysis treatment and have had a percutaneous transluminal angioplasty (PTA) procedure.

Methods

All PTAs performed between 1 October 2002 and 30 September 2004 were identified from case notes and the computerised database and followed up to 31 March 2005. Patency of AVF after PTA included both primary and secondary patency. Risk factors analysed for their influence on patency survival following PTA were age, sex, serum cholesterol, serum triglyceride, diabetes, use of aspirin, current smoking and hypertension, serum albumin, serum calcium–phosphate product, intact parathyroid hormone (I‐PTH), and urea reduction ratio (URR).

Results

The patency rate of AVFs across all interventions was 65% at 6 months. Factors associated with poor patency of AVFs after PTA were higher serum calcium–phosphate product (p = 0.033), higher URR (p<0.001), lower serum albumin (p<0.001), absence of hypertension (p = 0.010) and the “non‐smoker + ex‐smoker group” (p = 0.033). Hypertensive patients and current smokers had lower patency failure after PTA (p<0.01 and p<0.05, respectively).

Conclusions

Unfavourable cumulative patency rates are observed in haemodialysis patients with higher URR, higher serum calcium–phosphate product and hypoalbuminaemia (lower serum albumin before the PTA procedure). Hypertension and current smoking were associated with better patency rates of AVF after PTA.

Construction and maintenance of a well‐functioning vascular access remains one of the most important tasks for haemodialysis patients. Complications related to vascular access are the main cause of hospitalisations, being responsible for up to 25% of hospitalisations among dialysis patients.1 Thrombosis (occlusion) and atherosclerosis (stenosis) are the leading causes of arteriovenous fistula (AVF) dysfunction among dialysis patients.2 Percutaneous transluminal angioplasty (PTA) is an accepted therapeutic procedure for the management of AVF dysfunction.1 Generally, the native AVF is considered the best access for chronic haemodialysis. Age, diabetes, increased serum lipoprotein Lp(a), increased serum fibronectin and synthetic grafts (polytetrafluoroethylene (PTFE)) have been associated with vascular access dysfunction and may influence the survival of AVFs in patients on haemodialysis.3,4

A variety of factors are involved in the pathogenesis of vascular diseases associated with chronic renal failure. Classical cardiovascular risk factors such as age, male gender, smoking, hypertension, dyslipidaemia and diabetes exist in the general population and in patients with chronic renal failure. Additional non‐classical risk factors such as oxidative stress, dysparathyroidism, hyperhomocysteinaemia, dialytic inadequacy, malnutrition and disruption of calcium–phosphate homeostasis play more important roles in cardiovascular disease in chronic renal failure patients.5,6 In fact, classical cardiovascular risk factors alone were reported to be inadequate predictors of cardiovascular disease in haemodialysis patients in a recent report.5 To our knowledge, the comparison of classical and non‐classical cardiovascular risk factors influencing the patency of native AVFs in ESRD patients has rarely been investigated. The first aim of our study was to identify possible cardiovascular risk factors influencing the patency rate of native AVFs after PTA among haemodialysis patients. The second aim was to determine whether non‐classical cardiovascular risk factors play a more important role in influencing the patency rate of AVFs after PTA.
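Cumulative patency figures such as the 65% at 6 months above are typically read off a Kaplan–Meier curve, with fistulas still patent at last follow‐up treated as censored. A minimal Python sketch on hypothetical follow‐up data (the lifelines package is assumed available; the numbers are illustrative, not the study's):

```python
from lifelines import KaplanMeierFitter

# Hypothetical follow-up (months) for AVFs after PTA; event=1 marks patency
# failure, event=0 a fistula still patent at last follow-up (censored).
months = [1, 3, 4, 6, 6, 7, 9, 12, 12, 15, 18, 20]
failed = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=failed)
# Estimated cumulative patency at 6 months, analogous to the 65% figure.
print(f"patency at 6 months: {kmf.predict(6):.2f}")
```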

11.

Objectives

Appropriate assessment of community‐acquired pneumonia (CAP) allows accurate severity scoring and hence optimal management, leading to reduced morbidity and mortality. British Thoracic Society (BTS) guidelines provide an appropriate score. Adherence to BTS guidelines was assessed in our medical assessment unit (MAU) in 2001/2 and again in 2005/6, 3 years after introducing an educational programme.

Methods

A retrospective case‐note study, comparing diagnosis, documentation of severity, management and outcome of CAP during admission to MAU during 3 months of each winter in 2001/2 and 2005/6.

Results

In 2001/2, 65/165 patients were wrongly coded as CAP and 100 were included in the study. In 2005/6, 43/130 were excluded and 87 enrolled. In 2005/6, 87% did not receive a severity score, a significant increase from 48% in 2001/2 (p<0.0001). Parenteral antibiotics were given to 79% of patients in 2001/2 and 77% in 2005/6, and third generation cephalosporins were given to 63% in 2001/2 and 54% in 2005/6 (p = NS). In 2001/2, 15 different antibiotic regimens were prescribed, increasing to 19 in 2005/6.

Conclusions

Coding remains poor. Adherence to CAP management guidelines was poor and has significantly worsened. Educational programmes alone do not improve adherence. Restriction of antibiotic prescribing should be considered.

Community‐acquired pneumonia (CAP) accounted for almost 100 000 (2%) of acute hospital admissions in the UK in the financial year 2004/5,1,2 and is associated with significant morbidity, mortality and expenditure.3 The appropriate assessment of patients with CAP allows accurate classification of the severity of disease and optimal management.4 Early identification of severity significantly improves prognosis. Furthermore, CAP patients can avoid unnecessary admission and inappropriate antibiotic prescribing.5

National guidelines for the assessment of severity of CAP and its management have been produced in many countries. In the UK, the British Thoracic Society (BTS) first described guidelines for the management of CAP in 19936 and updated these in 20013 and again in 2004,7 with particular reference to severity scoring. Lim et al validated a prognostic score for mortality in CAP patients in 2003.8 This built on the previous “CURB” score of Confusion, raised Urea, increased Respiratory rate and hypotension (BP), to which age over 65 was added to produce the CURB‐65 score. Scoring 0 or 1 for each of the 5 points produces a prognostic index of outcome, with a score of 0 suggesting a 30 day mortality risk of 0.7% and a score of 5 predicting a 57% mortality risk. Severe CAP was classified as a score of ⩾3. This score is simpler to use and more clinically useful than the more complex scoring system proposed by the Infectious Diseases Society of America.9,10,11 Implementation of, and maintained adherence to, these guidelines is necessary to realise the benefits in morbidity, mortality and cost reduction.12,13

Historically, adherence to guidelines has been poor, resulting in inappropriate management which may affect both morbidity and mortality.14,15 Misuse or overuse of antibiotics can result in antibiotic‐associated diarrhoea or colonisation with antibiotic resistant organisms, increased hospital stay and increased costs.16,17,18

In 2001/2, a retrospective study showed that adherence to the guidelines was poor in the acute medical assessment unit (MAU) of the Royal Liverpool University Hospital.19 We present the findings from 2005/6 alongside these earlier results, following the introduction of an educational programme.
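The CURB‐65 calculation described above is simple enough to express directly. A minimal Python sketch; the abstract names the five components but not their cut‐offs, so the thresholds below follow the commonly published CURB‐65 criteria (urea >7 mmol/l, respiratory rate ⩾30/min, systolic BP <90 or diastolic ⩽60 mm Hg, age ⩾65):

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    """CURB-65 score: one point each for Confusion, Urea > 7 mmol/l,
    Respiratory rate >= 30/min, low Blood pressure (SBP < 90 or DBP <= 60
    mm Hg) and age >= 65. A score of 3 or more marks severe CAP."""
    return (int(confusion)
            + int(urea_mmol_l > 7)
            + int(resp_rate >= 30)
            + int(sbp < 90 or dbp <= 60)
            + int(age >= 65))

score = curb65(confusion=False, urea_mmol_l=9.2, resp_rate=32,
               sbp=100, dbp=70, age=78)
print(score, "-> severe" if score >= 3 else "-> non-severe")
```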

12.

Background

More and more quantitative information is becoming available about the risks of complications arising from medical treatment. In everyday practice, this raises the question whether each and every risk, however low, should be disclosed to patients. What could be good reasons for doing or not doing so? This will increasingly become a dilemma for practitioners.

Objective

To report doctors' views on whether to disclose or withhold information on low risks of complications.

Methods

In a qualitative study design, 37 respondents (gastroenterologists and gynaecologists or obstetricians) were included. Focus group interviews were held with 22 respondents and individual in‐depth interviews with 15.

Results

Doctors have doubts about disclosing or withholding information on complication risk, especially in a risk range of 1 in 200 to 1 in 10 000. Their considerations on whether to disclose or to withhold information depend on a complicated mix of patient‐ and doctor‐associated reasons; on medical and personal considerations; and on the kind and purpose of intervention.

Discussion

Even though the degree of risk is important in a doctor's considerations, the severity of the possible complications and patients' wishes and competencies play an important role as well. Respondents said that low risks should always be communicated when there are alternatives to the intervention or when the patient may prevent or mitigate the risk. When the appropriateness of disclosing risks is doubtful, doctors should always tell their patients that no intervention is without risk, give them the opportunity to gather all the information they need or want, and enable them to detect a complication at an early stage.

The concept of risk has become an important guiding concept in medicine. The “risk epidemic”, as some call it,1 confronts doctors with new questions about what risks they should discuss with their patients.

There are large differences in legal standards for what should be disclosed to patients. For instance, UK and German law take as a standard “what a reasonable doctor would disclose”. Both the USA and The Netherlands (Medical Treatment Agreement Act (Wet op de geneeskundige behandelingsovereenkomst)) describe the doctor's duty to inform in terms of what Beauchamp and Childress2 call a “reasonable patient” standard: what a reasonable patient would need or want to know to be able to give informed consent.

When the complication risk is high and the consequences may be severe, it is obvious that doctors have to inform their patients. But in cases of low or negligible risk, doctors have doubts about disclosing information because it is not clear what a reasonable patient would need or want to know. There may also be a danger of information overkill, threatening instead of strengthening patient autonomy. The ethical question here is: what should doctors do when it is unclear whether a reasonable patient would want to have particular risk information?

A large amount of literature is available on how to disclose both low and high risks; for instance, the BMJ issue of September 2003 contains a highly informative special section on this problem.3 Risks, when disclosed, may be improperly and incorrectly perceived by both patients and doctors,4,5,6,7 and patients may have only a poor memory of what is disclosed by the doctor.4,7,8,9 Nevertheless, patients generally seem to appreciate communication on the risks involved.10

We will not deal with the issue of how to communicate. Instead, we take up the problem raised by the philosopher Onora O'Neill,11 who argued that the preoccupation with informed consent has led us to disregard forms of shaping autonomy that rely less heavily on giving exhaustive information, and that the question is not only how we should inform about risk but also to what extent.

We explored the views, motives and practices of doctors on the question of what complication risks doctors should inform their patients about.

13.

Background

Shortage of donor organs is one of the major problems for liver transplant programmes. Living liver donation is a possible alternative, which could increase the amount of donor organs available in the short term.

Objective

To assess attitudes towards living organ donation in the general population and obtain an overview of the overall attitude within Germany.

Methods

A representative quota sample of the general population (n = 250) was surveyed by mail questionnaire. The questionnaire had 24 questions assessing the willingness to be a living liver donor for different potential recipients. Factors for and against living liver donation were assessed.

Results

Donating part of the liver was almost as accepted as donating a kidney. The readiness to donate was highest when participants were asked to donate for children. In an urgent life‐threatening situation the will to donate was especially high, whereas it was lower in the case of recipient substance misuse. Women expressed a higher disposition than men to donate for their children. Sex, religion, state of health and age of the donor, however, did not influence other questions on the readiness to consider living organ donation. The will for postmortem organ donation correlated positively with the will to be a living organ donor.

Conclusions

The motivation in different demographic subgroups to participate in living liver transplantation is described. Differences in donation readiness arising from the situation of each donor and recipient are thoroughly outlined. Acceptance of living liver donation was found to be high, and comparable to that of living kidney donation.

The shortage of donor organs is one of the key problems in solid organ transplantation. Many patients with clear indications for transplantation have to wait for several months (lung, heart or liver) or even years (kidney) in a declining state of health and with a decreasing quality of life.1 In some cases, patients requiring transplants die while on the waiting lists. To overcome the gap between organs needed for transplantation and those available, various strategies have been considered.

The first studies on xenotransplantation were started in the 1960s using non‐human primates, pigs and other animals as potential donors. Although some of the immunological and infectious obstacles have been overcome during the past two decades, xenotransplantation is still far from being introduced into clinical practice.2 Replacing organ function by artificial devices is a standard procedure in cases of progressive kidney failure. Although long‐term dialysis can keep patients alive in an acceptable state of health, kidney transplantation is considered the better alternative in most cases, improving patients' standard of living and decreasing the overall sociomedical costs.3,4 Distinct methods of replacing other organs with substitution devices, with or without the use of living cells (eg, intracorporeal heart pumps or bioartificial liver reactors), may be of additional use in the future. The first preclinical trials utilising these techniques have been initiated; the broad application of such methods, however, cannot be predicted. In addition, promising approaches using stem‐cell‐based treatments have been described recently. Some of the new stem cell techniques may have the potential to solve the problem of organ shortage in the future; today, their clinical application is still far away.

In contrast, living organ donation can be applied immediately to compensate for the lack of donor organs without major technical problems. The first experiences with transplantation of parts of the liver were made in the early 1980s (mostly with children as recipients).5 In the late 1980s and 1990s, split liver transplantation was developed, offering the possibility of treating two recipients with only one cadaveric organ graft.6 Later, reduced size liver transplantation and split techniques formed the basis on which living related liver transplantation was introduced. The first reports of living related liver transplantation were published in the late 1980s and 1990s.7,8 Living related liver transplantation has been widely used for children and adults throughout the past decade.9 Today, recipient outcome in the hands of experienced centres is at least as good as that for cadaveric donation.10,11 A certain risk for the organ donor, however, remains. Living liver donation has a donor mortality of approximately 0.2–0.6% for right liver lobe donation and less than 0.1% for left lateral lobe donation (estimated from reported donor deaths), and is associated with some typical complications, mostly affecting the biliary system.12,13 This leads to considerable ethical problems for all who are associated with the process, including the donor, the recipient and the transplant team.14,15,16

This study focused on the overall motivation to become a living liver donor among the general population in Germany. Two hundred and fifty citizens, who were not directly involved in a situation of organ transplantation, were asked about their attitude towards living organ donation. Here, we detail the social circumstances and demographic factors that alter donation readiness, thereby providing important data that will allow improved communication between potential donors and their transplant centres.

14.

Background

This study set out to develop a composite of outcome measures using forced expiratory volume in 1 s as a percentage of predicted, exercise capacity and quality of life scores for the assessment of chronic obstructive pulmonary disease (COPD) severity.

Materials and methods

Eighty‐six patients with COPD were enrolled into a prospective, observational study at the respiratory outpatient clinic, National University Hospital Malaysia (Hospital Universiti Kebangsaan Malaysia ‐ HUKM), Kuala Lumpur.

Results

Our study found modest correlations between the forced expiratory volume in 1 s (FEV1), 6 min walk distance and SGRQ scores, with mean (SD) values of 0.97 (0.56) litres, 322 (87) m and 43.7 (23.6)%, respectively. K‐means cluster analysis identified four distinct, statistically significant clusters, which were refined to develop a new cumulative staging system. The SAFE Index score correlated with the number of exacerbations in 2 years (r = 0.497, p<0.001).

Conclusion

We have developed the SGRQ, Air‐Flow limitation and Exercise tolerance Index (SAFE Index) for the stratification of severity in COPD. This index incorporates the SGRQ score, the FEV1 % predicted and the 6 min walk distance. The SAFE Index is moderately correlated with the number of disease exacerbations.

The diagnosis of chronic obstructive pulmonary disease (COPD) is confirmed by spirometry when the ratio of forced expiratory volume in 1 s to forced vital capacity (FEV1/FVC) is less than 70%. Both the American Thoracic Society and the European Respiratory Society recommend a simple staging system to assess COPD severity based on post‐bronchodilator FEV1 as a percentage of the predicted value (FEV1%Pred).

The FEV1 cut‐off points used to define different stages of COPD are arbitrary and have not been clinically validated. Although FEV1 does not accurately measure small airway obstruction, it is the most objective and reproducible measurement for physiologically assessing the degree of airflow limitation. A recent study suggested that prognosis for all‐cause mortality was strongly associated with age, smoking, and the best attainable FEV1%Pred in COPD.1

On the contrary, other studies have shown good correlation between disease severity and quality of life (QOL) scores independent of the underlying physiological markers measured by spirometry.2,3,4 Patients with poor QOL scores based on the St George's Respiratory Questionnaire (SGRQ) are at greater risk of hospital readmission, whereas FEV1%Pred or FVC is not related to readmission.3 QOL scores and spirometric values also measure different dimensions of disease severity.5 Wijnhoven et al found that a reduced score on the Health‐Related Quality of Life Questionnaire (HRQOL) was strongly associated with greater respiratory complaints, whereas no association between pulmonary function level and symptoms was found.1

Another aspect of disease severity in COPD is exercise tolerance. Studies in pulmonary rehabilitation have shown that assessment of exercise tolerance correlates well with disease severity.6,7,8 Walking distance also corresponds well with QOL scores, independent of the severity as assessed by spirometry.8 In another study, by Wegner et al, exercise capacity, dyspnoea scores and airway obstruction independently characterised the pathophysiological condition of patients with severe COPD.9

Would it be possible, then, to determine the severity of COPD using other independent parameters in addition to the degree of airway obstruction as measured by FEV1%Pred? In a landmark study, Celli et al introduced and validated a multifactorial grading system that incorporated the body mass index, degree of airflow obstruction, functional dyspnoea and exercise capacity of patients with COPD. The cumulative scores of the BODE index correlated well with mortality.10 However, no QOL questionnaires were used in that study. The incorporation of a QOL assessment would have provided a more holistic stratification of severity in patients with COPD.

Currently the severity of COPD is determined arbitrarily by a spirometric measure of lung function, FEV1. Although the decline of FEV1 is a good marker of disease progression, it does not accurately assess the global manifestations of COPD. We hypothesised that inclusion of other outcome measures, such as exercise capacity and health‐related QOL scores, in addition to the spirometric measurement FEV1%Pred, would provide a better overall assessment of COPD severity.

In this study we developed a composite of outcome measures using post‐bronchodilator FEV1%Pred, exercise capacity and QOL scores to assess the severity of COPD. In addition, we validated the new composite score against the patients' exacerbation frequency.
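The K‐means step above groups patients by their three outcome measures at once; because the measures are on very different scales, they would normally be standardised before clustering. A minimal Python sketch on simulated stand‐in data (scikit‐learn assumed available; the distributions below only mimic the abstract's reported means and SDs, they are not the study's data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 86
# Simulated stand-ins for the three measures (illustrative only):
# FEV1 % predicted, 6-minute walk distance (m), SGRQ total score (%).
X = np.column_stack([
    rng.normal(45, 15, n),     # FEV1 %pred (hypothetical distribution)
    rng.normal(322, 87, n),    # 6MWD, mimicking the reported mean (SD)
    rng.normal(43.7, 23.6, n)  # SGRQ, mimicking the reported mean (SD)
])

# Standardise so no single measure dominates the Euclidean distance.
Xz = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xz)
print(np.bincount(labels))  # sizes of the four severity clusters
```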

15.
Wang Z, Xia B, Ma C, Hu Z, Chen X, Cao P. Postgraduate Medical Journal 2007;83(977):192–195.

Background

Fatty liver disease (FLD) is highly prevalent in Western countries, but recent data have shown that FLD is also emerging in China.

Aim

To investigate the prevalence and risk factors of FLD in the Shuiguohu district of Wuhan city, central China, during 1995–2004.

Methods

12247 individuals (7179 men and 5068 women) over 18 years of age who were living in the area were investigated for FLD in the Zhongnan Hospital of Wuhan University from 1995 to 2004. FLD was determined by the ultrasonographic method. Height, weight, blood pressure, fasting blood sugar, alanine aminotransferase, total cholesterol and triglyceride were determined by routine laboratory methods.

Results

The prevalence of FLD was 12.5% in 1995, rising gradually to 24.5% by 2003–4. The prevalence was twice as high in men (28.1%) as in women (13.8%), and increased with age in women, and in men <60 years of age. Multivariate analysis showed that several risk factors were strongly associated with the prevalence of FLD, including male sex, older age, obesity, hyperlipidaemia (cholesterol or triglyceride), fasting hyperglycaemia and hypertension.

Conclusion

The prevalence of FLD in the Shuiguohu district of Wuhan city, central China, was shown to have increased during the 10‐year period from 1995 to 2004. FLD was found to be closely associated with sex, age, obesity and other features of the metabolic syndrome.

Fatty liver disease (FLD) is an increasingly recognised disease worldwide. FLD can be either alcoholic or non‐alcoholic, and both conditions may progress to end stage liver disease. The mean prevalence of FLD in Western countries, as measured by ultrasonography, ranges from 20% to 60%.1 Many potential risk factors for non‐alcoholic fatty liver disease (NAFLD), including obesity, insulin resistance, hyperlipidaemia and diabetes, have been identified previously.2,3,4 Two reports, from Shanghai5 and Shenzhen,6 showed that the prevalence of FLD was 20.8% (17.9% after adjustment for age and sex) in East China and 20.7% in South China, respectively, lower than that in Western countries.7 However, the incidence and prevalence of FLD in other areas of China are unclear. The incidence of FLD is likely to rise steadily in the Chinese population owing to the increase in the elderly population, changes in lifestyle, alcohol and excessive food intake, westernisation of the diet, a general lack of exercise and the prevalence of viral hepatitis. To date, there has been no report of a change in FLD prevalence over the past 10 years in China. The current study aimed to investigate the prevalence and risk factors of FLD during 1995–2004 in the Shuiguohu district of Wuhan city, central China, and to gain a better understanding of the changes in FLD prevalence and the aetiology of FLD.

16.

Background

Children with allergic diseases such as asthma and atopic dermatitis experience increased gastrointestinal symptoms. Further, physiological and histological abnormalities of the gastrointestinal tract in patients with allergic diseases have been reported. It is not certain whether adult patients experience increased gastrointestinal symptoms.

Methods

A retrospective, case–control study of 7235 adult (⩾20 years old) primary care patients was conducted. A general practitioner diagnosis of irritable bowel syndrome was used to serve as a marker of lower gastrointestinal symptoms. The prevalence of lower gastrointestinal symptoms was calculated in patients with asthma or allergic rhinitis and compared with that in patients with other chronic diseases (insulin‐dependent diabetes mellitus, osteoarthritis and rheumatoid arthritis) and with the remaining population.

Results

Gastrointestinal symptoms were significantly more common in patients with asthma (9.9%) as compared with patients with chronic diseases (4.9%; odds ratio (OR) 2.13, 95% confidence interval (CI) 1.39 to 2.56; p<0.002) or the remaining non‐asthmatic population (5.5%; OR 1.89, 95% CI 1.39 to 2.56; p<0.001). Gastrointestinal symptoms were also significantly more common in patients with allergic rhinitis (7.9%) as compared with patients with chronic diseases (4.9%; OR 1.66, 95% CI 1.02 to 2.7; p<0.05) and the remaining population (5.5%; OR 1.47, 95% CI 1.04 to 2.1; p<0.02). This phenomenon was independent of age, sex and inhaled asthma therapy in the case of patients with asthma.

Conclusions

Our findings support the hypothesis that lower gastrointestinal symptoms are more common in patients with allergic diseases such as asthma and allergic rhinitis.

Lower gastrointestinal symptoms such as diarrhoea and abdominal pain are common in children with allergic diseases such as asthma1 and atopic dermatitis.2 Although food allergies and rare organic gastrointestinal diseases such as eosinophilic gastroenteropathy are associated with atopic disease, it is unlikely that they alone would account for such symptoms. Histological abnormalities of the gastrointestinal tract in patients with allergic airway diseases have also been reported. Small bowel biopsy specimens from patients with asthma and allergic rhinitis show features in common with the inflammatory reaction observed in the airways, with accumulation of eosinophils, T cells, mast cells and macrophages, and increased expression of proallergic cytokines such as interleukin (IL)4 and IL5.3,4 Accumulation of eosinophils in the oesophageal mucosa has also been reported in patients with allergic rhinitis and asthma compared with controls.5 It is yet to be established whether these microscopic inflammatory changes influence gastrointestinal function. Interestingly, absorption studies with chromium 51‐labelled EDTA suggest a permeability defect of the gastrointestinal tract in patients with asthma.6

We hypothesised that lower gastrointestinal symptoms would be more prevalent in patients with asthma and allergic rhinitis. We conducted a retrospective, case–control study of community‐based patients by evaluating computerised records from a large primary healthcare centre in the UK. This strategy allowed us to identify large numbers of patients with asthma and allergic rhinitis through specific diagnostic codes (James Read codes). We used a general practitioner diagnosis of irritable bowel syndrome (IBS) as a marker for lower gastrointestinal symptoms. IBS has no specific features, and, typically, diagnosis depends on the presence of lower gastrointestinal tract symptoms in the absence of organic bowel disorders. Although criteria‐based definitions of IBS have been developed,7,8 it was not deemed necessary to apply them strictly in this study, as we were simply using the diagnostic label to indicate the presence of lower gastrointestinal symptoms without an obvious organic cause.
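The odds ratios above compare symptom prevalence between groups; for an unadjusted comparison, the OR and its 95% CI can be computed directly from a 2×2 table with the Woolf (log) method. A minimal Python sketch with hypothetical counts shaped like the asthma comparison (roughly 9.9% vs 5.5%); the study's own estimates may have been model‐adjusted:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/log method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, illustrative only: GI symptoms present/absent
# in an asthma group vs the remaining population.
or_, lo, hi = odds_ratio_ci(a=99, b=901, c=55, d=945)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```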

17.

Methods

The quality of clinical studies published in five different specialties over three decades was evaluated. A computerised search of the Medline database was undertaken to evaluate the articles published in 25 clinical journals in 1983, 1993 and 2003 from five specialties (medicine, surgery, paediatrics, anaesthesia and psychiatry). The numbers of randomised controlled trials (RCTs), meta‐analyses and other clinical trials (non‐RCTs) were noted.

Results

From the 27 030 articles evaluated, there were 2283 (8.4%) RCTs, 166 (0.6%) meta‐analyses and 4153 (15.4%) other clinical trials. For the proportion of RCTs, the rank order of the specialties was: anaesthesia (503; 18%), psychiatry (294; 9.6%), medicine (899; 8.1%), paediatrics (326; 6.4%) and surgery (261; 5.3%) (p<0.001). For the proportion of meta‐analyses, the rank order was: psychiatry (36; 1.2%), medicine (105; 0.9%), paediatrics (15; 0.3%), anaesthesia (6; 0.2%) and surgery (4; 0.1%) (p<0.001). Overall, from 1983 to 2003, there were increases in the proportions of RCTs (449, 5.9% to 1027, 9.6%), meta‐analyses (0, 0% to 127, 1.2%) and other clinical trials (897, 12% to 1983, 19%) (p<0.001). This trend was apparent in each clinical specialty (p<0.001).
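The p values quoted for the specialty comparison are consistent with a chi‐square test on a specialty × (RCT vs non‐RCT) table. A minimal sketch follows; the RCT counts come from the abstract, but the specialty totals are back‐calculated from the reported percentages and are therefore approximate.

```python
from scipy.stats import chi2_contingency

# RCT counts per specialty (from the abstract); totals back-calculated from
# the reported percentages (approximate). Order: anaesthesia, psychiatry,
# medicine, paediatrics, surgery.
rct     = [503, 294, 899, 326, 261]
totals  = [2794, 3063, 11099, 5094, 4925]   # ~ count / reported proportion
non_rct = [t - r for t, r in zip(totals, rct)]

chi2, p, dof, _ = chi2_contingency([rct, non_rct])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")  # p << 0.001
```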

Conclusions

Over the three decades evaluated, clinical trials, notably RCTs and meta‐analyses, formed only a small proportion of the articles published in prominent journals from five clinical specialties, notwithstanding the modest increases in the proportions of RCTs and meta‐analyses over the same period.

18.

Aim

To investigate the non‐operative primary care management (splintage, task modification advice, steroid injections and oral medications) of carpal tunnel syndrome before patients were referred to a hand surgeon for decompression.

Design and setting

Preoperative data were obtained on age, gender, body mass index, employment, symptom duration, and preoperative clinical stage for patients undergoing carpal tunnel decompression (263 in the USA, 227 in the UK).

Results

Primary care physicians made relatively poor use of beneficial treatment options, with the exception of splintage in the USA (used in 73% of US cases compared with 22.8% of UK cases). Steroid injections were used in only 22.6% (USA) and 9.8% (UK) of cases. Task modification advice was almost never given. Oral medication was employed in 18.8% of US cases and 8.9% of UK cases.
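As a hedged check on the splintage difference, the sketch below runs a two‐proportion z‐test on the US vs UK figures; the counts are reconstructed approximately from the reported percentages (73% of 263 and 22.8% of 227), not taken from the paper's raw data.

```python
import math

# Splint use before referral; counts back-calculated from the percentages.
x1, n1 = 192, 263  # ~73% (USA)
x2, n2 = 52, 227   # ~22.9% (UK)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided, normal approx.
print(f"z = {z:.1f}, p = {p_value:.2g}")
```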

Conclusions

This study analyses the non‐operative modalities available and suggests that there is scope for more effective use of non‐operative treatment before referral for carpal tunnel decompression.

Carpal tunnel syndrome (CTS) usually develops slowly, often with a fluctuating level of symptoms over several months or years and only gradual deterioration. In such circumstances appropriate conservative treatment can be extremely effective in controlling symptoms for several years, delaying the need for operative intervention.

Decompression of the carpal tunnel is generally considered an effective intervention,1,2 but some patients are left with persistent problems such as scar sensitivity, in part arising from the intervention itself. A decision to proceed with an operative intervention is an important issue for patients and their families, even where the procedure is provided free. Indirect expenses for carpal tunnel decompression for a UK patient average £800 sterling (€1100, $1600) (range £65–£3970 (€95–€5800, $130–$7800)).3 A variety of conservative treatment options are available4 which can delay the need for operative intervention with its inherent risks.

Effective primary care modalities of treatment for CTS include task modification, the use of splints and steroid injections proximal to the carpal tunnel. Oral medication is not considered to be of likely benefit. This study investigates the primary care management of diagnosed carpal tunnel cases considered to merit surgical decompression and referred by general practitioners to consultant hand surgeons in two communities (USA and UK).

19.

Background

Cocaine is a sympathomimetic agent that can cause coronary artery vasospasm leading to myocardial ischaemia, acute coronary syndrome and acute myocardial infarction (ACS/AMI). The management of cocaine‐induced ACS/AMI differs from that of classical atheromatous ACS/AMI because the underlying mechanisms differ.

Methods

Knowledge study—Junior medical staff were given a scenario of a patient with ACS and asked to identify potential risk factors for ACS and which ones they routinely asked about in clinical practice. Retrospective study—Retrospective reviews of the notes of patients with suspected and proven (elevated troponin T concentration) ACS were undertaken to determine whether cocaine use/non‐use was recorded in the clinical notes.

Results

Knowledge study—There was no significant difference in the knowledge that cocaine was a risk factor compared with other “classical” cardiovascular risk factors, but junior doctors were less likely to ask routinely about cocaine use than about other “classical” risk factors (52.9% vs >90%, respectively). Retrospective study—Cocaine use or non‐use was documented in 3.7% (4/109) and 4% (2/50) of the clinical notes of patients with suspected and proven ACS, respectively.
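With counts this small (4/109 and 2/50), Fisher's exact test is the natural way to compare documentation rates between the suspected and proven ACS groups; the sketch below is our illustration, not an analysis reported by the authors.

```python
from scipy.stats import fisher_exact

# Documentation of cocaine use/non-use in clinical notes (from the abstract).
table = [[4, 105],  # suspected ACS: documented, not documented
         [2, 48]]   # proven ACS:    documented, not documented
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.2f}")  # the two rates are essentially identical
```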

Discussion

Although junior medical staff are aware that cocaine is a risk factor for ACS/AMI, they are less likely to ask about it in routine clinical practice or to record its use/non‐use in clinical notes. It is essential that patients presenting with suspected ACS are asked about cocaine use, since the management of these patients differs from that of patients with ACS secondary to “classical” cardiovascular risk factors.

Cocaine is a sympathomimetic agent, causing inhibition of pre‐synaptic re‐uptake of norepinephrine and dopamine as well as stimulation of the release of catecholamines.1,2 The increased peripheral concentration of catecholamines stimulates both α and β adrenergic receptors.2 This stimulation can cause coronary artery vasospasm and a decreased supply of oxygenated blood to the cardiac muscle, leading to myocardial ischaemia and acute coronary syndrome (ACS). The increased concentration of catecholamines also raises the heart rate and therefore myocardial oxygen demand, which can further worsen the myocardial ischaemia. In severe cases this can lead to acute myocardial infarction (AMI), although the pathophysiology is one of vasospasm rather than atherosclerotic disease, so the treatment options are different. The risk of ACS and AMI is greatest in the first hour following cocaine use.3

Knowledge that cocaine is a risk factor for ACS and AMI is therefore essential in ensuring correct treatment of these patients. In a US study of 129 patients presenting to an emergency department with “chest pain syndromes”, cocaine use or non‐use was recorded in only 18 (13%) of the notes.4 Of the episodes where cocaine use/non‐use was recorded, only 9 (50%) were recorded by members of the emergency department team, who are the people most likely to review patients in the first hour following cocaine use, when the risk of ACS and/or AMI is greatest. There was no comparison of the recording of cocaine use or non‐use between patients with proven ACS/AMI and those with other diagnoses.

The prevalence of cocaine use is increasing in the UK, and in a recent population study its point prevalence of use was 2.4%.5 However, a study of the recording of cocaine use in patients with suspected and proven ACS, together with doctors' knowledge of cocaine as a risk factor for ACS, has not previously been reported in the UK. We therefore designed a study to determine whether junior medical staff are aware of cocaine as a risk factor for ACS and/or AMI, and whether this risk factor is recorded for patients presenting with suspected or proven ACS and/or AMI in clinical practice.

20.

Background

Although the association between type 1 diabetes mellitus (T1DM) and coeliac disease is well known, the presenting features and clinical characteristics of the two diseases when they coexist are less well documented.

Methods

All patients with T1DM attending a paediatric diabetes clinic in London, UK, were screened for coeliac disease by serological testing for coeliac antibodies (antiendomysial, together with tissue transglutaminase and/or antigliadin). Antibody‐positive patients were reviewed and their presenting symptoms, tissue biopsy results and coexisting morbidities investigated. Glycaemic control, growth and the effect of a gluten‐free diet on these variables were also evaluated.

Results

Of the 113 patients with T1DM, 7 (6.2%) tested antibody positive. Jejunal biopsy confirmed coeliac disease in 5 of the 7 patients (4.4% of the cohort). Coeliac disease presented atypically or silently in the majority of cases, with an unpredictable interval between the diagnosis of diabetes and the presentation of coeliac disease. Coeliac disease did not appear to affect growth. Mean glycated haemoglobin (HbA1c) levels were not significantly raised in subjects (9.87%) compared with matched controls without coeliac disease (9.08%) (p = 0.249). Analyses of the effect of a gluten‐free diet on growth and HbA1c were limited. Of the seven subjects, two had other autoimmune diseases.
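With only 5 biopsy‐confirmed cases among 113 patients, the 4.4% prevalence estimate carries wide uncertainty. The sketch below computes a Wilson 95% interval for it; this is our illustration, not an interval reported in the abstract.

```python
import math

# Wilson 95% CI for a binomial proportion: 5 confirmed cases out of 113.
x, n, z = 5, 113, 1.96
p = x / n
centre = (p + z*z / (2*n)) / (1 + z*z / n)
half = (z / (1 + z*z / n)) * math.sqrt(p*(1 - p)/n + z*z/(4*n*n))
print(f"prevalence = {p:.1%}, 95% CI {centre - half:.1%} to {centre + half:.1%}")
# ~4.4% (1.9% to 9.9%)
```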

Conclusion

Coeliac disease presents atypically and unexpectedly in children and adolescents with T1DM. This, along with the strong association between the two diseases, supports regular screening for coeliac disease among these patients. The value of a gluten‐free diet cannot be assessed from this study alone, although other studies show that it reduces the risk of complications.

The association between type 1 diabetes mellitus (T1DM) and coeliac disease was observed as early as the late 1960s and has been noted in various studies since.1,2,3 This is unsurprising given that both conditions are strongly linked to the HLA system, in particular the haplotypes A1, B8, DR3 and DQ2.4 Coeliac disease and T1DM coexist more frequently than would be expected by chance, and the prevalence of coeliac disease among patients with T1DM has been estimated at between 1% and 10%. A large UK‐based study estimated the prevalence among children and adolescents to be 4.8%.5

Healthcare professionals face two challenges in caring for young people with coeliac disease and T1DM: firstly, the diagnosis of coeliac disease among the large number of patients who present asymptomatically or atypically; and secondly, the prevention of the long‐term complications of coeliac disease. Given the increased prevalence of coeliac disease among diabetic patients, regular and repeated screening for coeliac autoantibodies has become widely accepted practice. Symptomatic coeliac disease is only the “tip of the iceberg”, and it has been recognised that coeliac disease is “more common and more varied in its presentation than previously thought”.5 The classical symptoms of failure to thrive, weight loss, steatorrhoea and a change in bowel habit are less commonly seen than milder or less specific symptoms (for example, recurrent abdominal pain).6

Coeliac disease is believed to have an adverse effect on T1DM, particularly with regard to glycaemic control. In addition, coeliac disease carries an increased risk of long‐term complications, including decreased bone density and gastrointestinal malignancies.7,8 Adherence to a gluten‐free diet is difficult but appears to reduce the risk of malignancy.9 However, its effect on diabetes remains controversial.

This retrospective study aims to:
  • estimate the prevalence of coeliac disease among a population of children and adolescents with T1DM within a clinical setting;
  • investigate how coeliac disease presents among children and adolescents with T1DM in terms of its presentation and time course of development;
  • investigate the effect of coeliac disease on the growth and glycaemic control of children and adolescents with T1DM and the benefit of a gluten free diet;
  • examine the association of other diseases with coeliac disease and T1DM.
