Similar articles
Found 20 similar articles (search time: 46 ms)
1.

Aim

To investigate the time trends of leukemia and lymphoma in Croatia from 1988 to 2009, compare them with trends in other populations, and identify possible changes.

Methods

The data sources were the Croatian National Cancer Registry for incidence data, Croatian Bureau of Statistics for the numbers of deaths, and United Nations population estimates. Joinpoint regression analysis using the age-standardized rates was used to analyze incidence and mortality trends.

Results

Acute lymphoblastic leukemia and chronic lymphocytic leukemia incidence did not significantly change. Acute myeloid leukemia incidence significantly increased in women, with an estimated annual percentage change (EAPC) of 2.6% during the whole period, and in men since 1998, with an EAPC of 3.2%. Chronic myeloid leukemia incidence significantly decreased in women (EAPC -3.7%) and remained stable in men. Mortality rates were stable for both lymphoid and myeloid leukemia in both sexes. Hodgkin lymphoma incidence increased non-significantly, while its mortality decreased significantly (EAPC -5.6% in men and -3.7% in women). Non-Hodgkin lymphoma incidence increased significantly in women (EAPC 3.2%) and non-significantly in men, while its mortality increased significantly in both men (EAPC 1.6%) and women (EAPC 1.8%).

Conclusion

While Croatia had leukemia and lymphoma incidence trends similar to those in other countries, its mortality trends were less favorable than in Western Europe. The lack of declines in leukemia incidence and non-Hodgkin lymphoma mortality could be attributed to the late introduction of optimal therapies. As the most up-to-date diagnostics and treatments are now available and covered by health insurance, we expect more favorable trends in the future.

Leukemias and lymphomas contribute 5% to the overall cancer incidence in Croatia (1). They comprise disease entities diverse in etiology, incidence, prognosis, and treatment. The four major leukemia subtypes are acute lymphoblastic leukemia (ALL), chronic lymphocytic leukemia (CLL), acute myeloid leukemia (AML), and chronic myeloid leukemia (CML), while lymphomas include Hodgkin lymphoma (HL) and non-Hodgkin lymphoma (NHL).

According to EUROCARE-4 results, the estimated 5-year relative survival for European patients diagnosed between 2000 and 2002 is 43.4% for the overall group of leukemias. CLL has the highest 5-year survival rate (70.2%), followed by CML (37.2%), ALL (28.8%), and AML (15.8%). Five-year survival rates for lymphomas were 81.9% for HL and 53.6% for NHL (2).

Recognized risk factors for leukemia are exposure to ionising radiation (3-5), chemicals such as benzene (6), pesticides (7), chemotherapy (8), cigarette smoking (9), genetic disorders (10,11), family history in the case of CLL (12), infection with HTLV-I (13), socio-economic status (14), and obesity (15). However, these risk factors explain only a minority of cases, and leukemia etiology remains largely unknown. Environmental risk factors for NHL are exposure to pesticides and solvents (16,17) and HIV infection (18), while those for HL include HIV (19) and Epstein-Barr virus infection (20).

The last decades brought significant improvements in the diagnosis and treatment of leukemias and lymphomas.
The aim of our study was to investigate the time trends of leukemia and lymphoma in Croatia from 1988 to 2009, compare them with trends in other populations, and identify possible changes.
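The EAPC figures reported above come from joinpoint regression, which fits log-linear segments to age-standardized rates; within a single segment the estimate reduces to an ordinary least-squares fit of ln(rate) on calendar year. A minimal sketch with hypothetical rates (not the registry data):

```python
import math

def eapc(years, rates):
    """Estimated annual percentage change: fit ln(rate) = b0 + b1*year
    by ordinary least squares, then EAPC = 100 * (exp(b1) - 1)."""
    n = len(years)
    ys = [math.log(r) for r in rates]
    mx = sum(years) / n
    my = sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
         sum((x - mx) ** 2 for x in years)
    return 100.0 * (math.exp(b1) - 1.0)

# Hypothetical age-standardized incidence rates per 100 000
years = list(range(1988, 1993))
rates = [3.0, 3.1, 3.2, 3.3, 3.4]
print(round(eapc(years, rates), 1))  # roughly 3.2% per year
```

A full joinpoint analysis additionally searches for the change-points between segments and tests each segment's slope for significance; the sketch covers only the within-segment slope.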

2.
Aim

To evaluate shear-wave elastographic (SWE) and related gray-scale features of pure invasive lobular breast carcinoma (ILC) and compare them with invasive ductal breast cancer (IDC).

Methods

Quantitative SWE features of mean (El-mean), maximum (El-max), and minimum (El-min) elasticity values of the stiffest portion of the mass, and the lesion-to-fat elasticity ratio (E-ratio), were measured in 40 patients with pure ILC and compared with 75 patients with IDC. Qualitative gray-scale features of lesion size, echogenicity, orientation, and presence of distal shadowing were determined and compared between the groups.

Results

ILC were significantly larger than IDC (P = 0.008) and exhibited significantly higher El-max (P = 0.015) and higher El-mean (P = 0.008) than IDC. ILC were significantly more often horizontally oriented, while IDC were significantly more often vertically oriented (P < 0.001); ILC were significantly more often hyperechoic than IDC (P < 0.001). Differences in stiffness between ILC and IDC determined by quantitative SWE parameters were present only in small tumors (≤1.5 cm in size), ie, small ILC had significantly higher El-max (P = 0.030), El-mean (P = 0.014), and El-min (P = 0.045) than small IDC, while tumors larger than 1.5 cm had almost equal stiffness, without significant differences between the groups.

Conclusion

Specific histopathologic features of ILC are translated into their qualitative sonographic and quantitative sonoelastographic appearance, with higher stiffness of small ILC compared to small IDC. Gray-scale and sonoelastographic features may help in diagnosing ILC.

Invasive ductal cancer (IDC) is the most common breast cancer, while invasive lobular cancer (ILC) is the second most common, accounting for 6%-12% of breast cancers (1-3).
ILC differs considerably from IDC in its unique pathological growth pattern, the so-called Indian-file pattern, with sheets of single-cell layers growing along the Cooper ligaments, ductuli, and other breast structures, resembling a spiderweb that diffusely spreads through the breast and produces only a minor desmoplastic reaction (4,5). This spiderweb-like growth is reflected in the imaging features of ILC, as well as in its clinical presentation (6). IDC usually manifests clinically as a firm lump, while ILC usually manifests as a palpable thickening and skin or nipple retraction (3,5). ILC has an increased tendency for multifocality and multicentricity, a higher risk of bilateral breast cancer (20%-29%), and an older age at onset (7,8). Lymph node metastases are less common in ILC than in IDC of equal size, because ILC tumor cells lack cellular atypia and often have a low mitotic rate (9). ILC has a propensity to metastasize to the chest, peritoneum, retroperitoneum, and pelvis (10).

Because IDC grows as a mass infiltrating the surrounding tissue, it is much more easily detected than ILC, including on mammography. ILC has higher false-negative mammographic rates than IDC, since ILC may be invisible or have quite low mammographic density, and microcalcifications are uncommon (6,11). Due to the higher propensity for multicentric and bilateral lesions, it is generally considered that patients with ILC should be referred for preoperative breast MRI, the best imaging modality to evaluate tumor extent, while the benefit of preoperative MRI in IDC has not yet been proven (12,13). Fine-needle aspiration is not as sensitive for the diagnosis of ILC as it is for IDC, and core biopsy should be performed when ILC is suspected, even in cases of palpable lesions (14,15).
ILC is more often associated than IDC with positive margins on surgical excision and is more often treated with mastectomy, because of its large size at diagnosis and the underestimation of tumor extent with conventional imaging (16).

Breast ultrasound is widely used in the diagnosis of breast cancer, usually after mammography, and most image-guided core biopsies of breast lesions are routinely performed under sonographic guidance (17,18). Ultrasound is highly operator-dependent, much more so than mammography or MRI. The quality of ultrasonic equipment and transducers is variable, suboptimal examinations are common, and interobserver variability is high; the sensitivity of ultrasound in the detection of ILC is reported to be in the range of 68%-88% (6,12,19).

Sonoelastography is a relatively new ultrasonographic method that may help in the detection and differentiation of benign and malignant breast lesions (18,20). Strain elastography allows qualitative estimation of breast lesion stiffness, while shear-wave elastography (SWE) allows quantification of lesion stiffness in kilopascals (kPa) (18). Multicenter studies found that SWE features can help discriminate breast cancers from benign breast lesions, and breast cancers among themselves (20-22). It was also shown that some IDC, such as triple-negative breast cancers, differ in stiffness from other IDC (23). Studies evaluating SWE features of invasive cancers included only small numbers of patients with ILC, and to the best of our knowledge none so far has provided values specific to a larger, homogeneous group of patients with pure ILC (24,25).

The aim of this single-center study was to evaluate and establish SWE and related conventional sonographic features of pure ILC of the breast in a group of 40 patients, and to compare these features with those of the most common invasive breast cancer, IDC.
SWE features within the ILC group were also correlated with tumor size, extent, histologic grade, and the presence of nodal metastases.
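The quantitative SWE parameters compared above (El-mean, El-max, El-min in kPa, and the lesion-to-fat E-ratio) are simple summaries of elasticity readings over a region of interest; a sketch with hypothetical readings (the scanner computes these internally):

```python
def swe_features(lesion_kpa, fat_kpa):
    """Summarize shear-wave elastography readings (in kPa) from a region
    of interest placed over the stiffest portion of a lesion, plus a
    reference ROI in subcutaneous fat for the lesion-to-fat ratio."""
    el_mean = sum(lesion_kpa) / len(lesion_kpa)
    return {
        "El-mean": el_mean,
        "El-max": max(lesion_kpa),
        "El-min": min(lesion_kpa),
        # lesion-to-fat elasticity ratio (E-ratio)
        "E-ratio": el_mean / (sum(fat_kpa) / len(fat_kpa)),
    }

# Hypothetical readings: a stiff lesion vs. soft subcutaneous fat
f = swe_features([120.0, 150.0, 180.0], [10.0, 14.0])
print(f["El-max"], round(f["E-ratio"], 1))
```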

3.

Aim

To analyze the serum nicotinamide phosphoribosyltransferase (Nampt) level and its prognostic value in bladder cancer (BC).

Methods

The study included 131 patients with transitional cell BC and 109 healthy controls from the West China Hospital of Sichuan University in the period between 2007 and 2013. Nampt concentration in serum was measured by commercial ELISA kits for human Nampt.

Results

The serum Nampt protein level in patients with BC (mean ± standard deviation, 16.02 ± 7.95 ng/mL) was significantly higher than in the control group (6.46 ± 2.08 ng/mL) (P < 0.001). Serum Nampt level was an independent prognostic marker of non-muscle-invasive BC, with a higher serum Nampt level (>14.74 ng/mL) indicating shorter recurrence-free survival (hazard ratio = 2.85; 95% confidence interval, 1.01-8.06; P = 0.048).

Conclusion

Our results suggest that serum Nampt level may serve as a biomarker of BC and an independent prognostic marker of non-muscle-invasive BC.

Bladder cancer (BC) is the ninth most common cancer diagnosis worldwide (1) and the most expensive cancer to treat (2). Among men it is the fourth most common cancer, with an incidence four times higher than in women (3). In China, BC caused 17 365 deaths in 2005, with a steady increase in mortality between 1991 and 2005 (4). Of newly diagnosed BC cases, 70%-80% present with non-muscle-invasive disease, 50%-70% of these recur despite endoscopic and intravesical treatments, and 10%-30% progress to muscle-invasive disease (5,6). Most recurrences occur within 5 years (7). Therefore, new biomarkers of tumorigenesis and prognosis of BC are needed to develop more effective prevention and treatment.

Nicotinamide phosphoribosyltransferase (Nampt) is a rate-limiting enzyme in the salvage pathway of mammalian NAD+ biosynthesis (8). Previous studies have shown that it is significantly increased in primary colorectal cancer (9-11), lung cancer (12), breast cancer (13), prostate cancer (14), and gastric cancer (15). Thus, Nampt may be a good biomarker of malignant potential and stage progression (12,16). Our previous study revealed that genetic variants in NAMPT may predict BC risk and prognosis (17). In the present study, we analyzed the serum Nampt level and its prognostic value in BC.
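Concentrations from a sandwich ELISA such as the Nampt assay above are read off a standard curve built from calibrators of known concentration. A simplified sketch using linear interpolation between hypothetical calibrator points (commercial kits more often fit a four-parameter logistic curve; the values here are illustrative, not the kit's):

```python
def conc_from_od(od, standards):
    """Convert an absorbance (optical density) reading to a concentration
    by linear interpolation between calibrators. `standards` is a list of
    (od, ng_per_ml) pairs; readings outside the curve are rejected."""
    pts = sorted(standards)
    if not pts[0][0] <= od <= pts[-1][0]:
        raise ValueError("OD outside the standard curve range")
    for (od0, c0), (od1, c1) in zip(pts, pts[1:]):
        if od0 <= od <= od1:
            frac = (od - od0) / (od1 - od0)
            return c0 + frac * (c1 - c0)

# Hypothetical standard curve: (OD, ng/mL)
curve = [(0.1, 0.0), (0.4, 5.0), (0.9, 10.0), (1.6, 20.0)]
print(conc_from_od(0.65, curve))  # midway between 5 and 10 ng/mL
```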

4.

Aim

To evaluate the possible prognostic role of the expression of MAGE-A4 and NY-ESO-1 cancer/testis antigens in women diagnosed with invasive ductal breast cancer and determine the expression of HER-2 antigen.

Methods

The expression of MAGE-A4, NY-ESO-1, and HER-2 antigens was evaluated immunohistochemically on archival paraffin-embedded samples of breast cancer tissue from 81 patients. All patients had T1 to T3, N0 to N1, M0 tumors and underwent postoperative radiotherapy and, if indicated, systemic therapy (chemotherapy and hormonal therapy). The antigen expression in women who were disease-free for 5 years of follow up (n = 23) was compared with that in women with either locoregional relapse (n = 30) or bone metastases (n = 28). Patient survival after 10 years of follow up was assessed.

Results

The three groups of women were comparable in terms of age, type of operation, tumor size, tumor grade, number of metastatically involved axillary lymph nodes, Nottingham prognostic index (NPI), progesterone receptor (PR) status, and adjuvant hormonal therapy. Estrogen receptors (ER) were positive in 13 women in the 5-year relapse-free group vs 8 in the locoregional relapse group and 7 in the bone metastases group (P = 0.032). Significantly fewer women received adjuvant chemotherapy in the 5-year relapse-free group than in the other two groups (7 vs 23 with locoregional relapse and 25 with bone metastases; P < 0.001). This group also had significantly better 10-year survival (14 women vs 1 with locoregional relapse and 1 with bone metastases; P < 0.001). The three groups did not differ in NY-ESO-1 or HER-2 expression, but the number of patients expressing MAGE-A4 antigen was significantly lower in the group with locoregional relapse (P = 0.014). In all groups, MAGE-A4 antigen expression was associated with NY-ESO-1 antigen expression (P = 0.006), but not with tumor size and grade, number of metastatically involved axillary lymph nodes, or ER and PR status. MAGE-A4-positive patients had significantly longer survival than MAGE-A4-negative patients (P = 0.046). This was not observed for NY-ESO-1 and HER-2 antigens.

Conclusion

Our results suggest that the MAGE-A4 antigen may be used as a tumor marker of potential prognostic relevance.

Breast cancer is the most common malignancy in women (1). Its clinical course may vary from indolent and slowly progressive to rapidly metastatic disease. Identification of prognostic and predictive factors that reflect the biology of breast cancer is important for the assessment of prognosis and the selection of patients who may benefit from adjuvant and/or systemic therapy. The important aspects of prognostic factors suitable for clinical use are their availability, reproducibility, and cost. In routine clinical practice, treatment decisions and the selection of treatment modalities for each individual patient are based on standard prognostic factors, such as age (1,2), menopausal status (3), tumor size (1-4), tumor grade (3-5), steroid-hormone receptor status (1-5), and nodal metastases (1-5).

Variability in the clinical course of breast cancer is partly related to tumor cell growth rate and other features, such as invasiveness or metastatic potential. Research in molecular biology has identified genes and their products involved in or associated with malignant cell transformation and behavior. Moreover, the expression of some of these molecules, such as p53 (1,6,7), Ki-67 (7,8), nm23 (1,7), cathepsin D (1,7), Ep-CAM (9,10), HER-2 (1,2,6), and urokinase-type plasminogen activator and its inhibitor (1,11), is associated with the patient’s prognosis. As many genes and molecules might be involved in malignant transformation and cell behavior, additional molecules may also be tested as potential prognostic factors.

The cancer/testis (C/T) genes encode tumor-associated antigens (TAA) found in various tumors of different histological origin, but not in normal tissues other than testis (12,13). Their physiological function is unknown. Peptides derived from these antigens could be used as targets in active immunotherapy.
Analysis of the expression of these genes or their products in malignancies could also be of potential diagnostic and/or prognostic relevance (14,15). Therefore, we performed a retrospective analysis of the immunohistochemical expression of the C/T antigens MAGE-A4 and NY-ESO-1 in women with invasive breast cancer. We also analyzed the expression of HER-2 antigen, because it has a prognostic and predictive role (1,16).

5.

Aim

To analyze potential and actual drug-drug interactions reported to the Spontaneous Reporting Database of the Croatian Agency for Medicinal Products and Medical Devices (HALMED) and determine their incidence.

Methods

In this retrospective observational study performed from March 2005 to December 2008, we detected potential and actual drug-drug interactions using interaction programs and analyzed them.

Results

HALMED received 1209 reports involving at least two drugs. There were 468 (38.7%) reports of potential drug-drug interactions, 94 of which (7.8% of all reports) were actual drug-drug interactions. Among actual drug-drug interaction reports, the proportion of serious adverse drug reactions (53 of 94) and the number of drugs per report (n = 4) were significantly higher (P < 0.001) than among the remaining reports (580 of 1982 and n = 2, respectively). Actual drug-drug interactions most frequently involved nervous system agents (34.0%), and interactions caused by antiplatelet, anticoagulant, and non-steroidal anti-inflammatory drugs were in most cases serious. In only 12 of 94 reports were actual drug-drug interactions recognized by the reporter.

Conclusion

The study confirmed that the Spontaneous Reporting Database is a valuable resource for detecting actual drug-drug interactions. It also identified drugs leading to serious adverse drug reactions and deaths, thus indicating the areas on which health care education should focus.

Adverse drug reactions (ADR) are among the leading causes of mortality and morbidity, responsible for additional complications (1,2) and longer hospital stays. The magnitude of ADRs and the burden they place on the health care system are considerable (3-6), yet they are a preventable public health problem (7), given that an important cause of ADRs is drug-drug interactions (8,9). Although there is a substantial body of literature on ADRs caused by drug-drug interactions, it is difficult to accurately estimate their incidence, mainly because of different study designs, populations, frequency measures, and classification systems (10-15).

Many studies including different groups of patients found that the percentage of potential drug-drug interactions resulting in ADRs ranged from 0% to 60% (10,11,16-25). System analysis of ADRs showed that drug-drug interactions represented 3%-5% of all in-hospital medication errors (3). The most endangered groups were elderly and polymedicated patients (22,26-28), and emergency department visits were a frequent result (29). Although the overall incidence of ADRs caused by drug-drug interactions is modest (11-13,15,29,30), they are severe and in most cases lead to hospitalization (31,32).

Potential drug-drug interactions are defined on the basis of retrospective chart reviews, while actual drug-drug interactions are defined on the basis of clinical evidence, ie, they are confirmed by laboratory tests or symptoms (33).
The frequency of potential interactions is higher than that of actual interactions, resulting in large discrepancies among study findings (24).

A valuable resource for detecting drug-drug interactions is a spontaneous reporting database (15,34). Several methods are currently used to detect possible drug-drug interactions in such databases (15,29,35,36). However, drug-drug interactions in general are rarely reported, and information about ADRs due to drug-drug interactions is usually lacking.

The aim of this study was to estimate the incidence of actual and potential drug-drug interactions in the national Spontaneous Reporting Database of ADRs in Croatia. Additionally, we assessed the clinical significance and seriousness of drug-drug interactions and their probable mechanism of action.
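The contrast reported above between the serious-reaction proportions of actual drug-drug interaction reports (53 of 94) and the remaining reports (580 of 1982) is the kind of comparison a two-proportion z-test captures; a self-contained sketch (the abstract does not state which test the authors used):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error.
    Returns (z, p_value), using the normal approximation via erf."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(53, 94, 580, 1982)
print(round(z, 2), p < 0.001)
```

With these counts the difference is large (z well above 5), consistent with the reported P < 0.001.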

6.

Aim

To explore the prevalence of psychiatric heredity (family history of psychiatric illness, alcohol dependence disorder, and suicidality) and its association with diagnoses of stress-related disorders established during psychiatric examination of Croatian war veterans.

Methods

The study included 415 war veterans who were psychiatrically assessed and diagnosed by the same psychiatrist during an expert examination conducted for the purposes of compensation seeking. Data were collected by a structured diagnostic procedure.

Results

There was no significant correlation between psychiatric heredity of psychiatric illness, alcohol dependence, or suicidality and diagnosis of posttraumatic stress disorder (PTSD) or PTSD with psychiatric comorbidity. Diagnoses of psychosis or psychosis with comorbidity significantly correlated with psychiatric heredity (φ = 0.111; P = 0.023). There was a statistically significant correlation between maternal psychiatric illness and the patients’ diagnoses of partial PTSD or partial PTSD with comorbidity (φ = 0.104; P = 0.035) and psychosis or psychosis with comorbidity (φ = 0.113; P = 0.022); paternal psychiatric illness and the patients’ diagnoses of psychosis or psychosis with comorbidity (φ = 0.130; P = 0.008), alcohol dependence or alcohol dependence with comorbidity (φ = 0.166; P = 0.001); psychiatric illness in the primary family with the patients’ psychosis or psychosis with comorbidity (φ = 0.115; P = 0.019); alcohol dependence in the primary family with the patients’ personality disorder or personality disorder with comorbidity (φ = 0.099; P = 0.044); and suicidality in the primary family and a diagnosis of personality disorder or personality disorder with comorbidity (φ = 0.128; P = 0.009).

Conclusion

The study confirmed that a positive parental and familial history of psychiatric disorders puts the individual at higher risk of developing psychiatric illness or alcohol or drug dependence disorder. Psychiatric heredity might not be necessary for an individual exposed to severe combat-related events to develop symptoms of PTSD.

There are several risk factors associated with the development of posttraumatic stress disorder (PTSD), such as factors related to cognitive and biological systems and genetic and familial risk (1), environmental and demographic factors (2), and personality and psychiatric anamnesis (3). They are usually grouped into three categories: factors that precede the exposure to trauma, or pre-trauma factors; factors associated with the trauma exposure itself; and post-trauma factors, which are associated with the recovery environment (2,4).

Many studies support the hypothesis that pre-trauma factors, such as ongoing life stress, psychiatric history, female sex (3), childhood abuse, low economic status, lack of education, low intelligence, lack of social support (5), belonging to a racial or ethnic minority, previous traumatic events, psychiatric heredity, and a history of perceived life threat, influence the development of stress-related disorders (6). Many findings suggest that ongoing life stress or a prior trauma history sensitizes a person to a new stressor (2,7-9). The same is true for the lack of social support, particularly the loss of support from significant others (2,9-11), as well as from friends and community (12-14). If the community does not have an elaborated plan for providing socioeconomic support to the victims, then low socioeconomic status can also be an important predictor of a psychological outcome such as PTSD (2,10,15). Unemployment was recognized as a risk factor for developing PTSD in a survey of 374 trauma survivors (16).
It is known that PTSD commonly occurs in patients with a previous psychiatric history of mental disorders, such as affective disorders, other anxiety disorders, somatization, substance abuse, or dissociative disorders (17-21). Epidemiological studies showed that pre-existing psychiatric problems are one of the three factors that can predict the development of PTSD (2,22). Pre-existing anxiety disorders, somatoform disorders, and depressive disorders can significantly increase the risk of PTSD (23). Women have a higher vulnerability to PTSD than men if they experienced sexually motivated violence or had pre-existing anxiety disorders (23,24). A number of studies have examined the effects of gender differences on the predisposition for developing PTSD, with the explanation that women generally have higher rates of depression and anxiety disorders (3,25,26). War-zone stressors were described as more important for PTSD in men, whereas post-trauma resilience-recovery factors were more important for women (27).

Lower levels of education and poorer cognitive abilities also appear to be risk factors (25). Golier et al (25) reported that low levels of education and low IQ were associated with poorer recall on word memorization tasks. In addition, this study found that the PTSD group with lower Wechsler Adult Intelligence Scale-Revised (WAIS-R) scores had fewer years of education (25). Nevertheless, some experts provided evidence that poorer cognitive ability in PTSD patients is a result or consequence rather than the cause of stress-related symptoms (28-31). Studies of war veterans showed that belonging to a racial or ethnic minority could be associated with higher rates of PTSD even after adjustment for combat exposure (32,33). Many findings suggest that early trauma in childhood, such as physical or sexual abuse or even neglect, can be associated with adult psychopathology and lead to the development of PTSD (2,5,26,34,35).
Surveys on animal models confirm the findings of lifelong influences of early experience on stress hormone reactivity (36).

Along with the reports on the effects of childhood adversity as a risk factor for the later development of PTSD, there is also evidence for the influence of previous exposure to trauma-related events on PTSD (9,26,28). Breslau et al (36) reported that previous trauma experience substantially increased the risk of chronic PTSD.

Perceived life threats and coping strategies carry a high risk for developing PTSD (9,26). For instance, Ozer et al (9) reported that dissociation during trauma exposure has high predictive value for the later development of PTSD. In addition, the way in which people process and interpret perceived threats has a great impact on the development or maintenance of PTSD (37,38).

Brewin et al (2) reported that individual and family psychiatric history had more uniform predictive effects than other risk factors. Still, this kind of influence has not yet been examined in detail.

Keeping in mind the lack of investigation of parental psychiatric heredity in the development of stress-related disorders, the aim of our study was to explore the prevalence of a family history of psychiatric illness, alcohol dependence, and suicidality, and its correlation with the established diagnosis of stress-related disorders in Croatian 1991-1995 war veterans.
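The φ values reported in the results above are phi coefficients for 2 × 2 contingency tables (family history present/absent × diagnosis present/absent); a sketch with hypothetical counts, not the study's data:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table:
                 diagnosis+   diagnosis-
    exposed+         a            b
    exposed-         c            d
    phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical counts: family history of psychiatric illness vs. diagnosis
print(round(phi_coefficient(20, 30, 40, 110), 3))
```

Phi ranges from -1 to 1 and, for a 2 × 2 table, equals Pearson's correlation between the two binary variables, which is why values near 0.1 (as in the study) indicate weak but potentially significant associations in large samples.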

7.

Aim

To elucidate the involvement of noradrenergic system in the mechanism by which diazepam suppresses basal hypothalamic-pituitary-adrenal (HPA) axis activity.

Methods

Plasma corticosterone and adrenocorticotropic hormone (ACTH) levels were determined in female rats treated with diazepam alone, as well as with diazepam in combination with clonidine (α2-adrenoreceptor agonist), yohimbine (α2-adrenoreceptor antagonist), alpha-methyl-p-tyrosine (α-MPT, an inhibitor of catecholamine synthesis), or reserpine (a catecholamine depleting drug) and yohimbine.

Results

Diazepam administered in a dose of 2.0 mg/kg suppressed basal HPA axis activity, ie, decreased plasma corticosterone and ACTH levels. Pretreatment with clonidine or yohimbine failed to affect basal plasma corticosterone and ACTH concentrations, but abolished diazepam-induced inhibition of the HPA axis activity. Pretreatment with α-MPT, or with a combination of reserpine and yohimbine, increased plasma corticosterone and ACTH levels and prevented diazepam-induced inhibition of the HPA axis activity.

Conclusion

The results suggest that α2-adrenoreceptor activity, as well as intact presynaptic noradrenergic function, is required for the suppressive effect of diazepam on HPA axis activity.

Benzodiazepines are used for their anxiolytic, sedative-hypnotic, muscle relaxant, and anticonvulsant properties in the treatment of a variety of neuropsychiatric disorders (1,2), including anxiety and depression, which are often related to disturbances in the activity of the hypothalamic-pituitary-adrenal (HPA) axis (3,4). Although these drugs exert most of their pharmacological effects via γ-aminobutyric acidA (GABAA) receptors (5,6), benzodiazepine administration has been associated with alterations in neuroendocrine function both in experimental animals and in humans (7-9). However, even after years of extensive study, the complex mechanisms by which these widely used drugs produce their effects on the HPA axis are still not known.

Although most previous studies have demonstrated that classical benzodiazepines such as diazepam decrease HPA axis activity in stressful contexts (10-14), under basal conditions they have been shown to stimulate (9,11,15-18), inhibit (15,19-22), or not affect (17,23-25) HPA axis activity. Such diverse results might be related to several factors, such as dose and gender (15,16,20,21,26-28), or may be a consequence of the net effect of non-selective benzodiazepines on the various GABAA receptor isoforms (9).

Our previous studies demonstrated that while diazepam (1 mg/kg) produced no change in plasma corticosterone levels in male rats (15,20), it decreased basal levels of corticosterone in female rats (15,26). However, although diazepam inhibited the HPA axis activity of female rats following administration of lower doses (1 or 2 mg/kg) (15,20,21,26), it stimulated HPA axis activity following administration of a high dose (10 mg/kg) (15,16,26).
Moreover, whereas the suppressive effect of the lower dose of diazepam (2.0 mg/kg) on HPA axis activity in female rats involves the GABAA receptor complex (21), the increase in corticosterone levels caused by a higher dose of diazepam (10 mg/kg) does not involve the stimulation of GABAA receptors (16). In addition, the stimulatory effect of 10 mg/kg diazepam on HPA axis activity in rats seems not to be mediated by the benzodiazepine/GABA/chloride channel complex or by peripheral benzodiazepine receptors, but rather by a cyclic adenosine monophosphate (AMP)-dependent mechanism (18).

Since our previous results suggested that the effect of a high dose of diazepam on HPA axis activity in female rats might be due to a blockade of α2-adrenergic receptors (16), the aim of this study was to elucidate whether the noradrenergic system also has a modulatory role in the inhibitory effect of 2.0 mg/kg diazepam on basal plasma adrenocorticotropic hormone (ACTH) and corticosterone levels in female rats.

8.

Aim

To describe and interpret lung cancer incidence and mortality trends in Croatia between 1988 and 2008.

Methods

Incidence data on lung cancer for the period 1988-2008 were obtained from the Croatian National Cancer Registry, while mortality data were obtained from the World Health Organization mortality database. Population estimates for Croatia were obtained from the Population Division of the United Nations Department of Economic and Social Affairs. We calculated and analyzed age-standardized incidence and mortality rates and used joinpoint regression analysis to describe incidence and mortality trends over time.

Results

Lung cancer incidence and mortality rates in men decreased significantly in all age groups younger than 70 years. Age-standardized incidence rates in men decreased significantly, by 1.3% annually. Joinpoint analysis of mortality in men identified three trends, with the average annual percent change (AAPC) showing a significant decrease of 1.1%. Lung cancer incidence and mortality rates in women increased significantly in all age groups older than 40 years and decreased in younger women (30-39 years). Age-standardized incidence rates in women increased significantly, by 1.7% annually. Joinpoint analysis of age-standardized mortality rates in women identified two trends, with the AAPC showing a significant increase of 1.9%.

Conclusion

Despite the overall decreasing trend, Croatia is still among the European countries with the highest male lung cancer incidence and mortality. Although the incidence trend in women is increasing, their age-standardized incidence rates are still 5-fold lower than in men. These trends follow the observed decrease and increase in the prevalence of male and female smokers, respectively. These findings indicate the need for further introduction of smoking prevention and cessation policies targeting the younger population, particularly women.

Lung cancer is the most common malignancy worldwide, accounting for one fifth of all cancer-related deaths (1). There are different trends of lung cancer incidence and mortality throughout Europe, mostly reflecting different phases of the smoking epidemic in individual countries. In many European countries, the rates in men have recently decreased or stabilized, while the rates in women have increased (2-4). Because the majority of lung cancer deaths are attributed to tobacco smoking, any decline or deceleration in lung cancer death rates could be attributed to past antismoking interventions (5,6). Early indicators of progress in tobacco-smoking control are lung cancer trends in young adults (6). About 90% of lung cancers in men and 83% in women are caused by smoking (7). The risk of developing lung cancer is affected by the level of consumption and duration of smoking (8), as well as by the level of exposure to environmental tobacco smoke (9). The second most important cause of lung cancer is radon, which was estimated to be responsible for 9% of lung cancer deaths in European countries (10).
Other risk factors include exposure to asbestos (11), silica (12), nitrogen oxides (13), radiation to the chest as part of the treatment of malignant diseases (14-16), and scarring on the lungs due to tuberculosis or recurrent pneumonia (17).

Currently in Croatia, lung cancer is the most common cancer in men and the fifth most common cancer in women, accounting for more than 2000 and 600 deaths per year, respectively (18,19). The aim of this study was to provide an overview of the temporal trends of lung cancer incidence and mortality in Croatia for the period 1988-2008.

9.

Aim

To evaluate the importance of epidermal growth factor receptor (EGFR) protein overexpression and gene amplification in carcinogenesis of glottic cancer.

Method

In order to evaluate EGFR expression at protein and gene level, immunohistochemical (IHC) analysis and fluorescent in situ hybridization (FISH) were performed on tissue microarrays of laryngeal tissue (145 samples) – 38 samples of normal mucosa, 46 samples of hyperplastic lesions, and 61 samples of cancerous lesions.

Results

Membranous (mEGFR) and cytoplasmic (cEGFR) EGFR expression was significantly different between the analyzed groups. The differences were most striking in the suprabasal-transforming zone. IHC evaluation showed that high and low mEGFR staining contributed to the differentiation between dysplastic lesions, simple hyperplasia, and cancerous tissue, as well as between different degrees of atypia in hyperplastic lesions (P < 0.05). EGFR gene amplification was not found in simple and abnormal hyperplastic lesions, but it was confirmed in 2/21 atypical hyperplasias, indicating that gene amplification can facilitate identification of malignant potential in hyperplastic lesions. In cancerous tissue, EGFR gene amplification was found in 8/50 samples; in one patient, it was found in preinvasive cancer. In invasive carcinomas, gene amplification was not associated with stage or grade. Carcinomas with gene amplification showed significantly higher cEGFR expression (basal layer P = 0.003; suprabasal layer P = 0.002).

Conclusions

This study confirmed an increase in EGFR protein expression and gene amplification with the increase in biological aggressiveness of glottic lesions. A correlation between EGFR gene amplification and protein expression was established. Gene amplification proved to be an early event in glottic carcinogenesis, indicating its importance for glottic cancer prevention, early detection, and protocol selection.

Epidermal growth factor receptor (EGFR), a 170 kDa transmembrane tyrosine kinase receptor, is a member of the EGFR family of cell surface receptors (1,2). The EGFR gene is located on chromosome 7p12 (1). The most researched family member is human epidermal growth factor receptor 2 (HER-2), which is the target of routine diagnostic and therapeutic protocols for breast cancer (3). Even though less investigated than HER-2, EGFR protein is overexpressed in many solid head and neck tumors and is associated with advanced and aggressive tumors and poor prognosis (1,2,4-6). The relation between protein overexpression and gene amplification remains unclear. Although immunohistochemical (IHC) protein analysis of EGFR has been extensively researched, there is a lack of studies on gene amplification, which seems to be important for new anti-EGFR therapies in some tumor types (2,7-14). Two kinds of drugs have become part of cancer therapy protocols: monoclonal antibodies that act against the ligand-binding domain and small molecules that inhibit the tyrosine-kinase activity of the receptor. They have been approved for use in some types of EGFR-dependent cancers (eg, colon, lung, and pancreas) (2,7-10). Understanding the EGFR signaling pathway and its implication in tumorigenesis is crucial for the selection of patients who could benefit from EGFR-targeted therapies. The selection of patients suitable for treatment is based on biomarkers that can predict the effectiveness of new therapeutic agents.
Intensive studies of some types of lung cancer showed that gene amplification and mutations were more precise markers of treatment response than EGFR protein expression (8,11,12). Research on colon cancer demonstrated that EGFR gene amplification and protein overexpression were insufficient for predicting therapy response even though they were linked to prognosis. Patients with gene amplification showed a better response to monoclonal antibody treatment in some studies, but downstream molecules such as the v-Ki-ras2 Kirsten rat sarcoma viral oncogene homologue seem to be even more important in predicting response to therapy (9,13,14). Although EGFR protein overexpression has been studied in glottic lesions, there is a lack of information on gene amplification for this specific area, and further research on EGFR involvement in glottic carcinogenesis is needed in order to optimize treatment protocols for this tumor type. Laryngeal cancer is the most prevalent malignant head and neck tumor in the male population in Croatia (15), with one of the highest mortality rates among cancers of the head and neck region (15,16). The glottic region is the most common site of origin of laryngeal cancers. This cancer was chosen for research because the follow-up of atypical hyperplastic lesions of the vocal cords and other precancerous formations is especially difficult and because this cancer considerably affects patients’ quality of life. The aim of this study was to investigate the influence of EGFR protein overexpression and gene amplification on carcinogenesis of glottic cancer and their possible role in the improvement of follow-up and treatment protocols for precancerous and cancerous glottic lesions.

10.

Aim

To immunohistochemically evaluate the expression of MAGE-A1, MAGE-A, and NY-ESO-1 cancer/testis (C/T) tumor antigens in medullary breast cancer (MBC) tumor samples and to analyze it in relation to the clinicopathological features.

Methods

This retrospective study included samples from 49 patients: 40 with typical MBC and 9 with atypical MBC. Tumor specimens were obtained from patients operated on in the University Hospital for Tumors and the Sisters of Mercy University Hospital, Zagreb, Croatia, from 1999 to 2005. Standard immunohistochemistry was used on archival paraffin-embedded MBC tissues.

Results

MAGE-A1, MAGE-A, and NY-ESO-1 antigens were expressed in 33% (16/49), 33% (16/49), and 22% (11/49) of patients, respectively. The groups with and without C/T tumor antigen expression did not differ in age at diagnosis, tumor size, axillary lymph node metastasis, adjuvant therapy, or HER-2 expression. Significantly more patients died in the MAGE-A-positive group than in the MAGE-A-negative group (P = 0.010), whereas borderline significance was found between the MAGE-A1-positive and MAGE-A1-negative groups (P = 0.079) and between the NY-ESO-1-positive and NY-ESO-1-negative groups (P = 0.117). Overall survival, as evaluated by Kaplan-Meier curves, was lower in the MAGE-A1-positive (P = 0.031), MAGE-A-positive (P = 0.004), and NY-ESO-1-positive (P = 0.077) groups.

Conclusion

Expression of C/T antigens may represent a marker of potential prognostic relevance in MBC.

Breast cancers are a very heterogeneous group of diseases in terms of natural history, histopathological features, genetic alterations, gene-expression profiles, and response to treatment (1-5). Medullary breast cancers (MBC), both typical and atypical, account for <2% of invasive breast carcinomas. Despite histopathologically highly malignant characteristics, operable and non-metastatic MBCs have a more favorable prognosis than the more common infiltrating ductal breast carcinoma of the same stage (1,6-13). Recent updating of breast cancer classification, based on gene expression profile analyses, has indicated that MBCs can be considered part of the basal-like carcinoma spectrum made up of the estrogen receptor (ER)-negative, progesterone receptor (PR)-negative, and human epidermal growth factor receptor 2 (HER-2)-negative tumors (‘triple-negative phenotype’) (14-17). Cancer/testis (C/T) antigens are a subgroup of tumor-associated antigens expressed in normal testis germ line cells and trophoblast, and in various malignancies of different histological types. They were discovered in the last two decades by a combination of immunological and molecular biology techniques. Most genes that encode these antigens are localized on the X-chromosome, frequently as multigene families, and are referred to as CT-X genes or CT-X antigens (18-23). Biological functions of C/T genes and C/T antigens in both germ lines and tumors remain poorly understood. Due to their tumor-associated expression pattern and limited presence in normal tissues, C/T antigens appear to be valuable targets for immunotherapy of cancer. The best-studied C/T antigens are those of the MAGE-A family and the NY-ESO-1 antigen (18-23).
Our initial reports on C/T antigen expression detected by immunohistochemistry in invasive ductal breast carcinomas of no special type (24,25) have been confirmed by other studies (26,27). However, such studies have not been performed on special or relatively rare histological types of breast cancer, such as MBC. We have recently reported the clinicopathological features of MBCs in 48 patients who were operated on in our two hospitals between 1999 and 2005 (28). The present study includes immunohistochemical analysis of the expression of the C/T antigens MAGE-A, MAGE-A1, and NY-ESO-1 in these MBC samples.
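Overall survival in studies such as this is typically summarized with the Kaplan-Meier product-limit estimator, which multiplies, at each death time, the fraction of at-risk patients who survive. A self-contained sketch (the follow-up times and censoring flags are invented for illustration):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    events: 1 = death observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []  # (time, survival probability) at each death time
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]   # all subjects at time t
        deaths = sum(tied)
        if deaths:
            s *= 1 - deaths / at_risk             # product-limit step
            curve.append((t, s))
        at_risk -= len(tied)                      # deaths and censored leave
        i += len(tied)
    return curve

# Hypothetical follow-up (months): deaths at 6 and 18, one censored at 12
print(kaplan_meier([6, 12, 18], [1, 0, 1]))
```

Comparing such curves between antigen-positive and antigen-negative groups, as done here with the log-rank P values, asks whether the two step functions could plausibly come from the same survival distribution.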

11.

Aim

To assess the effect of peritonsillar infiltration of ketamine and tramadol on post-tonsillectomy pain and to compare their side effects.

Methods

This double-blind randomized clinical trial included 126 patients aged 5-12 years, of American Society of Anesthesiologists physical status class I or II, who had been scheduled for elective tonsillectomy. The patients were randomly divided into 3 groups to receive either ketamine, tramadol, or placebo. All patients underwent the same method of anesthesia and surgical procedure. The three groups did not differ in age, sex, or duration of anesthesia and surgery. Postoperative pain was evaluated using the Children's Hospital of Eastern Ontario Pain Scale (CHEOPS) score. Other parameters, such as the time to the first request for analgesic, hemodynamic parameters, sedation score, nausea, vomiting, and hallucination, were also assessed during the 12 hours after surgery.

Results

The tramadol group had significantly lower pain scores (P = 0.005), a significantly longer time to the first request for analgesic (P = 0.001), a significantly shorter time to the beginning of a liquid regimen (P = 0.001), and lower hemodynamic parameters, such as blood pressure (P = 0.001) and heart rate (P = 0.001), than the other two groups. The ketamine group had significantly more hallucinations and negative behavior than the tramadol and placebo groups. The groups did not differ significantly in the occurrence of nausea and vomiting.

Conclusion

Preoperative peritonsillar infiltration of tramadol can decrease post-tonsillectomy pain, analgesic consumption, and the time to recovery without significant side effects. Registration No: IRCT201103255764N2

Postoperative pain has not only a pathophysiologic impact but also affects the quality of patients’ lives. Improved pain management might therefore speed up recovery and rehabilitation and consequently decrease the time of hospitalization (1). Surgery causes tissue damage and subsequent release of biochemical agents such as prostaglandins and histamine. These agents can then stimulate nociceptors, which send the pain message to the central nervous system to generate the sensation of pain (2-4). Neuroendocrine responses to pain can also cause a hypercoagulation state and immune suppression, leading to hypoglycemia, which can delay wound healing (5). Tonsillectomy is a common surgery in children, and post-tonsillectomy pain is an important concern. Duration and severity of pain depend on the surgical technique, antibiotic and corticosteroid use, preemptive and postoperative pain management, and the patient’s perception of pain (6-9). Many studies have investigated the control of post-tonsillectomy pain using different drugs, such as intravenous opioids, non-steroidal anti-inflammatory drugs, steroids, and ketamine, as well as peritonsillar injection of local anesthetic, opioid, and ketamine (6,7,10-14). Ketamine is an intravenous anesthetic of the phencyclidine family which, because of its antagonist effects on N-methyl-D-aspartate receptors (involved in central pain sensitization), has a regulatory influence on central sensitization and opioid resistance. It can also bind to mu receptors in the spinal cord and brain and cause analgesia. Ketamine can be administered intravenously, intramuscularly, epidurally, rectally, and nasally (15,16).
Several studies have shown the effects of sub-analgesic doses of ketamine on postoperative pain and opioid consumption (7,13,15-17). Its side effects are hallucination, delirium, agitation, nausea, vomiting, airway hypersecretion, and increased intracerebral and intraocular pressure (10,11,15,16). Tramadol is an opioid agonist that acts mostly on mu receptors and, to a smaller extent, on kappa and sigma receptors, and, like antidepressant drugs, can inhibit serotonin and norepinephrine reuptake and cause analgesia (6,12,18). Its potency is 5 times lower than that of morphine (6,12), but it carries a lower risk of dependency and respiratory depression, without any reported serious toxicity (6,12). However, it has some side effects, such as nausea, vomiting, dizziness, sweating, anaphylactic reactions, and increased intracerebral pressure, and it can lower the seizure threshold (6,12,18,19). Several studies have confirmed the efficacy of tramadol and ketamine against post-tonsillectomy pain (6,10-12,20). In previous studies, the effects of peritonsillar, intravenous, or intramuscular administration of tramadol and ketamine were compared with each other and with placebo, and both drugs were suggested as appropriate for pain management (6,7,10-19,21). Therefore, in this study we directly compared the effect of peritonsillar infiltration of tramadol or ketamine with each other and with placebo.

12.
13.
Aim

To present and evaluate a new screening protocol for amblyopia in preschool children.

Methods

The Zagreb Amblyopia Preschool Screening (ZAPS) study protocol screened for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the re-test was failed, the child was referred for a comprehensive eye examination at the Eye Clinic.

Results

78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively.

Conclusion

The ZAPS study used the most discriminative VA test with optotypes in lines, as they do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity reported when evaluating the diagnostic accuracy of VA tests for screening. The pass level defined at ≤0.1 logMAR for 4-year-old children, using Lea Symbols in lines, missed no amblyopia cases, indicating that both near and distance VA testing should be performed when screening for amblyopia.

Vision disorders in children represent an important public health concern, as they are acknowledged to be the leading cause of handicapping conditions in childhood (1). Amblyopia, a loss of visual acuity (VA) in one or both eyes (2) not immediately restored by refractive correction (3), is the most prevalent vision disorder in the preschool population (4). The estimated prevalence of amblyopia among preschool children varies from 0.3% (4) to 5% (5). In addition, consequences of amblyopia include reduced contrast sensitivity and/or positional disorder (6).
Amblyopia develops due to abnormal binocular interaction, foveal pattern vision deprivation, or a combination of both factors during a sensitive period of visual cortex development (7). In adulthood, it is the leading cause of monocular blindness in the 20-70-year age group (8). The main characteristic of amblyopia is crowding or spatial interference, referring to better VA when single optotypes are used compared with a line of optotypes, where objects surrounding the target object deliver a jumbled percept (9-12). Acuity is limited by letter size; crowding is limited by spacing, not size (12). Since amblyopia is predominantly defined as subnormal VA, a reliable instrument for detecting amblyopia is VA testing (13-15). Moreover, VA testing detects 97% of all ocular anomalies (13). The gold standard for diagnosing amblyopia is a complete ophthalmological examination (4). There is a large body of evidence supporting the rationale for screening, as early treatment of amblyopia during the child’s first 5-7 years of life (8) is highly effective in habilitation of VA, while the treatment itself is among the most cost-effective interventions in ophthalmology (16). Preschool vision screening meets all the World Health Organization’s criteria for evaluation of screening programs (17). A literature search identified no studies reporting harmful effects of screening. The gold standard for screening for amblyopia has not been established (4). There is a large variety of screening methodologies and inconsistent protocols for referral of screen-positive children to complete ophthalmological examination. Lack of information on the validity (18,19) and accuracy (4) of such protocols probably intensifies the debate on determining the most effective method of vision screening (8,20-29).
No consensus definition of amblyopia for research purposes has been reached (4,5,30,31), further challenging the standardization of screening protocols. Overall, two groups of screening methods exist: the traditional approach determines VA using VA tests, while the alternative approach identifies amblyogenic factors (27) based on photoscreening or automated refraction. The major difference between the two is that VA-based testing detects amblyopia directly, providing an explicit measure of visual function, while the latter determines only the refractive status and does not evaluate visual function. In addition, the diagnosis and treatment of amblyopia are governed by the level of VA. Amblyogenic factors, on the other hand, represent risk factors for amblyopia to develop. There are two major pitfalls in screening for amblyogenic factors: first, there is a lack of uniform cut-off values for referral, and second, not all amblyogenic factors progress to amblyopia (19). Besides the issue of what should be detected, amblyopia or amblyogenic factors, a question is raised about who should be screened. Among literate children, both 3- and 4-year-old children can be reliably examined. However, 3-year-old children achieved a testability rate of about 80% and a positive predictive rate of 58%, compared with >90% and 75%, respectively, in the 4-year-old group (32). In addition, over-referrals are more common among 3-year-old children (32). These data identify the age of 4 years as the optimum age to screen for amblyopia. Hence, testability is a relevant factor in designating the optimal screening test. If VA is to be tested in children, accepted standard tests should be used, with well-defined age-specific VA thresholds determining normal monocular VA.
For VA testing of preschool children, Lea Symbols (33) and HOTV charts (22,32) are acknowledged as best practice (34), while tumbling E (28,35,36) and Landolt C (28,37-39) are not appropriate, as discernment of right-left laterality is still not a fully established skill at this age (34,40). The Allen picture test is not standardized (34,41). Both Lea Symbols and HOTV optotypes can be presented as single optotypes, single optotypes surrounded by four flanking bars, a single line of optotypes surrounded by rectangular crowding bars, or lines of optotypes (22,33,34,41-53). The more the noise, the bigger the “crowding” effect. Isolated single optotypes without crowding overestimate VA (24); hence, they are not used in clinical practice in Sweden (32). If presented in lines, which is recognized as the best composition to detect crowding, test charts can be assembled on the Snellen or the gold-standard logMAR principle (34,42,51,54). Age-specific thresholds defining abnormal VA in preschool screening for amblyopia changed over time from <0.8 to <0.65 for four-year-old children due to an overload of false positives (20). An effective screening test is characterized by both high sensitivity and high specificity. Vision screening tests have predominantly demonstrated higher specificity (4). Moreover, sensitivity evidently increased with age, whereas specificity remained evenly high (4). If the confirmatory diagnostic test is expensive or invasive, the criteria for setting the cut-off point advocate minimizing false positives, that is, using a cut-off point with high specificity. On the contrary, if the penalty for missing a case is high and treatment exists, the test should maximize true positives and use a cut-off point with high sensitivity (55). A screening test for amblyopia should target high sensitivity to identify children with visual impairment, while specificity should be high enough not to put an immense load on pediatric ophthalmologists (14).
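The sensitivity/specificity trade-off described above follows directly from the 2×2 screening table. A sketch with invented counts (not the ZAPS figures): setting the cut-off so that no true case is missed drives sensitivity to 100%, while the false positives it admits are what pull specificity below 100%.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test accuracy measures from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the diseased
        "specificity": tn / (tn + fp),  # true negatives among the healthy
        "ppv": tp / (tp + fp),          # positive predictive value
    }

# Hypothetical counts: fn = 0 (no missed cases) gives 100% sensitivity,
# while 30 false positives alone reduce specificity.
m = screening_metrics(tp=100, fp=30, fn=0, tn=870)
print(m["sensitivity"], round(m["specificity"], 4))  # → 1.0 0.9667
```

The positive predictive value shows why over-referral matters even at high specificity: here only 100 of 130 referred children would truly have the condition.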
Complete ophthalmological examination, the confirmatory gold-standard diagnostic test for amblyopia, is neither invasive nor technologically elaborate, while the penalty for missing a case is a lifetime of disability. In devising the Zagreb Amblyopia Preschool Screening (ZAPS) study protocol, we decided to use the Lea Symbols in lines test and to screen preschool children aged 48-54 months to address the problems described above. Near VA testing was introduced in addition to the commonly accepted distance VA testing (14,22,24,32,45,56-69) for several reasons: first, hypermetropia is the most common refractive error in preschool children (70), so near VA testing should more reliably detect its presence; second, the larger the testing distance, the shorter the child’s attention span; and third, to increase the accuracy of the test. The pass cut-off level of ≤0.1 logMAR was chosen for several reasons. Prior to 1992, Sweden used a screening pass cut-off level of 0.8 (20). A change in the referral criteria to <0.65 for four-year-old children ensued, as many of the children referred did not require treatment (20). In addition, an amblyopia treatment outcome of achieved VA >0.7 is considered habilitation of normal vision (3,14). Finally, a pass cut-off value of ≤0.1 logMAR at four years can hardly mask serious visual problems, and even if such problems are present, we presume they are mild and can be successfully treated at six years, when school-entry vision screening is performed. The aim of the ZAPS study was to present and evaluate a new screening protocol for preschool children aged 48-54 months, established for testing near and distance VA using the Lea Symbols in lines test. Furthermore, we aimed to determine age-specific and chart-specific VA norms, the testability of the ZAPS study protocol, and the prevalence of amblyopia in the City of Zagreb County.
By delivering new evidence on amblyopia screening, the guideline criteria defining the optimal screening test for amblyopia in preschool children can be revised in favor of better identification of visual impairment.
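The decimal acuities in the older Swedish criteria (0.8, 0.65) and the ZAPS pass level are related by the standard conversion logMAR = −log10(decimal VA); a quick check (the conversion formula is standard, the printed values are rounded):

```python
import math

def decimal_to_logmar(va):
    """Convert decimal visual acuity to logMAR: logMAR = -log10(VA)."""
    return -math.log10(va)

# Decimal 0.8 corresponds to ~0.097 logMAR, so the ZAPS pass level of
# <=0.1 logMAR sits at roughly the pre-1992 Swedish 0.8 cut-off, while
# decimal 0.65 corresponds to ~0.187 logMAR.
print(round(decimal_to_logmar(0.8), 3))   # → 0.097
print(round(decimal_to_logmar(0.65), 3))  # → 0.187
```

On this scale, 0.0 logMAR is normal vision (decimal 1.0) and larger logMAR values mean worse acuity, which is why the pass criterion is expressed as "≤0.1 logMAR".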

14.

Aim

To determine the pattern of breast diseases among Saudi patients who underwent breast biopsy, with special emphasis on breast carcinoma.

Methods

A retrospective review was made of all breast biopsy reports of a mass or lump from male and female patients seen between January 2001 and December 2010 at the King Khalid University Hospital, Riyadh, Saudi Arabia.

Results

Of the 1035 breast tissue specimens reviewed, 939 (90.7%) were from female patients. There were 690 (65.8%) benign and 345 (34.2%) malignant cases. In women, 603 (64.2%) specimens were benign and 336 (35.8%) were malignant. In men, 87 (90.6%) specimens were benign and 9 (9.4%) were malignant. All malignant cases in male patients were invasive ductal carcinomas, and the majority of malignant cases in female patients were invasive/infiltrating ductal carcinomas. The proportion of malignancy was 18% in patients younger than 40 years and 63.2% in patients older than 60 years. The mean age of onset for malignancy was 48.6 years. The annual percentage incidence of malignant breast cancer steadily increased by 4.8%, from an annual rate of 23.5% in 2000 to 47.2% in 2007.

Conclusion

Among Saudi patients, there is a significant increase in the incidence of breast cancer, which occurs at an earlier age than in western countries. Continued vigilance, mammographic screening, and patient education are needed to establish early diagnosis and perform optimal treatment.

Increased awareness and efficient breast cancer information-dissemination campaigns have led to an increased number of diagnosed cases of breast cancer. According to the American Cancer Society, about 1.3 million American women are diagnosed with breast cancer annually and about 465 000 die from the disease (1). The number of deaths has decreased since 1990, probably due to earlier detection and advances in treatment. According to 2000-2004 Saudi National Cancer Registry data, the breast cancer incidence was 127.8 per 100 000 women and the mortality rate was 25.5 per 100 000 (2). Most palpable breast masses are benign; less than 30% of women with palpable masses have a diagnosis of cancer (3-5). Approximately 4% of breast cancers present with a palpable mass without mammographic or ultrasonographic evidence of the disease (6). Therefore, evaluation of a breast mass should take into consideration the patient’s history, physical examination, imaging, and biopsy. Definitive diagnosis in nearly all cases is established by needle biopsy. Because of the low specificity of mammography, many women undergo unnecessary breast biopsy: as many as 65%-85% of breast biopsies are performed on benign lesions (7), which subjects patients to an avoidable emotional and physical burden. As in other countries, breast cancer is the most common cancer in women in Saudi Arabia (7). The Saudi National Cancer Registry reported a rising proportion of breast cancer among women of all ages, from 10.2% in 2000 to 24.3% in 2005 (8). A significant majority of these breast cancers (almost 80%) were of the infiltrating ductal type.
The average age at presentation of breast cancer in Arab countries is 48 years, a decade earlier than in western countries (9). The median age of onset of breast cancer among Saudi women is 46 years (8). Due to the increasing incidence, several articles have been published on screening for breast cancer and on public awareness programs initiated by the Saudi Arabian government and non-governmental sectors (10-13). This study aims to describe the epidemiological characteristics of breast mass lesions in patients examined at the King Khalid University Hospital, Riyadh, Saudi Arabia, from 2001 to 2010.

15.

Aim

To assess retrospectively the clinical effects of typical (fluphenazine) or atypical (olanzapine, risperidone, quetiapine) antipsychotics in three open clinical trials in male Croatian war veterans with chronic combat-related posttraumatic stress disorder (PTSD) with psychotic features, resistant to previous antidepressant treatment.

Methods

Inpatients with combat-related PTSD were treated for 6 weeks with fluphenazine (n = 27), olanzapine (n = 28), risperidone (n = 26), or quetiapine (n = 53) as monotherapy. Treatment response was assessed by the reduction in total and subscale scores on the clinical scales measuring PTSD (PTSD Interview and Clinician-Administered PTSD Scale) and psychotic symptoms (Positive and Negative Syndrome Scale).

Results

After 6 weeks of treatment, monotherapy with fluphenazine, olanzapine, risperidone, or quetiapine in patients with PTSD significantly decreased the scores on the trauma re-experiencing, avoidance, and hyperarousal subscales of the clinical scales measuring PTSD, as well as the total and subscale scores on the positive, negative, general psychopathology, and supplementary items subscales of the Positive and Negative Syndrome Scale (P < 0.001).

Conclusion

PTSD and psychotic symptoms were significantly reduced after monotherapy with typical or atypical antipsychotics. As psychotic symptoms commonly occur in combat-related PTSD, the use of antipsychotic medication seems to offer another approach to treating a psychotic subtype of combat-related PTSD resistant to previous antidepressant treatment.

In a world in which terrorism and conflicts are constant and increasingly global threats, posttraumatic stress disorder (PTSD) is a serious and global illness. According to the criteria of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (1), exposure to a life-threatening or horrifying event, such as combat trauma, rape, sexual molestation, abuse, child maltreatment, natural disasters, motor vehicle accidents, violent crimes, hostage situations, or terrorism, can lead to the development of PTSD (1,2). The disorder may also be precipitated if a person experienced, saw, or learned of an event or events that involved actual or threatened death, serious injury, or violation of the body of self or others (3,4). In such an event, a person’s response can involve intense fear, helplessness, or horror (3,4). However, not all persons who are exposed to a traumatic event will develop PTSD. Although the stress reaction is a normal response to an abnormal situation, some extremely stressful situations will in some individuals overwhelm their ability to cope with stress (5). PTSD is a chronic psychiatric illness. The essential features of PTSD are the development of three characteristic symptom clusters in the aftermath of a traumatic event: re-experiencing the trauma, avoidance and numbing, and hyperarousal (1,6). The core PTSD symptoms in the re-experiencing cluster are intrusive memories, images, or perceptions; recurring nightmares; intrusive daydreams or flashbacks; exaggerated emotional and physical reactions; and dissociative experiences (1,6,7).
These symptoms intensify or recur upon exposure to reminders of the trauma, and various visual, auditory, or olfactory cues may trigger traumatic memories (3,4). The avoidance and numbing cluster includes efforts to avoid thoughts, feelings, activities, or situations associated with the trauma; feelings of detachment or alienation; inability to have loving feelings; restricted range of affect; loss of interest; and avoidance of activity. The hyperarousal cluster includes exaggerated startle response, hypervigilance, insomnia and other sleep disturbances, difficulties in concentrating, and irritability or outbursts of anger. PTSD criteria also include functional impairment, which can be seen in occupational instability, marital problems, discord with family and friends, and difficulties in parenting (3,4,8). In addition to this social and occupational dysfunction, PTSD is often accompanied by substance abuse (9) and by various comorbid diagnoses, such as major depression (10), other anxiety disorders, somatization, personality disorders, dissociative disorders (7,11), and, frequently, suicidal behavior (12). Combat exposure can precipitate a more severe clinical picture of PTSD, which may be complicated by psychotic features and resistance to treatment. War veterans with PTSD have a high risk of suicide, and military experience, guilt about combat actions, survivor guilt, depression, anxiety, and severe PTSD are significantly associated with suicide attempts (12).

The pharmacotherapy of PTSD includes antidepressants, such as selective serotonin reuptake inhibitors (fluvoxamine, fluoxetine, sertraline, or paroxetine) as the first choice of treatment, tricyclic antidepressants (desipramine, amitriptyline, imipramine), monoamine oxidase inhibitors (phenelzine, brofaromine), buspirone and other antianxiety agents, benzodiazepines (alprazolam), and mood stabilizers (lithium) (13-16).
Although the pharmacotherapy of PTSD starts with antidepressants, in treatment-refractory patients a new pharmacological approach is required to obtain a response. In treatment-resistant patients, pharmacotherapy strategies reported to be effective include anticonvulsants, such as carbamazepine, gabapentin, topiramate, tiagabine, divalproex, and lamotrigine (14,17); anti-adrenergic agents, such as clonidine (a presynaptic α2-adrenoceptor agonist that blocks central noradrenergic outflow from the locus ceruleus), propranolol, and prazosin (13,14); opiate antagonists (13); and neuroleptics and antipsychotics (14,17,18).

Combat exposure frequently induces PTSD, and combat-related PTSD may progress to a severe form that is often refractory to treatment (19-21). Combat-related PTSD is frequently associated with comorbid psychotic features (11,14,17,19-21), and psychotic features add to the severity of symptoms in patients with combat-related PTSD (19,22-24). These cases of a more severe subtype of PTSD, complicated with psychotic symptoms, require the use of neuroleptics or atypical antipsychotic drugs (14,17,25-27).

After the war in Croatia (1991-1995), an estimated one million people had been exposed to war trauma, and about 10 000 Homeland War veterans (a 15% prevalence) developed PTSD, with an alarmingly high suicide rate (28). The war in Croatia brought tremendous suffering not only to combat-exposed veterans and prisoners of war (29), but also to different groups of traumatized civilians in the combat zones: displaced persons and refugees, victims of terrorist attacks, civilian relatives of traumatized war veterans and terrorist attack victims, and traumatized children and adolescents (30).
Among Croatian war veterans with combat-related PTSD, 57-62% met criteria for comorbid diagnoses (8-11), such as alcohol abuse, major depressive disorder, anxiety disorders, panic disorder and phobia, psychosomatic disorder, psychotic disorders, drug abuse, and dementia. In addition to these comorbid psychiatric disorders, a large proportion of war veterans with combat-related PTSD developed psychotic features (8,11,25,26), consisting of psychotic depressive and schizophrenia-like symptoms (suggesting prominent thought disturbances and psychosis). Psychotic symptoms were accompanied by auditory or visual hallucinations and delusional thinking in over two-thirds of patients (25,26). Delusional paranoid symptoms occurred in 32% of patients (25,26). The hallucinations were not associated exclusively with the traumatic experience, while the delusions were generally paranoid or persecutory in nature (25,26). Although psychotic PTSD and schizophrenia share some similar symptoms, there are clear differences between the two entities, since PTSD patients retain some insight into reality and usually do not have complete disturbances of affect (eg, constricted or inappropriate affect) or thought disorder (eg, loose associations or disorganized responses).

The proportion of veterans with combat-related PTSD refractory to treatment (18-20) and with co-occurring psychotic symptoms requires additional pharmacological strategies, such as the use of neuroleptics (25) or atypical antipsychotics (14,17,26).
Studies evaluating the use of antipsychotics in combat-related PTSD with psychotic features are scarce, and antipsychotics have frequently been added to existing medication in the treatment of PTSD.

In this study, we retrospectively compared the clinical effects of four antipsychotic drugs, a neuroleptic (fluphenazine) and three atypical antipsychotics (olanzapine, risperidone, and quetiapine), in treatment-resistant male war veterans with combat-related PTSD with psychotic features.

16.
The aim of this article is to review the role of uric acid in the context of the antioxidant effects of wine and its potential implications for human health. We describe and discuss the mechanisms underlying the increase in plasma antioxidant capacity after consumption of moderate amounts of wine. Because this effect is largely attributable to an acute elevation in plasma uric acid, we pay special attention to the wine constituents and metabolic processes likely to be involved in uric acid elevation.

The association between light-to-moderate wine consumption and a reduced risk of cardiovascular disease and/or all-cause mortality has been confirmed by numerous epidemiological studies (1-7). Among other beneficial biological effects, wine has been shown to increase antioxidant capacity in humans (8-12) and reduce the susceptibility of human plasma to lipid peroxidation (11,13). These effects of wine have attracted significant research interest, as oxidative stress is implicated in the pathogenesis of various diseases, such as cancer and cardiovascular and neurodegenerative diseases (14-19).

17.

Aim

To assess patients’ attitudes toward changing an unhealthy lifestyle, their confidence in success, and the desired involvement of their family physicians in facilitating this change.

Methods

We conducted a cross-sectional study in 15 family physicians’ practices on a consecutive sample of 472 patients (44.9% men; mean age ± standard deviation, 49.3 ± 10.9 years) from October 2007 to May 2008. Patients were given a self-administered questionnaire on attitudes toward changing an unhealthy diet, increasing physical activity, and reducing body weight. It also included questions on confidence in success, planning lifestyle changes, and advice from family physicians.

Results

Nearly 20% of patients planned to change their eating habits, increase physical activity, and reach normal body weight. Approximately 30% of patients (more men than women) wanted to receive advice on this issue from their family physicians. Younger patients and patients with higher education were more confident that they could improve their lifestyle. Patients who planned to change their lifestyle and were more confident in success wanted to receive advice from their family physicians.

Conclusion

Family physicians should regularly ask patients about their intention to change their lifestyle and offer them help in carrying out this intention.

An unhealthy lifestyle, including an unhealthy diet and physical inactivity, is still a considerable health problem all over the world. Despite publicly available evidence about the health risks of an unhealthy lifestyle, people still find it hard to improve their diet and increase physical activity. Previous studies have shown that attitudes toward lifestyle change depend on previous health behavior, awareness of an unhealthy lifestyle, demographic characteristics, personality traits, social support, family functioning, ongoing contact with health care providers, and an individual’s social ecology or network (1-4).

As community-based health education approaches have had a limited effect on the reduction of health risk factors (3,5), the readiness-to-change approach, based on two-way communication, has become increasingly used with patients who lead an unhealthy lifestyle (3,6,7). Family physicians are in a unique position to adopt this approach, since almost every patient visits his or her family physician at least once in five years (8). Previous studies showed that patients highly appreciated their family physicians’ advice on lifestyle changes (9,10). Moreover, patients who received such advice were also more willing to change their unhealthy habits (3,7,11). The reason is probably that behavioral changes are made according to the patient’s stage of the motivational circle at the moment of consultation (12), which can be determined only by an individual approach.

Although family physicians are convinced that it is their task to give advice on health promotion and disease prevention, in practice they are less likely to do so (13).
The factors that prevent them from giving advice are time constraints (14,15), cost, availability, practice capacity (14), lack of knowledge and guidelines, poor counseling skills (16), and personal attitudes (17). Physicians’ assessment also seems to vary considerably according to the risk factor in question. For example, information on diet and physical activity is often inferred from patients’ appearance rather than from clinical measurements (14). Moreover, health care professionals seldom give advice on the recommended aspects of intervention that could facilitate behavioral change (18). As a large proportion of primary care patients are ready to lose weight, improve their diet, and increase exercise (19), it is even more important that their family physicians provide timely advice.

So far, several studies have addressed patients’ willingness to make lifestyle changes (2-5,20) and the provision of professional advice (3,5,7,10,11). However, none of these studies has investigated the relation between these factors. Therefore, the aim of our study was to assess the relation between patients’ attitudes toward changing an unhealthy lifestyle, their confidence in success, and the desired involvement of their family physicians in facilitating the change.

18.

Aim

To investigate the involvement of the vesicular membrane trafficking regulator Synaptotagmin IV (Syt IV) in Alzheimer’s disease pathogenesis and to define the cell types containing increased levels of Syt IV in the β-amyloid plaque vicinity.

Methods

Syt IV protein levels in wild type (WT) and Tg2576 mice cortex were determined by Western blot analysis and immunohistochemistry. Co-localization studies using double immunofluorescence staining for Syt IV and markers for astrocytes (glial fibrillary acidic protein), microglia (major histocompatibility complex class II), neurons (neuronal specific nuclear protein), and neurites (neurofilaments) were performed in WT and Tg2576 mouse cerebral cortex.

Results

Western blot analysis showed higher Syt IV levels in the cortex of Tg2576 mice than in the WT cortex. Syt IV was found only in neurons. In the plaque vicinity, Syt IV was up-regulated in dystrophic neurons. The Syt IV signal was not up-regulated in neurons of Tg2576 mouse cortex without plaques (resembling pre-symptomatic conditions).

Conclusions

Syt IV up-regulation within dystrophic neurons probably reflects disrupted vesicular transport and/or impaired protein degradation occurring in Alzheimer’s disease and is probably a consequence, not the cause, of neuronal degeneration. Hence, Syt IV up-regulation and/or its accumulation in dystrophic neurons may have adverse effects on the survival of the affected neurons.

The main pathological hallmarks of Alzheimer’s disease (AD) are the formation of amyloid plaques, neurofibrillary tangles, and dystrophic neurites, and sometimes the activation of glial cells in the brain (1,2). In the vicinity of amyloid plaques, neurons undergo dramatic neuropathological changes, including metabolic disturbances such as altered energy metabolism, dysfunction of vesicular trafficking, neurite breakage, and disruption of neuronal connections (3-8).

Synaptotagmin IV (Syt IV) is a protein involved in the regulation of membrane trafficking in neurons and astrocytes (9,10). In hippocampal neurons, it regulates brain-derived neurotrophic factor release (11) and is involved in hippocampus-dependent memory and learning (12,13). In astrocytes, it is implicated in glutamate release (10). Recent data show that Syt IV plays an important role in neurodegenerative processes (14). Syt IV expression can be induced by seizures, drugs, and brain injury, and its changes have been shown in several animal models of neurodegeneration (Parkinson’s disease, brain ischemia, AD) (14-25). However, the exact role of Syt IV in neurodegeneration is unknown.

Our previous study showed that the expression of Syt IV mRNA and protein in the hippocampus and cortex of the Tg2576 mouse model of AD was increased in the tissue surrounding β-amyloid plaques (14). It is not clear whether Syt IV is expressed in astrocytes (10,26,27) and/or in neurons (28,29), ie, whether it regulates the release of pro- or anti-inflammatory cytokines from β-amyloid-associated astrocytes or is involved in neuronal vesicular pathogenesis (5,30).
Therefore, the present study aimed to determine the cell types in which Syt IV up-regulation occurs.

19.
The aim of this paper is to describe our surgical procedure for the treatment of osteonecrosis of the femoral head using a minimally invasive technique. We have limited the use of this procedure to patients with pre-collapse osteonecrosis of the femoral head (Ficat Stage I or II). To treat osteonecrosis of the femoral head at our institution, we currently use a combination of outpatient, minimally invasive iliac crest bone marrow aspiration and blood draw combined with decompression of the femoral head. Following the decompression, adult mesenchymal stem cells obtained from the iliac crest and platelet-rich plasma are injected into the area of osteonecrosis. Patients are then discharged from the hospital using crutches to assist with ambulation. This novel technique was used on 77 hips. Sixteen hips (21%) progressed to further stages of osteonecrosis, ultimately requiring total hip replacement. Significant pain relief was reported in 86% of patients (n = 60), while the remaining patients reported little or no pain relief. There were no significant complications in any patient. We found that a minimally invasive decompression augmented with concentrated bone marrow and platelet-rich plasma resulted in significant pain relief and halted the progression of disease in the majority of patients.

Osteonecrosis of the femoral head (ONFH) occurs when the cells of the trabecular bone and marrow in the femoral head spontaneously die, leading to fracture and collapse of the articular surface (1,2). Every year in the US, ONFH occurs in 10 000-20 000 adults between the ages of 20 and 60 (1,3,4). Once collapse occurs, severe pain ensues, and the disease course rarely regresses (5-8).
To halt disease progression and provide pain relief, 80% of patients suffering from ONFH will require a total hip arthroplasty (THA), typically at a younger age than patients undergoing THA for osteoarthritis (9-11).

Although ONFH is a common indication for THA, the etiology of the disease is still unknown (12,13). ONFH is thought to be a multifactorial disease, with patients reporting a history of exposure to one or more risk factors, including trauma to the hip, alcohol abuse, corticosteroid use, hemoglobinopathies, pregnancy, coagulopathies, organ transplant, chemotherapy, Caisson disease, HIV, and autoimmune conditions; however, in some patients the risk factor remains unknown, and the disease is termed “idiopathic” ONFH (12-16). Recent studies of the genetic risks of ONFH have identified an autosomal dominant mutation in the collagen type II gene (COL2A1) (17), as well as associations with genetic polymorphisms in alcohol-metabolizing enzymes and drug transport proteins (18,19).

If the disease course is recognized before collapse of the subchondral bone and cartilage (Ficat Stage I or II), patients can be treated with core decompression of the femoral head (12,20,21). This technique has been used for over four decades; however, randomized controlled trials have failed to show that this procedure alone halts disease progression and collapse (4). Recently, concentrated bone marrow autograft has been used to augment the decompression site in an attempt to repopulate the femoral head with human mesenchymal stem cells (hMSC) (13,22,23). The aim of this paper is to describe our surgical technique and early clinical results using autologous bone marrow concentrate with platelet-rich plasma and a minimally invasive decompression for the treatment of ONFH.

20.