Similar Articles

20 similar articles found (search time: 31 ms)
1.

Aim

To analyze potential and actual drug-drug interactions reported to the Spontaneous Reporting Database of the Croatian Agency for Medicinal Products and Medical Devices (HALMED) and determine their incidence.

Methods

In this retrospective observational study performed from March 2005 to December 2008, we detected potential and actual drug-drug interactions using interaction programs and analyzed them.

Results

HALMED received 1209 reports involving at least two drugs. There were 468 (38.7%) reports on potential drug-drug interactions, 94 of which (7.8% of total reports) were actual drug-drug interactions. Among actual drug-drug interaction reports, the proportion of serious adverse drug reactions (53 out of 94) and the number of drugs per report (n = 4) were significantly higher (P < 0.001) than among the remaining reports (580 out of 1982 and n = 2, respectively). Actual drug-drug interactions most frequently involved nervous system agents (34.0%), and interactions caused by antiplatelet, anticoagulant, and non-steroidal anti-inflammatory drugs were in most cases serious. Actual drug-drug interactions were recognized by the reporter in only 12 out of 94 reports.

Conclusion

The study confirmed that the Spontaneous Reporting Database was a valuable resource for detecting actual drug-drug interactions. It also identified drugs leading to serious adverse drug reactions and deaths, thus indicating the areas that should be the focus of health care education.

Adverse drug reactions (ADR) are among the leading causes of mortality and morbidity, responsible for causing additional complications (1,2) and longer hospital stays. The magnitude of ADRs and the burden they place on the health care system are considerable (3-6), yet they are a preventable public health problem (7) if we take into consideration that an important cause of ADRs is drug-drug interactions (8,9). Although there is a substantial body of literature on ADRs caused by drug-drug interactions, it is difficult to accurately estimate their incidence, mainly because of different study designs, populations, frequency measures, and classification systems (10-15).

Many studies including different groups of patients found the percentage of potential drug-drug interactions resulting in ADRs to range from 0% to 60% (10,11,16-25). System analysis of ADRs showed that drug-drug interactions represented 3%-5% of all in-hospital medication errors (3). The most endangered groups were elderly and polymedicated patients (22,26-28), and emergency department visits were a frequent result (29). Although the overall incidence of ADRs caused by drug-drug interactions is modest (11-13,15,29,30), they are severe and in most cases lead to hospitalization (31,32).

Potential drug-drug interactions are defined on the basis of retrospective chart reviews, whereas actual drug-drug interactions are defined on the basis of clinical evidence, ie, they are confirmed by laboratory tests or symptoms (33).
The frequency of potential interactions is higher than that of actual interactions, resulting in large discrepancies among study findings (24).

A valuable resource for detecting drug-drug interactions is a spontaneous reporting database (15,34). Several methods are currently used to detect possible drug-drug interactions in such databases (15,29,35,36). However, drug-drug interactions in general are rarely reported, and information about the ADRs due to drug-drug interactions is usually lacking.

The aim of this study was to estimate the incidence of actual and potential drug-drug interactions in the national Spontaneous Reporting Database of ADRs in Croatia. Additionally, we assessed the clinical significance and seriousness of drug-drug interactions and their probable mechanism of action.
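The significance claim in the Results (serious ADRs in 53 of 94 actual-interaction reports vs 580 of 1982 remaining reports, P < 0.001) can be checked with a standard two-proportion z-test. A minimal sketch in Python, stdlib only; the counts come from the abstract, but the choice of test is our assumption, not necessarily the authors' method:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Normal-approximation z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided P from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Serious ADRs among actual-interaction reports vs the remaining reports.
z, p = two_proportion_z(53, 94, 580, 1982)
print(f"z = {z:.2f}, two-sided P = {p:.1e}")
```

With these counts z ≈ 5.6, comfortably consistent with the reported P < 0.001.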

2.

Aim

To investigate the relationship between total serum cholesterol and levels of depression, aggression, and suicidal ideations in war veterans with posttraumatic stress disorder (PTSD) without psychiatric comorbidity.

Methods

A total of 203 male PTSD outpatients were assessed for the presence of depression, aggression, and suicidality using the 17-item Hamilton Depression Rating Scale (HAM-D17), Corrigan Agitated Behavior Scale (CABS), and Scale for Suicide Ideation (SSI), respectively, followed by plasma lipid parameters determination (total cholesterol, high density lipoprotein [HDL]-cholesterol, low density lipoprotein [LDL]-cholesterol, and triglycerides). PTSD severity was assessed using the Clinician-Administered PTSD Scale for DSM-IV, Current and Lifetime Diagnostic Version (CAPS-DX) and the Clinical Global Impressions of Severity Scale (CGI-S), before which Mini-International Neuropsychiatric Interview (MINI) was administered to exclude psychiatric comorbidity and premorbidity.

Results

After adjustment for PTSD severity, age, body mass index, marital status, educational level, employment status, use of particular antidepressants, and other lipid parameters (LDL- and HDL-cholesterol and triglycerides), higher total cholesterol was significantly associated with lower odds of higher suicidal ideation (SSI ≥ 20) (odds ratio [OR], 0.09; 95% confidence interval [CI], 0.03-0.23), clinically significant aggression (CABS ≥ 22) (OR, 0.28; 95% CI, 0.14-0.59), and at least moderate depressive symptoms (HAM-D17 ≥ 17) (OR, 0.20; 95% CI, 0.08-0.48). The association between total cholesterol and HAM-D17 scores was significantly moderated by the severity of PTSD symptoms (P < 0.001).

Conclusion

Our results indicate that higher total serum cholesterol is associated with lower scores on HAM-D17, CABS, and SSI in patients with chronic PTSD.

Posttraumatic stress disorder (PTSD) is one of the few mental disorders with a clearly identifiable cause. It is an anxiety disorder caused by exposure to a traumatic event that presented a threat to the physical integrity of the persons themselves or other people in their surroundings (1). Key neurochemical features of PTSD include altered catecholamine regulation, alterations in the serotonergic system, and alterations in the systems of amino acids, peptides, and opioid neurotransmitters (2).

Associations between serum lipids and various psychiatric disorders, some behavioral aspects (like aggressive behavior), and/or suicidality have been widely explored. Lower total cholesterol levels were predominantly found in patients with major depressive disorder (MDD) (3-9). Significantly higher high-density lipoprotein cholesterol (HDL-cholesterol) levels were found in depressive patients than in controls (7). Some studies found significantly lower HDL-cholesterol levels (10) and a lower HDL-cholesterol/total cholesterol ratio (5) in patients with MDD than in controls.

A negative correlation (11-13) between serum cholesterol level and aggressive behavior was also found, confirming the cholesterol-serotonergic hypothesis of aggression (14,15). Inadequate cholesterol intake could lead to decreased central serotonin activity, which is associated with an increased risk for impulsive-aggressive behavior (14-18).
Depression (19-21) and aggression are well-known suicidality risk factors (15,22). The correlation between hypocholesterolemia, decreased central serotonin activity, increased depressive potential, and increased suicidality risk (23-27) was confirmed, suggesting that hypocholesterolemia might be indirectly, ie, through decreased central serotonin activity and increased depression potential (20,25,28), associated with an increased suicidality risk (15,19-24,26,27). In patients with anxiety disorders other than PTSD, like panic disorder (PD), lower HDL-cholesterol and higher very low density lipoprotein cholesterol (VLDL-cholesterol) levels were found to be associated with higher suicidal ideation/risk (29). Significantly lower serum total cholesterol and LDL-cholesterol levels were found in suicidal patients with PD than in control subjects (30).

Hypercholesterolemia was found to be associated with chronic, war-related PTSD (31-34). In a study from Bosnia and Herzegovina, not only hypercholesterolemia but also increased VLDL- and HDL-cholesterol levels were found in war veterans with PTSD in comparison with war veterans without psychiatric disorders (35). A Croatian study found no significant differences in total serum cholesterol, LDL-, and HDL-cholesterol levels between war veterans with PTSD, war veterans without PTSD, and healthy volunteers (36).

The aim of this study was to investigate the relationship between serum cholesterol and levels of depression, aggression, and suicidal ideations in war veterans with PTSD free of other psychiatric premorbidity and comorbidity.
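Adjusted odds ratios like those in the Results above come from exponentiating logistic-regression coefficients, with the 95% CI obtained as exp(β ± 1.96·SE). A minimal sketch; the coefficient and standard error below are hypothetical, chosen only so that the resulting OR matches the reported 0.09:

```python
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    """OR and 95% CI from a logistic-regression coefficient and its SE."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Hypothetical beta/SE: beta chosen so exp(beta) equals the reported OR of 0.09.
beta, se = log(0.09), 0.52
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

With these illustrative inputs the CI comes out near the reported 0.03-0.23, showing how the interval scales multiplicatively around the OR.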

3.

Aim

To assess the effect of peritonsillar infiltration of ketamine and tramadol on post tonsillectomy pain and compare the side effects.

Methods

This double-blind randomized clinical trial was performed on 126 patients aged 5-12 years who had been scheduled for elective tonsillectomy. The patients had American Society of Anesthesiologists physical status class I or II and were randomly divided into 3 groups to receive ketamine, tramadol, or placebo. All patients underwent the same method of anesthesia and surgical procedure. The three groups did not differ in age, sex, or duration of anesthesia and surgery. Postoperative pain was evaluated using the Children's Hospital of Eastern Ontario Pain Scale (CHEOPS) score. Other parameters such as the time to the first request for analgesic, hemodynamic parameters, sedation score, nausea, vomiting, and hallucination were also assessed during the 12 hours after surgery.

Results

The tramadol group had significantly lower pain scores (P = 0.005), a significantly longer time to the first request for analgesic (P = 0.001), a significantly shorter time to the start of a liquid diet (P = 0.001), and lower hemodynamic parameters such as blood pressure (P = 0.001) and heart rate (P = 0.001) than the other two groups. The ketamine group had a significantly higher incidence of hallucinations and negative behavior than the tramadol and placebo groups. The groups did not differ significantly in the incidence of nausea and vomiting.

Conclusion

Preoperative peritonsillar infiltration of tramadol can decrease post-tonsillectomy pain, analgesic consumption, and the time to recovery without significant side effects. Registration No: IRCT201103255764N2

Postoperative pain has not only a pathophysiologic impact but also affects the quality of patients’ lives. Improved pain management might therefore speed up recovery and rehabilitation and consequently decrease the time of hospitalization (1). Surgery causes tissue damage and subsequent release of biochemical agents such as prostaglandins and histamine. These agents can then stimulate nociceptors, which send the pain message to the central nervous system to generate the sensation of pain (2-4). Neuroendocrine responses to pain can also cause a hypercoagulable state and immune suppression, leading to hyperglycemia, which can delay wound healing (5).

Tonsillectomy is a common surgery in children, and post-tonsillectomy pain is an important concern. The duration and severity of pain depend on the surgical technique, antibiotic and corticosteroid use, preemptive and postoperative pain management, and the patient’s perception of pain (6-9). Many studies have investigated the control of post-tonsillectomy pain using different drugs such as intravenous opioids, non-steroidal anti-inflammatory drugs, steroids, and ketamine, as well as peritonsillar injection of local anesthetics, opioids, and ketamine (6,7,10-14).

Ketamine is an intravenous anesthetic from the phencyclidine family which, because of its antagonist effects on N-methyl-D-aspartate receptors (which are involved in central pain sensitization), has a regulatory influence on central sensitization and opioid resistance. It can also bind to mu receptors in the spinal cord and brain and cause analgesia. Ketamine can be administered intravenously, intramuscularly, epidurally, rectally, and nasally (15,16).
Several studies have shown the effects of sub-analgesic doses of ketamine on postoperative pain and opioid consumption (7,13,15-17). Its side effects are hallucination, delirium, agitation, nausea, vomiting, airway hypersecretion, and increased intracerebral and intraocular pressure (10,11,15,16).

Tramadol is an opioid agonist that acts mostly on mu receptors and, to a lesser extent, on kappa and sigma receptors, and, like antidepressant drugs, can inhibit serotonin and norepinephrine reuptake and cause analgesia (6,12,18). Its potency is 5 times lower than that of morphine (6,12), but it has a lower risk of dependency and respiratory depression, without any reported serious toxicity (6,12). However, it has some side effects such as nausea, vomiting, dizziness, sweating, anaphylactic reactions, and increased intracerebral pressure. It can also lower the seizure threshold (6,12,18,19).

Several studies have confirmed the efficacy of tramadol and ketamine against post-tonsillectomy pain (6,10-12,20). In previous studies, the effects of peritonsillar, intravenous, or intramuscular infiltration of tramadol and ketamine were compared with each other and with placebo, and ketamine and tramadol were suggested as appropriate drugs for pain management (6,7,10-19,21). Therefore, in this study we directly compared the effect of peritonsillar infiltration of either tramadol or ketamine with each other and with placebo.

4.

Aim

To compare cardiometabolic risk-related biochemical markers and sex hormone and leptin receptors in the adrenal glands of male rats, non-ovariectomized female rats (NON-OVX), and ovariectomized female rats (OVX) under chronic stress.

Methods

Forty-six 16-week-old Sprague-Dawley rats were divided into male, NON-OVX, and OVX groups and either exposed to chronic stress or kept as controls. Weight, glucose tolerance (glucose tolerance test, GTT), and serum concentrations of glucose and cholesterol were measured. Adrenal glands were collected at the age of 28 weeks, and immunohistochemical staining was performed against estrogen receptor beta (ERβ), progesterone receptor (PR), testosterone receptor (AR), and leptin receptor (Ob-R).

Results

Body weight, GTT, serum cholesterol, and glucose changed in response to stress as expected, validating the applied stress protocol. Stressed males had a significantly higher number of ERβ receptors than the control group (P = 0.028). The stressed NON-OVX group had significantly decreased AR compared with the control group (P = 0.007). The levels of PR did not change in any consistent pattern. The levels of Ob-R increased upon stress in all groups, but the difference reached significance only for the stressed OVX group compared with controls (P = 0.033).

Conclusion

Chronic stress response was sex specific. OVX females had biochemical parameters similar to those of males. Changes in the adrenal gland upon chronic stress were related to a decrease in testosterone receptor in females and an increase in estrogen receptor in males.

Maintaining homeostasis is often challenged by different types of stressors (1). Homeostasis is regulated by complex endocrine processes engaging the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic autonomic system (2-4). Stress can occur in either acute or chronic form, with different consequences – acute stress mostly induces the “fight or flight” response, while chronic stress promotes long-term changes, which can lead to a variety of diseases (5,6). If stress is of sufficient magnitude and duration, the action of the HPA axis is unsuppressed and results in prolonged elevation of cortisol (7), induced production of energy, vasoconstriction, lipolysis, proteolysis, immunosuppression, and suppression of reproductive function to save energy and retain overall homeostasis (8). Women are generally less susceptible to chronic stress up to the period of menopause, when the loss of the protective hormones estrogen and progesterone occurs, making them prone to the development of depression, anxiety, or schizophrenia (9). In contrast, men are generally more susceptible and sensitive to chronic stress, showing changes in feeding habits and decreased body weight (10,11).

Chronic stress can cause the development of cardiovascular disorders, obesity, and diabetes, which can be reflected in serum cholesterol, glucose, and decreased glucose tolerance (12-14). There is a strong correlation between stress and sex hormones, but the mechanisms by which estrogen, testosterone, and progesterone exert their possible protective role under stress conditions are not fully explored. Sex hormones affect stress outcome, and stress hormones affect the levels of sex hormones (15-17).
Testosterone is activated during the stress response in rats and humans (18,19) and tends to increase more in men than in women (20). Estrogen lowers the stress-induced response in women and men (9,21). Estrogens and progesterone are produced by the adrenal glands even after ovariectomy (22), but it is not known whether such compensation can withstand an additional challenge like stress. Another possible player in the stress response is leptin (Ob), a hormone responsible for maintaining body weight, which is synthesized and secreted by adipose tissue (23) and exerts its effects through the leptin receptor (Ob-R) (24). Chronic stress models imply a direct link between the stress response and leptin (25,26). Receptors for leptin are present in the adrenal gland (27).

The aim of this study was to investigate cardiovascular risk parameters and changes in leptin and sex hormone receptors in the adrenal gland during chronic stress. Since the onset of cardiometabolic risk differs in a clinically relevant way between healthy women and women with premature ovarian failure (28), ovariectomized female rats were included in the study.

5.

Aim

To evaluate the accuracy of eye color prediction based on six IrisPlex single nucleotide polymorphisms (SNP) in a Slovenian population sample.

Methods

Six IrisPlex predictor SNPs (HERC2 – rs12913832, OCA2 – rs1800407, SLC45A2 – rs16891982, TYR – rs1393350, SLC24A4 – rs12896399, and IRF4 – rs12203592) were analyzed in 105 individuals using a single base extension approach and SNaPshot chemistry. The IrisPlex multinomial regression prediction model was used to infer eye color probabilities. The accuracy of IrisPlex was assessed by calculating sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the area under the receiver operating characteristic curve (AUC).

Results

Blue eye color was observed in 44.7%, brown in 29.6%, and intermediate in 25.7% of participants. Prediction accuracy expressed by the AUC was 0.966 for blue, 0.913 for brown, and 0.796 for intermediate eye color. Sensitivity was 93.6% for blue, 58.1% for brown, and 0% for intermediate eye color. Specificity was 93.1% for blue, 89.2% for brown, and 100% for intermediate eye color. PPV was 91.7% for blue and 69.2% for brown eye color. NPV was 94.7% for blue and 83.5% for brown eye color. These values indicate prediction accuracy comparable to that established in other studies.

Conclusion

Blue and brown eye color can be reliably predicted from DNA samples using only six polymorphisms, while intermediate eye color defies prediction, indicating that more research is needed to genetically predict the whole variation of eye color in humans.

Prediction of human visible characteristics by genotyping informative polymorphisms in DNA opens up a new perspective in the forensic field. Multiple genes including HERC2, OCA2, MC1R, SLC24A5, SLC45A2, TYR, TYRP1, ASIP, SLC24A4, TPCN2, KITLG, and IRF4 have been associated with eye, hair, and skin color in European populations, and they have been used in studies dealing with eye color prediction (1-14). Variation of iris color depends on the content of eumelanin, a brown light-absorbing biopolymer, which is present in higher concentrations in brown-eyed individuals (15,16). Although eye color is evidently a continuous variable, it has often been classified into three categories – blue, brown, and intermediate (4,14). Eye color variability is particularly striking in European populations, constituting a highly differentiating trait of potential use in forensic investigations (7,14,17). Recent studies have shown that a significant fraction of human iris color variation can be explained by polymorphisms within a single region of the human genome, comprising the evolutionarily conserved HERC2 gene and the neighboring OCA2 gene on chromosome 15. It is assumed that the level of expression of the known pigmentation gene OCA2 is controlled by the polymorphism rs12913832 at the HERC2 locus (18,19). The remaining genes that have been shown to contribute to eye color variation are SLC24A4, SLC45A2, TYR, and IRF4 (4,20,21). However, their impact on eye color prediction is lower and seems to vary between populations (8,14,22,23).
Since such differences may potentially affect the accuracy of prediction in various populations, we further addressed this issue and analyzed a population sample of individuals with defined eye color from Slovenia.

Several prediction models have already been proposed as useful for eye color prediction (4,8,9,17,23,24). Here we used six IrisPlex predictors, which were selected by Liu et al (4) from a larger set of polymorphisms potentially influencing pigmentation in humans and included in the IrisPlex prediction system (4,13,17). The IrisPlex prediction model is based on a multinomial logistic regression method and uses phenotype and genotype data from 3804 Dutch individuals. Based on these data, the model gives three probabilities, for blue, brown, and intermediate eye color (13). From the obtained probabilities, the most probable iris color is predicted based on the recommendations given in Walsh et al (13).
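The per-color accuracy metrics reported in the Results above (sensitivity, specificity, PPV, NPV) all derive from a binary confusion matrix for each color class. A minimal sketch; the counts below are hypothetical, chosen only so that the four metrics reproduce the blue-eye values reported in the abstract (n = 105), not taken from the study's actual data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sens = tp / (tp + fn)   # true-positive rate
    spec = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return sens, spec, ppv, npv

# Hypothetical counts for "predicted blue" vs "not blue" (44+4+3+54 = 105).
sens, spec, ppv, npv = binary_metrics(tp=44, fp=4, fn=3, tn=54)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} npv={npv:.1%}")
# -> sens=93.6% spec=93.1% ppv=91.7% npv=94.7%
```

The same calculation applied per class explains why intermediate eye color can show 0% sensitivity with 100% specificity: the model simply never predicts that class.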

6.

Aim

To assess patients’ attitudes toward changing an unhealthy lifestyle, their confidence in success, and the desired involvement of their family physicians in facilitating this change.

Methods

We conducted a cross-sectional study in 15 family physicians’ practices on a consecutive sample of 472 patients (44.9% men, mean age  [± standard deviation] 49.3 ± 10.9 years) from October 2007 to May 2008. Patients were given a self-administered questionnaire on attitudes toward changing unhealthy diet, increasing physical activity, and reducing body weight. It also included questions on confidence in the success, planning lifestyle changes, and advice from family physicians.

Results

Nearly 20% of patients planned to change their eating habits, increase physical activity, and reach normal body weight. Approximately 30% of patients (more men than women) said that they wanted to receive advice on this issue from their family physicians. Younger patients and patients with higher education were more confident that they could improve their lifestyle. Patients who planned to change their lifestyle and were more confident in the success wanted to receive advice from their family physicians.

Conclusion

Family physicians should regularly ask patients about their intention to change their lifestyle and offer them help in carrying out this intention.

Unhealthy lifestyle, including unhealthy diet and physical inactivity, is still a considerable health problem all over the world. Despite publicly available evidence about the health risks of an unhealthy lifestyle, people still find it hard to improve their diet and increase physical activity. Previous studies have shown that attitudes toward lifestyle change depend on previous health behavior, awareness of unhealthy lifestyle, demographic characteristics, personality traits, social support, family functioning, ongoing contact with health care providers, and an individual’s social ecology or network (1-4).

As community-based health education approaches have had a limited effect on the reduction of health risk factors (3,5), the readiness-to-change approach, based on two-way communication, has become increasingly used with patients who lead an unhealthy lifestyle (3,6,7). Family physicians are in a unique position to adopt this approach, since almost every patient visits his or her family physician at least once in five years (8). Previous studies showed that patients highly appreciated their family physicians’ advice on lifestyle changes (9,10). Moreover, patients who received such advice were also more willing to change their unhealthy habits (3,7,11). The reason for this is probably that behavioral changes are made according to the patient’s stage in the motivational circle at the moment of consultation (12), which can be determined only by an individual approach.

Although family physicians are convinced that it is their task to give advice on health promotion and disease prevention, in practice they are less likely to do so (13).
The factors that prevent them from giving advice are time (14,15), cost, availability, practice capacity (14), lack of knowledge and guidelines, poor counseling skills (16), and personal attitudes (17). It also seems that physicians’ assessment varies considerably according to the risk factor in question. For example, information on diet and physical activity is often inferred from patients’ appearance rather than from clinical measurements (14). Also, health care professionals seldom give advice on recommended aspects of intervention that could facilitate behavioral change (18). As a large proportion of primary care patients are ready to lose weight, improve their diet, and increase exercise (19), it is even more important that their family physicians provide timely advice.

So far, several studies have addressed patients’ willingness to make lifestyle changes (2-5,20) and the provision of professional advice (3,5,7,10,11). However, none of these studies investigated the relation between these factors. Therefore, the aim of our study was to assess the relation between patients’ attitudes toward changing an unhealthy lifestyle, their confidence in success, and the desired involvement of their family physicians in facilitating the change.

7.
Aim

To present and evaluate a new screening protocol for amblyopia in preschool children.

Methods

The Zagreb Amblyopia Preschool Screening (ZAPS) study protocol screened for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the child failed the re-test, he or she was referred for comprehensive eye examination at the Eye Clinic.

Results

78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively.

Conclusion

The ZAPS study used the most discriminative VA test, with optotypes in lines, as such tests do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity reported when evaluating the diagnostic accuracy of VA tests for screening. The pass level, defined at ≤0.1 logMAR for 4-year-old children using Lea Symbols in lines, missed no amblyopia cases, indicating that both near and distance VA testing should be performed when screening for amblyopia.

Vision disorders in children represent an important public health concern, as they are acknowledged to be the leading cause of handicapping conditions in childhood (1). Amblyopia, a loss of visual acuity (VA) in one or both eyes (2) not immediately restored by refractive correction (3), is the most prevalent vision disorder in the preschool population (4). The estimated prevalence of amblyopia among preschool children varies from 0.3% (4) to 5% (5). In addition, the consequences of amblyopia include reduced contrast sensitivity and/or positional disorder (6).
Amblyopia develops due to abnormal binocular interaction, foveal pattern vision deprivation, or a combination of both factors during a sensitive period of visual cortex development (7). Into adulthood, it remains the leading cause of monocular blindness in the 20-70 year age group (8). The main characteristic of amblyopia is crowding or spatial interference, referring to better VA when single optotypes are used compared with a line of optotypes, where objects surrounding the target object deliver a jumbled percept (9-12). Acuity is limited by letter size; crowding is limited by spacing, not size (12).

Since amblyopia is predominantly defined as subnormal VA, a reliable instrument for detecting amblyopia is VA testing (13-15). Moreover, VA testing detects 97% of all ocular anomalies (13). The gold standard for diagnosing amblyopia is a complete ophthalmological examination (4). There is a large body of evidence supporting the rationale for screening, as early treatment of amblyopia during the child’s first 5-7 years of life (8) is highly effective in the habilitation of VA, while the treatment itself is among the most cost-effective interventions in ophthalmology (16). Preschool vision screening meets all the World Health Organization’s criteria for the evaluation of screening programs (17). A literature search identified no studies reporting harmful effects of screening. The gold standard for screening for amblyopia has not been established (4). There is a large variety of screening methodologies and inconsistent protocols for the referral of positives to complete ophthalmological examination. Lack of information on the validity (18,19) and accuracy (4) of such protocols probably intensifies the debate on determining the most effective method of vision screening (8,20-29).
A consensus on a unique research definition of amblyopia has not been reached (4,5,30,31), further challenging the standardization of screening protocols.

Overall, two groups of screening methods exist: the traditional approach determines VA using VA tests, while the alternative approach identifies amblyogenic factors (27) based on photoscreening or automated refraction. The major difference between the two is that VA-based testing detects amblyopia directly, providing an explicit measure of visual function, while the latter, seeking and determining only the refractive status, does not evaluate visual function. In addition, the diagnosis and treatment of amblyopia are governed by the level of VA. Amblyogenic factors, on the other hand, represent risk factors for amblyopia to evolve. There are two major pitfalls in screening for amblyogenic factors: first, there is a lack of uniform cut-off values for referral, and second, not all amblyogenic factors progress to amblyopia (19).

Besides the issue of what should be detected, amblyopia or amblyogenic factors, a question is raised about who should be screened. Among literate children, both 3- and 4-year-old children can be reliably examined. However, 3-year-old children achieved a testability rate of about 80% and a positive predictive rate of 58%, compared with >90% and 75%, respectively, in the 4-year-old group (32). In addition, over-referrals are more common among 3-year-old children (32). These data determine the age of 4 years as the optimum age to screen for amblyopia. Hence, testability is a relevant contributor in designating the optimal screening test.

If VA is to be tested in children, accepted standard tests should be used, with well-defined age-specific VA thresholds determining normal monocular VA.
For VA testing of preschool children, Lea Symbols (33) and HOTV charts (22,32) are acknowledged as best practice (34), while the tumbling E (28,35,36) and Landolt C (28,37-39) are not appropriate, as discernment of right-left laterality is not yet a fully established skill at this age (34,40). The Allen picture test is not standardized (34,41). Both Lea Symbols and HOTV optotypes can be presented as single optotypes, single optotypes surrounded by four flanking bars, a single line of optotypes surrounded by rectangular crowding bars, or lines of optotypes (22,33,34,41-53). The more the noise, the bigger the “crowding” effect. Isolated single optotypes without crowding overestimate VA (24); hence, they are not used in clinical practice in Sweden (32). If presented in lines, which is recognized as the best composition to detect crowding, test charts can be assembled on the Snellen or the gold-standard logMAR principle (34,42,51,54). Age-specific thresholds defining abnormal VA in preschool screening for amblyopia changed over time from <0.8 to <0.65 for four-year-old children due to an overload of false positives (20).

An effective screening test demonstrates both high sensitivity and high specificity. Vision screening tests have predominantly demonstrated higher specificity (4). Moreover, sensitivity evidently increased with age, whereas specificity remained evenly high (4). If the confirmatory diagnostic test is expensive or invasive, the criteria for setting the cut-off point advocate minimizing false positives, ie, using a cut-off point with high specificity. On the contrary, if the penalty for missing a case is high and treatment exists, the test should maximize true positives and use a cut-off point with high sensitivity (55). A screening test for amblyopia should target high sensitivity to identify children with visual impairment, while the specificity should be high enough not to put an immense load on pediatric ophthalmologists (14).
A complete ophthalmological examination, the confirmatory gold-standard diagnostic test for amblyopia, is neither invasive nor technologically elaborate, while the penalty for missing a case is a lifelong disability.

In devising the Zagreb Amblyopia Preschool Screening (ZAPS) study protocol, we decided to use the Lea Symbols in lines test and to screen preschool children aged 48-54 months to address the problems described above. Near VA testing was introduced in addition to the commonly accepted distance VA testing (14,22,24,32,45,56-69) for several reasons: first, hypermetropia is the most common refractive error in preschool children (70), so near VA testing should more reliably detect its presence; second, the larger the testing distance, the shorter the attention span; and third, to increase the accuracy of the test.

The pass cut-off level of ≤0.1 logMAR was defined on the basis of several arguments. Prior to 1992, Sweden used a screening pass cut-off level of 0.8 (20). A change in the referral criteria to <0.65 for four-year-old children ensued, as many of the children referred did not require treatment (20). In addition, an amblyopia treatment outcome of achieved VA >0.7 is considered habilitation of normal vision (3,14). Finally, the pass cut-off value of ≤0.1 logMAR at four years can hardly mask serious visual problems, and even if such problems are present, we presume they are mild and can be successfully treated at six years, when school-entry vision screening is performed.

The aim of the ZAPS study is to present and evaluate a new screening protocol for preschool children aged 48-54 months, established for testing near and distance VA using the Lea Symbols in lines test. Furthermore, we aimed to determine age-specific and chart-specific normative VA thresholds, the testability of the ZAPS study protocol, and the prevalence of amblyopia in the City of Zagreb County.
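The relation between the decimal VA notation (0.8, 0.65, 0.7) and the logMAR cut-off used above follows from the standard conversion logMAR = −log10(decimal VA). A minimal sketch of the pass criterion (the function names are illustrative, not part of the ZAPS protocol):

```python
import math

def to_logmar(decimal_va):
    """Convert decimal visual acuity to logMAR: logMAR = -log10(decimal VA)."""
    return -math.log10(decimal_va)

def passes_screening(decimal_va, cutoff_logmar=0.1):
    """ZAPS-style pass criterion: logMAR <= 0.1 (decimal VA of about 0.8)."""
    return to_logmar(decimal_va) <= cutoff_logmar + 1e-9  # float tolerance

print(round(to_logmar(0.8), 2))   # decimal 0.8 is ~0.10 logMAR -> passes
print(passes_screening(0.8), passes_screening(0.65))
```

This makes explicit why the ≤0.1 logMAR cut-off corresponds to the historical decimal cut-off of 0.8, while decimal 0.65 (≈0.19 logMAR) fails it.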
By delivering new evidence on amblyopia screening, the guideline criteria defining the optimal screening test for amblyopia in preschool children can be revised in favor of better detection of visual impairment.

8.

Aim

To explore the prevalence of psychiatric heredity (family history of psychiatric illness, alcohol dependence disorder, and suicidality) and its association with the diagnosis of stress-related disorders in Croatian war veterans established during psychiatric examination.

Methods

The study included 415 war veterans who were psychiatrically assessed and diagnosed by the same psychiatrist during an expert examination conducted for the purposes of compensation seeking. Data were collected by a structured diagnostic procedure.

Results

There was no significant correlation between psychiatric heredity of psychiatric illness, alcohol dependence, or suicidality and diagnosis of posttraumatic stress disorder (PTSD) or PTSD with psychiatric comorbidity. Diagnoses of psychosis or psychosis with comorbidity significantly correlated with psychiatric heredity (φ = 0.111; P = 0.023). There was a statistically significant correlation between maternal psychiatric illness and the patients’ diagnoses of partial PTSD or partial PTSD with comorbidity (φ = 0.104; P = 0.035) and psychosis or psychosis with comorbidity (φ = 0.113; P = 0.022); paternal psychiatric illness and the patients’ diagnoses of psychosis or psychosis with comorbidity (φ = 0.130; P = 0.008), alcohol dependence or alcohol dependence with comorbidity (φ = 0.166; P = 0.001); psychiatric illness in the primary family with the patients’ psychosis or psychosis with comorbidity (φ = 0.115; P = 0.019); alcohol dependence in the primary family with the patients’ personality disorder or personality disorder with comorbidity (φ = 0.099; P = 0.044); and suicidality in the primary family and a diagnosis of personality disorder or personality disorder with comorbidity (φ = 0.128; P = 0.009).
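The φ (phi) coefficients reported above measure association in a 2×2 table (eg, family history yes/no × diagnosis yes/no). As a hedged illustration with made-up counts (not the study's data), φ can be computed as:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table [[a, b], [c, d]]:
    phi = (a*d - b*c) / sqrt((a+b) * (c+d) * (a+c) * (b+d))."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical 2x2 table: family history (yes/no) x diagnosis (yes/no).
print(round(phi_coefficient(20, 80, 40, 275), 3))
```

Small φ values such as those in the Results (0.099-0.166) indicate weak, though statistically detectable, associations in a sample of this size.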

Conclusion

The study confirmed that a positive parental and familial history of psychiatric disorders puts individuals at higher risk of developing psychiatric illness or alcohol or drug dependence. Psychiatric heredity, however, might not be necessary for an individual exposed to severe combat-related events to develop symptoms of PTSD.

There are several risk factors associated with the development of posttraumatic stress disorder (PTSD), such as factors related to cognitive and biological systems and genetic and familial risk (1), environmental and demographic factors (2), and personality and psychiatric history (3). They are usually grouped into three categories: factors that precede the exposure to trauma (pre-trauma factors); factors associated with the trauma exposure itself; and post-trauma factors associated with the recovery environment (2,4).

Many studies support the hypothesis that pre-trauma factors, such as ongoing life stress, psychiatric history, female sex (3), childhood abuse, low economic status, lack of education, low intelligence, lack of social support (5), belonging to a racial or ethnic minority, previous traumatic events, psychiatric heredity, and a history of perceived life threat, influence the development of stress-related disorders (6). Many findings suggest that ongoing life stress or a prior trauma history sensitizes a person to a new stressor (2,7-9). The same is true for the lack of social support, particularly the loss of support from significant others (2,9-11), as well as from friends and community (12-14). If the community does not have an elaborate plan for providing socioeconomic support to the victims, then low socioeconomic status can also be an important predictor of a psychological outcome such as PTSD (2,10,15). Unemployment was recognized as a risk factor for developing PTSD in a survey of 374 trauma survivors (16).
It is known that PTSD commonly occurs in patients with a previous psychiatric history of mental disorders, such as affective disorders, other anxiety disorders, somatization, substance abuse, or dissociative disorders (17-21). Epidemiological studies showed that pre-existing psychiatric problems are one of the three factors that can predict the development of PTSD (2,22). Pre-existing anxiety disorders, somatoform disorders, and depressive disorders can significantly increase the risk of PTSD (23). Women have a higher vulnerability to PTSD than men if they experienced sexually motivated violence or had pre-existing anxiety disorders (23,24). A number of studies have examined the effects of gender differences on the predisposition for developing PTSD, with the explanation that women generally have higher rates of depression and anxiety disorders (3,25,26). War-zone stressors were described as more important for PTSD in men, whereas post-trauma resilience-recovery factors were more important for women (27).

Lower levels of education and poorer cognitive abilities also appear to be risk factors (25). Golier et al (25) reported that low levels of education and low IQ were associated with poorer recall on word memorization tasks. In addition, this study found that the PTSD group with lower Wechsler Adult Intelligence Scale-Revised (WAIS-R) scores had fewer years of education (25). Nevertheless, some experts provided evidence that poorer cognitive ability in PTSD patients is a result or consequence rather than the cause of stress-related symptoms (28-31). Studies of war veterans showed that belonging to a racial or ethnic minority could be associated with higher rates of PTSD even after adjustment for combat exposure (32,33). Many findings suggest that early trauma in childhood, such as physical or sexual abuse or even neglect, can be associated with adult psychopathology and lead to the development of PTSD (2,5,26,34,35).
Studies on animal models confirm the findings of lifelong influences of early experience on stress hormone reactivity (36).

Along with the reports on the effects of childhood adversity as a risk factor for the later development of PTSD, there is also evidence for the influence of previous exposure to trauma-related events on PTSD (9,26,28). Breslau et al (36) reported that previous trauma experience substantially increased the risk for chronic PTSD. Perceived life threats and coping strategies also carry a high risk for developing PTSD (9,26). For instance, Ozer et al (9) reported that dissociation during trauma exposure has high predictive value for the later development of PTSD. In addition, the way in which people process and interpret perceived threats has a great impact on the development or maintenance of PTSD (37,38).

Brewin et al (2) reported that individual and family psychiatric history had more uniform predictive effects than other risk factors. Still, this kind of influence has not been examined in detail. Given the lack of investigation into the effect of parental psychiatric heredity on the development of stress-related disorders, the aim of our study was to explore the prevalence of heredity of psychiatric illness, alcohol dependence, and suicidality, and its correlation with the established diagnosis of stress-related disorders in Croatian 1991-1995 war veterans.

9.

Aim

To analyze and interpret incidence and mortality trends of breast and ovarian cancers and incidence trends of cervical and endometrial cancers in Croatia for the period 1988-2008.

Methods

Incidence data were obtained from the Croatian National Cancer Registry. The mortality data were obtained from the World Health Organization (WHO) mortality database. Trends of incidence and mortality were analyzed by joinpoint regression analysis.

Results

Joinpoint analysis showed an increase in the incidence of breast cancer, with an estimated annual percent change (EAPC) of 2.6% (95% confidence interval [CI], 1.9 to 3.4). The mortality rate was stable, with the EAPC of 0.3% (95% CI, -0.6 to 0.0). Endometrial cancer showed an increasing incidence trend, with an EAPC of 0.8% (95% CI, 0.2 to 1.4), while cervical cancer showed a decreasing incidence trend, with an EAPC of -1.0% (95% CI, -1.6 to -0.4). Ovarian cancer incidence showed three trends, but the average annual percent change (AAPC) for the overall period was not significant, with a stable trend of 0.1%. Ovarian cancer mortality had been increasing since 1992, with an EAPC of 1.2% (95% CI, 0.4 to 1.9), while the trend for the overall period was stable, with an AAPC of 0.1%.
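The EAPC figures above come from log-linear trend fitting. As a hedged illustration (a single-segment fit on synthetic rates, not the Joinpoint software's segmented procedure and not the registry data), the EAPC is the back-transformed slope of a regression of log(rate) on calendar year:

```python
import math

def eapc(years, rates):
    """Estimated annual percent change: fit log(rate) = a + b*year by
    ordinary least squares, then EAPC = 100 * (exp(b) - 1)."""
    n = len(years)
    logs = [math.log(r) for r in rates]
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) / \
            sum((x - mean_x) ** 2 for x in years)
    return 100.0 * (math.exp(slope) - 1.0)

# Synthetic incidence rates rising exactly 2.6% per year over 1988-2008:
years = list(range(1988, 2009))
rates = [60.0 * 1.026 ** (y - 1988) for y in years]
print(round(eapc(years, rates), 1))  # -> 2.6
```

Joinpoint analysis extends this by fitting several such segments and testing where the slope changes; the AAPC is then a weighted summary of the segment-specific EAPCs.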

Conclusion

Incidence trends of breast, endometrial, and ovarian cancers in Croatia in 1988-2008 are similar to the trends observed in most European countries, while the modest decline in cervical cancer incidence and the lack of decline in breast cancer mortality suggest suboptimal cancer prevention and control.

Breast and gynecological cancers are among the seven most common female cancers in Croatia: in 2008, breast cancer was the most common cancer, accounting for 26% of all cancer sites, endometrial cancer ranked fourth (6%), ovarian cancer (with fallopian tube cancer) sixth (5%), and cervical cancer seventh (4%) (1).

Breast, endometrial, and ovarian cancers share some risk factors, such as early menarche, late menopause, obesity, and low parity (2-5). A personal history of breast cancer also increases the risk of endometrial and ovarian cancer (6). Delayed childbearing increases the risk of breast cancer but seems to have no impact on the development of ovarian and endometrial cancer (3-5). Diabetes mellitus increases the risk of endometrial and breast cancer (7,8). Use of tamoxifen or other selective estrogen receptor modulators increases the risk of endometrial and ovarian cancer, while the use of combined oral contraceptives is a protective factor (2,9,10). Tobacco smoking and alcohol intake reduce the risk of endometrial cancer (2,11,12). Alcohol intake, oral contraceptives, and hormone replacement therapy are risk factors for breast cancer (2,13,14). Multiparity and physical activity are protective factors for all three cancers (2,4,15,16). Low socioeconomic status, sexually transmitted diseases, promiscuity, unprotected sexual behavior, earlier age at first intercourse, and smoking are risk factors for cervical cancer (2,17-23).
Infection with human papillomavirus is considered a necessary cause of cervical cancer (24).

The aim of this study was to report the incidence and mortality of breast and ovarian cancers and the incidence of endometrial and cervical cancers, analyze the trends in the period 1988-2008, and compare them with those of other European countries.

10.
The aim of this paper is to describe our surgical procedure for the treatment of osteonecrosis of the femoral head using a minimally invasive technique. We have limited the use of this procedure to patients with pre-collapse osteonecrosis of the femoral head (Ficat Stage I or II). To treat osteonecrosis of the femoral head at our institution, we currently use a combination of outpatient, minimally invasive iliac crest bone marrow aspiration and blood draw combined with decompression of the femoral head. Following the decompression of the femoral head, adult mesenchymal stem cells obtained from the iliac crest and platelet-rich plasma are injected into the area of osteonecrosis. Patients are then discharged from the hospital using crutches to assist with ambulation. This novel technique was utilized on 77 hips. Sixteen hips (21%) progressed to further stages of osteonecrosis, ultimately requiring total hip replacement. Significant pain relief was reported in 86% of patients (n = 60), while the remaining patients reported little or no pain relief. There were no significant complications in any patient. We found that the use of a minimally invasive decompression augmented with concentrated bone marrow and platelet-rich plasma resulted in significant pain relief and halted the progression of disease in the majority of patients.

Osteonecrosis of the femoral head (ONFH) occurs when the cells of the trabecular bone and marrow in the femoral head spontaneously die, leading to fracture and collapse of the articular surface (1,2). In the US, ONFH occurs every year in 10 000-20 000 adults between the ages of 20 and 60 (1,3,4). Once collapse occurs, severe pain ensues, and the disease course rarely regresses (5-8).
In order to halt disease progression and provide pain relief, 80% of patients suffering from ONFH will require a total hip arthroplasty (THA), typically at a younger age than patients undergoing a THA for osteoarthritis (9-11).

Although ONFH is a common indication for THA, the etiology of the disease is still unknown (12,13). ONFH is thought to be a multifactorial disease, with patients reporting a history of exposure to one or more risk factors, including trauma to the hip, alcohol abuse, corticosteroid use, hemoglobinopathies, pregnancy, coagulopathies, organ transplant, chemotherapy, Caisson disease, HIV, and autoimmune conditions; however, in some patients the risk factor remains unknown, and the disease is termed “idiopathic” ONFH (12-16). Recent studies of the genetic risks of ONFH have identified an autosomal dominant mutation in the collagen type II gene (COL2A1) (17), which has been associated with genetic polymorphisms in alcohol-metabolizing enzymes and drug transport proteins (18,19).

If the disease course is recognized before collapse of the subchondral bone and cartilage (Ficat Stage I or II), patients can be treated with core decompression of the femoral head (12,20,21). This technique has been used for over four decades; however, randomized controlled trials have failed to show that this procedure alone halts disease progression and collapse (4). Recently, concentrated bone marrow autograft has been used to augment the decompression site in an attempt to repopulate the femoral head with human mesenchymal stem cells (hMSC) (13,22,23). The aim of this paper is to describe our surgical technique and early clinical results using autologous bone marrow concentrate with platelet-rich plasma and a minimally invasive decompression for the treatment of ONFH.

11.

Aim

To assess retrospectively the clinical effects of typical (fluphenazine) or atypical (olanzapine, risperidone, quetiapine) antipsychotics in three open clinical trials in male Croatian war veterans with chronic combat-related posttraumatic stress disorder (PTSD) with psychotic features, resistant to previous antidepressant treatment.

Methods

Inpatients with combat-related PTSD were treated for 6 weeks with fluphenazine (n = 27), olanzapine (n = 28), risperidone (n = 26), or quetiapine (n = 53) as monotherapy. Treatment response was assessed by the reduction in total and subscale scores on the clinical scales measuring PTSD (PTSD Interview and Clinician-Administered PTSD Scale) and psychotic symptoms (Positive and Negative Syndrome Scale).

Results

After 6 weeks of treatment, monotherapy with fluphenazine, olanzapine, risperidone, or quetiapine in patients with PTSD significantly decreased the scores on the trauma re-experiencing, avoidance, and hyperarousal subscales of the clinical scales measuring PTSD, as well as the total and subscale scores on the positive, negative, general psychopathology, and supplementary items subscales of the Positive and Negative Syndrome Scale (P < 0.001).

Conclusion

PTSD and psychotic symptoms were significantly reduced after monotherapy with typical or atypical antipsychotics. As psychotic symptoms commonly occur in combat-related PTSD, the use of antipsychotic medication seems to offer another approach to treating a psychotic subtype of combat-related PTSD resistant to previous antidepressant treatment.

In a world in which terrorism and conflicts are constant and increasingly global threats, posttraumatic stress disorder (PTSD) is a serious and global illness. According to the criteria of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (1), exposure to a life-threatening or horrifying event, such as combat trauma, rape, sexual molestation, abuse, child maltreatment, natural disasters, motor vehicle accidents, violent crimes, hostage situations, or terrorism, can lead to the development of PTSD (1,2). The disorder may also be precipitated if a person experienced, saw, or learned of an event or events that involved actual or threatened death, serious injury, or violation of the body of self or others (3,4). In such an event, a person’s response can involve intense fear, helplessness, or horror (3,4). However, not all persons who are exposed to a traumatic event will develop PTSD. Although the stress reaction is a normal response to an abnormal situation, some extremely stressful situations will in some individuals overwhelm their ability to cope with stress (5).

PTSD is a chronic psychiatric illness. Its essential features are the development of three characteristic symptom clusters in the aftermath of a traumatic event: re-experiencing the trauma, avoidance and numbing, and hyperarousal (1,6). The core PTSD symptoms in the re-experiencing cluster are intrusive memories, images, or perceptions; recurring nightmares; intrusive daydreams or flashbacks; exaggerated emotional and physical reactions; and dissociative experiences (1,6,7).
These symptoms intensify or re-occur upon exposure to reminders of the trauma, and various visual, auditory, or olfactory cues might trigger traumatic memories (3,4). The avoidance and numbing cluster of symptoms includes efforts to avoid thoughts, feelings, activities, or situations associated with the trauma; feelings of detachment or alienation; inability to have loving feelings; restricted range of affect; loss of interest; and avoidance of activity. The hyperarousal cluster includes exaggerated startle response, hypervigilance, insomnia and other sleep disturbances, difficulties in concentrating, and irritability or outbursts of anger. PTSD criteria include functional impairment, which can be seen in occupational instability, marital problems, discord with family and friends, and difficulties in parenting (3,4,8). In addition to this social and occupational dysfunction, PTSD is often accompanied by substance abuse (9) and by various comorbid diagnoses, such as major depression (10), other anxiety disorders, somatization, personality disorders, dissociative disorders (7,11), and, frequently, suicidal behavior (12). Combat exposure can precipitate a more severe clinical picture of PTSD, which may be complicated by psychotic features and resistance to treatment. War veterans with PTSD have a high risk of suicide, and military experience, guilt about combat actions, survivor guilt, depression, anxiety, and severe PTSD are significantly associated with suicide attempts (12).

The pharmacotherapy of PTSD includes the use of antidepressants, such as selective serotonin reuptake inhibitors (fluvoxamine, fluoxetine, sertraline, or paroxetine) as the first choice of treatment, tricyclic antidepressants (desipramine, amitriptyline, imipramine), monoamine oxidase inhibitors (phenelzine, brofaromine), buspirone and other antianxiety agents, benzodiazepines (alprazolam), and mood stabilizers (lithium) (13-16).
Although the pharmacotherapy of PTSD starts with antidepressants, in treatment-refractory patients a new pharmacological approach is required to obtain a response. In treatment-resistant patients, pharmacotherapy strategies reported to be effective include anticonvulsants, such as carbamazepine, gabapentin, topiramate, tiagabine, divalproex, and lamotrigine (14,17); anti-adrenergic agents, such as clonidine (which, although a presynaptic α2-adrenoceptor agonist, blocks central noradrenergic outflow from the locus ceruleus), propranolol, and prazosin (13,14); opiate antagonists (13); and neuroleptics and antipsychotics (14,17,18).

Combat exposure frequently induces PTSD, and combat-related PTSD might progress to a severe form that is often refractory to treatment (19-21). Combat-related PTSD is frequently associated with comorbid psychotic features (11,14,17,19-21), and psychotic features add to the severity of symptoms in combat-related PTSD patients (19,22-24). These cases of a more severe subtype of PTSD, complicated by psychotic symptoms, require the use of neuroleptics or atypical antipsychotic drugs (14,17,25-27).

After the war in Croatia (1991-1995), an estimated million people had been exposed to war trauma, and about 10 000 of the Homeland War veterans (15% prevalence) developed PTSD, with an alarmingly high suicide rate (28). The war in Croatia brought tremendous suffering, not only to combat-exposed veterans and prisoners of war (29), but also to different groups of traumatized civilians in the combat zones, displaced persons and refugees, victims of terrorist attacks, civilian relatives of traumatized war veterans and terrorist attack victims, and traumatized children and adolescents (30).
Among Croatian war veterans with combat-related PTSD, 57-62% of combat soldiers with PTSD met criteria for comorbid diagnoses (8-11), such as alcohol abuse, major depressive disorder, anxiety disorders, panic disorder and phobia, psychosomatic disorder, psychotic disorders, drug abuse, and dementia. In addition to different comorbid psychiatric disorders, a great proportion of war veterans with combat-related PTSD developed psychotic features (8,11,25,26), consisting of psychotic depressive and schizophrenia-like symptoms (suggesting prominent thought disturbances and psychosis). Psychotic symptoms were accompanied by auditory or visual hallucinations and delusional thinking in over two-thirds of patients (25,26). Delusional paranoid symptoms occurred in 32% of patients (25,26). The hallucinations were not associated exclusively with the traumatic experience, while the delusions were generally paranoid or persecutory in nature (25,26). Although psychotic PTSD and schizophrenia share some symptoms, there are clear differences between the two entities, since PTSD patients retain some insight into reality and usually do not have complete disturbances of affect (eg, constricted or inappropriate) or thought disorder (eg, loose associations or disorganized responses).

This proportion of veterans with combat-related PTSD refractory to treatment (18-20) and with co-occurring psychotic symptoms requires additional pharmacological strategies, such as the use of neuroleptics (25) or atypical antipsychotics (14,17,26).
Studies evaluating the use of antipsychotics in combat-related PTSD with psychotic features are scarce, and antipsychotics have frequently been added to existing medication in the treatment of PTSD.

In this study, we retrospectively compared the clinical effects of four antipsychotic drugs – a neuroleptic (fluphenazine) and three atypical antipsychotics (olanzapine, risperidone, and quetiapine) – in treatment-resistant male war veterans with combat-related PTSD with psychotic features.

12.

Aim

To establish how many patients diagnosed with posttraumatic stress disorder (PTSD) in 1996 used psychiatric facilities and had psychiatric symptoms 10 years later, and assess their sociodemographic characteristics, comorbid disorders, and type of treatment.

Methods

Medical records of patients diagnosed with PTSD in 1996 were reviewed in the period 2007-2009 and the patients who contacted a psychiatrist in that period (n = 85) and those who did not (n = 158) were compared.

Results

There were 36.7% of men and 20% of women diagnosed with PTSD in 1996 who contacted a psychiatrist in the period 2007-2009. Patients who contacted a psychiatrist and those who did not showed no differences in sex, age, number of visits and hospitalizations in 1996, or employment status. The majority of patients still had PTSD and/or an enduring personality change in the period 2007-2009, and 54.8% had some comorbidity (mostly depression, alcohol-related disorders, and personality disorders). Patients were most often treated with anxiolytics and antidepressants.

Conclusion

Ten years after the traumatic experience, one third of patients with PTSD received psychiatric help, regardless of their sex, age, and employment status. Half of them had comorbid disorders, and the majority were treated with anxiolytics and antidepressants.

Posttraumatic stress disorder (PTSD) is a mental disorder that develops in 9-25% of war veterans (1-4), mainly in the first two years after the traumatic experience (5,6), but it can sometimes develop years later (7,8). A similar prevalence was also found among Croatian war veterans (5-9). In the majority of cases (80-98%), PTSD is comorbid with other mental disorders: alcohol abuse, depression, anxiety disorders, and somatization (5,9-11). PTSD can also develop after a war-related trauma that is not necessarily combat-related; the lifetime prevalence of PTSD in this population is 15-38% (12), and the prevalence of anxiety and depressive disorders is even higher (13,14).

Many patients in Croatia had symptoms of PTSD and used health facilities for treatment years after the war (5-9). In the former Yugoslavia, 84% of untreated war-related PTSD patients still had PTSD symptoms years after the war (15). Resolution of PTSD is observed in 50-60% of cases (4,16). Combat-related PTSD causes more functional impairment and is less responsive to treatment than PTSD related to other traumas (17-19). It is unclear whether this is because there is indeed a difference between the two types of PTSD or because some patients aggravate their symptoms in order to obtain compensation (17,18,20).
Some studies show that the use of health facilities decreases after obtaining war veteran status and compensation, but others show that patients who had obtained the status and compensation used medical facilities more often than those who had not (20-23).

The aim of this study was to establish how many patients diagnosed with PTSD in 1996 used psychiatric facilities and had psychiatric symptoms 10 years later, and to assess their comorbidities, sociodemographic characteristics, and type of treatment.

13.
Prevalence of erectile and ejaculatory difficulties among men in Croatia

Aim

To determine the prevalence and risk factors of erectile difficulties and rapid ejaculation in men in Croatia.

Method

We surveyed 615 of 888 contacted men aged 35-84 years. The mean age of participants was 54 ± 12 years. College-educated respondents and the respondents from large cities were slightly overrepresented in the sample. Structured face-to-face interviews were conducted in June and July 2004 by 63 trained interviewers. The questionnaire used in interviews was created for commercial purposes and had not been validated before.

Results

Out of 615 men who had been sexually active in the preceding month and gave valid answers to the questions on erectile difficulties and rapid ejaculation, 130 suffered from erectile or ejaculatory difficulties. Erectile difficulties were reported more often (77 out of 615) than rapid ejaculation (57 out of 601). An additional 26.8% (165 out of 615) and 26.3% (158 out of 601) of men were classified as being at risk for erectile difficulties and rapid ejaculation, respectively. The prevalence of erectile difficulties varied from 5.8% in the 35-39 age group to 30% in the 70-79 age group. The association between age and rapid ejaculation was curvilinear, ie, U-shaped: rates of rapid ejaculation were highest in the youngest (15.7%) and the oldest (12.5%) age groups. Older age (odds ratios [OR], 6.2-10.3), overweight (OR, 3.3-4.2), alcohol (OR, 0.3-0.4), intense physical activity (OR, 0.3), traditional attitudes about sexuality (OR, 2.8), and discussing sex with one’s partner (OR, 0.1-0.3) were associated with erectile difficulties. Education (OR, 0.1-0.3), being overweight (OR, 22.0) or obese (OR, 20.1), alcohol consumption (OR, 0.2-0.3), stress and anxiety (OR, 10.8-12.5), holding traditional attitudes (OR, 2.8), and moderate physical activity (OR, 0.1) were factors associated with rapid ejaculation.
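The odds ratios reported above quantify association between a risk factor and an outcome. As a hedged sketch with hypothetical counts (not the survey's actual cross-tabulations), an OR and its Wald 95% confidence interval can be derived from a 2×2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (a*d)/(b*c) from a 2x2 table [[a, b], [c, d]],
    with a Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical table: erectile difficulties (yes/no) in oldest vs youngest group.
or_, lo, hi = odds_ratio_ci(a=24, b=56, c=6, d=97)
print(round(or_, 1), round(lo, 1), round(hi, 1))
```

An OR below 1 (eg, the 0.1-0.3 values for discussing sex with one's partner) indicates a protective association, while an OR above 1 indicates increased odds of the difficulty.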

Conclusion

The prevalence of erectile difficulties was higher than the prevalence of rapid ejaculation in men in Croatia. The odds of having these sexual difficulties increased with older age, overweight, traditional attitudes toward sex, and higher levels of stress and anxiety.

A growing number of international studies on sexual health issues suggest that many women and men worldwide have sexual health problems (1-4). According to surveys based on community samples, the prevalence of male sexual disturbances ranges between 10% and 50% (2,4). The most frequent male sexual disturbance seems to be premature or rapid ejaculation (5,6), with reported rates ranging from 4% to 29% (6). The Global Study of Sexual Attitudes and Behaviors estimated the prevalence of rapid ejaculation at approximately 30% across all age groups (7). It thus seems to be the most common of all male sexual disturbances (5-9). However, when an objective definition of rapid ejaculation is attempted, problems arise (9,10). According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), rapid ejaculation is a persistent or recurrent onset of orgasm and ejaculation with minimal sexual stimulation before, upon, or shortly after penetration and before the person wishes it (11). It results in pronounced distress or interpersonal difficulties and is not exclusively due to the direct effects of a substance (11). Although useful for clinical practice, this definition does not offer precise guidelines for epidemiological research. As indicated by large discrepancies in the prevalence rates (6), epidemiological analyses of rapid ejaculation are characterized by definition and measurement inconsistencies (1,10,12).

In spite of the lack of agreement as to what constitutes rapid ejaculation (12) and the fact that it is not a well-understood problem (5,13), its consequences are well known.
Chronic rapid ejaculation is accompanied by an array of psychological problems, including a psychogenic erectile dysfunction (14). Rapid ejaculation can seriously burden interpersonal dynamics and decrease sexual satisfaction (15) and sometimes the overall quality of intimate relationship (16,17). In addition to frustrations, withdrawal (including the lack of desire and cessation of sexual contacts), and strained relationship, rapid ejaculation causes changes in self-image and one’s sense of masculinity. It has been shown that rapid ejaculation has similar psychological impact as erectile problems, especially in terms of self-confidence and worries over the relationship, both the present and the future ones (14).Psychologically and culturally, erectile difficulties are the most dreaded male sexual problem (16,18,19), which not only result in deep frustration, but often lead to a crisis of masculine identity (19). Recent pharmacological breakthrough has initiated a rapid growth of interest in the epidemiology of erectile difficulties. Current studies suggest that a sizeable proportion of adult men suffer from erectile difficulties and that the likelihood of erectile difficulties increases with age (1-4). According to a recently published systematic review, the prevalence of erectile difficulties ranges from 2% in men younger than 40 years to over 80% in men aged 80 years or more (4). Due to the aging of population, the number of men with erectile difficulties is expected to be rising (20,21). The projection based on the results of the Massachusetts Male Aging Study (MMAS) from 1995 is that the number of men with the condition will more than double by 2025 (22).How do we explain considerable variations in reported prevalence rates of erectile difficulties? Methodological and conceptual differences between the studies (1,3,4,23) seem to be the main reason, although the effect of culture-specific perception of sexual problems should not be underestimated (24). 
In spite of a large number of population and community sample studies (18,20,25-38), inconsistent definitions and operationalizations seriously hamper the analysis of the role of culture in the perception and reporting of erectile difficulties in men.

In transitional countries, sexual health is a rather neglected research area, mainly because possible investigators lack education and research training in the field of sexology. In Croatia, sexual health issues have only recently gained attention as a topic worthy of clinical (39) and non-clinical research (40,41). Our aim was to determine the prevalence of and risk factors for erectile difficulties and rapid ejaculation in a national sample of Croatian men.

14.

Aim

To determine predictive risk factors for violent offending in patients with paranoid schizophrenia in Croatia.

Method

This cross-sectional study included male in-patients with paranoid schizophrenia with (N = 104) and without (N = 102) a history of physical violence and violent offending, and was conducted simultaneously in several hospitals in Croatia during a one-year period (2010-2011). Data on sociodemographic characteristics, duration of the untreated illness phase (DUP), alcohol abuse, suicidal behavior, personality features, and insight into illness were collected and compared between the groups. A binary logistic regression model was used to determine the predictors of violent offending.

Results

Predictors of violent offending were older age, DUP before first contact with psychiatric services, and alcohol abuse. The regression model showed that the strongest positive predictive factor was harmful alcohol use, as determined by the AUDIT test (odds ratio 37.01; 95% confidence interval 5.20-263.24). Psychopathy, emotional stability, and conscientiousness were significant positive predictive factors, while extroversion, pleasantness, and intellect were significant negative predictive factors for violent offending.
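The reported odds ratio and confidence interval come from a logistic regression, where OR = exp(β) and the 95% CI is exp(β ± 1.96·SE). The sketch below back-calculates β and SE from the published harmful-alcohol-use figures as a consistency check; it is an illustrative reconstruction, not the authors' computation, and it reproduces the reported interval to within rounding.

```python
from math import exp, log

def or_with_ci(beta, se):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return exp(beta), exp(beta - 1.96 * se), exp(beta + 1.96 * se)

# Back-calculation from the reported result (OR 37.01, 95% CI 5.20-263.24):
# beta = ln(37.01) ~ 3.61, and the SE recovered from the CI width is
# (ln(263.24) - ln(5.20)) / (2 * 1.96) ~ 1.00.
beta = log(37.01)
se = (log(263.24) - log(5.20)) / (2 * 1.96)
or_, lo, hi = or_with_ci(beta, se)
print(round(or_, 2), round(lo, 2), round(hi, 1))  # 37.01 5.2 263.3
```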

Conclusion

This study found an association between alcohol abuse and the risk of violent offending in paranoid schizophrenia. We hope that this finding will help improve public and mental health prevention strategies in this vulnerable patient group.

Individuals with schizophrenia have an increased risk of violence (1), but different studies report different risks (1,2). Anglo-American studies commonly report higher prevalence rates than European studies (3,4). These patients have also been reported to have up to 4-6 times higher rates of violent behavior than the general population (3-5). Nonetheless, less than 0.2% of patients suffering from schizophrenia commit homicide (in a 20-year period) and less than 10% commit a violent act (3). Also, patients with schizophrenia contribute to 6%-11% of all homicides and homicide attempts (3-5). In general, aggressiveness is usually associated with anti-social personality features, juvenile delinquency, and psychoactive substance abuse (6). In patients with schizophrenia, violence and violent offending are associated with a great number of risk factors, such as premorbid affinity for violent behavior, alcohol abuse, younger age, lower socioeconomic status (6,7), deinstitutionalization, longer duration of untreated psychosis, later onset of the first episode of psychosis (1,4,8), lower social status, broken families, asocial behavior of parents, loss of the father at an early age, a new marriage partner in the family, and growing up in an orphanage (9). Several studies (10-12) looked at four basic personality dimensions and their role in violence in patients with the schizophrenic illness spectrum: impulse control, affect regulation, narcissism, and paranoid cognition.
Impulsivity and immature affect regulation were associated with most neuropsychiatric disorders, and were particularly predictive of an affinity for addictive disorders, while paranoid cognition and narcissism were predictive of violent acts (10-12). The causes of schizophrenia include genetic, early environmental, and epigenetic risk factors (13,14), which may further modulate the risk of violent offending among individuals with this disease (1,15). Until recently, very little has been reported about the predictive factors of violence and violent offending in the patient population in Croatia. During the last two decades, the Croatian population has been exposed to environmental and socio-demographic changes (eg, the Croatian War 1991-1995 and the post-war period), which might have had an impact on predictive risk factors. Therefore, we conducted a cross-sectional study of in-patients with paranoid schizophrenia with or without a history of physical violence and violent offending (including homicide) in several hospitals in Croatia during a one-year period.

15.

Aim

To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to inform the planning of the public health response to the disease.

Methods

We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year.
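The case-estimation step described above can be sketched as a sample-size-weighted mean prevalence applied to a population figure. The study rows and population number below are hypothetical placeholders for illustration, not the review's actual data.

```python
# Weighted mean prevalence (weights = study sample sizes) and the implied
# number of cases for a given population. All figures below are made up
# for illustration; the review pooled 45 real studies.
studies = [  # (sample_size, prevalence)
    (3000, 0.10),
    (1200, 0.15),
    (800,  0.12),
]
weighted_prev = sum(n * p for n, p in studies) / sum(n for n, _ in studies)

population_under_15 = 282_000_000  # e.g., a UN population figure for a year
estimated_cases = weighted_prev * population_under_15
print(round(weighted_prev, 4), round(estimated_cases / 1e6, 1))  # 0.1152 32.5
```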

Results

Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers.

Conclusion

Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors, such as rising urbanization and population aging, in their policy and health planning responses to this challenge.

Chronic respiratory diseases (CRDs) are among the leading causes of death worldwide, with asthma rated the most common chronic disease affecting children (1). Globally, about 300 million people have asthma, and current trends suggest that an additional 100 million people may be living with asthma by 2025 (1,2). The World Health Organization (WHO) estimates about 250 000 deaths from asthma every year, mainly in low- and middle-income countries (LMIC) (3,4). As with many other chronic diseases in Africa, the fast rate of urbanization has been linked to the increase in the burden of asthma and other allergic diseases (3,5,6). The prevalence of these conditions may, in theory, reach levels higher than those observed in high-income countries (HIC) due to the priming effects of parasitic helminthic infections on the immune system, as these infections are common in many African settings (5). The International Study of Asthma and Allergies in Childhood (ISAAC) reported that asthma prevalence among children was increasing in Africa and has contributed most to the burden of disease through its effects on quality of life (3). In-patient admissions and purchase of medications account for most of the direct costs to governments, while loss of productivity due to absenteeism from work and school is responsible for most of the indirect costs (7,8). Asthma is widely known as a multifactorial respiratory disorder with both genetic and environmental underlying risk factors (3).
Exposure to common allergens (including pollens, dust mites, and animal fur) and to indoor and outdoor air pollution from various sources (eg, traffic pollution, combustion of fossil and biomass fuels, workplace dust) have all been implicated as triggers of the disease (9). Secondhand tobacco smoke is a confirmed risk factor in pediatric patients (5,10). Viral infections, a major cause of upper respiratory tract infections and the “common cold,” are also a common risk factor in children (11,12). As noted, helminthic infections are relatively common in Africa and are associated with bronchial hyper-responsiveness and asthma (5,13), perhaps due to the associated raised immunoglobulin E (IgE) levels and a prominent Th2 immune response (5,14). Studies on asthma are few in Africa, with most publications mainly from South African and Nigerian populations (14). One main factor limiting research output is the diagnosis of asthma, which remains a challenging issue (15,16). The WHO has emphasized that this has limited ongoing research efforts globally (4,16). The International Union against Tuberculosis and Lung Diseases (IUATLD) published one of the first diagnostic and survey guidelines for asthma in 1984, but experts subsequently reported concerns about its precision and reliability (17). According to the Global Initiative for Asthma (GINA), a detailed history, physical examination, and spirometric lung function tests are vital to the diagnosis and management of asthma (10,18). Generally, a reduction in forced expiratory volume in one second (FEV1) and peak expiratory flow (PEF) may be indicative of asthma, with the amount of reduction proportional to the severity of asthma (4). GINA proposed that an increase in FEV1 of >12% and 200 mL in about 15-20 minutes following the inhalation of 200-400 μg of salbutamol, or a 20% increase in PEF from baseline, can be employed as standardized criteria in the diagnosis of asthma (10).
This, however, lacks sensitivity, as many asthmatics, especially those on treatment, may not exhibit an increase in FEV1 and PEF when assessed (16,19). Thus, although asthma is characterized by significant reversibility of airway obstruction, an absence of reversibility may not always exclude the presence of asthma (20). The ISAAC, established in 1991, remains the largest epidemiological study among children globally (1). ISAAC methodologies and scoring are currently the most widely employed by researchers in Africa (1,4). They involve both video and written questionnaires, as there were reports that video and pictorial representations of asthma symptoms may contribute to improved case recognition in younger children (1). However, this is still a subject of debate among experts (21). The European Community Respiratory Health Survey (ECRHS), which assessed the prevalence of atopy and symptoms of airway disease among older age groups in Western Europe, has been widely implemented and has reported significant geographic variations in the prevalence of asthma and atopy (9). Despite these revised guidelines, both the ISAAC and ECRHS research groups have reported challenges in achieving high sensitivity and specificity in case ascertainment, with the symptom “wheeze at rest in the last 12 months” (also regarded as current wheeze, or active wheeze) yielding the highest sensitivity and specificity (1).

In Africa, problems including those arising from the over-utilization of health services, lack of trained staff and diagnostic apparatus, and non-availability and unaffordability of inhaled medications have hindered efforts to improve the management of asthma (22,23). The lack of organized health promotion programs, such as effective control strategies for environmental triggers, air pollutants, and occupational dusts, has also contributed to the growing burden (24).
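The GINA reversibility criterion quoted above (an FEV1 increase of >12% and 200 mL after inhaled salbutamol) can be expressed as a simple check. This is an illustrative sketch of the stated rule, not a clinical tool.

```python
def reversibility_positive(fev1_pre_l, fev1_post_l):
    """GINA bronchodilator reversibility, as stated in the text:
    FEV1 rises by >12% AND >200 mL (0.2 L) after 200-400 ug salbutamol."""
    delta_l = fev1_post_l - fev1_pre_l
    return delta_l > 0.200 and delta_l / fev1_pre_l > 0.12

print(reversibility_positive(2.00, 2.30))  # +300 mL, +15% -> True
print(reversibility_positive(2.00, 2.18))  # +180 mL, +9%  -> False
```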
The WHO has reported that the levels of asthma control and health responses on the continent have been below recommended standards, and that these have contributed to the size of the disease burden (3,4). In addition, although many African countries have national guidelines for the management of asthma and other CRDs, these guidelines have not been implemented in most rural areas (25,26). Economic analyses in many African settings have shown that direct costs from asthma are usually greater than the indirect costs; however, indirect costs represent a relatively higher proportion of total costs among pediatric than adult patients (8). Moreover, the wider economic burden on individuals, families, employers, and society, due to the loss of future potential sources of livelihood, has also been devastating in many resource-poor settings (22). It is believed that many children with asthma in Africa may fail to achieve their full potential if proper management and control measures are not put in place (1). It has been suggested that education of health care providers and the public is a vital element of the response to the challenge posed by asthma in Africa (4,27). By 2015, it is expected that the world’s urban population will increase from 45% to 59%, with over half of this growth occurring in Africa (8). It is also expected that the prevalence of asthma and many chronic diseases in Africa will increase due to this growing population size and the effects of accompanying urbanization and adoption of western lifestyles (28). In light of this, and of the low research output and poor availability of health services data on the burden of asthma in Africa, it is important to analyze the available data through a systematic review of the literature in order to quantify the burden, guide health priority setting, and inform the formulation of an appropriate health policy response.

16.

Aim

To analyze the neurotoxic potential of synthesized magnetite nanoparticles coated by dextran, hydroxyethyl starch, oxidized hydroxyethyl starch, and chitosan, and magnetic nanoparticles combined with ferritin as a native protein.

Methods

The size of nanoparticles was analyzed using photon correlation spectroscopy, their effects on the conductance of planar lipid membrane by planar lipid bilayer technique, membrane potential and acidification of synaptic vesicles by spectrofluorimetry, and glutamate uptake and ambient level of glutamate in isolated rat brain nerve terminals (synaptosomes) by radiolabeled assay.

Results

Uncoated synthesized magnetite nanoparticles and nanoparticles coated with different polysaccharides had no significant effect on synaptic vesicle acidification, the initial velocity of L-[14C]glutamate uptake, the ambient level of L-[14C]glutamate, the plasma membrane potential of synaptosomes, or the conductance of a planar lipid membrane. Native ferritin-based magnetic nanoparticles had no effect on the membrane potential but significantly reduced L-[14C]glutamate transport in synaptosomes and acidification of synaptic vesicles.

Conclusions

Our study indicates that synthesized magnetite nanoparticles, in contrast to ferritin, have no effects on the functional state and glutamate transport of nerve terminals, so ferritin cannot be used as a prototype, analogue, or model of polysaccharide-coated magnetic nanoparticles in toxicity risk assessment and in the manipulation of nerve terminals by external magnetic fields. Still, the ability of ferritin to change the functional state of nerve terminals, in combination with its magnetic properties, suggests its biotechnological potential.

Superparamagnetic iron oxide nanoparticles are promising candidates for increasing the efficiency of targeted drug delivery and therapy due to external magnetic guidance. Nanomaterials differ from their bulk counterparts because they often show unexpected physical and chemical properties. They may produce potential functional and toxicity effects on human nerve cells due to their ability to pass through biological membranes, and they increase the risk of the development of neurodegenerative diseases (1-3). They can penetrate the blood-brain barrier (3-5) and kill nerve cells in vitro (6-8). Surface modification of iron oxide is a key issue for enhancing its interaction with the cell membrane. Using iron oxide nanoparticles coated with dextran, it was shown that labeled cells could be tracked by magnetic resonance imaging in vivo (9,10). Dextran occupies a special place among polysaccharides because of its wide application. Contrast agents based on dextran-coated iron oxides, eg, Endorem (Guerbet, Roissy, France) and Resovist (Bayer Schering Pharma AG, Berlin-Wedding, Germany), have been commercially available for human use as blood pool agents.
Similarly, immortalized cells from the MHP36 hippocampal cell line labeled in vitro with gadolinium rhodamine dextran were tracked in ischemia-damaged rat hippocampus in perfused brains ex vivo (11). Taking into account that all nanoparticles are more or less toxic and that the brain can be a target for their neurotoxic action (3,8,12,13), it is crucial to know their neurotoxic potential. The neurotoxic risk of nanoparticles can be assessed at various levels of nervous system organization. This research was conducted at the neurochemical level, according to the Guidelines for Neurotoxicity Risk Assessment of the US Environmental Protection Agency (14), by assessing the uptake and release of neurotransmitters in nerve terminals (15,16). It has been suggested that a possible target for nanoparticles, beyond the already established microglial cells, are the presynaptic terminals of neurons (12). Presynaptic nerve terminals contain a vesicular pool of neurotransmitters that can be released by exocytosis into the synaptic cleft in response to stimulation (17,18). A key excitatory neurotransmitter in the mammalian central nervous system is glutamate, which is implicated in many aspects of normal brain functioning. Abnormal glutamate homeostasis contributes to neuronal dysfunction and is involved in the pathogenesis of major neurological disorders (19,20). Under normal physiological conditions, extracellular glutamate between episodes of exocytotic release is kept at a low level, thereby preventing continual activation of glutamate receptors and protecting neurons from excitotoxic injury. The low extracellular glutamate concentration is maintained through its uptake by high-affinity Na+-dependent glutamate transporters located in the plasma membrane of neurons and glial cells. Prototypic nanoparticles have been shown to be useful for the investigation of synaptic mechanisms underlying the development of neurotoxicity (8,12).
Ferritin may be considered a model nanoparticle (8,12) because it is composed of 24 subunits, which form a spherical shell with a large cavity where up to 4500 Fe3+ ions can be deposited as compact mineral crystallites resembling ferrihydrite (21-25). Ferritin stores cellular iron in a dynamic manner, allowing the release of the metal on demand (24). Its cores exhibit superparamagnetic properties, which are inherent to magnetic nanoparticles, and vary in diameter from 3.5 nm to 7.5 nm in different tissues (26,27). This protein can penetrate the blood-brain barrier (28) and be transported into different cells using clathrin-mediated endocytosis, similarly to many artificial nanoparticles that use the same mechanism (8,29,30). Recently, ferritin has begun to be examined from a biotechnological point of view. The hypothesis was that ferritin might be considered a good tool and prototypical nanoparticle for investigating the possible toxic properties of metal nanoparticles coated with dextran/polymer shells and the possible causes of neurodegeneration associated with exposure to nanoparticles (8,12). Ferritin has been suggested as a label for high-gradient magnetic separation (31) and magnetic force microscopy imaging (32).
Recently, it has been shown that avascular microscopic breast and brain tumors can be noninvasively detected by designing nanoparticles that contain human ferritin as molecular probes for near-infrared fluorescence and magnetic resonance imaging (33).

This research focused on two aspects. The first was the assessment of the neurotoxic potential of synthesized magnetite nanoparticles (MNP) coated with dextran, hydroxyethyl starch, oxidized hydroxyethyl starch, or chitosan, as well as uncoated nanoparticles, studying their effects on: 1) the uptake of L-[14C]glutamate by rat brain nerve terminals via specific high-affinity Na+-dependent plasma membrane transporters; 2) the ambient level of L-[14C]glutamate in nerve terminals; 3) the membrane potential (Em) of the plasma membrane of nerve terminals, using the potential-sensitive fluorescent dye Rhodamine 6G; 4) the transmembrane current across a planar lipid membrane, using the planar lipid bilayer technique; and 5) the acidification of synaptic vesicles in nerve terminals, using the pH-sensitive fluorescent dye acridine orange. The second aspect was a comparative analysis of the neurotoxic potential of these synthesized polysaccharide-coated nanoparticles and ferritin, which could bring new insight into the possible use of ferritin as an analogue of polymer-coated magnetic nanoparticles in toxicity risk assessment.

17.
18.

Aim

To determine peripheral blood lymphocyte subsets (T cells, helper T cells, cytotoxic T cells, B cells, and natural killer cells), natural killer cell cytotoxicity, serum cortisol concentration, and lymphocyte glucocorticoid receptor expression in Croatian combat veterans diagnosed with chronic posttraumatic stress disorder (PTSD), and to examine the relationship between the assessed parameters and the time passed since the traumatic experience.

Methods

A well-characterized group of 38 PTSD patients was compared with a group of 24 healthy civilians. Simultaneous determination of lymphocyte subsets and the expression of the intracellular glucocorticoid receptor was performed using three-color flow cytometry. Natural killer cell cytotoxicity was measured by the 51Cr-release assay and the serum cortisol concentration was determined by radioimmunoassay.
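The 51Cr-release assay conventionally expresses cytotoxicity as percent specific lysis, 100 × (experimental − spontaneous) / (maximum − spontaneous) release. A minimal sketch of that standard calculation, with made-up counts-per-minute values:

```python
def specific_lysis_percent(experimental_cpm, spontaneous_cpm, maximum_cpm):
    """Standard 51Cr-release calculation:
    % specific lysis = 100 * (E - S) / (M - S)."""
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

# Illustrative counts (not from the study): target cells release 1800 cpm
# with effectors, 600 cpm spontaneously, and 4600 cpm with detergent lysis.
print(round(specific_lysis_percent(1800, 600, 4600), 1))  # 30.0
```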

Results

We found higher lymphocyte counts in PTSD patients than in healthy controls (2294.7 ± 678.0/μL vs 1817.2 ± 637.0/μL, P = 0.007) and a positive correlation between lymphocyte glucocorticoid receptor expression and the number of years that passed from the traumatic experience (rs = 0.43, P = 0.008). Lymphocyte glucocorticoid receptor expression positively correlated with serum cortisol concentration both in PTSD patients (r = 0.46, P = 0.006) and healthy controls (r = 0.46, P = 0.035).
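The reported rs values are Spearman rank correlations, ie, Pearson correlation computed on average ranks. A minimal pure-Python sketch with hypothetical data (not the study's measurements):

```python
def ranks(xs):
    """Average ranks (ties share the mean rank), 1-based."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: years since trauma vs glucocorticoid receptor expression.
years = [6, 8, 10, 12, 15]
gr_expr = [410, 390, 450, 470, 500]
print(round(spearman(years, gr_expr), 2))  # 0.9
```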

Conclusion

This study confirmed that the immune system is affected in the course of chronic PTSD. Our findings also indicated that the hypothalamic-pituitary-adrenal axis profile in PTSD was associated with the duration of the disorder. Due to the lack of power, greater sample sizes are needed to confirm the results of this study.

A prolonged or frequently repeated stress response during symptomatic episodes in chronic posttraumatic stress disorder (PTSD) can result in neuroendocrine and immune alterations, posing a serious threat to mental and physical health (1,2). Evidence suggests that PTSD is related to increased medical morbidity, particularly from cardiovascular and autoimmune diseases (3). Given the controversial findings concerning the neurobiology of PTSD, the pathophysiological mechanisms underlying the increased susceptibility to disease are not clear (4,5). However, the sympathetic-adrenal-medullary (SAM) and hypothalamic-pituitary-adrenal axes have been implicated as the key mediators in this process (6,7). The immune system interacts with the hypothalamic-pituitary-adrenal axis in a bidirectional fashion to maintain homeostasis. Being the primary effector of the stress response, cortisol modifies the complex cytokine network and, consequently, leukocyte function and recirculation (8). These effects are achieved through its interaction with specific intracellular glucocorticoid receptors (9). Studies of leukocyte recirculation (10,11), immune cell function (12), and hypothalamic-pituitary-adrenal axis activity (5) in PTSD have yielded controversial results. Overall, the findings support the hypothesis that immune activation in PTSD may be associated with a Th2 cytokine shift and alterations in the proinflammatory cytokine system (4). Besides, it is believed that PTSD is linked with low plasma cortisol levels and higher glucocorticoid receptor expression, suggesting enhanced feedback sensitivity to cortisol (13).
In contrast to these findings, Gotovac et al (14) showed that Croatian combat veterans with PTSD, approximately 6 years after the traumatic event, had lower expression of the glucocorticoid receptor in lymphocyte subsets and higher serum cortisol concentrations than healthy subjects. The majority of other studies did not take into account the time passed since the trauma, and their samples mainly included Vietnam veterans (15) or Holocaust survivors (16), who had a greater time gap since the traumatic experience than Croatian war veterans. Considering the strong discrepancies in the results published to date, we performed a cross-sectional study to evaluate the correlation between PTSD in Croatian combat war veterans and the percentages of circulating lymphocyte subsets, natural killer cell cytotoxicity as a measure of immune function, and the serum cortisol concentration with lymphocyte glucocorticoid receptor expression as components of the hypothalamic-pituitary-adrenal axis. The emphasis was put on the relationship between the assessed parameters and the time passed since the traumatic experience.

19.

Aim

To genotype and evaluate a panel of single-nucleotide polymorphisms for individual identification (IISNPs) in three Chinese populations: Chinese Han, Uyghur, and Tibetan.

Methods

Two previously identified panels of IISNPs, the 86 unlinked IISNPs and the SNPforID 52-plex markers, were pooled and analyzed. Four SNPs were included in both panels. In total, 132 SNPs were typed on the Sequenom MassARRAY® platform in 330 individuals from the Han Chinese, Uyghur, and Tibetan populations. Population genetic indices and forensic parameters were determined for all studied markers.

Results

No significant deviation from Hardy-Weinberg equilibrium was observed for any of the SNPs in the three populations. Expected heterozygosity (He) ranged from 0.144 to 0.500 in the Han Chinese, from 0.197 to 0.500 in the Uyghur, and from 0.018 to 0.500 in the Tibetan population. Wright’s Fst values ranged from 0.0001 to 0.1613. Pairwise linkage disequilibrium (LD) calculations for all 132 SNPs showed no significant LD across the populations (r2<0.147). A subset of 58 unlinked IISNPs (r2<0.094) with He>0.450 and Fst values from 0.0002 to 0.0536 gave match probabilities of 10^−25 and a cumulative probability of exclusion of 0.999992.
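The match-probability arithmetic can be sketched directly: under Hardy-Weinberg equilibrium, a biallelic SNP with allele frequency p has He = 2pq, and its per-locus random match probability is the sum of squared genotype frequencies, p⁴ + (2pq)² + q⁴. Fifty-eight independent, maximally informative loci (p = 0.5) combine to a probability on the order of 10^−25, consistent with the reported figure.

```python
def expected_het(p):
    """Expected heterozygosity for a biallelic SNP: He = 2pq = 1 - p^2 - q^2."""
    q = 1.0 - p
    return 1.0 - p * p - q * q

def genotype_match_prob(p):
    """Random match probability for one SNP under Hardy-Weinberg:
    sum of squared genotype frequencies, p^4 + (2pq)^2 + q^4."""
    q = 1.0 - p
    return p**4 + (2 * p * q) ** 2 + q**4

# A maximally informative SNP (p = 0.5) reaches He = 0.500 and a per-locus
# match probability of 0.375; 58 such independent loci combine multiplicatively.
per_locus = genotype_match_prob(0.5)
cumulative = per_locus ** 58
print(expected_het(0.5), per_locus, f"{cumulative:.1e}")
```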

Conclusion

The 58 unlinked IISNPs with high heterozygosity have low allele frequency variation among the three Chinese populations, which makes them excellent candidates for the development of multiplex assays for individual identification and paternity testing.

Single-nucleotide polymorphisms (SNPs) are often used as a supplementary tool to short tandem repeat (STR) analysis (1), since they show advantages over STRs in degraded DNA detection (2), kinship analysis (3), ancestry inference (4,5), and physical traits analysis (6). Different SNP groups have been selected according to defined purposes (4,5,7,8). In recent decades, several panels for individual identification have been developed (2,9-11). Kidd et al (12) defined an ideal SNP panel for individual identification (IISNP) as a group of statistically independent SNPs that show little frequency variation among different populations and high heterozygosity. Based on this criterion, Pakstis et al selected 86 unlinked candidate IISNPs with average heterozygosity >0.4 and Fst values <0.06 for 44 major populations across the world (1). However, the sample sizes for Chinese populations were very limited. On the other hand, the SNPforID consortium (www.snpforid.org) developed a 52-plex SNP assay for individual identification (7). This assay was validated by Børsting et al (13-15) according to the ISO 17025 standard and used for routine casework. It worked well in several European countries but showed a somewhat larger frequency variation among populations from other continents (16-18). In order to assemble an ideal SNP panel for individual identification and evaluate its performance in Chinese populations, we pooled the previous 86 IISNPs and the 52-plex SNPs together and typed them in 330 samples from three Chinese population groups: Han, Tibetan, and Uyghur.

20.
Aim

To determine the correlation of urinary fibroblast growth factor 23 (FGF23) excretion with blood pressure and calcium-phosphorus metabolism.

Methods

The study included 42 hypertensive (17 girls) and 46 healthy children and adolescents (17 girls) aged 6-18 years admitted to the Department of Pediatrics and Nephrology, Medical University of Białystok between January 2013 and December 2013. FGF23 in urine was measured using the Human Intact FGF-23 ELISA Kit.

Results

Hypertensive participants had significantly higher urine FGF23/creatinine values than the reference group (8.65 vs 5.59 RU/mg creatinine, P = 0.007). Urine FGF23/creatinine positively correlated with systolic blood pressure in all participants. In hypertensive patients, urine FGF23/creatinine positively correlated with serum calcium and negatively with serum 25(OH)D, urinary calcium, phosphorus, and magnesium.

Conclusion

This study found that FGF23 may play an important role in the pathogenesis of hypertension in children and adolescents, but our results should be confirmed by further studies.

Hypertension is a chronic medical condition and a major risk factor for cardiovascular disease, heart failure, and chronic kidney disease (CKD). Hypertension was found to be associated with several factors, among them calcium-phosphorus imbalance, lack of vitamin D, and serum parathyroid hormone (PTH) (1-5). However, far too little attention has been paid to phosphates and the hormonal mechanisms responsible for their regulation, especially since the consumption of phosphorus has considerably increased in recent years. Some studies have shown that serum phosphorus increases blood pressure (BP) (11,12).
However, recent studies have found that high phosphorus intake reduces BP when the diet is rich in calcium (6-8), while others have shown that BP was reduced by a low-phosphorus, high-calcium diet (9,10). Phosphate concentration is primarily regulated by PTH and fibroblast growth factor 23 (FGF23), a phosphatonin produced by osteoblasts/osteocytes in the bone, which, similarly to PTH, stimulates phosphaturia. FGF23 decreases renal calcitriol production and inhibits PTH secretion. Its main function is to maintain phosphate homeostasis by increasing urinary phosphate excretion and decreasing serum 1,25(OH)2D (13,14). In patients with CKD, it positively correlated with PTH secretion (15,16). The increase in FGF23 in those patients led to an early development of secondary hypertension through suppression of 1,25(OH)2D production (17), and low phosphate intake or the use of phosphorus binders caused a 35% decrease in plasma FGF23 level (18). In healthy individuals, however, no changes in FGF23 levels were observed after either phosphate deprivation or loading (19,20). FGF23 is also involved in renal sodium handling (21) and, even more interestingly, it suppresses the expression of angiotensin-converting enzyme-2 (ACE2) in CKD mice and thereby activates the renin-angiotensin-aldosterone system (RAAS) (22). FGF23 can also influence the RAAS indirectly through vitamin D (23), which probably reduces renin gene expression and the secretory activity of the juxtaglomerular apparatus, the main site of renin production (24). The investigation of the effect of FGF23 on hypertension is not confined to in vitro models. Hypertensive people were found to have a significantly higher plasma FGF23 level than normotensive people (25). FGF23 was shown to be associated with markers of inflammation in individuals with CKD stages 2-4 (26), and with impaired endothelium-dependent vasodilatation in healthy individuals and early CKD patients (27).
This effect of FGF23 might also result indirectly from a decrease in 1,25(OH)2D (28). FGF23 also correlated with asymmetric dimethylarginine (ADMA), an endogenous inhibitor of NO synthase and a biomarker of endothelial dysfunction (29). So far, however, the relevance of FGF23 in primary arterial hypertension has been under-investigated. Moreover, the available data focus on adult hypertensive patients and the possible relation of phosphorus intake and increased FGF23 concentration to elevated BP (25). There is a paucity of similar data in children and adolescents. The aims of this research were to determine whether urinary excretion of FGF23 in hypertensive children and adolescents was higher than in healthy controls and whether its urinary level correlated with serum calcium, phosphorus, vitamin D, and PTH concentrations. Reference group data were obtained from the OLAF study, which established the reference blood pressure ranges for Polish children and adolescents. A strong correlation between serum and urine FGF23 was previously confirmed (r = 0.92, P < 0.001) (30).
