Similar literature
1.
Aim

To construct a single-format questionnaire on sleep habits and mood before and during the COVID-19 pandemic in the general population.

Methods

We constructed the Split Sleep Questionnaire (SSQ) after a literature search of sleep, mood, and lifestyle questionnaires, and after a group of sleep medicine experts proposed and assessed questionnaire items as relevant or irrelevant. The study was performed during 2021 in 326 respondents distributed equally across all age categories. Respondents filled out the SSQ, the Pittsburgh Sleep Quality Index (PSQI), and the State-Trait Anxiety Inventory (STAI), and kept a seven-day sleep diary.

Results

Workday and work-free day bedtimes during the COVID-19 pandemic assessed with the SSQ were comparable to the sleep diary assessment (P = 0.632 and P = 0.203, respectively), as was the workday waketime (P = 0.139). Work-free day waketime was significantly later than assessed in the sleep diary (8:19 ± 1:52 vs 7:45 ± 1:20; P < 0.001). No difference in sleep latency was found between the SSQ and the PSQI (P = 0.066). Cronbach alpha was 0.819 for the Sleep Habits section and 0.89 for the Mood section. Test-retest reliability ranged from 0.45 (P = 0.036) for work-free day bedtime during the pandemic to 0.779 (P < 0.001) for sleep latency before the pandemic.

Conclusion

The SSQ provides a valid, reliable, and efficient screening tool for the assessment of sleep habits and associated factors in the general population during the COVID-19 pandemic.
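Internal-consistency coefficients like the Cronbach alpha values reported above can be computed directly from an item-score matrix: the ratio of summed item variances to the variance of the total score, rescaled by the number of items. A minimal sketch (the respondent data below are invented for illustration, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item questionnaire scored by 6 respondents (1-5 Likert scale)
scores = np.array([
    [4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 5, 4, 4, 4],
    [1, 2, 1, 2, 2],
])
print(round(cronbach_alpha(scores), 3))
```

Because the illustrative items covary strongly, alpha here is high; real questionnaire sections, like the Sleep Habits section above, are judged acceptable at roughly 0.7 or above.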

The COVID-19 pandemic, along with its multiple adverse effects on various aspects of mental health, has significantly affected sleep. Altered sleep habits and newly developed sleep disturbances during the COVID-19 pandemic may influence overall well-being and health (1). Since the beginning of the pandemic, several studies have reported a delay in bedtimes and waketimes, and an associated shift in chronotype toward eveningness (2-5).

Even though actigraphy and sleep diaries provide a valid and reliable assessment of sleep habits (6,7), to achieve the highest reliability and validity these methods require an assessment over seven consecutive days, including weekends (8). Daily reporting may be perceived by respondents as an additional burden (6,9), a limitation that may be overcome by the use of single-administration questionnaires (9,10). Since sleep disturbances recognized in the first pandemic outbreak remained stable during subsequent waves of the COVID-19 pandemic (5), single-administration questionnaires may enable screening of large population groups and an extended assessment of sleep disturbances during the pandemic.

So far, validated sleep questionnaires have most often targeted sleep disorders or symptoms associated with sleep disorders (9). Studies commonly report the Pittsburgh Sleep Quality Index (PSQI) (11), which provides data on sleep duration, sleep disturbances, and sleep latency during the previous month. However, the PSQI mainly reflects sleep quality on workdays (12) and does not collect information on sleep habits on weekends. The Sleep Timing Questionnaire (STQ) was developed as an alternative to the sleep diary for the healthy adult population, showing good reliability and validity (10). Still, although sleep habits are associated with mood (13), social media use (14-16), learning time in students (17-19), sports or exercise (20), and symptoms of insomnia (21), the STQ does not assess variables such as mood and lifestyle habits.

Large studies objectively assessing sleep with wearable devices have recognized sleep timing and sleep duration as modifiable risk factors for adverse mental health during the current pandemic (22). Young adults are especially at risk for increased mood disorder symptoms, higher levels of perceived stress, and more common alcohol use during the pandemic (23). Even though mood disorders are often reported in pandemic studies on sleep habits, mood itself has been less commonly measured and associated with sleep parameters (24). A review of the literature showed a transactional relationship between mood and emotion (25), indicating that mood is characterized by a longer duration than emotion (26). Mood is often assessed with the Brief Mood Introspection Scale (27), the Profile of Mood States (28), or the Visual Analogue Mood Scale (29). A relevant aspect of mood measurement is its hierarchical structure, with two broad dimensions, positive and negative affect, and multiple specific states (30). Commonly used mood assessment scales evaluate the basic negative moods of fear/anxiety, sadness/depression, and anger/hostility, as well as at least one positive mood. Therefore, it has been strongly recommended that mood researchers assess a broad range of both positive and negative emotions (30).

Linking mood changes and lifestyle habits during the pandemic is relevant for recognizing possible predictors of mood changes, especially given a reported increase in depression (31). Since sleep is often intertwined with mood and lifestyle changes (31), we assumed that a single-format questionnaire comprehensively assessing these variables and sleep may be applicable and timely.

The aim of this study was to construct a single-format Split Sleep Questionnaire (SSQ) comprehensively assessing sleep habits, lifestyle habits, and mood changes, as well as to evaluate its reliability and validity in the general population. Sleep habits were validated against standard instruments, with the sleep diary, PSQI, and STAI questionnaires used as measures of construct validity. Additionally, we aimed to assess the psychometric properties of the Mood section and to explore the effects of the COVID-19 pandemic on sleep habits and mood alterations in the general population of Croatia.

3.
Malignant brain tumors are among the most aggressive human neoplasms. One of the most common and severe symptoms that patients with these malignancies experience is sleep disruption. Disrupted sleep is known to have significant systemic pro-tumor effects, both in patients with other types of cancer and in those with malignant brain lesions. We therefore provide a review of the current knowledge on disrupted sleep in malignant diseases, with an emphasis on malignant brain tumors. More specifically, we review the known ways in which disrupted sleep enables further malignant progression. In the second part of the article, we also provide a theoretical framework for the reverse process. Namely, we argue that, due to several possible pathophysiological mechanisms, patients with malignant brain tumors are especially susceptible to having their sleep disrupted and compromised. We further argue that addressing disrupted sleep in patients with malignant brain tumors can not only improve their quality of life but may also have at least some potential to actively suppress this devastating disease, especially when other treatment modalities have been exhausted. Future research is therefore desperately needed.

The annual incidence of tumors of the central nervous system (CNS) is a little over 22 per 100 000 in the general population (1). Around a third of these lesions are malignant. Among the malignant tumors, gliomas are by far the most common type, constituting over 80% of cases. Among gliomas, the most aggressive type, glioblastoma, is also the most common, making up over half of all newly diagnosed gliomas (2,3). The five-year survival of patients with malignant CNS tumors is around 30%, and patients diagnosed with glioblastoma have a five-year survival rate of less than 5%. All this shows that malignant CNS tumors are among the most aggressive human malignancies today. It also shows that the vast accumulated knowledge on disease origin and progression has still not translated into a significant improvement in the overall survival of these patients. New treatment modalities are therefore desperately needed.

Besides the devastating diagnosis of a malignant brain tumor, these patients often experience a wide variety of severe symptoms, which significantly diminish their quality of life (4). There has been increasing awareness of the importance of supportive and palliative care in patients suffering from malignant brain tumors, especially those in whom other treatment modalities have been exhausted (5-7). One of the most commonly reported symptoms is sleep disturbance (4,8-12).

Sleep is a recurrent, physiological phenomenon, which consists of many measurable factors (12) and is ubiquitous throughout the natural world (13-16). It is a highly active, easily reversible process, which is crucial not only for the physical and mental well-being of all living organisms, but also for the very concepts we as humans have of ourselves and the world around us (17). There are many theories regarding the possible function of sleep, ranging from physiological explanations such as the rest of individual cells (18) to behavioral explanations of why a biological system needs periodic inactivity (19). There is a growing understanding of how the modern lifestyle disrupts the natural circadian rhythm in humans, the consequences of which are still not sufficiently explored (20).

Sleep disruption has a well-known detrimental role for an organism. Indeed, patients with disrupted sleep have been found to have a higher prevalence of several diseases, such as cardiovascular disorders (21), cognitive impairment (22), various metabolic disorders and obesity (23,24), and systemic and local inflammation (25,26). Furthermore, sleep can be impaired in many ways. The current classification of sleep disorders comprises several clinical entities such as insomnia, parasomnia, hypersomnolence, sleep-related movement disorders, etc (27). However, this article refers to all of this broad pathology as “sleep disturbance,” primarily for clarity and simplicity's sake. In addition, research on disrupted sleeping patterns in patients with malignant lesions usually also encompasses all of these entities under this broader term (28,29).

4.

Aim

To examine to what extent personality traits (extraversion, agreeableness, conscientiousness, neuroticism, and openness), organizational stress, attitudes toward work, and the interactions between personality and either organizational stress or attitudes toward work prospectively predict 3 components of burnout.

Methods

The study was carried out on 118 hospital nurses. Data were analyzed by a set of hierarchical regression analyses, in which personality traits, measures of organizational stress, and attitudes toward work, as well as interactions between personality and either organizational stress or attitudes toward work, were included as predictors, while 3 indices of burnout measured 4 years later served as criterion variables.

Results

Personality traits proved to be significant but weak prospective predictors of burnout; as a group they predicted only reduced professional efficacy (R2 = 0.10), with agreeableness being the single negative predictor. Organizational stress was a positive predictor and affective-normative commitment a negative predictor, while continuance commitment was not related to any dimension of burnout. We found interactions of neuroticism and of conscientiousness with organizational stress, measured as role conflict and work overload, on reduced professional efficacy (β = -0.30 and β = -0.26, respectively). We also found interactions between neuroticism and affective-normative commitment (β = 0.24) and between openness and continuance commitment (β = -0.23) on reduced professional efficacy, as well as an interaction between conscientiousness and continuance commitment on exhaustion.

Conclusion

Although contextual variables were strong prospective predictors and personality traits weak predictors of burnout, the results suggest the importance of the interaction between personality and contextual variables in predicting burnout.

Numerous studies have focused on work stress and burnout in nurses because they work in a high-stress environment, which has detrimental effects on their mental and physical health, productivity and efficacy at work, and absenteeism, as well as on patients' outcomes, such as increased mortality and patient dissatisfaction (1-3).

Burnout refers to the symptoms of mental/emotional exhaustion caused by chronic job stress (4,5). It manifests itself in the form of exhaustion, depersonalization (cynicism), and the perception of reduced personal efficacy in working with others. Emotional exhaustion refers to feelings of fatigue and loss of energy; depersonalization to detachment from the job, cynicism, and mental distancing from service recipients; and reduced professional efficacy to feelings of incompetence and a lack of achievement and productivity at work.

The predictors of job burnout are both environmental and individual (5-8). Among the frequently examined environmental (organizational) antecedents of burnout are stressors at work such as work overload, role conflict, and role ambiguity. Increased demands at work were strongly related to all components of burnout, and especially to emotional exhaustion (5-8). The rather scarce studies of personality effects found that almost all of the 5-factor personality dimensions were related to burnout, although the relations between them were not always strong and consistent (9). However, neuroticism proved to be more strongly and consistently related to burnout than the other 5-factor dimensions. Other studies also found positive relations between neuroticism and all three components of burnout (10-15). On the other hand, extraversion was mainly negatively related to burnout (12,14,16), and some studies also found negative relations between agreeableness and one or two of the burnout dimensions (15,17-19). Conscientiousness was negatively related to emotional exhaustion and reduced professional efficacy and positively to depersonalization, while the relations between openness and the burnout dimensions appeared less consistent (20-22). However, most of the above-mentioned studies had cross-sectional designs, meaning that personality dimensions and burnout were examined at the same time, which could result in higher correlations between them.

Furthermore, many studies have examined burnout in relation to attitudes toward work, most frequently work satisfaction, job involvement, and organizational commitment. Organizational commitment is defined as the degree to which a person identifies himself or herself with the organization and its goals (23). The model of organizational commitment that has received considerable empirical support identifies 3 components: affective (value-based), normative (obligation-based), and continuance (based on an assessment of costs and benefits) (24). Organizational commitment serves as a protective factor against negative health outcomes and decreases the negative effects of stressors on burnout (25).

Although most explanatory models of burnout explain it as the outcome of a transaction between environmental and personality variables (26), most often the effects of only one set of variables, organizational (situational) or individual (dispositional), have been examined in a single study. Given the evidence that personality influences how people react to stressful situations in their workplace (27), it seems plausible to assume that besides the direct effects of personality on one hand and of environmental variables on the other, environmental variables could also moderate the effects of personality on burnout. Indeed, some authors have stressed the need for more research on organizational and individual factors that may have direct effects or serve as moderators or buffers of burnout (28,29).

Consequently, the present study examines the direct effects of both individual and organizational factors, as well as the moderating effects of organizational factors, on professional burnout in hospital nurses. We examined the direct effects of 5-factor personality variables, and the direct and moderating effects of organizational stress and attitudes toward work, on 3 components of burnout among hospital nurses measured 4 years later. It was hypothesized that the 5-factor personality traits would predict the burnout dimensions; specifically, neuroticism was expected to be a positive predictor, while extraversion, agreeableness, and conscientiousness were expected to be negative predictors of burnout. We also tested the possibility that organizational stress would be a positive, and affective-normative commitment a negative, prospective predictor of the burnout components, and that organizational stress and attitudes toward work (affective-normative commitment and continuance commitment) would moderate the effects of personality variables on the burnout components.
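The moderation hypotheses above come down to adding a personality × stress product term at the final step of the hierarchical regression and testing whether it improves prediction. A rough sketch with simulated data (the variable names and effect sizes are invented for illustration; the study's own coefficients are reported in its Results), using plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 118  # sample size matching the study; the data themselves are simulated

# Hypothetical standardized predictors
neuroticism = rng.standard_normal(n)
org_stress = rng.standard_normal(n)       # eg, a role conflict / work overload composite
interaction = neuroticism * org_stress    # the moderation (product) term

# Simulate a burnout index with a true negative moderation effect
burnout = 0.2 * neuroticism + 0.4 * org_stress - 0.3 * interaction + rng.standard_normal(n)

# Final hierarchical step: intercept + main effects + interaction
X = np.column_stack([np.ones(n), neuroticism, org_stress, interaction])
beta, *_ = np.linalg.lstsq(X, burnout, rcond=None)
resid = burnout - X @ beta
r2 = 1 - (resid ** 2).sum() / ((burnout - burnout.mean()) ** 2).sum()
print(beta.round(2), round(r2, 3))
```

A significant coefficient on the product term (here recovered as negative, mirroring the simulated -0.3) is what licenses the kind of moderation claims made in the abstract; in practice the predictors are mean-centered first to keep the main effects interpretable.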

5.

Aim

To assess retrospectively the clinical effects of typical (fluphenazine) or atypical (olanzapine, risperidone, quetiapine) antipsychotics in three open clinical trials in male Croatian war veterans with chronic combat-related posttraumatic stress disorder (PTSD) with psychotic features, resistant to previous antidepressant treatment.

Methods

Inpatients with combat-related PTSD were treated for 6 weeks with fluphenazine (n = 27), olanzapine (n = 28), risperidone (n = 26), or quetiapine (n = 53) as monotherapy. Treatment response was assessed by the reduction in total and subscale scores on the clinical scales measuring PTSD (PTSD Interview and Clinician-Administered PTSD Scale) and psychotic symptoms (Positive and Negative Syndrome Scale).

Results

After 6 weeks of treatment, monotherapy with fluphenazine, olanzapine, risperidone, or quetiapine in patients with PTSD significantly decreased scores on the trauma re-experiencing, avoidance, and hyperarousal subscales of the clinical scales measuring PTSD, as well as the total and subscale scores on the positive, negative, general psychopathology, and supplementary items of the Positive and Negative Syndrome Scale (P < 0.001).

Conclusion

PTSD and psychotic symptoms were significantly reduced after monotherapy with typical or atypical antipsychotics. As psychotic symptoms commonly occur in combat-related PTSD, the use of antipsychotic medication seems to offer another approach to treating a psychotic subtype of combat-related PTSD resistant to previous antidepressant treatment.

In a world in which terrorism and conflicts are constant threats, and these threats are becoming global, posttraumatic stress disorder (PTSD) is a serious and global illness. According to the criteria of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (1), exposure to a life-threatening or horrifying event, such as combat trauma, rape, sexual molestation, abuse, child maltreatment, natural disasters, motor vehicle accidents, violent crimes, hostage situations, or terrorism, can lead to the development of PTSD (1,2). The disorder may also be precipitated if a person experienced, saw, or learned of an event or events that involved actual or threatened death, serious injury, or violation of the body of self or others (3,4). In such an event, a person’s response can involve intense fear, helplessness, or horror (3,4). However, not all persons who are exposed to a traumatic event will develop PTSD. Although the stress reaction is a normal response to an abnormal situation, some extremely stressful situations will in some individuals overwhelm their ability to cope with stress (5).

PTSD is a chronic psychiatric illness. Its essential features are the development of three characteristic symptom clusters in the aftermath of a traumatic event: re-experiencing the trauma, avoidance and numbing, and hyperarousal (1,6). The core PTSD symptoms in the re-experiencing cluster are intrusive memories, images, or perceptions; recurring nightmares; intrusive daydreams or flashbacks; exaggerated emotional and physical reactions; and dissociative experiences (1,6,7). These symptoms intensify or recur upon exposure to reminders of the trauma, and various visual, auditory, or olfactory cues might trigger traumatic memories (3,4). The avoidance and numbing cluster of symptoms includes efforts to avoid thoughts, feelings, activities, or situations associated with the trauma; feelings of detachment or alienation; inability to have loving feelings; restricted range of affect; loss of interest; and avoidance of activity. The hyperarousal cluster includes exaggerated startle response, hypervigilance, insomnia and other sleep disturbances, difficulties in concentrating, and irritability or outbursts of anger. PTSD criteria include functional impairment, which can be seen in occupational instability, marital problems, discord with family and friends, and difficulties in parenting (3,4,8). In addition to this social and occupational dysfunction, PTSD is often accompanied by substance abuse (9) and by various comorbid diagnoses, such as major depression (10), other anxiety disorders, somatization, personality disorders, dissociative disorders (7,11), and frequently by suicidal behavior (12). Combat exposure can precipitate a more severe clinical picture of PTSD, which may be complicated by psychotic features and resistance to treatment. War veterans with PTSD have a high risk of suicide, and military experience, guilt about combat actions, survivor guilt, depression, anxiety, and severe PTSD are significantly associated with suicide attempts (12).

The pharmacotherapy of PTSD includes the use of antidepressants, such as selective serotonin reuptake inhibitors (fluvoxamine, fluoxetine, sertraline, or paroxetine) as the first choice of treatment, tricyclic antidepressants (desipramine, amitriptyline, imipramine), monoamine oxidase inhibitors (phenelzine, brofaromine), buspirone and other antianxiety agents, benzodiazepines (alprazolam), and mood stabilizers (lithium) (13-16). Although the pharmacotherapy of PTSD starts with antidepressants, in treatment-refractory patients a new pharmacological approach is required to obtain a response. In treatment-resistant patients, pharmacotherapy strategies reported to be effective include anticonvulsants, such as carbamazepine, gabapentin, topiramate, tiagabine, divalproex, and lamotrigine (14,17); anti-adrenergic agents, such as clonidine (a presynaptic α2-adrenoceptor agonist that blocks central noradrenergic outflow from the locus ceruleus), propranolol, and prazosin (13,14); opiate antagonists (13); and neuroleptics and antipsychotics (14,17,18).

Combat exposure frequently induces PTSD, and combat-related PTSD might progress to a severe form of the disorder, which is often refractory to treatment (19-21). Combat-related PTSD is frequently associated with comorbid psychotic features (11,14,17,19-21), and psychotic features add to the severity of symptoms in combat-related PTSD patients (19,22-24). These cases of a more severe subtype of PTSD, complicated with psychotic symptoms, require the use of neuroleptics or atypical antipsychotic drugs (14,17,25-27).

After the war in Croatia (1991-1995), an estimated million people were exposed to war trauma, and about 10 000 Homeland War veterans (15% prevalence) developed PTSD, with an alarmingly high suicide rate (28). The war in Croatia brought tremendous suffering not only to combat-exposed veterans and prisoners of war (29), but also to different groups of traumatized civilians in the combat zones, displaced persons and refugees, victims of terrorist attacks, civilian relatives of traumatized war veterans and terrorist attack victims, and traumatized children and adolescents (30). Among Croatian war veterans with combat-related PTSD, 57-62% of combat soldiers met criteria for comorbid diagnoses (8-11), such as alcohol abuse, major depressive disorder, anxiety disorders, panic disorder and phobia, psychosomatic disorder, psychotic disorders, drug abuse, and dementia. In addition to different comorbid psychiatric disorders, a great proportion of war veterans with combat-related PTSD developed psychotic features (8,11,25,26), consisting of psychotic depressive and schizophrenia-like symptoms (suggesting prominent symptoms of thought disturbances and psychosis). Psychotic symptoms were accompanied by auditory or visual hallucinations and delusional thinking in over two-thirds of patients (25,26). Delusional paranoid symptoms occurred in 32% of patients (25,26). The hallucinations were not associated exclusively with the traumatic experience, while the delusions were generally paranoid or persecutory in nature (25,26). Although psychotic PTSD and schizophrenia share some similar symptoms, there are clear differences between the two entities, since PTSD patients still retain some insight into reality and usually do not have complete disturbances of affect (eg, constricted or inappropriate) or thought disorder (eg, loose associations or disorganized responses).

This proportion of veterans with combat-related PTSD refractory to treatment (18-20) and with co-occurring psychotic symptoms requires additional pharmacological strategies, such as the use of neuroleptics (25) or atypical antipsychotics (14,17,26). Studies evaluating the use of antipsychotics in combat-related PTSD with psychotic features are scarce, and antipsychotics have frequently been added to existing medication in the treatment of PTSD. In this study, we retrospectively compared the clinical effects of four antipsychotic drugs – a neuroleptic (fluphenazine) and three atypical antipsychotics (olanzapine, risperidone, and quetiapine) – in treatment-resistant male war veterans with combat-related PTSD with psychotic features.

6.

Aim

To elucidate the involvement of noradrenergic system in the mechanism by which diazepam suppresses basal hypothalamic-pituitary-adrenal (HPA) axis activity.

Methods

Plasma corticosterone and adrenocorticotropic hormone (ACTH) levels were determined in female rats treated with diazepam alone, as well as with diazepam in combination with clonidine (α2-adrenoreceptor agonist), yohimbine (α2-adrenoreceptor antagonist), alpha-methyl-p-tyrosine (α-MPT, an inhibitor of catecholamine synthesis), or reserpine (a catecholamine depleting drug) and yohimbine.

Results

Diazepam administered in a dose of 2.0 mg/kg suppressed basal HPA axis activity, ie, decreased plasma corticosterone and ACTH levels. Pretreatment with clonidine or yohimbine failed to affect basal plasma corticosterone and ACTH concentrations, but abolished diazepam-induced inhibition of the HPA axis activity. Pretreatment with α-MPT, or with a combination of reserpine and yohimbine, increased plasma corticosterone and ACTH levels and prevented diazepam-induced inhibition of the HPA axis activity.

Conclusion

The results suggest that α2-adrenoreceptor activity, as well as intact presynaptic noradrenergic function, is required for the suppressive effect of diazepam on HPA axis activity.

Benzodiazepines are used for their anxiolytic, sedative-hypnotic, muscle relaxant, and anticonvulsant properties in the treatment of a variety of neuropsychiatric disorders (1,2), including anxiety and depression, which are often related to disturbances in the activity of the hypothalamic-pituitary-adrenal (HPA) axis (3,4). Although these drugs exert most of their pharmacological effects via γ-aminobutyric acid A (GABAA) receptors (5,6), benzodiazepine administration has been associated with alterations in neuroendocrine function both in experimental animals and in humans (7-9). However, even after years of extensive studies, the complex mechanisms by which these widely used drugs produce their effects on the HPA axis are still not known.

Although most previous studies have demonstrated that classical benzodiazepines such as diazepam decrease HPA axis activity in stressful contexts (10-14), under basal conditions they have been shown to stimulate (9,11,15-18), inhibit (15,19-22), or not affect (17,23-25) HPA axis activity. Such diverse results might be related to several factors, such as dose and gender (15,16,20,21,26-28), or may be a consequence of the net effect of non-selective benzodiazepines on the various GABAA receptor isoforms (9).

Our previous studies demonstrated that while diazepam (1 mg/kg) produced no change in plasma corticosterone levels in male rats (15,20), it decreased basal levels of corticosterone in female rats (15,26). However, although diazepam inhibited the HPA axis activity of female rats following administration of lower doses (1 or 2 mg/kg) (15,20,21,26), it stimulated HPA axis activity following administration of a high dose (10 mg/kg) (15,16,26). Moreover, whereas the suppressive effect of the lower dose of diazepam (2.0 mg/kg) on HPA axis activity in female rats involves the GABAA receptor complex (21), the increase in corticosterone levels caused by the higher dose of diazepam (10 mg/kg) does not involve the stimulation of GABAA receptors (16). In addition, the stimulatory effect of 10 mg/kg diazepam on HPA axis activity in rats seems not to be mediated by the benzodiazepine/GABA/chloride channel complex or by peripheral benzodiazepine receptors, but rather by a cyclic adenosine monophosphate (AMP)-dependent mechanism (18).

Since our previous results suggested that the effect of a high dose of diazepam on HPA axis activity in female rats might be due to a blockade of α2-adrenergic receptors (16), the aim of this study was to elucidate whether the noradrenergic system also has a modulatory role in the inhibitory effect of 2.0 mg/kg diazepam on basal plasma adrenocorticotropic hormone (ACTH) and corticosterone levels in female rats.

8.

Aim

To determine the differences in subjective quality of life between elderly people living in a nursing home and those living in their own homes after brain stroke, and to determine the contribution of demographic variables and different quality of life domains to the explanation of self-assessed quality of life.

Methods

The study included 60 elderly men and women: 30 living in their own homes (median age, 81; range, 72-90) and 30 living in a nursing home (median age, 81; range, 72-86). Both groups received care (stationary or ambulatory) from the same nursing home. The World Health Organization Quality of Life Questionnaire – short version, a self-assessed quality of life questionnaire, and a demographic questionnaire were used to collect data on subjective quality of life. The participants completed the self-report questionnaires individually.

Results

Quality of life scores were significantly higher in the elderly living in a nursing home than in those living in their own homes (mean ± standard deviation, 78.7 ± 12.8 vs 59.3 ± 17.3 out of a maximum of 100, P < 0.001). The elderly living in the nursing home also scored significantly higher than those living in their own homes on all 4 quality of life domains (maximum 100 for each domain): physical (28.5 ± 3.3 vs 17.2 ± 5.0), psychological (22.3 ± 3.7 vs 16.3 ± 4.0), social relationships (11.4 ± 1.6 vs 8.3 ± 1.7), and environment (32.8 ± 4.6 vs 24.0 ± 6.1) (P < 0.001 for all). All predictive variables together explained 51.9% of the quality of life variance, with self-assessed health being the most significant predictor.
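A group comparison like the one above can be checked from the reported summary statistics alone. The sketch below computes Welch's t statistic and Welch-Satterthwaite degrees of freedom from the two groups' means, standard deviations, and sizes, with a normal approximation for the two-sided p-value (adequate here, since the degrees of freedom come out around 50; the abstract does not state which test the authors used, so this is an illustrative cross-check, not their analysis):

```python
from math import sqrt
from statistics import NormalDist

# Summary statistics from the abstract: nursing-home vs own-home total scores
n1, m1, sd1 = 30, 78.7, 12.8
n2, m2, sd2 = 30, 59.3, 17.3

# Welch's t statistic from summary data
se = sqrt(sd1**2 / n1 + sd2**2 / n2)
t = (m1 - m2) / se

# Welch-Satterthwaite degrees of freedom
df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
    (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1)
)

# Normal approximation to the two-sided p-value
p = 2 * (1 - NormalDist().cdf(abs(t)))
print(round(t, 2), round(df, 1), f"{p:.1e}")
```

The 19.4-point difference is nearly five standard errors wide, so the p-value lands far below the 0.001 threshold the abstract reports.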

Conclusion

Quality of life of the elderly in a nursing home was significantly higher than that of their peers living in their own home, which may be related to better care in specially organized settings.Quality of life is influenced by a wide range of different factors. Although material status is one of these factors, it is neither an essential nor sufficient precondition for the feeling of satisfaction with life (1). Objective factors, such as social, economic, and political situation influence subjective assessment of the quality of life, but the association between objective and subjective aspects is not linear, ie, a change in objective aspects does not automatically imply a change in subjective aspects (2). If poor social living conditions are improved, subjective perception of satisfaction with life improves, but after a certain point, this association disappears (1,2). If all basic life needs are met, increase in material well-being will not significantly influence the subjective assessment of quality of life (1).The World Health Organization (WHO) defines quality of life as an individual’s perception of his or her position in life in specific cultural, social, and environmental context (3). Quality of life consists of the following main areas: objective environment, environment, behavioral competence (including health), perceived quality of life, and psychological well-being (including life satisfaction) (4). Beside the objective factors, quality of life is influenced by subjective perception and assessment of physical, material, social, and emotional well-being, personal development, and purposeful activity. All these domains are influenced by an individual’s personal value system (5).It has been shown that individuals with serious and persistent disabilities and objectively poor quality of life report having good or satisfactory quality of life, which is also known as the disability paradox (6,7). 
This is explained by the theory of balance, which holds that an individual perceives the quality of life as a balance between body and mind (6). On the other hand, the explanation may lie in establishing supportive social relationships during illness (7,8) and developing effective coping strategies (9). Health is the most often reported factor influencing quality of life of elderly people (10-12). However, objective health problems are not always associated with subjective perception of poor health (13). Paying attention to individual context (14,15) could help us understand this paradox. For example, Browne et al (16) found that self-reported quality of life was higher among very old study participants than among younger ones. Philp (17) holds that the most important aspect of care for the elderly is to increase and maintain quality of life and that, therefore, all factors that increase the quality of life should be identified. As human life is extended, a greater number of diseases make adequate functioning more difficult (18-20), and the association between symptoms, disorders, and everyday activities has not been completely explained. For example, depression in persons without physical disabilities significantly contributes to the decrease in their daily activities and increases their dependence on others (21). Bowling and Brown (22) reported that persons aged over 85 who lived in their own homes in London assessed their health status as an important predictor of emotional well-being, more influential than their social network. Persons with poorer social support had lower satisfaction with life (23), and dependence on help from others elicited feelings of insecurity and anxiety about the future, especially about the continued availability of persons who provide help (24). Quality of life is influenced by socio-demographic factors, level of help, variety of activities, and social and environmental factors (23,25-27). 
Socio-economic indicators contribute relatively little to the model (28). The aim of our study was to determine the differences in self-assessed quality of life between elderly people living in a nursing home and elderly people living in their own homes after stroke, and to determine the predictive contribution of demographic variables and different quality of life domains to the explanation of subjective quality of life.

10.
Prevalence of erectile and ejaculatory difficulties among men in Croatia

Aim

To determine the prevalence and risk factors of erectile difficulties and rapid ejaculation in men in Croatia.

Method

We surveyed 615 of 888 contacted men aged 35-84 years. The mean age of participants was 54 ± 12 years. College-educated respondents and the respondents from large cities were slightly overrepresented in the sample. Structured face-to-face interviews were conducted in June and July 2004 by 63 trained interviewers. The questionnaire used in interviews was created for commercial purposes and had not been validated before.

Results

Of the 615 men who were sexually active in the preceding month and gave valid answers to the questions on erectile difficulties and rapid ejaculation, 130 suffered from erectile or ejaculatory difficulties. These men reported erectile difficulties (77 out of 615) more often than rapid ejaculation (57 out of 601). An additional 26.8% (165 out of 615) and 26.3% (158 out of 601) of men were classified as being at risk for erectile difficulties and rapid ejaculation, respectively. The prevalence of erectile difficulties varied from 5.8% in the 35-39 age group to 30% in the 70-79 age group. The association between age and rapid ejaculation was curvilinear, ie, U-shaped: rates of rapid ejaculation were highest in the youngest (15.7%) and the oldest (12.5%) age groups. Older age (odds ratios [OR], 6.2-10.3), overweight (OR, 3.3-4.2), alcohol consumption (OR, 0.3-0.4), intense physical activity (OR, 0.3), traditional attitudes about sexuality (OR, 2.8), and discussing sex with one’s partner (OR, 0.1-0.3) were associated with erectile difficulties. Education (OR, 0.1-0.3), being overweight (OR, 22.0) or obese (OR, 20.1), alcohol consumption (OR, 0.2-0.3), stress and anxiety (OR, 10.8-12.5), holding traditional attitudes (OR, 2.8), and moderate physical activity (OR, 0.1) were associated with rapid ejaculation.
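The odds ratios reported above come from the study's regression models; the mechanics of an odds ratio and its confidence interval can be illustrated with a small stdlib sketch. The 2×2 counts below are hypothetical and chosen only so that the resulting OR falls in the same range as the reported age effect; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 20/50 older men vs 10/100 younger men with difficulties
or_, lo, hi = odds_ratio_ci(20, 30, 10, 90)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 6.00
```

A CI that excludes 1.0, as here, corresponds to a statistically significant association at the 5% level.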

Conclusion

The prevalence of erectile difficulties was higher than the prevalence of rapid ejaculation in men in Croatia. The odds of having these sexual difficulties increased with older age, overweight, traditional attitudes toward sex, and a higher level of stress and anxiety.

A growing number of international studies on sexual health issues suggest that many women and men worldwide have sexual health problems (1-4). According to surveys based on community samples, the prevalence of male sexual disturbances ranges between 10% and 50% (2,4). The most frequent male sexual disturbance seems to be premature or rapid ejaculation (5,6), reported to range from 4% to 29% (6). The Global Study of Sexual Attitudes and Behaviors estimated the prevalence of rapid ejaculation at approximately 30% across all age groups (7). It seems, in fact, to be the most common of all male sexual disturbances (5-9). However, problems arise when an objective definition of rapid ejaculation is attempted (9,10). According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), rapid ejaculation is a persistent or recurrent onset of orgasm and ejaculation with minimal sexual stimulation before, upon, or shortly after penetration and before the person wishes it (11). It results in pronounced distress or interpersonal difficulties and is not exclusively due to the direct effects of a substance used (11). Although useful for clinical practice, this definition does not offer precise guidelines for epidemiological research. As indicated by large discrepancies in the prevalence rates (6), epidemiological analyses of rapid ejaculation are characterized by definition and measurement inconsistencies (1,10,12). In spite of the lack of agreement as to what constitutes rapid ejaculation (12) and the fact that it is not a well-understood problem (5,13), the consequences are well known. 
Chronic rapid ejaculation is accompanied by an array of psychological problems, including psychogenic erectile dysfunction (14). Rapid ejaculation can seriously burden interpersonal dynamics and decrease sexual satisfaction (15), and sometimes the overall quality of the intimate relationship (16,17). In addition to frustrations, withdrawal (including the lack of desire and cessation of sexual contacts), and a strained relationship, rapid ejaculation causes changes in self-image and one’s sense of masculinity. It has been shown that rapid ejaculation has a psychological impact similar to that of erectile problems, especially in terms of self-confidence and worries over the relationship, both the present and the future ones (14). Psychologically and culturally, erectile difficulties are the most dreaded male sexual problem (16,18,19), which not only results in deep frustration, but often leads to a crisis of masculine identity (19). Recent pharmacological breakthroughs have initiated rapid growth of interest in the epidemiology of erectile difficulties. Current studies suggest that a sizeable proportion of adult men suffer from erectile difficulties and that the likelihood of erectile difficulties increases with age (1-4). According to a recently published systematic review, the prevalence of erectile difficulties ranges from 2% in men younger than 40 years to over 80% in men aged 80 years or more (4). Due to the aging of the population, the number of men with erectile difficulties is expected to rise (20,21). The projection based on the results of the Massachusetts Male Aging Study (MMAS) from 1995 is that the number of men with the condition will more than double by 2025 (22). How do we explain the considerable variations in reported prevalence rates of erectile difficulties? Methodological and conceptual differences between the studies (1,3,4,23) seem to be the main reason, although the effect of culture-specific perception of sexual problems should not be underestimated (24). 
In spite of a large number of population or community sample studies (18,20,25-38), inconsistent definitions and operationalizations seriously hamper the analysis of the role of culture in the perception and reporting of erectile difficulties in men. In transitional countries, sexual health is a rather neglected research area, mainly because of the lack of education and research training of potential investigators in the field of sexology. In Croatia, sexual health issues have only recently gained attention as a topic worthy of clinical (39) and non-clinical research (40,41). Our aim was to determine the prevalence of and risk factors for erectile difficulties and rapid ejaculation in a national sample of Croatian men.

11.

Aim

To analyze potential and actual drug-drug interactions reported to the Spontaneous Reporting Database of the Croatian Agency for Medicinal Products and Medical Devices (HALMED) and determine their incidence.

Methods

In this retrospective observational study covering the period from March 2005 to December 2008, we detected potential and actual drug-drug interactions using drug interaction screening programs and analyzed them.

Results

HALMED received 1209 reports involving at least two drugs. There were 468 (38.7%) reports on potential drug-drug interactions, 94 of which (7.8% of total reports) were actual drug-drug interactions. Among actual drug-drug interaction reports, the proportion of serious adverse drug reactions (53 out of 94) and the number of drugs involved (n = 4) were significantly higher (P < 0.001) than among the remaining reports (580 out of 1982; n = 2, respectively). Actual drug-drug interactions most frequently involved nervous system agents (34.0%), and interactions caused by antiplatelet, anticoagulant, and non-steroidal anti-inflammatory drugs were in most cases serious. In only 12 of 94 reports were actual drug-drug interactions recognized by the reporter.
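The incidence percentages in this paragraph follow directly from the raw report counts; a minimal sketch recomputing them from the figures given above:

```python
# Counts reported for the HALMED Spontaneous Reporting Database (2005-2008)
total_reports = 1209   # reports involving at least two drugs
potential_ddi = 468    # potential drug-drug interaction reports
actual_ddi = 94        # actual (clinically confirmed) interaction reports

pct_potential = 100 * potential_ddi / total_reports
pct_actual = 100 * actual_ddi / total_reports
print(f"potential DDI: {pct_potential:.1f}%")  # 38.7%
print(f"actual DDI: {pct_actual:.1f}%")        # 7.8%
```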

Conclusion

The study confirmed that the Spontaneous Reporting Database was a valuable resource for detecting actual drug-drug interactions. It also identified drugs leading to serious adverse drug reactions and deaths, thus indicating the areas which should be the focus of health care education.

Adverse drug reactions (ADR) are among the leading causes of mortality and morbidity, responsible for additional complications (1,2) and longer hospital stays. The magnitude of ADRs and the burden they place on the health care system are considerable (3-6) yet preventable (7) public health problems, especially considering that drug-drug interactions are an important cause of ADRs (8,9). Although there is a substantial body of literature on ADRs caused by drug-drug interactions, it is difficult to accurately estimate their incidence, mainly because of different study designs, populations, frequency measures, and classification systems (10-15). Many studies including different groups of patients found the percentage of potential drug-drug interactions resulting in ADRs to range from 0% to 60% (10,11,16-25). System analysis of ADRs showed that drug-drug interactions represented 3%-5% of all in-hospital medication errors (3). The most endangered groups were elderly and polymedicated patients (22,26-28), and emergency department visits were a frequent result (29). Although the overall incidence of ADRs caused by drug-drug interactions is modest (11-13,15,29,30), they are severe and in most cases lead to hospitalization (31,32). Potential drug-drug interactions are defined on the basis of retrospective chart reviews, whereas actual drug-drug interactions are defined on the basis of clinical evidence, ie, they are confirmed by laboratory tests or symptoms (33). 
The frequency of potential interactions is higher than that of actual interactions, resulting in large discrepancies among study findings (24). A valuable resource for detecting drug-drug interactions is a spontaneous reporting database (15,34), and several methods are currently used to detect possible drug-drug interactions in such databases (15,29,35,36). However, drug-drug interactions in general are rarely reported, and information about ADRs due to drug-drug interactions is usually lacking. The aim of this study was to estimate the incidence of actual and potential drug-drug interactions in the national Spontaneous Reporting Database of ADRs in Croatia. Additionally, we assessed the clinical significance and seriousness of drug-drug interactions and their probable mechanism of action.

12.

Aim

To evaluate how exposure to educational leaflet about healthy sleep affects knowledge about sleep in adolescents.

Methods

The study included students aged 15-18 years from 12 high schools (1209 participants; 85% of the eligible study population). Multistage sampling was used, and the selected schools were randomly assigned to two intervention groups and two control groups, according to the Solomon experimental design. Intervention groups received educational leaflets and control groups did not. In one of the intervention groups and one of the control groups, pre-testing of knowledge about sleep was performed. Students answered the Sleep Knowledge Test, which was constructed in accordance with the information on the leaflet. Data were analyzed by four-way ANOVA, and additional analyses of simple main effects were performed.

Results

A positive effect of the educational leaflet was found in students aged 15 (F = 28.46; P < 0.001), 16 (F = 5.74; P = 0.017), and 17 (F = 17.17; P < 0.001), but there was no effect in students aged 18 (P = 0.467). In male students, a positive effect of the leaflet was found only in the group that had not been pre-tested (F = 6.29; P = 0.012), while in female students it was found in both the pre-tested (F = 26.24; P < 0.001) and the non-pre-tested group (F = 17.36; P < 0.001), with a greater effect in the pre-tested group (F = 5.70; P = 0.017). Female students generally showed better knowledge about sleep than male students (F = 95.95; P < 0.001).
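The four-way ANOVA itself requires a statistics package, but the F statistic underlying each of the comparisons above reduces to a ratio of between-group to within-group variance. A stdlib sketch for the one-factor case, with made-up score vectors (not the study's data) standing in for leaflet and control groups:

```python
def one_way_f(groups):
    """F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares (group mean vs grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (score vs its group mean)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical sleep-knowledge scores: leaflet group vs control group
f = one_way_f([[14, 15, 16, 15], [11, 12, 12, 13]])
print(f"F = {f:.2f}")  # F = 27.00
```

A large F, as here, means the group means differ far more than the scatter within groups would predict by chance.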

Conclusion

Educational leaflets can be an effective first step in educating younger high school students about healthy sleep, with the method being more effective in female adolescents.

Sleep education has been used as a method of primary and secondary prevention of sleep problems in all age groups (1-3). An especially vulnerable age group is adolescents, who frequently have poor sleep habits and suffer from sleep deprivation (4-6). In adolescents, insufficient sleep, inadequate sleep quality, and irregular sleep patterns are associated with daytime sleepiness, negative moods, increased likelihood of stimulant use, higher levels of risk-taking behavior, poor school performance, and increased risk of unintentional injuries (7-10). As a US study has shown, sleepiness was the major causal factor in many traffic accidents, and more than 50% of sleep-related crashes involved drivers aged 25 or younger (11). Having in mind that adolescence is not only a period when sleep problems arise, but also a period when many life habits are established, adolescent education about healthy sleep becomes an important task. Different educational programs and public educational campaigns have been organized to increase the knowledge about healthy sleep and the consequences of sleepiness in adolescents and their parents and teachers (12-14). The effects of such educational programs on adolescents’ sleep knowledge and characteristics have been described by several studies (2,12). Another way to increase knowledge about sleep and to foster positive behavioral changes regarding sleep in adolescents are public education campaigns. To achieve these goals, effective educational methods need to be developed and a systematic evaluation of their effectiveness performed. In this study, we evaluated the effect of our educational effort to increase adolescents’ knowledge about sleep. The method we used was exposure to leaflets, which is a commonly used method in public health campaigns. 
Since some studies have shown sex differences in school performance (15,16), we expected that sleep education would have a different effect on knowledge about sleep in boys than in girls. An effect of age on sleep education may also be expected because of possible differences in the basic knowledge about sleep among students of different ages.

13.

Aim

To assess whether demographic characteristics, self-rated health status, coping behaviors, satisfaction with important interpersonal relationships, financial situation, and current overall quality of life are determinants of sick leave duration in professional soldiers of the Slovenian Armed Forces.

Methods

In 2008, 448 military personnel on active duty in the Slovenian Armed Forces were invited to participate in the study and 390 returned the completed questionnaires (response rate 87%). The questionnaires used were the self-rated health scale, sick leave scale, life satisfaction scale, Folkman-Lazarus’ Ways of Coping Questionnaire, and a demographic data questionnaire. To partition the variance across a wide variety of indicators of participants’ experiences, ordinal modeling procedures were used.

Results

A multivariate ordinal regression model, explaining 24% of sick leave variance, showed that the following variables significantly predicted longer sick leave duration: female sex (estimate, 1.185; 95% confidence interval [CI], 0.579-1.791), poorer self-rated health (estimate, 3.243; 95% CI, 1.755-4.731), lower satisfaction with relationships with coworkers (estimate, 1.333; 95% CI, 0.399-2.267), and lower education (estimate, 1.577; 95% CI, 0.717-2.436). The impact of age and coping mechanisms was not significant.
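Each predictor above is reported as a point estimate with a 95% confidence interval. Under the usual normal approximation, the standard error and Wald z statistic can be recovered from those two numbers; the sketch below applies this to the female-sex estimate from the model (the helper function name is illustrative, not from the study).

```python
import math

def wald_from_ci(estimate, ci_low, ci_high, z=1.96):
    """Recover the standard error and Wald z statistic from a point
    estimate and its 95% CI, assuming a symmetric normal approximation."""
    se = (ci_high - ci_low) / (2 * z)
    return se, estimate / se

# Female sex predictor from the model above: estimate 1.185, 95% CI 0.579-1.791
se, zstat = wald_from_ci(1.185, 0.579, 1.791)
print(f"SE = {se:.3f}, z = {zstat:.2f}")  # SE = 0.309, z = 3.83
```

A |z| well above 1.96, as here, is consistent with the CI excluding zero, ie, a significant predictor.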

Conclusion

Longer sick leave duration was found in women and respondents less satisfied with their relationships with coworkers, and these are the groups to which special attention should be paid when planning supervision, work procedures, and the gender equality policy of the Armed Forces. A good way of increasing the quality of interpersonal relationships at work would be to include such skills in training programs for commanding officers.

Self-rated health represents a person’s comprehensive and subjective assessment of his or her health, which incorporates the subjective feeling of health together with biological, psychological, and socio-economic dimensions (1,2), any present illness, symptoms, and the functional status (3). The term is frequently used in population research and social epidemiology as an indicator of a typical health behavior of the individual (4,5). Self-rated health is associated with physical fitness (3) and predicts morbidity and mortality (6-11). In middle-aged healthy individuals, self-rated health has several predictors: physical and psycho-social working conditions (12), economic situation, psychological status, and lifestyle (13). Among work-related factors the most important is stress, which has been shown to increase the likelihood of taking sick leave (14-17). It has also been shown that the number of days of sick leave increased as self-reported health decreased (13,18). Sick leave duration has been found to have a negative correlation with self-rated health even over a period of 10 years (19). In Sweden, long-term sick leave (>90 days) was taken mostly by women in the public sector, and it was caused by depression-related illness and work-related stress (20). However, the impact of job-related stress as a reason for disability remains unexplained. It is unclear whether this impairment is a result of prolonged stress exposure or a pre-existing susceptibility factor. 
In a study of white-collar workers’ absenteeism, there was no association between employee’s psychological distress, type of employee, and productivity (21). In blue-collar workers, however, high psychological distress resulted in an 18% increase in absenteeism rates (21). A study of 54 264 full-time employees from different levels of the corporate hierarchy showed that elevated psychological distress was associated with increasing absenteeism (22). Subjective health assessment is a valid health status indicator for middle-aged people (23) and can be used to study the relationship between stress, burnout, and organizational conditions at work. The validity of self-rated health can be confirmed by objective assessment methods, for example, by the number of visits to the physician, absenteeism from work, and mortality. In 2008, Eriksson analyzed the connection between sick leave and self-rated health in the Swedish population using the EQ-5D Questionnaire for Health Assessment (24). In Slovenia, only one epidemiological study on self-rated health has been conducted, and it studied the factors leading to poor health ratings (25). Only a few studies have assessed the effects of threats, fears, or various other psychological difficulties on subjective health, and these have shown that subjective health was influenced by perceived threat and stress, a source of which can also be a chronic illness (26). In our previous study, we explored key psychological factors in the members of the Slovenian armed forces who reported poorer bio-psycho-social well-being and more burnout, and therefore had reduced working effectiveness and motivation (27). The present study specifically analyzed the factors predicting absence from work due to illness in professional soldiers of the Slovenian Armed Forces.

14.

Aim

To investigate the involvement of the vesicular membrane trafficking regulator Synaptotagmin IV (Syt IV) in Alzheimer’s disease pathogenesis and to define the cell types containing increased levels of Syt IV in the β-amyloid plaque vicinity.

Methods

Syt IV protein levels in wild type (WT) and Tg2576 mice cortex were determined by Western blot analysis and immunohistochemistry. Co-localization studies using double immunofluorescence staining for Syt IV and markers for astrocytes (glial fibrillary acidic protein), microglia (major histocompatibility complex class II), neurons (neuronal specific nuclear protein), and neurites (neurofilaments) were performed in WT and Tg2576 mouse cerebral cortex.

Results

Western blot analysis showed higher Syt IV levels in Tg2576 mice cortex than in WT cortex. Syt IV was found only in neurons. In plaque vicinity, Syt IV was up-regulated in dystrophic neurons. The Syt IV signal was not up-regulated in the neurons of Tg2576 mice cortex without plaques (resembling the pre-symptomatic conditions).

Conclusions

Syt IV up-regulation within dystrophic neurons probably reflects disrupted vesicular transport and/or impaired protein degradation occurring in Alzheimer’s disease and is probably a consequence, rather than the cause, of neuronal degeneration. Hence, Syt IV up-regulation and/or its accumulation in dystrophic neurons may have adverse effects on the survival of the affected neuron.

The main pathological hallmarks of Alzheimer’s disease (AD) are the formation of amyloid plaques, neurofibrillary tangles, dystrophic neurites, and sometimes activation of glial cells in the brain (1,2). In the vicinity of amyloid plaques, neurons undergo dramatic neuropathological changes, including metabolic disturbances such as altered energy metabolism, dysfunction of vesicular trafficking, neurite breakage, and disruption of neuronal connections (3-8). Synaptotagmin IV (Syt IV) is a protein involved in the regulation of membrane trafficking in neurons and astrocytes (9,10). In hippocampal neurons, it regulates brain-derived neurotrophic factor release (11) and is involved in hippocampus-dependent memory and learning (12,13). In astrocytes, it is implicated in glutamate release (10). Recent data show that Syt IV plays an important role in neurodegenerative processes (14). Syt IV expression can be induced by seizures, drugs, and brain injury, and changes in its expression have been shown in several animal models of neurodegeneration (Parkinson’s disease, brain ischemia, AD) (14-25). However, the exact role of Syt IV in neurodegeneration is unknown. Our previous study showed that the expression of Syt IV mRNA and its protein in the hippocampus and cortex of the Tg2576 mouse model of AD was increased in the tissue surrounding β-amyloid plaques (14). It is not clear whether Syt IV is expressed in astrocytes (10,26,27) and/or in neurons (28,29), ie, whether it regulates the release of pro- or anti-inflammatory cytokines from β-amyloid-associated astrocytes or is involved in neuronal vesicular pathogenesis (5,30). 
Therefore, the present study aimed to determine the type of cells in which Syt IV up-regulation occurs.

15.
AimTo present and evaluate a new screening protocol for amblyopia in preschool children.MethodsThe Zagreb Amblyopia Preschool Screening (ZAPS) study protocol performed screening for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the child failed the re-test, he or she was referred for a comprehensive eye examination at the Eye Clinic.ResultsIn total, 78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively.ConclusionThe ZAPS study used the most discriminative VA test with optotypes in lines, as these do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity reported in evaluations of the diagnostic accuracy of VA tests for screening. The pass level defined at ≤0.1 logMAR for 4-year-old children, using Lea Symbols in lines, missed no amblyopia cases, advocating that both near and distance VA testing should be performed when screening for amblyopia.

Vision disorders in children represent an important public health concern, as they are acknowledged to be the leading cause of handicapping conditions in childhood (1). Amblyopia, a loss of visual acuity (VA) in one or both eyes (2) not immediately restored by refractive correction (3), is the most prevalent vision disorder in the preschool population (4). The estimated prevalence of amblyopia among preschool children varies from 0.3% (4) to 5% (5). In addition, consequences of amblyopia include reduced contrast sensitivity and/or positional disorder (6). 
It develops due to abnormal binocular interaction, foveal pattern vision deprivation, or a combination of both factors during a sensitive period of visual cortex development (7). In adulthood, it remains the leading cause of monocular blindness in the 20-70 year age group (8). The main characteristic of amblyopia is crowding or spatial interference, referring to better VA when single optotypes are used compared with a line of optotypes, where objects surrounding the target object deliver a jumbled percept (9-12). Acuity is limited by letter size; crowding is limited by spacing, not size (12). Since amblyopia is predominantly defined as subnormal VA, a reliable instrument for detecting amblyopia is VA testing (13-15). Moreover, VA testing detects 97% of all ocular anomalies (13). The gold standard for diagnosing amblyopia is complete ophthalmological examination (4). There is a large body of evidence supporting the rationale for screening, as early treatment of amblyopia during the child’s first 5-7 years of life (8) is highly effective in habilitation of VA, while the treatment itself is among the most cost-effective interventions in ophthalmology (16). Preschool vision screening meets all the World Health Organization’s criteria for the evaluation of screening programs (17). A literature search identified no studies reporting harmful or damaging effects of screening. The gold standard for screening for amblyopia has not been established (4). There is a large variety of screening methodologies and inconsistent protocols for referral of positives to complete ophthalmological examination. Lack of information on the validity (18,19) and accuracy (4) of such protocols probably intensifies the debate on determining the most effective method of vision screening (8,20-29). 
A consensus on a unique research definition of amblyopia has not been reached (4,5,30,31), further challenging the standardization of screening protocols. Overall, two groups of screening methods exist: the traditional approach determines VA using VA tests, while the alternative approach identifies amblyogenic factors (27) based on photoscreening or automated refraction. The major difference between the two is that VA-based testing detects amblyopia directly, providing an explicit measure of visual function, while the latter, seeking and determining only the level of refractive status, does not evaluate visual function. In addition, the diagnosis and treatment of amblyopia are governed by the level of VA. On the other hand, amblyogenic factors represent risk factors for amblyopia to evolve. There are two major pitfalls in screening for amblyogenic factors: first, there is a lack of uniform cut-off values for referral, and second, not all amblyogenic factors progress to amblyopia (19). Besides the issue of what should be detected, amblyopia or amblyogenic factors, a question is raised about who should be screened. Among literate children, both 3- and 4-year-old children can be reliably examined. However, 3-year-old children achieved a testability rate of about 80% and a positive predictive rate of 58%, compared with >90% and 75%, respectively, in the 4-year-old group (32). In addition, over-referrals are more common among 3-year-old children (32). These data indicate the age of 4 years as the optimum age to screen for amblyopia. Hence, testability is a relevant contributor in designating the optimal screening test. If VA is to be tested in children, accepted standard tests should be used, with a well-defined age-specific VA threshold determining normal monocular VA. 
For VA testing of preschool children, Lea Symbols (33) and HOTV charts (22,32) are acknowledged as the best practice (34), while tumbling E (28,35,36) and Landolt C (28,37-39) are not appropriate, as discernment of right-left laterality is not yet a fully established skill at this age (34,40). The Allen picture test is not standardized (34,41). Both Lea Symbols and HOTV optotypes can be presented as single optotypes, single optotypes surrounded with four flanking bars, a single line of optotypes surrounded with rectangular crowding bars, or lines of optotypes (22,33,34,41-53). The more the noise, the bigger the “crowding” effect. Isolated single optotypes without crowding overestimate VA (24); hence, they are not used in clinical practice in Sweden (32). If presented in lines, which is recognized as the best composition to detect crowding, test charts can be assembled on the Snellen or the gold standard logMAR principle (34,42,51,54). Age-specific thresholds defining abnormal VA in preschool screening for amblyopia have changed over time from <0.8 to <0.65 for four-year-old children due to an overload of false positives (20). An effective screening test is conclusively demonstrated by both high sensitivity and high specificity. Vision screening tests predominantly demonstrated higher specificity (4). Moreover, sensitivity evidently increased with age, whereas specificity remained evenly high (4). If the confirmatory diagnostic test is expensive or invasive, the criteria for setting the cut-off point advocate minimizing false positives, ie, using a cut-off point with high specificity. Conversely, if the penalty for missing a case is high and treatment exists, the test should maximize true positives and use a cut-off point with high sensitivity (55). A screening test for amblyopia should target high sensitivity to identify children with visual impairment, while the specificity should be high enough not to put an immense load on pediatric ophthalmologists (14). 
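The sensitivity/specificity trade-off discussed above can be made concrete with a short sketch. The counts below are hypothetical (not the ZAPS data) and are chosen to mimic a screen that misses no cases (sensitivity 100%) while referring some healthy children:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from
    screening outcomes cross-tabulated against the gold standard exam."""
    sens = tp / (tp + fn)   # true positives among all diseased
    spec = tn / (tn + fp)   # true negatives among all healthy
    ppv = tp / (tp + fp)    # diseased among all screen-positives
    return sens, spec, ppv

# Hypothetical counts: no missed cases (fn = 0), some over-referrals (fp = 50)
sens, spec, ppv = screening_metrics(tp=120, fp=50, fn=0, tn=1450)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} PPV={ppv:.2%}")
```

Note how, even at high specificity, the over-referrals (false positives) pull the PPV well below 100% — the load on ophthalmologists that the text warns about.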
A complete ophthalmological examination, the confirmatory gold-standard test for amblyopia, is neither invasive nor does it require elaborate technology, while the penalty for missing a case is a lifelong disability. In devising the Zagreb Amblyopia Preschool Screening (ZAPS) study protocol, we decided to use the Lea Symbols in lines test and to screen preschool children aged 48-54 months in order to address the problems outlined above. Near VA testing was introduced in addition to the commonly accepted distance VA testing (14,22,24,32,45,56-69) for several reasons: first, hypermetropia is the most common refractive error in preschool children (70), so near VA testing should more reliably detect its presence; second, the larger the testing distance, the shorter the child's attention span; and third, to increase the accuracy of the test. The pass cut-off level of ≤0.1 logMAR was chosen on the basis of several arguments. Prior to 1992, Sweden used a screening pass cut-off of 0.8 (20). A change in the referral criteria to <0.65 for four-year-old children ensued, as many of the children referred did not require treatment (20). In addition, an amblyopia treatment outcome of achieved VA >0.7 is considered habilitation of normal vision (3,14). Finally, a pass cut-off value of ≤0.1 logMAR at four years can hardly mask serious visual problems; even if such problems are present, we presume they are mild and can be successfully treated at six years, when school-entry vision screening is performed. The aim of the ZAPS study is to present and evaluate a new screening protocol for preschool children aged 48-54 months, established for testing near and distance VA using the Lea Symbols in lines test. Furthermore, we aimed to determine age-specific and chart-specific normative VA thresholds, the testability of the ZAPS study protocol, and the prevalence of amblyopia in the City of Zagreb County. 
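The decimal acuity values quoted above (0.8, 0.65, >0.7) relate to the logMAR scale through the standard conversion logMAR = −log10(decimal VA), which a short sketch makes explicit:

```python
import math

def decimal_to_logmar(decimal_va):
    """Convert decimal visual acuity to logMAR: logMAR = -log10(decimal VA).
    Decimal 1.0 corresponds to 0.0 logMAR."""
    return -math.log10(decimal_va)

# The cut-off values discussed in the text, expressed on the logMAR scale:
for va in (0.8, 0.65, 0.5):
    print(f"decimal {va} -> logMAR {decimal_to_logmar(va):.2f}")
```

Decimal 0.8 converts to about 0.10 logMAR, which is why the ZAPS pass level of ≤0.1 logMAR corresponds to the historical Swedish pass cut-off of 0.8.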
By delivering new evidence on amblyopia screening, the guideline criteria defining the optimal screening test for amblyopia in preschool children can be revised in favor of better identification of visual impairment.

16.

Aim

To explore the prevalence of psychiatric heredity (a family history of psychiatric illness, alcohol dependence disorder, or suicidality) and its association with diagnoses of stress-related disorders established during psychiatric examination of Croatian war veterans.

Methods

The study included 415 war veterans who were psychiatrically assessed and diagnosed by the same psychiatrist during an expert examination conducted for the purposes of compensation seeking. Data were collected by a structured diagnostic procedure.

Results

There was no significant correlation between psychiatric heredity of psychiatric illness, alcohol dependence, or suicidality and diagnosis of posttraumatic stress disorder (PTSD) or PTSD with psychiatric comorbidity. Diagnoses of psychosis or psychosis with comorbidity significantly correlated with psychiatric heredity (φ = 0.111; P = 0.023). There was a statistically significant correlation between maternal psychiatric illness and the patients’ diagnoses of partial PTSD or partial PTSD with comorbidity (φ = 0.104; P = 0.035) and psychosis or psychosis with comorbidity (φ = 0.113; P = 0.022); paternal psychiatric illness and the patients’ diagnoses of psychosis or psychosis with comorbidity (φ = 0.130; P = 0.008), alcohol dependence or alcohol dependence with comorbidity (φ = 0.166; P = 0.001); psychiatric illness in the primary family with the patients’ psychosis or psychosis with comorbidity (φ = 0.115; P = 0.019); alcohol dependence in the primary family with the patients’ personality disorder or personality disorder with comorbidity (φ = 0.099; P = 0.044); and suicidality in the primary family and a diagnosis of personality disorder or personality disorder with comorbidity (φ = 0.128; P = 0.009).
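The φ values reported above are the standard effect-size measure of association for 2×2 tables. As a sketch of how such a coefficient is obtained (the cell counts below are hypothetical, since the study's raw counts are not reported here):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient of the 2x2 table [[a, b], [c, d]],
    e.g. rows = family history yes/no, columns = diagnosis yes/no."""
    numerator = a * d - b * c
    denominator = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return numerator / denominator

# Hypothetical counts for illustration: 30 of 100 veterans with a positive
# family history received the diagnosis vs 55 of 315 without one.
print(round(phi_coefficient(30, 70, 55, 260), 3))
```

These illustrative counts yield a weak positive association of about 0.13, the same order of magnitude as the significant φ values reported in the Results.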

Conclusion

The study confirmed that a positive parental and familial history of psychiatric disorders puts the individual at a higher risk of developing psychiatric illness or alcohol or drug dependence disorder. Psychiatric heredity, however, might not be necessary for an individual exposed to severe combat-related events to develop symptoms of PTSD. There are several risk factors associated with the development of posttraumatic stress disorder (PTSD), such as factors related to cognitive and biological systems and genetic and familial risk (1), environmental and demographic factors (2), and personality and psychiatric anamnesis (3). They are usually grouped into three categories: factors that precede the exposure to trauma, or pre-trauma factors; factors associated with the trauma exposure itself; and post-trauma factors, which are associated with the recovery environment (2,4). Many studies support the hypothesis that pre-trauma factors, such as ongoing life stress, psychiatric history, female sex (3), childhood abuse, low economic status, lack of education, low intelligence, lack of social support (5), belonging to a racial or ethnic minority, previous traumatic events, psychiatric heredity, and a history of perceived life threat, influence the development of stress-related disorders (6). Many findings suggest that ongoing life stress or a prior trauma history sensitizes a person to a new stressor (2,7-9). The same is true for the lack of social support, particularly the loss of support from significant others (2,9-11), as well as from friends and community (12-14). If the community does not have a well-developed plan for providing socioeconomic support to the victims, low socioeconomic status can also be an important predictor of a psychological outcome such as PTSD (2,10,15). Unemployment was recognized as a risk factor for developing PTSD in a survey of 374 trauma survivors (16). 
It is known that PTSD commonly occurs in patients with a previous psychiatric history of mental disorders, such as affective disorders, other anxiety disorders, somatization, substance abuse, or dissociative disorders (17-21). Epidemiological studies showed that pre-existing psychiatric problems are one of the three factors that can predict the development of PTSD (2,22). Pre-existing anxiety disorders, somatoform disorders, and depressive disorders can significantly increase the risk of PTSD (23). Women have a higher vulnerability to PTSD than men if they have experienced sexually motivated violence or had pre-existing anxiety disorders (23,24). A number of studies have examined the effect of gender differences on the predisposition to developing PTSD, with the explanation that women generally have higher rates of depression and anxiety disorders (3,25,26). War-zone stressors were described as more important for PTSD in men, whereas post-trauma resilience-recovery factors were more important for women (27). Lower levels of education and poorer cognitive abilities also appear to be risk factors (25). Golier et al (25) reported that low levels of education and low IQ were associated with poorer recall on word memorization tasks. In addition, this study found that the PTSD group with lower Wechsler Adult Intelligence Scale-Revised (WAIS-R) scores had fewer years of education (25). Nevertheless, some experts provided evidence that poorer cognitive ability in PTSD patients is a consequence rather than a cause of stress-related symptoms (28-31). Studies of war veterans showed that belonging to a racial or ethnic minority was associated with higher rates of PTSD even after adjustment for combat exposure (32,33). Many findings suggest that early trauma in childhood, such as physical or sexual abuse or even neglect, can be associated with adult psychopathology and lead to the development of PTSD (2,5,26,34,35). 
Studies on animal models confirm the lifelong influence of early experience on stress hormone reactivity (36). Along with the reports on the effects of childhood adversity as a risk factor for the later development of PTSD, there is also evidence for the influence of previous exposure to trauma-related events on PTSD (9,26,28). Breslau et al (36) reported that previous trauma experience substantially increased the risk of chronic PTSD. Perceived life threats and coping strategies also carry a high risk for developing PTSD (9,26). For instance, Ozer et al (9) reported that dissociation during trauma exposure has a high predictive value for the later development of PTSD. Moreover, the way in which people process and interpret perceived threats has a great impact on the development or maintenance of PTSD (37,38). Brewin et al (2) reported that individual and family psychiatric history had more uniform predictive effects than other risk factors. Still, this kind of influence has not been thoroughly examined. Given the scarcity of research into the influence of parental psychiatric heredity on the development of stress-related disorders, the aim of our study was to explore the prevalence of, and the correlation between, a hereditary history of psychiatric illness, alcohol dependence, and suicidality and the established diagnosis of stress-related disorders in Croatian 1991-1995 war veterans.

17.

Aim

To assess the frequency and forms of pulmonary tuberculosis at autopsy in a high-traffic hospital in the capital city of a country with a low tuberculosis incidence.

Methods

We performed a retrospective search of autopsy data from the period 2000 to 2009 at Sestre Milosrdnice University Hospital Center, Zagreb, Croatia. We also examined patients’ records and histological slides.

Results

Of 3479 autopsies, we identified 61 tuberculosis cases, corresponding to a frequency of 1.8%. Active tuberculosis was found in 33 cases (54%), 23 of which (70%) were male. Of the 33 active cases, 25 (76%) were clinically unrecognized and 19 (76%) of these were male.
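The percentages in these Results follow directly from the raw counts, as a quick arithmetic check confirms:

```python
# Reproducing the reported percentages from the raw autopsy counts.
autopsies, tb_cases = 3479, 61
active, active_male = 33, 23
unrecognized, unrecognized_male = 25, 19

print(round(tb_cases / autopsies * 100, 1))           # overall frequency: 1.8
print(round(active / tb_cases * 100))                 # active among TB cases: 54
print(round(active_male / active * 100))              # male among active cases: 70
print(round(unrecognized / active * 100))             # unrecognized among active: 76
print(round(unrecognized_male / unrecognized * 100))  # male among unrecognized: 76
```

Each rounded value matches the corresponding figure reported in the abstract.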

Conclusion

Clinically undiagnosed tuberculosis accounted for a substantial proportion of active tuberculosis cases diagnosed at autopsy. Autopsy data may be an important complement to epidemiological data on tuberculosis frequency. Each year, there are nearly 9 million new tuberculosis cases globally and nearly 2 million tuberculosis-related deaths (1,2). Tuberculosis occurs throughout the world, but its incidence varies greatly (3). Preventing transmission of infection from patients to healthy individuals is the best measure against tuberculosis. The new World Health Organization strategy to fight tuberculosis, the Stop TB Strategy (2006-2015), addresses the human immunodeficiency virus epidemic, which has increased the incidence of tuberculosis (4). In 2008, the European Centre for Disease Prevention and Control created a strategy against tuberculosis called the “Framework Action Plan to Fight Tuberculosis in the European Union” (5). The long-term goal of the Stop TB Strategy and the TB Framework Action Plan is to control and ultimately eliminate tuberculosis worldwide, based on four basic principles: ensure prompt and quality care for all; strengthen the capacity of health systems; develop new tools; and build partnerships and collaboration with countries and stakeholders (4,5). Croatia has a low incidence of tuberculosis, which has been steadily decreasing for the last five decades (6). The peak of the epidemic was at the turn of the 19th and 20th century, when more than 400 deaths per 100 000 people occurred as a direct result of tuberculosis (6). In the mid-20th century, the incidence of new tuberculosis cases was 20 000 per 100 000 people (6). In 2009, the incidence of new tuberculosis cases was 20 per 100 000 people (7), and in 2006 nearly all reported cases showed low levels of multidrug resistance (2,6,7). 
In accordance with international and European efforts, Croatia has its own guidelines for the fight against tuberculosis, with the following goals: to cure at least 85% of cases; to detect at least 70% of tuberculosis patients; and to decrease the incidence of the disease to 10 per 100 000 people (6-8). Although tuberculosis can affect any organ, 70%-80% of cases involve pulmonary tuberculosis (2). Generally, it is possible to detect tuberculosis infection 8-10 weeks after exposure based on a positive tuberculin skin test or an interferon-gamma release assay (9). Most infected individuals, however, have latent tuberculosis infection (LTBI), an asymptomatic condition, and cannot transmit the disease (1,2). However, reactivation becomes possible under certain conditions, such as stress or immune suppression (6,10,11). It is believed that individuals with LTBI account for most infections in low-incidence countries like Croatia, and that this problem is compounded by migration and increasing numbers of homeless persons, alcoholics, and drug addicts (6,10,12). Statistics about tuberculosis prevalence may underestimate the number of infected people, since as many as half of the cases of pulmonary tuberculosis seen at autopsy were previously undiagnosed (12,13). In fact, few studies have examined the relationship between tuberculosis diagnoses at autopsy and the reported tuberculosis prevalence in the population (14). This information may help assess whether clinically unrecognized tuberculosis poses a significant public health threat. The present study examined 3479 autopsies performed from 2000 through 2009 to assess the frequency and forms of pulmonary tuberculosis in a country with a low tuberculosis incidence. The results were compared with the number of tuberculosis patients in Croatia recorded in the Croatian Health Service Yearbook for the same period (7,8).

18.

Aim

To investigate the association between parental war involvement and different indicators of psychosocial distress in a community sample of early adolescents ten years after the war in Croatia 1991-1995.

Methods

A total of 695 adolescents were screened with a self-report questionnaire assessing parental war involvement, sociodemographic characteristics, and alcohol and drug consumption. Personality traits were assessed with the Junior Eysenck Personality Questionnaire; depressive symptoms with the Children’s Depression Inventory (CDI); and unintentional injuries, physical fighting, and bullying with the World Health Organization survey Health Behavior in School-aged Children. Suicidal ideation was assessed with three dichotomous items. Suicidal attempts were assessed with one dichotomous item.

Results

Out of 348 boys and 347 girls included in the analysis, 57.7% had at least one veteran parent. Male children of war veterans had higher rates of unintentional injuries (odds ratio [OR], 1.2; 95% confidence interval [CI], 0.56 to 2.63) and more frequent affirmative responses across the full suicidal spectrum (thoughts about death – OR, 2.1; 95% CI, 1.02 to 4.3; thoughts about suicide – OR, 5; 95% CI, 1.72 to 14.66; suicide attempts – OR, 3.6; 95% CI, 1.03 to 12.67). In boys, thoughts about suicide and unintentional injuries remained associated with parental war involvement after adjustment in logistic regression. However, girls were less likely to be affected by parental war involvement and exhibited signs of psychopathology only on the CDI total score.
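The odds ratios above are reported with 95% confidence intervals; a CI that includes 1 (such as 0.56 to 2.63 for unintentional injuries) indicates that the association is not statistically significant on its own. As a sketch of the standard log-based computation (the cell counts below are hypothetical, since the raw data are not reported here):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf log method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(12, 188, 4, 196)
print(f"OR={or_:.1f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Note how a moderate odds ratio can still come with a wide interval that touches 1 when the case counts in the cells are small, which is why the adjusted logistic regression results matter.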

Conclusion

Parental war involvement was associated with negative psychosocial sequelae for male children. This relationship is possibly mediated by some kind of identification or secondary traumatization. Suicidality and unintentional injuries are nonspecific markers of a broad range of psychosocial distress, which is why the suggested target group for preventive interventions should be veteran parents as vectors of this distress. War represents a major stressor that can have long-lasting adverse influences on mental health (1). Milliken et al showed that upon return from the Iraq war, 42.7% of US combat veterans required mental health treatment (2). The most frequent psychopathological consequences of combat exposure are posttraumatic stress disorder (PTSD), anxiety, depression, and psychosomatic complaints (3,4). During the 1991-1995 war in Croatia, more than 300 000 people were recruited into army service (5). Among Croatian war veterans, one of the most prevalent psychiatric diagnoses, and the most common disorder comorbid with PTSD, is depression (6-8). Veterans’ psychological distress inevitably affects those with whom they interact (9). In fact, there is an association between psychopathological disturbances and reduced effectiveness in parenting, perhaps because of veterans’ disrupted social functioning, emotional withdrawal, and decreased desire to interact with their children (9-11). Data on the influence of combat-related depression are scarce. Nonetheless, there is a significant body of evidence suggesting that “civilian” depression may negatively influence parenting behaviors (12,13) and that maternal mental health status following war affects children's adjustment (14). Further, psychological disturbances in war veterans could have a negative horizontal impact on their wives (15,16), and even a long-lasting vertical influence on children and further generations (17,18). 
Since psychiatric disorders associated with war exposure are categorical and not dimensional (19), it is possible that even more veterans exhibit non-specific, sub-threshold psychological problems, which could negatively affect their social, familial, and parenting roles. In addition to the direct individual consequences, soldiers’ absence during deployment might exert a negative influence on the structural stability of the community, which is critical for the welfare of youth because it produces consistency and continuity in social relationships. Structural stability thus helps build trust, enhances social support, and facilitates social control through commitment to community values and norms (20). War also creates a situation of social-norm disintegration, leading to social anomie and an increase in suicidal phenomena (21), which was of particular interest for this investigation. The aim of this study was to explore the association between parental war involvement and mental health problems in children, including depression, risky behaviors, sleep-related problems, and suicidal ideation and attempts.

19.

Aim

To assess the effect of peritonsillar infiltration of ketamine and tramadol on post-tonsillectomy pain and to compare their side effects.

Methods

This double-blind randomized clinical trial was performed on 126 patients aged 5-12 years scheduled for elective tonsillectomy. The patients, all of American Society of Anesthesiologists physical status class I or II, were randomly divided into 3 groups to receive either ketamine, tramadol, or placebo. All patients underwent the same method of anesthesia and surgical procedure. The three groups did not differ according to age, sex, or duration of anesthesia and surgery. Postoperative pain was evaluated using the CHEOPS score. Other parameters, such as the time to the first request for analgesic, hemodynamic parameters, sedation score, nausea, vomiting, and hallucinations, were also assessed during the 12 hours after surgery.

Results

The tramadol group had significantly lower pain scores (P = 0.005), a significantly longer time to the first request for analgesic (P = 0.001), a significantly shorter time to the beginning of the liquid regimen (P = 0.001), and lower hemodynamic parameters, such as blood pressure (P = 0.001) and heart rate (P = 0.001), than the other two groups. The ketamine group had a significantly greater incidence of hallucinations and negative behavior than the tramadol and placebo groups. The groups did not differ significantly in the incidence of nausea and vomiting.

Conclusion

Preoperative peritonsillar infiltration of tramadol can decrease post-tonsillectomy pain, analgesic consumption, and the time to recovery without significant side effects. Registration No: IRCT201103255764N2

Postoperative pain has not only a pathophysiologic impact but also affects the quality of patients’ lives. Improved pain management might therefore speed up recovery and rehabilitation and consequently decrease the time of hospitalization (1). Surgery causes tissue damage and the subsequent release of biochemical agents such as prostaglandins and histamine. These agents then stimulate nociceptors, which send the pain message to the central nervous system to generate the sensation of pain (2-4). Neuroendocrine responses to pain can also cause a hypercoagulable state and immune suppression, leading to hyperglycemia, which can delay wound healing (5). Tonsillectomy is a common surgery in children, and post-tonsillectomy pain is an important concern. The duration and severity of pain depend on the surgical technique, antibiotic and corticosteroid use, preemptive and postoperative pain management, and the patient’s perception of pain (6-9). Many studies have investigated the control of post-tonsillectomy pain using different drugs, such as intravenous opioids, non-steroidal anti-inflammatory drugs, steroids, and ketamine, as well as peritonsillar injection of local anesthetics, opioids, and ketamine (6,7,10-14). Ketamine is an intravenous anesthetic from the phencyclidine family which, because of its antagonist effects on N-methyl-D-aspartate receptors (involved in central pain sensitization), has a regulatory influence on central sensitization and opioid resistance. It can also bind to mu receptors in the spinal cord and brain, causing analgesia. Ketamine can be administered intravenously, intramuscularly, epidurally, rectally, and nasally (15,16). 
Several studies have shown the effects of sub-analgesic doses of ketamine on postoperative pain and opioid consumption (7,13,15-17). Its side effects are hallucinations, delirium, agitation, nausea, vomiting, airway hypersecretion, and increased intracerebral and intraocular pressure (10,11,15,16). Tramadol is an opioid agonist that acts mostly on mu receptors and, to a smaller extent, on kappa and sigma receptors; like antidepressant drugs, it can inhibit serotonin and norepinephrine reuptake and cause analgesia (6,12,18). Its potency is 5 times lower than that of morphine (6,12), but it carries a lower risk of dependency and respiratory depression, without any reported serious toxicity (6,12). However, it has some side effects, such as nausea, vomiting, dizziness, sweating, anaphylactic reactions, and increased intracerebral pressure. It can also lower the seizure threshold (6,12,18,19). Several studies have confirmed the efficacy of tramadol and ketamine against post-tonsillectomy pain (6,10-12,20). In previous studies, the effects of peritonsillar, intravenous, or intramuscular administration of tramadol and ketamine were compared with each other and with placebo, and both drugs were suggested as appropriate for pain management (6,7,10-19,21). Therefore, in this study we directly compared the effect of peritonsillar infiltration of tramadol and ketamine with each other and with placebo.

20.
Aim

To evaluate Klotho and SIRT1 expression in the heart and kidneys of rats with acute and chronic renovascular hypertension.

Methods

Four and sixteen weeks after the induction of renovascular hypertension by clipping the left renal artery, systemic blood pressure, serum angiotensin II level, and the expression of Klotho and SIRT1 proteins and oxidative stress indices in the heart and kidneys were assessed.

Results

SIRT1 level was significantly reduced in the ischemic (left) kidney in the acute and chronic phases of hypertension. In the heart, it decreased in the acute phase but increased in the chronic phase. Klotho levels in the heart and kidneys did not change significantly in either hypertension phase. Superoxide dismutase (SOD) activity in the heart significantly decreased, and SOD, total antioxidant capacity, and malondialdehyde in the ischemic kidney significantly increased during the development of hypertension. Serum angiotensin II level significantly increased in the acute phase of hypertension.

Conclusion

Development of renovascular hypertension was associated with a reduction of SIRT1 expression in the heart and ischemic kidney. As angiotensin II and SIRT1 counteract each other's expression, a SIRT1 reduction in the heart and kidney, along with the influence of systemic/local angiotensin II, seems to be partly responsible for hypertension development. A combination of SIRT1 agonists and angiotensin II antagonists may be considered for use in the treatment of renovascular hypertension.

Hypertension is one of the leading causes of disease burden worldwide, doubling the risk of coronary artery disease (1). The prevalence of hypertension in US adults in the 2013-2016 period ranged from 26.1% in the 20-44 age group to 78.2% among people older than 65 years (2). Despite antihypertensive treatment, the blood pressure of more than half of American adults is not controlled (3). Thus, to be able to produce more effective drugs, the underlying mechanisms of hypertension should be investigated. The most common cause of death in hypertensive patients is hypertensive heart disease, which results from the functional and structural adaptation of the heart to high blood pressure (1). Secondary hypertension is most frequently a result of primary kidney disease. On the other hand, hypertension is a risk factor for kidney damage and end-stage renal disease (1). Hypertension and related cardiovascular diseases are age-dependent (4,5). The aging of the cardiovascular system is an important process determining longevity (6). Sirtuins are a family of enzymes, encoded by SIRT1 to SIRT7 in mammals, that play important roles in longevity (7). These enzymes are abundantly expressed in the nucleus and cytoplasm of several tissues, including the heart and vascular endothelium (8). The best-known member of the sirtuin family is SIRT1, which plays beneficial roles in age-associated metabolic, inflammatory, and cardiovascular diseases (9). SIRT1 has antioxidant, anti-inflammatory, and anti-apoptotic effects in the endothelium and prevents endothelial senescence and dysfunction (10,11). Several studies showed that SIRT1 protected against atherosclerosis (10-13). Increasing SIRT1 expression in mice improved vascular remodeling and hypertension caused by angiotensin II (14). In addition, hyperglycemia causes vascular damage by reducing SIRT1 expression (15). Klotho is a membrane-bound protein that exerts an anti-aging function (16). 
Klotho deficiency leads to a premature aging phenotype and shortens the lifespan (17), while its increased gene expression increases life expectancy (18). Klotho is involved in the prevention of arteriosclerosis, inducing its effects even in tissues that do not express it, which indicates its endocrine role (16). A recent study on Klotho haplodeficient mice showed that Klotho deficiency led to arteriosclerosis and hypertension, but these effects were diminished by increasing SIRT1 activity (19). One of the experimental models used to evaluate secondary hypertension is 2-kidney-1-clip (2K1C) hypertension (20). In this model, a clamp is placed on one of the renal arteries to induce ischemia, while the other renal artery remains intact. This procedure steadily increases blood pressure due to an increased activity of the renin-angiotensin system in the acute phase, and sodium and water retention in the chronic phase (20,21). As SIRT1 and Klotho play a role in blood pressure regulation, and the kidneys play a role in secondary hypertension, we hypothesized that these two proteins may participate in the development of acute and chronic renovascular hypertension. Therefore, the aim of this study was to assess the expression of these two proteins in the heart and in the ischemic and non-ischemic kidneys of 2K1C rats. Furthermore, it has been shown that angiotensin II infusion increases oxidative stress and blood pressure, and that the deleterious effects of angiotensin II on blood pressure and the kidneys can be prevented by inhibiting reactive oxygen species, both after angiotensin II infusion (22) and in 2K1C rats (23). It has also been shown that SIRT1 exerts its beneficial effects by reducing oxidative stress (11,24). Therefore, the amount of oxidative stress in the heart and kidneys of the experimental animals was also assessed.
