Similar Documents

20 similar documents found.
1.
We describe the epidemiology of influenza virus infections in refugees in a camp in rural Southeast Asia during May–October 2009, the first 6 months after identification of pandemic (H1N1) 2009 in Thailand. Influenza A viruses were detected in 20% of patients who had influenza-like illness and in 23% of those who had clinical pneumonia. Seasonal influenza A (H1N1) was the predominant virus circulating during weeks 26–33 (June 25–August 29) and was subsequently replaced by the pandemic strain. A review of passive surveillance for acute respiratory infection did not show an increase in acute respiratory tract infection incidence associated with the arrival of pandemic (H1N1) 2009 in the camp.

2.

Objective

To estimate the case–fatality ratio (CFR) for measles in Nepal, determine the role of risk factors, such as political instability, for measles mortality, and compare the use of a nationally representative sample of outbreaks versus routine surveillance or a localized study to establish the national CFR (nCFR).

Methods

This was a retrospective study of measles cases and deaths in Nepal. Through two-stage random sampling, we selected 37 districts with selection probability proportional to the number of districts in each region, and then randomly selected within each district one outbreak among all those that had occurred between 1 March and 1 September 2004. Cases were identified by interviewing a member of each and every household and tracing contacts. Bivariate analyses were performed to assess the risk factors for a high CFR and determine the time from rash onset until death. Each factor’s contribution to the CFR was determined through multivariate logistic regression. From the number of measles cases and deaths found in the study we calculated the total number of measles cases and deaths for all of Nepal during the study period and in 2004.

Findings

We identified 4657 measles cases and 64 deaths in the study period and area. This yielded a total of about 82 000 cases and 900 deaths for all outbreaks in 2004 and a national CFR of 1.1% (95% confidence interval, CI: 0.5–2.3). CFR ranged from 0.1% in the eastern region to 3.4% in the mid-western region and was highest in politically insecure areas, in the Ganges plains and among cases < 5 years of age. Vitamin A treatment and measles immunization were protective. Most deaths occurred during the first week of illness.

Conclusion

To our knowledge, this is the first CFR study based on a nationally representative sample of measles outbreaks. Routine surveillance and studies of a single outbreak may not yield an accurate nCFR. Increased fatalities associated with political insecurity are a challenge for health-care service delivery. The short period from disease onset to death and reduced mortality from treatment with vitamin A suggest the need for rapid, field-based treatment early in the outbreak.
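The case–fatality arithmetic reported above can be reproduced in outline. The following is a minimal sketch using only the crude counts given in the Findings (4 657 cases, 64 deaths) and a Wilson score interval; the study's national CFR of 1.1% comes from a design-weighted two-stage estimator, so the crude figure differs slightly.

```python
import math

def cfr_with_wilson_ci(deaths: int, cases: int, z: float = 1.96):
    """Crude case-fatality ratio with a Wilson score confidence interval."""
    p = deaths / cases
    denom = 1 + z**2 / cases
    centre = (p + z**2 / (2 * cases)) / denom
    half = z * math.sqrt(p * (1 - p) / cases + z**2 / (4 * cases**2)) / denom
    return p, centre - half, centre + half

# Crude counts reported for the sampled outbreaks in the entry above.
cfr, lo, hi = cfr_with_wilson_ci(deaths=64, cases=4657)
print(f"crude CFR = {cfr:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# The published nCFR of 1.1% (0.5-2.3) additionally weights each outbreak
# by its sampling probability, which this sketch does not attempt.
```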

3.
Objectives. We examined the relationship between obstetrical intervention and preterm birth in the United States between 1991 and 2006.
Methods. We assessed changes in preterm birth, cesarean delivery, labor induction, and associated risks. Logistic regression modeled the odds of preterm obstetrical intervention after risk adjustment.
Results. From 1991 to 2006, the percentage of singleton preterm births increased 13%. The cesarean delivery rate for singleton preterm births increased 47%, and the rate of induced labor doubled. In 2006, 51% of singleton preterm births were spontaneous vaginal deliveries, compared with 69% in 1991. After adjustment for demographic and medical risks, the mother of a preterm infant was 88% (95% confidence interval [CI] = 1.87, 1.90) more likely to have an obstetrical intervention in 2006 than in 1991. Using new birth certificate data from 19 states, we estimated that 42% of singleton preterm infants were delivered via induction or cesarean birth without spontaneous onset of labor.
Conclusions. Obstetrical interventions were related to the increase in the US preterm birth rate between 1991 and 2006. The public health community can play a central role in reducing medically unnecessary interventions.

During the past 15 years, rates of obstetrical interventions have been rising in the United States.1,2 The percentage of births with induced labor more than doubled between 1991 and 2006, from 10.5% to 22.5%.1,2 After a decline in the early 1990s, the cesarean delivery rate increased by 50%, from 20.7% in 1996 to an all-time high of 31.1% in 2006.1 Large increases occurred for both primary and repeat cesarean deliveries and among mothers with no known medical risk factors or indications for cesarean delivery (such as diabetes, hypertension, or premature rupture of membranes).1,3,4 Recent studies have shown that changing primary cesarean rates did not correspond to shifts in mothers’ medical risk profiles but, rather, appeared to be related to increased use of cesarean delivery with all medical conditions.4–6

From 1991 to 2006, the preterm (less than 37 weeks of gestation) birth rate increased by 19%, from 10.8% to 12.8% of all births1; the preterm rate increased by 13% for singletons and by 22% for multiple births. An increase in the preterm birth rate is of concern because rates of death and disability are higher among preterm infants than among infants born at term (37–41 weeks).7–9 Although rates of death and disability are highest among infants born very preterm (less than 32 weeks), mortality rates among moderately preterm (32–33 weeks) and late preterm (34–36 weeks) infants are 7 and 3 times, respectively, the mortality rates for term infants.7

We examined the relationship between changes in the use of obstetrical intervention and changes in the preterm birth rate in the United States between 1991 and 2006. Specifically, we explored trends in singleton preterm births, delivery methods (cesarean or vaginal), and induction of labor.
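The "88% more likely" statement in the Results is an adjusted odds ratio of about 1.88 (95% CI 1.87–1.90) re-expressed as a percentage increase in the odds of intervention. A minimal sketch of that conversion, assuming nothing about the underlying regression beyond the reported interval:

```python
# Convert an adjusted odds ratio (and its CI) into "% more likely" language.
def odds_ratio_to_pct_increase(or_value: float) -> float:
    """Percentage increase in the odds implied by an odds ratio."""
    return (or_value - 1.0) * 100.0

or_point, or_lo, or_hi = 1.88, 1.87, 1.90  # implied adjusted OR, 2006 vs. 1991
print(f"{odds_ratio_to_pct_increase(or_point):.0f}% higher odds "
      f"(95% CI {odds_ratio_to_pct_increase(or_lo):.0f}% to "
      f"{odds_ratio_to_pct_increase(or_hi):.0f}%)")
# Note: this is an increase in odds, not in probability; for common outcomes
# the two diverge, which is why the abstract phrases it as "more likely".
```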

4.
5.
A matched case–control study (95 cases and 220 controls) was designed to study risk factors for atypical scrapie in sheep in France. We analyzed contacts with animals from other flocks, lambing and feeding practices, and exposure to toxic substances. Data on the prnp genotype were collected for some case and control animals and included in a complementary analysis. Sheep dairy farms had a higher risk for scrapie (odds ratio [OR] 15.1, 95% confidence interval [CI] 3.3–69.7). Lower risk was associated with organic farms (OR 0.15, 95% CI 0.02–1.26), feeding corn silage (OR 0.16, 95% CI 0.05–0.53), and feeding vitamin and mineral supplements (OR 0.6, 95% CI 0.32–1.14). Genetic effects were quantitatively important but only marginally changed estimates of other variables. We did not find any risk factor associated with an infectious origin of scrapie. Atypical scrapie could be a spontaneous disease influenced by genetic and metabolic factors.
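The odds ratios quoted for this case–control study come from standard contingency-table and multivariable analysis. A minimal sketch of the univariable 2×2 calculation with a Woolf (log-OR) confidence interval, using made-up cell counts purely for illustration; the actual exposure tables are not given in the abstract.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR for a 2x2 table: a=exposed cases, b=exposed controls,
    c=unexposed cases, d=unexposed controls. Woolf CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# Hypothetical counts only, not taken from the paper.
print(odds_ratio(a=20, b=5, c=75, d=215))
```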

6.
7.
The study describes the characteristics of maternal deaths in the city of Rio de Janeiro, Brazil, during 2000–2003. After investigation by public-health services, 217 maternal deaths were identified among predominantly non-white (48.9%), single (57.1%) women aged 29.6±7.3 years on average. Direct obstetric causes corresponded to 77.4% of the maternal deaths, mainly due to hypertensive disorders. HIV-related diseases accounted for 4% of the maternal deaths. Almost three-fourths of the mothers who died were aged 20–39 years, although the highest risk of maternal death corresponded to the age-group of 40–49 years (248.9 per 100,000 livebirths). The socioeconomic and demographic profiles of maternal deaths in the city of Rio de Janeiro reflected a vulnerable social situation. Appropriate interventions aimed at reducing maternal mortality need to encompass all women of childbearing age, irrespective of the magnitude of the risk of maternal death.

Key words: Causes of death, HIV, Hypertensive disorders, Maternal health, Maternal mortality, Vital statistics, Women's health, Brazil

8.
The language of rights has long permeated discussions about health care in Britain, but during the latter half of the 20th century, patients’ rights achieved a level of unprecedented prominence. By the end of the 1980s, the language of entitlement appeared to have spread into many areas of the National Health Service: consent to treatment, access to information, and the ability to complain were all legally established patients’ rights. Patient organizations played a critical role in both realizing these rights and in popularizing the discourse of rights in health care in Britain. “Rights talk,” however, was not without its drawbacks, as it was unclear what kinds of rights were being exercised and whether these were held by patients, consumers, or citizens.

The idea that patients have rights in relation to health care is a powerful one, but it is a concept that has often generated problems when put into practice. As recent debates about health care reform in the United States make plain, both supporters and detractors of universal health care have been able to use the language of rights to make their case.1 The application of rights to health is no less challenging, however, in countries that do have systems guaranteeing population-wide access, such as Britain with its National Health Service (NHS). In January 2009, the Labour government introduced the NHS Constitution for England, a document that set out a series of rights, responsibilities, and pledges designed to embody the “principles and values” that guide the NHS. Patients were told that they had 25 rights, encompassing areas such as access to health services; quality of care and the environment; access to nationally approved treatments; respect, consent, and confidentiality; informed choice; involvement in their own health care and the wider NHS; and complaint and redress.2 The NHS Constitution, it was claimed, brought together “in one place for the first time in the history of the NHS what staff, patients and public can expect from the NHS.”3

Although the introduction of the NHS Constitution was an important development in the reform of British health care under New Labour, it was certainly not the first attempt to formulate a list of patients’ rights, or to use these to shape the future of health services. From the 1960s onwards, a number of organizations claiming to represent the patient, such as the Patients Association, the Consumers’ Association, the National Consumer Council, and the Community Health Councils, drew on the language of rights to put forward their demands. Concerns about patients’ ability to complain, their access to information, and the presence of medical students during consultations and treatment were framed around the concept of rights. Patient organizations also expended much time and energy drawing up patients’ charters and guides to patients’ rights within the NHS. But where did this language of rights come from? What did it mean to talk about patients’ rights in the context of a collective health system like Britain’s NHS?

In this article, I explore how the language of rights came to enter the discourse around British health care in the 1960s, and how it was developed and applied by patient groups in the 1970s and 1980s. Drawing on the papers of patient organizations, government records, newspapers, and medical journals, I suggest that although the language of patients’ rights held rhetorical power, putting such language into practice was to prove deeply problematic.

9.
Objectives. In an effort to examine national and Chicago, Illinois, progress in meeting the Healthy People 2010 goal of eliminating health disparities, we examined whether disparities between non-Hispanic Black and non-Hispanic White persons widened, narrowed, or stayed the same between 1990 and 2005.
Methods. We examined 15 health status indicators. We determined whether a disparity widened, narrowed, or remained unchanged between 1990 and 2005 by examining the percentage difference in rates between non-Hispanic Black and non-Hispanic White populations at both time points and at each location. We calculated P values to determine whether changes in percentage difference over time were statistically significant.
Results. Disparities between non-Hispanic Black and non-Hispanic White populations widened for 6 of 15 health status indicators examined for the United States (5 significantly), whereas in Chicago the majority of disparities widened (11 of 15, 5 significantly).
Conclusions. Overall, progress toward meeting the Healthy People 2010 goal of eliminating health disparities in the United States and in Chicago remains bleak. With more than 15 years of time and effort spent at the national and local level to reduce disparities, the impact remains negligible.

Racial disparities in health in the United States have been well documented, and federal initiatives have been undertaken to reduce these disparities. One of the first federal initiatives to bring awareness to racial disparities in health was the 1985 Report of the Secretary's Task Force on Black and Minority Health, which highlighted the need for programs and policies to address disparities in health within the United States.1 Many initiatives have followed. The most recent federal initiative is Healthy People 2010, which consists of 2 main goals, 28 focus areas, and 467 objectives. One of the main goals is the elimination of health disparities within the United States.2 This builds upon one of the goals from Healthy People 2000, which aimed at the reduction of health disparities.3

Interestingly, although the reduction and elimination of health disparities are declared priorities, there are few reports that comprehensively examine progress in this area by analyzing changes in multiple indicators. In 2001, Silva et al. published a study of 22 health status indicators in Chicago, Illinois, and compared outcomes for Black and White people between 1980 and 1998.4 An important contribution in this area came from Keppel et al. in 2002 when they evaluated the Healthy People 2000 goal of reducing health disparities at the national level by examining progress in reducing disparities among the 5 largest racial/ethnic groups in the United States for 17 health status indicators between 1990 and 1998.5 The analysis revealed that for the majority of indicators, racial/ethnic disparities had declined over the period on the national level. However, a comparable Chicago-specific analysis by Margellos et al. focusing on non-Hispanic Black–non-Hispanic White disparities found that although the majority of Black–White disparities narrowed nationally between 1990 and 1998, the opposite was true in Chicago with the majority widening over the same interval.6

First, we wanted to determine whether the Black–White disparity within each of 15 health status indicators had widened, narrowed, or stayed the same over a 15-year period in Chicago and in the United States.
Second, we wanted to determine whether, taken together, there was a general shift toward widening, narrowing, or no change in the Black–White disparity in Chicago and the United States. This updates the work of both Keppel et al. and Margellos et al. to consider progress toward reducing and eventually eliminating Black–White disparities nationally and in Chicago for 1990 to 2005, thus adding a 7-year update to each of these previous reports. The analysis of national progress serves as a benchmark to examine Chicago's progress within a national context.
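The widened/narrowed classification described in the Methods rests on the percentage difference between the non-Hispanic Black and non-Hispanic White rates at each time point. A minimal sketch of that comparison, using invented rates for one hypothetical indicator (the study's actual rates are not reproduced in the abstract); the significance testing of the change would additionally require variance estimates.

```python
def pct_difference(black_rate: float, white_rate: float) -> float:
    """Relative (percentage) gap of the Black rate over the White rate."""
    return (black_rate - white_rate) / white_rate * 100.0

def classify(gap_1990: float, gap_2005: float) -> str:
    if gap_2005 > gap_1990:
        return "widened"
    if gap_2005 < gap_1990:
        return "narrowed"
    return "unchanged"

# Invented illustrative rates per 100 000 for a single indicator.
gap_1990 = pct_difference(black_rate=350.0, white_rate=250.0)  # +40%
gap_2005 = pct_difference(black_rate=330.0, white_rate=220.0)  # +50%
print(gap_1990, gap_2005, classify(gap_1990, gap_2005))
```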

10.
Objectives. We examined the roles of gender and poverty in cigarette use and nicotine dependence among adults in the United States.
Methods. Our data were drawn from the 2001–2002 National Epidemiological Survey of Alcoholism and Related Conditions, a nationally representative sample of US adults 18 years and older.
Results. The overall rate of cigarette use declined between 1964 and 2002. Nicotine dependence does not appear to have declined overall, and there is evidence that nicotine dependence has increased among women in recent cohorts. The odds of nicotine dependence among cigarette users appear to have increased significantly in recent cohorts.
Conclusions. Despite recent declines in cigarette use, the prevalence of nicotine dependence has increased among some groups and has remained steady overall, which may be hampering public health initiatives to reduce cigarette use. Efforts to study or curb cigarette use should therefore take nicotine dependence into account.

Cigarette use is the leading preventable cause of death among adults in the United States.1 According to the Centers for Disease Control and Prevention, cigarettes are responsible for approximately 440 000 deaths annually.2 Since 1964, when the first surgeon general's report on smoking and health was released, awareness concerning the harmful effects of cigarette use has risen, and in recent years, there has been evidence of a decline in cigarette use. In 2003, the US Department of Health and Human Services reported that the frequency of smoking among adults had declined from 25% in 1990 to 23% in 2000 and 22.5% in 2002.3 Recent studies indicate that this effect may have reached a plateau between 2004 and 2006.4

The overall decline from 1964 levels is viewed as confirmatory evidence that public health efforts to increase awareness of the health risks of cigarettes4,5 and to decrease cigarette use (e.g., through increased taxation) have succeeded in altering cigarette use behavior.6 However, previous studies have mainly addressed tobacco use or cigarette smoking per se rather than examining amount of cigarette use (i.e., frequency and duration) in detail. Specifically, they have not addressed the regular, heavy cigarette use that frequently characterizes nicotine dependence, which is the pattern of use thought to be the most detrimental to health and longevity.

Among studies that have examined prevalence rates of cigarette consumption across time, few have focused specifically on the prevalence of nicotine dependence, and most have not distinguished between nicotine dependence and nondependent cigarette use in prevalence estimates. However, nicotine dependence warrants study as a separate area of concern.
The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), defines nicotine dependence as a mental disorder,7 and research has suggested that nicotine dependence is as strong an addiction and is as difficult to treat as cocaine addiction.8 In addition, researchers have established that trends in prevalence of cigarette use are not necessarily parallel with trends in prevalence of nicotine dependence.9,10 However, studies that have measured fluctuations in the prevalence of cigarette use have failed to measure fluctuations in nicotine dependence,11 despite the fact that cigarette use and nicotine dependence are likely to have distinct risk factors, courses, treatments, prevention strategies, and outcomes.9,12–14 In addition, the outcomes associated with nicotine dependence are thought to be far more severe than those associated with occasional cigarette use, because negative health outcomes are thought to be fairly proportionate to the number of cigarettes smoked.14–20

Another factor that has changed dramatically in the epidemiology of tobacco consumption and dependence over the past several decades is gender. There have been substantial disparities between men and women in the prevalence of cigarette use and nicotine dependence over the past forty years, with smoking being far more common among men for most of that time.21 However, recent evidence suggests a relatively narrow gender gap in smoking prevalence.22 Little is known about gender differences and changes in prevalence of nicotine dependence over the past several decades.

Socioeconomic status has also been shown to be associated with differences in prevalence of cigarette use. Some reports have suggested that cigarette use may be disproportionately common among those in poverty,23–28 and rates of cigarette use among adults in different socioeconomic groups are thought to have shifted over time.29 Still, previous studies have not provided information on potential disparities in nicotine dependence by gender and poverty status. This information would ideally be elicited by studies that track changes in prevalence of nicotine dependence by means of repeated general population surveys that use consistent measures, carried out over many years. Unfortunately, such data are not available, but given the major public health importance of cigarette use and nicotine dependence, we sought a different way to address these questions. To that end, we decided to use data from a large, cross-sectional survey with excellent measures of cigarette use and nicotine dependence in the United States,30 taking steps to minimize biases from reporting and differential mortality.

In this study, we had 3 goals: (1) to examine rates of nondependent cigarette use over 4 recent birth cohorts among adults in the United States, (2) to examine the prevalence of nicotine dependence among adults over these 4 birth cohorts, and (3) to examine the results by gender and poverty status in order to elicit changes in cigarette use and nicotine dependence among groups that may be especially vulnerable, including women and those in poverty.

11.
Objectives. We assessed the effectiveness of the penalty points system (PPS) introduced in Spain in July 2006 in reducing traffic injuries.
Methods. We performed an evaluation study with an interrupted time-series design. We stratified dependent variables—numbers of drivers involved in injury collisions and people injured in traffic collisions in Spain from 2000 to 2007 (police data)—by age, injury severity, type of road user, road type, and time of collision, and analyzed variables separately by gender. The explanatory variable (the PPS) compared the postintervention period (July 2006 to December 2007) with the preintervention period (January 2000 to June 2006). We used quasi-Poisson regression, controlling for time trend and seasonality.
Results. Among men, we observed a significant risk reduction in the postintervention period for seriously injured drivers (relative risk [RR] = 0.89) and seriously injured people (RR = 0.89). The RRs among women were 0.91 (P = .095) and 0.88 (P < .05), respectively. Risk reduction was greater among male drivers and moped riders and on urban roads.
Conclusions. The PPS was associated with reduced numbers of drivers involved in injury collisions and people injured by traffic collisions in Spain.

Traffic injuries cause considerable mortality and morbidity worldwide. Since 2004, traffic deaths in Spain have followed a downward trend. However, more than 135 000 road users were injured and more than 4000 were killed in 2005, numbers which placed Spain above the mean for the European Union (EU; ranked 13th of the 25 member states).1

The penalty points system (PPS), introduced in Spain on July 1, 2006, attempts to deter drivers from committing traffic offenses. Because the PPS does not exclusively depend on monetary penalties, it affects all drivers irrespective of their income level.2 In Spain, drivers start with a 12-point license (8 points for novice drivers), and the points are gradually removed if certain traffic violations are committed, such as exceeding the speed limit, driving while intoxicated, or using a hand-held mobile phone, culminating in license suspension if all points are lost. Only serious violations result in loss of points, with the number of points removed varying with the severity of the offense (Table 1). Several months before its introduction, the PPS was announced via a publicity campaign in all news media, and was included in the media agenda, giving rise to public debate.

TABLE 1
Number of Points Subtracted From the Driver's License, by Type of Offense, in Spain's Penalty Points System (PPS): Spain, 2000–2007

2 points:
- Speeding > 20 km/h to 30 km/h over the limit (< 50% of the limit)
- Driving without headlights when headlights are required
- Circulating with a person aged < 12 y on a moped or motorcycle, with the statutory exceptions
- Using systems to avoid traffic officers' surveillance or to detect speed cameras
- Stopping or parking at dangerous places (e.g., road junction, tunnel)
- Stopping or parking disturbing circulation, pedestrians, or in lanes reserved for public transport

3 points:
- Speeding > 30 km/h to 40 km/h over the limit (< 50% of the limit)
- Changing direction illegally
- Failing to comply with the safety distance
- Driving while using earphones or hand-held mobile phones
- Driving without seat belt, helmet, and other compulsory safety devices
- Driving on a motorway with a forbidden vehicle

4 points:
- Speeding > 40 km/h over the limit (< 50% of the limit)
- Not obeying stop signs, traffic lights, right-of-ways, and other traffic rules
- Hindering other vehicles from overtaking
- Reversing in motorways
- Not obeying traffic officers' signals
- Throwing objects on the road that may produce a fire or accidents
- Driving with a blood alcohol content of 0.25 mg/L to 0.50 mg/L (0.15 mg/L to 0.30 mg/L for professionals and novices)
- Overtaking dangerously or in locations with limited visibility
- Overtaking putting cyclists at risk
- Careless driving
- Driving without the appropriate license
- Driving with > 50% more than the authorized number of occupants

6 points:
- Speeding > 50% of the limit, at least > 30 km/h
- Driving with a blood alcohol content > 0.50 mg/L (> 0.30 mg/L for professionals and novices)
- Driving under the influence of drugs or other substances
- Refusing analysis of alcohol, drugs, and other similar behaviors
- Dangerous driving, wrong way, races, and other similar behaviors
- For professional drivers, exceeding the maximum permitted uninterrupted driving hours by > 50% or reducing subsequent rest hours by > 50%
Although 20 of the 27 EU member states had adopted a PPS by 2007, to date, few countries have published studies assessing its effectiveness in terms of road safety.4–9 The few studies that have been published are generally simple before–after analyses, with the exception of those by Zambon et al.4 and Pulido et al.9 In addition, most studies have assessed only the impact of PPS on the overall number of people injured or killed, and have not considered gender, type of road user, and other variables that could help to identify in which road user profiles the PPS is effective and in which profiles it is ineffective. In Spain, the effectiveness of the PPS has been assessed only for overall numbers of fatalities on nonurban roads.9 In addition, none of those studies have analyzed changes in risk among drivers, who are the main target of the PPS.

Our objective was to assess the effectiveness of the PPS in reducing the number of drivers involved in injury collisions (i.e., traffic collisions resulting in injury) and the number of people injured in traffic collisions in Spain. Our hypothesis was that the PPS is effective in reducing traffic injuries and that its effectiveness varies with gender, age, injury severity, type of road user, road type, and time of collision.
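The Methods for this entry describe an interrupted time-series quasi-Poisson regression with the PPS as a step change, adjusted for trend and seasonality. The exact model specification is not given in the abstract; the following is a minimal sketch of that class of model in Python/statsmodels, with simulated monthly counts and harmonic seasonal terms standing in for the real police data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated monthly injury counts, Jan 2000 - Dec 2007 (96 months); the PPS
# starts in July 2006 (month index 78). These data are invented.
n = 96
t = np.arange(n)
pps = (t >= 78).astype(int)
mu = np.exp(6.0 - 0.002 * t + 0.08 * np.sin(2 * np.pi * t / 12) - 0.12 * pps)
y = rng.poisson(mu)

X = pd.DataFrame({
    "const": 1.0,
    "trend": t,
    "sin12": np.sin(2 * np.pi * t / 12),  # seasonality
    "cos12": np.cos(2 * np.pi * t / 12),
    "pps": pps,                           # step change at the intervention
})

# Poisson GLM; scale="X2" applies a Pearson chi-square overdispersion
# correction, giving quasi-Poisson-style standard errors.
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
rr = np.exp(fit.params["pps"])
lo, hi = np.exp(fit.conf_int().loc["pps"])
print(f"post-PPS rate ratio = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```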

12.
13.

Objective

Dengue has been reportable in Cambodia since 1980. Virological surveillance began in 2000 and sentinel surveillance was established at six hospitals in 2001. Currently, national surveillance comprises passive and active data collection and reporting on hospitalized children aged 0–15 years. This report summarizes surveillance data collected since 1980.

Methods

Crude data for 1980–2001 are presented, while data from 2002–2008 are used to describe disease trends and the effect of vector control interventions. Trends in dengue incidence were analysed using the Prais–Winsten generalized linear regression model for time series.

Findings

During 1980–2001, epidemics occurred in cycles of 3–4 years, with the cycles subsequently becoming less prominent. For 2002–2008 data, linear regression analysis detected no significant trend in the annual reported age-adjusted incidence of dengue (incidence range: 0.7–3.0 per 1000 population). The incidence declined in 2.7% of the 185 districts studied, was unchanged in 86.2% and increased in 9.6%. The age-specific incidence was highest in infants aged < 1 year and children aged 4–6 years. The incidence was higher during rainy seasons. All four dengue virus (DENV) serotypes were permanently in circulation, though the predominant serotype has alternated between DENV-3 and DENV-2 since 2000. Although larvicide has been distributed in 94 districts since 2002, logistic regression analysis showed no association between the intervention and dengue incidence.

Conclusion

The dengue burden remained high among young children in Cambodia, which reflects intense transmission. The national vector control programme appeared to have little impact on disease incidence.
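The trend analysis in this entry uses Prais–Winsten regression, i.e. a linear model with an AR(1) correction for serially correlated errors in the annual incidence series. statsmodels does not expose Prais–Winsten by name, but GLSAR's iterative Cochrane–Orcutt-style fit is the closest standard equivalent; the sketch below uses invented annual incidence figures, not the Cambodian surveillance data.

```python
import numpy as np
import statsmodels.api as sm

# Invented age-adjusted dengue incidence per 1 000 population, 2002-2008,
# within the reported range of 0.7-3.0.
years = np.arange(2002, 2009)
incidence = np.array([3.0, 0.9, 1.2, 1.1, 2.1, 2.9, 0.7])

X = sm.add_constant(years - years[0])   # intercept + linear year trend
model = sm.GLSAR(incidence, X, rho=1)   # AR(1) error structure
fit = model.iterative_fit(maxiter=20)   # Cochrane-Orcutt-type iterations

slope, p_value = fit.params[1], fit.pvalues[1]
print(f"trend = {slope:+.3f} per year, p = {p_value:.2f}")
# A non-significant slope corresponds to the paper's finding of no overall
# trend in reported incidence over 2002-2008.
```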

14.
During November 2008–July 2009, we investigated the origin of unknown fever in Senegalese patients with a negative malaria test result, focusing on potential rickettsial infection. Using molecular tools, we found evidence for Rickettsia felis–associated illness in the initial days of infection in febrile Senegalese patients without malaria.

15.

Background:

Unsafe medical injections are a prevalent risk factor for viral hepatitis and HIV in India.

Objectives:

This review undertakes a cost–benefit assessment of the auto-disable syringe, now being introduced to prevent the spread of hepatitis B virus, hepatitis C virus, and human immunodeficiency virus (HIV).

Materials and Methods:

The World Health Organization methods for modeling the global burden of disease from unsafe medical injections are reproduced, correcting for the concentrated structure of the HIV epidemic in India. A systematic review of risk factor analyses in India that investigate injection risks is used in the uncertainty analysis.

Results:

The median population attributable fraction for hepatitis B carriage associated with recent injections is 46%, the median fraction of hepatitis C infections attributed to unsafe medical injections is 38%, and the median fraction of incident HIV infections attributed to medical injections is 12% in India. The modeled incidence of blood-borne viruses suggests that introducing the auto-disable syringe will impose an incremental cost of $46–48 per disability adjusted life year (DALY) averted. The epidemiological evidence suggests that the incremental cost of introducing the auto-disable syringe for all medical injections is between $39 and $79 per DALY averted.

Conclusions:

The auto-disable syringe is a cost-effective alternative to the reuse of syringes in a country with low prevalence of blood-borne viruses.
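Two pieces of arithmetic underpin the Results of this entry: the population attributable fraction (PAF) linking unsafe injections to infections, and the incremental cost per DALY averted. A minimal sketch of both, assuming Levin's PAF formula and purely illustrative cost and DALY inputs; the WHO model's actual parameters are not given in the abstract.

```python
def attributable_fraction(prevalence_exposed: float, relative_risk: float) -> float:
    """Levin's population attributable fraction."""
    return (prevalence_exposed * (relative_risk - 1.0)) / (
        1.0 + prevalence_exposed * (relative_risk - 1.0)
    )

def cost_per_daly(incremental_cost: float, dalys_averted: float) -> float:
    """Incremental cost-effectiveness ratio in $ per DALY averted."""
    return incremental_cost / dalys_averted

# Illustrative inputs only (not the study's figures).
paf_hbv = attributable_fraction(prevalence_exposed=0.6, relative_risk=2.5)
print(f"illustrative PAF: {paf_hbv:.0%}")

# For example, an extra $4.6 million in syringe costs averting 100 000 DALYs
# would fall inside the reported $39-79 per DALY range.
print(f"${cost_per_daly(4_600_000, 100_000):.0f} per DALY averted")
```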

16.
17.
18.
19.
Background: Lung cancer and cardiovascular disease (CVD) mortality risks increase with smoking, secondhand smoke (SHS), and exposure to fine particulate matter < 2.5 μm in diameter (PM2.5) from ambient air pollution. Recent research indicates that the exposure–response relationship for CVD is nonlinear, with a steep increase in risk at low exposures and flattening out at higher exposures. Comparable estimates of the exposure–response relationship for lung cancer are required for disease burden estimates and related public health policy assessments.
Objectives: We compared exposure–response relationships of PM2.5 with lung cancer and cardiovascular mortality and considered the implications of the observed differences for efforts to estimate the disease burden of PM2.5.
Methods: Prospective cohort data for 1.2 million adults were collected by the American Cancer Society as part of the Cancer Prevention Study II. We estimated relative risks (RRs) for increments of cigarette smoking, adjusting for various individual risk factors. RRs were plotted against estimated daily dose of PM2.5 from smoking along with comparison estimates for ambient air pollution and SHS.
Results: For lung cancer mortality, excess risk rose nearly linearly, reaching maximum RRs > 40 among long-term heavy smokers. Excess risks for CVD mortality increased steeply at low exposure levels and leveled off at higher exposures, reaching RRs of approximately 2–3 for cigarette smoking.
Conclusions: The exposure–response relationship associated with PM2.5 is qualitatively different for lung cancer versus cardiovascular mortality. At low exposure levels, cardiovascular deaths are projected to account for most of the burden of disease, whereas at high levels of PM2.5, lung cancer becomes proportionately more important.
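The contrast drawn in this entry (near-linear excess risk for lung cancer versus a steep-then-flattening curve for CVD) can be illustrated with simple functional forms. These are toy curves chosen only to show the qualitative difference and the reported orders of magnitude, not the models fitted in the study.

```python
import numpy as np

# Rough daily PM2.5 dose in mg: ambient air at the low end, heavy smoking high.
dose = np.array([0.1, 1.0, 5.0, 20.0, 100.0])

# Toy exposure-response forms (illustrative only):
rr_lung = 1.0 + 0.4 * dose                       # roughly linear excess risk
rr_cvd = 1.0 + 1.6 * (1 - np.exp(-0.5 * dose))   # steep at low dose, saturating

for d, rl, rc in zip(dose, rr_lung, rr_cvd):
    print(f"dose {d:6.1f}: lung-cancer RR ~ {rl:5.1f}, CVD RR ~ {rc:4.2f}")
# At low doses the CVD excess risk dominates; at high doses the linear
# lung-cancer curve overtakes it, mirroring the paper's qualitative conclusion.
```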

20.
Objectives. We examined suicide and suicide attempt rates, patterns, and risk factors among White Mountain Apache youths (aged < 25 years) from 2001 to 2006 as the first phase of a community-based participatory research process to design and evaluate suicide prevention interventions.
Methods. Apache paraprofessionals gathered data as part of a tribally mandated suicide surveillance system. We compared findings to other North American populations.
Results. Between 2001 and 2006, 61% of Apache suicides occurred among youths younger than 25 years. Annual rates among those aged 15 to 24 years were highest: 128.5 per 100 000, 13 times the US all-races rate and 7 times the American Indian and Alaska Native rate. The annual suicide attempt incidence rate in this age group was 3.5%. The male-to-female ratio was 5:1 for suicide and approximately 1:1 for suicide attempts. Hanging was the most common suicide method, and third most common attempt method. The most frequently cited attempt precipitants were family or intimate partner conflict.
Conclusions. An innovative tribal surveillance system identified high suicide and attempt rates and unique patterns and risk factors of suicidal behavior among Apache youths. Findings are guiding targeted suicide prevention programs.

Suicide is the third leading cause of death among US youths aged 10 to 24 years,1 and suicide attempts are a major source of adolescent morbidity in the United States. As behavioral scientists have increasingly recognized youths' suicide behavior as an important and preventable public health problem, Healthy People 2010 has set specific objectives to reduce suicide and suicide attempt rates among youths. Past evidence supports the premise that youth suicide can be prevented by addressing risk factors and promoting early identification, referral, and treatment of mental and substance use disorders. However, risk factors vary across races, ethnic groups, and regions, necessitating targeted formative research and community-specific prevention approaches.2

It is well-documented that American Indians and Alaska Natives have the highest rates of suicide of all US races.3 American Indian and Alaska Native (AIAN) suicides occur predominantly among youths (< 25 years), in contrast to the US general population, in which deaths from suicide are concentrated among the elderly (≥ 65 years).4 Further, there is significant variability in suicide rates among youths across tribes and rural versus urban AIAN populations. Among the 1.3 million American Indians and Alaska Natives residing on or near rural reservation lands tracked by the Indian Health Service, the average rate of suicide per 100 000 is 20.2, with a range of 7.7 (Nashville area) to 45.9 (Alaska area).5 In comparison, for all 4.1 million American Indians and Alaska Natives identified by the US Census, the suicide rate is 11.7.6 Because urban AIAN residents compose approximately 60% of the US Census AIAN population,7 the lower overall census suicide rate indicates that rural reservation suicide rates are higher than urban AIAN suicide rates.

To date, little reservation-specific information on suicide behavior or related risk factors exists to explain differences in rates across AIAN communities and in comparison with other US populations.
Developing the means to collect and analyze local tribal data is key to discerning unique risk factors that are driving local and national disparities in suicide among AIAN youths, and to the public health mission of reducing suicide among youths across the United States and the world.

There are approximately 15 500 White Mountain Apache (Apache) tribal members who reside on the 1.6 million acre Fort Apache Reservation in east-central Arizona. More than half (54%) of the tribal members are younger than 25 years, compared with approximately 35% of the US all-races population.8 In 2001, a cluster of suicides among youths on the Apache reservation led the Tribal Council to enact a resolution to mandate tribal members and community providers to report all suicidal behavior (ideation, attempts, and deaths) to a central data registry. The resulting surveillance system is the first of its kind, gathering data from both community-based and clinical settings.

In 2004, as part of the Johns Hopkins Center for American Indian Health, we partnered with the Apaches to conduct a community-based participatory research (CBPR) project that included formalizing the mandated reporting process, transferring the registry system to an electronic format, analyzing quarterly trends, and engaging community leaders in interpreting surveillance data to inform prevention strategies. Because of the contentious history of research in tribal communities, CBPR methodologies are essential to ensuring a culturally sensitive interpretation of findings and culturally relevant interventions.9 A CBPR approach is particularly important in the complex area of mental health because explanatory models for cause and treatment of mental illness can vary widely across tribal and nontribal cultures.10

We describe the Apache suicide behavior surveillance system, report patterns of Apache youths' suicide and suicide attempts between 2001 and 2006, and compare those rates with those of other tribal and North American populations. We discuss the relevance of the paraprofessional-administered surveillance system and its findings to public health prevention of suicide behavior among youths.
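The headline rates in this entry are ordinary person-year incidence calculations. A minimal sketch of the arithmetic, using counts back-solved from the published rate purely for illustration (the registry's exact numerators and denominators are not reported in the abstract):

```python
def rate_per_100k(events: int, person_years: float) -> float:
    return events / person_years * 100_000

# Illustrative numbers: ~27 suicides among ~3 500 Apache 15-24-year-olds over
# 6 years (2001-2006) approximately reproduce the reported 128.5 per 100 000.
apache_rate = rate_per_100k(events=27, person_years=3_500 * 6)
us_all_races_rate = 10.0   # approximate annual US rate for 15-24-year-olds

print(f"Apache 15-24 rate ~ {apache_rate:.1f} per 100 000")
print(f"rate ratio vs. US all races ~ {apache_rate / us_all_races_rate:.0f}x")
```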
