Similar Articles (20 results)
1.
The duration of immunity to norovirus (NoV) gastroenteritis has been believed to be from 6 months to 2 years. However, several observations are inconsistent with this short period. To gain better estimates of the duration of immunity to NoV, we developed a mathematical model of community NoV transmission. The model was parameterized from the literature and also fit to age-specific incidence data from England and Wales by using maximum likelihood. We developed several scenarios to determine the effect of unknowns regarding transmission and immunity on estimates of the duration of immunity. In the various models, duration of immunity to NoV gastroenteritis was estimated at 4.1 (95% CI 3.2–5.1) to 8.7 (95% CI 6.8–11.3) years. Moreover, we calculated that children (<5 years) are much more infectious than older children and adults. If a vaccine can achieve protection for the duration of natural immunity indicated by our results, its potential health and economic benefits could be substantial.

Key words: Norovirus, modeling, mathematical model, immunity, incidence, vaccination, vaccine development, viruses, enteric infections, acute gastroenteritis

Noroviruses (NoVs) are the most common cause of acute gastroenteritis (AGE) in industrialized countries. In the United States, NoV causes an estimated 21 million cases of AGE (1), 1.7 million outpatient visits (2), 400,000 emergency care visits, 70,000 hospitalizations (3), and 800 deaths annually across all age groups (4). Although the highest rates of disease are in young children, infection and disease occur throughout life (5), despite an antibody seroprevalence >50% and infection rates that approach 100% in older adults (6,7).

Frequently cited estimates of the duration of immunity to NoV are based on human challenge studies conducted in the 1970s. In the first, Parrino et al. challenged volunteers with Norwalk virus (the prototype NoV strain) inoculum multiple times.
Results suggested that immunity to Norwalk AGE lasts from ≈2 months to 2 years (8). A subsequent study with a shorter challenge interval suggested that immunity to Norwalk virus lasts for at least 6 months (9). In addition, the collection of volunteer studies together demonstrates that antibodies against NoV may not confer protection and that protection from infection (serologic response or viral shedding) is harder to achieve than protection from disease (defined as AGE symptoms) (10–14). That said, more recent studies have reported some protection from illness and infection in association with antibodies that block binding of virus-like particles to histo-blood group antigen (HBGA) (13,14). Other studies have also associated genetic resistance to NoV infections with mutations in the 1,2-fucosyltransferase (FUT2) gene (or "secretor" gene) (15). Persons with a nonsecretor gene (FUT2−/−) represent as much as 20% of the European population. Challenge studies have also shown that recently infected volunteers are susceptible to heterologous strains sooner than to homotypic challenge, indicating limited cross-protection (11).

One of many concerns with all classic challenge studies is that the virus dose given to volunteers was several thousand-fold greater than the small amount of virus capable of causing human illness (estimated as 18–1,000 virus particles) (16). Thus, immunity to a lower challenge dose, similar to what might be encountered in the community, might be more robust and broadly protective than the protection against artificial doses encountered in these volunteer studies. Indeed, Teunis et al. have clearly demonstrated a dose-response relationship whereby persons challenged with a higher NoV dose have substantially greater illness risk (16).

Furthermore, in contrast with results of early challenge studies, several observations can be made that, when taken together, are inconsistent with a duration of immunity on the scale of months.
First, the incidence of NoV in the general population has been estimated in several countries as ≈5% per year, with substantially higher rates in children (5). Second, Norwalk virus (GI.1) volunteer studies conducted over 3 decades indicate that approximately one third of genetically susceptible persons (i.e., secretor-positive persons with a functional FUT2 gene) are immune (18,20,22). The point prevalence of immunity in the population (i.e., population immunity) can be approximated by the incidence of infection (or exposure) multiplied by the duration of immunity. If duration of immunity is truly <1 year and incidence is 5%, <5% of the population should have acquired immunity at any given time. However, challenge studies show population immunity levels on the order of 30%–45%, suggesting that our understanding of the duration of immunity is incomplete (8,11,17,18). HBGA-mediated lack of susceptibility may play a key role, but given the high seroprevalence of NoV antibodies and the broad diversity of human HBGAs and NoVs, it cannot solely explain the discrepancy between estimates of duration of immunity and observed NoV incidence. Moreover, population immunity levels may be sustained both by fully susceptible persons acquiring immunity and by boosting of immunity among those previously exposed.
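The back-of-the-envelope relationship above (population immunity ≈ incidence × duration of immunity) can be checked directly. The sketch below is illustrative only, using the figures quoted in the text: 5% annual incidence and candidate durations ranging from <1 year up to the 4.1- and 8.7-year model estimates.

```python
# Steady-state approximation described in the text:
#   population immunity ≈ annual incidence × mean duration of immunity
incidence = 0.05  # ≈5% of the general population infected per year

for duration_years in (0.5, 1.0, 4.1, 8.7):
    immune_share = incidence * duration_years
    print(f"duration {duration_years:4.1f} y -> ~{immune_share:.1%} of population immune")
```

With a duration under 1 year the predicted immune fraction stays below 5%, whereas durations of 4.1–8.7 years give roughly 20%–44%, on the order of the 30%–45% population immunity observed in challenge studies.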

Table 1

Summary of literature review of Norwalk virus volunteer challenge studies*

Study | All (challenged / infected / AGE) | Secretor positive (challenged / infected / AGE) | Secretor negative (challenged / infected) | Strain
Dolin 1971 (10) | 12 / 9 (75) / — | — | — | SM
Wyatt 1974 (11)† | 23 / 16 (70) / — | — | — | NV, MC, HI
Parrino 1977 (8)† | 12 / 6 (50) / — | — | — | NV
Johnson 1990 (17)† | 42 / 31 (74) / 25 (60) | — | — | NV
Graham 1994 (12) | 50 / 41 (82) / 34 (68) | — | — | NV
Lindesmith 2003 (18) | 77 / 34 (44) / 21 (27) | 55 / 35 (64) / 21 (38) | 21 / 0 | NV
Lindesmith 2005 (19) | 15 / 9 (60) / 7 (47) | 12 / 8 (67) / — | 3 / 1 (33) | SM
Atmar 2008 (20) | 21 / 16 (76) / 11 (52) | 21 / 16 (76) / 11 (52) | — | NV
Leon 2011 (21)‡ | 15 / 7 (47) / 5 (33) | 15 / 7 (47) / 5 (33) | — | NV
Atmar 2011 (14)‡ | 41 / 34 (83) / 29 (71) | 41 / 34 (83) / 29 (71) | — | NV
Seitz 2011 (22) | 13 / 10 (77) / 10 (77) | 13 / 10 (77) / 10 (77) | — / 1 (5.6) | NV
Frenck 2012 (23) | 40 / 17 (42) / 12 (30) | 23 / 16 (70) / 12 (52.1) | 17 / — | GII.4

*AGE, acute gastroenteritis; SM, Snow Mountain virus; NV, Norwalk virus; MC, Montgomery County virus; HI, Hawaii virus; GII.4, genogroup 2 type 4. Counts are no. challenged, no. (%) infected, and no. (%) with AGE; — indicates not reported.
†Only includes initial challenge, not subsequent re-challenge.
‡Only includes placebo or control group.

In this study, we aimed to gain better estimates of the duration of immunity to NoV by developing a community-based transmission model that represents the transmission process and natural history of NoV, including the waning of immunity. The model distinguishes between persons susceptible to disease and those susceptible to infection but not disease. We fit the model to age-specific incidence data from a community cohort study. However, several factors related to NoV transmission remain unknown (e.g., the role that asymptomatic persons who shed virus play in transmission). Therefore, we constructed and fit a series of 6 models representing the variety of possible infection processes to gain a more robust estimate of the duration of immunity. This approach does not consider multiple strains or the emergence of new variants, so we are effectively estimating the minimum duration of immunity in the absence of major strain changes.

2.
In early 1976, the novel A/New Jersey/76 (Hsw1N1) influenza virus caused severe respiratory illness in 13 soldiers, with 1 death, at Fort Dix, New Jersey. Because A/New Jersey was similar to the 1918–1919 pandemic virus, rapid outbreak assessment and enhanced surveillance were initiated. A/New Jersey virus was detected only from January 19 to February 9 and did not spread beyond Fort Dix. A/Victoria/75 (H3N2) spread simultaneously, also caused illness, and persisted until March. Up to 230 soldiers were infected with the A/New Jersey virus. Rapid recognition of A/New Jersey, swift outbreak assessment, and enhanced surveillance resulted from excellent collaboration between Fort Dix, New Jersey Department of Health, Walter Reed Army Institute of Research, and Center for Disease Control personnel. Despite efforts to define the events at Fort Dix, many questions remain unanswered, including the following: Where did A/New Jersey come from? Why did transmission stop?

Key words: Influenza, military, respiratory disease, swine, perspective

Revisiting events surrounding the 1976 swine influenza A (H1N1) outbreak may assist those planning for the rapid identification and characterization of threatening contemporary viruses, like avian influenza A (H5N1) (1). The severity of the 1918 influenza A (H1N1) pandemic and evidence for a cycle of pandemics aroused concern that the 1918 disaster could recur (2,3). After the 1918 pandemic, H1N1 strains circulated until the "Asian" influenza A (H2N2) pandemic in 1957 (3). When, in early 1976, cases of influenza in soldiers, mostly recruits, at Fort Dix, New Jersey, were associated with isolation of influenza A (H1N1) serotypes (which in 1976 were labeled Hsw1N1), an intense investigation followed (4).

Of 19,000 people at Fort Dix in January 1976, ≈32% were recruits (basic trainees) (4).
Recruits reported to Fort Dix for 7 weeks of initial training through the basic training reception center, where they lived and were processed into the Army during an intense 3 days of examinations, administrative procedures, and indoctrination. At the reception center, training unit cohorts were formed. Recruits were grouped into 50-member units (platoons) and organized into companies of 4 platoons each. Units formed by week's end moved from the reception center to the basic training quarters. To prevent respiratory illnesses, recruits were isolated in their company areas for 2 weeks and restricted to the military post for 4 weeks (4). Platoon members had close contact with other platoon members, less contact with other platoons in their company, and even less contact with other companies.

On arrival, recruits received the 1975–1976 influenza vaccine (A/Port Chalmers/1/73 [H3N2], A/Scotland/840/74 [H3N2], and B/Hong Kong/15/72) (4). Other soldiers reported directly to advanced training programs of 4 to 12 weeks at Fort Dix immediately after basic training at Fort Dix or elsewhere. These soldiers had received influenza vaccinations in basic training. Civilian employees and soldiers' families were offered vaccine, but only an estimated <40% accepted (4).

Training stopped over the Christmas–New Year's holidays and resumed on January 5, 1976, with an influx of new trainees. The weather was cold (wind chill factors of 0°F to –43°F), and the reception center was crowded (4). Resumption of training was associated with an explosive febrile respiratory disease outbreak involving new arrivals and others. Throat swabs were collected from a sample of hospitalized soldiers with this syndrome. On January 23, the Fort Dix preventive medicine physician learned of 2 isolations of adenovirus type 21 and suspected an adenovirus outbreak (4). He notified the county health department and the New Jersey (NJ) Department of Health of the outbreak (4).
On January 28, an NJ Department of Health official consulted with the military physician and suggested that the explosive, widespread outbreak could be influenza (4). Over the next 2 days, 19 specimens were delivered to the state laboratory, and 7 A/Victoria-like viruses and 3 unknown hemagglutinating agents were identified (4). Specimens were flown to the Center for Disease Control (CDC), Atlanta, Georgia, on February 6, where a fourth unknown agent was found (4).

On February 2, Fort Dix and NJ Department of Health personnel arranged for virologic studies of deaths possibly caused by influenza (4). Tracheal swabs taken on February 5 from a recruit who died on February 4 yielded a fifth unknown agent on February 9. By February 10, laboratory evidence had confirmed that a novel influenza strain was circulating at Fort Dix and that 2 different influenza strains were causing disease. By February 13, all 5 unknown strains were identified as swine influenza A (Hsw1N1). The possibility of laboratory contamination was evaluated (4). No known swine influenza A strains were present in the NJ Department of Health Virus Laboratory before the Fort Dix outbreak. Additionally, all unknown Fort Dix viruses were independently isolated from original specimens at CDC and the Walter Reed Army Institute of Research (WRAIR), Washington, DC. Also, 2 patients with novel virus isolates had convalescent-phase, homologous, hemagglutination-inhibition (HAI) antibody titers of 1:40–1:80, consistent with recent infections. Thus, the new influenza strain had been independently identified in 3 different laboratories, and supporting serologic evidence was developed within 15 days after the original specimens were collected (4).

Table

Key events in the swine influenza A (Hsw1N1) outbreak, Fort Dix, NJ
Date (1976) | Event
January 5 | After the holidays, basic training resumed at Fort Dix, NJ; a sudden, dramatic outbreak of acute respiratory disease followed the influx of new recruit trainees (4).
January 19 | Earliest hospitalization of a Fort Dix soldier with acute respiratory disease attributed to swine influenza A (Hsw1N1) (identified retrospectively by serologic tests) (7,14).
January 21 | Influenza A/Victoria (H3N2) identified away from Fort Dix in NJ civilians (4).
January 23 | Fort Dix received reports of adenovirus type 21 isolations from soldiers ill with respiratory disease and reported the outbreak to the local and state health departments (4).
January 28 | An NJ Department of Health official suggested the Fort Dix outbreak may be due to influenza and offered to process specimens for virus isolation (4).
January 29–30 | 19 specimens sent to NJ Department of Health in 2 shipments (4).
February 2–3 | NJ Department of Health identified 4 isolates of H3N2-like viruses and 2 unknown hemagglutinating agents in 8 specimens sent on January 29. Fort Dix and NJ Department of Health arranged for the study of deaths possibly due to influenza. NJ Department of Health identified 3 H3N2-like viruses and a third unknown hemagglutinating agent in 11 specimens sent on January 30 (4).
February 4 | Fort Dix soldier died with acute respiratory disease (4).
February 5 | Tracheal specimens from the soldier who died on February 4 were sent to NJ Department of Health (4).
February 6 | NJ Department of Health sent the Fort Dix specimens to Center for Disease Control (CDC), Atlanta, GA; CDC identified a fourth unknown hemagglutinating agent in the Fort Dix specimens (4).
February 9 | Specimens from the soldier who died on February 4 yielded a fifth unknown hemagglutinating agent (4). Last hospitalization of an identified Fort Dix soldier with febrile, acute respiratory disease attributed to swine influenza A (Hsw1N1) (identified retrospectively by serologic tests) (7,14).
February 10 | Laboratory evidence supported 2 influenza type A strains circulating on Fort Dix; 1 was a radically new strain. Prospective surveillance for cases in the areas around Fort Dix was initiated; only cases of H3N2 were found (4).
February 13 | Review of laboratory data and information found that all 5 unknown agents were swine influenza A strains (later named A/New Jersey [Hsw1N1]); 3 laboratories independently identified the swine virus from original specimens (serologic data supporting swine influenza A virus infection was later obtained from 2 survivors with A/New Jersey isolates) (4).
February 14–16 | Initial planning meeting between CDC, NJ Department of Health, Fort Dix, and the Walter Reed Army Institute of Research personnel was held in Atlanta, GA. Prospective case finding was initiated at Fort Dix; H3N2 was isolated; Hsw1N1 was not isolated (7). Retrospective case finding was initiated by serologic study of stored serum specimens from Fort Dix soldiers who had been hospitalized for acute respiratory disease; 8 new cases of disease due to Hsw1N1 were identified, with hospitalization dates between January 19 and February 9 (7,14).
February 22–24 | Prospective case finding was again conducted at Fort Dix; H3N2 virus was isolated but not Hsw1N1 (7).
February 27 | Thirty-nine new recruits entering Fort Dix February 21–27 gave blood samples after arrival and 5 weeks later; serologic studies were consistent with influenza immunization but not spread of H3N2 virus. None had a titer rise to Hsw1N1 (11).
March 19 | Prospective surveillance identified the last case of influenza in the areas around Fort Dix; only H3N2 viruses were identified outside of Fort Dix (4).

3.

Background

Diabetic retinopathy is a retinal vascular disorder that affects more than 4.1 million people in the United States. New methods of detecting this life-altering disease and of ensuring adequate follow-up are vital to improving patient outcomes. Wills Eye Hospital and the Centers for Disease Control and Prevention are conducting a collaborative study to initiate a novel diabetic retinopathy screening program in the community setting.

Objective

To evaluate the feasibility of a more widespread, large-scale implementation of this novel model of care for diabetic retinopathy screening in the community setting.

Methods

A simple, self-administered survey was distributed to pharmacists, pharmacy technicians, student pharmacists, and Wills Eye Hospital interns. The survey consisted of open-ended questions, and respondents were given 1 week to respond. A total of 22 surveys were distributed and 16 were completed. The responses were compiled and analyzed to assess the feasibility of implementing this novel screening model in the pharmacy.

Results

The response rate to this pilot survey was 72%. The majority of the responding pharmacy staff members indicated that diabetic retinopathy screening in community pharmacies would greatly benefit patients and could improve patient care. However, they also noted barriers to implementing the screening, such as the cost of carrying out the screenings, the cost of the equipment that would have to be purchased, and the lack of time and shortage of pharmacy staff.

Conclusion

The potential exists for pharmacists to positively influence diabetes care by implementing retinopathy care through the early detection of the disease and reinforcement of the need for follow-up; however, real-world barriers must be addressed before widespread adoption of such a novel model of care becomes feasible.

Diabetic retinopathy is a retinal vascular disorder that often presents asymptomatically in its early stages and can progress to include visual symptoms such as blurry vision, dark or floating spots, vision loss, and complete blindness in its later stages.1–3 Diabetic retinopathy affects more than 4.1 million people in the United States and can severely impact vision.2,4,5 The American Academy of Ophthalmology and the American Diabetes Association recommend that all patients affected by diabetes have fundus examinations at least annually.6,7 Dilated fundus examinations require dilating the pupils to better see into the periphery of the eye. However, lack of time, not understanding the importance of screening for diabetic retinopathy, and lack of access to screening have resulted in 35% to 79% of patients not adhering to current recommendations.7–10

A review of the literature for any type of community pharmacy–based patient intervention, performed in early January of 2013, revealed no published studies on the involvement of community pharmacists in diabetic retinopathy screenings. However, community pharmacy screenings that focused on other diseases have shown promising results for this type of intervention.11–17
Screening type/study (publication year) | Objective | Setting | Participants, type (N) | Results
Chlamydia screening study (2010)11 | Qualitative analysis of pharmacists' views on chlamydia screening | Community pharmacy | Pharmacists (26) | Pharmacists appreciated the opportunity to expand their practice by providing chlamydia screenings, but were hesitant to provide screening to certain women (eg, married women or those in long-term relationships)
Study of diabetes and CV conditions (2006)12 | To assess a new screening model for diabetes, hypertension, and dyslipidemia | Community pharmacy and non-healthcare settings | Patients: high-risk elderly (888) | Pharmacists were able to identify patients with elevated glucose, cholesterol, and blood pressure
Screening for PAD (2011)13 | To evaluate the feasibility of a community pharmacy pharmacist-initiated PAD screening program | Community pharmacy | Patients (39) | The screening program was effective in increasing PAD recognition and demonstrated program feasibility
COPD screening study (2012)14 | To assess the ability of pharmacists in community pharmacies to accurately conduct COPD screenings | Community pharmacy | Patients (185) | Pharmacists are able to effectively conduct COPD screenings and interpret results
Screening for CV risk (2010)15 | To assess the ability of community pharmacists to conduct CV risk screenings | Community pharmacy | Patients (655) | The results show the ability of a CV screening program to improve diagnoses of high-risk individuals and to help contain the burden of CV disease
Osteoporosis screening (2010)16 | To assess an osteoporosis screening and patient education program in community pharmacies | Community pharmacy | Patients (262) | The osteoporosis screening program doubled the number of patients who proceeded for further testing or treatment
Osteoporosis screening study (2008)17 | To develop an effective community pharmacy screening program for the detection of osteoporosis in women | Community pharmacy | Patients: women (159) | With the benefit of an effective screening program, women who were screened revealed high proportions of lifestyle or medication modifications at 3- or 6-month follow-up
COPD indicates chronic obstructive pulmonary disease; CV, cardiovascular; PAD, peripheral arterial disease.

The successful outcomes of these studies provide insight into the feasibility of implementing diabetic retinopathy screenings in community pharmacies, showing that quick interventions, such as friendly reminders and on-the-spot counseling, are effective ways to motivate patients to improve their health.

4.
Review of US Comparative Economic Evidence for Treatment of Metastatic Renal Cell Carcinoma after Failure of First-Line VEGF Inhibitor Therapy     
Michael K. Wong  Xufang Wang  Maruit J. Chulikavit  Zhimei Liu 《American Health & Drug Benefits》2013,6(5):275-286

Background

In 2006, the economic burden of metastatic renal cell carcinoma (mRCC) was estimated to be up to $1.6 billion worldwide and has since grown annually. With the continuing increase of the economic burden of this disease in the United States, there is a growing need for economic analyses to guide treatment and policy decisions for this patient population.

Objective

To evaluate available comparative economic data on targeted therapies for patients with mRCC who have failed first-line targeted therapies.

Method

A broad and comprehensive literature review was conducted of US-based studies between January 1, 2005, and February 11, 2013, evaluating comparative economic evidence for targeted agents that are used as second-line therapy or beyond. Based on the specific search parameters that focused on cost-effectiveness and economic comparisons between vascular endothelial growth factor (VEGF)/VEGF receptor (VEGFr) inhibitors and mammalian target of rapamycin (mTOR) inhibitors, only 7 relevant, US-based economic evaluations were found appropriate for inclusion in the analysis. All authors, who are experts in the health economics and outcomes research field, reviewed the search results. Studies of interest were those with a targeted agent, VEGF/VEGFr or mTOR inhibitor, in at least 1 study arm.

Discussion

As a group, targeted therapies were found to be cost-effective options in treating patients with refractory mRCC in the United States. Oral therapies showed an economic advantage over intravenous agents, presumably because oral therapies have a lower impact on outpatient resources. Based on 3 studies, everolimus has been shown to have an economic advantage over temsirolimus and to be cost-effective compared with sorafenib. No economic comparison between everolimus and axitinib, the only 2 drugs with a National Comprehensive Cancer Network category 1 recommendation for use after the failure of VEGFr tyrosine kinase inhibitors, is available.

Conclusion

The limited and heterogeneous sum of the currently available economic evidence does not allow firm conclusions to be drawn about the most cost-effective targeted treatment option in the second-line setting and beyond in patients with mRCC. It is hoped that ongoing head-to-head therapeutic trials and biomarker studies will help improve the economic efficiency of these expensive agents.

Renal cell carcinoma (RCC) comprises 92% of all kidney cancers and has a poor prognosis, with approximately 10% of patients with metastatic disease surviving beyond 5 years.1 In 2006, the economic burden of metastatic RCC (mRCC) was estimated to be up to $1.6 billion worldwide and has since grown annually.2 A recent review reported that the economic burden of RCC in the United States ranges from $600 million to $5.19 billion, with annual per-patient medical costs of between $16,488 and $43,805.3 Furthermore, these costs will likely increase with the expanded use of targeted agents, based on a 2011 pharmacoeconomic analysis showing that the annual costs to treat patients with RCC receiving these agents are 3- to 4-fold greater than the costs to treat patients who are not receiving targeted therapies.4 In addition, the incidence and prevalence of RCC are rising, in part because of improved and earlier detection, and because of increases in related risk factors, such as hypertension, diabetes, and obesity.5–7

KEY POINTS

  • The growing economic burden of renal cell carcinoma (RCC) in the United States indicates the need for economic analyses of current therapies to guide treatment decisions for this disease.
  • This article is based on a comprehensive review of 7 studies that were identified within the search criteria for US-based economic data related to targeted therapies for metastatic RCC (mRCC) after failure of first-line therapies.
  • Targeted therapies were shown to be cost-effective for the treatment of refractory mRCC.
  • Oral therapies showed an economic advantage over intravenous agents, presumably because of their lower impact on outpatient resources.
  • No economic comparison is yet available for the only 2 drugs (ie, everolimus and axitinib) with an NCCN category 1 recommendation for use after a vascular endothelial growth factor receptor TKI.
  • Ongoing head-to-head therapeutic trials and biomarker studies may help to improve the economic efficiency of targeted treatments in the second-line setting and beyond for mRCC.
Clear-cell RCC, the most common histology, constitutes 75% of cases of RCC.8 The majority of patients with clear-cell RCC experience a loss of the functional von Hippel-Lindau gene, resulting in the accumulation of hypoxia-inducible factor-1α, an angiogenic factor whose protein synthesis is regulated by mammalian target of rapamycin (mTOR).9 The net effect is overproduction of downstream proteins that promote RCC progression by stimulating cell growth and proliferation, cellular metabolism, and angiogenesis (ie, vascular endothelial growth factor [VEGF], platelet-derived growth factor, and epidermal growth factor).9

Abnormal functioning of the mTOR pathway is therefore thought to play a role in the pathogenesis of RCC; inhibition of mTOR globally decreases protein production, suppresses VEGF synthesis, and induces cell cycle arrest.10 Knowledge of the critical role of VEGF and mTOR in RCC pathogenesis drove the development of targeted agents for the treatment of this disease. The US Food and Drug Administration (FDA) approval of axitinib in January 2012 brings the total of approved targeted agents for RCC to 7 in the past 7 years, making this one of the most prolific areas of cancer drug development (Table 1).11–25 The need for clarity regarding the optimal sequential use of these agents is stronger than ever, particularly given their high price.

Table 1

Targeted Agents Approved for RCC and Pivotal Phase 3 Clinical Trials
Drug, route of administration, approval date | RCC indication | Design of pivotal trial, with PFS in the overall population

Sorafenib, oral (approved December 20, 2005)11 | Advanced RCC | TARGET: randomized, double-blind study of sorafenib (n = 451) vs placebo (n = 452) in patients treated with 1 previous systemic therapy (primarily cytokines)18
  • Median PFS, 5.5 mo with sorafenib vs 2.8 mo with placebo
  • HR, 0.44 (95% CI, 0.35–0.55; P <.001)

Sunitinib, oral (approved February 2, 2007)12 | Advanced RCC | Randomized, open-label study of sunitinib (n = 375) vs IFN-α (n = 375) in treatment-naive patients19
  • Median PFS, 11 mo with sunitinib vs 5 mo with IFN-α
  • HR, 0.539 (95% CI, 0.451–0.643; P <.001)

Temsirolimus, IV (approved May 30, 2007)13 | Advanced RCC | ARCC: randomized, open-label study of temsirolimus (n = 209) vs IFN-α (n = 207) vs temsirolimus + IFN-α (n = 210) in treatment-naive patients with ≥3 of 6 predictors of short survival20
  • Median PFS, 3.8 mo with temsirolimus vs 1.9 mo with IFN-α vs 3.7 mo with temsirolimus + IFN-α
  • HR, not available

Everolimus, oral (approved March 30, 2009)14 | RCC therapy after failure of treatment with sunitinib or sorafenib | RECORD-1: randomized, double-blind study of everolimus (n = 277) vs placebo (n = 139) in patients previously treated with sunitinib and/or sorafenib21
  • Median PFS, 4.9 mo with everolimus vs 1.9 mo with placebo
  • HR, 0.33 (95% CI, 0.25–0.43; P <.001)

Bevacizumab, IV, plus IFN-α, SC (approved August 3, 2009)15 | Metastatic RCC with IFN-α | AVOREN: randomized, double-blind study of bevacizumab + IFN-α (n = 327) vs placebo + IFN-α (n = 322) in treatment-naive patients22
  • Median PFS, 10.2 mo with bevacizumab + IFN-α vs 5.4 mo with placebo + IFN-α
  • HR, 0.63 (95% CI, 0.52–0.75; P = .001)
CALGB 90206: randomized, open-label study of bevacizumab + IFN-α (n = 369) vs IFN-α (n = 363) in treatment-naive patients23
  • Median PFS, 8.5 mo with bevacizumab + IFN-α vs 5.2 mo with IFN-α
  • HR, 0.72 (95% CI, 0.61–0.83; P <.001)

Pazopanib, oral (approved October 19, 2009)16 | First-line treatment of advanced RCC in adults, and treatment of patients who have received previous cytokine therapy for advanced disease | Randomized, double-blind study of pazopanib (n = 290) vs placebo (n = 145) in treatment-naive and cytokine-pretreated patients24
  • Median PFS, 9.2 mo with pazopanib vs 4.2 mo with placebo
  • HR, 0.46 (95% CI, 0.34–0.62; P <.001)

Axitinib, oral (approved January 27, 2012)17 | Treatment of RCC after failure of 1 previous systemic therapy | AXIS: randomized, open-label study of axitinib (n = 361) vs sorafenib (n = 362) in patients treated with 1 previous systemic therapy25
  • Median PFS, 6.7 mo with axitinib vs 4.7 mo with sorafenib
  • HR, 0.665 (95% CI, 0.544–0.812; P <.001)
CI indicates confidence interval; HR, hazard ratio; IFN, interferon; IV, intravenous; PFS, progression-free survival; RCC, renal cell carcinoma; SC, subcutaneous.

The oral VEGF receptor tyrosine kinase inhibitors (VEGFr-TKIs) sunitinib and pazopanib, the VEGF monoclonal antibody bevacizumab plus (subcutaneously injected) interferon-α, and the intravenous (IV) mTOR inhibitor temsirolimus are recommended by the National Comprehensive Cancer Network (NCCN) as first-line therapies for the treatment of mRCC (Table 2).26 The VEGFr-TKI sorafenib is recommended for select patients only. Despite efficacy in mRCC, agents targeted against VEGF only "inhibit" the disease, making resistance almost inevitable and universal, thereby necessitating second-line therapy after the failure of initial VEGF inhibition.18,19,22–24

Table 2

NCCN Treatment Guidelines for mRCC, by Phase 3 Evidence
Setting | Category 1 evidence
Treatment naïve, good or intermediate riska | Sunitinib; Pazopanib; Bevacizumab + IFN-α
Treatment naïve, poor riska | Temsirolimus
Previously treated, previous cytokine | Sorafenib; Sunitinib; Pazopanib; Axitinibb
Previously treated, previous tyrosine kinase inhibitor | Everolimus; Axitinibb
Previously treated, previous mTOR inhibitor | Unknown
aMemorial Sloan-Kettering Cancer Center risk category. bAxitinib has a category 1 recommendation for treatment of patients who have failed ≥1 previous systemic therapy. IFN indicates interferon; mRCC, metastatic renal cell carcinoma; mTOR, mammalian target of rapamycin; NCCN, National Comprehensive Cancer Network. Source: National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®). Kidney cancer. Version 1.2013. 2013.

Because curing metastatic disease with these agents is rare, most patients require lifelong therapy and are destined to cycle through the available treatment options. Guidelines on sequential therapy for the second-line treatment of mRCC and beyond are limited, reflecting a lack of clinical trial–based comparative evidence and/or consensus in this area. In the NCCN guidelines, the oral agents everolimus and axitinib are category 1 recommendations for second-line therapy.26 Despite their clinically proven benefit in extending progression-free survival (PFS), the cost of these agents and their lack of proven survival benefit have led to controversial government reimbursement decisions in some parts of the world (eg, by the National Institute for Health and Care Excellence in the United Kingdom27). Given the lack of prospectively collected data sets assessing the optimal sequence of targeted therapies, as well as the high price of these agents, economic analyses provide important insights into the overall costs versus benefits of targeted therapies, thus helping to inform treatment decisions. In this review, we identify comparative economic evidence beyond the first-line treatment of mRCC and discuss the potential implications of the findings.

5.
The impact of OSHA recordkeeping regulation changes on occupational injury and illness trends in the US: a time-series analysis     
Friedman LS  Forst L 《Occupational and environmental medicine》2007,64(7):454-460

6.
Improving ILI Surveillance using Hospital Staff Influenza-like Absence (ILA)     
Lydia Drumright  Simon D. Frost  Mike Catchpole  John Harrison  Mark Atkins  Penny Parker  Alex J. Elliot  Douglas M. Fleming  Alison H. Holmes 《Online Journal of Public Health Informatics》2013,5(1)

Objective

To address the feasibility and efficiency of a novel syndromic surveillance method, monitoring influenza-like absence (ILA) among hospital staff, to improve national ILI surveillance and inform local hospital preparedness.

Introduction

Surveillance of influenza in the US, UK, and other countries is based primarily on measures of influenza-like illness (ILI) through a combination of syndromic surveillance systems; however, this method may not capture the full spectrum of illness or the total burden of disease. Care-seeking behaviour may change with public beliefs: for example, more people in the UK sought care for pH1N1 in the summer of 2009 than in the winters of 2009/2010 and 2010/2011, resulting in potentially inaccurate estimates from ILI (1). There may also be underreporting of, or delays in reporting, ILI in the community; for example, in the UK those with mild illness are less likely to see a GP (2), and visits generally occur two or more days after onset of symptoms (3). Work absences, if the reason is known, could fill these gaps in detection.

Methods

Weekly counts and rates of hospital staff ILA (attributed to colds or influenza) were compared with GP ILI consultation rates (Royal College of General Practitioners Weekly Returns Service) (4) for 15–64 year olds, and with positive influenza A test results (PITR) for all inpatients hospitalised in the three London hospitals for which staff data were collected, using both retrospective time-series and prospective outbreak-detection methods implemented in the surveillance package in R (5).
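The prospective outbreak-detection step can be illustrated with a much-simplified sliding-window alarm. Note the assumptions: the study used the Bayes subsystem algorithm of the R surveillance package (a negative-binomial predictive bound over the previous six weeks), whereas the sketch below substitutes a mean-plus-2-SD rule, and the weekly counts are made up for illustration:

```python
from statistics import mean, stdev

def weekly_alarms(counts, baseline=6, k=2.0):
    """Flag weeks whose count exceeds mean + k*SD of the previous `baseline` weeks."""
    flagged = []
    for i in range(baseline, len(counts)):
        ref = counts[i - baseline:i]                 # reference window of prior weeks
        if counts[i] > mean(ref) + k * stdev(ref):   # simple predictive threshold
            flagged.append(i)
    return flagged

# Illustrative weekly ILA counts with a step increase starting at week 7
weekly_ila = [12, 10, 14, 11, 13, 12, 13, 30, 45, 38, 20, 14]
print(weekly_alarms(weekly_ila))
```

Because the reference window slides forward, a sustained elevation eventually raises the threshold and the alarm switches off, which is why only the onset weeks are flagged.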

Results

Rates of ILA were about six times higher than rates of ILI. Data on hospital staff ILA demonstrated seasonal trends as defined by ILI. Compared with the ILI rates, ILA gave a more realistic estimate of the relative burden of pandemic H1N1 during July 2009 (1) (Figure). ILA provides potentially earlier warning than GP ILI, as indicated by its ability to predict ILI data for the local region (p < 0.001), as well as its potential for daily ‘real-time’ updates. Using outbreak-detection methods and examining peak weeks, alarms, and thresholds, ILA alarmed, reached threshold rates, and peaked consistently earlier than or in the same week as ILI and PITR, with the exception of July 2009, suggesting that it may be predictive of both community and patient cases of influenza (Table).

Figure: Weekly counts of ILA among hospital staff (blue), PITR among hospital patients (orange), and ILI in the community (red) from April 2008 to March 2011, with prospective alarms for elevated counts (circles) from a Bayesian subsystem algorithm using the previous six weeks as the reference for prediction. Data are plotted as counts rather than rates for clarity.

Table:

Week of the year that alarms commenced and peaks were reached for each of the four official influenza events from March 2008 to April 2011.
Event (week of year for alarm commencement, threshold, and peak, respectively):
Winter 2008/2009: ILA 33, 17, 11; ILI 36, 49, 51; PITR 49, NA, 51
Winter 2009/2010: ILA 38, 36, 44; ILI 39, 40, 49; PITR 39, NA, 44
Winter 2010/2011: ILA 47, 47, 51; ILI 49, 49, 51; PITR 47, NA, 52
Summer 2009: ILA 27, 24, 29; ILI 26, 26, 29; PITR 25, NA, 31
ILA = influenza-like absences among hospital staff; ILI = influenza-like illness from RCGP data in London, ages 15–64; PITR = positive influenza A test results among patients from the same hospitals as staff contributing ILA data. The threshold for ILI data was set at 30/100,000, as defined by the Health Protection Agency; the ILA threshold was set at 60/100,000, such that all ILI rates above the 30/100,000 threshold were also above the ILA threshold.

Conclusions

This study demonstrates the potential of ILA data to complement existing national influenza surveillance systems. This approach could improve the accuracy of influenza monitoring and has the potential to improve the emergency response to influenza for individual hospitals.

7.
US health educators' likelihood of adopting genomic competencies into health promotion     
Chen LS  Kwok OM  Goodson P 《American journal of public health》2008,98(9):1651-1657
Objectives. We examined US health educators’ likelihood of adopting genomic competencies—specific skills and knowledge in public health genomics—into health promotion and the factors influencing such likelihood.Methods. We developed and tested a model to assess likelihood to adopt genomic competencies. Data from 1607 health educators nationwide were collected through a Web-based survey. The model was tested through structural equation modeling.Results. Although participants in our study were not very likely to adopt genomic competencies into their practice, the data supported the proposed model. Awareness, attitudes, and self-efficacy significantly affected health educators’ likelihood to incorporate genomic competencies. The model explained 60.3% of the variance in likelihood to incorporate genomic competencies. Participants’ perceived compatibility between public health genomics and their professional and personal roles, their perceptions of genomics as complex, and the communication channels used to learn about public health genomics significantly related to genomic knowledge and attitudes.Conclusions. Because US health educators in our sample do not appear ready for their professional role in genomics, future research and public health work-force training are needed.The Human Genome Project has motivated extensive research and technological developments regarding genetics and genomics. 
Because most diseases can be associated either with single genes, with multiple genetic variations, or with interactions between genes and environment, advancements in genomic knowledge stand to affect public health—in its quest to improve the biological, environmental, social, and educational conditions fostering health promotion—in unprecedented ways.1,2An emerging field, public health genomics, focuses on “the study and application of knowledge about the elements of the human genome and their functions, including interactions with the environment, in relation to health and disease in populations.”3 This focus signals important “changes in the landscape” of public health2,4 and requires that public health workers develop new professional skills. Health promotion scholars,5,6 alongside many professional organizations and agencies such as the American Public Health Association (APHA),7 the Institute of Medicine,1 the National Coalition for Health Professional Education in Genetics,8 and the Centers for Disease Control and Prevention (CDC),9 have advocated the adoption of specific genomic competencies by the public health workforce. What Caumartin, Baker, and Marrs affirmed of public health students applies invariably to all public health professionals:
Students of Public Health do not need to be geneticists. They should, however, be public health specialists who possess an understanding of how the application of human genetic information and technology is creating a paradigm shift in public health and prevention strategies.10(p569)
The term “genomic competencies” refers to specific skills and knowledge in public health genomics.5,9 According to the CDC,9 as members of the public health workforce, health educators should develop 7 specific genomic competencies (Table 1).

Table 1
Genomic competency | Not likely at all, % | Not likely, % | Somewhat likely, % | Extremely likely, %
1: Translating complex genomic information for use in community-based health education programsa | 40.3 | 36.6 | 20.7 | 2.4
2: Facilitating genomic education for agency staff, administrators, volunteers, community groups, and other interested personnelb | 36.7 | 39.4 | 21.2 | 2.6
3: Developing a plan for incorporating genomics into health education services by working with community organizations, genomic experts, and other stakeholdersc | 32.2 | 39.4 | 23.1 | 5.2
4: Conducting a needs assessment for community-based genomic education programsd | 29.0 | 37.0 | 26.6 | 7.4
5: Advocating for community-based genomic education programse | 28.0 | 41.0 | 26.6 | 4.5
6: Integrating genomic components into community-based genomic education programse | 27.0 | 38.1 | 30.3 | 4.5
7: Evaluating the effectiveness of community-based genomic education programsf | 29.5 | 40.6 | 24.5 | 5.4
aModified from the Centers for Disease Control and Prevention’s (CDC’s) Genomic Competency 1. bModified from the CDC’s Genomic Competency 4. cModified from the CDC’s Genomic Competency 5. dModified from the CDC’s Genomic Competency 6. eModified from the CDC’s Genomic Competency 7. fCreated by the authors.

In fact, given the newness of the field, little research is available regarding relevant issues in public health genomics. In tandem with our previous report on health educators’ knowledge of and attitudes toward genomics, published in Genetics in Medicine,11 the study described here represents an initial step toward better understanding genomics-related elements and their impact on public health practice.
In this report, health educators in the United States are offered as a case study from which careful extrapolations to the entire public health workforce might be appropriate.In our previous study, we assessed US health educators’ attitudes toward genomic competencies, their awareness of efforts in the field to promote and incorporate genomics, and their basic and applied genomic knowledge. Findings indicated that the sample espoused negative attitudes toward genomic competencies, low awareness levels, and deficient knowledge. Yet exposure to training in genetics and genomics appeared to influence attitudes, awareness, and knowledge.11In this study, we examined health educators’ likelihood of adopting genomic competencies into health promotion research and practice and the factors that might influence such likelihood. We proposed a conceptual, theory-based model, grounded in 4 behavior change theories: diffusion of innovations theory,12 the theory of planned behavior,13 the health belief model,14 and social cognitive theory.15 Findings from qualitative, in-depth interviews with 24 health educators also informed the development of this model (Figure 1).Open in a separate windowFIGURE 1—Theoretical model of US health educators’ likelihood of adopting genomic competencies into health promotion research and practice.We tested the model using structural equation modeling techniques, applied to a nationwide sample of US health educators. We chose structural equation modeling because it is a robust statistical technique that handles missing data efficiently, reduces type I error, calculates measurement errors for all variables in the model, simultaneously assesses all variables and their interactions as proposed in the model, and most importantly, examines the “fit” of the hypothetical model to empirical data.16We sought to answer 4 specific questions: (1) How likely are health educators to adopt genomic competencies into health promotion research and practice? 
(2) Does the proposed model adequately explain health educators’ likelihood of adopting genomic competencies? In other words, is the model helpful for understanding what shapes health educators’ likelihood of adopting genomic competencies? (3) How much variance in the likelihood variable is accounted for by the predictor variables in this proposed theoretical model? (4) Which variable in the theoretical model is the best predictor of health educators’ likelihood of adopting genomic competencies into health promotion research and practice? Does this variable differ significantly from other variables?

8.
Enhanced preventive programme at a beryllium oxide ceramics facility reduces beryllium sensitisation among new workers     
Cummings KJ  Deubner DC  Day GA  Henneberger PK  Kitt MM  Kent MS  Kreiss K  Schuler CR 《Occupational and environmental medicine》2007,64(2):134-140

9.
Assessment of Treatment Patterns and Patient Outcomes in Levodopa-Induced Dyskinesias (ASTROID): A US Chart Review Study     
Barb Lennert  Wendy Bibeau  Eileen Farrelly  Patricia Sacco  Tessa Schoor 《American Health & Drug Benefits》2012,5(6):347-358

10.
New and Emerging Drugs and Targets for Type 2 Diabetes: Reviewing the Evidence     
Brien Rex Miller  Hanh Nguyen  Charles Jia-Haur Hu  Chihyi Lin  Quang T. Nguyen 《American Health & Drug Benefits》2014,7(8):452-463

11.
A System for Surveillance Directly from the EMR     
Richard F. Davies  Jason Morin  Ramanjot S. Bhatia  Lambertus de Bruijn 《Online Journal of Public Health Informatics》2013,5(1)

Objective

Our objective was to conduct surveillance of nosocomial infections directly from multiple EMR data streams in a large multi-location Canadian health care facility. The system developed automatically triggers bed-day-level-location-aware reports and detects and tracks the incidents of nosocomial infections in hospital by ward.

Introduction

Hospital acquired infections are a major cause of morbidity, mortality and increased resource utilization. CDC estimates that in the US alone, over 2 million patients are affected by nosocomial infections costing approximately $34.7 billion to $45 billion annually (1). The existing process of detection and reporting relies on time consuming manual processing of records and generation of alerts based on disparate definitions that are not comparable across institutions or even physicians.

Methods

A multi-stakeholder team consisting of experts from medicine, infection control, epidemiology, privacy, computing, artificial intelligence, data fusion, and public health conducted a proof-of-concept study using four complete years of admission records of all patients at the University of Ottawa Heart Institute. Figure 1 lists the data elements investigated. Our system uses the open-source enterprise bus ‘Mirth Connect’ to receive and store data in HL7 format. The processing of information is handled by individual components, and alerts are pushed back to the respective locations. The free-text components were classified using natural language processing. Negation detection was performed using NegEx (2). Data-fusion algorithms were used to merge information to make it meaningful and to allow complex syndrome definitions to be mapped onto the data.
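The negation step can be sketched with a drastically simplified NegEx-style rule: a negation trigger within a fixed token window before the concept marks it negated. The trigger list, window size, and example sentences below are illustrative assumptions only; the real NegEx algorithm uses a curated trigger lexicon with scope and pseudo-negation rules:

```python
import re

# Illustrative subset of negation triggers (real NegEx uses a curated lexicon)
NEG_TRIGGERS = {"no", "denies", "without", "negative"}

def is_negated(text, concept, window=5):
    """True if a negation trigger occurs within `window` tokens before `concept`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if concept.lower() not in tokens:
        return False
    i = tokens.index(concept.lower())
    # Check the pre-concept window for any trigger word
    return any(t in NEG_TRIGGERS for t in tokens[max(0, i - window):i])

print(is_negated("Chest x-ray shows no acute infiltrate", "infiltrate"))  # negated
print(is_negated("Chest x-ray shows a new infiltrate", "infiltrate"))     # asserted
```

A rule like this is what lets the system count only truly positive chest x-ray reports toward an infiltrate, rather than every report that merely mentions one.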

Results

The system monitors ventilator-associated pneumonia (VAP), central line infections (CLI), methicillin-resistant Staphylococcus aureus (MRSA), Clostridium difficile (C. diff), and vancomycin-resistant Enterococcus (VRE). There were 21,452 hospital admissions in 17,670 unique patients over four years. A total of 41,720 CXRs were performed, of which 10,546 were classified as having an infiltrate. 4,575 admissions were associated with at least one CXR showing an infiltrate, 2,266 of which were hospital acquired. Hospital-acquired infiltrates were associated with increased hospital mortality (6.3% vs 2.6%)* and length of stay (19.5 days vs 6.5 days)*. 253 patients had at least one positive blood culture, which was also associated with increased hospital mortality (23.3% vs 2.8%)* and length of stay (40.9 vs 10.8 days)*. (*All p values < 0.00001.)
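Mortality comparisons of this kind are commonly checked with a two-proportion z-test. The sketch below is illustrative only: the counts are back-calculated approximately from the reported rates (23.3% of the 253 culture-positive patients, 2.8% of the remaining admissions), not taken from the study, and the authors do not state which test they used:

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance; returns (z, p)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))     # standard normal CDF at |z|
    return z, 2 * (1 - phi)

# Hypothetical counts reconstructed from the reported mortality rates
z, p = two_proportion_z(59, 253, 594, 21199)
print(round(z, 1), p < 0.00001)
```

With a difference this large relative to the pooled standard error, the p value is far below the study's reported 0.00001 bound.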

Conclusions

This proof of concept system demonstrates the capability of monitoring and analyzing multiple available data streams to automatically detect and track infections without the need for manual data capture and entry. It acquires directly from the EMR data to identify and classify health care events, which can be used to improve health outcomes and costs. The standardization of definitions used for detection will allow for generalization across institutions.
Figure 1. Data elements and sources investigated:
Patient data: medical record number; patient record system; year of birth; sex; partial postal code; ward; transfers; date of admission; date of discharge; isolation/respiratory and enteric precautions status; MRSA/VRE screening status
Radiology: chest x-ray requests; chest x-ray results
Emergency Room: chief complaint; final diagnosis; CTAS code; date of ER visit
Microbiology: bacteriology requests; bacteriology results; virology requests; virology results
Hematology: CBC results
Biochemistry: creatinine
Pharmacy: orders for antidiarrheals, antibiotics, and antivirals; medication list
Surgical Information Management System: operative report or surgical list
Other information: Clinical Stores (requests and utilization of ventilators, masks, gloves, hand sanitizer, and linens); Payroll (staffing levels, absenteeism)

12.
Content Analysis of Syndromic Twitter Data     
Bethany Keffala  Mike Conway  Son Doan  Nigel Collier 《Online Journal of Public Health Informatics》2013,5(1)

Objective

We present an annotation scheme developed to analyze syndromic Twitter data, and the results of its application to a set of respiratory syndrome-related tweets [1]. The scheme was designed to differentiate true positive tweets (where an individual is experiencing respiratory symptoms) from false positive tweets (where an individual is not experiencing respiratory symptoms), and to quantify more fine-grained information within the data.

Introduction

The popularity of Twitter, a social-networking service, creates the opportunity for researchers to collect large amounts of free, localizable data in real time. Data takes the form of short, user-written messages and has been employed for general syndromic surveillance [2] and for surveillance of public attitudes toward the H1N1 flu outbreak [3]. The accessibility of tweets in real time makes them particularly appropriate for use in early warning systems. Data collected through keyword search contains a significant amount of noise; however, annotation can help boost the signal for true positive tweets.

Methods

The annotation scheme was developed based on information relevant for early warning systems (e.g. who is experiencing symptoms, and when) as well as other information present in the tweets (e.g. aspirations regarding symptoms, or abuse of substances such as cough syrup). Categories included Experiencer: Self/Other, Temporality: Current/Non-Current, Sentiment: Positive/Negative, Information: Providing/Seeking, Language: Non-English, Aspiration, Hyperbole, and Substance Abuse. All categories with the exception of Language and Substance Abuse were defined in reference to diseases or symptoms. The scheme was applied to 1,100 respiratory syndrome-related tweets (544 false positive, 556 true positive) from a previously collected corpus of syndromic twitter data [2]. Inter-annotator agreement was calculated for 9% of the data (100 tweets).

Results

Inter-annotator agreement was generally good, although certain categories had lower scores. The Experiencer, Temporality, Sentiment: Negative, Information: Providing, and Language categories all had kappa values above .9; Sentiment: Positive, Aspiration, and Substance Abuse had kappa values above .7; and Information: Seeking and Hyperbole had kappas above .6. There was good separation between true positive and false positive tweets, especially for the Experiencer: Self, Temporality: Current, Sentiment: Negative, Aspiration, Hyperbole, and Substance Abuse categories (see Table).

Table. Category | % true positive tweets | % false positive tweets
Experiencer: Self | 98.2 | 0.4
Temporality: Current | 98.7 | 0.2
Sentiment: Negative | 79.7 | 1.7
Information: Providing | 0.7 | 2.8
Language: Non-English | 2.7 | 1.3
Aspiration | 11.0 | 0.2
Hyperbole | 18.3 | 0.2
Substance Abuse | 1.3 | 8.1
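The kappa statistic reported above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance given each annotator's label frequencies. A minimal sketch, with made-up labels for two annotators:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' labels over the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each annotator's marginal label frequencies
    pe = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (po - pe) / (1 - pe)

# Toy example: two annotators labelling 10 tweets as symptomatic (1) or not (0)
ann1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
ann2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(ann1, ann2), 3))
```

Here the annotators agree on 8 of 10 items (raw agreement 0.8), but because both label about 60% of tweets positive, chance agreement is 0.52 and kappa drops to roughly 0.58.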

Conclusions

Future work will apply the scheme to other syndromes, including constitutional, gastrointestinal, neurological, rash, and hemorrhagic.

13.
Current Therapies and Emerging Drugs in the Pipeline for Type 2 Diabetes     
Quang T. Nguyen  Karmella T. Thomas  Katie B. Lyons  Loida D. Nguyen  Raymond A. Plodkowski 《American Health & Drug Benefits》2011,4(5):303-311

Background

Diabetes is a global epidemic that affects 347 million people worldwide and 25.8 million adults in the United States. The total estimated cost associated with diabetes in the United States in 2007 was $174 billion. In 2009, $16.9 billion was spent on drugs for diabetes. Global sales of diabetes pharmaceuticals totaled $35 billion in 2010 and are expected to rise to $48 billion by 2015. Despite such considerable expenditures, in 2000 only 36% of patients with type 2 diabetes in the United States achieved glycemic control, defined as hemoglobin A1c <7%.

Objective

To review some of the most important drug classes currently in development for the treatment of type 2 diabetes.

Discussion

Despite the 13 classes of antidiabetes medications currently approved by the US Food and Drug Administration (FDA) for the treatment of type 2 diabetes, the majority of patients with this chronic disease do not achieve appropriate glycemic control with these medications. Many new drug classes currently in development for type 2 diabetes appear promising in early stages of development, and some of them represent novel approaches to treatment, with new mechanisms of action and a low potential for hypoglycemia. Among these promising pharmacotherapies are agents that target the kidney, liver, and pancreas as a significant focus of treatment in type 2 diabetes. These investigational agents may potentially offer new approaches to controlling glucose levels and improve outcomes in patients with diabetes. This article focuses on several new classes, including the sodium-glucose cotransporter-2 inhibitors (which are furthest along in development); 11beta-hydroxysteroid dehydrogenase type 1 inhibitors (some of which are now in phase 2 trials); glycogen phosphorylase inhibitors; glucokinase activators; G protein–coupled receptor 119 agonists; protein tyrosine phosphatase 1B inhibitors; and glucagon-receptor antagonists.

Conclusion

Despite the abundance of FDA-approved therapeutic options for type 2 diabetes, the majority of American patients with diabetes are not achieving appropriate glycemic control. The development of new options with new mechanisms of action may potentially help improve outcomes and reduce the clinical and cost burden of this condition.

Diabetes is a chronic, progressive disease that affects approximately 347 million people worldwide.1 In the United States, 25.8 million Americans have diabetes, and another 79 million US adults aged ≥20 years are considered to have prediabetes.2 Diabetes is the leading cause of kidney failure, nontraumatic lower-limb amputations, and new cases of blindness among adults in the United States. It is a major cause of heart disease and stroke and is the seventh leading cause of death among US adults.2 The total estimated cost for diabetes in the United States in 2007 was $174 billion,2 and between 2007 and 2009, the estimated cost attributable to pharmacologic intervention in the treatment of diabetes increased from $12.5 billion to $16.9 billion.3–5 Global sales for diabetes medications totaled $35 billion in 2010 and could rise to $48 billion by 2015, according to the drug research company IMS Health.6,7 In 2009, $1.1 billion was spent on diabetes research by the National Institutes of Health.8 Despite these staggering costs, there are currently still no proven strategies to prevent this disease or its serious complications.

KEY POINTS

  • ▸ Approximately 25.8 million adult Americans have diabetes. In 2007, diabetes cost the United States an estimated $174 billion, and in 2009, $16.9 billion was spent on antidiabetes medications.
  • ▸ Nevertheless, the majority of American patients with diabetes do not achieve glycemic control with the currently available pharmacotherapies.
  • ▸ Several novel and promising medications are currently in development, targeting the kidney, liver, and pancreas in the treatment of type 2 diabetes.
  • ▸ Many of these investigational agents involve new mechanisms of action that offer new therapeutic targets and may help improve glucose control in patients with diabetes.
  • ▸ The new drug classes in development include the sodium-glucose cotransporter-2 inhibitors (which are furthest along in development); 11beta-hydroxysteroid dehydrogenase type 1 inhibitors; glycogen phosphorylase inhibitors; glucokinase activators; G protein–coupled receptor 119 agonists; protein tyrosine phosphatase 1B inhibitors; and glucagon-receptor antagonists.
  • ▸ Several of these new classes are associated with low potential for hypoglycemia, representing a potentially new approach to diabetes drug therapy.
  • ▸ The development of new options with new mechanisms of action may potentially help improve patient outcomes and reduce the clinical and cost burden of this chronic disease.
According to the 1999–2000 National Health and Nutrition Examination Survey, only 36% of patients with type 2 diabetes achieve glycemic control, defined as hemoglobin (Hb) A1c <7%, with currently available therapies.9 Lifestyle modification remains the most important and effective way to treat diabetes; however, the majority of patients with type 2 diabetes are unable to maintain such a rigid lifestyle regimen. For most patients with type 2 diabetes, pharmacologic intervention will therefore be needed to maintain glycemic control.2 Tables 1 and 2 list the 13 classes of medication currently approved by the US Food and Drug Administration (FDA) for the treatment of type 2 diabetes. Despite this abundance of pharmacotherapies, new medications with different mechanisms of action or new approaches to therapy are needed to improve patient outcomes and reduce the clinical and cost burden of this serious condition.

Table 1

FDA-Approved Antidiabetic Agents for the Treatment of Type 2 Diabetes
ClassDrug (brand)Mechanism of actionaHbA1c reduction, %bEffect on weightAdverse effectsaPrecautions/Comments
Alpha-glucosidase inhibitorsAcarbose (Precose) Miglitol (Glyset)Delay complex carbohydrate absorption0.5–0.8Weight neutralFlatulence, diarrhea, abdominal painTitrate slowly to minimize gastrointestinal effects
Amylin analogPramlintide (Symlin)Acts in conjunction with insulin to prolong gastric emptying, reduce postprandial glucose secretion, promote appetite suppression0.5–1Weight lossNausea, vomitingBlack box warning: Coadministration with insulin may induce severe hypoglycemia Injectable drug
BiguanideMetformin (Glucophage)Decrease hepatic glucose output Increase peripheral glucose uptake1–2Weight neutralNausea, vomiting, diarrhea, flatulenceTaken with meals Avoid use in patients with renal or hepatic impairment or with CHF, because of increased risk for lactic acidosis
Bile acid sequestrantColesevelam (Welchol)Binds to intestinal bile acids Mechanism of action for diabetes control unknown0.5Weight neutralConstipation, dyspepsia, nausea 
DPP-4 inhibitorsSitagliptin (Januvia) Saxagliptin (Onglyza) Linagliptin (Tradjenta)Slow inactivation of incretin hormones0.5–0.8Weight neutralNot clinically significant 
Dopamine agonistBromocriptine (Parlodel)Mechanism of action for diabetes control unknown0.5–0.7Weight neutralNausea, vomiting, dizziness, headache, diarrhea 
Incretin mimeticsExenatide (Byetta)
Liraglutide (Victoza)
Stimulate insulin secretion, slow gastric emptying, suppress glucagon release, induce satiety0.5–1Weight lossNausea, vomiting, diarrheaAcute pancreatitis has been reported during postmarketing experience
Injectable drug
Insulin preparations: rapid-, short-, intermediate-, long-acting, premixed (refer to Table 2)Exogenous insulinUp to 3.5Weight gainHypoglycemia 
Nonsulfonylurea secretagoguesNateglinide (Starlix)
Repaglinide (Prandin)
Stimulate insulin secretion from the pancreas1–1.5Weight gainHypoglycemiaTaken with meals to control rapid onset
First-generation sulfonylureasChlorpropamide (Diabinese)
Tolazamide (Tolinase)
Tolbutamide (Orinase)
Stimulate insulin secretion from the pancreas1–2Weight gainHypoglycemiaUse of these agents has declined in response to adverse effects and unpredictable results
Second-generation sulfonylureasGlimepiride (Amaryl)
Glipizide (Glucotrol)
Glyburide (Micronase, Diabeta, Glynase)
Stimulate insulin secretion from the pancreas1–2Weight gainHypoglycemia 
ThiazolidinedionesPioglitazone (Actos)
Rosiglitazone (Avandia)
Increase peripheral tissue insulin sensitivity0.5–1.4Weight gainEdemaBlack box warning: These agents can cause or exacerbate CHF Contraindicated in patients with NYHA class III or IV heart failure
aLacy CF, et al, eds. Drug Information Handbook. 18th ed. Hudson, OH: Lexi-Comp; 2009–2010.bNathan DM, et al. Management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy. Diabetes Care. 2006;29:1963–1972.CHF indicates congestive heart failure; DPP, dipeptidyl peptidase; HbA1c, glycated hemoglobin; NYHA, New York Heart Association.

Table 2

Insulin Preparations
Drug (brand)Onset timeaPeak timeaDurationaComments
Rapid-acting
Insulin aspart (NovoLog)10–20 min1–3 hr3–5 hrAdminister within
Insulin glulisine (Apidra)25 min45–48 min4–5 hr15 min before or immediately after
Insulin lispro (Humalog)15–30 min0.5–2.5 hr3–6.5 hrmeals
Short-acting
Insulin regular (Novolin R, Humulin R)30–60 min1–5 hr6–10 hrAdminister 30 min before meals
Intermediate-acting
Insulin NPH (Novolin N, Humulin N)1–2 hr6–14 hr16–24+ hrCloudy appearance
Long-acting
Insulin detemir (Levemir)1.1–2 hr3.2–9.3 hr5.7–24 hr (dose-dependent)Do not mix with other insulins
Insulin glargine (Lantus)1.1 hrNone24 hr 
Premixed
70% Insulin aspart protamine/30% insulin aspart (NovoLog Mix 70/30)10–20 min1–4 hr24 hrCloudy appearance Administer within 15 min before meals
75% Insulin lispro protamine/25% insulin lispro (Humalog Mix 75/25)15–30 min2 hr22 hr 
50% Insulin lispro protamine/50% insulin lispro protamine (Humalog Mix 50/50)15–30 min2 hr22 hr 
70% Insulin NPH/30% insulin regular (Humulin 70/30, Novolin 70/30)30 min1.5–12 hr24 hrCloudy appearance Administer within 30 min before meals
50% Insulin NPH/50% insulin regular (Humulin 50/50)30–60 min1.5–4.5 hr7.5–24 hr 
a McEvoy GK, ed. American Society of Health-System Pharmacists Drug Information. Bethesda, MD; 2008.
NPH indicates neutral protamine Hagedorn.

Indeed, the number of diabetes medications for type 2 diabetes is expected to grow in the next few years, considering the many promising investigational therapeutic options currently in development that may gain FDA approval in the future. This article reviews some of the therapies that are currently being tested and may soon become new options for the treatment of type 2 diabetes:

Sodium-glucose cotransporter-2 inhibitors. Mechanism: inhibit reabsorption of glucose at the proximal tubule of the kidney, thereby decreasing systemic hyperglycemia. Low potential for hypoglycemia; furthest along in clinical trials.
11beta-hydroxysteroid dehydrogenase type 1 inhibitors. Mechanism: inhibit an enzyme responsible for activating cortisone to cortisol, which minimizes antiglycemic effects of cortisol. Low potential for hypoglycemia; all drugs currently in phase 2 clinical trials.
Glycogen phosphorylase inhibitors. Mechanism: inhibit enzymes responsible for hepatic gluconeogenesis. Still very early in development; oral agents have shown promising results in animals and humans.
Glucokinase activators. Mechanism: activate a key enzyme to increase hepatic glucose metabolism. Several drugs are currently in phase 2 clinical trials.
G protein–coupled receptor 119 agonists. Mechanism unknown; activation induces insulin release and increases secretion of glucagon-like peptide 1 and gastric inhibitory peptide. Still very early in development; animal data are available.
Protein tyrosine phosphatase 1B inhibitors. Mechanism: increase leptin and insulin release. Still very early in development; a potential weight-loss medication.
Glucagon-receptor antagonists. Mechanism: block glucagon from binding to hepatic receptors, thereby decreasing gluconeogenesis. Low potential for hypoglycemia.  相似文献

14.
Serum Zinc Concentration and Acute Diarrhea in Children from Different Regions of Uzbekistan     
Gulnara A. Ibadova  T. A. Merkushina  E. S. Abdumutalova  Aybek V. Khodiev 《Online Journal of Public Health Informatics》2013,5(1)

Objective

To study the blood serum zinc concentration in children with acute diarrhea (AD) in in-patient facilities before and after therapy.

Introduction

There are several reports implicating zinc deficiency in the pathogenesis of acute and chronic diarrhea. A literature review showed that children with diarrhea and chronic gastroduodenitis had zinc deficiency in the majority of cases (1). The normal range of zinc in blood serum is 12.8–27.8 μmol/l (2), with a threshold of 13 μmol/l for the diagnosis of zinc deficiency. A zinc level of 8.2 μmol/l or below is a poor prognostic criterion (3).

Methods

In total, 102 children (1–14 years old) hospitalized with AD in in-patient facilities in different regions were studied for serum zinc concentration before and after treatment. The cities of Termez and Saraosie are located in the south of Uzbekistan, in a region strongly affected by emissions from an aluminum-producing plant located nearby in Tajikistan. Serum zinc levels were measured by the neutron-activation method at the Institute of Nuclear Research (INR).

Results

The zinc concentration in serum varied significantly by region (mean ± SD, μmol/l):

Tashkent city (capital), n = 36: before treatment 13.8 ± 1.5; after treatment 12.5 ± 1.3
Termez city, n = 40: before treatment 9.1 ± 0.08; after treatment 7.47 ± 0.01
Saraosie city, n = 26: before treatment 7.9 ± 0.3; after treatment 7.5 ± 0.8

The level of zinc in children from Tashkent was at the lower normal limit, with a reduction to below normal values after treatment. Children from Termez were already zinc deficient on admission to the in-patient facilities, with a further reduction to the poor prognostic level. Children in Saraosie were admitted with significant zinc deficiency that remained at the poor prognostic level after treatment.
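The cutoffs cited in the introduction (deficiency below 13 μmol/l; 8.2 μmol/l or below as a poor prognostic level) can be applied mechanically to the reported city means. A minimal Python sketch, purely illustrative and not part of the study's methods:

```python
# Classify serum zinc (umol/l) with the cutoffs cited in the abstract:
# <13 umol/l indicates deficiency; <=8.2 umol/l is the poor prognostic level.
def classify_zinc(level_umol_l: float) -> str:
    if level_umol_l <= 8.2:
        return "poor prognosis"
    if level_umol_l < 13.0:
        return "deficient"
    return "normal"

# Mean pre-treatment values from the table above.
for city, mean in {"Tashkent": 13.8, "Termez": 9.1, "Saraosie": 7.9}.items():
    print(f"{city}: {classify_zinc(mean)}")
```

Applied to the pre-treatment means, this reproduces the abstract's reading: Tashkent near normal, Termez deficient, Saraosie at the poor prognostic level.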

Conclusions

The study results may indicate that treatment of AD in children does not replenish zinc to an appropriate level. Although some confounding factors may contribute to the observed zinc disorders, the results suggest that environmental factors, such as pollution from the emissions of an aluminum-producing plant, contribute to the regional differences in zinc concentration and should be considered in the correction and treatment of AD in children.  相似文献

15.
Creating a Transdisciplinary Research Center to Reduce Cardiovascular Health Disparities in Baltimore,Maryland: Lessons Learned     
Lisa A. Cooper  L. Ebony Boulware  Edgar R. Miller  III  Sherita Hill Golden  Kathryn A. Carson  Gary Noronha  Mary Margaret Huizinga  Debra L. Roter  Hsin-Chieh Yeh  Lee R. Bone  David M. Levine  Felicia Hill-Briggs  Jeanne Charleston  Miyong Kim  Nae-Yuh Wang  Hanan Aboumatar  Jennifer P. Halbert  Patti L. Ephraim  Frederick L. Brancati 《American journal of public health》2013,103(11):e26-e38
Cardiovascular disease (CVD) disparities continue to have a negative impact on African Americans in the United States, largely because of uncontrolled hypertension. Despite the availability of evidence-based interventions, their use has not been translated into clinical and public health practice. The Johns Hopkins Center to Eliminate Cardiovascular Health Disparities is a new transdisciplinary research program with a stated goal to lower the impact of CVD disparities on vulnerable populations in Baltimore, Maryland. By targeting multiple levels of influence on the core problem of disparities in Baltimore, the center leverages academic, community, and national partnerships and a novel structure to support 3 research studies and to train the next generation of CVD researchers. We also share the early lessons learned in the center’s design.Racial disparities in hypertension prevalence, control rates with care, and related cardiovascular complications and mortality, are persistent and extensively documented in the United States.1–5 Cardiovascular disease (CVD) accounts for 35% of the excess overall mortality in African Americans, in large part because of hypertension.6,7 Nationwide, eliminating racial disparities in hypertension control would result in more than 5000 fewer deaths from coronary heart disease and more than 2000 fewer deaths from stroke annually in African Americans.8 Despite numerous studies establishing the efficacy of pharmacologic and lifestyle therapies9–12 in African Americans and Whites, blood pressure control rates remain suboptimal, even among persons receiving regular health care.13,14 Barriers to hypertension control exist at multiple levels, including individual patients, health care professionals, the health care system, and patients’ social and environmental context. 
Although successful interventions exist,15–18 these strategies have not been translated into clinical and public health practice. In Baltimore, Maryland, as in the rest of the United States, CVD, including coronary heart disease and stroke, is the leading cause of death. Approximately 2000 people die from CVD in Baltimore each year; these deaths disproportionately affect African Americans,19 making health disparities from CVD a key factor in the racial discrepancy in life expectancy in the city. Cardiovascular disease is a key reason for the 20-year difference in life expectancy between those who live in more affluent neighborhoods (83 years) and those who reside in poorer neighborhoods (63 years) of Baltimore.20

African American adults with hypertension, %: Baltimore 41.3;21 Maryland 39.2;22 United States 38.6.23
White adults with hypertension, %: Baltimore 28.6;21 Maryland 25.1;22 United States 32.3.23
Life expectancy, African Americans (years, at birth in 2009): Baltimore 71.5;24 Maryland 75.5;24 United States 74.5.25
Life expectancy, Whites (years, at birth in 2009): Baltimore 76.5;24 Maryland 79.7;24 United States 78.8.25  相似文献

16.
Comparing Prescription Sales,Google Trends and CDC Data as Flu Activity Indicators     
Avinash Patwardhan  David Lorber 《Online Journal of Public Health Informatics》2013,5(1)
  相似文献   

17.
Spousal Violence in 5 Transitional Countries: A Population-Based Multilevel Analysis of Individual and Contextual Factors     
Leyla Ismayilova 《American journal of public health》2015,105(11):e12-e22
Objectives. I examined the individual- and community-level factors associated with spousal violence in post-Soviet countries.Methods. I used population-based data from the Demographic and Health Survey conducted between 2005 and 2012. My sample included currently married women of reproductive age (n = 3932 in Azerbaijan, n = 4053 in Moldova, n = 1932 in Ukraine, n = 4361 in Kyrgyzstan, and n = 4093 in Tajikistan). I selected respondents using stratified multistage cluster sampling. Because of the nested structure of the data, multilevel logistic regressions for survey data were fitted to examine factors associated with spousal violence in the last 12 months.Results. Partner’s problem drinking was the strongest risk factor associated with spousal violence in all 5 countries. In Moldova, Ukraine, and Kyrgyzstan, women with greater financial power than their spouses were more likely to experience violence. Effects of community economic deprivation and of empowerment status of women in the community on spousal violence differed across countries. Women living in communities with a high tolerance of violence faced a higher risk of spousal violence in Moldova and Ukraine. In more traditional countries (Azerbaijan, Kyrgyzstan, and Tajikistan), spousal violence was lower in conservative communities with patriarchal gender beliefs or higher financial dependency on husbands.Conclusions. My findings underscore the importance of examining individual risk factors in the context of community-level factors and developing individual- and community-level interventions.Understanding factors that contribute to intimate partner violence (IPV) is essential to reducing it and minimizing its deleterious effect on women’s functioning and health. 
Most evidence comes from studies conducted in western industrialized countries or in the developing countries of Africa, Latin America, and Asia1–5; there is scarce knowledge available on IPV in the transitional countries of the former Soviet Union (fSU) region,6 which represents different geopolitical, socioeconomic, and cultural environments.7 Studies from other countries often demonstrate mixed findings regarding key risk factors for spousal violence, which suggests that their effects are context specific.8–11 An examination of cross-country similarities and differences within the fSU region may contribute to the understanding of risk factors for spousal violence in a different sociocultural context.As a part of the Soviet Union for approximately 70 years until its collapse in 1991, the fSU countries shared similar sociopolitical contexts,12 with a legacy of well-established public services, stable jobs, and high levels of education dating back to the Soviet era.13 The political turmoil and economic crisis of the 1990s following the collapse of the Soviet Union and the transition from a socialist to a market economy resulted in high unemployment, deterioration of public services, and growth in poverty and social inequalities, which increased family stress.14My study focused on 5 countries of the fSU that included an additional Domestic Violence (DV) module in the Demographic and Health Survey (DHS), which presented the first opportunity for cross-country comparison in this region using recent nationally representative data. The DHS survey was conducted in 2 Eastern European countries of the fSU (Moldova and Ukraine) and 2 countries located in the Central Asian region (Kyrgyz Republic and Tajikistan); the Caucasus region was represented by Azerbaijan. 
Previous DHS and other nationally representative studies from the fSU region included only individual-level predictors of violence, without examining the role of contextual factors, and focused predominantly on Eastern European countries of the fSU.8,15–17 Despite their shared Soviet background, the 5 countries differ in terms of gender norms and current socioeconomic situations (Table 1). The Eastern European countries (Ukraine and Moldova) share relatively more egalitarian gender norms, whereas Azerbaijan, Tajikistan, and Kyrgyzstan, which are secular Muslim nations, have more traditional values and conservative norms. Women in Kyrgyzstan fall in the middle because of a historically large Russian-speaking population.18–21 Nevertheless, Azerbaijan, Kyrgyzstan, and Tajikistan—where the female literacy rate is close to 100% and polygamous marriages are illegal22—differ from many countries with a traditional Muslim culture because of a history of socialistic ideology, suppression of religion, and universal public education. Although Azerbaijan and Ukraine have exhibited significant economic growth because of rich energy resources, Moldova remains one of the poorest countries in Eastern Europe,23 and Tajikistan maintains the status of the poorest republic in the entire fSU region.

TABLE 1—

Selected Country-Level Indicators for 5 Former Soviet Union Countries: 2005–2012

(Eastern Europe: Moldova, Ukraine. Caucasus: Azerbaijan. Central Asia: Kyrgyzstan, Tajikistan.)

Population (in millions): Moldova 3.6; Ukraine 45.5; Azerbaijan 9.5; Kyrgyzstan 5.9; Tajikistan 8.2
Official language(s): Moldova, Romanian; Ukraine, Ukrainian; Azerbaijan, Azerbaijani; Kyrgyzstan, Kyrgyz and Russian; Tajikistan, Tajik
Area, km2: Moldova 33 846; Ukraine 603 500; Azerbaijan 86 600; Kyrgyzstan 199 951; Tajikistan 142 550
Country's income category: Moldova lower middle; Ukraine lower middle; Azerbaijan upper middle; Kyrgyzstan lower middle; Tajikistan low
GNI per capita, Atlas method, US$: Moldova 2 470; Ukraine 3 960; Azerbaijan 7 350; Kyrgyzstan 1 210; Tajikistan 990
Human development index: Moldova 0.663 (medium); Ukraine 0.734 (high); Azerbaijan 0.747 (high); Kyrgyzstan 0.628 (medium); Tajikistan 0.607 (medium)
Female adult literacy, %: Moldova 99; Ukraine 100; Azerbaijan 100; Kyrgyzstan 99; Tajikistan 100
Note. GNI = gross national income; USD = United States dollars. Source: World Development Indicators, World Bank, 2013.

Several theories explain IPV through single factors: poverty-induced stress,24 weakened impulse control because of substance use,25,26 or learned aggressive or victimized behavior from the family of origin.27,28 Feminist theorists, however, have argued that poverty, stress, and alcohol abuse do not explain why violence disproportionally occurs against women. Instead, feminist theories suggest that IPV results from historical power differentials by gender, which have been reinforced through male superiority, authority, and socialization.29–32 However, feminist theory alone does not explain why people act differently, even if they grew up in the same social environment and were exposed to similar gender norms.33 Thus, Heise’s ecological model of IPV,33 adopted by the World Health Organization (WHO) as a guiding framework, and modified by Koenig et al.,4 combines individual theories explaining IPV and emphasizes the importance of contextual-level factors.

Empirical studies in the United States, Bangladesh, Colombia, and Nigeria demonstrated that certain communities—not just individuals or families—are affected by IPV more than others, positing that violence might be a function of community-level characteristics and attitudes, and not only individual beliefs and behaviors.5,34–36 Community socioeconomic development, domestic violence norms, and community-level gender inequalities might shape individual women’s experiences.4,5 Inclusion of community-level variables might change the effects of individual factors, exemplifying the importance of conducting a 2-level analysis.4,5,34,35

Thus, I examined the role of individual-level factors (socioeconomic status, family risk factors, and women’s empowerment status within the household) and contextual factors (community poverty and women’s empowerment status at the community level)
associated with current spousal violence in population-based samples in 5 fSU countries: Azerbaijan, Moldova, Ukraine, Kyrgyzstan, and Tajikistan. More specifically, I aimed to examine whether contextual factors had an effect on spousal violence, above and beyond women’s individual-level characteristics, and whether effects remained significant while adjusting for individual and contextual factors simultaneously.  相似文献   

18.
Smokers With Behavioral Health Comorbidity Should Be Designated a Tobacco Use Disparity Group     
Jill M. Williams  Marc L. Steinberg  Kim Gesell Griffiths  Nina Cooperman 《American journal of public health》2013,103(9):1549-1555
Smokers with co-occurring mental illness or substance use disorders are not designated a disparity group or priority population by most national public health and tobacco control groups.These smokers fulfill the criteria commonly used to identify groups that merit special attention: targeted marketing by the tobacco industry, high smoking prevalence rates, heavy economic and health burdens from tobacco, limited access to treatment, and longer durations of smoking with less cessation. A national effort to increase surveillance, research, and treatment is needed.Designating smokers with behavioral health comorbidity a priority group will bring much-needed attention and resources. The disparity in smoking rates among persons with behavioral health issues relative to the general population will worsen over time if their needs remain unaddressed.ELIMINATING DISPARITIES IN health and health care is a major priority in the United States.1,2 Groups with health disparities are referred to as vulnerable or priority populations and can be defined by factors such as race/ethnicity, socioeconomic status, geography, gender, age, disability status, or sexual orientation.3 The sources of these disparities are complex, rooted in historic and social inequities.4 Cigarette smoking, the leading cause of preventable death, is listed as one of 21 conditions with ongoing health disparities that must be addressed.1 Indeed, as the American Legacy Foundation points out, tobacco is not an equal opportunity killer.5 The criteria organizations such as the Centers for Disease Control and Prevention use to designate a tobacco disparity group are that they experience disproportionate tobacco consumption, disproportionate consequences or health burden from tobacco use, disproportionate economic burden from tobacco use, or limited access to tobacco-related health care.1,6,7 These groups may also be targeted by the tobacco industry with special marketing.6 Increased tobacco consumption may stem from 
differences in risk for tobacco use initiation or progression, differences in tobacco use prevalence and rates of nicotine dependence, and differences in smoking cessation rates.Smokers with a co-occurring mental illness or substance use disorder (SUD) have historically been underserved.8–13 Persons with behavioral health conditions, a collective term whose use is increasing because it may reduce stigma, compose a significant subset of smokers in the United States. A recent study found that cigarette smoking prevalence was 37.8% among people with any anxiety disorder, 45.1% among those with any affective disorder, 63.6% among those with a substance use disorder, and only 21.3% among those with no mental disorder.14 Smoking rates have plateaued despite ongoing tobacco control efforts, and clinical data support the concern that public health techniques that have been largely successful in the past may have reduced impact with remaining smokers.15,16 Although population-level data are less consistent on this point, data from both the National Health Interview Survey17 and the National Survey of Drug Use and Health18 suggest that smokers with moderate to high levels of general psychological distress are less likely than those with lower levels to have quit smoking. These data raise the possibility that behavioral health comorbidity may contribute to existing concerns about the impact of current tobacco approaches on today’s smokers.Surprisingly, most tobacco control Web sites and organizations, such as the Centers for Disease Control and Prevention’s Office on Smoking and Health,19 Healthy People 2020,2 and the American Legacy Foundation,20 do not designate smokers with behavioral health comorbidity as a disparity group or priority population. 
Understanding and eliminating disparities are such high priorities that these larger organizations have sponsored dedicated spin-off groups, such as the National Networks for Tobacco Control and Prevention (sponsored by the Centers for Disease Control and Prevention)21 and the Tobacco Research Network on Disparities (TReND; cosponsored by the National Cancer Institute and American Legacy Foundation).22 These groups have paid only cursory attention to smokers with behavioral health comorbidity.23 For example, these smokers are included on the TReND Web site with a long list of “other historically underserved groups” that includes lesbian, gay, bisexual, and transgender persons; people with disabilities; and the military. (Major tobacco control groups in the United States and their identified disparity populations are listed below; the populations considered are racial/ethnic minorities,a persons with low SES,b pregnant women, LGBT persons, gender, youths, older adults, military personnel, and persons with mental health and substance use disorders.)

CDC Office on Smoking and Health: http://www.cdc.gov/tobacco/basic_information/health_disparities/index.htm
National Networks for Tobacco Control and Prevention:c http://www.tobaccopreventionnetworks.org/site/c.ksJPKXPFJpH/b.2588535/k.6D55/Eliminating_Disparities.htm
Surgeon general’s reports (2000, 2001, 2004, and 2012): http://www.surgeongeneral.gov/library/index.html
Healthy People 2020: http://healthypeople.gov/2020/LHI/tobacco.aspx
American Legacy Foundation: http://www.legacyforhealth.org/2165.aspx
Tobacco Research Network on Disparities:d http://www.tobaccodisparities.org
American Lung Association: http://www.lung.org/stop-smoking/about-smoking/facts-figures/specific-populations.html
Tobacco Cessation Leadership Network: http://www.tcln.org/cessation/priority-populations.html
Society for Research on Nicotine and Tobacco, Tobacco Related Health Disparities Network: http://www.srnt.org/about/networks.cfm
Smoking Cessation Leadership Center: http://smokingcessationleadership.ucsf.edu/BehavioralHealth.htm

Note. CDC = Centers for Disease Control and Prevention; LGBT = lesbian, gay, bisexual, transgender; SES = socioeconomic status.
a African American, American Indian, Alaska Native, Asian American, Pacific Islander, and Hispanic.
b Indicated by poverty, low education level, unemployment.
c Sponsored by CDC.
d Cosponsored by the National Cancer Institute and American Legacy Foundation.  相似文献

19.
The Enduring Effects of Smoking in Latin America     
Alberto Palloni  Beatriz Novak  Guido Pinto-Aguirre 《American journal of public health》2015,105(6):1246-1253
Objectives. We estimated smoking-attributable mortality, assessed the impact of past smoking on recent mortality, and computed expected future losses in life expectancy caused by past and current smoking behavior in Latin America and the Caribbean.Methods. We used a regression-based procedure to estimate smoking-attributable mortality and information for 6 countries (Argentina, Brazil, Chile, Cuba, Mexico, and Uruguay) for the years 1980 through 2009 contained in the Latin American Mortality Database (LAMBdA). These countries jointly comprise more than two thirds of the adult population in Latin America and the Caribbean and have the region’s highest rates of smoking prevalence.Results. During the last 10 years, the impact of smoking was equivalent to losses in male (aged ≥ 50 years) life expectancy of about 2 to 6 years. These effects are likely to increase, particularly for females, both in the study countries and in those that joined the epidemic at later dates.Conclusions. Unless innovations in the detection and treatment of chronic diseases are introduced soon, continued gains in adult survival in Latin America and the Caribbean region may slow down considerably.Continuous progress in the remarkable mortality decline in Latin America and the Caribbean region1 may be difficult to sustain. This possibility is foreshadowed in a recent report showing that cancers of the respiratory tract, particularly lung cancers, are among the 3 most important forms of cancer in the region and are primary causes of adult mortality.2 It is known that these chronic illnesses are closely connected to smoking, but less is known about the actual contribution of past smoking on current and future adult mortality in these countries. 
It could well be that, if pervasive enough, past (and future) smoking behavior trumps long-term trends in adult mortality.

In response to the increasing vigilance and massive public health campaigns against tobacco consumption that began in the United States after the mid-1960s, the tobacco industry initiated an aggressive program to open new markets in Europe, Asia, and Latin America.3–5 A number of sociodemographic factors contributed to the higher numbers of potential smokers in Latin America and the Caribbean region beginning in the 1950s: the explosive growth in the populations of adolescents and young adults, who are at highest risk for smoking initiation; the spread of an urban lifestyle and the accelerated growth of cities; greater access to education; and the entry of women into the labor market.6,7 Increasing cigarette affordability,8–10 widespread legislative maneuvers,6,11–13 and a sophisticated publicity machine8,12–14 contributed to a massive market expansion for tobacco in all forms and cigarettes in particular. As a result, cigarette consumption increased first in countries in the vanguard of mortality decline (Argentina, Uruguay, Cuba, and Chile) and then in Mexico, Brazil, Colombia, Costa Rica, and Panama.3,15 Countries with higher mortality, such as Peru, Ecuador, Bolivia, Paraguay, and Guatemala, still have low levels of smoking, but some of them (e.g., Bolivia) are catching up rapidly. The spread of smoking is known in public health circles as the “smoking epidemic”—a term we adopt here.16,17 According to a useful typology,18 countries in Latin America and the Caribbean span a broad range of experiences in the smoking epidemic, from those in the late stages (Argentina, Chile, Cuba, and Uruguay) to those of more recent onset (Mexico and Brazil).19,24 Males in 4 countries—Argentina, Cuba, Chile, and Uruguay—have higher rates of smoking than do US males, whereas the rates are lower in Brazil and Mexico.
As we will show, Cuba’s unique position at the top of the ranking of smoking prevalence translates into the highest estimated excess adult mortality. Female rates lag behind male rates everywhere, but they have reached levels of around 20% in Argentina and Chile. Age-specific smoking prevalence rates for the 6 countries in this study (data available as a supplement to the online version of this article at http://www.ajph.org) display a high degree of heterogeneity and reflect characteristics typical of different stages of the epidemic. These age patterns reveal telling anomalies: an exceptionally high prevalence among the population younger than 25 years in Chile, signs of a recrudescence of the smoking epidemic, and unexpectedly low levels of adolescent smoking in Brazil, an indication of successful antismoking campaigns.19,26
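The general logic of translating smoking prevalence and relative risk into attributable deaths can be illustrated with Levin's population-attributable-fraction formula. This is a generic epidemiologic sketch, not the authors' regression-based procedure, and the relative risk used below is hypothetical:

```python
# Generic population-attributable-fraction arithmetic (Levin's formula),
# illustrating how prevalence and relative risk map to attributable deaths.
# This is NOT the regression-based method used in the study.
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# Example: 35.7% male smoking prevalence (Argentina, per the table below)
# with a hypothetical all-cause relative risk of 2.0 for smokers.
paf = attributable_fraction(0.357, 2.0)
print(f"{paf:.1%} of deaths attributable")
```

Under these illustrative inputs the fraction comes out near one quarter, the same order of magnitude as the all-cause attributable percentages tabulated for the high-prevalence countries.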

TABLE 1—

Characteristics of the Smoking Epidemic Among Adults Aged 20–80 Years: Argentina, Brazil, Chile, Cuba, Mexico, Uruguay, and United States; 2005–2009

(Survey years: Argentina 2005; Brazil 2008; Chile 2006; Cuba 2009; Mexico 2009; Uruguay 2009; United States 2007.)

Males
 No.: Argentina 16 647; Brazil 15 995; Chile 7 981; Mexico 5 350; Uruguay 2 360; United States 2 228
 Smoking prevalence/100 persons (SD):a Argentina 35.7 (0.8); Brazil 24.0 (0.4); Chile 37.8 (0.6); Cuba 44.8;c Mexico 23.8 (0.7); Uruguay 32.5 (1.3); United States 28.4 (45.1)
 No. of cigarettes/d, mean (SD):a Argentina 13.1 (0.3); Brazil 15.3 (0.2); Chile 5.8 (0.1); Mexico 10.3 (0.5); Uruguay 11.0 (0.5); United States 16.5 (11.8)
 No. of cigarettes/y, mean (SD):a Argentina 4 783.0 (109.3); Brazil 5 583.7 (88.1); Chile 2 080.8 (48.3); Mexico 3 752.9 (174.3); Uruguay 4 027.6 (166.6); United States 6 040.7 (4 322.1)
 Deaths/100 persons attributable to tobacco (all causes):b Argentina 19; Brazil 15; Chile 11; Cuba 21; Mexico 7; Uruguay 24; United States 23
 Deaths/100 000 persons attributable to tobacco (trachea, bronchus, and lung cancers):b Argentina 75; Brazil 35; Chile 32; Cuba 90; Mexico 18; Uruguay 115; United States 103

Females
 No.: Argentina 21 907; Brazil 19 176; Chile 7 968; Mexico 6 220; Uruguay 2 617; United States 2 400
 Smoking prevalence/100 persons (SD):a Argentina 25.7 (0.7); Brazil 14.5 (0.3); Chile 28.0 (0.6); Cuba 29.6;c Mexico 7.7 (0.5); Uruguay 22.5 (1.0); United States 21.5 (41.1)
 No. of cigarettes/d, mean (SD):a Argentina 9.6 (0.2); Brazil 12.6 (0.2); Chile 4.9 (0.1); Mexico 8.5 (0.5); Uruguay 10.9 (0.4); United States 14.5 (10.1)
 No. of cigarettes/y, mean (SD):a Argentina 3 507.1 (79.8); Brazil 4 614 (83.4); Chile 1 757.6 (47); Mexico 3 102.2 (200.0); Uruguay 3 962.7 (136.8); United States 5 284.1 (3 702.1)
 Deaths/100 persons attributable to tobacco (all causes):b Argentina 6; Brazil 6; Chile 8; Cuba 18; Mexico 6; Uruguay 5; United States 23
 Deaths/100 000 persons attributable to tobacco (trachea, bronchus, and lung cancers):b Argentina 12; Brazil 6; Chile 10; Cuba 78; Mexico 4; Uruguay 10; United States 68

Yearly consumption ratio, female–male:a Argentina 0.73; Brazil 0.83; Chile 0.80; Mexico 0.84; Uruguay 0.98; United States 0.87

Note. Values in the table were computed from information contained in the original sources. Source: National Risk Factors Study (Argentina),20 Global Adult Tobacco Survey (Brazil, Mexico, and Uruguay),21 Social Protection Study (Chile),22 and National Health and Nutrition Examination Survey (NHANES; Smoking Module).23
a Population weighted and age standardized (Standard NHANES 2007–2008)23 for Argentina, Brazil, Chile, Mexico, and Uruguay.
b World Health Organization (2012)24 estimated proportions of deaths attributable to tobacco; death rates correspond to 2004 and are totals for individuals aged 30 years and older.
c Age standardized for individuals aged 15 years and older.25

The typology mentioned here is useful for comparing aggregate, country-specific conditions and is not informed by—nor does it intend to inform—individual psychological traits responsible for smoking-related behavior in the countries to which it is applied.  相似文献

20.
Detection of Patients with Influenza Syndrome Using Machine-Learning Models Learned from Emergency Department Reports     
Arturo López Pineda  Fu-Chiang Tsui  Shyam Visweswaran  Gregory F. Cooper 《Online Journal of Public Health Informatics》2013,5(1)

Objective

To compare 7 machine-learning algorithms with an expert-constructed Bayesian network for the detection of patients with influenza syndrome.

Introduction

Early detection of influenza outbreaks is critical to public health officials, and case detection is the foundation for outbreak detection. A previous study by Elkin et al. demonstrated that using individual emergency department (ED) reports can detect influenza cases better than using chief complaints [1]. Our recent study using ED reports processed by Bayesian networks (with an expert-constructed network structure) showed high accuracy in the detection of influenza cases [2].

Methods

The dataset used in this study includes 182 ED reports with PCR-confirmed influenza tests (Jan 1, 2007–Dec 31, 2009) and 40,853 ED reports as control cases from 8 EDs in UPMC (Jul 1, 2010–Aug 31, 2010). All ED reports were deidentified by De-ID software with IRB approval.

An NLP system, Topaz, was used to extract relevant findings and symptoms from the reports and encode them with UMLS concept unique identifier codes [2]. Two subsets were created: DS1-train (67% of cases) and DS1-test (the remaining 33%).

The algorithms used for training the models were: Naïve Bayes classifier, Efficient Bayesian Multivariate Classification (EBMC) [3], Bayesian network with the K2 algorithm, logistic regression (LR), support vector machine (SVM), artificial neural network (ANN), and random forest (RF).

The predictive performance of each method was evaluated using the area under the receiver operating characteristic curve (AUROC) and the Hosmer-Lemeshow (HL) statistic, which describes the lack of fit of the model to the dataset.
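The train/test evaluation setup described above can be sketched with scikit-learn. This is an illustrative sketch only: synthetic low-prevalence data stands in for the UMLS-coded ED-report features, and EBMC and the expert Bayesian network are not reimplemented here.

```python
# Sketch of the evaluation design: train several classifiers on a
# low-prevalence synthetic dataset and compare held-out AUROC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# weights=[0.95] makes positives rare, mimicking low influenza prevalence.
X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          stratify=y, random_state=0)
models = {
    "NaiveBayes": GaussianNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auroc:.3f}")
```

Stratified splitting preserves the rare-positive proportion in both subsets, which matters when the outcome prevalence is this low.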

Results

The evaluation results of all the models on DS1-test, including the AUROC, its confidence interval, the p-value (between each algorithm and the expert model), and the HL calibration statistic, are shown in the table below.

Conclusions

All models achieved high AUROC values; the pairwise comparison p-values are shown in Figure 1. One limitation of the study is that the test dataset has low influenza prevalence, which may bias the measured performance of the detection algorithms. We are in the process of testing the algorithms using a higher prevalence rate. The same process could also be applied to other diseases to further assess the generalizability of our method.

Predictive performance and calibration:
Algorithm: AUROC; 95% CI; p-value; calibration (HL)
Naïve Bayes: 0.9988; (0.9983, 0.9994); 0.2342; 4880.63
EBMC: 0.9993; (0.9989, 0.9998); 0.2255; 4.53
BN-K2: 0.9994; (0.9990, 0.9998); 0.2228; 1315.71
LR: 0.9829; (0.9512, 1.0000); 0.8935; 177.01
SVM: 0.9996; (0.9993, 0.9999); 0.2189; 12.30
Random Forest: 0.9995; (0.9993, 0.9998); 0.2201; 16.30
ANN: 0.9991; (0.9986, 0.9997); 0.2300; 275.81
Expert: 0.9798; (0.9483, 1.0000); 1.0000; 14374.67

Note: area under the ROC curve (AUROC) with 95% confidence interval; p-value relative to the Expert model; Hosmer-Lemeshow calibration statistic.
[Figure 1: Influenza syndrome model created using the EBMC algorithm]  相似文献
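The HL calibration column above can be understood through the standard decile-based form of the Hosmer-Lemeshow chi-square. The routine below is a generic sketch on simulated data, not the study's implementation:

```python
import numpy as np

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow chi-square: sort by predicted risk, split into
    groups, and compare observed vs expected event counts per group."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    order = np.argsort(y_prob)
    stat = 0.0
    for idx in np.array_split(order, n_groups):
        n = len(idx)
        observed = y_true[idx].sum()
        expected = y_prob[idx].sum()
        p_bar = expected / n
        if 0.0 < p_bar < 1.0:
            stat += (observed - expected) ** 2 / (n * p_bar * (1.0 - p_bar))
    return stat

# Well-calibrated predictions yield a small statistic (roughly chi-square
# with n_groups - 2 degrees of freedom); miscalibration inflates it.
rng = np.random.default_rng(0)
probs = rng.uniform(size=5000)
outcomes = rng.binomial(1, probs)
print(round(hosmer_lemeshow(outcomes, probs), 2))
```

A large HL value (as for the Expert model in the table) signals poor fit between predicted probabilities and observed frequencies, even when discrimination (AUROC) is high.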
