Similar Articles
20 similar articles were retrieved.
1.
2.
In early 1976, the novel A/New Jersey/76 (Hsw1N1) influenza virus caused severe respiratory illness in 13 soldiers, with 1 death, at Fort Dix, New Jersey. Since A/New Jersey was similar to the 1918–1919 pandemic virus, rapid outbreak assessment and enhanced surveillance were initiated. A/New Jersey virus was detected only from January 19 to February 9 and did not spread beyond Fort Dix. A/Victoria/75 (H3N2) spread simultaneously, also caused illness, and persisted until March. Up to 230 soldiers were infected with the A/New Jersey virus. Rapid recognition of A/New Jersey, swift outbreak assessment, and enhanced surveillance resulted from excellent collaboration between Fort Dix, New Jersey Department of Health, Walter Reed Army Institute of Research, and Center for Disease Control personnel. Despite efforts to define the events at Fort Dix, many questions remain unanswered, including the following: Where did A/New Jersey come from? Why did transmission stop?

Key words: Influenza, military, respiratory disease, swine, perspective

Revisiting events surrounding the 1976 swine influenza A (H1N1) outbreak may assist those planning for the rapid identification and characterization of threatening contemporary viruses, like avian influenza A (H5N1) (1). The severity of the 1918 influenza A (H1N1) pandemic and evidence for a cycle of pandemics aroused concern that the 1918 disaster could recur (2,3). Following the 1918 pandemic, H1N1 strains circulated until the "Asian" influenza A (H2N2) pandemic in 1957 (3). When, in early 1976, cases of influenza in soldiers, mostly recruits, at Fort Dix, New Jersey, were associated with isolation of influenza A (H1N1) serotypes (which in 1976 were labeled Hsw1N1), an intense investigation followed (4).

Of 19,000 people at Fort Dix in January 1976, ≈32% were recruits (basic trainees) (4). Recruits reported to Fort Dix for 7 weeks of initial training through the basic training reception center, where they lived and were processed into the Army during an intense 3 days of examinations, administrative procedures, and indoctrination. At the reception center, training unit cohorts were formed. Recruits were grouped into 50-member units (platoons) and organized into companies of 4 platoons each. Units formed by week's end moved from the reception center to the basic training quarters. To prevent respiratory illnesses, recruits were isolated in their company areas for 2 weeks and restricted to the military post for 4 weeks (4). Platoon members had close contact with other platoon members, less contact with other platoons in their company, and even less contact with other companies.

On arrival, recruits received the 1975–1976 influenza vaccine (A/Port Chalmers/1/73 [H3N2], A/Scotland/840/74 [H3N2], and B/Hong Kong/15/72) (4). Other soldiers reported directly to advanced training programs of 4 to 12 weeks at Fort Dix immediately after basic training at Fort Dix or elsewhere. These soldiers had received influenza vaccinations in basic training. Civilian employees and soldiers' families were offered vaccine, but only an estimated <40% accepted (4).

Training stopped over the Christmas–New Year's holidays and resumed on January 5, 1976, with an influx of new trainees. The weather was cold (wind chill factors of 0°F to –43°F), and the reception center was crowded (4). Resumption of training was associated with an explosive febrile respiratory disease outbreak involving new arrivals and others. Throat swabs were collected from a sample of hospitalized soldiers with this syndrome.
On January 23, the Fort Dix preventive medicine physician learned of 2 isolations of adenovirus type 21 and suspected an adenovirus outbreak (4). He notified the county health department and the New Jersey (NJ) Department of Health of the outbreak (4). On January 28, an NJ Department of Health official consulted with the military physician and suggested that the explosive, widespread outbreak could be influenza (4). Over the next 2 days, 19 specimens were delivered to the state laboratory, and 7 A/Victoria-like viruses and 3 unknown hemagglutinating agents were identified (4). Specimens were flown to the Center for Disease Control (CDC), Atlanta, Georgia, on February 6, where a fourth unknown agent was found (4).

On February 2, Fort Dix and NJ Department of Health personnel arranged for virologic studies of deaths possibly caused by influenza (4). Tracheal swabs taken on February 5 from a recruit who died on February 4 yielded a fifth unknown agent on February 9. By February 10, laboratory evidence had confirmed that a novel influenza strain was circulating at Fort Dix and that 2 different influenza strains were causing disease. By February 13, all 5 unknown strains were identified as swine influenza A (Hsw1N1).

The possibility of laboratory contamination was evaluated (4). No known swine influenza A strains were present in the NJ Department of Health Virus Laboratory before the Fort Dix outbreak. Additionally, all unknown Fort Dix viruses were independently isolated from original specimens at CDC and the Walter Reed Army Institute of Research (WRAIR), Washington, DC. Also, 2 patients with novel virus isolates had convalescent-phase, homologous, hemagglutination-inhibition (HAI) antibody titers of 1:40–1:80, consistent with recent infections. Thus, the new influenza strain had been independently identified in 3 different laboratories, and supporting serologic evidence was developed, within 15 days after the original specimens were collected (4).

Table. Key events in the swine influenza A (Hsw1N1) outbreak, Fort Dix, NJ

Date (1976): Event
January 5: After the holidays, basic training resumed at Fort Dix, NJ; a sudden, dramatic outbreak of acute respiratory disease followed the influx of new recruit trainees (4).
January 19: Earliest hospitalization of a Fort Dix soldier with acute respiratory disease attributed to swine influenza A (Hsw1N1) (identified retrospectively by serologic tests) (7,14).
January 21: Influenza A/Victoria (H3N2) identified away from Fort Dix in NJ civilians (4).
January 23: Fort Dix received reports of adenovirus type 21 isolations from soldiers ill with respiratory disease and reported the outbreak to the local and state health departments (4).
January 28: An NJ Department of Health official suggested the Fort Dix outbreak may be due to influenza and offered to process specimens for virus isolation (4).
January 29–30: 19 specimens were sent to the NJ Department of Health in 2 shipments (4).
February 2–3: NJ Department of Health identified 4 isolates of H3N2-like viruses and 2 unknown hemagglutinating agents in 8 specimens sent on January 29. Fort Dix and NJ Department of Health arranged for the study of deaths possibly due to influenza. NJ Department of Health identified 3 H3N2-like viruses and a third unknown hemagglutinating agent in 11 specimens sent on January 30 (4).
February 4: A Fort Dix soldier died with acute respiratory disease (4).
February 5: Tracheal specimens from the soldier who died on February 4 were sent to the NJ Department of Health (4).
February 6: NJ Department of Health sent the Fort Dix specimens to the Center for Disease Control (CDC), Atlanta, GA; CDC identified a fourth unknown hemagglutinating agent in the Fort Dix specimens (4).
February 9: Specimens from the soldier who died on February 4 yielded a fifth unknown hemagglutinating agent (4). Last hospitalization of an identified Fort Dix soldier with febrile, acute respiratory disease attributed to swine influenza A (Hsw1N1) (identified retrospectively by serologic tests) (7,14).
February 10: Laboratory evidence supported 2 influenza type A strains circulating on Fort Dix; 1 was a radically new strain. Prospective surveillance for cases in the areas around Fort Dix was initiated; only cases of H3N2 were found (4).
February 13: Review of laboratory data and information found that all 5 unknown agents were swine influenza A strains (later named A/New Jersey [Hsw1N1]); 3 laboratories independently identified the swine virus from original specimens (serologic data supporting swine influenza A virus infection were later obtained from 2 survivors with A/New Jersey isolates) (4).
February 14–16: Initial planning meeting among CDC, NJ Department of Health, Fort Dix, and Walter Reed Army Institute of Research personnel was held in Atlanta, GA. Prospective case finding was initiated at Fort Dix; H3N2 was isolated; Hsw1N1 was not (7). Retrospective case finding was initiated by serologic study of stored serum specimens from Fort Dix soldiers who had been hospitalized for acute respiratory disease; 8 new cases of disease due to Hsw1N1 were identified, with hospitalization dates between January 19 and February 9 (7,14).
February 22–24: Prospective case finding was again conducted at Fort Dix; H3N2 virus was isolated, but Hsw1N1 was not (7).
February 27: Thirty-nine new recruits entering Fort Dix February 21–27 gave blood samples after arrival and 5 weeks later; serologic studies were consistent with influenza immunization but not with spread of H3N2 virus. None had a titer rise to Hsw1N1 (11).
March 19: Prospective surveillance identified the last case of influenza in the areas around Fort Dix; only H3N2 viruses were identified outside of Fort Dix (4).

3.

Background

By a wide margin, lung cancer is the most significant cause of cancer death in the United States and worldwide. The incidence of lung cancer increases with age, and Medicare beneficiaries are often at increased risk. Because of its demonstrated effectiveness in reducing mortality, lung cancer screening with low-dose computed tomography (LDCT) imaging will be covered without cost-sharing starting January 1, 2015, by nongrandfathered commercial plans. Medicare is considering coverage for lung cancer screening.

Objective

To estimate the cost and cost-effectiveness (ie, cost per life-year saved) of LDCT lung cancer screening of the Medicare population at high risk for lung cancer.

Methods

Medicare costs, enrollment, and demographics were used for this study; they were derived from the 2012 Centers for Medicare & Medicaid Services (CMS) beneficiary files and were forecast to 2014 based on CMS and US Census Bureau projections. Standard life and health actuarial techniques were used to calculate the cost and cost-effectiveness of lung cancer screening. The cost, incidence rates, mortality rates, and other parameters chosen by the authors were taken from actual Medicare data, and the modeled screenings are consistent with Medicare processes and procedures.

Results

Approximately 4.9 million high-risk Medicare beneficiaries would meet criteria for lung cancer screening in 2014. Without screening, Medicare patients newly diagnosed with lung cancer have an average life expectancy of approximately 3 years. Based on our analysis, the average annual cost of LDCT lung cancer screening in Medicare is estimated to be $241 per person screened. LDCT screening for lung cancer in Medicare beneficiaries aged 55 to 80 years with a history of ≥30 pack-years of smoking and who had smoked within the past 15 years is low cost, at approximately $1 per member per month, assuming that 50% of these patients are screened. Such screening is also highly cost-effective, at <$19,000 per life-year saved.
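The per-member-per-month figure can be sanity-checked from the numbers reported here. The short Python sketch below is an illustrative back-of-envelope check, not the authors' actuarial model; the total Medicare enrollment figure (≈54 million in 2014) is an assumption supplied for the example and is not taken from the article.

```python
# Minimal sketch of the PMPM arithmetic implied by the reported figures.
# Illustrative only; this is not the authors' actuarial model.

eligible_high_risk = 4.9e6       # beneficiaries meeting screening criteria (from the article)
annual_cost_per_screened = 241   # average annual cost per person screened, USD (from the article)
screening_uptake = 0.50          # 50% of eligible patients screened (from the article)
total_enrollment = 54e6          # ASSUMPTION: approximate 2014 Medicare enrollment

annual_program_cost = eligible_high_risk * screening_uptake * annual_cost_per_screened
pmpm = annual_program_cost / (total_enrollment * 12)

print(f"Annual program cost: ${annual_program_cost / 1e6:,.0f} million")
print(f"Cost per member per month: ${pmpm:.2f}")  # ~= $0.91, consistent with "approximately $1"
```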

Conclusion

If all eligible Medicare beneficiaries had been screened and treated consistently from age 55 years, approximately 358,134 additional individuals with current or past lung cancer would be alive in 2014. LDCT screening is a low-cost and cost-effective strategy that fits well within the standard Medicare benefit, including its claims payment and quality monitoring.

Lung cancer is a lethal disease that claims the lives of more people in the United States annually than the next 4 most lethal cancers combined, which are, in order, colon, breast, pancreas, and prostate cancers.1,2 In the United States, an estimated 224,210 people will be diagnosed with lung cancer, and an estimated 159,260 people will die of the disease in 2014.3 The incidence of lung cancer increases with age,4 and the risk increases with the cumulative effects of past smoking. Millions of Medicare beneficiaries are at significant risk.5

On December 31, 2013, lung cancer screening using low-dose computed tomography (LDCT) was rated as a level "B" recommendation by the US Preventive Services Task Force (USPSTF),6 a panel of independent experts convened by the Agency for Healthcare Research and Quality to evaluate the strength of evidence and the balance of benefits and harms of preventive services.7 The USPSTF recommendation applies to people aged 55 to 80 years with a history of heavy smoking.6 LDCT is an imaging technology that enables 3-dimensional visualization of internal body structures, including the lungs, using low doses of radiation.

Under the Affordable Care Act, the "B" recommendation means that LDCT lung cancer screening must be covered without cost-sharing by qualified health plans starting January 1, 2015.6,8 Qualified health plans include commercial insurance and self-insured benefit plans, with the exclusion of grandfathered plans. Several private insurers have initiated LDCT screening coverage in advance of the 2015 requirement.9 Furthermore, versions of the USPSTF recommendations have been adopted by essentially every major academic body with an interest in lung cancer, including the National Comprehensive Cancer Network, American Association for Thoracic Surgery, American College of Radiology, Society of Thoracic Surgeons, International Association for the Study of Lung Cancer, American College of Chest Physicians, and the American Cancer Society.

Medicare has begun a national coverage analysis to determine whether LDCT lung cancer screening meets its criteria for coverage, which include whether screening is reasonable and necessary for early detection, whether the service has an "A" or a "B" recommendation by the USPSTF, and whether screening is appropriate for Medicare beneficiaries.

High doses of radiation can be harmful, but LDCT can be performed at very low doses of <0.7 mSv per procedure10; by comparison, the annual natural background radiation in New York City (sea level) is 3 mSv.
LDCT technology refinements and protocol optimization have translated into patient benefits, supporting the detection of ever-smaller lung cancers, reducing the rate of surgical procedures, and providing higher cure rates.11–14

Advances in LDCT technology, promising results from nonrandomized trials,14 and survival statistics unchanged over the previous 30 years led to the implementation of the National Lung Screening Trial (NLST), the most expensive and one of the largest randomized screening trials ever sponsored by the National Cancer Institute.13 The trial of 53,454 people aged 55 to 74 years at high risk for lung cancer was conducted to determine whether LDCT screening could reduce mortality from lung cancer. Participants in this 2-arm US study received 3 annual screenings with either LDCT or a chest x-ray. Based on the study protocol, the trial was stopped when findings demonstrated a relative reduction of 20% in lung cancer mortality in the LDCT arm versus the chest x-ray arm.13

Observational data and epidemiologic arguments for breast cancer also suggest that additional rounds of screening would reduce lung cancer mortality by much more than 20%.15–22 Other large studies have shown that computed tomography (CT) screening is associated with a high proportion (much higher than 70%) of the lung cancer diagnoses being early stage15–17,21 compared with 15% in the national data.23 Long-term survival rates of approximately 80% have been reported for patients with lung cancer who are diagnosed by CT screening12,15,16 compared with a 16.8% 5-year survival rate from the national data.23

KEY POINTS

  • Lung cancer is the leading cause of cancer death in the United States and worldwide.
  • Because the risk increases with age and with a history of smoking, some Medicare beneficiaries are at high risk for this type of cancer.
  • Low-dose computed tomography (LDCT) has been shown to reduce mortality from lung cancer by more than 20%.
  • Under healthcare reform, LDCT must be covered without cost-sharing by nongrandfathered commercial health plans beginning in 2015.
  • Based on this new analysis, LDCT screening of high-risk Medicare beneficiaries is cost-effective and will cost approximately $1 per member per month.
  • The average annual cost of such a screening policy is estimated to be $241 per Medicare beneficiary screened.
  • Accounting for all causes of mortality, Medicare patients newly diagnosed with lung cancer have, without screening, an average life expectancy of approximately 3 years.
  • With screening, these patients would gain approximately 4 additional years of life expectancy beyond that expected without screening.
  • If all eligible beneficiaries had been screened and treated consistently from age 55 years, approximately 358,134 additional individuals with current or past lung cancer would be alive in 2014.
One of the coauthors of this article was the lead author of an actuarial analysis of LDCT lung cancer screening for the commercially insured population.24 This report used similar methodology, structures, and data to examine lung cancer screening for the Medicare program. The Medicare program faces significant budget limitations, and any new coverage benefit will face scrutiny regarding its costs and benefits.

The purpose of the present study was to estimate the hypothetical 2014 costs and benefits associated with the responsible implementation of widespread lung cancer screening in the high-risk US population covered by Medicare.

4.
5.

Background

In 2006, the economic burden of metastatic renal cell carcinoma (mRCC) was estimated to be up to $1.6 billion worldwide and has since grown annually. With the continuing increase of the economic burden of this disease in the United States, there is a growing need for economic analyses to guide treatment and policy decisions for this patient population.

Objective

To evaluate available comparative economic data on targeted therapies for patients with mRCC who have failed first-line targeted therapies.

Method

A broad and comprehensive literature review was conducted of US-based studies between January 1, 2005, and February 11, 2013, evaluating comparative economic evidence for targeted agents that are used as second-line therapy or beyond. Based on the specific search parameters that focused on cost-effectiveness and economic comparisons between vascular endothelial growth factor (VEGF)/VEGF receptor (VEGFr) inhibitors and mammalian target of rapamycin (mTOR) inhibitors, only 7 relevant, US-based economic evaluations were found appropriate for inclusion in the analysis. All authors, who are experts in the health economics and outcomes research field, reviewed the search results. Studies of interest were those with a targeted agent, VEGF/VEGFr or mTOR inhibitor, in at least 1 study arm.

Discussion

As a group, targeted therapies were found to be cost-effective options in treating patients with refractory mRCC in the United States. Oral therapies showed an economic advantage over intravenous agents, presumably because oral therapies have a lower impact on outpatient resources. Based on 3 studies, everolimus has been shown to have an economic advantage over temsirolimus and to be cost-effective compared with sorafenib. No economic comparison between everolimus and axitinib, the only 2 drugs with a National Comprehensive Cancer Network category 1 recommendation for use after the failure of VEGFr tyrosine kinase inhibitors, is available.

Conclusion

The limited and heterogeneous sum of the currently available economic evidence does not allow firm conclusions to be drawn about the most cost-effective targeted treatment option in the second-line setting and beyond in patients with mRCC. It is hoped that ongoing head-to-head therapeutic trials and biomarker studies will help improve the economic efficiency of these expensive agents.

Renal cell carcinoma (RCC) comprises 92% of all kidney cancers and has a poor prognosis, with approximately 10% of patients with metastatic disease surviving beyond 5 years.1 In 2006, the economic burden of metastatic RCC (mRCC) was estimated to be up to $1.6 billion worldwide and has since grown annually.2 A recent review reported that the economic burden of RCC in the United States ranges from $600 million to $5.19 billion, with annual per-patient medical costs of between $16,488 and $43,805.3 Furthermore, these costs will likely increase with the expanded use of targeted agents, based on a 2011 pharmacoeconomic analysis showing that the annual costs to treat patients with RCC receiving these agents are 3- to 4-fold greater than the costs to treat patients who are not receiving targeted therapies.4 In addition, the incidence and prevalence of RCC are rising, in part because of improved and earlier detection, and because of increases in related risk factors, such as hypertension, diabetes, and obesity.5–7

KEY POINTS

  • The growing economic burden of renal cell carcinoma (RCC) in the United States indicates the need for economic analyses of current therapies to guide treatment decisions for this disease.
  • This article is based on a comprehensive review of 7 studies that were identified within the search criteria for US-based economic data related to targeted therapies for metastatic RCC (mRCC) after failure of first-line therapies.
  • Targeted therapies were shown to be cost-effective for the treatment of refractory mRCC.
  • Oral therapies showed an economic advantage over intravenous agents, presumably because of their lower impact on outpatient resources.
  • No economic comparison is yet available for the only 2 drugs (ie, everolimus and axitinib) with an NCCN category 1 recommendation for use after a vascular endothelial growth factor receptor TKI.
  • Ongoing head-to-head therapeutic trials and biomarker studies may help to improve the economic efficiency of targeted treatments in the second-line setting and beyond for mRCC.
Clear-cell RCC, the most common histology, constitutes 75% of cases of RCC.8 The majority of patients with clear-cell RCC experience a loss of the functional von Hippel-Lindau gene, resulting in the accumulation of hypoxia-inducible factor-1α, an angiogenic factor whose protein synthesis is regulated by mammalian target of rapamycin (mTOR).9 The net effect is overproduction of downstream proteins that promote RCC progression by stimulating cell growth and proliferation, cellular metabolism, and angiogenesis (ie, vascular endothelial growth factor [VEGF], platelet-derived growth factor, and epidermal growth factor).9

Abnormal functioning of the mTOR pathway is therefore thought to play a role in the pathogenesis of RCC; inhibition of mTOR globally decreases protein production, suppresses VEGF synthesis, and induces cell cycle arrest.10 Knowledge of the critical role of VEGF and mTOR in RCC pathogenesis drove the development of targeted agents in the treatment of this disease. The US Food and Drug Administration (FDA) approval of axitinib in January 2012 brought the total of approved targeted agents for RCC to 7 in the past 7 years, making this one of the most prolific areas of cancer drug development (Table 1).11–25 The need for clarity regarding the optimal sequential use of these agents is stronger than ever, particularly given their high price.

Table 1

Targeted Agents Approved for RCC and Pivotal Phase 3 Clinical Trials
Drug (route, approval date): RCC indication; design of pivotal trial; PFS in the overall population of pivotal trial

Sorafenib, oral11 (December 20, 2005): Advanced RCC. TARGET: randomized, double-blind study of sorafenib (n = 451) vs placebo (n = 452) in patients treated with 1 previous systemic therapy (primarily cytokines).18 Median PFS, 5.5 mo with sorafenib vs 2.8 mo with placebo; HR, 0.44 (95% CI, 0.35–0.55; P <.001).

Sunitinib, oral12 (February 2, 2007): Advanced RCC. Randomized, open-label study of sunitinib (n = 375) vs IFN-α (n = 375) in treatment-naive patients.19 Median PFS, 11 mo with sunitinib vs 5 mo with IFN-α; HR, 0.539 (95% CI, 0.451–0.643; P <.001).

Temsirolimus, IV13 (May 30, 2007): Advanced RCC. ARCC: randomized, open-label study of temsirolimus (n = 209) vs IFN-α (n = 207) vs temsirolimus + IFN-α (n = 210) in treatment-naive patients with ≥3 of 6 predictors of short survival.20 Median PFS, 3.8 mo with temsirolimus vs 1.9 mo with IFN-α vs 3.7 mo with temsirolimus + IFN-α; HR, not available.

Everolimus, oral14 (March 30, 2009): RCC therapy after failure of treatment with sunitinib or sorafenib. RECORD-1: randomized, double-blind study of everolimus (n = 277) vs placebo (n = 139) in patients previously treated with sunitinib and/or sorafenib.21 Median PFS, 4.9 mo with everolimus vs 1.9 mo with placebo; HR, 0.33 (95% CI, 0.25–0.43; P <.001).

Bevacizumab, IV, plus IFN-α, SC15 (August 3, 2009): Metastatic RCC with IFN-α. AVOREN: randomized, double-blind study of bevacizumab + IFN-α (n = 327) vs placebo + IFN-α (n = 322) in treatment-naive patients.22 Median PFS, 10.2 mo with bevacizumab + IFN-α vs 5.4 mo with placebo + IFN-α; HR, 0.63 (95% CI, 0.52–0.75; P = .001). CALGB 90206: randomized, open-label study of bevacizumab + IFN-α (n = 369) vs IFN-α (n = 363) in treatment-naive patients.23 Median PFS, 8.5 mo with bevacizumab + IFN-α vs 5.2 mo with IFN-α; HR, 0.72 (95% CI, 0.61–0.83; P <.001).

Pazopanib, oral16 (October 19, 2009): First-line treatment of advanced RCC in adults and treatment of patients who have received previous cytokine therapy for advanced disease. Randomized, double-blind study of pazopanib (n = 290) vs placebo (n = 145) in treatment-naive and cytokine-pretreated patients.24 Median PFS, 9.2 mo with pazopanib vs 4.2 mo with placebo; HR, 0.46 (95% CI, 0.34–0.62; P <.001).

Axitinib, oral17 (January 27, 2012): Treatment of RCC after failure of 1 previous systemic therapy. AXIS: randomized, open-label study of axitinib (n = 361) vs sorafenib (n = 362) in patients treated with 1 previous systemic therapy.25 Median PFS, 6.7 mo with axitinib vs 4.7 mo with sorafenib; HR, 0.665 (95% CI, 0.544–0.812; P <.001).

CI indicates confidence interval; HR, hazard ratio; IFN, interferon; IV, intravenous; PFS, progression-free survival; RCC, renal cell carcinoma; SC, subcutaneous.

The oral VEGF receptor tyrosine kinase inhibitors (VEGFr-TKIs) sunitinib and pazopanib, the VEGF monoclonal antibody bevacizumab plus (subcutaneously injected) interferon-α, and the intravenous (IV) mTOR inhibitor temsirolimus are recommended by the National Comprehensive Cancer Network (NCCN) as first-line therapies for the treatment of mRCC (Table 2).26 The VEGFr-TKI sorafenib is recommended for select patients only. Despite efficacy in mRCC, agents targeted against VEGF only "inhibit" the disease, making resistance almost inevitable and universal, thereby necessitating second-line therapy after the failure of initial VEGF inhibition.18,19,22–24

Table 2

NCCN Treatment Guidelines for mRCC, by Phase 3 Evidence
Setting: Category 1 evidence
Treatment-naïve, good or intermediate riska: Sunitinib; pazopanib; bevacizumab + IFN-α
Treatment-naïve, poor riska: Temsirolimus
Previously treated, previous cytokine: Sorafenib; sunitinib; pazopanib; axitinibb
Previously treated, previous tyrosine kinase inhibitor: Everolimus; axitinibb
Previously treated, previous mTOR inhibitor: Unknown

aMemorial Sloan-Kettering Cancer Center risk category.
bAxitinib has a category 1 recommendation for treatment of patients who have failed ≥1 previous systemic therapy.
IFN indicates interferon; mRCC, metastatic renal cell carcinoma; mTOR, mammalian target of rapamycin; NCCN, National Comprehensive Cancer Network.
Source: National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®). Kidney cancer. Version 1.2013. 2013.

Because curing metastatic disease with these agents is rare, most patients require lifelong therapy and are destined to cycle through the available treatment options. Guidelines on sequential therapy for the second-line treatment of mRCC and beyond are limited, indicating a lack of clinical trial–based comparative evidence and/or consensus in this area. In the NCCN guidelines, the oral agents everolimus and axitinib are category 1 recommendations for second-line therapy (Table 2).26 Despite their clinically proven benefit in extending progression-free survival (PFS), the cost of these agents and their lack of proven survival benefit have led to controversial government reimbursement decisions in some parts of the world (eg, by the National Institute for Health and Care Excellence in the United Kingdom27).

Given the lack of prospectively collected data sets assessing the optimal sequence of targeted therapies, as well as the high price of these agents, economic analyses provide important insights into the overall costs versus benefits of targeted therapies, thus helping to inform treatment decisions. In this review, we identify comparative economic evidence beyond the first-line treatment of mRCC and discuss the potential implications of the findings.

6.
On August 22, 2006, President Bush issued an Executive Order calling on all federal agencies and those who do healthcare business with the government to engage in collaborative efforts to incorporate the 4 cornerstones of value-driven healthcare: health information technology standards, quality standards, price standards, and incentives. The Department of Health and Human Services has embarked on a campaign to make these 4 cornerstones a reality by encouraging the public and private sectors to work collaboratively at the local level. In support of this campaign, the Centers for Medicare & Medicaid Services launched a project in late 2006 that leverages local collaboratives as a means to explore a national approach to physician performance measurement. This project, known as the Better Quality Information to Improve Care for Medicare Beneficiaries Project, aims to test methods to aggregate Medicare administrative data with data from commercial health plans and, in some cases, Medicaid, in 6 local collaboratives to calculate and report quality measures for physician groups and for some individual physicians.

On August 22, 2006, President Bush issued an Executive Order, "Promoting Quality and Efficient Health Care in Federal Government Administered or Sponsored Health Care Programs," calling on all federal agencies and those who do healthcare business with the government to engage in collaborative efforts to incorporate the cornerstones of value-driven healthcare (Table 1):

Table 1

The 4 Cornerstones of Value-Driven Healthcare
Interoperable HIT
  • Interoperable HIT is the development and implementation of standards and health information systems that allow various parts of the healthcare delivery system to communicate and exchange data quickly and securely
  • Interoperable HIT holds the potential to create greater efficiency in the healthcare delivery system
Measure and publish quality information
  • Consumers need quality-of-care information to be able to make confident and informed decisions about their healthcare providers and treatment options
  • Quality-of-care information is also important for providers to have to be able to improve the quality of care they provide
  • The quality information consumers and providers receive should be based on measures that are developed through consensus-based processes that involve all stakeholders, such as the processes used by the National Quality Forum, the AQA Alliance, and the Hospital Quality Alliance
Measure and publish price information
  • To be able to make confident and informed decisions about their healthcare providers and treatment options, consumers also need to have price information that is measured and reported in a uniform manner
  • Efforts are currently under way to develop uniform approaches to measuring and reporting price information, including strategies for measuring the overall cost of services for common episodes of care and the treatment of common chronic diseases
Promote quality and efficiency of care
  • The healthcare delivery system should be structured in a manner to reward those who offer and those who purchase high-quality, cost-effective care
  • All stakeholders—providers, consumers, health plans, and payers—should participate in arrangements that reward high-quality, cost-effective care
HIT indicates health information technology.
Source: Reference 1.
  1. Interoperable health information technology
  2. Measure and publish quality information
  3. Measure and publish price information
  4. Promote quality and efficiency of care.

7.
8.
Neisseria meningitidis carriage was compared in swab specimens of nasopharynx, tonsils, and saliva taken from 258 students. We found a higher yield in nasopharyngeal than in tonsillar swabs (32% vs. 19%, p<0.001). The low prevalence of carriage in saliva swabs (one swab [0.4%]) suggests that low levels of salivary contact are unlikely to transmit meningococci.

Invasive meningococcal disease has a high case-fatality rate and an immediate risk of further cases among household contacts. Public health measures therefore include prompt identification of contacts for chemoprophylaxis (1). One question that commonly arises is whether salivary contact through sharing cups or glasses is an indication for prophylaxis, but the evidence base to inform an answer is weak, and national guidelines are inconsistent (1,2). Although saliva is thought to inhibit meningococcal growth (3), carriage rates in saliva are not known, and swabs to detect carriage are usually taken from tonsils or nasopharynx (4–6). We compared meningococcal isolation rates in swabs of saliva (front of mouth), tonsils, and nasopharynx.

We recruited volunteers among students from two colleges in Hereford, England. After giving written consent, students completed a short questionnaire on age, sex, smoking, recent antimicrobial drug use, and meningococcal vaccine status. Three sterile, dry, cotton-tipped swabs were used to take samples from each volunteer: one from the nasopharynx (through the mouth and swept up behind the uvula), one from both tonsils, and one swab of saliva between the lower gum and lips. Swabs were plated directly onto a selective culture medium primarily designed for the isolation of pathogenic Neisseria species (modified New York City base containing vancomycin, colistin, and trimethoprim), prepared by Taunton Media Services, UK (7). The plates were transported to Hereford Public Health Laboratory, where they were spread once from the primary inoculum and incubated in 7% CO2 at 37°C for 48 h. Putative Neisseria species isolated were sent to the Meningococcal Reference Unit, Manchester Public Health Laboratory, for Neisseria meningitidis confirmation and serologic phenotypic characterization. Data were entered into Excel (Microsoft Corp., Redmond, WA). Carriage rates by site were compared with McNemar's test and by risk factor using chi-square tests. Ethical approval was obtained from the Public Health Laboratory Service Ethics Committee and the Herefordshire District Ethics Committee.

Of the 258 participants, 90 (35%) were identified as carrying Neisseria meningitidis at one or more sites. The site with the highest yield was the nasopharynx (32.2%), whereas tonsillar carriage was 19.4% (Table).
Table. Neisseria meningitidis carriage by swab site (n = 258)

Site of swab | One site positive | Two sites positive | Three sites positive | Total positive | Overall carriage %
Nasopharynx | 39 | 44 | 0 | 83 | 32.2
Tonsils | 6 | 44 | 0 | 50 | 19.4
Saliva | 1 | 0 | 0 | 1 | 0.4
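Because every student contributed paired nasopharyngeal and tonsillar swabs, the site comparison reported above uses McNemar's test on the discordant pairs (39 students positive in the nasopharynx only vs. 6 positive in the tonsils only). The sketch below is an illustrative recomputation of that test from the table's counts, not the authors' original analysis.

```python
# McNemar's test for paired carriage data (nasopharynx vs. tonsils).
# Discordant counts are taken from the table above; illustrative only.
from scipy.stats import chi2

b = 39  # positive in nasopharynx only
c = 6   # positive in tonsils only

# McNemar's chi-square statistic with continuity correction, 1 df
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)

print(f"McNemar chi-square = {stat:.1f}, p = {p_value:.1e}")
# chi-square ~= 22.8, p << 0.001, consistent with the reported 32% vs. 19%, p<0.001
```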
Author contributions: H.O. was responsible for recruiting students, obtaining specimens and swabs, and drafting the paper with J.S.; S.G. and M.M. were responsible for microbiologic processing and analysis; J.S. designed the study and drafted the paper with H.O. All authors contributed to the final draft.

The predominant serogroup among carried strains was B. No serogroup C strains were identified. Of the 44 carriers with positive swabs from both nasopharynx and tonsils, each pair of isolates was considered phenotypically indistinguishable by serogroup, serotype, and sero-subtype. In three of these pairs, one isolate expressed serogroup B, and the paired isolate could not be serogrouped but had identical serotype and sero-subtype.

Of the 258 participants, 116 (45%) were men, and 142 were women. Most (86%) were 18 to 21 years of age. Carriage rates were higher among men than women (54/116 vs. 36/142, p<0.001) and among smokers than nonsmokers (49/90 vs. 51/168, p<0.001). Carriage rates were similar when persons were stratified by age, meningococcal vaccination status, and recent antimicrobial drug use. Although duplicate swabs from the nasopharynx sometimes yield different meningococcal strains (3), none of the paired isolates in this study were distinguishable by phenotype.

The yield of meningococci from nasopharyngeal swabs was nearly twice as high as that from tonsillar swabs. Previous researchers have found a lower sensitivity of nasopharyngeal swabs taken through the nose using small cotton-tipped wire swabs compared with tonsillar swabs taken using larger cotton-tipped swabs (5,6). No previous studies have compared yields from the nasopharynx and tonsils with those from the same type of swabs taken through the mouth. The carriage rate was higher than expected for this age group (4), suggesting that we used efficient swabbing and microbiologic techniques. We suggest that throat swabs to detect meningococcal carriage should always be taken from the nasopharynx (through the mouth whenever practical) and not from the tonsils.

The very low isolation rate from saliva swabs suggests that low levels of salivary contact are unlikely to transmit meningococci (1). This observation is supported by results of a case-control study among university students that found no association between meningococcal acquisition and sharing of glasses or cigarettes (8). On the basis of this evidence, we propose that guidelines for public health management of meningococcal disease should not include low-level salivary contact (e.g., sharing drinks) with a case-patient as an indication for chemoprophylaxis.

9.
Pharmacy Staff Opinions Regarding Diabetic Retinopathy Screenings in the Community Setting: Findings from a Brief Survey     
Miranda G. Law  Stephanie Komura  Ann P. Murchison  Laura T. Pizzi 《American Health & Drug Benefits》2013,6(9):548-552

Background

Diabetic retinopathy is a retinal vascular disorder that affects more than 4.1 million people in the United States. New methods of detecting and ensuring adequate follow-up of this life-altering disease are vital to improving patient outcomes. Wills Eye Hospital and the Centers for Disease Control and Prevention are conducting a collaborative study to initiate a novel diabetic retinopathy screening in the community setting.

Objective

To evaluate the feasibility of a more widespread, large-scale implementation of this novel model of care for diabetic retinopathy screening in the community setting.

Methods

A simple, self-administered survey was distributed to pharmacists, pharmacy technicians, student pharmacists, and Wills Eye Hospital interns. The survey consisted of open-ended questions, and respondents were given 1 week to respond. A total of 22 surveys were distributed, and 16 were completed. The responses were collated and analyzed to assess the feasibility of implementing this novel screening model in the pharmacy.

Results

The response rate to this pilot survey was 72%. The majority of the responding pharmacy staff members indicated that diabetic retinopathy screening in community pharmacies would greatly benefit patients and could improve patient care. However, they also noted barriers to implementing the screening, such as concerns about the cost of carrying out the screenings, the cost of the equipment that would need to be purchased, and the lack of time and shortage of pharmacy staff.

Conclusion

The potential exists for pharmacists to positively influence diabetes care by implementing retinopathy care through the early detection of the disease and reinforcement of the need for follow-up; however, real-world barriers must be addressed before widespread adoption of such a novel model of care becomes feasible.

Diabetic retinopathy is a retinal vascular disorder that often presents asymptomatically in its early stages and can progress to include visual symptoms such as blurry vision, dark or floating spots, vision loss, and complete blindness in its later stages.1–3 Diabetic retinopathy affects more than 4.1 million people in the United States and can severely impact vision.2,4,5 The American Academy of Ophthalmology and the American Diabetes Association recommend that all patients affected by diabetes have fundus examinations at least annually.6,7 Dilated fundus examinations require dilating the pupils to better see into the periphery of the eye. However, lack of time, not understanding the importance of screening for diabetic retinopathy, and lack of access to screening have resulted in 35% to 79% of patients not adhering to current recommendations.7–10

A review of the literature for any type of community pharmacy–based patient intervention, performed in early January of 2013, revealed no published studies on the involvement of community pharmacists in diabetic retinopathy screenings. However, community pharmacy screenings that focused on other diseases have shown promising results for this type of intervention (Table).11–17

Table. Community pharmacy–based screening studies for other diseases

Screening type/study (publication year): Objective; setting; participants, type (N); results

Chlamydia screening study (2010)11: Qualitative analysis of pharmacists' views on chlamydia screening; community pharmacy; pharmacists (26). Pharmacists appreciated the opportunity to expand their practice by providing chlamydia screenings but were hesitant to provide screening to certain women (eg, married women or those in long-term relationships).

Study of diabetes and CV conditions (2006)12: To assess a new screening model for diabetes, hypertension, and dyslipidemia; community pharmacy and non-healthcare settings; high-risk elderly patients (888). Pharmacists were able to identify patients with elevated glucose, cholesterol, and blood pressure.

Screening for PAD (2011)13: To evaluate the feasibility of a community pharmacy pharmacist-initiated PAD screening program; community pharmacy; patients (39). The screening program was effective in increasing PAD recognition and demonstrated program feasibility.

COPD screening study (2012)14: To assess the ability of pharmacists in community pharmacies to accurately conduct COPD screenings; community pharmacy; patients (185). Pharmacists are able to effectively conduct COPD screenings and interpret results.

Screening for CV risk (2010)15: To assess the ability of community pharmacists to conduct CV risk screenings; community pharmacy; patients (655). The results show the ability of a CV screening program to improve diagnoses of high-risk individuals and to help contain the burden of CV disease.

Osteoporosis screening (2010)16: To assess an osteoporosis screening and patient education program in community pharmacies; community pharmacy; patients (262). The osteoporosis screening program doubled the number of patients who proceeded for further testing or treatment.

Osteoporosis screening study (2008)17: To develop an effective community pharmacy screening program for the detection of osteoporosis in women; community pharmacy; women (159). With the benefit of an effective screening program, women who were screened revealed high proportions of lifestyle or medication modifications at 3- or 6-month follow-up.

COPD indicates chronic obstructive pulmonary disease; CV, cardiovascular; PAD, peripheral arterial disease.

The successful outcomes of these studies provide insight into the feasibility of implementing diabetic retinopathy screenings in community pharmacies, showing that quick interventions, such as friendly reminders and on-the-spot counseling, are effective ways to motivate patients to improve their health.

10.
Assessing Proposals for New Global Health Treaties: An Analytic Framework     
Steven J. Hoffman  John-Arne Røttingen  Julio Frenk 《American journal of public health》2015,105(8):1523-1530
We have presented an analytic framework and 4 criteria for assessing when global health treaties have reasonable prospects of yielding net positive effects.

First, there must be a significant transnational dimension to the problem being addressed. Second, the goals should justify the coercive nature of treaties. Third, proposed global health treaties should have a reasonable chance of achieving benefits. Fourth, treaties should be the best commitment mechanism among the many competing alternatives.

Applying this analytic framework to 9 recent calls for new global health treaties revealed that none fully meets the 4 criteria. Efforts aiming to better use or revise existing international instruments may be more productive than advocating new treaties.

The increasingly interconnected and interdependent nature of our world has inspired many proposals for new international treaties addressing various health challenges,1 including alcohol consumption,2 elder care,3 falsified/substandard medicines,4 impact evaluations,5 noncommunicable diseases,6 nutrition,7 obesity,8 research and development (R&D),9 and global health broadly.10 These proposals claim to build on the success of existing global health treaties (Table 1).

TABLE 1

Year Adopted: Treaty Name
1892: International Sanitary Convention
1893: International Sanitary Convention
1894: International Sanitary Convention
1897: International Sanitary Convention
1903: International Sanitary Convention (replacing 1892, 1893, 1894, and 1897 conventions)
1912: International Sanitary Convention (replacing 1903 convention)
1924: Brussels Agreement for Free Treatment of Venereal Disease in Merchant Seamen
1926: International Sanitary Convention (revising 1912 convention)
1933: International Sanitary Convention for Aerial Navigation
1934: International Convention for Mutual Protection Against Dengue Fever
1938: International Sanitary Convention (revising 1926 convention)
1944: International Sanitary Convention (revising 1926 convention)
1944: International Sanitary Convention for Aerial Navigation (revising 1933 convention)
1946: Protocols to Prolong the 1944 International Sanitary Conventions
1946: Constitution of the World Health Organization
1951: International Sanitary Regulations (replacing previous conventions)
1969: International Health Regulations (replacing 1951 regulations)
1972: Biological Weapons Convention
1989: Basel Convention on Transboundary Movements of Hazardous Wastes
1993: Chemical Weapons Convention
1994: World Trade Organization Agreement on the Application of Sanitary and Phytosanitary Measures
1997: Convention on the Prohibition of Anti-Personnel Mines and Their Destruction
1998: Rotterdam Convention on Hazardous Chemicals and Pesticides in International Trade
2000: Cartagena Protocol on Biosafety to the Convention on Biological Diversity
2001: Stockholm Convention on Persistent Organic Pollutants
2003: World Health Organization Framework Convention on Tobacco Control
2005: International Health Regulations (revising 1969 regulations)
2007: United Nations Convention on the Rights of Persons With Disabilities
2013: Minamata Convention on Mercury

Note. Global health treaties are those that were adopted primarily to promote human health.

TABLE 2—

Examples of the Diverse Regulatory Functions Among Existing International Treaties
Positive obligations, domestic: The Framework Convention on Tobacco Control (2003) requires countries to restrict tobacco advertising, promotion, and sponsorship. The World Trade Organization's Agreement on Trade-Related Aspects of Intellectual Property (1994) requires countries to protect patent rights.

Positive obligations, foreign: The International Health Regulations (2005) require countries to report public health emergencies of international concern to the World Health Organization. The Constitution of the World Health Organization (1946) requires countries to pay annual membership dues.

Negative obligations, domestic: The International Covenant on Economic, Social and Cultural Rights (1966) prohibits countries from interfering with a person's right to the highest attainable standard of health. The Stockholm Convention (2001) prohibits countries from producing certain persistent organic pollutants.

Negative obligations, foreign: The Biological Weapons Convention (1972) and the Chemical Weapons Convention (1993) prohibit countries from using biological and chemical weapons, respectively. The Geneva Conventions (1949) prohibit countries from torturing prisoners of war.
But whether international treaties actually achieve the benefits their negotiators intend is highly contested.11–13 There are strong theoretical arguments on both sides, and the available empirical evidence conflicts. A recent review of 90 quantitative impact evaluations of treaties across sectors found that some treaties achieve their intended benefits whereas others do not. From a health perspective, there is currently no quantitative evidence linking ratification of an international treaty directly to improved health outcomes; there is only quantitative evidence linking domestic implementation of policies recommended in treaties with health outcomes. For example, Levy et al. found that tobacco tax increases between 2007 and 2010 in 14 countries to 75% of the final retail price resulted in 7 million fewer smokers and averted 3.5 million smoking-related deaths; the World Health Organization recommended this policy as part of its MPOWER package of tobacco-control measures that was introduced to help countries implement the Framework Convention on Tobacco Control.14 Evidence of treaties' direct impact on other social objectives is extremely mixed.1

Even if prospects for benefits are great, international treaties are still not always appropriate solutions to global health challenges. This is because the potential value of any new treaty depends not only on its expected benefits but also on its costs, risks of harm, and trade-offs.15 Conventional wisdom suggests that international treaties are inexpensive interventions that just need to be written, endorsed by governments, and disseminated. Knowledge of national governance makes this assumption seem reasonable: most countries' lawmaking systems have high fixed costs for basic operations and thereafter incur relatively low marginal costs for each additional legislative act pursued. But at the international level, lawmaking is expensive. Calls for new treaties do not fully consider these costs. Even rarer is adequate consideration of treaties' potentially harmful, coercive, and paternalistic effects and of how treaties represent competing claims on limited resources.11,15

When might global health treaties be worth their many costs? Like all interventions and implementation mechanisms, the answer depends on what these costs entail, the associated risks of harm, the complicated trade-offs involved, and whether these factors are all outweighed by the benefits that can reasonably be expected. We reviewed the important issues at stake, and we have offered an analytic framework and 4 criteria for assessing when new global health treaties should be pursued.

11.
Creating a Transdisciplinary Research Center to Reduce Cardiovascular Health Disparities in Baltimore,Maryland: Lessons Learned     
Lisa A. Cooper  L. Ebony Boulware  Edgar R. Miller  III  Sherita Hill Golden  Kathryn A. Carson  Gary Noronha  Mary Margaret Huizinga  Debra L. Roter  Hsin-Chieh Yeh  Lee R. Bone  David M. Levine  Felicia Hill-Briggs  Jeanne Charleston  Miyong Kim  Nae-Yuh Wang  Hanan Aboumatar  Jennifer P. Halbert  Patti L. Ephraim  Frederick L. Brancati 《American journal of public health》2013,103(11):e26-e38
Cardiovascular disease (CVD) disparities continue to have a negative impact on African Americans in the United States, largely because of uncontrolled hypertension. Despite the availability of evidence-based interventions, their use has not been translated into clinical and public health practice. The Johns Hopkins Center to Eliminate Cardiovascular Health Disparities is a new transdisciplinary research program whose stated goal is to lower the impact of CVD disparities on vulnerable populations in Baltimore, Maryland. By targeting multiple levels of influence on the core problem of disparities in Baltimore, the center leverages academic, community, and national partnerships and a novel structure to support 3 research studies and to train the next generation of CVD researchers. We also share the early lessons learned in the center's design.

Racial disparities in hypertension prevalence, control rates with care, and related cardiovascular complications and mortality are persistent and extensively documented in the United States.1–5 Cardiovascular disease (CVD) accounts for 35% of the excess overall mortality in African Americans, in large part because of hypertension.6,7 Nationwide, eliminating racial disparities in hypertension control would result in more than 5000 fewer deaths from coronary heart disease and more than 2000 fewer deaths from stroke annually in African Americans.8 Despite numerous studies establishing the efficacy of pharmacologic and lifestyle therapies9–12 in African Americans and Whites, blood pressure control rates remain suboptimal, even among persons receiving regular health care.13,14 Barriers to hypertension control exist at multiple levels, including individual patients, health care professionals, the health care system, and patients' social and environmental context. Although successful interventions exist,15–18 these strategies have not been translated into clinical and public health practice.

In Baltimore, Maryland, as in the rest of the United States, CVD, including coronary heart disease and stroke, is the leading cause of death. Approximately 2000 people die from CVD in Baltimore each year; these deaths disproportionately affect African Americans,19 making health disparities from CVD a key factor in the racial discrepancy in life expectancy in the city. Cardiovascular disease is a key reason for the 20-year difference in life expectancy between those who live in more affluent neighborhoods (83 years) and those who reside in poorer neighborhoods (63 years) of Baltimore.20

Table. Hypertension prevalence and life expectancy, by race

Variable | Baltimore | Maryland | United States
African American adults with hypertension, % | 41.3 (21) | 39.2 (22) | 38.6 (23)
White adults with hypertension, % | 28.6 (21) | 25.1 (22) | 32.3 (23)
Life expectancy at birth (2009), African Americans, y | 71.5 (24) | 75.5 (24) | 74.5 (25)
Life expectancy at birth (2009), Whites, y | 76.5 (24) | 79.7 (24) | 78.8 (25)

Numbers in parentheses after each value are reference citations.

12.
A Life Course Perspective on How Racism May Be Related to Health Inequities     
Gilbert C. Gee  Katrina M. Walsemann  Elizabeth Brondolo 《American journal of public health》2012,102(5):967-974

13.
Duration of Immunity to Norovirus Gastroenteritis     
Kirsten Simmons  Manoj Gambhir  Juan Leon  Ben Lopman 《Emerging infectious diseases》2013,19(8):1260-1267
The duration of immunity to norovirus (NoV) gastroenteritis has been believed to be from 6 months to 2 years. However, several observations are inconsistent with this short period. To gain better estimates of the duration of immunity to NoV, we developed a mathematical model of community NoV transmission. The model was parameterized from the literature and also fit to age-specific incidence data from England and Wales by using maximum likelihood. We developed several scenarios to determine the effect of unknowns regarding transmission and immunity on estimates of the duration of immunity. In the various models, duration of immunity to NoV gastroenteritis was estimated at 4.1 (95% CI 3.2–5.1) to 8.7 (95% CI 6.8–11.3) years. Moreover, we calculated that children (<5 years) are much more infectious than older children and adults. If a vaccine can achieve protection for the duration of natural immunity indicated by our results, its potential health and economic benefits could be substantial.

Key words: Norovirus, modeling, mathematical model, immunity, incidence, vaccination, vaccine development, viruses, enteric infections, acute gastroenteritis

Noroviruses (NoVs) are the most common cause of acute gastroenteritis (AGE) in industrialized countries. In the United States, NoV causes an estimated 21 million cases of AGE (1), 1.7 million outpatient visits (2), 400,000 emergency care visits, 70,000 hospitalizations (3), and 800 deaths annually across all age groups (4). Although the highest rates of disease are in young children, infection and disease occur throughout life (5), despite an antibody seroprevalence >50%, and infection rates approach 100% in older adults (6,7).

Frequently cited estimates of the duration of immunity to NoV are based on human challenge studies conducted in the 1970s. In the first, Parrino et al. challenged volunteers with Norwalk virus (the prototype NoV strain) inoculum multiple times. Results suggested that immunity to Norwalk AGE lasts from ≈2 months to 2 years (8). A subsequent study with a shorter challenge interval suggested that immunity to Norwalk virus lasts for at least 6 months (9). In addition, the collection of volunteer studies together demonstrates that antibodies against NoV may not confer protection and that protection from infection (serologic response or viral shedding) is harder to achieve than protection from disease (defined as AGE symptoms) (10–14). That said, most recent studies have reported some protection from illness and infection in association with antibodies that block binding of virus-like particles to histo-blood group antigen (HBGA) (13,14). Other studies have also associated genetic resistance to NoV infections with mutations in the 1,2-fucosyltransferase (FUT2) gene (or "secretor" gene) (15). Persons with a nonsecretor gene (FUT2−/−) represent as much as 20% of the European population. Challenge studies have also shown that recently infected volunteers are susceptible to heterologous strains sooner than to homotypic challenge, indicating limited cross-protection (11).

One of many concerns with all classic challenge studies is that the virus dose given to volunteers was several thousand-fold greater than the small amount of virus capable of causing human illness (estimated as 18–1,000 virus particles) (16). Thus, immunity to a lower challenge dose, similar to what might be encountered in the community, might be more robust and broadly protective than the protection against the artificial doses encountered in these volunteer studies.
Indeed, Teunis et al. have clearly demonstrated a dose-response relationship whereby persons challenged with a higher NoV dose have substantially greater illness risk (16).

Furthermore, in contrast with the results of early challenge studies, several observations, taken together, are inconsistent with a duration of immunity on the scale of months. First, the incidence of NoV in the general population has been estimated in several countries as ≈5% per year, with substantially higher rates in children (5). Second, Norwalk virus (GI.1) volunteer studies conducted over 3 decades indicate that approximately one third of genetically susceptible persons (i.e., secretor-positive persons with a functional FUT2 gene) are immune (18,20,22). The point prevalence of immunity in the population (i.e., population immunity) can be approximated by the incidence of infection (or exposure) multiplied by the duration of immunity. If duration of immunity is truly <1 year and incidence is 5% per year, <5% of the population should have acquired immunity at any given time. However, challenge studies show population immunity levels on the order of 30%–45%, suggesting that our understanding of the duration of immunity is incomplete (8,11,17,18). HBGA-mediated lack of susceptibility may play a key role, but given the high seroprevalence of NoV antibodies and the broad diversity of human HBGAs and NoVs, it cannot solely explain the discrepancy between estimates of duration of immunity and observed NoV incidence. Moreover, population immunity levels may be driven by the acquisition of immunity in fully susceptible persons or by boosting of immunity among those previously exposed.
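To make the arithmetic above concrete, here is a back-of-envelope check in Python using only the incidence and duration figures quoted in this abstract; it is a minimal sketch of the stated approximation, not part of the authors' model:

```python
# Equilibrium approximation stated above: population immunity is roughly
# the annual incidence of infection multiplied by the mean duration of
# immunity. Figures are those quoted in the text, not new data.
incidence = 0.05                          # ~5% of the population infected per year
for duration_years in (1.0, 4.1, 8.7):    # <1 y (classic view) vs. the model estimates
    immune = incidence * duration_years
    print(f"duration {duration_years:>4} y -> immune fraction ~ {immune:.0%}")

# Only the multi-year durations give immune fractions approaching the
# 30%-45% population immunity observed in challenge studies.
```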

Table 1

Summary of literature review of Norwalk virus volunteer challenge studies*
Study | All: no. challenged; no. (%) infected; no. (%) AGE | Secretor positive: no. challenged; no. (%) infected; no. (%) AGE | Secretor negative: no. challenged; no. (%) infected | Strain
Dolin 1971 (10) | 12; 9 (75); – | – | – | SM
Wyatt 1974 (11)† | 23; 16 (70); – | – | – | NV, MC, HI
Parrino 1977 (8)† | 12; 6 (50); – | – | – | NV
Johnson 1990 (17)† | 42; 31 (74); 25 (60) | – | – | NV
Graham 1994 (12) | 50; 41 (82); 34 (68) | – | – | NV
Lindesmith 2003 (18) | 77; 34 (44); 21 (27) | 55; 35 (64); 21 (38) | 21; 0 | NV
Lindesmith 2005 (19) | 15; 9 (60); 7 (47) | 12; 8 (67); – | 3; 1 (33) | SM
Atmar 2008 (20) | 21; 16 (76); 11 (52) | 21; 16 (76); 11 (52) | – | NV
Leon 2011 (21)‡ | 15; 7 (47); 5 (33) | 15; 7 (47); 5 (33) | – | NV
Atmar 2011 (14)‡ | 41; 34 (83); 29 (71) | 41; 34 (83); 29 (71) | – | NV
Seitz 2011 (22) | 13; 10 (77); 10 (77) | 13; 10 (77); 10 (77) | 1 (5.6) | NV
Frenck 2012 (23) | 40; 17 (42); 12 (30) | 23; 16 (70); 12 (52.1) | 17; – | GII.4
*AGE, acute gastroenteritis; SM, Snow Mountain virus; NV, Norwalk virus; MC, Montgomery County virus; HI, Hawaii virus; GII.4, genogroup 2 type 4.
†Only includes initial challenge, not subsequent re-challenge.
‡Only includes placebo or control group.

In this study, we aimed to obtain better estimates of the duration of immunity to NoV by developing a community-based transmission model that represents the transmission process and natural history of NoV, including the waning of immunity. The model distinguishes between persons susceptible to disease and those susceptible to infection but not disease. We fit the model to age-specific incidence data from a community cohort study. However, several factors related to NoV transmission remain unknown (e.g., the role that asymptomatic persons who shed virus play in transmission). Therefore, we constructed and fit a series of 6 models representing the variety of possible infection processes to obtain a more robust estimate of the duration of immunity. This approach does not consider multiple strains or the emergence of new variants, so we are effectively estimating the minimum duration of immunity in the absence of major strain changes.
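For readers who want to see the mechanics, below is a minimal SIRS-type sketch of the waning-immunity logic described above. It is not the authors' six fitted models: the structure is the textbook susceptible-infectious-recovered-susceptible system, and every parameter value is an assumption chosen for illustration, not a fitted estimate.

```python
import numpy as np
from scipy.integrate import odeint

# Minimal SIRS sketch: immunity wanes at rate omega = 1/D, where D is the
# mean duration of immunity. All values are illustrative assumptions.
beta = 0.75          # transmission rate per day (assumed)
gamma = 1 / 2.0      # recovery rate: ~2-day infectious period (assumed)
D_years = 5.1        # mean duration of immunity in years (assumed)
omega = 1 / (D_years * 365)

def sirs(y, t):
    s, i, r = y
    return [-beta * s * i + omega * r,   # susceptible
            beta * s * i - gamma * i,    # infectious
            gamma * i - omega * r]       # recovered (immune)

t = np.linspace(0, 50 * 365, 50 * 365)   # 50 years at daily resolution
s, i, r = odeint(sirs, [0.99, 0.01, 0.0], t).T

# Near endemic equilibrium, annual incidence ~ gamma * i * 365, and the
# immune fraction r tracks incidence x duration, as argued above.
print(f"annual incidence ~ {gamma * i[-1] * 365:.1%}")
print(f"immune fraction  ~ {r[-1]:.1%}")
```

With these assumed values, the long-run incidence lands near the ≈5% per year cited above while the immune fraction sits near one third, which is the consistency argument the paper formalizes.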

14.
Effects of Medicare Part D on Disparity Implications of Medication Therapy Management Eligibility Criteria     
Junling Wang  Yanru Qiao  Ya-Chen Tina Shih  JoEllen Jarrett Jamison  Christina A. Spivey  Liyuan Li  Jim Y. Wan  Shelley I. White-Means  Samuel Dagogo-Jack  William C. Cushman  Marie Chisholm-Burns 《American Health & Drug Benefits》2014,7(6):346-358

15.
Spousal Violence in 5 Transitional Countries: A Population-Based Multilevel Analysis of Individual and Contextual Factors     
Leyla Ismayilova 《American journal of public health》2015,105(11):e12-e22
Objectives. I examined the individual- and community-level factors associated with spousal violence in post-Soviet countries.

Methods. I used population-based data from the Demographic and Health Survey conducted between 2005 and 2012. My sample included currently married women of reproductive age (n = 3932 in Azerbaijan, n = 4053 in Moldova, n = 1932 in Ukraine, n = 4361 in Kyrgyzstan, and n = 4093 in Tajikistan). I selected respondents using stratified multistage cluster sampling. Because of the nested structure of the data, multilevel logistic regressions for survey data were fitted to examine factors associated with spousal violence in the last 12 months.

Results. Partner’s problem drinking was the strongest risk factor associated with spousal violence in all 5 countries. In Moldova, Ukraine, and Kyrgyzstan, women with greater financial power than their spouses were more likely to experience violence. Effects of community economic deprivation and of empowerment status of women in the community on spousal violence differed across countries. Women living in communities with a high tolerance of violence faced a higher risk of spousal violence in Moldova and Ukraine. In more traditional countries (Azerbaijan, Kyrgyzstan, and Tajikistan), spousal violence was lower in conservative communities with patriarchal gender beliefs or higher financial dependency on husbands.

Conclusions. My findings underscore the importance of examining individual risk factors in the context of community-level factors and developing individual- and community-level interventions.

Understanding factors that contribute to intimate partner violence (IPV) is essential to reducing it and minimizing its deleterious effect on women’s functioning and health. Most evidence comes from studies conducted in western industrialized countries or in the developing countries of Africa, Latin America, and Asia1–5; there is scarce knowledge available on IPV in the transitional countries of the former Soviet Union (fSU) region,6 which represents different geopolitical, socioeconomic, and cultural environments.7 Studies from other countries often demonstrate mixed findings regarding key risk factors for spousal violence, which suggests that their effects are context specific.8–11 An examination of cross-country similarities and differences within the fSU region may contribute to the understanding of risk factors for spousal violence in a different sociocultural context.

As a part of the Soviet Union for approximately 70 years until its collapse in 1991, the fSU countries shared similar sociopolitical contexts,12 with a legacy of well-established public services, stable jobs, and high levels of education dating back to the Soviet era.13 The political turmoil and economic crisis of the 1990s following the collapse of the Soviet Union and the transition from a socialist to a market economy resulted in high unemployment, deterioration of public services, and growth in poverty and social inequalities, which increased family stress.14

My study focused on 5 countries of the fSU that included an additional Domestic Violence (DV) module in the Demographic and Health Survey (DHS), which presented the first opportunity for cross-country comparison in this region using recent nationally representative data. The DHS survey was conducted in 2 Eastern European countries of the fSU (Moldova and Ukraine) and 2 countries located in the Central Asian region (Kyrgyz Republic and Tajikistan); the Caucasus region was represented by Azerbaijan.
Previous DHS and other nationally representative studies from the fSU region included only individual-level predictors of violence, without examining the role of contextual factors, and focused predominantly on the Eastern European countries of the fSU.8,15–17

Despite their shared Soviet background, the 5 countries differ in terms of gender norms and current socioeconomic situations (Table 1).7 The Eastern European countries (Ukraine and Moldova) share relatively more egalitarian gender norms, whereas Azerbaijan, Tajikistan, and Kyrgyzstan, which are secular Muslim nations, have more traditional values and conservative norms. Women in Kyrgyzstan fall in the middle because of a historically large Russian-speaking population.18–21 Nevertheless, Azerbaijan, Kyrgyzstan, and Tajikistan—where the female literacy rate is close to 100% and polygamous marriages are illegal22—differ from many countries with a traditional Muslim culture because of a history of socialistic ideology, suppression of religion, and universal public education. Although Azerbaijan and Ukraine have exhibited significant economic growth because of rich energy resources, Moldova remains one of the poorest countries in Eastern Europe,23 and Tajikistan maintains the status of the poorest republic in the entire fSU region.
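To make the two-level design concrete, the following is a minimal simulation sketch of the nested structure (women within communities) that motivates a multilevel logistic model. All variable names and effect sizes here are invented for illustration; they are not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Level 2: communities, each with an unobserved random intercept
# (contextual effect). Level 1: women nested within communities.
n_comm, n_per = 200, 25
comm = np.repeat(np.arange(n_comm), n_per)
u = rng.normal(0.0, 0.8, n_comm)              # community random intercepts

# Hypothetical level-1 covariate: partner's problem drinking (the
# strongest individual risk factor reported in the abstract).
drinks = rng.binomial(1, 0.3, n_comm * n_per)

# Log-odds of reporting spousal violence in the past 12 months
# (assumed coefficients, for illustration only).
logit = -1.5 + 1.2 * drinks + u[comm]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Clustering check: community-level prevalence varies far more than the
# binomial expectation under independence, which is what justifies a
# multilevel model over ordinary logistic regression.
p_comm = y.reshape(n_comm, n_per).mean(axis=1)
p_bar = y.mean()
print(f"between-community variance: {p_comm.var():.4f}")
print(f"expected under independence: {p_bar * (1 - p_bar) / n_per:.4f}")
```

Fitting such a model to the real survey data would additionally require the stratification and sampling weights described above; a random-intercept logistic (mixed-effects) GLM is the standard tool for the estimation step.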

TABLE 1—

Selected Country-Level Indicators for 5 Former Soviet Union Countries: 2005–2012
(Regions: Eastern Europe = Moldova, Ukraine; Caucasus = Azerbaijan; Central Asia = Kyrgyzstan, Tajikistan)

Country-Level Indicators | Moldova | Ukraine | Azerbaijan | Kyrgyzstan | Tajikistan
Population, millions | 3.6 | 45.5 | 9.5 | 5.9 | 8.2
Official language(s) | Romanian | Ukrainian | Azerbaijani | Kyrgyz, Russian | Tajik
Area, km2 | 33 846 | 603 500 | 86 600 | 199 951 | 142 550
Country’s income category | Lower middle | Lower middle | Upper middle | Lower middle | Low
GNI per capita, Atlas method, US$ | 2 470 | 3 960 | 7 350 | 1 210 | 990
Human development index | 0.663 (medium) | 0.734 (high) | 0.747 (high) | 0.628 (medium) | 0.607 (medium)
Female adult literacy, % | 99 | 100 | 100 | 99 | 100
Note. GNI = gross national income; US$ = United States dollars.
Source: World Development Indicators, World Bank, 2013.

Several theories explain IPV through single factors: poverty-induced stress,24 weakened impulse control because of substance use,25,26 or learned aggressive or victimized behavior from the family of origin.27,28 Feminist theorists, however, have argued that poverty, stress, and alcohol abuse do not explain why violence disproportionally occurs against women. Instead, feminist theories suggest that IPV results from historical power differentials by gender, which have been reinforced through male superiority, authority, and socialization.29–32 However, feminist theory alone does not explain why people act differently even if they grew up in the same social environment and were exposed to similar gender norms.33 Thus, Heise’s ecological model of IPV,33 adopted by the World Health Organization (WHO) as a guiding framework and modified by Koenig et al.,4 combines individual theories explaining IPV and emphasizes the importance of contextual-level factors.

Empirical studies in the United States, Bangladesh, Colombia, and Nigeria demonstrated that certain communities—not just individuals or families—are affected by IPV more than others, positing that violence might be a function of community-level characteristics and attitudes, and not only individual beliefs and behaviors.5,34–36 Community socioeconomic development, domestic violence norms, and community-level gender inequalities might shape individual women’s experiences.4,5 Inclusion of community-level variables might change the effects of individual factors, exemplifying the importance of conducting a 2-level analysis.4,5,34,35

Thus, I examined the role of individual-level factors (socioeconomic status, family risk factors, and women’s empowerment status within the household) and contextual factors (community poverty and women’s empowerment status at the community level) associated with current spousal violence in population-based samples in 5 fSU countries: Azerbaijan, Moldova, Ukraine, Kyrgyzstan, and Tajikistan. More specifically, I aimed to examine whether contextual factors had an effect on spousal violence above and beyond women’s individual-level characteristics, and whether effects remained significant while adjusting for individual and contextual factors simultaneously.

16.
Current Therapies and Emerging Drugs in the Pipeline for Type 2 Diabetes     
Quang T. Nguyen  Karmella T. Thomas  Katie B. Lyons  Loida D. Nguyen  Raymond A. Plodkowski 《American Health & Drug Benefits》2011,4(5):303-311

Background

Diabetes is a global epidemic that affects 347 million people worldwide and 25.8 million adults in the United States. In 2007, the total estimated cost associated with diabetes in the United States was $174 billion. In 2009, $16.9 billion was spent on drugs for diabetes. Global sales of diabetes pharmaceuticals totaled $35 billion in 2010 and are expected to rise to $48 billion by 2015. Despite such considerable expenditures, in 2000 only 36% of patients with type 2 diabetes in the United States achieved glycemic control, defined as hemoglobin A1c <7%.

Objective

To review some of the most important drug classes currently in development for the treatment of type 2 diabetes.

Discussion

Despite the 13 classes of antidiabetes medications currently approved by the US Food and Drug Administration (FDA) for the treatment of type 2 diabetes, the majority of patients with this chronic disease do not achieve appropriate glycemic control with these medications. Many new drug classes currently in development for type 2 diabetes appear promising in early stages of development, and some of them represent novel approaches to treatment, with new mechanisms of action and a low potential for hypoglycemia. Among these promising pharmacotherapies are agents that target the kidney, liver, and pancreas as a significant focus of treatment in type 2 diabetes. These investigational agents may offer new approaches to controlling glucose levels and improving outcomes in patients with diabetes. This article focuses on several new classes, including the sodium-glucose cotransporter-2 inhibitors (which are furthest along in development); 11beta-hydroxysteroid dehydrogenase type 1 inhibitors (some of which are now in phase 2 trials); glycogen phosphorylase inhibitors; glucokinase activators; G protein–coupled receptor 119 agonists; protein tyrosine phosphatase 1B inhibitors; and glucagon-receptor antagonists.

Conclusion

Despite the abundance of FDA-approved therapeutic options for type 2 diabetes, the majority of American patients with diabetes are not achieving appropriate glycemic control. The development of new options with new mechanisms of action may help improve outcomes and reduce the clinical and cost burden of this condition.

Diabetes is a chronic, progressive disease that affects approximately 347 million people worldwide.1 In the United States, 25.8 million Americans have diabetes, and another 79 million US adults aged ≥20 years are considered to have prediabetes.2 Diabetes is the leading cause of kidney failure, nontraumatic lower-limb amputations, and new cases of blindness among adults in the United States. It is a major cause of heart disease and stroke and is the seventh leading cause of death among US adults.2

The total estimated cost of diabetes in the United States in 2007 was $174 billion,2 and between 2007 and 2009, the estimated cost attributable to pharmacologic intervention in the treatment of diabetes increased from $12.5 billion to $16.9 billion.3–5 Global sales of diabetes medications totaled $35 billion in 2010 and could rise to $48 billion by 2015, according to the drug research company IMS Health.6,7 In 2009, $1.1 billion was spent on diabetes research by the National Institutes of Health.8 Despite these staggering costs, there are currently still no proven strategies to prevent this disease or its serious complications.

KEY POINTS

  • Approximately 25.8 million adult Americans have diabetes. In 2007, diabetes cost the United States an estimated $174 billion, and in 2009, $16.9 billion was spent on antidiabetes medications.
  • Nevertheless, the majority of American patients with diabetes do not achieve glycemic control with the currently available pharmacotherapies.
  • Several novel and promising medications are currently in development, targeting the kidney, liver, and pancreas in the treatment of type 2 diabetes.
  • Many of these investigational agents involve new mechanisms of action that offer new therapeutic targets and may help improve glucose control in patients with diabetes.
  • The new drug classes in development include the sodium-glucose cotransporter-2 inhibitors (which are furthest along in development); 11beta-hydroxysteroid dehydrogenase type 1 inhibitors; glycogen phosphorylase inhibitors; glucokinase activators; G protein–coupled receptor 119 agonists; protein tyrosine phosphatase 1B inhibitors; and glucagon-receptor antagonists.
  • Several of these new classes are associated with a low potential for hypoglycemia, representing a potentially new approach to diabetes drug therapy.
  • ▸ The development of new options with new mechanisms of action may potentially help improve patient outcomes and reduce the clinical and cost burden of this chronic disease.
According to the 1999–2000 National Health and Nutrition Examination Survey, only 36% of patients with type 2 diabetes achieve glycemic control—defined as hemoglobin (Hb) A1c <7%—with currently available therapies.9 Lifestyle modification remains the most important and effective way to treat diabetes; however, the majority of patients with type 2 diabetes are unable to maintain such a rigid lifestyle regimen. For most patients with type 2 diabetes, pharmacologic intervention will therefore be needed to maintain glycemic control.2 Tables 1 and 2 list the 13 classes of medication currently approved by the US Food and Drug Administration (FDA) for the treatment of type 2 diabetes. Despite this abundance of pharmacotherapies, new medications with different mechanisms of action or new approaches to therapy are needed to improve patient outcomes and reduce the clinical and cost burden of this serious condition.

Table 1

FDA-Approved Antidiabetic Agents for the Treatment of Type 2 Diabetes
Class | Drug (brand) | Mechanism of actiona | HbA1c reduction, %b | Effect on weight | Adverse effectsa | Precautions/Comments
Alpha-glucosidase inhibitors | Acarbose (Precose), Miglitol (Glyset) | Delay complex carbohydrate absorption | 0.5–0.8 | Weight neutral | Flatulence, diarrhea, abdominal pain | Titrate slowly to minimize gastrointestinal effects
Amylin analog | Pramlintide (Symlin) | Acts in conjunction with insulin to prolong gastric emptying, reduce postprandial glucose secretion, promote appetite suppression | 0.5–1 | Weight loss | Nausea, vomiting | Black box warning: coadministration with insulin may induce severe hypoglycemia; injectable drug
Biguanide | Metformin (Glucophage) | Decreases hepatic glucose output; increases peripheral glucose uptake | 1–2 | Weight neutral | Nausea, vomiting, diarrhea, flatulence | Taken with meals; avoid in patients with renal or hepatic impairment or with CHF, because of increased risk for lactic acidosis
Bile acid sequestrant | Colesevelam (Welchol) | Binds to intestinal bile acids; mechanism of action for diabetes control unknown | 0.5 | Weight neutral | Constipation, dyspepsia, nausea | –
DPP-4 inhibitors | Sitagliptin (Januvia), Saxagliptin (Onglyza), Linagliptin (Tradjenta) | Slow inactivation of incretin hormones | 0.5–0.8 | Weight neutral | Not clinically significant | –
Dopamine agonist | Bromocriptine (Parlodel) | Mechanism of action for diabetes control unknown | 0.5–0.7 | Weight neutral | Nausea, vomiting, dizziness, headache, diarrhea | –
Incretin mimetics | Exenatide (Byetta), Liraglutide (Victoza) | Stimulate insulin secretion, slow gastric emptying, suppress glucagon release, induce satiety | 0.5–1 | Weight loss | Nausea, vomiting, diarrhea | Acute pancreatitis has been reported during postmarketing experience; injectable drugs
Insulin preparations (rapid-, short-, intermediate-, long-acting, premixed; see Table 2) | Various | Exogenous insulin | Up to 3.5 | Weight gain | Hypoglycemia | –
Nonsulfonylurea secretagogues | Nateglinide (Starlix), Repaglinide (Prandin) | Stimulate insulin secretion from the pancreas | 1–1.5 | Weight gain | Hypoglycemia | Taken with meals to control rapid onset
First-generation sulfonylureas | Chlorpropamide (Diabinese), Tolazamide (Tolinase), Tolbutamide (Orinase) | Stimulate insulin secretion from the pancreas | 1–2 | Weight gain | Hypoglycemia | Use of these agents has declined in response to adverse effects and unpredictable results
Second-generation sulfonylureas | Glimepiride (Amaryl), Glipizide (Glucotrol), Glyburide (Micronase, Diabeta, Glynase) | Stimulate insulin secretion from the pancreas | 1–2 | Weight gain | Hypoglycemia | –
Thiazolidinediones | Pioglitazone (Actos), Rosiglitazone (Avandia) | Increase peripheral tissue insulin sensitivity | 0.5–1.4 | Weight gain | Edema | Black box warning: these agents can cause or exacerbate CHF; contraindicated in patients with NYHA class III or IV heart failure

aLacy CF, et al, eds. Drug Information Handbook. 18th ed. Hudson, OH: Lexi-Comp; 2009–2010.
bNathan DM, et al. Management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy. Diabetes Care. 2006;29:1963–1972.
CHF indicates congestive heart failure; DPP, dipeptidyl peptidase; HbA1c, glycated hemoglobin; NYHA, New York Heart Association.

Table 2

Insulin Preparations
Drug (brand) | Onset timea | Peak timea | Durationa | Comments

Rapid-acting
Insulin aspart (NovoLog) | 10–20 min | 1–3 hr | 3–5 hr | Administer within 15 min before or immediately after meals
Insulin glulisine (Apidra) | 25 min | 45–48 min | 4–5 hr | Administer within 15 min before or immediately after meals
Insulin lispro (Humalog) | 15–30 min | 0.5–2.5 hr | 3–6.5 hr | Administer within 15 min before or immediately after meals

Short-acting
Insulin regular (Novolin R, Humulin R) | 30–60 min | 1–5 hr | 6–10 hr | Administer 30 min before meals

Intermediate-acting
Insulin NPH (Novolin N, Humulin N) | 1–2 hr | 6–14 hr | 16–24+ hr | Cloudy appearance

Long-acting
Insulin detemir (Levemir) | 1.1–2 hr | 3.2–9.3 hr | 5.7–24 hr (dose-dependent) | Do not mix with other insulins
Insulin glargine (Lantus) | 1.1 hr | None | 24 hr | –

Premixed
70% insulin aspart protamine/30% insulin aspart (NovoLog Mix 70/30) | 10–20 min | 1–4 hr | 24 hr | Cloudy appearance; administer within 15 min before meals
75% insulin lispro protamine/25% insulin lispro (Humalog Mix 75/25) | 15–30 min | 2 hr | 22 hr | –
50% insulin lispro protamine/50% insulin lispro (Humalog Mix 50/50) | 15–30 min | 2 hr | 22 hr | –
70% insulin NPH/30% insulin regular (Humulin 70/30, Novolin 70/30) | 30 min | 1.5–12 hr | 24 hr | Cloudy appearance; administer within 30 min before meals
50% insulin NPH/50% insulin regular (Humulin 50/50) | 30–60 min | 1.5–4.5 hr | 7.5–24 hr | –
aMcEvoy GK, ed. American Society of Health-System Pharmacists Drug Information. Bethesda, MD; 2008.
NPH indicates neutral protamine Hagedorn.

Indeed, the number of diabetes medications for type 2 diabetes is expected to grow in the next few years, considering the many promising investigational therapeutic options currently in development that may gain FDA approval in the future. This article reviews some of the therapies that are currently being tested and may soon become new options for the treatment of type 2 diabetes (see the table below).

Drug category | Mechanism of action | Comments
Sodium-glucose cotransporter-2 inhibitors | Inhibit reabsorption of glucose at the proximal tubule of the kidney, thereby decreasing systemic hyperglycemia | Low potential for hypoglycemia; furthest along in clinical trials
11beta-hydroxysteroid dehydrogenase type 1 inhibitors | Inhibit an enzyme responsible for activating cortisone to cortisol, which minimizes antiglycemic effects of cortisol | Low potential for hypoglycemia; all drugs currently in phase 2 clinical trials
Glycogen phosphorylase inhibitors | Inhibit enzymes responsible for hepatic gluconeogenesis | Still very early in development; oral agents have shown promising results in animals and humans
Glucokinase activators | Activate a key enzyme to increase hepatic glucose metabolism | Several drugs are currently in phase 2 clinical trials
G protein–coupled receptor 119 agonists | Mechanisms unknown; activation induces insulin release and increases secretion of glucagon-like peptide 1 and gastric inhibitory peptide | Still very early in development; animal data are available
Protein tyrosine phosphatase 1B inhibitors | Increase leptin and insulin release | Still very early in development; a potential weight-loss medication
Glucagon-receptor antagonists | Block glucagon from binding to hepatic receptors, thereby decreasing gluconeogenesis | Low potential for hypoglycemia

17.
Lessons Learned From Evaluations of California's Statewide School Nutrition Standards     
Gail Woodward-Lopez  Wendi Gosliner  Sarah E. Samuels  Lisa Craypo  Janice Kao  Patricia B. Crawford 《American journal of public health》2010,100(11):2137-2145
Objectives. We assessed the impact of legislation that established nutrition standards for foods and beverages that compete with reimbursable school meals in California.

Methods. We used documentation of available foods and beverages, sales accounts, and surveys of and interviews with students and food service workers to conduct 3 studies measuring pre- and postlegislation food and beverage availability, sales, and student consumption at 99 schools.

Results. Availability of nutrition standard–compliant foods and beverages increased. Availability of noncompliant items decreased, with the biggest reductions in sodas and other sweetened beverages, regular chips, and candy. At-school consumption of some noncompliant foods dropped; at-home consumption of selected noncompliant foods did not increase. Food and beverage sales decreased at most venues, and food service à la carte revenue losses were usually offset by increased meal program participation. Increased food service expenditures outpaced revenue increases.

Conclusions. Regulation of competitive foods improved school food environments and student nutritional intake. Improvements were modest, partly because many compliant items are fat- and sugar-modified products of low nutritional value. Additional policies and actions are needed to achieve more substantive improvements in school nutrition environments and student nutrition and health.

The current obesity epidemic in the United States has been associated with environmental factors such as the proliferation of unhealthy foods in schools and neighborhoods, as well as promotion of unhealthy foods in media environments.1–4 An effective way to support children in being active and eating healthfully is to change institutional practices within schools by improving physical education and the nutritional value and quality of foods served.5,6

Schools participating in the federally reimbursed National School Lunch Program and School Breakfast Program serve meals that must meet federal nutrition guidelines. However, foods that are not part of the meal programs are subject to only minimal federal regulation, and these “competitive” foods have become increasingly widespread in schools over the last 40 years.7 Sold throughout schools in vending machines, school stores, snack bars, and at fundraisers, competitive foods and beverages are of lower nutritional quality and are typically high in added sugars, salt, and fat.
Common examples of competitive foods include soft drinks and other sweetened beverages, potato chips, candy, cookies, and pastries.8–11

In an effort to combat childhood obesity, state and local policymakers have recently begun to regulate competitive school food offerings by enacting stricter school nutrition standards.12 These efforts were reinforced by provisions in the Child Nutrition and WIC Reauthorization Act of 2004, which required school districts receiving federal meal program funding to enact wellness policies—including guidelines for all foods and beverages served—by the 2006–2007 school year.13 The wellness policies of 92 of 100 large school districts polled by the School Nutrition Association in 2007 included nutrition standards limiting times or offerings of competitive foods and beverages in school à la carte services, stores, and vending machines.14 Although the effects of state and local regulations of competitive foods are only beginning to be evaluated,15 emerging evidence suggests that school policies that decrease access to competitive foods of limited nutritional value are associated with less frequent student consumption of these foods during the school day.16,17

In California, Senate Bill 12 (SB 12), which applied nutrition standards to competitive foods sold in K–12 schools, took effect in July 2007. The law imposed the following limits on foods in secondary schools18:

Individually sold snacks must contain no more than:
  • 35% of calories from fat (with some exceptions, such as legumes, nuts, and eggs);
  • 10% of calories from saturated fat (excluding eggs and cheese);
  • 35% sugar by weight (excluding fruits and vegetables); and
  • a total of 250 calories.
Individually sold entrées must contain no more than 36% of calories from fat and 400 calories per entrée. (A compliance sketch for the snack limits follows the beverage list below.) At elementary schools, the only competitive foods allowed are individually sold portions of nuts, nut butters, seeds, eggs, cheese packaged for individual sale, fruit, vegetables that have not been deep-fried, legumes, and dairy or whole-grain foods that meet the nutrient limits described previously and contain no more than 175 calories.

A second law, SB 965, limited the competitive beverages that could be offered during the school day.18 The limits went into full effect in July 2007 for elementary and middle schools; at high schools, 50% of beverages had to comply by July 2007, and 100% had to comply by July 2009. The law limits competitive beverages to the following:
  • fruit-based and vegetable-based drinks that are at least 50% fruit juice without added sweeteners;
  • drinking water without added sweeteners;
  • milk products and nondairy milks that have no more than 2% fat and 28 g of total sugars per 8 oz; and
  • electrolyte replacement beverages with no caffeine and no more than 42 g of added sweetener per 20 oz (not allowed at elementary schools).
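The four snack limits quoted above reduce to simple threshold checks. Here is a minimal Python sketch; the function name and nutrient field names are hypothetical conveniences, and the boolean flags stand in for the statutory exemptions noted in the list (e.g., nuts for the fat limit, fruit for the sugar limit):

```python
# Minimal check of the SB 12 limits for individually sold snacks in
# secondary schools. All names are illustrative, not from the statute.
def snack_compliant(calories, fat_calories, sat_fat_calories,
                    sugar_g, weight_g,
                    fat_exempt=False, sat_fat_exempt=False, sugar_exempt=False):
    if calories > 250:                                   # total-calorie cap
        return False
    if not fat_exempt and fat_calories > 0.35 * calories:        # fat limit
        return False
    if not sat_fat_exempt and sat_fat_calories > 0.10 * calories:  # sat-fat limit
        return False
    if not sugar_exempt and sugar_g > 0.35 * weight_g:   # sugar by weight
        return False
    return True

# A 150-calorie, 40 g snack with 45 fat calories (30%), 10 saturated-fat
# calories (6.7%), and 12 g sugar (30% by weight) passes all four checks.
print(snack_compliant(150, 45, 10, 12, 40))   # True
```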
Three studies—the Healthy Eating, Active Communities study (HEAC), the High School Study (HSS), and the School Wellness Study (SWS)—all conducted by the authors of this article, assessed different aspects of the implementation and impact of California's school nutrition standards in diverse settings, using the data collection methods summarized below.

On-site observations: One-day site visits were made to each school. Information on all competitive foods and beverages available for sale was documented by trained staff who used standardized forms. We determined the nutrient profile of each item by using a validated nutrient composition database or information obtained from packaging, recipes, or manufacturer Web sites. Purpose: to assess changes made to foods and beverages offered and to quantify change in degree of compliance with the nutrition standards. Samples: HEAC, 6 elementary, 6 middle, 6 high, and 1 K–12 school (prelegislation spring 2005; postlegislation spring 2008a); HSS, 56 high schools (spring 2007; spring 2008); SWS, 8 elementary, 8 middle, and 8 high schools (fall 2007; spring 2009).

Student survey: Paper questionnaires—proctored on-site by trained research staff—were completed by seventh- and ninth-grade students. Purpose: to understand the impact on student dietary intake and food and beverage purchases. Sample: HEAC, 3527 students prelegislation and 3828 students postlegislation (spring 2006; spring 2008).

Food and beverage sales: Information was provided by school food service and school administration and entered onto standardized forms. Purpose: to determine the financial impact of implementing the nutrition standards. Sample: HEAC, 6 elementary, 6 middle, and 6 high schools (2004–2005; 2007–2008).

Food service survey: An interactive PDF questionnaire was sent electronically and completed by school food service directors or supervisors (1 per school). Purpose: to ascertain the perceived benefits of and challenges to implementation of the standards. Sample: HSS, 56 high schools (spring 2007; spring 2008).

School wellness team interviews: One on-site group interview with school wellness personnel was conducted by trained research staff at each school. Purpose: to ascertain the perceived benefits of and challenges to implementation of the standards. Sample: SWS, 8 elementary, 8 middle, and 8 high schools (fall 2007; spring 2009).

aHEAC postlegislation data were collected at the midpoint of the project. HEAC endpoint data were collected in spring of 2010 and were not yet available for inclusion in this article at press time.

The studies addressed the following questions:
  • To what extent did schools comply with nutrition standards?
  • What changes did schools make in foods and beverages offered?
  • What was the impact on student dietary intake?
  • What was the impact on food and beverage sales?
  • What were the benefits of and challenges to implementation?

18.
Understanding Evidence-Based Public Health Policy     
Ross C. Brownson  Jamie F. Chriqui  Katherine A. Stamatakis 《American journal of public health》2009,99(9):1576-1583
Public health policy has a profound impact on health status. Missing from the literature is a clear articulation of the definition of evidence-based policy and of approaches to move the field forward. Policy-relevant evidence includes both quantitative (e.g., epidemiological) and qualitative (e.g., narrative accounts) information.

We describe 3 key domains of evidence-based policy: (1) process, to understand approaches to enhance the likelihood of policy adoption; (2) content, to identify specific policy elements that are likely to be effective; and (3) outcomes, to document the potential impact of policy.

Actions to further evidence-based policy include preparing and communicating data more effectively, using existing analytic tools more effectively, conducting policy surveillance, and tracking outcomes with different types of evidence.

It has long been known that public health policy, in the form of laws, regulations, and guidelines, has a profound effect on health status. For example, in a review of the 10 great public health achievements of the 20th century,1 each was influenced by policy change, such as seat belt laws or regulations governing permissible workplace exposures. As with any decision-making process in public health practice, formulation of health policies is complex and depends on a variety of scientific, economic, social, and political forces.2

There is a considerable gap between what research shows is effective and the policies that are enacted and enforced. The definition of policy is often broad, including laws, regulations, and judicial decrees as well as agency guidelines and budget priorities.2–4 In a systematic search of “model” public health laws (i.e., a public health law or private policy that is publicly recommended by at least 1 organization for adoption by government bodies or by specified private entities), Hartsfield et al.5 identified 107 model public health laws covering 16 topics. The most common model laws were for tobacco control, injury prevention, and school health, whereas the least commonly covered topics included hearing, heart disease prevention, public health infrastructure, and rabies control. In only 6.5% of the model laws did the sponsors provide details showing that the law was based on scientific information (e.g., research-based guidelines).

Research is most likely to influence policy development through an extended process of communication and interaction.6 In part, the research–policy interface is made more complex by the nature of scientific information, which is often vast, uneven in quality, and inaccessible to policymakers. Several models of how research influences policymaking have been described,7–9 most of which involve moving beyond a simple linear model to more nuanced and indirect routes of influence, as in gradual “enlightenment.”10 Such nonlinear models of policymaking and decision-making take into consideration that research evidence may hold equal, or even less, importance than other factors that ultimately influence policy, such as policymakers' values and competing sources of information, including anecdotes and personal experience.11 Although not exhaustive, Table 1 summarizes important barriers to implementing effective public health policy.12–16

TABLE 1

Barriers to Implementing Effective Public Health Policy
Barrier | Example
Lack of value placed on prevention | Only a small percentage of the annual US health care budget is allocated to population-wide approaches.
Insufficient evidence base | The scientific evidence on effectiveness of some interventions is lacking, or the evidence is changing over time.
Mismatched time horizons | Election cycles, policy processes, and research time often do not match well.
Power of vested interests | Certain unhealthy interests (e.g., tobacco, asbestos) hold disproportionate influence.
Researchers isolated from the policy process | The lack of personal contact between researchers and policymakers can lead to lack of progress, and researchers do not see it as their responsibility to think through the policy implications of their work.
Policymaking process can be complex and messy | Evidence-based policy occurs in complex systems, and social psychology suggests that decision-makers often rely on habit, stereotypes, and cultural norms for the vast majority of decisions.
Individuals in any one discipline may not understand the policymaking process as a whole | Transdisciplinary approaches are more likely to bring all of the necessary skills to the table.
Practitioners lack the skills to influence evidence-based policy | Much of the formal training in public health (e.g., masters of public health training) contains insufficient emphasis on policy-related competencies.
Although there have been many calls for more systematic and evidence-based approaches to policy development,5,6,17–21 missing from the literature is a clear articulation of the definition of evidence-based policy along with specific approaches that will enhance the use of evidence in policymaking.

19.
Historical Influences on Contemporary Tobacco Use by Northern Plains and Southwestern American Indians     
Stephen J. Kunitz 《American journal of public health》2016,106(2):246-255
There are great differences in smoking- and tobacco-related mortality between American Indians on the Northern Plains and those in the Southwest that are best explained by (1) ecological differences between the two regions, including the relative inaccessibility and aridity of the Southwest and the lack of buffalo, and (2) differences between French and Spanish Indian relations policies. The consequence was the disruption of inter- and intratribal relations on the Northern Plains, where, as a response to disruption, the calumet (pipe) ceremony became widespread, whereas it did not in the Southwest. Tobacco was thus integrated into social relationships with religious sanctions on the Northern Plains, which increased the acceptability of commercial cigarettes in the 20th century. Smoking is therefore more deeply embedded in religious practices and social relationships on the Northern Plains than in the Southwest.

[Figure: John Richard Coke Smyth, Indians Bartering with Trader, Sketches of Canada, 1842.]

It has been known for several decades that American Indians on the Northern Plains smoke cigarettes at higher rates and suffer higher smoking-related death rates than do American Indians in the Southwest.1 Data from 1980 to 1982 revealed great differences in smoking-related health conditions: the ratios of death rates of American Indians on the Northern Plains to those in the Southwest were myocardial infarction, 3.1; lung cancer, 5.9; cerebrovascular diseases, 1.9; and all-cause mortality, 1.4.2

TABLE 1—

Largest American Indian Populations on the Northern Plains and in the Southwest: 2000
Tribe | Population, No.a

Northern Plains
Sioux | 79 511
Chippewa | 51 240
Blackfeet | 10 336
Crow | 7 041
Menominee | 7 488
Iroquois | 7 556

Southwest
Navajo | 255 485
Pueblo | 48 303
Apache | 30 386
Tohono O’Odham | 15 812
Pima | 6 402
Ute | 5 949
Note. The states included in the Northern Plains are Montana, Wyoming, North and South Dakota, Iowa, Nebraska, Minnesota, Wisconsin, Indiana, Illinois, and Michigan. The Southwest includes Arizona, New Mexico, Colorado, Utah, and Nevada.
aUS Census Bureau, 2000 Census of Population and Housing, Characteristics of American Indians and Alaska Natives by Tribe and Language (Washington, DC, 2003), PHC-5.

I attempt to account for these patterns. There are three possible explanations:
  1. There may be greater poverty, less education, and more alcohol misuse on the Northern Plains than in the Southwest, all associated with, or leading to, an increased likelihood of cigarette use.
  2. Exposure to both native and commercial tobacco may be greater on the Northern Plains.
  3. There may be a greater association on the Northern Plains of tobacco use with social relationships and religious practices.
As Table 2 shows,10 at a regional level there is not a persuasive association of cigarette use with education, income, poverty, or alcohol use. That leaves exposure to various forms of tobacco and the social functions of tobacco.

TABLE 2—

Cigarette and Alcohol Use and Income, American Indians on the Northern Plains and in the Southwest: Early 2000s
Measure | Northern Plains | Southwest | Rate Ratio

Tobacco use, 2000–2009 (3)
Current smoker, %
 Male | 42.1 | 18.8 | 2.23
 Female | 42.1 | 14.8 | 2.84
Former smoker, %
 Male | 28.3 | 29.0 | 0.97
 Female | 22.7 | 15.4 | 1.47
Never smoked, %
 Male | 29.7 | 52.2 | 0.56
 Female | 35.2 | 69.7 | 0.50

All-cause mortality, 1999–2009 (4)
 Male | 1 748.8 | 1 251.4 | 1.4
 Female | 1 243.4 | 828.1 | 1.5

Smoking-related deaths, 1999–2009
Lung cancer (5)
 Male | 113.4 | 20.1 | 5.6
 Female | 81.6 | 11.6 | 7.0
Heart disease, underlying cause of death (6)
 Male | 778.0 | 446.7 | 1.7
 Female | 481.0 | 266.7 | 1.8
Heart disease, multiple cause of death
 Male | 1 373.7 | 852.0 | 1.6
 Female | 916.0 | 548.0 | 1.7
Stroke (7)
 Male | 129.2 | 84.9 | 1.5
 Female | 124.0 | 73.7 | 1.7

Alcohol use
Alcohol-related mortality, 2005–2009 (8)
 Male | 167.0 | 177.2 | 0.94
 Female | 85.3 | 69.3 | 1.23
Binge drinker, % (3)
 Male | 23.1 | 19.1 | 1.2
 Female | 18.0 | 8.9 | 2.02
Heavy drinker, % (3)
 Male | 8.1 | 6.4 | 1.26
 Female | 5.0 | 2.6 | 1.92

Below poverty line, 2000, % (9) | 32 | 32.5 | –
Median income, 2000, US$ (9) | 24 957 | 24 605 | –
High school graduate, 2000, % (9) | 74.3 | 64.7 | –

20.
Industry Self-Regulation to Improve Student Health: Quantifying Changes in Beverage Shipments to Schools     
Robert F. Wescott  Brendan M. Fitzpatrick  Elizabeth Phillips 《American journal of public health》2012,102(10):1928-1935
Objectives. We developed a data collection and monitoring system to independently evaluate the self-regulatory effort to reduce the number of beverage calories available to children during the regular and extended school day. We describe the data collection procedures used to verify data supplied by the beverage industry and quantify changes in school beverage shipments.

Methods. Using a proprietary industry data set collected in 2005 and semiannually in 2007 through 2010, we measured the total volume of beverage shipments to elementary, middle, and high schools to monitor intertemporal changes in beverage volumes, the composition of products delivered to schools, and portion sizes. We compared the data with findings from existing research on the school beverage landscape and with a separate data set based on contracts between schools and beverage bottling companies.

Results. Between 2004 and the 2009–2010 school year, the beverage industry reduced calories shipped to schools by 90%. On a total-ounces basis, shipments of full-calorie soft drinks to schools decreased by 97%.

Conclusions. Industry self-regulation, with the assistance of a transparent and independent monitoring process, can be a valuable tool in improving public health outcomes.

Improving public health outcomes involves various policy strategies, from national laws and state mandates to local initiatives and industry self-regulation. Previous self-regulatory efforts, in particular, demonstrate both the merits and limitations of relying on industry to address public health challenges.1 These experiences have spawned a debate among public health professionals about whether self-regulation is merely a self-serving tool used by industry to forestall government action or an effective way of responding to market failures.1–3 Ultimately, the success of these efforts is tied to the strength of the commitment to achieve meaningful, measurable, and verifiable outcomes.

This debate extends to the regulation of sugar-sweetened beverages (SSBs) in schools. Although the Child Nutrition Act regulates SSBs, the limited stringency of these regulations has prompted states and communities to take further action.2 In addition, the Alliance for a Healthier Generation worked with representatives of the 3 largest US beverage companies to further address the issue of SSB availability in schools through self-regulation.
In 2006, they reached an agreement to implement School Beverage Guidelines to reduce the number of beverage calories available to children at school.4 At that time, the signatories commissioned us to design and implement a multiyear data collection and monitoring system. We assessed the extent to which the School Beverage Guidelines have achieved meaningful outcomes by (1) describing the data collection procedures created to verify data supplied by the beverage industry, (2) quantifying changes in the school beverage landscape between 2004 and the 2009–2010 school year, and (3) identifying the strengths and weaknesses of industry self-regulation in this specific case.

Public health experts, dieticians, and nutritionists frame the obesity epidemic in terms of behavioral, psychological, cultural, social, and genetic factors.5 Public attention, however, tends to focus on diet and on how competitive foods and beverages specifically contribute to obesity.6 Studies showing a positive relationship between higher energy intake and obesity support many of these concerns.7–9 These studies have also raised questions about external drivers of consumption behaviors, such as the school nutrition environment.10–12 A common research hypothesis pertaining to external conditions and consumption is that people tend to consume the foods and beverages that are most accessible.13 Evidence from natural experiments and randomized controlled trials shows a positive relationship between the availability of SSBs and consumption patterns.14–16 Although some studies suggest that availability alone may not alter students’ beverage choices significantly,17 there is evidence showing that restrictions on SSBs may reduce their consumption.18–20

In response, federal and state policymakers, school administrators, and private entities have enacted regulatory measures to create healthier food environments in schools. For instance, under the Child Nutrition Act, the US Department of Agriculture regulates foods sold in conjunction with the National School Breakfast Program and National School Lunch Program.21 The recent reauthorization of this act permits the US Department of Agriculture to regulate foods sold in vending machines, à la carte lunch lines, and school stores during the school day, although some believe that its regulations do not go far enough.22 Therefore, in the absence of more stringent federal laws, some states have placed further restrictions on SSBs, including portion and content standards, limitations on when SSBs can be sold, and, in the case of some states (e.g., Connecticut), bans on SSB sales in schools.2 Likewise, some local communities have adopted regulations to restrict access to SSBs during the school day.2

Additionally, food and beverage companies have voluntarily offered to restrict marketing to children, alter product content, and limit beverage access in schools.23 In May 2006, the Alliance for a Healthier Generation worked with representatives of the Coca-Cola Company, Dr. Pepper Snapple Group, PepsiCo, and the American Beverage Association (the national trade association representing the nonalcoholic refreshment beverage industry) to agree on implementation of the Alliance School Beverage Guidelines.
As outlined in the Memorandum of Understanding (MOU), beverage companies and their largest bottlers voluntarily agreed to phase out sales of full-calorie carbonated soft drinks (CSDs) and other beverages, shift the product mix toward no- or low-calorie beverages, and reduce portion sizes over a period of 3 academic years. The permitted beverages are summarized below.

Beverage types permitted | Elementary schools | Middle schoolsa | High schoolsa
Bottled water | Yes | Yes | Yes
100% juice (no added sweeteners), ≤120 cal/8 oz and ≥10% of the recommended value for ≥3 vitamins and minerals | Yes | Yes | Yes
Fat-free or low-fat regular and flavored milk and nutritionally equivalent milk alternatives per US Department of Agriculture (e.g., soy milk) | Yes | Yes | Yes
Milk alternatives with ≤150 cal/8 oz | Yes | Yes | Yes
No- or low-calorie beverages with ≤10 cal/8 oz (including diet sodas, teas, and flavored waters) | No | No | Yes
Other drinks with ≤66 cal/8 oz | No | No | Yes
Other drinks with >66 cal/8 oz | No | No | No
Maximum serving size, oz, for milks, juices, and (in high schools) other allowable beverages with more than 10 cal/8 oz | 8 | 10 | 12

Note. These guidelines apply to beverages sold on school grounds during the regular and extended school day. (The extended school day includes before- and after-school activities such as clubs, band, student government, drama, and childcare and latchkey programs.) These guidelines do not apply to school-related events at which parents and other adults are part of an audience or are selling beverages as boosters during intermission, as well as immediately before or after an event. Examples of these events include sporting events, school plays, and band concerts.
Source. American Beverage Association.4
aIf middle school and high school students have shared access to areas on a common campus or in buildings, the school community has the option to adopt the high school standards.

Since the signing of the MOU, several studies have focused on the qualitative merits of the School Beverage Guidelines and of self-regulation more generally. In a review of regulations on SSB sales in schools, Mello et al.2 offered conclusions about different policy strategies, comparing the relative and expected effectiveness of government regulation and industry self-regulation. The authors outlined concerns about certain aspects of the School Beverage Guidelines, including the lack of noncompliance provisions, the less restrictive nature of the School Beverage Guidelines compared with some existing state and local regulations, and the inability to affect preexisting contracts. They acknowledged that the beverage industry’s pledge represents a “significant step forward in industry self-regulation”2(p600) but also concluded that the stringency and staying power of state and local policies make them more effective instruments for regulating SSBs in schools.2

Sharma et al.1 proposed 8 general standards for evaluating the effectiveness of self-regulation. They examined these standards in the context of existing food and beverage industry self-regulations and then contextualized the discussion with a historical sketch of self-regulation in other industries. Through these examples, they identified potential pitfalls associated with industry self-regulation and reviewed the conditions that encourage successful outcomes.
Regarding the School Beverage Guidelines, the authors outlined many of the same concerns as Mello et al.2 but did not conclude whether past self-regulation in the food and beverage industry has been a success or a failure.1

Finally, Solomon3 analyzed instances of public and private regulation, including the School Beverage Guidelines, to outline the merits and limitations of various regulatory approaches. Solomon stated that one of the distinct aspects of the School Beverage Guidelines self-regulatory effort was the binding provisions created to track compliance. The author concluded that future state-level policy initiatives to address obesity should include objective and quantitative mechanisms for reporting and tracking progress, similar to the monitoring procedures for the School Beverage Guidelines.3
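The headline results above (90% fewer beverage calories shipped, 97% fewer ounces of full-calorie soft drinks) are straightforward aggregates of shipment volume times calorie density. The sketch below illustrates that computation; the shipment figures and calorie densities are invented placeholders, not the proprietary industry data described in the article.

```python
# Hypothetical shipment records (ounces) for a baseline year and a
# follow-up school year; all values invented to show the computation.
baseline = {"full-calorie CSD": 400_000_000, "100% juice": 80_000_000, "water": 60_000_000}
current  = {"full-calorie CSD": 12_000_000,  "100% juice": 40_000_000, "water": 190_000_000}
cal_per_oz = {"full-calorie CSD": 12.5, "100% juice": 14.0, "water": 0.0}  # assumed densities

def total_calories(shipments):
    return sum(oz * cal_per_oz[kind] for kind, oz in shipments.items())

def pct_reduction(new, old):
    return (old - new) / old

print(f"beverage calories shipped: down "
      f"{pct_reduction(total_calories(current), total_calories(baseline)):.0%}")
print(f"full-calorie CSD ounces: down "
      f"{pct_reduction(current['full-calorie CSD'], baseline['full-calorie CSD']):.0%}")
```

With the placeholder numbers above, the ounce-based CSD reduction comes out at 97%, mirroring the reported headline; the calorie total depends on the assumed product mix, which is exactly why the authors tracked composition and portion size alongside raw volume.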
