1.

Objective

Review the concept of situation awareness (SA) as it relates to public health surveillance, epidemiology, and preparedness [1]. Outline hierarchical levels and organizational criteria for SA [2]. Initiate a consensus-building process aimed at developing a working definition and measurable outcomes and metrics for SA as they relate to syndromic surveillance practice and evaluation.

Introduction

A decade ago, the primary objective of syndromic surveillance was bioterrorism and outbreak early event detection (EED) [3]. Syndromic systems for EED focused on rapid, automated data collection, processing and statistical anomaly detection of indicators of potential bioterrorism or outbreak events. The paradigm presented a clear and testable surveillance objective: the early detection of outbreaks or events of public health concern. Limited success in practice and limited rigorous evaluation, however, led to the conclusion that syndromic surveillance could not reliably or accurately achieve EED objectives. At the federal level, the primary rationale for syndromic surveillance shifted away from bioterrorism EED, and towards all-hazards biosurveillance and SA [4–6]. The shift from EED to SA occurred without a clear evaluation of EED objectives, and without a clear definition of the scope or meaning of SA in practice. Since public health SA has not been clearly defined in terms of operational surveillance objectives, statistical or epidemiological methods, or measurable outcomes and metrics, the use of syndromic surveillance to achieve SA cannot be evaluated.

Methods

This session is intended to provide a forum to discuss SA in the context of public health disease surveillance practice. The roundtable will focus on defining SA in the context of public health syndromic and epidemiologic surveillance. While SA is often noted in federal level documents as a primary rationale for biosurveillance [1, 4–6], it is rarely defined or described in operational detail. One working definition presents SA as “real-time analysis and display of health data to monitor the location, magnitude, and spread of an outbreak”, yet it does not elaborate on the methods, systems or evaluation requirements for SA in public health or biosurveillance [3]. In terms of translating SA into public health surveillance practice [1], we will discuss and define the requirements of public health SA based on its development and practice in other areas [2]. The proposed theoretical framework and evaluation criteria adapted and applied to public health SA [2] follow:
  - Level 1: Perceive relevant surveillance data and epidemiological information.
  - Level 2: Integrate surveillance and non-surveillance data, in conjunction with operator goals, to provide understanding of the meaning of the information.
  - Level 3: Through perceiving (Level 1) and integrating and understanding (Level 2), provide prediction of future events and system states to allow for timely and effective public health decision making.

Results

Sample questions for discussion: What is the relevance of syndromic surveillance and biosurveillance in the SA framework? Where does it fit within the current public health surveillance environment? To achieve the roundtable discussion objectives, the participants will work towards a consensus definition of SA for public health, and will outline measurable outcomes and metrics for evaluation of syndromic surveillance for public health SA.

2.
3.

Objective

To investigate the potential of utilizing raccoons as sentinels for West Nile Virus (WNV) in an effort to guide public health surveillance, prevention, and control efforts.

Introduction

Since its detection in 1999 in New York, WNV spread westward across the continent, and was first detected in California in 2003 in Imperial County (1). In California and in many states, birds, especially corvids, are used as sentinel animals to detect WNV activity. Recent seroprevalence studies have shown WNV activity in different wild mammalian species (1–3); in the United States, WNV seroprevalence in raccoons has ranged from 34–46% in some studies (3,4). In addition, it has been shown that after experimental infection, raccoons can attain high viral titers and shed WNV in their saliva and feces (5). Given their peridomestic nature, we investigated the feasibility of their use as sentinels for early warning of WNV and as indicators of WNV activity as a strategy to better localize WNV transmission foci in guiding vector control efforts.

Methods

Sick, injured or orphaned raccoons undergoing rehabilitation at Project Wildlife, one of the largest, non-profit wildlife rehabilitation organizations in the United States, located in San Diego County, were tested for WNV shedding. Project Wildlife team members who regularly care for sick, injured, or orphaned raccoons were trained to collect oral and fecal samples for viral testing during 2011 and 2012 upon raccoons’ arrival to Project Wildlife. Oral and fecal samples were tested using real-time PCR for the envelope gene of WNV.

Results

To date, 71 raccoons have been tested for WNV, and all PCR test results have been negative. Of the 71 raccoons tested from May 2011 to October 2011 and June 2012 to September 2012, 85.9% (n=61) had age classification data. The majority of these raccoons were young; 52.5% (n=32) were days or weeks old and 39.3% (n=24) were classified as juveniles. All raccoons were found in primarily urban settings at least 20 miles from the northern edge of the County.

Conclusions

While none of the raccoon samples tested in this study were found to be WNV positive, surveillance data from San Diego County suggest that WNV activity during this time period was extremely low. From January–October 2011, San Diego County Vector Control reported all-negative WNV results in dead birds, sentinel chickens, horses, and humans; only 1 mosquito pool from the northern border region of the County tested positive for WNV (6). Thus, despite WNV activity throughout the state of California, the virus did not appear to be circulating widely in San Diego County in 2011 (7). To date during the 2012 season, San Diego County has reported all negatives for WNV in dead birds, sentinel chickens, mosquito pools, and horses; only one human case of WNV was identified, in an asymptomatic male during a routine blood donation (6). Further evaluation is needed to determine if raccoons are useful sentinel species for WNV surveillance. Testing should continue to evaluate whether raccoons may serve as a more effective early warning sentinel for WNV than birds, which can travel long distances from the exposure site, and to determine whether raccoons may allow better localization of WNV activity.

4.

Objective

To improve the method of automated retrieval of surveillance-related literature from a wide range of indexed repositories.

Introduction

The ISDS Research Committee (RC) is an interdisciplinary group of researchers interested in various topics related to disease surveillance. The RC hosts a literature review process with a permanent repository of relevant journal articles and bimonthly calls that provide a forum for discussion and author engagement. The calls have led to workgroups and society-wide events, boosted interest in the ISDS Conference, and fostered networking among participants. Since 2007, the RC has identified and classified published articles using an automated search method, with the aim of progressing ISDS’s mission of advancing the science and practice of disease surveillance by fostering collaboration and increasing awareness of innovations in the field. The RC literature review efforts have provided an opportunity for interprofessional collaboration and have resulted in a repository of over 1,000 articles, but feedback from ISDS members indicated that relevant articles were not being captured by the existing methodology. The method of automated literature retrieval was thus refined to improve efficiency and inclusiveness of stakeholder interests.

Methods

The earlier literature review method was implemented from March 2007 to March 2012. PubCrawler [1] (articles indexed in Medline) and Google Scholar [2] search results were sent to the RC via automated e-mail. To refine this method, the RC developed search strings in PubMed [3], Embase [4], and Scopus [5], consisting of over 100 terms suggested by members. After evaluating these methods, we found that the Scopus search was the most comprehensive and improved the cross-disciplinary scope. Scopus results allowed filtering of 50–100 titles and abstracts in fewer than 30 minutes each week for the identification of relevant articles (Figure).

Figure: Overview of the 2012 literature review process, including the Scopus search [6]; Zotero [7], a freely available web application that streamlines content management; and the summarized article archive on the ISDS Wiki [8].

Journal titles were categorized to assess the increased range of fields covered; categories include epidemiology, agriculture, economics, and medicine (51 categories total).

Results

Since implementing the new method, potentially relevant articles identified per month increased from an average of 19 (SD: 13; n = 31) to 159 (SD: 63; n = 3). Both methods identified articles in the health sciences, but the new search also captured articles in the life, physical, and social sciences. Between March 2007 and March 2012, selected articles were classified into an average of 10 categories per literature review (SD: 4; n = 31), versus an average of 33 categories (SD: 5; n = 3) with the updated process.

Conclusions

The new search method improves upon the previous one: it captures relevant articles indexed in health science and other secondary databases beyond Medline. The new method has yielded a greater number of relevant articles, from a broader range of disciplines, with less preparation time than the previous search method required. This improvement may increase multi-disciplinary discussions and partnerships, but changes in online publishing pose challenges to continued access to the new range of articles.

5.

Objective

To incorporate information from multiple data streams of disease surveillance to achieve more coherent spatial cluster detection using statistical tools from multi-criteria analysis.

Introduction

Multiple data sources are essential to provide reliable information regarding the emergence of potential health threats, compared to single-source methods [1,2]. Spatial scan statistics have been adapted to analyze multivariate data sources [1]. In this context, only ad hoc procedures have been devised to address the problem of selecting the most likely cluster and computing its significance. A multi-objective scan was previously proposed to detect clusters for a single data source [3].

Methods

For simplicity, consider only two data streams. The j-th objective function evaluates the strength of candidate clusters using only information from the j-th data stream. The best cluster solutions are found by maximizing two objective functions simultaneously, based on the concept of dominance: a point is called dominated if it is worse than another point in at least one objective, while not being better than that point in any other objective [4]. The nondominated set consists of all solutions which are not dominated by any other solution. To evaluate the statistical significance of solutions, a statistical approach based on the concept of attainment function is used [4].
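The dominance filtering described above can be sketched in a few lines. This is an illustrative implementation of the standard non-dominated (Pareto) set computation, not the authors' scan software; the function name and the two-objective example values are assumptions.

```python
import numpy as np

def nondominated(points):
    """Return the subset of points not dominated by any other point.

    points: (n, 2) array of objective values (larger is better), e.g. the
    log-likelihood ratios of each candidate cluster on two data streams.
    """
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # q dominates p if q is >= p in every objective and > p in at least one
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]
```

For example, among the candidate scores `[[1, 5], [2, 4], [1, 1], [3, 1]]`, only `[1, 1]` is dominated (by `[2, 4]`), so the other three points form the non-dominated set.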

Results

The two datasets are standardized brain cancer mortality rates for male and female adults for each of the 3111 counties in the 48 contiguous states of the US, from 1986 to 1995 [5]. We ran the circular scan and plotted the (m(Zi), w(Zi)) points in the Cartesian plane, where m(Zi) and w(Zi) are the LLRs for zone Zi in the men’s and women’s brain cancer maps, respectively, and i = 1, ..., N(r) indexes the set of all circular zones up to a radius r > 0. The non-dominated set is inspected for possible correlations between the two maps regarding brain cancer clustering (Figure 1); e.g., the upper inset map has a high LLR value on the women’s map but not on the men’s, and the inverse holds for the lower inset map. Other nondominated clusters in the middle have lower LLR values on both datasets. The first two examples have comparatively lower p-values (they belong to the two “knees” in the nondominated set), as computed using the attainment surfaces (not shown in the figure).

Conclusions

The multi-criteria multivariate approach has several advantages: (i) the evaluation function for each data stream is represented clearly, without an artificial and possibly confusing mixture with the other data stream's evaluations; (ii) the statistical significance of each candidate cluster can be attributed in a rigorous way; (iii) the best cluster solutions can be analyzed and selected, as given naturally by the non-dominated set.

Figure: Part of the solution set in the LLR(male) × LLR(female) space of the male/female brain cancer datasets for the US counties map. Clusters are indicated by blue points, with the non-dominated solutions represented by small red circles. The inset maps depict the geographic location of the clusters found in the US counties map (yellow circles) for two sample non-dominated solutions.

6.

Objective

To document the current evidence base for the use of electronic health record (EHR) data for syndromic surveillance using emergency department, urgent care clinic, hospital inpatient, and ambulatory clinical care data.

Introduction

Historically, syndromic surveillance has primarily involved the use of near real-time data sent from hospital emergency departments (EDs) and urgent care (UC) clinics to public health agencies. The use of data from inpatient and ambulatory settings is now gaining interest and support throughout the United States, largely as a result of the Stage 2 and 3 Meaningful Use regulations [1]. Questions regarding the feasibility and utility of applying a syndromic approach to these data sources are hampering the development of systems to collect, analyze, and share this potentially valuable information. Solidifying the evidence base and communicating the results to the public health surveillance community may help to initiate and build support for using these data to advance surveillance functions.

Methods

We conducted a literature search in the published and grey literature that scanned for relevant articles in the Google Scholar, PubMed, and EBSCO Information Services databases. Search terms included: “inpatient/ambulatory electronic health record”; “ambulatory/inpatient/hospital/outpatient/chronic disease syndromic surveillance”; and “EHR syndromic surveillance”. Information gleaned from each article included data use, data elements extracted, and data quality indicators. In addition, several stakeholders who provided input on the September 2012 ISDS Recommendations [2] also provided articles that were incorporated into the literature review. ISDS also invited speakers from existing inpatient and ambulatory syndromic surveillance systems to give webinar presentations on how they are using data from these novel sources.

Results

The number of public health agencies (PHAs) routinely receiving ambulatory and inpatient syndromic surveillance data is substantially smaller than the number receiving ED and UC data. Some health departments, private medical organizations (including HMOs), and researchers are conducting syndromic surveillance and related research with health data captured in these clinical settings [2]. In inpatient settings, much of the necessary infrastructure and many of the analytic tools are already in place. Syndromic surveillance with inpatient data has supported a range of innovative uses, from monitoring trends in myocardial infarction in association with risk factors for cardiovascular disease [3] to tracking changes in incident-related hospitalizations following the 2011 Joplin, Missouri tornado [3]. In contrast, ambulatory systems require new infrastructure and pose a data-volume challenge. Existing systems vary in how they address data volume and in what types of encounters they capture. Ambulatory data have been used for a variety of purposes, from monitoring gastrointestinal infectious disease [3] to monitoring behavioral health trends in a population while protecting personal identities [4].

Conclusions

The existing syndromic surveillance systems and substantial research in the area indicate an interest in the public health community in using hospital inpatient and ambulatory clinical care data in new and innovative ways. However, before inpatient and ambulatory syndromic surveillance systems can be effectively utilized on a large scale, the gaps in knowledge and the barriers to system development must be addressed. Though the potential use cases are well documented, generalizability to other settings requires additional research, workforce development, and investment.

7.

Objective

To determine if influenza surveillance should target all patients with acute respiratory infections (ARI) or only track pneumonia cases.

Introduction

Effective responses to epidemics of infectious diseases hinge not only on early outbreak detection, but also on an assessment of disease severity. In recent work, we combined previously developed ARI case-detection algorithms (CDA) [1] with text analyses of chest imaging reports to identify ARI patients whom providers thought had pneumonia. In this work, we asked whether a surveillance system aimed at patients with pneumonia would outperform one that monitors the full severity spectrum of ARI.

Methods

Time series of daily case counts (backgrounds) were created by applying either an ARI CDA (ARI ICD-9 codeset [1]) or a Pneumonia CDA (ARI ICD-9 codes AND chest imaging obtained AND positive results from automated text analysis identifying chest imaging reports that support the diagnosis of pneumonia) to electronic medical record (EMR) entries related to outpatient encounters at the VA Maryland Health Care System. We used an age-structured metapopulation influenza epidemic model for Baltimore to inject factitious influenza cases into backgrounds. Injections were discounted by the known sensitivity of the ARI CDA [1]. For injections into the pneumonia background time series, factitious ARI cases were further discounted by the expected pneumonia rate in the modeled influenza epidemic (10%). From the time of injection, EARS or CUSUM statistics [2,3] were applied on each successive day to paired background+injection vs. background-only time series. Each injection-prospective-surveillance cycle was repeated 52 times, each time with the injection shifted to a different week of the one-year study period (2010–11). We computed: 1) the “Detection Delay”, the average time from injection to the first alarm present in the background+injection dataset but absent from the background-only dataset; 2) the “False-Alarm Rate” (FAR), defined as the number of unique false alarms originating in the background-only dataset during the study year, divided by 365 days. To create activity monitoring operating characteristic (AMOC) curves, we empirically determined the corresponding Delay-FAR pairs over a wide range of alarm thresholds.
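A one-sided CUSUM of the kind applied above can be sketched as follows. This is a minimal illustration, assuming the baseline mean and standard deviation are estimated from a sliding window of recent history; the reference value `k`, threshold `h`, and window length are illustrative defaults, not the parameters the authors used.

```python
import numpy as np

def cusum_alarms(counts, baseline_days=28, k=0.5, h=4.0):
    """One-sided CUSUM on a daily count series.

    Each day's count is standardized against the mean and standard deviation
    of the preceding `baseline_days`; the statistic S_t = max(0, S_{t-1} + z_t - k)
    raises an alarm when it exceeds the threshold h, then resets.
    Returns the indices of alarm days.
    """
    counts = np.asarray(counts, dtype=float)
    s, alarms = 0.0, []
    for t in range(baseline_days, len(counts)):
        base = counts[t - baseline_days:t]
        sd = base.std(ddof=1) or 1.0  # guard against a zero-variance baseline
        z = (counts[t] - base.mean()) / sd
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0  # reset after signaling
    return alarms
```

Running this on a background of 10 visits/day with a jump to 30 on day 40 flags the jump immediately, while a flat series produces no alarms.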

Results

The Figure compares AMOC curves for otherwise identical surveillance systems that monitored either any ARI outpatient visit (red circles, using the EARS W2c statistic [2]) or pneumonia (blue triangles, using the CUSUM statistic modified for sparse data [3]). Note that Detection Delay (y-axis) is lower at any given FAR when surveillance targets patients with pneumonia. Sensitivity analysis suggests that this advantage holds when pneumonia complicates influenza ≥ 5% of the time.

Conclusions

Our results suggest that EMR-based influenza surveillance targeting patients with pneumonia can outperform systems that monitor all ARI patients.

8.

Objective

To conduct an initial examination of the potential use of BioSense data to monitor and rapidly assess the safety of medical countermeasures (MCM) used for prevention or treatment of adverse health effects of biological, chemical, and radiation exposures during a public health emergency.

Introduction

BioSense is a national human health surveillance system for disease detection, monitoring, and situation awareness through near real-time access to existing electronic healthcare encounter information, including information from hospital emergency departments (EDs). MCM include antibiotics, antivirals, antidotes, antitoxins, vaccinations, nuclide-binding agents, and other medications. Although some MCM have been extensively evaluated and have FDA approval, many do not (1). Current FDA and CDC systems that monitor drug and vaccine safety have limited ability to monitor MCM safety, and in particular to conduct rapid assessments during an emergency (1).

Methods

To provide a preliminary assessment of the use of BioSense for this purpose, we reviewed selected publications evaluating the use of electronic health records (EHRs) to monitor the safety of drugs and vaccinations (medications), focusing particularly on systematic reviews; reviewed BioSense data elements; and consulted a number of subject matter experts.

Results

More than 40 studies have examined the use of EHR data to monitor adverse effects (AEs) of medications using administrative, laboratory, and pharmacy records from inpatient and outpatient settings, including EDs (2–4). To identify AEs, investigators have used diagnostic codes, administration of antidotes, laboratory measures of drug levels and of biologic response, text searches of unstructured clinical notes, and combinations of those data elements. BioSense ED data include chief complaint text, triage notes, and text diagnosis, as well as diagnostic and medical procedure codes. Investigations used a variety of study designs in various populations and settings; examined a wide range of medications, vaccinations, and AEs; and developed a diverse set of analytic algorithms to search EHR data to detect and signal AEs (2–4). Most research has been done on FDA-approved medications. Most studies used EHR data to identify individuals using specific medications and then searched for potential AEs identified from previous research. None of the studies investigated the use of EHR data to monitor safety when records of an individual’s medication use could not be linked to that individual’s records of AEs. BioSense data could be used for AE detection, but linking AEs to MCM use would require follow-back investigation. Since there is limited research on AEs of some MCM, there would be limited information to guide identification of potential AEs. Performance characteristics of the AE monitoring systems have been mixed, with reported sensitivities ranging from 40–90%, specificities from 1% to 90%, and positive predictive values from < 1% to 64%, depending on the medication, AE, and other characteristics of the study (2, 4).
However, the small number of studies with common characteristics has limited reviewers' ability to determine which types of systems perform better for different medications and AEs. Some experts suggest that data in BioSense might contribute to safety surveillance of MCM. They also caution that the poor predictive values and high rates of false positives reported in the literature raise concerns about the burden on those conducting investigations in response to AE alerts, particularly in the context of a public health emergency.

Conclusions

These findings suggest that BioSense data could potentially contribute to rapid identification of safety issues for MCM and that some methods from published research could be applicable to the use of BioSense for this purpose. However, such use would require careful development and evaluation.

9.

Introduction

Data consisting of counts or indicators aggregated from multiple sources pose particular problems for data quality monitoring when the users of the aggregate data are blind to the individual sources. This arises when agencies wish to share data but, for privacy or contractual reasons, are only able to share data at an aggregate level. If the aggregators of the data are unable to guarantee the quality of either the sources of the data or the aggregation process, then the quality of the aggregate data may be compromised. This situation arose in the Distribute surveillance system (1). Distribute was a national emergency department syndromic surveillance project developed by the International Society for Disease Surveillance for influenza-like illness (ILI) that integrated data from existing state and local public health department surveillance systems, and operated from 2006 until mid-2012. Distribute was designed to work solely with aggregated data, with sites providing data aggregated from sources within their jurisdiction, and for which detailed information on the un-aggregated ‘raw’ data was unavailable. Previous work (2) on Distribute data quality identified several issues caused in part by the nature of the system: transient problems due to inconsistent uploads, problems associated with transient or long-term changes in the source makeup of the reporting sites, and lack of data timeliness due to individual site data accruing over time rather than in batch. Data timeliness was addressed using prediction intervals to assess the reliability of the partially accrued data (3). The types of data quality issues present in the Distribute data are likely to appear to some extent in any aggregate data surveillance system where direct control over the quality of the source data is not possible. In this work we present methods for detecting both transient and long-term changes in the source data makeup.

Methods

We examined methods to detect transient changes in data sources, which manifest as classical outliers. We found that traditional statistical process control methods did not work well for detecting transient issues, due to the presence of discontinuities caused by long-term changes in the source makeup. As both transient and long-term changes in source makeup manifest as step changes, we examined the performance of change point detection methods for monitoring these data. Such methods have previously been used for detecting changes in disease trends in data aggregated from Distribute (4). Following Kass-Hout (4), we used the Bayesian change point estimation procedure of Barry (5) as implemented in the R package BCP (6). We examined both offline and online detection using time series held at a constant lag.
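The intuition that a step change in source makeup is the split that best explains the series can be illustrated with a simple least-squares single change point search. This is only an illustrative sketch of change point detection in general, not the Bayesian BCP procedure the authors used; the function name is an assumption.

```python
import numpy as np

def best_changepoint(y):
    """Find the single split minimizing the within-segment sum of squares.

    Models the series as two constant segments, y[:index] and y[index:].
    Returns (index, sse_reduction); a large sse_reduction relative to the
    no-split SSE suggests a step change. Applied iteratively, a transient
    outlier shows up as two change points close together, matching the
    offline detection behavior described in the Results.
    """
    y = np.asarray(y, dtype=float)
    total = ((y - y.mean()) ** 2).sum()  # SSE with no split
    best_i, best_sse = None, np.inf
    for i in range(1, len(y)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i, total - best_sse
```

On a series that steps from 0 to 5 at index 10, the search recovers the split exactly and the SSE reduction equals the entire no-split SSE.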

Results

We found that transient problems could be detected offline as neighboring change points with high posterior probability. When multiple outliers exist close together, detection can be improved by iteratively removing flagged data points and re-running the change point detection on the reduced data. Following the removal of outliers, remaining change points indicate long-term changes. To enable real-time monitoring for data quality problems, we modified this offline detection process to also flag individual change points (rather than pairs of change points) detected in the most recent 5 days.

10.

Objective

To evaluate the association between Dengue Fever (DF) and climate in Mexico with real-time data from Google Dengue Trends (GDT) and climate data from NASA Earth observing systems.

Introduction

The incidence of dengue fever (DF) increased 30-fold between 1960 and 2010 [1]. The literature suggests that temperature plays a major role in the life cycle of the mosquito vector and, in turn, the timing of DF outbreaks [2]. We use real-time data from GDT and real-time temperature estimates from NASA Earth observing systems to examine the relationship between dengue and climate in 17 Mexican states from 2003–2011. For the majority of states, we predict that a warming climate will increase the number of days the minimum temperature is within the risk range for dengue.

Methods

The GDT estimates are derived from internet search queries and use methods similar to those developed for Google Flu Trends [3]. To validate GDT data, we ran a correlation between GDT and dengue data from the Mexican Secretariat of Health (2003–2010). To analyze the relationship between GDT and varying lags of temperature, we constructed a time series meta-analysis. The mean, maximum, and minimum temperature were tested at lags of 0–12 weeks using data from the Modern Era Retrospective-Analysis for Research and Applications. Finally, we built a binomial model to identify the 5 °C minimum-temperature range associated with a 50% or higher dengue activity threshold as predicted by GDT.
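The lag-scanning step can be sketched as a search over lagged Pearson correlations. This is a simplified illustration, assuming weekly series; it is not the time series meta-analysis or binomial model the authors actually fit, and the function name is an assumption.

```python
import numpy as np

def best_lag(temperature, dengue, max_lag=12):
    """Correlate dengue activity with temperature lagged by 0..max_lag weeks.

    Returns (lag, r): the lag (in weeks) at which earlier temperature best
    correlates with later dengue activity, and the Pearson r at that lag.
    """
    t = np.asarray(temperature, dtype=float)
    d = np.asarray(dengue, dtype=float)
    results = []
    for lag in range(max_lag + 1):
        x = t[:len(t) - lag] if lag else t  # temperature, shifted back by `lag`
        y = d[lag:]                         # aligned later dengue activity
        r = np.corrcoef(x, y)[0, 1]
        results.append((lag, r))
    return max(results, key=lambda lr: lr[1])
```

On synthetic data where dengue activity simply mirrors temperature eight weeks earlier, the scan recovers a lag of 8 with r near 1.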

Results

The time series plot of GDT data and the Mexican Secretariat of Health data (2003–2010) (Figure 1) produced a correlation coefficient of 0.87. The time series meta-analysis results for 17 states showed that an increase in minimum temperature at lag week 8 had the greatest odds of dengue incidence, with an odds ratio of 1.12 (95% confidence interval: 1.09–1.16). Comparing dengue activity above 50% in each state to the minimum temperature at lag week 8 showed that 14 of 17 states had an association with the warmest 5 degrees of the minimum temperature range. The state of Sonora was the only state to show an association between dengue and the coldest 5 degrees of the minimum temperature range.

Figure 1: Time series correlation, Google Dengue Trends vs. Secretariat of Health, Mexico, 2003–2010.

Conclusions

Overall, the incidence data from the Mexican Secretariat of Health showed a close correlation with the GDT data. The meta-analysis indicates that an increase in the minimum temperature at lag week 8 is associated with an increased dengue risk. This is consistent with the Colon-Gonzales et al. Mexico study, which also found a strong association with the 8-week lag of increasing minimum temperature [4]. The results from this binomial regression show that, for the majority of states, the warmest 5-degree range of the minimum temperature had the greatest association with dengue activity 8 weeks later. Inevitably, several other factors contribute to dengue risk that we were unable to include in this model [5]. IPCC climate change predictions suggest a 4 °C increase in Mexico. Under such a scenario, we predict an increase in the number of days the minimum temperature falls within the range associated with DF risk.

11.

Objective

A preliminary analysis was completed to define, identify, and track trends in drug overdoses (ODs), both intentional and unintentional, from emergency department (ED) and urgent care (UC) chief complaint data.

Introduction

The State of Ohio, like the country as a whole, has experienced an increasing incidence of drug ODs over the last three decades [3]. Of the unintended drug OD deaths in 2008, 9 out of 10 were caused by medications or illicit drugs [1]. In Ohio, drug ODs surpassed motor vehicle crashes (MVCs) as the leading cause of injury death in 2007, a trend that has continued through the most current available data [3]. Using chief complaint data to quickly track changes in the geographical distribution, demographics, and volume of drug ODs may aid public health efforts to decrease the number of associated deaths.

Methods

Chief complaint data from ED/UC visits were collected and analyzed from Ohio’s syndromic surveillance application for 2010–2012; ninety-six percent of all Ohio ED visits were captured during this timeframe. Due to the nonspecific nature of chief complaints and the lack of detail given upon registration at the ED/UC, separating visits into intentional vs. unintentional was not feasible. Therefore, a fairly specific classifier was created to define all potential ED/UC visits related to drug ODs. The data were analyzed using SAS v9.3 via time series analyses and stratified by age, gender, and geographic region. Although these data are pre-diagnostic in nature, they are more readily accessible than discharge data.

Results

On average, Ohio observed approximately 66 ED/UC visits per day related to drug ODs from 2010–2012. The data show an increasing trend from 2010 through 2012, as well as a slight seasonal trend, with more visits observed in the spring/summer months than in the autumn/winter months (Figure 1). Females accounted for a higher frequency of drug OD visits than males, by approximately 4 ED/UC visits per day; other data sources show a higher incidence of unintentional drug ODs in males than in females [3]. The age category contributing most to the increase was 18–39 years, for both males and females (Figure 2). Population rates were calculated to identify the counties most affected by drug ODs; the highest rates of OD-related ED/UC visits were found mostly in rural areas of Ohio.

Figure 1: ED Visits Related to Drug Overdoses by Day, Ohio, 2010–12. Figure 2: ED Visits Related to Drug Overdoses by Age Group, Ohio, 2010–12.

Conclusions

The annual death rate from unintentional drug poisonings among Ohio residents increased from 3.6 per 100,000 population in 2000 to 13.4 in 2010 [3]. As a result, the Ohio Governor created a Drug Abuse Task Force in 2009 [4]. Ohio legislation (HB 93), effective June 19, 2011, prohibited the operation of pain management clinics without a license [3]. According to this preliminary analysis, ED/UC visits related to drug ODs continued to increase one year after implementation of HB 93; it is unclear whether HB 93 has slowed the rate of increase. Additionally, pre-diagnostic data have significant limitations, including the possibility of misclassifying non-OD patient encounters as ODs. Further study of post-diagnostic data to confirm these trends is warranted.

12.

Objective

Review of the origins and evolution of the field of syndromic surveillance. Compare the goals and objectives of public health surveillance and syndromic surveillance in particular. Assess the science and practice of syndromic surveillance in the context of public health and national security priorities. Evaluate syndromic surveillance in practice, using case studies from the perspective of a local public health department.

Introduction

Public health disease surveillance is defined as the ongoing systematic collection, analysis and interpretation of health data for use in the planning, implementation and evaluation of public health, with the overarching goal of providing information to government and the public to improve public health actions and guidance [1,2]. Since the 1950s, the goals and objectives of disease surveillance have remained consistent [1]. However, the systems and processes have changed dramatically due to advances in information and communication technology, and the availability of electronic health data [2,3]. At the intersection of public health, national security and health information technology emerged the practice of syndromic surveillance [3].

Methods

To better understand the current state of the field, a review of the literature on syndromic surveillance was conducted: topics and keywords searched through PubMed and Google Scholar included biosurveillance, bioterrorism detection, computerized surveillance, electronic disease surveillance, situational awareness and syndromic surveillance, covering the areas of practice, research, preparedness and policy. This literature was compared with literature on traditional epidemiologic and public health surveillance. Definitions, objectives, methods and evaluation findings presented in the literature were assessed with a focus on their relevance from a local perspective, particularly as related to syndromic surveillance systems and methods used by the New York City Department of Health and Mental Hygiene in the areas of development, implementation, evaluation, public health practice and epidemiological research.

Results

A decade ago, the objective of syndromic surveillance was focused on outbreak and bioterrorism early-event detection (EED). While there have been clear recommendations for evaluation of syndromic surveillance systems and methods, the original detection paradigm for syndromic surveillance has not been adequately evaluated in practice, nor tested by real-world events (i.e., the systems have largely not 'detected' events of public health concern). In the absence of rigorous evaluation, the rationale and objectives for syndromic surveillance have broadened from outbreak and bioterrorism EED to include all causes and hazards, and to encompass all data and analyses needed to achieve "situational awareness", not simply detection. To evaluate current practices and provide meaningful guidance for local syndromic surveillance efforts, it is important to understand the emergence of the field in the broader context of public health disease surveillance, and to recognize how the original stated objectives of EED have shifted in relation to actual evaluation, recommendation, standardization and implementation of syndromic systems at the local level.

Conclusions

Since 2001, the field of syndromic surveillance has rapidly expanded, following the dual requirements of national security and public health practice. The original objective of early outbreak or bioterrorism event detection remains a core objective of syndromic surveillance, and systems need to be rigorously evaluated through comparison of consistent methods, metrics, and public health outcomes. The broadened mandate for all-cause situation awareness needs to be focused into measurable public health surveillance outcomes and objectives that are consistent with established public health surveillance objectives and relevant to the local practice of public health [2].

13.

Objective

To present the usefulness of syndromic surveillance for the detection of infectious disease outbreaks on small islands, based on the experience of Mayotte.

Introduction

Mayotte, a French overseas department of around 374 km² with 200,000 inhabitants, is located in the northern Mozambique Channel in the Indian Ocean (Figure 1: Map of the western Indian Ocean featuring Mayotte Island). In response to the threat posed by the emergence of the pandemic influenza A(H1N1)2009 virus, a syndromic surveillance system was implemented to monitor its spread and its impact on public health (1). This surveillance system, which proved useful during the influenza pandemic, has been maintained in order to detect infectious disease outbreaks.

Methods

Data are collected daily from patients' computerized medical files, which are filled in during medical consultations at the emergency department (ED) of the hospital center of Mayotte (2). Among the collected variables, the diagnosis, coded according to ICD-10, is used to categorize syndromes. Several syndromes are monitored, including syndromic groupings for conjunctivitis and unexplained fever. For early outbreak detection, a control chart is used based on an adaptation of the CUSUM methods developed by the CDC within the framework of the EARS program (3).
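The abstract does not give the exact parameterization of the adapted CUSUM. As a hedged sketch of the general idea, an EARS-C2-style detector compares each day's count with the mean and standard deviation of a 7-day baseline window separated from the current day by a 2-day guard band, alarming when the count exceeds the mean by three standard deviations. The daily counts below are invented:

```python
from statistics import mean, stdev

def ears_c2_alarms(counts, baseline=7, guard=2, threshold=3.0):
    """Flag days whose count exceeds baseline mean + threshold * SD.

    The baseline is the `baseline` days ending `guard + 1` days before
    the current day; the guard band keeps an emerging outbreak out of
    its own baseline, as in the EARS C2 method.
    """
    alarms = []
    for t in range(baseline + guard, len(counts)):
        window = counts[t - baseline - guard : t - guard]
        mu, sd = mean(window), stdev(window)
        # Floor the SD to avoid spurious alarms on zero-variance baselines
        if counts[t] > mu + threshold * max(sd, 0.5):
            alarms.append(t)
    return alarms

# Invented daily consultation counts with a jump on the final day
daily = [3, 4, 2, 5, 3, 4, 3, 4, 2, 3, 4, 3, 18]
print(ears_c2_alarms(daily))  # [12]
```

The actual Mayotte implementation may differ in baseline length, guard band, and threshold; those values here are the commonly cited EARS C2 defaults.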

Results

Each week, about 700 patients attend the ED of the hospital. The syndromic surveillance system detected an outbreak of conjunctivitis beginning in week 10 (Figure 2). At the epidemic peak in week 12, conjunctivitis consultations represented 5% of all consultations. Data from the sentinel practitioner network confirmed this epidemic, and the laboratory isolated an Enterovirus (4). At the same time, an unusual increase in unexplained fever was detected.

Figure 2: Weekly number of conjunctivitis and unexplained fever consultations, with statistical alarms detected.

Conclusions

Due to its geographical and socio-demographic situation, the population of Mayotte is widely exposed to infectious diseases. Even on a small island, syndromic surveillance can be useful for detecting outbreaks early, triggering alerts and mobilizing a rapid response in addition to other systems.

14.
Patients who are active and involved in their self-management and care are more likely to manage chronic conditions effectively (6, 26). With a 5-fold increase in the incidence of chronic illness over the past 20 years, access to information can give patients the tools and support to self-manage their chronic illness. New media technologies can serve as tools to engage and involve patients in their health care. Given the increasing ubiquity of the Internet and the availability of health information, patients are more easily able to seek and find information about their health. Thus, the Internet can serve as a mechanism of empowerment (4, 5). This is especially important for people with diabetes mellitus, for whom intensive self-management is critical.

15.

Objective

Show the benefits of using a generalized linear mixed model (GLMM) to examine long-term trends in asthma syndrome data.

Introduction

Over the last decade, the application of syndromic surveillance systems has expanded beyond early event detection to include long-term disease trend monitoring. However, the statistical methods employed for analyzing syndromic data tend to focus on early event detection. Generalized linear mixed models (GLMMs) may be a useful statistical framework for examining long-term disease trends because, unlike other models, GLMMs account for the clustering common in syndromic data and can assess disease rates at multiple spatial and temporal levels (1). We demonstrate these benefits by using a GLMM to estimate asthma syndrome rates in New York City from 2007 to 2012, and to compare the high and low asthma rates of Harlem and the Upper East Side (UES) of Manhattan.

Methods

Asthma-related emergency department (ED) visits, together with patient age and ZIP code, were obtained from data reported daily to the NYC Department of Health and Mental Hygiene. Demographic data were obtained from the 2010 US Census. ZIP codes representing high and low asthma rates in Harlem and the UES of Manhattan were chosen for closer inspection. The ratio of weekly asthma syndrome visits to total ED visits was modeled with a Poisson GLMM with week and ZIP code random intercepts (2). Age and ethnicity were adjusted for because of their association with asthma rates (3).

Results

The GLMM showed that citywide asthma rates remained stable from 2007 to 2012, but seasonal differences and significant inter-ZIP code variation were present. The GLMM-estimated asthma rate for the Harlem ZIP code (5.83%, 95% CI: 3.65%, 9.49%) was significantly higher than the rate for the UES ZIP code (0.78%, 95% CI: 0.50%, 1.21%). A linear time component in the GLMM showed no appreciable change over time despite seasonal fluctuations in the asthma rate. GLMM-based asthma rates are shown over time (Figure 1).

Figure 1: Harlem ZIP code (red), Upper East Side ZIP code (blue), and citywide (black) estimates, shown as dotted lines surrounded by 30% credibility bands in solid lines.

Conclusions

GLMMs have several strengths as statistical frameworks for monitoring trends including:
  1. Disease rates can be estimated at multiple spatial and temporal levels,
  2. Standard error adjustment for clustering in syndromic data allows for accurate, statistical assessment of changes over time and differences between subgroups,
  3. “Strength borrowed” (4) from the aggregated data informs small subgroups and smooths trends,
  4. Integration of covariate data reduces bias in estimated rates.
GLMMs have previously been suggested for early event detection with syndromic surveillance data (5), but their versatility makes them useful for monitoring long-term disease trends as well. In comparison, standard errors from single-level GLMs do not account for clustering and can lead to inaccurate statistical hypothesis testing. Bayesian hierarchical models (6) share many of the strengths of GLMMs but are more complicated to fit. In the future, GLMMs could provide a framework for grouping similar ZIP codes based on their model estimates (e.g. seasonal trends and influence on the overall trend), and for analyzing long-term disease trends with syndromic data.
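For reference, the single-level Poisson GLM that the authors contrast with the GLMM can be fit by iteratively reweighted least squares (IRLS) in a few lines. This sketch uses simulated data, not the NYC series, and omits the random effects that distinguish the GLMM; it simply illustrates the simpler baseline model:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by IRLS (single-level; no random effects)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)         # fitted means under the current coefficients
        W = mu                        # Poisson working weights
        z = X @ beta + (y - mu) / mu  # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated data: intercept 1.0, slope 0.5
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = fit_poisson_glm(X, y)
print(beta_hat)  # close to [1.0, 0.5]
```

Because every observation shares one set of coefficients, the standard errors from this fit ignore within-ZIP clustering, which is precisely the shortcoming the GLMM's random intercepts address.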

16.

Objective

Create an analysis pipeline that can detect the behavioral determinants of disease in the population using social media data.

Introduction

The explosive growth of social media sites presents a unique opportunity for developing alternative methods for understanding the health of the public. The near-ubiquity of smartphones has further increased the volume and resolution of the data shared through these sites. The emerging field of digital epidemiology [1] has focused on methods to analyze and use this "digital exhaust" to augment traditional epidemiologic methods. When applied to the task of disease detection, these methods often detect outbreaks 1–2 weeks earlier than their traditional counterparts [1]. Many of these approaches successfully employ data mining techniques to detect symptoms associated with influenza-like illness [2]. Others can identify the appearance of novel symptom patterns, allowing detection of the emergence of a new illness in a population [3]. However, behaviors that lead to increased risk for disease have not yet received this treatment.

Methods

We have created a methodology that can detect the behavioral determinants of disease in a population. Initially we have focused on risky behaviors that can contribute to HIV transmission; however, the methodology is generalizable. We collected 15 million tweets based on 32 broad keywords relating to three types of risky behaviors associated with the transmission of HIV: drug use (e.g. meth), risky sexual behaviors (e.g. bareback), and other STIs (e.g. herpes). We then hand-coded a subset of 2,537 unique tweets using a crowd-sourceable "game" that can be distributed online. This hand-coded set was used to train a simple n-gram classifier, which resulted in an algorithm to select relevant tweets from the larger database. We then generated geocodes from the free-text locations provided by tweet authors, supplemented by the ∼1% of tweets that are already geolocated. We scaled these geocodes to the state and county levels, which allowed us to compare HIV prevalence in our collected data with public health data.
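The abstract specifies only "a simple n-gram classifier". One common minimal choice is a multinomial naive Bayes model over unigrams and bigrams, sketched here on invented toy texts; the labels, examples, and feature scheme are hypothetical stand-ins, not the study's actual classifier or data:

```python
from collections import Counter
from math import log

def ngrams(text):
    """Unigram and bigram features from whitespace-tokenized text."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

class NaiveBayes:
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.counts = {c: Counter() for c in self.classes}
        self.priors = Counter(labels)
        for t, y in zip(texts, labels):
            self.counts[y].update(ngrams(t))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = log(self.priors[c])
            for f in ngrams(text):  # Laplace-smoothed log-likelihood per feature
                lp += log((self.counts[c][f] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented training texts (hypothetical, not the study's tweets)
texts = ["party with meth tonight", "herpes info clinic", "new meth score", "flu shot clinic"]
labels = ["relevant", "relevant", "relevant", "irrelevant"]
clf = NaiveBayes().fit(texts, labels)
print(clf.predict("meth party"))  # "relevant"
```

A classifier this simple would need far more hand-coded examples to approach the 66% sensitivity reported; the sketch shows only the mechanics of training from a hand-coded set.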

Results

We present the correlation between behaviors identified in social media and the corresponding impacts on disease incidence across a large population. Hand coding revealed that 34% of tweets containing one or more of the 32 initial keywords were relevant to behaviors associated with HIV transmission. Among the three categories of initial search terms, the drug category yielded 21% true positives, compared with 9% for risky behaviors and 2% for other STIs. The n-gram classifier achieved 66% sensitivity and 44% specificity on a test set. In addition, our geolocation algorithm found coordinates for 88% of text locations; in a test sample of 59 text locations, 83% of the geolocations were correctly identified. These components combine to form an analysis pipeline for detecting risky behaviors across the United States.

Conclusions

We present a surveillance methodology to help sift through the vast volumes of these data to detect behaviors and determinants of health contributing to both disease transmission and chronic illness. This effort allows for the identification of at-risk communities and populations, which will facilitate targeted primary- and secondary-prevention efforts to improve public health.

17.

Objective

To examine the effects of temperature on cardiovascular-related (CVD) morbidity and mortality among New York City (NYC) residents.

Introduction

Extreme temperatures are consistently shown to affect CVD-related mortality [1, 2]. A large multi-city mortality study demonstrated both cold-day and hot-day weather effects on CVD-related deaths, with the larger impact occurring on the coldest days [3]. In contrast, the association between weather and CVD-related morbidity is less clear [4, 5]. The purpose of this study is to characterize the effect of temperature on CVD-related emergency department (ED) visits, hospitalizations, and mortality in a large, heterogeneous population. Additionally, we conducted a sensitivity analysis to determine the impact of air pollutants, specifically fine particulate matter (PM2.5) and ozone (O3), together with temperature, on CVD outcomes.

Methods

We analyzed daily weather conditions, ED visits classified as CVD-related based on chief complaint text, hospitalizations, and natural-cause deaths that occurred in NYC between 2002 and 2006. ED visits were obtained from data reported daily to the city health department for syndromic surveillance. Inpatient admissions were obtained from the Statewide Planning and Research Cooperative System, a data reporting system developed by New York State. Mortality data were obtained from the NYC Office of Vital Statistics. Data for PM2.5 and O3 were obtained from all available air quality monitors within the five boroughs of NYC. To estimate the risk of CVD morbidity and mortality, we used generalized linear models with a Poisson distribution to calculate relative risks (RR) and 95% confidence intervals (CI). A non-linear distributed lag was used to model mean temperature, allowing for effects on the same day and on subsequent days. Models were fit separately for the cold season (October through March) and the warm season (April through September), since season may modify the effect on CVD outcomes. For the sensitivity analysis, we included PM2.5 and O3 in the model.
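A distributed-lag specification requires each day's model row to carry the current and prior days' temperatures. The sketch below builds that lagged design matrix; the lag length of 3 is arbitrary, the temperatures are invented, and the actual study modeled the lags through a non-linear basis rather than raw lag columns:

```python
import numpy as np

def lag_matrix(series, max_lag):
    """Build a (n - max_lag) x (max_lag + 1) design matrix whose columns
    are the series at lag 0, 1, ..., max_lag."""
    x = np.asarray(series, dtype=float)
    n = len(x) - max_lag
    return np.column_stack(
        [x[max_lag - k : max_lag - k + n] for k in range(max_lag + 1)]
    )

# Invented daily mean temperatures
temps = [10.0, 12.0, 11.0, 9.0, 14.0, 13.0]
L = lag_matrix(temps, max_lag=3)
print(L)
# Row 0 corresponds to day 3: lag0=9, lag1=11, lag2=12, lag3=10
```

In practice these columns would be passed through a spline basis (as in distributed lag non-linear models) before entering the Poisson regression, so that both the temperature-response and lag-response curves can be non-linear.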

Results

During the cold season, CVD-related ED visits and hospitalizations increased, while mortality decreased, with increasing mean temperature on the same day and on lagged days. Extremely cold temperatures were associated with a small increase in same-day in-hospital mortality, though cold temperatures generally did not appear to be associated with higher mortality. The opposite was observed in the warm season: ED visits and hospitalizations decreased, and mortality increased, with increasing mean temperature on the same day and on lagged days. Our sensitivity analysis, in which we controlled for PM2.5 and O3, demonstrated little effect of these air pollutants on the relationship between temperature and CVD outcomes.

Conclusions

Our results suggest a decline in the risk of a CVD-related ED visit or hospitalization during extreme temperatures on the same day and on recent-day lags, in both cold and warm seasons. In contrast, our findings for mortality indicate an increased risk of CVD-related death during hot temperatures; no mortality effect was observed during cold temperatures. The effects of extreme temperatures on CVD-related morbidity may be explained by behavioral patterns, as people are more likely to stay indoors on the coldest and hottest days.

18.

Objective

Uncertainty regarding the location of disease acquisition, as well as selective identification of cases, may bias maps of risk. We propose an extension to a distance-based mapping method (DBM) that incorporates weighted locations to adjust for these biases. We demonstrate this method by mapping potential drug-resistant tuberculosis (DRTB) transmission hotspots using programmatic data collected in Lima, Peru.

Introduction

Uncertainty introduced by the selective identification of cases must be recognized and corrected for in order to accurately map the distribution of risk. Consider the problem of identifying geographic areas with increased risk of DRTB. Most countries with a high TB burden only offer drug sensitivity testing (DST) to those cases at highest risk for drug-resistance. As a result, the spatial distribution of confirmed DRTB cases under-represents the actual number of drug-resistant cases[1]. Also, using the locations of confirmed DRTB cases to identify regions of increased risk of drug-resistance may bias results towards areas of increased testing. Since testing is neither done on all incident cases nor on a representative sample of cases, current mapping methods do not allow standard inference from programmatic data about potential locations of DRTB transmission.

Methods

We extend a DBM method [2] to adjust for this uncertainty. To map the spatial variation in the risk of a disease such as DRTB, in a setting where the available data consist of a non-random sample of cases and controls, we weight each address in our study by the probability that the individual at that address is a case (i.e., would test positive for DRTB in this setting). Once all locations are assigned weights, a prespecified number of locations (from previously published country-wide surveillance estimates) is sampled according to these weights, defining our cases. We assign these sampled cases DRTB status, calculate the DBM, then repeat the random selection and create a consensus map [3].
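The resampling step described above amounts to repeated weighted sampling without replacement of candidate locations, tallied across replicates into a consensus. This seeded sketch uses invented addresses and weights, not the Lima data:

```python
import random

def sample_cases(locations, weights, n_cases, n_reps, seed=0):
    """Repeatedly draw `n_cases` locations without replacement, with
    probability proportional to `weights`; return selection frequencies."""
    rng = random.Random(seed)
    freq = {loc: 0 for loc in locations}
    for _ in range(n_reps):
        pool = list(zip(locations, weights))
        for _ in range(n_cases):
            total = sum(w for _, w in pool)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, (loc, w) in enumerate(pool):
                acc += w
                if r <= acc:        # draw location i, remove it from the pool
                    freq[loc] += 1
                    pool.pop(i)
                    break
    return freq

# Invented addresses with inverse-probability-of-testing weights
locs = ["A", "B", "C", "D"]
wts = [0.9, 0.1, 0.5, 0.5]
freq = sample_cases(locs, wts, n_cases=2, n_reps=2000)
print(freq)  # "A" is selected most often, "B" least
```

Each replicate's sampled set would feed one DBM surface; averaging the surfaces over replicates yields the consensus map the authors describe.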

Results

Following [2], we select reassignment weights as the inverse probability of each untested case receiving DST at its location. These weights preferentially reassign untested cases located in regions with less testing, reflecting the assumption that in areas where testing is common, the individuals most at risk are tested. Fig. 1 shows two risk maps created by this weighted DBM, one using the unadjusted data (Fig. 1, L) and one using the informative weights (Fig. 1, R). The figure shows the difference, and potentially the improvement, made when information related to the missingness mechanism, which introduces spatial uncertainty, is incorporated into the analysis.

Conclusions

The weighted DBM has the potential to analyze spatial data more accurately when there is uncertainty regarding the locations of cases. Using a weighted DBM in combination with programmatic data from a high-TB-incidence community, we are able to use routine data, in which a non-random sample of drug-resistant cases is detected, to estimate the true underlying burden of disease.

Figure: (L) Unweighted DBM of the risk that a new TB case receiving DST is positive for DRTB, compared to all new TB cases that received DST. (R) Weighted DBM of the same risk, based on lab-confirmed DRTB cases and IPW-selected non-DST TB cases, compared to all new TB cases.

19.

Objective

This study aimed to elucidate the spatio-temporal correlations between mild and severe enterovirus cases by integrating three enterovirus-related surveillance systems in Taiwan. With a fuller understanding of these epidemiological characteristics, we hope to develop measures and indicators based on mild cases that provide early warning signals and thus minimize the subsequent number of severe cases.

Introduction

In July 2012, 54 children infected with enterovirus 71 (EV-71) died in Cambodia [1]. The media called it a mystery illness, worrying parents across Asia. In fact, severe enterovirus epidemics have occurred frequently in Asia, including in Malaysia, Singapore, Taiwan and China [2]. Clinical severity varies from asymptomatic to mild (hand-foot-mouth disease and herpangina) to severe pulmonary edema/hemorrhage and encephalitis [3]. To date, the development of an EV-71 vaccine and of more effective antiviral drugs is still ongoing [4]. Therefore, surveillance to monitor enterovirus activity and an understanding of the epidemiological characteristics linking mild and severe enterovirus cases are crucial.

Methods

Three main databases, covering national notifiable disease surveillance, sentinel physician surveillance and laboratory surveillance from July 1, 1999 to December 31, 2008, were analyzed. Pearson's correlation coefficient was used to measure the consistency of trends. The Poisson space-time scan statistic [5] was used to identify the most likely clusters, and GIS software (ArcMap, version 9.0; ESRI Inc., Redlands, CA, USA) was used to visualize the detected clusters.

Results

Temporal analysis found that the Pearson's correlation between mild and severe EV cases occurring in the same week was 0.553 (p<0.01) (Figure 1). This correlation became moderate when mild EV cases occurred 1–4 weeks before the current severe EV cases. Among the 1,517 severe EV cases notified to Taiwan CDC during the study period, the mean age was 27 months, 61.4% were male and 12% were fatal. Severe EV cases were significantly associated with the positive isolation rate of EV-71, with a much higher correlation than for mild cases [0.498, p<0.01 vs. 0.278, p<0.01]. Using the space-time cluster method, we identified three possible clusters in June 2008 spanning six cities/counties (Figure 2).

Figure 1: The temporal trend of mild and severe EV cases. Figure 2: The spatio-temporal clusters of severe EV cases.

Conclusions

Taiwan's surveillance data indicate that local public health professionals can monitor trends in the number of mild EV cases in the community to provide early warning signals, helping local residents reduce the severity of future waves.

20.

Objective

To examine the diversity of the patterns displayed by a range of organisms, and to seek a simple family of models that adequately describes all organisms, rather than a well-fitting model for any particular organism.

Introduction

There has been much research on statistical methods for prospective outbreak detection aimed at identifying unusual clusters of one syndrome or disease, and some work on multivariate surveillance methods (1). In England and Wales, automated laboratory surveillance of infectious diseases has been undertaken since the early 1990s; the statistical methodology of this automated system is described in (2). However, there has been little research on outbreak detection methods suited to large, multiple surveillance systems involving thousands of different organisms.

Methods

We obtained twenty years of data on weekly counts of all infectious disease organisms reported to the UK's Health Protection Agency. We summarized the mean frequencies, trends and seasonality of each organism using log-linear models. To identify a simple family of models that adequately represents all organisms, the Poisson model, the quasi-Poisson model and the negative binomial model were investigated (3,4). Formal goodness-of-fit tests were not used, as they can be unreliable with sparse data. The adequacy of the models was studied empirically using the relationships between the mean, variance and skewness. For this purpose, each data series was first subdivided into 41 half-years and de-seasonalized.
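The mean-variance diagnostic can be reproduced by regressing log variance on log mean across the half-year blocks: a slope near 1 supports the quasi-Poisson form (variance proportional to the mean), while a slope nearer 2 points to negative binomial-like overdispersion. This sketch runs the check on simulated counts with variance equal to three times the mean, not the actual HPA series:

```python
import numpy as np

def mean_variance_slope(blocks):
    """Slope of log(variance) vs log(mean) over a list of count blocks."""
    means = np.array([np.mean(b) for b in blocks])
    variances = np.array([np.var(b, ddof=1) for b in blocks])
    keep = (means > 0) & (variances > 0)
    return np.polyfit(np.log(means[keep]), np.log(variances[keep]), 1)[0]

rng = np.random.default_rng(1)
# Simulated 26-week blocks with variance = 3 * mean (quasi-Poisson-like),
# generated via a negative binomial parameterized to give that dispersion:
# for numpy's negative_binomial(n, p), mean = n(1-p)/p and var = mean/p.
blocks = []
for mu in [2, 5, 10, 20, 50, 100]:
    p = 1 / 3                   # var = mean / p = 3 * mean
    n_param = mu * p / (1 - p)  # shape giving mean mu
    blocks.append(rng.negative_binomial(n_param, p, size=26))
slope = mean_variance_slope(blocks)
print(round(slope, 2))  # near 1, as expected for variance proportional to mean
```

Applying the same slope estimate per organism, as in the paper's Figure 1 histogram, avoids formal goodness-of-fit tests while still discriminating between the candidate model families.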

Results

Trends and seasonality were summarized by plotting the distribution of estimated linear trend parameters for 2,250 organisms and the modal seasonal period for 2,254 organisms, including those organisms for which the seasonal effect is statistically significant. Relationships between mean and variance were summarized as in Figure 1; similar plots were used to summarize the relationships between mean and skewness.

Figure 1: Relationships between mean and variance. (Top) Histogram of the slopes of the best-fit lines for 1,001 organisms; the value 1 corresponds to the quasi-Poisson model. (Bottom) Log of variance plotted against log of mean for one organism; the full line is the best fit to the points, the dashed line corresponds to the quasi-Poisson model, and the dotted line to the Poisson model.

Conclusions

Statistical outbreak detection models must be able to cope with seasonality and trends. The data analyses suggest that the great majority of organisms can be adequately, though far from perfectly, represented by a statistical model in which the variance is proportional to the mean, such as the quasi-Poisson or negative binomial models.
