Similar Documents
20 similar documents found (search time: 31 ms).
1.
Previous studies demonstrate statistically significant associations between disease and climate variations, highlighting the potential for developing climate‐based epidemic early warning systems. However, limitations include failure to allow for non‐climatic confounding factors, limited geographical/temporal resolution, or lack of evaluation of predictive validity. Here, we consider such issues for dengue in Southeast Brazil using a spatio‐temporal generalised linear mixed model with parameters estimated in a Bayesian framework, allowing posterior predictive distributions to be derived in time and space. This paper builds upon a preliminary study by Lowe et al. but uses extended, more recent data and a refined model formulation, which, amongst other adjustments, incorporates past dengue risk to improve model predictions. For the first time, a thorough evaluation and validation of model performance is conducted using out‐of‐sample predictions and demonstrates considerable improvement over a model that mirrors current surveillance practice. Using the model, we can issue probabilistic dengue early warnings for pre‐defined ‘alert’ thresholds. With the use of the criterion ‘greater than a 50% chance of exceeding 300 cases per 100,000 inhabitants’, there would have been successful epidemic alerts issued for 81% of the 54 regions that experienced epidemic dengue incidence rates in February–April 2008, with a corresponding false alarm rate of 25%. We propose a novel visualisation technique to map ternary probabilistic forecasts of dengue risk. This technique allows decision makers to identify areas where the model predicts with certainty a particular dengue risk category, to effectively target limited resources to those districts most at risk for a given season. Copyright © 2012 John Wiley & Sons, Ltd.
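As a minimal sketch of how such a probabilistic alert criterion can be evaluated from posterior predictive samples (the sampling distribution and all numbers below are illustrative assumptions, not the authors' fitted model):

```python
# Evaluate the alert rule 'greater than a 50% chance of exceeding 300 cases
# per 100,000 inhabitants' from posterior predictive draws for one region.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior predictive samples of incidence (per 100,000),
# standing in for draws from a fitted negative-binomial GLMM.
predictive_samples = rng.negative_binomial(n=5, p=5 / (5 + 280), size=4000)

EPIDEMIC_THRESHOLD = 300    # cases per 100,000 inhabitants
TRIGGER_PROBABILITY = 0.50  # 'greater than a 50% chance'

prob_epidemic = np.mean(predictive_samples > EPIDEMIC_THRESHOLD)
alert = prob_epidemic > TRIGGER_PROBABILITY
print(f"P(incidence > {EPIDEMIC_THRESHOLD}) = {prob_epidemic:.2f}, alert: {alert}")
```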

2.
3.
Routine surveillance of notifiable infectious diseases gives rise to daily or weekly counts of reported cases stratified by region and age group. From a public health perspective, forecasts of infectious disease spread are of central importance. We argue that such forecasts need to properly incorporate the attached uncertainty, so they should be probabilistic in nature. However, forecasts also need to take into account temporal dependencies inherent to communicable diseases, spatial dynamics through human travel and social contact patterns between age groups. We describe a multivariate time series model for weekly surveillance counts on norovirus gastroenteritis from the 12 city districts of Berlin, in six age groups, from week 2011/27 to week 2015/26. The following year (2015/27 to 2016/26) is used to assess the quality of the predictions. Probabilistic forecasts of the total number of cases can be derived through Monte Carlo simulation, but first and second moments are also available analytically. Final size forecasts as well as multivariate forecasts of the total number of cases by age group, by district and by week are compared across different models of varying complexity. This leads to a more general discussion of issues regarding modelling, prediction and evaluation of public health surveillance data. Copyright © 2017 John Wiley & Sons, Ltd.

4.
Background
Prior to the COVID-19 pandemic, US hospitals relied on static projections of future trends for long-term planning and were only beginning to consider forecasting methods for short-term planning of staffing and other resources. With the overwhelming burden imposed by COVID-19 on the health care system, an emergent need exists to accurately forecast hospitalization needs within an actionable timeframe.
Objective
Our goal was to leverage an existing COVID-19 case and death forecasting tool to generate the expected number of concurrent hospitalizations, occupied intensive care unit (ICU) beds, and in-use ventilators 1 day to 4 weeks in the future for New Mexico and each of its five health regions.
Methods
We developed a probabilistic model that took as input the number of new COVID-19 cases for New Mexico from Los Alamos National Laboratory’s COVID-19 Forecasts Using Fast Evaluations and Estimation tool, and we used the model to estimate the number of new daily hospital admissions 4 weeks into the future based on current statewide hospitalization rates. The model estimated the number of new admissions that would require an ICU bed or use of a ventilator and then projected the individual lengths of hospital stays based on the resource need. By tracking the lengths of stay through time, we captured the projected simultaneous need for inpatient beds, ICU beds, and ventilators. We used a postprocessing method to adjust the forecasts based on the differences between prior forecasts and the subsequent observed data. Thus, we ensured that our forecasts could reflect a dynamically changing situation on the ground.
Results
Forecasts made between September 1 and December 9, 2020, showed variable accuracy across time, health care resource needs, and forecast horizon. Forecasts made in October, when new COVID-19 cases were steadily increasing, had an average accuracy error of 20.0%, while the error in forecasts made in September, a month with low COVID-19 activity, was 39.7%. Across health care use categories, state-level forecasts were more accurate than those at the regional level. Although the accuracy declined as the forecast was projected further into the future, the stated uncertainty of the prediction improved. Forecasts were within 5% of their stated uncertainty at the 50% and 90% prediction intervals at the 3- to 4-week forecast horizon for state-level inpatient and ICU needs. However, uncertainty intervals were too narrow for forecasts of state-level ventilator need and all regional health care resource needs.
Conclusions
Real-time forecasting of the burden imposed by a spreading infectious disease is a crucial component of decision support during a public health emergency. Our proposed methodology demonstrated utility in providing near-term forecasts, particularly at the state level. This tool can aid other stakeholders as they face COVID-19 population impacts now and in the future.
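A rough sketch of the length-of-stay bookkeeping described in the Methods, for inpatient beds only (the admission forecast, the stay distribution and all parameters are illustrative assumptions, not the published model):

```python
# New daily admissions get sampled lengths of stay; concurrent bed use is
# obtained by tracking which stays overlap each future day.
import numpy as np

rng = np.random.default_rng(1)

horizon = 28                                           # forecast 4 weeks ahead
daily_admissions = rng.poisson(lam=30, size=horizon)   # hypothetical forecast

occupancy = np.zeros(horizon, dtype=int)
for day, n_admits in enumerate(daily_admissions):
    # Hypothetical geometric lengths of stay (mean ~5 days) per admission.
    stays = rng.geometric(p=0.2, size=n_admits)
    for los in stays:
        occupancy[day:day + los] += 1                  # occupy a bed per day of stay

print("projected concurrent inpatient beds:", occupancy)
```

The same bookkeeping, applied to the subset of admissions flagged for ICU or ventilator need, yields the other two resource forecasts.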

5.
We propose a new method that could be part of a warning system for the early detection of time clusters applied to public health surveillance data. This method is based on extreme value theory (EVT). To any new count of a particular infection reported to a surveillance system, we associate a return period that corresponds to the time we expect to elapse before such a level is seen again. If such a level is reached, an alarm is generated. Although standard EVT is defined only in the context of continuous observations, our approach allows us to handle the discrete observations that occur in the public health surveillance framework. Moreover, it applies without any assumption on the underlying unknown distribution function. The performance of our method is assessed in an extensive simulation study and is illustrated on real data from Salmonella surveillance in France. Copyright © 2014 John Wiley & Sons, Ltd.
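A toy version of the return-period alarm idea, using a simple empirical tail probability in place of the paper's EVT machinery for discrete data (the counts and the alarm threshold are invented for illustration):

```python
# Associate a return period with a new count: roughly 1 / P(X >= new count),
# and raise an alarm when the return period exceeds a chosen level.
import numpy as np

history = np.array([3, 5, 2, 8, 4, 6, 3, 7, 5, 9, 4, 6])  # past weekly counts
new_count = 12

# Empirical exceedance probability with a +1 correction so a record-breaking
# count still receives a finite return period.
p_exceed = (np.sum(history >= new_count) + 1) / (len(history) + 1)
return_period = 1.0 / p_exceed                  # in weeks

ALARM_RETURN_PERIOD = 52                        # alarm at a 'once a year' level
print(f"return period ≈ {return_period:.1f} weeks,"
      f" alarm: {return_period > ALARM_RETURN_PERIOD}")
```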

6.
We introduce a nonparametric survival prediction method for right‐censored data. The method generates a survival curve prediction by constructing a (weighted) Kaplan–Meier estimator using the outcomes of the K most similar training observations. Each observation has an associated set of covariates, and a metric on the covariate space is used to measure similarity between observations. We apply our method to a kidney transplantation data set to generate patient‐specific distributions of graft survival and to a simulated data set in which the proportional hazards assumption is explicitly violated. We compare the performance of our method with the standard Cox model and the random survival forests method. Copyright © 2012 John Wiley & Sons, Ltd.
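A self-contained sketch of the K-nearest-neighbour Kaplan–Meier prediction under simplifying assumptions (unweighted neighbours, Euclidean metric, simulated data; the paper's weighting scheme is not reproduced):

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier estimator; returns (distinct event times, S(t))."""
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                     # still under observation
        d = np.sum((times == t) & (events == 1))         # events at time t
        surv *= 1.0 - d / at_risk
        out_t.append(t); out_s.append(surv)
    return np.array(out_t), np.array(out_s)

def knn_km(x_new, X, times, events, k=50):
    """KM prediction from the K most similar training observations."""
    dist = np.linalg.norm(X - x_new, axis=1)             # assumed metric
    idx = np.argsort(dist)[:k]
    return km_curve(times[idx], events[idx])

# Toy data: 200 subjects, 2 covariates, exponential survival, random censoring.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
t_true = rng.exponential(scale=np.exp(X[:, 0]))
c = rng.exponential(scale=2.0, size=200)
times, events = np.minimum(t_true, c), (t_true <= c).astype(int)

t_grid, s_hat = knn_km(np.array([0.5, -0.2]), X, times, events, k=50)
```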

7.
Automated time series forecasting for biosurveillance
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
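A brief sketch of the Holt-Winters residual idea using statsmodels (the synthetic series with a day-of-week effect is an illustrative assumption; the 16 authentic data streams are not reproduced):

```python
# Fit Holt-Winters smoothing with a 7-day season, then form residuals
# (observation minus in-sample one-step fit) as input for a control chart.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
days = 365
dow_effect = np.tile([1.0, 1.1, 1.1, 1.0, 1.0, 0.7, 0.6], 53)[:days]
counts = rng.poisson(lam=50 * dow_effect)   # synthetic syndromic counts

fit = ExponentialSmoothing(
    counts.astype(float), trend="add", seasonal="add", seasonal_periods=7
).fit()

residuals = counts - fit.fittedvalues       # input for the detection algorithm
medape = np.median(np.abs(residuals / counts)) * 100
print(f"MedAPE of in-sample one-step forecasts: {medape:.1f}%")
```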

8.
Like many other high‐income countries, Australia has a range of programmes in place, from social security to food banks, to help address food insecurity. So far, they have been unable to adequately alleviate and prevent this growing nutrition challenge. This paper presents an evaluation of a new type of intervention in the food security landscape: the social enterprise. The Community Grocer is a social enterprise that operates weekly fresh fruit and vegetable markets in Melbourne, Australia. The aim of the study was to examine the market's ability to increase access, use and availability of nutritious food in a socially acceptable way for low socioeconomic status, urban‐dwelling individuals. The mixed‐method evaluation included comparative price audits (n = 27) at local (<1 km) stores, analysis of operational data from sample markets (n = 3), customer surveys (n = 91) and customer interviews (n = 12), collected in two phases (Autumn 2017, Summer 2018). Common fruits and vegetables (n = 10) cost, on average, approximately 40% less at the social enterprise than at local stores. Over twenty per cent of customers were food insecure, and 80% of households were low income. Thirty‐four different nationalities shopped at the market, and just over half (54%) shopped there weekly. More than 50 types of vegetables and fruit were available to purchase, varying with cultural preferences and seasonality, which supported variety and choice. Overall, this enterprise promotes food security in a localised area through low‐cost, convenient, dignified and nutritious offerings.

9.
We describe a novel Bayesian approach to estimate acquisition and clearance rates for many competing subtypes of a pathogen in a susceptible–infected–susceptible model. The inference relies on repeated measurements of the current status of being a non‐carrier (susceptible) or a carrier (infected) of one of the n_q > 1 subtypes. We typically collect the measurements with sampling intervals that may not catch the true speed of the underlying dynamics. We tackle the problem of incompletely observed data with Bayesian data augmentation, which integrates over possible carriage histories, allowing the data to contain intermittently missing values, complete dropouts of study subjects, or inclusion of new study subjects during the follow‐up. We investigate the performance of the described method through simulations by using two different mixing groups (family and daycare) and different sampling intervals. For comparison, we describe crude maximum likelihood‐based estimates derived directly from the observations. We apply the estimation algorithm to data about transmission of Streptococcus pneumoniae in Bangladeshi families. The computationally intensive Bayesian approach is a valid method to account for incomplete observations, and we found that it performs generally better than the simple crude method, in particular with a large amount of missing data. Copyright © 2012 John Wiley & Sons, Ltd.
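For contrast with the Bayesian approach, a sketch of the kind of crude transition-count estimates mentioned above (the panel-data layout, visit spacing and numbers are invented; unobserved events between visits are ignored, which is exactly the limitation the data augmentation addresses):

```python
# Crude rates: events observed between consecutive visits divided by the
# person-time spent in the originating state.
import numpy as np

# states[i, j]: subtype carried by subject i at visit j (0 = susceptible).
states = np.array([[0, 0, 1, 1, 0],
                   [0, 2, 2, 0, 0],
                   [1, 1, 0, 0, 2]])
dt = 30.0  # days between visits (assumed constant)

prev, curr = states[:, :-1], states[:, 1:]
acq_events = np.sum((prev == 0) & (curr > 0))   # susceptible -> carrier
sus_time = np.sum(prev == 0) * dt
clr_events = np.sum((prev > 0) & (curr == 0))   # carrier -> susceptible
car_time = np.sum(prev > 0) * dt

print(f"crude acquisition rate: {acq_events / sus_time:.4f} per day")
print(f"crude clearance rate:   {clr_events / car_time:.4f} per day")
```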

10.

Background  

Plague, caused by the bacterium Yersinia pestis, is a public and wildlife health concern in California and the western United States. This study explores the spatial characteristics of positive plague samples in California and tests Maxent, a machine-learning method that can be used to develop niche-based models from presence-only data, for mapping the potential distribution of plague foci. Maxent models were constructed using geocoded seroprevalence data from surveillance of California ground squirrels (Spermophilus beecheyi) as case points and Worldclim bioclimatic data as predictor variables, and compared and validated using area under the receiver operating characteristic curve (AUC) statistics. Additionally, model results were compared to locations of positive and negative coyote (Canis latrans) samples, in order to determine the correlation between Maxent model predictions and areas of plague risk as determined via wild carnivore surveillance.

11.
The scan statistic is a very popular surveillance technique for purely spatial, purely temporal, and spatial‐temporal disease data. It was extended to the prospective surveillance case, and it has been applied quite extensively in this situation. When the usual signal rules, such as those implemented in SaTScan™ (Boston, MA, USA) software, are used, we show that the scan statistic method is not appropriate for the prospective case. The reason is that it does not adjust properly for the sequential and repeated tests carried out during the surveillance. We demonstrate that the nominal significance level α is not meaningful and there is no relationship between α and the recurrence interval or the average run length (ARL). In some cases, the ARL may be equal to ∞, which makes the method ineffective. This lack of control of the type‐I error probability and of the ARL leads us to strongly oppose the use of the scan statistic with the usual signal rules in the prospective context. Copyright © 2014 John Wiley & Sons, Ltd.
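To make the ARL concept concrete: a Monte Carlo sketch of the run length to a false alarm when an α-level test is applied week after week to an in-control stream (a plain Poisson threshold rule, not the scan statistic itself; all parameters are illustrative assumptions):

```python
# Estimate the average run length (ARL) to the first false alarm under
# repeated weekly testing of an in-control Poisson count stream.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
alpha, lam, n_sims, max_weeks = 0.05, 20.0, 2000, 1000

# Smallest threshold c such that P(X >= c) <= alpha under the null.
c = poisson.ppf(1 - alpha, lam) + 1

run_lengths = []
for _ in range(n_sims):
    counts = rng.poisson(lam, size=max_weeks)
    alarms = np.nonzero(counts >= c)[0]
    run_lengths.append(alarms[0] + 1 if alarms.size else max_weeks)

# For independent exact alpha-level tests the ARL would be 1/alpha = 20 weeks;
# discreteness makes the achieved level smaller and the ARL somewhat longer.
print(f"empirical ARL to first false alarm: {np.mean(run_lengths):.1f} weeks")
```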

12.

Objective

Our goal is to develop a statistical model for characterizing influenza surveillance systems that will be helpful in interpreting multiple streams of influenza surveillance data in future outbreaks.

Introduction

Syndromic surveillance has been widely used in influenza surveillance worldwide. However, despite the potential benefits created by the large volume of data, biases due to the changes in healthcare seeking behavior and physicians’ reporting behavior, as well as the background noise caused by seasonal flu epidemics, contribute to the complexity of the surveillance system and may limit its utility as a tool for early detection [1,2]. Since most current analysis methods are developed for outbreak detection, there are few tools to characterize influenza surveillance data for situational awareness purposes in a quantitative manner.
The Hong Kong Centre for Health Protection (CHP) has a comprehensive influenza surveillance system based on healthcare providers, laboratories, schools, daycare centers and residential care homes for the elderly. Hong Kong usually experiences a summer peak in July and August [3], which potentially doubles the data volume and constitutes a natural experiment to assess the effect of school-age children in the influenza transmission dynamics. The richness of the available data and the unique epidemiological characteristics make Hong Kong an ideal study object to develop and evaluate our model.

Methods

We have constructed a Bayesian statistical model for influenza surveillance data by parameterizing factors that describe disease transmission, behavior patterns in health care seeking and provision, and biases and errors embedded in the reporting process (Figure 1). The prior distributions are selected for each of the parameters to reflect knowledge of influenza epidemiology and the likely biases in each data system. Using the Markov chain Monte Carlo (MCMC) method in OpenBUGS, a posterior distribution can be generated for every parameter to characterize each data stream. The ratios of specific pairs of data streams are assessed in order to identify patterns in the change of ratios at different stages of the flu season.

Results

Preliminary results, as shown in Figure 2, incorporate confirmed influenza infection (solid line), influenza-like illness (double solid line), fever cases (dashed line), and Google search index (round dashed line). Although most of these data series track together, differences among them suggest reporting bias related to public awareness, which will be addressed in the statistical modeling.

Conclusions

The posterior distributions for parameters and ratios between individual data streams can be used to characterize influenza surveillance systems in terms of their tendency to peak early or late, or to over- or under-represent actual influenza cases. To better interpret syndromic surveillance data for situational awareness purposes, behavioral data related to healthcare resource utilization, such as the percentage of intended GP visits among people with ILI, need to be collected together with the flu activity surveillance.
Figure 1. Conceptual model for the influenza surveillance statistical model. Blue circles: unobservable true values; white boxes: observations; orange boxes: factors.
Figure 2. Hong Kong flu activity in the 2009 pH1N1 outbreak.

13.
14.
An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave‐one‐out cross‐validatory (LOOCV) model assessment is the gold standard for estimating predictive p‐values that can flag such divergent regions. However, actual LOOCV is time‐consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p‐values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p‐value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p‐values estimated with iIS are almost identical to those estimated with actual LOOCV and outperform those given by the three existing methods, namely posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
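A sketch of the ordinary importance-sampling comparator described above, estimating a LOOCV predictive p-value from full-data posterior samples (iIS additionally integrates out the test observation's latent variables, which is omitted here; the Poisson likelihood and posterior draws are illustrative assumptions):

```python
# Weight each full-data posterior draw by 1/p(y_j | theta) to approximate the
# leave-one-out posterior, then average the predictive tail probability.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
y_j = 17                                   # held-out count for one region
theta = rng.gamma(9.0, 1.0, size=5000)     # hypothetical posterior draws of the
                                           # region's expected count

w = 1.0 / poisson.pmf(y_j, theta)          # importance weights toward LOOCV posterior
tail = poisson.sf(y_j - 1, theta)          # P(Y_rep >= y_j | theta)
p_loocv = np.sum(w * tail) / np.sum(w)
print(f"IS estimate of LOOCV predictive p-value: {p_loocv:.3f}")
```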

15.
The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar statistic and provide tables of critical values and a program for computing p‐values. Power is compared with the t‐test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t‐test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians, and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre‐ and post‐treatment safety measures are obtained, then under the null hypothesis the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre–post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on‐road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next‐day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
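A compact sketch of computing the maximally selected McNemar statistic itself (the exact reference distribution and critical values from the paper are not reproduced; the simulated differences are illustrative):

```python
# At each data-driven threshold, discard differences with smaller absolute
# value, form McNemar's statistic from the sign counts of the rest, and keep
# the maximum over thresholds.
import numpy as np

def max_mcnemar(d):
    d = d[d != 0]
    stats = []
    for t in np.unique(np.abs(d)):
        kept = d[np.abs(d) >= t]
        n_pos, n_neg = np.sum(kept > 0), np.sum(kept < 0)
        if n_pos + n_neg > 0:
            stats.append((n_pos - n_neg) ** 2 / (n_pos + n_neg))
    return max(stats)

rng = np.random.default_rng(6)
diffs = rng.normal(0.3, 1.0, size=60)      # hypothetical pre-post differences
print(f"MM statistic: {max_mcnemar(diffs):.2f}")
```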

16.

Background  

Evidence-based screening guidelines are needed for women under 40 with a family history of breast cancer, a BRCA1 or BRCA2 mutation, or other risk factors. An accurate assessment of breast cancer risk is required to balance the benefits and risks of surveillance, yet published studies have used narrow risk assessment schemata for enrollment. Breast density limits the sensitivity of film-screen mammography but is not thought to pose a limitation to MRI; however, the utility of MRI surveillance has not been specifically examined in women with dense breasts. Also, all MRI surveillance studies reported to date have used high-strength magnets that may not be practical for dedicated imaging in many breast centers. Medium-strength 0.5 Tesla MRI may provide a more economical alternative for surveillance.

17.
With advancements in next‐generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high‐dimensional sequencing data poses a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU‐SEQ, for the high‐dimensional association analysis of sequencing data. Based on a nonparametric U‐statistic, WU‐SEQ makes no assumption about the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU‐SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy‐tailed distribution). Even when the assumptions were satisfied, WU‐SEQ still attained performance comparable to SKAT. Finally, we applied WU‐SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very‐low‐density lipoprotein cholesterol.

18.
The Mann‐Whitney U test is frequently used to evaluate treatment effects in randomized experiments with skewed outcome distributions or small sample sizes. It may lack power, however, because it ignores the auxiliary baseline covariate information that is routinely collected. Wald and score tests in so‐called probabilistic index models generalize the Mann‐Whitney U test to enable adjustment for covariates, but these may lack robustness by demanding correct model specification and do not lend themselves to small sample inference. Using semiparametric efficiency theory, we here propose an alternative extension of the Mann‐Whitney U test, which increases its power by exploiting covariate information in an objective way and which lends itself to permutation inference. Simulation studies and an application to an HIV clinical trial show that the proposed permutation test attains the nominal Type I error rate and can be drastically more powerful than the classical Mann‐Whitney U test. Copyright © 2014 John Wiley & Sons, Ltd.
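For orientation, a sketch of permutation inference built on the classical (unadjusted) Mann-Whitney U statistic; the covariate-adjusted semiparametric statistic proposed in the paper is more involved and is not reproduced here (arm sizes and outcome distributions are illustrative):

```python
# Permute treatment labels to build the reference distribution of U, then
# compute a two-sided permutation p-value for the observed statistic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(8)
n = 25
y0 = rng.exponential(1.0, size=n)           # control arm (skewed outcome)
y1 = rng.exponential(1.4, size=n)           # treated arm

obs = mannwhitneyu(y1, y0).statistic
center = n * n / 2                          # mean of U under the null

pooled = np.concatenate([y0, y1])
perm_stats = []
for _ in range(5000):
    rng.shuffle(pooled)
    perm_stats.append(mannwhitneyu(pooled[:n], pooled[n:]).statistic)

p_perm = np.mean(np.abs(np.array(perm_stats) - center) >= np.abs(obs - center))
print(f"permutation p-value: {p_perm:.3f}")
```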

19.
Using data from the 2015 Asian American Quality of Life Survey (N = 2,609), latent profile analysis was conducted on general (health insurance, usual place for care and income) and immigrant‐specific (nativity, length of stay in the U.S., English proficiency and acculturation) risk factors of healthcare access. The latent profile analysis identified a three‐cluster model (low‐risk, moderate‐risk and high‐risk groups). Compared with the low‐risk group, the odds of having an unmet healthcare need were 1.52 times greater in the moderate‐risk group and 2.24 times greater in the high‐risk group. Challenging the myth of the model minority, the present sample of Asian Americans demonstrates its vulnerability in access to healthcare. Findings also show the heterogeneity in healthcare access risk profiles.
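Latent profile analysis is commonly fit as a finite Gaussian mixture model; a sketch under that assumption (the indicator matrix below is simulated, not the survey data, and the study's actual software is not specified in the abstract):

```python
# Fit a 3-component Gaussian mixture as an LPA stand-in and assign each
# respondent to a profile (cluster).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
# Hypothetical standardized access-risk indicators (insurance, usual source
# of care, income, nativity, length of stay, English proficiency, ...).
X = rng.normal(size=(2609, 7))

gmm = GaussianMixture(n_components=3, covariance_type="diag",
                      random_state=0).fit(X)
profiles = gmm.predict(X)                  # low/moderate/high-risk clusters
print("BIC:", gmm.bic(X), "profile sizes:", np.bincount(profiles))
```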

20.
Vaccine, 2022, 40(49): 7097–7107
Introduction
Parent and child vaccination behavior is related for human papillomavirus (HPV) and flu vaccines. Thus, it is likely that parental vaccination status is also associated with their children’s adherence to guideline-concordant childhood vaccination schedules. We hypothesized that parent influenza (flu) vaccination would be associated with their child’s vaccination status at age two.
Methods
We used electronic health record data to identify children and linked parents seen in a community health center (CHC) within the OCHIN network (292 CHCs in 16 states). We randomly selected a child aged <2 years with ≥1 ambulatory visit between 2009 and 2018. Employing a retrospective cohort study design, we used generalized estimating equations (GEE) logistic regression to estimate the odds of a child being up-to-date on vaccinations based on their linked parents’ flu vaccination status. We adjusted for relevant parent and child covariates and stratified by mother-only, father-only, and two-parent samples.
Results
The study included 40,007 family units: mother only = 35,444, father only = 2,784, and two parents = 1,779. A higher percentage of children were fully vaccinated if their parent or parents received a flu vaccine. Children in the two-parent sample whose parents both received a flu vaccine had more than twice the odds of being fully vaccinated, and two and a half times the odds of being fully vaccinated except flu vaccine, compared to children with two parents who did not receive a flu vaccine (covariate-adjusted odds ratio [aOR] = 2.39, 95% CI = 1.67, 3.43 and aOR = 2.54, 95% CI = 1.54, 4.19, respectively).
Conclusions
Parent flu vaccination is associated with routine child vaccination. Future research is needed to understand if this relationship persists over time and in different settings.
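A sketch of the kind of GEE logistic regression described in the Methods (variable names, the clustering unit and the simulated data are illustrative assumptions, not the study's dataset):

```python
# GEE logistic regression for a binary 'fully vaccinated' outcome with
# clustering by clinic and parent flu vaccination as the exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "clinic": rng.integers(0, 50, size=n),            # assumed clustering unit
    "parent_flu_vax": rng.integers(0, 2, size=n),
    "child_age_months": rng.integers(0, 24, size=n),  # example covariate
})
logit = -0.5 + 0.8 * df.parent_flu_vax + 0.01 * df.child_age_months
df["child_up_to_date"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee("child_up_to_date ~ parent_flu_vax + child_age_months",
                groups="clinic", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print("adjusted OR:", np.exp(result.params["parent_flu_vax"]))
```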
