13.

Objective

Public health surveillance requires outbreak-detection algorithms computationally efficient enough to handle the growing volume of disease surveillance data. In response to this need, the authors propose a spatial clustering algorithm, rank-based spatial clustering (RSC), that rapidly detects outbreaks of infectious but non-contagious diseases.

Design

The authors compared the outbreak-detection performance of RSC with that of three well-established algorithms—the wavelet anomaly detector (WAD), Kulldorff's spatial scan statistic (KSS), and the Bayesian spatial scan statistic (BSS)—using real disease surveillance data onto which they superimposed simulated disease outbreaks.

Measurements

The following outbreak-detection performance metrics were measured: receiver operating characteristic (ROC) curve, activity monitoring operating characteristic (AMOC) curve, cluster positive predictive value, cluster sensitivity, and algorithm run time.
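An AMOC curve relates detection delay to false-alarm rate as the alarm threshold varies. A minimal sketch of how such a curve can be computed from a daily alarm-score series; the scores, outbreak day, and thresholds below are hypothetical, not taken from the study:

```python
import numpy as np

def amoc_curve(scores, outbreak_start, thresholds):
    """For each alarm threshold, return the false-alarm rate over the
    pre-outbreak period and the detection delay (days from outbreak
    start to the first post-outbreak alarm)."""
    far, delays = [], []
    for t in thresholds:
        alarms = scores >= t
        # false-alarm rate: fraction of pre-outbreak days with an alarm
        far.append(alarms[:outbreak_start].mean())
        post = np.flatnonzero(alarms[outbreak_start:])
        # if no alarm ever fires, count the delay as the full horizon
        delays.append(post[0] if post.size else len(scores) - outbreak_start)
    return np.array(far), np.array(delays)

# hypothetical daily alarm scores; the outbreak starts on day 4
scores = np.array([0.1, 0.6, 0.15, 0.1, 0.3, 0.7, 0.9, 0.95])
far, delays = amoc_curve(scores, outbreak_start=4, thresholds=[0.5, 0.25])
```

Sweeping more thresholds traces out the full curve; lowering the threshold trades a higher false-alarm rate for earlier detection.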

Results

RSC was computationally efficient. It outperformed the other two spatial algorithms in both detection timeliness and outbreak localization. RSC also had better overall timeliness than the time-series algorithm WAD at low false-alarm rates.

Conclusion

RSC is an ideal algorithm for analyzing large datasets when the application of other spatial algorithms is not practical. It also enables timely investigation by public health practitioners by providing early detection and well-localized outbreak clusters.
14.
In recent decades, the issue of emerging and re-emerging infectious diseases, especially those caused by viruses, has become an increasingly important area of concern in public health. It is important to anticipate future epidemics by accumulating knowledge through appropriate research and by monitoring their emergence using indicators from different sources, the objective being to alert and respond effectively in order to reduce the adverse impact on the general population. Most emerging pathogens in humans originate from known zoonoses. These pathogens have been engaged in long-standing and highly successful interactions with their hosts since their origins, and are exquisitely adapted to host parasitism. They have developed strategies aimed at: (1) maximizing the invasion rate; (2) selecting host traits that can reduce their impact on host life span and fertility; (3) ensuring timely replication and survival both within and between hosts; and (4) facilitating reliable transmission to progeny. In this context, arboviruses (ARthropod-BOrne viruses) will certainly represent a threat in the coming century. The unprecedented epidemic of Chikungunya virus that occurred in 2005–2006 on the French island of Réunion in the Indian Ocean, followed by outbreaks in other parts of the world such as India and Southern Europe, has drawn the attention of medical and state authorities to the risks linked to this re-emerging mosquito-borne virus. It is an excellent model for illustrating the issues we face today and improving how we respond tomorrow.
15.
A general problem in biosurveillance is finding appropriate aggregates of elemental data to monitor for the detection of disease outbreaks. We developed an unsupervised clustering algorithm for aggregating over-the-counter (OTC) healthcare products into categories. The algorithm runs MCMC over hundreds of parameters in a Bayesian model to place products into clusters. Despite the high dimensionality, it remains fast on hundreds of time series. The procedure uncovered a clinically significant distinction between OTC products intended for the treatment of allergy and those intended for the treatment of cough, cold, and influenza symptoms.
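The abstract does not specify the Bayesian MCMC model itself. As a simple stand-in for the general idea of grouping OTC sales time series into categories, the sketch below uses greedy correlation-threshold clustering instead of the authors' method; the synthetic series and the threshold are illustrative assumptions:

```python
import numpy as np

def correlation_clusters(series, threshold=0.8):
    """Greedy single-pass clustering: assign each product's sales
    series to the first cluster whose seed series it correlates with
    above `threshold`, else start a new cluster. Not the paper's
    Bayesian MCMC model, just a simple illustration of the task."""
    seeds, clusters = [], []
    for i, s in enumerate(series):
        for k, seed in enumerate(seeds):
            if np.corrcoef(s, series[seed])[0, 1] >= threshold:
                clusters[k].append(i)
                break
        else:  # no existing cluster matched: start a new one
            seeds.append(i)
            clusters.append([i])
    return clusters

# two correlated "allergy-like" series and one dissimilar "cold-like" series
t = np.arange(30, dtype=float)
allergy1 = np.sin(t / 5)
allergy2 = np.sin(t / 5) + 0.01 * np.random.default_rng(0).standard_normal(30)
cold = np.cos(t / 3)
clusters = correlation_clusters([allergy1, allergy2, cold])
```

Here the two near-identical series fall into one cluster and the dissimilar series starts its own, mirroring the allergy vs. cough/cold/influenza split the paper reports.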
16.

Objective

To highlight how data quality has been discussed in the biosurveillance literature in order to identify current gaps in knowledge and areas for future research.

Introduction

Data quality monitoring is necessary for accurate disease surveillance. However, it can be challenging, especially when “real-time” data are required. Data quality has been broadly defined as the degree to which data are suitable for use by data consumers [1]. When data quality is compromised at any point in a health information system, low-quality data can impair the detection of data anomalies, delay the response to emerging health threats [2], and result in inefficient use of staff and financial resources. While the impacts of poor data quality on biosurveillance are largely unknown, and vary depending on field and business processes, the information management literature includes estimates for increased costs amounting to 8–12% of organizational revenue and, in general, poorer decisions that take longer to make [3].

Methods

To fill an unmet need, a literature review was conducted using a structured matrix based on the following predetermined questions:
  • How has data quality been defined and/or discussed?
  • What measurements of data quality have been utilized?
  • What methods for monitoring data quality have been utilized?
  • What methods have been used to mitigate data quality issues?
  • What steps have been taken to improve data quality?
The search included PubMed, ISDS and AMIA Conference Proceedings, and reference lists. PubMed was searched using the terms “data quality,” “biosurveillance,” “information visualization,” “quality control,” “health data,” and “missing data.” The titles and abstracts of all search results were assessed for relevance and relevant articles were reviewed using the structured matrix.

Results

The completeness of data capture is the most commonly measured dimension of data quality discussed in the literature (other dimensions include timeliness and accuracy). The methods for detecting data quality issues fall into two broad categories: (1) methods for regular monitoring to identify data quality issues and (2) methods utilized for ad hoc assessments of data quality. Methods for regular monitoring of data quality are more likely to be automated and focused on visualization, compared with the methods described as part of special evaluations or studies, which tend to include more manual validation.

Improving data quality involves the identification and correction of data errors that already exist in the system using either manual or automated data cleansing techniques [4]. Several methods of improving data quality were discussed in the public health surveillance literature, including development of an address verification algorithm that identifies an alternative, valid address [5], and manual correction of the contents of databases [6].

Communication with the data entry personnel or data providers, either on a regular basis (e.g., an annual report) or when systematic data entry errors are identified, was mentioned in the literature as the most common step to prevent data quality issues.
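Completeness, the most commonly measured dimension above, lends itself to automated regular monitoring. A minimal sketch, assuming hypothetical record fields (`report_date`, `zip`, `age`, `syndrome`) and a flagging threshold that are not taken from the review:

```python
from datetime import date

def daily_completeness(records, required_fields):
    """Fraction of required fields that are filled in, per reporting day."""
    by_day = {}
    for rec in records:
        day = rec["report_date"]
        filled = sum(1 for f in required_fields if rec.get(f) not in (None, ""))
        n, total = by_day.get(day, (0, 0))
        by_day[day] = (n + filled, total + len(required_fields))
    return {day: n / total for day, (n, total) in by_day.items()}

records = [
    {"report_date": date(2012, 9, 1), "zip": "15213", "age": 34, "syndrome": "ILI"},
    {"report_date": date(2012, 9, 1), "zip": "", "age": None, "syndrome": "GI"},
]
comp = daily_completeness(records, ["zip", "age", "syndrome"])
# flag days whose completeness drops below an illustrative threshold of 0.9
flagged = [d for d, c in comp.items() if c < 0.9]
```

Running such a check daily and alerting data providers when a day is flagged matches the regular-monitoring and communication practices the review describes.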

Conclusions

In reviewing the biosurveillance literature in the context of the data quality field, the largest gap appears to be that the data quality methods discussed in the literature are often ad hoc and not consistently implemented. Developing a data quality program to identify the causes of lower-quality health data, address data quality problems, and prevent issues would allow public health departments to conduct biosurveillance more efficiently and effectively and to apply the results to improving public health practice.