Similar Articles

20 similar articles found.
1.
Objectives. We assessed how frequently researchers reported the use of statistical techniques that take into account the complex sampling structure of survey data and sample weights in published peer-reviewed articles using data from 3 commonly used adolescent health surveys.

Methods. We performed a systematic review of 1003 published empirical research articles from 1995 to 2010 that used data from the National Longitudinal Study of Adolescent Health (n = 765), Monitoring the Future (n = 146), or Youth Risk Behavior Surveillance System (n = 92) indexed in ERIC, PsycINFO, PubMed, and Web of Science.

Results. Across the data sources, 60% of articles reported accounting for design effects and 61% reported using sample weights. However, the frequency and clarity of reporting varied across databases, publication year, author affiliation with the data, and journal.

Conclusions. Given the statistical bias that occurs when design effects of complex data are not incorporated or sample weights are omitted, this study calls for improvement in the dissemination of research findings based on complex sample data. Authors, editors, and reviewers need to work together to improve the transparency of published findings using complex sample data.

Secondary data analysis of nationally representative health surveys is commonly conducted by health science researchers and can be extremely useful when they are investigating risk and protective factors associated with health-related outcomes. By providing access to a vast array of variables on large numbers of individuals, large-scale health survey data are enticing to many researchers. Many researchers, however, lack the methodological skills needed for effective access to and use of such data. Traditional statistical methods and software analysis programs assume that data were generated through simple random sampling, with each individual having equal probability of being selected. With large, nationally representative health surveys, however, this is often not the case. Instead, from the perspective of statistical analysis, data from these complex sample surveys differ from those obtained via simple random sampling in 4 respects.

First, the probabilities of selection of the observations are not equal; oversampling of certain subgroups in the population is often employed in survey sample design to allow reasonable precision in the estimation of parameters. Second, multistage sampling results in clustered observations in which the variance among units within each cluster is less than the variance among units in general. Third, stratification in sampling ensures appropriate sample representation on the stratification variable(s) but yields negatively biased estimates of the population variance. Fourth, unit nonresponse and other poststratification adjustments are usually applied to the sample to allow unbiased estimates of population characteristics.1 If these aspects of complex survey data are ignored, standard errors and point estimates are biased, thereby potentially leading to incorrect inferences being made by the researcher.
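To make the design-effect issue concrete, here is a minimal Python sketch, not drawn from the article and using entirely synthetic data, that contrasts the naive simple-random-sampling standard error with a Taylor-linearized, cluster-robust standard error for a weighted prevalence estimate. The cluster structure and weights are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic school survey: 30 clusters (schools) of 50 students each,
    # with a shared cluster effect and unequal weights from oversampling.
    n_clusters, m = 30, 50
    cluster = np.arange(n_clusters).repeat(m)
    school_effect = rng.normal(0, 1, n_clusters).repeat(m)
    y = (school_effect + rng.normal(0, 1, n_clusters * m) > 0).astype(float)
    w = rng.choice([1.0, 3.0], size=y.size)  # toy sampling weights

    # Weighted prevalence (ratio estimator)
    ybar_w = np.sum(w * y) / np.sum(w)

    # Naive SRS standard error -- what default software reports
    se_srs = y.std(ddof=1) / np.sqrt(y.size)

    # Taylor-linearized, cluster-robust SE (with-replacement approximation)
    z = w * (y - ybar_w) / np.sum(w)          # linearized contributions
    zc = np.array([z[cluster == c].sum() for c in range(n_clusters)])
    se_design = np.sqrt(n_clusters / (n_clusters - 1) * np.sum((zc - zc.mean()) ** 2))

    print(f"weighted prevalence: {ybar_w:.3f}")
    print(f"naive SE: {se_srs:.4f}  design-based SE: {se_design:.4f}")
    print(f"approximate design effect: {(se_design / se_srs) ** 2:.2f}")

With a positive within-cluster correlation, the design-based standard error is markedly larger than the naive one; analyses that report the naive version understate uncertainty and overstate significance.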

2.
Background. Previous reviews have demonstrated a higher risk of suicide attempts for lesbian, gay, and bisexual (LGB) persons (sexual minorities), compared with heterosexual groups, but these were restricted to general population studies, thereby excluding individuals sampled through LGB community venues. Each sampling strategy, however, has particular methodological strengths and limitations. For instance, general population probability studies have defined sampling frames but are prone to information bias associated with underreporting of LGB identities. By contrast, LGB community surveys may support disclosure of sexuality but overrepresent individuals with strong LGB community attachment.

Objectives. To reassess the burden of suicide-related behavior among LGB adults, directly comparing estimates derived from population- versus LGB community–based samples.

Search methods. In 2014, we searched the MEDLINE, EMBASE, PsycINFO, CINAHL, and Scopus databases for articles addressing suicide-related behavior (ideation, attempts) among sexual minorities.

Selection criteria. We selected quantitative studies of sexual minority adults conducted in nonclinical settings in the United States, Canada, Europe, Australia, and New Zealand.

Data collection and analysis. Random effects meta-analysis and meta-regression assessed for a difference in prevalence of suicide-related behavior by sample type, adjusted for study- or sample-level variables, including context (year, country), methods (medium, response rate), and subgroup characteristics (age, gender, sexual minority construct). We examined residual heterogeneity by using τ².

Main results. We pooled 30 cross-sectional studies, including 21 201 sexual minority adults, generating the following lifetime prevalence estimates of suicide attempts: 4% (95% confidence interval [CI] = 3%, 5%) for heterosexual respondents to population surveys, 11% (95% CI = 8%, 15%) for LGB respondents to population surveys, and 20% (95% CI = 18%, 22%) for LGB respondents to community surveys (Figure 1). The difference in LGB estimates by sample type persisted after we accounted for covariates with meta-regression. Sample type explained 33% of the between-study variability.

[Figure 1. Lifetime Prevalence of Suicide Attempts by Sexual Identity and Sample Type]

Authors’ conclusions. Regardless of sample type examined, sexual minorities had a higher lifetime prevalence of suicide attempts than heterosexual persons; however, the magnitude of this disparity was contingent upon sample type. Community-based surveys of LGB people suggest that 20% of sexual minority adults have attempted suicide.

Public health implications. Accurate estimates of sexual minority health disparities are necessary for public health monitoring and research. Most data describing these disparities are derived from 2 sample types, which yield different estimates of the lifetime prevalence of suicide attempts. Additional studies should explore the differential effects of selection and information biases on the 2 predominant sampling approaches used to understand sexual minority health.
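As an illustration of the pooling machinery described above, here is a minimal DerSimonian–Laird random-effects sketch in Python. The study counts are invented, and the review's actual model may differ (prevalences are often pooled on the logit scale rather than the raw scale used here).

    import numpy as np

    # Invented per-study data: suicide-attempt cases and sample sizes.
    cases = np.array([30, 55, 120, 48, 210])
    n = np.array([300, 400, 700, 350, 1100])

    p = cases / n
    var_p = p * (1 - p) / n              # within-study (binomial) variance
    w_fixed = 1 / var_p

    # DerSimonian-Laird estimate of between-study variance tau^2
    p_fixed = np.sum(w_fixed * p) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (p - p_fixed) ** 2)
    C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (len(p) - 1)) / C)

    # Random-effects pooled prevalence and 95% CI
    w_re = 1 / (var_p + tau2)
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    print(f"tau^2 = {tau2:.5f}")
    print(f"pooled prevalence: {p_re:.3f} "
          f"(95% CI {p_re - 1.96 * se_re:.3f}, {p_re + 1.96 * se_re:.3f})")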

3.
Objectives. We examined whether the widespread assumption that Hispanics are subject to greater noncoverage bias in landline telephone surveys, because they are more likely than other ethnic groups to use cell phones exclusively, was supported by data.

Methods. Data came from the 2010 National Health Interview Survey and the 2009 California Health Interview Survey. We considered estimates derived from surveys of adults with landline telephones biased and compared them with findings for all adults. Noncoverage bias was the difference between them, examined separately for Hispanics and non-Hispanic Whites.

Results. Differences in demographic and health characteristics between cell-only and landline users were larger for non-Hispanic Whites than Hispanics; cell usage was much higher for Hispanics than non-Hispanic Whites. The existence, pattern, and magnitude of noncoverage bias were comparable between the groups.

Conclusions. We found no evidence to support a larger noncoverage bias for Hispanics than non-Hispanic Whites in landline telephone surveys. This finding should be considered in the design and interpretation of telephone surveys.

The population trend of dropping landline telephone service and switching to cellular phones is a well-known threat to landline random-digit-dialed telephone surveys, in which cell phone numbers are not part of the sample.1–6 According to the National Health Interview Survey (NHIS), the rate of cell-only usage for the US adult (aged ≥ 18 years) general population was 27.8% in the second half of 2010, more than double the rate in 2007 (12.6%).7 This increasing cell-only usage implies that the proportion of the population without a landline telephone is much larger than when traditional landline random-digit-dialed surveys became popular in the 1970s. This decreases the proportion of the general population covered by the landline telephone frame, which is the listing of telephone numbers used to draw the samples. If we assume that those who have both cell and landline telephones but mostly use cell phones are difficult to reach over landline telephones, close to half of the adult population (45.2%) would be classified as unreachable via landline telephones.7

Greater cell phone usage also implies greater noncoverage bias. As defined in the literature, the noncoverage problem arises from failure to include some elements of the population in the frame.8 Noncoverage bias is the difference in estimates derived from those who are covered by the frame and those who should be covered. It can also be calculated by multiplying the noncoverage rate and the difference between those who are covered and those who are not. For telephone surveys, bias may arise not only at the population level but also at the subpopulation level when the cell-only users are not accounted for in the data collection.3,9 For example, for low-income young adults, ignoring cell-only users is likely to incur bias in health risk behavior variables, and for young adults in general, the estimates for alcohol consumption have been found to be subject to such noncoverage bias.3,9

Besides age and lifestyle characteristics, ethnicity/race is considered to be an important correlate of telephone usage. Cell phone usage rates differ by ethnicity/race, and non-Hispanic Whites often report a lower cell-only rate than do other groups. Among minority groups, Hispanics have the highest cell-only rate.1,7,10 Figure A (available as a supplement to the online version of this article at http://www.ajph.org) summarizes cell phone usage estimates for the adult general population, Hispanics, and non-Hispanic Whites from the 2008, 2009, and 2010 NHIS, as reported in Blumberg and Luke.7 Hispanics consistently reported much higher cell-only and cell-mostly usage than the rest of the population. Their cell-only rate was close to 40%, and their combined cell-only and cell-mostly rate was higher than 55% in the second half of 2010, both about 13.4 percentage points higher than those of non-Hispanic Whites.

The documented difference in cell phone usage leads many researchers to believe that noncoverage bias is larger for Hispanics than for other ethnic groups. However, this belief requires further investigation, because discussions about noncoverage bias need to address both noncoverage rates and differences between persons who are and are not covered by the frame.8 Even with a high noncoverage rate, noncoverage bias may be trivial if those who are covered by the frame are similar to those who are not. The reverse is also true: a low noncoverage rate does not guarantee low noncoverage bias. It is therefore essential to evaluate noncoverage bias rather than merely accounting for noncoverage rates. We examined potential noncoverage bias in landline telephone surveys for Hispanics by assessing their telephone usage and associated health characteristics. We further compared telephone usage and noncoverage bias between Hispanics and non-Hispanic Whites in 2 independent data sources.
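The decomposition stated above (noncoverage bias equals the noncoverage rate times the difference between covered and noncovered persons) lends itself to a tiny worked example. The numbers below are hypothetical, not the article's estimates.

    # Hypothetical numbers illustrating the decomposition:
    #   bias of frame estimate = noncoverage rate x (covered mean - noncovered mean)
    noncoverage_rate = 0.40  # e.g., share of a subgroup that is cell-only
    mean_landline = 0.22     # prevalence among those covered by the frame
    mean_cell_only = 0.30    # prevalence among those outside the frame

    full_mean = (1 - noncoverage_rate) * mean_landline + noncoverage_rate * mean_cell_only
    bias = mean_landline - full_mean

    print(f"full-population prevalence: {full_mean:.3f}")
    print(f"landline-frame estimate:    {mean_landline:.3f}")
    print(f"noncoverage bias:           {bias:+.3f}")
    # If covered and noncovered means are equal, the bias vanishes no matter
    # how high the noncoverage rate -- the pattern reported for Hispanics.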

4.

Objectives

The complex sample design of the Korean National Health and Nutrition Examination Survey (KNHANES) requires special analysis that incorporates the sample weights, stratification, and clustering, which ordinary statistical procedures do not use.

Methods

This study investigated the proportion of research papers analyzing the KNHANES, indexed in PubMed from 2007 to 2012, that used an appropriate, design-based statistical methodology. We also compared differences in mean and regression estimates between ordinary statistical analyses without sampling weights and design-based analyses, using the KNHANES 2008 to 2010.

Results

Of the 247 research articles indexed in PubMed, only 19.8% used survey design analysis; the remaining 80.2% used ordinary statistical analysis, treating the KNHANES data as if they had been collected by simple random sampling. Means and standard errors differed between the ordinary statistical analyses and the design-based analyses, and the standard errors in the design-based analyses tended to be larger than those in the ordinary analyses.

Conclusions

Ignoring a complex survey design can result in biased estimates and overstated significance levels. The sample weights, stratification, and clustering of the design must be incorporated into analyses to obtain valid point estimates and standard errors.
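Below is a toy illustration of why omitting sample weights biases point estimates when subgroups are oversampled. The data are synthetic (not KNHANES), with an invented two-stratum structure.

    import numpy as np

    rng = np.random.default_rng(1)

    # Population: 90% stratum A (prevalence 0.10), 10% stratum B (prevalence 0.40).
    # Stratum B is heavily oversampled: equal sample sizes in both strata.
    n_a, n_b = 500, 500
    y = np.concatenate([rng.binomial(1, 0.10, n_a), rng.binomial(1, 0.40, n_b)])

    # Design weight = population share / sample share
    w = np.concatenate([np.full(n_a, 0.90 / 0.5), np.full(n_b, 0.10 / 0.5)])

    print(f"unweighted prevalence: {y.mean():.3f}")                  # pulled toward stratum B
    print(f"weighted prevalence:   {np.sum(w * y) / np.sum(w):.3f}") # near the true 0.13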

5.

Background

A continuously operating survey can yield advantages in survey management, field operations, and the provision of timely information for policymakers and researchers. We describe the key features of the sample design of the New Zealand (NZ) Health Survey, which has been conducted on a continuous basis since mid-2011, and compare it with a number of other national population health surveys.

Methods

A number of strategies to improve the NZ Health Survey are described: implementation of a targeted dual-frame sample design for better Māori, Pacific, and Asian statistics; movement from periodic to continuous operation; use of core questions with rotating topic modules to improve flexibility in survey content; and opportunities for ongoing improvements and efficiencies, including linkage to administrative datasets.

Results and discussion

The use of disproportionate area sampling and a dual-frame design reduced the variances of Māori, Pacific, and Asian statistics by approximately 19%, 26%, and 4%, respectively, at the cost of a 17% increase in all-ethnicity variances. These trade-offs were broadly in line with the survey's priorities. Respondents provided a high degree of cooperation in the first year, with an adult response rate of 79% and consent rates for data linkage above 90%.

Conclusions

A combination of strategies tailored to local conditions gives the best results for national health surveys. In the NZ context, data from the NZ Census of Population and Dwellings and the Electoral Roll can be used to improve the sample design. A continuously operating survey provides both administrative and statistical advantages.
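The trade-off reported above, better subgroup precision at the cost of higher all-ethnicity variances, is often summarized by Kish's approximate design effect due to unequal weighting. A minimal Python sketch with invented weights:

    import numpy as np

    def kish_deff(weights):
        # Kish's approximate design effect due to unequal weighting:
        # deff_w = 1 + cv^2 = n * sum(w^2) / sum(w)^2
        w = np.asarray(weights, dtype=float)
        return w.size * np.sum(w ** 2) / np.sum(w) ** 2

    # Disproportionate design: a small subgroup sampled at 3x the base rate,
    # so its members carry weight 1/3 relative to everyone else's weight 1.
    w = np.concatenate([np.full(300, 1 / 3), np.full(700, 1.0)])
    print(f"all-group design effect from weighting: {kish_deff(w):.2f}")
    # Values above 1 mean inflated all-group variances -- the price paid
    # for the improved subgroup precision.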

6.
Objectives. We examined self-reported health status, health behaviors, access to care, and use of preventive services of the US Hispanic adult population to identify language-associated disparities.

Methods. We analyzed 2003 to 2005 Behavioral Risk Factor Surveillance System data from 45 076 Hispanic adults in 23 states, who represented 90% of the US Hispanic population, and compared 25 health indicators between Spanish-speaking Hispanics and English-speaking Hispanics.

Results. Physical activity and rates of chronic disease, obesity, and smoking were significantly lower among Spanish-speaking Hispanics than among English-speaking Hispanics. Spanish-speaking Hispanics reported far worse health status and access to care than did English-speaking Hispanics (39% vs 17% in fair or poor health, 55% vs 23% uninsured, and 58% vs 29% without a personal doctor) and received less preventive care. Adjustment for demographic and socioeconomic factors did not mitigate the influence of language on these health indicators.

Conclusions. Spanish-language preference marks a particularly vulnerable subpopulation of US Hispanics who have less access to care and use of preventive services. Priority areas for Spanish-speaking adults include maintenance of healthy behaviors, promotion of physical activity and preventive health care, and increased access to care.

More than 1 in 10 US residents now speak Spanish at home, and approximately half of these persons report an ability to speak English less than “very well.”1 Language preference and English language proficiency have previously been associated with health-related behaviors, disease prevalence, and receipt of health care services among Hispanics,2–6 but lack of sufficient individual-level population-based data on ethnicity, socioeconomic position, acculturation, and language has limited our ability to document the extent of language-associated disparities or to understand their component causes.7

The utility of national surveys in monitoring health disparities and informing public health interventions relies upon methodologic adaptation to the increasing diversity of the US population.8 One of the most important sources of national data for identifying emerging health problems, developing public health policies and targeted prevention programs, and tracking progress toward meeting the Healthy People 2010 objectives is the Behavioral Risk Factor Surveillance System (BRFSS), sponsored by the Centers for Disease Control and Prevention.9 The BRFSS has included an optional Spanish-language survey instrument since 1987, but until recently, few states conducted Spanish-language interviews. Spanish-language survey data are now available from 23 states, which together represent approximately 90% of the total US Hispanic population. Thus, it is newly possible to describe rates of common population health indicators for a nationally representative sample of Spanish-speaking adults and to broadly examine language-associated disparities within the US Hispanic population.

We sought to (1) provide a broad, national overview of the current US Spanish-speaking population, examining chronic disease prevalence, risk factors, self-reported health status, access to care, and receipt of preventive health services; (2) assess the extent to which language is associated with these health indicators among US Hispanics; and (3) examine regional variation in these health indicators among Spanish-speaking Hispanics. Comparative indicators for English-speaking Hispanic respondents are given to provide a context for evaluating and responding to the health risks and health care needs of the Spanish-speaking population.

7.
Objectives. We compared a statewide telephone health survey with electronic health record (EHR) data from a large Wisconsin health system to estimate asthma prevalence in Wisconsin.

Methods. We developed frequency tables and logistic regression models using Wisconsin Behavioral Risk Factor Surveillance System and University of Wisconsin primary care clinic data. We compared adjusted odds ratios (AORs) from each model.

Results. Between 2007 and 2009, the EHR database contained 376 000 patients (30 000 with asthma), and 23 000 persons (1850 with asthma) responded to the Behavioral Risk Factor Surveillance System telephone survey. AORs for asthma were similar in magnitude and direction for the majority of covariates, including gender, age, and race/ethnicity, between survey and EHR models. The EHR data had greater statistical power to detect associations than did survey data, especially in pediatric and ethnic populations, because of larger sample sizes.

Conclusions. EHRs can be used to estimate asthma prevalence in Wisconsin adults and children. EHR data may improve public health chronic disease surveillance by providing high-quality data at the local level to better identify areas of disparity and risk factors and to guide education and health care interventions.

Asthma is a complex chronic disease with intermittent symptoms and varying degrees of severity. This often makes it difficult to determine its prevalence in a population. Nationally, asthma is estimated to affect approximately 10% of children aged 17 years and younger and 8% of adults,1 and is associated with significant morbidity and substantial health care costs. The economic cost of asthma in the United States was estimated at $59.0 billion in 2007, including direct health care costs of $53.1 billion and indirect, or lost productivity, costs of $5.9 billion.2 These outcomes are largely preventable with targeted interventions.3 Ideally, asthma surveillance should identify disproportionately affected populations and guide prevention and intervention efforts.

Surveillance data for chronic diseases are traditionally drawn from federally supported health surveys that provide estimates of asthma prevalence at the national and state levels but not at the local level, where many policy decisions are made. The Behavioral Risk Factor Surveillance System (BRFSS) is the only source of data on health-related behaviors and outcomes for many states, and it is the principal source of asthma prevalence data for Wisconsin.4 The Wisconsin telephone-based BRFSS survey contains self-reported disease and risk factor data for approximately 4500 adults and 1100 children annually. The BRFSS sample depends on available federal funding and may vary widely from year to year. Although data are provided at the county level, the sample size is often too small for direct estimation of disease prevalence at this geographical level.

Electronic health records (EHRs) are increasingly used in research to identify patients with chronic diseases for surveillance and epidemiological studies.5–7 We compared asthma prevalence estimates in the Wisconsin child and adult population from the traditional statewide BRFSS telephone survey and EHRs from a large Wisconsin health system. We hypothesized that a reliable estimate of asthma prevalence can be made from EHR data at a local level when compared with telephone survey data.
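The comparison of adjusted odds ratios described above can be sketched as follows. The data are simulated, the covariates are reduced to age and gender, and the sample sizes are invented, so this is only a schematic of the study's analysis, using statsmodels for the logistic fits.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    def simulate(n):
        # Toy generator standing in for either data source.
        age = rng.integers(5, 85, n)
        female = rng.integers(0, 2, n)
        logit = -2.5 + 0.02 * age + 0.25 * female
        asthma = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        return pd.DataFrame({"asthma": asthma, "age": age, "female": female})

    # The EHR sample is far larger than the telephone survey sample.
    sources = {"survey": simulate(2_000), "EHR": simulate(100_000)}

    for name, df in sources.items():
        fit = smf.logit("asthma ~ age + female", data=df).fit(disp=False)
        or_f = np.exp(fit.params["female"])
        lo, hi = np.exp(fit.conf_int().loc["female"])
        print(f"{name}: OR(female) = {or_f:.2f} (95% CI {lo:.2f}, {hi:.2f})")
    # Similar ORs with much narrower CIs in the EHR sample mirror the
    # greater statistical power the study reports.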

8.

Objective

Surveys frequently deviate from simple random sampling through the use of unequal probability sampling, stratified sampling, and multistage sampling. This work uses a public health survey to systematically illustrate the effects of incompletely accounting for strata, clustering, and weights.

Study Design and Setting

Data analysis was based on the Study of Health in Pomerania (n = 4,308; ages 20–79 years), a two-stage regional survey with high sampling fractions at the first stage. We assessed the effects of survey design features (weights, stratification, clustering, and the finite population correction) on point and variance estimates of lifestyle indicators and clinical parameters.

Results

Misspecification of the survey design substantially affected both the point estimates of health characteristics and their standard errors (SEs). The strongest bias in SEs arose from omission of the second sampling stage. By contrast, completely ignoring the sampling design led to only minor differences in variance estimates relative to the complete setup. Weighting predominantly affected point estimates of lifestyle factors.

Conclusion

Partial misspecification of survey design elements may bias variance estimates severely and can be even more harmful than completely neglecting the design. If subgroups are sampled at different rates, weighting is of particular relevance for prevalence estimates of lifestyle indicators.
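One design element named above, the finite population correction (FPC), is easy to isolate. A minimal Python sketch with an invented sampling fraction shows how omitting the FPC inflates the estimated standard error when first-stage sampling fractions are high:

    import numpy as np

    def srs_var(sample, N, use_fpc=True):
        # Variance of the sample mean under SRS, with optional finite
        # population correction (1 - n/N).
        n = sample.size
        fpc = (1 - n / N) if use_fpc else 1.0
        return fpc * sample.var(ddof=1) / n

    rng = np.random.default_rng(3)
    y = rng.normal(50, 10, 400)  # sample of 400 from a population of 1,000

    # With a 40% sampling fraction the FPC shrinks the variance noticeably;
    # omitting it (the usual infinite-population default) overstates the SE.
    print(f"SE with FPC:    {np.sqrt(srs_var(y, N=1000)):.3f}")
    print(f"SE without FPC: {np.sqrt(srs_var(y, N=1000, use_fpc=False)):.3f}")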

9.

Background

Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required.

Methods

We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A.

Results

VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of the VC and intracluster correlation coefficient (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three; it was greater than 0.50 in one health area out of two under two of the three sampling plans.

Conclusions

Small-sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
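The bootstrapping analysis mentioned in the methods can be sketched by resampling whole clusters with replacement and recomputing coverage; the per-cluster counts below are invented for illustration (Python).

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy 10 x 15 cluster survey: per-cluster vaccinated counts out of 15.
    vaccinated = np.array([14, 12, 15, 9, 13, 11, 14, 10, 12, 13])
    sampled = np.full(10, 15)

    def coverage(idx):
        return vaccinated[idx].sum() / sampled[idx].sum()

    # Resample whole clusters with replacement to gauge estimate stability.
    boot = np.array([coverage(rng.integers(0, 10, 10)) for _ in range(2000)])
    print(f"VC estimate:  {coverage(np.arange(10)):.3f}")
    print(f"bootstrap SE: {boot.std(ddof=1):.4f}")
    print(f"95% percentile CI: ({np.quantile(boot, 0.025):.3f}, "
          f"{np.quantile(boot, 0.975):.3f})")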

10.
This paper outlines the utility of statistical methods for sample surveys in analysing clinical trials data. Sample survey statisticians face a variety of complex data analysis issues deriving from the use of multi-stage probability sampling from finite populations. One such issue is that of clustering of observations at the various stages of sampling. Survey data analysis approaches developed to accommodate clustering in the sample design have more general application to clinical studies in which repeated measures structures are encountered. Situations where these methods are of interest include multi-visit studies where responses are observed at two or more time points for each patient, multi-period cross-over studies, and epidemiological studies of repeated occurrences of adverse events or illnesses.

We describe statistical procedures for fitting multiple regression models to sample survey data that are more effective for repeated measures studies with complicated data structures than the more traditional approaches of multivariate repeated measures analysis. In this setting, one can specify a primary sampling unit within which repeated measures have intraclass correlation. This intraclass correlation is taken into account by sample survey regression methods through robust estimates of the standard errors of the regression coefficients. Regression estimates are obtained from model-fitting estimation equations that ignore the correlation structure of the data (that is, computing procedures which assume that all observational units are independent or are from simple random samples). The analytic approach is straightforward to apply with logistic models for dichotomous data, proportional odds models for ordinal data, and linear models for continuously scaled data, and results are interpretable in terms of population average parameters. Through the features summarized here, the sample survey regression methods have many similarities to the broader family of methods based on generalized estimating equations (GEE).

Sample survey methods for the analysis of time-to-event data have more recently been developed and implemented in the context of finite probability sampling. Given the importance of survival endpoints in late phase studies for drug development, these methods have clear utility in the area of clinical trials data analysis. A brief overview of methods for sample survey data analysis is first provided, followed by motivation for applying these methods to clinical trials data. Examples drawn from three clinical studies are provided to illustrate survey methods for logistic regression, proportional odds regression, and proportional hazards regression. Potential problems with the proposed methods and ways of addressing them are discussed.
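The estimating-equations approach described here, a working-independence model fit with robust (sandwich) standard errors that respect intraclass correlation within primary sampling units, corresponds closely to GEE with an independence working correlation. Below is a minimal sketch on synthetic multi-visit data, using the statsmodels GEE implementation as a stand-in for dedicated survey software; patients, visits, and effect sizes are all invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)

    # Toy multi-visit trial: 100 patients x 4 visits, binary response, with a
    # patient-level effect inducing intraclass correlation across visits.
    n_pat, n_vis = 100, 4
    patient = np.repeat(np.arange(n_pat), n_vis)
    visit = np.tile(np.arange(n_vis), n_pat)
    treat = np.repeat(rng.integers(0, 2, n_pat), n_vis)
    u = np.repeat(rng.normal(0, 1, n_pat), n_vis)
    prob = 1 / (1 + np.exp(-(-0.5 + 0.8 * treat + 0.1 * visit + u)))
    y = rng.binomial(1, prob)

    X = sm.add_constant(pd.DataFrame({"treat": treat, "visit": visit}))

    # Working-independence estimating equations with robust (sandwich) SEs,
    # clustering on the patient as the primary sampling unit.
    model = sm.GEE(y, X, groups=patient,
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Independence())
    print(model.fit().summary())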

11.

Objectives

We investigated the impact of recruitment bias within the venue-based sampling (VBS) method, which is widely used to estimate disease prevalence and risk factors among groups, such as men who have sex with men (MSM), that congregate at social venues.

Methods

In a 2008 VBS study of 479 MSM in New York City, we calculated venue-specific approach rates (MSM approached/MSM counted) and response rates (MSM interviewed/MSM approached), and then compared crude estimates of HIV risk factors and seroprevalence with estimates weighted to address the lower selection probabilities of MSM who attend social venues infrequently or were recruited at high-volume venues.

Results

Our approach rates were lowest at dance clubs, gay pride events, and public sex strolls, where venue volumes were highest; response rates ranged from 39% at gay pride events to 95% at community-based organizations. Sixty-seven percent of respondents attended MSM-oriented social venues at least weekly, and 21% attended such events once a month or less often in the past year. In estimates adjusted for these variations, the prevalence of several past-year risk factors (e.g., unprotected anal intercourse with casual/exchange partners, ≥5 total partners, group sex encounters, at least weekly binge drinking, and hard-drug use) was significantly lower compared with crude estimates. Adjusted HIV prevalence was lower than unadjusted prevalence (15% vs. 18%), but not significantly.

Conclusions

Not adjusting VBS data for recruitment biases could overestimate HIV risk and prevalence when the selection probability is greater for higher-risk MSM. While further examination of recruitment-adjustment methods for VBS data is needed, presentation of both unadjusted and adjusted estimates is currently indicated.

Venue-based sampling (VBS), also called time-location or time-space sampling, is a study design that is widely used to provide estimates of risk factors and disease outcomes.1 Although it can be used to study any target population that congregates at known venues associated with the population,2 it has been primarily used for behavioral research of groups at risk for human immunodeficiency virus (HIV) or sexually transmitted diseases, such as men who have sex with men (MSM) and drug users.3 Because these populations are often “hidden” from probabilistic sampling (i.e., a population sampling frame cannot be constructed),4 using traditional probability designs may be inefficient or infeasible.5

Several variations of VBS exist, but all introduce elements of randomness in recruitment that improve upon convenience sampling. In the Young Men's Survey of MSM in seven U.S. cities, for example, a universe of MSM-oriented venues was created, venues were randomly selected, and presumed MSM entering a selected venue were non-preferentially approached to participate.6 Sampling efficiency is a chief strength of VBS, as selected recruitment venues contain a high density of the target population. But a corresponding weakness is that the group able to be sampled (e.g., MSM who visit MSM-oriented social venues) may be different from the larger target population (e.g., all sexually active MSM). VBS-based estimates are not generalizable to that larger population when the venue-attending subpopulation exhibits differential characteristics.7 Nonetheless, VBS data are often useful in designing outreach-based HIV prevention programs because the venue-attending subpopulation is inherently accessible.8

Increasing the validity of VBS-based estimates for that subpopulation, however, is a persistent goal. Statistical adjustment of VBS data may be used to correct unequal selection probabilities arising from at least two VBS recruitment biases. First, someone who attends venues frequently is more likely to be sampled than someone who attends venues infrequently. If outcome variables such as partner number or alcohol consumption are also related to attendance frequency, then unweighted data will overestimate population prevalence of these variables. Second, individual selection probability is inversely related to the volume of the target population at each recruitment venue. For example, MSM at low-volume bars have higher selection probabilities than MSM at high-volume gay pride events. Not accounting for these variations may bias estimates if outcome variables are associated with recruitment venue characteristics. Ideally, venue volume would be accounted for a priori in a study design such as probability-proportional-to-size (PPS) sampling, which adjusts second-stage (i.e., participant) selection probability by the size of a first-stage sampling unit.9 But PPS requires precise volume enumeration before recruitment, which is often infeasible for social venues. Post hoc statistical adjustment is an alternative approach. True selection probability will always be unknown in the VBS design because the population sampling frame is undefined, but adjustment for the two aforementioned biases may serve as an appropriate proxy in the absence of that gold standard.

While several studies have compared VBS estimates with those using another study design,10 few VBS-based studies have used statistical adjustment for weighted analyses. Adjustment methods were developed for the Young Men's Survey, but study analyses have only used unweighted data because weighting did not influence HIV prevalence estimates.11 Other VBS studies have presented data weighted to account for differences in venue volume but not attendance frequency.12 To our knowledge, no VBS studies have reported comparisons of unweighted and weighted estimates of the same data. In this study, we examined the impact of adjustment for the two previously mentioned recruitment biases and compared weighted and unweighted prevalence estimates of HIV risk factors and seroprevalence in a VBS-based sample of MSM.
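A minimal Python sketch of the post hoc weighting logic described above, with all numbers invented: each respondent's weight is the inverse of a selection-probability proxy that rises with venue attendance frequency and falls with the volume of the recruitment venue.

    import numpy as np

    # Invented respondents: attendance frequency, venue volume, and a binary
    # past-year risk factor concentrated among frequent attenders.
    attendance = np.array([8, 4, 30, 1, 12, 2, 20, 1])             # visits/month
    venue_volume = np.array([50, 50, 400, 20, 400, 20, 120, 120])  # men counted
    risk = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)

    # Selection-probability proxy rises with attendance and falls with venue
    # volume, so the weight is its inverse: volume / attendance.
    w = venue_volume / attendance

    print(f"crude prevalence:    {risk.mean():.3f}")
    print(f"weighted prevalence: {np.sum(w * risk) / np.sum(w):.3f}")
    # Down-weighting frequent attenders lowers the estimate, the direction
    # of change the study reports for several risk factors.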

12.
Objectives. We examined potential nonresponse bias in a large-scale, population-based, random-digit-dialed telephone survey in California and its association with the response rate.

Methods. We used California Health Interview Survey (CHIS) data and US Census data and linked the two data sets at the census tract level. We compared a broad range of neighborhood characteristics of respondents and nonrespondents to CHIS. We projected individual-level nonresponse bias using the neighborhood characteristics.

Results. We found little to no substantial difference in neighborhood characteristics between respondents and nonrespondents. The response propensity of the CHIS sample was similarly distributed across these characteristics. The projected nonresponse bias appeared very small.

Conclusions. The response rate in CHIS did not result in significant nonresponse bias and did not substantially affect the level of data representativeness; it is not valid to focus on response rates alone in determining the quality of survey data.

Declining survey response rates over the last decade have raised concerns regarding public health research that uses population-based survey data. Response rates are commonly considered the most important indicator of the representativeness of a survey sample and overall data quality, and low response rates are viewed as evidence that a sample suffers from nonresponse bias.1,2 Recent survey research literature, however, suggests that response rates are a poor measure of not only nonresponse bias but also data quality.3–7

The decline in survey response rates over the past several decades has led to a number of rigorous studies and innovative methods to explore the relationship between survey response rates and bias. A meta-analysis that examined response rates and nonresponse bias in 59 surveys found no clear association between nonresponse rates and nonresponse bias.8 Some surveys with response rates under 20% had a level of nonresponse bias similar to that of surveys with response rates over 70%. This is because nonresponse bias is either a function of both the response rate and the difference between respondents and nonrespondents in a variable of interest,9 or a function of the covariance between response propensity and a variable of interest.10 Therefore, response rates alone do not determine the nonresponse bias of survey estimates. Although it may be convenient to use the response rate as a single indicator of a survey's representativeness and data quality, nonresponse bias is a property of a particular variable, not of a survey.

Nonetheless, declining survey response rates increase the potential for nonresponse bias and have raised questions about the representativeness of inferences made from probability sample surveys. Inferences from surveys are based on randomization theory and assume a 100% response from the sample. Although the gap between theory-based assumptions and the reality of survey administration has always been a concern, the increasing deviation from the full response assumption heightens this concern.

Nonresponse is multidimensional, not a unitary outcome, and is roughly divided into 3 components: noncontact, refusal, and other nonresponse.9 Most instances of nonresponse fall into the first 2 components. A study by Curtin et al. found that refusal rates in a telephone survey remained constant between 1979 and 2003, although contact rates decreased dramatically.11 Another study, by Tuckel and O'Neill, found the same pattern.12

Arguably, different dynamics lead to noncontact and refusal.13,14 Noncontact (e.g., unanswered phone calls in random-digit-dialed surveys) is related to accessibility. Call screening devices, phone usage, and at-home patterns affect accessibility, and calling strategy (e.g., number of call attempts and timing of calls) directly influences contact rates.7,12 Refusal occurs only after contact is made. The decision to participate or not is an indicator of the respondent's amenability to the survey and is also influenced by other factors.

Noncontact and refusal may produce different types of potential biases, and these biases may offset one another.7,15 For example, measures of volunteerism may be biased through noncontact because those who spend much time volunteering may be hard to reach in random-digit-dialed surveys. On the other hand, those who refuse to participate in the same survey may have opinions and behaviors related to volunteerism that differ dramatically from those of persons who are never contacted. Because aggregating noncontact and refusal may obscure our understanding of nonresponse bias, understanding detailed response behaviors along with overall nonresponse bias is important.

The decline in response rates is more rapid for random-digit-dialed telephone surveys than for other survey types. The difficulties inherent in examining nonresponse bias arise from the absence of data on nonrespondents. Unlike face-to-face surveys, in which interviewers make direct observations of the sampled individual and have an opportunity to gather contextual information regardless of response status, such information is scarce in telephone surveys because interviewers do not visit the individual, and any interviewer–respondent interaction remains oral and over the telephone. Follow-up with nonrespondents in a telephone survey can be conducted to study nonresponse bias, but such efforts are resource intensive. Additionally, unless 100% participation is achieved, some level of nonresponse still remains.

Alternatively, nonresponse can be studied through the use of the geographic identifiers associated with sampled telephone numbers. Phone numbers from random-digit-dialed sampling frames can be readily associated with a limited number of geographic identifiers, such as zip codes. In addition, most phone numbers can be matched to a postal address and consequently to a census tract and county, which provides a unique opportunity to evaluate patterns of nonresponse as a function of neighborhood characteristics. A few recent nonresponse bias studies have used such contextual data.16–19

We examined potential nonresponse bias in the 2005 CHIS, a large random-digit-dialed telephone survey, by comparing a wide range of census tract–level neighborhood characteristics by response behavior, as well as by examining response rates across neighborhood characteristics. Although these characteristics are not specific to individual cases (households), neighborhood characteristics at the census tract level serve as useful proxy indicators of differences in the population. This is because census tracts are relatively permanent, small geographic divisions of 1500 to 8000 people that are designed to be homogeneous with respect to sociodemographic characteristics.20 Unlike previous studies that focused on statistical significance, we discuss substantive significance. We explored nonresponse bias in a large, population-based telephone health survey in California, linking CHIS data to US Census data at the tract level to compare respondents and nonrespondents across a broad range of neighborhood characteristics.
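The covariance formulation of nonresponse bias cited above (the bias of the respondent mean is approximately the covariance of response propensity and the variable, divided by the mean propensity) can be verified numerically. The propensities below are invented.

    import numpy as np

    rng = np.random.default_rng(6)

    # bias(respondent mean) ~= cov(response propensity, y) / mean propensity
    n = 200_000
    y = rng.binomial(1, 0.30, n).astype(float)  # characteristic of interest

    def respondent_bias(p):
        responded = rng.random(n) < p
        return y[responded].mean() - 0.30

    # 49% response rate, propensity unrelated to y: essentially no bias.
    print(f"uncorrelated propensity: {respondent_bias(np.full(n, 0.49)):+.4f}")

    # Same 49% overall response rate, but y = 1 cases respond less often.
    p = np.where(y == 1, 0.35, 0.55)
    print(f"correlated propensity:   {respondent_bias(p):+.4f}")
    print(f"cov(p, y) / mean(p):     {np.cov(p, y)[0, 1] / p.mean():+.4f}")
    # A low response rate alone does not create bias; the covariance does.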

13.
Objectives. We estimated sodium intake, which is associated with elevated blood pressure, a major risk factor for cardiovascular disease, and assessed its association with related variables among New York City adults.

Methods. In 2010 we conducted a cross-sectional, population-based survey of 1656 adults, the Heart Follow-Up Study, that collected self-reported health information, measured blood pressure, and obtained sodium, potassium, and creatinine values from 24-hour urine collections.

Results. Mean daily sodium intake was 3239 milligrams per day; 81% of participants exceeded their recommended limit. Sodium intake was higher in non-Hispanic Blacks (3477 mg/d) and Hispanics (3395 mg/d) than in non-Hispanic Whites (3066 mg/d; both P < .05). Higher sodium intake was associated with higher blood pressure in adjusted models, and this association varied by race/ethnicity.

Conclusions. Higher sodium intake among non-Hispanic Blacks and Hispanics than among Whites was not previously documented in population surveys relying on self-report. These results demonstrate the feasibility of 24-hour urine collection for the purposes of research, surveillance, and program evaluation.

Cardiovascular disease (CVD) is the leading cause of death in the United States,1 and hypertension is a leading risk factor. A positive and continuous relationship between sodium intake and blood pressure (BP) is well established.2 Existing estimates of sodium intake measured by self-report show that US adults consume a daily average of 3400 milligrams, well above the recommended limit (1500–2300 mg/d),2 and public health efforts are aimed at reducing sodium consumption.3–5 In a simulation analysis of risk factor and outcome data from key CVD data sources, researchers estimated that up to 92 000 deaths could be averted annually by lowering the current mean adult intake by 1200 milligrams of sodium, resulting in intake closer to the recommended limit.6 Although reduced sodium intake decreases BP on average in all racial/ethnic groups and in individuals with normal and high BP, the BP-lowering effect of sodium reduction is greater in Blacks than in other racial/ethnic groups.7,8 National estimates of sodium intake derived from self-report do not demonstrate higher intake among Blacks.

The gold standard method for assessing sodium intake is measurement of sodium excretion in rigorously collected 24-hour urine samples, although this method has some limitations, such as undercollection.9 This method has been used to assess population intake in the United Kingdom, Finland, Portugal, and Barbados.10–14 In the United States, population intake has been assessed since 1971 through 24-hour dietary recall. Although adequate for understanding general trends and intake, estimates that rely on self-report are subject to reporting error and bias.9 Objective measures would avoid these problems; however, to date no representative assessment of sodium intake derived from 24-hour urine collections has been performed in the United States.

In the absence of nationally representative US surveys employing the gold standard method, we measured sodium excretion in urine over 24 hours in a representative sample of adults in New York City. Our objectives were to estimate mean population sodium intake, overall and by subgroup, particularly in different racial/ethnic groups; to understand sodium intake in relation to recommended limits; and to assess the relationship between sodium intake and other variables.

14.
Objectives. We examined whether 3 nationally representative data sources produce consistent estimates of disparities and rates of uninsurance among the American Indian/Alaska Native (AIAN) population and demonstrate how the choice of data source affects study conclusions.

Methods. We estimated all-year and point-in-time uninsurance rates for AIANs and non-Hispanic Whites younger than 65 years using 3 surveys: the Current Population Survey (CPS), National Health Interview Survey (NHIS), and Medical Expenditure Panel Survey (MEPS).

Results. Sociodemographic differences across surveys suggest that national samples produce differing estimates of the AIAN population. AIAN all-year uninsurance rates varied across surveys (3%–23% for children and 18%–35% for adults). Measures of disparity also differed by survey. For all-year uninsurance, the unadjusted rate for AIAN children was 2.9 times higher than the rate for White children with the CPS, but there were no significant disparities with the NHIS or MEPS. Compared with White adults, AIAN adults had unadjusted rate ratios of 2.5 with the CPS and 2.2 with the NHIS or MEPS.

Conclusions. Different data sources produce substantially different estimates for the same population. Consequently, conclusions about health care disparities may be influenced by the data source used.

Access to quality health care is a priority for the nation. Access to such care is designated in Healthy People 2010 as one of the 10 Leading Health Indicators, marking it as a priority area for improving the health of the nation1 and reducing health disparities.2 American Indians/Alaska Natives (AIANs) are one group that continues to have substantial health disparities compared with other racial groups.3–8 However, disparities in health care coverage and access for AIANs have received only intermittent attention,9–13 leaving a marked gap in our understanding. Previously documented issues for research on AIAN health care disparities include gaps in data availability for AIANs14,15 as well as problems with national-level estimates masking differences across geographic areas.13,16 However, it is also possible that there are differences in the magnitude of estimates or the conclusions drawn, depending on which data source is used to examine health care disparities.

Because no single data source contains all possible measures of health and health care, different data sources are often used to answer complementary but different questions. In the case of national surveillance and annual snapshot reports, information from numerous data sources is used to present a more complete picture of health for the US population. Healthy People 2010 uses National Health Interview Survey (NHIS) data to monitor insurance coverage and access to a usual source of care and uses National Vital Statistics System data to monitor access to prenatal care.1 In the chapter on access to care, the National Healthcare Disparities Report also uses NHIS data to examine uninsurance and access to a usual source of care but uses the Medical Expenditure Panel Survey (MEPS) to examine all-year uninsurance and access to a primary care provider.17 A few recent studies that examined health care access for AIANs used other data sources, such as the National Survey of America's Families12 or the Behavioral Risk Factor Surveillance Survey.13

We use 3 general population surveys commonly used for health care coverage and access research to examine the implications of using different data sources for estimating health care disparities specific to AIANs. We use uninsurance disparities as an example but acknowledge at the outset that different data sources measure insurance coverage in different ways. Our purpose is not to critically review measures of uninsurance or to critique the surveys that collect these data. Rather, we aim to demonstrate that the choice of data source matters for disparities research, often for a variety of reasons. Our intent is 2-fold: (1) to examine whether 3 nationally representative data sources produce trustworthy and consistent estimates of the AIAN population in the United States and (2) to highlight the impact that the choice of data source can have on conclusions about uninsurance disparities.

15.
Objectives. We sought to improve public health surveillance by using a geographic analysis of emergency department (ED) visits to determine local chronic disease prevalence.

Methods. Using an all-payer administrative database, we determined the proportion of unique ED patients with diabetes, hypertension, or asthma. We compared these rates to those determined by the New York City Community Health Survey. For diabetes prevalence, we also analyzed the fidelity of longitudinal estimates using logistic regression and determined disease burden within census tracts using geocoded addresses.

Results. We identified 4.4 million unique New York City adults visiting an ED between 2009 and 2012. When we compared our emergency sample to survey data, rates of neighborhood diabetes, hypertension, and asthma prevalence were similar (correlation coefficient = 0.86, 0.88, and 0.77, respectively). In addition, our method demonstrated less year-to-year scatter and identified significant variation of disease burden within neighborhoods among census tracts.

Conclusions. Our method for determining chronic disease prevalence correlates with a validated health survey and may have higher reliability over time and greater granularity at a local level. Our findings can improve public health surveillance by identifying local variation of disease prevalence.

In its 2012 report on measures for population health, the Institute of Medicine prioritized understanding local population health to improve health care for populations with the highest need.1 Generally, health care providers have used the term “population health” when referring to patients linked to a specific health care provider or insurance group.2 However, the discipline of public health more broadly defines population health as the health of all individuals living in specific geographic regions.3

To estimate disease burden, traditional methods include performing population-based telephone health surveys.4 Unless large numbers of individuals are surveyed, it is difficult to determine prevalence in small geographic areas such as census tracts, and yearly estimates have significant noise because of small sample sizes.5 Low response rates can lead to errors in estimating disease prevalence, and larger surveys can be costly and difficult to perform.6

With increasing use of big data in the form of large administrative data sets with clinical data,7 there is an opportunity to create more precise measures of population health by reducing the variance associated with small sample sizes.8–10 These methods may be biased, as they only track individuals who register a medical claim, which makes for a type of convenience sample. Nevertheless, a significant proportion of all individuals, regardless of insurance type, interact with the health care system, especially through emergency services. Nearly 1 in 5 individuals report having gone to an emergency department (ED) in the past year.11 Previous studies have demonstrated the promise of using emergency claims data for tracking acute illnesses; however, there is potential to extend these methods to the surveillance of chronic disease.12,13 One of the advantages of using administrative claims data is the achievement of large sample sizes without the need to conduct large surveys.14,15

In this study, we have introduced a novel geographic method of public health surveillance and determined whether we could use ED administrative claims to estimate chronic disease prevalence at a local level over time. As the ED is generally a place where all individuals can access care regardless of socioeconomic or insurance status, it offers an ideal environment for public health surveillance among all types of individuals within a heterogeneous population.16

16.

OBJECTIVE

To compare the efficiency and accuracy of health survey sampling designs that do and do not subsample individuals within sampled households.

METHODS

From a population survey conducted in the Baixada Santista Metropolitan Area, SP, Southeastern Brazil, between 2006 and 2007, 1,000 samples were drawn for each design, and estimates for people aged 18 to 59 years and 18 years and over were calculated for each sample. In the first design, 40 census tracts, 12 households per tract, and one person per household were sampled. In the second design, no sampling within households was performed: 40 census tracts were sampled, with 6 households per tract for the 18-to-59 age group and 5 or 6 households per tract for the 18-and-over group. Precision and bias of proportion estimates for 11 indicators were assessed in the two final sets of 1,000 selected samples under the two designs. They were compared by means of relative measures: the coefficient of variation, bias/mean ratio, bias/standard error ratio, and relative mean square error. Cost comparison contrasted the basic cost per person, household cost, and the numbers of people and households.

RESULTS

Bias was negligible for both designs. The design that included sampling of individuals within households showed lower precision and higher costs.

CONCLUSIONS

The design without within-household individual sampling achieved higher efficiency and accuracy and, accordingly, should be the investigator's first choice. Sampling of individual household dwellers should be adopted only when there are reasons, related to the study subject, to expect bias in individual responses if multiple dwellers answer the proposed questionnaire.
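The relative comparison metrics listed in the methods can be computed with a few lines of Python; the estimates below are simulated stand-ins for the study's 1,000 design draws, with an invented true proportion.

    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated stand-in for 1,000 samples drawn under one design.
    true_p = 0.25
    estimates = rng.normal(loc=0.252, scale=0.02, size=1000)

    bias = estimates.mean() - true_p
    se = estimates.std(ddof=1)
    metrics = {
        "coefficient of variation": se / estimates.mean(),
        "bias/mean ratio": bias / estimates.mean(),
        "bias/standard error ratio": bias / se,
        "relative mean square error": (bias ** 2 + se ** 2) / true_p ** 2,
    }
    for name, value in metrics.items():
        print(f"{name}: {value:.4f}")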

17.
Objectives. We examined the implications of the current recommended data collection practice of placing self-rated health (SRH) before specific health-related questions (hence, without a health context) to remove potential context effects, between Hispanics and non-Hispanics.Methods. We used 2 methodologically comparable surveys conducted in English and Spanish that asked SRH in different contexts: before and after specific health questions. Focusing on the elderly, we compared the influence of question contexts on SRH between Hispanics and non-Hispanics and between Spanish and English speakers.Results. The question context influenced SRH reports of Spanish speakers (and Hispanics) significantly but not of English speakers (and non-Hispanics). Specifically, on SRH within a health context, Hispanics reported more positive health, decreasing the gap with non-Hispanic Whites by two thirds, and the measurement utility of SRH was improved through more consistent mortality prediction across ethnic and linguistic groups.Conclusions. Contrary to the current recommendation, asking SRH within a health context enhanced measurement utility. Studies using SRH may result in erroneous conclusions when one does not consider its question context.Hispanics in the United States have emerged as an important group for public health research because of their noteworthy population growth. The past decade saw a rapid increase of the Hispanic population from 35.3 million to 50.5 million, corresponding to 12.5% and 16.3% of the total population.1 In states such as California, New Mexico, and Texas, Hispanics make up more than 35% of the population. Although not the majority in the general population, Hispanics contributed more than half of the US population growth.One distinctive characteristic of Hispanics is their language use. Four out of 10 Hispanics are reported to speak English less than very well, hence being classified as “linguistically isolated.”2 The linguistic isolation rate is estimated to be more than 90% for some Hispanic subgroups, such as older low-income Cuban women in Miami.3 Although not a health risk factor itself, low English proficiency (LEP) is related to many health outcomes through the socioeconomic gradient, such as education, poverty, and access to health care. Because the failure to capture LEP persons produces data misrepresenting the population,4–6 it has become standard practice for government and academic surveys in the United States to conduct interviews in both English and Spanish. The National Health Interview Survey (NHIS), for example, has conducted Spanish interviews consistently since 1997 and with standardized translated questionnaires since 2004.Hispanics’ health has been compared with that of other racial/ethnic groups,7–11 creating a famous term, “Hispanic paradox.” Even though correlates of health, such as income and education, are estimated to be lower for Hispanics than for non-Hispanic Whites, Hispanics show better health outcomes than non-Hispanic Whites or comparable health outcomes to non-Hispanic Whites.12–18 One exception to the paradox is the measure self-rated health (SRH), which consistently shows less favorable outcomes for Hispanics compared with non-Hispanic Whites.7,19,20 Self-rated health is a simple survey item asking respondents for their subjective assessment about their own health by using some variations of a 4- or 5-point Likert response scale. 
A scale ranging from “excellent,” “very good,” “good,” and “fair” to “poor” is popular in the United States, and another scale, ranging from “very good” to “very poor” and supported by the World Health Organization, is popular elsewhere.21 The popularity of SRH not only in health research22–26 but also in other social sciences27–31 led the US National Center for Health Statistics to organize a conference dedicated to this particular item, the Conference on the Cognitive Aspects of the Self-Rated Health Status, in 1993.32 Because of its proven utility as a strong and independent predictor of subsequent mortality,33–39 various health conditions,40–43 and health care utilization,44–46 the World Health Organization,47 the US Centers for Disease Control and Prevention,48 and the European Commission49 have recommended SRH as a reliable measure for monitoring population health.Self-rated health is also used as a practical tool for comparing population groups defined by country,50–52 gender,53 race,41 socioeconomic status,54,55 educational attainment,56 poverty status,57 and immigration status,58 often leading to discussions about health disparities. Although it is critical to use items with comparable measurement properties across comparison groups, the measurement utility of SRH has been examined mostly with English speakers or northern Europeans.44,59–61 Beyond these groups, the performance of SRH has been found to be inconsistent, making comparability questionable.9,21,25,29,62–68 The literature on the utility of SRH for US Hispanics is spotty, does not provide clear conclusions, and appears to have overlooked methodological limitations in the data.8,10,64,69 Some studies have used data that did not include LEP Hispanics,64 and some used SRH asked in different contexts.8,10,69 The former is no longer a serious issue because current survey practice accommodates LEP Hispanics. The question context, however, remains a concern; it has been suggested as a future research topic for SRH,70,71 including in the seminal work by Idler and Benyamini.36 Particularly for Hispanics, a recent study by Lee and Grant72 suggests troublesome implications. They conducted an experiment in which the order of SRH in a questionnaire was randomized: SRH was asked either as the first health-related item after a few demographic questions (i.e., without a health context) or after a series of questions on chronic health conditions (i.e., within a health context). Whereas English-speaking respondents’ SRH reports remained consistent regardless of the question context, Spanish-speaking respondents’ reports were unstable across contexts. Specifically, Spanish-speaking respondents reported significantly and substantively better health when SRH was asked within a health context than without one. Reflecting LEP among Hispanics, Hispanics’ SRH ratings were likewise affected by the question context. The question context effect is of concern in its own right; when the context interacts with interview language or respondents’ cultural background, as in this example, it becomes even more important because it introduces systematic incomparability.We further examined the effect of SRH question contexts in the US elderly population. Using data from surveys conducted in both English and Spanish, we examined how the question context affects (1) the estimates of SRH for each linguistic group, (2) comparisons of health between the 2 linguistic groups, and (3) the predictive power of SRH for subsequent mortality. 
Because Spanish-language use is tightly linked to ethnicity, we also included both Hispanics and non-Hispanics in the study.  相似文献
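The randomized question-order comparison described above is straightforward to reproduce. Below is a minimal Python sketch of that style of analysis, assuming a hypothetical respondent-level dataset with made-up column names (context, language, srh); it illustrates the general approach, not the authors' actual analysis code.

```python
# Hypothetical columns:
#   context  - "first" (SRH asked before health items) or "after" (asked after)
#   language - interview language: "english" or "spanish"
#   srh      - self-rated health, 1 = poor ... 5 = excellent
import pandas as pd
from scipy import stats

def context_effect(df: pd.DataFrame, language: str) -> None:
    """Test whether question context shifts SRH for one linguistic group."""
    sub = df[df["language"] == language]
    first = sub.loc[sub["context"] == "first", "srh"]
    after = sub.loc[sub["context"] == "after", "srh"]
    # Welch two-sample t-test on the mean SRH rating across contexts.
    t, p = stats.ttest_ind(first, after, equal_var=False)
    print(f"{language}: {first.mean():.2f} (no health context) vs "
          f"{after.mean():.2f} (health context), t = {t:.2f}, p = {p:.3f}")

# The abstract reports a significant context effect for Spanish speakers only:
# df = pd.read_csv("srh_experiment.csv")  # hypothetical file
# for lang in ("english", "spanish"):
#     context_effect(df, lang)
```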

18.
In complex probability sample surveys, numerous adjustments are customarily made to the survey weights to reduce potential bias in survey estimates. These adjustments include sampling design (SD) weight adjustments, which account for features of the sampling plan, and non-sampling design (NSD) weight adjustments, which account for non-sampling errors and other effects. Variance estimates prepared from complex survey data customarily account for SD weight adjustments, but rarely account for all NSD weight adjustments. As a result, variance estimates may be biased and standard confidence intervals may not achieve their nominal coverage levels. We describe the implementation of the bootstrap method to account for the SD and NSD weight adjustments for complex survey data. Using data from the National Immunization Survey (NIS), we illustrate the use of the bootstrap (i) for evaluating the use of standard confidence intervals that use Taylor series approximations to variance estimators that do not account for NSD weight adjustments, (ii) for obtaining confidence intervals for ranks estimated from weighted survey data, and (iii) for evaluating the predictive power of logistic regressions using receiver operating characteristic curve analyses that account for the SD and NSD adjustments made to the survey weights.  相似文献
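As a concrete illustration of the approach this abstract describes, here is a minimal Python sketch of a rescaled (Rao–Wu-type) survey bootstrap in which the NSD weight adjustments are re-applied to every replicate rather than only to the parent sample. The column names (stratum, psu, base_weight) and the adjust_weights() callback are assumptions for illustration, not the NIS's actual variables or adjustment chain.

```python
# A sketch, not the NIS implementation: stratified PSU resampling with
# Rao-Wu rescaling, re-running the weight adjustments on each replicate.
import numpy as np
import pandas as pd

def bootstrap_ci(df, adjust_weights, stat, n_reps=1000, alpha=0.05, seed=1):
    """Percentile bootstrap CI that repeats SD and NSD weighting per replicate."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_reps):
        parts = []
        for _, stratum in df.groupby("stratum"):
            psus = stratum["psu"].unique()
            n_h, m_h = len(psus), max(len(psus) - 1, 1)
            # Resample m_h = n_h - 1 PSUs with replacement within the stratum.
            for p in rng.choice(psus, size=m_h, replace=True):
                part = stratum[stratum["psu"] == p].copy()
                part["base_weight"] *= n_h / m_h   # Rao-Wu rescaling
                parts.append(part)
        rep = pd.concat(parts, ignore_index=True)
        rep = adjust_weights(rep)   # redo nonresponse/poststratification, etc.
        estimates.append(stat(rep))
    return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical usage, where adjust_weights() returns a "final_weight" column:
# lo, hi = bootstrap_ci(df, my_adjustments,
#                       stat=lambda d: np.average(d["vaccinated"],
#                                                 weights=d["final_weight"]))
```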

19.
Background. Population-based health surveys are typically conducted using face-to-face household interviews in low- and middle-income countries (LMICs). However, telephone-based surveys are cheaper, faster, and can provide greater access to hard-to-reach or remote populations. The rapid growth in mobile phone ownership in LMICs provides a unique opportunity to implement novel data collection methods for population health surveys.Objective. This study aims to describe the development and population representativeness of a mobile phone survey measuring live poultry exposure in urban Bangladesh.Methods. A population-based, cross-sectional, mobile phone survey was conducted between September and November 2019 in North and South Dhaka City Corporations (DCC), Bangladesh, to measure live poultry exposure using a stratified probability sampling design. Data were collected using a computer-assisted telephone interview platform. The call operational data were summarized, and the participant data were weighted by age, sex, and education to the 2011 census. The demographic distribution of the weighted sample was compared with external sources to assess population representativeness.Results. A total of 5486 unique mobile phone numbers were dialed, with 1047 respondents completing the survey. The survey had an overall response rate of 52.2% (1047/2006) and a cooperation rate of 89.0% (1047/1176). Initial results comparing the sociodemographic profile of the survey sample with the census population showed that mobile phone sampling slightly underrepresented older individuals and overrepresented those with higher secondary education. After weighting, the demographic profile of the sample matched the latest DCC census population profile well.Conclusions. Probability-based mobile phone survey sampling and data collection methods produced a population-representative sample with minimal adjustment in DCC, Bangladesh. Mobile phone–based surveys can offer an efficient, economical, and robust way to conduct surveillance for population health outcomes, which has important implications for improving population health surveillance in LMICs.  相似文献
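The weighting step described here (matching the sample to census margins for age, sex, and education) is typically done by raking. The Python sketch below shows one plausible implementation under stated assumptions; the column names and the Dhaka margins in the usage comment are invented for illustration.

```python
# Raking (iterative proportional fitting) against census margins.
import pandas as pd

def rake(df, margins, weight_col="weight", n_iter=50, tol=1e-8):
    """Rescale weights until each variable's weighted shares match its margin.

    margins: dict mapping column name -> {category: population share}.
    Assumes every respondent's category appears in the corresponding margin.
    """
    w = df[weight_col].astype(float).copy()
    for _ in range(n_iter):
        max_change = 0.0
        for var, target in margins.items():
            shares = w.groupby(df[var]).sum() / w.sum()  # current weighted shares
            factor = df[var].map({k: target[k] / shares[k] for k in target})
            w = w * factor
            max_change = max(max_change, (factor - 1).abs().max())
        if max_change < tol:   # all margins matched to tolerance
            break
    return w

# Hypothetical margins for DCC adults (values invented for illustration):
# margins = {"sex": {"male": 0.52, "female": 0.48},
#            "age_group": {"18-29": 0.35, "30-49": 0.42, "50+": 0.23},
#            "education": {"<secondary": 0.55, "secondary": 0.28, "higher": 0.17}}
# df["weight"] = rake(df, margins)
```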

20.
Objectives. We used recent data to reexamine whether the exclusion of adults from households with no telephone or only wireless phones may bias estimates derived from health-related telephone surveys.Methods. We calculated the difference between estimates for the full population of adults and estimates for adults with landline phones; data were from the 2007 National Health Interview Survey.Results. When data from landline telephone surveys were weighted to match demographic characteristics of the full population, bias was generally less than 2 percentage points (range = 0.1–2.4). However, among young adults and low-income adults, we found greater bias (range = 1.7–5.9) for estimates of health insurance, smoking, binge drinking, influenza vaccination, and having a usual place for care.Conclusions. From 2004 to 2007, the potential for noncoverage bias increased. Bias can be reduced through weighting adjustments. Therefore, telephone surveys limited to landline households may still be appropriate for health surveys of all adults and for surveys of subpopulations regarding health status. However, for some behavioral risk factors and health care service use indicators, caution is warranted when using landline surveys to draw inferences about young or low-income adults.In 2006, in this journal, we examined nationally representative survey data from 2004 and early 2005 to determine whether the exclusion of adults without landline telephones biased population-based estimates derived from health-related random-digit-dial telephone surveys.1 Noncoverage bias is determined both by the magnitude of the difference between persons with and without landline telephones for the variable of interest and by the percentage of persons without landline telephones in the population of interest.2 In 2004 and early 2005, only 7.2% of adults did not have landline telephones, and we concluded that “noncoverage is not presently a reason to reject the continued use of general population telephone surveys to help guide public health policy and program decisions.”1(p931)In less than 3 years, the percentage of adults without landline telephones more than doubled. In 2007, 13.5% of adults lived in households with only wireless telephones, and an additional 1.7% of adults lived in households without any telephone service.3 Among certain subgroups, the percentage without landlines was even greater, reaching 30.6% for adults younger than 30 years and 21.6% for adults living in low-income households (defined as < 200% of the federal poverty level).Our previously published conclusion, that noncoverage bias is not a concern,1,4 needed to be revisited. We therefore used more recent data to reexamine whether the exclusion of adults from households with no telephone or only wireless phones may bias estimates derived from health-related telephone surveys.  相似文献   
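The bias measure used in this abstract (the difference between the landline-only estimate and the full-population estimate) can be computed directly. A minimal sketch follows, assuming hypothetical NHIS-style columns; adj_weight stands in for landline weights already adjusted to full-population demographics.

```python
# Noncoverage bias in percentage points: landline subsample vs full sample.
import numpy as np
import pandas as pd

def noncoverage_bias(df: pd.DataFrame, outcome: str) -> float:
    """Weighted landline-only estimate minus the full-sample benchmark."""
    full = np.average(df[outcome], weights=df["weight"])           # all adults
    landline = df[df["has_landline"] == 1]
    sub = np.average(landline[outcome], weights=landline["adj_weight"])
    return 100 * (sub - full)

# Hypothetical binary indicators, echoing the outcomes named above:
# for y in ("uninsured", "smoker", "binge_drinker", "flu_shot", "usual_care"):
#     print(y, round(noncoverage_bias(df, y), 1), "pct. points")
```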
