Similar Literature
20 similar records retrieved.
1.
Real-world data refer to data generated in routine clinical care, daily life, and work settings. Real-world data have long been widely used in clinical and public health research, but data quality problems such as incompleteness, inconsistency, and inaccuracy can undermine the validity of real-world studies. To address the challenges posed by the lack of standardization of real-world source data, this article develops disease-specific case report forms (CDISC-CRFs) based on the widely adopted data standards of the Clinical Data Interchange Standards Consortium (CDISC), with the aim of improving the standardization of real-world source data and supporting the construction of a real-world data ecosystem in China. We describe how data standards can be applied to bridge the gap between real-world data and real-world evidence; design a workflow for building a real-world data ecosystem based on disease-specific CDISC-CRFs, focusing on the techniques for developing CDISC-CRF forms; and discuss the application prospects and significance of building real-world data on the basis of disease-specific CDISC-CRFs.
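The abstract above describes standardizing real-world source data with disease-specific, CDISC-aligned case report forms. As a rough illustration only (not the authors' implementation), the sketch below shows one way a CRF item could be represented in code together with a minimal completeness and codelist check; VSPOS and VSORRES are standard SDTM vital-signs variable names, while every other identifier is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CRFItem:
    """One question on a disease-specific case report form, with CDISC-style metadata."""
    oid: str                 # unique item identifier, e.g. "IT.SYSBP" (invented)
    question: str            # text shown to the data collector
    sdtm_variable: str       # target SDTM variable the item maps to
    datatype: str            # "integer", "float", "text", "date", ...
    codelist: list[str] = field(default_factory=list)  # permitted values, if controlled
    required: bool = True

@dataclass
class CRFForm:
    name: str
    items: list[CRFItem]

    def validate(self, record: dict) -> list[str]:
        """Return a list of quality problems found in one collected record."""
        problems = []
        for item in self.items:
            value = record.get(item.oid)
            if value is None:
                if item.required:
                    problems.append(f"{item.oid}: missing required value")
                continue
            if item.codelist and value not in item.codelist:
                problems.append(f"{item.oid}: '{value}' not in codelist")
        return problems

# Hypothetical vital-signs form for a disease registry
form = CRFForm(
    name="VS",
    items=[
        CRFItem("IT.VSPOS", "Position of subject", "VSPOS", "text",
                codelist=["SITTING", "STANDING", "SUPINE"]),
        CRFItem("IT.SYSBP", "Systolic blood pressure (mmHg)", "VSORRES", "integer"),
    ],
)
print(form.validate({"IT.VSPOS": "SITTING"}))  # -> ['IT.SYSBP: missing required value']
```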

2.
Background
Patient-reported outcomes (PROs) are the consequences of disease and/or its treatment as reported by the patient. The importance of PRO measures in clinical trials for new drugs, biological agents, and devices was underscored by the release of the US Food and Drug Administration's draft guidance for industry titled “Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims.” The intent of the guidance was to describe how the FDA will evaluate the appropriateness and adequacy of PRO measures used as effectiveness end points in clinical trials. In response to the expressed need of ISPOR members for further clarification of several aspects of the draft guidance, ISPOR's Health Science Policy Council created three task forces, one of which was charged with addressing the implications of the draft guidance for the collection of PRO data using electronic data capture modes of administration (ePRO). The objective of this report is to present recommendations from ISPOR's ePRO Good Research Practices Task Force regarding the evidence necessary to support the comparability, or measurement equivalence, of ePROs to the paper-based PRO measures from which they were adapted.

Methods
The task force was composed of the leadership team of ISPOR's ePRO Working Group and members of another group (i.e., ePRO Consensus Development Working Group) that had already begun to develop recommendations regarding ePRO good research practices. The resulting task force membership reflected a broad array of backgrounds, perspectives, and expertise that enriched the development of this report. The prior work became the starting point for the Task Force report. A subset of the task force members became the writing team that prepared subsequent iterations of the report that were distributed to the full task force for review and feedback. In addition, review beyond the task force was sought and obtained. Along with a presentation and discussion period at an ISPOR meeting, a draft version of the full report was distributed to roughly 220 members of a reviewer group. The reviewer group comprised individuals who had responded to an emailed invitation to the full membership of ISPOR. This Task Force report reflects the extensive internal and external input received during the 16-month good research practices development process.

Results/Recommendations
An ePRO questionnaire that has been adapted from a paper-based questionnaire ought to produce data that are equivalent or superior (e.g., higher reliability) to the data produced from the original paper version. Measurement equivalence is a function of the comparability of the psychometric properties of the data obtained via the original and adapted administration mode. This comparability is driven by the amount of modification to the content and format of the original paper PRO questionnaire required during the migration process. The magnitude of a particular modification is defined with reference to its potential effect on the content, meaning, or interpretation of the measure's items and/or scales. Based on the magnitude of the modification, evidence for measurement equivalence can be generated through combinations of the following: cognitive debriefing/testing, usability testing, equivalence testing, or, if substantial modifications have been made, full psychometric testing. 
As long as only minor modifications were made to the measure during the migration process, a substantial body of existing evidence suggests that the psychometric properties of the original measure will still hold for the ePRO version. Hence, an evaluation limited to cognitive debriefing and usability testing may be sufficient. However, where more substantive changes have occurred during migration, it is necessary to confirm that the adaptation to the ePRO format did not introduce significant response bias and that the two modes of administration produce essentially equivalent results. Recommendations regarding the study designs and statistical approaches for assessing measurement equivalence are provided.

Conclusions
The electronic administration of PRO measures offers many advantages over paper administration. We provide a general framework for decisions regarding the level of evidence needed to support modifications that are made to PRO measures when they are migrated from paper to ePRO devices. The key issues include: 1) the determination of the extent of modification required to administer the PRO on the ePRO device and 2) the selection and implementation of an effective strategy for testing the measurement equivalence of the two modes of administration. We hope that these good research practice recommendations provide a path forward for researchers interested in migrating PRO measures to electronic data collection platforms.
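The task force report above calls for equivalence testing between paper and ePRO administrations without prescribing a single statistic. One widely used quantity for paired scores is the intraclass correlation coefficient; the sketch below computes a two-way, absolute-agreement, single-measure ICC (ICC(A,1) in McGraw and Wong's notation) on simulated paper and ePRO scores. It is a minimal illustration under assumed data, not the analysis specified by the task force.

```python
import numpy as np

def icc_a1(scores: np.ndarray) -> float:
    """Two-way, absolute-agreement, single-measure ICC (McGraw & Wong ICC(A,1)).

    scores: n_subjects x k_modes matrix of paired scores
            (e.g. column 0 = paper, column 1 = ePRO).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-mode means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Simulated paired administrations of the same PRO scale (assumed data)
rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, size=200)
paper = true_score + rng.normal(0, 3, size=200)
epro = true_score + rng.normal(0, 3, size=200)
print(f"ICC(A,1) = {icc_a1(np.column_stack([paper, epro])):.2f}")
```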

3.
《Value in health》2022,25(7):1090-1098
Objectives
Although best practices from electronic patient-reported outcome (PRO) measures are transferable, the migration of clinician-reported outcome (ClinRO) assessments to electronic modes requires recommendations that address their unique properties, such as the user (eg, clinician), and complexity associated with programming of clinical content. Faithful migration remains essential to ensuring that the content and psychometric properties of the original scale (ie, validated reference) are preserved, such that clinicians completing the ClinRO assessments interpret and respond to the items the same way regardless of data collection mode. The authors present a framework for how to “faithfully” migrate electronic ClinRO assessments for successful deployment in clinical trials.

Methods
Critical Path Institute’s Electronic PRO Consortium and PRO Consortium convened a consensus panel of representatives from member firms to develop recommendations for electronic migration and implementation of ClinRO assessments in clinical trials based on industry standards, regulatory guidelines where available, and relevant literature. The recommendations were reviewed and approved by all member firms from both consortia.

Consensus Recommendations
Standard, minimal electronic modifications for ClinRO assessments are described. This article also outlines implementation steps, including planning, startup, electronic clinical outcome assessment system development, training, and deployment. The consensus panel proposes that functional clinical testing by a clinician or clinical outcome assessment expert, as well as copyright holder review of screenshots (if possible) are sufficient to support minimal modifications during migration. Additional evidence generation is proposed for modifications that deviate significantly from the validated reference.

4.
《Value in health》2013,16(4):480-489
The outcomes research literature has many examples of high-quality, reliable patient-reported outcome (PRO) data entered directly by electronic means (ePRO), compared with data transcribed from original results on paper. Clinical trial managers are increasingly using ePRO data collection for PRO-based end points. Regulatory review dictates the rules to follow with ePRO data collection for medical label claims. A critical component of regulatory compliance is evidence of the validation of these electronic data collection systems. Validation of electronic systems is a process rather than a focused activity that finishes at a single point in time. Eight steps need to be described and undertaken to qualify the validation of the data collection software in its target environment: requirements definition, design, coding, testing, tracing, user acceptance testing, installation and configuration, and decommissioning. These elements are consistent with recent regulatory guidance for systems validation. This report was written to explain how the validation process works for sponsors, trial teams, and other users of electronic data collection devices responsible for verifying the quality of the data entered into relational databases from such devices. It is a guide to the requirements and documentation needed from a data collection systems provider to demonstrate systems validation. It is a practical source of information for study teams seeking to confirm that ePRO providers use system validation and implementation processes that ensure the systems and services operate reliably in practical use, produce accurate and complete data and data files, support management control, and comply with any existing regulations. Furthermore, this short report will increase user understanding of the requirements for a technology review, leading to more informed and balanced recommendations or decisions on electronic data collection methods.

5.
Purpose
Interest in collecting patient-reported outcomes (PROs), such as health-related quality of life (HRQOL), health status reports, and patient satisfaction, is on the rise, and practical aspects of collecting PROs in clinical practice are becoming more important. The purpose of this paper is to draw attention to a number of issues relevant to a successful integration of PRO measures into the daily work flow of busy clinical settings.

Methods
The paper summarizes the results from a breakout session held at an ISOQOL special topic conference on PRO measures in clinical practice in 2007.

Results
Different methodologies of collecting PROs are discussed, and the support needed for each methodology is highlighted. The discussion is illustrated by practical real-life examples from early adopters who administered paper-pencil or electronic PRO assessments (ePRO) for more than a decade. The paper also reports on new experiences with more recent technological developments, such as SmartPens and Computer Adaptive Tests (CATs), in daily practice.

Conclusions
Methodological and logistical issues determine the resources needed for a successful integration of PRO measures into daily work flow procedures and significantly influence the usefulness of PRO data for clinical practice.

6.
《Value in health》2013,16(4):461-479
Background
Patient-reported outcome (PRO) instruments for children and adolescents are often included in clinical trials with the intention of collecting data to support claims in a medical product label.

Objective
The purpose of the current task force report is to recommend good practices for pediatric PRO research that is conducted to inform regulatory decision making and support claims made in medical product labeling. The recommendations are based on the consensus of an interdisciplinary group of researchers who were assembled for a task force associated with the International Society for Pharmacoeconomics and Outcomes Research (ISPOR). In those areas in which supporting evidence is limited or in which general principles may not apply to every situation, this task force report identifies factors to consider when making decisions about the design and use of pediatric PRO instruments, while highlighting issues that require further research.

Good Research Practices
Five good research practices are discussed: 1) Consider developmental differences and determine age-based criteria for PRO administration: Four age groups are discussed on the basis of previous research (<5 years old, 5–7 years, 8–11 years, and 12–18 years). These age groups are recommended as a starting point when making decisions, but they will not fit all PRO instruments or the developmental stage of every child. Specific age ranges should be determined individually for each population and PRO instrument. 2) Establish content validity of pediatric PRO instruments: This section discusses the advantages of using children as content experts, as well as strategies for concept elicitation and cognitive interviews with children. 3) Determine whether an informant-reported outcome instrument is necessary: The distinction between two types of informant-reported measures (proxy vs. observational) is discussed, and recommendations are provided. 4) Ensure that the instrument is designed and formatted appropriately for the target age group. Factors to consider include health-related vocabulary, reading level, response scales, recall period, length of instrument, pictorial representations, formatting details, administration approaches, and electronic data collection (ePRO). 5) Consider cross-cultural issues.

Conclusions
Additional research is needed to provide methodological guidance for future studies, especially for studies involving young children and parents’ observational reports. As PRO data are increasingly used to support pediatric labeling claims, there will be more information regarding the standards by which these instruments will be judged. The use of PRO instruments in clinical trials and regulatory submissions will help ensure that children’s experiences of disease and treatment are accurately represented and considered in regulatory decisions.

7.

Objectives

To synthesize the findings of cognitive interview and usability studies performed to assess the measurement equivalence of patient-reported outcome (PRO) instruments migrated from paper to electronic formats (ePRO), and make recommendations regarding future migration validation requirements and ePRO design best practice.

Methods

We synthesized findings from all cognitive interview and usability studies performed by a contract research organization between 2012 and 2015: 53 studies comprising 68 unique instruments and 101 instrument evaluations. We summarized study findings to make recommendations for best practice and future validation requirements.

Results

Five studies (9%) identified minor findings during cognitive interviews that could affect instrument measurement properties. All findings could be addressed by application of ePRO best practice, such as eliminating scrolling, ensuring appropriate font size, ensuring suitable thickness of visual analogue scale lines, and providing suitable instructions. Similarly, regarding solution usability, 49 of the 53 studies (92%) recommended no changes in display clarity, navigation, operation, and completion without help. Reported usability findings could be eliminated by following good product design practices, such as appropriate size, location, and responsiveness of navigation buttons.

Conclusions

With the benefit of accumulating evidence, it is possible to relax the need to routinely conduct cognitive interview and usability studies when implementing minor changes during instrument migration. Applying design best practice and selecting vendor solutions with good user interface and user experience properties that have been assessed in a representative group may allow many instrument migrations to be accepted on the basis of a structured expert screen review rather than formal validation studies.

8.

Background

Little is currently known about the real-life sustainability of routine patient-reported outcome (PRO) measurement or about the representativeness of the collected data.

Objectives

The investigation of routine PRO with regard to noncompletion bias and long-term adherence, considering the potential impact of mode of assessment (MOA) (paper-pencil vs. electronic PRO [ePRO]) and patient characteristics.

Methods

At our department, routine PRO measurement in oncological patients has been performed since 2005 using different MOA (paper-pencil assessment until 2011 and ePRO assessment from 2011 onward). We analyzed two different patient groups: patients eligible in both periods (both-MOA group) and patients eligible in only one period (one-MOA group). The primary outcome was PRO noncompletion (100% missing questionnaires). The secondary outcome was poor PRO adherence (>20% missing questionnaires). Multivariate logistic regression models were developed, testing the impact of MOA and patient characteristics on the outcomes in the different patient groups.

Results

Data from 1484 eligible patients were included in the analyses. Most of the patients could be included in PRO assessment at least once. PRO noncompletion rates were clearly higher during paper-pencil assessment (odds ratios between 2.72 and 4.31), as were poor PRO adherence rates (odds ratio 2.23). Analyses of potential bias by patient characteristics showed that male patients had a higher risk of poor adherence. Other factors with significant impact were age, country, and cancer diagnosis, but results were inconclusive.

Conclusions

ePRO increased the feasibility of our clinical routine PRO data for retrospective analyses by increasing completion rates. In general, potential completion bias regarding certain patient characteristics requires attention before generalizing results to the respective populations.  相似文献   
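The study above models PRO noncompletion with multivariate logistic regression and reports odds ratios for mode of assessment and patient characteristics. The sketch below shows the general form of such an analysis on simulated data; the variable names, covariates, and effect sizes are assumptions for illustration, not the study's actual dataset or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the kind of data described in the abstract
# (mode of assessment, sex, age); all values and effects are assumed.
rng = np.random.default_rng(42)
n = 1484
df = pd.DataFrame({
    "paper_mode": rng.integers(0, 2, n),   # 1 = paper-pencil, 0 = ePRO
    "male": rng.integers(0, 2, n),
    "age": rng.normal(60, 12, n),
})
logit_p = -2.0 + 1.1 * df["paper_mode"] + 0.4 * df["male"] + 0.01 * (df["age"] - 60)
df["noncompletion"] = rng.binomial(1, (1 / (1 + np.exp(-logit_p))).to_numpy())

# Multivariable logistic regression; exponentiated coefficients are odds ratios
model = smf.logit("noncompletion ~ paper_mode + male + age", data=df).fit(disp=False)
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```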

9.
[Objective] To analyze the main problems found during the scientific data quality review of data journals, and to provide a reference for improving scientific data quality and for establishing industry standards and norms for data journal quality review. [Methods] Taking the data quality review practice of 《全球变化数据仓储电子杂志(中英文)》 as an example, the main data quality problems found in three areas, namely metadata quality review, entity data quality review, and cross-checking of metadata, entity data, and data paper drafts, were summarized, and their causes were analyzed. [Results] The main data quality problems were: incomplete metadata content and nonstandard expression; inconsistency between the entity data and the data descriptions in the metadata and the data paper; flaws in, or a lack of scientific basis for, the methods used to produce the entity data; entity data that do not match the actual situation; incomplete entity data content; internally inconsistent entity data; and citation of entity data without indicating the source, or with nonstandard attribution. [Conclusions] Data quality review currently reveals a relatively large number of problems, which is closely related to the current research evaluation mechanisms and to some authors' lack of rigor in handling data.

10.
《Value in health》2015,18(4):493-504
Objective
To recommend methods for assessing quality of care via patient-reported outcome-based performance measures (PRO-PMs) of symptoms, functional status, and quality of life.

Methods
A Technical Expert Panel was assembled by the American Medical Association–convened Physician Consortium for Performance Improvement. An environmental scan and structured literature review were conducted to identify quality programs that integrate PRO-PMs. Key methodological considerations in the design, implementation, and analysis of these PRO-PM data were systematically identified. Recommended methods for addressing each identified consideration were developed on the basis of published patient-reported outcome (PRO) standards and refined through public comment. Literature review focused on programs using PROs to assess performance and on PRO guidance documents.

Results
Thirteen PRO programs and 10 guidance documents were identified. Nine best practices were developed, including the following: provide a rationale for measuring the outcome and for using a PRO-PM; describe the context of use; select a measure that is meaningful to patients with adequate psychometric properties; provide evidence of the measure’s sensitivity to differences in care; address missing data and risk adjustment; and provide a framework for implementation, interpretation, dissemination, and continuous refinement.

Conclusion
Methods for integrating PROs into performance measurement are available.

11.
Standardization of military health information data sets and data elements   (total citations: 1; self-citations: 0; citations by others: 1)
Data standardization is an important part of information standardization and must be achieved by adhering to consistent metadata standards. Standardizing data sets and the data elements they contain is key to achieving semantic interoperability in information exchange. The standardization of military health information data sets can be described in terms of administrative status, scope, statistical unit, collection method and cycle, and the data elements included; the standardization of data elements should be described in terms of name, identifier, definition, context, data element concept, value domain, and permissible values. A collection of metadata constitutes a data dictionary; comparing and harmonizing the fields of various information systems or databases, and the data items (data elements) of data sets, against the common data elements in the data dictionary yields standardized system or database dictionaries, which promotes the interpretability, comparability, and consistency of health information.
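To make the data-dictionary idea concrete, the sketch below harmonizes fields from two hypothetical local systems against a small dictionary of common data elements and checks values against a permitted value domain. All element names, attributes, and mappings are invented for illustration; a real dictionary would carry the descriptive attributes listed above (name, identifier, definition, context, value domain, permissible values).

```python
# Minimal illustration of harmonizing local database fields against a data
# dictionary of common data elements; every name here is hypothetical.
DATA_DICTIONARY = {
    "DE.BIRTH_DATE": {"definition": "Date of birth", "datatype": "date"},
    "DE.BLOOD_TYPE": {"definition": "ABO blood group", "datatype": "code",
                      "permissible_values": ["A", "B", "AB", "O"]},
}

# Field-to-data-element mappings declared for two local systems
SYSTEM_MAPPINGS = {
    "hospital_his": {"birthdate": "DE.BIRTH_DATE", "abo": "DE.BLOOD_TYPE"},
    "field_clinic_db": {"dob": "DE.BIRTH_DATE", "blood_grp": "DE.BLOOD_TYPE"},
}

def harmonize(system: str, record: dict) -> dict:
    """Rename a system-specific record to data-dictionary element identifiers
    and flag values outside the permitted value domain."""
    mapping = SYSTEM_MAPPINGS[system]
    out = {}
    for local_field, value in record.items():
        element_id = mapping.get(local_field)
        if element_id is None:
            continue  # unmapped local field: a candidate for dictionary extension
        element = DATA_DICTIONARY[element_id]
        allowed = element.get("permissible_values")
        if allowed and value not in allowed:
            raise ValueError(f"{element_id}: '{value}' is not a permissible value")
        out[element_id] = value
    return out

print(harmonize("field_clinic_db", {"dob": "1990-05-01", "blood_grp": "O"}))
```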

12.
《Vaccine》2019,37(35):4823-4829
In response to global interest in the development of a universal influenza vaccine, the Bill & Melinda Gates Foundation, PATH, and the Global Funders Consortium for Universal Influenza Vaccine Development convened a meeting of experts (London, UK, May 2018) to assess the role of a standardized controlled human influenza virus infection model (CHIVIM) towards the development of novel influenza vaccine candidates. This report (in two parts) summarizes those discussions and offers consensus recommendations. This article (Part 1) covers challenge virus selection, regulatory and ethical considerations, and issues concerning standardization, access, and capacity. Part 2 covers specific methodologic considerations.

Current methods for influenza vaccine development and licensure require large, costly field trials. The CHIVIM requires fewer subjects, and the controlled setting allows for a better understanding of influenza transmission and host immunogenicity. The CHIVIM can be used to identify immune predictors of disease for at-risk populations and to measure the efficacy of potential vaccines for further development.

Limitations of the CHIVIM include lack of standardization, limited access to challenge viruses and assays, lack of consensus regarding the role of the CHIVIM in the vaccine development pathway, and concerns regarding risk to study participants and the community. To address these issues, the panel of experts recommended that WHO and other key stakeholders provide guidance on standardization, challenge virus selection, and risk management. A common repository of well-characterized challenge viruses, harmonized protocols, and standardized assays should be made available to researchers. A network of research institutions performing CHIVIM trials should be created, and more study sites are needed to increase capacity.

Experts agreed that a research network of institutions working with a standardized CHIVIM could contribute important data to support more rapid development and licensure of novel vaccines capable of providing long-lasting protection against seasonal and pandemic influenza strains.

13.
Objective. To assess the internal consistency and agreement between the Health Care Information and Management Systems Society (HIMSS) and the Leapfrog computerized provider order entry (CPOE) data. Data Sources. Secondary hospital data collected by HIMSS Analytics, the Leapfrog Group, and the American Hospital Association from 2005 to 2007. Study Design. Dichotomous measures of full CPOE status were created for the HIMSS and Leapfrog datasets in each year. We assessed internal consistency by calculating the percent of full adopters in a given year that report full CPOE status in subsequent years. We assessed the level of agreement between the two datasets by calculating the κ statistic and McNemar's test. We examined responsiveness by assessing the change in full CPOE status rates, over time, reported by HIMSS and Leapfrog data, respectively. Principal Findings. Findings indicate minimal agreement between the two datasets regarding positive hospital CPOE status, but adequate agreement within a given dataset from year to year. Relative to each other, the HIMSS data tend to overestimate increases in full CPOE status over time, while the Leapfrog data may underestimate year over year increases in national CPOE status. Conclusions. Both Leapfrog and HIMSS data have strengths and weaknesses. Those interested in studying outcomes associated with CPOE use or adoption should be aware of the strengths and limitations of the Leapfrog and HIMSS datasets. Future development of a standard definition of CPOE status in hospitals will allow for a more comprehensive validation of these data.  相似文献   
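The agreement analysis described above rests on the kappa statistic and McNemar's test applied to paired dichotomous CPOE indicators. The sketch below reproduces that general approach on simulated hospital-level data; the agreement level and sample size are assumptions, not the HIMSS or Leapfrog figures.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired hospital-level indicators of "full CPOE" status
# (1 = full CPOE, 0 = not), one value per hospital from each data source.
rng = np.random.default_rng(7)
himss = rng.integers(0, 2, size=300)
leapfrog = np.where(rng.random(300) < 0.7, himss, 1 - himss)  # imperfect agreement

# Agreement beyond chance
kappa = cohen_kappa_score(himss, leapfrog)

# 2x2 table of paired classifications and McNemar's test for asymmetry
# in the discordant cells
table = np.zeros((2, 2), dtype=int)
for h, l in zip(himss, leapfrog):
    table[h, l] += 1
result = mcnemar(table, exact=False, correction=True)

print(f"kappa = {kappa:.2f}")
print(f"McNemar chi2 = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```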

14.
《Value in health》2023,26(9):1321-1324
With expanding data availability and computing power, health research is increasingly relying on big data from a variety of sources. We describe a state-level effort to address aspects of the opioid epidemic through public health research, which has resulted in an expansive data resource combining dozens of administrative data sources in Massachusetts. The Massachusetts Public Health Data Warehouse is a public health innovation that serves as an example of how to address the complexities of balancing data privacy and access to data for public health and health services research. We discuss issues of data protection and data access, and provide recommendations for ethical data governance. Keeping these issues in mind, the use of this data resource has the potential to allow for transformative research on critical public health issues.  相似文献   

15.

Purpose

To understand oncologists’ attitudes toward patient-reported outcome (PRO) measures and to learn how PRO data influence their clinical decision-making.

Methods

Twenty practicing oncologists participated in 1 of 4 semi-structured focus groups.

Results

Most oncologists had no experience with PRO measures, but were able to identify several concepts appropriate for patient-reported assessment. Participants agreed that clinical measures such as performance status were more meaningful to them, but acknowledged that PRO measures were more appropriate for assessing patient symptoms and treatment response. All oncologists believed that clinical efficacy and toxicity data were of primary importance, but that PROs become increasingly important when multiple treatments are available, in advanced or incurable disease, and in palliative care. Several issues prevented oncologists from being able to draw meaningful conclusions from PRO data: lack of familiarity with PRO measures, being presented with too much data to process, lack of clarity around a meaningful change in PRO measure scores, and lack of standardization in the use of PRO measures.

Conclusions

Oncologists indicated that PRO data are most influential in advanced or incurable disease and in palliative care. Improving the interpretability of PRO measures could increase the usefulness of PRO data in treatment decision-making.  相似文献   

16.
《Vaccine》2019,37(35):4830-4834
In response to global interest in the development of a universal influenza vaccine, the Bill & Melinda Gates Foundation, PATH, and the Global Funders Consortium for Universal Influenza Vaccine Development convened a meeting of experts (London, UK, May 2018) to assess the role of a standardized controlled human influenza virus infection model (CHIVIM) towards the development of novel influenza vaccine candidates. This report (in two parts) summarizes those discussions and offers consensus recommendations. Part 1 covers challenge virus selection, regulatory and ethical considerations, and issues concerning standardization, access, and capacity. This article (Part 2) summarizes the discussion and recommendations concerning CHIVIM methods.

The panelists identified an overall need for increased standardization of CHIVIM trials, in order to produce comparable results that can support universal vaccine licensure. Areas of discussion included study participant selection and screening, route of exposure and dose, devices for administering challenge, rescue therapy, protection of participants and institutions, clinical outcome measures, and other considerations. The panelists agreed upon specific recommendations to improve the standardization and usefulness of the model for vaccine development.

Experts agreed that a research network of institutions working with a standardized CHIVIM could contribute important data to support more rapid development and licensure of novel vaccines capable of providing long-lasting protection against seasonal and pandemic influenza strains.

17.

Purpose

An essential aspect of patient-centered outcomes research (PCOR) and comparative effectiveness research (CER) is the integration of patient perspectives and experiences with clinical data to evaluate interventions. Thus, PCOR and CER require capturing patient-reported outcome (PRO) data appropriately to inform research, healthcare delivery, and policy. This initiative’s goal was to identify minimum standards for the design and selection of a PRO measure for use in PCOR and CER.

Methods

We performed a literature review to find existing guidelines for the selection of PRO measures. We also conducted an online survey of the International Society for Quality of Life Research (ISOQOL) membership to solicit input on PRO standards. A standard was designated as “recommended” when more than 50% of respondents endorsed it as “required as a minimum standard.”

Results

The literature review identified 387 articles. Of the 506 ISOQOL members surveyed, 120 responded. The respondents had an average of 15 years' experience in PRO research, and 89% felt competent or very competent providing feedback. Final recommendations for PRO measure standards included: documentation of the conceptual and measurement model; evidence of reliability and validity (content validity, construct validity, responsiveness); interpretability of scores; quality translation; and acceptable patient and investigator burden.

Conclusion

The development of these minimum measurement standards is intended to promote the appropriate use of PRO measures to inform PCOR and CER, which in turn can improve the effectiveness and efficiency of healthcare delivery. A next step is to expand these minimum standards to identify best practices for selecting decision-relevant PRO measures.  相似文献   
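Among the minimum standards recommended above is evidence of reliability. As one small, self-contained example of such evidence, the sketch below computes Cronbach's alpha (internal consistency) for a simulated multi-item PRO scale; the item count, scoring range, and data are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a multi-item scale.

    items: n_respondents x k_items matrix of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated 5-item scale (1-5 response options) driven by one latent trait
rng = np.random.default_rng(1)
trait = rng.normal(0, 1, size=(300, 1))
items = np.clip(np.rint(3 + trait + rng.normal(0, 0.8, size=(300, 5))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```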

18.

Objectives

To compare US Food and Drug Administration (FDA) and European Medicines Agency (EMA) labeling for evidence based on patient-reported outcomes (PROs) of new oncology treatments approved by both agencies.

Methods

Oncology drugs and indications approved between 2012 and 2016 by both the FDA and the EMA were identified. PRO-related language and analysis reported in US product labels and drug approval packages and EMA summaries of product characteristics were compared for each indication.

Results

In total, 49 oncology drugs were approved across 64 indications. Of the 64 indications, 45 (70.3%) included PRO data in either regulatory submission. No FDA PRO labeling was identified. PRO language was included in the summary of product characteristics for 21 (46.7%) of the 45 indications. European Organisation for Research and Treatment of Cancer and Functional Assessment of Cancer Therapy measures were used frequently in submissions. The FDA’s comments suggest that aspects of study design (eg, open-label designs) or the validity of PRO measures were the primary reasons for the lack of labeling based on PRO endpoints. Both agencies identified missing PRO data as problematic for interpretation.

Conclusions

During this time period, the FDA and the EMA used different evidentiary standards to assess PRO data from oncology studies, with the EMA more likely to accept data from open-label studies and broad concepts such as health-related quality of life. An understanding of the key differences between the agencies may guide sponsor PRO strategy when pursuing labeling. Patient-focused proximal concepts are more likely than distal concepts to receive positive reviews.  相似文献   

19.
Multiple imputation (MI) is a technique that can be used for handling missing data in a public-use dataset. With MI, two or more completed versions of the dataset are created, containing possibly different but reasonable replacements for the missing data. Users analyse the completed datasets separately with standard techniques and then combine the results using simple formulae in a way that allows the extra uncertainty due to missing data to be assessed. An advantage of this approach is that the resulting public-use data can be analysed by a variety of users for a variety of purposes, without each user needing to devise a method to deal with the missing data. A recent example for a large public-use dataset is the MI of the family income and personal earnings variables in the National Health Interview Survey. We propose an approach to utilise MI to handle the problems of missing gestational ages and implausible birthweight–gestational age combinations in national vital statistics datasets. This paper describes MI and gives examples of MI for public-use datasets, summarises methods that have been used for identifying implausible gestational age values on birth records, and combines these ideas by setting forth scenarios for identifying and then imputing missing and implausible gestational age values multiple times. Because missing and implausible gestational age values are not missing completely at random, using multiple imputations and, thus, incorporating both the existing relationships among the variables and the uncertainty added from the imputation, may lead to more valid inferences in some analytical studies than simply excluding birth records with inadequate data.  相似文献   
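The pooling step of multiple imputation described above follows Rubin's rules: analyze each completed dataset separately, then combine the estimates so that the extra uncertainty introduced by imputation is reflected in the variance. The sketch below implements that combination for a single scalar quantity; the example numbers (five imputations of a mean gestational age) are invented for illustration.

```python
import numpy as np

def pool_rubin(estimates: np.ndarray, variances: np.ndarray):
    """Combine m completed-data estimates of one quantity using Rubin's rules.

    estimates: point estimates from each imputed dataset (length m)
    variances: corresponding squared standard errors (length m)
    Returns the pooled estimate, its total variance, and the relative
    increase in variance due to missing data.
    """
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    u_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b      # total variance
    r = (1 + 1 / m) * b / u_bar              # relative variance increase
    return q_bar, total_var, r

# e.g. a mean gestational age estimated separately in 5 imputed datasets (assumed values)
est = np.array([38.6, 38.8, 38.7, 38.5, 38.9])
var = np.array([0.04, 0.05, 0.04, 0.05, 0.04])
q, t, r = pool_rubin(est, var)
print(f"pooled estimate = {q:.2f}, SE = {np.sqrt(t):.3f}, rel. var. increase = {r:.2f}")
```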

20.