Similar Documents
20 similar documents retrieved (search time: 62 ms)
1.
Traditionally, epidemiology has used intervention to divide observation from experiment, equating intervention studies with experimental studies and holding that intervention studies are scientifically superior to observational ones. In general scientific experimentation, an intervention is a deliberately imposed measure that alters the natural state of affairs. An intervention is not necessarily beneficial, nor necessarily imposed by the investigator at the time of the study: measures imposed now or in the past by the investigator, the subject, or a third party can all constitute an "effective" intervention. For example, optic nerve injury caused by the investigator, the subject, or a third party can each form an intervention that effectively alters normal optic nerve function, allowing the investigator to observe the relationship between the optic nerve and vision. By this reasoning, a harmful exposure that subjects imposed on themselves in the past (such as smoking) also counts as an intervention, which would make an observational cohort study of smoking and lung cancer equivalent to an experimental study. It follows that intervention alone cannot effectively distinguish observation from experiment. If experiments are held to be more scientifically rigorous than observation, then beyond the presence of an intervention, observation and experiment can only be distinguished by scientific rigor, that is, by design features. In clinical trials evaluating medical interventions, randomization is the most important bias-control measure introduced on top of what is traditionally regarded as observational research, and it should be the core attribute separating observation from experiment. If population studies must be divided into observational and experimental, only randomized controlled trials are true experimental studies; intervention studies with non-randomized allocation are trials, but not experiments. Real-world studies based on big data, if they lack randomization, do not constitute experiments and cannot serve as the final test of intervention effects. Big-data real-world studies cannot replace randomized controlled trials; that is the most important message this article hopes to convey.

2.
Some views on evidence-based medicine, precision medicine, and big-data research   (Cited by 5: 2 self-citations, 3 by others)
Evidence-based medicine remains the best model of medical practice today. Note, however, that evidence itself does not equal a decision; decisions must also consider available resources and people's values. Evidence shows that the vast majority of patients will not avoid major complications or death by taking antihypertensive, lipid-lowering, glucose-lowering, or anticancer drugs, indicating that much of modern diagnosis and treatment is imprecise. Finding the few patients who do respond to treatment has become modern medicine's dream, and precision medicine arose in response. But precision medicine is not a new concept, nor does it amount to an all-in bet on gene sequencing. The large multi-factor cohort studies on which precision medicine relies have a long history and are not a new method either. Medicine has always pursued precision and has achieved it at every level of human understanding: vaccines and antibodies, blood typing and transfusion, imaging for lesion localization, and cataract lens-replacement surgery. Genes are not the only road to precision; they merely offer new possibilities. Most gene-disease associations are weak, however, suggesting that the value of genes in precisely guiding prevention and treatment may be limited; using big data and other predictors is the necessary path for precision medicine. In the use of big data, emphasizing possession of the whole population, large samples, and correlation while downplaying causation is seriously misleading. Science has never waited to examine an entire population before drawing inferences; the sample size a study needs is inversely related to the effect size; and denying causality denies the scientific principles and methods of epidemiology, abandons the safeguard of validity, and will ultimately lead to ineffective prevention and treatment. Therefore, in confirming efficacy, observational real-world findings based on big data cannot replace the experimental evidence of randomized controlled trials. This article hopes, through doubt and critique, to draw out the true potential of precision medicine and big data.

3.
Clinical trials are the gold standard for evaluating the efficacy and safety of interventions, but they are limited by high cost and long duration. Real-world data can provide a powerful data foundation for comparative research, yet study quality varies widely. This article introduces target trial emulation, which uses real-world data while following the design of a clinical trial: prespecifying exposures and outcomes, setting inclusion and exclusion criteria, defining time zero, estimating sample size, and drawing up a statistical analysis plan, in order to raise the evidence level of observational research. We also offer a preliminary discussion of criteria for grading the evidence of emulated target trials and interpret the approach through a case example.

4.
Objective The real-world effect of an intervention in clinical practice differs from the efficacy it shows in randomized controlled trials (RCT): the efficacy-effectiveness gap. The difference between RCT results and real-world study (RWS) results may not represent the true efficacy-effectiveness gap, because when an RWS differs substantially from the RCT in design, or the RWS estimate is biased, the estimated gap may itself be biased. Furthermore, when an efficacy-effectiveness gap is found, a one-size-fits-all clinical decision cannot be applied to all patients; the real-world factors influencing the intervention's effect must be assessed to identify the patient groups likely to obtain the expected benefit. Methods We searched six databases (PubMed, Embase, Web of Science, Wanfang Data, VIP Database, and CNKI) for Chinese- and English-language literature from inception to 31 December 2022, and used a scoping-review approach to synthesize and qualitatively describe methods for improving RWS design so as to bridge the efficacy-effectiveness gap. Results Ten articles were included. They discuss how to use the RCT protocol as a template for the corresponding RWS protocol; on the basis of a correctly estimated efficacy-effectiveness gap, the intervention's effect can then be assessed in patient subgroups and the subgroups likely to achieve the expected benefit-risk ratio selected, thereby bridging the gap. Conclusion Using medical big data to emulate the key features of a target trial protocol can improve the validity of study results and bridge the efficacy-effectiveness gap.

5.
The rational distribution of medical resources is an important foundation for building a high-quality healthcare system. With the launch of Beijing's decentralization program, the city's formerly highly concentrated medical resources need to be reallocated. How to optimize the spatial layout of high-quality medical resources, easing the difficulty and cost of obtaining care without aggravating Beijing's "big-city disease", is a new challenge posed by decentralization. Grounded in the methods of computational social science and set against the backdrop of the relocation of Beijing's medical resources, this study develops an agent-based computational experiment platform for the spatial allocation and layout of urban medical resources. It examines residents' residential choices and care-seeking behavior under different layout scenarios, such as the "point-axis radial" model and the "cluster" model, and dynamically observes and evaluates the rationality and effectiveness of resource allocation under both normal and emergency conditions. The computational experiments show that under the point-axis radial model, hospitals are distributed more evenly in space and gradually form their own service radii as the population migrates, performing slightly better than the cluster model. In emergencies such as disasters and epidemics, however, the cluster model's resource allocation performs significantly better than the point-axis radial model. The study shows that agent-based modeling and computational experiments can provide a solid foundation for ex-ante layout research and planning of this kind using real data.
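As a toy version of this kind of computational experiment, the sketch below places agents and hospitals on a plane under a stylized "ring" (point-axis radial) layout and a "two-cluster" layout, and compares the mean distance to the nearest hospital under normal conditions. All coordinates and sizes are hypothetical; this is an illustration of the comparison, not the paper's platform.

```python
import numpy as np

rng = np.random.default_rng(42)
residents = rng.uniform(0, 100, size=(5000, 2))   # agent home locations on a 100 x 100 grid

# Two stylized 8-hospital layouts (hypothetical coordinates, not Beijing data):
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
radial = np.column_stack([50 + 35 * np.cos(theta),
                          50 + 35 * np.sin(theta)])          # "point-axis radial" ring
clusters = np.array([[30.0, 30.0], [30.0, 35.0], [35.0, 30.0], [35.0, 35.0],
                     [65.0, 65.0], [65.0, 70.0], [70.0, 65.0], [70.0, 70.0]])  # two tight clusters

def mean_access_distance(hospitals):
    """Each agent seeks care at the nearest hospital; return the mean travel distance."""
    d = np.linalg.norm(residents[:, None, :] - hospitals[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

d_radial, d_clusters = mean_access_distance(radial), mean_access_distance(clusters)
print(d_radial, d_clusters)   # the more evenly spread ring gives shorter average access
```

On this access metric the even "ring" layout wins, consistent with the study's normal-conditions finding; capturing the emergency-conditions reversal would require modeling surge demand and capacity, which this sketch does not attempt.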

6.
Real-world research (RWR), as a complement to randomized controlled studies, is receiving growing attention. Effectively using high-quality real-world data to generate reliable real-world evidence presents both opportunities and challenges. This article reviews and comments on recent research from two angles, data management and utilization and techniques for deriving evidence, to provide a reference for RWR and its application.

7.
Randomized controlled trials (RCT) are regarded as the gold standard for evaluating treatment effects. They focus on efficacy, but interest often centers on effectiveness in the real world, beyond the strict control of an RCT, with greater weight given to external validity [1]. Consequently, more and more researchers use observational data to evaluate treatment effects. In an RCT, subjects are randomly assigned to treatment and control groups, guaranteeing that baseline covariates are identically distributed in the two groups. This is not so in observational studies: if certain covariates are related to both treatment assignment and outcome, confounding may result. Statistical methods are then needed to remove the influence of confounding; common choices include matching and multivariable analysis. Over the past decade or so, the propensity score (PS) method has attracted increasing attention as a way to control confounding bias, with implementations available in R, Stata, SPSS, and other statistical software.
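As a toy illustration of the weighting idea behind propensity-score methods, the sketch below simulates a single confounder and applies stabilized inverse-probability-of-treatment weights. For simplicity the true assignment model is known by construction; in practice the propensity score would be estimated, for example by logistic regression in the software mentioned above. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

x = rng.normal(size=n)                 # simulated baseline confounder
ps = 1.0 / (1.0 + np.exp(-x))          # true propensity score P(T = 1 | x), known by construction
t = rng.binomial(1, ps)                # treatment assignment depends on x -> confounding

# Stabilized inverse-probability-of-treatment weights
p_treat = t.mean()
w = np.where(t == 1, p_treat / ps, (1.0 - p_treat) / (1.0 - ps))

raw_gap = x[t == 1].mean() - x[t == 0].mean()          # covariate imbalance before weighting
wmean = lambda mask: np.average(x[mask], weights=w[mask])
weighted_gap = wmean(t == 1) - wmean(t == 0)           # approximately zero after weighting

print(round(raw_gap, 3), round(weighted_gap, 3))
```

The weighted pseudo-population has the confounder approximately balanced across treatment arms, which is exactly the property randomization provides by design in an RCT.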

8.
Jin Meng, Sun Kexin, Hu Yonghua. Modern Preventive Medicine, 2016, (20): 3831-3836
Using common literature databases as the primary quantitative source, this article describes the current state of medical informatics research in the era of medical big data: rich and diverse research areas, systematic and in-depth research content, and rapidly growing research output. It summarizes the opportunities that the characteristics of medical big data bring to medical informatics research: real data and real demand for research on the efficient storage, management, and access of high-volume medical data; a pressing practical need to apply advanced big-data mining, analysis, and integration technologies to medical research and practice; and the high value density peculiar to medical big data, which provides a strong data and knowledge base for health-related research. Facing these opportunities, the article analyzes the challenges confronting medical informatics research and development: building integrated big-data platforms, mining and analyzing medical big data efficiently, and ultimately delivering precise clinical decision support and other clinical applications.

9.
China now has the economic and cultural foundation to become a world medical center, and big data, with its massive scale and capacity for rapid, specialized processing, will help China become one of the world's medical centers sooner. This article discusses the enormous role of big data in the development of the life sciences and medicine, analyzes the main problems currently facing medical big data in China, and proposes measures to resolve them: accelerating the formulation of relevant policies and standards, strengthening the application of 5G technology in healthcare big data, and pressing ahead with the training of medical big-data professionals and the study of related ethics and regulations.

10.
Building and developing a health index system is of strategic importance to achieving the goals of Healthy China. Starting from real-world data and applying a series of causal inference methods to screen and identify health index indicators that have a firmly established causal relationship with health or disease outcomes and are amenable to intervention is essential for generating real-world evidence that is closer to practice and more valuable for health and disease management. Addressing the evidence-based-medicine requirements of health index construction, this article introduces the causal inference methods commonly used for population-level assessment in real-world research, providing methodological support for the selection of health index indicators.

11.
We compare exact and asymptotic methods for variable selection in matched case-control studies. Data from a study of melanoma among the employees of the Lawrence Livermore National Laboratory illustrate the comparisons. Relative to large sample methods, the exact method almost always yielded larger p-values. The differences in p-values became more pronounced with inclusion of more variables in the logistic model. Thus, when the sample size is not large, and there are many covariates under study, use of the exact method tends to select more parsimonious models and avoids overfit of the data.
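The exact-versus-asymptotic contrast can be reproduced on a small hypothetical 2×2 table: the sketch below implements a two-sided Fisher exact test from the hypergeometric distribution and the uncorrected chi-square test (1 df) using only the Python standard library. The counts are invented for illustration, not taken from the melanoma study.

```python
from math import comb, erfc, sqrt

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables (with fixed margins) that are
    no more probable than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def chi2_p(a, b, c, d):
    """Asymptotic p-value from the uncorrected chi-square test, 1 df."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(stat / 2))  # survival function of a chi-square with 1 df

# A small hypothetical table with sparse cells, where the asymptotic
# approximation is strained:
p_exact, p_asymp = fisher_exact_p(7, 3, 2, 8), chi2_p(7, 3, 2, 8)
print(round(p_exact, 4), round(p_asymp, 4))
```

For this table the exact p-value (about 0.070) is considerably larger than the asymptotic one (about 0.025), the direction the abstract reports for small samples.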

12.
Genetic studies provide valuable information to assess whether the effect of genetic variants varies with nongenetic "environmental" variables, which is traditionally defined as gene–environment interaction (GxE). A common complication is that multiple disease states present with the same set of symptoms and hence share a clinical diagnosis. Because (a) the disease states might have distinct genetic bases, and (b) the frequencies of the disease states within the clinical diagnosis vary with the environmental variables, analyses that use the clinical diagnosis as the outcome variable might yield false positive or false negative findings. We develop estimates for this setting that allow GxE to be assessed in a case-only study, and we compare the case-control and case-only estimates. We report extensive simulation studies that evaluate the empirical properties of the estimates and show an application to a study of Alzheimer's disease.

13.
A key objective of Phase II dose finding studies in clinical drug development is to adequately characterize the dose response relationship of a new drug. An important decision is then on the choice of a suitable dose response function to support dose selection for the subsequent Phase III studies. In this paper, we compare different approaches for model selection and model averaging using mathematical properties as well as simulations. We review and illustrate asymptotic properties of model selection criteria and investigate their behavior when changing the sample size but keeping the effect size constant. In a simulation study, we investigate how the various approaches perform in realistically chosen settings. Finally, the different methods are illustrated with a recently conducted Phase II dose finding study in patients with chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
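A minimal sketch of comparing candidate dose-response functions with an information criterion: on simulated trial data, a linear model is compared by AIC with an Emax-type model whose ED50 is fixed at an assumed value of 25. The dose levels, group sizes, and true response curve are all hypothetical, and this is far simpler than the model-averaging approaches the paper studies.

```python
import numpy as np

rng = np.random.default_rng(1)
doses = np.repeat([0.0, 12.5, 25.0, 50.0, 100.0], 20)   # hypothetical Phase II arms, 20 per arm
# Simulated responses from a saturating (log-shaped) dose-response curve
y = 3.0 * np.log1p(doses / 10.0) + rng.normal(0.0, 0.5, doses.size)

def aic(design, y):
    """AIC for a Gaussian linear model fit by least squares;
    k counts the regression coefficients plus the error variance."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = float(np.sum((y - design @ beta) ** 2))
    n, k = y.size, design.shape[1] + 1
    return n * np.log(rss / n) + 2 * k

X_lin = np.column_stack([np.ones_like(doses), doses])                    # linear in dose
X_emax = np.column_stack([np.ones_like(doses), doses / (doses + 25.0)])  # fixed-ED50 Emax term

aic_lin, aic_emax = aic(X_lin, y), aic(X_emax, y)
print(aic_lin, aic_emax)   # lower AIC = preferred model
```

On this saturating curve the Emax-type model attains the lower AIC; model averaging, as discussed in the paper, would instead weight the candidate models rather than pick one.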

14.
Asian Pacific Journal of Reproduction, 2014, 3(1): 46-52
Objective To compare maternal outcomes of multiple versus singleton pregnancies at a tertiary hospital in Tanzania. Methods A case-control study was designed using maternally linked data from the Kilimanjaro Christian Medical Centre (KCMC) medical birth registry for the period 2000–2010. A total of 822 multiple gestations (cases) were matched with 822 singletons (controls) on maternal age at delivery and parity. Odds ratios (ORs) with 95% confidence intervals (CIs) for adverse maternal outcomes in multiple versus singleton gestations were computed in a multivariable logistic regression model. Results Of the 33 997 births, 822 (2.1%) were multiples. Compared with singletons, women with multiple gestations had increased risk of preeclampsia (OR 2.6; 95% CI: 1.7–3.9), preterm labour (OR 5.6; 95% CI: 4.2–7.4), antepartum haemorrhage (OR 1.6; 95% CI: 1.1–2.3), anaemia (OR 2.0; 95% CI: 1.6–2.6) and caesarean section (OR 1.5; 95% CI: 1.4–1.7). In addition, there were six maternal deaths among women with multiple gestations, all attributed to postpartum haemorrhage, a case fatality rate of 15.8%. Conclusions Multiple gestations are associated with adverse maternal outcomes. Close follow-up and timely interventions may help to prevent poor outcomes related to multiple gestations. These findings suggest the need for clinicians to counsel women with multiple gestations during prenatal care regarding the potential risks.
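Odds ratios of the kind reported above can be computed from a 2×2 table with a Woolf (log-normal) confidence interval. The counts below are invented for illustration, not the KCMC registry data.

```python
from math import exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-normal) confidence interval for a 2x2 table:
    a, b = outcome present / absent among the exposed,
    c, d = outcome present / absent among the unexposed."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    return or_, or_ * exp(-z * se_log), or_ * exp(z * se_log)

# Hypothetical counts (e.g. an outcome in 822 multiples vs 822 singletons)
or_, lo, hi = odds_ratio_ci(60, 762, 25, 797)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The multivariable logistic regression used in the study additionally adjusts for covariates; the crude calculation here only shows where an OR and its CI come from.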

15.
Correlated data arise in longitudinal epidemiological studies, where repeated measurements are taken on individuals or groups over time. Such longitudinal data are ideally analyzed with multilevel modeling approaches, which appropriately account for the correlation of repeated responses within the same individual; commonly used regression models are inappropriate because they assume the measurements are independent. In this tutorial, we demonstrate the use of multilevel modeling for the analysis of correlated data obtained from serial examinations of individuals. We focus on cardiovascular epidemiological research, where investigators are often interested in quantifying the relations between clinical risk factors and outcome measures (X and Y, respectively) that are both measured repeatedly over time, for example in serial observations on participants attending multiple examinations in a longitudinal cohort study. For instance, it may be of interest to evaluate the relations between serial measures of left ventricular mass (outcome) and its potential determinants (i.e., body mass index and blood pressure), all measured over time. We describe the application of multilevel modeling to cardiovascular risk factor and outcome data, using serial echocardiographic data as an example of an outcome, and suggest an analytical approach that can be implemented to evaluate relations between any outcome of interest and risk factors, including assessment of random effects and nonlinear relations. We illustrate these steps using echocardiographic data from the Framingham Heart Study with SAS PROC MIXED. Copyright © 2013 John Wiley & Sons, Ltd.
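A multilevel (random-effects) model of the kind described can be written out explicitly; the notation below is generic and not taken from the Framingham analysis:

```latex
Y_{ij} = \beta_0 + \beta_1 X_{ij} + u_{0i} + u_{1i} X_{ij} + \varepsilon_{ij},
\qquad
\begin{pmatrix} u_{0i} \\ u_{1i} \end{pmatrix}
\sim N\!\left( \mathbf{0},
\begin{pmatrix} \tau_0^2 & \tau_{01} \\ \tau_{01} & \tau_1^2 \end{pmatrix} \right),
\qquad
\varepsilon_{ij} \sim N(0, \sigma^2),
```

where $Y_{ij}$ is the outcome (e.g. left ventricular mass) at examination $j$ for individual $i$, $X_{ij}$ is the time-varying risk factor, $u_{0i}$ and $u_{1i}$ are the individual-level random intercept and slope that induce the within-person correlation, and $\varepsilon_{ij}$ is the residual error. Setting $\tau_1^2 = \tau_{01} = 0$ gives the simpler random-intercept model.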

16.
Objective To investigate external excision with internal ligation for the treatment of mixed hemorrhoids, in comparison with the Milligan-Morgan procedure. Methods 126 patients with mixed hemorrhoids were randomized into a treatment group (n = 66), treated with external excision and internal ligation, and a control group (n = 60), treated with the Milligan-Morgan procedure. The two groups were compared on wound healing time, urinary retention, bleeding, pain, anorectal stenosis, and anal discharge. Results Mean wound healing time was (8.2 ± 2.6) d in the treatment group versus (17.4 ± 3.8) d in the control group (P < 0.01); the incidence of anorectal stenosis and anal discharge was significantly lower in the treatment group (P < 0.01). Conclusion External excision with internal ligation for mixed hemorrhoids reduces damage to the anal transitional zone, shortens healing time, markedly reduces perianal scarring, and effectively protects anal function.

17.

Purpose

To explore the impact of length-biased sampling on the evaluation of risk factors of nosocomial infections (NIs) in point-prevalence studies.

Methods

We used cohort data with full information including the exact date of the NI and mimicked an artificial 1-day prevalence study by picking a sample from this cohort study. Based on the cohort data, we studied the underlying multistate model which accounts for NI as an intermediate and discharge/death as competing events. Simple formulas are derived to display relationships between risk, hazard, and prevalence odds ratios.

Results

Due to length-biased sampling, long stay and thus sicker patients are more likely to be sampled. In addition, patients with NIs usually stay longer in hospital. We explored mechanisms that are—due to the design—hidden in prevalence data. In our example, we showed that prevalence odds ratios were usually less pronounced than risk odds ratios but more pronounced than hazard ratios.

Conclusions

Thus, to avoid misinterpretation, knowledge of the mechanisms from the underlying multistate model is essential for the interpretation of risk factors derived from point-prevalence data.
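The length-biased sampling mechanism described in the Results can be mimicked in a few lines: in a simulated cohort, a 1-day prevalence survey samples each patient with probability proportional to length of stay, so long-stay (and thus sicker) patients are over-represented. The length-of-stay distribution is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Hypothetical cohort: length of hospital stay in days (right-skewed, as is typical)
los = rng.gamma(shape=1.5, scale=6.0, size=n).round().clip(min=1)

# A 1-day point-prevalence survey intersects a stay with probability proportional
# to its length: a 10-day stay is 10 times as likely to overlap the survey day
# as a 1-day stay.
sampled = rng.random(n) < los / los.max()

print(los.mean(), los[sampled].mean())   # the prevalence sample over-represents long stays
```

The mean stay in the prevalence sample is markedly longer than in the full cohort, which is the hidden mechanism that makes prevalence odds ratios diverge from risk and hazard ratios.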

18.
In survival analyses, inverse‐probability‐of‐treatment (IPT) and inverse‐probability‐of‐censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time‐dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.

19.
Background Bioavailability is a critical feature in assessing the role of micronutrients in human health. Although acute postprandial responses do not predict long-term responses, it is accepted that the study of triglyceride-rich lipoprotein fractions reflects newly absorbed lipids from recent meals. Aim To assess the predictive value of a 3-point versus a 7-point postprandial response (area under the curve, AUC) in nutrient bioavailability studies in humans. Methods We used results from a human bioavailability study (n = 19) consisting of a single-dose pharmacokinetic assay of three types of commercially available vitamin A and E fortified milk. Results Correlation coefficients between the 3-point AUC (AUCp, predictive) and the 7-point AUC (AUCc, conventional) ranged from r = 0.81 (P < 0.001) for vitamin A-fortified skim milk to r = 0.95 (P < 0.001) for whole milk. Bland-Altman plots showed good agreement between the two methods, with 95% of the differences within the concordance limits. More than 90% of subjects were correctly classified in the same or an adjacent quartile, and the calculated relative absorption of vitamin A from the foods was, on average, <5% lower using AUCp than using AUCc. Conclusion The 3-point approach may be a reliable alternative for assessing the relative postprandial lipid response in human bioavailability studies. Nevertheless, since this approach has been studied with one nutrient (i.e. preformed vitamin A) and one type of food (i.e. milk), its applicability to other nutrients and foods should be tested.
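The 3-point versus 7-point AUC comparison can be sketched with the trapezoidal rule. The sampling times and concentrations below are invented for illustration, not the study's milk data, and which three time points best approximate the full curve is itself a design choice.

```python
import numpy as np

t7 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])    # sampling times (h)
conc = np.array([0.8, 1.4, 2.1, 2.6, 2.3, 1.6, 1.0])  # concentration (arbitrary units)

def trapezoid_auc(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum((c[1:] + c[:-1]) * np.diff(t) / 2.0))

auc7 = trapezoid_auc(t7, conc)            # conventional 7-point AUC
idx = [0, 3, 6]                           # keep only t = 0, 3 and 8 h
auc3 = trapezoid_auc(t7[idx], conc[idx])  # reduced 3-point AUC

print(auc7, auc3)
```

For this invented curve the 3-point AUC (14.10) is within 1% of the 7-point AUC (14.15); how well the reduction holds up in general is exactly what the study's correlation and Bland-Altman analyses test.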

20.
Vaccine, 2015, 33(32): 3976-3982
Background and aims Simplified vaccine preparation steps would save time and reduce potential immunisation errors. The aim of the study was to assess vaccine preparation time with a fully-liquid hexavalent vaccine (DTaP-IPV-HB-PRP-T, Sanofi Pasteur MSD) versus a non-fully liquid hexavalent vaccine that needs reconstitution (DTPa-HBV-IPV/Hib, GlaxoSmithKline Biologicals). Methods Ninety-six Health Care Professionals (HCPs) participated in a randomised, cross-over, open-label, time and motion study in Belgium (2014). HCPs prepared each vaccine in a cross-over manner with a wash-out period of 3–5 min. An independent nurse assessed preparation time and immunisation errors by systematic review of the videos. HCP satisfaction and preference were evaluated by a self-administered questionnaire. Results Average preparation time was 36 s for the fully-liquid vaccine and 70.5 s for the non-fully liquid vaccine; the time saved using the fully-liquid vaccine was 34.5 s (p < 0.001). Across 192 preparations, 57 immunisation errors occurred: 47 in the non-fully liquid vaccine group (including one missed reconstitution of the Hib component) and 10 in the fully-liquid vaccine group. 71.9% of HCPs were very or somewhat satisfied with the ease of handling of both vaccines; 66.7% and 67.7% were very or somewhat satisfied with the speed of preparation of the fully-liquid and non-fully liquid vaccines, respectively. Almost all HCPs (97.6%) stated they would prefer the fully-liquid vaccine in their daily practice. Conclusions Preparation of a fully-liquid hexavalent vaccine can be completed in half the time needed for a non-fully liquid vaccine, and the simplicity of its preparation helps reduce immunisation errors.
