Similar Articles
20 similar articles found.
1.
INTRODUCTION: There is much subjective discussion, but little empirical evidence, on how students approach the learning of anatomy. AIMS: Students' perceptions of successful approaches to learning anatomy were correlated with their own approaches to learning, quality of learning and grades. METHODS: First-year medical students (n = 97) studying anatomy at an Australian university completed an online survey that included a version of the Study Process Questionnaire (SPQ), which measures approaches to learning. The quality of students' written assessment was rated using the Structure of Observed Learning Outcomes (SOLO) taxonomy. Final examination data were used for correlation with approaches and quality of learning. RESULTS: Students perceived successful learning of anatomy as hard work, involving various combinations of memorisation, understanding and visualisation. Students' surface approach (SA) scores (mean 30 ± 3.4) and deep approach (DA) scores (mean 31 ± 4.2) reflected the use of both memorisation and understanding as key learning strategies in anatomy. There were significant correlations between SOLO ratings and DA scores (r = 0.24, P < 0.01), between SA scores and final grades (r = -0.30, P < 0.01) and between SOLO ratings and final grades (r = 0.61, P < 0.01) in the subject. CONCLUSIONS: Approaches to learning correlate positively with the quality of learning. Successful learning of anatomy requires a balance of memorisation, understanding and visualisation. The interrelationships between these three strategies for learning anatomy in medicine and other disciplines require further investigation.
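A minimal sketch of the kind of correlational analysis reported above, using Pearson correlations between per-student scores. The data frame and its values are synthetic placeholders, not the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-student records (synthetic, for illustration only)
students = pd.DataFrame({
    "deep_approach":    [28, 35, 31, 26, 38, 33],   # SPQ deep-approach scores
    "surface_approach": [33, 27, 30, 34, 25, 29],   # SPQ surface-approach scores
    "solo_rating":      [2, 4, 3, 2, 5, 4],         # SOLO taxonomy level
    "final_grade":      [61, 82, 70, 55, 90, 78],   # final examination mark (%)
})

# The three correlations of the kind reported in the abstract
for x, y in [("solo_rating", "deep_approach"),
             ("surface_approach", "final_grade"),
             ("solo_rating", "final_grade")]:
    r, p = pearsonr(students[x], students[y])
    print(f"{x} vs {y}: r = {r:.2f}, P = {p:.3f}")
```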

2.
3.
4.
5.
6.
7.
8.
9.
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs widely applied in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring the matching is known to introduce serious estimation bias. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway, for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified that were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
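A minimal sketch, not the authors' implementation, of a network-penalized conditional logistic regression for 1:1 matched pairs. It uses the standard fact that for 1:1 matching the conditional likelihood reduces to an intercept-free logistic model on the case-minus-control covariate differences, and takes the network penalty to be a graph-Laplacian quadratic form over linked CpG sites; the penalty form, function names and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_network_clogit(X_case, X_ctrl, L, lam):
    """X_case, X_ctrl: (n_pairs, p) methylation values; L: (p, p) graph Laplacian."""
    D = X_case - X_ctrl                       # within-pair covariate differences

    def neg_penalized_loglik(beta):
        eta = D @ beta
        # 1:1 conditional log-likelihood: sum of log sigmoid(eta)
        loglik = -np.sum(np.logaddexp(0.0, -eta))
        # Laplacian penalty beta' L beta encourages smoothness over linked sites
        return -loglik + lam * (beta @ L @ beta)

    res = minimize(neg_penalized_loglik, np.zeros(D.shape[1]), method="L-BFGS-B")
    return res.x

# Toy usage: 3 CpG sites, the first two linked in the network
rng = np.random.default_rng(0)
X_case, X_ctrl = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])   # adjacency of the CpG graph
L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
beta_hat = fit_network_clogit(X_case, X_ctrl, L, lam=0.5)
```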

10.
11.
Statistical methods used in the spatio-temporal surveillance of disease can identify abnormal clusters of cases but typically do not provide a measure of the degree of association between one case and another. Such a measure would facilitate the assignment of cases to common groups and would be useful in outbreak investigations of diseases that potentially share the same source. This paper presents a model-based approach which, on the basis of available location data, provides a measure of the strength of association between cases in space and time, and which is used to designate and visualise the most likely groupings of cases. The method was developed as a prospective surveillance tool to signal potential outbreaks, but it may also be used to explore groupings of cases in outbreak investigations. We demonstrate the method using a historical case series of Legionnaires' disease amongst residents of England and Wales. Copyright © 2013 John Wiley & Sons, Ltd.
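An illustrative sketch only: a simple kernel-based space-time association score between case pairs, with cases grouped by thresholding the score and taking connected components. The paper's model is more elaborate; the Gaussian kernel form, bandwidths and threshold here are assumptions for demonstration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def group_cases(xy, t, h_space=2.0, h_time=7.0, threshold=0.1):
    """xy: (n, 2) case locations (km); t: (n,) onset times (days)."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)   # squared distances
    dt2 = (t[:, None] - t[None, :]) ** 2                    # squared time lags
    # pairwise association score in [0, 1], decaying in space and time
    score = np.exp(-d2 / (2 * h_space**2) - dt2 / (2 * h_time**2))
    linked = csr_matrix(score > threshold)                  # strong-link graph
    n_groups, labels = connected_components(linked, directed=False)
    return labels

# Two tight space-time pairs far apart yield two groups: [0 0 1 1]
xy = np.array([[0.0, 0.0], [0.5, 0.3], [10.0, 10.0], [10.2, 9.8]])
t = np.array([0.0, 2.0, 30.0, 31.0])
print(group_cases(xy, t))
```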

12.
13.
14.
Evaluation of a surgical simulator for learning clinical anatomy
BACKGROUND: New techniques in imaging and surgery have made 3-dimensional anatomical knowledge an increasingly important goal of medical education. This study compared the efficacy of 2 supplemental, self-study methods for learning shoulder joint anatomy to determine which method provides greater transfer of learning to the clinical setting. METHODS: Two groups of medical students studied shoulder joint anatomy using either a second-generation virtual reality surgical simulator or images from a textbook. They were then asked to identify anatomical structures of the shoulder joint as they appeared in a videotape of a live arthroscopic procedure. RESULTS: The mean identification scores, out of a possible 7, were 3.1 ± 1.3 for the simulator group and 2.9 ± 1.5 for the textbook group (P = 0.70). Student ratings of the 2 methods on a 5-point Likert scale differed significantly. The simulator group rated the simulator more highly as an effective learning tool than the textbook group rated the textbook (means of 3.2 ± 0.7 and 2.6 ± 0.5, respectively; P = 0.02). Furthermore, the simulator group was more willing to use the simulator as a learning tool, were it available, than the textbook group was to use the textbook (means of 4.0 ± 1.2 and 3.0 ± 0.9, respectively; P = 0.02). CONCLUSION: Our results show that this surgical simulator is at least as effective as textbook images for learning anatomy and could enhance student learning through increased motivation. These findings provide insight into simulator development and strategies for learning anatomy. Possible explanations and future research directions are discussed.
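A hedged sketch of the between-group comparison reported above, of the kind produced by a two-sample t-test on Likert ratings; the scores below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic 5-point Likert ratings of each method as a learning tool
simulator_rating = np.array([3, 4, 3, 3, 4, 2, 3, 4])
textbook_rating  = np.array([3, 2, 3, 2, 3, 2, 3, 3])

t, p = ttest_ind(simulator_rating, textbook_rating)
print(f"t = {t:.2f}, P = {p:.3f}")
```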

15.
16.

OBJECTIVE: To assess a quality improvement disparity-reduction intervention and its sustainability. DATA SOURCES/STUDY SETTING: Electronic health records and the Quality Index database of Clalit Health Services in Israel (2008–2012). STUDY DESIGN: Interrupted time series with pre-, during- and post-intervention disparity measurement between 55 target clinics (serving approximately 400,000 mostly low-socioeconomic, minority residents) and all other (126) clinics. DATA COLLECTION/EXTRACTION METHODS: Data on a 7-indicator Quality Indicator Disparity Scale (QUIDS-7) and on a 61-indicator scale (QUIDS-61). PRINCIPAL FINDINGS: The gap between intervention and non-intervention clinics decreased by 66.7 percent for QUIDS-7 and by 70.4 percent for QUIDS-61. Disparity reduction continued (18.2 percent) during the follow-up period. CONCLUSIONS: Quality improvement can achieve significant reduction in disparities across a wide range of clinical domains, which can be sustained over time.
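A minimal interrupted time-series sketch (an assumed illustration, not the study's analysis code): regress the quality gap between target and other clinics on time, a post-onset indicator and time since onset, so the coefficients capture level and slope changes at the intervention.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

periods = np.arange(20)                       # e.g. quarterly measurements
intervention_start = 8
rng = np.random.default_rng(1)
# Synthetic gap series that narrows after the intervention begins
gap = (10 - 0.6 * np.clip(periods - intervention_start, 0, None)
       + rng.normal(0, 0.5, periods.size))

df = pd.DataFrame({
    "gap": gap,
    "time": periods,
    "post": (periods >= intervention_start).astype(int),
})
df["time_since"] = np.clip(df["time"] - intervention_start, 0, None)

fit = smf.ols("gap ~ time + post + time_since", data=df).fit()
print(fit.params)   # 'time_since' estimates the post-intervention slope change
```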

17.
The high-dimensional propensity score (hdPS) algorithm was proposed to automate confounding adjustment in problems involving large healthcare databases. It has been evaluated in comparative effectiveness research (CER) with point treatments to handle baseline confounding through matching or covariance adjustment on the hdPS. In observational studies with time-varying interventions, such hdPS approaches are often inadequate to handle time-dependent confounding and selection bias. Inverse probability weighting (IPW) estimation to fit marginal structural models can adequately handle these biases under the fundamental assumption of no unmeasured confounders. Whether this assumption holds depends on the selection of an adequate set of covariates for bias adjustment. We describe the application and performance of the hdPS algorithm to improve covariate selection in CER with time-varying interventions based on IPW estimation, and we explore stabilization of the resulting estimates using Super Learning. The evaluation is based on both the analysis of electronic health records data in a real-world CER study of adults with type 2 diabetes and a simulation study. This report (i) establishes the feasibility of IPW estimation with the hdPS algorithm based on large electronic health records databases; (ii) demonstrates little impact on inferences when the set of expert-selected covariates is supplemented using the hdPS algorithm in a setting with extensive background knowledge; (iii) supports the application of the hdPS algorithm in discovery settings with little background knowledge or limited data availability; and (iv) motivates the application of Super Learning to stabilize effect estimates based on the hdPS algorithm. Copyright © 2014 John Wiley & Sons, Ltd.
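A hedged sketch of stabilized inverse probability weights for a time-varying binary treatment, of the kind used when fitting marginal structural models. The data layout (one row per subject-interval, sorted by time within subject), the column names and the choice of covariate list (expert-selected and/or hdPS-selected columns) are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def stabilized_weights(df, covariates):
    """df: rows per subject-interval, sorted by time within 'id', with columns
    'id', 'a' (current treatment), 'a_prev' (prior treatment) and covariates."""
    # Denominator: treatment probability given history and covariates
    denom = LogisticRegression(max_iter=1000).fit(df[covariates + ["a_prev"]], df["a"])
    # Numerator: treatment probability given treatment history only
    numer = LogisticRegression(max_iter=1000).fit(df[["a_prev"]], df["a"])

    p_den = np.where(df["a"] == 1,
                     denom.predict_proba(df[covariates + ["a_prev"]])[:, 1],
                     denom.predict_proba(df[covariates + ["a_prev"]])[:, 0])
    p_num = np.where(df["a"] == 1,
                     numer.predict_proba(df[["a_prev"]])[:, 1],
                     numer.predict_proba(df[["a_prev"]])[:, 0])

    # Stabilized weight: cumulative product of interval ratios within subject
    ratio = pd.Series(p_num / p_den, index=df.index)
    return ratio.groupby(df["id"]).cumprod()
```

The resulting weights would then be passed to a weighted outcome regression to estimate the marginal structural model parameters.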

18.
We propose three-sided testing, a testing framework for the simultaneous testing of inferiority, equivalence and superiority in clinical trials, which controls for multiple testing using the partitioning principle. Like the usual two-sided testing approach, this approach is completely symmetric in the two treatments compared. Still, because the hypotheses of inferiority and superiority are tested with one-sided tests, the proposed approach has more power than the two-sided approach to infer non-inferiority or non-superiority. Applied to the classical point null hypothesis of equivalence, the three-sided testing approach shows that it is sometimes possible to make an inference on the sign of the parameter of interest even when the null hypothesis itself cannot be rejected. Relationships with confidence intervals are explored, and the effectiveness of the three-sided testing approach is demonstrated in a number of recent clinical trials. Copyright © 2010 John Wiley & Sons, Ltd.
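A hedged worked sketch of the idea for a mean difference theta with equivalence margin delta, using normal-theory one-sided tests: the parameter space is partitioned into inferiority (theta <= -delta), equivalence (-delta < theta < delta) and superiority (theta >= delta), and each partition null gets its own test, so no multiplicity correction is needed by the partitioning principle. The specific statistics and the interval-null p-value are illustrative assumptions, not the paper's exact procedure.

```python
from scipy.stats import norm

def three_sided(est, se, delta, alpha=0.05):
    z_inf = (est + delta) / se    # large => evidence against inferiority
    z_sup = (est - delta) / se    # small => evidence against superiority
    p_not_inferior = norm.sf(z_inf)     # tests H: theta <= -delta
    p_not_superior = norm.cdf(z_sup)    # tests H: theta >= delta
    # Conservative p-value for the interval null H: -delta < theta < delta
    p_not_equivalent = min(1.0, 2 * min(norm.cdf(z_inf), norm.sf(z_sup)))
    return {
        "conclude theta > -delta (non-inferiority)": p_not_inferior < alpha,
        "conclude theta <  delta (non-superiority)": p_not_superior < alpha,
        "conclude |theta| >= delta (non-equivalence)": p_not_equivalent < alpha,
    }

# Both one-sided nulls rejected => equivalence concluded
print(three_sided(est=0.1, se=0.2, delta=0.5))
```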

19.
We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, the energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value for both the active-behavior proportion and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal activity patterns. The second part models the proportions when they lie in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate, with Box-Cox transformations, when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points, with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to the physical activity data and is evaluated empirically by a simulation study.
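A schematic sketch (assumptions throughout) of the 3-part decomposition only: each interval is classified into one of the three observed patterns from its activity proportion, a continuation-ratio model expresses the ordinal pattern probabilities, and the positive energy expenditure rates are Box-Cox transformed toward normality. The functional curves and random-effect structure of the full model are omitted for brevity.

```python
import numpy as np
from scipy.stats import boxcox

def activity_pattern(prop):
    """0 = inactive, 1 = partially active, 2 = completely active."""
    return np.where(prop == 0, 0, np.where(prop < 1, 1, 2))

def continuation_ratio_probs(eta1, eta2):
    """Pattern probabilities from two logits: eta1 for 'inactive vs rest',
    eta2 for 'partial vs complete' given not inactive."""
    p0 = 1 / (1 + np.exp(-eta1))
    p1 = (1 - p0) / (1 + np.exp(-eta2))
    return p0, p1, 1 - p0 - p1

prop = np.array([0.0, 0.4, 1.0, 0.7, 0.0])     # proportion active per interval
energy = np.array([0.0, 2.1, 6.3, 3.0, 0.0])   # energy expenditure rate
pattern = activity_pattern(prop)
p_inactive, p_partial, p_complete = continuation_ratio_probs(eta1=-1.0, eta2=0.5)
energy_bc, lam = boxcox(energy[energy > 0])    # transform positive rates only
```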

20.
Drug-drug interactions (DDIs) are a common cause of adverse drug events (ADEs). Electronic medical record (EMR) databases and the FDA's Adverse Event Reporting System (FAERS) database are the major data sources for mining and testing ADE-associated DDI signals. Most DDI data mining methods focus on pair-wise drug interactions, and methods to detect high-dimensional DDIs in medical databases are lacking. In this paper, we propose 2 novel mixture drug-count response models for detecting high-dimensional drug combinations that induce myopathy; the "count" is the number of drugs in a combination. One model is the fixed-probability mixture drug-count response model with a maximum risk threshold (FMDRM-MRT). The other is the count-dependent-probability mixture drug-count response model with a maximum risk threshold (CMDRM-MRT), in which the mixture probability depends on the count. Compared with the previous mixture drug-count response model (MDRM) developed by our group, these 2 new models show better likelihoods in detecting high-dimensional drug combinatory effects on myopathy. CMDRM-MRT identified and validated (54; 374; 637; 442; 131) 2-way to 6-way drug interactions, respectively, that induce myopathy in both the EMR and FAERS databases. We further demonstrate that FAERS data capture a much higher maximum myopathy risk than EMR data do. The consistency of the 2 mixture models' parameter and local false discovery rate estimates is evaluated through statistical simulation studies.
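A much-simplified sketch (all functional forms and data are assumptions, not the paper's models): for a combination of n drugs, the myopathy risk is a mixture of a background risk p0 and an elevated risk that rises with n but is capped at a maximum risk threshold p_max, with a count-dependent mixing probability pi(n), fitted by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(theta, n_drugs, cases, totals):
    a, b, logit_p0, logit_pmax, slope = theta
    pi_n = expit(a + b * n_drugs)             # count-dependent mixing probability
    p0 = expit(logit_p0)                      # background myopathy risk
    p_max = expit(logit_pmax)                 # maximum risk threshold
    p_elev = p_max * expit(slope * n_drugs)   # elevated risk, capped at p_max
    risk = (1 - pi_n) * p0 + pi_n * p_elev
    # Binomial log-likelihood over combination sizes
    return -np.sum(cases * np.log(risk) + (totals - cases) * np.log1p(-risk))

# Toy data: myopathy cases / exposed counts by combination size (2 to 6 drugs)
n_drugs = np.array([2, 3, 4, 5, 6])
cases   = np.array([30, 55, 80, 95, 99])
totals  = np.array([1000, 1000, 1000, 1000, 1000])

fit = minimize(neg_loglik, x0=np.zeros(5), args=(n_drugs, cases, totals),
               method="Nelder-Mead")
print(fit.x)
```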
