Similar Literature
 20 similar documents retrieved.
1.
Objective: To explore the role of a combined problem-based learning (PBL) and case-based learning (CBL) teaching model in teaching the clinical diagnosis and treatment of bronchiolitis to pediatric clerkship students. Methods: Sixty five-year-program general-practice clerkship students from the 2013 cohort of the Department of Clinical Medicine, Guilin Medical University, were randomly selected and randomly divided into an observation group and a control group of 30 each. The control group received traditional teaching; the observation group received combined PBL and CBL teaching. Learning outcomes before and after teaching were compared between the two groups. Results: In the control group, theoretical examination results were excellent in 10 students, good in 15, and poor in 5; in the observation group, excellent in 22, good in 8, and poor in 0. The observation group was significantly better than the control group in the clinical diagnosis and treatment of bronchiolitis (P < 0.05), and in mastery of basic knowledge of bronchiolitis as well as of its clinical diagnosis, treatment, high-risk factors, and prevention (P < 0.05). Conclusion: Implementing a combined PBL and CBL model in clinical teaching helps general-practice clerkship students comprehensively master the clinical diagnosis and treatment of bronchiolitis.

2.
In recent years, increasingly mature deep learning techniques have enabled automated, intelligent operation in many fields. In health care, with the digitization of medical records and the growth of internet-based medicine, computer-aided diagnosis systems built on convolutional neural networks that integrate localization, segmentation, and classification have become an inevitable trend in new models of medical practice. Medical image segmentation is both a key focus and a difficulty of automated medical image analysis, and many problems remain to be solved. This article systematically reviews progress in medical image segmentation from three perspectives: the characteristics of clinical medical images, mainstream deep learning segmentation networks, and clinical applications of medical image segmentation networks. It further analyzes the current state, challenges, and future directions of convolutional neural networks for medical image segmentation tasks.
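Segmentation networks such as those surveyed above are typically evaluated with overlap metrics like the Dice coefficient. A minimal NumPy sketch (illustrative only, not taken from the reviewed paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 1-D "masks": the prediction overlaps 2 of the 3 target pixels.
pred = np.array([1, 1, 1, 0, 0])
target = np.array([0, 1, 1, 1, 0])
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

The same function applies unchanged to 2-D or 3-D masks, since NumPy's logical operations broadcast over any shape.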

3.
4.
Objective: To explore the effect of micro-videos combined with team-based learning (TBL) in teaching cardiac physical examination to clinical medical students. Methods: A controlled trial was conducted. From February to April 2022, 54 five-year-program undergraduate clinical medicine students of the 2019 cohort at Air Force Medical University were randomly selected and divided, using a random number table, into a trial group and a control group of 27 each. The trial group was taught with micro-videos combined with TBL; the control group was taught with the traditional diagnostics method for cardiac physical examination. After the course, teaching quality and effectiveness were assessed by a practical skills examination, a theoretical examination, and a questionnaire. Results: The trial group outperformed the control group in both the cardiac examination practical skills test and the theoretical test (P < 0.05). Student self-assessment showed that the trial group rated mastery of key theoretical points, mastery of practical skills, stimulation of learning interest, self-directed learning, retention, analysis and problem-solving, communication, clinical reasoning, teamwork, and satisfaction with the teaching model higher than the control group did (P < 0.05). Instructor evaluations showed that the trial group scored higher in classroom engagement, clinical reasoning, active learning, communication and comprehension, and problem identification and solving (P < 0.05). Conclusion: The micro-video plus TBL teaching model can effectively improve...

5.
Objective: To use data mining techniques to summarize prescription patterns of Chinese herbal medicines that improve learning and memory. Methods: Literature from the past 10 years on Chinese herbal treatment of learning and memory disorders was collected; a prescription database was built in Excel, and frequency analysis, association analysis, and cluster analysis were performed in turn with SPSS 17.0 and Clementine 12.0. Results: A total of 297 relevant articles were retrieved, involving 174 herbs in 19 major and 39 minor categories. The three categories containing the most herbs were tonics, heat-clearing herbs, and blood-activating/stasis-resolving herbs; the five most frequently used herbs were Poria (fuling), Acorus tatarinowii (shichangpu), ginseng (renshen), Ligusticum chuanxiong (chuanxiong), and Angelica sinensis (danggui). Association analysis showed the strongest correlation between prepared Rehmannia (shudihuang) and Cornus (shanzhuyu). Cluster analysis identified five core herb groups and two usage types: key herbs and pattern-differentiated herbs. Conclusion: The common combination patterns of herbs used to improve learning and memory can inform clinical herbal prescriptions for memory disorders.
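The frequency and co-occurrence steps of such an analysis can be sketched with Python's standard library (the herb names and prescriptions below are toy entries for illustration; the study itself used Excel, SPSS, and Clementine):

```python
from collections import Counter
from itertools import combinations

# Toy prescription database: each prescription is a set of herbs.
prescriptions = [
    {"fuling", "renshen", "shichangpu"},
    {"fuling", "danggui", "chuanxiong"},
    {"shudihuang", "shanzhuyu", "fuling"},
]

# Frequency analysis: how often each herb appears across prescriptions.
herb_counts = Counter(h for p in prescriptions for h in p)
print(herb_counts.most_common(1))  # fuling appears in all three

# A crude association signal: co-occurrence counts of herb pairs,
# the raw input to association-rule measures such as support.
pair_counts = Counter(
    pair for p in prescriptions for pair in combinations(sorted(p), 2)
)
```

Real association analysis would go on to compute support, confidence, and lift from these counts.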

6.
Objective: To construct and validate a deep learning model for automatic recognition of early gastric cancer, with the aim of improving its recognition and diagnosis. Methods: A total of 5,159 gastroscopic images acquired between May 2014 and December 2016 were selected from the database of the Digestive Endoscopy Center of Changhai Hospital, including 1,000 images of early gastric cancer and 4,159 images of benign lesions or normal mucosa. First, 4,449 images (768 early gastric cancer; 3,681 benign or normal) were used to train the deep learning model. The remaining 710 images were then used for validation and were also read by four endoscopists for comparison. Results: For the diagnosis of early gastric cancer, the model achieved an accuracy of 89.4% (635/710), a sensitivity of 88.8% (206/232), and a specificity of 89.7% (429/478), with a diagnosis time of (0.30 ± 0.02) s per image, all better than the four endoscopists it was compared against. Conclusion: The deep learning model constructed in this study diagnoses early gastric cancer with high accuracy, specificity, and sensitivity, and can assist endoscopists in real-time diagnosis during gastroscopy.
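The reported accuracy, sensitivity, and specificity follow directly from the validation counts given in the abstract (232 cancer images and 478 non-cancer images):

```python
# Counts reported in the abstract: 206/232 cancers and 429/478
# non-cancers were classified correctly on the 710-image validation set.
tp, fn = 206, 232 - 206   # early gastric cancer images
tn, fp = 429, 478 - 429   # benign/normal images

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"accuracy={accuracy:.1%} sensitivity={sensitivity:.1%} specificity={specificity:.1%}")
# accuracy=89.4% sensitivity=88.8% specificity=89.7%
```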

7.
Objective: To apply deep convolutional neural networks to segmenting gastric ulcer lesions in digestive endoscopy images, and to examine how dilated (atrous) convolution improves model performance compared with ordinary 2D convolution. Methods: To address problems common in endoscopic images, such as bubbles, specular reflections, and instruments in view, the raw images were cleaned, preprocessed, and augmented using methods including the Sobel operator. Models were trained in PyTorch: the preprocessed images were fed to several convolutional neural network models, which segmented and labeled the gastric ulcer lesion regions. Results: Manually acquired digestive endoscopy images contain considerable noise; data augmentation effectively improved segmentation precision. The best model was DeepLab V3 Plus, which achieved an accuracy of 89.667%, a mean intersection-over-union of 88.478%, and a frequency-weighted intersection-over-union of 81.665% on gastric ulcer lesion regions. Conclusion: Preprocessing effectively removes noise from the raw endoscopy dataset and aids training. Data augmentation improves generalization and prevents overfitting during training. Dilated convolution and the DeepLab V3 convolutional neural network effectively improve gastric ulcer lesion segmentation in digestive endoscopy images.
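The mean intersection-over-union reported for DeepLab V3 Plus is the per-class IoU averaged over classes. A NumPy sketch on a toy two-class mask (illustrative, not the study's evaluation code):

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes: mean over c of |P_c ∩ T_c| / |P_c ∪ T_c|."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes: background (0) vs. ulcer (1).
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(round(mean_iou(pred, target, 2), 3))  # (0.5 + 2/3) / 2 ≈ 0.583
```

The frequency-weighted variant replaces the plain mean with a weighted mean, where each class is weighted by its pixel frequency in the target.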

8.
Objective: To explore the use of deep learning-based artificial intelligence software, ENDOANGEL (内镜精灵), in postgraduate clinical teaching under the "integration of medical education and practice" policy, taking gastroscopy training of gastroenterology professional master's students as an example. Methods: Thirty-two gastroenterology professional postgraduates of the Fourth Clinical College of Hubei University of Medicine, training at Xiangyang Hospital Affiliated to Hubei University of Medicine from September 2018 to September 2022, were divided by random number table into two groups: one trained in a gastroscopy room assisted by the AI computer-aided system ENDOANGEL (AI group) and one in an ordinary gastroscopy room (non-AI group). ENDOANGEL automatically monitors 26 anatomical landmarks of the upper gastrointestinal tract and records standard photographs, monitoring gastric blind spots in real time and assisting lesion recognition. The two groups were compared on final training assessment scores, procedure success rate, lesion detection rate, patient pain scores during gastroscopy, and teaching satisfaction. Results: Compared with the non-AI group, the AI group had higher final assessment scores [(93.5 ± 2.6) vs. (87.5 ± 3.5)] and a higher lesion detection rate [89 (80, 100) vs. 60 (43, 78)], with statistically significant differences (P < 0.001). Mean gastroscopy time in the AI group was (7.1 ± 1.2) min, significantly shorter than the (8.2 ± 1.2) min of the non-AI group (P = 0.008). The mean visual analog scale (VAS) pain score of patients during gastroscopy in the AI group was 3...

9.
Objective: Traditional NICE classification depends on the endoscopist's subjective judgment and experience, introducing subjectivity and uncertainty. This study aimed to develop a few-shot learning model for NICE classification of colorectal polyps. Methods: A total of 414 colorectal polyp NICE-classification images from the endoscopy centers of the First Affiliated Hospital of Soochow University and Suzhou Kowloon Hospital, Shanghai Jiao Tong University School of Medicine, were included. Based on three model architectures (MobileNetV2, ResNet50, Xception) and two-stage transfer learning, both conventional deep learning classifiers and metric learning-based few-shot classifiers (3-way, 3-shot) were developed, and gradient-weighted class activation mapping was used to visualize and explain the few-shot models' classification results. The models were evaluated on a test set, and classification results from senior and junior endoscopists on the same test set were collected for comparison to further assess the models' classification ability. Results: The conventional deep learning classifiers performed moderately, with a mean accuracy of 0.638. The few-shot classifiers built on the three feature-extraction architectures all performed well, with a mean accuracy of 0.827. Senior and junior endoscopists also judged well, with a mean accuracy of 0.824. Conclusion: For colorectal polyp NICE-classification images, a few-shot learning algorithm trained on a small sample demonstrates performance superior to conventional deep...
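The metric-learning idea behind a (3-way, 3-shot) few-shot classifier can be sketched as prototypical nearest-class-mean classification in embedding space. The 2-D "embeddings" below are made-up toy values; in the study, embeddings come from the CNN backbones:

```python
import numpy as np

def classify_by_prototype(support: dict, query: np.ndarray) -> str:
    """Assign the query to the class whose support-set mean (prototype)
    is nearest in Euclidean distance -- the core of metric-based
    few-shot classification."""
    prototypes = {label: np.mean(examples, axis=0)
                  for label, examples in support.items()}
    return min(prototypes, key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

# Toy 3-way, 3-shot episode: three classes, three support embeddings each.
support = {
    "NICE-1": np.array([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0]]),
    "NICE-2": np.array([[1.0, 1.1], [1.1, 1.0], [1.0, 1.0]]),
    "NICE-3": np.array([[2.0, 0.0], [2.1, 0.1], [2.0, 0.1]]),
}
print(classify_by_prototype(support, np.array([0.95, 1.05])))  # NICE-2
```

Because only class means and distances are needed, the approach works with very small support sets, which is what makes it attractive for the 414-image setting described above.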

10.
Objective: To develop a deep learning-based artificial intelligence quality-control system for real-time quality control of magnetically controlled capsule gastroscopy. Methods: Image data of 320 patients examined by magnetically controlled capsule gastroscopy between January 2019 and January 2020 at three endoscopy centers (Qilu Hospital of Shandong University, Shandong Meizhao Health Technology, and Juxian Meinian Onehealth) were retrospectively included. After annotation by endoscopists, an assisted quality-control system was developed based on ResNet-50 to monitor indicators such as examination completeness, site cleanliness, and examination time in real time, feed examination quality back to the operator, and standardize the examination process, thereby achieving quality control of the procedure. Results: The system identified six gastric anatomical sites with an accuracy of 92.30%–98.28% and four cleanliness grades with an accuracy of 91.46%–95.37%; timing of external magnetic-field operation was 100% accurate. The quality-control system has been integrated into the magnetic control console and runs safely. Conclusion: The assisted quality-control system achieves quality control of magnetically controlled capsule gastroscopy, feeding examination quality back to the operator in real time and standardizing the endoscopic procedure.

11.
Federated learning (FL) enables edge devices, such as Internet of Things devices (e.g., sensors), servers, and institutions (e.g., hospitals), to collaboratively train a machine learning (ML) model without sharing their private data. FL requires devices to exchange their ML parameters iteratively, and thus the time it requires to jointly learn a reliable model depends not only on the number of training steps but also on the ML parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a multitude of participating devices over resource-limited communication networks, for example, wireless networks with limited bandwidth and power. Therefore, the repeated FL parameter transmission from edge devices induces a notable delay, which can be larger than the ML model training time by orders of magnitude. Hence, communication delay constitutes a major bottleneck in FL. Here, a communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission. To further reduce the FL convergence time, a quantization method is proposed to reduce the volume of the model parameters exchanged among devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve the identification accuracy and convergence time by up to 3.6% and 87% compared to standard FL.

Machine learning (ML) uses data to realize intelligent and autonomous decision-making and inference. ML algorithms have been used in a wide variety of areas, such as computer vision, natural language processing, medical imaging, and communications (1–4). Data are often collected on devices at the edges of networks: Images and text messages are often generated and stored on smartphones; biomedical signals are collected by medical and wearable devices, and often stored on hospital servers; various forms of signals are recorded by Internet of Things systems and sensors. As massive amounts of data are typically required to train an ML model, such as a deep neural network, centralized ML algorithms must collect training data from edge devices for training purposes. For example, to train an ML model for medical diagnosis, a central controller (CC) must collect the medical data from multiple hospitals. Nonetheless, in some applications, such as the aforementioned example of training an ML diagnosis system for medical data, the edge devices may not be willing to share their data, due to privacy concerns and regulations. Furthermore, conveying large volumes of aggregated data by many mobile devices may induce a notable burden on the communication infrastructure. These considerations gave rise to the need for ML algorithms that train an ML model in a distributed fashion, such that edge devices can contribute to the learning procedure without sharing their data. Federated learning (FL), proposed in ref. 5, is a distributed learning algorithm that enables edge devices to jointly train a common ML model without being required to share their data. The FL procedure relies on the ability of each device to train an ML model locally, based on its data, while the devices iteratively exchange and synchronize their local ML model parameters with each other in a manner orchestrated by a CC unit (6). 
Due to its unique features, FL has been applied in a wide variety of practical applications such as mobile keyboard prediction [e.g., Google (7)], speaker and command recognition [e.g., Apple (8)], and data silos for insurance companies [e.g., WeBank (9)]. However, implementation of FL in practical applications faces several challenges which stem from its distributed operation, which is fundamentally different from traditional centralized ML algorithms (10). These challenges include 1) communication overhead induced by the repetitive model parameter exchanges; 2) device hardware heterogeneity, as each device may have different computational capabilities; 3) data heterogeneity, as each device can access a relatively small and personalized dataset and may thus train an ML model which is biased toward its own data; and 4) privacy and security issues, which follow from the fact that the learning procedure is carried out over multiple individual devices. Among these challenges, the communication overhead constitutes a major bottleneck due to the following reasons. First, FL is trained by an iterative process, and hence the time it takes to learn, that is, the convergence time, depends on both the optimization procedure, for example, the number of training steps, as well as the FL parameter transmission delay per training step. Second, FL training is potentially implemented by millions of edge devices, and each device must iteratively share its large-size FL parameters with a CC. Therefore, for FL implemented over a realistic network with limited computational and communication resources, its FL parameter transmission delay may be much larger than the time it takes the devices to train their local ML models. 
Therefore, it is necessary to design a communication-efficient FL framework that can significantly improve both convergence speed and model accuracy, thus allowing its application to training large-scale ML models over millions of edge devices. A number of existing works, including refs. 11–23, have studied the design of communication-efficient FL algorithms. However, the majority of these works focus on optimization of FL in a single aspect such as device selection and scheduling (11–13), FL model parameter update and transmission (14–18), or network resource management (20–22). In this study, we propose a communication-efficient FL framework that tackles multiple causes for communication delay, by jointly optimizing the device selection, FL model parameter transmission, and network resource management. In particular, we first propose a probabilistic device selection scheme which allows the devices that can significantly improve the convergence speed and training loss to have higher probabilities for ML model transmission. Then, a quantization method is designed to reduce the data size of the ML parameters exchanged among devices, thus improving FL convergence speed. In addition, for the selected devices, a wireless resource allocation scheme is developed to further improve their transmission data rates, thus reducing the FL transmission delay at each learning step. Finally, we analyze the convergence of our proposed FL framework. Simulation results based on real-world data demonstrate the performance of our proposed FL framework and its ability to allow accurate and fast collaborative training of multiple edge devices in a federated manner.
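The core federated loop — devices sampled according to selection probabilities, local updates averaged by a central controller — can be sketched as follows. This is a toy least-squares task with uniform, made-up selection probabilities, not the paper's actual selection, quantization, or resource-allocation scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each device holds private data for a shared linear model y = x @ w_true.
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ w_true))

def local_step(w, X, y, lr=0.1):
    """One local gradient step on a device's private least-squares loss."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Illustrative selection probabilities (uniform here; the paper biases
# these toward devices that most improve convergence and training loss).
probs = np.full(len(devices), 1 / len(devices))

w = np.zeros(2)
for _ in range(100):
    # Sample a subset of devices; only they transmit their updates this round.
    chosen = rng.choice(len(devices), size=3, replace=False, p=probs)
    updates = [local_step(w, *devices[i]) for i in chosen]
    w = np.mean(updates, axis=0)  # the CC averages the received models

print(np.round(w, 2))  # converges toward w_true
```

In the paper's framework, each transmitted update would additionally be quantized to reduce its size, and the selected devices would be allocated wireless resources to raise their transmission rates.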

12.
Learning motor skills commonly requires repeated execution to achieve gains in performance. Motivated by memory reactivation frameworks predominantly originating from fear-conditioning studies in rodents, which have extended to humans, we asked the following: Could motor skill learning be achieved by brief memory reactivations? To address this question, we had participants encode a motor sequence task in an initial test session, followed by brief task reactivations of only 30 s each, conducted on separate days. Learning was evaluated in a final retest session. The results showed that these brief reactivations induced significant motor skill learning gains. Nevertheless, the efficacy of reactivations was not consistent but determined by the number of consecutive correct sequences tapped during memory reactivations. Highly continuous reactivations resulted in higher learning gains, similar to those induced by full extensive practice, while lower continuity reactivations resulted in minimal learning gains. These results were replicated in a new independent sample of subjects, suggesting that the quality of memory reactivation, reflected by its continuity, regulates the magnitude of learning gains. In addition, the change in noninvasive brain stimulation measurements of corticospinal excitability evoked by transcranial magnetic stimulation over primary motor cortex between pre- and postlearning correlated with retest and transfer performance. These results demonstrate a unique form of rapid motor skill learning and may have far-reaching implications, for example, in accelerating motor rehabilitation following neurological injuries.

Motor skill learning, in healthy or clinical populations, usually requires extensive execution of a motor task to achieve gains in performance. These gains are accumulated in two different time windows: during skill execution (online learning, see refs. 1–3) and between sessions, possibly through offline consolidation processes (offline learning, see refs. 4–6). Interestingly, frameworks stemming from synaptic-level studies (7–9), and further supported by evidence in rodents (10–13) and humans (14–20), suggest that even fully consolidated memories, presumably stable, can be strengthened, updated, or degraded following their reactivation. Could such brief memory reactivations enhance motor skill performance without extensive practice over multiple sessions? Motivated by a proof-of-principle study in a different domain, visual perceptual learning (15), here, we tested whether brief reactivations of an encoded motor skill can induce learning gains. We additionally tested whether such a form of rapid learning generalizes to the untrained hand. The possibility of achieving skill improvements with a minimal amount of task execution could strongly impact skill learning research and have promising potential for the development of strategies to improve practice efficiency in daily life and following neurological impairments. To test the ability of memory reactivations to induce motor skill learning, participants practiced a motor sequence task (21) in which they were asked to type a five-digit sequence as fast and as accurately as they could (see Materials and Methods). The motor skill was first encoded in an initial test session, with a retest session conducted 1 wk later. Participants in the "Reactivations" group performed brief reactivations on two separate days between the test and retest. Each of these reactivation sessions lasted only 30 s, in which participants reactivated the skill memory by briefly performing a single trial of the task (Fig. 1A). 
The "Control" group performed only test and retest sessions without memory reactivations. Participants in the "Full Practice" group performed two full training sessions (12 trials each) between the test and retest. Learning gains were quantified as the difference in performance between the last trial of the test session and the first retest trial (5, 22–24), with performance quantified as the number of correct sequences tapped, a highly common measure combining both speed and accuracy (17, 23, 25, 26).

Fig. 1. Reactivation-induced learning gains. (A) Experimental design. Subjects first encoded the motor skill memory in a test session including 12 trials of the task and performed a retest session following 1 wk, followed by an intermanual transfer test. Participants in the Reactivations group (composed of High Continuity and Low Continuity) performed brief reactivations between test and retest in which they reactivated their skill memory by performing only a single 30 s trial of the task. Participants in the Full Practice group performed full 12-trial training sessions between test and retest. The Control group performed only test and retest sessions without reactivations. (B) An illustrated explanation of the CS calculation. In both examples, the number of correct sequences, errors, and total key presses are identical, but the CS is different. (C) Single-trial performance for all groups (High Continuity Reactivations marked in light blue, Low Continuity Reactivations in light red, Control in gray, and Full Practice in purple. The combined Reactivations groups are illustrated in dashed black). (D) Test versus retest single-subject performance presented in a scatterplot along a unit slope line (y = x) where each point reflects a participant (5, 46). Data accumulating above the unit line reflect subjects who improved from test to retest, expressing learning gains, while data points below the line indicate degraded retest performance. 
(E) Dashed black bars (corresponding to the right y-axis) reflect the percentage of participants on each side of the unit slope line in D, and the colored bars reflect the mean performance in test and retest sessions (corresponding to the left y-axis). (F) Mean transfer test performance compared to test performance. *P < 0.05, **P < 0.001. Error bars represent SEM.

Because of the variable efficacy of reactivations, we reasoned that the quality of reactivations may determine their efficacy in inducing learning gains. Unintentional errors during reactivation might reactivate a different version of the memory and could strengthen erroneous memories instead of the original memory trace. This could possibly cause a decrease in learning gains or even result in deteriorated performance of the original memory. This is consistent with the concept of interruptions, previously suggested to affect task performance (27, 28), possibly by preventing encoding of coherent representations of memories (27, 29, 30). Accordingly, we reasoned that continuity, reflecting minimal interruptions, might play a role in defining the efficacy of reactivations. To that effect, "High" and "Low Continuity Reactivations" were separately analyzed (see Materials and Methods and Results). In addition, a replication experiment was conducted to confirm the role of continuity in reactivation efficacy.

13.
14.
Machine learning (ML) is a type of artificial intelligence (AI) based on pattern recognition. There are different forms of supervised and unsupervised learning algorithms that are being used to identify and predict blood pressure (BP) and other measures of cardiovascular risk. Since 1999, starting with neural network methods, ML has been used to gauge the relationship between BP and pulse waveforms. Since then, the scope of the research has expanded to using different cardiometabolic risk factors, like BMI, waist circumference, and waist-to-hip ratio, in concert with BP and its various pharmaceutical agents to estimate biochemical measures (like HDL, LDL, and total cholesterol, fibrinogen, and uric acid) as well as the effectiveness of anti-hypertensive regimens. Data from large clinical trials like SPRINT are being re-analyzed by ML methods to unearth new findings and identify unique relationships between predictors and outcomes. In summary, AI and ML methods are gaining immense attention in the management of chronic disease. Elevated BP is a very important early metric for the risk of development of cardiovascular and renal injury; therefore, advances in AI and ML will aid in early disease prediction and intervention.

15.
16.
17.
Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity.
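Principal component analysis, one of the descriptive methods reviewed, reduces to a singular value decomposition of the centered data matrix. A minimal NumPy sketch on toy data (not the NHANES variables used in the review):

```python
import numpy as np

def pca(X: np.ndarray, k: int):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)               # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                   # top-k directions of variance
    explained = (S ** 2) / (S ** 2).sum() # variance ratio per component
    return Xc @ components.T, explained[:k]

# Toy data lying almost on a line: one component captures nearly all variance.
rng = np.random.default_rng(1)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=100)])
scores, ratio = pca(X, 1)
print(round(float(ratio[0]), 3))  # close to 1.0
```

In an obesity dataset, the same projection would compress many correlated anthropometric and biochemical measures into a few interpretable axes before clustering or regression.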

18.
19.
We begin with a paradox. On one hand, not nearly enough is known about exactly how learning takes place in the brain, although exciting new results are emerging thanks to improved brain imaging and a greater focus on neuroscience by government and universities. But this research is just beginning, and a much larger effort and investment are needed to answer even the most basic questions. On the other hand, more than enough is already known about what best promotes learning to motivate and drive educational reform for years to come. This is a report from the front lines of both research and educational implementation. This information should prove of use to anyone--teachers, students, parents, patients, and health practitioners--who is concerned about how best to improve formal or informal teaching and learning, to help people remember complex instructions, or to change unhealthy habits and practices.

20.