Application of deep learning with perceptual loss in conventional MR image translation
How to cite this article: ZHANG Zeru, LI Zhaotong, LIU Liangyou, GAO Song, WU Fengliang. Application of deep learning with perceptual loss in conventional MR image translation[J]. Chinese Journal of Medical Physics, 2021(2):178-185.
Authors: ZHANG Zeru  LI Zhaotong  LIU Liangyou  GAO Song  WU Fengliang
Affiliations: Institute of Medical Technology, Peking University Health Science Center; School of Health Humanities, Peking University; Department of Orthopedics, Peking University Third Hospital
Funding: National Natural Science Foundation of China (12075011, 82071280); Beijing Natural Science Foundation (7202093); Key Research and Development Program of Tibet Autonomous Region (XZ202001ZY0005G).
Abstract: Objective: To study the feasibility of using deep neural networks to translate between conventional magnetic resonance (MR) images in a completely unsupervised setting. Methods: A perceptual loss was introduced into the cycle-consistent generative adversarial network (CycleGAN), so that the network learns image structure information through the adversarial loss while combining the cycle-consistency loss and the perceptual loss to generate high-quality MR images. The generated images were compared quantitatively with those produced by the original CycleGAN model and by a supervised CycleGAN model (S_CycleGAN). Results: The images generated by the network with perceptual loss scored higher in all quantitative evaluations than those generated by CycleGAN; its generated T1-weighted images (T1WI) also scored higher than the T1WI generated by S_CycleGAN, while its generated T2-weighted images (T2WI) scored similarly to the T2WI generated by S_CycleGAN. Conclusion: Introducing a perceptual loss into CycleGAN makes it possible to generate high-quality MR images in a completely unsupervised setting, thereby achieving high-quality translation between conventional MR images.
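To make the perceptual-loss component concrete, the following is a minimal PyTorch sketch of a perceptual loss computed on fixed, pretrained VGG16 features, as commonly used in image translation. The cut-off layer, the L1 criterion, and the channel handling for single-channel MR slices are illustrative assumptions; the abstract does not give the paper's exact configuration.

import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 feature maps of two images (assumed formulation)."""
    def __init__(self, cutoff=16):                       # assumed cut-off layer (relu3_3)
        super().__init__()
        features = vgg16(pretrained=True).features[:cutoff]
        for p in features.parameters():
            p.requires_grad = False                      # VGG acts as a fixed feature extractor
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, x, y):
        # Single-channel MR slices are repeated to 3 channels to match the VGG input.
        x3 = x.repeat(1, 3, 1, 1)
        y3 = y.repeat(1, 3, 1, 1)
        return self.criterion(self.features(x3), self.features(y3))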

Keywords: magnetic resonance imaging; multimodality; image translation; generative adversarial network

Application of deep learning with perceptual loss in conventional MR image translation
ZHANG Zeru,LI Zhaotong,LIU Liangyou,GAO Song,WU Fengliang.Application of deep learning with perceptual loss in conventional MR image translation[J].Chinese Journal of Medical Physics,2021(2):178-185.
Authors:ZHANG Zeru  LI Zhaotong  LIU Liangyou  GAO Song  WU Fengliang
Institution:(Institute of Medical Technology,Peking University Health Science Center,Beijing 100191,China;School of Health Humanities,Peking University,Beijing 100191,China;Department of Orthopedics,Peking University Third Hospital,Beijing 100191,China)
Abstract: Objective To investigate the feasibility of using deep neural networks to achieve image-to-image translation of conventional magnetic resonance (MR) images in a completely unsupervised way. Methods A perceptual loss was introduced into the cycle-consistent generative adversarial network (CycleGAN) so that the proposed network could use the adversarial loss to learn image structure information while combining the cycle-consistency loss with the perceptual loss to generate high-quality MR images. The generated images were compared quantitatively with those generated by the CycleGAN model and by a supervised CycleGAN model (S_CycleGAN). Results The quantitative evaluation showed that the proposed network with perceptual loss was superior to the CycleGAN model, and that the T1-weighted images generated by the proposed network also scored better than those generated by the S_CycleGAN model; the T2-weighted images generated by the proposed network and by the S_CycleGAN model scored similarly. Conclusion Introducing a perceptual loss into CycleGAN can generate high-quality MR images in a completely unsupervised way, thus realizing high-quality image-to-image translation of conventional MR images.
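As a rough illustration of how the three terms described in Methods might be combined for one generator update, the sketch below uses the PerceptualLoss module above in a CycleGAN-style T1WI-to-T2WI translation. The LSGAN-style adversarial criterion, the loss weights, and the choice to apply the perceptual loss between the input image and its cycle reconstruction (a common choice for unpaired training) are assumptions; the abstract does not specify the authors' exact formulation or hyperparameters.

import torch
import torch.nn as nn

def generator_loss(G_t1_to_t2, G_t2_to_t1, D_t2, perceptual_loss,
                   real_t1, lambda_cyc=10.0, lambda_perc=1.0):
    """Generator objective for one translation direction (T1WI -> T2WI)."""
    mse = nn.MSELoss()                                    # LSGAN-style adversarial criterion (assumed)
    l1 = nn.L1Loss()

    fake_t2 = G_t1_to_t2(real_t1)                         # translate T1WI -> T2WI
    rec_t1 = G_t2_to_t1(fake_t2)                          # cycle back to T1WI

    pred = D_t2(fake_t2)
    adv = mse(pred, torch.ones_like(pred))                # adversarial loss: fool the T2 discriminator
    cyc = l1(rec_t1, real_t1)                             # cycle-consistency loss
    perc = perceptual_loss(rec_t1, real_t1)               # perceptual loss on the cycle reconstruction

    return adv + lambda_cyc * cyc + lambda_perc * perc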
Keywords:magnetic resonance imaging  multi-modalities  image translation  generative adversarial network