Similar Articles
1.
Retinal fundus images are often corrupted by non-uniform and/or poor illumination arising from imperfections in the image acquisition process. This unwanted variation in brightness limits the pathological information that can be gained from the image. Studies have shown that poor illumination can impede human grading in about 10–15% of retinal images; for automated grading, the effect can be even higher. With this in mind, we propose a novel method for illumination correction in the context of retinal imaging. The method splits the color image into luminosity and chroma (i.e., color) components and performs illumination correction in the luminosity channel based on a novel background estimation technique. Extensive subjective and objective experiments were conducted on publicly available DIARETDB1 and EyePACS images to evaluate the performance of the proposed method. The subjective experiment confirmed that the proposed method does not create false colors/artifacts and at the same time outperforms the traditional method in 84 out of 89 cases. The objective experiment shows an accuracy improvement of 4% in automated disease grading when illumination correction is performed by the proposed method rather than the traditional method.
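The abstract's pipeline (split off the luminosity channel, estimate the slowly varying background, correct against it) can be sketched numerically. The paper's actual background-estimation technique is not specified here; a large-window box mean filter stands in for it, which is an assumption of this sketch.

```python
# Sketch: luminosity-channel illumination correction via background division.
# The box mean filter below is a stand-in for the paper's (unspecified)
# background estimator.

def mean_filter(img, radius):
    """Estimate the slowly varying background with a box mean filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[y][x]
                    for y in range(max(0, i - radius), min(h, i + radius + 1))
                    for x in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def correct_illumination(lum, radius=2):
    """Divide out the estimated background, rescaled to the global mean."""
    bg = mean_filter(lum, radius)
    flat = [v for row in bg for v in row]
    g = sum(flat) / len(flat)
    return [[lum[i][j] * g / bg[i][j] for j in range(len(lum[0]))]
            for i in range(len(lum))]

# A uniform scene under a left-to-right illumination gradient:
lum = [[50 + 10 * j for j in range(6)] for _ in range(6)]
corrected = correct_illumination(lum)
```

After correction the left-to-right brightness spread shrinks, which is the behavior the luminosity-channel correction is meant to deliver.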

2.
A modified Delphi methodology was used to develop a consensus regarding a series of learning outcome statements to act as the foundation of an undergraduate medical core embryology syllabus. A Delphi panel was formed by recruiting stakeholders with experience in leading undergraduate teaching of medical students. The panel (n = 18), including anatomists, embryologists and practising clinicians, were nominated by members of Council and/or the Education Committee of the Anatomical Society. Following development of an a priori set of learning outcome statements (n = 62) by the authors, panel members were asked in the first of a two‐stage process to ‘accept’, ‘reject’ or ‘modify’ each learning outcome, and to propose additional outcomes if desired. In the second stage, the panel was asked to either accept or reject 16 statements which had either been modified, or had failed to reach consensus, during the first Delphi round. Overall, 61 of 62 learning outcome statements, each linked to examples of clinical conditions to provide context, achieved an 80% level of agreement following the modified Delphi process and were therefore deemed accepted for inclusion within the syllabus. The proposed syllabus allows for flexibility within individual curricula, while still prioritising and focusing on the core level of knowledge of embryological processes by presenting the essential elements to all newly qualified doctors, regardless of their subsequent chosen specialty.

3.
A modified Delphi method was employed to seek consensus when revising the UK and Ireland's core syllabus for regional anatomy in undergraduate medicine. A Delphi panel was constructed involving ‘experts’ (individuals with at least 5 years’ experience in teaching medical students anatomy at the level required for graduation). The panel (n = 39) was selected and nominated by members of Council and/or the Education Committee of the Anatomical Society and included a range of specialists including surgeons, radiologists and anatomists. The experts were asked in two stages to ‘accept’, ‘reject’ or ‘modify’ (first stage only) each learning outcome. A third stage, which was not part of the Delphi method, then allowed the original authors of the syllabus to make changes either to correct any anatomical errors or to make minor syntax changes. From the original syllabus of 182 learning outcomes, after removal of the neuroanatomy component (leaving 163), 23 learning outcomes (15%) remained unchanged, seven learning outcomes were removed and two new learning outcomes were added. The remaining 133 learning outcomes were modified. All learning outcomes on the new core syllabus achieved over 90% acceptance by the panel.

4.
Objective: To investigate the applicability of supervised machine learning (SML) to classify health-related webpages as ‘reliable’ or ‘unreliable’ in an automated way. Methods: We collected the textual content of 468 different Dutch webpages about early childhood vaccination. Webpages were manually coded as ‘reliable’ or ‘unreliable’ based on their alignment with evidence-based vaccination guidelines. Four SML models were trained on part of the data, and the remaining data were used for model testing. Results: All models were successful in the automated identification of unreliable (F1 scores: 0.54–0.86) and reliable information (F1 scores: 0.82–0.91). Typical words for unreliable information were ‘dr’, ‘immune system’, and ‘vaccine damage’, whereas ‘measles’, ‘child’, and ‘immunization rate’ were frequent in reliable information. Our best-performing model was also successful in terms of out-of-sample prediction, tested on a dataset about HPV vaccination. Conclusion: Automated classification of online content in terms of reliability, using basic classifiers, performs well and is particularly useful for identifying reliable information. Practice implications: The classifiers can be used as a starting point to develop more complex classifiers, as well as warning tools that can help people evaluate the content they encounter online.
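The supervised-classification idea can be sketched with a multinomial Naive Bayes over word counts. This is a toy stand-in: the paper's actual four models, features, and Dutch corpus are not reproduced, and the documents and labels below are invented, echoing only the typical words the abstract reports.

```python
# Toy sketch: multinomial Naive Bayes with Laplace smoothing for
# reliable/unreliable text classification. Documents are invented.
import math
from collections import Counter

def train_nb(docs, labels):
    classes = set(labels)
    priors, counts, totals = {}, {}, {}
    vocab = {w for d in docs for w in d.split()}
    for c in classes:
        cdocs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(cdocs) / len(docs))
        counts[c] = Counter(w for d in cdocs for w in d.split())
        totals[c] = sum(counts[c].values())
    return priors, counts, totals, vocab

def predict_nb(model, doc):
    priors, counts, totals, vocab = model
    def score(c):
        s = priors[c]
        for w in doc.split():
            # Laplace smoothing over the shared vocabulary
            s += math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
        return s
    return max(priors, key=score)

docs = ["vaccine damage immune system dr",
        "dr warns vaccine damage",
        "measles child immunization rate",
        "child immunization rate rises"]
labels = ["unreliable", "unreliable", "reliable", "reliable"]
model = train_nb(docs, labels)
pred = predict_nb(model, "vaccine damage claims by dr")
```

Real classifiers of this kind are then scored with precision/recall-based metrics such as the F1 values quoted in the abstract.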

5.
Segmentation of blood vessels in fundus images underpins computer-aided diagnosis of ophthalmic and related diseases. By segmenting and analyzing the vascular structure in fundus images, diseases such as diabetic retinopathy, hypertension, and arteriosclerosis can be diagnosed and monitored early. To address the limited accuracy and sensitivity of existing vessel-segmentation algorithms, a fundus-image vessel-segmentation algorithm based on an improved U-shaped network is proposed, building on deep learning. First, the number of downsampling and upsampling operations of the traditional U-shaped network is reduced to cope with the scarcity of fundus image data. Second, the serial connection of traditional convolutional layers is replaced by superimposed residual mappings to improve the efficiency of feature reuse. Finally, batch normalization and PReLU activation functions are inserted between convolutional layers to further improve network performance. Experiments on the two public fundus databases DRIVE and CHASE_DB1, with 160,000 image patches randomly extracted from each database for training and testing the improved network, show that the algorithm improves sensitivity, accuracy, and AUC (area under the ROC curve) on the two databases by averages of 2.47%, 0.21%, and 0.35%, respectively, over the best results of existing algorithms. The proposed algorithm alleviates the low accuracy and sensitivity of fine-vessel segmentation in fundus images and can segment low-contrast microvessels well.
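Two of the ingredients the abstract names, the PReLU activation and the residual mapping y = F(x) + x, can be shown with plain numbers. This is only a numeric illustration; the actual convolutional layers and batch normalization of the network are not reproduced.

```python
# Numeric sketch of PReLU and a residual (identity-shortcut) mapping.

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, learned slope `a` for negatives."""
    return x if x >= 0 else a * x

def residual_block(xs, f):
    """Apply a transform F element-wise and add the identity shortcut."""
    return [f(x) + x for x in xs]

feats = [-2.0, -0.5, 0.0, 1.5]
activated = [prelu(x) for x in feats]
shortcut = residual_block(feats, lambda x: prelu(0.5 * x))
```

The shortcut term means the block only has to learn a residual correction to its input, which is what makes deeper stacks easier to train.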

6.
In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing the labor of clinicians, but also highlights the key regions in the image and evaluates the uncertainty of the decision with proper analytic tools. Our model was trained and tested with 25,911 fundus images (43.4% macula-centered images and 28.3% each of superior and nasal retinal fundus images). Activation maps were generated to mark regions important for the classification. Uncertainties were then quantified to support explanations as to why certain images were incorrectly classified under the proposed model. Our model achieved a mean training accuracy of 99%, which is comparable to the performance of clinicians. Strong activations were detected at the optic disc and the retinal blood vessels around the disc, which matches the regions clinicians attend to when deciding laterality. Uncertainty analysis revealed that misclassified images tend to be accompanied by high prediction uncertainty and are likely ungradable. We believe that visualization of informative regions and estimation of uncertainty, along with presentation of the prediction result, would enhance the interpretability of neural network models in a way that lets clinicians benefit from the automatic classification system.

7.
Objective: At present, physicians generally diagnose eye diseases by inspecting fundus structures with an ophthalmoscope or similar devices, a cumbersome procedure that is error-prone, subject to subjective factors, and inefficient. A digital fundus image processing system is therefore needed to assist diagnosis. Methods: The original image was preprocessed with contrast enhancement and spatial-domain filtering, followed by binarization, morphological processing, edge extraction, and skeleton extraction to obtain the target feature regions. Results: This paper presents a preliminary study of key techniques for automatically extracting optic cup, optic disc, and vessel contour features from digital fundus images; the techniques were summarized and verified with MATLAB simulation software, and strategies for extracting different fundus features were derived. Conclusion: Digital image processing can automatically identify different feature regions in digital fundus images, and building a digital fundus image processing system to assist physicians in diagnosis is feasible.
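The binarization and morphological steps in the pipeline can be sketched on a tiny grid. This is plain Python rather than the MATLAB toolbox used in the paper, and the threshold and 3x3 structuring element are illustrative assumptions.

```python
# Sketch: global-threshold binarization followed by one morphological
# dilation with a 3x3 structuring element.

def binarize(img, thresh):
    return [[1 if v >= thresh else 0 for v in row] for row in img]

def dilate(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # a pixel turns on if any 3x3 neighbor is on
            out[i][j] = max(img[y][x]
                            for y in range(max(0, i - 1), min(h, i + 2))
                            for x in range(max(0, j - 1), min(w, j + 2)))
    return out

img = [[10, 200, 30],
       [40, 250, 60],
       [70, 80, 90]]
mask = binarize(img, 128)   # keeps only the bright column
grown = dilate(mask)        # bright region grows by one pixel
```

Erosion is the dual operation (replace `max` with `min`), and edge extraction can be obtained as the difference between a mask and its erosion.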

8.
Objective: To achieve rapid morphology-based identification and sorting of multiple co-cultured cell types through quantitative recognition of acquired cell images combined with machine-learning-based cluster analysis. Methods: A549 and 3T3 cells co-cultured in vitro were immunofluorescently stained to characterize their morphological contours. CellProfiler was used to extract morphological features from the acquired fluorescence images, and CellProfiler Analyst then applied machine learning to the extracted data, training a rule with generalization ability so that the two co-cultured cell types could be identified and sorted. Results: The trained classifier reached an accuracy of 81.24% and achieved binary classification of A549 and 3T3 cells. Conclusion: Machine learning helps improve the accuracy of cluster analysis. Applied to cell-image recognition, it can provide a preliminary judgment for rapid pathological examination of tissue sections in the clinic, reducing physicians' workload and improving diagnostic accuracy.

9.
The suppression of motion artefacts from MR images is a challenging task. The purpose of this paper was to develop a standalone novel technique to suppress motion artefacts in MR images using a data-driven deep learning approach. A simulation framework was developed to generate motion-corrupted images from motion-free images using randomly generated motion profiles. An Inception-ResNet deep learning network architecture was used as the encoder and was augmented with a stack of convolution and upsampling layers to form an encoder-decoder network. The network was trained on simulated motion-corrupted images to identify and suppress those artefacts attributable to motion. The network was validated on unseen simulated datasets and real-world experimental motion-corrupted in vivo brain datasets. The trained network was able to suppress the motion artefacts in the reconstructed images, and the mean structural similarity (SSIM) increased from 0.9058 to 0.9338. The network was also able to suppress the motion artefacts from the real-world experimental dataset, and the mean SSIM increased from 0.8671 to 0.9145. The motion correction of the experimental datasets demonstrated the effectiveness of the motion simulation generation process. The proposed method successfully removed motion artefacts and outperformed an iterative entropy minimization method in terms of the SSIM index and normalized root mean squared error, which were 5–10% better for the proposed method. In conclusion, a novel, data-driven motion correction technique has been developed that can suppress motion artefacts from motion-corrupted MR images. The proposed technique is a standalone, post-processing method that does not interfere with data acquisition or reconstruction parameters, thus making it suitable for routine clinical practice.
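The SSIM index used to report the improvement can be sketched as a single-window implementation of the standard formula with default constants for 8-bit images. The paper's exact SSIM configuration (window size, weighting) is an assumption not stated in the abstract.

```python
# Sketch: structural similarity (SSIM) over one global window,
# using the usual constants c1 = (0.01*L)^2 and c2 = (0.03*L)^2.

def ssim(a, b, L=255):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    va = sum((x - mu_a) ** 2 for x in a) / n
    vb = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

clean = [10.0, 50.0, 90.0, 130.0]
noisy = [12.0, 47.0, 95.0, 128.0]
```

An identical pair scores exactly 1.0; corrupted images score lower, which is why artefact suppression shows up as an SSIM increase (e.g. 0.9058 to 0.9338 in the abstract).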

10.
Automatic Red Tide Algae Recognition
This paper presents a real-time alga classifier designed for flow-cytometry-based marine alga monitoring systems. The difficulties of such classification include: 1) the shape of a given algae category is deformable and varies widely with individual differences and maturation stage; 2) the image of an alga may vary with its 3D position relative to the imaging plane and with partial occlusion; 3) the images also contain unknown algae and contaminants. In the proposed method, several shape features were developed, a naive Bayes classifier (NBC) was trained to reject contaminant objects and unknown algae, and a support vector machine (SVM) was used to classify the algae into taxonomic categories. Our approach achieved greater than 90% accuracy on a collection of algal images. Tests on a contaminated algal image set (containing unknown algae and non-algae objects such as sand) also demonstrated promising results.
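One representative shape feature of the kind such a pipeline uses is circularity, 4·pi·A / P^2, which is 1.0 for a disc and drops toward 0 for elongated shapes. The paper's actual feature set is not specified in the abstract, so this is an illustrative example only.

```python
# Sketch: the circularity shape feature, comparing a disc with a thin ribbon.
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a circle, smaller for elongated shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 5 versus a 20x1 rectangle:
circle = circularity(math.pi * 25, 2 * math.pi * 5)
ribbon = circularity(20 * 1, 2 * (20 + 1))
```

A deformable alga would produce a range of circularity values, which is why such features are combined and fed to probabilistic (NBC) and margin-based (SVM) classifiers rather than thresholded directly.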

11.
The identification of some important retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. In this paper, we propose a new adaptive method for the automatic segmentation of the optic disk in digital color fundus images, using mathematical morphology. The proposed method has been designed to be robust under the varying illumination and image acquisition conditions common in eye fundus imaging. Our experimental results on two publicly available eye fundus image databases are encouraging, and indicate that our approach can potentially achieve better performance than other known methods proposed in the literature. Using the DRIVE database (which consists of 40 retinal images), our method achieves a success rate of 100% in the correct location of the optic disk, with 41.47% mean overlap. In the DIARETDB1 database (which consists of 89 retinal images), the optic disk is correctly located in 97.75% of the images, with a mean overlap of 43.65%.
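The "mean overlap" evaluation can be sketched as the Jaccard index between the segmented optic-disk region and the ground truth. Reading the overlap measure as Jaccard is an assumption; the abstract does not define it.

```python
# Sketch: overlap between a segmented region and ground truth as the
# Jaccard index |A ∩ B| / |A ∪ B| over pixel-coordinate sets.

def jaccard(seg, truth):
    seg, truth = set(seg), set(truth)
    return len(seg & truth) / len(seg | truth)

truth = {(i, j) for i in range(4) for j in range(4)}      # 4x4 "disk"
seg = {(i, j) for i in range(1, 5) for j in range(1, 5)}  # shifted by 1 px
overlap = jaccard(seg, truth)
```

A perfectly located but slightly shifted disk still scores well below 1.0, which explains how a method can attain 100% correct *location* while the mean overlap stays near 41–44%.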

12.
Semantic concept recognition in medical images is a key step in medical image knowledge representation. Research on such recognition methods helps machines understand and learn the latent medical knowledge in images, and plays an important role in applications such as computer-aided diagnosis and intelligent image reading. We cast the recognition of high-frequency concepts in medical images as a multi-label classification task, using deep transfer learning based on convolutional neural networks to recognize a limited number of high-frequency medical concepts; in parallel, a topic-modeling method based on image retrieval extracts semantically related concepts from images similar to a given medical image. In May 2018, the international cross-language image retrieval forum ImageCLEF organized the ImageCLEFcaption 2018 evaluation, whose "concept detection" subtask was to recognize 111,156 semantic concepts given 222,314 training images and 9,938 test images. Results for both methods were submitted. The CNN-based deep transfer learning method recognized high-frequency medical concepts with an F1 of 0.0928, ranking second among the submitting teams; the retrieval-based topic model recalled some low-frequency related concepts with an F1 of 0.0907, although its performance depends on the quality of the image retrieval results. The CNN-based deep transfer learning method is more robust for high-frequency concepts, but recognition over large-scale open sets of semantic concepts still needs further work.

13.
Objective: To propose a detection algorithm based on an improved fuzzy C-means (IFCM) clustering algorithm and a support vector machine (SVM) for automatic recognition of hard exudates in fundus images. Methods: First, the improved FCM algorithm was used to coarsely segment 120 color fundus images provided by the Ophthalmology Department of Jiangsu Province Hospital of Chinese Medicine to obtain candidate hard-exudate regions. Second, logistic regression was used to select the features extracted from the candidate regions, and an SVM classifier was built from the optimized feature set and the corresponding labels to detect hard exudates in fundus images automatically. Finally, the method was applied to 65 fundus images. Results: At the lesion level, automatic detection achieved a sensitivity of 96.47% and a positive predictive value of 90.13%; at the image level, sensitivity was 100%, specificity 95.00%, and accuracy 98.46%, with an average processing time of 4.56 s per image. Conclusion: Combining the improved FCM algorithm with a high-accuracy SVM classifier can efficiently and automatically identify hard exudates in fundus images.
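The coarse-segmentation step rests on fuzzy C-means. Below is a compact 1-D version of the *standard* FCM algorithm (not the paper's improved variant), illustrating how bright exudate-like intensities separate from background intensities; the pixel values are invented.

```python
# Sketch: standard 1-D fuzzy C-means with fuzzifier m = 2.
# u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)); centers are u^m-weighted means.

def fcm(xs, c=2, m=2.0, iters=50):
    centers = [min(xs), max(xs)][:c]
    for _ in range(iters):
        u = []
        for x in xs:
            row = []
            for ck in centers:
                d = abs(x - ck) or 1e-12  # guard against zero distance
                row.append(1.0 / sum((d / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                                     for cj in centers))
            u.append(row)
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return centers

pixels = [12, 15, 14, 16, 200, 205, 198, 210]  # dark background vs bright lesions
centers = sorted(fcm(pixels))
```

The two converged centers land near the background and lesion intensity modes; thresholding memberships against the bright cluster yields the candidate regions that the SVM then refines.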

14.
An artificial neural network (ANN) trained on high-quality medical tomograms or phantom images may be able to learn the planar data-to-tomographic image relationship with very high precision. As a result, a properly trained ANN can produce comparably accurate image reconstruction without the high computational cost inherent in some traditional reconstruction techniques. We have previously shown that a standard backpropagation neural network can be trained to reconstruct sections of single photon emission computed tomography (SPECT) images based on the planar image projections as inputs. In this study, we present a method of deriving activation functions for a backpropagation ANN that make it readily trainable for full SPECT image reconstruction. The activation functions used for this work are based on the estimated probability density functions (PDFs) of the ANN training set data. The statistically tailored ANN and the standard sigmoidal backpropagation ANN methods are compared both in terms of their trainability and generalization ability. The results presented show that a statistically tailored ANN can reconstruct novel tomographic images of a quality comparable with that of the images used to train the network. Ultimately, an adequately trained ANN should be able to properly compensate for physical photon transport effects, background noise, and artifacts while reconstructing the tomographic image.

15.

Diabetic retinopathy is a chronic condition that causes vision loss if not detected early. In its early stage it can be diagnosed with the aid of exudates, which are called lesions. However, exudate lesions are difficult to detect because blood vessels and other structures act as distractors. To tackle these issues, we propose a novel exudate classification method for fundus images: a hybrid convolutional neural network (CNN) with a binary local search optimizer-based particle swarm optimization algorithm. The proposed method exploits image augmentation to enlarge the fundus image to the required size without losing any features. The features extracted from the resized fundus images form a feature vector that is fed into the feed-forward CNN, which classifies the exudates. Further, the hyperparameters are optimized to reduce computational complexity by means of the binary local search optimizer (BLSO) and particle swarm optimization (PSO). The experimental analysis was conducted on the public ROC and real-time ARA400 datasets and compared with state-of-the-art approaches such as support vector machine classifiers, multi-modal/multi-scale methods, random forests, and CNNs on the performance metrics. The classification accuracy of the proposed work is high, and thus our proposed method outperforms all the other approaches.
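The PSO component used for hyperparameter tuning can be sketched in a few lines. Here it minimizes a stand-in objective f(x) = (x - 3)^2 rather than a real CNN validation loss, and the swarm parameters are conventional defaults, not the paper's settings.

```python
# Sketch: particle swarm optimization over one bounded real parameter.
import random

def pso(f, lo, hi, n=20, iters=60, w=0.5, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                  # each particle's personal best
    gbest = min(xs, key=f)         # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            # velocity update: inertia + cognitive pull + social pull
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

In the hybrid scheme, each particle position would encode a hyperparameter setting and `f` would be the validation error of a CNN trained with it; the binary local search then refines around the swarm's best position.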


16.
A method is proposed for building a radiotherapy target-volume database based on the fusion of liver-segment and liver-cancer contours, together with full-process data quality management, to provide data support for subsequent AI-based target delineation or evaluation of current manual delineation. Original images are taken from a liver-cancer database and annotated both with liver radiotherapy target contours and with liver segment and sector labels; image fusion then localizes liver-cancer radiotherapy to the level of individual liver segments. Finally, a Unet is trained with deep learning to obtain a neural-network model for accurate liver segmentation, enabling precise radiotherapy for liver cancer.

17.
The authors have been developing a fully automated temporal subtraction scheme to assist radiologists in the detection of interval changes in digital chest radiographs. The temporal subtraction image is obtained by subtraction of a previous image from a current image. The authors' automated method includes not only image shift and rotation techniques but also a nonlinear geometric warping technique for reduction of misregistration artifacts in the subtraction image. However, a manual subtraction method that can be carried out only with image shift and rotation has been employed as a common clinical technique in angiography, and it might be clinically acceptable for detection of interval changes on chest radiographs as well. Therefore, the authors applied both the manual and automated temporal subtraction techniques to 181 digital chest radiographs, and compared the quality of the subtraction images obtained with the two methods. The numbers of clinically acceptable subtraction images were 147 (81.2%) and 176 (97.2%) for the manual and automated subtraction methods, respectively. The image quality of 148 (81.8%) subtraction images was improved by use of the automated method in comparison with the subtraction images obtained with the manual method. These results indicate that the automated method with the nonlinear warping technique can significantly reduce misregistration artifacts in comparison with the manual method. Therefore, the authors believe that the automated subtraction method is more useful for the detection of interval changes in digital chest radiographs.
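The shift-only registration underlying the "manual" technique can be sketched as a search over integer shifts that minimizes the sum of squared differences before subtracting the previous image from the current one. The nonlinear warping stage of the automated method is not reproduced; the images and search radius below are illustrative.

```python
# Sketch: brute-force integer-shift registration for temporal subtraction.

def ssd(a, b, dy, dx):
    """Sum of squared differences of a against b shifted by (dy, dx)."""
    h, w = len(a), len(a[0])
    s = 0
    for i in range(h):
        for j in range(w):
            y, x = i + dy, j + dx
            if 0 <= y < h and 0 <= x < w:
                s += (a[i][j] - b[y][x]) ** 2
    return s

def best_shift(cur, prev, r=2):
    return min(((dy, dx) for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)),
               key=lambda s: ssd(cur, prev, *s))

prev = [[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
cur = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 0, 0]]   # same pattern moved down 1 and right 1
shift = best_shift(cur, prev)
```

Once the best shift is found, the subtraction image is `cur[i][j] - prev[i+dy][j+dx]`; residual anatomy that shift and rotation cannot align is exactly the misregistration artifact the nonlinear warping step targets.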

18.
Malignant melanoma is among the most common and most lethal skin cancers. Clinically, dermoscopy is the routine means of early melanoma diagnosis, but manual inspection is laborious and time-consuming and depends heavily on the dermatologist's clinical experience, so algorithms that automatically recognize melanoma in dermoscopic images are particularly important. A new framework for automatic dermoscopic image assessment is proposed that uses deep learning to produce more discriminative features from limited training data. Specifically, a 152-layer residual network (Res-152) pretrained on a large-scale natural-image dataset is used to extract deep convolutional features from skin-lesion images; mean pooling yields feature vectors, and a support vector machine (SVM) then classifies the extracted melanoma features. On the public ISBI 2016 challenge dataset of skin-lesion images, the proposed method was evaluated on 248 melanoma and 1,031 non-melanoma images, achieving 86.28% accuracy and an AUC of 84.18%. Model frameworks of different depths were also compared to demonstrate the effect of network depth on classification. Compared with existing studies using traditional hand-crafted features (e.g., bag-of-words models over densely sampled SIFT descriptors), or with methods that classify features extracted only from the fully connected layers of deep networks, the new method produces more discriminative feature representations and, with limited training data, can cope with the large intra-class variation of melanoma and the small inter-class difference between melanoma and non-melanoma.

19.
How to cite this article: Kumar V. There is No Substitute for Human Intelligence. Indian J Crit Care Med 2021;25(5):486–488.

Artificial intelligence (AI) algorithms and handheld devices are two quantum jumps, which have brought point-of-care ultrasound (POCUS) to the forefront, facilitating bedside use by frontline medical personnel. Handheld machines now provide better quality images, while AI improves image acquisition and diagnostic yield. Handheld ultrasound is the new norm driving portability to the bedside, and these small “pocket rockets” are poised to be the next best friend forever (BFF) of frontline doctors after their cell phones. Using a handheld device increases portability, reduces machine turnaround time, facilitates rapid diagnosis with reasonable accuracy, makes infection control easier, reduces the number of personnel exposed, and reduces diagnostic cost, all with an acceptable picture quality. Put together, these developments guide beginners to acquire diagnostic-quality images beyond their skill set and provide analysis yielding information beyond their knowledge. Even if the user is a trained emergency physician, intensivist, or anesthesiologist, the addition of AI software will ensure consistent diagnostic-quality images with automated analysis so that a diagnosis is not missed. While the human mind consciously scans and analyzes images, AI algorithms are trained for the same task by deep learning and perform it with equal if not better alacrity. Deep learning is a form of machine learning, which is the science of training computers to perform tasks not by being explicitly programmed, but rather through enabling them to study patterns within data.1

On February 7, 2020, the US FDA authorized the marketing of the first AI-based software that guides users in real time to acquire diagnostic-quality echocardiography images. “Today's marketing authorization enables medical professionals who may not be experts in ultrasonography, such as a registered nurse in a family care clinic or others, to use this tool. 
This is especially important because it demonstrates the potential for artificial intelligence and machine learning technologies to increase access to safe and effective cardiac diagnostics that can be life-saving for patients”—accompanying statement by Robert Ochs, Ph.D., deputy director of the Office of In Vitro Diagnostics and Radiological Health in the FDA's Center for Devices and Radiological Health.2

The approved software, Caption Guidance, was developed based on images acquired by 15 registered sonographers across a range of body mass index (BMI) and cardiac pathologies and validated by experts including cardiologists. The software algorithms were trained by more than 5,000,000 hand movements of cardiac sonographers, enabling the machine to understand the impact of ultrasound probe position and movement on image quality. The software guides users to acquire 10 standard transthoracic echocardiography (TTE) views of the heart. The software monitors image quality continuously; it calculates the 6D geometric distance between the current probe location and the probe location anticipated to optimize the image, and applies corrective probe manipulations to improve image quality. The software recognizes an image of diagnostic quality by a quality meter. When the quality meter exceeds a certain threshold, the video clip gets captured automatically (auto-capture). If auto-capture is not achieved in 2 minutes, then the user has the option to save the best clip. The software thus converts a suboptimal image into one of diagnostic quality. The software also has the capability to automatically calculate ejection fraction (auto-EF) without calculating chamber volumes, with reasonable accuracy. The software is compatible with multiple machines.3

This approval was based on two studies, one of which is the study by Narang et al. subsequently published online in JAMA Cardiology on February 18, 2021.3 Narang et al. 
studied the use of this AI-guided software (Caption Guidance, Caption Health) guiding novice users (8 nurses) in conducting 240 scans capturing 10 transthoracic echocardiography views after minimal training (a 1-hour didactic lecture on familiarity with the ultrasound machine and AI software, followed by 9 practice scans on volunteers). These AI-guided scans were compared with expert sonographer scans of the same patients done on the same machine without AI guidance. The primary endpoints were qualitative estimation of LV size, LV function, RV size, and presence of nontrivial pericardial effusion. The FDA agreement required that at least 80% of scans be of acceptable quality for a particular assessment. Secondary endpoints included six more parameters: qualitative assessment of RV function; left atrium size; structural assessment of the aortic, mitral, and tricuspid valves; and qualitative assessment of IVC size. The scans were reviewed by a panel of five expert echocardiographers. They reported adequate quality in 98.8% of nurse scans for the primary endpoints—LV size, LV function, and presence of nontrivial pericardial effusion—and 92.2% adequacy for assessment of RV size; adequacy was above 90% for the secondary endpoints except IVC size (57.5%) and the tricuspid valve (83.3%). They concluded that the integration of AI with medical imaging would allow use by novice users and in settings that do not have access to ultrasound. The study was adequately powered but did not enroll intensive care or emergency room patients.

The study by Harish M Maheshwarappa compares the use of a handheld ultrasound machine (Vscan Extend™, General Electric Healthcare [GE]) with a traditional ultrasound machine (Vivid, GE). The handheld machine also has AI-based software for objectively calculating LVEF from end-systolic and end-diastolic volumes by Simpson's method (LVivo application). The users are trained intensivists in both groups, in contrast to the study by Narang et al. 
where users were novices. The handheld machine has a phased array probe (1.5–3.8 MHz) and a linear probe (3.5–8 MHz), whereas multiple probes were used in assessment by the conventional machine. Maheshwarappa et al.4 studied 96 patients admitted to the intensive care unit with COVID-19 infection. The primary endpoint was the time taken for the assessment of these patients by POCUS vis-à-vis the traditional method. The POCUS arm included scanning of lungs, heart, diaphragm, abdomen, and deep veins using the handheld AI-enabled ultrasound machine, while the traditional arm included clinical examination, review of ECG and CXR, plus an ultrasound of lungs, heart, and diaphragm by the traditional machine. As is obvious, no clinical examination or input from ECG and CXR was integrated when patients were examined by the handheld ultrasound machine; rather, the operator was blinded to the clinical findings. The median duration of bedside examination in the POCUS arm using handheld ultrasound was 9 (8.0–11.0) minutes, compared to 20 (17–22) minutes in the traditional arm—the latter included clinical examination and ECG and CXR interpretation (P < 0.001). They also studied the efficacy and safety profile of the handheld ultrasound machine compared with the traditional machine. The agreement between intensivists’ findings in both groups was perfect for LV systolic function with a Cohen kappa coefficient of 1.0, moderate for regional wall motion abnormality (RWMA) with a coefficient of 0.53 [0.37, 0.69], fair for inferior vena cava (IVC) collapsibility with a coefficient of 0.37 [0.25, 0.49], and poor for RV systolic function and pericardial effusion with coefficients of 0.07 and −0.01, respectively. The Cohen kappa coefficient showed good agreement for lung parameters between the two groups. Hence, the authors concluded that the use of the handheld ultrasound machine reduces the time to diagnosis and is efficacious and safe. 
They postulate that bedside ultrasound is a useful tool to help a primary physician or an intensivist screen the patient. If the diagnosis and management need expert advice and consultation, experts can be called over. This approach reduces the chances of spread of infection among healthcare workers and the burden on an exhausted healthcare system during the pandemic.

The study raises several questions. Is a handheld machine actually superior to the conventional machine for point-of-care ultrasound? Definitely so for basic ultrasound and initial screening involving qualitative assessments, but definitely not a substitute for the conventional machine. Handheld machines are limited by the absence of color (present in some machines, as in the one used in this study), pulse, and continuous-wave Doppler. Conventional machines rule the roost for objective measures or quantitative assessments, which become increasingly important during follow-up scans. The image quality of a conventional machine is undoubtedly superior to that of the handheld one despite technical advances in this field.5 Is the use of AI a silver lining? Should we be sold on AI-guided machines? Definitely not: AI is not the panacea for quality improvement in POCUS. The value addition of AI software to calculate LVEF by a volume-based method for use in intensive care is questionable. Firstly, the image acquisition needs to be proper to avoid foreshortening for this automation to work correctly; hence, only a trained user can acquire images for this purpose. Secondly, if a trained user is acquiring images, then LVEF by eyeballing is comparable to that obtained by Simpson's method; hence, the addition of AI software is not essential. 
Thirdly, visualization of most of the endocardial border is a prerequisite for the calculation of end-systolic and end-diastolic volumes by Simpson's method; critically ill patients, especially those on a ventilator, have poor echo windows, and hence border visualization is mostly suboptimal. AI algorithms are available that accurately calculate EF automatically without delineating borders or calculating volumes.6 Lastly, LVEF in critical care has its limitations—it is preload- and afterload-dependent, so changes in LVEF may represent changes in loading conditions and not changes in contractility. The calculation of stroke volume by LVOT VTI obtained from the apical five-chamber view and LVOT diameter obtained from a zoomed PLAX view with the aortic leaflets opened and parallel to the aortic wall in systole will be a superior target for hemodynamic assessment, and its automation by AI software in the future will definitely be a more lucrative option.

The most intriguing aspect of the study is the prescribed lack of clinical examination and interpretation of ECG and CXR in the handheld ultrasound group. The reduction in total duration of examination comes at the cost of no clinical examination and no laboratory adjuncts, something that defies good clinical practice and rational clinical decision-making and precludes human connection, compassion, and empathy, even if the patients are sedated and ventilated. We need to work in a way that POCUS does not lose its focus, and this may well be a reason that POCUS has universally not been shown to improve patient outcomes.

The introduction of handheld and AI-integrated machines is definitely a welcome step toward bringing technology to patients across the healthcare system. Like all technology, there needs to be training, credentialing, privileging, and regulation to ensure correct medical, legal, and ethical use. 
Handheld machines need to have a color and Doppler package, while AI algorithms need to build on image quality, view classification and segmentation of cardiovascular structures, measurement and quantification of morphological structure, and detection of abnormalities.7

Above all, we humans, as holders of handheld machines and users of AI software, need to decide about our imaging requirements and challenges, machine users, machine deployment, imaging protocols, and the anticipated diagnostic yield to choose an appropriate machine for a particular unit. What works best in the ER may not be suitable for the operating theater or surgical intensive care. So let us choose wisely and remain masters rather than becoming slaves to new technology.

20.
Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows quantitative, non-invasive measurement of blood perfusion, with great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high-resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs but only the subject's own anatomical prior, such as T1-weighted images, as network input. The neural network was trained from scratch in the denoising or reconstruction process, with noisy images or sparsely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo experiment data obtained from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve the image quality and accelerate the imaging speed of ASL imaging.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号