Similar Literature
20 similar records retrieved.
1.
税雪, 刘奇. 《中国临床康复》 2011, (30): 5607-5610
BACKGROUND: Analysis of vascular ultrasound images can reveal pathological changes in blood vessels. OBJECTIVE: To segment ultrasound images using region-growing theory and to analyze the relative displacement of boundary points. METHODS: The video was first split into frames to convert the dynamic sequence into static images; Gabor filtering and adaptive histogram quantization were applied to suppress ultrasound noise; the images were then segmented with the region-growing method; morphological opening and closing operations and the Sobel operator were used to detect image boundaries; finally, the two vessel boundaries were extracted. RESULTS AND CONCLUSION: Gabor filtering combined with region growing yielded good segmentation results. The region-growing method met real-time requirements in processing speed and showed a degree of generality. Analysis of the relative displacement curves of the boundary points reflected vascular lesions to some extent.
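For illustration, a minimal sketch of intensity-based region growing from a single seed point on a denoised grayscale frame; the function name, 4-connectivity, and tolerance value are assumptions rather than the authors' implementation:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` (row, col), adding 4-connected neighbors
    whose intensity stays within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```

The vessel boundary can then be obtained by applying an edge operator (e.g., Sobel) to the resulting binary mask.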

2.
Accurate pixel-level segmentation of histopathology images plays a critical role in the digital pathology workflow. The development of weakly supervised methods for histopathology image segmentation frees pathologists from time-consuming and labor-intensive work, opening up possibilities for further automated quantitative analysis of whole-slide histopathology images. As an effective subgroup of weakly supervised methods, multiple instance learning (MIL) has achieved great success in histopathology images. In this paper, we treat each pixel as an instance, so that the histopathology image segmentation task is transformed into an instance prediction task in MIL. However, the lack of relations between instances in MIL limits further improvement of segmentation performance. Therefore, we propose a novel weakly supervised method called SA-MIL for pixel-level segmentation in histopathology images. SA-MIL introduces a self-attention mechanism into the MIL framework, which captures global correlation among all instances. In addition, we use deep supervision to make the best use of the limited annotations available in the weakly supervised setting. Our approach compensates for the assumption in standard MIL that instances are independent of each other by aggregating global contextual information. We demonstrate state-of-the-art results compared to other weakly supervised methods on two histopathology image datasets. The high performance on both tissue and cell histopathology datasets indicates that the approach generalizes well and has potential for various applications in medical imaging.
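For illustration, the self-attention idea at the core of SA-MIL — each per-pixel instance attending to all others — can be sketched with plain scaled dot-product attention over instance embeddings; the array shapes, projection matrices, and toy data below are assumptions, not the authors' network:

```python
import numpy as np

def self_attention(instances, w_q, w_k, w_v):
    """instances: (N, d) per-pixel instance embeddings.
    w_q, w_k, w_v: (d, d_k) projection matrices (learned in practice)."""
    q = instances @ w_q                            # queries (N, d_k)
    k = instances @ w_k                            # keys    (N, d_k)
    v = instances @ w_v                            # values  (N, d_k)
    scores = q @ k.T / np.sqrt(k.shape[1])         # pairwise similarities (N, N)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all instances
    return weights @ v                             # context-aware instance features

# toy usage: 6 instances with 8-dimensional embeddings (placeholder data)
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))
w = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]
out = self_attention(x, *w)   # shape (6, 8)
```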

3.
We develop, automate and evaluate a calibration-free technique to estimate human carotid artery blood pressure from force-coupled ultrasound images. After acquiring images and force, we use peak detection to align the raw force signal with an optical flow signal derived from the images. A trained convolutional neural network selects a seed point within the carotid in a single image. We then employ a region-growing algorithm to segment and track the carotid in subsequent images. A finite-element deformation model is fit to the observed segmentation and force via a two-stage iterative non-linear optimization. The first-stage optimization estimates carotid artery wall stiffness parameters along with systolic and diastolic carotid pressures. The second-stage optimization takes the output parameters from the first optimization and estimates the carotid blood pressure waveform. Diastolic and systolic measurements are compared with those of an oscillometric brachial blood pressure cuff. In 20 participants, average absolute diastolic and systolic errors are 6.2 and 5.6 mm Hg, respectively, and correlation coefficients are r = 0.7 and r = 0.8, respectively. Force-coupled ultrasound imaging represents an automated, standalone ultrasound-based technique for carotid blood pressure estimation, which motivates its further development and expansion of its applications.
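For illustration, a minimal sketch of the described alignment step — detect peaks in the raw force trace and in an image-derived optical-flow trace, then shift the force signal so corresponding peaks coincide; the single-lag alignment and parameter values are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def align_by_peaks(force, flow, min_distance=20):
    """Estimate the lag (in samples) between a force signal and an
    optical-flow signal by matching their first detected peaks,
    then return the force signal shifted onto the flow time base."""
    f_peaks, _ = find_peaks(force, distance=min_distance)
    g_peaks, _ = find_peaks(flow, distance=min_distance)
    if len(f_peaks) == 0 or len(g_peaks) == 0:
        raise ValueError("no peaks found in one of the signals")
    lag = f_peaks[0] - g_peaks[0]       # positive: force lags the flow signal
    aligned = np.roll(force, -lag)
    return aligned, lag
```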

4.
Phase contrast, a noninvasive microscopy imaging technique, is widely used to capture time-lapse images to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle, phase contrast microscopy images contain artifacts such as the halo and shade-off that hinder image segmentation, a critical step in automated microscopy image analysis. Rather than treating phase contrast microscopy images as general natural images and applying generic image processing techniques on them, we propose to study the optical properties of the phase contrast microscope to model its image formation process. The phase contrast imaging system can be approximated by a linear imaging model. Based on this model and input image properties, we formulate a regularized quadratic cost function to restore artifact-free phase contrast images that directly correspond to the specimen's optical path length. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on microscopy image sequences with thousands of cells captured over several days. We also demonstrate that accurate restoration lays the foundation for high performance in cell detection and tracking.
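For illustration, a quadratic cost of the form ||h*f − g||² + λ||∇²f||² has a closed-form minimizer in the Fourier domain when the point-spread function h is known and boundaries are treated as periodic. The sketch below shows that simplified restoration followed by thresholding; it is an illustration of quadratic regularization, not the authors' exact cost function:

```python
import numpy as np

def restore_quadratic(g, psf, lam=0.05):
    """Minimize ||h*f - g||^2 + lam*||laplacian(f)||^2 in the Fourier domain.
    psf: point-spread function, same shape as g, centered in the array."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    lap = np.zeros(g.shape)                      # discrete Laplacian kernel
    lap[0, 0] = -4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1.0
    L = np.fft.fft2(lap)
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(F))

# segmentation of the restored image is then a simple threshold:
# mask = restore_quadratic(img, psf) > threshold
```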

5.
Accurate and automatic segmentation of individual teeth and root canals from cone-beam computed tomography (CBCT) images is an essential but challenging step for dental surgical planning. In this paper, we propose a novel framework, which consists of two neural networks, DentalNet and PulpNet, for efficient, precise, and fully automatic tooth instance segmentation and root canal segmentation from CBCT images. We first use the proposed DentalNet to achieve tooth instance segmentation and identification. Then, the region of interest (ROI) of the affected tooth is extracted and fed into the PulpNet to obtain precise segmentation of the pulp chamber and the root canal space. The two networks are trained by multi-task feature learning, evaluated on two clinical datasets respectively, and achieve superior performance to several competing methods. In addition, we incorporate our method into an efficient clinical workflow to improve the surgical planning process. In two clinical case studies, our workflow took only 2 min instead of 6 h to obtain the 3D model of the tooth and root canal for surgical planning, resulting in satisfactory outcomes in difficult root canal treatments.

6.
This study aimed to show segmentation of the heart muscle in pediatric echocardiographic images as a preprocessing step for tissue analysis. Transthoracic image sequences (2-D and 3-D volume data, both derived in radiofrequency format, directly after beam forming) were registered in real time from four healthy children over three heart cycles. Three preprocessing methods, based on adaptive filtering, were used to reduce the speckle noise for optimizing the distinction between blood and myocardium, while preserving the sharpness of edges between anatomical structures. The filtering kernel size was linked to the local speckle size and the speckle noise characteristics were considered to define the optimal filter in one of the methods. The filtered 2-D images were thresholded automatically as a first step of segmentation of the endocardial wall. The final segmentation step was achieved by applying a deformable contour algorithm. This segmentation of each 2-D image of the 3-D+time (i.e., 4-D) datasets was related to that of the neighboring images in both time and space. By thus incorporating spatial and temporal information of 3-D ultrasound image sequences, an automated method using image statistics was developed to perform 3-D segmentation of the heart muscle.
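For illustration, the statistics-driven adaptive smoothing described here resembles a classic Lee-type speckle filter, in which the smoothing strength at each pixel depends on the ratio of an estimated noise variance to the local variance; the kernel size and noise estimate below are assumptions, not the authors' filters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, size=7, noise_var=None):
    """Adaptive speckle smoothing: strong smoothing in homogeneous regions,
    little smoothing near edges, driven by local mean/variance statistics."""
    img = image.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img * img, size)
    local_var = local_sq_mean - local_mean ** 2
    if noise_var is None:
        noise_var = np.median(local_var)        # crude global noise estimate
    gain = np.clip(local_var - noise_var, 0, None) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)
```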

7.
Clinical diagnosis of the pediatric musculoskeletal system relies on the analysis of medical imaging examinations. In the medical image processing pipeline, semantic segmentation using deep learning algorithms enables the automatic generation of patient-specific three-dimensional anatomical models, which are crucial for morphological evaluation. However, the scarcity of pediatric imaging resources may result in reduced accuracy and generalization performance of individual deep segmentation models. In this study, we propose a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over the union of multiple datasets arising from distinct parts of the anatomy. Unlike previous approaches, we simultaneously consider multiple intensity domains and segmentation tasks to overcome the inherent scarcity of pediatric data while leveraging shared features between imaging datasets. To further improve generalization capabilities, we employ a transfer learning scheme from natural image classification, along with a multi-scale contrastive regularization aimed at promoting domain-specific clusters in the shared representations, and multi-joint anatomical priors to enforce anatomically consistent predictions. We evaluate our contributions on bone segmentation using three scarce pediatric imaging datasets of the ankle, knee, and shoulder joints. Our results demonstrate that the proposed approach outperforms individual, transfer, and shared segmentation schemes on the Dice metric by statistically significant margins. The proposed model brings new perspectives towards intelligent use of imaging resources and better management of pediatric musculoskeletal disorders.

8.
OBJECTIVE: To propose a method for automatically segmenting the lung parenchyma from CT images. METHODS: First, a gray-level threshold is used to extract the lung region from the background; second, the extracted lung region is further processed to remove the main airways and smooth the lung boundary; third, mathematical morphology is used to separate the left and right lungs. RESULTS: The method was applied to chest CT image series from three patients, demonstrating that it can automatically select an appropriate threshold for CT images of different slice thicknesses, remove isolated trachea/bronchi, and finally extract the complete lung parenchyma. CONCLUSION: The proposed method provides a basis for further identification and labeling of pulmonary nodules by computer-aided diagnosis.
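For illustration, a minimal single-slice sketch of the threshold-plus-morphology pipeline described above; the −400 HU threshold, two-component selection, and 5×5 structuring element are assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def extract_lungs(ct_slice, hu_threshold=-400):
    """Rough lung mask from a single CT slice in Hounsfield units:
    threshold, discard regions touching the border (outside-body air),
    keep the two largest components, then smooth with morphological closing."""
    air = ct_slice < hu_threshold
    labels, _ = ndimage.label(air)
    border_labels = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    for b in border_labels:
        air[labels == b] = False
    labels, n = ndimage.label(air)
    if n == 0:
        return air
    sizes = ndimage.sum(air, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1           # two largest = left/right lung
    mask = np.isin(labels, keep)
    return ndimage.binary_closing(mask, structure=np.ones((5, 5)))
```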

9.
Automatic segmentation and measurement of key anatomical structures in echocardiography is critical for the subsequent extraction of clinical parameters. However, boundary blur, speckle noise, and other factors increase the difficulty of fully automatically segmenting 2D ultrasound images. Previous research has addressed this challenge using convolutional neural networks (CNNs), which fail to consider global contextual information and long-range dependencies. To further improve the quantitative analysis of pediatric echocardiography, this paper proposes an interactive fusion transformer network (IFT-Net), which achieves bidirectional fusion between local features and global context information by constructing interactive learning between a convolution branch and a transformer branch. First, we construct a dual-attention pyramid transformer (DPT) branch to model long-range dependencies across spatial positions and channels and to enhance the learning of global context information. Second, we design a bidirectional interactive fusion (BIF) unit that fuses the local and global features interactively, maximizes their preservation, and refines the segmentation. Finally, we measure the clinical anatomical parameters through key-point positioning. On the parasternal short-axis (PSAX) view of the heart base from pediatric echocardiography, we segment and quantify the right ventricular outflow tract (RVOT) and aorta (AO) with promising results, indicating potential for clinical application. The code is publicly available at: https://github.com/Zhaocheng1/IFT-Net.

10.
Segmentation of the fetal head from three-dimensional (3-D) ultrasound images is a critical step in the quantitative measurement of fetal craniofacial structure. However, two main issues complicate segmentation: fuzzy boundaries and large variations in pose and shape among different ultrasound images. In this article, we propose a new registration-based method for automatically segmenting the fetal head from 3-D ultrasound images. The proposed method first detects the eyes based on Gabor features to identify the pose of the fetus in the image. Then, a reference model, which is constructed from a fetal phantom and contains prior knowledge of head shape, is aligned to the image via feature-based registration. Finally, 3-D snake deformation is utilized to improve the boundary fit between the model and the image. Four clinically useful parameters, including inter-orbital diameter (IOD), bilateral orbital diameter (BOD), occipital frontal diameter (OFD) and bilateral parietal diameter (BPD), are measured based on the results of the eye detection and head segmentation. Ultrasound volumes from 11 subjects were used to validate the accuracy of the method. Experimental results showed that the proposed method was able to overcome the aforementioned difficulties and achieved good agreement between automatic and manual measurements.

11.
In this paper, an automatic atlas-based segmentation algorithm for 4D cardiac MR images is proposed. The algorithm is based on a 4D extension of the expectation-maximisation (EM) algorithm. The EM algorithm uses a 4D probabilistic cardiac atlas to estimate the initial model parameters and to integrate a priori information into the classification process. The probabilistic cardiac atlas was constructed from manual segmentations of 3D cardiac image sequences of 14 healthy volunteers. It provides space- and time-varying probability maps for the left and right ventricles, the myocardium, and background structures such as the liver, stomach, lungs and skin. In addition to using the probabilistic cardiac atlas as a priori information, the segmentation algorithm incorporates spatial and temporal contextual information by using 4D Markov random fields. After classification, the largest connected component of each structure is extracted using a global connectivity filter, which improves the results significantly, especially for the myocardium. The method was validated against manual segmentations, and the correlation between manual and automatic segmentation was computed on 249 3D volumes. We used a 'leave-one-out' test in which the image set to be segmented was not used in the construction of its corresponding atlas. Results show that the procedure can successfully segment the left ventricle (LV) (r = 0.96), myocardium (r = 0.92) and right ventricle (r = 0.92). In addition, 4D images from 10 patients with hypertrophic cardiomyopathy were also manually and automatically segmented, yielding good correlation in the volumes of the LV (r = 0.93) and myocardium (r = 0.94) when the atlas constructed from volunteers is blurred.
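For illustration, the core of atlas-guided EM classification — spatially varying atlas priors multiplied by Gaussian likelihoods, iterated with parameter re-estimation — can be sketched for a generic K-class problem without the 4D MRF term; the arrays, initialization, and iteration count are assumptions:

```python
import numpy as np

def em_atlas(intensities, atlas_prior, n_iter=20):
    """intensities: (N,) voxel intensities; atlas_prior: (N, K) spatially
    varying class priors from the probabilistic atlas (rows sum to 1).
    Returns posterior responsibilities (N, K) and class means/variances."""
    resp = atlas_prior.copy()                      # initialize with the atlas
    for _ in range(n_iter):
        # M-step: class parameters from current responsibilities
        weights = resp.sum(axis=0) + 1e-12
        mu = (resp * intensities[:, None]).sum(axis=0) / weights
        var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / weights + 1e-12
        # E-step: posterior ∝ atlas prior × Gaussian likelihood
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = atlas_prior * lik
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
    return resp, mu, var
```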

12.
We evaluate the accuracy of a vascular segmentation algorithm that uses continuity in the maximum intensity projection (MIP) depth Z-buffer as a pre-processing step to generate a list of 3D seed points for further segmentation. We refer to the algorithm as Z-buffer segmentation (ZBS). The pre-processing of the MIP Z-buffer is based on smoothness, measured using the minimum chi-square value of a least-squares fit. Points in the Z-buffer with chi-square values below a selected threshold are used as seed points for 3D region growing. The ZBS algorithm couples spatial continuity information with intensity information to create a simple yet accurate segmentation algorithm. We examine the dependence of the segmentation on various parameters of the algorithm. Performance is assessed in terms of the inclusion/exclusion of vessel/background voxels in the segmentation of intracranial time-of-flight MRA images. The evaluation is based on 490,256 voxels from 14 patients that were classified by an observer. ZBS performance was compared with simple thresholding and with segmentation based on vessel enhancement filtering. The ZBS segmentation was only weakly dependent on the parameters of the initial MIP image generation, indicating the robustness of this approach. Region growing based on Z-buffer-generated seeds was advantageous compared with simple thresholding. The ZBS algorithm provided segmentation accuracies similar to those obtained with the vessel enhancement filter. ZBS performance was notably better than the filter-based segmentation for aneurysms, where the assumptions of the filter were violated. As currently implemented, the algorithm slightly under-segments the intracranial vasculature.
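For illustration, a simplified sketch of the seed-generation idea: compute the MIP and its depth (Z) buffer, use the local variance of the Z-buffer as a stand-in for the chi-square smoothness measure, and keep bright, smooth pixels as 3D seed points; the smoothness proxy and thresholds are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def zbuffer_seeds(volume, smooth_thresh=4.0, intensity_pct=95, win=5):
    """volume: 3D array (z, y, x). Returns a list of (z, y, x) seed points."""
    mip = volume.max(axis=0)                        # maximum intensity projection
    zbuf = volume.argmax(axis=0).astype(float)      # depth of the maximum
    local_mean = uniform_filter(zbuf, win)
    local_sq = uniform_filter(zbuf ** 2, win)
    roughness = local_sq - local_mean ** 2          # local variance of the depth map
    bright = mip > np.percentile(mip, intensity_pct)
    smooth = roughness < smooth_thresh
    ys, xs = np.nonzero(bright & smooth)
    return [(int(zbuf[y, x]), y, x) for y, x in zip(ys, xs)]
```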

13.
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and the segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance feature based on atlas labelmaps that is used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows greater robustness to registration errors.
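For illustration, the fusion step can be sketched as confidence-weighted voting, where each registered atlas contributes its label at a voxel with a per-atlas, per-voxel weight; estimating those weights is where the paper's supervised confidence model comes in, and the arrays and uniform default below are assumptions:

```python
import numpy as np

def fuse_labels(warped_labels, confidences=None, n_classes=None):
    """warped_labels: (A, N) integer labels from A warped atlases at N voxels.
    confidences: (A, N) per-atlas, per-voxel weights (uniform if None).
    Returns the fused label per voxel by weighted voting."""
    A, N = warped_labels.shape
    if confidences is None:
        confidences = np.ones((A, N))
    if n_classes is None:
        n_classes = int(warped_labels.max()) + 1
    votes = np.zeros((n_classes, N))
    for a in range(A):
        # accumulate each atlas's weight into the bin of its proposed label
        np.add.at(votes, (warped_labels[a], np.arange(N)), confidences[a])
    return votes.argmax(axis=0)
```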

14.
Optimal assessment of myocardial perfusion with contrast echocardiography by using B-mode imaging often requires image alignment and background subtraction, which are time consuming and need extensive expertise. Flash echocardiography is a new technique in which primary images are gated to the electrocardiogram and secondary images are obtained by transmitting ultrasound pulses in rapid succession after each primary image. Myocardial opacification is seen in the primary image and not in the secondary images because of ultrasound-induced bubble destruction. Because the interval between the primary and first few secondary images is very short, cardiac motion between these images should be minimal. Therefore we hypothesized that 1 or more secondary images could be subtracted from the primary image without the need for image alignment. The ability of ultrasound to destroy microbubbles was assessed by varying the sampling rate, line density, and mechanical index in 6 open-chest dogs. The degree of translation between images was quantified in the x and y directions with the use of computer cross-correlation. At sampling rates of 158 Hz or less and a mechanical index of more than 0.6, videointensity rapidly declined to baseline levels by 25 ms. Significant translation between images was noted only at intervals of more than 112 ms. It is concluded that flash echocardiography can be used for digital subtraction of baseline from contrast-enhanced B-mode images without image alignment. Background subtraction is therefore feasible on-line, potentially eliminating the need for off-line image processing in the future. (J Am Soc Echocardiogr 1999;12:85-93.)
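For illustration, the two operations described — quantifying inter-frame translation by cross-correlation and subtracting a post-destruction frame from the primary frame — can be sketched with FFT-based correlation; the function names and sign convention are assumptions:

```python
import numpy as np

def translation_xy(primary, secondary):
    """Estimate the integer (dy, dx) offset that, applied to `secondary`
    via np.roll, best aligns it with `primary` (FFT cross-correlation)."""
    f = np.fft.fft2(primary - primary.mean())
    g = np.fft.fft2(secondary - secondary.mean())
    corr = np.real(np.fft.ifft2(f * np.conj(g)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=int)
    dims = np.array(corr.shape)
    shift[shift > dims // 2] -= dims[shift > dims // 2]   # wrap to signed offsets
    return tuple(shift)

def contrast_signal(primary, secondary):
    """Digital subtraction of a post-destruction frame from the primary frame."""
    return np.clip(primary.astype(float) - secondary.astype(float), 0, None)
```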

15.
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, due to the presence of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and challenging to segment. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shaped neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images. We also provide public access to the collected dataset, which may contribute to research in the field of biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform those from other state-of-the-art techniques.
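For reference, the Dice score reported above compares a predicted label map with a ground truth as 2|A∩B| / (|A| + |B|); a minimal per-class implementation (label conventions assumed):

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for one class label between two integer label maps."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0                  # class absent in both maps: define as perfect
    return 2.0 * np.logical_and(p, t).sum() / denom

def mean_dice(pred, target, labels):
    """Average Dice over a set of class labels (e.g., nine layers + optic disc)."""
    return float(np.mean([dice_score(pred, target, c) for c in labels]))
```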

16.
Image registration aims to find geometric transformations that align images. Most algorithmic and deep learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric that compares the aligned images and a regularization term that ensures smoothness of the transformation. Existing similarity metrics like Euclidean distance or normalized cross-correlation focus on aligning pixel intensity values or correlations, which causes difficulties with low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration, which instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised approach extracting features with an auto-encoder, and a semi-supervised approach using supplemental segmentation data. We validate the semantic similarity metric using both deep-learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.
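For illustration, the normalized cross-correlation baseline that the proposed metric is contrasted with can be written in a few lines; a learned semantic variant would apply the same correlation to feature maps rather than raw intensities. Names are placeholders:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two images; 1.0 means identical
    up to affine intensity changes, higher is more similar."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))
```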

17.
Speckle noise negatively affects medical ultrasound image shape interpretation and boundary detection. Speckle removal filters are widely used to selectively remove speckle noise without destroying important image features, so as to enhance object boundaries. In this article, a fully automatic bilateral filter tailored to ultrasound images is proposed. The edge preservation property is obtained by embedding noise statistics in the filter framework. Consequently, the filter is able to handle the multiplicative noise behavior by modulating the smoothing strength with respect to local statistics. The in silico experiments clearly showed that the speckle reducing bilateral filter (SRBF) has superior performance to most state-of-the-art filtering methods. The filter is tested on 50 in vivo US images and its influence on a segmentation task is quantified. The results using SRBF-filtered datasets show superior performance compared with oriented anisotropic diffusion-filtered images. This improvement is due to the adaptive support of SRBF and the embedded noise statistics, yielding more homogeneous smoothing. SRBF results in a fully automatic, fast and flexible algorithm potentially suitable for a wide range of speckle noise sizes and for different medical applications (IVUS, B-mode, 3-D matrix array US). (E-mail: balocco.simone@gmail.com)
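For illustration, a brute-force bilateral filter in which the range (intensity) scale is tied to a global noise estimate, loosely mirroring the idea of embedding noise statistics in the filter; the Gaussian weights, window size, and noise estimate are assumptions, not the SRBF formulation:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, noise_scale=1.0):
    """Edge-preserving smoothing: each output pixel is a weighted mean of its
    neighborhood, with weights combining spatial distance and intensity
    difference; the intensity scale is derived from image statistics."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    sigma_r = noise_scale * np.std(img) + 1e-8      # range scale from a noise proxy
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```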

18.
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of the acquired images, which often suffer from missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images.

19.

Purpose

Existing computer-aided detection schemes for lung nodule detection require a large number of calculations and tens of minutes per case, leaving a large gap between image acquisition time and nodule detection time. In this study, we propose a fast scheme for detecting lung nodules in chest CT images using a cylindrical nodule-enhancement filter, with the aim of improving the diagnostic workflow of CT examinations.

Methods

The proposed detection scheme involves segmentation of the lung region, preprocessing, nodule enhancement, further segmentation, and false-positive (FP) reduction. For nodule enhancement, our method employs a cylindrical shape filter to reduce the number of calculations. FPs among the nodule candidates are reduced using a support vector machine with seven characteristic parameters (see the sketch after this entry).

Results

The detection performance and speed were evaluated experimentally using the publicly available Lung Image Database Consortium image database. A 5-fold cross-validation demonstrates that our method correctly detects 80% of nodules with 4.2 FPs per case, and the proposed method is 4–36 times faster than existing methods.

Conclusion

Detection performance and speed indicate that our method may be useful for fast detection of lung nodules in CT images.
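For illustration, a minimal sketch of the FP-reduction stage as described in the Methods: train a support vector machine on per-candidate feature vectors (seven features, unspecified in the abstract) and keep candidates predicted to be nodules; the data, kernel, and parameters below are placeholders:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# candidate_features: (n_candidates, 7) characteristic parameters per candidate
# labels: 1 = true nodule, 0 = false positive (from annotated training cases)
rng = np.random.default_rng(0)
candidate_features = rng.normal(size=(200, 7))           # placeholder data
labels = (candidate_features[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(candidate_features, labels)

# apply to new candidates: keep only those predicted to be nodules
new_candidates = rng.normal(size=(20, 7))                # placeholder data
kept = new_candidates[clf.predict(new_candidates) == 1]
```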

20.