Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper proposes a new medical image compression algorithm based on a self-organizing map (SOM) network. Building on existing compression algorithms, it modifies how the learning-rate factor and the winning neuron are computed: the learning-rate factor takes the selection frequency of the winning neuron into account, and the weight update combines the influence of the winning node and its neighbouring nodes. The improved SOM uses the winning frequency of a node to reflect the error between the winning node and the weight-adjusted input vector, and scales the weight update by the ratio of the current weight change to the previous one. Experimental results show that the method improves both the coding efficiency and the reconstruction quality of medical images.
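The frequency-dependent learning rate described above is closely related to frequency-sensitive competitive learning. The sketch below implements that simplified variant for codebook design on image blocks; all sizes, rates, and the toy data are illustrative assumptions, not the paper's exact update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def fscl_codebook(blocks, n_codes=8, epochs=5, lr0=0.5):
    """Frequency-sensitive competitive learning: each code vector's win
    count scales its distance, so over-used codes win less often and
    under-used codes are pulled toward unclaimed parts of the data."""
    codes = blocks[rng.choice(len(blocks), n_codes, replace=False)].astype(float)
    wins = np.ones(n_codes)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)        # decaying learning rate
        for x in blocks:
            d = np.sum((codes - x) ** 2, axis=1) * wins  # frequency-weighted distance
            j = int(np.argmin(d))
            wins[j] += 1
            codes[j] += lr * (x - codes[j])       # move winner toward input
    return codes

# toy "image": 4x4-pixel blocks drawn from two intensity clusters
blocks = np.vstack([rng.normal(0.2, 0.02, (50, 16)),
                    rng.normal(0.8, 0.02, (50, 16))])
codes = fscl_codebook(blocks, n_codes=4)
```

Compression then amounts to storing, per block, only the index of its nearest code vector plus the small codebook.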

2.
BACKGROUND: Maximum-likelihood expectation maximization (MLEM) is the classic algorithm for positron emission tomography (PET) image reconstruction; even with limited data it achieves better resolution and noise characteristics than filtered back-projection. MLEM is, however, unstable: as the iteration count grows, image noise increases. OBJECTIVE: To address the noise problem of MLEM, a MAP reconstruction algorithm constrained by an exponential prior distribution is proposed. METHODS: The exponential prior replaces the Gaussian prior of conventional MAP reconstruction, and reconstruction quality is judged by signal-to-noise ratio and normalized mean squared error. RESULTS AND CONCLUSION: Experiments show that the algorithm not only suppresses noise but also preserves the edges of the reconstructed image without over-smoothing.
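The MLEM iteration, and a MAP variant of it, can be sketched on a toy system. The one-step-late (OSL) correction below is a common way to fold a prior's gradient into the MLEM update and is a stand-in, not necessarily the paper's exact scheme; the 2-voxel geometry, the parameter values, and the constant-gradient form of the exponential prior (p(x) ∝ exp(−x/θ)) are illustrative assumptions.

```python
import numpy as np

def mlem_osl(A, y, n_iter=50, beta=0.0, prior_grad=None):
    """One-step-late MAP-EM: the classic multiplicative MLEM update,
    with the prior's gradient folded into the sensitivity denominator.
    beta=0 recovers plain MLEM."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / forward-projected
        denom = sens + (beta * prior_grad(x) if prior_grad else 0.0)
        x = x * (A.T @ ratio) / np.maximum(denom, 1e-12)
    return x

# toy 1D "scanner": 3 detector bins, 2 voxels, noise-free projections
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = mlem_osl(A, y, n_iter=500)            # converges to x_true here
```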

3.
To address the poor noise suppression and slow convergence of traditional iterative reconstruction algorithms in positron emission tomography, a quadratic smoothing prior penalty term is introduced into the least-squares algorithm and combined with ordered-subsets acceleration, yielding an ordered-subsets penalized least-squares algorithm for PET image reconstruction. Experimental results show that, compared with maximum-likelihood estimation and related algorithms, the ordered-subsets penalized least-squares algorithm not only suppresses noise effectively and reconstructs higher-quality images, but also converges considerably faster.

4.
Current work on electrical impedance tomography (EIT) in China focuses on application-oriented basic research for specific targets, and systematic evaluation methods for EIT are one of the key problems in moving from methodological foundations to applied and clinical research. By introducing a structural deviation measure, this paper proposes a gradient-error-based evaluation function for EIT image reconstruction. The evaluation experiments used a 16-electrode saline tank 10 cm in diameter, with adjacent excitation and adjacent measurement, images reconstructed by the equipotential-line back-projection algorithm, on an EIT simulation platform established in our laboratory. Comparative simulations against the traditional evaluation parameters of image sharpness and cross-entropy show that the structural deviation parameter is superior. The proposed Sobel-operator-based gradient-error method reflects edge changes and tissue texture structure well, performs well as an EIT image-quality measure, and captures EIT reconstruction quality more effectively.
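The Sobel gradient-error idea can be sketched directly: compute Sobel gradient-magnitude maps for a reference and a reconstruction and compare them. The toy images and the mean-absolute-difference pooling below are illustrative; the paper's structural deviation measure may pool the gradient error differently.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_valid(img, k):
    """Naive 3x3 'valid' correlation, to keep the sketch dependency-free."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def gradient_error(ref, rec):
    """Mean absolute difference between Sobel gradient magnitudes:
    small when the reconstruction preserves the reference's edges."""
    g_ref = np.hypot(conv2_valid(ref, SOBEL_X), conv2_valid(ref, SOBEL_Y))
    g_rec = np.hypot(conv2_valid(rec, SOBEL_X), conv2_valid(rec, SOBEL_Y))
    return np.mean(np.abs(g_ref - g_rec))

ref = np.zeros((16, 16)); ref[:, 8:] = 1.0            # sharp vertical edge
blurred = np.zeros_like(ref)
blurred[:, 7:10] = [0.25, 0.5, 0.75]                  # smeared edge
blurred[:, 10:] = 1.0
```

A perfect reconstruction scores 0; any edge degradation raises the score.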

5.
In the tasks of image representation, recognition and retrieval, a 2D image is usually transformed into a long 1D vector and modelled as a point in a high-dimensional vector space. This vector-space model brings much convenience and many advantages. However, it also leads to problems such as the Curse of Dimensionality and the Small Sample Size problem, and thus poses a series of challenges: for example, how to deal with numerical instability in image recognition; how to improve accuracy while lowering computational complexity and storage requirements in image retrieval; and how to enhance image quality while reducing transmission time in image transmission. In this paper, these problems are solved, to some extent, by the proposed Generalized 2D Principal Component Analysis (G2DPCA). G2DPCA overcomes the limitations of the recently proposed 2DPCA (Yang et al., 2004) in the following respects: (1) the essence of 2DPCA is clarified, and a theoretical proof of why 2DPCA is better than Principal Component Analysis (PCA) is given; (2) 2DPCA often needs many more coefficients than PCA to represent an image, so a Bilateral-projection-based 2DPCA (B2DPCA) is proposed to remedy this drawback; (3) a Kernel-based 2DPCA (K2DPCA) scheme is developed and the relationship between K2DPCA and KPCA (Scholkopf et al., 1998) is explored. Experimental results in face image representation and recognition show the excellent performance of G2DPCA.
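The bilateral projection behind B2DPCA can be sketched in NumPy: a left and a right projection matrix are taken from the row- and column-direction scatter matrices of the image set, shrinking each m×n image to a small k×k coefficient matrix (addressing the coefficient-count drawback noted in point (2)). The sizes, data, and exact scatter definitions below are illustrative assumptions.

```python
import numpy as np

def b2dpca(images, k_row=2, k_col=2):
    """Bilateral 2DPCA sketch: right projection R from the column-direction
    scatter (as in classic 2DPCA), left projection L from the row-direction
    scatter; each image A is compressed to L.T @ A @ R."""
    mean = images.mean(axis=0)
    X = images - mean
    G_col = sum(a.T @ a for a in X)       # column-direction scatter
    G_row = sum(a @ a.T for a in X)       # row-direction scatter
    _, R = np.linalg.eigh(G_col)          # eigh: ascending eigenvalues
    _, L = np.linalg.eigh(G_row)
    R = R[:, ::-1][:, :k_col]             # keep top eigenvectors
    L = L[:, ::-1][:, :k_row]
    return L, R, mean

rng = np.random.default_rng(1)
imgs = rng.normal(size=(20, 8, 8))
L, R, mean = b2dpca(imgs, k_row=3, k_col=3)
# 3x3 coefficients per image instead of the original 8x8
coeffs = np.stack([L.T @ (a - mean) @ R for a in imgs])
```

Reconstruction is the transpose operation, `L @ C @ R.T + mean`.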

6.
The digital reconstruction of single neurons from 3D confocal microscopic images is an important tool for understanding neuron morphology and function. However, accurate automatic neuron reconstruction remains a challenging task due to varying image quality and the complexity of the neuronal arborisation. Targeting the common challenges of neuron tracing, we propose a novel automatic 3D neuron reconstruction algorithm, named Rivulet, which is based on multi-stencils fast marching and iterative back-tracking. The proposed Rivulet algorithm is capable of tracing discontinuous areas without being interrupted by densely distributed noise. By evaluating the proposed pipeline on the data provided by the DIADEM challenge and the recent BigNeuron project, Rivulet is shown to be robust to challenging microscopic image stacks. We discuss the algorithm design in technical detail, including the relationships between the proposed algorithm and other state-of-the-art neuron tracing algorithms.

7.
Sectioning tissue for optical microscopy often introduces distortions into the resulting sections that make 3D reconstruction difficult. Here we present an automatic method for producing a smooth 3D volume from distorted 2D sections in the absence of any undistorted reference. The method is based on pairwise elastic image warps between successive tissue sections, which can be computed by 2D image registration. Using a Gaussian filter, an average warp is computed for each section from the pairwise warps in a group of its neighboring sections. The average warps deform each section to match its neighboring sections, thus creating a smooth volume where corresponding features on successive sections lie close to each other. The proposed method can be used with any existing 2D image registration method for 3D reconstruction. In particular, we present a novel image warping algorithm based on dynamic programming that extends Dynamic Time Warping from 1D speech recognition to compute pairwise warps between high-resolution 2D images. The warping algorithm efficiently computes a restricted class of 2D local deformations that are characteristic of successive tissue sections. Finally, a validation framework is proposed and applied to evaluate the quality of reconstruction using both real sections and a synthetic volume.
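The 1D Dynamic Time Warping that the warping algorithm generalizes is itself a small dynamic program. The sketch below shows the classic cumulative-cost recursion; the paper's contribution, a restricted 2D extension of this idea, is not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Classic DTW: D[i, j] is the minimal cumulative cost of aligning
    a[:i] with b[:j], allowing match, insertion, and deletion moves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # deletion
                                 D[i, j - 1],      # insertion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Backtracking through `D` recovers the warp path, i.e. which samples of one sequence align with which samples of the other.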

8.
BACKGROUND: Because every human body is unique, the mismatch between standard prostheses and a patient's bone makes a good fit difficult. Computer-aided design and manufacture of individualized prostheses overcomes the drawbacks of other prostheses, can effectively extend the service life and quality of artificial joints, and may address the problem of revision surgery; domestic research is still at an early stage. OBJECTIVE: Based on 3D reconstruction from CT images, to explore the role of computer-aided design of individualized femoral prostheses in improving the fit between the prosthesis and the diseased bone. METHODS: The subject was one healthy male volunteer with hip disease excluded. The middle and upper femur was scanned at a 3 mm slice thickness with a GE Speed Light 16-slice spiral CT, yielding 2D CT images, which were converted to bmp format with in-house data-format-conversion software. The bitmaps were preprocessed and vectorized with Mimics 8.1 to extract the inner and outer femoral contours, which were then imported into Mimics 8.1 and the 3D reverse-engineering software Rapidform 2004 to generate characteristic contour curves and reconstruct a 3D femur model. The dxf file of the characteristic contour curves of the femoral medullary cavity was imported into the CAD package SolidWorks 2004, and the individualized femoral prosthesis was designed on the basis of this medullary-cavity contour. RESULTS AND CONCLUSION: The in-house conversion software achieved vector conversion of the CT image information. Reverse engineering from the 2D CT images yielded an accurate 3D solid model of the inner and outer femoral contours. Combining reverse engineering with forward computer-aided design produced a well-matched individualized femoral prosthesis. This suggests that reverse engineering plus CAD offers an effective and feasible route for developing individualized prostheses that fit the diseased bone well, which can prevent prosthesis loosening and improve long-term stability.

9.
We propose a new scheme for extending generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric at little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which suggests a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the iris data from the UCI repository. Afterwards, the method is compared with several well-known algorithms that determine the intrinsic data dimension on real-world satellite image data.
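A heuristic sketch of the relevance-weighted distance idea (not the exact stochastic-gradient update of GLVQ): prototypes follow a simple LVQ1 rule, and per-dimension relevance factors shrink where even the correct winner sees large squared error, so noisy dimensions are down-weighted. The data, learning rates, and the relevance update rule below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rlvq(X, y, n_epochs=30, lr_w=0.05, lr_l=0.002):
    """Relevance LVQ sketch: winner-take-all prototype updates plus an
    adaptive, normalised relevance vector over input dimensions."""
    protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
    lam = np.ones(X.shape[1]) / X.shape[1]      # uniform relevances
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            d = np.sum(lam * (protos - x) ** 2, axis=1)  # weighted distance
            j = int(np.argmin(d))
            s = 1.0 if j == c else -1.0          # attract correct, repel wrong
            protos[j] += s * lr_w * (x - protos[j])
            lam -= s * lr_l * (x - protos[j]) ** 2
            lam = np.clip(lam, 1e-6, None)
            lam /= lam.sum()                     # keep relevances normalised
    return protos, lam

# dimension 0 separates the classes; dimension 1 is pure noise
X0 = np.column_stack([rng.normal(0.0, 0.1, 100), rng.uniform(-1, 1, 100)])
X1 = np.column_stack([rng.normal(1.0, 0.1, 100), rng.uniform(-1, 1, 100)])
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
protos, lam = train_rlvq(X, y)
```

After training, the small relevance of the noise dimension is exactly the pruning signal the abstract describes.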

10.
Images can be distorted in the real world by many sources, such as faulty sensors, artifacts generated by compression algorithms, defocus, faulty lenses, and poor lighting conditions. The human visual system can judge the quality of an image simply by looking at it, but developing an algorithm to assess image quality is very challenging, since an image can be corrupted by different types of distortion whose statistical properties are dissimilar. The main objective of this article is to propose an image quality assessment technique for images corrupted by blurring and compression-based artifacts. Machine-learning-based approaches have been used in recent times for this task. Images can be analyzed in different transform domains, such as the discrete cosine transform, wavelet, curvelet, and singular value decomposition domains, which generate sparse matrices. In this paper, we propose no-reference image quality assessment algorithms for images corrupted by blur and different compression algorithms, using sparsity-based features computed from these domains and pooled by support vector regression. The proposed model has been tested on three standard image quality assessment datasets (LIVE, CSIQ, and TID2013); correlation with subjective human opinion scores is presented, along with a comparative study against state-of-the-art quality measures. Experiments on these standard databases show that the results outperform existing methods.
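One concrete example of a sparsity-style feature from the singular value decomposition domain: the fraction of singular-value mass held by the top few components rises when blur strips out fine detail. The box blur and the top-3 pooling below are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

def svd_sparsity(img):
    """Energy concentration of the singular-value spectrum: blur removes
    high-frequency content, so more of the spectrum collapses into the
    leading components and this fraction grows."""
    s = np.linalg.svd(img, compute_uv=False)
    e = s / s.sum()
    return e[:3].sum()                      # share of the top 3 components

rng = np.random.default_rng(2)
sharp = rng.uniform(size=(32, 32))
# crude circular 3x3 box blur as a stand-in for a blur distortion
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

Features like this, computed over several domains, would then be pooled by a support vector regressor trained against opinion scores.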

11.

The reconstruction mechanisms built by the human auditory system during sound reconstruction are still a matter of debate. The purpose of this study is to propose a mathematical model of sound reconstruction based on the functional architecture of the auditory cortex (A1). The model is inspired by the geometrical modelling of vision, which has undergone great development in the last ten years. There are, however, fundamental dissimilarities, due to the different role played by time and the different group of symmetries. The algorithm transforms the degraded sound into an 'image' in the time–frequency domain via a short-time Fourier transform. This image is then lifted to the Heisenberg group and reconstructed via a Wilson–Cowan integro-differential equation. Preliminary numerical experiments are provided, showing the good reconstruction properties of the algorithm on synthetic sounds concentrated around two frequencies.


12.
This work presents a new algorithm (nonuniform intensity correction; NIC) for the correction of intensity inhomogeneities in T1-weighted magnetic resonance (MR) images. The bias field and a bias-free image are obtained through an iterative process that uses brain tissue segmentation. The algorithm was validated by means of realistic phantom images and a set of 24 real images. The first evaluation phase was based on a public-domain phantom dataset, used previously to assess bias field correction algorithms. NIC performed similarly to previously described methods in removing the bias field from phantom images, without introducing degradation in the absence of intensity inhomogeneity. The real image dataset was used to compare the performance of the new algorithm with that of other widely used methods (N3, SPM'99, and SPM2). This dataset included both low and high bias field images from two different MR scanners of low (0.5 T) and medium (1.5 T) static fields. Using standard quality criteria for determining the goodness of the different methods, NIC achieved the best results in correcting the images of the real MR dataset, enabling its systematic use on images from both low and medium static field MR scanners. A limitation of our method is that it might fail if the bias field is so high that the initial histogram does not show a bimodal distribution for white and gray matter.
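A 1D caricature of multiplicative bias-field correction: fit a smooth (here quadratic) trend to the log-signal and divide it out. NIC itself alternates tissue segmentation with bias estimation; the polynomial fit below is a simplified stand-in for any smooth bias model, and the two-tissue toy signal is an illustrative assumption.

```python
import numpy as np

def remove_bias_1d(signal):
    """Fit a quadratic trend to the log-signal (bias is multiplicative,
    so it is additive in log space) and divide the trend out."""
    x = np.linspace(-1.0, 1.0, len(signal))
    coeffs = np.polyfit(x, np.log(signal), deg=2)
    bias = np.exp(np.polyval(coeffs, x))
    return signal / bias

x = np.linspace(-1.0, 1.0, 100)
tissue = 1.0 + (np.arange(100) % 2)        # two alternating "tissue" intensities
bias_true = np.exp(0.3 + 0.8 * x**2)       # smooth multiplicative bias field
corrected = remove_bias_1d(tissue * bias_true)
```

After correction the two tissue classes are again nearly flat, and their intensity ratio (2:1 here) is restored up to a global scale, which is exactly what makes a bimodal histogram recoverable.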

13.
Supervised learning requires a large amount of labeled data, but the data labeling process can be expensive and time consuming, as it requires the effort of human experts. Co-Training is a semi-supervised learning method that can reduce the amount of required labeled data by exploiting the available unlabeled data to improve classification accuracy. It is assumed that the patterns are represented by two or more redundantly sufficient feature sets (views) and that these views are independent given the class. On the other hand, most real-world pattern recognition tasks involve a large number of categories, which may make the task difficult. The tree-structured approach is an output-space decomposition method in which a complex multi-class problem is decomposed into a set of binary sub-problems. In this paper, we propose two learning architectures that combine the merits of the tree-structured approach and Co-Training. We show that our architectures are especially useful for classification tasks that involve a large number of classes and a small amount of labeled data, where the single-view tree-structured approach does not perform well alone; when combined with Co-Training, it can effectively exploit the independent views and the unlabeled data to improve recognition accuracy.

14.
Quality control (QC) of brain magnetic resonance images (MRI) is an important process requiring a significant amount of manual inspection. Major artifacts, such as severe subject motion, are easy for naïve observers to identify but lack automated identification tools. Clinical trials involving motion-prone neonates typically pool data to obtain sufficient power, and automated quality control protocols are especially important to safeguard data quality. The current study tested an open-source method to detect major artifacts in 2D neonatal MRI via supervised machine learning. A total of 1,020 two-dimensional transverse T2-weighted MRI images of preterm newborns were examined and classified as either QC Pass or QC Fail. Then 70 features across the focus, texture, noise, and natural scene statistics categories were extracted from each image. Several different classifiers were trained and their performance was compared, with the subjective rating as the gold standard. We repeated the rating process to examine the stability of the rating and classification. When tested via 10-fold cross-validation, the random undersampling and AdaBoost ensemble (RUSBoost) method achieved the best overall performance for QC Fail images, with an 85% positive predictive value and 75% sensitivity. Similar classification performance was observed in the analyses of the repeated subjective rating. These results serve as a proof of concept for predicting images that fail quality control using no-reference objective image features. We also highlight the importance of evaluating results beyond mere accuracy as a performance measure for machine learning in imbalanced group settings, due to the larger proportion of QC Pass images.

15.
Digital reconstructions of neuronal morphology are used to study neuron function, development, and responses to various conditions. Although many measures exist to analyze differences between neurons, none is particularly suitable for comparing the same arborizing structure over time (morphological change) or reconstructed by different people and/or software (morphological error). The metric introduced for the DIADEM (DIgital reconstruction of Axonal and DEndritic Morphology) Challenge quantifies the similarity between two reconstructions of the same neuron by matching the locations of bifurcations and terminations, as well as their topology, between the two reconstructed arbors. The DIADEM metric was specifically designed to capture the most critical aspects in automating neuronal reconstructions and can function in feedback loops during algorithm development. During the Challenge, the metric scored the automated reconstructions of the best-performing algorithms against manually traced gold standards over a representative data set collection. The metric was compared with direct quality assessments by neuronal reconstruction experts and with clocked human tracing time saved by automation. The results indicate that relevant morphological features were properly quantified in spite of subjectivity in the underlying image data and varying research goals. The DIADEM metric is freely released open source (http://diademchallenge.org) as a flexible instrument to measure morphological error or change in high-throughput reconstruction projects.

16.
BACKGROUND: Super-resolution reconstruction has been studied and applied in many fields, such as medicine, the military, and video. OBJECTIVE: To use an adaptive regularized super-resolution reconstruction algorithm to reconstruct high-resolution, high-SNR MR images from images with subpixel displacements acquired in a low-gradient field. METHODS: Least squares was used as the cost function and its derivative taken to obtain the iteration formula; the regularization parameter and step size were adapted during the iteration. RESULTS AND CONCLUSION: The new regularization parameter makes the cost function convex over its domain, and prior information is incorporated into the regularization parameter to enhance the high-frequency content of the image. Low-resolution phantom images and the reconstructed MR images are provided.
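A minimal regularized least-squares super-resolution sketch: each low-resolution observation is modelled as the high-resolution signal shifted by a subpixel amount and then downsampled, and gradient descent minimizes the penalized cost ||D S_k x − y_k||² + λ||x||². Unlike the paper, the regularization parameter and step size are held fixed; the 1D circular forward model and all sizes are illustrative assumptions.

```python
import numpy as np

def sr_reconstruct(obs, shifts, factor, n_hr, lam=1e-3, lr=0.5, n_iter=500):
    """Gradient descent on the Tikhonov-regularized least-squares cost.
    Forward model per observation: circular shift by s, then keep every
    `factor`-th sample. The adjoint upsamples with zeros and shifts back."""
    x = np.zeros(n_hr)
    for _ in range(n_iter):
        g = lam * x                               # regularization gradient
        for y, s in zip(obs, shifts):
            r = np.roll(x, -s)[::factor] - y      # forward-model residual
            up = np.zeros(n_hr)
            up[::factor] = r                      # adjoint of downsampling
            g += np.roll(up, s)                   # adjoint of the shift
        x -= lr * g / len(obs)
    return x

# ground-truth HR signal and two subpixel-shifted LR observations
hr = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
obs = [np.roll(hr, -s)[::2] for s in (0, 1)]
x_hat = sr_reconstruct(obs, (0, 1), factor=2, n_hr=16)
```

With these two complementary shifts the observations jointly cover every HR sample, so the noise-free reconstruction essentially recovers `hr` (up to the small λ shrinkage).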

17.
Qun Song, Nikola Kasabov. Neural Networks, 2006, 19(10):1591–1596
This paper introduces a novel transductive neuro-fuzzy inference model with weighted data normalization (TWNFI). In transductive systems, a local model is developed for every new input vector, based on a certain number of data points that are selected from the training data set as the closest to this vector. The weighted data normalization (WDN) method optimizes the data normalization ranges of the input variables for the model. A steepest descent algorithm is used for training the TWNFI models. TWNFI is compared with several other widely used connectionist systems on two case study problems: Mackey–Glass time series prediction and a real medical decision support problem of estimating the level of renal function of a patient. The TWNFI method not only results in a "personalized" model with better prediction accuracy for a single new sample, but also identifies the most significant input variables (features) for the model, which may be used for personalized medicine.

18.
Neural Networks, 1999, 12(9):1285–1299
The backpropagation (BP) algorithm for training feedforward neural networks has proven robust even for difficult problems. However, its high performance comes at the expense of a long training time to adjust the network parameters, which can be discouraging in many real-world applications. Even on relatively simple problems, standard BP often requires a lengthy training process in which the complete set of training examples is processed hundreds or thousands of times. In this paper, a universal acceleration technique for the BP algorithm based on extrapolation of each individual interconnection weight is presented. This extrapolation procedure is easy to implement and is activated only a few times between iterations of the conventional BP algorithm. Unlike earlier acceleration procedures, it minimally alters the computational structure of the BP algorithm. The viability of this new approach is demonstrated on three examples. The results suggest that it leads to significant savings in the computation time of the standard BP algorithm. Moreover, the solution computed by the proposed approach is always located in close proximity to the one obtained by the conventional BP procedure. Hence, the proposed method provides a real acceleration of the BP algorithm without degrading the usefulness of its solutions. The performance of the new method is also compared with that of the conjugate gradient algorithm, an improved and faster variant of BP.
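Per-weight extrapolation can be sketched as a geometric (Aitken-style) jump: if successive changes of a weight shrink by a roughly constant ratio r, the predicted limit w + d·r/(1−r) can be reached in one step. The guard condition and the scalar setting below are illustrative; the paper's exact extrapolation formula may differ.

```python
def extrapolate(w_prev, w_curr, w_next):
    """Geometric extrapolation of one weight from three BP snapshots.
    If deltas shrink with ratio r = d2/d1 (|r| < 1), the remaining
    changes form a geometric series with sum d2 * r / (1 - r)."""
    d1, d2 = w_curr - w_prev, w_next - w_curr
    if abs(d1) < 1e-12 or abs(d2) >= abs(d1):
        return w_next                 # deltas not shrinking: don't jump
    r = d2 / d1
    return w_next + d2 * r / (1 - r)

# weight trajectory converging geometrically to 3.0 with ratio 0.8
traj = [3.0 - 2.0 * 0.8**k for k in range(3)]
w_star = extrapolate(*traj)           # jumps (almost exactly) to 3.0
```

Applied independently to every interconnection weight every few BP epochs, a jump like this replaces the many remaining small geometric steps, which is where the reported time savings come from.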

19.
The comprehensive characterization of neuronal morphology requires tracing extensive axonal and dendritic arbors imaged with light microscopy into digital reconstructions. Considerable effort is ongoing to automate this highly labor-intensive and currently rate-determining process. Experimental data in the form of manually traced digital reconstructions and corresponding image stacks play a vital role in developing increasingly powerful reconstruction algorithms. The DIADEM challenge (short for DIgital reconstruction of Axonal and DEndritic Morphology) successfully stimulated progress in this area by utilizing six data set collections from different animal species, brain regions, neuron types, and visualization methods. The original research projects that provided these data are representative of the diverse scientific questions addressed in this field. At the same time, these data provide a benchmark for the demands automated software must meet to achieve the quality of manual reconstructions while minimizing human involvement. The DIADEM data underwent extensive curation, including quality control, metadata annotation, and format standardization, to focus the challenge on the most substantial technical obstacles. This data set package is now freely released (http://diademchallenge.org) to train, test, and aid the development of automated reconstruction algorithms.

20.
Recently, there has been considerable interest, especially for in utero imaging, in the detection of functional connectivity in subjects whose motion cannot be controlled while in the MRI scanner. These cases require two advances over current studies: (1) multiecho acquisitions and (2) post-processing and reconstruction that can deal with significant between-slice motion during multislice protocols, taking into account the temporal correlations introduced by the spatial scattering of slices. This article focuses on the estimation of a spatially and temporally regular time series from motion-scattered slices of multiecho fMRI datasets using a full four-dimensional (4D) iterative image reconstruction framework. The framework, which includes quantitative MRI methods for artifact correction, is evaluated using adult studies with and without motion, both to refine parameter settings and to evaluate the analysis pipeline. ICA analysis is then applied to the 4D image reconstruction of both adult and in utero fetal studies in which resting-state activity is perturbed by motion. Results indicate quantitative improvements in reconstruction quality when compared to the conventional 3D reconstruction approach (using simulated adult data) and demonstrate the ability to detect the default mode network in moving adults and fetuses with single-subject and group analysis. Hum Brain Mapp 37:4158–4178, 2016. © 2016 Wiley Periodicals, Inc.
