Similar Articles

20 similar articles found (search time: 15 ms).
1.
Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
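As a sketch of the core idea, blind deconvolution can be approached by alternating multiplicative Richardson–Lucy updates of the image and the PSF. The snippet below is a minimal single-channel Python illustration, not the authors' multichannel pipeline (which additionally performs registration, illumination compensation and change segmentation); initialisation and iteration counts are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def _center_crop(a, shape):
    """Crop the central `shape` window out of array `a`."""
    starts = [(s - t) // 2 for s, t in zip(a.shape, shape)]
    return a[tuple(slice(s, s + t) for s, t in zip(starts, shape))]

def blind_richardson_lucy(g, psf_size=9, n_outer=10, n_inner=5, eps=1e-12):
    """Alternating Richardson-Lucy updates for image f and PSF h, g = h * f."""
    f = np.full(g.shape, g.mean())                        # flat image start
    h = np.full((psf_size, psf_size), 1.0 / psf_size**2)  # flat PSF start
    for _ in range(n_outer):
        for _ in range(n_inner):                          # PSF step, f fixed
            ratio = g / (fftconvolve(f, h, mode="same") + eps)
            corr = fftconvolve(ratio, f[::-1, ::-1], mode="same")
            h *= _center_crop(corr, h.shape)
            h = np.clip(h, 0.0, None)
            h /= h.sum() + eps                            # keep PSF normalised
        for _ in range(n_inner):                          # image step, h fixed
            ratio = g / (fftconvolve(f, h, mode="same") + eps)
            f *= fftconvolve(ratio, h[::-1, ::-1], mode="same")
            f = np.clip(f, 0.0, None)
    return f, h
```

In the multichannel setting, the pair of differently blurred images of the same retina constrains the PSF estimate far better than a single frame can.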

2.
Medical ultrasound imaging is limited by its imaging mechanism, which results in low system resolution. To obtain diagnostically important information, image restoration is usually required. In practice, the degradation process of a medical ultrasound imaging system is difficult to describe precisely, so when the point-spread function is unknown or little prior knowledge is available, blind restoration algorithms are used to estimate the original image from the degraded one. This paper reviews blind-deconvolution-based restoration algorithms for medical ultrasound images, classifying them by identification approach into prior identification and joint identification methods, analysing the basic theory and refinements of each class of blind restoration method, and finally suggesting directions for the further development of blind restoration algorithms for medical ultrasound images.
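All of the surveyed methods start from the standard linear degradation model (generic notation, not specific to any single paper):

```latex
g(x, y) = (h * f)(x, y) + n(x, y)
        = \iint h(x - s,\, y - t)\, f(s, t)\,\mathrm{d}s\,\mathrm{d}t + n(x, y)
```

where g is the observed ultrasound image, f the underlying tissue reflectivity, h the unknown point-spread function and n noise. Prior identification estimates h first and then applies non-blind deconvolution; joint identification estimates h and f simultaneously.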

3.
Several techniques for performing digital image restoration are reviewed and the problems associated with evaluating image processing are discussed. An application of constrained deconvolution to images of the liver produced by single-photon emission computed tomography is presented. Specific evaluation criteria are suggested and, based on these, the choice of conditions best suited for processing liver images is proposed. Typically, cold tumor contrast can be improved by a factor of greater than 2 whilst image mottle increases negligibly.
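The abstract does not spell out the constrained form used; the textbook constrained least-squares restoration filter, applied in the Fourier domain, is the usual starting point:

```latex
\hat{F}(u, v) = \frac{H^{*}(u, v)}{\lvert H(u, v)\rvert^{2} + \gamma\,\lvert C(u, v)\rvert^{2}}\; G(u, v)
```

where C(u, v) is the transform of a smoothness constraint (commonly the Laplacian) and the multiplier γ is adjusted until the residual matches the estimated noise power.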

4.
5.
The multiwire camera (MWC) produces high speed, quantitative autoradiography of radiolabelled substances in two-dimensional systems. While greatly superior to film-based systems in respect of speed and quantitative accuracy, the MWC has significantly poorer spatial resolution (particularly for high energy beta-emitting radiolabels) and the performance is ultimately limited by the noise induced in the images by Poisson statistics and counter background. Processing the MWC images with a maximum entropy algorithm significantly improves the performance of the system in these respects. The algorithm has been tested using one-dimensional data taken from images of known tritium, 14C and 125I distributions. Processed images are visually more acceptable with improved quantitative accuracy and spatial resolution. Quantitative accuracy, calculated as the root mean square deviation between an image and the known sample activities, is 10-40% lower for processed images compared with original camera images. Spatial resolution, calculated from slopes in the images representing edges of activity in the sources, is improved by 20-40% for the processed images. The algorithm is used to improve a two-dimensional image from a biological study. The source distribution consisted of a set of circular dots of varying activity. The dots with lowest activity were barely discernible in the raw MWC image but are clearly resolved after processing. The algorithm used is simple and effective and executes acceptably quickly on a personal computer. It should prove useful in any context where the imaging performance of a system is limited by Poisson statistics.
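The two evaluation figures quoted above are easy to reproduce for one's own data; a minimal Python sketch (array names are hypothetical):

```python
import numpy as np

def rms_deviation(image, known_activity):
    """Quantitative accuracy: RMS deviation between the image and the
    known sample activities, as defined in the abstract."""
    diff = np.asarray(image, float) - np.asarray(known_activity, float)
    return np.sqrt(np.mean(diff ** 2))

def edge_slope(profile):
    """Spatial-resolution proxy: the steepest slope across a profile that
    crosses an edge of activity in the source."""
    return np.max(np.abs(np.gradient(np.asarray(profile, float))))
```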

6.
Functional characterizations of thousands of gene products from many species are described in the published literature. These discussions are extremely valuable for characterizing the functions not only of these gene products, but also of their homologs in other organisms. The Gene Ontology (GO) is an effort to create a controlled terminology for labeling gene functions in a more precise, reliable, computer-readable manner. Currently, the best annotations of gene function with the GO are performed by highly trained biologists who read the literature and select appropriate codes. In this study, we explored the possibility that statistical natural language processing techniques can be used to assign GO codes. We compared three document classification methods (maximum entropy modeling, naïve Bayes classification, and nearest-neighbor classification) to the problem of associating a set of GO codes (for biological process) to literature abstracts and thus to the genes associated with the abstracts. We showed that maximum entropy modeling outperforms the other methods and achieves an accuracy of 72% when ascertaining the function discussed within an abstract. The maximum entropy method provides confidence measures that correlate well with performance. We conclude that statistical methods may be used to assign GO codes and may be useful for the difficult task of reassignment as terminology standards evolve over time.
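Maximum entropy classification is equivalent to multinomial logistic regression, so the experiment can be approximated with standard tools; a sketch with scikit-learn (the training abstracts and GO codes below are placeholders, not the study's corpus):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: abstracts paired with GO biological-process codes.
abstracts = ["the kinase phosphorylates the receptor ...",
             "transcription of the operon is repressed ..."]
go_codes = ["GO:0016310", "GO:0006351"]

# Multinomial logistic regression is the standard maximum entropy classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(abstracts, go_codes)

# predict_proba plays the role of the confidence measure discussed above.
print(clf.predict_proba(["the polymerase binds the promoter ..."]))
```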

7.
When carrying out medical imaging based on detection of isotopic radiation levels of internal organs such as the lungs or heart, distortions and blur arise as a result of organ motion during breathing and blood supply. Consequently, image quality declines despite the use of expensive high-resolution devices, and such devices are not fully exploited. A method with which to overcome the problem is image restoration. Previously, we suggested and developed a method for calculating numerically the optical transfer function (OTF) for any type of image motion. The purpose of this research is restoration of original isotope images (of the lungs) by restoration methods that depend on the OTF of the real-time relative motion between the object and the imaging system. This research uses different algorithms for the restoration of an image, according to the OTF of the lung motion, which is in several directions simultaneously. One way of handling the three-dimensional movement is to decompose the image into several portions, to restore each portion according to its motion characteristics, and then to combine all the image portions back into a single image. An additional complication is that the image was recorded at different angles. The application of this research is in medical systems requiring high resolution imaging. The main advantage of this approach is its low cost versus conventional approaches.
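For the special case of uniform linear motion the OTF has a closed form, which makes the restoration step easy to sketch in Python (the paper itself computes the OTF numerically for arbitrary motion, which this toy version does not attempt):

```python
import numpy as np

def motion_otf(shape, extent_px, axis=1):
    """OTF of uniform linear motion over `extent_px` pixels, centred on the
    origin (closed-form special case of the numerically computed OTF)."""
    freqs = np.fft.fftfreq(shape[axis])
    otf_1d = np.sinc(freqs * extent_px)   # np.sinc(x) = sin(pi x)/(pi x)
    if axis == 1:
        return np.tile(otf_1d, (shape[0], 1))
    return np.tile(otf_1d[:, None], (1, shape[1]))

def wiener_restore(blurred, otf, nsr=1e-2):
    """Wiener restoration H*/(|H|^2 + NSR) applied in the Fourier domain."""
    F_hat = np.conj(otf) / (np.abs(otf) ** 2 + nsr) * np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(F_hat))
```

Decomposing the image into portions, as described above, would then amount to applying `wiener_restore` per portion with a different `otf` for each.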

8.
Before deconvolution can be used in renography, it is necessary to decide whether the renal function is sufficiently good to allow it. To see if this decision can be circumvented, an iterative constrained least-squares restoration (CLSR) method was implemented in which the point of termination of the iteration occurs when a residual vector has a value less than an estimate of the noise in the original renogram curve. The technique was compared with the matrix algorithm and with direct FFT division. The comparison was achieved by deconvolving simulated renogram data with differing transit time spectra and statistics. As expected, the FFT technique produced results of little value whereas the CLSR and matrix methods produced values of mean transit time (MTT) that differed slightly from the expected results. Analysis indicated that the matrix approach was superior when the percentage noise component was less than 6% and vice versa. No technique produced useful transit time spectra. As the CLSR technique produced better results than the matrix method in simulations with relatively long MTTs and high noise, it seems reasonable to suggest that it might be used for renogram deconvolution without the need for previous inspection of the curves.
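The stopping rule described above is the discrepancy principle: iterate until the residual falls below the estimated noise. A generic projected-Landweber sketch in Python (not the paper's implementation; here `g` would be the renogram curve, `h` the input curve and `f` the retention function):

```python
import numpy as np

def deconvolve_discrepancy(g, h, noise_norm, max_iter=5000):
    """Iterative constrained deconvolution of g = h (*) f, stopped when
    ||g - H f|| drops below the noise estimate for the measured curve."""
    n = len(g)
    # Lower-triangular Toeplitz (discrete convolution) matrix of h.
    H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    alpha = 1.0 / np.linalg.norm(H, 2) ** 2      # step size for convergence
    f = np.zeros(n)
    for _ in range(max_iter):
        r = g - H @ f
        if np.linalg.norm(r) <= noise_norm:      # discrepancy principle
            break
        f = np.clip(f + alpha * (H.T @ r), 0.0, None)   # non-negativity
    return f
```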

9.
X-ray images acquired with an image intensifier detector system suffer from veiling glare, a low-frequency degradation described by a point spread function (PSF). The PSF has two experimentally determined parameters unique to a given image intensifier. This information is utilized to deconvolve the degradation from digitally acquired images. Results demonstrate a significant increase in contrast ratio of high-contrast objects after deconvolution and image restoration.
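A minimal Python sketch of the idea: model the PSF as a sharp delta plus a broad low-frequency halo controlled by two parameters, then divide it out in the Fourier domain. The exponential halo shape is an assumption for illustration; the paper's exact two-parameter form is not given in the abstract.

```python
import numpy as np

def veiling_glare_psf(shape, rho, spread):
    """Delta plus exponential halo; `rho` is the glare fraction and
    `spread` the halo width in pixels (illustrative parameterisation)."""
    ny, nx = shape
    y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(x, y)
    halo = np.exp(-r / spread)
    halo /= halo.sum()
    psf = (1.0 - rho) * (r == 0) + rho * halo
    return np.fft.ifftshift(psf)          # move the peak to index (0, 0)

def deglare(image, psf):
    """Deconvolve the glare by direct Fourier division.
    Direct division is tolerable here because the broad positive halo has a
    near non-negative spectrum, keeping the glare OTF well away from zero."""
    H = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(np.fft.fft2(image) / H))
```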

10.
Changes in fetal movements indicate biophysical conditions and functional development. The precise evaluation of fetal movements in clinical medicine requires the development of a continuous automated monitoring technique. A basic study of the measurement of fetal movements was carried out by modifying the Doppler ultrasound module of a cardiotocograph to produce low-frequency Doppler signals and five simultaneous outputs at various depths. These outputs represent displacement inside tissue at the various depths. Signal processing was executed on a 32-bit computer with a high-accuracy displacement estimation technique using the arctangent method. Results showed successful tracking of minute movements, such as fetal breathing movements (FBM), while rejecting other movements derived from maternal breathing, etc. Using spectral analysis by the maximum entropy method (MEM), fetal movements were classified in three groups (FBM, fetal gross movements (FGM) and fetal heart movements (FHM)), based on the character of their spectral peak frequencies. The order of movement recognition was first FGM, then FBM and lastly FHM. FBM were more successfully recognised by MEM than by conventional B-mode observation methods. Small body movements were difficult to recognise as FGM by MEM in some cases. Although further studies are required for clinical application, it appears that automated assessments of fetal movements should be possible with this technique.
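The arctangent displacement estimator mentioned above can be sketched compactly: the tissue displacement is proportional to the unwrapped phase of the quadrature Doppler signal. Signal names and the millimetre scaling are illustrative:

```python
import numpy as np

def displacement_from_iq(i_sig, q_sig, wavelength_mm):
    """Arctangent (phase-demodulation) displacement estimate:
    d(t) = lambda / (4 * pi) * unwrapped phase of the echo."""
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    return wavelength_mm / (4.0 * np.pi) * phase
```

Classification into FBM, FGM and FHM would then proceed from the spectral peaks of such displacement traces, estimated in the paper by the maximum entropy method.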

11.
Named entity recognition is an extremely important and fundamental task of biomedical text mining. Biomedical named entities include mentions of proteins, genes, DNA, RNA, etc., which often have complex structures, but it is challenging to identify and classify such entities. Machine learning methods like CRF, MEMM and SVM have been widely used for learning to recognize such entities from an annotated corpus. The identification of appropriate feature templates and the selection of the important feature values play a very important role in the success of these methods. In this paper, we provide a study on word clustering and selection based feature reduction approaches for named entity recognition using a maximum entropy classifier. The identification and selection of features are largely done automatically without using domain knowledge. The performance of the system is found to be superior to existing systems which do not use domain knowledge.
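A minimal sketch of the word-clustering step in Python: replace each word feature by its cluster id, shrinking the feature space the maximum entropy classifier has to handle. The embedding source is an assumption for illustration; everything here is generic rather than the authors' setup.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_vocabulary(word_vectors, n_clusters=50):
    """Map each word to a cluster id usable as a reduced feature.
    `word_vectors` is a dict word -> embedding vector (assumed given)."""
    words = sorted(word_vectors)
    X = np.vstack([word_vectors[w] for w in words])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    return dict(zip(words, labels))
```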

12.
OBJECTIVES: Mammographic density is a useful biomarker of breast cancer risk. Computer-based methods can provide continuous data suitable for analysis. This study aimed to compare a semi-automated computer-assisted method (Cumulus) and a fully automated volumetric computer method (standard mammogram form (SMF)) for assessing mammographic density using data from a previously conducted randomised placebo-controlled trial of an isoflavone supplement. METHODS: Mammograms were obtained from participants in the intervention study. A total of 177 women completed the study. Baseline and follow-up mammograms were digitised and density was estimated using Cumulus (read by two readers) and SMF. Left-right correlation, changes in density over time, and difference between intervention and control groups were evaluated. Changes of density over time, and changes between intervention group and control group were examined using paired t-test and Student's t-test, respectively. RESULTS: Inter-reader correlation coefficient by Cumulus was 0.90 for dense area, and 0.86 for percentage density. Left-right correlation of percent density was lower in SMF than in Cumulus. Among all women, percentage density by Cumulus decreased significantly over time, but no change was seen for SMF percentage density. The intervention group showed marginally significant greater reduction of percent density by Cumulus compared to controls (p=0.04), but the difference became weak after adjustment for baseline percent density (p=0.06). No other measurement demonstrated significant difference between intervention and control groups. CONCLUSIONS: This comparison suggests that slightly different conclusions could be drawn from different methods used to assess breast density. The development of a more robust fully automated method is awaited.
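The two significance tests named in the methods are one-liners in SciPy; a sketch with randomly generated placeholder data (the real study used the digitised mammogram measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder percent-density values, one per woman (n = 177 in the study).
baseline = rng.uniform(10, 60, 177)
followup = baseline + rng.normal(-1.0, 3.0, 177)

# Change over time within all women: paired t-test.
t_time, p_time = stats.ttest_rel(baseline, followup)

# Intervention vs control difference in change: Student's (two-sample) t-test.
change_intervention = rng.normal(-1.5, 3.0, 90)
change_control = rng.normal(-0.5, 3.0, 87)
t_group, p_group = stats.ttest_ind(change_intervention, change_control)
print(p_time, p_group)
```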

13.

Objectives

Mammographic density is a useful biomarker of breast cancer risk. Computer-based methods can provide continuous data suitable for analysis. This study aimed to compare a semi-automated computer-assisted method (Cumulus) and a fully automated volumetric computer method (standard mammogram form (SMF)) for assessing mammographic density using data from a previously conducted randomised placebo-controlled trial of an isoflavone supplement.

Methods

Mammograms were obtained from participants in the intervention study. A total of 177 women completed the study. Baseline and follow-up mammograms were digitised and density was estimated using Cumulus (read by two readers) and SMF. Left–right correlation, changes in density over time, and difference between intervention and control groups were evaluated. Changes of density over time, and changes between intervention group and control group were examined using paired t-test and Student's t-test, respectively.

Results

Inter-reader correlation coefficient by Cumulus was 0.90 for dense area, and 0.86 for percentage density. Left–right correlation of percent density was lower in SMF than in Cumulus. Among all women, percentage density by Cumulus decreased significantly over time, but no change was seen for SMF percentage density. The intervention group showed marginally significant greater reduction of percent density by Cumulus compared to controls (p = 0.04), but the difference became weak after adjustment for baseline percent density (p = 0.06). No other measurement demonstrated significant difference between intervention and control groups.

Conclusions

This comparison suggests that slightly different conclusions could be drawn from different methods used to assess breast density. The development of a more robust fully automated method is awaited.

14.
Overcoming local extrema of the objective function in mutual-information-based image registration (Cited by 2: 0 self-citations, 2 by others)
Image registration is widely used in medical image processing, and global registration based on mutual information offers a high degree of automation and high registration accuracy. However, when the translation between the images is an integer multiple of the pixel size, interpolation introduces artifacts that create local extrema in the objective function, so the optimisation search sometimes terminates at a local extremum and returns incorrect registration parameters. This paper proposes a method to overcome this problem; experiments show that it further smooths the objective function, safeguards the accuracy of the optimisation search, and raises the success rate of mutual-information-based registration.
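For reference, the mutual information objective at the heart of the method can be computed from a joint grey-level histogram; a plain-Python sketch (the paper's contribution, the smoothing that removes the interpolation-induced local extrema, is not reproduced here):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information of two images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

Plotting this measure against sub-pixel translations makes the grid-aligned local extrema described above directly visible.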

15.
Accurate arterial centerline extraction is essential for comprehensive visualization in CT Angiography. Time consuming manual tracking is needed when automated methods fail to track centerlines through severely diseased and occluded vessels. A previously described algorithm, Partial Vector Space Projection (PVSP), which uses vessel shape information from a database to bridge occlusions of the femoropopliteal artery, has a limited accuracy in long (>100 mm) occlusions. In this article we introduce a new algorithm, Intermediate Point Detection (IPD), which uses calcifications in the occluded artery to provide additional information about the location of the centerline to facilitate improvement in PVSP performance. It identifies calcified plaque in image space to find the most useful point within the occlusion to improve the estimate from PVSP. In this algorithm candidates for calcified plaque are automatically identified on axial CT slices in a restricted region around the estimate obtained from PVSP. A modified Canny edge detector identifies the edge of the calcified plaque and a convex polygon fit is used to find the edge of the calcification bordering the wall of the vessel. The Hough transform for circles estimates the center of the vessel on the slice, which serves as a candidate intermediate point. Each candidate is characterized by two scores based on radius and relative position within the occluded segment, and a polynomial function is constructed to define a net score representing the potential benefit of using this candidate for improving the centerline. We tested our approach in 44 femoropopliteal artery occlusions of lengths up to 398 mm in 30 patients with peripheral arterial occlusive disease. Centerlines were tracked manually by four experts, twice each, with their mean serving as the reference standard. All occlusions were first interpolated with PVSP using a database of femoropopliteal arterial shapes obtained from a total of 60 subjects. Occlusions longer than 80 mm (N = 20) were then processed with the IPD algorithm, provided calcifications were found (N = 14). We used the maximum point-wise distance of an interpolated curve from the reference standard as our error metric. The IPD algorithm significantly reduced the average error of the initial PVSP from 2.76 to 1.86 mm (p < 0.01). The error was less than the clinically desirable 3 mm (smallest radius of the femoropopliteal artery) in 13 of 14 occlusions. The IPD algorithm achieved results within the range of the human readers in 11 of 14 cases. We conclude that the additional use of sparse but specific image space information, such as calcified atherosclerotic plaque, can be used to substantially improve the performance of a previously described knowledge-based method to restore the centerlines of femoropopliteal arterial occlusions.
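The slice-level geometry step (Canny edges on the calcified plaque, then a circular Hough transform to locate the vessel centre) can be sketched with scikit-image; the convex-polygon fit and candidate scoring from the paper are deliberately omitted:

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def vessel_center_from_calcification(slice_2d, radii_px):
    """Estimate the vessel centre on one axial CT slice from plaque edges.
    `slice_2d` is a 2-D float array, `radii_px` candidate radii in pixels."""
    edges = canny(slice_2d, sigma=2.0)                 # plaque edge map
    radii = np.asarray(radii_px)
    hspaces = hough_circle(edges, radii)               # one space per radius
    _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]                          # candidate point
```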

16.
The effect of the intravascular background in the renogram on the calculated renal retention function is known and can be removed. However, the effect of the extravascular background (EVB) has not been thoroughly investigated using patient data. By varying the size of the region of interest containing a single kidney and by deconvolving the 131I-hippuran and 99Tcm-DTPA renograms so generated, the following has been found: (a) the effect of EVB on the mean transit time (MTT) is negligible and EVB subtraction is not necessary, (b) the EVB overestimates the lower relative kidney function (RKF) and underestimates the higher RKF, so that EVB subtraction should be performed if the RKFs are asymmetric. A new method is described in which the correction for EVB is performed following deconvolution. If the RKFs are greater than about 30%, the correction can be performed using a regression equation between the RKFs corrected for EVB and those that are not corrected. When the RKFs are asymmetric to a greater extent, the correction should be performed for each study separately. The proposed method includes a small systematic error due to the inherent limitations of nuclear medicine equipment.

17.

Background

We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs.

Results

ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series.

Conclusion

With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
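The two-step structure of ForWaRD (regularised Fourier inversion, then wavelet shrinkage of the leaked noise) can be sketched in a few lines of Python with PyWavelets; the wavelet, level and threshold choices below are generic defaults, not the paper's settings:

```python
import numpy as np
import pywt

def forward_deconvolve(y, h, reg=1e-2, wavelet="db4", level=4):
    """Fourier-wavelet regularised deconvolution (ForWaRD) sketch:
    deconvolve the known event/stimulus regressor h out of the 1-D voxel
    time series y, leaving an HRF estimate."""
    n = len(y)
    H = np.fft.fft(h, n)
    # Step 1: Wiener-type regularised inversion in the Fourier domain.
    x_noisy = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H)
                                  / (np.abs(H) ** 2 + reg)))
    # Step 2: soft-threshold the wavelet detail coefficients.
    coeffs = pywt.wavedec(x_noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(n))             # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:n]
```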

18.
Prediction of a glucose appearance function from foods using deconvolution (Cited by 1: 0 self-citations, 1 by others)
The glycaemic response of an insulin-treated diabetic patient goes through many transitory phases, leading to a steady state glycaemic profile following a change in either insulin regimen or diet. Most models attempting to model the glucose and insulin relationship try to model the effect of oral or injected glucose rather than that from the digestion of food. However, it is clear that a better understanding of the glycaemic response would arise from consideration of intestinal absorption from the gut. It is assumed that this type of absorption can be modelled by a so-called glucose appearance function (systemic appearance of glucose via glucose absorption from the gut) predicting the glucose load from the food. Much research has been carried out in the areas of hepatic balance, insulin absorption and insulin independent/dependent utilization. However, little is known about intestinal absorption patterns or their corresponding glucose appearance profiles. The strategy under investigation herein is to use deconvolution or backward engineering. By starting with specific results, i.e. blood glucose and insulin therapy, it is possible to work backwards to predict the glucose forcing functions responsible for the outcome. Assuming compartmental consistency, this will allow a clearer insight into the true gut absorption process. If successful, the same strategy can be applied to more recent glucose and insulin models to further our understanding of the food to blood glucose problem. This paper investigates the Lehmann-Deutsch modified model of glucose and insulin interaction, created from the model proposed by Berger-Rodbard. The model attempts to simulate the steady state glycaemic and plasma insulin responses, independent of the initial values from which the simulation is started. Glucose enters the model via both intestinal absorption and hepatic glucose production. We considered a 70 kg male insulin-dependent diabetic patient with corresponding hepatic and insulin sensitivity parameters of 0.6 and 0.3 respectively. Net hepatic glucose balance was modelled piecewise by linear and symmetric functions. A first-order Euler method with step size of 15 minutes was employed. For the simulation, only Actrapid and NPH injections were considered. The injection of insulin and the glucose flux to the gut were started simultaneously to avoid any delay associated with gastric emptying. The systemic appearance of glucose was compared from two viewpoints, not only to assess the strategic principle, but also to assess the suitability of the modifications made by Lehmann and Deutsch. The first is a forward prediction using the compartmental structure. This analysis involves the rate of gastric emptying without time delay. The second is a backward prediction from experimentally observed blood glucose profiles. Investigations involved porridge, white rice and banana containing the same carbohydrate content (25 g). Results obtained from the first analysis were dependent on the rate of gastric emptying, especially its ascending and descending branches. Results from the second analysis were dependent on the dose and type of insulin administered. Both predicted profiles showed consistency with physiological reasoning, although it became apparent that such solutions could be unstable. Furthermore, both types of prediction were similar in structure and appearance, especially in simulations for porridge and banana. This emphasized the consistency and suitability of both analyses when investigating the compartmental accuracy and limitations within a model.
The new strategic approach was deemed a success within the model, and the modifications made by Lehmann and Deutsch appropriate. We suggest that a gastric emptying curve with a possible gastric delay is the way forward in regulating the appearance of glucose via gut absorption. The Lehmann-Deutsch gastric curve is described by either a trapezoidal or triangular function dependent on the carbohydrate content of the meal. However, it was clear from the results obtained that carbohydrate content is only one factor in carbohydrate absorption, and further progress must inevitably involve other food characteristics and properties if we are to improve the glucose flux.
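The backward-engineering (deconvolution) step can be sketched as a regularised linear inversion in Python: build the convolution matrix of the model's impulse response and solve a Tikhonov-regularised least-squares problem for the input. This is a generic sketch of the principle, not the Lehmann-Deutsch code; the roughness penalty counters the instability noted above.

```python
import numpy as np

def glucose_appearance(blood_glucose, impulse_response, lam=1e-2):
    """Recover the glucose appearance (input) function f from g = H f by
    solving (H^T H + lam * D^T D) f = H^T g, where D is a second-difference
    roughness penalty."""
    g = np.asarray(blood_glucose, float)
    h = np.asarray(impulse_response, float)
    n = len(g)
    H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                   for j in range(n)] for i in range(n)])
    D = np.diff(np.eye(n), n=2, axis=0)    # second-difference operator
    return np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ g)
```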

19.
MR slice image registration based on an improved maximum mutual information method (Cited by 1: 0 self-citations, 1 by others)
Medical image registration is a key step in medical image processing and analysis, and the first problem that must be solved in medical image fusion. The main goal of this study is to register MR images of Parkinson's disease patients acquired before and after deep brain stimulation surgery. A mutual-distance term is introduced into the mutual information measure to match corresponding slices between the pre- and post-operative MR series; the two matched slice series are then reconstructed into three-dimensional images, and the reconstructed 3D images are finally registered using the Powell optimisation algorithm. Registering the pre- and post-operative 3D MR images allows quantitative analysis of the relative position of the implanted electrode and the preoperative subthalamic nucleus, enabling a scientific assessment of the quality of deep brain stimulation surgery.
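A compact Python sketch of the final stage: Powell's derivative-free optimiser driving a mutual information cost. Only a 2-D translation is shown for brevity; the study optimises a rigid transform on reconstructed 3-D volumes:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def mutual_information(a, b, bins=64):
    """Mutual information from the joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pa @ pb)[nz])))

def register_translation(fixed, moving, x0=(0.0, 0.0)):
    """Maximise MI over a 2-D translation with Powell's method."""
    cost = lambda t: -mutual_information(fixed, nd_shift(moving, t, order=1))
    return minimize(cost, np.asarray(x0), method="Powell").x
```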

20.
A method of deconvolution is illustrated using compartmental models. The approach can be used to determine an arbitrary unknown input function from a measured response and the impulse response of the system. Compartmental models are constructed to specify (a) the function fitting the response data and (b) the impulse response of the system. Simulation of these models is then used to construct the unknown input function.
