Similar Documents
20 similar documents found (search time: 15 ms)
1.
With advances in high-throughput single-nucleotide polymorphism (SNP) genotyping, the amount of genotype data available for genetic studies is steadily increasing, and with it come new abilities to study multigene interactions and to develop higher-dimensional genetic models that more closely represent the polygenic nature of common disease risk. The combined impact of even small amounts of missing data on a multi-SNP analysis may be considerable. In this study, we present a neural network method for imputing missing SNP genotype data. We compared its imputation accuracy with fastPHASE and with an expectation-maximization algorithm implemented in HelixTree. In a simulated data set of 1000 SNPs and 1000 subjects, 1, 5 and 10% of genotypes were randomly masked. Four levels of linkage disequilibrium (LD), R2 < 0.2, R2 < 0.5, R2 < 0.8 and no LD threshold, were examined to evaluate the impact of LD on imputation accuracy. All three methods imputed most missing genotypes accurately (accuracy > 86%). The neural network method correctly predicted 92.0-95.9% of the missing genotypes. In a real data set of 419 subjects and 126 SNPs from chromosome 2, the neural network method achieved the highest imputation accuracies (> 83.1%) at missing rates from 1 to 5%. Using 90 HapMap subjects with 1962 SNPs, fastPHASE had the highest accuracy (approximately 97%), while the other two methods exceeded 95%. These results indicate that the neural network model is an accurate and convenient tool for SNP data recovery, requires minimal parameter tuning, and provides a valuable alternative to the usual complete-case analysis.
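A minimal sketch of the masking-and-scoring setup described above, using a naive per-SNP majority-genotype imputer in place of the paper's neural network; the genotype matrix, masking rate, and imputer are illustrative assumptions, not the study's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative genotype matrix: 1000 subjects x 1000 SNPs, coded 0/1/2 (not real data).
genotypes = rng.integers(0, 3, size=(1000, 1000))

# Randomly mask 5% of entries, mimicking the missing-data simulation.
mask = rng.random(genotypes.shape) < 0.05
observed = genotypes.astype(float)
observed[mask] = np.nan

# Naive baseline imputer: per-SNP most frequent genotype among observed calls
# (stands in for the neural network / fastPHASE / EM imputers compared in the paper).
imputed = observed.copy()
for j in range(observed.shape[1]):
    col = observed[:, j]
    known = col[~np.isnan(col)].astype(int)
    mode = np.bincount(known, minlength=3).argmax()
    imputed[np.isnan(col), j] = mode

# Imputation accuracy is scored only on the masked entries.
accuracy = (imputed[mask] == genotypes[mask]).mean()
print(f"imputation accuracy on masked genotypes: {accuracy:.3f}")
```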

2.
In this paper, we develop an approximate analytical reconstruction algorithm that compensates for uniform attenuation in 2D parallel-beam SPECT with a 180-degree acquisition. The new algorithm takes the form of a direct Fourier reconstruction and is derived from the complex-variable central slice theorem. The image is reconstructed in the following steps: first, the attenuated projection data acquired over 180 degrees are extended to 360 degrees and the value of the uniform attenuator is changed to a negative value; next, the Fourier transform (FT) of the image in polar coordinates is obtained, according to the complex central slice theorem, from the FT of an analytic function interpolated from the extended projection data; finally, the image is obtained by performing a 2D inverse Fourier transform. Computer simulations and comparison studies with a 360-degree full-scan algorithm are provided.

3.
Attenuation measurements for primary x-ray spectra from 25 kVp to 18 MV were made using aluminum filters for all energies except orthovoltage, where copper filters were used. An iterative perturbation method that utilized these measurements was employed to derive the apparent x-ray spectrum. An initial spectrum, or pre-spectrum, was used to start the process. Each energy bin of the pre-spectrum was perturbed positively and negatively, and an attenuation curve was calculated using the perturbed values. The value of x-rays in each energy bin was chosen to minimize the difference between the measured and calculated transmission curves, the goal being the smallest possible difference between the measured transmission curve and the curve calculated from the derived x-ray spectrum. The method was found to yield useful information concerning the lower photon energies and the actual operating potential versus the nominal potential. Mammographic, diagnostic, orthovoltage, and megavoltage x-ray spectra up to a nominal 18 MV were derived using this method. The method was validated against attenuation curves from the published literature and against attenuation curves calculated from published spectra, in each case using the attenuation curves to re-derive the x-ray spectra.
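A simplified sketch of the iterative perturbation loop described above: each energy bin of a pre-spectrum is perturbed up and down, the transmission curve is recomputed, and the perturbation that reduces the misfit to the measured curve is kept. The attenuation coefficients, filter thicknesses, and pre-spectrum here are illustrative assumptions, not the paper's data.

```python
import numpy as np

def transmission(spectrum, mu, thicknesses):
    """Fraction of fluence transmitted through each filter thickness."""
    atten = np.exp(-np.outer(thicknesses, mu))           # (n_thick, n_bins)
    return atten @ spectrum / spectrum.sum()

# Illustrative setup: 20 energy bins, aluminum-like attenuation, 10 filter steps.
n_bins = 20
mu = np.linspace(1.5, 0.2, n_bins)                        # assumed mu values (1/cm)
thicknesses = np.linspace(0.0, 2.0, 10)                   # filter thicknesses (cm)

# A "true" spectrum generates the measured transmission curve for this demo.
true_spectrum = np.exp(-0.5 * ((np.arange(n_bins) - 12) / 4.0) ** 2)
measured = transmission(true_spectrum, mu, thicknesses)

spectrum = np.ones(n_bins)                                # flat pre-spectrum
step = 0.05
for _ in range(200):                                      # iterative perturbation
    for i in range(n_bins):
        best = spectrum[i]
        best_err = np.sum((transmission(spectrum, mu, thicknesses) - measured) ** 2)
        for trial in (spectrum[i] * (1 + step), spectrum[i] * (1 - step)):
            spectrum[i] = trial
            err = np.sum((transmission(spectrum, mu, thicknesses) - measured) ** 2)
            if err < best_err:
                best, best_err = trial, err
        spectrum[i] = best

print("final misfit:", np.sum((transmission(spectrum, mu, thicknesses) - measured) ** 2))
```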

4.
An artificial neural network (ANN) solution is described for the recognition of domains in protein sequences. A query sequence is first compared to a reference database of domain sequences, and the output data, encoded in the form of six parameters, are forwarded to feed-forward artificial neural networks with six input and six hidden units using a sigmoidal transfer function. The recognition is based on the distribution of scores precomputed for the known domain groups in a database-versus-database comparison. Applications to the prediction of function are discussed.
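A minimal numpy sketch of a feed-forward network with six inputs and six sigmoidal hidden units, matching the architecture stated above; the weights, the example input values, and the single output unit are placeholders and assumptions, not the trained network from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Architecture from the abstract: 6 inputs -> 6 sigmoidal hidden units -> output.
W_hidden = rng.normal(scale=0.5, size=(6, 6))   # untrained placeholder weights
b_hidden = np.zeros(6)
w_out = rng.normal(scale=0.5, size=6)
b_out = 0.0

def domain_score(params):
    """Forward pass: six database-comparison parameters -> domain membership score."""
    h = sigmoid(W_hidden @ params + b_hidden)
    return sigmoid(w_out @ h + b_out)

# Example: six encoded comparison parameters for one query-versus-domain match.
example = np.array([0.8, 0.1, 0.3, 0.9, 0.2, 0.5])
print(f"domain membership score: {domain_score(example):.3f}")
```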

5.
Scatter correction in SPECT using non-uniform attenuation data
Quantitative assessment of activity levels with SPECT is difficult because of attenuation and scattering of gamma rays within the object. To study the effect of attenuation and scatter on SPECT quantitation, phantom studies were performed with non-uniform attenuation. Simulated transmission CT data provided information about the distribution of attenuation coefficients within the source. Attenuation correction was performed by an iterative reprojection technique. Scatter correction was done by convolution of the attenuation-corrected image and an appropriate filter. The filter characteristics depended on the attenuation and activity measurement at each pixel. The scatter correction could compensate completely for the 28% scatter component from a line source and the 61% component from a thick, extended source. Accuracy of regional activity ratios and the linearity of the relationship between true radioactivity and the SPECT measurement were both significantly improved by these corrections. The present method is expected to be valuable for the quantitative assessment of regional activity.

6.
For quantitative image reconstruction in positron emission tomography, attenuation correction is mandatory. If no data are available for calculating the attenuation correction factors, one can try to determine them from the emission data alone. However, it is not clear whether the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. We therefore determined the log-likelihood distribution for a thorax phantom as a function of the chosen attenuation and activity pixel values, in order to measure the crosstalk between the two. In addition, an iterative image reconstruction (a one-dimensional Newton-type algorithm with a maximum-likelihood estimator) that simultaneously reconstructs the images of the activity distribution and the attenuation coefficients is used to demonstrate the problems and possibilities of such a reconstruction. As a result, we show that for a change of the log likelihood within the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. We also show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking the attenuation information in the emission data into account improves the performance of image reconstruction with respect to the bias of the activities; however, the reconstruction is still not quantitative.

7.
Different survival-data pre-processing procedures and adaptations of existing machine-learning techniques have been successfully applied to numerous fields in clinical medicine. Zupan et al. (2000) proposed handling censored survival data by assigning distributions of outcomes to shortly observed censored instances. In this paper, we applied their learning technique to two well-known procedures for learning Bayesian networks: a search-and-score hill-climbing algorithm and a constraint-based conditional-independence algorithm. The method was thoroughly tested in a simulation study and on the publicly available clinical dataset GBSG2. We compared it to learning Bayesian networks by treating censored instances as event-free and to Cox regression. The results on model performance suggest that the weighting approach performs best when dealing with intermediate levels of censoring. There is no significant difference between the model structures learnt using the weighting approach and those learnt by treating censored instances as event-free, regardless of censoring.
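A rough sketch of the weighting idea referenced above: each case censored before the prediction horizon is split into an "event" copy and an "event-free" copy, weighted by a Kaplan-Meier estimate of its remaining event probability. The data, horizon, and exact weighting rule are illustrative assumptions rather than the published procedure.

```python
import numpy as np

def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t) from follow-up times and event indicators."""
    s = 1.0
    for u in np.sort(np.unique(times[events == 1])):
        if u > t:
            break
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
    return s

# Illustrative follow-up data (months) and event indicators (1 = event observed).
times  = np.array([5, 8, 12, 20, 24, 30, 36, 40, 50, 60], dtype=float)
events = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
horizon = 36.0   # assumed prediction horizon

weighted_rows = []  # (follow-up time, outcome label, instance weight)
for t_i, e_i in zip(times, events):
    if e_i == 1 or t_i >= horizon:
        # Outcome at the horizon is known: full weight.
        label = int(e_i == 1 and t_i <= horizon)
        weighted_rows.append((t_i, label, 1.0))
    else:
        # Censored before the horizon: split into two weighted copies using the
        # conditional probability of an event given survival to the censoring time.
        p_event = 1.0 - km_survival(times, events, horizon) / km_survival(times, events, t_i)
        weighted_rows.append((t_i, 1, p_event))
        weighted_rows.append((t_i, 0, 1.0 - p_event))

for row in weighted_rows:
    print(row)
```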

8.
9.
Lung sounds convey relevant information about pulmonary disorders, and physicians traditionally evaluate patients with pulmonary conditions by auscultation. This technique has limitations, however: a poorly trained listener may reach a wrong diagnosis, and lung sounds are non-stationary, which complicates analysis, recognition, and discrimination. Developing automatic recognition systems can help to deal with these limitations. In this paper, we compare three machine-learning approaches to lung sound classification. The first two approaches are based on the extraction of sets of handcrafted features fed to three different classifiers (support vector machines, k-nearest neighbor, and Gaussian mixture models), while the third approach is based on the design of convolutional neural networks (CNN). In the first approach, we extracted 12 MFCC coefficients from the audio files and then calculated six statistics of each coefficient; we also experimented with zero-mean, unit-variance normalization to improve accuracy. In the second approach, local binary pattern (LBP) features are extracted from a visual representation of the audio files (spectrograms) and normalized by whitening. The dataset used in this work consists of seven classes (normal, coarse crackle, fine crackle, monophonic wheeze, polyphonic wheeze, squawk, and stridor). We also tested data augmentation techniques on the spectrograms to improve the final accuracy of the CNN. The results show that the CNN outperformed the classifiers based on handcrafted features.
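A minimal sketch of the first feature pipeline: 12 MFCCs per frame, summarized by six frame-level statistics per coefficient and z-score normalized. The synthetic signals, the particular choice of six statistics, and the downstream classifier are assumptions; librosa is used only for the MFCC computation.

```python
import numpy as np
import librosa
from scipy.stats import skew, kurtosis

def mfcc_statistics(y, sr):
    """12 MFCCs per frame -> 6 statistics per coefficient -> 72-dim feature vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12)      # shape (12, n_frames)
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        mfcc.min(axis=1), mfcc.max(axis=1),
        skew(mfcc, axis=1), kurtosis(mfcc, axis=1),
    ])

# Synthetic stand-ins for two lung-sound recordings; real data would be loaded
# with librosa.load("recording.wav", sr=None).
rng = np.random.default_rng(0)
sr = 4000
recordings = [rng.standard_normal(5 * sr), rng.standard_normal(5 * sr)]

# Feature matrix with zero-mean / unit-variance normalization; the resulting
# vectors would feed an SVM, k-NN, or GMM classifier.
X = np.vstack([mfcc_statistics(y, sr) for y in recordings])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
print(X.shape)   # (2, 72)
```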

10.
In this paper, ECG arrhythmia classification using principal component analysis is proposed. Hebbian neural networks are used to compute the principal components of the ECG signal, providing unsupervised feature extraction, dimension reduction, and improved computational efficiency. Results from 14 pathological records obtained from the MIT ECG database demonstrate the capability of this method to differentiate between five different types of arrhythmia despite variations in signal morphology. The average classification sensitivity and positive predictivity were Se = 98.1% and +P = 94.7%, respectively.
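A compact sketch of Hebbian principal component extraction (Sanger's generalized Hebbian algorithm) on windowed signal vectors, standing in for the unsupervised feature-extraction stage described above; the synthetic signal, window length, and learning rate are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quasi-periodic signal standing in for an ECG record.
t = np.arange(20000)
signal = (np.sin(2 * np.pi * t / 180) + 0.3 * np.sin(2 * np.pi * t / 45)
          + 0.05 * rng.standard_normal(t.size))

# Cut the signal into non-overlapping windows; each window is one input vector.
win = 100
X = signal[: (signal.size // win) * win].reshape(-1, win)
X -= X.mean(axis=0)

# Sanger's rule: W <- W + lr * (y x^T - tril(y y^T) W), with y = W x.
n_components, lr = 5, 1e-3
W = rng.normal(scale=0.1, size=(n_components, win))
for epoch in range(20):
    for x in X:
        y = W @ x
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Project windows onto the learned components (unsupervised, reduced features).
features = X @ W.T
print(features.shape)
```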

11.
Objective knowledge of the tissue density distribution in CT/MRI brain datasets can be related to anatomical or neuro-functional regions for assessing pathological conditions characterised by slight differences. Monitoring an illness and its treatment could then be improved by suitable detection of these variations. In this paper, we present an approach for three-dimensional (3D) classification of brain tissue densities based on a hierarchical artificial neural network (ANN) able to classify the single voxels of the examined datasets. The method was tested on case studies selected by an expert neuro-radiologist, covering both normal and pathological conditions. The results were submitted for validation to a group of physicians, who judged the system to be effective in practical applications.

12.
13.
Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the state of the foetus and thus for assuring its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. To circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme to provide highly non-linear, dynamic capabilities to the recovery model. The neural networks are benchmarked against classical adaptive methods such as the least mean squares (LMS) and normalized LMS (NLMS) algorithms on simulated and real registers, and several conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR); in addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithms. From the ANOVA test, we found statistical differences between LMS-based models and neural models in complex situations (high foetal-maternal and foetal-noise SNRs). These conclusions were confirmed by robustness tests on synthetic registers, visual inspection of the recovered signals, and the recognition rates of foetal R-peaks in real situations. The best compromise between model complexity and outcome was provided by the FIR neural network. Both the methodology for selecting a model and the introduction of advanced neural models are the main contributions of this paper.
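A minimal sketch of the adaptive noise cancellation scheme with an LMS-updated FIR filter: the abdominal recording is modelled as the foetal signal plus a filtered version of the maternal reference, and the filter output is subtracted to recover the foetal component. The synthetic signals, filter order, and step size are assumptions; the gamma and FIR neural networks discussed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)

# Synthetic registers: maternal reference, propagated maternal interference,
# and a weaker foetal component buried in the abdominal lead.
maternal_ref = np.sin(2 * np.pi * t / 80)
interference = np.convolve(maternal_ref, [0.7, 0.2, 0.1], mode="same")
foetal = 0.2 * np.sin(2 * np.pi * t / 35)
abdominal = foetal + interference + 0.02 * rng.standard_normal(n)

# LMS adaptive noise cancellation: FIR filter driven by the maternal reference.
order, mu = 8, 0.01
w = np.zeros(order)
recovered = np.zeros(n)
for k in range(order, n):
    x = maternal_ref[k - order:k][::-1]      # reference tap vector
    e = abdominal[k] - w @ x                 # error = estimated foetal signal
    w += 2 * mu * e * x                      # LMS weight update
    recovered[k] = e

# Crude quality check against the known foetal component.
corr = np.corrcoef(recovered[order:], foetal[order:])[0, 1]
print(f"correlation with true foetal signal: {corr:.2f}")
```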

14.
Staging of prostate cancer is a mainstay of treatment decisions and prognostication. In the present study, 50 pT2N0 and 28 pT3N0 prostatic adenocarcinomas were characterized by Gleason grading, comparative genomic hybridization (CGH), and histological texture analysis based on principles of stereology and stochastic geometry. The cases were classified by learning vector quantization and support vector machines, and the quality of classification was tested by cross-validation. Correct prediction of stage from primary tumor data was possible with an accuracy of 74-80% from the different data sets. The accuracy of prediction was similar whether the Gleason score, the stereological data, or a combination of CGH and stereological data was used as input. The results of classification by learning vector quantization were slightly better than those by support vector machines. A method is briefly sketched by which the training of neural networks can be adapted to unequal sample sizes per class. Progression from pT2 to pT3 prostate cancer is correlated with complex changes of the epithelial cells in terms of volume fraction, surface area, and second-order stereological properties. Genetically, this progression is accompanied by a significant global increase in losses and gains of DNA, and specifically by increased numerical aberrations on chromosome arms 1q, 7p, and 8p.
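A short sketch of the classification-with-cross-validation setup: a support vector machine evaluated by k-fold cross-validation on feature vectors, with synthetic features standing in for the Gleason, CGH, and stereological variables; the class separation, feature count, and kernel choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for 50 pT2 and 28 pT3 cases with 10 morphometric/CGH features.
n_pt2, n_pt3, n_features = 50, 28, 10
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_pt2, n_features)),
    rng.normal(0.8, 1.0, size=(n_pt3, n_features)),   # assumed class separation
])
y = np.array([0] * n_pt2 + [1] * n_pt3)               # 0 = pT2, 1 = pT3

# Support vector machine with feature standardization, scored by 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```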

15.
We tested whether the atypical squamous cells of undetermined significance (ASCUS) scores could be reduced in a meaningful way by exploiting the cells selected by the neural networks of the PAPNET system. For this test, 2,000 routine smears were screened once by means of PAPNET and once conventionally in a laboratory in Amsterdam; 168 of these smears were diagnosed as ASCUS. In the second phase of the study, the diagnosis was based solely on the PAPNET images, and cases with immature cells (bare nuclei and cells with very little cytoplasm) in the PAPNET images were classified as ASCUS. Although 75.6% of the cases were revised to negative in this second phase, the cases with positive follow-up all remained classified as ASCUS. The negative predictive value remained at 100%, whereas the positive predictive value increased from 14.3% to 30%. By using the new paradigm (focusing on immature cells selected by the neural networks) for routine primary PAPNET screening in a laboratory in Leiden, the ASCUS scores were reduced from 10% (June 1996) to 1.0% (early 1998), with promising follow-up results for the first half of 1997. Diagn. Cytopathol. 1998;19:361-366. © 1998 Wiley-Liss, Inc.

16.
Localization of focal electrical activity in the brain by dipole source analysis of the electroencephalogram (EEG) is usually performed by iteratively determining the location and orientation of the dipole source until optimal correspondence is reached between the dipole source and the measured potential distribution on the head. In this paper, we investigate the use of feed-forward layered artificial neural networks (ANNs) to replace the iterative localization procedure, in order to decrease the calculation time. The localization accuracy of the ANN approach is studied within spherical and realistic head models. Additionally, we investigate the robustness of both the iterative and the ANN approach by observing the influence of noise in the scalp potentials and of scalp electrode mislocations on the localization error. Finally, after choosing an ANN structure and size that provides a good trade-off between low localization errors and short computation times, we compare the calculation times of the iterative and ANN methods. An average localization error of about 3.5 mm is obtained for both spherical and realistic head models, and the ANN localization approach appears to be robust to noise and electrode mislocations. In comparison with iterative localization, the ANN provides a major speed-up of dipole source localization. We conclude that an artificial neural network is a very suitable alternative to iterative dipole source localization in applications where large numbers of dipole localizations have to be performed, provided that an increase of the localization errors by a few millimetres is acceptable.

17.
Pathological voice quality assessment using artificial neural networks
This paper describes a prototype system for the objective assessment of voice quality in patients recovering from various stages of laryngeal cancer. A large database of male subjects steadily phonating the vowel /i/ was used in the study, and the quality of their voices was independently assessed by a speech and language therapist (SALT) according to a seven-point ranking of subjective voice quality. The system extracts salient short-term and long-term time-domain and frequency-domain parameters from electroglottographic (EGG) impedance signals, and these are used to train and test an artificial neural network (ANN). Multi-layer perceptron (MLP) ANNs were investigated using various combinations of these parameters, and the best results were obtained with a combination of short-term and long-term parameters, for which an accuracy of 92% was achieved. It is envisaged that this system could be used as an assessment tool, providing a valuable aid to the SALT during clinical evaluation of voice quality.

18.
19.
We investigated the use of multifrequency diffuse optical tomography (MF-DOT) data for the reconstruction of optical parameters. The experiments were performed in a 63 mm diameter cylindrical phantom containing a 15 mm diameter cylindrical object. Modulation frequencies ranging from 110 MHz to 280 MHz were used, with the absorption contrast of the object with respect to the phantom varied while the scattering value was kept the same. The diffusion equation was solved using the finite element method, and the sensitivity information from each frequency was combined to form a single Jacobian. The inverse problem was solved iteratively by minimizing the difference between the measurements and the forward model using single- and multiple-modulation-frequency data, with a multiparameter Tikhonov scheme used for regularization. The phantom results show that the peak absorption coefficient in a region of interest was recovered with an error of less than 5% using two-frequency reconstruction for absorption contrasts up to 2.2 times the background, and 10% for absorption contrasts larger than 2.2. With an appropriate selection of frequencies, the use of two-frequency data is sufficient to improve the quantitative accuracy compared with single-frequency reconstruction.
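A toy linearized sketch of the multi-frequency idea: the Jacobians from two modulation frequencies are stacked into a single sensitivity matrix and the absorption update is obtained with Tikhonov regularization. The matrices and data here are random placeholders; the real Jacobians come from a finite element solution of the diffusion equation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_meas, n_vox = 64, 200          # measurements per frequency, image voxels (assumed)

# Placeholder Jacobians for two modulation frequencies (e.g. 110 and 280 MHz).
J_f1 = rng.standard_normal((n_meas, n_vox))
J_f2 = rng.standard_normal((n_meas, n_vox))

# Ground-truth absorption perturbation: a small "inclusion" in the voxel vector.
dmu_true = np.zeros(n_vox)
dmu_true[90:110] = 0.01

# Simulated data differences (measured minus forward model) at each frequency.
dy_f1 = J_f1 @ dmu_true + 1e-3 * rng.standard_normal(n_meas)
dy_f2 = J_f2 @ dmu_true + 1e-3 * rng.standard_normal(n_meas)

# Stack both frequencies into one Jacobian and one data vector.
J = np.vstack([J_f1, J_f2])
dy = np.concatenate([dy_f1, dy_f2])

# One Tikhonov-regularized update: (J^T J + lam I) dmu = J^T dy.
lam = 1.0
dmu = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dy)

peak_error = abs(dmu[90:110].max() - dmu_true.max()) / dmu_true.max()
print(f"relative error in peak absorption update: {peak_error:.2%}")
```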

20.
Beam-hardening-free synthetic images with the absolute CT numbers that radiologists are used to can be constructed from spectral CT data by forming 'dichromatic' images after basis decomposition. The CT numbers are accurate for all tissues, and the method does not require an additional reconstruction. It spares radiologists from having to learn new rules of thumb for absolute CT numbers in various organs and conditions as conventional CT is replaced by spectral CT. Displaying the synthetic Hounsfield-unit images side by side with images reconstructed for optimal detectability in a given task can ease the transition from conventional to spectral CT.
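A simplified sketch of forming a fixed-CT-number image from basis-decomposed data: two basis coefficient maps are combined at two assumed effective energies and blended into a 'dichromatic' image expressed in Hounsfield units. The basis materials, attenuation values, energies, and blend weight are illustrative assumptions, not the published calibration.

```python
import numpy as np

# Basis coefficient maps from a (hypothetical) two-material decomposition:
# mu(E) = a_water * mu_water(E) + a_bone * mu_bone(E).
shape = (64, 64)
a_water = np.ones(shape)
a_bone = np.zeros(shape)
a_bone[24:40, 24:40] = 0.1                       # a bone-like insert

# Assumed linear attenuation of the basis materials at two effective energies (1/cm),
# approximate values around 60 and 80 keV.
mu_water = {"E_low": 0.206, "E_high": 0.184}
mu_bone  = {"E_low": 0.600, "E_high": 0.430}

def hounsfield(mu_map, mu_w):
    """Convert a linear-attenuation map to CT numbers."""
    return 1000.0 * (mu_map - mu_w) / mu_w

# Monoenergetic HU images at the two energies.
hu_low = hounsfield(a_water * mu_water["E_low"] + a_bone * mu_bone["E_low"],
                    mu_water["E_low"])
hu_high = hounsfield(a_water * mu_water["E_high"] + a_bone * mu_bone["E_high"],
                     mu_water["E_high"])

# "Dichromatic" blend chosen to mimic the CT numbers of a conventional scan.
w_low = 0.4                                      # assumed blend weight
hu_dichromatic = w_low * hu_low + (1.0 - w_low) * hu_high

print("background HU:", round(float(hu_dichromatic[0, 0]), 1),
      "| insert HU:", round(float(hu_dichromatic[32, 32]), 1))
```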
