Similar Documents
20 similar documents found (search time: 156 ms)
1.
2.
To achieve an accurate estimation of the instantaneous turbulent velocity fluctuations downstream of prosthetic heart valves in vivo, the variability of the spectral method used to measure the mean frequency shift of the Doppler signal (i.e. the Doppler velocity) should be minimised. This paper investigates the performance of several short-time spectral estimation methods: the short-time Fourier transform, autoregressive modelling based on two different approaches, autoregressive moving-average modelling based on the Steiglitz-McBride method, and Prony's spectral method. A simulated Doppler signal was used to evaluate these methods, with Gaussian noise added to obtain a set of signals with various signal-to-noise ratios. Two parameters were used to evaluate each method in terms of variability and accuracy in matching the theoretical mean instantaneous Doppler frequency variation within the cardiac cycle. Results show that autoregressive modelling outperforms the other spectral techniques investigated for window lengths between 1 and 10 ms. Among the autoregressive algorithms implemented, the maximum entropy method based on block data processing gives the best results at a signal-to-noise ratio of 20 dB, whereas at 10 and 0 dB the Levinson-Durbin algorithm surpasses it. The intrinsic variance of the spectral methods is expected to be an important source of error in the estimation of turbulence intensity; this error ranges from 0.38% to 24% depending on the parameters of the spectral method and the signal-to-noise ratio.
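As an illustration of the Levinson-Durbin algorithm mentioned above, here is a minimal numpy sketch of the recursion; the ideal AR(1) autocorrelation used as input is a textbook example, not the paper's simulated Doppler signal.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Yule-Walker equations for the
    AR coefficients a[1..order] (sign convention: x[n] + sum_k a[k] x[n-k] = e[n])
    from the autocorrelation sequence r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                   # prediction-error power
    for m in range(1, order + 1):
        # reflection coefficient for order m
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        a_prev = a.copy()
        for i in range(1, m + 1):                # order-update of coefficients
            a[i] = a_prev[i] + k * a_prev[m - i]
        err *= 1.0 - k * k
    return a, err

# Autocorrelation of an ideal AR(1) process x[n] = 0.5 x[n-1] + e[n]
r = np.array([1.0, 0.5, 0.25])
a, err = levinson_durbin(r, 2)   # recovers a = [1, -0.5, 0]
```

The estimated coefficients then define the AR power spectrum err / |1 + Σ a[k] e^(-j2πfk)|², from which the mean Doppler frequency of each short window can be computed.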

3.

Aim

Diagnosing hypovolemia is not a trivial task. Hypovolemia has several physical signs, but their specificity and sensitivity are limited, even with sophisticated monitoring techniques. Nevertheless, diagnosing hypovolemia in critically ill patients is crucial to avoid poor outcomes. The aim of this paper is to provide methods for better estimation of the degree of hypovolemia in such patients.

Methods

The so-called hypovolemic index (HVI) is introduced, which grades the degree of hypovolemia with a number in the interval [0, 1]. Four new methods are presented for more precise diagnosis of hypovolemia, all relying on fuzzy logic. In the first method, clinical thresholds are used in the fuzzy rule system. The second method uses an iterative ROC analysis to determine the thresholds. The third determines the thresholds using a single ROC analysis (“one-step” method). The fourth uses a genetic algorithm (GA) to determine the thresholds. The HVI is calculated using patient data from a previous investigation. Each method (except the first) is tuned on a training database and then run on a test database to assess its performance.

Results

All four methods are capable of differentiating between hypovolemic and normovolemic patients. However, with the first and second methods several patients receive an HVI of around 0.5, leaving their degree of hypovolemia ambiguous. The third and fourth methods deliver a better classification: hypovolemic and normovolemic patients are clearly separated.

Conclusion

All four novel methods provide powerful tools for the diagnosis of hypovolemic patients. The degree of each patient's hypovolemic state can be estimated with a hitherto unattained reliability, and using ROC analysis and the GA the estimation can be improved further.
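The first method (clinical thresholds inside a fuzzy rule system) can be sketched as follows; the choice of vital signs, the threshold values, and the equal-weight aggregation are hypothetical illustrations, not the paper's actual rules.

```python
def ramp(x, lo, hi):
    """Piecewise-linear fuzzy membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def hvi(heart_rate, systolic_bp, urine_output):
    """Toy hypovolemic index in [0, 1]; thresholds are illustrative only."""
    m_hr = ramp(heart_rate, 80, 120)           # tachycardia suggests hypovolemia
    m_bp = 1.0 - ramp(systolic_bp, 80, 110)    # low blood pressure (mmHg)
    m_uo = 1.0 - ramp(urine_output, 0.3, 1.0)  # oliguria (ml/kg/h)
    return (m_hr + m_bp + m_uo) / 3.0          # simple averaging aggregation
```

In the paper's second to fourth methods, the `lo`/`hi` breakpoints would not be fixed clinical values but tuned on the training database by iterative ROC analysis, a single ROC analysis, or a genetic algorithm.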


5.
An improved method to determine material volumes from microcomputed tomography (micro-CT) data is presented. In particular, the method can account for materials with significantly overlapping peaks and small volumes. The example case is a hydroxyapatite scaffold cultured with osteoprogenitor cells. The histogram obtained from the micro-CT data is decomposed into a Gaussian attenuation distribution for each material in the sample, including scaffold, pore and surface tissue, and background. This is done by creating a training set of attenuation data to find initial parameters and then using a nonlinear curve fit, which produced R² values greater than 0.998. To determine the material volumes, the curves that simulated each material are integrated, allowing small volume fractions to be accurately quantified. Thresholds for visualizing the samples are chosen based on volume fractions of the Gaussian curves. Additionally, the use of dual-material regions helps accurately visualize tissue on the scaffold, which is otherwise difficult because of the large volume fraction of scaffold. Finally, the curve integration method is compared with Bayesian estimation and intersection thresholding methods. The pore tissue is not represented at all by the Bayesian estimation, and the intersection thresholding method is less accurate than the curve integration method.
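The volume-fraction step, given already-fitted Gaussian parameters, can be sketched as follows; the amplitudes, means, and widths below are hypothetical, whereas in the paper they come from a nonlinear curve fit to the micro-CT histogram.

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    """One material's contribution to the attenuation histogram."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(0, 255, 1024)                 # grey-level (attenuation) axis

# Hypothetical fitted parameters for two strongly overlapping materials:
scaffold = gaussian(x, 900.0, 180.0, 20.0)    # large scaffold peak
tissue = gaussian(x, 150.0, 150.0, 25.0)      # small tissue peak under its tail

# Integrate each fitted curve to obtain material volumes; this quantifies the
# small tissue fraction even though the peaks overlap and no single threshold
# could separate them.
v_scaffold = np.trapz(scaffold, x)
v_tissue = np.trapz(tissue, x)
frac_tissue = v_tissue / (v_tissue + v_scaffold)
```

Because the integral of each Gaussian is analytically amp·sigma·√(2π), the tissue fraction here is 150·25 / (150·25 + 900·20) ≈ 0.172 regardless of the overlap, which is the key advantage over thresholding.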

6.
Psychological research often deals with constructs that cannot be measured directly. Thus the independent variables in a regression analysis for an observable dependent variable are sometimes latent variables (factors) defined independently of the dependent variable. In this study we point out the problem with applying factor analysis to the combined set of dependent and independent variables in such cases: the derived factors differ from those originally intended, and the true regression parameters cannot be recovered. We propose a stagewise estimation method to solve this problem, which estimates the parameters of the measurement equation in the first stage and the parameters of the structural equation in the second stage. The proposed method also enables calculation of the standard errors of the estimators using the bootstrap method. Numerical studies showed that the proposed method improves estimation efficiency over conventional methods and provides estimates that are robust to model misspecification.
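A minimal sketch of the stagewise idea on synthetic data, using the first principal component as a stand-in for a full factor-analysis measurement model; the loadings, noise levels, and true slope are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
f = rng.normal(size=n)                                # latent factor
X = np.outer(f, [0.9, 0.8, 0.7]) + 0.3 * rng.normal(size=(n, 3))  # indicators
y = 2.0 * f + 0.1 * rng.normal(size=n)                # structural slope: 2.0

# Stage 1 (measurement equation): estimate factor scores from the indicators
# alone -- here the first principal component, a stand-in for a factor fit.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ vt[0]
score *= np.sign(score @ f)   # resolve sign indeterminacy (synthetic check only)
score /= score.std()          # resolve scale indeterminacy

# Stage 2 (structural equation): OLS of y on the estimated factor score
beta = score @ y / (score @ score)
```

The crucial point matching the abstract: the factor is extracted from the indicators only, never from a joint factor analysis that includes `y`, so the structural slope is recovered (up to mild attenuation from measurement noise). Bootstrap standard errors would repeat both stages on resampled rows.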

7.
Positron emission tomography (PET) provides the ability to extract useful quantitative information not available through other radiological techniques. In certain studies, the physiological parameters of interest cannot be determined from the data obtained from a single PET experiment alone, and multiple experiments are required. At present, the methods used to analyse measurements acquired from multiple experiments often treat them separately during the modelling procedures. Such analyses may propagate errors through successive modelling steps and do not fully utilise the information content of the PET measurements. A new method is presented, based on linear least squares, for the analysis of PET dynamic data acquired from multiple experiments. This method considers the complete set of measurements simultaneously and provides reliable parameter estimates. The efficient use of the information content provided by multiple experiments is considered, and the propagation of errors is discussed. As a practical example, we apply the new method to the estimation of the cerebral metabolic rate of oxygen and the parameters of the oxygen utilisation model. The results demonstrate a significant improvement in the reliability and accuracy of the estimates obtained with the new method. Furthermore, the method reduces the likelihood of errors being propagated, and is therefore suitable for the analysis of multiple PET dynamic datasets.
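The core idea of fitting all experiments simultaneously can be sketched with a toy linear system; the two hypothetical "experiments" below share the parameters `p1` and `p2` and are stacked into a single least-squares problem rather than being fitted one after the other.

```python
import numpy as np

t = np.linspace(0, 10, 50)
p_true = np.array([1.5, -0.7])        # shared physiological parameters (toy values)

# Hypothetical linear measurement models of two experiments:
#   experiment A: y = p1*t + p2      experiment B: y = 3*p1*t - p2
yA = p_true[0] * t + p_true[1]
yB = 3 * p_true[0] * t - p_true[1]

# Stack both design matrices and measurement vectors and solve ONCE,
# instead of fitting each experiment separately and combining afterwards.
A = np.vstack([np.column_stack([t, np.ones_like(t)]),
               np.column_stack([3 * t, -np.ones_like(t)])])
y = np.concatenate([yA, yB])
p_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Solving the stacked system uses every measurement to constrain the shared parameters in one step, which is why no intermediate estimates exist through which errors could propagate.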

8.
Three algorithms for scatter compensation in Tc-99m brain single-photon emission computed tomography (SPECT) were optimized and compared on the basis of the accuracy and precision with which lesion and background activity could be simultaneously estimated. These performance metrics are directly related to the clinically important tasks of activity quantitation and lesion detection, in contrast to measures based solely on the fidelity of image pixel values. The scatter compensation algorithms were (a) the Compton-window (CW) method with a 20% photopeak window, a 92-126 keV scatter window, and an optimized "k-factor"; (b) the triple-energy-window (TEW) method, with optimized widths of the photopeak window and the abutting scatter window; and (c) a general spectral (GS) method using seventeen 4 keV windows with optimized energy weights. Each method was optimized by minimizing the sum of the mean-squared errors (MSE) of the estimates of lesion and background activity concentrations. The accuracy and precision of the activity estimates were then determined for lesions of different size, location, and contrast, as well as for a more complex Bayesian estimation task in which lesion size was also estimated. For the TEW and GS methods, parameters optimized for the estimation task differed significantly from those optimized for global normalized pixel MSE. For optimal estimation, the bias of the CW activity estimates was larger and varied more (-2% to 22%) with lesion location and size than that of the other methods. The magnitude of the TEW bias was less than 7% across most conditions, although its precision was worse than that of the CW estimates. The GS method performed best, with bias generally less than 4% and the lowest variance; its root-mean-square (rms) estimation error was within a few percent of that achievable from primary photons alone. For brain SPECT, estimation performance with an optimized, energy-based, subtractive correction may thus approach that of an ideal scatter-rejection procedure.

9.
A method is presented for fully automated detection of multiple sclerosis (MS) lesions in multispectral magnetic resonance (MR) imaging. Based on the fuzzy C-means (FCM) algorithm, the method starts with a segmentation of an MR image to extract an external CSF/lesion mask, preceded by a local image-contrast enhancement procedure. This binary mask is then superimposed on the corresponding data set, yielding an image containing only CSF structures and lesions. FCM is then reapplied to this masked image to obtain a mask of lesions together with some undesired substructures, which are removed using anatomical knowledge. Any lesion smaller than an input size bound is eliminated from consideration. Results are presented for test runs of the method on 10 patients. Finally, the potential of the method as well as its limitations are discussed.
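A minimal one-dimensional fuzzy C-means implementation, sketching only the clustering step the method builds on (not the authors' multispectral pipeline or their masking stages):

```python
import numpy as np

def fcm(x, c, m=2.0, iters=100, seed=0):
    """Minimal 1-D fuzzy C-means: returns centers and membership matrix u,
    where u[i, k] is the degree to which point k belongs to cluster i."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m                              # fuzzified weights
        centers = w @ x / w.sum(axis=1)         # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # standard FCM membership update
        u = inv / inv.sum(axis=0)
    return centers, u

# Two well-separated intensity groups, standing in for CSF vs. lesion voxels
x = np.array([0.0, 0.1, -0.1, 9.9, 10.0, 10.1])
centers, u = fcm(x, 2)
```

Thresholding the membership rows of `u` (rather than raw intensities) is what yields the soft CSF/lesion masks that the subsequent anatomical-knowledge filtering operates on.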

10.
Evoked potentials are usually evaluated subjectively, by visual inspection, and considerable differences between interpretations can occur. Objective, automated methods are normally based on calculating one or more parameters from the data, but only some of these techniques can provide statistical significance (p-values) for the presence of a response. In this work, we propose a bootstrap technique to provide such p-values, applicable to a wide variety of parameters. The bootstrap method is based on randomly resampling (with replacement) the original data and gives an estimate of the probability that the observed response is due to random variation in the data rather than a physiological response. The method is illustrated using auditory brainstem responses (ABRs) to detect hearing thresholds. The flexibility of the approach is demonstrated by showing how it can be used with different parameters, numbers of stimuli, and user-defined false-positive rates. The bootstrap method thus provides a new, simple yet powerful means of detecting evoked potentials that is readily adapted to a wide variety of signal parameters.
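One simple variant of such a resampling test can be sketched as follows; the shift-based null distribution and the peak-amplitude statistic are illustrative assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np

def bootstrap_p(trials, n_boot=500, seed=0):
    """Estimate the probability that the peak of the averaged response arises
    from random variation: resample trials with replacement and apply random
    circular shifts to destroy any stimulus-locked component."""
    rng = np.random.default_rng(seed)
    obs = np.abs(trials.mean(axis=0)).max()      # observed response statistic
    n, t = trials.shape
    count = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # resample trials
        shifts = rng.integers(0, t, size=n)      # break stimulus locking
        boot = np.stack([np.roll(trials[i], s) for i, s in zip(idx, shifts)])
        if np.abs(boot.mean(axis=0)).max() >= obs:
            count += 1
    return (count + 1) / (n_boot + 1)

# Synthetic "evoked potential": 50 trials of a time-locked waveform in noise
rng = np.random.default_rng(42)
t = np.arange(100)
trials = np.sin(2 * np.pi * t / 100) + 0.2 * rng.normal(size=(50, 100))
p = bootstrap_p(trials)
```

The statistic inside `bootstrap_p` can be swapped for any detection parameter, which is the flexibility the abstract emphasises; the user-defined false-positive rate is simply the threshold applied to the returned p-value.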

11.
We have developed a method to predict peptide binding to HLA-DP4 molecules. These HLA class II molecules are the most frequent worldwide and are hence an interesting target for epitope-based vaccines. The prediction is based on quantitative matrices built from binding data for peptides substituted at the anchoring positions for HLA-DP4. A set of 98 peptides of various origins was used to compare predictions with measured binding activity. At different prediction thresholds, the positive predictive value and the sensitivity of the prediction ranged from 50% to 80%, demonstrating its efficiency. This prediction method can be applied to entire pathogen genomes and to large peptide sequences derived from tumor antigens.
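The quantitative-matrix scoring idea can be sketched as follows; the matrix here is random, standing in for the experimentally derived binding weights, and the 9-mer window length is a common HLA class II core-length assumption.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids
rng = np.random.default_rng(1)
matrix = rng.normal(size=(9, 20))    # stand-in quantitative matrix:
                                     # rows = core positions, cols = residues

def score(pep):
    """Additive position-specific score of a 9-mer peptide core."""
    return sum(matrix[i, AA.index(a)] for i, a in enumerate(pep))

def best_windows(seq, top=3):
    """Scan a protein sequence and return its top-scoring 9-mer windows,
    i.e. candidate HLA-DP4-binding cores."""
    scores = [(score(seq[i:i + 9]), seq[i:i + 9]) for i in range(len(seq) - 8)]
    return sorted(scores, reverse=True)[:top]

res = best_windows("ACDEFGHIKLMNPQRSTVWY" * 2)   # toy 40-residue "protein"
```

Scanning every window of a genome-scale sequence set with such an additive matrix is what makes the whole-proteome application mentioned in the abstract computationally cheap.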

12.
The goal of this study is to develop a source imaging method for electroencephalography and magnetoencephalography by analyzing a distance measure based on the Euclidean norm of the difference between pre- and post-stimulus brain activities. Conventional source imaging techniques generally detect evoked responses by averaging multiple trials at each source point. These methods are limited in their ability to fully analyze complex brain signals with a mixture of evoked and induced activities because they compare means or variances. In this article, we propose a novel approach for eliciting significant evoked and induced activity. To this aim, response and baseline ranges from each trial are separately mapped in an anatomically constrained source space by minimum-norm estimation. The extent within a distribution and the distance between distributions of brain activities at each source point are estimated from the set of trials, and this distance analysis determines the degree of difference between the response and baseline activities. The statistical significance of the distance comparison was computed using a nonparametric permutation test. In the evaluation of simulated data sets, the proposed method provided robust images of the simulated location (p < 0.05), whereas the average method did not detect the perturbed source. A total of 200 randomly selected locations were tested with a signal-to-noise ratio (SNR) of 2 dB, and the error between simulated points and the maximum-value points analyzed using this method was 9 ± 15 mm.
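The permutation test on the distance measure can be sketched for a single source point; the trial counts, dimensionality, and mean shift below are illustrative.

```python
import numpy as np

def perm_test(baseline, response, n_perm=1000, seed=0):
    """Nonparametric permutation p-value for the distance (Euclidean norm of
    the mean difference) between baseline and response trial distributions
    at one source point."""
    rng = np.random.default_rng(seed)
    obs = np.linalg.norm(response.mean(axis=0) - baseline.mean(axis=0))
    pooled = np.vstack([baseline, response])
    n_b = len(baseline)
    count = 0
    for _ in range(n_perm):
        # shuffle the baseline/response labels and recompute the distance
        perm = rng.permutation(len(pooled))
        b, r = pooled[perm[:n_b]], pooled[perm[n_b:]]
        if np.linalg.norm(r.mean(axis=0) - b.mean(axis=0)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Synthetic trials at one source point: the response carries a mean shift
rng = np.random.default_rng(7)
baseline = rng.normal(size=(30, 5))
response = rng.normal(size=(30, 5)) + 1.0
p = perm_test(baseline, response)
```

Because label shuffling makes no distributional assumptions, the same test applies whether the post-stimulus difference is an evoked (mean) or induced (spread) change, provided the statistic is chosen accordingly.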

13.
A new method is presented which makes it possible to estimate, from a series of experimental observations of isometric maximum-effort muscle torque, a set of myodynamic parameter values for each of a number of muscles contributing collectively to the total torque output. The parameters that can be estimated are the individual maximum isometric forces, the spreads of the length-tension curves, the relative maximum isometric tendon extensions, and the optimum muscle lengths, most of which could not be estimated previously. The method is described for both penniform and fusiform muscles and is demonstrated using the human triceps muscle as an example. The values obtained by this method are in general agreement with comparable values obtained by in vitro methods.

14.
An analytic solution of the variable-volume double-pool urea kinetics model, and its application to the estimation of clinically relevant parameters of the patient-machine system, are presented. These parameters include the urea distribution volume, the urea generation rate, and the mean dialyzer clearance. Their estimation is based on the assumption of constant values for the diffusion coefficient between the two pools and for the intra-/extracellular volume ratio. Computer simulations show that a ±50% variation of these assumed parameters affects the estimates by less than standard measurement errors.

Starting from these results, four methods for in vivo estimation of the urea distribution volume and generation rate from blood samples are compared. Two methods are based on the analytic solution of the double-pool model, using seven samples (reference method) or three samples (new clinical method). The remaining two methods are based on urea mass balance and are widely used in clinical practice; they differ only in whether the final blood sample is taken at the end of the treatment or 30 min later.

The results obtained from hemofiltration sessions show that the urea generation rate is accurately estimated by all methods. The total distribution volume is also accurately estimated by the new clinical method, while it is systematically underestimated by the urea mass-balance methods when the end-of-treatment blood sample is used; conversely, a marked overcompensation results when the blood sample taken 30 min after the end of dialysis is used. Finally, the new clinical method also provides reliable estimates of the dialyzer clearance from only three blood samples, all taken during dialysis.
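The mass-balance principle behind these methods can be illustrated with a deliberately simplified single-pool, constant-volume model; the paper's model is double-pool with variable volume, and all parameter values below are hypothetical.

```python
import numpy as np

# Single-pool urea mass balance: V dC/dt = G - K*C (simplified sketch)
V = 40.0    # urea distribution volume, L        (hypothetical)
K = 0.2     # dialyzer clearance, L/min          (hypothetical)
G = 0.005   # urea generation rate, mmol/min     (hypothetical)
C0 = 25.0   # pre-dialysis concentration, mmol/L (hypothetical)

def c_analytic(t):
    """Analytic solution of V dC/dt = G - K*C with C(0) = C0."""
    return G / K + (C0 - G / K) * np.exp(-K * t / V)

# Forward-Euler integration of the same ODE as an independent check
dt, T = 0.01, 240.0          # 4-hour session, minutes
c = C0
for _ in range(int(T / dt)):
    c += dt * (G - K * c) / V
```

In the clinical methods above, the logic runs in reverse: intradialytic concentration samples are fitted to such a solution (in the paper, its double-pool analogue) to recover V, G, and K.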


15.
Catheter-based approaches used in the localization and treatment of the source of heart rhythm disturbances (arrhythmias) have become popular because they do not require highly invasive and risky open-chest operations. In most existing approaches, mapping of the outer surface (epicardium) is not possible, even though arrhythmic substrates involving epicardial and subepicardial layers account for about 15% of ventricular tachycardias. In this study, we report a feasibility study of a novel mapping approach targeting the epicardium, based on measurements from multielectrode catheters placed in the coronary veins. We investigated three methods for determining the most probable region of early activation, i.e., the region containing the source of the abnormal activation, using only a set of sparse venous catheter recordings: linear estimation, correlation, and back-propagation networks. The linear estimation technique modelled the relationship between venous catheter measurements and unmeasured epicardial sites based on a previously recorded training data set. The correlation method comprised a comparative analysis between test and training epicardial activation-time maps based on the measured values from the venous sites. In the back-propagation method, the input layer consisted of 42 nodes carrying the activation-time values from the intravenous catheter leads; two hidden layers with 600 and 500 nodes, respectively, were used; and the output layer consisted of 28 nodes corresponding to the manually selected early-activation regions on the epicardium.
The results showed that the linear estimation and correlation methods are good predictors of the region of early activation, and thus these approaches may be employed to direct a subsequent, more focused electrophysiological study and curative radiofrequency (RF) ablation. The back-propagation network performed relatively well for right-ventricularly paced beats, demonstrating its potential as a technique supporting the estimation and correlation methods. The results of this study encourage further investigation and provide evidence that an epicardial mapping approach based on venous catheter recordings is feasible and can provide adequate accuracy for clinical applications.

16.
Cell culture is one of the most common in vitro methods for studying living tissues and cells, and evaluating the viability of cultured cells is essential in such studies. Current viability assays rely on medical and biological techniques that are costly and laborious, and each test destroys a large number of cells. This paper proposes an evaluation method based on image processing and analysis: images of living cells are acquired without damaging them, the growth halo is extracted by image preprocessing and threshold segmentation, and cell viability is evaluated by statistical analysis of the halo area. Experiments on bone-marrow mesenchymal stem cells show that the proposed method is accurate and effective; it greatly reduces the cost of viability evaluation and overcomes the cell-destroying drawback of traditional methods.

17.
This paper proposes an affine-parameter estimation method based on a frequency-domain phase-correlation algorithm, addressing the shortcomings of existing methods for affine-artifact removal in PROPELLER MRI. First, the rigid-motion parameters of each k-space blade are computed by frequency-domain phase correlation; these are then used as initial values for the affine estimation, and after motion compensation the final image is obtained by gridding reconstruction. Experimental results confirm that the method estimates the affine parameters with higher accuracy and better stability, and removes affine motion artifacts markedly better than existing methods. It is thus an effective and practical affine-parameter estimation method for PROPELLER artifact removal.

18.
In this paper a variational framework for joint segmentation and motion estimation is employed for inspecting the heart in cine MRI sequences. A functional combining Mumford-Shah segmentation with optical-flow-based dense motion estimation is approximated using the phase-field technique. The minimizer of the functional provides an optimal motion field and edge set by considering both spatial and temporal discontinuities. First, exploiting calculus-of-variations principles, the partial differential equations associated with the Euler-Lagrange equations of the functional are derived. The finite-element method is then used to discretize the resulting PDEs for numerical solution. Several simulation runs are used to test the convergence and parameter sensitivity of the method, which is further applied to a comprehensive set of clinical data for comparison with conventional cascade methods. The main practical constraints are memory usage and computational complexity, which may be alleviated using sparse-matrix manipulations and similar techniques. Based on the results of this study, joint segmentation and motion estimation outperforms previously reported cascade approaches, especially in segmentation. Experimental results substantiate that the proposed method extracts the motion field and the edge set more precisely than conventional cascade approaches. This superior result is a consequence of simultaneously considering the discontinuities in both the motion field and the image space, and of including consecutive frames (usually five) in the joint functional.

19.
The activity of central integrative brain neurons is associated with the overall assessment of functionally diverse signals of different sensory modalities which converge on these neurons via parallel inputs. Processing this information, these neurons take part in organizing the animal's various actions and in the mechanisms involved in switching from one action to another. Therefore, understanding the functional characteristics of central brain neurons requires studies in which the dynamics of neuron activity are recorded continuously throughout a sequence of actions performed by an animal. Traditional methods of analyzing neuron activity, such as the construction of post- and peristimulus histograms and cross-correlation analysis, are inadequate for this purpose: they allow neuron spike activity to be analyzed only around each synchronization point, and their use for studying a developed program of animal actions unavoidably yields a set of separate histograms providing no information on the dynamics of neuron activity corresponding to continuous behavior. A complex approach to studying the neuronal correlates of behavior is suggested to overcome these difficulties. The method is based on the use of a developed behavioral program with parallel recording of several neurons, and on analysis of neuron activity using a relative time scale based on the duration of each sequentially performed action. Non-traditional methods of processing neuron spike activity were developed for analysis of the resulting data, including the construction of relative histograms and multidimensional statistical methods. These approaches allowed us to study the dynamics of neuron activity continuously through all stages of the performance of a behavioral program and to obtain data on the involvement of each group of the studied neurons in functionally different actions. This methodology was tested in studies of the functional characteristics of striatum neurons in monkeys. Comparable data were obtained on the individual responses of neurons and on the dynamics of their activity at different stages of the animals' performance of a multicomponent behavioral program. This revealed the lack of functional specialization in striatum neurons and different patterns of their involvement in motor and cognitive functions. Translated from Rossiskii Fiziologicheskii Zhurnal imeni I. M. Sechenova, Vol. 84, No. 8, pp. 705–718, August, 1998.

20.
Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images, i.e., maps of parameters of physiological interest. Critical to the application of these maps, for testing for significant changes between normal and pathological physiology, is an assessment of their statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter-projection derivation, as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data, we show that a method based on analysis in projection space inherently calculates the mathematically rigorous pixel variance. This estimation is as accurate as either estimating the variance in image space during model fitting, or estimating it by comparison across sets of parametric images, as might be done between individuals in a group pharmacokinetic PET study. The projection-based method, however, has higher computational efficiency and is also more precise, as reflected in smoother variance-distribution images compared with the other methods.
