Similar Literature
 20 similar documents found (search time: 15 ms)
1.
Objective: To evaluate the agreement between a manual method and a fully automated biochemistry analyzer for the modified glucose-6-phosphate dehydrogenase quantitative ratio method used to screen for G6PD deficiency. Methods: Patient samples were collected and the G6P/6PG ratio was measured both manually and on a fully automated biochemistry analyzer; method comparison and bias estimation were performed according to the EP9-A2 protocol. Results: Correlation analysis of the two methods gave r² = 0.83; at the medical decision levels of 0.95, 1, and 1.05, the biases were 31.6%, 29%, and 27.6%, respectively. The bias between the two methods was not acceptable. Conclusion: Measuring the G6P/6PG ratio on a fully automated biochemistry analyzer can improve the detection rate of G6PD deficiency. If the manual method is used, between-method comparison must be performed and each laboratory should establish its own reference interval.
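The bias-at-decision-level computation described above can be sketched as follows. The paired values are hypothetical, constructed with a constant 30% proportional bias (not the study's 31.6%/29%/27.6% results), and ordinary least squares stands in for whichever regression variant the EP9-A2 workflow actually prescribed.

```python
import numpy as np

# Hypothetical paired G6P/6PG ratios (manual vs. analyzer) constructed with a
# constant 30% proportional bias; not the study's data.
manual = np.array([0.80, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20, 1.30])
analyzer = 1.3 * manual

# Ordinary least-squares fit of the analyzer result on the manual result.
slope, intercept = np.polyfit(manual, analyzer, 1)

def percent_bias(decision_level):
    """Predicted bias (%) of the analyzer vs. the manual method at a level."""
    predicted = slope * decision_level + intercept
    return 100.0 * (predicted - decision_level) / decision_level

for level in (0.95, 1.00, 1.05):
    print(f"bias at {level}: {percent_bias(level):.1f}%")
```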

2.
A high-quality model with sufficient accuracy and efficiency is crucial for volume measurement of granary stockpiles. A single scan usually cannot satisfy the high-accuracy requirements of the application, so multi-station scanning is typically performed to obtain a complete, dense point cloud model of the granary. In this study, a convex hull indexed Gaussian mixture model is introduced for accurate point cloud registration of the granary. The stockpile volume is calculated by subtracting the volume of the full granary model from that of the empty granary model. A novel strategy for computing the full granary scan is therefore proposed to extend the applicability of stockpile measurement: a virtual empty granary model is reconstructed using the grain line baseline and the vertical principal axis. This study presents the registration and fusion results of multi-station scans of different granaries. Experiments comparing stockpile volume measurements from single-scan and fused data demonstrate that the proposed method is effective and robust for granary stockpile measurement. Because it is fully automatic, the method has the potential to be used in intelligent granary management.
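As one illustrative piece of such a pipeline, the volume enclosed by a point cloud model can be estimated from its convex hull. This sketch uses SciPy's Qhull wrapper on a toy cube; it is not the paper's registration or virtual-model reconstruction method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(points):
    """Volume enclosed by the convex hull of an (N, 3) point cloud."""
    return ConvexHull(points).volume

# Toy stand-in for a fused granary model: the 8 corners of a unit cube.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
empty_volume = hull_volume(cube)  # volume of the unit cube
print(empty_volume)
```

In the paper's scheme the stockpile volume would then come from differencing two such model volumes (full versus empty granary).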

3.
Forest biomass is often difficult to quantify because field measurements are time consuming and require destructive sampling. This study explores the retrieval of stem biomass of individual trees by terrestrial laser scanning (TLS). Destructive sampling was used to collect biomass data from sample trees, which served as the dependent variable in a regression analysis. Two biomass estimation models were investigated: one based on diameter at breast height (DBH) and another based on the sum of stem section volumes. Both the DBH and the stem section volumes were determined from automatic reconstruction of the stem curves. Two tree species (Scots pine and Norway spruce) were considered together. Model performance was evaluated via a leave-one-out cross-validation strategy using accurate field measurements for 30 trees. The correlation coefficient (r) and root mean square error (RMSE) between predicted and measured stem biomass were used as measures of goodness of fit. The model with DBH as the predictor produced an r-value of 0.93 and an RMSE of 21.5%. The model using the reconstructed stem and correspondingly derived stem volume as the predictor achieved an r-value of 0.98 and an RMSE of 12.5%. The results indicate that TLS measurements can assess stem biomass with high automation and accuracy by reconstructing the stem from TLS point clouds.
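The leave-one-out evaluation of a DBH-based model can be sketched as follows. The (DBH, biomass) pairs are hypothetical stand-ins for the 30 field trees, and a log-log allometric fit is assumed rather than the paper's exact model form.

```python
import numpy as np

# Hypothetical (DBH in cm, stem biomass in kg) pairs standing in for the
# destructively sampled trees; not the study's data.
dbh = np.array([12.0, 15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0])
biomass = np.array([30.0, 55.0, 90.0, 140.0, 200.0, 270.0, 355.0, 450.0])

# Leave-one-out cross-validation of an assumed allometric model b = a * DBH^c,
# fitted as a straight line in log-log space.
preds = []
for i in range(len(dbh)):
    mask = np.arange(len(dbh)) != i
    c, log_a = np.polyfit(np.log(dbh[mask]), np.log(biomass[mask]), 1)
    preds.append(np.exp(log_a) * dbh[i] ** c)
preds = np.array(preds)

r = np.corrcoef(preds, biomass)[0, 1]
rmse_pct = 100.0 * np.sqrt(np.mean((preds - biomass) ** 2)) / biomass.mean()
print(f"r = {r:.3f}, RMSE = {rmse_pct:.1f}%")
```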

4.
Objective: To improve the efficiency of the pre-analytical process and the overall level of automation in blood station testing. Methods: A fully automated sample processing system was introduced and used to randomly handle 7,121 of the 13,625 donor blood samples collected at our station from March to May 2019, covering hand-over, barcode scanning, centrifugation, refrigeration, sorting, decapping, and rack transfer (automated group); 6,504 samples were concurrently processed at random by the traditional manual method (manual group). The working methods and efficiency of the two groups were compared at each step of the workflow. Results: With full automation of the pre-analytical steps, the automated group compared with the manual group required 1 vs 2 staff members, a mean sample retrieval time of 30 ± 10 s vs 120 ± 10 s, and a mean time per batch of 50 ± 10 min vs 60 ± 10 min. Conclusion: The fully automated sample processing system extends laboratory automation into the pre-analytical phase, improves efficiency, drives continuous equipment upgrades, and promotes automation of the entire testing process.

5.
We develop and evaluate a new individual tree detection (ITD) algorithm to automatically locate and estimate the number of individual trees within a Pinus radiata plantation from relatively sparse airborne LiDAR point cloud data. The area of interest comprised stands covering a range of age classes and stocking levels. Our approach is based on local maxima (LM) filtering and tackles the issue of selecting the optimal search radius from the LiDAR point cloud for every potential LM using metrics derived from local neighbourhood data points; thus, it adapts to local conditions, irrespective of canopy variability. This was achieved in two steps: (i) development of a logistic regression model using simulated stands composed of individual trees derived from real LiDAR point cloud data, and (ii) application testing of the model using real plantation LiDAR point cloud data and geolocated, tree-level reference crowns that were manually identified in the LiDAR imagery. Our ITD algorithm performed well compared with previous studies, producing an RMSE of 5.7% and a bias of only −2.4%. Finally, we suggest that the ITD algorithm can be used for accurately estimating stocking and for tree mapping, which in turn could be used to derive plot-level metrics for an area-based approach, enhancing estimates of stand-level inventory attributes based on plot imputation.
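A minimal fixed-window version of local-maxima tree detection is sketched below on a synthetic canopy height model; the paper's method instead adapts the search radius per candidate via a logistic model on point-cloud neighbourhood metrics.

```python
import numpy as np
from scipy import ndimage

# Synthetic canopy height model (CHM) with four Gaussian crowns.
yy, xx = np.mgrid[0:20, 0:20]
chm = np.zeros((20, 20))
for r, c, h in [(4, 4, 18.0), (4, 14, 22.0), (14, 7, 15.0), (15, 16, 20.0)]:
    chm = np.maximum(chm, h * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 4.0))

# Fixed 5x5 local-maxima filter: pixels taller than 2 m that equal the
# maximum of their window are treated as tree tops.
local_max = (chm == ndimage.maximum_filter(chm, size=5)) & (chm > 2.0)
tree_tops = np.argwhere(local_max)
print(len(tree_tops))  # 4 detected crowns
```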

6.
BACKGROUND: Hemoglobin A1c (HbA1c) is widely accepted as the most important biological parameter reflecting glycemic control in diabetic patients. Its measurement in clinical chemistry necessitates the use of reliable and robust methods. We studied here the influence of two hemolysis procedures on an automated HbA1c immunoassay using the Mira Plus analyzer. METHODS AND RESULTS: Whole blood was hemolyzed either manually, using an external manual procedure, or automatically on board the analyzer. The results of imprecision studies showed comparable performance of both procedures, coefficients of variation (CVs) being slightly higher with the automated procedure (2.2–2.7% versus 1.7–2.1% in within-run experiments, and 2.4–3.5% versus 2.1–3.0% in between-run experiments). Comparison of results in 100 fresh samples showed acceptable correlation between the two procedures (r² = 0.94, y = 0.98x + 0.43). Sedimentation of whole blood in sample tubes prior to automatic hemolysis did not alter the results. CONCLUSION: These data demonstrate that both procedures are suitable for routine use, with higher practicability for the automated one. However, the values are not directly comparable, pointing out the critical role of every analytical step and the need for standardization and strict quality control of HbA1c assays.
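The within-run and between-run imprecision figures are coefficients of variation, which can be computed from replicate measurements as below; the replicate values are illustrative, not the study's.

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate measurements."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# Illustrative within-run HbA1c replicates (%), not the study's values.
print(round(cv_percent([6.1, 6.2, 6.0, 6.15, 6.05]), 2))
```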

7.
The costs of clinical investigations of drug-induced QT interval prolongation are mainly related to manual processing of electrocardiographic (ECG) recordings. Potentially, however, these costs can be decreased by automatic ECG measurement. To investigate the improvements in measurement accuracy of modern ECG equipment, this study examined QT interval measurement by the "old" and "new" versions of the 12SL ECG algorithm by GE Healthcare (Milwaukee, WI, USA) and compared the results to carefully validated and reconciled manual measurements. The investigation used two sets (A and B) of ECG recordings that originated from large clinical studies. Sets A and B consisted of 15,194 and 29,866 10-second ECG recordings, respectively. All recordings were obtained with GE Healthcare recorders and were available in a digital format compatible with GE Healthcare ECG processing software. The two sets differed significantly in ECG quality, with set B being substantially more noise polluted. Compared to careful manual QT interval readings in recording set A, the errors of automatic QT interval measurement were (mean ± SD) +3.95 ± 5.50 ms and +0.51 ± 12.41 ms for the "new" and "old" 12SL algorithm, respectively. In recording set B, these numbers were +2.41 ± 9.47 ms and −0.17 ± 14.89 ms, respectively (both differences were highly statistically significant, P < 0.000001). In recording set A, 95.9% and 76.6% of ECGs were measured automatically within 10 ms of the manual measurement by the "new" and "old" versions of the 12SL algorithm; in recording set B, these numbers were 83.9% and 59.5%. The errors made by the "new" and "old" versions of the 12SL algorithm were practically independent of each other (correlation coefficients of 0.031 and 0.281 in recording sets A and B, respectively).
The study shows that (a) compared to the "old" version of the 12SL algorithm, QT interval measurement by the "new" version implemented in the most recent GE Healthcare ECG equipment is significantly better, and (b) the precision of automatic measurement by the 12SL algorithm depends substantially on the quality of the processed ECG recordings. The improved accuracy of the "new" 12SL algorithm makes it feasible to use modern ECG equipment without any manual intervention in selected parts of a drug-development program.
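The agreement statistics used here (mean ± SD of automatic-minus-manual errors and the percentage within 10 ms) can be reproduced on synthetic data as follows; the simulated bias and spread loosely mimic the set A "new"-algorithm figures and are not the study's measurements.

```python
import numpy as np

# Synthetic automatic-vs-manual QT readings; bias/spread loosely mimic the
# set A "new"-algorithm figures, not the study's recordings.
rng = np.random.default_rng(0)
manual = rng.normal(400.0, 25.0, 1000)            # manual QT readings, ms
automatic = manual + rng.normal(4.0, 5.5, 1000)   # automatic readings, ms

errors = automatic - manual
mean_err, sd_err = errors.mean(), errors.std(ddof=1)
within_10ms = 100.0 * np.mean(np.abs(errors) <= 10.0)
print(f"error: {mean_err:+.2f} +/- {sd_err:.2f} ms; "
      f"{within_10ms:.1f}% within 10 ms")
```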

8.
Remote Sensing Letters, 2013, 4(12): 1143–1152

This letter describes a new algorithm for automatic tree crown delineation based on a model of tree crown density, and its validation. The tree crown density model was first used to create a correlation surface, which was then input to a standard watershed segmentation algorithm for delineation of tree crowns. The use of a model in an early step of the algorithm neatly solves the problem of scale selection. In earlier studies, correlation surfaces have been used for tree crown segmentation, involving modelling tree crowns as solid geometric shapes. The new algorithm applies a density model of tree crowns, which improves the model’s suitability for segmentation of Airborne Laser Scanning (ALS) data because laser returns are located inside tree crowns. The algorithm was validated using data acquired for 36 circular (40 m radius) field plots in southern Sweden. The algorithm detected high proportions of field-measured trees (40–97% of live trees in the 36 field plots: 85% on average). The average proportion of detected basal area (cross-sectional area of tree stems, 1.3 m above ground) was 93% (range: 84–99%). The algorithm was used with discrete return ALS point data, but the computation principle also allows delineation of tree crowns in ALS waveform data.
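The correlation-surface step can be sketched as follows: a raster of return density is correlated with a 2-D crown density template (a Gaussian is assumed here; the letter's actual density model may differ), and the resulting surface would then feed the watershed segmentation.

```python
import numpy as np
from scipy.signal import correlate2d

# Raster of ALS return density with three toy crown centres.
yy, xx = np.mgrid[0:30, 0:30]
density = np.zeros((30, 30))
for r, c in [(8, 8), (8, 22), (22, 12)]:
    density = np.maximum(density, np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 6.0))

# Crown density template (Gaussian assumed here).
ty, tx = np.mgrid[-3:4, -3:4]
template = np.exp(-(ty ** 2 + tx ** 2) / 6.0)

# Correlation surface; its maxima sit at the crown centres and would seed
# the subsequent watershed segmentation.
surface = correlate2d(density, template, mode="same")
peak = np.unravel_index(surface.argmax(), surface.shape)
print(peak)
```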

9.
Seabed topography information contained in airborne lidar bathymetry data can be used to detect changes occurring at the bottom of a basin and to detect objects deposited on it. Processing the full-waveform data yields returns from the water surface and allows the underwater scene to be identified. A classification process based on the Random Forest (RF) algorithm is presented for lidar bathymetry data. The classification was performed in two independent approaches using an input vector of 16 features. In the first approach, the entire point cloud was classified; in the second, the point cloud did not contain water-surface points. Features derived from the full waveform and from point cloud geometry were used in the classification. The quantitative efficiency of the classification was verified through error matrices. The accuracies obtained (100% for the water surface, 99.9% for the seabed, and 60% for objects) indicate that the RF algorithm can be used for detecting objects on the seabed. Compared with Support Vector Machines, the RF algorithm gave better results in detecting object points in the point cloud that included the water surface.
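A minimal version of the RF point classification can be sketched with scikit-learn; the three classes and two synthetic features below merely stand in for the study's 16-feature waveform and geometry vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Synthetic 2-feature stand-in for the 16-feature point descriptors:
# class 0 = water surface, 1 = seabed, 2 = object on the seabed.
rng = np.random.default_rng(42)
n = 300
X = np.vstack([
    rng.normal([0.0, 5.0], 0.3, (n, 2)),   # shallow depth, strong return
    rng.normal([8.0, 2.0], 0.5, (n, 2)),   # deep, weaker return
    rng.normal([7.0, 3.5], 0.4, (n, 2)),   # just above the seabed
])
y = np.repeat([0, 1, 2], n)

# Train on every other point, evaluate on the rest via an error matrix.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
pred = clf.predict(X[1::2])
print(confusion_matrix(y[1::2], pred))
```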

10.
We present a robust and automatic method for evaluating the 3-D navigation accuracy of ultrasound (US) based image-guided systems. The method is based on a precisely built and accurately measured phantom with several wire crosses and an automatic 3-D template matching by correlation algorithm. We investigated the accuracy and robustness of the algorithm and also addressed optimization of algorithm parameters. Finally, we applied the method to an extensive data set from an in-house US-based navigation system. To evaluate the algorithm, eight skilled observers identified the same wire crosses manually, and the average over all observers constitutes our reference data set. We found no significant differences between the automatic and manual procedures; the average distance between the point sets for one particular volume (27 point pairs) was 0.27 ± 0.17 mm. Furthermore, the spread of the automatically determined points compared with the reference set was lower than the spread for any individual operator. This indicates that the automatic algorithm is more accurate than manual determination of the wire-cross locations, in addition to being faster and nonsubjective. In the application example, we used a set of 35 3-D US scans of the phantom under various acquisition configurations. The US frequency was 6.7 MHz and the average target depth was 6 cm. The accuracy, represented by the mean distance between automatically determined wire-cross locations and physically measured locations, was found to be 1.34 ± 0.62 mm.
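The core of the automatic method, 3-D template matching by correlation, can be sketched as follows: a synthetic wire-cross pattern is embedded in a noisy volume and recovered by correlating with a zero-mean template. The phantom geometry and matching details here are illustrative, not the in-house system's.

```python
import numpy as np
from scipy.signal import correlate

# Synthetic wire cross: two crossing diagonal "wires" in a 7x7x7 block.
cross = np.zeros((7, 7, 7))
cross[3, range(7), range(7)] = 1.0
cross[range(7), 3, range(7)] = 1.0

# Noisy US-like volume with the cross embedded at a known position.
rng = np.random.default_rng(3)
volume = rng.normal(0.0, 0.05, (40, 40, 40))
volume[12:19, 20:27, 9:16] += cross

# Zero-mean template suppresses the response to bright uniform regions.
template = cross - cross.mean()
score = correlate(volume, template, mode="same")
center = np.unravel_index(score.argmax(), score.shape)
print(center)  # (15, 23, 12): the centre of the embedded cross
```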

11.
BACKGROUND: Dual-energy X-ray absorptiometry is the gold standard for diagnosing osteoporosis, but its default scan mode produces large errors when measuring bone mineral density in small animals. OBJECTIVE: To observe the effect of different dual-energy X-ray absorptiometry measurement modes on the accuracy of bone mineral density measurements in rats. METHODS: Six- to seven-month-old female Sprague-Dawley rats underwent whole-body dual-energy X-ray absorptiometry scans, and the bone mineral density of the whole body, head, and spine was measured in turn using a user-defined manual rectangular mode, a manual elliptical mode, and the system's default standard mode. RESULTS AND CONCLUSION: Whole-body, head, and spine bone mineral density did not differ significantly between the manual elliptical mode and the system default mode (P > 0.05), whereas the manual rectangular mode differed significantly from the default standard mode (P < 0.01). The manual elliptical mode and the default standard mode thus yield comparable results, while the manual rectangular mode introduces large errors, suggesting that the manual elliptical mode can serve as one of the analysis methods for small-animal bone density measurement.

12.
Objective: To evaluate the performance, characteristics, and practical value of the IQ200 automated urine sediment analyzer by comparison with the Sysmex UF-100 urine analyzer and manual microscopy. Methods: Morning urine specimens from 186 randomly selected inpatients in our hospital were tested with the IQ200 automated urine sediment analyzer, the Sysmex UF-100 urine analyzer, and manual microscopy, and multiple parameters were analyzed. Results: (1) The detection rates for red and white blood cells in urine were largely consistent across the three methods. (2) All three methods were slightly more sensitive to red blood cells than to white blood cells. (3) The IQ200 automated urine sediment analyzer greatly improved the sensitivity of red and white blood cell detection. Conclusion: The IQ200 automated urine sediment analyzer has high sensitivity and enables automated, standardized urinalysis.

13.
The calculation of the left ventricular ejection fraction (LVEF) depends on accurate measurement of diastolic and systolic left ventricular volumes. Although breath-hold cine magnetic resonance imaging (MRI) allows coverage of the whole cardiac cycle with excellent time resolution, many authors rely on visual selection of the diastolic and systolic short-axis slices in order to reduce postprocessing time. An automatic method was developed to detect the endocardial contour on each image, allowing automatic selection of the systolic frame. The calculated ejection fraction was compared with radionuclide ventriculography (RNV). Sixty-five patients were examined using an electrocardiogram (ECG)-gated gradient echo sequence. Among these examinations, manual and automatic MRI processing were compared when the timing of the systolic frame coincided. Good correlations were found between the automatic MRI approach and RNV, and between manual and automatic processing on MRI alone. The results show that automatic determination of the ejection fraction is feasible and should constitute an important step toward a larger acceptance of MRI as a routine tool in heart disease imaging. One major benefit of automatic postprocessing is that it may eliminate the visual choice of the systolic frame, which was inaccurate in more than 50% of the studied patients.
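The automatic selection of the systolic frame reduces to picking the minimum of the per-frame cavity volume curve, as in this sketch with a synthetic volume curve; the paper derives the actual volumes from automatically detected endocardial contours.

```python
import numpy as np

# Synthetic per-frame LV cavity volume over one cardiac cycle (mL):
# 140 mL at end-diastole, 50 mL at end-systole.
phases = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
lv_volume = 95.0 + 45.0 * np.cos(phases)

es_frame = int(np.argmin(lv_volume))   # automatically selected systolic frame
edv, esv = lv_volume.max(), lv_volume.min()
ef = 100.0 * (edv - esv) / edv
print(f"systolic frame {es_frame}, EF = {ef:.0f}%")
```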

14.
The significant performance improvement obtained by using Spark in-memory processing for iterative processes has led many researchers in various fields to implement their applications with Spark. In this study, we investigated the use of in-memory processing with Spark for creating a digital elevation model from massive light detection and ranging (LiDAR) point clouds, which can be considered an iterative process. We conducted our experiments on large high-density LiDAR data sets using two well-known interpolation methods: inverse distance weighting (IDW) and Kriging. Here, we designed our in-memory processing to parallelize those methods, and compared our results with the popularly used Hadoop MapReduce-based implementation. Our experiments ran on six servers under a medium-sized high-performance cloud computing environment. The results demonstrated that our Spark-based in-memory computing yielded better performance compared with Hadoop MapReduce, with an average 5.4 times speed increase in IDW, and 4.8 times improvement in Kriging. In addition, we evaluated the characteristics of our method in terms of central processing unit, memory usage, and network activities.
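IDW, the simpler of the two interpolators, computes each DEM cell as an independent inverse-distance-weighted average, which is why it partitions cleanly across Spark workers. This NumPy sketch shows only the per-cell formula, without the Spark distribution.

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """IDW-interpolate heights at query_xy from scattered sample points."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps guards zero distances
    return (w * sample_z).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 14.0, 16.0])
print(idw(pts, z, np.array([[0.5, 0.5]])))   # centre of a symmetric grid
```

In a Spark version, the grid of query cells would be distributed across workers while the sample points are broadcast, since each cell's estimate is independent of the others.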

15.
Actigraphy has become a valuable clinical and research tool to objectively evaluate sleep, daytime activity, and circadian activity rhythms in healthy individuals as well as persons with primary and comorbid insomnia. However, procedures used for sampling, data processing, and analysis are not consistently reported in the literature. The wide variability in how actigraphy is reported makes it difficult to compare findings across studies. The procedures and reporting methods from 21 studies that used actigraphs to assess sleep and wake in adult patients with cancer are reviewed to highlight the differences in reporting strategies. Patients with cancer were chosen to illustrate the methodological challenges related to procedures and reporting in one population. The aim of this article was to advance standards of information presented in publications to enable comparisons across research studies that use actigraphy. Specific methodological challenges when using actigraphy in research include instrumentation, selection of pertinent variables, sampling, and data processing and analysis. Procedural decisions are outlined and discussed, and suggestions are made for standardized actigraphy information to include in research reports. More consistent procedures and reporting will advance the science of sleep, daytime activity, and circadian activity rhythms and their association with other health-related variables.

16.
Objective: The study was performed to investigate the effect of two different lancets and of heel warming during blood sampling from the heel on procedure duration and crying. Methods: This was a randomized controlled trial. The data were obtained from the Newborn Intensive Care Unit of a hospital in Istanbul between January 2015 and January 2016. One hundred twenty newborns were randomly assigned to four groups: automatic lancet with/without warming and manual lancet with/without warming. The newborns underwent heel puncture for routine blood bilirubin monitoring. Results: There was no statistically significant difference between the four groups in terms of the characteristics that could affect the outcome of the study. The infants in the manual lancet group without warming were found to have longer durations of crying and longer procedure durations than the other groups. The procedure durations of infants in the manual lancet group with warming were significantly longer than those in the automatic lancet group with warming. Conclusion: This study shows that both heel warming and the use of an automatic lancet are effective in reducing the durations of the procedure and of crying during blood sampling from the heel.

17.
Luminal stenosis is used for selecting the optimal management strategy for patients with carotid artery disease. The aim of this study was to evaluate the reproducibility of carotid stenosis quantification using manual and automated segmentation methods on submillimeter through-plane resolution multi-detector CT angiography (MDCTA). Thirty-five patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis using NASCET criteria from axial source images, reconstructed maximum intensity projection (MIP) images, and the 3-D carotid geometry automatically segmented by an open-source toolkit (Vascular Modelling Toolkit, VMTK). Good agreement was observed among the measurements using axial images, MIP, and automatic segmentation. The automatic segmentation method showed better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement of axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method thus has higher reproducibility compared with manual methods.
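The NASCET criterion the readers applied expresses stenosis as the relative narrowing of the lumen against the normal distal internal carotid diameter; the diameters below are illustrative, not patient data.

```python
def nascet_stenosis(d_stenosis_mm, d_distal_mm):
    """Percent diameter stenosis by the NASCET criterion."""
    return 100.0 * (1.0 - d_stenosis_mm / d_distal_mm)

# Illustrative lumen diameters, not patient measurements.
print(nascet_stenosis(1.5, 6.0))  # 75.0
```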

18.
Most collected hematopoietic stem cell (HSC) products need processing in order to isolate stem cells and remove plasma and erythrocytes. Bone marrow (BM) enrichment has two main aims: reducing the immunogenicity of ABO-incompatible transplants and/or preventing the toxicity of hemolysis during cryopreservation. In our center we have implemented two methods for BM enrichment: a manual technique using 10% HAES (hydroxyethyl starch) and an automatic cell separator. To optimize the process, we retrospectively examined the parameters that could have a major impact on the final efficiency of engraftment, such as hematocrit reduction, CD34+ recovery, WBC recovery, and cell viability. This was a retrospective analysis of 46 pediatric patients who underwent autologous or allogeneic HSCT; 27 procedures were performed with the cell separator and 19 with the HAES technique. The study showed that cell separator processing is significantly less damaging to stem cells than the considerably longer manual HAES technique. In terms of RBC depletion and WBC recovery the two techniques were equally efficient and adequate, but CD34+ recovery was significantly higher with the cell separator. We also examined the effect of adding packed red blood cells (PRBCs) to the BM on the purification and efficiency of HSC isolation; doing so decreased only the WBC recovery during cell separator processing. In summary, after a series of analyses we found the cell separator to be more convenient than the HAES technique in most of the aspects considered; furthermore, it is cheaper and requires less processing time.
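The recovery and depletion percentages compared in the study are simple ratios of cell counts before and after processing, as in this sketch with illustrative numbers (not the study's data).

```python
def recovery_pct(cells_after, cells_before):
    """Percentage of a cell fraction retained after processing."""
    return 100.0 * cells_after / cells_before

def depletion_pct(cells_after, cells_before):
    """Percentage of a cell fraction removed by processing."""
    return 100.0 * (1.0 - cells_after / cells_before)

# Illustrative counts, not the study's data.
print(recovery_pct(4.5e6, 9.0e6))     # CD34+ recovery: 50.0
print(depletion_pct(5.0e10, 2.0e11))  # RBC depletion: 75.0
```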

19.
First-pass cardiac MR perfusion (CMRP) imaging has undergone rapid technical advancement in recent years. Although the efficacy of CMRP imaging in the assessment of coronary artery disease (CAD) has been proven, its clinical use is still limited. This limitation stems, in part, from the manual interaction required to quantitatively analyze the large amount of data; the process is tedious, time-consuming, and prone to operator bias. Furthermore, acquisition- and patient-related image artifacts reduce the accuracy of quantitative perfusion assessment. With the advent of semi- and fully automatic image processing methods, not only have the challenges posed by these artifacts been overcome to a large extent, but a significant reduction has also been achieved in analysis time and operator bias. Despite an extensive literature on such image processing methods, to date no survey has been performed to discuss this dynamic field. The purpose of this article is to provide an overview of the current state of the field with a categorical study, along with a future perspective on the clinical acceptance of image processing methods in the diagnosis of CAD.

20.
With the increasing use of two-dimensional echocardiograms (2DE) for diagnosis [1,2], efforts to computerize the quantification of cardiac parameters have increased. Visual processing of echocardiograms is time- and labor-intensive, and usually provides qualitative results with subjective variations [3]. In contrast, computer-assisted methods are efficient and provide quantitative, reproducible results. On the basis of the extent of computer usage, 2DE processing methods are classified into three categories, namely manual [9–30], interactive [32–49], and automatic methods [51–82]. This work is a structured survey of the published research on these three categories.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号