Similar Documents
20 similar documents found (search time: 31 ms)
1.
Object-oriented classification methods are increasingly used to derive plant-level structural information from high-resolution remotely sensed data from plant canopies. However, many automated, object-based classification approaches perform poorly in deciduous forests compared with coniferous forests. Here, we test the performance of the automated spatial wavelet analysis (SWA) algorithm for estimating plot-level canopy structure characteristics from a light detection and ranging (LiDAR) data set obtained from a northern mixed deciduous forest. Plot-level SWA-derived and co-located ground-based measurements of tree diameter at breast height (DBH) were linearly correlated when canopy cover was low (correlation coefficient (r) = 0.80) or moderate (r = 0.68), but were statistically unrelated when canopy cover was high. SWA-estimated crown diameters were not significantly correlated with allometrically based estimates of crown diameter. Our results show that, when combined with allometric equations, SWA can be useful for estimating deciduous forest structure information from LiDAR in forests with low to moderate (<175% projected canopy area/ground area) levels of canopy cover.
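To make the plot-level comparison in this abstract concrete, the short sketch below computes the Pearson correlation between SWA-derived and field-measured DBH within canopy-cover strata. This is an illustrative reconstruction, not the authors' code; the file and column names (plots.csv, cover_class, dbh_swa_cm, dbh_field_cm) are hypothetical.

```python
# Illustrative sketch: correlation of plot-level SWA-derived DBH with
# field-measured DBH, stratified by canopy-cover class.
import pandas as pd
from scipy.stats import pearsonr

plots = pd.read_csv("plots.csv")        # one row per co-located field plot

for cover, grp in plots.groupby("cover_class"):   # e.g. low / moderate / high
    r, p = pearsonr(grp["dbh_swa_cm"], grp["dbh_field_cm"])
    print(f"{cover:>8}: n={len(grp):3d}  r={r:.2f}  p={p:.3f}")
```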

2.
The objective of this research was to evaluate wood volume estimates of Pinus nigra trees in forest stands derived using Geographic Object-Based Image Analysis. Information on forest parameters such as wood volume and number of trees is useful for forest management and supports forest sustainability. Most existing approaches for estimating the wood volume of forest trees require field measurements, which are labour-intensive. In this study, the collected field data were used only to evaluate the results. Wood volume was estimated with an individual tree crown approach, using monoscopic satellite images in combination with allometric data. The study area is the Pentalofo forest, located in Kozani prefecture in western Macedonia, Northern Greece. A single plot of 0.1143 ha was used. During preprocessing, a pansharpened image was produced from two Quickbird satellite images (one multispectral image of 2.4 m spatial resolution and one panchromatic image of 0.6 m spatial resolution). Bands of this image were used singly or in combination to delineate individual tree crowns. An allometric equation was then used to calculate tree Diameter at Breast Height (DBH) from the detected tree crowns. The evaluation was conducted on three levels: (i) number of trees, (ii) DBH class distribution and (iii) wood volume. On the third level, the evaluation was conducted twice: once using field-measured height and once without. The difference between the results and the field data for wood volume reached a maximum of approximately 30%. The total number of trees was exactly the same as counted in the field, and the DBH distribution showed a tendency for trees to shift to a higher DBH class, resulting in an overestimation of wood volume.
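The crown-to-volume chain described above (delineated crown diameter, then allometric DBH, then stem volume) can be sketched in a few lines. The coefficients below are placeholders, not the study's fitted values, and the arrays stand in for the delineation and field outputs.

```python
# Sketch of the crown-to-volume chain (illustrative only): crown diameter from
# object-based delineation -> DBH via an allometric relation -> stem volume.
# All coefficients are placeholders, not the study's fitted values.
import numpy as np

crown_diam_m = np.array([3.1, 4.5, 5.2, 2.8])      # delineated crown diameters
tree_height_m = np.array([11.0, 14.5, 16.0, 9.5])  # field-measured heights

a, b = 4.0, 5.5                      # hypothetical DBH(cm) = a + b * crown(m)
dbh_cm = a + b * crown_diam_m

c, d, e = 6e-5, 1.9, 0.9             # hypothetical volume model V = c*DBH^d*H^e
volume_m3 = c * dbh_cm**d * tree_height_m**e

print("DBH (cm):         ", np.round(dbh_cm, 1))
print("stem volume (m^3):", np.round(volume_m3, 3))
print("plot volume (m^3):", round(float(volume_m3.sum()), 3))
```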

3.
Habitat assessments often require observers to estimate tree hollows in situ, which can be costly, destructive and prone to bias. An alternative is to count the number of trees above a specific size. The size at which a tree develops hollows differs substantially among tree species. To assist with setting standards for habitat assessment we defined a large tree as the size at which a species has a 50% probability of supporting a 2-cm diameter hollow. We estimated this size for 68 species using a meta-analysis based on 18 data sources. We found that large tree size ranged from 21 to 106 cm diameter at breast height (DBH). Each species was attributed to vegetation types (formations and classes) to explore variation in large tree sizes. Despite considerable variation within vegetation classes and formations, our results suggest that a large tree size of approximately 50 cm DBH may be appropriate for most vegetation types, with lower estimates in semi-arid vegetation (~30 cm) and higher estimates in wet sclerophyll forests (~80 cm). Our estimates provide empirical support for defining large trees at the species, vegetation class and formation levels within New South Wales, and highlight the need for more empirical data.
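The "large tree" definition above corresponds to the DBH at which a fitted logistic model crosses a probability of 0.5, i.e. DBH50 = −β0/β1. A minimal sketch with invented data:

```python
# Illustrative only: fit a logistic model of hollow presence against DBH and
# solve for the DBH at which P(hollow) = 0.5. The data below are invented.
import numpy as np
import statsmodels.api as sm

dbh = np.array([15, 22, 30, 38, 45, 52, 60, 70, 85, 100], dtype=float)
has_hollow = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

X = sm.add_constant(dbh)
fit = sm.Logit(has_hollow, X).fit(disp=0)
b0, b1 = fit.params

# P = 0.5 where b0 + b1 * DBH = 0, so DBH50 = -b0 / b1
dbh50 = -b0 / b1
print(f"estimated 'large tree' size: {dbh50:.1f} cm DBH")
```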

4.
In this study, a method for estimating the stand diameter at breast height (DBH) classes in a South Korea forest using airborne lidar and field data was proposed. First, a digital surface model (DSM) and digital terrain model (DTM) were generated from the lidar data that have a point density of 4.3 points/m², then a tree canopy model (TCM) was created by subtracting the DTM from the DSM. The tree height and crown diameter were estimated from the rasterized TCM using local maximum points, minimum points and a circle fitting algorithm. Individual tree heights and crown diameters were converted into DBH using the allometric equations obtained from the field survey data. We calculated the proportion of the total number of individual trees belonging to each DBH class in each stand to determine the stand DBH class according to the standard guidelines. More than 60% of the stand DBH classes were correctly estimated by the proposed method, and their area occupied over 80% of the total forest area. The proposed method generated more accurate results compared to the digital forest type map provided by the government.
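A minimal sketch of the canopy-model and treetop steps described above: subtract the DTM from the DSM to obtain the TCM, then take local maxima as treetop candidates. The raster files, window size and allometric coefficients are assumptions, not the study's values.

```python
# Minimal sketch of the canopy-model step (assumed rasters): TCM = DSM - DTM,
# then treetop candidates as local maxima within a moving window.
import numpy as np
from scipy import ndimage

dsm = np.load("dsm.npy")   # hypothetical rasterized digital surface model
dtm = np.load("dtm.npy")   # hypothetical digital terrain model (same grid)
tcm = dsm - dtm            # tree canopy model (canopy height above ground)

window = 5                                     # pixels; depends on crown size
local_max = ndimage.maximum_filter(tcm, size=window) == tcm
treetops = local_max & (tcm > 2.0)             # ignore ground/shrub returns

rows, cols = np.nonzero(treetops)
heights = tcm[rows, cols]

# Hypothetical allometric conversion from tree height to DBH
dbh_cm = 1.2 * heights ** 1.1                  # placeholder coefficients
print(f"{len(heights)} candidate trees, mean DBH ~ {dbh_cm.mean():.1f} cm")
```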

5.
Crop yield forecasting in a region has become an important research area due to global warming and related climate changes. Although forecasts can be produced from available statistical information, obtaining recent, up-to-date data from which to extract reliable statistics is not easy. Very high resolution satellite images can be used for this purpose. However, manually processing images acquired over large regions is neither feasible nor reliable, so automated methods are needed. In this study, we propose a novel method to help forecast the crop yield in an orchard. The number of trees in an orchard, together with the size and type of each tree crown, gives an approximate estimate of the crop that can be harvested. We therefore focus on both tree crown detection and delineation. The proposed tree crown detection method is based on probabilistic voting. For tree crown delineation, we propose a watershed-segmentation-based ellipse fitting method. We tested the proposed method on 17 satellite images containing 13,476 trees and compared it with classical local maxima/minima filtering and a recent method from the literature using three additional test images. These tests indicate the strengths and weaknesses of the proposed method.
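The delineation step can be illustrated with a generic watershed-plus-ellipse sketch (not the authors' probabilistic-voting implementation). The input band and thresholds are hypothetical; scikit-image's regionprops supplies the best-fit ellipse axes used as a proxy for crown size.

```python
# Generic watershed segmentation with per-segment ellipse parameters, as a
# stand-in for the delineation step. The input array is hypothetical.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

img = np.load("orchard_band.npy")          # hypothetical pansharpened band
mask = img > np.percentile(img, 70)        # rough crown/vegetation mask

peaks = peak_local_max(img, min_distance=5, labels=mask.astype(int))
markers = np.zeros_like(img, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-img, markers, mask=mask)   # one label per crown candidate

for region in regionprops(labels):
    # regionprops gives the best-fit ellipse axes for each crown segment
    major, minor = region.major_axis_length, region.minor_axis_length
    print(f"crown {region.label}: {major:.1f} x {minor:.1f} px, "
          f"area {region.area} px")
```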

6.
7.
Terrestrial laser scanning is a technique that has been used increasingly to extract forest biometrics such as trunk diameter and tree height. Its potential, however, has not been fully explored in complex forested ecosystems, especially in riparian forests, which are among the most dynamic and complex portions of the Earth's biosphere. In this study, forest inventory data and multiple ground scans were obtained in a sparse managed riparian forest and a dense natural riparian forest on the immediate banks of the mid-lower portion of the Garonne River in Southern France, dominated by black poplar (Populus nigra) and commercial hybrid poplars (Populus × euramericana). Overall, the ground-based laser-scanning analysis successfully estimated trunk diameters, tree heights and crown radii in both the managed and natural riparian forests. However, the analysis was less successful in identifying all of the trunks in the dense natural riparian forest, with only 141 trunks identified from a total of 234. This also produced allometric scaling exponents from ground scanning that were significantly different from field-derived exponents. The study thus shows that there may be a limit to the number of trees detectable in higher-density forests, even with multiple scans.

8.
An accurate measure of the number of capsules in the crowns of jarrah (Eucalyptus marginata) trees is needed to assess the potential for seedling regeneration prior to silvicultural treatment in jarrah forests. The current method of estimating capsule crops on jarrah trees uses stem diameter and estimates of capsule density in the crown, but has not been fully validated. In this study, we sought to develop an accurate and practical method of assessing capsule crops in the crowns of individual jarrah trees. We did this by measuring a number of tree characteristics prior to felling the trees. A total of 24 trees were selected, spanning a range of sizes and crown conditions, and the total number of capsules was counted for each tree. Multiple linear regression was used to model capsule number against various combinations of eight different tree/crown variables, with model fit compared using the adjusted coefficient of determination (adj. R²). The final model recommended for field use included three easily measured variables (stem diameter, subjective assessment of capsule density, and subjective assessment of capsule clump distribution in the crown) and had a high degree of predictability (adj. R² = 0.83), the same as that of the full model. This method substantially improved estimates of crown capsule numbers compared with the method currently used (adj. R² increased from 0.29 to 0.83), which tended to underestimate canopy capsule numbers.
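The modelling approach above amounts to an ordinary least-squares fit of capsule counts on three field variables, judged by adjusted R². An illustrative sketch with invented data:

```python
# Illustrative only: regress capsule counts on the three recommended field
# variables and report adjusted R^2. Variable names and values are invented.
import pandas as pd
import statsmodels.formula.api as smf

trees = pd.DataFrame({
    "capsules":        [120, 800, 60, 1500, 300, 950, 40, 2100],
    "dbh_cm":          [35, 62, 28, 80, 45, 70, 25, 95],
    "capsule_density": [1, 3, 1, 4, 2, 3, 1, 5],    # subjective score
    "clump_score":     [1, 2, 1, 3, 2, 3, 1, 3],    # subjective clump spread
})

model = smf.ols("capsules ~ dbh_cm + capsule_density + clump_score",
                data=trees).fit()
print(f"adjusted R^2 = {model.rsquared_adj:.2f}")
print(model.params)
```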

9.
A dominant approach to brain mapping is to define functional regions in the brain by analyzing images of brain activation obtained from positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). This paper presents an evaluation of using one such tool, called the scale-space primal sketch, for brain activation analysis. A comparison is made concerning two possible definitions of a significance measure of blob structures in scale-space, where local contrast is measured either relative to a local or global reference level. Experiments on real brain data show that (i) the global approach with absolute base level has a higher degree of correspondence to a traditional statistical method than a local approach with relative base level, and that (ii) the global approach with absolute base level gives a higher significance to small blobs that are superimposed on larger scale structures, whereas the significance of isolated blobs largely remains unaffected. Relative to previously reported works, the following two technical improvements are also presented. (i) A post-processing tool is introduced for merging blobs that are multiple responses to image structures. This simplifies automated analysis from the scale-space primal sketch. (ii) A new approach is introduced for scale-space normalization of the significance measure, by collecting reference statistics of residual noise images obtained from the general linear model.
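The two significance definitions being compared, contrast against a global base level versus a local one, can be illustrated with a toy blob detector (this is not the scale-space primal sketch itself; the activation image and thresholds are hypothetical):

```python
# Toy illustration of blob contrast measured against a global base level
# versus a local background level around each blob.
import numpy as np
from skimage.feature import blob_log

img = np.load("activation_map.npy")            # hypothetical activation image
blobs = blob_log(img, min_sigma=2, max_sigma=10, threshold=0.05)

global_base = img.min()                        # absolute/global reference
for y, x, sigma in blobs:
    y, x, r = int(y), int(x), int(3 * sigma)
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    local_base = np.median(patch)              # relative/local reference
    peak = img[y, x]
    print(f"blob at ({y},{x}): global contrast {peak - global_base:.3f}, "
          f"local contrast {peak - local_base:.3f}")
```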

10.
Current methods for direct determination of fat cell size are optical and non-automated. They are thus tedious, but have the advantage of providing estimates of the variance of a cell size distribution, as well as a measure of the mean cell size. An indirect method based on counting the number of osmium-fixed fat cells derived from a known wet weight of adipose tissue is also available. This method is automated and thus rapid, but does not provide information about the variance of the cell size distribution. In the present paper we describe a direct, automated method of fat cell sizing that provides estimates for both mean cell size and the variance of the cell size distribution. The correlation between our method and an indirect method based on counting osmium-fixed cells was 0.96. Transformation of the volume distribution to diameters indicated that cell diameters appeared to be normally distributed, confirming the observations of others.
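The volume-to-diameter transformation mentioned above follows from the sphere relation V = (π/6)d³. A short sketch with invented cell volumes:

```python
# Convert measured cell volumes to equivalent spherical diameters and
# summarise the distribution. The volume values are invented.
import numpy as np

volumes_pl = np.array([90., 180., 250., 310., 420., 520.])   # picolitres
volumes_um3 = volumes_pl * 1.0e3                              # 1 pL = 1000 um^3

# sphere: V = (pi/6) d^3  =>  d = (6 V / pi) ** (1/3)
diameters_um = (6.0 * volumes_um3 / np.pi) ** (1.0 / 3.0)

print("mean diameter (um):", round(diameters_um.mean(), 1))
print("variance (um^2):   ", round(diameters_um.var(ddof=1), 1))
```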

11.
Remote Sensing Letters, 2013, 4(12): 1143–1152

This letter describes a new algorithm for automatic tree crown delineation based on a model of tree crown density, and its validation. The tree crown density model was first used to create a correlation surface, which was then input to a standard watershed segmentation algorithm for delineation of tree crowns. The use of a model in an early step of the algorithm neatly solves the problem of scale selection. In earlier studies, correlation surfaces have been used for tree crown segmentation, involving modelling tree crowns as solid geometric shapes. The new algorithm applies a density model of tree crowns, which improves the model’s suitability for segmentation of Airborne Laser Scanning (ALS) data because laser returns are located inside tree crowns. The algorithm was validated using data acquired for 36 circular (40 m radius) field plots in southern Sweden. The algorithm detected high proportions of field-measured trees (40–97% of live trees in the 36 field plots: 85% on average). The average proportion of detected basal area (cross-sectional area of tree stems, 1.3 m above ground) was 93% (range: 84–99%). The algorithm was used with discrete return ALS point data, but the computation principle also allows delineation of tree crowns in ALS waveform data.
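A simplified, hypothetical version of the pipeline described above: correlate a 2-D Gaussian crown-density template with a rasterized ALS return-density image, then pass the correlation surface to watershed segmentation. The input raster, template size and thresholds are assumptions, not the published model.

```python
# Simplified sketch: crown-density template -> correlation surface -> watershed.
import numpy as np
from skimage.feature import match_template, peak_local_max
from skimage.segmentation import watershed

density = np.load("als_return_density.npy")   # hypothetical rasterized returns

# crown-density template: returns concentrated near the crown centre
size, sigma = 15, 3.0
yy, xx = np.mgrid[:size, :size] - size // 2
template = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

corr = match_template(density, template, pad_input=True)   # correlation surface

peaks = peak_local_max(corr, min_distance=4, threshold_abs=0.3)
markers = np.zeros(corr.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

crowns = watershed(-corr, markers, mask=corr > 0.1)
print("delineated crown segments:", crowns.max())
```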

12.
We develop and evaluate a new individual tree detection (ITD) algorithm to automatically locate and estimate the number of individual trees within a Pinus radiata plantation from relatively sparse airborne LiDAR point cloud data. The area of interest comprised stands covering a range of age classes and stocking levels. Our approach is based on local maxima (LM) filtering that tackles the issue of selecting the optimal search radius from the LiDAR point cloud for every potential LM using metrics derived from local neighbourhood data points; thus, it adapts to the local conditions, irrespective of canopy variability. This was achieved through two steps: (i) logistic regression model development using simulated stands composed of individual trees derived from real LiDAR point cloud data and (ii) application testing of the model using real plantation LiDAR point cloud data and geolocated, tree-level reference crowns that were manually identified in the LiDAR imagery. Our ITD algorithm performed well compared with previous studies, producing an RMSE of 5.7% and a bias of only −2.4%. Finally, we suggest that the ITD algorithm can be used for accurately estimating stocking and tree mapping, which in turn could be used to derive the plot-level metrics for an area-based approach for enhancing estimates of stand-level inventory attributes based on plot imputation.

13.
Diabetic peripheral neuropathy (DPN) is one of the most common long-term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural network (NNT) classifiers. We show, in a comparative study with other well-known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of the ground truth defined by expert manual annotation.

14.
Application of a radioisotope dating technique to a spotted gum (Corymbia citriodora) tree in south-east Queensland showed that the observed growth rings were annual. The dating technique is based on a comparison between the concentration of ¹⁴C measured in tree-ring cellulose and historical measurements of ¹⁴C in the atmosphere. This information improves our understanding of forest processes and growth over time, and will undoubtedly contribute to more efficient measures of forest growth.

15.
OBJECTIVE: To compare two manual methods for estimating platelet counts from Wright's stained peripheral blood smears regarding their correlation with each other and with automated platelet counts. This correlation was examined in relation to whether the platelet count was high, low, or normal and in relation to whether the hemoglobin value was low versus normal or high.
DESIGN: Peripheral blood smears were Wright's stained and both platelet count estimation methodologies were performed on each slide. The traditional estimation method was the average number of platelets per oil immersion field (OIF) multiplied by 20,000 to yield a platelet count estimate per µL. The alternate estimation method was the average number of platelets per OIF multiplied by the patient's hemoglobin value in g/dL and then multiplied by 1,000 to yield a platelet count estimate per µL. The platelet count estimates were performed without the technologists having prior knowledge of the automated platelet counts, which were produced on a Coulter LH750 analyzer. The agreement of the two manual methodologies with each other, and of each method with the automated count, was assessed using the paired t-test and correlation coefficient analyses. These analyses were performed for the whole dataset as well as for subsets based on the automated platelet count and the hemoglobin value.
SETTING: East Carolina University's Clinical Laboratory Science program in collaboration with the Clinical Pathology/Laboratory at Pitt County Memorial Hospital (PCMH) in Greenville, NC.
PARTICIPANTS: One hundred eighty-four blood samples in EDTA-anticoagulant Vacutainer® tubes were used to conduct this study. Each blood sample had two peripheral blood smears made and stained on an automatic slide stainer. The blood samples were obtained from the Clinical Pathology/Laboratory of Pitt County Memorial Hospital in October and November of 2004. Each sample was given a unique numeric identifier, with no personal identifying information from any sample being recorded.
MAIN OUTCOME MEASURE: Platelet counts by two slide estimation methods and by an automated reference method.
RESULTS: The traditional platelet count estimation method had a mean for the sample of 269,000/µL, while the alternate estimation method had a mean of 155,000/µL. The mean for the automated platelet counts was 268,000/µL. The traditional estimation method showed no statistically significant difference in mean from the automated platelet counts based on the paired t-test (p = 0.87). The traditional estimation method counts and automated counts had a high Pearson product-moment correlation coefficient of r = .90 and a minimally dispersed scatterplot, thus showing strong agreement. The alternate platelet count estimation method had a mean for the sample of 155,000/µL which, based on the paired t-test, was highly significantly different from the automated count mean (p < 0.0001) and the traditional estimation method mean (p < 0.0001). The alternate estimation method and automated counts had a lower r value of .81 and greater dispersion in the scatterplot. In comparing the estimation methods with each other and with the automated method, the differences and similarities in agreement observed for the whole dataset were also observed with each platelet count and hemoglobin subset of data.
CONCLUSIONS: Though the alternate platelet count estimation method has been recommended for use particularly with patients with low hemoglobin values, this study found that the traditional estimation method provided more agreement with automated counts than did the alternate estimation method for all samples as well as for the subset of samples with low hemoglobin values. For the present, the traditional method of estimating platelet counts from blood smears to evaluate automated results appears to provide adequate quality assurance.  相似文献   
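The two estimation formulas from the DESIGN section reduce to simple arithmetic; a worked example with illustrative numbers (not taken from the study's samples):

```python
# Worked example of the two estimation formulas described above.
platelets_per_oif = 12      # average platelets per oil-immersion field
hemoglobin_g_dl = 13.5      # patient hemoglobin

traditional = platelets_per_oif * 20_000                    # per µL
alternate = platelets_per_oif * hemoglobin_g_dl * 1_000     # per µL

print(f"traditional estimate: {traditional:,.0f}/µL")   # 240,000/µL
print(f"alternate estimate:   {alternate:,.0f}/µL")     # 162,000/µL
```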

16.
This study was undertaken to determine whether the use of automated noninvasive blood pressure monitoring altered the frequency of detection of intraoperative hypotension. We retrospectively reviewed 1,861 anesthetic records from a period in 1987, when blood pressure was obtained manually by auscultation. We compared the records from 1987 with 1,716 anesthetic records from 1989, when automated blood pressure monitors were used universally. The incidences of hypotension requiring vasopressor therapy were determined during the two periods and compared using Student's two-tailed t-test. The data revealed that the incidence of detected hypotension increased from 2.4% to 5.2% with the use of automated blood pressure monitors (P < 0.00002). We conclude that at our hospital the use of automated noninvasive blood pressure monitors increases the incidence of detection of intraoperative hypotension as compared with manual blood pressure measurement. Presented in part at the annual meeting of the American Society of Anesthesiologists, New Orleans, October 1989.

17.
The imaging performance assessment of ultrasound scanners based on traditional phantoms is limited by repeatability, subjectivity and systematic errors, giving low confidence in results. A new approach to the automated measurement of scanner resolution is described. The method utilises a step change in backscatter to derive resolution from the imaging system line spread function and has been used to calculate resolution in two dimensions as a continuous function of depth. Resolution data were used to calculate resolution integrals for lateral resolution and slice thickness independently. For resolution integral repeatability, analysis of variance showed no significant difference between operators (p = 0.05), with intra- and inter-operator repeatability (±1 standard deviation) of 1.5% and 1.5% for lateral resolution, respectively, and 2.6% and 3.3% for slice thickness, respectively. Low contrast penetration was also calculated automatically, and the worst-case operator repeatability was 1.3%. The acoustic properties of the phantom were validated. The possibility of extending the technique to axial resolution is discussed. (E-mail: david.rowland@stgeorges.nhs.uk)

18.
Surface air temperature (Tair) is a critical driver of ecosystem processes and phenological dynamics, and can be estimated in near-real time with satellite remote sensing. However, persistent cloud cover often creates large spatial and temporal gaps in our observation records. Previous studies have successfully mapped Tair; however, the challenges of mapping forest understory temperatures (Tust) are relatively unexplored. This study describes a methodology for constructing cloud-free composites of Tust at 250 m spatial resolution. We used generalized linear models to correlate daily average Tust with ground-surveyed forest structural characteristics and land surface temperature (LST) obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS). Models were applied to all four daily MODIS overpasses and combined into a single image to maximize cloud-free spatial coverage. Pixel temperatures within the remaining cloud gaps were estimated using a temporal averaging algorithm that incorporated a novel approach for factoring in the relative cloudiness between days. Models predicted Tust to within 1.5°C (R² ≈ 0.87), with an overall final map accuracy having a mean absolute error of 2.2°C. Maps were produced for two growing seasons using in situ observation data from forested sites throughout the Rocky Mountains of Alberta, Canada. By avoiding complex physical models, our procedure is computationally efficient and capable of processing large volumes of data using open-source programming languages and desktop computers.
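The modelling step above, a generalized linear model relating daily average Tust to MODIS LST and stand structure, can be sketched as follows. The file and column names are hypothetical.

```python
# Illustrative sketch: Gaussian GLM of understory temperature on MODIS LST and
# forest structural covariates, then prediction for new cloud-free pixels.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

obs = pd.read_csv("tust_training.csv")   # in situ loggers joined to MODIS LST

model = smf.glm("tust ~ lst + canopy_closure + basal_area",
                data=obs, family=sm.families.Gaussian()).fit()
print(model.summary())

# apply to a new table of cloud-free LST pixels to map predicted Tust
grid = pd.read_csv("lst_pixels.csv")
grid["tust_pred"] = model.predict(grid)
```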

19.
Medical Image Analysis, 2015, 20(1): 164–175
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering) with multiple-cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making, which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% for the baseline versus 75.2%, p < 0.001). The method also showed statistically significantly improved recall compared to a baseline method that used only two vessel cues (30.7% for the baseline versus 67.7%, p < 0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., a random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning, as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse.
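The cue-fusion idea, a random forest deciding per candidate point whether it belongs to a vessel from the four cues named above, can be sketched with synthetic data (this is not the published pipeline; features and labels below are invented):

```python
# Sketch of random-forest cue fusion over four vessel cues (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(200, 40, n),      # local CT intensity (HU)
    rng.uniform(0, 1, n),        # vesselness response
    rng.uniform(0, 30, n),       # distance to detected organ surface (mm)
    rng.integers(0, 2, n),       # lies on minimum-spanning-tree path (0/1)
])
y = (0.4 * (X[:, 1] > 0.5) + 0.4 * (X[:, 3] == 1)
     + 0.2 * (X[:, 0] > 200)) > 0.5        # synthetic "is vessel" label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("cue importances:", np.round(clf.feature_importances_, 2))
```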

20.
Modern imaging spectrometers produce an ever-growing amount of data, which increases the need for automated analysis techniques. The algorithms employed, such as the United States Geological Survey (USGS) Tetracorder and the Mineral Identification and Characterization Algorithm (MICA), use a standardized spectral library and expert knowledge to detect surface cover types. Correct absorption feature definition and isolation are key to successful material identification using these algorithms. Here, a new continuum removal and feature isolation technique is presented, named the ‘Geometric Hull Technique’. It is compared to the well-established, knowledge-based Tetracorder feature database together with adapted state-of-the-art techniques: scale-space filtering, alpha shapes and the convex hull.

The results show that the geometric hull technique yields the smallest deviations from the feature definitions of the MICA Group 2 library with a median difference of only 8 nm for the position of the features and a median difference of only 15% for the feature shapes. The modified scale-space filtering hull technique performs second best with a median feature position difference of 16 nm and a median difference of 25% for the feature shapes. The scale-space alpha hull technique shows a 23 nm median position difference and a median deviation of 77% for the feature shapes. The geometric hull technique proposed here performs best amongst the four feature isolation techniques and may be an important building block for next generation automatic mapping algorithms.
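For context, the standard upper-convex-hull continuum removal that the geometric hull technique is benchmarked against can be written compactly (this sketch is the baseline convex-hull approach, not the new method; the spectrum is synthetic):

```python
# Baseline convex-hull continuum removal: divide a reflectance spectrum by its
# upper convex hull and locate the isolated absorption feature.
import numpy as np

def continuum_removed(wavelength, reflectance):
    """Divide a spectrum by its upper convex hull (the continuum)."""
    pts = list(zip(wavelength, reflectance))
    hull = [pts[0]]
    for p in pts[1:]:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if the turn is not clockwise (upper hull)
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelength, hx, hy)
    return reflectance / continuum

wl = np.linspace(2.0, 2.4, 200)                         # micrometres
refl = 0.6 - 0.25 * np.exp(-((wl - 2.2) ** 2) / 0.002)  # synthetic absorption
cr = continuum_removed(wl, refl)
print(f"absorption feature centred near {wl[np.argmin(cr)]:.3f} um")
```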
