Similar Articles
20 similar articles found (search time: 31 ms)
1.
Change detection is of great significance in remote sensing. The advent of high-resolution remote sensing images has greatly increased our ability to monitor land-use and land-cover changes from space. At the same time, high-resolution imagery poses a challenge not shared by other satellite systems: identifying land-use and land-cover changes requires time-consuming and tedious manual procedures. In recent years, deep learning (DL) has been widely used in natural-image object detection, speech recognition, face recognition, and related fields, and has achieved great success. Some scholars have applied DL to remote sensing image classification and change detection, but seldom to change detection in high-resolution remote sensing images. In this letter, the faster region-based convolutional neural network (Faster R-CNN) is applied to change detection in high-resolution remote sensing images. Compared with several traditional and other DL-based change detection methods, our proposed Faster R-CNN-based methods achieve higher overall accuracy and Kappa coefficient in our experiments. In particular, our methods greatly reduce the number of false changes.
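The overall accuracy and Kappa coefficient reported above are standard change-detection accuracy measures; a minimal sketch of how Kappa is computed from a change/no-change confusion matrix (the counts below are made up for illustration):

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement estimated from the row and column marginals.
    expected = sum(
        sum(confusion[i]) * sum(confusion[r][i] for r in range(n))
        for i in range(n)
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# Binary change/no-change example: 95 pixels agree, 5 disagree.
print(round(kappa([[90, 2], [3, 5]]), 3))  # prints 0.64
```

Because "no change" dominates such matrices, Kappa is preferred over raw accuracy: it discounts the agreement expected by chance.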

2.
Image classification is a prominent topic and a challenging task in the field of remote sensing. Many classification methods have recently been proposed for satellite images, particularly frameworks based on spectral-spatial feature extraction. In this paper, a feature extraction strategy for multispectral data is used to develop a new classification framework combining Extended Multi-Attribute Profiles (EMAP) and a Sparse Autoencoder (SAE). EMAP is employed to extract spatial information, which is then joined to the original spectral information to describe the spectral-spatial properties of the multispectral images. The resulting features are fed into a Sparse Autoencoder, and the learned spectral-spatial features are finally passed to a Support Vector Machine (SVM) for classification. Experiments are conducted on two multispectral (MS) images for which we constructed the corresponding ground-truth maps. Our approach based on EMAP and deep learning (DL) achieves high classification accuracy in reasonable running time and outperforms traditional classifiers and other classification approaches.
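The sparsity in a Sparse Autoencoder is typically enforced with a KL-divergence penalty on the mean hidden activations; a minimal sketch under that common formulation (the paper's exact SAE configuration is not specified, and the activations below are invented):

```python
import math

def kl_sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity penalty of a sparse autoencoder:
    sum over hidden units j of KL(rho || rho_hat_j), where rho_hat_j
    is the mean activation of unit j over the batch and rho is the
    target sparsity level."""
    n_units = len(activations[0])
    penalty = 0.0
    for j in range(n_units):
        rho_hat = sum(sample[j] for sample in activations) / len(activations)
        penalty += (rho * math.log(rho / rho_hat)
                    + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))
    return penalty

# Two samples, two hidden units: unit 0 is near the target sparsity,
# unit 1 fires far too often and is penalized heavily.
acts = [[0.04, 0.9], [0.06, 0.8]]
print(kl_sparsity_penalty(acts))
```

The penalty is added to the reconstruction loss during training, pushing most hidden units toward near-zero average activation.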

3.
ABSTRACT

Aircraft detection in remote sensing imagery has drawn much attention in recent years and plays an important role in various military and civil applications. While many advanced detectors built on powerful learning algorithms exist for natural images, there is still no effective method for precisely detecting aircraft in remote sensing images, especially under complicated conditions. In this paper, a novel method named aircraft detection using Centre-based Proposal regions and Invariant Features (CPIF) is designed to detect aircraft precisely and to handle difficult image deformations, especially rotations. Our framework contains three main steps. First, we propose an algorithm to extract proposal regions from remote sensing imagery. Second, an ensemble learning classifier with rotation-invariant HOG features is trained for aircraft classification. Last, we detect aircraft in remote sensing images by combining the products of the above steps. The proposed method is evaluated on the public RSOD dataset, and the results demonstrate its superiority and effectiveness in comparison with state-of-the-art methods.
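One simple way to make an orientation histogram such as HOG rotation-invariant is to shift it cyclically so the dominant orientation comes first; a toy sketch of that idea (CPIF's exact rotation-invariant HOG construction may differ):

```python
def rotation_invariant(hist):
    """Cyclically shift an orientation histogram so its dominant bin
    comes first. Rotating the underlying image cyclically permutes the
    bins, so two rotated versions of the same object map to the same
    canonical histogram (a common simplification of rotation-invariant
    HOG; ties on the maximum are resolved by the first occurrence)."""
    k = hist.index(max(hist))
    return hist[k:] + hist[:k]

h = [1, 5, 2, 0]          # original orientation histogram
h_rot = [0, 1, 5, 2]      # the same pattern after a one-bin rotation
print(rotation_invariant(h), rotation_invariant(h_rot))
```

Both calls yield the same canonical histogram, so a classifier trained on the canonical form sees rotated aircraft as the same object.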

4.
Road segmentation from high-resolution visible remote sensing images provides an effective way to form road networks automatically. Recently, deep learning methods based on convolutional neural networks (CNNs) have been widely applied to road segmentation. However, most CNN-based methods struggle to achieve high segmentation accuracy on high-resolution visible remote sensing images with rich details. To handle this problem, we propose a road segmentation method based on a Y-shaped convolutional network (Y-Net). Y-Net contains a two-arm feature extraction module and a fusion module. The feature extraction module includes a deep downsampling-to-upsampling sub-network for semantic features and a convolutional sub-network without downsampling for detail features. The fusion module combines all features for road segmentation. Benefiting from this scheme, Y-Net segments multi-scale roads (both wide and narrow) well from high-resolution images. Testing and comparative experiments on a public dataset and a private dataset show that Y-Net has higher segmentation accuracy than four state-of-the-art methods: FCN (Fully Convolutional Network), U-Net, SegNet, and FC-DenseNet (Fully Convolutional DenseNet). In particular, Y-Net accurately segments the contours of narrow roads, which the comparative methods miss.
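Segmentation accuracy of the kind compared here is often reported as intersection-over-union (IoU) between the predicted and reference road masks; a minimal sketch (illustrative only; the paper may report different metrics):

```python
def iou(pred, truth):
    """Intersection-over-union between two binary masks, given as flat
    lists of 0/1 pixel labels (1 = road). A standard segmentation
    accuracy measure: |pred AND truth| / |pred OR truth|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(iou(pred, truth))  # 2 overlapping road pixels of 4 in the union
```

IoU penalizes both missed road pixels and false road pixels, which matters for thin structures like the narrow roads discussed above.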

5.
6.
ABSTRACT

Soil-available nutrients (SANs) are essential for crop growth and yield formation. Appropriate variable-rate fertilization (VRF) can keep SANs at a normal level and avoid unnecessary damage to sustainable production capacity. The precondition for optimizing VRF application is obtaining the real-time status of SANs. A new method for SAN estimation has been proposed that integrates a modified World Food Studies (WOFOST) model and time-series satellite remote sensing (RS) data. This method provides field-scale SAN estimates with high accuracy, but its estimation accuracy at the subfield scale, as needed for VRF, was low because of the poor spatial resolution of common satellite imagery. In this letter, subfield SAN estimation is optimized to support VRF. Time-series multispectral images acquired by an unmanned aerial vehicle (UAV) were used in place of common satellite data, and SAN values for haplic phaeozem in a selected spring maize plot on Hongxing Farm (48°09′ N, 127°03′ E) were estimated. Based on the field SAN data, the estimation accuracies using satellite data and UAV data were analysed. The results show that the UAV data improved SAN estimation at the subfield scale.

7.
ABSTRACT

Snow cover is of great significance for many applications. However, automatic extraction of snow cover from high-spatial-resolution remote sensing (HSRRS) imagery remains challenging owing to its multiscale characteristics, its similarity to clouds, and occlusion by the shadows of mountains and clouds. Deep convolutional neural networks for semantic segmentation are the most popular approach to automatic map generation, but they require substantial computing time and resources, as well as a large dataset of pixel-wise annotated HSRRS images, which precludes the application of many superior models. In this study, these limitations are overcome by a sequence of transfer learning steps. The method starts with a modified aligned 'Xception' model pre-trained for object classification on ImageNet. Subsequently, a 'DeepLab version three plus' (DeepLabv3+) model is trained using a large dataset of Landsat images and corresponding snow cover products. Finally, a second transfer learning step fine-tunes the model on a small dataset from GaoFen-2, the highest-resolution HSRRS satellite in China. Experiments demonstrate the feasibility and effectiveness of this framework for automatic snow cover extraction.

8.
Objective: To build a detector for pre-labelled liver-mass ultrasound images using Faster R-CNN feature computation on a deep CNN architecture, and to test its performance. Methods: Ultrasound images of hepatic cysts and liver cancer were selected as the study objects. Images of normal liver in each view were collected for deep CNN learning; after transfer learning, the pre-trained deep CNN was optimized to construct a Faster R-CNN. Tumour images annotated with ImageJ were used as patches to train a classifier, which was integrated with a region-proposal-based convolutional neural network to build a detector; the detector then automatically marked abnormal liver lesions in test samples. Results: (1) Faster R-CNN detected lesions more efficiently than traditional detectors. (2) The mean accuracy of Faster R-CNN in predicting both hepatic cysts and liver cancer was higher than that of the traditional HOG-SVM. Among the three CNNs, AlexNet, GoogleNet, and ResNet showed no significant difference in accuracy for hepatic cysts, while ResNet was the most accurate for liver cancer. After five-fold cross-validation of deep CNN feature transfer, the patch-classification accuracies of AlexNet, GoogleNet, and ResNet were 94.94%, 94.14%, and 98.68%, respectively, improving on the 87.29% accuracy of the traditional HOG-SVM classifier. Conclusion: A Faster R-CNN based on deep CNNs can efficiently and accurately predict liver tumours in ultrasound images and has clinical and research value.

9.
Accurate and reliable detection of abnormal lymph nodes in magnetic resonance (MR) images is very helpful for the diagnosis and treatment of numerous diseases. However, it remains a challenging task because abnormal lymph nodes and other tissues can look similar. In this paper, we propose a novel network based on an improved Mask R-CNN framework for detecting abnormal lymph nodes in MR images. Instead of laboriously collecting large-scale pixel-wise annotated training data, pseudo masks generated from readily available RECIST bookmarks are used as supervision. Our network differs from the standard Mask R-CNN architecture in two main ways: 1) global-local attention, which encodes global- and local-scale context for detection and uses a channel attention mechanism to extract more discriminative features; and 2) a multi-task uncertainty loss, which adaptively weights multiple objective loss functions based on the uncertainty of each task to find the optimal solution automatically. For the experiments, we built a new abnormal lymph node dataset with 821 RECIST bookmarks covering 41 types of abnormal abdominal lymph nodes from 584 patients. The experimental results showed the superior performance of our algorithm over compared state-of-the-art approaches.
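A common form of the multi-task uncertainty loss weights each task loss by a learned log-variance, as in Kendall et al.'s homoscedastic-uncertainty formulation; a minimal sketch with invented values (the paper's exact weighting may differ):

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Homoscedastic-uncertainty weighting of multiple task losses:
    each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
    where s_i = log(sigma_i**2) is a learned parameter. Tasks the model
    is uncertain about (large s_i) are automatically down-weighted."""
    return sum(math.exp(-s) * L + s for L, s in zip(losses, log_vars))

# Two task losses; the second task has learned variance 4, so its loss
# is down-weighted by a factor of 4 at the cost of the log(4) regularizer.
print(uncertainty_weighted_loss([1.0, 4.0], [0.0, math.log(4.0)]))
```

During training the `log_vars` are optimized jointly with the network weights, so the balance between detection and mask objectives needs no manual tuning.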

10.
ABSTRACT

The convolutional neural network (CNN) is widely used for image classification because of its powerful feature extraction capability. The key challenges for CNNs in remote sensing (RS) scene classification are that datasets are small and images within each category vary greatly in position and angle, while spatial information is lost in the pooling layers. Consequently, extracting accurate and effective features is very important. To this end, we present a Siamese capsule network to address these issues. First, we introduce capsules to capture the spatial information of the features and learn equivariant representations. Second, to improve classification accuracy on small datasets, the proposed model uses the Siamese network structure as embedded verification. Finally, the features learned by the capsule network are regularized by a metric learning term to improve the robustness of our model. The effectiveness of the model is verified by different experiments on three benchmark RS datasets. Experimental results demonstrate that the comprehensive performance of the proposed method surpasses existing methods.
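A typical metric learning term for a Siamese pair is the contrastive loss, which pulls same-class pairs together and pushes different-class pairs at least a margin apart; a toy sketch (the paper's exact regularizer may differ):

```python
def contrastive_loss(dist, same, margin=1.0):
    """Contrastive loss for one Siamese pair, given the distance between
    the two embeddings: same-class pairs are penalized by their squared
    distance, different-class pairs only if they are closer than
    `margin`."""
    if same:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

# A close pair (distance 0.3): cheap if same class, expensive if not.
print(contrastive_loss(0.3, same=True), contrastive_loss(0.3, same=False))
```

Summed over many pairs, this term shapes the embedding space so that scene categories form tight, well-separated clusters even with few labelled samples.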

11.
Remote Sensing Letters, 2013, 4(10): 745-754
Object recognition has long been one of the most active topics in remote sensing image analysis. In this letter, a new pixel-wise learning method for object recognition based on deep belief networks (DBNs) is proposed. The method has two stages: unsupervised pre-training and supervised fine-tuning. Given a training set of images, a pixel-wise unsupervised feature learning algorithm trains a mixed structural sparse restricted Boltzmann machine (RBM). The outputs of this RBM are then fed into the next RBM as inputs, and by stacking several RBM layers the deep generative DBN model is built. At the fine-tuning stage, a supervised layer is attached to the top of the DBN and the data labels are fed into this layer; the whole network is then trained with the back-propagation (BP) algorithm under a sparsity penalty. Finally, the deep model yields a good joint distribution of images and their labels. Comparative experiments on our dataset, acquired by QuickBird at 60 cm resolution, demonstrate the accuracy and efficiency of the proposed method.

12.
ABSTRACT

Unsupervised representation learning plays an important role in remote sensing image applications, and the generative adversarial network (GAN) has been the most popular unsupervised learning method in recent years. However, owing to poor data augmentation, many GAN-based methods are difficult to apply in practice. In this paper, we propose an improved unsupervised representation learning model, the multi-layer feature fusion Wasserstein GAN (MF-WGANs), which extracts feature information for remote sensing scene classification from unlabelled samples. First, we introduce a multi-feature fusion layer behind the discriminator to extract high-level and mid-level feature information. Second, we combine the loss of the multi-feature fusion layer with the WGAN-GP loss to generate more stable, high-quality remote sensing images at a resolution of 256 × 256. Finally, a multi-layer perceptron classifier (MLP-classifier) classifies the features extracted from the multi-feature fusion layer, evaluated on the UC Merced Land-Use, AID, and NWPU-RESISC45 datasets. Experiments show that MF-WGANs provides richer data augmentation and better classification performance than other unsupervised representation learning classification models (e.g., MARTA GANs).
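The WGAN-GP critic objective combines a Wasserstein distance estimate with a gradient penalty that drives interpolate gradient norms toward 1; a scalar sketch with made-up scores (real training computes the gradient norms via automatic differentiation):

```python
def wgan_gp_critic_loss(real_scores, fake_scores, grad_norms, lam=10.0):
    """Scalar form of the WGAN-GP critic objective:
    E[D(fake)] - E[D(real)] + lam * E[(||grad D(interp)|| - 1)**2].
    Here the critic scores and interpolate gradient norms are given as
    plain numbers; in practice they come from the network and autograd."""
    mean = lambda xs: sum(xs) / len(xs)
    wasserstein = mean(fake_scores) - mean(real_scores)
    penalty = lam * mean([(g - 1.0) ** 2 for g in grad_norms])
    return wasserstein + penalty

# Critic scores real images higher than fakes; gradients are near norm 1.
print(wgan_gp_critic_loss([1.0, 3.0], [0.0, 1.0], [1.0, 1.2]))
```

The gradient penalty replaces WGAN's weight clipping, which is why WGAN-GP trains more stably at the 256 × 256 resolution mentioned above.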

13.
14.
ABSTRACT

Learning discriminative and robust features is crucial in remote sensing image processing. Many current approaches are based on Convolutional Neural Networks (CNNs), but such approaches may not effectively capture the varied semantic objects in remote sensing images. To overcome this limitation, we propose a novel end-to-end deep multi-feature fusion network (DMFN). DMFN combines two deep architecture branches for feature representation: a global branch and a local branch. The global branch, which is trained with three losses, learns discriminative features from the whole image. The local branch partitions the entire image into multiple strips to obtain local features. The two branches are then combined to learn fused feature representations of the image. The proposed method is trained as an end-to-end framework. Comprehensive validation experiments on two public datasets indicate that, relative to existing deep learning approaches, this strategy is superior for both retrieval and classification tasks.

15.
Deep learning-based breast lesion detection in ultrasound images has demonstrated great potential to provide objective suggestions for radiologists and improve their accuracy in diagnosing breast diseases. However, the lack of an effective feature enhancement approach limits the performance of deep learning models. In this study, we therefore propose a novel dual global attention neural network (DGANet) to improve the accuracy of breast lesion detection in ultrasound images. Specifically, we designed a bilateral spatial attention module and a global channel attention module to enhance features in the spatial and channel dimensions, respectively. The bilateral spatial attention module enhances features by capturing supporting information in regions neighbouring breast lesions while reducing the integration of noise signals. The global channel attention module enhances the features of important channels by a weighted calculation, where the weights are determined by the learned interdependencies among all channels. To verify the performance of DGANet, we conducted breast lesion detection experiments on our collected dataset of 7040 ultrasound images and on a public breast ultrasound dataset, with YOLOv3, RetinaNet, Faster R-CNN, YOLOv5, and YOLOX as comparison models. The results indicate that DGANet outperforms the comparison methods by 0.2%–5.9% in total mean average precision.
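The weighted calculation in a global channel attention module amounts to rescaling each channel's feature map by a learned weight; a toy sketch with invented weights (in DGANet the weights come from learned channel interdependencies, e.g. a squeeze-and-excitation-style sub-network):

```python
def channel_attention(feature_maps, weights):
    """Scale each channel's feature map by its attention weight in (0, 1].
    `feature_maps` is a list of channels, each a flat list of activations;
    `weights` holds one learned weight per channel. Important channels
    keep their magnitude, unimportant ones are suppressed."""
    return [[v * w for v in channel]
            for channel, w in zip(feature_maps, weights)]

fmaps = [[1.0, 2.0], [4.0, 4.0]]  # two channels, two spatial positions each
print(channel_attention(fmaps, [0.5, 1.0]))
```

Because the weights are computed from the whole feature tensor, the module lets globally informative channels dominate before detection heads run.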

16.
Objective: To construct a deep learning-based intelligent diagnostic model for ophthalmic ultrasound images that provides auxiliary analysis for the intelligent clinical diagnosis of posterior ocular segment diseases. Methods: An InceptionV3–Xception fusion model was established by placing two pre-trained networks, InceptionV3 and Xception, in series to achieve multilevel feature extraction and fusion, and a classifier better suited to the multi-class recognition task was designed to classify 3402 ophthalmic ultrasound images. Accuracy, macro-average precision, macro-average sensitivity, macro-average F1 score, receiver operating characteristic (ROC) curves, and area under the curve (AUC) were used as evaluation metrics, and the credibility of the model was assessed by testing its decision basis with gradient-weighted class activation mapping. Results: The accuracy, precision, sensitivity, and area under the ROC curve of the InceptionV3–Xception fusion model on the test set reached 0.9673, 0.9521, 0.9528, and 0.9988, respectively. The model's decision basis was consistent with the ophthalmologist's clinical diagnostic basis, indicating good reliability. Conclusion: The deep learning-based intelligent diagnostic model for ophthalmic ultrasound images can accurately screen and identify five posterior ocular segment diseases, which benefits the intelligent development of ophthalmic clinical diagnosis.
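The macro-averaged precision, sensitivity, and F1 score used as evaluation metrics follow from the multi-class confusion matrix; a minimal sketch (here macro-F1 is taken as the harmonic mean of macro precision and macro recall, one common convention; the two-class counts below are invented for illustration):

```python
def macro_metrics(confusion):
    """Macro-averaged precision, sensitivity (recall) and F1 from a
    square confusion matrix (rows: true class, columns: predicted
    class). Macro averaging treats every class equally regardless of
    its frequency."""
    n = len(confusion)
    precisions, recalls = [], []
    for c in range(n):
        tp = confusion[c][c]
        predicted = sum(confusion[r][c] for r in range(n))  # column sum
        actual = sum(confusion[c])                          # row sum
        precisions.append(tp / predicted if predicted else 0.0)
        recalls.append(tp / actual if actual else 0.0)
    p = sum(precisions) / n
    r = sum(recalls) / n
    return p, r, 2 * p * r / (p + r)

p, r, f1 = macro_metrics([[8, 2], [1, 9]])
print(p, r, f1)
```

Macro averaging matters here because the five disease classes are unlikely to be equally common in the 3402 images.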

17.
Scene classification of remote sensing images plays an important role in many remote sensing applications. Training a good classifier needs a large number of training samples, but labelled samples are often scarce and difficult to obtain, and annotating a large number of samples is time-consuming. In this paper, we propose a novel remote sensing image scene classification framework based on generative adversarial networks (GANs). A GAN can improve the generalization ability of a machine learning model; however, generating large images, especially high-resolution remote sensing images, is difficult. To address this issue, scaled exponential linear units (SELU) are applied in the GAN to generate high-quality remote sensing images. Experiments on two datasets show that our approach achieves state-of-the-art results compared with classic deep convolutional neural networks, especially when the number of training samples is small.
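The scaled exponential linear unit (SELU) uses fixed constants chosen so that activations self-normalize across layers; a minimal sketch of the activation itself:

```python
import math

# Fixed SELU constants from the self-normalizing networks formulation.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit:
    scale * x                      for x > 0,
    scale * alpha * (exp(x) - 1)   for x <= 0."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1)

print(selu(1.0), selu(-1.0))
```

Because SELU keeps activation mean and variance stable without batch normalization, it helps the GAN's training remain stable when generating large images.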

18.
Objective: To construct a YOLOv5x deep learning network model based on SPECT whole-body bone scans and to evaluate its value in diagnosing benign and malignant bone lesions. Methods: A total of 5182 bone lesions (3105 malignant, 2077 benign) from 699 patients who underwent SPECT bone scans were included. The 1121 bone-scan images were split 8:1:1 into a training set (n = 897), a validation set (n = 112), and a test set (n = 112). The training and validation data were augmented and fed into the YOLOv5x network for training; the resulting model was evaluated on the test set for its sensitivity, specificity, and accuracy in identifying benign and malignant bone lesions, and for the agreement of its diagnoses with the gold standard. Results: The YOLOv5x model identified malignant bone lesions with 95.75% sensitivity, 87.87% specificity, and 91.60% accuracy, and benign lesions with 91.62%, 94.38%, and 93.14%, respectively. The area under the curve (AUC) for identifying bone lesions in the scans was 0.98 overall, and 0.97 and 0.98 for malignant and benign lesions, respectively. Agreement between the model's diagnoses and the gold standard was good for both malignant and benign lesions (Kappa = 0.83 and 0.86, both P < 0.05). Conclusion: A YOLOv5x deep learning network model built on SPECT whole-body bone scans can aid the diagnosis of benign and malignant bone lesions.
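Sensitivity, specificity, and accuracy, the three figures reported for the model, follow directly from the detection counts; a minimal sketch with hypothetical counts (not the study's actual data):

```python
def detection_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from detection counts:
    tp = malignant lesions found, fn = malignant lesions missed,
    tn = benign lesions correctly ruled out, fp = benign lesions
    flagged as malignant."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only.
print(detection_metrics(95, 5, 88, 12))
```

Reporting all three together is informative because, as in the study's results, a model can trade specificity for sensitivity on malignant lesions.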

19.
Building extraction from remote sensing images is very important in many fields, such as urban planning, land-use investigation, and damage assessment. In polarimetric synthetic aperture radar (PolSAR) imagery, buildings have not only typical polarimetric features but also rich texture features. In this paper, texture information is introduced to improve the accuracy of urban building extraction from PolSAR imagery through a new method called cross reclassification, which effectively fuses polarimetric-information-based results and texture-based results. Experimental results on three representative PolSAR images with different characteristics demonstrate the effectiveness of the proposed method: building extraction accuracy is improved compared with the traditional method that uses only polarimetric information.

20.
Recent deep neural networks have shown superb performance in analyzing bioimages for disease diagnosis and bioparticle classification. Conventional deep neural networks use simple classifiers such as SoftMax to obtain highly accurate results. However, they have limitations in many practical applications that require both a low false alarm rate and a high recovery rate, e.g., rare bioparticle detection, in which representative image data are hard to collect, the training data are imbalanced, and the input images at inference time may differ from the training images. Deep metric learning offers better generalizability by using distance information to model the similarity of images and learning a function that maps image pixels to a latent space, playing a vital role in rare object detection. In this paper, we propose a robust model based on a deep metric neural network for rare bioparticle (Cryptosporidium or Giardia) detection in drinking water. Experimental results showed that the deep metric neural network achieved a high accuracy of 99.86% in classification, a precision rate of 98.89%, a recall rate of 99.16%, and a zero false alarm rate. The reported model empowers imaging flow cytometry with capabilities for biomedical diagnosis, environmental monitoring, and other biosensing applications.



Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号