Similar Documents
20 similar documents found.
1.
In this letter, a novel deep learning framework for hyperspectral image classification using both spectral and spatial features is presented. The framework is a hybrid of principal component analysis (PCA), deep convolutional neural networks (DCNNs) and logistic regression (LR). DCNNs, which hierarchically extract deep features, are introduced into hyperspectral image classification for the first time. The proposed technique consists of two steps. First, a feature-map generation algorithm is presented to generate the spectral and spatial feature maps. Second, the DCNNs-LR classifier is trained to obtain useful high-level features and to fine-tune the whole model. Comparative experiments conducted on widely used hyperspectral data indicate that the DCNNs-LR classifier built in the proposed deep learning framework provides better classification accuracy than previous hyperspectral classification methods.
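A minimal sketch of the PCA-plus-CNN-plus-logistic-regression idea described above, assuming a hyperspectral cube of shape (height, width, bands) and per-pixel patches; the class name CNNSoftmax and all layer sizes are illustrative assumptions rather than the paper's exact design.

import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def pca_reduce(cube, n_components=3):
    # Reduce the spectral dimension of an (H, W, B) cube with PCA.
    H, W, B = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(cube.reshape(-1, B))
    return reduced.reshape(H, W, n_components)

class CNNSoftmax(nn.Module):
    # 2-D convolutional feature extractor followed by a softmax (multinomial
    # logistic regression) output layer.
    def __init__(self, in_channels=3, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)  # softmax applied via the loss

    def forward(self, x):                # x: (N, in_channels, patch, patch)
        return self.classifier(self.features(x).flatten(1))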

2.
Recently, a series of deep learning methods based on convolutional neural networks (CNNs) have been introduced for the classification of hyperspectral images (HSIs). However, to obtain the optimal parameters and avoid overfitting, a large number of training samples are required by the CNNs. In this paper, a novel method is proposed to extend the training set for deep learning based hyperspectral image classification. First, given a small-sample-size training set, principal component analysis based edge-preserving features (PCA-EPFs) and extended morphological attribute profiles (EMAPs) are used for HSI classification so as to generate classification probability maps. Second, a large number of pseudo training samples are obtained by a designed decision function that depends on the classification probabilities. Finally, a deep feature fusion network (DFFN) is applied to classify the HSI with a training set consisting of the original small-sample-size training set and the added pseudo training samples. Experiments performed on several hyperspectral data sets demonstrate the state-of-the-art performance of the proposed method in terms of classification accuracy.
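A minimal sketch of the pseudo-training-sample selection step, assuming probs is an (n_pixels, n_classes) array of classification probabilities; the fixed confidence threshold stands in for the paper's decision function and is an assumption.

import numpy as np

def select_pseudo_samples(probs, threshold=0.95):
    # Keep pixels whose highest class probability exceeds the threshold and
    # use the predicted class as the pseudo label.
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, pseudo_labels[keep]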

3.
In this letter, a new deep learning framework, which integrates textural features from the gray level co-occurrence matrix (GLCM) into convolutional neural networks (CNNs), is proposed for hyperspectral image (HSI) classification using a limited number of labeled samples. The proposed method is implemented in three steps. First, the GLCM textural features are extracted from the first principal component after the principal component analysis (PCA) transformation. Second, a CNN is built to extract deep spectral features from the original HSI, and these features are concatenated with the textural features from the first step in a concat layer of the CNN. Finally, softmax is employed at the end of the framework to generate classification maps. In this way, the CNN focuses on learning spectral features only, and the generated textural features are used directly as one set of features before the softmax layer. This reduces the required training-set size and improves computing efficiency. Experimental results are presented for three HSIs and compared with several advanced deep learning and spectral-spatial classification techniques. Competitive classification accuracy is obtained, especially when only a limited number of training samples are available.
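A minimal sketch of combining GLCM texture statistics (computed on a quantized first-principal-component patch) with CNN spectral features before the softmax layer; function and parameter names are illustrative, and the scikit-image GLCM helpers are an assumed choice of implementation.

import numpy as np
import torch
import torch.nn as nn
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(patch_uint8):
    # Four texture statistics from a uint8 patch of the first principal component.
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])

class SpectralCNNWithTexture(nn.Module):
    def __init__(self, n_bands, n_texture=4, n_classes=16):
        super().__init__()
        self.spectral = nn.Sequential(nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(),
                                      nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(32 + n_texture, n_classes)

    def forward(self, spectrum, texture):
        # spectrum: (N, 1, n_bands) float tensor; texture: (N, n_texture) float tensor
        return self.head(torch.cat([self.spectral(spectrum), texture], dim=1))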

4.
A hybrid neural network for hyperspectral image classification
Although recent research shows that deep learning-based methods can achieve promising performance when applied to hyperspectral image (HSI) classification in remote sensing, some challenging issues still exist. For example, after a number of 2D convolutions, each feature map may correspond to only a single dimension of the hyperspectral image; as a result, the relationships between different feature maps from the multi-dimensional hyperspectral image cannot be extracted well. Another issue is that information in the extracted feature maps may be erased by pooling operations. To address these problems, we propose a novel hybrid neural network (HNN) for hyperspectral image classification. The HNN uses a multi-branch architecture to extract hyperspectral image features in order to improve its prediction accuracy. Moreover, we build a deconvolution structure to recover the information lost in the pooling operations. In addition, to improve convergence and prevent overfitting, the HNN applies batch normalization (BN) and parametric rectified linear units (PReLU). In the experiments, two public benchmark HSIs are utilized to evaluate the performance of the proposed method. The experimental results demonstrate the superiority of the HNN over several well-known methods.
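A minimal sketch of one building block in the spirit of the HNN description: a convolution with batch normalization and PReLU, a pooling step, and a transposed convolution (deconvolution) that restores the spatial resolution lost by pooling; the block name and channel counts are illustrative assumptions.

import torch
import torch.nn as nn

class PoolDeconvBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
        )
        self.pool = nn.MaxPool2d(2)
        # Transposed convolution recovers the spatial size halved by pooling.
        self.deconv = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)

    def forward(self, x):
        # Assumes even spatial dimensions so the recovered map matches x exactly.
        y = self.pool(self.conv(x))
        return self.deconv(y) + x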

5.
In this article, a novel dual-channel convolutional neural network (DC-CNN) framework is proposed for accurate spectral-spatial classification of hyperspectral images (HSIs). In this framework, a one-dimensional CNN is utilized to automatically extract hierarchical spectral features, a two-dimensional CNN is applied to extract hierarchical space-related features, and a softmax regression classifier then combines the spectral and spatial features to predict the final classification results. To overcome the problem of limited available training samples in HSIs, we propose a simple data augmentation method that is efficient and effective for improving HSI classification accuracy. For comparison and validation, we test the proposed method along with three other deep-learning-based HSI classification methods on two real-world HSI data sets. Experimental results demonstrate that our DC-CNN-based method outperforms the state-of-the-art methods by a considerable margin.
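A minimal sketch of the dual-channel idea: a 1-D CNN over the spectrum and a 2-D CNN over the spatial patch, fused before a softmax classifier; all layer widths are illustrative assumptions rather than the paper's settings.

import torch
import torch.nn as nn

class DualChannelCNN(nn.Module):
    def __init__(self, n_bands, patch_channels, n_classes):
        super().__init__()
        self.spectral = nn.Sequential(nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(),
                                      nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.spatial = nn.Sequential(nn.Conv2d(patch_channels, 32, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_classes)  # softmax applied via the loss

    def forward(self, spectrum, patch):
        # spectrum: (N, 1, n_bands); patch: (N, patch_channels, h, w)
        fused = torch.cat([self.spectral(spectrum), self.spatial(patch)], dim=1)
        return self.classifier(fused)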

6.
Deep neural networks have recently been successfully explored to extract deep features for hyperspectral image classification. Recurrent neural networks (RNNs) are an important branch of the deep learning family and are widely used for sequence analysis. Indeed, RNNs have been used to model the dependencies between the different spectral bands of a hyperspectral image, inspired by the observation that hyperspectral pixels can be considered as spectral sequences. A disadvantage of such methods is that they do not consider the effect of neighborhood pixels on the final class label. In this letter, an RNN model is proposed for the spectral-spatial classification of hyperspectral images. Specifically, the hyperspectral image cube surrounding a central pixel is treated as a sequence of hyperspectral pixels, and an RNN is used to model the dependencies between the different neighborhood pixels. The proposed RNN is evaluated on two widely used hyperspectral image datasets. The experimental results demonstrate that the proposed approach provides better performance than conventional methods.
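A minimal sketch of treating the neighbourhood of a pixel as a sequence and modelling it with a recurrent network; the GRU cell and hidden size are assumed choices, not the paper's exact configuration.

import torch
import torch.nn as nn

class NeighborhoodRNN(nn.Module):
    def __init__(self, n_bands, hidden=128, n_classes=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_bands, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, neighborhood):
        # neighborhood: (N, window * window, n_bands), i.e. the pixels of the
        # cube around the centre pixel flattened into a sequence.
        _, h = self.rnn(neighborhood)
        return self.classifier(h.squeeze(0))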

7.
A convolutional neural network (CNN) for hyperspectral image classification can provide excellent performance when the number of labeled training samples is sufficiently large. Unfortunately, only a small number of labeled samples are typically available for training on hyperspectral images. In this letter, a novel semi-supervised convolutional neural network is proposed for the classification of hyperspectral images. The proposed network can automatically learn features from complex hyperspectral image data structures. Furthermore, skip connections are added between the encoder layers and decoder layers to make the network suitable for semi-supervised learning. The semi-supervised method is adopted to address the problem of limited labeled samples. Finally, the network is trained to simultaneously minimize the sum of the supervised and unsupervised cost functions. The proposed network is evaluated on a widely used hyperspectral image dataset. The experimental results demonstrate that the proposed approach provides results competitive with state-of-the-art methods.
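A minimal sketch of the joint objective described above: a supervised cross-entropy term on labelled pixels plus an unsupervised reconstruction term, assuming an encoder-decoder model that returns class logits and a reconstruction of its input; the weighting factor is an assumption.

import torch.nn.functional as F

def semi_supervised_loss(logits, labels, reconstruction, inputs, weight=1.0):
    # Sum of the supervised and unsupervised cost functions.
    supervised = F.cross_entropy(logits, labels)
    unsupervised = F.mse_loss(reconstruction, inputs)
    return supervised + weight * unsupervised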

8.
A dense convolutional neural network for hyperspectral image classification
In this letter, a dense convolutional neural network (DCNN) is proposed for hyperspectral image classification, aiming to improve classification performance by promoting feature reuse and strengthening the flow of features and gradients. In the network, features are learned mainly through the designed dense blocks, where the feature maps generated in each layer connect directly to all subsequent layers via concatenation. Experiments are conducted on two well-known hyperspectral image data sets, using the proposed method and four comparable methods. The results demonstrate that the overall accuracies of the DCNN reached 97.61% and 99.50% on the respective data sets, an obvious improvement over the accuracies of the compared methods. The study confirms that the DCNN can provide more discriminative features for hyperspectral image classification and can offer higher classification accuracy and smoother classification maps.
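A minimal sketch of a dense block in which every layer receives the concatenation of all preceding feature maps; the growth rate and number of layers are illustrative assumptions.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, growth_rate, 3, padding=1)))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees the concatenation of all earlier feature maps.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)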

9.
Aerial scene classification is a challenging task in remote sensing image processing, since some scenes are highly similar and differ only in object density. To address this problem, this paper proposes a novel parallel multi-stage (PMS) architecture formed by low, middle, and high deep convolutional neural network (DCNN) sub-models. The PMS model automatically learns representative and discriminative hierarchical features, consisting of three 512-dimensional vectors, and the final representation is created by their linear concatenation. The PMS model thus describes an aerial image with a robust three-stage feature. Unlike previous methods, we use only transfer learning and deep learning to obtain more discriminative features from scene images while improving performance. Experimental results demonstrate that the proposed PMS model outperforms state-of-the-art methods, obtaining average classification accuracies of 98.81% and 95.56% on the UC Merced (UCM) and aerial image dataset (AID) benchmarks, respectively.
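A minimal sketch of the fusion step implied by the abstract: three 512-dimensional branch descriptors (low, middle, high) are concatenated and classified; the class name and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ParallelFusionHead(nn.Module):
    def __init__(self, n_classes, branch_dim=512, n_branches=3):
        super().__init__()
        self.classifier = nn.Linear(branch_dim * n_branches, n_classes)

    def forward(self, low, middle, high):
        # Each branch descriptor has shape (N, 512).
        return self.classifier(torch.cat([low, middle, high], dim=1))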

10.
With the launch of various remote-sensing satellites, more and more high-spatial-resolution remote-sensing (HSR-RS) images are becoming available. Scene classification of such a huge volume of HSR-RS images is a big challenge for the efficiency of feature learning and model training. The deep convolutional neural network (CNN), a typical deep learning model, is an efficient end-to-end hierarchical feature learning model that can capture the intrinsic features of input HSR-RS images. However, most published CNN architectures are borrowed from natural scene classification with thousands of training samples and are not designed for HSR-RS images. In this paper, we propose an agile CNN architecture, named SatCNN, for HSR-RS image scene classification. Based on recent improvements to modern CNN architectures, we use more efficient convolutional layers with smaller kernels to build an effective CNN architecture. Experiments on the SAT data sets confirm that SatCNN can quickly and effectively learn robust features to handle intra-class diversity even with small convolutional kernels, and that the deeper convolutional layers allow spontaneous modelling of relative spatial relationships. With the help of fast graphics processing unit acceleration, SatCNN can be trained within about 40 min, achieving overall accuracies of 99.65% and 99.54%, which are state-of-the-art results for the SAT data sets.

11.
The convolutional neural network (CNN) is widely used for image classification because of its powerful feature extraction capability. The key challenge for CNNs in remote sensing (RS) scene classification is that the data sets are small and the images in each category vary greatly in position and angle, while spatial information is lost in the pooling layers of the CNN. Consequently, how to extract accurate and effective features is very important. To this end, we present a Siamese capsule network to address these issues. First, we introduce capsules to extract the spatial information of the features so as to learn equivariant representations. Second, to improve the classification accuracy of the model on small data sets, the proposed model utilizes the structure of the Siamese network as embedded verification. Finally, the features learned by the capsule network are regularized by a metric learning term to improve the robustness of our model. The effectiveness of the model is verified through different experiments on three benchmark RS data sets. Experimental results demonstrate that the comprehensive performance of the proposed method surpasses that of other existing methods.

12.
Remote Sensing Letters, 2013, 4(11): 1086-1094
Deep learning-based methods, especially the deep convolutional neural network (CNN), have proven their power in hyperspectral image (HSI) classification. On the other hand, ensemble learning is a useful method for classification tasks. In this letter, to further improve classification accuracy, a combination of a CNN and a random forest (RF) is proposed for HSI classification. A well-designed CNN is used as the individual classifier to extract discriminant features of the HSI, and the RF randomly selects the extracted features and training samples to form a multiple-classifier system. Furthermore, the learned weights of one CNN are adopted to initialize the other individual CNNs. Experimental results with two hyperspectral data sets indicate that the proposed method provides competitive classification results compared with state-of-the-art methods.
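A minimal sketch of the CNN-plus-random-forest combination: deep features produced by a trained CNN are used to fit a random forest; extract_features is a hypothetical callable standing in for the trained CNN's feature extractor.

from sklearn.ensemble import RandomForestClassifier

def train_cnn_rf(extract_features, X_train, y_train, n_estimators=200):
    # extract_features(X_train) is assumed to return an (n_samples, n_features) array.
    deep_features = extract_features(X_train)
    forest = RandomForestClassifier(n_estimators=n_estimators)
    forest.fit(deep_features, y_train)
    return forest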

13.
Hyperspectral images comprise hundreds of narrow contiguous wavelength bands containing a wealth of spectral information, while the great potential of light detection and ranging (LIDAR) data lies in its height measurements, which can be used as complementary information for the classification of hyperspectral data. In this paper, a feature-fusion strategy for hyperspectral and LIDAR data is adopted in order to develop a new classification framework for the accurate analysis of a surveyed area. The proposed approach employs extinction profiles (EPs), extracted with extinction filters computed on both the hyperspectral and LIDAR images, leading to a fusion of spectral, spatial, and elevation features. Experimental results obtained using a real hyperspectral image together with a LIDAR-derived digital surface model (DSM) collected over the University of Houston campus and its neighboring urban area demonstrate the effectiveness of the proposed framework.

14.
In this letter, a spectral-spatial classification method using functional data analysis (FDA) is proposed. Since the efficacy of FDA for hyperspectral image analysis, as opposed to analysis in the multivariate analysis framework (MAF), has been demonstrated previously, we apply FDA to better extract spectral and spatial information for hyperspectral image classification. In the FDA framework, a support vector machine (SVM) classifier is used for hyperspectral image classification and a watershed segmentation algorithm is applied to extract spatial structures. Several approaches to computing a one-band gradient image, used as the input to the watershed transformation, are examined. The extracted segmentation map is then used to improve the pixel-wise classification accuracy: the classification and segmentation results are combined using a majority-vote approach. The efficiency of the proposed method is evaluated on two hyperspectral data sets. The experimental results show that the proposed spectral-spatial classification method provides better classification accuracy than some state-of-the-art spectral-spatial classification methods.
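A minimal sketch of the majority-vote combination step: within each watershed segment, every pixel is reassigned the most frequent pixel-wise class label; the input names are assumptions.

import numpy as np

def majority_vote(label_map, segment_map):
    # label_map and segment_map are 2-D integer arrays of the same shape.
    fused = label_map.copy()
    for seg_id in np.unique(segment_map):
        mask = segment_map == seg_id
        labels, counts = np.unique(label_map[mask], return_counts=True)
        fused[mask] = labels[counts.argmax()]
    return fused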

15.
In hyperspectral image (HSI) classification, it is important to combine multiple features of a given pixel in both the spatial and spectral domains to improve classification accuracy. To achieve this goal, this article proposes a novel spatial-spectral feature dimensionality reduction algorithm based on manifold learning. For each feature, a graph Laplacian matrix is constructed based on discriminative information from the training samples, and the graph Laplacian matrices of the various features are then linearly combined using a set of empirically defined weights. Finally, the feature mapping is obtained by solving an eigen-decomposition problem. Classification results on the public Indiana Airborne Visible/Infrared Imaging Spectrometer dataset and the Texas Hyperspectral Digital Imagery Collection Experiment dataset show that our method achieves superior performance compared with several representative HSI feature extraction and dimensionality reduction algorithms.
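A minimal sketch of combining per-feature graph Laplacians with fixed weights and solving an eigen-decomposition to obtain the low-dimensional mapping; the Laplacian construction from dense affinity matrices and the choice of normalization are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def combined_laplacian_embedding(affinity_mats, weights, n_dims=10):
    # Weighted sum of the graph Laplacians built from each feature's affinity matrix.
    L = sum(w * laplacian(A, normed=True) for w, A in zip(weights, affinity_mats))
    vals, vecs = eigh(L)
    # Skip the trivial eigenvector and keep the next n_dims as the embedding.
    return vecs[:, 1:n_dims + 1]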

16.
In this study, a novel method for clustering hyperspectral images is proposed. The proposed method performs projected clustering in the feature (spectral) space and merges regions in the image (spatial) space. The novelty of the method lies in the way spectral and spatial information are used and in their inclusion in the projected clustering framework. The method transfers clusters formed in feature space to image space by converting them into regions. In image space, regions are then iteratively merged based on spatial adjacency and spectral similarity. To evaluate the effectiveness of the proposed method, experiments are conducted on three hyperspectral images, and the method is compared with other partitional clustering methods. The results demonstrate that the proposed method achieves better performance in most cases.

17.
Because high-spatial-resolution remote sensing images are rich in spatial information but relatively poor in spectral information, a land-use classification method for high-resolution remote sensing images is proposed based on a parallel spectral-spatial convolutional neural network (CNN) and object-oriented remote sensing technology. The contour of a remote sensing object is taken as the boundary, and the set of pixels that comprise the object is extracted to form the input data set for the deep neural network. The proposed network considers both the features of the object and the pixels that form it. The spatial and spectral features of remote sensing image objects are extracted independently in the parallel network using panchromatic and multispectral remote sensing techniques, and a fully connected layer then integrates the spectral and spatial information to produce the remote sensing object class coding. The experimental results demonstrate that the parallel spectral-spatial CNN, which combines spatial and spectral features, achieves better classification performance than either individual CNN. The proposed method therefore provides a novel approach to land-use classification based on high-spatial-resolution remote sensing images.

18.
Collecting training samples for remote sensing image classification is always time-consuming and expensive. In this context, active learning (AL), which aims to achieve promising classification performance with a limited number of training samples, has been developed. Recently, the integration of spatial information into AL has shown new potential for image classification. In this letter, an AL approach with two-stage spatial computation (AL-2SC) is proposed to improve the selection of training samples. Spatial features derived from the remote sensing image and the probability outputs of neighbouring pixels are introduced into the AL process. Moreover, we compare several AL approaches that take spatial information into account. In the experiments, random sampling (RS) and four AL methods are considered: AL using the breaking-ties heuristic (BT), AL with spatial features (AL-SF), AL with neighbouring responses (AL-NR), and AL-2SC. Three remote sensing datasets, including one hyperspectral and two multispectral images, are used to compare the performance of the different methods. The results illustrate that utilizing spatial information is very important for improving AL performance, and that the proposed AL-2SC gives the most satisfactory results.
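A minimal sketch of the breaking-ties (BT) heuristic mentioned among the compared criteria: candidates with the smallest gap between their two highest class probabilities are queried first; probs and n_query are assumed names.

import numpy as np

def breaking_ties_query(probs, n_query=10):
    # probs: (n_candidates, n_classes) classifier probability outputs.
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]  # best minus second best
    return np.argsort(margin)[:n_query]                 # smallest margins first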

19.
Automatic and accurate esophageal lesion classification and segmentation is of great significance for clinically estimating the status of esophageal lesions and devising suitable diagnostic schemes. Because of individual variations and the visual similarity of lesions in shape, color, and texture, current clinical methods remain subject to potential high risk and are time-consuming. In this paper, we propose an Esophageal Lesion Network (ELNet) for automatic esophageal lesion classification and segmentation using deep convolutional neural networks (DCNNs). The method automatically integrates dual-view contextual lesion information to extract global and local features for esophageal lesion classification, and a lesion-specific segmentation network is proposed for automatic pixel-level annotation of esophageal lesions. On an established large-scale clinical database of 1051 white-light endoscopic images, ten-fold cross-validation is used for method validation. Experimental results show that the proposed framework achieves classification with a sensitivity of 0.9034, specificity of 0.9718, and accuracy of 0.9628, and segmentation with a sensitivity of 0.8018, specificity of 0.9655, and accuracy of 0.9462. All of these indicate that our method enables efficient, accurate, and reliable esophageal lesion diagnosis in the clinic.

20.