Results 1 - 20 of 25
1.
Sci Rep ; 14(1): 19308, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39164343

ABSTRACT

This paper introduces a new latent variable probabilistic framework for representing spectral data of high spatial and spectral dimensionality, such as hyperspectral images. We use a generative Bayesian model to represent the image formation process and provide interpretable and efficient inference and learning methods. Surprisingly, our approach can be implemented with simple tools and does not require extensive training data, detailed pixel-by-pixel labeling, or significant computational resources. Numerous experiments with simulated data and real benchmark scenarios show encouraging image classification performance. These results validate the unique ability of our framework to discriminate complex hyperspectral images, irrespective of the presence of highly discriminative spectral signatures.
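The abstract gives no implementation details, so the following is only a loose, hypothetical numpy toy of the generative idea it describes: fit a per-class Gaussian over spectra and classify each pixel by the class under which its spectrum is most likely. The synthetic "materials", band count, and shared isotropic variance are all assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "materials", each a Gaussian in a 10-band spectral space.
n_bands = 10
means = np.stack([np.zeros(n_bands), np.full(n_bands, 2.0)])
X = np.vstack([rng.normal(means[0], 1.0, (100, n_bands)),
               rng.normal(means[1], 1.0, (100, n_bands))])
y = np.array([0] * 100 + [1] * 100)

def fit_gaussian_generative(X, y):
    """Per-class mean and a shared isotropic variance (maximum likelihood)."""
    classes = np.unique(y)
    mu = np.stack([X[y == c].mean(axis=0) for c in classes])
    var = X.var(axis=0).mean()
    return mu, var

def predict(X, mu, var):
    # Log-likelihood of each pixel under each class Gaussian; pick the max.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    return np.argmax(-d2 / (2 * var), axis=1)

mu, var = fit_gaussian_generative(X, y)
acc = (predict(X, mu, var) == y).mean()
```

With well-separated class means this toy reaches near-perfect accuracy, illustrating why a generative model can work with little training data: it only needs enough samples to estimate a mean and a variance per class.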

2.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066156

ABSTRACT

Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial-spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets.
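The GCN branch described above propagates node (superpixel) features through a normalized adjacency matrix. As a minimal sketch of one such graph convolution layer (the standard symmetric normalization with self-loops; the chain graph and dimensions are invented for the example):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: symmetrically normalize A + I, then propagate."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

# 4 superpixel nodes in a chain graph, 6 spectral features each.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(4, 6))
W = np.random.default_rng(2).normal(size=(6, 3))
H = gcn_layer(A, X, W)
```

Because nodes are superpixels rather than pixels, the adjacency matrix stays small, which is the efficiency gain the abstract attributes to superpixel segmentation.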

3.
Sensors (Basel) ; 24(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39000887

ABSTRACT

Accurate and timely acquisition of the spatial distribution of mangrove species is essential for conserving ecological diversity. Hyperspectral imaging sensors are recognized as effective tools for monitoring mangroves. However, the spatial complexity of mangrove forests and the spectral redundancy of hyperspectral images pose challenges to fine classification. Moreover, finely classifying mangrove species using only spectral information is difficult due to spectral similarities among species. To address these issues, this study proposes an object-oriented multi-feature combination method for fine classification. Specifically, hyperspectral images were segmented using multi-scale segmentation techniques to obtain objects corresponding to different species. Then, a variety of features were extracted, including spectral features, vegetation indices, fractional-order differential features, texture features, and geometric features, and a genetic algorithm was used for feature selection. Additionally, ten feature combination schemes were designed to compare their effects on mangrove species classification. In terms of classification algorithms, the capabilities of four machine learning classifiers were evaluated: K-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and artificial neural network (ANN). The results indicate that SVM based on texture features achieved the highest classification accuracy among single-feature variables, with an overall accuracy of 97.04%. Among feature combination variables, ANN based on raw spectra, first-order differential spectra, texture features, vegetation indices, and geometric features achieved the highest classification accuracy, with an overall accuracy of 98.03%. Texture features and fractional-order differentiation are identified as important variables, while vegetation indices and geometric features can further improve classification accuracy.
Object-based classification, compared to pixel-based classification, can avoid the salt-and-pepper phenomenon and significantly enhance the accuracy and efficiency of mangrove species classification. Overall, the multi-feature combination method and object-based classification strategy proposed in this study provide strong technical support for the fine classification of mangrove species and are expected to play an important role in mangrove restoration and management.
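The four-classifier comparison the study performs can be sketched with scikit-learn on synthetic stand-in features (the real work uses per-object spectral, texture, and geometric features; the dataset, hyperparameters, and split here are illustrative assumptions only):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for per-object feature vectors (spectra, texture, etc.).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                         random_state=0),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
```

Swapping feature subsets in and out of `X` mirrors the study's ten feature combination schemes.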

4.
Sensors (Basel) ; 24(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38793842

ABSTRACT

Hyperspectral images (HSIs) contain subtle spectral details and rich spatial context of land cover, benefiting from developments in spectral imaging and space technology. The classification of HSIs, which aims to allocate an optimal label to each pixel, has broad prospects in the field of remote sensing. However, due to the redundancy between bands and complex spatial structures, the effectiveness of the shallow spectral-spatial features extracted by traditional machine-learning-based methods tends to be unsatisfactory. Over recent decades, various methods based on deep learning in the field of computer vision have been proposed to improve the discrimination of spectral-spatial representations for classification. In this article, the crucial factors for discriminating spectral-spatial features are systematically summarized from the perspectives of feature extraction and feature optimization. For feature extraction, techniques to ensure the discrimination of spectral features, spatial features, and spectral-spatial features are illustrated based on the characteristics of hyperspectral data and the architecture of models. For feature optimization, techniques to adjust the feature distances between classes in the classification space are introduced in detail. Finally, the characteristics and limitations of these techniques and future challenges in facilitating the discrimination of features for HSI classification are discussed.

5.
Sci Rep ; 14(1): 10664, 2024 05 09.
Article in English | MEDLINE | ID: mdl-38724603

ABSTRACT

Kiwifruit soft rot is highly contagious and causes serious economic loss. Therefore, early detection and elimination of soft rot are important for postharvest treatment and storage of kiwifruit. This study aims to accurately detect kiwifruit soft rot based on hyperspectral images by using a deep learning approach for image classification. A dual-branch selective attention capsule network (DBSACaps) was proposed to improve the classification accuracy. The network uses two branches to separately extract the spectral and spatial features so as to reduce their mutual interference, followed by fusion of the two features through the attention mechanism. Capsule network was used instead of convolutional neural networks to extract the features and complete the classification. Compared with existing methods, the proposed method exhibited the best classification performance on the kiwifruit soft rot dataset, with an overall accuracy of 97.08% and a 97.83% accuracy for soft rot. Our results confirm that potential soft rot of kiwifruit can be detected using hyperspectral images, which may contribute to the construction of smart agriculture.
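The paper's capsule network is not reproduced here; the snippet below only sketches the fusion step the abstract describes, where separately extracted spectral and spatial feature vectors are combined through an attention mechanism. The scalar-score-per-branch scheme and all dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_fuse(f_spec, f_spat, w_spec, w_spat):
    """Fuse spectral and spatial feature vectors with softmax attention.
    Each branch gets one learned score; the softmax turns the two scores
    into convex fusion weights."""
    scores = np.array([f_spec @ w_spec, f_spat @ w_spat])
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a[0] * f_spec + a[1] * f_spat, a

d = 16
f_spec, f_spat = rng.normal(size=d), rng.normal(size=d)   # branch outputs
w_spec, w_spat = rng.normal(size=d), rng.normal(size=d)   # learned scorers
fused, weights = attention_fuse(f_spec, f_spat, w_spec, w_spat)
```

Keeping the branches separate until this fusion point is what the abstract credits with reducing mutual interference between spectral and spatial features.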


Subjects
Actinidia; Neural Networks, Computer; Plant Diseases; Actinidia/microbiology; Plant Diseases/microbiology; Deep Learning; Hyperspectral Imaging/methods; Fruit/microbiology; Image Processing, Computer-Assisted/methods
6.
Sensors (Basel) ; 24(4)2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38400345

ABSTRACT

Hyperspectral image (HSI) classification is a highly challenging task, particularly in fields like crop yield prediction and agricultural infrastructure detection. These applications often involve complex image types, such as soil, vegetation, water bodies, and urban structures, encompassing a variety of surface features. In HSI, the strong correlation between adjacent bands leads to redundancy in spectral information, while using image patches as the basic unit of classification causes redundancy in spatial information. To more effectively extract key information from this massive redundancy for classification, we innovatively proposed the CESA-MCFormer model, building upon the transformer architecture with the introduction of the Center Enhanced Spatial Attention (CESA) module and Morphological Convolution (MC). The CESA module combines hard coding and soft coding to provide the model with prior spatial information before the mixing of spatial features, introducing comprehensive spatial information. MC employs a series of learnable pooling operations, not only extracting key details in both spatial and spectral dimensions but also effectively merging this information. By integrating the CESA module and MC, the CESA-MCFormer model employs a "Selection-Extraction" feature processing strategy, enabling it to achieve precise classification with minimal samples, without relying on dimension reduction techniques such as PCA. To thoroughly evaluate our method, we conducted extensive experiments on the IP, UP, and Chikusei datasets, comparing our method with the latest advanced approaches. The experimental results demonstrate that the CESA-MCFormer achieved outstanding performance on all three test datasets, with Kappa coefficients of 96.38%, 98.24%, and 99.53%, respectively.

7.
Neural Netw ; 168: 105-122, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37748391

ABSTRACT

In recent years, the application of convolutional neural networks (CNNs) and graph convolutional networks (GCNs) in hyperspectral image classification (HSIC) has achieved remarkable results. However, limited labeled samples remain a major challenge when using CNNs and GCNs to classify hyperspectral images. To alleviate this problem, a double-branch fusion network of a CNN and an enhanced graph attention network (CEGAT), based on a key sample selection strategy, is proposed. First, a linear discrimination of spectral inter-class slices (LD_SICS) module is designed to eliminate spectral redundancy in HSIs. Then, a spatial-spectral correlation attention (SSCA) module is proposed, which can extract and assign attention weights to the spatial and spectral correlation features. On the graph attention (GAT) branch, the HSI is segmented into superpixels as input to reduce the number of network parameters. In addition, an enhanced graph attention (EGAT) module is constructed to strengthen the relationships between nodes. Finally, a key sample selection (KSS) strategy is proposed to enable the network to achieve better classification performance with few labeled samples. Compared with other state-of-the-art methods, CEGAT achieves better classification performance under limited labeled samples.


Subjects
Neural Networks, Computer; Polymers
8.
Sensors (Basel) ; 23(17)2023 Sep 03.
Article in English | MEDLINE | ID: mdl-37688086

ABSTRACT

In the realm of hyperspectral image classification, the pursuit of heightened accuracy and comprehensive feature extraction has led to the formulation of an advanced architectural paradigm. This study proposed a unified model that synergistically leverages the capabilities of three distinct branches: the swin transformer, a convolutional neural network, and an encoder-decoder. The main objective was to facilitate multiscale feature learning, a pivotal facet of hyperspectral image classification, with each branch specializing in a unique facet of multiscale feature extraction. The swin transformer, recognized for its competence in distilling long-range dependencies, captures structural features across different scales; simultaneously, the convolutional neural network undertakes localized feature extraction, engendering nuanced spatial information preservation. The encoder-decoder branch undertakes comprehensive analysis and reconstruction, fostering the assimilation of both multiscale spectral and spatial intricacies. To evaluate our approach, we conducted experiments on publicly available datasets and compared the results with state-of-the-art methods. Our proposed model obtains the best classification results: overall accuracies of 96.87%, 98.48%, and 98.62% were obtained on the Xuzhou, Salinas, and LK datasets, respectively.

9.
J Xray Sci Technol ; 31(4): 777-796, 2023.
Article in English | MEDLINE | ID: mdl-37182861

ABSTRACT

BACKGROUND: Hyperspectral brain tissue imaging has recently been utilized in medical research aiming to study brain science and capture various biological phenomena of different tissue types. However, processing the high-dimensional data of hyperspectral images (HSI) is challenging due to the limited availability of training samples. OBJECTIVE: To overcome this challenge, this study proposes applying a 3D-CNN (convolutional neural network) model to process spatial and temporal features and thus improve the performance of tumor image classification. METHODS: A 3D-CNN model is implemented as a testing method for dealing with high-dimensional problems. The HSI pre-processing is accomplished using distinct approaches such as hyperspectral cube creation, calibration, spectral correction, and normalization. Both spectral and spatial features are extracted from the HSI. The benchmark in vivo human brain HSI dataset is used to validate the performance of the proposed classification model. RESULTS: The proposed 3D-CNN model achieves a higher accuracy of 97% for brain tissue classification, whereas the existing linear conventional support vector machine (SVM) and 2D-CNN models yield 95% and 96% classification accuracy, respectively. Moreover, the maximum F1-score obtained by the proposed 3D-CNN model is 97.3%, which is 2.5% and 11.0% higher than the F1-scores obtained by the 2D-CNN model and SVM model, respectively. CONCLUSION: A 3D-CNN model is developed for brain tissue classification using the HSI dataset. The study results demonstrate the advantages of the new 3D-CNN model, which achieves higher brain tissue classification accuracy than the conventional 2D-CNN and SVM models.
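The core operation a 3D-CNN applies to a hyperspectral cube can be shown directly. This is a naive, loop-based valid-mode 3D filter in numpy, not the paper's network; the patch size, kernel size, and averaging kernel are illustrative assumptions:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in CNNs) over a
    (rows, cols, bands) hyperspectral cube: the kernel slides jointly over
    the two spatial axes and the spectral axis."""
    R, C, B = cube.shape
    r, c, b = kernel.shape
    out = np.zeros((R - r + 1, C - c + 1, B - b + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = (cube[i:i+r, j:j+c, k:k+b] * kernel).sum()
    return out

cube = np.random.default_rng(0).normal(size=(9, 9, 20))   # toy HSI patch
kernel = np.ones((3, 3, 5)) / 45.0                        # averaging filter
feat = conv3d_valid(cube, kernel)
```

Because the kernel spans bands as well as rows and columns, each output value mixes spatial and spectral context, which is the advantage the abstract claims over 2D-CNNs.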


Subjects
Brain; Neural Networks, Computer; Humans; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted; Support Vector Machine
10.
Sensors (Basel) ; 23(7)2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37050573

ABSTRACT

Graph convolutional neural network architectures combine feature extraction and convolutional layers for hyperspectral image classification. An adaptive neighborhood aggregation method, based on statistical variance and integrating spatial information along with the spectral signatures of the pixels, is proposed for improving graph convolutional network classification of hyperspectral images. The spatial-spectral information is integrated into the adjacency matrix and processed by a single-layer graph convolutional network. The algorithm employs an adaptive neighborhood selection criterion conditioned on the class a pixel belongs to. Compared to fixed window-based feature extraction, this method proves effective in capturing spectral and spatial features with variable pixel neighborhood sizes. The experimental results from the Indian Pines, Houston University, and Botswana Hyperion hyperspectral image datasets show that the proposed AN-GCN can significantly improve classification accuracy. For example, the overall accuracy on the Houston University data increases from 81.71% (MiniGCN) to 97.88% (AN-GCN). Furthermore, the AN-GCN can classify hyperspectral images of rice seeds exposed to high day and night temperatures, proving its efficacy in discriminating seeds under increased ambient temperature treatments.
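The variance-conditioned adaptive neighborhood idea can be illustrated with a toy rule (this growth criterion and its threshold are my assumptions, not the paper's exact algorithm): grow a window around a pixel until the spectral variance inside it rises, signaling that the window has crossed into a different material.

```python
import numpy as np

def adaptive_neighborhood(img, row, col, var_thresh, max_half=5):
    """Grow a square window around (row, col) until the mean per-band
    variance of the included pixels exceeds `var_thresh` (i.e. the window
    starts to cross a class boundary). Returns the chosen half-width."""
    for half in range(1, max_half + 1):
        r0, r1 = max(0, row - half), min(img.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(img.shape[1], col + half + 1)
        patch = img[r0:r1, c0:c1].reshape(-1, img.shape[2])
        if patch.var(axis=0).mean() > var_thresh:
            return half - 1 if half > 1 else 1
    return max_half

# Homogeneous left half, a different material on the right: a pixel near the
# boundary gets a smaller neighborhood than one deep inside a region.
img = np.zeros((20, 20, 8))
img[:, 10:] = 5.0
deep = adaptive_neighborhood(img, 5, 3, var_thresh=0.5)
edge = adaptive_neighborhood(img, 5, 9, var_thresh=0.5)
```

This is the behavior the abstract contrasts with fixed windows: interior pixels aggregate over large, clean neighborhoods while boundary pixels stay local.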

11.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991898

ABSTRACT

Recently, convolutional neural networks have been widely used in hyperspectral image classification and have achieved excellent performance. However, the fixed receptive field of the convolution kernel often leads to incomplete feature extraction, and the high redundancy of spectral information makes spectral feature extraction difficult. To solve these problems, we propose a nonlocal attention mechanism for a 2D-3D hybrid CNN (2-3D-NL CNN), which includes an inception block and a nonlocal attention module. The inception block uses convolution kernels of different sizes to equip the network with multiscale receptive fields to extract the multiscale spatial features of ground objects. The nonlocal attention module enables the network to obtain a more comprehensive receptive field in the spatial and spectral dimensions while suppressing the information redundancy of the spectral dimension, making spectral feature extraction easier. Experiments on two hyperspectral datasets, Pavia University and Salinas, validate the effectiveness of the inception block and the nonlocal attention module. The results show that our model achieves overall classification accuracies of 99.81% and 99.42% on the two datasets, respectively, which is higher than the accuracy of existing models.
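A nonlocal block is, at its core, self-attention: every position attends to every other, giving a receptive field that covers the whole input. A minimal numpy sketch (the projection matrices, patch size, and residual form are generic assumptions, not the paper's exact module):

```python
import numpy as np

def nonlocal_block(X, Wq, Wk, Wv):
    """Non-local (self-attention) operation over flattened positions:
    softmax(Q K^T / sqrt(d)) V, added back to the input as a residual."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)          # row-wise softmax
    return X + A @ V                           # residual connection

rng = np.random.default_rng(0)
n, d = 25, 8                                   # 5x5 patch flattened, 8 channels
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
Y = nonlocal_block(X, Wq, Wk, Wv)
```

Contrast this with a convolution kernel, whose output at each position depends only on a fixed local window.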

12.
Sensors (Basel) ; 23(2)2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36679453

ABSTRACT

A hyperspectral image (HSI), which contains a number of contiguous and narrow spectral wavelength bands, is a valuable source of data for ground cover examinations. Classification using the entire original HSI suffers from the "curse of dimensionality" problem because (i) the image bands are highly correlated both spectrally and spatially, (ii) not every band can carry equal information, (iii) there is a lack of enough training samples for some classes, and (iv) the overall computational cost is high. Therefore, effective feature (band) reduction is necessary through feature extraction (FE) and/or feature selection (FS) for improving the classification in a cost-effective manner. Principal component analysis (PCA) is a frequently adopted unsupervised FE method in HSI classification. Nevertheless, its performance worsens when the dataset is noisy, and the computational cost becomes high. Consequently, this study first proposed an efficient FE approach using a normalized mutual information (NMI)-based band grouping strategy, where the classical PCA was applied to each band subgroup for intrinsic FE. Finally, the subspace of the most effective features was generated by the NMI-based minimum redundancy and maximum relevance (mRMR) FS criteria. The subspace of features was then classified using the kernel support vector machine. Two real HSIs collected by the AVIRIS and HYDICE sensors were used in an experiment. The experimental results demonstrated that the proposed feature reduction approach significantly improved the classification performance. It achieved the highest overall classification accuracy of 94.93% for the AVIRIS dataset and 99.026% for the HYDICE dataset. Moreover, the proposed approach reduced the computational cost compared with the studied methods.
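The band-grouping-plus-PCA pipeline can be sketched end to end in numpy. The histogram-based NMI estimator, the greedy grouping rule, the threshold, and the synthetic bands below are all assumptions for illustration; the paper's mRMR selection and kernel SVM stages are omitted:

```python
import numpy as np

def nmi(x, y, bins=8):
    """Normalized mutual information between two bands (histogram estimate)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return 2 * mi / (hx + hy)

def group_bands(X, thresh=0.3):
    """Greedy grouping: a band joins the current group while its NMI with
    the group's first band stays above `thresh`."""
    groups, current = [], [0]
    for b in range(1, X.shape[1]):
        if nmi(X[:, current[0]], X[:, b]) >= thresh:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

def first_pc(Xg):
    """First principal component scores of one band group (via SVD)."""
    Xc = Xg - Xg.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=200), rng.normal(size=200)
# Bands 0-2 are noisy copies of one signal, bands 3-5 of another.
X = np.column_stack([base1 + 0.1 * rng.normal(size=200) for _ in range(3)] +
                    [base2 + 0.1 * rng.normal(size=200) for _ in range(3)])
groups = group_bands(X)
features = np.column_stack([first_pc(X[:, g]) for g in groups])
```

Applying PCA per correlated group, rather than to all bands at once, is what keeps the approach cheap and less sensitive to noise concentrated in a few bands.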


Subjects
Support Vector Machine; Principal Component Analysis
13.
Sensors (Basel) ; 22(10)2022 May 20.
Article in English | MEDLINE | ID: mdl-35632310

ABSTRACT

Convolutional neural networks (CNNs) have been prominent in most hyperspectral image (HSI) processing applications due to their advantages in extracting local information. Despite their success, the locality of the convolutional layers within CNNs results in heavyweight models and time-consuming defects. In this study, inspired by the excellent performance of transformers that are used for long-range representation learning in computer vision tasks, we built a lightweight vision transformer for HSI classification that can extract local and global information simultaneously, thereby facilitating accurate classification. Moreover, as traditional dimensionality reduction methods are limited in their linear representation ability, a three-dimensional convolutional autoencoder was adopted to capture the nonlinear characteristics between spectral bands. Based on the aforementioned three-dimensional convolutional autoencoder and lightweight vision transformer, we designed an HSI classification network, namely the "convolutional autoencoder meets lightweight vision transformer" (CAEVT). Finally, we validated the performance of the proposed CAEVT network using four widely used hyperspectral datasets. Our approach showed superiority, especially in the absence of sufficient labeled samples, which demonstrates the effectiveness and efficiency of the CAEVT network.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Learning
14.
Sensors (Basel) ; 21(19)2021 Sep 28.
Article in English | MEDLINE | ID: mdl-34640786

ABSTRACT

Recently developed hybrid models that stack 3D with 2D CNN in their structure have enjoyed high popularity due to their appealing performance in hyperspectral image classification tasks. On the other hand, biological genome graphs have demonstrated their effectiveness in enhancing the scalability and accuracy of genomic analysis. We propose an innovative deep genome graph-based network (GGBN) for hyperspectral image classification to tap the potential of hybrid models and genome graphs. The GGBN model utilizes 3D-CNN at the bottom layers and 2D-CNNs at the top layers to process spectral-spatial features vital to enhancing the scalability and accuracy of hyperspectral image classification. To verify the effectiveness of the GGBN model, we conducted classification experiments on Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SA) datasets. Using only 5% of the labeled data for training over the SA, IP, and UP datasets, the classification accuracy of GGBN is 99.97%, 96.85%, and 99.74%, respectively, which is better than the compared state-of-the-art methods.


Assuntos
Algoritmos , Redes Neurais de Computação
15.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802533

ABSTRACT

Hyperspectral image (HSI) classification is the subject of intense research in remote sensing. The tremendous success of deep learning in computer vision has recently sparked interest in applying deep learning to hyperspectral image classification. However, most deep learning methods for hyperspectral image classification are based on convolutional neural networks (CNNs), which require heavy GPU memory resources and long run times. Recently, another deep learning model, the transformer, has been applied to image recognition, and the results demonstrate the great potential of the transformer network for computer vision tasks. In this paper, we propose a model for hyperspectral image classification based on the transformer, which is widely used in natural language processing. In addition, we believe we are the first to combine metric learning and the transformer model in hyperspectral image classification. Moreover, to improve the model's classification performance when the available training samples are limited, we use 1-D convolution and the Mish activation function. The experimental results on three widely used hyperspectral image data sets demonstrate the proposed model's advantages in accuracy, GPU memory cost, and running time.
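The two small ingredients named in the abstract, 1-D convolution over the spectrum and the Mish activation, are easy to show concretely (the smoothing kernel and spectrum length are illustrative assumptions):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)); smooth and non-monotonic."""
    return x * np.tanh(np.log1p(np.exp(x)))

# A 1-D convolution along the spectral axis followed by Mish, mirroring the
# paper's use of 1-D convolution and Mish for small-sample settings.
spectrum = np.random.default_rng(0).normal(size=50)
kernel = np.array([0.25, 0.5, 0.25])            # simple smoothing kernel
feat = mish(np.convolve(spectrum, kernel, mode="valid"))
```

Note that `np.log1p(np.exp(x))` is the softplus; for large inputs a numerically stabilized form would be preferable.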

16.
Sensors (Basel) ; 20(23)2020 Nov 29.
Article in English | MEDLINE | ID: mdl-33260347

ABSTRACT

In recent years, hyperspectral images (HSIs) have attracted considerable attention in computer vision (CV) due to their wide utility in remote sensing. Unlike images with three or fewer channels, HSIs have a large number of spectral bands. Recent works demonstrate the use of modern deep-learning-based CV techniques such as convolutional neural networks (CNNs) for analyzing HSIs. CNNs have receptive fields (RFs) fueled by learnable weights, which are trained to extract useful features from images. In this work, a novel multi-receptive CNN module called GhoMR is proposed for HSI classification. GhoMR utilizes blocks containing several RFs, extracting features in a residual fashion. Each RF extracts features which are used by other RFs to extract more complex features in a hierarchical manner. However, the higher the number of RFs, the greater the associated weights, and the heavier the network; most complex architectures suffer from this shortcoming. To tackle this, the recently introduced Ghost module is used as the basic building unit. Ghost modules address feature redundancy in CNNs by extracting only a limited set of features and performing cheap transformations on them, thus reducing the overall number of parameters in the network. To test the discriminative potential of GhoMR, a simple network called GhoMR-Net is constructed using GhoMR modules, and experiments are performed on three public HSI data sets: Indian Pines, University of Pavia, and Salinas Scene. The classification performance is measured using three metrics: overall accuracy (OA), Kappa coefficient (Kappa), and average accuracy (AA). Comparisons with ten state-of-the-art architectures further demonstrate the effectiveness of the method. Although lightweight, the proposed GhoMR-Net provides comparable or better performance than other networks. The PyTorch code for this study is made available at the iamarijit/GhoMR GitHub repository.
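The parameter-saving trick of a Ghost module can be sketched on flattened pixel features. In the real module the "cheap transformation" is a depthwise convolution; here a scalar scaling stands in for it, and all dimensions are invented for the example:

```python
import numpy as np

def ghost_features(X, W_primary, cheap_scale=0.5):
    """Ghost-style feature generation: a small 'primary' linear map makes a
    few intrinsic channels, then a cheap per-channel transformation (here a
    scaling, standing in for a depthwise conv) manufactures extra 'ghost'
    channels almost for free."""
    primary = np.maximum(X @ W_primary, 0.0)   # intrinsic features
    ghosts = cheap_scale * primary             # cheap transformation
    return np.concatenate([primary, ghosts], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))     # 100 pixels, 32 spectral features
W = rng.normal(size=(32, 8))       # primary map: only 32*8 weights
out = ghost_features(X, W)         # yet 16 output channels
```

Halving the width of the expensive primary map while keeping the output channel count is exactly how Ghost modules shrink the parameter budget.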

17.
Sensors (Basel) ; 20(18)2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32933016

ABSTRACT

Convolutional neural networks provide an ideal solution for hyperspectral image (HSI) classification. However, the classification effect is not satisfactory when limited training samples are available. Focusing on "small-sample" hyperspectral classification, we proposed a novel 3D-2D convolutional neural network (CNN) model named AD-HybridSN (Attention-Dense-HybridSN). In our proposed model, a dense block was used to reuse shallow features, aiming to better exploit hierarchical spatial-spectral features. Subsequent depth-separable convolutional layers were used to discriminate the spatial information. Further refinement of spatial-spectral features was realized by the channel attention method and spatial attention method, which were applied after every 3D convolutional layer and every 2D convolutional layer, respectively. Experiment results indicate that our proposed model can learn more discriminative spatial-spectral features using very few training data. On Indian Pines, Salinas, and the University of Pavia, AD-HybridSN obtains 97.02%, 99.59%, and 98.32% overall accuracy using only 5%, 1%, and 1% labeled data for training, respectively, which is far better than all the contrast models.

18.
Sensors (Basel) ; 20(6)2020 Mar 16.
Article in English | MEDLINE | ID: mdl-32188082

ABSTRACT

In recent years, deep learning methods have been widely used in hyperspectral image (HSI) classification tasks. Among them, spectral-spatial combined methods based on three-dimensional (3-D) convolution have shown good performance. However, because of the three-dimensional convolution, increasing network depth results in a dramatic rise in the number of parameters. In addition, previous methods do not make full use of spectral information: they mostly use the data after dimensionality reduction directly as the input of networks, which results in poor classification ability for some categories with small numbers of samples. To address these two issues, in this paper we designed an end-to-end 3D-ResNeXt network that further adopts feature fusion and a label smoothing strategy. On the one hand, the residual connections and split-transform-merge strategy can alleviate the declining-accuracy phenomenon and decrease the number of parameters. We can adjust the hyperparameter cardinality, instead of the network depth, to extract more discriminative features of HSIs and improve the classification accuracy. On the other hand, in order to improve the classification accuracy for classes with small numbers of samples, we enrich the input of the 3D-ResNeXt spectral-spatial feature learning network with additional spectral feature learning, and finally use a loss function modified by a label smoothing strategy to address class imbalance. The experimental results on three popular HSI datasets demonstrate the superiority of our proposed network and an effective improvement in accuracy, especially for classes with small numbers of training samples.
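The label smoothing modification mentioned above is simple enough to state in full: spread a small probability mass `eps` from the true class over the other classes before taking the cross-entropy. A numpy sketch (the logits and `eps` value are illustrative):

```python
import numpy as np

def smoothed_cross_entropy(logits, targets, n_classes, eps=0.1):
    """Cross-entropy against label-smoothed targets: the true class gets
    1 - eps and the remaining eps is spread evenly over the other classes,
    softening over-confident predictions on under-represented classes."""
    soft = np.full((len(targets), n_classes), eps / (n_classes - 1))
    soft[np.arange(len(targets)), targets] = 1.0 - eps
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(soft * logp).sum(axis=1).mean()

logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
hard = smoothed_cross_entropy(logits, np.array([0, 1]), 3, eps=0.0)
soft = smoothed_cross_entropy(logits, np.array([0, 1]), 3, eps=0.1)
```

Smoothing raises the loss on confident correct predictions, which is the intended regularizing pressure.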

19.
Sensors (Basel) ; 19(23)2019 Nov 29.
Article in English | MEDLINE | ID: mdl-31795511

ABSTRACT

Every pixel in a hyperspectral image contains detailed spectral information in hundreds of narrow bands captured by hyperspectral sensors. Pixel-wise classification of a hyperspectral image is the cornerstone of various hyperspectral applications. Nowadays, deep learning models represented by the convolutional neural network (CNN) provide an ideal solution for feature extraction and have made remarkable achievements in supervised hyperspectral classification. However, hyperspectral image annotation is time-consuming and laborious, and available training data is usually limited. Due to this "small-sample problem", CNN-based hyperspectral classification is still challenging. Focused on limited-sample hyperspectral classification, we designed an 11-layer CNN model called R-HybridSN (Residual-HybridSN) from the perspective of network optimization. With an organic combination of 3D-2D-CNN, residual learning, and depth-separable convolutions, R-HybridSN can better learn deep hierarchical spatial-spectral features with very few training data. The performance of R-HybridSN is evaluated over three publicly available hyperspectral datasets with different amounts of training samples. Using only 5%, 1%, and 1% labeled data for training on Indian Pines, Salinas, and the University of Pavia, the classification accuracies of R-HybridSN are 96.46%, 98.25%, and 96.59%, respectively, which is far better than the contrast models.

20.
Sensors (Basel) ; 19(15)2019 Jul 25.
Article in English | MEDLINE | ID: mdl-31349589

ABSTRACT

Hyperspectral remote sensing images (HSIs) have great research and application value. At present, deep learning has become an important method for studying image processing. The Generative Adversarial Network (GAN) model is a typical deep learning network developed in recent years, and it can also be used to classify HSIs. However, some problems remain in the classification of HSIs. On the one hand, due to the phenomenon of different objects sharing the same spectrum, generating samples from spectral samples with the original GAN model alone produces incorrect detail information. On the other hand, gradients vanish in the original GAN model, and the scoring ability of a single discriminator limits the quality of the generated samples. To solve these problems, we introduce a scoring mechanism based on multi-discriminator collaboration and perform semi-supervised classification on three hyperspectral data sets. Compared with the original GAN model with a single discriminator, the adjusted criterion is more rigorous and accurate, and the generated samples show more accurate characteristics. To address the mode collapse and lack of diversity of samples generated by the original single-discriminator GAN, this paper proposes multi-discriminator generative adversarial networks (MDGANs) and studies the influence of the number of discriminators on the classification results. The experimental results show that introducing multiple discriminators improves the judgment ability of the model, ensures the quality of generated samples, solves the problem of noise in generated spectral samples, and can improve the classification of HSIs. At the same time, the number of discriminators has different effects on different data sets.
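The collaborative scoring idea can be sketched abstractly: several discriminators each score a sample, and their scores are combined (here by a simple average; the paper's exact combination rule is not stated in the abstract, and the linear-plus-sigmoid discriminators below are placeholders, not a real GAN):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ensemble_score(sample, discriminators):
    """Average the real/fake scores of several (toy, linear) discriminators;
    a generator must satisfy all of them, tightening the criterion a single
    discriminator would apply alone."""
    return float(np.mean([sigmoid(sample @ w + b)
                          for w, b in discriminators]))

rng = np.random.default_rng(0)
d = 16
discriminators = [(rng.normal(size=d), rng.normal()) for _ in range(4)]
sample = rng.normal(size=d)       # stand-in for a generated spectrum
score = ensemble_score(sample, discriminators)
```

A sample that fools one discriminator but not the others still receives a low average score, which is the mechanism the abstract credits with reducing mode collapse and noisy generated spectra.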
