1.
Sensors (Basel) ; 22(17)2022 Aug 28.
Article in English | MEDLINE | ID: mdl-36080933

ABSTRACT

The required navigation performance (RNP) procedure is one of the two basic navigation specifications of the performance-based navigation (PBN) concept proposed by the International Civil Aviation Organization (ICAO). By integrating global navigation infrastructures, it aims to improve the utilization efficiency of airspace and to reduce flight delays and the dependence on ground navigation facilities. The approach stage is one of the most important and difficult stages of a whole flight. In this study, we propose a deep reinforcement learning (DRL)-based method for RNP procedure execution, DRL-RNP. By conducting an RNP approach procedure, a DRL algorithm was implemented with a fixed-wing aircraft to explore, guided by a reward signal, a path of minimum fuel consumption under windy conditions in compliance with the RNP safety specifications. The experimental results demonstrate that the six-degrees-of-freedom aircraft controlled by the DRL algorithm can successfully complete the RNP procedure whilst meeting the safety specifications for protection areas and obstacle clearance altitude throughout the whole procedure. In addition, the potential path with minimum fuel consumption can be explored effectively. Hence, the DRL method can be used not only to implement the RNP procedure with a simulated aircraft but also to help verify and evaluate the RNP procedure.
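A minimal sketch of how such a reward might combine fuel burn with RNP lateral containment, assuming hypothetical inputs (fuel flow, cross-track error) and illustrative weights; the paper's actual reward shaping may differ:

```python
# Hedged sketch: a per-step reward that penalises fuel burn and violations
# of an RNP lateral containment limit (e.g. RNP 0.3 => 0.3 NM).
# Field names and weights are illustrative assumptions, not the paper's.

def rnp_step_reward(fuel_flow_kg_s: float,
                    cross_track_error_nm: float,
                    rnp_limit_nm: float = 0.3,
                    fuel_weight: float = 1.0,
                    violation_penalty: float = 100.0) -> float:
    reward = -fuel_weight * fuel_flow_kg_s          # minimise fuel consumption
    if abs(cross_track_error_nm) > rnp_limit_nm:    # outside the protection area
        reward -= violation_penalty
    return reward
```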


Subject(s)
Aviation, Aircraft, Algorithms, Reward
2.
Article in English | MEDLINE | ID: mdl-35935666

ABSTRACT

To suppress the spread of COVID-19, accurate diagnosis at an early stage is crucial; chest screening with radiography imaging plays an important role in addition to the real-time reverse transcriptase polymerase chain reaction (RT-PCR) swab test. Due to the limited data, existing models suffer from ineffective feature extraction and poor network convergence and optimization. Accordingly, a multi-stage residual network, MSRCovXNet, is proposed for effective detection of COVID-19 from chest X-ray (CXR) images. As a shallow yet effective classifier with ResNet-18 as the feature extractor, MSRCovXNet is optimized by fusing two proposed feature enhancement modules (FEMs) applied to the low-level and high-level feature maps (LLFMs and HLFMs), which contain more local information and richer semantic information, respectively. For effective fusion of these two features, a single-stage FEM (SSFEM) and a multi-stage FEM (MSFEM) are proposed to enhance the semantic feature representation of the LLFMs and the local feature representation of the HLFMs, respectively. Without ensembling other deep learning models, our MSRCovXNet achieves a precision of 98.9% and a recall of 94% in detecting COVID-19, outperforming several state-of-the-art models. When evaluated on the COVIDGR dataset, an average accuracy of 82.2% is achieved, leading other methods by at least 1.2%.
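As a rough illustration of fusing ResNet-18's low- and high-level feature maps (the module name and exact wiring below are assumptions, not the published MSRCovXNet design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSketch(nn.Module):
    """Hedged sketch: upsample semantically rich high-level maps and merge
    them with spatially detailed low-level maps before classification."""

    def __init__(self, low_ch: int = 64, high_ch: int = 512, n_classes: int = 2):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, low_ch, kernel_size=1)   # align channels
        self.head = nn.Linear(low_ch * 2, n_classes)

    def forward(self, llfm: torch.Tensor, hlfm: torch.Tensor) -> torch.Tensor:
        high = F.interpolate(self.reduce(hlfm), size=llfm.shape[2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([llfm, high], dim=1)        # joint local + semantic cues
        pooled = F.adaptive_avg_pool2d(fused, 1).flatten(1)
        return self.head(pooled)
```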

3.
IEEE J Biomed Health Inform ; 26(8): 4032-4043, 2022 08.
Article in English | MEDLINE | ID: mdl-35613061

ABSTRACT

The pandemic of COVID-19 has become a global crisis in public health, which has led to a massive number of deaths and severe economic degradation. To suppress the spread of COVID-19, accurate diagnosis at an early stage is crucial. As the popularly used real-time reverse transcriptase polymerase chain reaction (RT-PCR) swab test can be lengthy and inaccurate, chest screening with radiography imaging is still preferred. However, due to limited image data and the difficulty of the early-stage diagnosis, existing models suffer from ineffective feature extraction and poor network convergence and optimisation. To tackle these issues, a segmentation-based COVID-19 classification network, namely SC2Net, is proposed for effective detection of COVID-19 from chest X-ray (CXR) images. The SC2Net consists of two subnets: a COVID-19 lung segmentation network (CLSeg) and a spatial attention network (SANet). In order to suppress interference from the background, the CLSeg is first applied to segment the lung region from the CXR. The segmented lung region is then fed to the SANet for classification and diagnosis of COVID-19. As a shallow yet effective classifier, SANet takes ResNet-18 as the feature extractor and enhances high-level features via the proposed spatial attention module. For performance evaluation, the COVIDGR 1.0 dataset is used, which is a high-quality dataset covering various severity levels of COVID-19. Experimental results have shown that our SC2Net achieves an average accuracy of 84.23% and an average F1 score of 81.31% in detecting COVID-19, outperforming several state-of-the-art approaches.
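A common form of spatial attention (shown below as a generic sketch; SANet's actual module may differ) weights each location of a feature map using pooled channel statistics:

```python
import torch
import torch.nn as nn

class SpatialAttentionSketch(nn.Module):
    """Hedged sketch of a generic spatial attention block: pool across
    channels, infer a per-pixel weight map, and rescale the features."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)            # channel-average statistic
        max_map = x.amax(dim=1, keepdim=True)            # channel-max statistic
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                  # emphasise salient regions
```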


Subject(s)
COVID-19, Algorithms, COVID-19/diagnostic imaging, Humans, Neural Networks, Computer, Radiography, Thoracic/methods, X-Rays
4.
IEEE Trans Cybern ; 52(1): 215-227, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32217492

ABSTRACT

Band selection has become a significant issue for the efficiency of hyperspectral image (HSI) processing. Although many unsupervised band selection (UBS) approaches have been developed in recent decades, a flexible and robust method is still lacking. The lack of a proper understanding of the HSI data structure has resulted in inconsistency in the outcomes of UBS. Besides, most UBS methods either rely on complicated measurements or are rather noise-sensitive, which hinders the efficiency of the determined band subset. In this article, an adaptive distance-based band hierarchy (ADBH) clustering framework is proposed for UBS in HSI, which can help to avoid noisy bands whilst reflecting the hierarchical data structure of HSI. With a tree-hierarchy-based framework, a band subset of any size can be acquired. By introducing a novel adaptive distance into the hierarchy, the similarity between bands and band groups can be computed straightforwardly whilst reducing the effect of noisy bands. Experiments on four datasets acquired from two HSI systems have fully validated the superiority of the proposed framework.
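To illustrate the general idea of hierarchy-based band selection (using plain correlation distance rather than the paper's adaptive distance, which is an assumption on my part):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_bands(cube: np.ndarray, n_bands: int) -> list:
    """Hedged sketch: cluster spectral bands hierarchically and keep one
    representative per cluster. cube has shape (rows, cols, bands)."""
    bands = cube.reshape(-1, cube.shape[-1]).T           # (bands, pixels)
    dist = 1.0 - np.abs(np.corrcoef(bands))              # correlation distance
    tree = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(tree, t=n_bands, criterion="maxclust")
    picked = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = bands[members].mean(axis=0)
        # keep the band closest to its cluster centroid as representative
        picked.append(members[np.argmin(
            np.linalg.norm(bands[members] - centroid, axis=1))])
    return sorted(int(b) for b in picked)
```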


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Cluster Analysis
5.
IEEE Trans Cybern ; 52(7): 6158-6169, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34499610

ABSTRACT

Singular spectrum analysis (SSA) has recently been successfully applied to feature extraction in hyperspectral images (HSI), including conventional (1-D) SSA in the spectral domain and 2-D SSA in the spatial domain. However, there are some drawbacks, such as sensitivity to the window size, high computational complexity under a large window, and failure to extract joint spectral-spatial features. To tackle these issues, in this article we propose superpixelwise adaptive SSA (SpaSSA) for exploiting local spatial information of HSI. The extraction of local (instead of global) features, particularly in HSI, can be more effective for characterizing the objects within an image. In SpaSSA, conventional (1-D) SSA and 2-D SSA are combined and adaptively applied to each superpixel derived from an oversegmented HSI: according to the size of the derived superpixel, either 1-D SSA or 2-D SSA is applied for feature extraction, where the embedding window in 2-D SSA is also adapted to the size of the superpixel. Experimental results on three datasets have shown that the proposed SpaSSA outperforms both SSA and 2-D SSA in terms of classification accuracy and computational complexity. By combining SpaSSA with principal component analysis (SpaSSA-PCA), the accuracy of land-cover analysis can be further improved, outperforming several state-of-the-art approaches.

6.
Sensors (Basel) ; 21(20)2021 Oct 10.
Article in English | MEDLINE | ID: mdl-34695933

ABSTRACT

Variations in the quantity of plankton impact the entire marine ecosystem. It is therefore of great significance to accurately assess the dynamic evolution of plankton for monitoring the marine environment and global climate change. In this paper, a novel method is introduced for deep-sea plankton community detection in the marine ecosystem using an underwater robotic platform. The videos were sampled at a distance of 1.5 m from the ocean floor, with a focal length of 1.5-2.5 m. The optical flow field is used to detect the plankton community. We show that, for each moving plankton that does not overlap in space in two consecutive video frames, the time gradients of its spatial position are opposite to each other in two consecutive optical flow fields, whereas the lateral and vertical gradients have the same value and orientation in both fields. Accordingly, moving plankton can be accurately detected against the complex dynamic background of the deep-sea environment. Experimental comparison with manual ground truth fully validated the efficacy of the proposed methodology, which outperforms six state-of-the-art approaches.
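A rough sketch of the dense-optical-flow step on consecutive frames (Farneback flow and the magnitude test are my assumed simplifications; the paper's gradient-consistency criterion is more specific):

```python
import cv2
import numpy as np

def detect_motion(prev_gray: np.ndarray, curr_gray: np.ndarray,
                  next_gray: np.ndarray, mag_thresh: float = 1.0) -> np.ndarray:
    """Hedged sketch: compute two consecutive dense flow fields and flag
    pixels exhibiting sustained motion in both fields."""
    params = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3,
                  poly_n=5, poly_sigma=1.2, flags=0)
    flow_a = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None, **params)
    flow_b = cv2.calcOpticalFlowFarneback(curr_gray, next_gray, None, **params)
    mag_a = np.linalg.norm(flow_a, axis=2)               # per-pixel flow magnitude
    mag_b = np.linalg.norm(flow_b, axis=2)
    moving = (mag_a > mag_thresh) & (mag_b > mag_thresh)
    return moving.astype(np.uint8) * 255                 # binary motion mask
```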


Subject(s)
Plankton, Climate Change, Ecosystem, Oceans and Seas
7.
Comput Biol Med ; 131: 104245, 2021 04.
Article in English | MEDLINE | ID: mdl-33556893

ABSTRACT

BACKGROUND: Deep learning (DL) is the fastest-growing field of machine learning (ML). Deep convolutional neural networks (DCNN) are currently the main tool used for image analysis and classification purposes. There are several DCNN architectures, among them AlexNet, GoogLeNet, and residual networks (ResNet). METHOD: This paper presents a new computer-aided diagnosis (CAD) system based on feature extraction and classification using DL techniques to help radiologists classify breast cancer lesions in mammograms. This is performed through four different experiments to determine the optimum approach. The first consists of end-to-end, pre-trained, fine-tuned DCNNs. In the second, the deep features of the DCNNs are extracted and fed to a support vector machine (SVM) classifier with different kernel functions. The third experiment performs deep-feature fusion to demonstrate that combining deep features enhances the accuracy of the SVM classifiers. Finally, in the fourth experiment, principal component analysis (PCA) is introduced to reduce the large feature vector produced by feature fusion and to decrease the computational cost. The experiments are performed on two datasets: (1) the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM); and (2) the mammographic image analysis society digital mammogram database (MIAS). RESULTS: The accuracy achieved using deep-feature fusion on both datasets proved to be the highest compared to state-of-the-art CAD systems. Conversely, when applying PCA to the feature-fusion sets, the accuracy did not improve; however, the computational cost decreased as the execution time decreased.
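In outline, the feature-fusion and PCA experiments could be reproduced along these lines (a sketch assuming pre-extracted per-network feature arrays; names and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fused_svm(feats_a: np.ndarray, feats_b: np.ndarray,
              labels: np.ndarray, use_pca: bool = True):
    """Hedged sketch: concatenate deep features from two DCNNs, optionally
    compress with PCA, and train an SVM classifier on the result."""
    fused = np.hstack([feats_a, feats_b])       # deep-feature fusion
    steps = [StandardScaler()]
    if use_pca:
        steps.append(PCA(n_components=0.95))    # keep 95% of the variance
    steps.append(SVC(kernel="rbf"))
    model = make_pipeline(*steps)
    return model.fit(fused, labels)
```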


Subject(s)
Breast Neoplasms, Mammography, Breast, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer, Female, Humans, Neural Networks, Computer
8.
Sensors (Basel) ; 20(20)2020 Oct 13.
Article in English | MEDLINE | ID: mdl-33066123

ABSTRACT

Melanoma recognition is challenging due to data imbalance, high intra-class variation, and large inter-class similarity. To address these issues, we propose a melanoma recognition method for dermoscopy images using a deep convolutional neural network with a covariance discriminant loss. The deep convolutional neural network is trained under the joint supervision of a cross-entropy loss and the covariance discriminant loss, rectifying the model outputs and the extracted features simultaneously. Specifically, we design an embedding loss, namely the covariance discriminant loss, which takes the first- and second-order distances into account simultaneously to provide more constraints. By constraining the distance between hard samples and the minority-class center, the deep features of melanoma and non-melanoma can be separated effectively. We also design a corresponding algorithm to mine the hard samples. Further, we analyze the relationship between the proposed loss and other losses. On the International Symposium on Biomedical Imaging (ISBI) 2018 Skin Lesion Analysis dataset, the two schemes in the proposed method yield sensitivities of 0.942 and 0.917, respectively. The comprehensive results demonstrate the efficacy of the designed embedding loss and the proposed methodology.
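Schematically, joint supervision with an embedding penalty on minority-class features might look like this (a simplified first-order-only stand-in; the paper's covariance discriminant loss also constrains second-order statistics):

```python
import torch
import torch.nn.functional as F

def joint_loss(logits: torch.Tensor, feats: torch.Tensor,
               targets: torch.Tensor, minority_center: torch.Tensor,
               minority_label: int = 1, lam: float = 0.1) -> torch.Tensor:
    """Hedged sketch: cross-entropy plus a pull of minority-class (melanoma)
    features towards their class center; second-order terms omitted."""
    ce = F.cross_entropy(logits, targets)
    mask = targets == minority_label
    if mask.any():
        pull = ((feats[mask] - minority_center) ** 2).sum(dim=1).mean()
    else:
        pull = feats.new_zeros(())                  # no minority samples in batch
    return ce + lam * pull
```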


Subject(s)
Dermoscopy, Melanoma, Neural Networks, Computer, Skin Neoplasms, Algorithms, Deep Learning, Humans, Melanoma/diagnostic imaging, Skin Neoplasms/diagnostic imaging
9.
IEEE J Biomed Health Inform ; 24(12): 3551-3563, 2020 12.
Article in English | MEDLINE | ID: mdl-32997638

ABSTRACT

The novel coronavirus disease 2019 (COVID-19) pandemic has led to a worldwide crisis in public health. It is crucial that we understand the epidemiological trends and the impact of non-pharmacological interventions (NPIs), such as lockdowns, for effective management of the disease and control of its spread. We develop and validate a novel intelligent computational model to predict the epidemiological trends of COVID-19, with the model parameters enabling an evaluation of the impact of NPIs. By representing the number of daily confirmed cases (NDCC) as a time series, we assume that, with or without NPIs, the pattern of the pandemic satisfies a series of Gaussian distributions according to the central limit theorem. The underlying pandemic trend is first extracted using a singular spectrum analysis (SSA) technique, which decomposes the NDCC time series into the sum of a small number of independent and interpretable components, such as a slowly varying trend, oscillatory components, and structureless noise. We then use a mixture of Gaussian fitting (GF) to derive a novel predictive model for the SSA-extracted NDCC incidence trend, with the overall model termed SSA-GF. Our proposed model is shown to accurately predict the NDCC trend, peak daily cases, the length of the pandemic period, the total confirmed cases and the associated dates of the turning points on the cumulative NDCC curve. Further, the three key model parameters, specifically the amplitude (alpha), mean (mu) and standard deviation (sigma), are linked to the underlying pandemic patterns and enable a directly interpretable evaluation of the impact of NPIs, such as strict lockdowns and travel restrictions. The predictive model is validated using available data from China and South Korea, and new predictions are made, partially requiring future validation, for the cases of Italy, Spain, the UK and the USA. Comparative results demonstrate that the introduction of consistent control measures across countries can lead to the development of similar parametric models, reflected in particular by relative variations in their underlying sigma, alpha and mu values. The paper concludes with a number of open questions and outlines future research directions.
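The Gaussian-fitting step amounts to least-squares estimation of (alpha, mu, sigma) from the extracted trend, roughly as below (a single-component sketch on synthetic data; the paper fits a mixture):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, alpha, mu, sigma):
    """Daily-cases pulse: amplitude alpha, peak day mu, spread sigma."""
    return alpha * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

# Synthetic SSA-extracted trend standing in for real NDCC data.
t = np.arange(120, dtype=float)
trend = gaussian(t, alpha=3000.0, mu=45.0, sigma=10.0)
trend += np.random.default_rng(0).normal(0.0, 30.0, t.size)

(alpha, mu, sigma), _ = curve_fit(gaussian, t, trend, p0=[1000.0, 60.0, 20.0])
print(f"peak={alpha:.0f} cases/day on day {mu:.1f}, spread {sigma:.1f} days")
```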


Subject(s)
Artificial Intelligence, COVID-19/therapy, COVID-19/epidemiology, COVID-19/virology, Humans, SARS-CoV-2/isolation & purification, Spain/epidemiology
10.
Sensors (Basel) ; 19(6)2019 Mar 18.
Article in English | MEDLINE | ID: mdl-30889902

ABSTRACT

Traditional industry is seeing an increasing demand for more autonomous and flexible manufacturing in unstructured settings, a shift away from the fixed, isolated workspaces where robots perform predefined actions repetitively. This work presents a case study in which a robotic manipulator, namely a KUKA KR90 R3100, is provided with smart sensing capabilities such as vision and adaptive reasoning for real-time collision avoidance and online path planning in dynamically changing environments. A machine vision module based on low-cost cameras and color detection in the hue, saturation, value (HSV) space is developed to make the robot aware of its changing environment, allowing the detection and localization of a randomly moving obstacle. Path correction to avoid collision with such obstacles is achieved by exploiting an adaptive path-planning module along with a dedicated robot-control module, with the three modules running simultaneously. These smart sensing capabilities allow smooth interactions between the robot and its dynamic environment, where the robot reacts to dynamic changes through autonomous thinking and reasoning, with reaction times below the average human reaction time. The experimental results demonstrate that effective human-robot and robot-robot interactions can be realized through the innovative integration of emerging sensing techniques, efficient planning algorithms and systematic designs.
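HSV-based obstacle localization can be sketched as a threshold-and-contour step (the color bounds below are illustrative assumptions for a red marker, not the study's calibration):

```python
import cv2
import numpy as np

def locate_obstacle(frame_bgr: np.ndarray):
    """Hedged sketch: threshold a color range in HSV space and return the
    pixel centroid of the largest matching blob, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) centroid
```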

11.
Sensors (Basel) ; 19(6)2019 Mar 22.
Article in English | MEDLINE | ID: mdl-30909489

ABSTRACT

Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data is generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.


Subject(s)
Brain/physiology, Electroencephalography/methods, Brain-Computer Interfaces, Evoked Potentials, Humans, Principal Component Analysis, Signal Processing, Computer-Assisted
12.
PeerJ ; 7: e6201, 2019.
Article in English | MEDLINE | ID: mdl-30713814

ABSTRACT

It is important to detect breast cancer as early as possible. In this manuscript, a new methodology for classifying breast cancer using deep learning and segmentation techniques is introduced. A new computer-aided detection (CAD) system is proposed for classifying benign and malignant mass tumors in breast mammography images. In this CAD system, two segmentation approaches are used. The first approach involves determining the region of interest (ROI) manually, while the second uses a threshold- and region-based technique. A deep convolutional neural network (DCNN) is used for feature extraction: the well-known AlexNet architecture is fine-tuned to classify two classes instead of 1,000. The last fully connected (fc) layer is connected to a support vector machine (SVM) classifier to obtain better accuracy. The results are obtained using the following publicly available datasets: (1) the digital database for screening mammography (DDSM); and (2) the curated breast imaging subset of DDSM (CBIS-DDSM). Training on a large amount of data gives a high accuracy rate; however, biomedical datasets contain a relatively small number of samples due to limited patient volume. Accordingly, data augmentation is used to increase the size of the input data by generating new data from the original input; of the many forms of data augmentation, the one used here is rotation. The accuracy of the newly trained DCNN architecture is 71.01% when cropping the ROI manually from the mammogram. The highest area under the curve (AUC) achieved was 0.88 (88%) for the samples obtained from both segmentation techniques. Moreover, when using the samples obtained from CBIS-DDSM, the accuracy of the DCNN increased to 73.6%. Consequently, the SVM accuracy became 87.2%, with an AUC equal to 0.94 (94%). This is the highest AUC value compared to previous work using the same conditions.
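Fine-tuning AlexNet for two classes and exposing its penultimate activations for an SVM can be sketched as follows (a generic torchvision recipe, not the authors' exact training setup):

```python
import torch
import torch.nn as nn
from torchvision import models

# Hedged sketch: load a pre-trained AlexNet and retarget it to 2 classes.
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
net.classifier[6] = nn.Linear(4096, 2)          # 1,000-way head -> benign/malignant

def deep_features(x: torch.Tensor) -> torch.Tensor:
    """Penultimate fc activations, usable as inputs to an SVM classifier."""
    with torch.no_grad():
        feats = net.features(x)
        feats = net.avgpool(feats).flatten(1)
        return net.classifier[:6](feats)        # stop before the final fc layer
```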

13.
Food Chem ; 270: 105-112, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30174023

ABSTRACT

In this study, ultra-violet (UV) and short-wave infra-red (SWIR) hyperspectral imaging (HSI) were used to measure the concentration of the phenolic flavour compounds on malted barley that are responsible for the smoky aroma of Scotch whisky. UV-HSI is a relatively unexplored technique that has the potential to detect specific absorptions of phenols, whereas SWIR-HSI has been proven to detect phenols in previous applications. Support vector machine classification and regression were applied to classify malts with ten different concentration levels of the compounds of interest and to estimate the concentration, respectively. The results reveal that UV-HSI, at its current stage of development, is unsuitable for this task, whereas SWIR-HSI is able to produce robust results, with a classification accuracy of 99.8% and, for regression, a squared correlation coefficient of 0.98 with a root mean squared error of 0.32 ppm. The results indicate that, with further testing and development, HSI may potentially be exploited in an industrial production environment.
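The regression stage reduces to fitting a support vector regressor on per-sample spectra, along these lines (array names and kernel settings are assumptions):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_concentration_model(spectra: np.ndarray, ppm: np.ndarray):
    """Hedged sketch: predict phenol concentration (ppm) from mean SWIR
    reflectance spectra, reporting RMSE and R^2 on a held-out split."""
    x_tr, x_te, y_tr, y_te = train_test_split(spectra, ppm, test_size=0.2,
                                              random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(x_tr, y_tr)
    pred = model.predict(x_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    return model, rmse, r2_score(y_te, pred)
```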


Subject(s)
Hordeum/chemistry, Phenols/analysis, Taste, Flavoring Agents/analysis, Odorants
14.
IEEE Trans Cybern ; 48(1): 436-447, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28055941

ABSTRACT

Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores the superior agents after fitness sorting in each iteration. Since the global property of Kbest remains unchanged throughout the evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper we propose a dynamic neighborhood learning (DNL) strategy to replace the Kbest model and thereby present a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of the evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of gravitational force calculations and thus alleviates the high computational cost of GSA.
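For context, the core GSA update that DNL modifies computes pairwise gravitational attraction among agents, roughly as follows (a standard full-population GSA step from the literature; DNLGSA would restrict the inner sum to a dynamic neighborhood):

```python
import numpy as np

def gsa_step(pos, vel, fitness, g_const, rng, eps=1e-12):
    """Hedged sketch of one gravitational search update. pos/vel have shape
    (n_agents, dims); better (lower) fitness yields larger mass."""
    worst, best = fitness.max(), fitness.min()
    mass = (worst - fitness) / (worst - best + eps)
    mass = mass / (mass.sum() + eps)                    # normalised masses
    n = pos.shape[0]
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):                              # full Kbest-style pool
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff) + eps
            force[i] += rng.random() * g_const * mass[i] * mass[j] * diff / dist
    accel = force / (mass[:, None] + eps)
    vel = rng.random(pos.shape) * vel + accel           # stochastic inertia
    return pos + vel, vel
```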

15.
Sensors (Basel) ; 17(11)2017 Nov 16.
Article in English | MEDLINE | ID: mdl-29144388

ABSTRACT

In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues showed similar morphologies with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data, turning it into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) classification of image features from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) classification of physical image features using deep-learning convolutional neural networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, in particular with CNNs and CNNs-SVM, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.


Subject(s)
Epithelium, Corneal/injuries, Algorithms, Animals, Image Enhancement, Neural Networks, Computer, Reproducibility of Results, Swine
16.
Sensors (Basel) ; 17(11)2017 Nov 13.
Article in English | MEDLINE | ID: mdl-29137159

ABSTRACT

As a new machine learning approach, the extreme learning machine (ELM) has received much attention due to its good performance. However, when directly applied to hyperspectral image (HSI) classification, its recognition rate is low because ELM does not use spatial information, which is very important for HSI classification. In view of this, this paper proposes a new framework for the spectral-spatial classification of HSI by combining ELM with loopy belief propagation (LBP). The original ELM is linear, and the nonlinear (or kernel) ELMs are an improvement of linear ELM (LELM). However, based on extensive experiments and analysis, it is found that LELM is a better choice than nonlinear ELM for the spectral-spatial classification of HSI. Furthermore, we exploit the marginal probability distribution, which uses the whole of the information in the HSI, and learn such a distribution using LBP. The proposed method not only maintains the fast speed of ELM but also greatly improves the classification accuracy. Experimental results on the well-known HSI datasets Indian Pines and Pavia University demonstrate the good performance of the proposed method.
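The ELM itself is compact enough to sketch in full: a random hidden layer followed by a least-squares solve for the output weights (a generic sigmoid-unit ELM, not the paper's exact configuration):

```python
import numpy as np

class ELMSketch:
    """Hedged sketch of a basic ELM: random hidden projection plus a
    pseudo-inverse solve; labels are one-hot encoded targets."""

    def __init__(self, n_hidden: int = 500, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, x: np.ndarray) -> np.ndarray:
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))    # sigmoid units

    def fit(self, x: np.ndarray, y_onehot: np.ndarray):
        self.w = self.rng.standard_normal((x.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(x)) @ y_onehot  # LS solve
        return self

    def predict(self, x: np.ndarray) -> np.ndarray:
        return self._hidden(x) @ self.beta                      # class scores
```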

17.
Appl Spectrosc ; 70(9): 1582-8, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27145984

ABSTRACT

Hyperspectral remote sensing is experiencing a dazzling proliferation of new sensors, platforms, systems, and applications with the introduction of novel, low-cost, low-weight sensors. Curiously, relatively little development is now occurring in the use of Fourier transform (FT) systems, which have the potential to operate at extremely high throughput without the use of a slit or the reductions in spatial and spectral resolution that thin-film-based mosaic sensors introduce. This study introduces a new physics-based analytical framework, called singular spectrum analysis (SSA), to process raw hyperspectral imagery collected with FT imagers, addressing some of the data-processing issues associated with the use of the inverse FT. Synthetic interferogram data are analyzed using SSA, which adaptively decomposes the original synthetic interferogram into several independent components associated with the signal, photon and system noise, and the field illumination pattern.
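One-dimensional SSA can be sketched end to end: embed the series into a trajectory matrix, take its SVD, and reconstruct chosen components by anti-diagonal averaging (a textbook SSA, not this study's tuned pipeline):

```python
import numpy as np

def ssa_components(series: np.ndarray, window: int, n_comp: int) -> np.ndarray:
    """Hedged sketch of basic SSA: returns the first n_comp elementary
    reconstructed series (rows), e.g. trend vs. noise for an interferogram."""
    n = series.size
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    traj = np.column_stack([series[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = np.zeros((n_comp, n))
    for c in range(n_comp):
        elem = s[c] * np.outer(u[:, c], vt[c])          # rank-1 component
        # Anti-diagonal averaging back to a 1-D series (index = i + j).
        for offset in range(n):
            idx = [(i, offset - i) for i in range(window)
                   if 0 <= offset - i < k]
            comps[c, offset] = np.mean([elem[i, j] for i, j in idx])
    return comps
```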

18.
Sci Rep ; 5: 14371, 2015 Sep 23.
Article in English | MEDLINE | ID: mdl-26394926

ABSTRACT

Although a significant amount of work has been performed by Dunhuang Cultural Research to preserve the ancient murals in the Mogao Grottoes, non-contact methods need to be developed to effectively evaluate the degree of flaking of the murals. In this study, we propose to evaluate the flaking by automatically analyzing hyperspectral images that were scanned at the site. Murals with various degrees of flaking were scanned in the 126th cave using a near-infrared (NIR) hyperspectral camera with a spectral range of approximately 900 to 1700 nm. The regions of interest (ROIs) of the murals were manually labeled and grouped into four levels: normal, slight, moderate, and severe. The average spectral data from each ROI and its group label were used to train our classification model. To predict the degree of flaking, we adopted four algorithms: deep belief networks (DBNs), partial least squares regression (PLSR), principal component analysis with a support vector machine (PCA + SVM), and principal component analysis with an artificial neural network (PCA + ANN). The experimental results show the effectiveness of our method. In particular, better results are obtained using DBNs when the training data contain a significant amount of striping noise.

19.
Comput Intell Neurosci ; 2015: 423581, 2015.
Article in English | MEDLINE | ID: mdl-26089862

ABSTRACT

The maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating the parameters of a probabilistic model, whilst SVM is, in this context, an optimization-based nonparametric method. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield a probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluation, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized as Gaussian/non-Gaussian distributed and balanced/unbalanced, and these characteristics are used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported that indicate how the combined classifier may work under various conditions.


Subject(s)
Decision Making/physiology, Learning/physiology, Likelihood Functions, Machine Learning, Base Sequence, Breast Neoplasms/classification, Databases, Factual/statistics & numerical data, Female, Humans, Male
20.
Appl Opt ; 53(20): 4440-9, 2014 Jul 10.
Article in English | MEDLINE | ID: mdl-25090063

ABSTRACT

Presented in a three-dimensional structure called a hypercube, hyperspectral imaging suffers from a large volume of data and a high computational cost for data analysis. To overcome such drawbacks, principal component analysis (PCA) has been widely applied for feature extraction and dimensionality reduction. However, a severe bottleneck is how to compute the PCA covariance matrix efficiently and avoid computational difficulties, especially when the spatial dimension of the hypercube is large. In this paper, structured covariance PCA (SC-PCA) is proposed for fast computation of the covariance matrix. In line with how spectral data are acquired, in either the push-broom or the tunable-filter method, different implementation schemes of SC-PCA are presented. As the proposed SC-PCA can determine the covariance matrix from partial covariance matrices in parallel, even without prior deduction of the mean vector, it facilitates real-time data analysis whilst the hypercube is acquired. This significantly reduces the scale of the required memory and also allows efficient onsite feature extraction and data reduction, benefiting subsequent tasks in the coding and compression, transmission, and analytics of hyperspectral data.
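The key identity (assembling a covariance matrix from per-chunk sums without knowing the mean in advance) can be sketched as follows; this is a generic streaming formulation, whereas the paper's partitioning schemes for push-broom and tunable-filter acquisition are more specific:

```python
import numpy as np

def streaming_covariance(chunks) -> np.ndarray:
    """Hedged sketch: accumulate sum(x) and sum(x x^T) per chunk of pixels
    (each chunk: shape (pixels, bands)), then form cov = E[xx^T] - m m^T."""
    n = 0
    s1 = None   # running sum of spectra
    s2 = None   # running sum of outer products
    for chunk in chunks:
        if s1 is None:
            bands = chunk.shape[1]
            s1 = np.zeros(bands)
            s2 = np.zeros((bands, bands))
        n += chunk.shape[0]
        s1 += chunk.sum(axis=0)
        s2 += chunk.T @ chunk                  # partial covariance contribution
    mean = s1 / n
    # Matches np.cov up to the 1/n vs 1/(n-1) normalisation factor.
    return s2 / n - np.outer(mean, mean)

# Usage: covariance accumulated line-by-line, as in a push-broom scan.
lines = (np.random.default_rng(1).random((64, 32)) for _ in range(10))
cov = streaming_covariance(lines)
```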
