Minimal learning machine in hyperspectral imaging classification
Hakola, A.-M., & Pölönen, I. (2020). Minimal learning machine in hyperspectral imaging classification. In L. Bruzzone, F. Bovolo, & E. Santi (Eds.), Image and Signal Processing for Remote Sensing XXVI (Article 115330R). SPIE. Proceedings of SPIE: the International Society for Optical Engineering, 11533. https://doi.org/10.1117/12.2573578
© 2020 SPIE
A hyperspectral (HS) image is typically a stack of frames, where each frame represents the intensity of a different wavelength of light. Each spatial pixel thus has a spectrum, and in HS image classification each spectrum is classified pixel by pixel. In some real-time applications, the amount of HS image data causes performance challenges. These relate to the payload restrictions of the platform (e.g., a drone), the limits of the available energy, and the complexity of the machine learning models. In this study, we introduce the minimal learning machine (MLM) as a computationally cheap training and classification method for hyperspectral imaging classification. MLM is a distance-based method that learns a mapping between input and output distances. The input distances are the distances between the training set and a subset R of it; the output distances are the corresponding distances between the label values of the training set and those of R. We propose a training point selection framework that reduces the number of data points in R by selecting points class by class, in the direction of the principal components of each class. We test MLM's performance against four other classification methods (Random Forest, Artificial Neural Network, Support Vector Machine, and Nearest Neighbours) on three well-known hyperspectral data sets. As the main outcomes, we show how performance is affected by the size of the subset R, and we compare the MLM with our subset selection method to the MLM with randomly selected training points. Results show that MLM is a computationally efficient way to train on large training sets: it reduces the complexity of the analysis and provides computational benefits over the other models. The proposed framework offers tools that can improve the MLM's classification time and accuracy rate compared to the MLM with randomly picked training points. ...
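To make the distance-mapping idea concrete, the following is a minimal sketch of the basic MLM with one-hot class labels. It is not the paper's implementation: the function names are illustrative, reference points are taken at a fixed stride rather than via the proposed class-wise principal-component selection, and classification uses a simple discrete multilateration (evaluate each class's one-hot code as a candidate output and keep the cheapest).

```python
import numpy as np

def mlm_fit(X, Y, ref_idx):
    """Learn a linear map B between input and output distance matrices.

    X: (n, d) training spectra; Y: (n, c) one-hot labels;
    ref_idx: indices of the reference subset R.
    """
    R, T = X[ref_idx], Y[ref_idx]
    # Input distances: every training point to every reference point, (n, m)
    Dx = np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)
    # Output distances: every label to every reference label, (n, m)
    Dy = np.linalg.norm(Y[:, None, :] - T[None, :, :], axis=2)
    # Least-squares solution of Dx @ B ~= Dy
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)
    return R, T, B

def mlm_predict(x, R, T, B, classes):
    """Classify one spectrum by matching predicted output distances."""
    dx = np.linalg.norm(x[None, :] - R, axis=1)   # distances to R, (m,)
    dy_hat = dx @ B                               # predicted output distances
    # Try each class's one-hot code as the candidate output point
    costs = [np.sum((np.linalg.norm(c[None, :] - T, axis=1) ** 2
                     - dy_hat ** 2) ** 2) for c in classes]
    return int(np.argmin(costs))
```

On two well-separated synthetic blobs with a handful of reference points, this sketch already separates the classes; the paper's contribution lies in how R is chosen and how its size trades accuracy against cost.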