Please use this identifier to cite or link to this item:
http://hdl.handle.net/123456789/1551
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Grewal, R | - |
dc.contributor.author | Kasana, G | - |
dc.contributor.author | Kasana, S | - |
dc.date.accessioned | 2024-10-03T10:25:47Z | - |
dc.date.available | 2024-10-03T10:25:47Z | - |
dc.date.issued | 2024-05 | - |
dc.identifier.uri | http://hdl.handle.net/123456789/1551 | - |
dc.description.abstract | This research presents an innovative technique for semantic segmentation of Hyperspectral Images (HSIs), with a focus on dimensionality reduction. The technique is applied to three distinct HSI landcover datasets, Indian Pines, Pavia University, and Salinas Valley, acquired from diverse sensors. HSIs are inherently multi-view structures, and their high dimensionality causes redundancy and computational overhead. The technique uses two Canonical Correlation Analysis (CCA) variants, Pairwise CCA (PCCA) and Multiple-Set CCA (MCCA), to extract features from multiple views of the input image simultaneously. The performance of PCCA and MCCA is compared with traditional Principal Component Analysis (PCA) on all datasets; the CCA variants, particularly MCCA, achieve higher Overall Accuracy (OA) for semantic segmentation than PCA. The analysis is extended by integrating machine learning classifiers for per-pixel prediction, demonstrating the effectiveness of the proposed PCCA-SVM and MCCA-SVM techniques. | en_US |
dc.title | A Novel Technique for Semantic Segmentation of Hyperspectral Images Using Multi-View Features | en_US |
Appears in Collections: | School of Interdisciplinary & Applied Sciences |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
A Novel Technique for Semantic Segmentation of Hyperspectral Images Using Multi-View Features.pdf | | 4 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.