 
Publication

Hyperspectral Image Classification: An Analysis Employing CNN, LSTM, Transformer, and Attention Mechanism

dc.contributor.authorViel, Felipe
dc.contributor.authorMaciel, Renato Cotrimen
dc.contributor.authorSeman, Laio Oriel
dc.contributor.authorZeferino, Cesar Albenes
dc.contributor.authorBezerra, Eduardo
dc.contributor.authorLeithardt, Valderi
dc.date.accessioned2023-03-17T11:14:48Zen
dc.date.available2023-03-17T11:14:48Zen
dc.date.issued2023en
dc.date.updated2023-03-14T10:47:25Z
dc.description.abstractHyperspectral images contain tens to hundreds of spectral bands, giving them a high spectral resolution. This high spectral resolution makes it possible to obtain a precise signature of the structures and compounds that make up the captured scene. Among the types of processing that can be applied to Hyperspectral Images, classification using machine learning models stands out. Classification is one of the most relevant steps for this type of image and can extract information from spectral data, spatial data, or their spatial-spectral fusion. Artificial Neural Network models have been gaining prominence among existing classification techniques and can be applied to data with one, two, or three dimensions. Given the above, this work evaluates Convolutional Neural Network models with one, two, and three dimensions to identify the impact of the different types of convolution on Hyperspectral Image classification. We also extend the comparison to Recurrent Neural Network models, the Attention Mechanism, and the Transformer architecture. Furthermore, a novel pre-processing method is proposed for the classification pipeline to avoid data leakage between the training, validation, and testing sets. The results demonstrate that the one-dimensional Convolutional Neural Network (1D-CNN), Long Short-Term Memory (LSTM), and Transformer architectures reduce memory consumption and per-sample processing time while maintaining satisfactory classification performance, reaching up to 99% accuracy on the larger datasets. In addition, the Transformer architecture can approach the accuracy of the 2D-CNN and 3D-CNN architectures using only spectral information. The results also show that two- and three-dimensional convolution layers improve accuracy at the cost of greater memory consumption and processing time per sample. Furthermore, the pre-processing methodology guarantees the disassociation of training and testing data.pt_PT
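
As a purely illustrative companion to the abstract (this is not the authors' implementation; the framework choice, layer sizes, and class count are assumptions), the sketch below shows a 1D-CNN of the kind compared in the paper, classifying each hyperspectral pixel from its spectral vector alone:

# Minimal sketch, assuming PyTorch; architecture and hyperparameters are
# illustrative, not taken from the paper.
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Classifies a single hyperspectral pixel from its spectral signature."""
    def __init__(self, num_classes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),  # convolve along the spectral axis
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # collapse the spectral dimension
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.unsqueeze(1)                  # (batch, bands) -> (batch, 1, bands)
        x = self.features(x).squeeze(-1)    # (batch, 64)
        return self.classifier(x)           # (batch, num_classes)

if __name__ == "__main__":
    model = Spectral1DCNN()
    pixels = torch.randn(8, 200)            # 8 dummy pixels with 200 spectral bands
    print(model(pixels).shape)              # torch.Size([8, 16])

A pixel-wise model like this uses only spectral information, which is why the abstract reports lower memory and per-sample time for 1D-CNN, LSTM, and Transformer; the 2D-CNN and 3D-CNN variants would instead operate on spatial patches around each pixel.
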
dc.description.versionN/Apt_PT
dc.identifier.doi10.1109/ACCESS.2023.3255164pt_PT
dc.identifier.slugcv-prod-3167791
dc.identifier.urihttp://hdl.handle.net/10400.26/44201en
dc.language.isoengpt_PT
dc.peerreviewedyespt_PT
dc.publisherIEEEpt_PT
dc.relationCOPELABS - Cognitive and People-centric Computing R&D Unit
dc.relationResearch Center for Endogenous Resource Valorization
dc.subjectHyperspectral imagingpt_PT
dc.subjectCNNpt_PT
dc.subjectLSTMpt_PT
dc.subjectTransformerpt_PT
dc.subjectRemote sensingpt_PT
dc.titleHyperspectral Image Classification: An Analysis Employing CNN, LSTM, Transformer, and Attention Mechanismpt_PT
dc.typejournal article
dspace.entity.typePublication
oaire.awardTitleCOPELABS - Cognitive and People-centric Computing R&D Unit
oaire.awardTitleResearch Center for Endogenous Resource Valorization
oaire.awardURIinfo:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F04111%2F2020/PT
oaire.awardURIinfo:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F05064%2F2020/PT
oaire.citation.titleIEEE Accesspt_PT
oaire.fundingStream6817 - DCRRNI ID
oaire.fundingStream6817 - DCRRNI ID
person.familyNameViel
person.familyNameSeman
person.familyNameZeferino
person.familyNameBezerra
person.familyNameReis Quietinho Leithardt
person.givenNameFelipe
person.givenNameLaio Oriel
person.givenNameCesar
person.givenNameEduardo
person.givenNameValderi
person.identifierJsOq45sAAAAJ
person.identifier.ciencia-id0614-5834-E7F3
person.identifier.orcid0000-0002-0972-2160
person.identifier.orcid0000-0002-6806-9122
person.identifier.orcid0000-0003-3039-4410
person.identifier.orcid0000-0002-2191-6064
person.identifier.orcid0000-0003-0446-9271
person.identifier.ridJ-1337-2014
person.identifier.scopus-author-id55608292500
person.identifier.scopus-author-id6507922313
person.identifier.scopus-author-id6701315162
person.identifier.scopus-author-id35303109600
project.funder.identifierhttp://doi.org/10.13039/501100001871
project.funder.identifierhttp://doi.org/10.13039/501100001871
project.funder.nameFundação para a Ciência e a Tecnologia
project.funder.nameFundação para a Ciência e a Tecnologia
rcaap.cv.cienciaid0614-5834-E7F3 | Valderi Reis Quietinho LeithardtPT
rcaap.rightsopenAccesspt_PT
rcaap.typearticlept_PT
relation.isAuthorOfPublicationaeff9a41-c947-4fdd-ba7f-3f0b226e396c
relation.isAuthorOfPublicationc05c06d4-41e9-46ff-a564-b0eaa379e245
relation.isAuthorOfPublication27387961-f89b-4dc7-8119-065acfd503b3
relation.isAuthorOfPublicationa26ad349-8363-4049-8af2-f8e97a960398
relation.isAuthorOfPublicationab15f7c6-e882-406e-813d-2629e9cec5c8
relation.isAuthorOfPublication.latestForDiscoveryab15f7c6-e882-406e-813d-2629e9cec5c8
relation.isProjectOfPublication7a9b4ee8-2a94-4a7b-bd54-c01b8971564a
relation.isProjectOfPublication05110dfa-e1d5-4a80-ad88-5c36b9c4552f
relation.isProjectOfPublication.latestForDiscovery7a9b4ee8-2a94-4a7b-bd54-c01b8971564a

Files

Original bundle
Name:
Hyperspectral_Image_Classification_An_Analysis_Employing_CNN_LSTM_Transformer_and_Attention_Mechanism.pdf
Size:
3.15 MB
Format:
Adobe Portable Document Format
License bundle
Name:
license.txt
Size:
1.89 KB
Format:
Item-specific license agreed upon to submission
Description: