
dc.contributor.author: Laakom, Firas
dc.contributor.author: Raitoharju, Jenni
dc.contributor.author: Iosifidis, Alexandros
dc.contributor.author: Gabbouj, Moncef
dc.date.accessioned: 2024-01-26T08:23:48Z
dc.date.available: 2024-01-26T08:23:48Z
dc.date.issued: 2024
dc.identifier.citation: Laakom, F., Raitoharju, J., Iosifidis, A., & Gabbouj, M. (2024). Reducing redundancy in the bottleneck representation of autoencoders. Pattern Recognition Letters, 178, 202-208. https://doi.org/10.1016/j.patrec.2024.01.013
dc.identifier.other: CONVID_202074852
dc.identifier.uri: https://jyx.jyu.fi/handle/123456789/93076
dc.description.abstract: Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. The encoder and decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only the information in the input data needed to reconstruct it and to reduce redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we propose an additional loss term, based on the pairwise covariances of the network units, which complements the data reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach across different tasks, namely dimensionality reduction, image compression, and image denoising. Experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss.
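The abstract describes the method at the level of its loss function: a penalty on the pairwise covariances of the bottleneck units is added to the usual reconstruction loss. As a rough illustration only, and not the authors' exact formulation, the PyTorch sketch below adds a squared off-diagonal covariance penalty to an MSE reconstruction loss; the layer sizes, the normalization of the penalty, and the weighting factor lam are assumptions made for this sketch.

# Minimal sketch, assuming a PyTorch setup (not the authors' exact formulation):
# the reconstruction loss is complemented by a penalty on the pairwise
# covariances of the bottleneck units. Layer sizes, the squared off-diagonal
# penalty, and the weight `lam` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiverseAE(nn.Module):
    def __init__(self, in_dim: int = 784, bottleneck_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, bottleneck_dim))
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # bottleneck representation, shape (batch, d)
        return self.decoder(z), z

def covariance_penalty(z: torch.Tensor) -> torch.Tensor:
    # Penalize the off-diagonal entries of the sample covariance matrix of the
    # bottleneck activations, i.e. the pairwise covariances between units.
    z_c = z - z.mean(dim=0, keepdim=True)
    cov = z_c.T @ z_c / (z.shape[0] - 1)             # (d, d) sample covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))
    d = z.shape[1]
    return (off_diag ** 2).sum() / (d * (d - 1))

def total_loss(x, x_hat, z, lam: float = 1.0) -> torch.Tensor:
    # Standard AE reconstruction loss plus the redundancy (covariance) term.
    return F.mse_loss(x_hat, x) + lam * covariance_penalty(z)

# Hypothetical usage: x of shape (batch, 784)
# model = DiverseAE(); x_hat, z = model(x); loss = total_loss(x, x_hat, z, lam=1.0)

In this sketch, driving the off-diagonal covariances toward zero decorrelates the bottleneck units, which is the sense in which redundancy is penalized; how strongly the penalty is weighted against the reconstruction term is a tuning choice.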
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartofseries: Pattern Recognition Letters
dc.rights: CC BY 4.0
dc.subject.other: autoencoders
dc.subject.other: unsupervised learning
dc.subject.other: diversity
dc.subject.other: feature representation
dc.subject.other: dimensionality reduction
dc.subject.other: image denoising
dc.subject.other: image compression
dc.title: Reducing redundancy in the bottleneck representation of autoencoders
dc.type: research article
dc.identifier.urn: URN:NBN:fi:jyu-202401261564
dc.contributor.laitos: Informaatioteknologian tiedekunta [fi]
dc.contributor.laitos: Faculty of Information Technology [en]
dc.type.uri: http://purl.org/eprint/type/JournalArticle
dc.type.coar: http://purl.org/coar/resource_type/c_2df8fbb1
dc.description.reviewstatus: peerReviewed
dc.format.pagerange: 202-208
dc.relation.issn: 0167-8655
dc.relation.volume: 178
dc.type.version: publishedVersion
dc.rights.copyright: © 2024 The Authors. Published by Elsevier B.V.
dc.rights.accesslevel: openAccess
dc.type.publication: article
dc.relation.grantnumber: 4212/31/2021
dc.subject.yso: kuvankäsittely (image processing)
dc.subject.yso: syväoppiminen (deep learning)
dc.subject.yso: koneoppiminen (machine learning)
dc.subject.yso: neuroverkot (neural networks)
dc.format.content: fulltext
jyx.subject.uri: http://www.yso.fi/onto/yso/p6449
jyx.subject.uri: http://www.yso.fi/onto/yso/p39324
jyx.subject.uri: http://www.yso.fi/onto/yso/p21846
jyx.subject.uri: http://www.yso.fi/onto/yso/p7292
dc.rights.url: https://creativecommons.org/licenses/by/4.0/
dc.relation.doi: 10.1016/j.patrec.2024.01.013
dc.relation.funder: Business Finland [fi]
dc.relation.funder: Business Finland [en]
jyx.fundingprogram: Elinkeinoelämän kanssa verkottunut tutkimus, BF [fi]
jyx.fundingprogram: Public research networked with companies, BF [en]
jyx.fundinginformation: This work has been supported by the Academy of Finland Awcha project DN 334566 and the NSF-Business Finland Center for Big Learning project AMALIA. The work of Jenni Raitoharju was supported by the Academy of Finland (projects 324475 and 333497).
dc.type.okm: A1

