dc.contributor.author | Laakom, Firas | |
dc.contributor.author | Raitoharju, Jenni | |
dc.contributor.author | Iosifidis, Alexandros | |
dc.contributor.author | Gabbouj, Moncef | |
dc.date.accessioned | 2024-01-26T08:23:48Z | |
dc.date.available | 2024-01-26T08:23:48Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Laakom, F., Raitoharju, J., Iosifidis, A., & Gabbouj, M. (2024). Reducing redundancy in the bottleneck representation of autoencoders. <i>Pattern Recognition Letters</i>, <i>178</i>, 202-208. <a href="https://doi.org/10.1016/j.patrec.2024.01.013" target="_blank">https://doi.org/10.1016/j.patrec.2024.01.013</a> | |
dc.identifier.other | CONVID_202074852 | |
dc.identifier.uri | https://jyx.jyu.fi/handle/123456789/93076 | |
dc.description.abstract | Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. Both the encoder and the decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to retain only the information in the input data needed for reconstruction and to reduce redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we propose an additional loss term, based on the pairwise covariances of the network units, which complements the reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach on three tasks: dimensionality reduction, image compression, and image denoising. Experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss. | en |
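The abstract describes an additional loss term built from the pairwise covariances of the bottleneck units. A minimal NumPy sketch of such a penalty is given below; the function name, the squared-off-diagonal formulation, and the pair-averaged normalization are illustrative assumptions, not the paper's exact definition (see the DOI above for the published formulation).

```python
import numpy as np

def covariance_redundancy_penalty(z):
    """Mean squared off-diagonal covariance of bottleneck features.

    z: (n_samples, d) array of bottleneck activations.
    Hypothetical formulation: penalizes correlated (redundant) feature
    pairs while leaving per-feature variances untouched.
    """
    zc = z - z.mean(axis=0, keepdims=True)        # center each feature
    cov = zc.T @ zc / (z.shape[0] - 1)            # (d, d) sample covariance
    off_diag = cov - np.diag(np.diag(cov))        # zero out the variances
    d = z.shape[1]
    return np.sum(off_diag ** 2) / (d * (d - 1))  # average over feature pairs

# In training, the total objective would be something like
#   reconstruction_loss + lam * covariance_redundancy_penalty(z)
# with lam a tunable weight (assumed name).
rng = np.random.default_rng(0)
z_indep = rng.normal(size=(1000, 8))              # nearly uncorrelated features
z_redund = np.repeat(z_indep[:, :1], 8, axis=1)   # fully redundant features
print(covariance_redundancy_penalty(z_indep) < covariance_redundancy_penalty(z_redund))
```

As expected, a bottleneck whose units all carry the same signal incurs a much larger penalty than one with roughly independent units, which is the behavior the added loss term is meant to discourage.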
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.publisher | Elsevier | |
dc.relation.ispartofseries | Pattern Recognition Letters | |
dc.rights | CC BY 4.0 | |
dc.subject.other | autoencoders | |
dc.subject.other | unsupervised learning | |
dc.subject.other | diversity | |
dc.subject.other | feature representation | |
dc.subject.other | dimensionality reduction | |
dc.subject.other | image denoising | |
dc.subject.other | image compression | |
dc.title | Reducing redundancy in the bottleneck representation of autoencoders | |
dc.type | research article | |
dc.identifier.urn | URN:NBN:fi:jyu-202401261564 | |
dc.contributor.laitos | Informaatioteknologian tiedekunta | fi |
dc.contributor.laitos | Faculty of Information Technology | en |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | |
dc.type.coar | http://purl.org/coar/resource_type/c_2df8fbb1 | |
dc.description.reviewstatus | peerReviewed | |
dc.format.pagerange | 202-208 | |
dc.relation.issn | 0167-8655 | |
dc.relation.volume | 178 | |
dc.type.version | publishedVersion | |
dc.rights.copyright | © 2024 The Authors. Published by Elsevier B.V. | |
dc.rights.accesslevel | openAccess | fi |
dc.type.publication | article | |
dc.relation.grantnumber | 4212/31/2021 | |
dc.subject.yso | kuvankäsittely | |
dc.subject.yso | syväoppiminen | |
dc.subject.yso | koneoppiminen | |
dc.subject.yso | neuroverkot | |
dc.format.content | fulltext | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p6449 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p39324 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p21846 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p7292 | |
dc.rights.url | https://creativecommons.org/licenses/by/4.0/ | |
dc.relation.doi | 10.1016/j.patrec.2024.01.013 | |
dc.relation.funder | Business Finland | fi |
dc.relation.funder | Business Finland | en |
jyx.fundingprogram | Elinkeinoelämän kanssa verkottunut tutkimus, BF | fi |
jyx.fundingprogram | Public research networked with companies, BF | en |
jyx.fundinginformation | This work has been supported by the Academy of Finland Awcha project DN 334566 and NSF-Business Finland Center for Big Learning project AMALIA. The work of Jenni Raitoharju was supported by the Academy of Finland (projects 324475 and 333497). | |
dc.type.okm | A1 | |