Open access publication

Article, 2024

Reducing redundancy in the bottleneck representation of autoencoders

Pattern Recognition Letters, ISSN 0167-8655 (print), 1872-7344 (online), Volume 178, Pages 202-208, DOI 10.1016/j.patrec.2024.01.013

Contributors

  • Laakom, Firas (0000-0001-7436-5692, Corresponding author) [1]
  • Raitoharju, Jenni (0000-0003-4631-9298) [2] [3]
  • Iosifidis, Alexandros (0000-0003-4807-1345) [4]
  • Gabbouj, Moncef (0000-0002-9788-2323) [1]

Affiliations

  1. [1] Tampere University [NORA names: Finland; Europe, EU; Nordic; OECD]
  2. [2] Finnish Environment Institute [NORA names: Finland; Europe, EU; Nordic; OECD]
  3. [3] University of Jyväskylä [NORA names: Finland; Europe, EU; Nordic; OECD]
  4. [4] Aarhus University [NORA names: AU Aarhus University; University; Denmark; Europe, EU; Nordic; OECD]

Abstract

Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. The encoder and decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only the information in the input data required to reconstruct it and to reduce redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we introduce an additional loss term, based on the pairwise covariances of the network units, which complements the data reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach across different tasks, namely dimensionality reduction, image compression, and image denoising. Experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss.
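
As a concrete illustration of the scheme the abstract describes, the following is a minimal PyTorch sketch, not the authors' implementation: a covariance penalty sums the squared off-diagonal entries of the batch covariance matrix of the bottleneck units and is added to the usual reconstruction loss. The network shapes, the trade-off weight `lambda_cov`, and the dummy batch are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """Simple fully-connected autoencoder with a low-dimensional bottleneck."""
    def __init__(self, in_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, bottleneck_dim))
        self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # bottleneck representation
        return self.decoder(z), z

def covariance_penalty(z):
    """Sum of squared off-diagonal entries of the batch covariance of the
    bottleneck units; zero when the units are pairwise uncorrelated."""
    z = z - z.mean(dim=0, keepdim=True)     # center each unit over the batch
    cov = (z.T @ z) / (z.shape[0] - 1)      # (d, d) covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# One training step: reconstruction loss plus the redundancy penalty.
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_cov = 0.01                      # hypothetical trade-off weight
x = torch.rand(128, 784)               # dummy batch for illustration only
x_hat, z = model(x)
loss = F.mse_loss(x_hat, x) + lambda_cov * covariance_penalty(z)
opt.zero_grad()
loss.backward()
opt.step()
```

Driving the off-diagonal covariance entries toward zero decorrelates the bottleneck units, which is one way to penalize pairwise feature redundancy explicitly rather than relying on the reconstruction loss alone.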

Keywords

autoencoder, bottleneck, compression, covariances, data, decoding, denoising, dimensionality, dimensionality reduction, encoding, experimental results, goal, image compression, image denoising, images, information, input, input data, loss, loss term, low-dimensional space, model, network, network topology, network units, neural network, original input, pairwise, pairwise covariances, performance, reconstruction loss, reduce redundancy, reduction, redundancy, representation, results, space, superior performance, task, term, topology, units, unsupervised neural network

Funders

  • Academy of Finland
  • Business Finland

Data Provider: Digital Science