Open Access Publication

Conference Paper, 2024

Assessing Neural Network Robustness via Adversarial Pivotal Tuning

2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), ISBN 979-8-3503-1892-0, Pages 2940-2949, DOI: 10.1109/WACV57701.2024.00293

Contributors

  1. Christensen, Peter Ebert (Corresponding author) [1]
  2. Snæbjarnarson, Vésteinn [1]
  3. Dittadi, Andrea [2]
  4. Belongie, Serge J (ORCID: 0000-0002-0388-5217) [1]
  5. Benaim, Sagie [3]

Affiliations

  1. [1] University of Copenhagen [NORA names: KU University of Copenhagen; University; Denmark; Europe, EU; Nordic; OECD]
  2. [2] Helmholtz AI
  3. [3] Hebrew University of Jerusalem [NORA names: Israel; Asia, Middle East; OECD]

Abstract

The robustness of image classifiers is essential to their deployment in the real world. The ability to assess this resilience to manipulations of, or deviations from, the training data is thus crucial. Such modifications have traditionally consisted of minimal changes that still manage to fool classifiers, and modern approaches are increasingly robust to them. Semantic manipulations, which modify elements of an image in meaningful ways, have therefore gained traction for this purpose, but they have primarily been limited to style, color, or attribute changes. While expressive, these manipulations do not exploit the full capabilities of a pretrained generative model. In this work, we aim to bridge this gap. We show how a pretrained image generator can be used to semantically manipulate images in a detailed, diverse, and photorealistic way while still preserving the class of the original image. Inspired by recent GAN-based image inversion methods, we propose a method called Adversarial Pivotal Tuning (APT). Given an image, APT first finds a pivot latent-space input that reconstructs the image using a pretrained generator. It then adjusts the generator’s weights to create small yet semantic manipulations that fool a pretrained classifier. APT preserves the full expressive editing capabilities of the generative model. We demonstrate that APT produces a wide range of class-preserving semantic image manipulations that fool a variety of pretrained classifiers. Finally, we show that classifiers that are robust according to other benchmarks are not robust to APT manipulations, and we suggest a method for improving them.
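The abstract outlines a two-stage procedure: invert the image to a pivot latent input, then adversarially fine-tune the generator around that pivot. A minimal PyTorch sketch of this idea follows; the generator interface (`G` and its `mean_latent` helper), the classifier `clf`, the plain MSE reconstruction loss, and all hyperparameters are hypothetical illustrations rather than the paper's actual implementation.

```python
# Minimal sketch of the two-stage APT idea described in the abstract.
# `G` (a pretrained StyleGAN-style generator with an assumed `mean_latent`
# helper), `clf` (the target classifier), and all loss weights / iteration
# counts are illustrative placeholders, not the authors' implementation.
import torch
import torch.nn.functional as F

def adversarial_pivotal_tuning(G, clf, x, y, n_pivot=500, n_tune=200,
                               lr=1e-3, lam=1.0):
    """x: batch of input images, y: their true labels. Returns images that
    stay close to x but are tuned to fool `clf`."""
    # Stage 1: invert the image. Freeze the generator and optimize a pivot
    # latent w* whose reconstruction G(w*) matches x.
    for p in G.parameters():
        p.requires_grad_(False)
    w = G.mean_latent().clone().requires_grad_(True)  # assumed helper
    opt_w = torch.optim.Adam([w], lr=lr)
    for _ in range(n_pivot):
        opt_w.zero_grad()
        F.mse_loss(G(w), x).backward()  # a perceptual term could be added
        opt_w.step()
    w = w.detach()

    # Stage 2: freeze the pivot and fine-tune the generator weights so the
    # output drifts semantically just enough to fool the classifier while
    # staying near the original image (class preservation).
    for p in G.parameters():
        p.requires_grad_(True)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(n_tune):
        opt_g.zero_grad()
        x_adv = G(w)
        recon = F.mse_loss(x_adv, x)            # stay close to the original
        fool = -F.cross_entropy(clf(x_adv), y)  # push away from true class
        (recon + lam * fool).backward()
        opt_g.step()
    return G(w).detach()
```

Anchoring the optimization at a pivot that already reconstructs the image is what lets changes in weight space produce small, semantically meaningful deviations rather than arbitrary adversarial noise.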

Keywords

adversarial pivotal tuning, neural network robustness, image classifiers, semantic image manipulation, generative models, GAN inversion, image generation, pretrained classifiers, robustness benchmarks

Data Provider: Digital Science