Article, 2024

Class-incremental learning with causal relational replay

Expert Systems with Applications, ISSN 0957-4174 (print), 1873-6793 (online), Volume 250, Article 123901, DOI 10.1016/j.eswa.2024.123901

Contributors

Nguyen, Toan (ORCID 0000-0003-2734-0622) [1] [2] [3]
Kieu, Duc [1] [2]
Duong, Bao (ORCID 0000-0001-9850-0270) [3]
Kieu, Tung (ORCID 0000-0002-7696-1444) [4]
Do, Kien (ORCID 0000-0002-0119-122X) [3]
Nguyen, Thin (ORCID 0000-0003-3467-8963) [3]
Le, Bac Hoai (ORCID 0000-0002-4306-6945, corresponding author) [1] [2]

Affiliations

  [1] Ho Chi Minh City University of Science, Vietnam
  [2] Vietnam National University, Ho Chi Minh City, Vietnam
  [3] Deakin University, Australia
  [4] Aalborg University, Denmark

Abstract

In Class-Incremental Learning (Class-IL), deep neural networks often fail to learn a sequence of classes incrementally due to catastrophic forgetting, a phenomenon arising from the absence of exposure to old knowledge. To alleviate this issue, conventional rehearsal methods, such as experience replay, store a limited number of old exemplars and interleave them with the current data for joint learning and rehearsal. However, networks trained under this scheme may still fail to reduce forgetting because they do not directly consider the relations between samples of previously learned and new classes. Drawing inspiration from how humans learn by noticing the similarities and differences between classes, we propose a novel Class-IL framework called Relational Replay (RR). RR learns and recalls relations between images across all classes over time. To ensure these relations remain intrinsic and robust to forgetting, we incorporate causal reasoning into RR, resulting in Causal Relational Replay (CRR). CRR analyzes these relations from a causality perspective, aiming to identify intrinsic relations rooted in the images' semantic features, which act as the cause of those relations. Our proposed method achieves competitive performance compared with state-of-the-art rehearsal methods in Class-IL, with clear and consistent improvements in the majority of settings on standard benchmark datasets.
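The conventional rehearsal scheme the abstract refers to (store a small exemplar memory, then interleave replayed exemplars with current-task data) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the buffer policy (reservoir sampling) and all names (`ReplayBuffer`, `interleaved_batch`) are assumptions chosen for clarity.

```python
import random


class ReplayBuffer:
    """Fixed-capacity exemplar memory for experience replay.

    Uses reservoir sampling, a common policy that keeps every sample
    seen so far in memory with equal probability. The actual paper may
    use a different storage/selection strategy.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []        # stored (example, label) exemplars
        self.num_seen = 0       # total samples offered so far
        self.rng = random.Random(seed)

    def add(self, example, label):
        self.num_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append((example, label))
        else:
            # Replace a random slot with probability capacity / num_seen.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.memory[j] = (example, label)

    def sample(self, batch_size):
        # Draw up to batch_size old exemplars for rehearsal.
        k = min(batch_size, len(self.memory))
        return self.rng.sample(self.memory, k)


def interleaved_batch(current_batch, buffer, replay_size):
    """Joint batch: current-task samples plus replayed old exemplars."""
    return list(current_batch) + buffer.sample(replay_size)
```

During training, each incoming batch would be extended with `interleaved_batch(...)` before the gradient step, so the network rehearses old classes while learning new ones.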

Keywords

Class-incremental learning, catastrophic forgetting, experience replay, rehearsal methods, causal reasoning, relational replay, deep neural networks, benchmark datasets

Data Provider: Digital Science