Article, 2024

EEG-Based Multimodal Emotion Recognition: A Machine Learning Perspective

IEEE Transactions on Instrumentation and Measurement, ISSN 0018-9456, eISSN 1557-9662, Volume 73, Pages 1-29, DOI: 10.1109/TIM.2024.3369130

Contributors

Liu, Huan (ORCID 0000-0002-7863-3751) [1]; Lou, Tianyu [1]; Zhang, Yuzhe [1]; Wu, Yixiao [1]; Xiao, Yang (ORCID 0000-0003-1410-0486) [2]; Jensen, Christian Søndergaard (ORCID 0000-0002-9697-7670) [3]; Zhang, Dalin (ORCID 0000-0002-5869-6544; corresponding author) [3]

Affiliations

  1. [1] Xi'an Jiaotong University [NORA names: China; Asia, East]
  2. [2] Xidian University [NORA names: China; Asia, East]
  3. [3] Aalborg University [NORA names: AAU Aalborg University; University; Denmark; Europe, EU; Nordic; OECD]

Abstract

Emotion, a fundamental trait of human beings, plays a pivotal role in shaping many aspects of our lives, including our cognitive and perceptual abilities. Hence, emotion recognition is also central to human communication, decision-making, learning, and other activities. Emotion recognition from electroencephalography (EEG) signals has garnered substantial attention due to advantages such as noninvasiveness, high speed, and high temporal resolution; driven also by the complementarity between EEG and other physiological signals in revealing emotions, recent years have seen a surge in proposals for EEG-based multimodal emotion recognition (EMER). In short, EEG-based emotion recognition is a promising technology for medical measurement and health monitoring. While reviews exist that explore emotion recognition from multimodal physiological signals, they focus mostly on general combinations of modalities and do not emphasize studies that center on EEG as the fundamental modality. Furthermore, existing reviews take a methodology-agnostic perspective, concentrating primarily on the biomedical basis or experimental paradigms and thereby giving little attention to the methodological characteristics unique to this field. To address these gaps, we present a comprehensive review of current EMER studies, with a focus on multimodal machine learning models. The review is structured around three key aspects: multimodal feature representation learning, multimodal physiological signal fusion, and incomplete multimodal learning models. In doing so, the review sheds light on the advances and challenges in the field of EMER, offering researchers who are new to the field a holistic understanding. The review also aims to provide valuable insights that may guide new research in this exciting and rapidly evolving field.
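
For readers new to multimodal machine learning, the following minimal sketch (not taken from the paper; the module names, feature dimensions, and architecture are hypothetical illustrations) shows feature-level fusion, one of the multimodal physiological signal fusion strategies such reviews cover: modality-specific encoders project EEG features and a peripheral physiological feature vector into a shared space, and a classifier operates on the concatenated representation.

# Minimal sketch (illustrative only, not the paper's method): feature-level
# fusion of EEG features with a peripheral physiological modality for
# emotion classification. All dimensions and names are hypothetical.
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    def __init__(self, eeg_dim=310, periph_dim=33, hidden_dim=64, num_classes=3):
        super().__init__()
        # Modality-specific encoders map each feature vector to a shared-size embedding.
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, hidden_dim), nn.ReLU())
        self.periph_encoder = nn.Sequential(nn.Linear(periph_dim, hidden_dim), nn.ReLU())
        # Classifier operates on the concatenated (fused) representation.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, eeg_feat, periph_feat):
        fused = torch.cat([self.eeg_encoder(eeg_feat),
                           self.periph_encoder(periph_feat)], dim=-1)
        return self.classifier(fused)

# Example forward pass on random tensors standing in for extracted features
# (e.g., differential-entropy EEG features and statistical peripheral features).
model = FeatureLevelFusion()
logits = model(torch.randn(8, 310), torch.randn(8, 33))
print(logits.shape)  # torch.Size([8, 3])

Decision-level fusion would instead train a separate classifier per modality and combine their predictions; the trade-offs between such strategies are among the topics reviews of this kind typically discuss.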

Keywords

EEG-based emotion recognition, electroencephalography (EEG), multimodal emotion recognition, multimodal physiological signals, multimodal feature representation learning, multimodal signal fusion, incomplete multimodal learning, machine learning models, health monitoring

Funders

  • National Natural Science Foundation of China
  • Ministry of Education of the People's Republic of China

Data Provider: Digital Science