Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models
by Yifan Bian, Dennis Küster, Hui Liu, Eva G. Krumhuber
Abstract:
This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.
Reference:
Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models (Yifan Bian, Dennis Küster, Hui Liu, Eva G. Krumhuber), In Sensors, volume 24, 2024.
Bibtex Entry:
@article{bian2024mllm_facial_expression,
  title = {Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models},
  author = {Bian, Yifan and Küster, Dennis and Liu, Hui and Krumhuber, Eva G.},
  journal = {Sensors},
  volume = {24},
  year = {2024},
  number = {1},
  pages = {126},
  PubMedID = {38202988},
  issn = {1424-8220},
  doi = {10.3390/s24010126},
  url = {https://www.csl.uni-bremen.de/cms/images/documents/publications/BianKuesterLiuKrumhuber_Sesnors2024.pdf},
  abstract = {This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.},
}