With the rapid development of digital photography and social networks, people have become accustomed to sharing their lives and expressing their opinions online. As a result, user-generated social media data, including text, images, audio, and video, is growing rapidly, which urgently demands advanced techniques for the management, retrieval, and understanding of such data. Most existing work on multimedia analysis has focused on cognitive content understanding, such as scene understanding, object detection, and recognition. Recently, driven by a significant demand for emotion representation in artificial intelligence, multimedia affective analysis has attracted increasing research effort from both academic and industrial communities.
Affective computing on user-generated, large-scale multimedia data is challenging for several reasons. Because emotion is a subjective concept, affective analysis involves a multidisciplinary understanding of human perception and behavior. Furthermore, emotions are often jointly expressed and perceived through multiple modalities, so multi-modal data fusion and complementarity need to be explored. In addition, recent solutions based on deep learning require large-scale data with fine-grained labels. The development of affective analysis is further constrained by the affective gap between low-level features and high-level emotions, and by the subjectivity of emotion perception across viewers, which is influenced by social, educational, and cultural factors. Recently, however, great advances in machine learning and artificial intelligence have made large-scale affective computing of multimedia possible.
This MMM'20 special session aims to gather high-quality contributions reporting the most recent progress on multi-modal affective computing of large-scale multimedia data and its wide range of applications. It targets a mixed audience of researchers and product developers from several communities, e.g., multimedia, machine learning, psychology, and artificial intelligence. The topics of interest include, but are not limited to: