Journal Article


An automatic multimedia likability prediction system based on facial expression of observer

Abstract

Every individual's perception of multimedia content varies with their interpretation, so predicting the likability of multimedia from its content alone is challenging. This paper presents a novel system that analyses the facial expressions of subjects as they watch the multimedia content to be evaluated. First, we developed a dataset by recording the facial expressions of subjects in an uncontrolled environment. These subjects were volunteers recruited to watch videos of different genres and provide feedback on likability. Subject responses are divided into three categories: Like, Neutral and Dislike. A novel multimodal system is then trained on this dataset, learning feature representations from the data according to the three categories. The proposed system is an ensemble of a time-distributed convolutional neural network (CNN), a 3D CNN, and long short-term memory (LSTM) networks. Each modality in the proposed architecture is evaluated independently as well as in distinct combinations. The paper also provides detailed insight into the learning behaviour of the proposed system.
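The ensemble described in the abstract could be sketched roughly as below: a per-frame ("time-distributed") 2D CNN feeding an LSTM, alongside a 3D CNN over the whole clip, with the two feature streams fused for three-way classification (Like / Neutral / Dislike). This is a minimal illustration only, not the authors' implementation; the class name, layer sizes and pooling choices are all assumptions.

```python
import torch
import torch.nn as nn

class LikabilityNet(nn.Module):
    """Hypothetical sketch of the ensemble described in the abstract:
    a time-distributed 2D CNN + LSTM branch and a 3D CNN branch,
    fused for 3-class prediction (Like / Neutral / Dislike)."""

    def __init__(self, num_classes=3):
        super().__init__()
        # Per-frame 2D CNN, applied frame-by-frame ("time-distributed")
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 16 * 4 * 4 = 256
        )
        # LSTM aggregates the per-frame features over time
        self.lstm = nn.LSTM(input_size=256, hidden_size=64, batch_first=True)
        # 3D CNN branch over the whole clip (spatio-temporal convolution)
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(2), nn.Flatten(),   # -> 16 * 2 * 2 * 2 = 128
        )
        # Fuse both feature streams and classify
        self.classifier = nn.Linear(64 + 128, num_classes)

    def forward(self, clip):
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        frames = self.frame_cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(frames)
        feat_td = h_n[-1]                            # last hidden state: (b, 64)
        feat_3d = self.cnn3d(clip.transpose(1, 2))   # Conv3d expects (b, c, t, h, w)
        return self.classifier(torch.cat([feat_td, feat_3d], dim=1))
```

For example, a batch of two 8-frame RGB clips of size 32x32, `torch.randn(2, 8, 3, 32, 32)`, would yield class logits of shape `(2, 3)`.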

Authors

Singh Bawa, Vivek
Sharma, Shailza
Usman, Mohammed
Gupta, Abhimat
Kumar, Vinay

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: 2021
Date of RADAR deposit: 2021-11-19


This work is licensed under a Creative Commons Attribution 4.0 International License.


Related resources

This RADAR resource is identical to "An automatic multimedia likability prediction system based on facial expression of observer".

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live