Affective Computing

With the rapid growth of social media networks, a large fraction of the world's population can now interact socially online. Previously, people wrote long paragraphs to convey their thoughts, but nowadays they prefer multimedia content such as images, GIFs, and videos, which is more emotionally powerful and attracts more attention. As a result, millions of images and videos are uploaded to social media sites every day from events and gatherings around the globe. This emotionally rich body of multimedia content creates a demand for tools that can automatically analyze it in terms of emotion, in order to understand how people react to it.

Affective Computing is a research field that addresses this problem: it analyzes multimedia content in terms of the emotions it evokes, with the goal of building emotionally intelligent machines capable of better human-machine interaction. Exciting applications of affective computing include affective content recommendation, abstraction, and affective description generation. Affective content analysis also helps us understand why a specific piece of content evokes a particular emotion in its viewers. For example, image (a) should elicit joy in most viewers because the children in it are playing and having fun, while image (b) should induce fear due to the presence of a scary doll.

(a) Induced Emotion: Joy
(b) Induced Emotion: Fear

Affective Computing at ITU

Much of the difficulty in affective computing stems from the presence of the "affective gap": the disconnect between low-level visual features (such as color, texture, and saliency) and high-level affective concepts such as human emotions. Unlike existing work that relies on low-level visual features, we believe such features are not sufficient to model human emotions. Our team focuses on interpretable affective computing and uses high-level concepts, such as objects, places, and the relationships among them, to model human emotions. For example, an image may induce joy and amusement in its viewers due to the presence of high-level concepts like sky diving, a sea view, a park, or Halloween.
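
To make this concrete, here is a minimal, hypothetical sketch of an interpretable pipeline in this spirit: detected high-level concepts are encoded as a binary vector and mapped to induced emotions with a linear classifier whose weights remain human-readable. The concept vocabulary, training examples, and classifier choice below are illustrative assumptions, not our actual system.

```python
# Illustrative sketch only (not the team's actual pipeline): map
# high-level concept detections to induced emotions with a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONCEPTS = ["children", "playing", "park", "sky_diving", "sea_view",
            "scary_doll", "darkness", "halloween"]
EMOTIONS = ["joy", "amusement", "fear"]

def concept_vector(detected):
    """Binary indicator vector over the concept vocabulary."""
    return np.array([1.0 if c in detected else 0.0 for c in CONCEPTS])

# Toy supervision: (concepts detected in an image, emotion it induced).
train = [
    ({"children", "playing", "park"}, "joy"),
    ({"sky_diving", "sea_view"}, "amusement"),
    ({"scary_doll", "darkness"}, "fear"),
    ({"halloween", "darkness"}, "fear"),
    ({"children", "park"}, "joy"),
    ({"sea_view", "park"}, "amusement"),
]
X = np.stack([concept_vector(c) for c, _ in train])
y = [EMOTIONS.index(e) for _, e in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Each learned coefficient ties a human-readable concept to an emotion
# class, which is what makes the model interpretable.
probs = clf.predict_proba([concept_vector({"children", "playing"})])[0]
print({e: round(p, 2) for e, p in zip(EMOTIONS, probs)})
```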

We aim to extend our study to video affective analysis, enabling mood-based affective retrieval, since the mood of a video or movie is one of the most important factors people consider when choosing what to watch. For example, a user who is sad or tired would be able to lift their mood by watching a happy video.
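
As a toy illustration of the retrieval step we envision (the mood scores and video ids below are made up; in practice they would come from a video affect model), ranking by predicted mood could look like this:

```python
# Hypothetical mood-based retrieval sketch: rank videos by the
# predicted probability of a target mood.
predicted_moods = {
    "vid_01": {"happy": 0.91, "sad": 0.03, "tense": 0.06},
    "vid_02": {"happy": 0.12, "sad": 0.70, "tense": 0.18},
    "vid_03": {"happy": 0.55, "sad": 0.10, "tense": 0.35},
}

def retrieve_by_mood(mood, k=2):
    """Return the k video ids with the highest predicted mood score."""
    ranked = sorted(predicted_moods,
                    key=lambda v: predicted_moods[v].get(mood, 0.0),
                    reverse=True)
    return ranked[:k]

# A sad or tired user asks for happy videos to lift their mood.
print(retrieve_by_mood("happy"))  # -> ['vid_01', 'vid_03']
```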

Datasets

We are conducting a study on the emotions induced in humans when they are shown an image, and we are collecting a dataset named SentimentMe. Through SentimentMe, our aim is to gather user sentiments along with contextual and content information about the images to support better visual sentiment analysis, and to study the reasons behind these induced emotions. You can contribute to this research by visiting our data collection page.
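
A hypothetical annotation record illustrates the kind of sentiment, content, and context information described above; the actual SentimentMe schema is not published here, so every field name below is an assumption for illustration.

```python
# Hypothetical sketch of a single SentimentMe annotation record;
# all field names are illustrative assumptions, not the real schema.
from dataclasses import dataclass, field

@dataclass
class SentimentMeRecord:
    image_id: str                # identifier of the presented image
    induced_emotion: str         # e.g. "joy", "fear", "amusement"
    reason: str                  # free-text explanation from the viewer
    detected_concepts: list[str] = field(default_factory=list)  # content info
    viewer_context: dict = field(default_factory=dict)          # e.g. locale, time of day

record = SentimentMeRecord(
    image_id="img_0001",
    induced_emotion="joy",
    reason="children are playing and having fun",
    detected_concepts=["children", "playing", "park"],
)
print(record)
```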

Publications

Emotional Filters: Automatic Image Transformation for Inducing Affect

Authors: Afsheen Rafaqat Ali, Mohsen Ali

Conference: British Machine Vision Conference (BMVC)
Publication Year: 2017
DOI: xxx


High-Level Concepts for Affective Understanding of Images

Authors: Afsheen Rafaqat Ali, Usman Shahid, Mohsen Ali, Jeffrey Ho

Conference: Winter Conference on Applications of Computer Vision (WACV)
Publication Year: 2017
DOI: 10.1109/WACV.2017.81