Gender Differences in Facial Emotion Perception for User Profiling via Implicit Behavioral Signals
Understanding human emotions has long been of research interest to multiple domains of modern science, namely neuroscience, psychology, and computer science. The ultimate goals of these domains in studying emotions differ: neuroscientists study them primarily to understand the structural and functional abilities of the brain, psychologists to understand human interactions, and computer scientists to design interfaces and automate certain human-centric tasks. Several earlier works have suggested that emotions have two facets, namely perception and expression, and have advised that the two be studied as separate entities.

This work examines the existence of gender differences in emotion perception (specifically, of the Ekman emotions) and aims to utilize such differences for user profiling, particularly gender and emotion recognition. To this end, we employed implicit signals: the non-invasive electrical scalp activity of the brain recorded through electroencephalography (EEG), and gaze patterns, both acquired with low-cost commercial devices. We studied the impact of facial emotion intensity and of facial regions in invoking these differences, using stimuli of varying intensities and by masking face regions deemed important in previous studies. We expressly examined the implicit signals for their ecological validity. Correlations between our findings and previous studies from the above-mentioned domains, in terms of event-related potentials (ERPs) and fixation distributions, add uniqueness and strength to our work. We achieved reliable gender and emotion recognition with Support Vector Machine (SVM) based classifiers, and further designed a deep learning model that significantly outperforms them. We also analyzed emotion-specific time windows and key electrodes for maximum gender recognition, arriving at some interesting conclusions.
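The SVM-based recognition baseline described above can be sketched as follows. This is an illustrative example only, not the thesis pipeline: the feature matrix, labels, and hyperparameters are synthetic stand-ins for the EEG/gaze features and gender labels used in the actual study.

```python
# Illustrative sketch (assumed setup, not the thesis code): gender
# classification from per-epoch EEG feature vectors with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical features: e.g. band-power values per electrode per epoch.
n_samples, n_features = 200, 64
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # 0/1 = the two gender labels
# Inject a weak class-dependent shift so the toy task is learnable.
X[y == 1, :8] += 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# Standardizing features before the SVM is standard practice for EEG data.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice such a baseline would use cross-validation and subject-wise splits; the sketch only shows the classifier structure.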
The appendix chapter, on cross-visualization cognitive workload classification using EEG, attempts to quantify workload in order to evaluate user interfaces. We employ four common yet distinct data visualization methods to induce varying levels of workload through a standard n-back task, and attempt to classify workload across visualizations via deep transfer learning. We compare its performance against the Proximal Support Vector Machines adopted in earlier works for within-visualization workload classification.
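The cross-visualization transfer idea can be sketched minimally: learn a representation on data from a "source" visualization, freeze it, and retrain only the classifier head on a "target" visualization. This is a hedged illustration with synthetic data and a PCA feature map standing in for the learned deep features; it is not the thesis model.

```python
# Illustrative transfer-learning sketch (assumed setup, not the thesis code):
# reuse a feature transform fitted on the source visualization, then fit
# only a new classifier head on the target visualization.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d = 300, 40  # hypothetical EEG epochs x features per visualization
X_src = rng.normal(size=(n, d))          # source-visualization epochs
X_tgt = rng.normal(size=(n, d)) + 0.1    # target-visualization epochs
y_tgt = rng.integers(0, 3, size=n)       # e.g. low/medium/high workload

# "Pretrain" a representation on the source visualization...
feature_map = PCA(n_components=10).fit(X_src)
# ...then freeze it and fit only a classifier head on the target data.
head = LogisticRegression(max_iter=1000).fit(
    feature_map.transform(X_tgt), y_tgt
)
print(f"target accuracy: {head.score(feature_map.transform(X_tgt), y_tgt):.2f}")
```

In the deep-learning setting the frozen transform would be the early layers of a network trained on the source visualization, with only the final layers fine-tuned on the target.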
Year of completion: July 2018
Advisor: Ramanathan Subramanian