Automatic Affect Recognition through Multimodal Fusion

Automatically recognizing a person's affective state from nonverbal behaviors such as facial expression and body gesture has a wide range of applications, including intelligent human-computer interaction, fatigue detection, stress detection, and entertainment. Psychology studies show that both facial expression and body gesture carry a significant amount of affective information. This project aims to develop new algorithms that can effectively fuse multiple modalities, i.e., facial expression and body gesture, for affect recognition; a minimal fusion sketch is given below.
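As one common fusion strategy (offered only as an illustration, not as the project's actual method), the sketch below shows feature-level (early) fusion: per-modality feature vectors for face and gesture are concatenated into a joint vector and fed to a single classifier. All feature dimensions, array names, and the choice of classifier are assumptions made for this example.

    # Illustrative sketch of feature-level (early) multimodal fusion.
    # Feature dimensions, names, and the classifier are assumptions for
    # illustration; they are not the project's actual method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_samples = 500

    # Hypothetical per-modality features, e.g. facial action-unit intensities
    # and body-gesture motion descriptors produced by upstream extractors.
    face_feats = rng.normal(size=(n_samples, 64))     # facial-expression features
    gesture_feats = rng.normal(size=(n_samples, 32))  # body-gesture features
    labels = rng.integers(0, 4, size=n_samples)       # e.g. 4 affect classes

    # Early fusion: concatenate the modality features into one joint vector,
    # then train a single classifier on the fused representation.
    fused = np.hstack([face_feats, gesture_feats])
    fused = StandardScaler().fit_transform(fused)

    X_train, X_test, y_train, y_test = train_test_split(
        fused, labels, test_size=0.2, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

An alternative is decision-level (late) fusion, in which each modality is classified separately and the per-modality outputs are then combined; which strategy works better depends on how correlated the modalities are.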

Automatic Scene Understanding

Automatic visual scene understanding enables a large number of applications such as robotic navigation, content-based image retrieval (CBIR), augmented reality on mobile phones, and remote sensing. However, real-world scenes are incredibly complex. This project focuses on developing effective and efficient visual scene representations that exploit the correlations among the objects in a scene, including their spatial relationships; a simple representation of this kind is sketched below. Such compact representations address the needs arising from the enormous volume of visual scene data generated in daily life.
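As a rough illustration of encoding object correlations with spatial structure (again an assumption-laden sketch, not the project's actual representation), the code below builds a fixed-length scene descriptor from the co-occurrence of detected object labels together with their coarse pairwise spatial relations. The object vocabulary, relation bins, and input format are all hypothetical.

    # Illustrative sketch: a compact scene descriptor built from object
    # co-occurrence plus coarse pairwise spatial relations. The vocabulary,
    # relation bins, and detection format are assumptions for illustration.
    from itertools import combinations
    from collections import Counter
    from typing import List, Tuple

    # A detection: (label, x_center, y_center) in normalized image coordinates.
    Detection = Tuple[str, float, float]

    VOCAB = ["person", "car", "tree", "building", "road"]   # assumed vocabulary
    RELATIONS = ["left_of", "right_of", "above", "below"]

    def spatial_relation(a: Detection, b: Detection) -> str:
        """Coarse spatial relation of object a relative to object b."""
        dx, dy = a[1] - b[1], a[2] - b[2]
        if abs(dx) >= abs(dy):
            return "right_of" if dx > 0 else "left_of"
        return "below" if dy > 0 else "above"

    def scene_descriptor(detections: List[Detection]) -> List[float]:
        """Normalized histogram over (label, relation, label) triples: one
        fixed-length vector per scene, capturing which objects co-occur and
        how they are spatially arranged."""
        counts = Counter()
        for a, b in combinations(detections, 2):
            counts[(a[0], spatial_relation(a, b), b[0])] += 1
        keys = [(u, r, v) for u in VOCAB for r in RELATIONS for v in VOCAB]
        total = max(sum(counts.values()), 1)
        return [counts[k] / total for k in keys]

    # Example: a street scene with a person left of a car, both below a building.
    dets = [("person", 0.2, 0.7), ("car", 0.6, 0.75), ("building", 0.5, 0.2)]
    vec = scene_descriptor(dets)
    print(len(vec), "dimensional descriptor,", sum(1 for v in vec if v > 0), "active bins")

The descriptor length is fixed by the vocabulary and relation set, so scenes with different numbers of objects map to comparable vectors, which is what makes this kind of representation usable for retrieval or classification.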