Co-Analysis of Signal and Sense

The goal of the proposed research is to exploit the interplay between complementary modalities and the prosodic manifestations of their synchronization to develop novel algorithms for recognizing gestures, facial expressions, emotions, and dialog acts (DAs). These algorithms will be applied to developing an intelligent interface for virtual agents such as AutoTutor (an artificially intelligent web-based tutoring system), providing a natural means of interacting with multimedia content for instruction.

The framework for co-analysis of multimodal articulations will help to obtain a deeper understanding of (a) how the nucleus of an utterance and visual prosody interact to render the intent of the utterance, and (b) how synchronization with other modalities affects the production of multimodal co-articulation. These discoveries will inform the design and development of a novel interface for AutoTutor, enabling innovative applications, e.g., collaborative environments for agents and humans, and assistive technologies for the elderly and disabled.

This research introduces the idea of co-analysis of signal and sense, using the prosodic relationship between verbal and non-verbal modalities and a sophisticated method of mining the multimodal feature space for the analysis and application of multimodal co-articulation. The CAREER grant will help to realize the proposed research goals and will also serve as an affirmation by peers of the merit of this research direction.