Biosignals To Emotive Signs and Fusion of Modalities
Resolving the absence of mutual sympathy (rapport) in interaction between humans and machines is one of the most important issues in advanced human-computer interaction (HCI) today. With technology evolving exponentially, it is no exaggeration to say that any interface that disregards the user's affective states in the interaction - and thus fails to react pertinently to those states - will never be able to inspire confidence. Instead, users will perceive it as cold, untrustworthy, and socially inept. In human communication, the expression and understanding of emotions helps achieve mutual sympathy. To approach this in human-computer interaction, we need to equip machines with the means to interpret and understand human emotions without requiring the user to explicitly translate their intentions into input. For realizing an affective human-computer interface, one of the most important prerequisites is a reliable emotion recognition system that guarantees acceptable recognition accuracy, robustness against artifacts, and adaptability to practical applications.
In this talk I will first give a brief overview of emotions in HCI and then present results of my research on automatic emotion recognition based on analyzing physiological changes in multi-channel biosignals, e.g. ECG, EMG, skin conductance (SC), respiration (RSP), and skin temperature (Temp). We humans use several modalities jointly to interpret emotional states, since emotion affects almost all modes of human communication: audiovisual (facial expression, voice, gesture, posture, etc.), physiological (respiration, skin temperature, etc.), and contextual (goal, preference, environment, social situation, etc.). In the last part of this talk I will discuss fusion schemes for multiple modalities by presenting results of my previous work, and introduce applications from the EU projects I am currently involved in.
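To make the idea of feature-level fusion concrete, the following is a minimal, purely illustrative sketch: it concatenates hypothetical feature vectors extracted from several biosignal channels into one vector and trains a standard classifier on it. The channel features, number of emotion classes, and choice of classifier are assumptions for demonstration only and do not reproduce the speaker's actual method.

```python
# Illustrative sketch of feature-level fusion for biosignal-based emotion
# classification. All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 120

# Hypothetical per-trial feature vectors from each biosignal channel,
# e.g. heart-rate statistics from ECG, muscle-activity energy from EMG,
# skin-conductance response counts from SC, breathing rate from RSP.
ecg_feats = rng.normal(size=(n_samples, 4))
emg_feats = rng.normal(size=(n_samples, 3))
sc_feats = rng.normal(size=(n_samples, 2))
rsp_feats = rng.normal(size=(n_samples, 2))

# Feature-level fusion: concatenate the channel-wise features into one vector.
X = np.hstack([ecg_feats, emg_feats, sc_feats, rsp_feats])
y = rng.integers(0, 4, size=n_samples)  # four placeholder emotion classes

# Standardize the fused features and classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Decision-level fusion would instead train one classifier per modality and combine their outputs (e.g. by voting or weighted averaging); the talk contrasts such schemes using results from the speaker's previous work.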
Information
Speaker
Dr. Jonghwa Kim
Multimedia Concepts and Applications
University of Augsburg
Date
Monday, 12 April 2010, 4 p.m.
Location
Universität Ulm, Oberer Eselsberg, N27, Room 2.033