Human-Machine Interaction is nowadays taking advantage of rapid advances in Artificial Intelligence. Visual cues related to facial analysis or body posture can be a valuable source of information about a person's emotional and/or cognitive state, while the recent explosion in computational power has paved the way for a new boost in machines' ability to operate in the wild by employing state-of-the-art AI methods, namely deep neural networks. Such instruments can enable robust, accurate and real-time Facial Expression Recognition (FER) and/or Visual Focus of Attention (VFoA) estimation in Assisted Living environments, Serious Games, e-Learning, etc., thus allowing an AI-endowed machine (robot, computer, mobile phone, etc.) to react promptly to everyday human needs.
Together with my team, I am currently conducting research on emotion-driven adaptive learning environments, collaborative games and the role of automatically retrieved human emotion, human activity recognition in indoor settings, and multimodality and cognitive analysis in human-computer interaction.