The complexity of the world around us is creating a demand for cognition-enabled interfaces that will simplify and enhance the way we interact with the environment. The recently unveiled Project Glass by Google is one example that seeks to address this demand for the sense of vision. Project WEARHAP, developed in this proposal, aims at laying the scientific and technological foundations for wearable haptics, a novel concept for the systematic exploration of haptics in advanced cognitive systems and robotics that will redefine the way humans cooperate with robots. The challenge of this new paradigm stems from the need for wearability, which is a key element for natural interaction. This paradigm shift will enable novel forms of human intention recognition through haptic signals and novel forms of communication and cooperation between humans and robots. Wearable haptics will enable robots to observe humans during natural interaction with their shared environment. The research challenges are ambitious and cross traditional boundaries between robotics, cognitive science and neuroscience. Research findings from distributed robotics, biomechanical modeling, multisensory tracking, underactuation in control and cognitive systems will be integrated to address the scientific and technological challenges posed by effective wearable haptic interaction. To highlight the enabling nature, the versatility and the potential for industrial exploitation of WEARHAP, the research challenges will be guided by representative application scenarios. These applications cover robotics, health and social scenarios, stretching from human-robot interaction and cooperation for search and rescue, to human-human communication, and interaction with virtual worlds through interactive games.
More information here.
The scientific goals of the proposal revolve around the reciprocal linkages between the physical hand and its high‐level control functions, and around the way that embodiment enables and determines the hand's behaviours and cognitive functions. THE Hand Embodied refers to the “hand” as both a cognitive entity – standing for the sense of active touch – and as the physical embodiment of that sense: the organ, composed of actuators and sensors, that ultimately realizes the link between perception and action. The study of the intrinsic relationship between the hand as a cognitive abstraction and its bodily instance will be made possible by: (a) performing neuroscientific and perceptual behavioural studies with participants engaged in controlled manual activities; and (b) the parallel development of a theoretical framework to lay the foundations for the design and control of robotic hands and haptic interfaces.
The general idea is to study how the embodied characteristics of the human hand and its sensors, the sensorimotor transformations, and the very constraints they impose, affect and determine the learning and control strategies we use for such fundamental cognitive functions as exploring, grasping and manipulating. The ultimate goal of the present proposal is to learn from human data and hypothesis‐driven simulations how to devise improved system architectures for the “hand” as a cognitive organ, and eventually how to better design and control robot hands and haptic interfaces. The project hinges on the conceptual structure and the geometry of such enabling constraints, or synergies: correlations in redundant hand mobility (motor synergies), correlations in redundant cutaneous and kinaesthetic receptor readings (multi‐cue integration), and overall sensorimotor system synergies. These are also our key ideas for advancing the state of the art in artificial systems for robotic manipulation and haptic and neuroprosthetic interfaces.
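To make the idea of motor synergies concrete, here is a minimal sketch (not part of the project itself) of how correlated patterns in redundant joint angles can be extracted with principal component analysis; the data, dimensions and variable names are purely hypothetical.

```python
import numpy as np

# Hypothetical example: 500 recorded hand postures, each described by
# 20 joint angles (degrees of freedom of a simplified hand model).
rng = np.random.default_rng(0)
n_postures, n_joints = 500, 20

# Simulate correlated joint motion: a few latent "synergies" drive all joints.
latent = rng.normal(size=(n_postures, 3))      # 3 underlying synergies
mixing = rng.normal(size=(3, n_joints))        # how each synergy maps to joints
angles = latent @ mixing + 0.1 * rng.normal(size=(n_postures, n_joints))

# PCA via SVD of the mean-centered data: the principal components are the
# candidate motor synergies, i.e. correlated patterns of joint rotation.
centered = angles - angles.mean(axis=0)
_, s, components = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("Variance explained by the first 3 components:", explained[:3].sum())
# With strongly correlated joints, a handful of components captures most of
# the postural variance -- the signature of a low-dimensional synergy space.
```

In real grasping data the same analysis shows that the first two or three components account for most of the variance of hand postures, which is precisely the kind of enabling constraint the project exploits.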
More information here.
In the CyberWalk project, funded as an FET-Open project by the EU between 2005 and 2008, we built an omnidirectional treadmill to allow unconstrained walking in virtual environments. In addition to the Max Planck Institute, which was my home institution at the time, three partner universities were involved: TUM, which designed and built the platform; ETH Zürich, responsible for the virtual content via its city engine and for the markerless tracking; and the University of Rome La Sapienza, responsible for the control algorithms enabling a smooth walking experience.
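As an illustration of what such a control algorithm has to do (the published CyberWalk controllers are considerably more sophisticated), here is a minimal sketch of a proportional centering controller that ramps the belt velocity up smoothly to pull the walker back toward the platform centre; all gains, limits and names are assumed for the example.

```python
import numpy as np

# Hypothetical sketch of a treadmill centering controller (not the actual
# CyberWalk algorithm): the belt velocity is chosen so that the tracked
# walker position drifts back toward the platform centre, with the commanded
# change in velocity limited to keep accelerations barely perceptible.
K_P = 0.4      # proportional gain on position error [1/s], assumed value
A_MAX = 0.3    # maximum belt acceleration [m/s^2], assumed value
DT = 0.01      # control period [s]

def belt_velocity(user_pos, prev_cmd):
    """Compute the 2D belt velocity command from the walker's tracked position."""
    desired = K_P * np.asarray(user_pos)       # pull back toward (0, 0)
    delta = desired - prev_cmd
    max_step = A_MAX * DT                      # acceleration limit per cycle
    step = np.clip(delta, -max_step, max_step)
    return prev_cmd + step

# Example: the walker has drifted 0.5 m forward; the belt ramps up smoothly.
cmd = np.zeros(2)
for _ in range(100):
    cmd = belt_velocity([0.5, 0.0], cmd)
print(cmd)   # approaches [0.2, 0.0] m/s without abrupt acceleration
```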
For more information click here.
The overall objective of Immersence was to enable people to freely act and interact in highly realistic virtual environments with their eyes, ears and hands. The keyword was multi-modal: the human senses should be integrated into a single experience, allowing comprehensive immersion.
Most existing systems treat the user merely as a passive observer. Whenever interaction with the virtual world is unavoidable, as in computer games, human action is restricted by basic devices that significantly compromise the feeling of ‘being there’.
The goal of Immersence was to change this very restrictive situation. Users of virtual environments (VEs) should be able to manipulate items of various shapes, sizes and textures, as well as interact with other users, including physical contact and joint operations on virtual objects.
To achieve this new level of immersion, the main focus of Immersence lay on the investigation of the tactile dimension, in order to catch up with the remarkable progress made in the fields of visual and auditory devices. The development of new signal-processing techniques was ultimately intended to lead to the synthesis of all sensory modalities into a single percept, allowing full multi-modal feedback in the planned virtual scenarios.
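As a purely illustrative aside (not taken from the project itself), a standard model for how two sensory estimates can be fused into a single percept is maximum-likelihood cue integration, which weights each modality by its reliability; the numbers below are hypothetical.

```python
# Minimal sketch of maximum-likelihood cue integration: two noisy estimates of
# the same property (e.g. visual and haptic object size) are combined by
# weighting each with its inverse variance. Illustration only, not the
# Immersence implementation; all values are made up.
def fuse(est_visual, var_visual, est_haptic, var_haptic):
    """Fuse two noisy estimates, weighting each by its reliability (1/variance)."""
    w_v = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_haptic)
    w_h = 1.0 - w_v
    fused_estimate = w_v * est_visual + w_h * est_haptic
    fused_variance = 1.0 / (1.0 / var_visual + 1.0 / var_haptic)
    return fused_estimate, fused_variance

# Example: vision says 5.0 cm (variance 0.04), touch says 5.4 cm (variance 0.16).
print(fuse(5.0, 0.04, 5.4, 0.16))   # (5.08, 0.032): the fused percept lies
                                    # closer to the more reliable visual cue.
```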
More information here.