
Levitation with localised tactile and audio feedback for mid-air interactions

Start Date: 01-01-2017

End Date: 31-12-2020

Ref: https://www.levitateproject.org/

Id: Levitate

CORDIS identification number: 207474

This project will be the first to create, prototype and evaluate a radically new human-computer interaction paradigm that empowers the unadorned user to reach into a new kind of display composed of levitating matter. This tangible display will allow users to see, feel, manipulate, and hear three-dimensional objects in space. Users can interact with the system in a walk-up-and-use manner without any user instrumentation. It will be the first system to achieve this, establishing a new paradigm for tangible displays.

We are now moving away from traditional human-computer interaction techniques like buttons, keyboards and mice towards touch (e.g., smartphones and multi-touch gestures) and touchless interactions (as with the Kinect and Leap Motion controllers). The limiting factor in these new interactions is, ironically, the lack of physicality and direct feedback. Both touch and touchless interfaces lack a controller or interface element that provides meaningful physical feedback. Smooth glass touchscreens provide little affordance for direct physical feedback, and multimodal feedback such as visual or auditory cues can be disconnected from the hands generating the gesture and touch input.

In this project, we propose a highly novel vision of bringing the physical interface to the user in mid-air. In our vision, the computer can control the existence, form, and appearance of complex levitating objects composed of "levitating particles". Users can reach into the levitating matter, feel it, manipulate it, and hear how they deform it, with all feedback originating from the levitating object's position in mid-air, as it would with objects in real life. This represents a step change for tangible displays: a completely dynamic and fully interactive levitating object that responds realistically to mid-air touch interaction.

We will draw on our understanding of acoustics to implement all of the components for this system in a radically new approach. In particular, we will draw on ultrasound beam-forming and manipulation techniques to create acoustic forces that levitate particles, and to provide directional audio cues by making ultrasound audible through non-linear demodulation in air; both techniques are sketched in the code examples at the end of this description. By using a phased array of ultrasound transducers, the team will create levitating objects that can be individually controlled. These objects will be given the affordances of physical objects using ultrasound-induced tactile feedback during user manipulations. We will then demonstrate that the levitating particles can each become sound sources through the use of parametric audio, with our ultrasound array serving as the carrier of the audible sound. We will also project visual content onto the objects to create a rich multimodal display levitated in three-dimensional space.

There are numerous applications for this advanced display technology in all aspects of human interaction with computers. For example, instead of having to reach for an iDrive dial in a car, users could simply reach out and have the dial created directly under their hand. Instead of controlling a virtual character on a TV screen when playing a tennis video game, players could hold a real physical racket in their hand and play with a ball made of levitating particles whose behaviour is controlled digitally.
Instead of interacting with a virtual representation of a protein behind a computer screen, scientists could gather around a physical representation of the protein in mid-air, reach into it to fold it in different ways, and draw other proteins closer to see, feel and hear how they interact. The flexible medium of floating particles could be used by artists to create new forms of digital interactive installations for public spaces. Finally, engineers could walk with their clients around virtual prototypes floating in mid-air, with both able to reach into the model and change it as they go.
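
To make the phased-array idea concrete, here is a minimal sketch of how per-transducer emission phases could be computed so that all waves arrive in phase at a chosen focal point, with a simple "twin trap" variant for levitation. The array geometry (16x16 planar, 10.5 mm pitch), the 40 kHz frequency, and the twin-trap construction are illustrative assumptions, not the project's actual hardware or algorithms.

```python
"""Minimal sketch of phased-array focusing for acoustic levitation.

Assumptions (not taken from the project text): a planar 16x16 array of
40 kHz transducers at 10.5 mm pitch, speed of sound 343 m/s, and a simple
'twin trap' built by adding a pi phase offset to one half of the array.
"""
import numpy as np

SPEED_OF_SOUND = 343.0      # m/s, in air at roughly 20 C
FREQUENCY = 40_000.0        # Hz, a typical ultrasound transducer frequency
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY
WAVENUMBER = 2.0 * np.pi / WAVELENGTH

def transducer_positions(n=16, pitch=0.0105):
    """Centres of an n x n planar array in the z = 0 plane (metres)."""
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs.ravel(), ys.ravel(), np.zeros(n * n)], axis=1)

def focus_phases(positions, focal_point):
    """Emission phase per transducer so all waves arrive in phase at the focus."""
    distances = np.linalg.norm(positions - focal_point, axis=1)
    return (-WAVENUMBER * distances) % (2.0 * np.pi)

def twin_trap_phases(positions, trap_centre):
    """Focus plus a pi offset on one half of the array: a simple levitation trap."""
    phases = focus_phases(positions, trap_centre)
    phases[positions[:, 0] > 0.0] += np.pi      # split the array along the x axis
    return phases % (2.0 * np.pi)

if __name__ == "__main__":
    array = transducer_positions()
    trap = np.array([0.0, 0.0, 0.10])           # trap point 10 cm above the array
    phases = twin_trap_phases(array, trap)
    print(f"{len(phases)} transducer phases, e.g. {phases[:4].round(3)} rad")
```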
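
The parametric audio step can likewise be illustrated by amplitude-modulating an audible waveform onto an ultrasonic carrier; the non-linearity of air then demodulates the envelope back into audible sound along the beam. The 40 kHz carrier, 192 kHz sample rate and modulation depth below are illustrative assumptions rather than the project's implementation.

```python
"""Minimal sketch of parametric audio: amplitude-modulating an audible
signal onto an ultrasonic carrier so that air's non-linearity demodulates
the envelope back into audible sound along the beam."""
import numpy as np

SAMPLE_RATE = 192_000        # Hz, high enough to represent a 40 kHz carrier
CARRIER_FREQ = 40_000.0      # Hz, assumed ultrasonic carrier frequency
MODULATION_DEPTH = 0.8       # 0..1, how strongly the audio shapes the carrier

def parametric_audio_drive(audio, sample_rate=SAMPLE_RATE):
    """Return the transducer drive signal for a mono audio waveform in [-1, 1]."""
    t = np.arange(len(audio)) / sample_rate
    carrier = np.sin(2.0 * np.pi * CARRIER_FREQ * t)
    envelope = 1.0 + MODULATION_DEPTH * audio   # classic double-sideband AM
    drive = envelope * carrier
    return drive / np.max(np.abs(drive))        # normalise to the DAC range

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE                 # one second of samples
    tone = 0.5 * np.sin(2.0 * np.pi * 1_000.0 * t)           # 1 kHz audible test tone
    drive = parametric_audio_drive(tone)
    print(drive.shape, drive.min().round(3), drive.max().round(3))
```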