Drones Make a Three-Dimensional UI for Programmable Matter
Computer user interfaces have evolved tremendously during the last few decades, from the humble punch card to early command-line interfaces, to today’s easy-to-use graphical user interfaces. Augmented reality (AR) and virtual reality (VR) interfaces exist as well, but by and large, most interfaces available today are two-dimensional and not especially interactive or intuitive. But what if there were a way to create an interactive, three-dimensional user interface that users could control with gestures, using bits of programmable matter that can be haptically manipulated in real time and in real space?
This intriguing idea isn’t new, and researchers from the Human Media Lab of Queen’s University in Canada are developing a platform that uses a swarm of cube-shaped nanocopters acting as a lattice of “interactive, touchable 3D graphics voxels.” Dubbed GridDrones, the system allows direct user interaction through a set of hand movements, letting people physically sculpt three-dimensional forms with these drone “voxels” — the three-dimensional version of a pixel. The idea is to give users the ability to create unsupported structures like arches, NURBS (non-uniform rational B-spline) surfaces, or even 3D animations that can be manipulated freely in three dimensions, rather than on a flat screen. Watch this demonstration of the research:
This new study builds upon BitDrones, the lab’s previous research into interactive 3D displays. To tackle some of the limitations of the prior work, the GridDrones system uses smaller drones and a better communication system, while also employing a lattice-like model that can deform and undergo spatial transformations in three dimensions.
“Unlike 3D printing materials, GridDrones do not require structural support as each element self-levitates to overcome gravity. Unlike 3D prints, the system is bi-directional: you can change the ‘print’ simply by picking up the pixels and re-arranging them,” said Human Media Lab director, study co-author and professor Roel Vertegaal. “This is an important first step towards robotic systems that render graphics as physical reality, rather than as just light. This means users will have a fully immersive experience without a head-mounted display, one that provides haptics for free.”
The team used 15 nano quadcopters that maintained their relative positions to one another with the help of a Vicon motion capture system. Each nanocopter represents a physical building block that can be manipulated through three different kinds of input: uni-manual touch to select single voxels; bi-manual touch to select groups of voxels and to rotate or transform the grid; and gestural inputs, such as the “point” gesture, which sends out a “3D ray” that intersects with voxels.
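The paper doesn’t spell out the ray-casting math, but the “point” gesture can be sketched as a standard ray-versus-sphere test: treat each drone-voxel as a small selection sphere and pick the ones the hand ray passes through. Everything here (the function name, the 15-centimeter selection radius) is an illustrative assumption, not the lab’s actual code.

```python
import numpy as np

def ray_select(origin, direction, voxel_positions, radius=0.15):
    """Return indices of voxels whose selection spheres a pointing ray hits.

    origin, direction: 3-vectors for the hand ray (direction need not be
    normalized). voxel_positions: (N, 3) array of drone centers. radius:
    assumed selection radius around each drone, in meters (hypothetical).
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # unit vector along the pointing direction
    rel = np.asarray(voxel_positions, dtype=float) - np.asarray(origin, dtype=float)
    t = rel @ d                       # distance along the ray to closest approach
    closest = rel - np.outer(t, d)    # perpendicular offset of each voxel from the ray
    dist = np.linalg.norm(closest, axis=1)
    hits = (t > 0) & (dist <= radius)  # in front of the hand and within the sphere
    return np.nonzero(hits)[0]
```

Pointing straight up the z-axis from the origin would select only the voxels stacked along that line, which is the behavior the demonstration video suggests.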
Besides these, the system supports many conventional interactions we are already familiar with, like double-clicking and lassoing. In addition, the team developed a smartphone app that lets users change the topographical relationship between drone-voxels using a touch slider; for instance, setting the vertical distance between voxels to a certain percentage, so that when one voxel is moved, the rest automatically reposition themselves to reflect that setting.
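One plausible reading of that slider behavior is a displacement that decays across the lattice by the chosen percentage: drag one voxel, and each neighbor follows by the ratio raised to its grid distance. This is a hypothetical rule for illustration (the function and the 1-D row are my simplification), not the system’s documented algorithm.

```python
def propagate_move(heights, moved_index, new_height, ratio):
    """Reposition a 1-D row of voxel heights after one voxel is dragged.

    Assumed rule: each neighbor follows the moved voxel by `ratio`
    (the slider's percentage, 0.0-1.0) raised to its grid distance,
    so the displacement falls off smoothly across the lattice.
    """
    delta = new_height - heights[moved_index]
    return [h + delta * ratio ** abs(i - moved_index)
            for i, h in enumerate(heights)]
```

With the slider at 50%, lifting the middle voxel of a flat row of three by one meter would raise its two neighbors by half a meter each, producing the soft deformation the article describes.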
A “Real Reality Interface”
While the current study is considered a “low-resolution” version that only uses 15 drones, according to the researchers, the GridDrones system can be easily scaled up to include many more drones.
“Future versions of the system will feature billions of drones that are so small that they will be able to cling together to create physical structures that are not discernible from real [3D] prints,” explained study co-author and professor Tim Merritt from Denmark’s Aalborg University. “This technology has the potential to ultimately displace virtual reality. The real advantage is that it is situated in the user’s real reality. That’s why we call it a ‘real reality interface.’”
Such “real reality interfaces” could offer full-scale prototyping capabilities for engineers, designers, and architects, as well as interactive educational tools for people of all ages. These three-dimensional interfaces could be extended further by incorporating brain-computer interfaces (BCIs), allowing users to manipulate physical voxels with their brain waves. It’s a provocative idea that could bring user interfaces out into our everyday lives, with control as easy as making a gesture, or thinking a thought.