The idea of an interactive universe, where objects and systems can be manipulated with a mere gesture, is a compelling one that we’ve seen played out again and again in popular culture and sci-fi films. Google’s Project Jacquard is one intriguing development, weaving interactivity into fabric itself and allowing technology to become more embedded in everyday objects. Now, a Canadian team of researchers at Queen’s University’s Human Media Lab has unveiled what they believe is a breakthrough in interactive 3D displays — swarms of hovering micro-drones that act as three-dimensional pixels of programmable matter.
A “real-reality” interface
Calling the platform BitDrones in their recently published paper, the researchers envision people using these nano-quadcopters as “self-levitating tangible building blocks” that can be manipulated in real time and in real space, as opposed to virtual reality platforms. Human-Computer Interaction (HCI) Professor Roel Vertegaal, who oversaw the project, explains:
BitDrones brings flying programmable matter closer to reality. It is a first step towards allowing people to interact with virtual 3D objects as real physical objects. We call this a Real Reality interface rather than a Virtual Reality interface. This is what distinguishes it from technologies such as Microsoft HoloLens and the Oculus Rift: you can actually touch these pixels, and see them without a headset.
Flying, programmable voxels
The team manufactured three types of BitDrones, each serving as a floating display of a different resolution. PixelDrones carry a single LED and a small dot matrix display, while ShapeDrones are encased in a meshed, 3D-printed frame and serve as basic building blocks for 3D models. DisplayDrones, on the other hand, come with a curved, flexible high-resolution touchscreen, in addition to a forward-facing video camera and an Android smartphone board.
All three variants of the BitDrone are outfitted with reflective markers that enable them to be tracked and positioned individually, using motion capture technology. This same technology is used to track markers on the user’s hands, allowing him or her to use gestures to ‘control’ these three-dimensional pixels, or “voxels,” using a custom application written in C#. Each drone represents a unit of interactive matter that can hover inside a predefined “interaction volume,” and can be used for input, or output, or both.
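The paper's actual control software is a custom C# application, which is not reproduced here. Purely as an illustration of the concepts the paragraph describes, here is a minimal Python sketch of a drone-as-voxel model: each drone holds a tracked position and an output state, and any commanded target is clamped to the predefined interaction volume. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionVolume:
    """A predefined axis-aligned box the drones are allowed to hover in."""
    min_corner: tuple  # (x, y, z) in meters
    max_corner: tuple

    def clamp(self, pos):
        # Keep each coordinate within [min, max] for that axis.
        return tuple(
            min(max(p, lo), hi)
            for p, lo, hi in zip(pos, self.min_corner, self.max_corner)
        )

@dataclass
class BitDrone:
    """One unit of interactive matter: a tracked position plus an output state."""
    drone_id: int
    position: tuple                       # latest motion-capture fix
    color: tuple = field(default=(255, 255, 255))  # e.g. a PixelDrone's LED

    def set_target(self, target, volume: InteractionVolume):
        """Command a move, constrained to the interaction volume."""
        self.position = volume.clamp(target)

# A gesture drags a drone toward a point outside the 2 m cube:
volume = InteractionVolume((0, 0, 0), (2, 2, 2))
drone = BitDrone(1, (1, 1, 1))
drone.set_target((3, 1, 0.5), volume)
# position is clamped to (2, 1, 0.5)
```

The clamp step stands in for the safety constraint the motion-capture system enforces; a real controller would also plan collision-free paths between drones.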
Full-room, interactive computers
The researchers envision a number of possible applications for the technology. For starters, users could explore their computer’s operating system in real, three-dimensional space, rather than on a screen, using PixelDrones as stand-ins for files and folders. Using swiping or pinching gestures similar to those used on smart devices, one could open, close, and access files in this refreshingly tactile fashion. Besides systems and information visualizations, the technology could potentially be used in “real-reality” 3D modeling, gaming, and the robotics industry.
“Simple atomic information can be displayed by a single drone, while more complex 3D data displays can be constructed using several drones, providing the rudiments for a voxel-based 3D modeling system capable of representing sparse 3D graphics in real reality,” writes the team in their paper.
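The “sparse 3D graphics” idea above can be illustrated with a short, hedged sketch (not code from the paper): a model is just the set of occupied voxel coordinates, so only those voxels need a drone, rather than one drone per cell of a dense grid. The function and variable names are assumptions for illustration.

```python
def assign_drones(occupied_voxels, drone_ids):
    """Map each occupied voxel (a grid coordinate) to one drone, if enough exist."""
    if len(drone_ids) < len(occupied_voxels):
        raise ValueError("not enough drones for this model")
    # Sort for a deterministic assignment order.
    return dict(zip(sorted(occupied_voxels), drone_ids))

# A three-voxel "L" shape needs only 3 drones, not a full dense grid:
model = {(0, 0, 0), (1, 0, 0), (0, 0, 1)}
plan = assign_drones(model, [101, 102, 103])
# e.g. {(0, 0, 0): 101, (0, 0, 1): 102, (1, 0, 0): 103}
```

A single drone displaying one value corresponds to the “simple atomic information” case; larger sparse sets of voxels build up the more complex 3D data displays the team describes.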
Science would benefit from the technology as well. For example, it could be used in medical imaging, or in three-dimensional, experimental models where chemical structures could be visualized and built in 3D, using ShapeDrones. Telepresence systems — which allow users to ‘visit’ and view a place remotely through a robotic avatar — could deploy DisplayDrones running Skype as a lightweight alternative to bulkier machines.
The use of drone swarm technology here is an interesting alternative approach in the emerging field of programmable matter, in contrast to 4D printed, self-transforming materials we’ve seen previously. But according to the team, the idea is nothing new, having been explored in the last fifty years under various names like claytronics, organic user interfaces, and radical atoms.
Of course, the team’s drone-based interface still has some room for improvement. Currently, their system consists of relatively unwieldy drones measuring 2.5 to 5 inches in size, but the plan is to scale up the system to coordinate thousands of smaller drones measuring no more than half an inch each, resulting in a future interface with “high-resolution” programmable matter.
If future development is successful, computer interfaces could someday shift from two-dimensional screens to whole-room, spatial interfaces built from nano-scale drone swarms, intuitively manipulated by hand in all three dimensions — a new, immersive experience that’s more visceral and, ultimately, less virtual.
Images: Human Media Lab
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: Real.