How Human Workers Can Control Robots from Their Homes — with Virtual Reality

With recent developments in robotics and artificial intelligence, it’s becoming clear that automation won’t just revolutionize factories. These technologies are also likely to disrupt highly skilled, white-collar industries like finance and medicine, raising questions about what to do if the robot takeover happens and massive job losses ensue: should governments enforce a robot tax, or roll out some kind of universal basic income?
It’s not clear how things will unfold, but there may be a middle way. Rather than being replaced outright, humans could stay part of the equation, working alongside collaborative robots or operating robots through virtual reality.
Yet these virtual-reality-based teleoperation systems are often specialized and therefore costly, which has hampered their development and deployment. They could be made more feasible by incorporating off-the-shelf commercial hardware, as researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are proposing with their Homunculus Model system, which employs an Oculus Rift VR headset to let a human virtually “get inside the head” of a robot and control it. Even more notable, the system can work over long distances, meaning blue-collar factory workers could potentially telecommute and control their robotic counterparts from home.
New Telerobotics Architecture
As the researchers describe in their paper, the system offers a new architecture for telerobotics that places the user in a “virtual reality control room” (VRCR) outfitted with various virtual sensor displays. This gives users the impression of being embedded in the robot’s head, seeing what it sees and telling it what to do. The researchers tested the system with Baxter, a humanoid robot manufactured by Rethink Robotics, paired with an Oculus Rift VR headset and hand controllers that map the user’s movements onto the robot’s.
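The paper doesn’t reproduce the control code, but the basic idea of linking the two is straightforward: each frame, the tracked pose of a hand controller is translated into a target pose for the corresponding robot gripper. Here is a minimal sketch of that mapping, assuming a simple one-to-one workspace correspondence; all names, frames and the scale factor are hypothetical, not CSAIL’s actual code:

```python
import numpy as np

WORKSPACE_SCALE = 1.0  # assume a 1:1 mapping between human and robot workspaces

def controller_to_robot_pose(controller_pos, controller_quat, vr_origin):
    """Map a tracked controller pose (VR play space) into the robot's base frame.

    controller_pos : (3,) controller position in the VR play space
    controller_quat: (4,) controller orientation quaternion (x, y, z, w)
    vr_origin      : (3,) point in the VR space that corresponds to the
                     gripper's home position on the robot
    """
    offset = np.asarray(controller_pos, dtype=float) - np.asarray(vr_origin, dtype=float)
    target_pos = offset * WORKSPACE_SCALE
    # In this simple 1:1 mapping the orientation passes through unchanged;
    # a real system would also rotate between the two coordinate conventions.
    return target_pos, controller_quat

if __name__ == "__main__":
    pos, quat = controller_to_robot_pose(
        controller_pos=[0.35, 1.10, -0.20],
        controller_quat=[0.0, 0.0, 0.0, 1.0],
        vr_origin=[0.0, 1.00, 0.0],
    )
    print("gripper target in robot base frame:", pos)
```

In practice, the resulting target pose would then be handed to the robot’s inverse-kinematics controller, which computes the joint angles needed to reach it.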
“A system like this could eventually help humans supervise robots from a distance,” said CSAIL’s Jeffrey Lipton, the paper’s lead author. “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”
Human Behind the Robot
The team’s system alludes to the so-called “homunculus model of mind” (from the Latin homunculus, “little man”), which imagines a little man, or immaterial self, sitting at the brain’s helm and controlling the body. As the researchers point out, this dualistic approach to the mind-body problem is a logical fallacy that falls short of explaining the origin of cognition and intelligence, but it is nevertheless apt for this particular case of putting a human in the robot’s “brain” and control room.
The team’s proposal addresses a number of problems that arise with other VR-based teleoperation methods. In the direct model, the user’s vision is directly coupled to the robot’s state, which can induce “simulator sickness,” a nauseating form of motion sickness that occurs when there is a delay between action and visual feedback.
In the cyber-physical model of teleoperation, by contrast, the user feels separate from the robot and interacts with virtual representations of the robot and its environment. The drawbacks of this approach are that it is much more data-intensive, since both the robot and the environment must be mapped, and that it requires a dedicated space for teleoperation.
In contrast, CSAIL’s technique sidesteps the latency problem by mapping the robot’s space into the virtual space and creating “virtual copies” with which the user interacts, giving the user the sense of occupying the same space as the robot at the same time. And instead of overloading the system with 3-D data reconstructed from the robot’s 2-D eye cameras, CSAIL’s model simply displays a 2-D image to each of the human user’s eyes, letting the brain fuse the pair into a 3-D image on its own. Using the hand controllers, users can interact with controls in the virtual space, prompting the robot to manipulate items and perform simple tasks such as picking things up, stapling wires and stacking blocks.
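In effect, each of the robot’s two cameras becomes one eye of the stereo display. A rough sketch of that passthrough loop, with entirely hypothetical capture and display APIs standing in for whatever the system actually uses:

```python
# Sketch of the stereo passthrough idea described above: each of the
# robot's two 2-D eye cameras feeds one eye of the headset, and the
# operator's own visual system does the depth fusion. The Camera and
# Headset classes are illustrative stand-ins, not a real driver API.

class Camera:
    """Stand-in for one of the robot's 2-D eye cameras."""
    def read_frame(self):
        # A real implementation would return the latest image received
        # from the robot, e.g. an RGB array streamed over the network.
        return b"raw-image-bytes"

class Headset:
    """Stand-in for the VR headset's per-eye display buffers."""
    def submit(self, left_frame, right_frame):
        # A real implementation would upload each frame as a texture
        # for the corresponding eye before the display's next refresh.
        pass

def passthrough_loop(left_cam, right_cam, headset, keep_running):
    # Note what is absent: no 3-D reconstruction, no depth maps. The
    # two 2-D streams are forwarded unmodified, one per eye.
    while keep_running():
        headset.submit(left_cam.read_frame(), right_cam.read_frame())
```

Because no geometry is reconstructed, the data cost is essentially that of two video streams, which also helps explain why operating the robot over an ordinary wireless connection is plausible.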
The results were impressive: users completed these simple tasks successfully 95 percent of the time, compared with 57 percent using other VR teleoperation systems.
Not surprisingly, the team also discovered that experienced gamers performed much better in the experiments. Even more remarkable was the ability to control the robot from hundreds of miles away, as the team demonstrated by successfully operating the robot, located at MIT, from a hotel room in Washington over the hotel’s wireless network.
There’s been a lot of hand-wringing about how automation will severely affect manufacturing and other blue-collar industries. But it need not be that way: with developments such as this, humans can be kept in the loop in a cost-effective way that could benefit both workers and the companies that employ them. That such work could potentially be carried out by unemployed gamers is another distinct advantage, allowing them to perform heavy-duty work with far less physical labor. The team now hopes to expand the research to explore how the system might be adapted to other commercial VR and robotics platforms.
Images: MIT CSAIL