
This Robot Is Learning from Humans through Virtual Reality

21 Dec 2017 11:00am

Robots seem to be infiltrating all sorts of places: industrial settings as factory robots, our offices as collaborative robots, and even the streets as rolling delivery machines and cargo-carrying bots. But despite these apparent advances, there are still many fine-motor tasks that robots can’t do as well as humans — yet.

Quickly learning how to manipulate and grasp a variety of differently shaped and sized objects is one of these challenges. Currently, it’s possible to program a robot to repeat a specific manipulation task, as long as there isn’t much variance or any complex hurdle, such as handling pliable ropes or wires. But this handicap may soon become a thing of the past, as one startup explores how artificially intelligent robots might be quickly trained by a human to perform a range of manipulation tasks, using off-the-shelf virtual reality (VR) equipment.

Founded by a team whose credentials include work at OpenAI and the University of California, Berkeley, the Emeryville, California-based startup Embodied Intelligence is using a form of deep learning called imitation learning to help robots learn through mimicry.

Learning Through Virtual Imitation

While this may sound like an easy thing for a human to do, it’s not so simple for a machine. For robotic manipulation, learning by observing demonstrations is possible, but those demonstrations need to be of high caliber. For instance, a human might physically guide a robot by pushing on its appendages, but this won’t work if those human arms are picked up by the robot’s visual sensors and confused with the target objects. Reinforcement learning, or learning through trial and error, is another option, but it requires careful design and a lot of training time. Teleoperation methods — where a machine is operated remotely through an interface — seem to work best, allowing high-quality demonstrations to be collected for training.
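The core idea of imitation learning from such demonstrations can be sketched in a few lines. This is a minimal illustration, not the company’s actual method: the paper’s system uses deep neural networks trained on camera images, whereas here a simple least-squares policy stands in, and all the data is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated teleoperation demos: each demo pairs an observation
# (e.g. an object's position) with the expert's demonstrated action.
true_mapping = rng.normal(size=(4, 2))    # the expert's (unknown) behavior
observations = rng.normal(size=(100, 4))  # 100 demo observations
actions = observations @ true_mapping     # the actions the human demonstrated

# "Behavioral cloning": fit policy weights W by supervised learning
# so that the policy's output obs @ W matches the demonstrated actions.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The learned policy then generalizes to a situation it has never seen.
new_obs = rng.normal(size=(1, 4))
predicted_action = new_obs @ W
error = np.abs(predicted_action - new_obs @ true_mapping).max()
print(f"max action error: {error:.2e}")
```

The point of the sketch is that once demonstrations exist as (observation, action) pairs, training the policy is ordinary supervised learning — which is why the quality of the demonstrations matters so much.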

The major obstacle to scaling up such an approach, however, is cost: robotic teleoperation systems can be quite expensive. So in a paper published on arXiv, the team described how they set out to build their own teleoperation system using a commercially available Vive headset and hand controllers, and a PR2 robot. The software was written in Unity, a 3D game engine that’s compatible with many VR brands.
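The teleoperation loop amounts to mapping each VR controller reading to a robot command while logging the result as training data. The sketch below is hypothetical — the names `teleop_step`, `Demo`, and the action fields are illustrative, and the team’s actual software was written in Unity, not Python.

```python
from dataclasses import dataclass, field

@dataclass
class Demo:
    """One recorded demonstration: a time series of (observation, action)."""
    frames: list = field(default_factory=list)

def teleop_step(vr_pose, trigger, camera_obs, demo):
    # The controller's pose becomes the commanded gripper pose, and the
    # trigger value becomes the commanded grip closure; the pair
    # (camera observation, action) is logged for later policy training.
    action = {"gripper_pose": vr_pose, "grip": trigger}
    demo.frames.append((camera_obs, action))
    return action

demo = Demo()
action = teleop_step(vr_pose=(0.1, 0.2, 0.3), trigger=0.8,
                     camera_obs="rgb_frame_0", demo=demo)
print(len(demo.frames))  # 1
```

Because the operator sees through the robot’s own sensors inside the headset, every logged frame is exactly the kind of observation the trained policy will later receive.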

The first-person view from inside Embodied Intelligence’s VR teleoperation interface, during a training demonstration.

Using this approach, the team found that they could train the robot to reach for a randomly placed object in a bin or on a table, and to align, push or insert various kinds of objects into place. Impressively, these experiments achieved accuracies of 80 to 90 percent, using human-led VR training demonstrations lasting less than an hour — a big improvement over other training methods.

“We can essentially teach a wide range of skills from just under thirty minutes of demonstration,” Embodied Intelligence CEO Peter Chen told The Verge. “This is not just teaching the robot a fixed trajectory. It’s teaching it to recognize where a ball is, pick it up, and place it in a location — in different scenarios.”

This could mean that robots could then be used in a wider range of industrial, commercial and consumer applications — robots that can assemble a wide range of objects, serve in retail stores, or help you with all manner of finicky chores at home, with only a bit of training that could be done by any layperson using widely available VR equipment and platforms.

Most importantly, the firm plans to develop and release one universal version of its software that can be used for all kinds of situations, rather than having to program robots specifically for each task.

“We bring software that we only have to write once, ahead of time, for all applications,” Embodied Intelligence’s Pieter Abbeel told IEEE Spectrum. “That’s a paradigm shift from needing to program for every specific task to programming once and then just doing data collection, either through demonstrations or reinforcement learning.”

Of course, to make those inroads, the system will need to improve its accuracy even further, especially for industrial use, where even a small margin of error can mean lost productivity at best, or potentially catastrophic breakdowns at worst. But once those hurdles are cleared, it may very well be a big step toward the fully automated future that we are either dreading or looking forward to.

 

Images: Embodied Intelligence

