A Self-Correcting Robot that’s Telepathically Controlled by the Human Brain

In an increasingly automated world, there are a number of possible ways to get robots and other machines to do our bidding. For instance, robots can be controlled by voice commands, but of course, that would mean these machines would need to be equipped with some kind of AI that’s proficient in natural language processing. But as many of us know from personal experience with the digital assistants on our smartphones, that approach can be unreliable and frustrating.
So scientists are exploring alternatives in robot control, such as brain-computer interfaces (BCIs), which create a direct communication link between the brain and an external machine. A team of researchers from the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University recently created such an interface that lets humans control a robot with their minds, using machine learning and a commercially available collaborative robot from Rethink Robotics named Baxter.
In the experiment outlined in the team’s paper, the co-bot Baxter was tasked with sorting spray paint cans and rolls of wire into two separate bins. The team’s interface involves the human subject wearing an electroencephalography (EEG) cap, which connects them to a closed feedback-loop system that includes the robot. The system lets a human alert the robot in real time, with a mere thought, whenever it makes a sorting error, even eliciting an “embarrassed” reaction from the robot, as seen in the video.
The system reads and acts on a certain kind of electrical activity in the brain called error-related potentials (ErrPs). This particular brain signal flares up when a person becomes aware of a mistake being made. These electrical impulses are picked up by the EEG cap and, within 10 to 30 milliseconds, are classified and decoded by machine learning algorithms into simple robot control commands that prompt Baxter to stop and correct itself.
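To make that pipeline a bit more concrete, here is a minimal sketch in Python of what a closed-loop ErrP detector of this general kind might look like. It is not the team’s actual code: the channel count, window length, feature extraction, the LDA classifier and the send_correction() stub are all illustrative assumptions, and the training data here is synthetic.

```python
# Minimal sketch of a closed-loop ErrP pipeline (illustrative only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

N_CHANNELS = 48       # EEG electrodes on the cap (assumed)
WINDOW_SAMPLES = 160  # ~800 ms of data at 200 Hz (assumed)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Flatten a (channels x samples) EEG window after simple downsampling."""
    return window[:, ::8].ravel()

# Train a classifier offline on labeled windows: 1 = ErrP present, 0 = no error.
# The training data below is synthetic; a real system would use recorded EEG.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, N_CHANNELS, WINDOW_SAMPLES))
y_train = rng.integers(0, 2, size=200)
clf = LinearDiscriminantAnalysis()
clf.fit(np.array([extract_features(w) for w in X_train]), y_train)

def send_correction() -> None:
    """Placeholder for the command that tells the robot to stop and switch bins."""
    print("ErrP detected: telling robot to correct its choice")

def on_robot_choice(window: np.ndarray) -> None:
    """Each time the robot commits to a choice, classify the next EEG window."""
    if clf.predict([extract_features(window)])[0] == 1:
        send_correction()

on_robot_choice(rng.normal(size=(N_CHANNELS, WINDOW_SAMPLES)))
```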
“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said CSAIL director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
The team’s approach has some distinct advantages over other existing brain-computer interfaces, which often require human operators to be trained to modulate their brain signals in a certain way in order to get a response from the system. One example is a human subject hooked up to a BCI, mentally visualizing a cursor moving toward a target on a computer screen. The system’s software interprets these brain waves and translates that thought into visible activity on the screen. One major disadvantage of this technique is that it can require a lot of concentration, constant visual stimuli and extra training. In contrast, the team’s aim is to develop a more intuitive method of control.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.”
To increase the robot’s response accuracy and to offset the inherent weakness of ErrP signals, the team also turned their attention to what they call “secondary errors.” These signals occur in the human operator’s brain when the robot does not respond correctly to the operator’s initial feedback and are much easier to identify than the initial error-related brain signals.
While this extra feature cannot yet be used in real time in this model, the team believes that once it is integrated into the system it will boost performance, improving accuracy to upwards of 90 percent and creating continuous communication between human and robot.
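As a rough illustration of how such a secondary-error check might sit on top of the primary feedback loop, the sketch below flips the robot’s choice when a first ErrP is detected and reverts it if a second one follows. The detect_errp() stub, the window sizes and the paint/wire labels are assumptions for illustration, not the team’s implementation.

```python
import numpy as np

def detect_errp(window: np.ndarray) -> bool:
    """Stand-in for an ErrP classifier (e.g. the LDA sketched earlier)."""
    return bool(window.mean() > 0)  # placeholder decision rule

def sort_with_feedback(first_window: np.ndarray, second_window: np.ndarray,
                       initial_choice: str) -> str:
    """Apply primary feedback, then use a secondary-error check to confirm it."""
    choice = initial_choice
    if detect_errp(first_window):         # operator disagreed with the robot
        choice = "wire" if choice == "paint" else "paint"
        if detect_errp(second_window):    # operator disagreed with the fix too
            choice = initial_choice       # revert: the correction was itself wrong
    return choice

rng = np.random.default_rng(1)
print(sort_with_feedback(rng.normal(size=(48, 160)),
                         rng.normal(size=(48, 160)), "paint"))
```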
The team also believes that the system could be refined beyond a binary, yes-or-no distinction to encompass multiple choices, since the ‘wrongness’ registered in an ErrP signal can be analyzed in relative gradations to arrive at the most ‘right’ answer or action among many.
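One hypothetical way such a graded reading could work is to score each candidate action by the strength of the ErrP it evokes and pick the least “wrong” one; the option labels and scores below are made-up classifier outputs, not results from the paper.

```python
import numpy as np

# Stronger ErrP response = the operator thinks that option is more "wrong",
# so the robot picks the option with the weakest error signal.
options = ["bin A", "bin B", "bin C"]
errp_strength = np.array([0.82, 0.17, 0.45])  # assumed classifier outputs
print("Robot should choose:", options[int(np.argmin(errp_strength))])
```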
Intuitive brain-computer interfaces such as this one hold much promise for robot-control systems that require little to no training to use. Such an easy-to-use, immediately responsive interface would be valuable in a factory setting, and it could also help people who use prostheses but have an illness or injury that prevents them from controlling their devices by voice. This kind of research shows that controlling a robot with your mind is not as far-fetched as it might once have sounded, and the technology could eventually find its way into our cars, homes and factories.
Images: CSAIL / Boston University