Autonomous Modular Robot Self-Transforms to Bypass Obstacles

Whether they are designed to perform jaw-dropping stunts or precision surgery, robots now come in all shapes and sizes, and some can even transform themselves to adapt to the situation at hand. The idea of modular, self-reconfiguring robots (MSRRs) isn't new; it has been kicking around for a few decades. However, getting such a robot to perform tasks fully autonomously in an unknown environment has been a major challenge for researchers in the field, and most prototypes have relied on human input or other workarounds to compensate for their limited autonomy.
Now, a team of engineers from Cornell University and the University of Pennsylvania has developed what they call a "centralized" system for perception, sensing and control, which allows their modular robot to collect data about its environment, solve problems and execute tasks autonomously by reactively reconfiguring itself as needed.
As the team explains in their paper, recently published in Science Robotics, the experiments used the SMORES-EP modular robot designed and built at the University of Pennsylvania. Each battery-powered, cube-like module has two rubber wheels and four degrees of freedom: pan, tilt, and the rotation of the left and right wheels. Each face of a module is outfitted with electro-permanent (EP) magnets that can be switched on or off, allowing it to attach to and detach from other modules. The robot is also equipped with a camera that provides visual feedback and a central processing unit that controls all of the modules.
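To make that hardware description concrete, here is a minimal Python sketch of a SMORES-EP-style module as a simple data structure, with its four degrees of freedom and switchable EP-magnet faces. This is purely illustrative; the class, field names and face labels are assumptions, not the team's actual software.

```python
# Illustrative sketch (not the team's code): a minimal data model of a
# SMORES-EP-style module with four degrees of freedom and switchable
# electro-permanent (EP) magnet faces. Names and face labels are assumptions.
from dataclasses import dataclass, field


@dataclass
class Module:
    module_id: int
    # Four degrees of freedom: pan, tilt, and the two drive wheels,
    # which also serve as rotating joints.
    pan_deg: float = 0.0
    tilt_deg: float = 0.0
    left_wheel_deg: float = 0.0
    right_wheel_deg: float = 0.0
    # EP magnets on the connector faces can be latched on or off,
    # letting modules attach to and detach from one another.
    faces: dict = field(default_factory=lambda: {
        "top": False, "bottom": False, "left": False, "right": False,
    })

    def set_face_magnet(self, face: str, on: bool) -> None:
        """Switch the EP magnet on one face (True = latched to a neighbor)."""
        self.faces[face] = on


if __name__ == "__main__":
    m = Module(module_id=1)
    m.set_face_magnet("left", True)   # attach to a neighboring module
    m.pan_deg = 45.0                  # command the pan joint
    print(m)
```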
As the team points out, the big step forward in this study is a centralized system that allows the robot to make and execute higher-level decisions in a flexible and adaptive way.
“MSRRs are by their nature mechanically distributed and, as a result, lend themselves naturally to distributed planning, sensing, and control,” wrote the team. “Most past systems have used entirely distributed frameworks. Our system was designed differently. It is distributed at the low level (hardware) but centralized at the high level (planning and perception), leveraging the advantages of both design paradigms.”
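The hybrid design the team describes can be sketched roughly as follows: a single centralized planner handles perception and high-level decision-making, while each module only executes the low-level commands it receives. All class and method names here are illustrative assumptions rather than the paper's actual API.

```python
# Rough sketch of a "centralized at the high level, distributed at the low
# level" architecture. Names are illustrative assumptions, not the paper's API.
class ModuleController:
    """Low-level, per-module control running on the distributed hardware."""
    def __init__(self, module_id):
        self.module_id = module_id

    def execute(self, command):
        # In reality this would drive motors and latch EP magnets.
        print(f"module {self.module_id}: executing {command}")


class CentralPlanner:
    """High-level, centralized perception and planning."""
    def __init__(self, modules):
        self.modules = modules

    def step(self, camera_frame):
        world_model = self.perceive(camera_frame)   # centralized perception
        plan = self.decide(world_model)             # centralized planning
        for module_id, command in plan.items():     # distributed execution
            self.modules[module_id].execute(command)

    def perceive(self, camera_frame):
        return {"obstacle_ahead": False}            # placeholder world model

    def decide(self, world_model):
        return {m: "drive_forward" for m in self.modules}


modules = {i: ModuleController(i) for i in range(4)}
planner = CentralPlanner(modules)
planner.step(camera_frame=None)
```

The appeal of this split, as the quote suggests, is that one process can reason over data from the whole cluster while the individual modules stay simple.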
In the team's experiments, the robot was assigned three different tasks to complete. In one, it had to pick up an object and place it at a designated spot. To accomplish this, the robot had to explore, map and navigate an unknown environment in order to find the object. When it sensed that the object sat in a narrow space, its centralized planning system selected the appropriate behavior from a software library; in this case, that meant transforming from its original, scorpion-like shape into one with a longer arm that could reach into the crevice and retrieve the object with its magnets. Once the robot had the object in its grasp, it navigated to the designated drop-off point, marked by a blue square, and unloaded its cargo. In other tests, the robot was tasked with climbing stairs to reach its goal, or with placing a postage stamp on a spot overhead, all of which required it to gather data and then plan the steps needed to get the job done. A rough sketch of this reactive behavior selection follows below.
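The reactive reconfiguration described above can be thought of as a lookup into a behavior library keyed by what the robot perceives about its surroundings. The sketch below illustrates that idea; the library entries and configuration names are invented for illustration and are not the labels used in the paper.

```python
# Simplified sketch of reactive behavior selection: characterize the
# environment, look up a matching (configuration, behavior) pair in a small
# library, and reconfigure before acting. Labels are invented for illustration.
BEHAVIOR_LIBRARY = {
    # environment label -> (configuration to assume, behavior to run)
    "open_floor":   ("driving_config", "drive_and_pick_up"),
    "narrow_space": ("long_arm_config", "reach_in_and_grasp"),
    "high_ledge":   ("tall_config", "place_object_overhead"),
    "stairs":       ("climbing_config", "climb_stairs"),
}


def react_to_environment(environment_label, current_config):
    """Pick a configuration and behavior; reconfigure only if needed."""
    config, behavior = BEHAVIOR_LIBRARY[environment_label]
    actions = []
    if config != current_config:
        actions.append(f"reconfigure: {current_config} -> {config}")
    actions.append(f"run behavior: {behavior}")
    return actions


# Example: the object turns out to sit in a narrow gap, so the robot
# transforms from its driving shape into a long-armed one before grasping.
for step in react_to_environment("narrow_space", current_config="driving_config"):
    print(step)
```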
Of course, there are kinks to work out with a centralized approach like this, and the study's authors note that, for the purposes of the experiment, the test environment and the robot's library of behaviors had to be kept fairly limited for the robot to succeed at its tasks. In a more open-ended environment, and with behaviors that run open loop, the robot would likely make more errors. Nevertheless, the work presents an interesting approach that combines the advantages of centralized and distributed architectures, and it may someday help us design robots that are not only truly autonomous in how they solve problems but also take full advantage of their modularity.
“Future systems could be made more robust by introducing more feedback from low-level components to high-level decision-making processes and by incorporating existing high-level failure-recovery frameworks,” suggested the team. “Distributed repair strategies could also be explored, to replace malfunctioning modules with nearby working ones on the fly.”
Images: Cornell University and University of Pennsylvania