Machine Learning

AI Algorithm Automatically ‘Tunes’ Prosthetics Within Minutes

8 Feb 2019 1:43pm

Prosthetics have been around for thousands of years, with some of the earliest recorded examples, iron legs and wooden feet, found in ancient India and Greece. Today, artificial limbs are far more refined in design, but one big problem remains: it is a time-consuming process for amputees to get used to moving with their new limbs. Generally, a patient with a prosthesis must visit a clinic in person so that a healthcare practitioner can manually ‘tune’ the device for that individual, a procedure that can take hours.

But according to a team of researchers from North Carolina State University, the University of North Carolina and Arizona State University, that may change with some help from reinforcement learning algorithms. Such algorithms, best explained as the machine version of learning by trial and error, have shown up in AI research projects ranging from machines that learn by automatically extracting data from the web to machines that achieve superhuman mastery of chess and other games in a matter of hours.

In this particular work, the researchers demonstrated that the tuning process can be largely automated, so that a patient can be walking comfortably on a flat surface with their prosthesis within 10 minutes. The team’s paper, titled “Online Reinforcement Learning Control for the Personalization of a Robotic Knee Prosthesis,” was recently published in IEEE Transactions on Cybernetics. It describes the new tuning system, which can modify 12 different control parameters governing prosthesis dynamics during the bipedal gait cycle: the locomotive sequence that starts when one foot touches the ground and ends when the same foot touches the ground again.

“We begin by giving a patient a powered prosthetic knee with a randomly selected set of parameters,” said paper co-author Helen Huang. “We then have the patient begin walking, under controlled circumstances.

“Data on the device and the patient’s gait are collected via a suite of sensors in the device,” added Huang. “A computer model adapts parameters on the device and compares the patient’s gait to the profile of a normal walking gait in real time. The model can tell which parameter settings improve performance and which settings impair performance. Using reinforcement learning, the computational model can quickly identify the set of parameters that allows the patient to walk normally. Existing approaches, relying on trained clinicians, can take half a day.”
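The loop Huang describes — start from random parameters, measure the gait, keep adjustments that help and discard those that hurt — can be sketched as a toy trial-and-error tuner. Everything in this snippet is hypothetical: the 12-element parameter vector, the `gait_error` stand-in for the sensor-based gait comparison, and the hill-climbing update. The actual system uses an online reinforcement learning controller, not this simplified procedure.

```python
import random

def gait_error(params, target):
    # Toy stand-in for the sensor-derived comparison between the
    # patient's gait and a normal walking profile: squared distance
    # between the current parameters and a target profile.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def tune(params, target, iters=200, step=0.1, seed=0):
    """Trial-and-error tuning: perturb one parameter at a time and
    keep the change only if the measured gait error improves."""
    rng = random.Random(seed)
    best = gait_error(params, target)
    for _ in range(iters):
        i = rng.randrange(len(params))      # pick one of the 12 parameters
        delta = rng.choice((-step, step))
        params[i] += delta
        err = gait_error(params, target)
        if err < best:
            best = err                      # improvement: keep the adjustment
        else:
            params[i] -= delta              # regression: revert
    return params, best

# Begin with a randomly selected parameter set, as the researchers describe.
target = [0.5] * 12                         # hypothetical "normal gait" profile
start = [random.Random(1).uniform(0, 1) for _ in range(12)]
tuned, err = tune(list(start), target)
```

The point of the sketch is the feedback structure, not the update rule: any controller that scores each parameter change against a reference gait in real time can converge far faster than a clinician adjusting the same 12 knobs by hand.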

Dynamic co-adaptation

This process of “co-adaptation” between an artificial limb and its human user can be complex, changing dynamically over the course of a day. The new tuning process covers 12 parameters, such as joint stiffness and the allowable range of motion, with a focus on data patterns that indicate stable walking. In addition, as the patient walks, the tuning algorithm can be trained to recognize patterns that suggest a possible fall, helping the user avoid one.
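The fall-pattern idea can be illustrated with a deliberately simple anomaly check: flag any gait cycle whose measured features stray too far from the reference profile. The feature vectors, threshold, and function names here are all hypothetical; the paper's approach learns such patterns rather than hard-coding a threshold.

```python
def gait_deviation(sample, reference):
    """Worst-case deviation between one gait cycle's measured features
    and the reference walking profile (hypothetical feature vectors)."""
    return max(abs(s - r) for s, r in zip(sample, reference))

def flag_instability(samples, reference, threshold=0.3):
    """Return the indices of gait cycles whose deviation exceeds a
    threshold: a toy stand-in for learned fall-risk patterns."""
    return [i for i, s in enumerate(samples)
            if gait_deviation(s, reference) > threshold]

reference = [0.5, 0.5, 0.5]                 # hypothetical stable-gait features
samples = [[0.52, 0.48, 0.50],              # near the reference: stable
           [0.90, 0.10, 0.50],              # large deviation: flagged
           [0.49, 0.51, 0.52]]
flagged = flag_instability(samples, reference)  # → [1]
```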

Nevertheless, the project did face some challenges, one of them being the relatively small dataset that was gleaned from human patients, who could only walk for so long to generate data to train the computer model.

So far, the team has tested its tuning algorithm only on patients walking on flat surfaces. The next step would be to develop the algorithm further for prosthetics wearers to walk up and down stairs, as well as to create a wireless variant of the system so that additional tuning sessions can be done over time by the patient, in the comfort of their own home. Testing the algorithm’s safety in order to minimize falls would also be important.

Moreover, the team hopes to find a way to let users better incorporate their somatic preferences into the system: a user might find one gait more comfortable than another, regardless of which the algorithm deems the more stable walking pattern. In any case, such research shows that it may well be possible to quickly and seamlessly integrate human and robotic elements, saving prosthetics wearers time and money.

Images: Heidi Agostini; North Carolina State University, University of North Carolina and Arizona State University.
