Abstract
This paper introduces a generalized control method for multi-degree-of-freedom (DoF) devices designed to assist people with limited motion capabilities in their daily activities. The challenge lies in finding the most suitable strategy for the control interface to map the user's motions, which lie in a low-dimensional space, onto complex robotic assistive devices such as prostheses, supernumerary limbs, and even remote robotic avatars. The goal is a system that integrates the human and the robot into a single unit that moves to reach the targets chosen by the human while autonomously reducing the user's effort and discomfort. We present a framework for controlling general multi-DoF assistive systems that translates user-performed compensatory motions into the robot commands needed to reach targets while canceling or reducing the compensation. The framework extends from prostheses with any number of DoFs up to full robotic avatars, regarded here as a kind of "whole-body prosthesis": the person experiences the robot as an artificial extension of their own body, with sensory-motor integration but no physical link. We validated and applied this control strategy in tests spanning simulated scenarios and real-world trials, involving a virtual twin of the robotic parts (prosthesis and robot) and a physical humanoid avatar.