Abstract
1. Introduction
One of the major challenges of robotics is to generate complex motor behavior that can match that of humans (Billard and Kragic, 2019). Numerous approaches have been developed to address this problem, including applied nonlinear control (Chung and Slotine, 2009; Slotine and Li, 1991), optimization-based approaches (Kuindersma et al., 2016; Posa et al., 2014), and machine learning algorithms (Haarnoja et al., 2018; Lillicrap et al., 2015; Schulman et al., 2015).
Among these methods, several advances have been made based on “motor primitives” (Hogan and Sternad, 2012; Ijspeert et al., 2013; Schaal, 1999). The fundamental concept originates from human motor control research, where complex motor behavior of biological systems appears to be generated by a combination of fundamental building blocks known as motor primitives (Bizzi and Mussa-Ivaldi, 2017; Flash and Hochner, 2005; Hogan and Sternad, 2012; Mussa-Ivaldi and Bizzi, 2000; Wolpert et al., 2001).
The concept of control based on motor primitives dates back at least a century (Sherrington, 1906), with a number of subsequent experiments providing support for its existence in biological systems. Sherrington was one of the first to suggest “reflex” as a fundamental element of complex motor behavior (Elliott et al., 2001; Sherrington, 1906). Sherrington proposed that reflexes can be treated as basic units of motor behavior that when chained together produce more complex movements (Clower, 1998). First formalized by Bernstein (Bernstein, 1935; Latash, 2021), “synergies” have also been suggested as a motor primitive to account for the simultaneous motion of multiple joints or activation of multiple muscles (Giszter et al., 1993; d’Avella et al., 2003; Giszter and Hart, 2013).
Along with reflex and synergy, kinematic motor primitives (Giszter, 2015) such as “rhythmic” and “discrete” movements have also been suggested as motor primitives (Hogan and Sternad, 2007, 2012; Huh and Sejnowski, 2015; Park et al., 2017; Schaal et al., 2004; Viviani and Flash, 1995). Rhythmic movements (e.g., locomotion) are phylogenetically old motor behaviors found in most biological species (Ronsse et al., 2009; Schaal et al., 2004). Central Pattern Generators (Brown, 1911, 1912; Marder and Bucher, 2001), which are specialized neural circuits for generating rhythmic motor patterns, have been identified in biological systems (Hogan and Sternad, 2012). In comparison, discrete movements (e.g., goal-directed reaching movements) are phylogenetically young motor behaviors, particularly observed in primates with developed upper extremities (Ronsse et al., 2009). Neural imaging studies have effectively ruled out the hypothesis that rhythmic (arm) movements result from the concatenation of discrete movements (Schaal et al., 2004), further supporting that rhythmic and discrete movements constitute distinct classes of primitives.
Recently, growing evidence has emerged that the neural circuits responsible for maintaining posture are distinct from those that control movement. Hence, “stable postures” may be considered a distinct class of motor primitives (Jayasinghe et al., 2022; Shadmehr, 2017).
Motor primitives have also been applied to robotics. Two distinct approaches exist: Dynamic Movement Primitives (DMPs) (Schaal, 2006; Ijspeert et al., 2013; Saveriano et al., 2023) and Elementary Dynamic Actions (EDAs) (Hogan, 2017; Hogan and Sternad, 2012, 2013). The key idea of these approaches is to formulate motor primitives as “attractors” (Hogan and Sternad, 2012; Ijspeert et al., 2013; Saveriano et al., 2023). An attractor is a prominent feature of nonlinear dynamical systems, defined as a set of states toward which the system tends to evolve. Its type ranges from relatively simple ones such as (stable) “point attractors” and (stable) “limit cycles,” to “strange attractors” (Strogatz, 2018) such as the “Lorenz attractor” (Lorenz, 1963), “Rössler attractor” (Rössler, 1976), and others (Sprott, 2014; Tam et al., 2008).
One of the key benefits of using motor primitives is that it enables highly dynamic behavior of the robot with minimal high-level intervention (Hogan and Sternad, 2012). As a result, the complexity of the control problem can be significantly reduced. For instance, by formulating discrete (respectively rhythmic) movement as a stable point attractor (respectively limit cycle), the problem of generating the movement reduces to learning the parameters of the corresponding attractor (Schaal et al., 2007). Another important consequence is that it provides a modular structure of the controller. By treating motor primitives as basic “modules,” learning motor skills happens at the level of modules, which provides adaptability and flexibility for robot control.
Since DMPs and EDAs stem from the theory of motor primitives, both approaches share the same philosophy. Nevertheless, significant differences exist, and their implementations diverge accordingly. In the opinion of the authors, this has not yet been sufficiently emphasized. An in-depth review that elucidates the similarities and differences between the two approaches may be beneficial to the robotics community. Furthermore, a method that integrates the strengths of both motor-primitive approaches could be beneficial to solve a wide range of control tasks.
In this paper, we provide a comprehensive review of motor primitives in robotics, focusing specifically on the two distinct approaches—DMPs and EDAs. We first delineate the similarities and differences of both approaches by presenting nine extensively used robotic control examples (Section 3).
We show that (Table 1):
• Both approaches use motor primitives as basic building blocks to parameterize the controller (Section 2). DMPs consist of a canonical system, nonlinear forcing terms, and transformation systems. EDAs consist of submovements, oscillations, and mechanical impedances.
• For torque-controlled robots, DMPs require an inverse dynamics model of the robot, whereas EDAs do not impose this requirement (Section 3.1).
• With an inverse dynamics model, DMPs can achieve perfect tracking, both in task-space and joint-space (Section 3.2, 3.3). Imitation Learning enables DMPs to learn and track trajectories of arbitrary complexity (Section 2.1.4, 5.2.2). Online trajectory modulation of DMPs enables achieving additional control objectives such as obstacle avoidance (Section 3.5), thereby providing advantages over spline methods. For tracking control with EDAs, an additional method for calculating an appropriate virtual trajectory and the mechanical impedance to which it is connected (Section 2.2.4) is required (Section 2.2, 3.2, 3.3).
• To control the position and/or orientation of the robot’s end-effector, DMPs require additional control methods to manage kinematic singularity and kinematic redundancy (Section 3.3, 3.9, 3.10). In contrast, for EDAs, stability near (and even at) kinematic singularity can be ensured (Section 3.3). Kinematic redundancy can be managed without solving the inverse kinematics (Section 3.9, 3.10).
• Both approaches provide a modular framework for robot control. However, the extent of modularity and its practical implications differ between the two approaches. A clear distinction appears when combining multiple movements. For DMPs, discrete and rhythmic movements are represented by different DMPs (Section 2.1.1, 2.1.2). Hence, the two different DMPs cannot be directly superimposed to generate a combination of discrete and rhythmic movements (Section 3.7). Multiple discrete movements are generated by modifying the goal position of the previous movement (Section 3.8). While the weights of the nonlinear forcing terms learned from Imitation Learning (Section 2.1.4) can be reused, the weights of different DMPs cannot be simply combined.
• For DMPs, a low-gain PD controller is superimposed to manage uncertainty and physical contact (Section 3.1). EDAs include mechanical impedance as a separate primitive to manage physical interaction (Section 2.2.3). The dynamics of physical interaction can be controlled by modulating mechanical impedance (Section 3.4).
Table 1. Summary of the main differences between dynamic movement primitives (DMPs) and elementary dynamic actions (EDAs). *A map from the trajectory generated by a DMP to a torque command is required (Section 3.1). **Methods for managing kinematic singularity (Section 3.3) and redundancy (Section 3.9, 3.10) must be included.
Once a detailed comparison has been made, we next show how DMPs and EDAs may be combined to leverage the best of both approaches (Section 4). Two major implementation examples on a KUKA LBR iiwa are provided.
2. Theory
In this Section, we provide an overview of DMPs and EDAs. For simplicity, we consider a system with a single DOF. A generalization to systems with multiple DOFs is presented in Section 3.
2.1. Dynamic movement primitives
DMPs, introduced by Schaal (1999, 2006); Ijspeert et al. (2013), parameterize the controller using the following three elements (Saveriano et al., 2023): a canonical system (Section 2.1.1), a nonlinear forcing term (Section 2.1.2), and a transformation system (Section 2.1.3). To generate discrete and rhythmic movements, two distinct definitions exist for the canonical system and nonlinear forcing term. For clarification, labels “Discrete” and “Rhythmic” are added next to the equations.
2.1.1. Canonical system
A canonical system
For discrete movements, the canonical system is exponentially convergent to 0 with a closed-form solution
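As a concrete sketch of this exponential convergence, the widely used first-order form of the discrete canonical system can be integrated numerically and checked against its closed-form solution. The parameter names and values below (`alpha_s`, `tau`) are illustrative assumptions, not necessarily the paper's notation.

```python
import numpy as np

# Commonly assumed discrete canonical system: tau * ds/dt = -alpha_s * s.
# alpha_s, tau, dt, and T are illustrative values.
alpha_s, tau = 4.0, 1.0          # convergence rate, temporal scaling
dt, T = 0.001, 2.0               # Euler step, duration
t = np.arange(0.0, T, dt)

s = np.empty_like(t)
s[0] = 1.0                       # canonical state initialized at 1
for k in range(1, len(t)):
    s[k] = s[k-1] + dt * (-alpha_s * s[k-1] / tau)

# Matches the closed-form solution s(t) = exp(-alpha_s * t / tau)
assert np.allclose(s, np.exp(-alpha_s * t / tau), atol=1e-2)
```

The canonical state decays monotonically to 0, acting as a phase variable that replaces explicit time dependence.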
2.1.2. Nonlinear forcing term
A nonlinear forcing term
The weights of the nonlinear forcing term
2.1.3. Transformation system
The nonlinear forcing term
While any positive values of
Using the canonical system
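A minimal end-to-end sketch of a discrete DMP combines the canonical and transformation systems. The second-order form τż = α_z(β_z(g − y) − z) + f(s), τẏ = z is a standard choice; all names and gain values below are assumptions for illustration.

```python
import numpy as np

# A minimal discrete DMP rollout; alpha_z, beta_z, and alpha_s are illustrative.
def rollout_dmp(y0, g, tau=1.0, alpha_z=25.0, beta_z=25.0 / 4, dt=0.001, T=3.0, f=None):
    """Integrate canonical + transformation systems with forcing term f(s)."""
    alpha_s = 4.0
    n = int(T / dt)
    y, z, s = y0, 0.0, 1.0
    ys = np.empty(n)
    for k in range(n):
        fs = 0.0 if f is None else f(s)
        # Transformation system: tau*dz = alpha_z*(beta_z*(g - y) - z) + f(s)
        z += dt * (alpha_z * (beta_z * (g - y) - z) + fs) / tau
        y += dt * z / tau
        s += dt * (-alpha_s * s) / tau      # canonical system
        ys[k] = y
    return ys

traj = rollout_dmp(y0=0.0, g=1.0)
# With zero forcing term the system is a stable point attractor at g.
assert abs(traj[-1] - 1.0) < 1e-3
```

With f ≡ 0 the transformation system is a critically damped point attractor; the forcing term shapes the transient without affecting convergence to the goal.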
2.1.4. Imitation Learning
If the nonlinear forcing term is zero, that is,
For this, one prominent application of DMPs is “Imitation Learning,” also called “Learning from Demonstration” (Ijspeert et al., 2001, 2002, 2013; Schaal, 1999). Let
If the analytic solution of
The best-fit weight
The elements of
Along with Locally Weighted Regression, one can also use Linear Least-Squares Regression to find the best-fit weights (Saveriano et al., 2019, 2023; Ude et al., 2014).
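A sketch of the linear least-squares alternative: assuming Gaussian basis functions ψ_i(s) and a forcing term of the (commonly used) form f(s) = s · Σᵢwᵢψᵢ(s) / Σᵢψᵢ(s), the weights solve an ordinary least-squares problem. Function names and basis parameters are illustrative.

```python
import numpy as np

# Fit forcing-term weights by linear least squares (names are illustrative).
def fit_weights(f_target, s, centers, widths):
    """Basis: psi_i(s) = exp(-h_i (s - c_i)^2); f(s) = s * sum(w_i psi_i) / sum(psi_i)."""
    Psi = np.exp(-widths * (s[:, None] - centers[None, :]) ** 2)
    A = s[:, None] * Psi / Psi.sum(axis=1, keepdims=True)   # design matrix
    w, *_ = np.linalg.lstsq(A, f_target, rcond=None)
    return w

# Sanity check: recover the weights of a known forcing term.
s = np.linspace(1.0, 0.01, 200)
centers = np.linspace(1.0, 0.01, 10)
widths = 50.0 * np.ones(10)
w_true = np.random.default_rng(0).normal(size=10)
Psi = np.exp(-widths * (s[:, None] - centers[None, :]) ** 2)
f_demo = (s[:, None] * Psi / Psi.sum(axis=1, keepdims=True)) @ w_true
w_fit = fit_weights(f_demo, s, centers, widths)
assert np.allclose(w_fit, w_true, atol=1e-5)
```

In practice `f_target` is computed from the demonstrated trajectory by inverting the transformation system, then regressed against the canonical state.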
For Imitation Learning of discrete movements, the goal
With these best-fit weights
One might wonder why spline methods are not used to derive
If the
The batch regression method assumes a predefined number of basis functions
Note that Imitation Learning for joint trajectories is easily scalable to high DOF systems (Atkeson et al., 2000; Ijspeert et al., 2002). For an
2.2. Elementary dynamic actions
EDAs, introduced by Hogan and Sternad (2012, 2013), consist of (at least) three distinct classes of primitives: submovements (Section 2.2.1) and oscillations (Section 2.2.2) as kinematic primitives, and mechanical impedances as interaction primitives (Section 2.2.3) (Figure 1).
Figure 1. The three primitives of Elementary Dynamic Actions (EDAs). Submovements and oscillations correspond to kinematic primitives, and mechanical impedances manage physical interaction.
2.2.1. Submovements
A submovement
Submovements model discrete motions, and therefore
Given an initial condition,
Note that
As discussed in Hogan and Sternad (2012), the definition of a submovement serves not only to provide a detailed mathematical formulation for practical application, but also to account for observable human motor behavior. Prior studies have shown that planar reaching movements of unimpaired subjects followed a highly stereotyped unimodal speed profile (Atkeson and Hollerbach, 1985; Flash and Henis, 1991; Hogan and Flash, 1987; Park et al., 2017; Rohrer et al., 2002, 2004). The mathematical definition of submovements (equation (7)) accounts for these observable motor behaviors, while also enabling their use to generate goal-directed discrete movement.
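As a hypothetical illustration of such a stereotyped unimodal speed profile, a minimum-jerk submovement can be sketched as follows; this particular polynomial is one common choice in the motor-control literature and is not claimed to be the paper's equation (7).

```python
import numpy as np

# A minimum-jerk submovement: one common unimodal profile (an illustrative
# assumption, not necessarily the paper's equation (7)).
def min_jerk(t, x0, xf, T):
    """Position along a minimum-jerk reach from x0 to xf over duration T."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

t = np.linspace(0.0, 1.0, 1001)
x = min_jerk(t, 0.0, 1.0, 1.0)
v = np.gradient(x, t)
# The speed profile is unimodal and peaks at mid-movement.
assert abs(t[np.argmax(v)] - 0.5) < 1e-2
assert abs(v.max() - 1.875) < 1e-2      # peak speed = (15/8) * (xf - x0) / T
```

Submovements of this kind start and end at rest, which makes them natural building blocks for goal-directed discrete movement.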
2.2.2. Oscillations
An oscillation
Note that this definition of oscillation can be too strict and the definition of oscillation can be expanded to almost-periodic functions (Hogan and Sternad, 2012). For our purposes, it is sufficient to think of an oscillation as a periodic function. Compared to submovements, oscillations model rhythmic and repetitive motions.
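A minimal sketch of an oscillation primitive as a periodic virtual trajectory; the sinusoidal form and all parameter values are illustrative assumptions.

```python
import numpy as np

# An oscillation as a periodic virtual trajectory (illustrative parameters).
def oscillation(t, amplitude=0.5, omega=2.0 * np.pi, offset=0.0):
    return offset + amplitude * np.sin(omega * t)

t = np.linspace(0.0, 2.0, 2001)
x0 = oscillation(t)
# Periodicity: x0(t) = x0(t + T) with period T = 2*pi/omega = 1 s (1000 samples).
assert np.allclose(x0[:1000], x0[1000:2000], atol=1e-9)
```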
2.2.3. Mechanical impedances
Mechanical impedance
The impedance operator of equation (9) can denote both a map from joint displacement to torque or a map from end-effector (generalized) displacement to (generalized) force. The former impedance operator is often referred to as “joint-space impedance,” and the latter is often referred to as “task-space impedance.” For task-space impedance, both translational (Hogan, 1985) and rotational displacement (Caccavale et al., 1998) of the end-effector can be considered separately. The former is referred to as “position task-space impedance,” and the latter is referred to as “orientation task-space impedance.”
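A first-order joint-space impedance can be sketched as a spring-damper map from joint displacement (and velocity) to torque; the matrix values and function name below are illustrative.

```python
import numpy as np

# First-order joint-space impedance (illustrative gains):
# tau = Kq (q0 - q) + Bq (dq0 - dq)
def joint_impedance_torque(q, dq, q0, dq0, Kq, Bq):
    """Map joint displacement and velocity error to joint torque."""
    return Kq @ (q0 - q) + Bq @ (dq0 - dq)

Kq = np.diag([50.0, 30.0])              # joint stiffness [Nm/rad]
Bq = np.diag([5.0, 3.0])                # joint damping [Nm s/rad]
q, dq = np.array([0.2, -0.1]), np.zeros(2)
q0, dq0 = np.array([0.5, 0.0]), np.zeros(2)
tau = joint_impedance_torque(q, dq, q0, dq0, Kq, Bq)
assert np.allclose(tau, [15.0, 3.0])
```

A task-space impedance has the same structure, with end-effector displacement in place of joint displacement and force in place of torque.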
Along with the kinematic primitives, that is, submovements and oscillations, EDAs include mechanical impedance as a distinct primitive to manage physical interaction (Dietrich and Hogan, 2022; Hogan, 2017, 2022; Hogan and Buerger, 2018). The dynamics of physical interaction can be controlled by modulating mechanical impedance. For instance, tactile exploration and manipulation of fragile objects should evoke the use of low stiffness, while tasks such as drilling a hole on a surface require high stiffness for object stabilization (Hogan and Buerger, 2018).
Under the assumption that the environment is an admittance, mechanical impedances can be linearly superimposed even though each mechanical impedance is a nonlinear operator. This is the superposition principle of mechanical impedances (Hogan, 1985, 2017):
This principle provides a modular framework for robot control that can simplify multiple control tasks, for example, obstacle avoidance (Section 3.5) or managing kinematic redundancy (Section 3.9, 3.10).
Note that the impedance operators of equation (10) can include transformation maps. For instance, to superimpose a joint-space impedance and a task-space impedance at the torque level, the task-space impedance is multiplied by a Jacobian transpose to map from end-effector (generalized) force to joint torques (Section 3.1.2).
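The torque-level superposition just described can be sketched as follows, with a hypothetical 2-DOF planar Jacobian; names and numbers are assumptions for illustration.

```python
import numpy as np

# Superposition at the torque level (sketch): a task-space impedance force is
# mapped through the Jacobian transpose and added to a joint-space torque.
def superimposed_torque(J, F_task, tau_joint):
    """tau = J^T F_task + tau_joint (torque-level form of superposition)."""
    return J.T @ F_task + tau_joint

J = np.array([[1.0, 0.5],       # illustrative 2x2 Jacobian of a planar arm
              [0.0, 0.8]])
F_task = np.array([2.0, -1.0])  # force from a task-space impedance [N]
tau_joint = np.array([0.1, 0.2])  # torque from a joint-space impedance [Nm]
tau = superimposed_torque(J, F_task, tau_joint)
assert np.allclose(tau, [2.1, 0.4])
```

Because each impedance contributes additively, sub-task controllers (goal reaching, obstacle avoidance, posture) can be designed independently and summed.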
The choice of mechanical impedance decides whether the virtual trajectory
2.2.4. Norton equivalent network model
The three distinct classes of EDAs—submovements, oscillations, and mechanical impedances—may be combined using a Norton equivalent network model (Hogan, 2017), which provides an effective framework to relate these classes of primitives (Figure 2).
Figure 2. The three Elementary Dynamic Actions (EDAs) combined using a Norton equivalent network model. The virtual trajectory
In detail, the forward-path dynamics (Figure 2) specifies virtual trajectory
Note that submovements and/or oscillations can be directly combined at the level of the virtual trajectory
As shown in Figure 2, EDAs neither control
This property of EDAs has several benefits for robot control with physical interaction. Compared to
Note that the Norton equivalent network model separates forward-path dynamics (virtual trajectory
3. Comparison of the two approaches
In this Section, a detailed comparison between DMPs and EDAs is presented. To emphasize the similarities and differences between the two approaches, multiple simulation examples using the MuJoCo physics engine (Version 1.50) (Todorov et al., 2012) are presented. The code is available at https://github.com/mosesnah-shared/DMP-comparison.
A list of examples, ordered in progressive complexity, is shown below:
• A goal-directed discrete movement in joint-space (Section 3.2).
• A goal-directed discrete movement for task-space position (Section 3.3).
• A goal-directed discrete movement for task-space position, with unexpected physical contact (Section 3.4).
• A goal-directed discrete movement for task-space position, including obstacle avoidance (Section 3.5).
• Rhythmic movement, both in joint-space and task-space position (Section 3.6).
• Combination of discrete and rhythmic movements, both in joint-space and task-space position (Section 3.7).
• A sequence of discrete movements for task-space position (Section 3.8).
• A single (or sequence of) discrete movement(s) for task-space position, while managing kinematic redundancy (Section 3.9).
• Discrete movement for task-space position and orientation, while managing kinematic redundancy (Section 3.10).
Some of the examples reproduce human-subject experiments in motor control research, for example, Burdet et al. (2001) for Section 3.3, and Flash and Henis (1991) for Section 3.8.
While, in general, position (or motion) control can be used to encode motor primitives, we will focus on the control of torque-actuated robots. Position control would create challenges that restrict the set of tasks that can be achieved (Hogan, 2022). For instance, one of the challenges is the kinematic transformation from task-space coordinates to the robot’s joint configuration, which complicates control in task-space. Another challenge arises in tasks involving contact and physical interaction, which require some level of compliance of the robotic manipulator. For the nine simulation examples, we highlight the challenges when using position-actuated robots, for example, managing contact and physical interaction (Section 3.4) and managing kinematic redundancy (Section 3.9, 3.10). We show that torque-actuated robots can address these tasks without imposing such challenges. Further discussion is deferred to Section 5.2.1.
Given a torque-actuated open-chain
In this paper, we consider controlling both position and orientation of the end-effector. For end-effector position, we use
Note that one can also represent spatial orientation with unit quaternions
Except for tasks with physical contact (e.g., Section 3.4), we assume
3.1. The existence of an inverse dynamics model
We first show that for torque-actuated robots, DMPs require an inverse dynamics model, whereas EDAs do not.
3.1.1. Dynamic movement primitives
For DMPs, the transformation system (equation (4)) represents kinematic relations. Hence for a torque-actuated robot, the approach requires an inverse dynamics model, which determines the feedforward joint torques
The requirement of an inverse dynamics model also implies that the DMP approach is, in principle, a nonreactive feedforward control approach. To be robust against uncertainty or unexpected physical contact, a low-gain feedback control (e.g., PD control) is added to the feedforward inverse dynamics controller (Pastor et al., 2013; Schaal et al., 2007) (Section 3.3, 3.4). Moreover, for control in task-space where the maps from
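The resulting torque command can be sketched for a single DOF as feedforward inverse dynamics plus a low-gain PD term; the scalar model terms (`M`, `C`, `g`) and gains are placeholders for illustration, not a real robot model.

```python
# Sketch of a DMP torque command for a torque-actuated robot (1-DOF):
# feedforward inverse dynamics plus a low-gain joint-space PD term.
# M, C, g stand in for the robot's (assumed known) dynamics model.
def dmp_torque(q, dq, q_d, dq_d, ddq_d, M, C, g, Kp=10.0, Kd=1.0):
    tau_ff = M * ddq_d + C * dq_d + g            # inverse dynamics (feedforward)
    tau_fb = Kp * (q_d - q) + Kd * (dq_d - dq)   # low-gain PD (feedback)
    return tau_ff + tau_fb

tau = dmp_torque(q=0.0, dq=0.0, q_d=0.1, dq_d=0.0, ddq_d=2.0, M=1.5, C=0.2, g=0.5)
assert abs(tau - 4.5) < 1e-12   # 1.5*2.0 + 0.5 (feedforward) + 10.0*0.1 (feedback)
```

If the model terms are exact, the feedforward part alone reproduces the desired trajectory; the PD term only compensates residual error.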
3.1.2. Elementary dynamic actions
For EDAs, an inverse dynamics model is not required for a torque-actuated robot. Hence, an exact model of the robot, that is, exact computation of
Note that Δ
Compared to DMPs, the EDA approach is, in principle, a reactive feedback control method. Instead of an inverse dynamics model, measurements
3.2. A goal-directed discrete movement in joint-space
We consider designing a controller to generate a goal-directed discrete movement planned in joint-space coordinates.
3.2.1. Dynamic movement primitives
The movement of each joint is represented by a transformation system. The
Note that different values of
Without using a nonlinear forcing term, that is,
In case we want to generate a goal-directed discrete movement that also follows a specific joint trajectory, Imitation Learning can be used. Let
3.2.2. Elementary dynamic actions
To generate a goal-directed discrete movement planned in joint-space coordinates, we construct the following controller:
To generate a goal-directed discrete movement,
3.2.3. Simulation example
Consider a 2-DOF planar robot model, where each link consists of a single uniform slender bar with mass and length of 1 kg and 1 m, respectively. Let the initial joint configuration be
For
For submovement
The simulation results are shown in Figure 3. DMPs generated the goal-directed discrete movement while achieving perfect tracking of the minimum-jerk trajectory. Once the weights of the nonlinear forcing terms are learned using Imitation Learning, one can regenerate the minimum-jerk trajectory by simply retrieving these learned weights. While the presented example used a minimum-jerk trajectory, Imitation Learning can be used to achieve tracking control of a trajectory with arbitrary complexity.
Figure 3. A goal-directed discrete movement in joint-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.2.3). The black dotted line is a minimum-jerk trajectory (equation (14)). Goal location:
In principle, perfect tracking may be achieved using DMPs. However, in practice, one must compensate for the inaccuracy of the inverse dynamics model by superimposing a joint-space Proportional Derivative (PD) feedback controller. Further details on this method are deferred to Section 3.4.
For EDAs, a non-zero tracking error between the virtual trajectory and the actual joint trajectory remained (Figure 4).
Figure 4. Joint-space tracking performance of EDAs with respect to different values of mechanical impedances. (1st, 2nd, and 3rd Columns) Performance with respect to different joint stiffness matrices
3.3. A goal-directed discrete movement for task-space position
We next design a controller to generate a goal-directed discrete movement of the end-effector’s position. For this Section, we assume no kinematic redundancy of the robot model, that is, the Jacobian matrix
3.3.1. Dynamic movement primitives
For DMPs, the task can be achieved by representing the end-effector trajectory
Once the desired end-effector trajectories
The calculated
Note that the presented method for inverse kinematics cannot be used for a kinematically redundant robot (with fewer task-space than joint-space DOFs). To manage kinematic redundancy, along with the feedforward torque command from the inverse dynamics model, an additional feedback controller should be employed (Nakanishi et al., 2008; Pastor et al., 2009; Slotine and Li, 1991). This is considered further in Section 3.9 and Section 3.10. Moreover, to handle kinematic singularity, that is, when
3.3.2. Elementary dynamic actions
To generate a goal-directed discrete movement planned in task-space coordinates, we construct the following controller:
Compared to DMPs (Section 3.3.1), this controller does not require a Jacobian inverse. Hence, the torque input is always well-defined near and even at kinematic singularities.
To generate a goal-directed discrete movement, as with the first-order joint-space impedance controller (Section 3.2),
For constant positive definite
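A sketch of a first-order task-space impedance controller of the form τ = Jᵀ[K_p(p₀ − p) − B_p ṗ]: since no Jacobian inverse appears, the command stays finite even for a singular Jacobian. Names and numerical values are illustrative assumptions.

```python
import numpy as np

# First-order task-space impedance controller (sketch): no Jacobian inverse,
# so the torque command stays well-defined at singular configurations.
def task_impedance_torque(J, p, dp, p0, Kp, Bp):
    """tau = J^T [ Kp (p0 - p) - Bp dp ] (simplified form)."""
    return J.T @ (Kp @ (p0 - p) - Bp @ dp)

# Singular Jacobian (rank 1): the command remains finite.
J_sing = np.array([[1.0, 1.0],
                   [0.0, 0.0]])
tau = task_impedance_torque(J_sing, p=np.zeros(2), dp=np.zeros(2),
                            p0=np.array([0.1, 0.1]),
                            Kp=100.0 * np.eye(2), Bp=10.0 * np.eye(2))
assert np.all(np.isfinite(tau)) and np.allclose(tau, [10.0, 10.0])
```

A Jacobian-inverse-based scheme would have to regularize (e.g., by a Damped Least-Squares inverse) at exactly this configuration.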
3.3.3. Simulation example
The simulation example in Figure 5 reproduced the movement of the experiment conducted in Burdet et al. (2001). The goal-directed discrete movement was made in a direction away from the robot base, along the positive
Figure 5. A goal-directed discrete movement for task-space position, for (A, C, E, F) Dynamic Movement Primitives (DMPs, blue) and (B, D, F) Elementary Dynamic Actions (EDAs, orange) (Section 3.3.3). (A, B, C, D, E) Goal-directed discrete movements in a direction along the positive
We used the 2-DOF planar robot model from Section 3.2 that was constrained to move within the
As shown in Figure 5(A) and (B), both approaches successfully generated discrete movements that converged to the desired goal location. As discussed in Section 3.2.3, DMPs achieved the goal-directed discrete movement with perfect tracking. However, as in Section 3.2.3, perfect tracking requires an accurate model of the robot’s kinematics and dynamics. Hence in practice, one must compensate for the inaccuracy of these models by superimposing a joint-space PD feedback controller (Section 3.4).
For EDAs, goal-directed discrete movement was achieved but a tracking error was observed. However, as in Section 3.2.3 (Figure 4), one can reduce the tracking error in task-space by increasing the translational stiffness and damping values (Figure 6).
Figure 6. Task-space position tracking performance of EDAs with respect to different values of mechanical impedance. (1st, 2nd, 3rd Columns) Performance with respect to different translational stiffness matrices
Note that EDAs did not require the Jacobian inverse. The benefit of this property was emphasized when the planar robot model reached a fully stretched configuration. As shown in Figures 5(C) and 5(F), DMPs without the Damped Least-squares inverse became numerically unstable when the robot model approached a kinematic singularity. Hence, as shown in Figures 5(E) and (F), the Damped Least-squares inverse must be employed to remain stable near kinematic singularity. For EDAs, not only was the approach stable, but the approach even “passed through” the kinematic singularity (Figure 5(F), near
Note that in Figure 5(D), using EDAs, the robot model oscillated back and forth between its “left-hand” and “right-hand” configurations, passing through the singularity multiple times. This occurred because at the singular configuration the controller included no effective damping, and hence no means to dissipate the angular momentum of the robot links. This oscillation may be suppressed by adding non-zero joint-space damping, an example of impedance superposition (Section 2.2.3, 3.5, 3.9). Moreover, with EDAs, one can “exploit” rather than “avoid” kinematic singularity. For instance, if a task can be facilitated by using the left-hand configuration, EDAs enable “shifting” from the right- to the left-hand configuration.
3.4. Tasks with unexpected physical contact
We next consider a case where unexpected physical contact is made while conducting a point-to-point discrete reaching movement presented in Section 3.3.3 (Figure 5(A) and (B)).
3.4.1. Dynamic movement primitives
As discussed in Section 3.1, DMPs use a feedforward torque command calculated from the inverse dynamics model. To handle model uncertainties and possible instabilities for control tasks involving unexpected contact, a feedback torque command is superimposed on the feedforward torque command.
Let
The gain values for
Note that for ideal torque-actuated robots, the joint-space PD controller (equation (18)) is identical to a first-order joint-space impedance controller of EDAs (equation (13)). However, it would be a mistake to conclude that PD control is identical to impedance control (Won et al., 1997). Further discussion is deferred to Section 5.2.3.
3.4.2. Elementary dynamic actions
As discussed in Hogan (1985, 2022), an impedance controller is robust against unexpected physical contact with passive environments. While it is common to use constant mechanical impedances, mechanical impedance can be modulated to regulate the dynamics of physical interaction (Lachner et al., 2021).
In detail, the first-order position task-space impedance controller of equation (16) can be adapted by modulating the translational stiffness and damping values:
With a slight abuse of notation,
This controller limits the impact of an unexpected contact, for example, during physical Human-Robot Interaction (pHRI) (Lachner, 2022; Lachner et al., 2021), by bounding the total energy of the robot via
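The energy-bounding idea can be sketched (as an assumed simplification, not necessarily the paper's equation (20)) by scaling the translational stiffness so that the elastic energy stored in the virtual spring never exceeds a budget `E_max`:

```python
import numpy as np

# Sketch of impedance modulation by energy bounding (an assumed simplification):
# scale the translational stiffness so that the elastic potential energy of the
# virtual spring never exceeds E_max. Names and values are illustrative.
def bounded_stiffness(Kp, p0, p, E_max):
    dp = p0 - p
    E = 0.5 * dp @ Kp @ dp               # energy of the virtual spring [J]
    scale = min(1.0, E_max / E) if E > 0 else 1.0
    return scale * Kp

Kp = 400.0 * np.eye(2)                   # nominal translational stiffness [N/m]
p, p0 = np.zeros(2), np.array([0.3, 0.4])  # 0.5 m displacement -> E = 50 J
K_mod = bounded_stiffness(Kp, p0, p, E_max=10.0)
dp = p0 - p
assert 0.5 * dp @ K_mod @ dp <= 10.0 + 1e-9
```

When the virtual spring stores little energy (e.g., good tracking), the nominal stiffness is used; when the displacement grows, such as during blocked motion, the stiffness is reduced so the stored energy, and hence the potential impact, stays bounded.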
For pHRI, the value
While the presented controller is a simplified example, a more advanced application exists where the damping term can be modulated as a function of stiffness and inertia (Albu-Schaffer et al., 2003). Moreover, the dissipative behavior of the robot can be modulated to limit the robot power, for example, by using “damping injection” (Stramigioli, 2015).
Note that for this advanced application, EDAs require the mass matrix of the robot
3.4.3. Simulation example
As in Section 3.3.3, we used a 2-DOF planar robot model to generate a goal-directed discrete movement in task-space coordinates. In this example, a square-shaped obstacle was placed to block the robot path. A few seconds after the first contact with the robot, the obstacle was moved aside and the robot could continue its motion (Andrews and Hogan, 1983; Newman, 1987). The code script for this simulation is main_unexpected_contact.py.
For DMPs, the learned weights from Section 3.3.3 were reused, at first without a low-gain PD feedback controller. With this controller, the robot model bounced back from the obstacle due to contact and failed to reach the goal (Figure 7(B)). Hence, it was necessary to add a low-gain PD controller (Figure 7(C)). This example demonstrated the modular property of DMPs, since the learned feedforward torque controller from Section 3.3 was reused without modification, and superimposed upon an additional feedback controller.
Figure 7. A goal-directed discrete movement with unexpected physical contact for (A, B, C, D) Dynamic Movement Primitives (DMPs, blue) and (E, F, G, H) Elementary Dynamic Actions (EDAs, orange) (Section 3.4.3). For DMP: (A) Moment of contact. (A → B) DMP without PD controller. (A → C) DMP with PD controller. (D) Time versus
For EDAs, both controllers, with and without energy limitation, were able to reach the goal (Figure 7(F) and (G)). However, in the latter case, high accelerations of the end-effector occurred after the obstacle was removed (Figure 7(H)). The results with impedance modulation via energy regulation are shown in Figure 8.
Figure 8. Simulation results using Elementary Dynamic Actions (EDAs) with impedance modulation via energy regulation (equation (20)) (Figure 7) (Section 3.4.3). (Top) Time
3.5. Obstacle avoidance
We next consider obstacle avoidance while conducting a point-to-point discrete reaching movement presented in Section 3.3.3. We assume that the obstacle is fixed at location
3.5.1. Dynamic movement primitives
To avoid an obstacle located at
The coupling term
For spatial tasks,
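Although the paper's exact coupling term is not reproduced here, one common planar velocity-steering formulation from the DMP literature rotates the end-effector velocity away from the obstacle, with the effect fading as the steering angle grows. The gains and the functional form below are assumptions for illustration.

```python
import numpy as np

# A common planar obstacle-avoidance coupling term (velocity-steering form;
# gamma and beta are illustrative gains, not necessarily the paper's).
def coupling_term(x, dx, x_obs, gamma=1000.0, beta=20.0 / np.pi):
    o = x_obs - x
    if np.linalg.norm(dx) < 1e-10 or np.linalg.norm(o) < 1e-10:
        return np.zeros(2)
    # Steering angle between the heading and the obstacle direction
    cos_phi = o @ dx / (np.linalg.norm(o) * np.linalg.norm(dx))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    R = np.array([[0.0, -1.0], [1.0, 0.0]])          # +90 degree rotation
    return gamma * (R @ dx) * phi * np.exp(-beta * phi)

# Obstacle slightly off the current heading: acceleration is perpendicular
# to the velocity, steering the trajectory around the obstacle.
c = coupling_term(x=np.zeros(2), dx=np.array([1.0, 0.0]), x_obs=np.array([1.0, 0.1]))
assert abs(c[0]) < 1e-12 and np.all(np.isfinite(c))
```

The coupling term is added to the transformation system's acceleration, so the nominal attractor toward the goal is preserved while the transient is deflected.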
3.5.2. Elementary dynamic actions
For EDAs, the idea resembles the method of obstacle avoidance using potential fields (Andrews and Hogan, 1983; Hjorth et al., 2020; Hogan, 1985; Khatib, 1985; Koditschek, 1987; Newman, 1987). A mechanical impedance which produces a repulsive force from the obstacle (i.e., a mechanical impedance with a point “repeller” at the obstacle location) is superimposed on the task-space impedance controller:
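A sketch of such a point repeller: a virtual elastic field pushing the end-effector away from the obstacle, with the force growing as the distance shrinks. The gain and the inverse-square falloff are illustrative choices, not the paper's specific impedance.

```python
import numpy as np

# A repulsive force field centered at the obstacle (illustrative sketch of a
# point "repeller"; k_rep and the 1/d^2 falloff are assumed choices).
def repulsive_force(p, p_obs, k_rep=0.05):
    r = p - p_obs
    d = np.linalg.norm(r)
    return k_rep * r / d**3      # unit direction away, magnitude ~ k_rep / d^2

F_near = repulsive_force(np.array([0.55, 0.5]), np.array([0.5, 0.5]))
F_far = repulsive_force(np.array([0.8, 0.5]), np.array([0.5, 0.5]))
# The force points away from the obstacle and decays with distance.
assert F_near[0] > 0 and F_near[0] > F_far[0]
```

By the superposition principle (equation (10)), this repulsive impedance is simply added to the goal-reaching task-space impedance at the force (or torque) level.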
3.5.3. Simulation example
As in Section 3.3.3, we used the 2-DOF planar robot model to generate a goal-directed discrete movement in task-space coordinates. However, a stationary obstacle was located at
As shown in Figure 9, both approaches successfully achieved the task. Nevertheless, differences between the two approaches were observed.
Figure 9. A goal-directed discrete movement with obstacle avoidance for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.5.3). (Left) Trajectory of the end-effector position. Initial end-effector position
DMPs considered the problem from a position control perspective. The coupling term
In comparison, EDAs achieved obstacle avoidance without explicit path planning (Hogan, 1985). Instead, both goal-reaching and obstacle avoidance tasks were achieved by using the superposition principle of mechanical impedances (equation (10)). The modular property of the superposition principle enabled EDAs to divide the task into multiple sub-tasks, allocate an appropriate mechanical impedance for each sub-task, and then combine the mechanical impedances to solve the original task. For this case, mechanical impedance
While the modular property of EDAs simplified the approach, care is required for implementation. The method based on EDA is reminiscent of classic potential field methods (Khatib, 1986; Newman, 1987). Obstacle avoidance is achieved by superimposing a virtual (or artificial) elastic potential field that produces a repulsive force field emanating from the obstacle. Hence, the approach may encounter the known limitations of classic potential field methods. For instance, the end-effector of the robot may stall at a local energy minimum (Newman, 1987). To avoid this, a slight offset (2 cm) in the positive
3.6. Rhythmic movement
We next consider a method to generate a rhythmic, repetitive movement. For this example, we considered movements planned in both joint-space and task-space position.
3.6.1. Dynamic movement primitives
For DMPs, we used a canonical system and nonlinear forcing terms of a rhythmic movement (Section 2.1). From the generated
Compared to discrete movements (Section 3.2, 3.3), Imitation Learning is necessary to generate rhythmic movements, as
3.6.2. Elementary dynamic actions
For EDAs, we defined
3.6.3. Simulation example
We used the 2-DOF planar robot model from Section 3.2 and Section 3.3. The code scripts for the simulations are main_joint_rhythmic.py for joint-space and main_task_rhythmic.py for task-space.
The rhythmic movement in joint-space followed a sinusoidal trajectory. For DMPs and EDAs, both
The rhythmic movement in task-space followed a circular trajectory. For DMPs and EDAs, both
As shown in Figure 10, both DMPs and EDAs successfully generated rhythmic movement in joint-space (Figures 10(A)–(D)) and task-space (Figures 10(E) and (F)). As discussed in Section 3.2.3 and Section 3.3.3, for DMPs, in principle, perfect tracking can be achieved in both joint-space and task-space. Given the period of the rhythmic movement (Section 2.1.4), rhythmic trajectories of arbitrary complexity can be learned using Imitation Learning. For EDAs, tracking error existed in both joint-space and task-space. Nevertheless, EDAs generated a rhythmic, repetitive movement without an inverse dynamics model and without solving the inverse kinematics. Moreover, as discussed in Section 3.2.3 and Section 3.3.3, the parameters of mechanical impedances can be chosen to reduce the tracking error (Figures 4 and 6).

Figure 10. Rhythmic movements in (A, B, C, D) joint-space and (E, F) task-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.6.3). (A, B) Rhythmic movement in joint-space and its (C, D) joint trajectories. (C, D) The black dashed lines, which perfectly overlap with the DMP trajectories (blue lines), represent a sinusoidal trajectory with parameters:
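As a minimal illustration of the rhythmic DMP machinery referenced above, the sketch below implements a linearly evolving phase variable (taken modulo 2π) and von Mises basis functions, a common choice for rhythmic DMPs (Ijspeert et al., 2013). The width parameter is an illustrative assumption, not the value used in these simulations.

```python
import math

def phase(t, Omega):
    """Rhythmic canonical system: the phase evolves linearly, modulo 2*pi."""
    return (Omega * t) % (2.0 * math.pi)

def von_mises_basis(phi, centers, h=2.5):
    """Periodic (von Mises) basis functions; each peaks at 1 at its center."""
    return [math.exp(h * (math.cos(phi - c) - 1.0)) for c in centers]
```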
3.7. Combination of discrete and rhythmic movements
We next designed a controller to generate a combination of discrete and rhythmic movements (Hogan and Sternad, 2007). For this example, we considered a movement planned in both joint-space and task-space positions.
3.7.1. Dynamic movement primitives
For DMPs, the canonical system and nonlinear forcing term are different for discrete and rhythmic movements (Section 2.1.1, 2.1.2). Hence, rhythmic and discrete DMPs cannot be directly combined. Instead, the discrete DMP generates a time-changing goal
The first two equations represent rhythmic DMPs, and the last equation represents discrete DMPs without a nonlinear forcing term and a canonical system.
3.7.2. Elementary dynamic actions
For EDAs, we simply add submovements
Note that for control in task-space,
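A sketch of the EDA combination, assuming a minimum-jerk submovement and a sinusoidal oscillation (illustrative parameters and function names, not the paper's code): the two components are simply added at the level of the virtual trajectory.

```python
import math

def submovement(x0, xf, T, t):
    """Minimum-jerk reach from x0 to xf over duration T; holds endpoints."""
    if t <= 0.0:
        return x0
    if t >= T:
        return xf
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def combined_virtual_traj(x0, xf, T, amp, omega, t):
    """Discrete reach plus rhythmic oscillation, combined by addition."""
    return submovement(x0, xf, T, t) + amp * math.sin(omega * t)
```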
3.7.3. Simulation example
Using the 2-DOF robot model in Section 3.2, the goal was to generate a combination of discrete and rhythmic movements both in joint-space and task-space. The code scripts for the simulations are main_joint_discrete_and_rhythmic.py for joint-space and main_task_discrete_and_rhythmic.py for task-space position.
For

Figure 11. A combination of discrete and rhythmic movements in (A–F) joint-space and (G–J) task-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.7.3). (C, E, I) Black filled lines represent the discrete change of
For
For
For submovement
As shown in Figure 11, both approaches successfully produced a combination of discrete and rhythmic movements in joint-space and task-space. However, since DMPs separate the canonical system and nonlinear forcing terms for discrete and rhythmic movements (Section 2.1.1, 2.1.2), merging the two movements was not straightforward. DMPs circumvented this issue by assigning the time-changing goal
On the other hand, for EDAs, given a single impedance operator, discrete and rhythmic movements were directly combined at the level of the virtual trajectory (i.e., the forward path dynamics) (Figure 2). With modest parameter tuning, the discrete and rhythmic movements used in Section 3.2.3 and Section 3.6.3 were reused and combined. This approach intuitively provides convenience in practical implementation and also emphasizes the modularity of EDAs at a kinematic level.
3.8. Sequence of discrete movements
We next consider designing a controller to generate a sequence of discrete movements planned in task-space position. For this, we show how the controller generates a movement in response to a sudden change of goal location.
3.8.1. Dynamic movement primitives
With the controller introduced in Section 3.3.1, an additional differential equation for the time-varying goal location
Note that both
Note that
Hence, a sequence of finite submovements can be generated by discrete changes of the goal location from
While the first discrete movement can follow an arbitrary trajectory using Imitation Learning, with this formulation (Nemec and Ude, 2012), the subsequent discrete movements cannot; instead, they follow the motion of a stable third-order linear system converging to the corresponding
3.8.2. Elementary dynamic actions
With the first-order position task-space impedance controller (Equation (16)), a sequence of discrete movements is generated by sequencing multiple submovements (Flash and Henis, 1991):
The amplitude of the
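Sketch of submovement sequencing by superposition (after Flash and Henis, 1991), assuming minimum-jerk basis functions; the helper names and tuple layout are illustrative.

```python
def mj_disp(amp, T, t):
    """Minimum-jerk displacement of amplitude amp over duration T."""
    if t <= 0.0:
        return 0.0
    if t >= T:
        return amp
    s = t / T
    return amp * (10 * s**3 - 15 * s**4 + 6 * s**5)

def virtual_traj(x_start, submovements, t):
    """Sum of submovements given as (amplitude, onset_time, duration) tuples."""
    return x_start + sum(mj_disp(a, T, t - t0) for a, t0, T in submovements)
```

A second submovement superimposed mid-flight leaves the first untouched; after both finish, the virtual position is simply the sum of the two amplitudes.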
3.8.3. Simulation example
The simulation example in Figure 12 reproduced the experiment conducted in Flash and Henis (1991). Let the goal location of the first discrete movement be

Figure 12. A sequence of discrete movements for (A, B, D, E) Dynamic Movement Primitives (DMPs, blue) and (A, C, D, E) Elementary Dynamic Actions (EDAs, orange) (Section 3.8.3). (A) Trajectory of the end-effector position. (B, C) Time-lapse of the movement of the 2-DOF robot model. (D, E) Time versus
For DMPs, the first discrete movement followed a minimum-jerk trajectory with goal location
For EDAs, the minimum-jerk trajectory was used for the basis function of each submovement
As shown in Figure 12, for both approaches,
3.9. Managing kinematic redundancy
We next consider designing a controller to generate a goal-directed (or sequence of) discrete movement(s) of the end-effector’s position for a kinematically redundant robot. By definition, kinematic redundancy occurs when a Jacobian matrix has a null space (Siciliano, 1990). Hence, infinitely many joint velocity solutions exist to produce a desired end-effector velocity. While kinematic redundancy poses significant challenges, additional control objectives can be achieved by exploiting it. Examples include obstacle avoidance (Baillieul, 1986; Maciejewski and Klein, 1985), joint limit avoidance (Hjorth et al., 2020; Liegeois, 1977), and minimization of instantaneous power during movement (Klein and Huang, 1983).
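The null-space statement can be illustrated numerically; the Jacobian below is an arbitrary illustrative example of a 2-dimensional task with 3 joints, not one of the robot models used in this paper.

```python
import numpy as np

# illustrative 2x3 Jacobian: a 2-D task with 3 joints leaves a 1-D null space
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])

# null-space direction: the right singular vector with (near-)zero singular value
_, _, Vt = np.linalg.svd(J)
n = Vt[-1]                                  # satisfies J @ n ~ 0

# minimum-norm joint velocity for a desired end-effector velocity
qdot = np.linalg.pinv(J) @ np.array([0.1, 0.0])

# any multiple of n can be added without changing the task velocity
qdot_alt = qdot + 3.0 * n
```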
3.9.1. Dynamic movement primitives
For DMPs, a feedback controller is employed to manage kinematic redundancy. Multiple feedback control methods exist and can be divided into three categories: velocity-based control, acceleration-based control, and force-based control (Nakanishi et al., 2005, 2008). Among these methods, we used a “velocity-based control without joint-velocity integration” (Nakanishi et al., 2008; Pastor et al., 2009).
Let
Accordingly, the reference end-effector acceleration
From these values,
Note that this controller, suggested in Nakanishi et al. (2008), is equivalent to the sliding mode feedback controller introduced by Slotine and Li (1987). It was shown by Slotine and Li (1987) that
3.9.2. Elementary dynamic actions
For EDAs, kinematic redundancy of the robot manipulator can be managed by superimposing multiple mechanical impedances (Hermus et al., 2021; Verdi, 2019). In detail, a first-order position task-space impedance controller (equation (16)) can be superimposed with a joint-space controller that implements joint damping (equation (13)):
As in Section 3.2.2 and Section 3.3.2, with constant symmetric positive definite matrices of
The stability of this controller is shown in Arimoto et al. (2005a, 2005b); Arimoto and Sekimoto (2006); Lachner (2022), where an asymptotic convergence of
Like the controller in equation (16), this controller does not involve an inversion of the Jacobian matrix. Moreover, neither explicitly solving the inverse kinematics nor an inverse dynamics model is required. Hence, the approach remains stable near kinematic singularities.
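A minimal sketch of this superposition, assuming constant gains (the function and variable names are illustrative): a task-space stiffness pulls the end-effector toward the virtual position while joint-space damping shapes the null-space motion, with no Jacobian inverse anywhere.

```python
import numpy as np

def eda_torque(J, x, x0, qdot, Kp, Bq):
    """tau = J^T * Kp * (x0 - x) - Bq * qdot (transpose only, no inverse)."""
    task_force = Kp @ (np.asarray(x0, dtype=float) - np.asarray(x, dtype=float))
    return J.T @ task_force - Bq @ np.asarray(qdot, dtype=float)
```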
For the desired discrete movement, we used a damping joint-space impedance operator to reduce the joint motions in the nullspace of
Superimposing joint-space and task-space impedances can yield task conflicts (Schettino et al., 2021), unless the virtual joint configuration to which the joint stiffness is connected is defined at the desired goal location (Hermus et al., 2021). Often, the task conflict is resolved by using null-space projection methods, as suggested by Khatib (1995). However, it is important to note that the resultant controller violates passivity (Lachner, 2022). Alternatively, it has been shown that, with sufficiently large null-space dimension, the task conflict is minimized or even eliminated (Hermus et al., 2021).
3.9.3. Simulation example
Consider a 5-DOF planar serial-link robot model, where each link consists of a single uniform slender bar with mass and length of 1 kg and 1 m, respectively. With this robot model, we generated a single (or sequence of) discrete movement(s). As in Section 3.3.3, a minimum-jerk trajectory was used. For the sequence of discrete movements, the trajectories of Section 3.8.3 were used. The code scripts for the simulations are main_redundant_discrete.py for the single discrete movement and main_redundant_sequencing.py for the sequence of discrete movements.
As shown in Figure 13, both approaches were able to achieve goal-directed discrete movements. For DMPs, the approach used a feedback controller with reference trajectories generated by DMPs to manage kinematic redundancy (Nakanishi et al., 2008; Slotine and Li, 1991). For EDAs, the controller simply reused the task-space impedance controller in Section 3.3.2 and combined it with the impedance controller (with

Figure 13. Managing kinematic redundancy using Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.9.3). (A, B, C) A single discrete movement. (D, E, F) A sequence of discrete movements. The first movement headed toward
3.10. Managing kinematic redundancy while controlling position and orientation
We next consider controlling both the position and orientation of the robot’s end-effector, while managing kinematic redundancy. As with Section 3.9, we consider a discrete movement in task-space.
3.10.1. Dynamic movement primitives
Various DMPs have been proposed to represent task-space orientation (Abu-Dakka et al., 2015; Pastor et al., 2011; Saveriano et al., 2019, 2023; Ude et al., 2014). Koutras and Doulgeri (2020) suggested a formulation using unit quaternions, which addressed the limitations of prior methods.
In detail, DMPs to represent task-space orientation were defined by (Koutras and Doulgeri, 2020):
In contrast to joint-space (equation (12)) and task-space position (equation (15)) trajectories, the following nonlinear forcing term was used for task-space orientation (equation (29)) (Koutras and Doulgeri, 2020):
The formulation of this transformation system (equation (29)) is identical to a three-dimensional DMP. Hence, given
Once
Given
While an analytical form of
3.10.2. Elementary dynamic actions
Using EDAs, the superposition principle of mechanical impedances (equation (10)) may be exploited by simply adding an orientation task-space impedance operator onto equation (28). In detail, the following impedance operator for orientation,
To generate a goal-directed discrete movement for both position and orientation of the robot’s end-effector, the motions for
3.10.3. Simulation example
Consider a KUKA LBR iiwa robot, which has seven revolute joints (i.e.,
For position, a minimum-jerk trajectory was used (equation (17)). For orientation, a geodesic curve between initial
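A geodesic between unit quaternions is the familiar "slerp"; the sketch below (with quaternions as [w, x, y, z] arrays; an illustrative implementation, not the paper's code) interpolates along the shorter arc.

```python
import numpy as np

def slerp(q0, q1, s):
    """Geodesic interpolation between unit quaternions q0, q1 at s in [0, 1]."""
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                # take the shorter of the two arcs
        q1, dot = -q1, -dot
    if dot > 0.9995:             # nearly parallel: normalized linear blend
        q = (1.0 - s) * q0 + s * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)       # angle between the quaternions
    return (np.sin((1.0 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)
```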
As shown in Figure 14, a goal-directed discrete movement for both position and orientation was achieved using both approaches. For DMPs, the approach used a feedback controller (equation (31)) with position and orientation trajectories generated by separate DMPs. For EDAs, the controller reused the impedance controller of Section 3.9.3 and simply superimposed an additional impedance operator

Figure 14. Control of end-effector position and orientation while managing kinematic redundancy using Dynamic Movement Primitives (DMPs) and Elementary Dynamic Actions (EDAs). Successive frames of a MuJoCo model of a KUKA LBR iiwa14 robot using DMPs (A) and EDAs (B). The end-effector’s orientation
4. Combining the two approaches
In this section, we provide practical implementations that combine DMPs and EDAs on real robot hardware. The benefit of the combination is to exploit the modularity of EDAs and the robust trajectory generation capabilities of DMPs. Merging these two motor-primitive approaches not only emphasizes the individual strengths of each approach, but also creates an efficient framework for programming multi-task control, especially for the combined control of both task-space position and orientation (Section 4.1, 4.2).
Using a KUKA LBR iiwa14, two examples are provided: a drawing and erasing task (Section 4.1) and a modular Imitation Learning approach (Section 4.2). For both tasks, both position and orientation of the robot end-effector were controlled. To derive the torque command for the robot, the controller based on EDAs in Section 3.10.2 was used (equation (32)). To encode
For control, KUKA’s Fast Robot Interface (FRI) was employed. For both tasks, the built-in gravity compensation was activated. The Forward Kinematic Map to derive
4.1. Drawing and erasing task
In this section, we provide an overview of the approach presented in Nah et al. (2023b), which considered a drawing and erasing task on a planar table. For further details on implementation, the readers are referred to Nah et al. (2023b).
The robot first drew a trajectory provided by demonstration. Once the demonstration was complete, the robot retraced the drawn trajectory with an additional oscillatory movement to thoroughly erase it.
This task can be executed by combining Imitation Learning of DMPs with the modularity of EDAs. Specifically, given a demonstrated trajectory
After the trajectory was drawn, erasure was achieved by overlaying an oscillatory circular trajectory with radius
The result is shown in Figure 15, which shows the whole process of the drawing and erasing tasks. By merging Imitation Learning with EDAs (Figure 15(A)–(C)), the drawing (Figure 15(D)) and erasing tasks (Figure 15(E)) were successfully achieved. The key to this approach is the combination of Imitation Learning with the modularity of EDA. The trajectory

Figure 15. The drawing and erasing task using a KUKA LBR iiwa. A green pen was used for drawing. (A) Data collection of human-demonstrated
It is important to note that the
4.2. Modular imitation learning
Combining the superposition principle of mechanical impedances (equation (10)) with Imitation Learning offers a unique advantage in robot programming: position and orientation trajectories can be learned separately and then combined through the superposition of mechanical impedances. Hence, motions learned in different spaces can be seamlessly merged, which enables modular Imitation Learning.
An example application of modular Imitation Learning is shown in Figure 16. For task-space position, the robot learned to draw the letter “M,” using Imitation Learning with a three-dimensional DMP in task-space (equation (15)) (Figure 16(A)–(C)). For task-space orientation, the robot learned to “lift up and down” (as in lifting a cup to the lips and then putting it back down), using Imitation Learning with a DMP employing unit quaternions (equation (29)) (Figure 16(D)–(F)). For both task-space position and orientation, Locally Weighted regression was used for Imitation Learning. By exploiting the superposition of mechanical impedances, the motion of drawing the letter M while lifting up and down was generated by a linear combination of both robot commands (Figure 16(G)). Note that while the qualitative behavior of both trajectories is preserved in combination, small tracking errors still remain.

Figure 16. Modular Imitation Learning. Data for position and orientation were collected from real robot hardware (B, E) and visualized using the Exp[licit]-MATLAB library (Lachner et al., 2023) (A, C, D, F, G). (A) A demonstrated trajectory
5. Discussion
In Section 3, we presented detailed implementations of both DMPs and EDAs to solve nine control tasks. Moreover, in Section 4, we showed how these two methods can be combined to exploit the advantages of both approaches.
5.1. Similarities between the two approaches
DMPs and EDAs both stem from the idea of motor primitives. Hence, both approaches share the same principle—using motor primitives as fundamental building blocks to parameterize a controller. DMPs parameterize the controller with a canonical system, nonlinear forcing terms, and transformation systems (Section 2.1). EDAs parameterize the controller with submovements, oscillations, and mechanical impedances (Section 2.2).
Robot control based on motor primitives provides several advantages, and we presented nine control examples. First, by parameterizing the controller with motor primitives, the approaches are consistent with a high level of autonomy for generating dynamic robot behavior. Once triggered, the primitive behaviors “play out” without requiring intervention from higher levels of the control system.
As a result, the computational complexity of the control problem is reduced. For instance, we showed that DMPs can be scaled to multi-DOF systems by synchronizing a canonical system with multiple transformation systems (Section 2.1.4). With Locally Weighted (or linear least-squares) regression of Imitation Learning, learning new motor skills is reduced to calculating the best-fit weights of the nonlinear forcing terms. The best-fit weights are learned by simple matrix algebra, which is computationally efficient (Section 3.2, 3.3). In fact, it was reported that this computational efficiency of DMPs enabled control of a 30-DOF humanoid robot (Atkeson et al., 2000; Ijspeert et al., 2002, 2013; Schaal et al., 2007).
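The matrix-algebra step can be sketched in a few lines, hedged as a simplified stand-in for Locally Weighted Regression: with Gaussian basis functions sampled along the phase, the best-fit weights of a (one-dimensional) forcing term come from a single linear least-squares solve. The basis count, widths, and target function below are illustrative assumptions.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 200)                  # phase samples
centers = np.linspace(0.0, 1.0, 10)             # basis-function centers
Phi = np.exp(-50.0 * (s[:, None] - centers[None, :]) ** 2)  # 200x10 basis matrix

f_target = np.sin(2.0 * np.pi * s)              # example "demonstrated" forcing term

# best-fit weights via linear least squares (one pseudoinverse solve)
w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)
f_fit = Phi @ w                                 # reconstructed forcing term
```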
For EDAs, by parameterizing the controller with motor primitives, the process of acquiring and retaining complex motor skills is simplified by identifying a reduced set of parameters, for example, the initial and final positions of a submovement (Section 3.8, 3.9, 3.10). These computational advantages are particularly prominent in control tasks associated with high-DOF systems, for example, manipulation of flexible, high-dimensional objects (Nah et al., 2020, 2021; 2023a).
Motor primitives offer a modular framework for robot control. By treating motor primitives as basic modules, the acquisition or generation of a new motor skill occurs at the level of modules or their combination (d’Avella, 2016). Once the modules are learned, one can generate a new repertoire of movements by simply combining or reusing them. As shown in several of the simulation examples above, namely obstacle avoidance (Section 3.5), the combination of discrete and rhythmic movements (Section 3.7), sequencing discrete movements (Section 3.8), and managing kinematic redundancy while controlling position (Section 3.9) and orientation (Section 3.10), these tasks were simplified by the modular properties of DMPs and EDAs. This modularity provides strong adaptability and flexibility for robot control, as learning new motor skills by combining or reusing learned modules is intuitively easier than learning “from scratch.” However, it is worth emphasizing that the details of the modular property differ significantly between the two approaches (Section 5.2.5).
5.2. Differences between the two approaches
5.2.1. The need for an inverse dynamics model
For torque-controlled robots, DMPs require an inverse dynamics model, whereas EDAs do not (Section 3.1). An inverse dynamics model can introduce practical challenges and constraints when applying DMPs to torque-controlled robots. In particular, acquiring accurate models of the robot can be challenging and time-consuming. Moreover, even if one acquires an accurate robot model, a low-gain joint PD controller must be additionally included to account for model uncertainties (Section 3.2, 3.3) and unexpected disturbances (Section 3.4).
However, the drawbacks associated with an inverse dynamics model can be avoided for position control in joint-space. As a result, the application of DMPs to joint-space position-controlled robots is straightforward and efficient. On the other hand, EDAs are inappropriate for position-controlled robots, since a mechanical impedance (Section 2.2.3) outputs (generalized) force, which is ultimately commanded as a joint-torque input to the robot.
Nevertheless, joint-space position control presents its own challenges when compared to torque-controlled approaches. One obvious challenge is the problem of kinematic transformation between the robot’s generalized coordinates and task-space coordinates (Hogan, 2022). In fact, these challenges were highlighted in the examples presented, for example, the problem of inverse kinematics and kinematic singularity (Section 3.3) and managing kinematic redundancy (Section 3.9, 3.10). These problems necessitate additional methods, for example, Damped Least-squares inverse to manage kinematic singularity (Section 3.3) or feedback control methods such as sliding mode control to manage kinematic redundancy (Section 3.9, 3.10). Note that the eight control methods to manage kinematic redundancy presented by Nakanishi et al. (2008) (which include the latter approach) assume feedback control using torque-actuated robots, not position-controlled robots.
Perhaps more important, position control introduces the likelihood of instability for tasks involving contact and physical interaction. Robot control involving physical interaction requires controlling the interactive dynamics between the robot and environment (Hogan, 2022). For this, using position control turns out to be inadequate (De Santis et al., 2008). A position-actuated robot fails to provide the level of compliance needed to achieve safe physical interaction. Moreover, interactive dynamics cannot be directly regulated independent of the environment. Instead of position control, we considered control methods using (ideal) torque-actuated robots to manage contact and physical interaction (Section 3.4).
5.2.2. Tracking control
In the absence of uncertainties and external disturbances, in principle, DMPs can achieve perfect trajectory tracking, both in task-space and joint-space coordinates (Section 3.2, 3.3, 3.6, 3.7, 3.9, 3.10). Using Imitation Learning (Section 2.1.4), tracking a trajectory of arbitrary complexity can be achieved. DMPs also allow online trajectory modulation of the learned trajectory, which was shown in the obstacle avoidance example (Section 3.5).
EDAs control neither position nor force directly (Section 2.2.4). Hence, non-negligible tracking errors arise unless high values of mechanical impedance are employed (Section 3.3, 3.6). One can reduce the tracking error by using higher stiffness and/or damping values (Figures 4 and 6). Nevertheless, a non-negligible tracking error still exists. To achieve higher accuracy for tracking control with EDAs for a given desired trajectory, an additional method to derive the corresponding virtual trajectory should be employed. For instance, exploring trajectory optimization methods to calculate the time course of impedances and virtual trajectories that produce the desired trajectory is an opportunity for future research.
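The stiffness-versus-tracking-error trade-off can be reproduced with a one-DOF toy simulation (illustrative parameters, not the robot models used above): a unit mass driven by an impedance toward a moving virtual trajectory lags by roughly the damping-to-stiffness ratio times the trajectory speed.

```python
def peak_tracking_error(k, b, T=2.0, dt=1e-3):
    """Peak |x0(t) - x(t)| for a unit mass under xdd = k*(x0 - x) - b*xd."""
    x, xd, t, err_max = 0.0, 0.0, 0.0, 0.0
    while t < T:
        x0 = min(t, 1.0)                 # ramp-and-hold virtual trajectory
        xdd = k * (x0 - x) - b * xd      # impedance force on a unit mass
        xd += xdd * dt                   # semi-implicit Euler step
        x += xd * dt
        err_max = max(err_max, abs(x0 - x))
        t += dt
    return err_max

# a stiffer (and proportionally damped) impedance yields a smaller peak error
```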
In fact, the topic of merging trajectory optimization with EDA is only one example of a potentially rich research field, which is to integrate the mathematical framework of optimal control theory into EDA. The key point of EDA is to choose appropriate values of mechanical impedance and the virtual trajectory to achieve the control task. The parameters of EDA used for the nine control examples, especially for the mechanical impedances, were mostly chosen based on trial-and-error, and these parameters were sufficient to achieve the task. Nevertheless, other work advances EDA by optimizing the parameters based on specific cost functions, for example, tracking error for position (Buchli et al., 2011).
The presented examples mainly focused on tracking control of point-to-point goal-directed discrete movements. For EDA, the movements can be generated by submovements and their linear composition. For DMP, the movements can be achieved with or without the nonlinear forcing term, where the latter follows the response of a stable second-order linear system. However, if one considers path-constrained tracking control, the benefit of DMP’s Imitation Learning is more pronounced. Compared to EDA, where the path must be decomposed into multiple submovements and their combination (Rohrer et al., 2002, 2004), DMP can directly learn the path by Imitation Learning, which can then be used for path-constrained tracking control. The path learned by Imitation Learning can also be generalized by spatial and temporal scaling, which makes DMP favorable over conventional spline methods (Ijspeert et al., 2013). In fact, this specific benefit of DMP for path generation was combined with EDA to get the best of both motor-primitive approaches, as shown in the two examples presented in Section 4. In summary, while both EDA and DMP offer methods for generating goal-directed movements, DMP’s Imitation Learning presents a significant advantage in path-constrained tracking control, providing a more efficient and adaptable solution than EDA.
5.2.3. Contact and physical interaction
To guarantee robustness against contact and physical interaction, DMPs superimpose a joint-space PD controller on the feedforward torque command from the inverse dynamics model (Section 3.1). On the other hand, EDAs include mechanical impedances as a distinct class of primitives (Section 2.2.3). With an appropriate choice of mechanical impedance, the approach is robust against uncertainty and unexpected physical contact (Hogan, 2022). The dynamics of physical interaction can be directly controlled by modulating mechanical impedance (Section 3.4). By superimposing passive mechanical impedances, passivity is preserved (Section 3.9, 3.10, 4.1).
Note that the equation of PD control used for DMPs (equation (18)) is identical to a first-order joint-space impedance controller (equation (13)). However, care is required: they are identical only if the robot actuators are ideal torque sources and the impedance is specified in joint space. Impedance control is more general than PD control and not limited to first-order joint-space behavior; by definition, mechanical impedance determines the dynamics of physical interaction at an interaction port, which may, in principle, be at any point(s) on the robot (Won et al., 1997).
5.2.4. Managing kinematic singularity and redundancy
For DMPs, control in task-space requires solving an inverse kinematics problem (Section 3.3). This introduces the challenges of managing kinematic singularity and kinematic redundancy. The latter can be resolved by any of the many feedback control methods presented by Ceccarelli (2008) and Nakanishi et al. (2008). Nevertheless, this requires feedback control based on an error signal. This introduces a non-negligible error, and instead of perfect tracking, asymptotic convergence is achieved. Moreover, these methods still involve a Jacobian (pseudo-)inverse. Consequently, an additional method to handle kinematic singularity should be employed. Importantly, null-space projection methods violate passivity (Section 3.9.1, 3.10.1), and advanced methods to guarantee the robot’s stability might be needed (Dietrich et al., 2015; Lachner, 2022).
For EDAs, explicitly solving the inverse kinematics is not required (Hogan, 1987). Seamless operation into and out of kinematic singularities is possible (Section 3.3.3). EDAs superimpose multiple mechanical impedances to manage kinematic redundancy (Section 3.9.3, 3.10.3). Unlike null-space projection methods, passivity is preserved (Hogan, 2022; Lachner, 2022).
5.2.5. Modularity in robot control
As discussed in the examples of Section 3 and Section 5.1, for both approaches, motor primitives provide a modular control framework for robot control, which thereby simplifies programming of multiple control tasks (Section 3.5, 3.7, 3.8, 3.9, 3.10). Nevertheless, it is important to note that the extent of modularity and its practical implications significantly differ between these two approaches.
A clear distinction is evident when combining multiple movements. For DMPs, by design, discrete and rhythmic movements are generated by different DMPs. Hence, discrete and rhythmic movements cannot be simply superimposed (Section 3.7). Moreover, to sequence discrete movements, the goal location
On the other hand, for EDAs, sequencing and/or combining multiple movements can be seamlessly conducted at the level of the virtual trajectory simply by adding components (Section 2.2.4). Recall that a combination of discrete and rhythmic movements was achieved by simply adding submovements and oscillations (Section 3.7). For sequencing discrete movements, the subsequent discrete movement was superimposed without modifying the previous movement (Section 3.8). These properties provide a notable degree of simplicity and modularity, as individual motions can be separately planned and simply added without further modification.
The superposition principle of mechanical impedances enables breaking down complex tasks into simpler sub-tasks, solving each sub-task with a specific module, and simply adding the outputs of these modules to solve the original problem (Section 3.5). Using mechanical impedances with this “divide-and-conquer” strategy, the overall complexity of the control problem can be significantly reduced. A task may be achieved by simply reusing the impedance controllers of component tasks without modification. This modular property of EDAs is in contrast with DMPs. While the learned weights of the nonlinear forcing terms can be reused for a single DMP, multiple DMPs cannot be simply combined by merging the learned weights of different DMPs.
5.3. Combining the best of both approaches
Table. The advantages of DMP and EDA that are combined to get the best of both approaches.
In Section 4, we provided two examples that demonstrated the benefits of combining both approaches. The key idea was to use Imitation Learning of DMPs to generate the virtual trajectories of EDAs. With this, the drawing and erasing task was successfully achieved (Section 4.1). Thanks to EDAs’ favorable stability properties, the robotic manipulator remained stable despite contact and physical interaction. With Imitation Learning of DMPs and the kinematic modularity of EDAs, the erasing motion was achieved by simply superimposing an oscillation onto the (reversed) drawing motion. Moreover, modular Imitation Learning was achieved, where different motions learned in different spaces were separately learned and linearly combined (Section 4.2).
Along with Section 4, further examples illustrate combinations of the two approaches. Superimposing a low-gain PD controller onto the feedforward torque command of DMPs (Section 3.4) can be regarded as an example of combining both approaches. Given an ideal torque-actuated robot, a first-order joint-space impedance controller (an example of an EDA) is added to a feedforward torque command based on DMPs. Robustness against uncertainty or physical contact can also be achieved by superimposing other mechanical impedances, for example, a first-order position task-space impedance controller (equation (16)). Additionally, for tasks combining discrete and rhythmic movements (Section 3.7) and sequencing discrete movements (Section 3.8), one can instead encode
6. Conclusion
In this paper, we provided a detailed comparison of two motor-primitive approaches in robotics: DMPs and EDAs. Both approaches utilize motor primitives as fundamental building blocks to parameterize a controller, enabling highly dynamic robot behavior with minimal high-level intervention.
Despite this similarity, there are notable differences in their implementation. Using simulation, we delineated the differences between DMPs and EDAs through nine robot control examples. While DMPs can easily learn and track trajectories of arbitrary complexity, EDAs are robust against uncertainty and have advantages for physical interaction. Accounting for the similarities and differences of both approaches, we provided real robot implementation of how DMPs and EDAs can be combined to achieve a rich repertoire of movements that is also robust against uncertainty and physical interaction.
In conclusion, control approaches based on DMPs, EDAs or their combination offer valuable techniques to generate dynamic robot behavior. By understanding their similarities and differences, researchers may make informed decisions to select the most suitable approach for specific robot tasks and applications.
Supplemental Material
