
Proposal: General endpoint specification

Overview

This feature request targets point #4 on the wish list about being able to configure the teleoperation pipeline for different endpoints. This extends the possible applications of the teleoperation pipeline and also enables its use with robots other than REEM.

Target changes

The current pipeline implementation already handles different types of endpoints. For example, the retargeting of the head motion is done quite simply: only the head orientation is mapped onto the robot. Retargeting the arm motion, on the other hand, involves the shoulder, elbow and hand endpoints.

However, there is currently no way to retarget only parts of the body or only parts of the motion of an endpoint. For example, one might only want to retarget the head motion, even if the motion of the operator's whole body is captured and a robot with more DOFs than needed is available. Or one might only want to retarget the hand motion without caring about the elbow - the robot used might not even have one.
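
As an illustration only, such a per-endpoint selection could be expressed in a configuration fragment like the following; the endpoint names and keys are made up and do not correspond to existing parameters:

# Hypothetical: retarget only the head and the right hand
endpoints:
  head:
    map_position: false
    map_orientation: true
  right_hand:
    map_position: true
    map_orientation: true
# endpoints not listed (torso, elbows, left arm, ...) are simply ignored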

The differences outlined above mainly concern motion_adaption, which does the rescaling of the target endpoints. The IK service of the tree_kinematics package is already able to work with multiple endpoints and allows specifying which task space directions (position and orientation) are used.

Proposals for the interface redesign

List here your thoughts about how the feature described above could/should be implemented.

Marcus

Comment - 17.01.2012

My thoughts went into two directions: My main goal is to find an abstract endpoint definition which covers all/many use cases. Since I haven't found such a definition yet, and since I expect that several iterations/redesigns might be needed to find one, I also tried to come up with simpler definitions. These need to cover at least the current use cases: retargeting the complete upper body of REEM and Robosem.

Currently, 2 different endpoint types are implemented: one for the head and torso and one for the arms (shoulder, elbow and wrist).

For Robosem we would need to add a new type for the 2-DOF arms (2-DOF shoulder + fixed arm).

So, we could implement these 3 types and add new ones as needed.

Another idea is to allow the specification of the involved segments. For example, the scaling of the translations for REEM's wrists involves 3 segments (torso->shoulder, shoulder->elbow, elbow->wrist), for Robosem's arms 2, and for the head and torso only 1. That, however, only covers the scaling of the translations. I still lack a good idea on how to integrate orientations into this approach. Maybe we allow specifying whether orientations are not mapped, mapped untouched, or computed from scratch, as in the case of REEM's arms.
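
As a sketch of this idea, the per-endpoint configuration might list the involved segments together with an orientation mode; all keys and names below are invented for illustration:

# Hypothetical segment-based endpoint specification
right_wrist:
  segments: [torso->shoulder, shoulder->elbow, elbow->wrist]  # 3 segments to scale
  orientation: compute   # recomputed from scratch, as for REEM's arms
right_elbow:
  segments: [torso->shoulder, shoulder->elbow]
  orientation: none      # not mapped
head:
  segments: [torso->head]
  orientation: map       # mapped untouched (1-to-1)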

Comment - 01.04.2013

First, I will respond to the request to provide more information about the adaption process.

Motion adaption is currently done specifically for REEM and our goal of reproducing "expressive motions". There are 2 types of adaption/retargeting involved: one for the left/right arms and one for the head and torso.

Head and torso adaptions are simple: fixed translation (i.e. ref frame -> head, ref frame -> torso) plus 1-to-1 tracking of the orientation.
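
In symbols (my notation, not taken from the code), the adapted head pose combines a constant translation from the robot model with the tracked operator orientation:

  T^{robot}_{ref \to head} = \begin{bmatrix} R^{operator}_{head} & p^{fixed}_{head} \\ 0 & 1 \end{bmatrix}

and analogously for the torso.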

Adaptions for the arm endpoints - shoulder, elbow and wrist - are more difficult. They require information about the shoulder width and height (for the shoulder poses) and the arm length (for the elbow and wrist poses). Based on this information, new positions and orientations for the shoulder, elbow and wrist are calculated.
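
A plausible form of this scaling (my reconstruction from the description above, not the actual code) scales the operator's positions relative to the shoulder by the ratio of robot to operator arm length, e.g. for the wrist:

  p^{robot}_{wrist} = p^{robot}_{shoulder} + \frac{L^{robot}_{arm}}{L^{operator}_{arm}} \left( p^{operator}_{wrist} - p^{operator}_{shoulder} \right)

with the shoulder position itself derived from the robot's shoulder width and height, and the orientations recomputed to fit the resulting arm posture.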

An endpoint here describes a 6D frame (a point with position and orientation information), which is retargeted, i.e. adapted (task space, 6D pose -> 6D pose) and transformed (IK, 6D pose -> nD joint velocity/position vector).
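
Schematically (the notation is mine):

  x^{operator} \in SE(3) \xrightarrow{\text{adaption}} x^{robot} \in SE(3) \xrightarrow{\text{IK}} q \in \mathbb{R}^{n}

where n is the number of joints the IK solver controls for that endpoint.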

I hope this gives a better idea of the motion adaption (and retargeting) process.

Regarding a (more) generic configuration:

I would like to have a modular structure which allows easily creating various retargeting types/methods. However, our current use cases are very different from each other and have different requirements, see head vs. arm retargeting. So far I haven't been able to come up with generic building blocks for those types.

Hence, I suggest we start with the use cases we have and focus on an easy configuration of those. I.e. it should be straightforward and easy to configure the same retargeting method for different robots which fulfill the requirements (i.e. provide the required DOFs), e.g. arm retargeting for REEM and PR2, head retargeting for REEM and Robosem.
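
As a sketch (robot, type and frame names are placeholders), reusing one retargeting type for two robots might then look like this:

# Hypothetical: the same retargeting type configured for two robots
reem:
  head_retargeting:
    type: orientation_only
    reference_frame: base_link
    target_frame: head_link
robosem:
  head_retargeting:
    type: orientation_only
    reference_frame: base_link
    target_frame: head_frame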

I like the configuration structure proposed by Adolfo (comment 18.01.2012). To make configuration easy, I'd like to:

To implement the latter, I'd like to move away from 3 separate nodes (motion adaption, tree kinematics and coordinator) and instead make the coordinator use motion adaption and tree kinematics as libraries, configure them on start-up, and control the data flow (input -> motion adaption -> tree kinematics -> output) during runtime. I believe this would make the configuration easier to implement and understand (when looking inside the code) and enhance the performance of the whole pipeline.
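
With the coordinator as the single configurable component, one combined configuration could then describe the whole pipeline; the keys below are again only a sketch, not existing parameters:

# Hypothetical coordinator configuration (all keys invented)
coordinator:
  input: operator_tf          # captured operator motion
  output: joint_commands      # robot joint commands
  motion_adaption:
    endpoints: [head, right_wrist, left_wrist]
  tree_kinematics:
    ik_endpoints: [head_link, right_hand_link, left_hand_link]
    use_position: true
    use_orientation: true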

Adolfo

Comment - 18.01.2012

I'd first like to see some documentation on how motion adaption currently works, in particular how it chooses to adapt each frame. I checked the different branches on GitHub, and the documentation is very poor.

Motion adaption should only adapt the frames specified in the relevant configuration file, nothing else. So, even if we have a full humanoid robot, the application might be such that it only requires adapting a small subset of endpoints. I was thinking of something along the lines of:

coupling_type1:
  requisite1: foo
  requisite2:
    - bar
    - baz

coupling_type2:
  ...

The coupling_type represents each different type of supported adaption, and the requisites represent the requirements for that particular type of adaptation (input/output frames, extra metadata). I cannot provide a comprehensive list of examples without digging into the code, but this would be the next logical step.
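
A filled-in instance of this structure could, for example, look as follows; the coupling type names, frames and metadata are invented for illustration:

# Hypothetical example of the proposed structure
head_orientation_coupling:
  input_frame: operator_head
  output_frame: robot_head_link
  reference_frame: robot_base_link

right_arm_coupling:
  input_frames:
    - operator_r_shoulder
    - operator_r_elbow
    - operator_r_wrist
  output_frames:
    - robot_r_shoulder_link
    - robot_r_elbow_link
    - robot_r_wrist_link
  arm_length: 0.6   # extra metadata used for scaling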

The endpoint term is a bit overloaded, as it may also refer to IK endpoints. It might be useful to bring IK endpoints into the discussion as well, since they might be subject to a similar specification.

