
Planet ROS

Planet ROS - http://planet.ros.org



ROS Discourse General: New packages for Humble Hawksbill 2025-08-15

Package Updates for Humble

Added Packages [128]:

Updated Packages [309]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-packages-for-humble-hawksbill-2025-08-15/49593

ROS Discourse General: UBLOX ZED-X20P Integration Complete - 25Hz NavSatFix

I’ve completed initial UBLOX ZED-X20P integration in the ublox_dgnss package with 25Hz NavSatFix output.

Quick Start

ros2 launch ublox_dgnss ublox_x20p_rover_hpposllh_navsatfix.launch.py -- device_family:=x20p

What’s New

Available Now

Available now on GitHub for local compilation:

Architecture Notes

X20P main interface (0x01ab) fully supported with F9P/F9R compatibility.

UART interfaces (0x050c/0x050d) under investigation - see X20P UART1/UART2 interfaces (0x050c/0x050d) not supported - use main interface (0x01ab) · Issue #48 · aussierobots/ublox_dgnss · GitHub.

Have an X20P?

If you want to test it out and give us feedback, it would be appreciated!

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ublox-zed-x20p-integration-complete-25hz-navsatfix/49586

ROS Discourse General: RMW-RMW bridge - is it possible, has anyone done it?

We’re more and more thinking that there should be a RMW-RMW bridge for ROS 2.

Our specific use-case is simple - a microcontroller with MicroROS (thus FastDDS) and the rest would be better with Zenoh RMW. But we can’t use Zenoh in the rest of the system because DDS and Zenoh don’t talk to each other.

I know (or guess) that between DDS-based RMWs, there is the possibility to interoperate on the DDS level (though it’s incomplete for some combinations AFAIU).

But if you need to connect a non-DDS RMW, there’s currently no option.

I haven’t dived into RMW details too much yet, but I guess in principle, creating such a bridge at the RMW level should be possible, right?

Has anyone tried that? Is it achievable to create something that is “RMW-agnostic”, meaning one generic bridge for any pair (or n-tuple) of RMWs to connect?

Of course, such a solution would hinder performance (all messages would have to be brokered by the bridge), but in our case, we only have a uC feeding one IMU stream, odometry, some state and diagnostics, and receiving only cmd_vel and a few other commands. So performance should not be a problem, at least in these simpler cases.

5 posts - 4 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/rmw-rmw-bridge-is-it-possible-has-anyone-done-it/49564

ROS Discourse General: Bagel's New Release -- Cursor Integration

We are thrilled to announce a new integration for our open-source tool, Bagel! Two weeks ago, we presented Bagel at the ROS/PX4 meetup in Los Angeles, and the community’s excitement was incredible. As promised, we’ve integrated Bagel with the Cursor IDE to make robotics development even easier.

You can find the full tutorial here: bagel/doc/tutorials/mcp/2_cursor_px4.ipynb at stage · Extelligence-ai/bagel · GitHub

What is Bagel?

If you’re new to Bagel, it’s a tool that lets you chat with your rosbags using natural language queries. This allows you to quickly get insights from your log files without writing code. For example, you can ask questions like:

Bagel has currently been tested on:


How to Get Involved

Bagel is a community-driven project, and we’d love for you to be a part of it. Your contributions are what will make this tool truly great.

Here are a few ways you can help:

Many people have done so! The community found two bugs and filed two feature requests already!

Thank you for your support!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/bagels-new-release-cursor-integration/49562

ROS Discourse General: Localization of ROS 2 Documentation

Hello, Open Robotics Community,

I’m glad to announce that the :tada: ros2-docs-l10n :tada: project is published now:

The goal of this project is to translate the ROS 2 documentation into multiple languages. Translations are contributed via the Crowdin platform and automatically synchronized with the GitHub repository. Translations can be previewed on GitHub Pages.

10 posts - 3 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/localization-of-ros-2-documentation/49558

ROS Discourse General: Open-sourcing ROS 1 code from RUVU (AMCL reimplementation, planners, controllers and more)

Hey everyone,

As part of the acqui-hire of our startup RUVU, we’re open-sourcing a large portion of the ROS 1 code we’ve built over the years.

While it’s all written for ROS 1, and so not immediately plug-and-play for ROS 2 users, we hope some of it might still be useful, inspirational, or a good starting point for your own projects.

Some highlights:

Everything is released under the MIT license, so feel free to fork, adapt, and use anything you find interesting.

We’re not planning on actively maintaining this code right now, but that could change if there’s enough community interest.

If you have questions, ideas, or want to discuss this code, you can reach me here or at my current role at Nobleo Technology.

— The (old) RUVU Team

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/open-sourcing-ros-1-code-from-ruvu-amcl-reimplementation-planners-controllers-and-more/49556

ROS Discourse General: Fixed Position Recording and Replay for AgileX PIPER Robotic Arm

We recently implemented a fixed position recording and replay function for the AgileX PIPER robotic arm using the official Python SDK. This feature allows recording specific arm positions and replaying them repeatedly, which is useful for teaching demonstrations and automated operations.

In this post, I will share the detailed setup steps, code implementation, usage instructions, and a demonstration video to help you get started.

Tags

Position recording, Python SDK, teaching demonstration, position reproduction, AgileX PIPER

Code Repository

GitHub link: https://github.com/agilexrobotics/Agilex-College.git

Function Demonstration

PIPER Robotic Arm | Fixed Position Recording & Replay Demo

Preparation Before Use

Hardware Preparation for PIPER Robotic Arm

Environment Configuration

sudo apt install git
sudo apt install python3-pip
sudo apt install can-utils ethtool
git clone -b 1_0_0_beta https://github.com/agilexrobotics/piper_sdk.git
cd piper_sdk
pip3 install .

Operation Steps for Fixed Position Recording and Replay Function

  1. Power on the robotic arm and connect the USB-to-CAN module to the computer (ensure that only one CAN module is connected)
  2. Open the terminal and activate the CAN module

sudo ip link set can0 up type can bitrate 1000000

  3. Clone the remote code repository

git clone https://github.com/agilexrobotics/Agilex-College.git

  4. Switch to the recordAndPlayPos directory

cd Agilex-College/piper/recordAndPlayPos/

  5. Run the recording program

python3 recordPos_en.py

  6. Short-press the teach button to enter the teaching mode

  7. Move the robotic arm to the desired position, press Enter in the terminal to record it, and enter ‘q’ to end the recording.

  8. After recording, short-press the teach button again to exit the teaching mode

  9. Notes before replay: When exiting the teaching mode for the first time, a specific initialization process is required to switch from the teaching mode to the CAN mode. Therefore, the replay program will automatically perform a reset operation to return joints 2, 3, and 5 to safe positions (zero points) to prevent the robotic arm from suddenly falling due to gravity and causing damage. In special cases, manual assistance may be required to return joints 2, 3, and 5 to zero points.
  10. Run the replay program

python3 playPos_en.py

  11. After successful enabling, press Enter in the terminal to play the positions

Problems and Solutions

Problem 1: There is no Piper class.


Reason: The currently installed SDK is not the version with API.

Solution: Execute pip3 uninstall piper_sdk to uninstall the current SDK, then install the 1_0_0_beta version of the SDK following the Environment Configuration section above.

Problem 2: The robotic arm does not move, and the terminal outputs as follows.

Reason: The teach button was short-pressed during the operation of the program.

Solution: Check whether the indicator light of the teach button is off. If yes, re-run the program; if not, short-press the teach button to exit the teaching mode first and then run the program.

Code/Principle and Parameter Description

Implementation of Position Recording Program

The position recording program is the data collection module of the system, which is responsible for capturing the joint position information of the robotic arm in the teaching mode.

Program Initialization and Configuration

Parameter Configuration Design

#  Whether there is a gripper
have_gripper = True
# Timeout for teaching mode detection, unit: second
timeout = 10.0
# CSV file path for saving positions
CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")

Analysis of configuration parameters:

The have_gripper parameter is a boolean; True means a gripper is attached.

The timeout parameter sets the timeout for teaching-mode detection: if the teaching mode is not entered within 10 s of starting the program, the program exits.

The CSV_path parameter sets the save path of the trajectory file, which defaults to the same directory as the program, with the file name pos.csv.

Robotic Arm Connection and Initialization

# Initialize and connect the robotic arm
piper = Piper("can0")
interface = piper.init()
piper.connect()
time.sleep(0.1)

Analysis of connection mechanism:

Piper() is the core class of the API; it wraps the interface and simplifies some common methods.

init() creates and returns an interface instance, which can be used to call some of Piper's more specialized methods.

connect() starts a thread that connects to the CAN port and processes CAN data.

time.sleep(0.1) is added to ensure that the connection is fully established. In embedded systems, hardware initialization usually takes some time, and this short delay improves the reliability of subsequent operations.

Position Acquisition and Data Storage

Implementation of Position Acquisition Function

def get_pos():
    '''Get the current joint radians of the robotic arm and the gripper opening distance'''
    joint_state = piper.get_joint_states()[0]
    if have_gripper:
        return joint_state + (piper.get_gripper_states()[0][0], )
    return joint_state

Mode Detection and Switching

print("INFO: Please click the teach button to enter the teaching mode")
over_time = time.time() + timeout
while interface.GetArmStatus().arm_status.ctrl_mode != 2:
    if over_time < time.time():
        print("ERROR: Teaching mode detection timeout, please check whether the teaching mode is enabled")
        exit()
    time.sleep(0.01)

Status polling strategy
The program uses polling to detect the control mode, and this method has the following characteristics:

Timeout protection mechanism:
The 10-second timeout setting takes into account the needs of actual operations:

Safety features of teaching mode:

Data Recording and Storage

count = 1
csv = open(CSV_path, "w")
while input("INPUT: Input q to exit, press Enter directly to record:") != "q":
    current_pos = get_pos()
    print(f"INFO: {count}th position, recorded position:  {current_pos}")
    csv.write(",".join(map(str, current_pos)) + "\n")
    count += 1
csv.close()
print("INFO: Recording ends, click the teach button again to exit the teaching mode")

Data integrity guarantee:
After each recording, the row is written to the file immediately, so that data is not lost if the program exits abnormally.
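If you also want each row forced onto disk (not just handed to Python's file object), an explicit flush plus os.fsync after each write is a small addition. A minimal sketch of such a helper, using only the Python standard library and not part of the original program:

import os

def append_row(csv_path, row):
    '''Append one recorded position to the CSV and force it onto disk.'''
    with open(csv_path, "a") as f:
        f.write(",".join(map(str, row)) + "\n")
        f.flush()              # push Python's internal buffer to the OS
        os.fsync(f.fileno())   # ask the OS to commit the data to storage

The recording loop could call append_row(CSV_path, current_pos) in place of writing through a long-lived file handle.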

Data Format Selection
Reasons for choosing CSV format for data storage:

Data column attributes:

Complete Code Implementation of Position Recording Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Record positions
import os, time
from piper_sdk import *

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Timeout for teaching mode detection, unit: second
    timeout = 10.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")
    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state
    
    print("INFO: Please click the teach button to enter the teaching mode")
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 2:
        if over_time < time.time():
            print("ERROR:Teaching mode detection timeout, please check whether the teaching mode is enabled")
            exit()
        time.sleep(0.01)

    count = 1
    csv = open(CSV_path, "w")
    while input("INPUT: Enter q to exit, press Enter directly to record:  ") != "q":
        current_pos = get_pos()
        print(f"INFO:  {count}th position, recorded position: {current_pos}")
        csv.write(",".join(map(str, current_pos)) + "\n")
        count += 1
    csv.close()
    print("INFO: Recording ends, click the teach button again to exit the teaching mode")

Implementation of Position Replay Program

The position replay program is the execution module of the system, responsible for reading the recorded position data and controlling the robotic arm to reproduce these positions.

Parameter Configuration and Data Loading

Replay Parameter Configuration

# Number of replays, 0 means infinite loop
play_times = 1
# replay interval, unit: second, negative value means manual key control
play_interval = 0
# Movement speed percentage, recommended range: 10-100
move_spd_rate_ctrl = 100

Analysis of parameter design:

The play_times parameter supports three replay modes:

The negative-value behavior of play_interval is a deliberate user-interface design:

The move_spd_rate_ctrl parameter provides speed control, which is important for different application scenarios:

Data File Reading

try:
    with open(CSV_path, 'r', encoding='utf-8') as f:
        track = list(csv.reader(f))
        if not track:
            print("ERROR: The position file is empty")
            exit()
        track = [[float(j) for j in i] for i in track]    # Convert to a list of floating-point numbers
except FileNotFoundError:
    print("ERROR: The position file does not exist")
    exit()

Exception handling strategies:

Data type conversion:
In the process of converting string data to floating-point numbers, the program uses list comprehensions.

Safety Stop Function

def stop():
    '''Stop the robotic arm; when exiting the teaching mode for the first time, this function must be called first to control the robotic arm in CAN mode'''
    interface.EmergencyStop(0x01)
    time.sleep(1.0)
    limit_angle = [0.1745, 0.7854, 0.2094]  # The robotic arm can be restored only when the angles of joints 2, 3, and 5 are within the limit range to prevent damage caused by falling from a large angle
    pos = get_pos()
    while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
        time.sleep(0.01)
        pos = get_pos()
    # Restore the robotic arm
    piper.disable_arm()
    time.sleep(1.0)

Staged stop strategy:
The stop function adopts a staged safety stop strategy:

  1. Emergency stop stage: EmergencyStop(0x01) sends an emergency stop command to immediately stop all joint movements (joints with impedance)
  2. Safe position waiting: Wait for key joints (joints 2, 3, and 5) to move within the safe range
  3. System recovery stage: Send a recovery command to reactivate the control system

Safety range design:
The program pays special attention to the positions of joints 2, 3, and 5, which is based on the mechanical structure characteristics of the PIPER robotic arm:

The setting of the safe angle range (10°, 45°, 12°) is based on the following considerations:

Real-time monitoring mechanism: The program uses real-time polling to monitor the joint positions to ensure that the next step is performed only when the safety conditions are met.

System Enable Function

def enable():
    '''Enable the robotic arm and gripper'''
    while not piper.enable_arm():
        time.sleep(0.01)
    if have_gripper:
        time.sleep(0.01)
        piper.enable_gripper()
    interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
    print("INFO: Enable successful")

Robotic arm enabling: enable_arm()

Gripper enabling: enable_gripper()

Control mode setting: ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)

Control mode parameters:

Replay Control Logic

count = 0
input("step 2: Press Enter to start playing positions")
while play_times == 0 or abs(play_times) != count:
    for n, pos in enumerate(track):
        while True:
            piper.move_j(pos[:-1], move_spd_rate_ctrl)
            time.sleep(0.01)
            current_pos = get_pos()
            print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
            if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                break
        if have_gripper and len(pos) == 7:
            piper.move_gripper(pos[-1], 1)
            time.sleep(0.5)
        if play_interval < 0:
            if n != len(track) - 1 and input("Enter q to exit, press Enter directly to play:  ") == 'q':
                exit()
        else:
            time.sleep(play_interval)
    count += 1

Joint control: move_j()

Gripper control: move_gripper()

Position control closed-loop system:

  1. Target setting: Send target position commands to each joint through themove_j()function
  2. Status feedback: Obtain the current actual position through theget_pos()function
  3. Error calculation: Compare the difference between the target position and the actual position
  4. Convergence judgment: Consider reaching the target when the error is less than the threshold

Multi-joint coordinated control:
all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)) ensures that the next step is performed only after all six joints reach the target position.
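For readers adapting the replay loop, the convergence test can be factored into a small helper. A minimal sketch using the same 0.0698 rad (roughly 4°) threshold as the program above:

def reached_target(current_pos, target_pos, tolerance=0.0698, num_joints=6):
    '''Return True once all monitored joints are within tolerance of the target.'''
    return all(abs(current_pos[i] - target_pos[i]) < tolerance
               for i in range(num_joints))

The inner while loop would then simply break once reached_target(current_pos, pos) returns True.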

Gripper control strategy:
The gripper control adopts an independent control logic:

Replay rhythm control:
The program supports three replay rhythms:

Complete Code Implementation of the Position Replay Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Play positions
import os, time, csv
from piper_sdk import Piper

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Number of playbacks, 0 means infinite loop
    play_times = 1
    # Playback interval, unit: second; negative value means manual key control
    play_interval = 0
    # Movement speed percentage, recommended range: 10-100
    move_spd_rate_ctrl = 100
    # Timeout for switching to CAN mode, unit: second
    timeout = 5.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")
    # Read the position file
    try:
        with open(CSV_path, 'r', encoding='utf-8') as f:
            track = list(csv.reader(f))
            if not track:
                print("ERROR: Position file is empty")
                exit()
            track = [[float(j) for j in i] for i in track]    # Convert to a list of floating-point numbers
    except FileNotFoundError:
        print("ERROR: Position file does not exist")
        exit()

    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state    

    def stop():
        '''Stop the robotic arm; this function must be called first when exiting the teaching mode for the first time to control the robotic arm in CAN mode'''
        interface.EmergencyStop(0x01)
        time.sleep(1.0)
        limit_angle = [0.1745, 0.7854, 0.2094]  # The robotic arm can be restored only when the radians of joints 2, 3, and 5 are within the limit range to prevent damage caused by falling from a large radian
        pos = get_pos()
        while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
            time.sleep(0.01)
            pos = get_pos()
        # Restore the robotic arm
        piper.disable_arm()
        time.sleep(1.0)
    
    def enable():
        '''Enable the robotic arm and gripper'''
        while not piper.enable_arm():
            time.sleep(0.01)
        if have_gripper:
            time.sleep(0.01)
            piper.enable_gripper()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        print("INFO: Enable successful")

    print("step 1:  Please ensure the robotic arm has exited the teaching mode before playback")
    if interface.GetArmStatus().arm_status.ctrl_mode != 1:
        stop()  # This function must be called first when exiting the teaching mode for the first time to switch to CAN mode
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 1:
        if over_time < time.time():
            print("ERROR: Failed to switch to CAN mode, please check if the teaching mode is exited")
            exit()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        time.sleep(0.01)
    
    enable()
    count = 0
    input("step 2: Press Enter to start playing positions")
    while play_times == 0 or abs(play_times) != count:
        for n, pos in enumerate(track):
            while True:
                piper.move_j(pos[:-1], move_spd_rate_ctrl)
                time.sleep(0.01)
                current_pos = get_pos()
                print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
                if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                    break
            if have_gripper and len(pos) == 7:
                piper.move_gripper(pos[-1], 1)
                time.sleep(0.5)
            if play_interval < 0:
                if n != len(track) - 1 and input("INPUT: Enter 'q' to exit, press Enter directly to play:  ") == 'q':
                    exit()
            else:
                time.sleep(play_interval)
        count += 1

Summary

The above implements the fixed position recording and replay function based on the AgileX PIPER robotic arm. By applying the Python SDK, it is possible to record and repeatedly execute specific positions of the robotic arm, providing strong technical support for teaching demonstrations and automated operations.

If you have any questions regarding the use, please feel free to contact us at support@agilex.ai.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/fixed-position-recording-and-replay-for-agilex-piper-robotic-arm/49533

ROS Discourse General: ROS Kerala | Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph

:studio_microphone: ROS Kerala Presents: Robotic Talk Series Topic: Building a Robotics Career in the US – Myths, Challenges & Reality

Join Jerin Peter (Graduate Student – Robotics, UC Riverside) and Lentin Joseph (Senior ROS & AI Consultant, CTO & Co-Founder – RUNTIME Robotics) as they share real-world insights on launching and growing a career in robotics in the United States.

From higher education choices and visa hurdles to mastering ROS and cracking robotics interviews, this talk covers it all. Whether you’re a student, a robotics enthusiast, or a professional looking to go abroad, you’ll find valuable tips and lessons here.

Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-kerala-building-a-robotics-career-in-the-usa-robotics-talk-jerin-peter-lentin-joseph/49507

ROS Discourse General: Native rcl::tensor type

We propose introducing the concept of a tensor as a natively supported type in ROS 2 Lyrical Luth. Below is a sketch of how this would work for initial feedback before we write a proper REP for review.

Abstract

Tensors are a fundamental data structure often used to represent multi-modal information for deep neural networks (DNNs) at the core of policy-driven robots. We introduce rcl::tensor as a native type in rcl, as a container for memory that can optionally be externally managed. This type would be supported through all client libraries (rclcpp, rclpy, …), the ROS IDL (rosidl), and all RMW implementations. This enables tensor_msgs ROS messages, based on sensor_msgs, which use tensor instead of uint8[]. The default implementation of rcl::tensor operations for creation/destruction and manipulation will be available on all tiers of supported platforms. With an optional package and an environment variable, a platform-optimized implementation of the rcl::tensor operations can then be swapped in at runtime to take advantage of accelerator-managed memory/compute. Through adoption of rcl::tensor in developer code and ROS messages, we can enable seamless platform-specific acceleration determined at runtime without any recompilation or redeployment.

Motivation

ROS 2 should be accelerator-aware but accelerator-agnostic like other popular frameworks such as PyTorch or NumPy. This enables package developers that conform to ROS 2 standards to gain platform-specific optimizations for free (“optimal where possible, compatible where necessary”).

Background

AI robots and policy-driven physical agents rely on accelerated deep neural network (DNN) model inference through tensors. Tensors are a fundamental data structure for representing multi-dimensional data, from scalars (rank 0), vectors (rank 1), and matrices (rank 2) to batches of multi-channel matrices (rank 4). These can be used to encode all data flowing through such graphs, including images, text, joint positions, poses, trajectories, IMU readings, and more.
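As a quick illustration of tensor ranks using NumPy (one of the frameworks the proposal cites as an analogy):

import numpy as np

scalar = np.float32(1.0)                               # rank 0
joints = np.zeros(6, dtype=np.float32)                 # rank 1: joint positions
pose = np.eye(4, dtype=np.float32)                     # rank 2: homogeneous transform
batch = np.zeros((8, 3, 480, 640), dtype=np.float32)   # rank 4: batch of RGB images
print(scalar.ndim, joints.ndim, pose.ndim, batch.ndim) # 0 1 2 4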

Performing inference with these DNN model policies requires these tensors to reside in accelerator memory. ROS messages, however, expect their payloads to reside in main memory, with field types such as uint8[] or multi-dimensional arrays. This requires the payloads to be copied from main memory to accelerator memory and then copied back to main memory after processing in order to populate a new ROS message to publish. This quickly becomes the primary bottleneck for policy inference. Type adaptation in rclcpp provides a solution for this, but it requires all participating packages to have accelerator-specific dependencies and only applies within the client library, so RMW implementations cannot, for example, take advantage of accelerator-optimized memory.

Additionally, without a canonical tensor type in ROS 2, a patchwork of different tensor libraries across various ROS packages is causing impedance mismatches with popular deep learning frameworks including PyTorch.

Requirements

Rough Sketch

struct rcl::tensor
{
    std::vector<size_t> shape;   // shape of the tensor
    std::vector<size_t> strides; // strides of the tensor
    size_t rank;                 // number of dimensions

    union {
        void* data;    // pointer to the data in main memory
        size_t handle; // token stored by rcl::tensor for externally managed memory
    };
    size_t byte_size;  // size of the data in bytes

    data_type_enum type; // the data type
};

Core Tensor APIs

Inline APIs available on all platforms in core ROS 2 rcl.

Creation

Create a new tensor from main memory.

Common operations

Manipulations performed on tensors that can be optionally accelerated. The more complete these APIs are, the less fragmented the ecosystem will be, but the higher the burden on implementers. These should be modeled after the PyTorch tensor API and existing C tensor libraries such as libXM, or C++ libraries like xtensor.
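For a sense of the kind of operations meant here, the NumPy equivalents look roughly as follows (an analogy only; these are not proposed rcl::tensor signatures):

import numpy as np

image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
normalized = image.astype(np.float32) / 255.0      # normalize to [0, 1]
chw = np.transpose(normalized, (2, 0, 1))          # permute to channel-first layout
flat = chw.reshape(-1)                             # flatten, e.g. before materializing bytes
print(chw.shape, chw.strides, flat.nbytes)         # shape, strides, and byte size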

Managed access

Provide a way to access elements individually in parallel.

Direct access

Retrieve the underlying data in main memory but may involve movement of data.

Other Conveniences

  1. rcl functions to check which tensor implementation is active.
  2. tensor_msgs::Image to mirror sensor_msgs::Image to enable smooth migration to using the tensor type in common ROS messages (a sketch of such a message follows after this list). An alternative is to add a “union” field in sensor_msgs::Image alongside the uint8[] data field.
  3. cv_bridge API to convert between cv::Mat and tensor_msgs::Image.
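For illustration only, a hypothetical tensor_msgs/Image.msg could mirror sensor_msgs/Image field for field, swapping the uint8[] payload for a tensor. The non-payload fields below follow sensor_msgs/Image; the tensor field type itself is the assumption this proposal introduces:

# tensor_msgs/Image.msg (hypothetical sketch mirroring sensor_msgs/Image)
std_msgs/Header header   # timestamp and frame id
uint32 height            # image height, number of rows
uint32 width             # image width, number of columns
string encoding          # pixel encoding, e.g. rgb8, mono8
uint8 is_bigendian       # endianness of the original data
uint32 step              # full row length in bytes
tensor data              # payload as a tensor instead of uint8[]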

Platform-specific tensor implementation

Without loss of generality, suppose we have an implementation of tensor that uses an accelerated library, such as rcl_tensor_cuda for CUDA. This package provides shared libraries that implement all of the core tensor APIs. Setting the environment variable RCL_TENSOR_IMPLEMENTATION=rcl_tensor_cuda enables loading rcl_tensor_cuda at runtime without rebuilding any other packages. Unlike the native implementation, rcl_tensor_cuda copies the input buffer into a CUDA buffer and uses CUDA to perform operations on that CUDA buffer.

It also provides new APIs for creating a tensor from a CUDA buffer, for checking whether the rcl_tensor_cuda implementation is active, and for accessing the CUDA buffer backing a tensor, available to any other package libraries that link to rcl_tensor_cuda directly. An RMW implementation linked against rcl_tensor_cuda would query the CUDA buffer backing a tensor and use optimized transport paths to handle it, while a general RMW implementation could just call rcl_tensor_materialize_bytes and transport the main-memory payload as normal.
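As a loose analogy for the runtime selection mechanism (written in Python rather than the shared-library loading rcl would actually use, and with the backend module name purely hypothetical), the pattern looks roughly like this:

# Toy illustration of env-var-driven backend selection; RCL_TENSOR_IMPLEMENTATION
# names a hypothetical Python module here, whereas rcl would load a shared library.
import importlib
import os

import numpy as np

def load_tensor_backend():
    name = os.environ.get("RCL_TENSOR_IMPLEMENTATION", "")
    if name:
        try:
            return importlib.import_module(name)   # e.g. an accelerated backend
        except ImportError:
            print(f"WARN: backend '{name}' not found, falling back to NumPy")
    return np                                      # default CPU implementation

backend = load_tensor_backend()
tensor = backend.zeros((480, 640, 3), dtype=backend.float32)

The same calling code runs unchanged whether the default or an accelerated backend is active, which is the property the proposal is after.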

Simple Examples

Example #1: rcl::tensor with “accelerator-aware” subscriber

Node A publishes a ROS message with rcl::tensor from main memory bytes and sends it to a topic Node B subscribes to. Node B happens to be written to first check whether the rcl::tensor is backed by externally managed memory AND checks that rcl_tensor_cuda is active (indicates this is backed by CUDA). Node B has a direct dependency on rcl_tensor_cuda in order to perform this check.

Alternatively, Node B could have also been written with no dependency on any rcl::tensor implementation to simply retrieve the bytes from the rcl::tensor and ignore the externally managed memory flag altogether, which would have forced a copy back from accelerator memory in Scenario 2.

MyMsg.msg
---------
std_msgs/Header header
tensor payload

Scenario 1: RCL_TENSOR_IMPLEMENTATION = <none>
----------------------------------------------

┌─────────────────┐    ROS Message    ┌─────────────────┐
│   Node A        │ ────────────────► │   Node B        │
│                 │                   │                 │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Create Tensor│ │                   │ │Receive MyMsg│ │
│ │in MyMsg     │ │                   │ │             │ │
│ └─────────────┘ │                   │ └─────────────┘ │
│         │       │                   │         │       │
│         ▼       │                   │         ▼       │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Publish      │ │                   │ │Check if     │ │
│ │MyMsg        │ │                   │ │Externally   │ │
│ └─────────────┘ │                   │ │Managed      │ │
└─────────────────┘                   │ └─────────────┘ │
                                      │         │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Copy         │ │
                                      │ │to Accel Mem │ │
                                      │ └─────────────┘ │
                                      │          │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Process on   │ │
                                      │ │Accelerator  │ │
                                      │ └─────────────┘ │
                                      └─────────────────┘

Scenario 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda
--------------------------------------------------------

┌─────────────────┐    ROS Message    ┌─────────────────┐
│   Node A        │ ────────────────► │   Node B        │
│                 │                   │                 │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Create Tensor│ │                   │ │Receive MyMsg│ │
│ │in MyMsg     │ │                   │ │             │ │
│ └─────────────┘ │                   │ └─────────────┘ │
│         │       │                   │         │       │
│         ▼       │                   │         ▼       │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Publish MyMsg│ │                   │ │Check if     │ │
│ └─────────────┘ │                   │ │Externally   │ │
└─────────────────┘                   │ │Managed      │ │
                                      │ └─────────────┘ │
                                      │         │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Process on   │ │
                                      │ │Accelerator  │ │
                                      │ └─────────────┘ │
                                      └─────────────────┘

In Scenario 2, the same tensor function call in Node A creates a tensor backed by accelerator memory instead. This allows Node B, which was checking for an rcl_tensor_cuda-managed tensor, to skip the extra copy.

Example #2: CPU versus accelerated implementations

SCENARIO 1: RCL_TENSOR_IMPLEMENTATION = <none> (CPU/Main Memory Path)
========================================================================

┌─────────────────────────────────────────────────────────────────────────────┐
│                              CPU/Main Memory Path                           │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Create    │    │  Normalize  │    │   Reshape   │    │ Materialize │
│   Tensor    │───▶│  Operation  │───▶│  Operation  │───▶│    Bytes    │
│  [CPU Mem]  │    │   [CPU]     │    │   [CPU]     │    │  [CPU Mem]  │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
        │                   │                   │                   │
        ▼                   ▼                   ▼                   ▼
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Allocate    │    │ CPU-based   │    │ CPU-based   │    │ Return      │
│ main memory │    │ normalize   │    │ reshape     │    │ pointer to  │
│ for tensor  │    │ computation │    │ computation │    │ byte array  │
│ data        │    │ on CPU      │    │ on CPU      │    │ in main mem │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘

Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│                              Main Memory                                    │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐         │
│  │   Tensor    │  │  Normalized │  │  Reshaped   │  │ Materialized│         │
│  │   Data      │  │   Tensor    │  │   Tensor    │  │    Bytes    │         │
│  │  [CPU]      │  │   [CPU]     │  │   [CPU]     │  │   [CPU]     │         │
│  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘         │
└─────────────────────────────────────────────────────────────────────────────┘

SCENARIO 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda (GPU/CUDA Path)
=======================================================================

┌─────────────────────────────────────────────────────────────────────────────┐
│                              GPU/CUDA Path                                  │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Create    │    │  Normalize  │    │   Reshape   │    │ Materialize │
│   Tensor    │───▶│  Operation  │───▶│  Operation  │───▶│    Bytes    │
│  [GPU Mem]  │    │   [CUDA]    │    │   [CUDA]    │    │  [CPU Mem]  │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
        │                   │                   │                   │
        ▼                   ▼                   ▼                   ▼
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Allocate    │    │ CUDA kernel │    │ CUDA kernel │    │ Copy from   │
│ GPU memory  │    │ for normalize│   │ for reshape │    │ GPU to CPU  │
│ for tensor  │    │ computation │    │ computation │    │ memory      │
│ data        │    │ on GPU      │    │ on GPU      │    │ (cudaMemcpy)│
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘

Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│                              GPU Memory                                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐                          │
│  │   Tensor    │  │  Normalized │  │  Reshaped   │                          │
│  │   Data      │  │   Tensor    │  │   Tensor    │                          │
│  │  [GPU]      │  │   [GPU]     │  │   [GPU]     │                          │
│  └─────────────┘  └─────────────┘  └─────────────┘                          │
└─────────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                              Main Memory                                    │
│                                                                             │
│                                                                             │
│  ┌─────────────┐                                                            │
│  │ Materialized│                                                            │
│  │    Bytes    │                                                            │
│  │   [CPU]     │                                                            │
│  └─────────────┘                                                            │
└─────────────────────────────────────────────────────────────────────────────┘

IMPLEMENTATION NOTES
===================

• Environment variable RCL_TENSOR_IMPLEMENTATION controls which path is taken
• Same API calls work in both scenarios (transparent to user code)
• GPU path requires CUDA runtime and rcl_tensor_cuda package
• Memory management handled automatically by implementation
• Backward compatibility maintained for CPU-only systems

Discussion Questions

  1. Should we constrain tensor creation functions to using memory allocators instead? rcl::tensor implementations would need to provide custom memory allocators for externally managed memory, for example.

  2. Do we allow mixing CPU-backed and externally managed tensors within one runtime? What creation pattern would allow precompiled packages to “pick up” accelerated memory dynamically at runtime by default, but also explicitly opt out of it for specific tensors?

  3. Do we need to expose the concept of “streams” and “devices” through the rcl::tensor API, or can that be kept under the abstraction layer? They are generic concepts but may too strongly constrain the underlying implementation. However, exposing them would let developers express stronger intent about how they want their code executed in an accelerator-agnostic manner.

  4. What common tensor operations should we keep as supported? The more we choose, the higher the burden on rcl::tensor implementations, but the more standardized and less fragmented our ROS 2 developer base. For example, we do not want fragmentation where packages begin to depend on rcl_tensor_cuda and thus fall back to CPU only when rcl_tensor_opencl (wlog) is in use.

  5. Should tensors have a multi-block interface from the get-go? Assuming one memory address seems problematic for rank 4 tensors, for example (e.g., sets of images from multiple cameras).

  6. Should the ROS 2 canonical implementation of rcl::tensor be inline or based on an existing, open source library? If so, which one?

Summary

7 posts - 7 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/native-rcl-tensor-type/49497

ROS Discourse General: ROS 2 Performance Benchmark - Code Release

In our ROS 2 Performance Benchmark tests, we had interesting findings demonstrating potential bottlenecks for message transport in ROS 2 (Rolling). Now, we’re excited to release the code that can be used to reproduce our results. Check it out here!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-performance-benchmark-code-release/49495

ROS Discourse General: ROS 2 Rust Meeting: August 2025

The next ROS 2 Rust Meeting will be Mon, Aug 11, 2025 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

With the recent announcement about OSRF funding for adding Cargo dependency management to the buildfarm, and a few people having questions on that, I would like to reiterate that this meeting is open to everyone - working group member or not. If you want to learn what we’re trying to accomplish, please drop by! We’d love to have you!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-rust-meeting-august-2025/49487

ROS Discourse General: ROS 2 Cross-compilation / Multi architecture development

Hi,

I’m in the process of looking into migrating our indoor service robot from an amd64 based system to the Jetson Orin Nano.

How are you doing development when targeting aarch64/arm64 machines?

My development machine is not the newest, but it is reasonably powerful (AMD Ryzen 9 3900X, 32 GB RAM). Still, it struggles with the officially recommended QEMU-based approach. Even the vanilla osrf/ros docker image is choppy under emulation. Building the actual image or stack, or running a simulated environment, is totally out of the question.

The different pathways I investigated so far are:

I’m interested in your approach to this problem. I imagine that using ARM-based systems in production robots is a fairly common practice given the recent advances in this field.

7 posts - 6 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-cross-compilation-multi-architecture-development/49449

ROS Discourse General: Why do robotics companies choose not to contribute to open source?

Hi all!

We wrote a blog post at Henki Robotics to share some of our thoughts on open-source collaboration, based on what we’ve seen and learned so far. We thought that it would be interesting for the community to hear and discuss the challenges open-source contributions pose from a company standpoint, while also highlighting the benefits of doing so and encouraging more companies to collaborate together.

We’d be happy to hear your thoughts and if you’ve had similar experiences!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/why-do-robotics-companies-choose-not-to-contribute-to-open-source/49448

ROS Discourse General: A Dockerfile and a systemd service for starting a rmw-zenoh server

While there’s no official method for autostarting an rmw-zenoh server, this might be useful:

4 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/a-dockerfile-and-a-systemd-service-for-starting-a-rmw-zenoh-server/49438

ROS Discourse General: How to Implement End-to-End Tracing in ROS 2 (Nav2) with OpenTelemetry for Pub/Sub Workflows?

I’m working on implementing end-to-end tracing for robotic behaviors using OpenTelemetry (OTel) in ROS 2. My goal is to trace:

  1. High-level requests (e.g., “move to location”) across components to analyze latency

  2. Control commands (e.g., teleop) through the entire pipeline to motors

Current Progress:

Challenges with Nav2:

Questions:

  1. Are there established patterns for OTel context propagation in ROS 2 pub/sub systems?

  2. How should we handle fan-out scenarios (1 publisher → N subscribers)?

  3. Any Nav2-specific considerations for tracing (e.g., lifecycle nodes, behavior trees)?

  4. Alternative approaches besides OTel that maintain compatibility with observability tools?

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/how-to-implement-end-to-end-tracing-in-ros-2-nav2-with-opentelemetry-for-pub-sub-workflows/49418

ROS Discourse General: Space ROS Jazzy 2025.07.0 Release

Hello ROS community!

The Space ROS team is excited to announce Space ROS Jazzy 2025.07.0 was released last week and is available as osrf/space-ros:jazzy-2025.07.0 on DockerHub.

Release details

This release includes a significant refactor of the build of our base image, making the main container over 60% smaller! Additionally, development images are now pushed to DockerHub to make building with Space ROS as an underlay easier than ever. For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.

Code

Current versions of all packages released with Space ROS are available at:

What’s Next

This release comes 3 months after the last release. The next release is planned for October 31, 2025. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.

All the best,

The Space ROS Team

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/space-ros-jazzy-2025-07-0-release/49417

ROS Discourse General: Bagel, the Open Source Project | Guest Speakers Arun Venkatadri and Shouheng Yi | Cloud Robotics WG Meeting 2025-08-11

Please come and join us for this coming meeting on Mon, Aug 11, 2025, 4:00 PM to 5:00 PM UTC,
where guest speakers Arun Venkatadri and Shouheng Yi will be presenting Bagel. Bagel is a new open source project that lets you chat with your robotics data by using AI to search through recorded data. Bagel was recently featured in ROS News for the Week, and there’s a follow-up post giving more detail.

Last meeting, we tried out the service from Heex Technologies, which allows you to deploy agents to your robots or search through recorded data for set events. The software then records data around those events and uploads to the cloud, allowing you to view events from your robots. If you’d like to see the meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/bagel-the-open-source-project-guest-speakers-arun-venkatadri-and-shouheng-yi-cloud-robotics-wg-meeting-2025-08-11/49412

ROS Discourse General: What if your Rosbags could talk? Meet Bagel🥯, the open-source tool we just released!

Huge thanks to @Katherine_Scott and @mrpollo for hosting us at the Joint ROS / PX4 Meetup at Neros in El Segundo, CA! It was an absolute blast connecting with the community in person!

:backhand_index_pointing_down: Missed the demo? No worries! Here’s the scoop on what we unveiled (we showed it with PX4 ULogs, but yes, ROS2 and ROS1 are fully supported!)


The problem? We felt the pain of wrestling with robotics data and LLMs. Unlike PDF files, we’re talking about massive sensor arrays, complex camera feeds, dense LiDAR point clouds – making LLMs truly useful here has been a real challenge… at least for us.

The solution? Meet Bagel ( GitHub - shouhengyi/bagel: Bagel is ChatGPT for physical data. Just ask questions. No Fuss. )! We built this powerful open-source tool to bridge that gap. Imagine simply asking questions about your robotics data, instead of endless parsing and plotting.

With Bagel, loaded with your ROS2 bag or PX4 ULog, you can ask things like:

Sound like something that could change your workflow? We’re committed to building Bagel in the open, with your help! This is where you come in:

Thanks a lot for being part of this journey. Happy prompting!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/what-if-your-rosbags-could-talk-meet-bagel-the-open-source-tool-we-just-released/49376

ROS Discourse General: ROS Naija LinkedIn Group

:rocket: Exciting News for Nigerian Roboticists!

We now have a ROS Naija Community group here, a space for engineers, developers, and enthusiasts passionate about ROS (Robot Operating System) and robotics.

Whether you’re a student, hobbyist, researcher, or professional, this is the place to:
:robot: Connect with like-minded individuals
:books: Share knowledge, resources, and opportunities
:light_bulb: Collaborate on robotics and ROS-based projects
:brain: Ask questions and learn from others in the community

If you’re interested in ROS and robotics, you’re welcome to join:

:link: Join here: LinkedIn Login, Sign in | LinkedIn

Let’s build and grow the Nigerian robotics ecosystem together!

ROS robotics #ROSNaija #NigeriaTech #Engineering #ROSCommunity #RobotOperatingSystem

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-naija-linedlin-group/49368

ROS Discourse General: [Case Study] Cross-Morphology Policy Learning with UniVLA and PiPER Robotic Arm

We’d like to share a recent research project where our AgileX Robotics PiPER 6-DOF robotic arm was used to validate UniVLA, a novel cross-morphology policy learning framework developed by the University of Hong Kong and OpenDriveLab.

Paper: Learning to Act Anywhere with Task-Centric Latent Actions
arXiv: [2505.06111] UniVLA: Learning to Act Anywhere with Task-centric Latent Actions
Code: GitHub - OpenDriveLab/UniVLA: [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions


Motivation

Transferring robot policies across platforms and environments is difficult due to:

UniVLA addresses this by learning latent action representations from videos, without relying on action labels.


Framework Overview

UniVLA introduces a task-centric, latent action space for general-purpose policy learning. Key features include:

Figure 2: Overview of the UniVLA framework. Visual-language features from third-view RGB and task instruction are tokenized and passed through an auto-regressive transformer, generating latent actions which are decoded into executable actions across heterogeneous robot morphologies.


PiPER in Real-World Experiments

To validate UniVLA’s transferability, the researchers selected the AgileX PiPER robotic arm as the real-world testing platform.

Tasks tested:

  1. Store a screwdriver
  2. Clean a cutting board
  3. Fold a towel twice
  4. Stack the Tower of Hanoi

These tasks evaluate perception, tool use, non-rigid manipulation, and semantic understanding.


Experimental Results


About PiPER

PiPER is a 6-DOF lightweight robotic arm developed by AgileX Robotics. Its compact structure, ROS support, and flexible integration make it ideal for research in manipulation, teleoperation, and multimodal learning.

Learn more: PiPER
Company website: https://global.agilex.ai

Click the link below to watch the experiment video using PIPER:

🚨 Our PiPER robotic arm was featured in cutting-edge robotics research!


Collaborate with Us

At AgileX Robotics, we work closely with universities and labs to support cutting-edge research. If you’re building on topics like transferable policies, manipulation learning, or vision-language robotics, we’re open to collaborations.

Let’s advance embodied intelligence—together.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/case-study-cross-morphology-policy-learning-with-univla-and-piper-robotic-arm/49361

ROS Discourse General: [Demo] Remote Teleoperation with Pika on UR7e and UR12e

Hello ROS developers,

We’re excited to share a new demo featuring Pika, AgileX Robotics’ portable and ergonomic teleoperation gripper system. Pika integrates multiple sensors to enable natural human-to-robot skill transfer and rich multimodal data collection.

Key Features of Pika:

In this demo, the Pika teleoperation system remotely controls two collaborative robot arms — UR7e (7.5 kg payload, 850 mm reach) and UR12e (12 kg payload, 33.5 kg robot weight) — to complete several everyday manipulation tasks:

:wrench: Task Set:

:hammer_and_wrench: System Highlights:

:package: Application Scenarios:

:movie_camera: Watch the demo here: Pika Remote Control Demo
:link: Learn more about Pika: https://global.agilex.ai/products/pika

:speech_balloon: Feel free to contact us for GitHub repositories, integration guides, or collaboration opportunities — we look forward to your feedback!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/demo-remote-teleoperation-with-pika-on-ur7e-and-ur12e/49304

ROS Discourse General: TecGihan Force Sensor Amplifier for Robot Now Supports ROS 2

I would like to share that Tokyo Opensource Robotics Kyokai Association (TORK) has supported the development and release of the ROS 2 / Linux driver software for the DMA-03 for Robot, a force sensor amplifier manufactured by TecGihan Co., Ltd.

The DMA-03 for Robot is a real-time output version of the DMA-03, a compact 3-channel strain gauge amplifier, adapted for robotic applications.

As of July 2025, tecgihan_driver supports the following Linux / ROS environments:

A bilingual (Japanese/English) README with detailed usage instructions is available on the GitHub repository:

If you have any questions or need support, feel free to open an issue on the repository.


Yosuke Yamamoto
Tokyo Opensource Robotics Kyokai Association

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/tecgihan-force-sensor-amplifier-for-robot-now-supports-ros-2/49301

ROS Discourse General: RobotCAD 9.0.0 (Assembly WB -> RobotCAD converter)

Improvements:

  1. Added a converter from the FreeCAD Assembly WB (default) structure to the RobotCAD structure.
  2. Added a tool for changing a Joint Origin without touching the downstream kinematic chain (moves only the target Joint Origin).
  3. Optimized the performance of the Set placement tools; they no longer require intermediate scene recalculation.
  4. Decreased the size of joint arrows to 150.
  5. Created collisions are now added to the Collision group (folder); collision part prefixes are unified.
  6. Fixed Set placement by orienteer for the root link (aligns it to a zero Placement).
  7. Refactored the Set Placement tools.

Fixes:

  1. Fixed an error when creating a collision for an empty part.
  2. Fixed getting the wrapper for the LCS body container; this fixes adding an LCS to some objects.
  3. Changed NotImplementedError (units for some joint types) into a warning, so values can still be set for the other joint types.

https://vkvideo.ru/video-219386643_456239081 - Converter Assembly WB → RobotCAD in work


1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/robotcad-9-0-0-assemly-wb-robotcad-converter/49289

ROS Discourse General: 🚀 [New Release] BUNKER PRO 2.0 – Reinforced Tracked Chassis for Extreme Terrain and Developer-Friendly Integration

Hello ROS community,

AgileX Robotics is excited to introduce the BUNKER PRO 2.0, a reinforced tracked chassis designed for demanding off-road conditions and versatile field robotics applications.

Key Features:

Intelligent Expansion, Empowering the Future

Typical Use Cases:

AgileX Robotics provides full ROS driver support and SDK documentation to accelerate your development process. We welcome collaboration opportunities and field testing partnerships with the community.

For detailed technical specifications or to discuss integration options, please contact us at sales@agilex.ai.

Learn more at https://global.agilex.ai/

4 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-release-bunker-pro-2-0-reinforced-tracked-chassis-for-extreme-terrain-and-developer-friendly-integration/49275

ROS Discourse General: Cloud Robotics WG Meeting 2025-07-28 | Heex Technologies Tryout and Anomaly Detection Discussion

Please come and join us for this coming meeting on Mon, Jul 28, 2025, 4:00 PM to 5:00 PM UTC, where we will be trying out Heex Technologies’ service offering from their website and discussing anomaly detection for Logging & Observability.

Last meeting, we heard from Bruno Mendes De Silva, Co-Founder and CEO of Heex Technologies, and Benoit Hozjan, Project Manager in charge of customer experience at Heex Technologies. The two discussed the company and purpose of the service they offer, then demonstrated a showcase workspace for the visualisation and anomaly detection capabilities of the server. If you’d like to see the meeting, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/cloud-robotics-wg-meeting-2025-07-28-heex-technologies-tryout-and-anomaly-detection-discussion/49274

