Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: New packages for Humble Hawksbill 2025-08-15
Package Updates for Humble
Added Packages [128]:
- ros-humble-adi-iio: 1.0.1-3
- ros-humble-adi-iio-dbgsym: 1.0.1-3
- ros-humble-as2-behaviors-swarm-flocking: 1.1.3-1
- ros-humble-as2-behaviors-swarm-flocking-dbgsym: 1.1.3-1
- ros-humble-autoware-adapi-adaptors: 1.4.0-1
- ros-humble-autoware-adapi-adaptors-dbgsym: 1.4.0-1
- ros-humble-autoware-adapi-specs: 1.4.0-1
- ros-humble-autoware-behavior-velocity-planner: 1.4.0-1
- ros-humble-autoware-behavior-velocity-planner-common: 1.4.0-1
- ros-humble-autoware-behavior-velocity-planner-common-dbgsym: 1.4.0-1
- ros-humble-autoware-behavior-velocity-planner-dbgsym: 1.4.0-1
- ros-humble-autoware-behavior-velocity-stop-line-module: 1.4.0-1
- ros-humble-autoware-behavior-velocity-stop-line-module-dbgsym: 1.4.0-1
- ros-humble-autoware-component-interface-specs: 1.4.0-1
- ros-humble-autoware-core: 1.4.0-1
- ros-humble-autoware-core-api: 1.4.0-1
- ros-humble-autoware-core-control: 1.4.0-1
- ros-humble-autoware-core-localization: 1.4.0-1
- ros-humble-autoware-core-map: 1.4.0-1
- ros-humble-autoware-core-perception: 1.4.0-1
- ros-humble-autoware-core-planning: 1.4.0-1
- ros-humble-autoware-core-sensing: 1.4.0-1
- ros-humble-autoware-core-vehicle: 1.4.0-1
- ros-humble-autoware-crop-box-filter: 1.4.0-1
- ros-humble-autoware-crop-box-filter-dbgsym: 1.4.0-1
- ros-humble-autoware-default-adapi: 1.4.0-1
- ros-humble-autoware-default-adapi-dbgsym: 1.4.0-1
- ros-humble-autoware-downsample-filters: 1.4.0-1
- ros-humble-autoware-downsample-filters-dbgsym: 1.4.0-1
- ros-humble-autoware-ekf-localizer: 1.4.0-1
- ros-humble-autoware-ekf-localizer-dbgsym: 1.4.0-1
- ros-humble-autoware-euclidean-cluster-object-detector: 1.4.0-1
- ros-humble-autoware-euclidean-cluster-object-detector-dbgsym: 1.4.0-1
- ros-humble-autoware-geography-utils: 1.4.0-1
- ros-humble-autoware-geography-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-global-parameter-loader: 1.4.0-1
- ros-humble-autoware-gnss-poser: 1.4.0-1
- ros-humble-autoware-gnss-poser-dbgsym: 1.4.0-1
- ros-humble-autoware-ground-filter: 1.4.0-1
- ros-humble-autoware-ground-filter-dbgsym: 1.4.0-1
- ros-humble-autoware-gyro-odometer: 1.4.0-1
- ros-humble-autoware-gyro-odometer-dbgsym: 1.4.0-1
- ros-humble-autoware-interpolation: 1.4.0-1
- ros-humble-autoware-interpolation-dbgsym: 1.4.0-1
- ros-humble-autoware-kalman-filter: 1.4.0-1
- ros-humble-autoware-kalman-filter-dbgsym: 1.4.0-1
- ros-humble-autoware-lanelet2-map-visualizer: 1.4.0-1
- ros-humble-autoware-lanelet2-map-visualizer-dbgsym: 1.4.0-1
- ros-humble-autoware-lanelet2-utils: 1.4.0-1
- ros-humble-autoware-lanelet2-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-localization-util: 1.4.0-1
- ros-humble-autoware-localization-util-dbgsym: 1.4.0-1
- ros-humble-autoware-map-height-fitter: 1.4.0-1
- ros-humble-autoware-map-height-fitter-dbgsym: 1.4.0-1
- ros-humble-autoware-map-loader: 1.4.0-1
- ros-humble-autoware-map-loader-dbgsym: 1.4.0-1
- ros-humble-autoware-map-projection-loader: 1.4.0-1
- ros-humble-autoware-map-projection-loader-dbgsym: 1.4.0-1
- ros-humble-autoware-marker-utils: 1.4.0-1
- ros-humble-autoware-marker-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-mission-planner: 1.4.0-1
- ros-humble-autoware-mission-planner-dbgsym: 1.4.0-1
- ros-humble-autoware-motion-utils: 1.4.0-1
- ros-humble-autoware-motion-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-motion-velocity-obstacle-stop-module: 1.4.0-1
- ros-humble-autoware-motion-velocity-obstacle-stop-module-dbgsym: 1.4.0-1
- ros-humble-autoware-motion-velocity-planner: 1.4.0-1
- ros-humble-autoware-motion-velocity-planner-common: 1.4.0-1
- ros-humble-autoware-motion-velocity-planner-common-dbgsym: 1.4.0-1
- ros-humble-autoware-motion-velocity-planner-dbgsym: 1.4.0-1
- ros-humble-autoware-ndt-scan-matcher: 1.4.0-1
- ros-humble-autoware-ndt-scan-matcher-dbgsym: 1.4.0-1
- ros-humble-autoware-node: 1.4.0-1
- ros-humble-autoware-node-dbgsym: 1.4.0-1
- ros-humble-autoware-object-recognition-utils: 1.4.0-1
- ros-humble-autoware-object-recognition-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-objects-of-interest-marker-interface: 1.4.0-1
- ros-humble-autoware-objects-of-interest-marker-interface-dbgsym: 1.4.0-1
- ros-humble-autoware-osqp-interface: 1.4.0-1
- ros-humble-autoware-osqp-interface-dbgsym: 1.4.0-1
- ros-humble-autoware-path-generator: 1.4.0-1
- ros-humble-autoware-path-generator-dbgsym: 1.4.0-1
- ros-humble-autoware-perception-objects-converter: 1.4.0-1
- ros-humble-autoware-perception-objects-converter-dbgsym: 1.4.0-1
- ros-humble-autoware-planning-factor-interface: 1.4.0-1
- ros-humble-autoware-planning-factor-interface-dbgsym: 1.4.0-1
- ros-humble-autoware-planning-test-manager: 1.4.0-1
- ros-humble-autoware-planning-test-manager-dbgsym: 1.4.0-1
- ros-humble-autoware-planning-topic-converter: 1.4.0-1
- ros-humble-autoware-planning-topic-converter-dbgsym: 1.4.0-1
- ros-humble-autoware-point-types: 1.4.0-1
- ros-humble-autoware-pose-initializer: 1.4.0-1
- ros-humble-autoware-pose-initializer-dbgsym: 1.4.0-1
- ros-humble-autoware-pyplot: 1.4.0-1
- ros-humble-autoware-qp-interface: 1.4.0-1
- ros-humble-autoware-qp-interface-dbgsym: 1.4.0-1
- ros-humble-autoware-route-handler: 1.4.0-1
- ros-humble-autoware-route-handler-dbgsym: 1.4.0-1
- ros-humble-autoware-signal-processing: 1.4.0-1
- ros-humble-autoware-signal-processing-dbgsym: 1.4.0-1
- ros-humble-autoware-simple-pure-pursuit: 1.4.0-1
- ros-humble-autoware-simple-pure-pursuit-dbgsym: 1.4.0-1
- ros-humble-autoware-stop-filter: 1.4.0-1
- ros-humble-autoware-stop-filter-dbgsym: 1.4.0-1
- ros-humble-autoware-test-node: 1.4.0-1
- ros-humble-autoware-test-node-dbgsym: 1.4.0-1
- ros-humble-autoware-test-utils: 1.4.0-1
- ros-humble-autoware-test-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-testing: 1.4.0-1
- ros-humble-autoware-trajectory: 1.4.0-1
- ros-humble-autoware-trajectory-dbgsym: 1.4.0-1
- ros-humble-autoware-twist2accel: 1.4.0-1
- ros-humble-autoware-twist2accel-dbgsym: 1.4.0-1
- ros-humble-autoware-vehicle-info-utils: 1.4.0-1
- ros-humble-autoware-vehicle-info-utils-dbgsym: 1.4.0-1
- ros-humble-autoware-vehicle-velocity-converter: 1.4.0-1
- ros-humble-autoware-vehicle-velocity-converter-dbgsym: 1.4.0-1
- ros-humble-autoware-velocity-smoother: 1.4.0-1
- ros-humble-autoware-velocity-smoother-dbgsym: 1.4.0-1
- ros-humble-husarion-components-description: 0.0.2-1
- ros-humble-robotraconteur-companion: 0.4.2-1
- ros-humble-robotraconteur-companion-dbgsym: 0.4.2-1
- ros-humble-ros2-control-cmake: 0.2.1-1
- ros-humble-turtlebot3-home-service-challenge: 1.0.5-1
- ros-humble-turtlebot3-home-service-challenge-core: 1.0.5-1
- ros-humble-turtlebot3-home-service-challenge-manipulator: 1.0.5-1
- ros-humble-turtlebot3-home-service-challenge-manipulator-dbgsym: 1.0.5-1
- ros-humble-turtlebot3-home-service-challenge-tools: 1.0.5-1
Updated Packages [309]:
- ros-humble-ackermann-steering-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-ackermann-steering-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-admittance-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-admittance-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-aerostack2: 1.1.2-2 → 1.1.3-1
- ros-humble-apriltag: 3.4.3-1 → 3.4.4-1
- ros-humble-apriltag-dbgsym: 3.4.3-1 → 3.4.4-1
- ros-humble-apriltag-detector: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-detector-dbgsym: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-detector-mit: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-detector-mit-dbgsym: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-detector-umich: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-detector-umich-dbgsym: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-draw: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-draw-dbgsym: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-tools: 3.0.2-1 → 3.0.3-1
- ros-humble-apriltag-tools-dbgsym: 3.0.2-1 → 3.0.3-1
- ros-humble-as2-alphanumeric-viewer: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-alphanumeric-viewer-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behavior: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behavior-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behavior-tree: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behavior-tree-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-motion: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-motion-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-path-planning: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-path-planning-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-perception: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-perception-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-platform: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-platform-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-trajectory-generation: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-behaviors-trajectory-generation-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-cli: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-core: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-core-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-external-object-to-tf: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-external-object-to-tf-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-gazebo-assets: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-gazebo-assets-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-geozones: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-geozones-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-keyboard-teleoperation: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-map-server: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-map-server-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-motion-controller: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-motion-controller-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-motion-reference-handlers: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-msgs: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-msgs-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-platform-gazebo: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-platform-gazebo-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-platform-multirotor-simulator: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-platform-multirotor-simulator-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-python-api: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-realsense-interface: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-realsense-interface-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-rviz-plugins: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-rviz-plugins-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-state-estimator: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-state-estimator-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-usb-camera-interface: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-usb-camera-interface-dbgsym: 1.1.2-2 → 1.1.3-1
- ros-humble-as2-visualization: 1.1.2-2 → 1.1.3-1
- ros-humble-autoware-adapi-v1-msgs: 1.3.0-1 → 1.9.0-1
- ros-humble-autoware-adapi-v1-msgs-dbgsym: 1.3.0-1 → 1.9.0-1
- ros-humble-autoware-adapi-version-msgs: 1.3.0-1 → 1.9.0-1
- ros-humble-autoware-adapi-version-msgs-dbgsym: 1.3.0-1 → 1.9.0-1
- ros-humble-autoware-internal-debug-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-debug-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-localization-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-localization-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-metric-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-metric-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-perception-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-perception-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-planning-msgs: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-internal-planning-msgs-dbgsym: 1.10.0-1 → 1.12.0-1
- ros-humble-autoware-lanelet2-extension: 0.7.2-1 → 0.8.0-1
- ros-humble-autoware-lanelet2-extension-dbgsym: 0.7.2-1 → 0.8.0-1
- ros-humble-autoware-lanelet2-extension-python: 0.7.2-1 → 0.8.0-1
- ros-humble-autoware-lanelet2-extension-python-dbgsym: 0.7.2-1 → 0.8.0-1
- ros-humble-axis-camera: 2.0.3-1 → 2.0.4-1
- ros-humble-axis-description: 2.0.3-1 → 2.0.4-1
- ros-humble-axis-msgs: 2.0.3-1 → 2.0.4-1
- ros-humble-axis-msgs-dbgsym: 2.0.3-1 → 2.0.4-1
- ros-humble-bicycle-steering-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-bicycle-steering-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-clearpath-common: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-config: 1.3.1-1 → 1.3.2-1
- ros-humble-clearpath-control: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-customization: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-description: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-generator-common: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-generator-common-dbgsym: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-manipulators: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-manipulators-description: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-mounts-description: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-platform-description: 1.3.3-1 → 1.3.5-1
- ros-humble-clearpath-ros2-socketcan-interface: 1.0.2-1 → 1.0.3-1
- ros-humble-clearpath-ros2-socketcan-interface-dbgsym: 1.0.2-1 → 1.0.3-1
- ros-humble-clearpath-sensors-description: 1.3.3-1 → 1.3.5-1
- ros-humble-diff-drive-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-diff-drive-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-dynamixel-hardware-interface: 1.4.9-1 → 1.4.11-1
- ros-humble-dynamixel-hardware-interface-dbgsym: 1.4.9-1 → 1.4.11-1
- ros-humble-effort-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-effort-controllers-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-etsi-its-cam-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cam-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-conversion-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cpm-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cpm-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cpm-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cpm-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-cpm-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-denm-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mapem-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mapem-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mapem-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mapem-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mapem-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mcm-uulm-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mcm-uulm-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mcm-uulm-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mcm-uulm-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-mcm-uulm-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-messages: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-msgs-utils: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-primitives-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-rviz-plugins: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-rviz-plugins-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-spatem-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-spatem-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-spatem-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-spatem-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-spatem-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-vam-ts-coding: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-vam-ts-coding-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-vam-ts-conversion: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-vam-ts-msgs: 3.2.1-1 → 3.3.0-1
- ros-humble-etsi-its-vam-ts-msgs-dbgsym: 3.2.1-1 → 3.3.0-1
- ros-humble-event-camera-renderer: 2.0.0-1 → 2.0.1-1
- ros-humble-event-camera-renderer-dbgsym: 2.0.0-1 → 2.0.1-1
- ros-humble-examples-tf2-py: 0.25.15-1 → 0.25.16-1
- ros-humble-filters: 2.2.1-1 → 2.2.2-1
- ros-humble-filters-dbgsym: 2.2.1-1 → 2.2.2-1
- ros-humble-force-torque-sensor-broadcaster: 2.48.0-1 → 2.49.1-1
- ros-humble-force-torque-sensor-broadcaster-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-forward-command-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-forward-command-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-franka-inria-inverse-dynamics-solver: 1.0.0-1 → 1.0.1-1
- ros-humble-franka-inria-inverse-dynamics-solver-dbgsym: 1.0.0-1 → 1.0.1-1
- ros-humble-geometry2: 0.25.15-1 → 0.25.16-1
- ros-humble-gpio-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-gpio-controllers-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-gripper-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-gripper-controllers-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-hebi-cpp-api: 3.12.3-1 → 3.13.0-1
- ros-humble-hebi-cpp-api-dbgsym: 3.12.3-1 → 3.13.0-1
- ros-humble-imu-sensor-broadcaster: 2.48.0-1 → 2.49.1-1
- ros-humble-imu-sensor-broadcaster-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-inverse-dynamics-solver: 1.0.0-1 → 1.0.1-1
- ros-humble-inverse-dynamics-solver-dbgsym: 1.0.0-1 → 1.0.1-1
- ros-humble-joint-state-broadcaster: 2.48.0-1 → 2.49.1-1
- ros-humble-joint-state-broadcaster-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-joint-trajectory-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-joint-trajectory-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-kdl-inverse-dynamics-solver: 1.0.0-1 → 1.0.1-1
- ros-humble-kdl-inverse-dynamics-solver-dbgsym: 1.0.0-1 → 1.0.1-1
- ros-humble-libcaer-driver: 1.5.1-1 → 1.5.2-1
- ros-humble-libcaer-driver-dbgsym: 1.5.1-1 → 1.5.2-1
- ros-humble-librealsense2: 2.55.1-1 → 2.56.4-1
- ros-humble-librealsense2-dbgsym: 2.55.1-1 → 2.56.4-1
- ros-humble-mecanum-drive-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-mecanum-drive-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-mola-test-datasets: 0.4.1-1 → 0.4.2-1
- ros-humble-mrpt-map-server: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-map-server-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-msgs-bridge: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-nav-interfaces: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-nav-interfaces-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-navigation: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-path-planning: 0.2.1-1 → 0.2.2-1
- ros-humble-mrpt-path-planning-dbgsym: 0.2.1-1 → 0.2.2-1
- ros-humble-mrpt-pf-localization: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-pf-localization-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-pointcloud-pipeline: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-pointcloud-pipeline-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-rawlog: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-rawlog-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-reactivenav2d: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-reactivenav2d-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-tps-astar-planner: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-tps-astar-planner-dbgsym: 2.2.0-1 → 2.2.3-1
- ros-humble-mrpt-tutorials: 2.2.0-1 → 2.2.3-1
- ros-humble-ntrip-client-node: 0.5.7-1 → 0.6.1-1
- ros-humble-ntrip-client-node-dbgsym: 0.5.7-1 → 0.6.1-1
- ros-humble-pid-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-pid-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-plotjuggler: 3.10.10-1 → 3.10.11-1
- ros-humble-plotjuggler-dbgsym: 3.10.10-1 → 3.10.11-1
- ros-humble-pose-broadcaster: 2.48.0-1 → 2.49.1-1
- ros-humble-pose-broadcaster-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-position-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-position-controllers-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-range-sensor-broadcaster: 2.48.0-1 → 2.49.1-1
- ros-humble-range-sensor-broadcaster-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-rc-genicam-api: 2.6.5-1 → 2.8.1-1
- ros-humble-rc-genicam-api-dbgsym: 2.6.5-1 → 2.8.1-1
- ros-humble-rc-genicam-driver: 0.3.1-1 → 0.3.2-1
- ros-humble-rc-genicam-driver-dbgsym: 0.3.1-1 → 0.3.2-1
- ros-humble-rc-reason-clients: 0.4.0-2 → 0.5.0-1
- ros-humble-rc-reason-msgs: 0.4.0-2 → 0.5.0-1
- ros-humble-rc-reason-msgs-dbgsym: 0.4.0-2 → 0.5.0-1
- ros-humble-realsense2-camera: 4.55.1-1 → 4.56.4-2
- ros-humble-realsense2-camera-dbgsym: 4.55.1-1 → 4.56.4-2
- ros-humble-realsense2-camera-msgs: 4.55.1-1 → 4.56.4-2
- ros-humble-realsense2-camera-msgs-dbgsym: 4.55.1-1 → 4.56.4-2
- ros-humble-realsense2-description: 4.55.1-1 → 4.56.4-2
- ros-humble-robotraconteur: 1.2.2-1 → 1.2.5-1
- ros-humble-robotraconteur-dbgsym: 1.2.2-1 → 1.2.5-1
- ros-humble-ros2-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-ros2-controllers-test-nodes: 2.48.0-1 → 2.49.1-1
- ros-humble-rqt-joint-trajectory-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-rqt-robot-steering: 1.0.2-1 → 1.0.3-1
- ros-humble-steering-controllers-library: 2.48.0-1 → 2.49.1-1
- ros-humble-steering-controllers-library-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-swri-cli-tools: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-console-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-console-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-dbw-interface: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-geometry-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-geometry-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-image-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-image-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-math-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-math-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-opencv-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-opencv-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-roscpp: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-roscpp-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-route-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-route-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-serial-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-serial-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-transform-util: 3.8.5-1 → 3.8.7-1
- ros-humble-swri-transform-util-dbgsym: 3.8.5-1 → 3.8.7-1
- ros-humble-tf2: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-bullet: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-dbgsym: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-eigen: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-eigen-kdl: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-eigen-kdl-dbgsym: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-geometry-msgs: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-kdl: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-msgs: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-msgs-dbgsym: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-py: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-py-dbgsym: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-ros: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-ros-dbgsym: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-ros-py: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-sensor-msgs: 0.25.15-1 → 0.25.16-1
- ros-humble-tf2-tools: 0.25.15-1 → 0.25.16-1
- ros-humble-tricycle-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-tricycle-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-tricycle-steering-controller: 2.48.0-1 → 2.49.1-1
- ros-humble-tricycle-steering-controller-dbgsym: 2.48.0-1 → 2.49.1-1
- ros-humble-turtlebot3-home-service-challenge-aruco: 1.0.4-1 → 1.0.5-1
- ros-humble-ublox-dgnss: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-dgnss-node: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-dgnss-node-dbgsym: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-nav-sat-fix-hp-node: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-nav-sat-fix-hp-node-dbgsym: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-ubx-interfaces: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-ubx-interfaces-dbgsym: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-ubx-msgs: 0.5.7-1 → 0.6.1-1
- ros-humble-ublox-ubx-msgs-dbgsym: 0.5.7-1 → 0.6.1-1
- ros-humble-ur-client-library: 2.1.0-1 → 2.2.0-1
- ros-humble-ur-client-library-dbgsym: 2.1.0-1 → 2.2.0-1
- ros-humble-ur-msgs: 2.2.0-1 → 2.3.0-1
- ros-humble-ur-msgs-dbgsym: 2.2.0-1 → 2.3.0-1
- ros-humble-ur10-inverse-dynamics-solver: 1.0.0-1 → 1.0.1-1
- ros-humble-ur10-inverse-dynamics-solver-dbgsym: 1.0.0-1 → 1.0.1-1
- ros-humble-velocity-controllers: 2.48.0-1 → 2.49.1-1
- ros-humble-velocity-controllers-dbgsym: 2.48.0-1 → 2.49.1-1
Removed Packages [0]:
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
- Adam Dabrowski
- Analog Devices
- Autoware
- Bence Magyar
- Berkay Karaman
- Bernd Pfrommer
- CVAR-UPM
- Chris Bollinger
- Chris Iverach-Brereton
- Chris Lalancette
- David Wong
- Davide Faconti
- Dirk Thomas
- Enrico Ferrentino
- Felix Exner
- Felix Ruess
- Fumiya Watanabe
- G.A. vd. Hoorn
- Geoff Sokoll
- Husarion
- Jean-Pierre Busch
- John Wason
- José Luis Blanco-Claraco
- Kosuke Takeuchi
- Kyoichi Sugahara
- LibRealSense ROS Team
- Luis Camero
- M. Fatih Cırıt
- Mamoru Sobue
- Markus Bader
- Max Krogius
- Maxime CLEMENT
- Maxime Clement
- Mete Fatih Cırıt
- Nick Hortovanyi
- Pyo
- Ryohsuke Mitsudome
- Ryu Yamamoto
- Satoshi OTA
- Satoshi Ota
- Southwest Research Institute
- Taiki Tanaka
- Takagi, Isamu
- Takamasa Horibe
- Takayuki Murooka
- Temkei Kem
- Tomoya Kimura
- Tully Foote
- Xingang Liu
- Yamato Ando
- Yoshi Ri
- Yuki Takagi
- Yukihiro Saito
- Yukinari Hisaki
- amc-nu
- mitsudome-r
- pyo
- ruess
1 post - 1 participant
ROS Discourse General: UBLOX ZED-X20P Integration Complete - 25Hz NavSatFix
I’ve completed initial UBLOX ZED-X20P integration in the ublox_dgnss package, with 25Hz NavSatFix output.
Quick Start
ros2 launch ublox_dgnss ublox_x20p_rover_hpposllh_navsatfix.launch.py device_family:=x20p
What’s New
- 25Hz NavSatFix output - significant performance boost
- Multi-device support - F9P/F9R/X20P automatic detection
- USB architecture adaptation - handles X20P’s different interface structure
- Backward compatible - existing F9P/F9R setups unchanged
Available Now
Available now on GitHub for local compilation:
- Repository: GitHub - aussierobots/ublox_dgnss: This usb based ROS2 driver is focused on UBLOX Generation 9 UBX messaging, for a DGNSS rover. High precision data is available.
- Publishing soon to package repositories
Architecture Notes
X20P main interface (0x01ab) fully supported with F9P/F9R compatibility.
UART interfaces (0x050c/0x050d) under investigation - see X20P UART1/UART2 interfaces (0x050c/0x050d) not supported - use main interface (0x01ab) · Issue #48 · aussierobots/ublox_dgnss · GitHub.
Have an X20P?
If you want to test it out and give us feedback, it would be appreciated!
3 posts - 2 participants
ROS Discourse General: RMW-RMW bridge - is it possible, has anyone done it?
We’re more and more convinced that there should be an RMW-RMW bridge for ROS 2.
Our specific use-case is simple: a microcontroller running micro-ROS (thus FastDDS), while the rest of the system would be better served by the Zenoh RMW. But we can’t use Zenoh for the rest of the system because DDS and Zenoh don’t talk to each other.
I know (or guess) that between DDS-based RMWs there is the possibility to interoperate at the DDS level (though it’s incomplete for some combinations, AFAIU).
But if you need to connect a non-DDS RMW, there’s currently no option.
I haven’t dived into the RMW details too much yet, but I guess that, in principle, creating such a bridge at the RMW level should be possible, right?
Has anyone tried that? Is it achievable to create something that is “RMW-agnostic”, meaning one generic bridge for any pair (or n-tuple) of RMWs to connect?
Of course, such a solution would hurt performance (all messages would have to be brokered by the bridge), but in our case we only have a uC feeding one IMU stream, odometry, some state and diagnostics, and receiving only cmd_vel and a few other commands. So performance should not be a problem, at least in these simpler cases.
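For reference, one possible shape for such a bridge with today’s rclpy is sketched below: two relay processes, each started with a different RMW_IMPLEMENTATION, exchanging serialized messages over a plain TCP socket so that no single process ever loads two RMWs. The topic, message type, and port are illustrative assumptions only; a real bridge would also need topic discovery, QoS mapping, and bidirectional transport.
#!/usr/bin/env python3
"""Minimal sketch of an RMW-RMW relay pair; illustrative only, not a real package.

Run one copy per RMW, e.g.:
  RMW_IMPLEMENTATION=rmw_zenoh_cpp    python3 relay.py in     # Zenoh side (starts first)
  RMW_IMPLEMENTATION=rmw_fastrtps_cpp python3 relay.py out    # DDS / micro-ROS side
"""
import socket
import struct
import sys

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu  # example type; a real bridge would resolve types by name

PORT = 9000  # arbitrary illustrative port


def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed')
        buf += chunk
    return buf


class RelayOut(Node):
    """Runs under RMW A: subscribes raw and forwards the serialized bytes to the peer."""

    def __init__(self, sock):
        super().__init__('rmw_bridge_out')
        self._sock = sock
        # raw=True delivers the serialized message as bytes, so nothing is deserialized here
        self.create_subscription(Imu, 'imu', self._forward, 10, raw=True)

    def _forward(self, serialized: bytes):
        self._sock.sendall(struct.pack('!I', len(serialized)) + serialized)


class RelayIn(Node):
    """Runs under RMW B: receives serialized bytes and republishes them."""

    def __init__(self, sock):
        super().__init__('rmw_bridge_in')
        self._sock = sock
        self._pub = self.create_publisher(Imu, 'imu', 10)

    def pump_once(self):
        (size,) = struct.unpack('!I', recv_exact(self._sock, 4))
        # rclpy publishers accept bytes as an already-serialized message
        self._pub.publish(recv_exact(self._sock, size))


def main():
    rclpy.init()
    if sys.argv[1] == 'out':
        rclpy.spin(RelayOut(socket.create_connection(('127.0.0.1', PORT))))
    else:
        server = socket.socket()
        server.bind(('127.0.0.1', PORT))
        server.listen(1)
        conn, _ = server.accept()
        node = RelayIn(conn)
        while rclpy.ok():
            node.pump_once()


if __name__ == '__main__':
    main()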
5 posts - 4 participants
ROS Discourse General: Bagel's New Release -- Cursor Integration
We are thrilled to announce a new integration for our open-source tool, Bagel! Two weeks ago, we presented Bagel at the ROS/PX4 meetup in Los Angeles, and the community’s excitement was incredible. As promised, we’ve integrated Bagel with the Cursor IDE to make robotics development even easier.
You can find the full tutorial here: bagel/doc/tutorials/mcp/2_cursor_px4.ipynb at stage · Extelligence-ai/bagel · GitHub
What is Bagel?
If you’re new to Bagel, it’s a tool that lets you chat with your rosbags using natural language queries. This allows you to quickly get insights from your log files without writing code. For example, you can ask questions like:
- “Is the front left camera calibrated?”
- “Were there any hard decelerations detected in the IMU data?”
- “Are there any errors or warnings in this log?”
Bagel has so far been tested with:
- Claude Code
- Gemini CLI
- Cursor
- and more integrations to come…
How to Get Involved
Bagel is a community-driven project, and we’d love for you to be a part of it. Your contributions are what will make this tool truly great.
Here are a few ways you can help:
- Star us on GitHub: Show your support and help us grow by giving us a star.
- File a bug report: If you find an issue, let us know so we can fix it.
- Pitch a feature request: Have an idea for a new feature? We’d love to hear it.
- Join us on Discord: Hang out with the community and chat directly with the Bagel team.
Many people have done so already! The community has found two bugs and filed two feature requests!
Thank you for your support!
1 post - 1 participant
ROS Discourse General: Localization of ROS 2 Documentation
Hello, Open Robotics Community,
I’m glad to announce that the ros2-docs-l10n project is now published:
Preview: ros2-docs-l10n
Crowdin: ros2-docs-l10n
GitHub: ros2-docs-l10n
The goal of this project is to translate the ROS 2 documentation into multiple languages. Translations are contributed via the Crowdin platform and automatically synchronized with the GitHub repository. Translations can be previewed on GitHub Pages.
10 posts - 3 participants
ROS Discourse General: Open-sourcing ROS 1 code from RUVU (AMCL reimplementation, planners, controllers and more)
Hey everyone,
As part of the acqui-hire of our startup RUVU, we’re open-sourcing a large portion of the ROS 1 code we’ve built over the years.
While it’s all written for ROS 1, and so not immediately plug-and-play for ROS 2 users, we hope some of it might still be useful, inspirational, or a good starting point for your own projects.
Some highlights:
- https://github.com/ruvu/mcl/tree/master/ruvu_mcl: A modern C++ reimplementation of AMCL, including a landmark sensor model.
- https://github.com/ruvu/common/tree/master/intelligence/ruvu_networkx: A graph-based global planner.
- https://github.com/ruvu/common/tree/master/intelligence/ruvu_rabbitmq_bridge: A two-way bridge between ROS 1 and RabbitMQ.
- common/simulation/ruvu_gazebo_plugins at master · ruvu/common · GitHub: A collection of useful Gazebo plugins.
- GitHub - ruvu/odometry_calibration: Automatic wheel separation and radius calibration from sensor data.
- GitHub - ruvu/ruvu_carrot_local_planner: A simple, robust carrot-based local planner.
- https://github.com/ruvu/project-packman: An example package showing how we set up launch files for customer projects.
Everything is released under the MIT license, so feel free to fork, adapt, and use anything you find interesting.
We’re not planning on actively maintaining this code right now, but that could change if there’s enough community interest.
If you have questions, ideas, or want to discuss this code, you can reach me here or at my current role at Nobleo Technology.
— The (old) RUVU Team
1 post - 1 participant
ROS Discourse General: Fixed Position Recording and Replay for AgileX PIPER Robotic Arm
We recently implemented a fixed position recording and replay function for the AgileX PIPER robotic arm using the official Python SDK. This feature allows recording specific arm positions and replaying them repeatedly, which is useful for teaching demonstrations and automated operations.
In this post, I will share the detailed setup steps, code implementation, usage instructions, and a demonstration video to help you get started.
Tags
Position recording, Python SDK, teaching demonstration, position reproduction, AgileX PIPER
Code Repository
GitHub link: https://github.com/agilexrobotics/Agilex-College.git
Function Demonstration
PIPER Robotic Arm | Fixed Position Recording & Replay Demo
Preparation Before Use
Hardware Preparation for PIPER Robotic Arm
- Make sure there are no obstacles in the workspace, so the robotic arm has sufficient room to move.
- Confirm that the power supply of the robotic arm is normal and all indicator lights are in their normal state.
- Ensure lighting conditions are good enough to observe the position and state of the robotic arm.
- If a gripper is fitted, check that its actions are normal.
- Place the arm on stable ground, so vibration does not affect recording accuracy.
- Verify that the teach button functions normally.
Environment Configuration
- Operating system: Ubuntu (Ubuntu 18.04 or higher is recommended)
- Python environment: Python 3.6 or higher
- git code management tool: used to clone remote code repositories
sudo apt install git
- pip package manager: used to install Python dependency packages
sudo apt install python3-pip
- Install CAN tools
sudo apt install can-utils ethtool
- Install the official Python SDK package (the 1_0_0_beta branch is the version that provides the API)
git clone -b 1_0_0_beta https://github.com/agilexrobotics/piper_sdk.git
cd piper_sdk
pip3 install .
- Reference document: https://github.com/agilexrobotics/piper_sdk/blob/1_0_0_beta/README(ZH).MD
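A quick way to confirm that the API-capable version is installed before moving on (this check is our suggestion, not part of the official guide):
# Sanity check: the 1_0_0_beta SDK should expose the Piper class.
# An ImportError/AttributeError here means the installed SDK is not the API version
# (see "Problem 1" under Problems and Solutions below).
from piper_sdk import Piper
print("piper_sdk with the Piper API is available")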
Operation Steps for Fixed Position Recording and Replay Function
- Power on the robotic arm and connect the USB-to-CAN module to the computer (ensure that only one CAN module is connected)
- Open the terminal and activate the CAN module
sudo ip link set can0 up type can bitrate 1000000
- Clone the remote code repository
git clone https://github.com/agilexrobotics/Agilex-College.git
- Switch to the recordAndPlayPos directory
cd Agilex-College/piper/recordAndPlayPos/
- Run the recording program
python3 recordPos_en.py
- Short-press the teach button to enter the teaching mode
- Position the robotic arm as desired, press Enter in the terminal to record the position, and input ‘q’ to end the recording
- After recording, short-press the teach button again to exit the teaching mode
- Notes before replay: When exiting the teaching mode for the first time, a specific initialization process is required to switch from the teaching mode to the CAN mode. Therefore, the replay program will automatically perform a reset operation to return joints 2, 3, and 5 to safe positions (zero points) to prevent the robotic arm from suddenly falling due to gravity and causing damage. In special cases, manual assistance may be required to return joints 2, 3, and 5 to zero points.
- Run the replay program
python3 playPos_en.py
- After successful enabling, press Enter in the terminal to play the positions
Problems and Solutions
Problem 1: There is no Piper class.
Reason: The currently installed SDK is not the version that provides the API.
Solution: Execute pip3 uninstall piper_sdk to uninstall the current SDK, then install the 1_0_0_beta version of the SDK according to the method in the Environment Configuration section above.
Problem 2: The robotic arm does not move, and the terminal prints an error.
Reason: The teach button was short-pressed during the operation of the program.
Solution: Check whether the indicator light of the teach button is off. If yes, re-run the program; if not, short-press the teach button to exit the teaching mode first and then run the program.
Code/Principle and Parameter Description
Implementation of Position Recording Program
The position recording program is the data collection module of the system, which is responsible for capturing the joint position information of the robotic arm in the teaching mode.
Program Initialization and Configuration
Parameter Configuration Design
# Whether there is a gripper
have_gripper = True
# Timeout for teaching mode detection, unit: second
timeout = 10.0
# CSV file path for saving positions
CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")
Analysis of configuration parameters:
The have_gripper parameter is of boolean type, and True means there is a gripper.
The timeout parameter sets the timeout for teaching mode detection. After starting the program, if the teaching mode is not entered within 10 s, the program will exit.
The CSV_path parameter sets the save path of the trajectory file, which defaults to the same directory as the program, with the file name pos.csv.
Robotic Arm Connection and Initialization
# Initialize and connect the robotic arm
piper = Piper("can0")
interface = piper.init()
piper.connect()
time.sleep(0.1)
Analysis of connection mechanism:
Piper() is the core class of the API, which simplifies some common methods on top of the interface.
init() creates and returns an interface instance, which can be used to call some of Piper’s special methods.
connect() starts a thread to connect to the CAN port and process CAN data.
time.sleep(0.1) is added to ensure that the connection is fully established. In embedded systems, hardware initialization usually takes a certain amount of time, and this short delay ensures the reliability of subsequent operations.
Position Acquisition and Data Storage
Implementation of Position Acquisition Function
def get_pos():
    '''Get the current joint radians of the robotic arm and the gripper opening distance'''
    joint_state = piper.get_joint_states()[0]
    if have_gripper:
        return joint_state + (piper.get_gripper_states()[0][0], )
    return joint_state
Mode Detection and Switching
print("INFO: Please click the teach button to enter the teaching mode")
over_time = time.time() + timeout
while interface.GetArmStatus().arm_status.ctrl_mode != 2:
if over_time < time.time():
print("ERROR: Teaching mode detection timeout, please check whether the teaching mode is enabled")
exit()
time.sleep(0.01)
Status polling strategy:
The program uses polling to detect the control mode, and this method has the following characteristics:
- Simple implementation and clear logic
- Low requirements on system resources
Timeout protection mechanism:
The 10-second timeout setting takes into account the needs of actual operations:
- Time for users to understand prompt information
- Time to find and press the teach button
- Time for system response and state switching
- Fault tolerance handling in abnormal situations
Safety features of teaching mode:
- Joint torque is released, allowing manual operation
- Maintain position feedback and monitor position changes in real time
Data Recording and Storage
count = 1
csv = open(CSV_path, "w")
while input("INPUT: Enter q to exit, press Enter directly to record: ") != "q":
    current_pos = get_pos()
    print(f"INFO: {count}th position, recorded position: {current_pos}")
    csv.write(",".join(map(str, current_pos)) + "\n")
    csv.flush()  # write through immediately so a recorded position is not lost on abnormal exit
    count += 1
csv.close()
print("INFO: Recording ends, click the teach button again to exit the teaching mode")
print("INFO: Recording ends, click the teach button again to exit the teaching mode")
Data integrity guarantee:
After each recording, the data is immediately written to the file and the buffer is flushed, so the data will not be lost if the program exits abnormally.
Data Format Selection:
Reasons for choosing CSV format for data storage:
- High versatility, almost all data processing tools support CSV format
- Strong human readability, easy for debugging and verification
- Simple structure and high parsing efficiency
- Widely supported, easy to integrate with other tools
Data column attributes:
- Columns 1-6: Joint motor radians
- Column 7: Gripper opening distance, unit: m
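For illustration only (the values below are invented), a pos.csv with two recorded positions would look like this: six joint radians followed by the gripper opening in meters:
0.0123,-0.2514,0.4871,0.0034,-0.6342,0.0120,0.052
0.1010,-0.3000,0.5500,0.0000,-0.7000,0.0500,0.000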
Complete Code Implementation of Position Recording Program
#!/usr/bin/env python3
# -*-coding:utf8-*-
# Record positions
import os, time
from piper_sdk import *

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Timeout for teaching mode detection, unit: second
    timeout = 10.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")

    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state

    print("INFO: Please click the teach button to enter the teaching mode")
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 2:
        if over_time < time.time():
            print("ERROR: Teaching mode detection timeout, please check whether the teaching mode is enabled")
            exit()
        time.sleep(0.01)

    count = 1
    csv = open(CSV_path, "w")
    while input("INPUT: Enter q to exit, press Enter directly to record: ") != "q":
        current_pos = get_pos()
        print(f"INFO: {count}th position, recorded position: {current_pos}")
        csv.write(",".join(map(str, current_pos)) + "\n")
        csv.flush()  # write through immediately so a recorded position is not lost on abnormal exit
        count += 1
    csv.close()
    print("INFO: Recording ends, click the teach button again to exit the teaching mode")
Implementation of Position Replay Program
The position replay program is the execution module of the system, responsible for reading the recorded position data and controlling the robotic arm to reproduce these positions.
Parameter Configuration and Data Loading
Replay Parameter Configuration
# Number of replays, 0 means infinite loop
play_times = 1
# replay interval, unit: second, negative value means manual key control
play_interval = 0
# Movement speed percentage, recommended range: 10-100
move_spd_rate_ctrl = 100
Analysis of parameter design:
The play_times parameter supports three replay modes:
- Single replay (play_times = 1): Suitable for demonstration and testing
- Multiple replays (play_times > 1): Suitable for repetitive tasks
- Infinite loop (play_times = 0): Suitable for continuous operations
The negative-value design of play_interval is an ingenious user-interface choice:
- Positive value: Automatic replay mode; the system executes automatically at the set interval
- Zero value: Continuous replay mode; no delay between positions
- Negative value: Manual control mode; users control the replay rhythm through key presses
The move_spd_rate_ctrl parameter provides a speed control function, which is very important for different application scenarios:
- High-speed mode (80-100%): Suitable for no-load fast movement
- Medium-speed mode (50-80%): Suitable for general operation tasks
- Low-speed mode (10-50%): Suitable for precision operations and scenarios with high safety requirements
Data File Reading
try:
    with open(CSV_path, 'r', encoding='utf-8') as f:
        track = list(csv.reader(f))
    if not track:
        print("ERROR: The position file is empty")
        exit()
    track = [[float(j) for j in i] for i in track]  # Convert to a list of floating-point numbers
except FileNotFoundError:
    print("ERROR: The position file does not exist")
    exit()
Exception handling strategies:
- FileNotFoundError: Handles the case where the file does not exist
- Empty-file check: Prevents reading an empty data file
- Data format verification: Ensures that the data can be correctly converted to numerical types
Data type conversion:
To convert the string data to floating-point numbers, the program uses a list comprehension.
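If list comprehensions are unfamiliar, the conversion line is equivalent to this explicit loop (the sample row below is invented for illustration):
# Equivalent to: track = [[float(j) for j in i] for i in track]
raw_track = [["0.0123", "-0.2514", "0.4871", "0.0034", "-0.6342", "0.0120", "0.052"]]
track = []
for row in raw_track:
    # float() raises ValueError on a malformed field, which would surface a corrupt CSV line
    track.append([float(value) for value in row])
print(track)  # [[0.0123, -0.2514, 0.4871, 0.0034, -0.6342, 0.012, 0.052]]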
Safety Stop Function
def stop():
    '''Stop the robotic arm; when exiting the teaching mode for the first time, this function must be called first to control the robotic arm in CAN mode'''
    interface.EmergencyStop(0x01)
    time.sleep(1.0)
    # The arm may only be restored when joints 2, 3, and 5 are within these limits, to prevent damage caused by falling from a large angle
    limit_angle = [0.1745, 0.7854, 0.2094]
    pos = get_pos()
    while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
        time.sleep(0.01)
        pos = get_pos()
    # Restore the robotic arm
    piper.disable_arm()
    time.sleep(1.0)
Staged stop strategy:
The stop function adopts a staged safety stop strategy:
- Emergency stop stage: EmergencyStop(0x01) sends an emergency stop command to immediately stop all joint movements (joints with impedance)
- Safe position waiting: Wait for key joints (joints 2, 3, and 5) to move within the safe range
- System recovery stage: Send a recovery command to reactivate the control system
Safety range design:
The program pays special attention to the positions of joints 2, 3, and 5, which is based on the mechanical structure characteristics of the PIPER robotic arm:
- Joint 2 (shoulder joint): Controls the pitching movement of the upper arm, affecting the overall stability
- Joint 3 (elbow joint): Controls the angle of the forearm, directly affecting the end position
- Joint 5 (wrist joint): Controls the end posture, affecting the direction of the gripper
The setting of the safe angle range (10°, 45°, 12°) is based on the following considerations:
- Avoid the robotic arm from falling quickly under gravity
- Ensure that the joints will not collide with mechanical limits
- Provide sufficient operating space for subsequent movements
Real-time monitoring mechanism: The program uses real-time polling to monitor the joint positions to ensure that the next step is performed only when the safety conditions are met.
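As a quick check, the degree figures above match the radian constants used in stop():
import math
for limit in (0.1745, 0.7854, 0.2094):  # limit_angle from stop()
    print(f"{limit} rad = {math.degrees(limit):.1f} deg")  # approximately 10.0, 45.0, 12.0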
System Enable Function
def enable():
    '''Enable the robotic arm and gripper'''
    while not piper.enable_arm():
        time.sleep(0.01)
    if have_gripper:
        time.sleep(0.01)
        piper.enable_gripper()
    interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
    print("INFO: Enable successful")
Robotic arm enabling: enable_arm()
Gripper enabling: enable_gripper()
Control mode setting: ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
Control mode parameters:
- First parameter (0x01): CAN command control mode
- Second parameter (0x01): Joint movement mode
- Third parameter (0 to 100): Movement speed percentage
- Fourth parameter (0x00): Position-speed mode
Replay Control Logic
count = 0
input("step 2: Press Enter to start playing positions")
while play_times == 0 or abs(play_times) != count:
    for n, pos in enumerate(track):
        while True:
            piper.move_j(pos[:-1], move_spd_rate_ctrl)
            time.sleep(0.01)
            current_pos = get_pos()
            print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
            if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                break
        if have_gripper and len(pos) == 7:
            piper.move_gripper(pos[-1], 1)
            time.sleep(0.5)
        if play_interval < 0:
            if n != len(track) - 1 and input("Enter q to exit, press Enter directly to play: ") == 'q':
                exit()
        else:
            time.sleep(play_interval)
    count += 1
Joint control: move_j()
- The first parameter: A tuple containing the radians of the six joint motors
- The second parameter: Movement speed percentage, range 0-100
Gripper control: move_gripper()
- The first parameter: Gripper opening distance, unit: m
- The second parameter: Gripper torque, unit: N·m
Position control closed-loop system:
- Target setting: Send target position commands to each joint through the move_j() function
- Status feedback: Obtain the current actual position through the get_pos() function
- Error calculation: Compare the difference between the target position and the actual position
- Convergence judgment: Consider the target reached when the error is less than the threshold (0.0698 rad, about 4°)
Multi-joint coordinated control:
all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)) ensures that the next step is performed only after all six joints reach the target position.
Gripper control strategy:
The gripper control adopts an independent control logic:
- Execute gripper control only when the data contains gripper information
- The gripper action is executed after the joint movement is completed to avoid interference
- A 0.5-second delay ensures that the gripper action is fully completed
Replay rhythm control:
The program supports three replay rhythms:
- Automatic continuous replay: play_interval >= 0
- Manual step-by-step replay: play_interval < 0
- Real-time adjustment: Users can interrupt the replay at any time
Complete Code Implementation of Position Replay Program
#!/usr/bin/env python3
# -*-coding:utf8-*-
# Play positions
import os, time, csv
from piper_sdk import Piper

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Number of playbacks, 0 means infinite loop
    play_times = 1
    # Playback interval, unit: second; negative value means manual key control
    play_interval = 0
    # Movement speed percentage, recommended range: 10-100
    move_spd_rate_ctrl = 100
    # Timeout for switching to CAN mode, unit: second
    timeout = 5.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")

    # Read the position file
    try:
        with open(CSV_path, 'r', encoding='utf-8') as f:
            track = list(csv.reader(f))
        if not track:
            print("ERROR: Position file is empty")
            exit()
        track = [[float(j) for j in i] for i in track]  # Convert to a list of floating-point numbers
    except FileNotFoundError:
        print("ERROR: Position file does not exist")
        exit()

    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state

    def stop():
        '''Stop the robotic arm; this function must be called first when exiting the teaching mode for the first time to control the robotic arm in CAN mode'''
        interface.EmergencyStop(0x01)
        time.sleep(1.0)
        # The arm may only be restored when joints 2, 3, and 5 are within these limits, to prevent damage caused by falling from a large radian
        limit_angle = [0.1745, 0.7854, 0.2094]
        pos = get_pos()
        while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
            time.sleep(0.01)
            pos = get_pos()
        # Restore the robotic arm
        piper.disable_arm()
        time.sleep(1.0)

    def enable():
        '''Enable the robotic arm and gripper'''
        while not piper.enable_arm():
            time.sleep(0.01)
        if have_gripper:
            time.sleep(0.01)
            piper.enable_gripper()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        print("INFO: Enable successful")

    print("step 1: Please ensure the robotic arm has exited the teaching mode before playback")
    if interface.GetArmStatus().arm_status.ctrl_mode != 1:
        stop()  # This function must be called first when exiting the teaching mode for the first time to switch to CAN mode
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 1:
        if over_time < time.time():
            print("ERROR: Failed to switch to CAN mode, please check if the teaching mode is exited")
            exit()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        time.sleep(0.01)
    enable()

    count = 0
    input("step 2: Press Enter to start playing positions")
    while play_times == 0 or abs(play_times) != count:
        for n, pos in enumerate(track):
            while True:
                piper.move_j(pos[:-1], move_spd_rate_ctrl)
                time.sleep(0.01)
                current_pos = get_pos()
                print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
                if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                    break
            if have_gripper and len(pos) == 7:
                piper.move_gripper(pos[-1], 1)
                time.sleep(0.5)
            if play_interval < 0:
                if n != len(track) - 1 and input("INPUT: Enter 'q' to exit, press Enter directly to play: ") == 'q':
                    exit()
            else:
                time.sleep(play_interval)
        count += 1
Summary
The above implements the fixed position recording and replay function based on the AgileX PIPER robotic arm. By applying the Python SDK, it is possible to record and repeatedly execute specific positions of the robotic arm, providing strong technical support for teaching demonstrations and automated operations.
If you have any questions regarding the use, please feel free to contact us at support@agilex.ai.
1 post - 1 participant
ROS Discourse General: ROS Kerala | Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph
ROS Kerala Presents: Robotic Talk Series Topic: Building a Robotics Career in the US – Myths, Challenges & Reality
Join Jerin Peter (Graduate Student – Robotics, UC Riverside) and Lentin Joseph (Senior ROS & AI Consultant, CTO & Co-Founder – RUNTIME Robotics) as they share real-world insights on launching and growing a career in robotics in the United States.
From higher education choices and visa hurdles to mastering ROS and cracking robotics interviews, this talk covers it all. Whether you’re a student, a robotics enthusiast, or a professional looking to go abroad, you’ll find valuable tips and lessons here.
Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph
1 post - 1 participant
ROS Discourse General: Native rcl::tensor type
We propose introducing the concept of a tensor as a natively supported type in ROS 2 Lyrical Luth. Below is a sketch of how this would work for initial feedback before we write a proper REP for review.
Abstract
Tensors are a fundamental data structure often used to represent multi-modal information for deep neural networks (DNNs) at the core of policy-driven robots. We introduce rcl::tensor as a native type in rcl: a container for memory that can optionally be externally managed. This type would be supported through all client libraries (rclcpp, rclpy, …), the ROS IDL (rosidl), and all RMW implementations. This enables tensor_msgs ROS messages, based on sensor_msgs, which use tensor instead of uint8[]. The default implementation of rcl::tensor operations for creation/destruction and manipulation will be available on all tiers of supported platforms. With the presence of an optional package and an environment variable, a platform-optimized implementation of rcl::tensor operations can then be swapped in at runtime to take advantage of accelerator-managed memory/compute. Through adoption of rcl::tensor in developer code and ROS messages, we can enable seamless platform-specific acceleration determined at runtime without any recompilation or redeployment.
Motivation
ROS 2 should be accelerator-aware but accelerator-agnostic like other popular frameworks such as PyTorch or NumPy. This enables package developers that conform to ROS 2 standards to gain platform-specific optimizations for free (“optimal where possible, compatible where necessary”).
Background
AI robots and policy-driven physical agents rely on accelerated deep neural network (DNN) model inference through tensors. Tensors are a fundamental data structure for representing multi-dimensional data, from scalars (rank 0), vectors (rank 1), and matrices (rank 2) to batches of multi-channel matrices (rank 4). These can be used to encode all data flowing through such graphs, including images, text, joint positions, poses, trajectories, IMU readings, and more.
Performing inference on these DNN model policies requires these tensors to reside in accelerator memory. ROS messages, however, expect their payloads to reside in main memory, with field types such as uint8[] or multi-dimensional arrays. This requires these payloads to be copied from main memory to accelerator memory and then copied back to main memory after processing in order to populate a new ROS message to publish. This quickly becomes the primary bottleneck for policy inference. Type adaptation in rclcpp provides a solution for this, but it requires all participating packages to have accelerator-specific dependencies, and it only applies within the client library, so RMW implementations cannot, for example, use accelerator-optimized memory.
Additionally, without a canonical tensor type in ROS 2, a patchwork of different tensor libraries across various ROS packages is causing impedance mismatches with popular deep learning frameworks, including PyTorch.
Requirements
- Provide a native way to represent tensors across all interfaces from client libraries through RMW implementations.
- Make available a set of common operations on tensors that can be used by all interfaces.
- Enable accelerated implementations of common tensor operations when available at runtime.
- Enable accelerator memory management backing these tensors when available at runtime.
- Optimize flow of tensors for deep neural network (DNN) model inference to avoid unnecessary memory copies.
- Allow for backwards compatibility with all non-accelerated platforms.
Rough Sketch
struct rcl::tensor
{
  std::vector<size_t> shape;    // shape of the tensor
  std::vector<size_t> strides;  // strides of the tensor
  size_t rank;                  // number of dimensions
  union {
    void * data;                // pointer to the data in main memory
    size_t handle;              // token stored by rcl::tensor for externally managed memory
  };
  size_t byte_size;             // size of the data
  data_type_enum type;          // the data type
};
Core Tensor APIs
Inline APIs available on all platforms in core ROS 2 rcl.
Creation
Create a new tensor from main memory.
rcl_tensor_create_copy_from_bytes(const void *data_ptr, size_t byte_size, data_type_enum type)
rcl_tensor_wrap_bytes(void *data_ptr, size_t size, data_type_enum type)
rcl_tensor_create_copy_from(const struct rcl::tensor & tensor)
Common operations
Manipulations performed on tensors that can be optionally accelerated. The more complete these APIs are, the less fragmented the ecosystem will be, but the higher the burden on implementers. These should be modeled after the PyTorch tensor API and existing C tensor libraries such as libXM, or C++ libraries like xtensor.
reshape()
squeeze()
normalize()
fill()
zero()
- …
Managed access
Provide a way to access elements individually in parallel.
rcl_tensor_apply(<functor on each element with index>)
Direct access
Retrieve the underlying data in main memory; this may involve movement of data.
void* rcl_tensor_materialized_data()
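Taken together, a minimal usage sketch of these core APIs might read as follows. This is purely illustrative: none of these functions exist yet, and the return types, the reshape argument, and the RCL_TENSOR_FLOAT32 enum value are placeholders layered on the rough sketch above.
#include <vector>
// Illustrative only: hypothetical use of the proposed creation, common
// operation, and direct access APIs.
std::vector<float> pixels(3 * 224 * 224, 0.5f);
// Creation: copy bytes from main memory into tensor-managed storage.
rcl::tensor t = rcl_tensor_create_copy_from_bytes(
  pixels.data(), pixels.size() * sizeof(float), RCL_TENSOR_FLOAT32);
// Common operations: run on CPU by default, or on the accelerator when an
// optimized implementation is active, with no change to this code.
t.normalize();
t.reshape({1, 3, 224, 224});
// Direct access: forces the data back to main memory if it currently lives
// in externally managed (e.g., accelerator) memory.
const void * host = rcl_tensor_materialized_data(t);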
Other Conveniences
- rcl functions to check which tensor implementation is active.
- tensor_msgs::Image to mirror sensor_msgs::Image, enabling smooth migration to using the tensor type in common ROS messages. An alternative is to add a “union” field in sensor_msgs::Image with the uint8[] data field.
- cv_bridge API to convert between cv::Mat and tensor_msgs::Image.
Platform-specific tensor implementation
Without loss of generality, suppose we have an implementation of tensor that uses an accelerated library, such as rcl_tensor_cuda for CUDA. This package provides shared libraries that implement all of the core tensor APIs. Setting the environment variable RCL_TENSOR_IMPLEMENTATION=rcl_tensor_cuda enables loading rcl_tensor_cuda at runtime without rebuilding any other packages. Unlike the native implementation, rcl_tensor_cuda copies the input buffer into a CUDA buffer and uses CUDA to perform operations on that buffer.
It also provides new APIs, available to any other package that links against rcl_tensor_cuda directly, for creating a tensor from a CUDA buffer, for checking whether the rcl_tensor_cuda implementation is active, and for accessing the CUDA buffer backing a tensor. An RMW implementation linked against rcl_tensor_cuda would query the CUDA buffer backing a tensor and use optimized transport paths to handle it, while a general RMW implementation could just call rcl_tensor_materialize_bytes and transport the main-memory payload as normal.
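Concretely, selecting the backend would then be a deployment-time switch rather than a build-time one (sketch; the package and node names are placeholders, and RCL_TENSOR_IMPLEMENTATION is the proposed, not-yet-existing variable):
# Same binaries, different tensor backend - no recompilation needed:
ros2 run my_pkg node_b                                            # default CPU path
RCL_TENSOR_IMPLEMENTATION=rcl_tensor_cuda ros2 run my_pkg node_b  # CUDA-backed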
Simple Examples
Example #1: rcl::tensor with “accelerator-aware” subscriber
Node A publishes a ROS message with an rcl::tensor created from main-memory bytes and sends it to a topic Node B subscribes to. Node B happens to be written to first check whether the rcl::tensor is backed by externally managed memory AND check that rcl_tensor_cuda is active (which indicates the tensor is backed by CUDA). Node B has a direct dependency on rcl_tensor_cuda in order to perform this check.
Alternatively, Node B could have been written with no dependency on any rcl::tensor implementation, simply retrieving the bytes from the rcl::tensor and ignoring the externally managed memory flag altogether, which would have forced a copy back from accelerator memory in Scenario 2.
MyMsg.msg
---------
std_msgs/Header header
tensor payload
Scenario 1: RCL_TENSOR_IMPLEMENTATION = <none>
----------------------------------------------
┌─────────────────┐ ROS Message ┌─────────────────┐
│ Node A │ ────────────────► │ Node B │
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Create Tensor│ │ │ │Receive MyMsg│ │
│ │in MyMsg │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │
│ │ │ │ │ │
│ ▼ │ │ ▼ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Publish │ │ │ │Check if │ │
│ │MyMsg │ │ │ │Externally │ │
│ └─────────────┘ │ │ │Managed │ │
└─────────────────┘ │ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │Copy │ │
│ │to Accel Mem │ │
│ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │Process on │ │
│ │Accelerator │ │
│ └─────────────┘ │
└─────────────────┘
Scenario 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda
--------------------------------------------------------
┌─────────────────┐ ROS Message ┌─────────────────┐
│ Node A │ ────────────────► │ Node B │
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Create Tensor│ │ │ │Receive MyMsg│ │
│ │in MyMsg │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │
│ │ │ │ │ │
│ ▼ │ │ ▼ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Publish MyMsg│ │ │ │Check if │ │
│ └─────────────┘ │ │ │Externally │ │
└─────────────────┘ │ │Managed │ │
│ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │Process on │ │
│ │Accelerator │ │
│ └─────────────┘ │
└─────────────────┘
In Scenario 2, the same tensor function call in Node A creates a tensor backed by accelerator memory instead. This allows Node B, which was checking for a rcl_tensor_cuda-managed tensor, to skip the extra copy.
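To make the two scenarios concrete, here is a rough sketch of Node B's callback; every rcl_tensor_* function and helper named below is hypothetical, mirroring the checks described above rather than any existing API.
// Hypothetical sketch of Node B's accelerator-aware callback.
void on_my_msg(const MyMsg & msg)
{
  const rcl::tensor & t = msg.payload;
  if (rcl_tensor_is_externally_managed(t) && rcl_tensor_cuda_is_active()) {
    // Scenario 2: the tensor already lives in CUDA memory; no host copy needed.
    void * dev_ptr = rcl_tensor_cuda_get_device_buffer(t);
    process_on_accelerator(dev_ptr);
  } else {
    // Scenario 1: plain implementation; materialize and copy ourselves.
    void * host_ptr = rcl_tensor_materialized_data(t);
    copy_to_accel_and_process(host_ptr);
  }
}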
Example #2: CPU versus accelerated implementations
SCENARIO 1: RCL_TENSOR_IMPLEMENTATION = <none> (CPU/Main Memory Path)
========================================================================
┌─────────────────────────────────────────────────────────────────────────────┐
│ CPU/Main Memory Path │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Create │ │ Normalize │ │ Reshape │ │ Materialize │
│ Tensor │───▶│ Operation │───▶│ Operation │───▶│ Bytes │
│ [CPU Mem] │ │ [CPU] │ │ [CPU] │ │ [CPU Mem] │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Allocate │ │ CPU-based │ │ CPU-based │ │ Return │
│ main memory │ │ normalize │ │ reshape │ │ pointer to │
│ for tensor │ │ computation │ │ computation │ │ byte array │
│ data │ │ on CPU │ │ on CPU │ │ in main mem │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│ Main Memory │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Tensor │ │ Normalized │ │ Reshaped │ │ Materialized│ │
│ │ Data │ │ Tensor │ │ Tensor │ │ Bytes │ │
│ │ [CPU] │ │ [CPU] │ │ [CPU] │ │ [CPU] │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
SCENARIO 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda (GPU/CUDA Path)
=======================================================================
┌─────────────────────────────────────────────────────────────────────────────┐
│ GPU/CUDA Path │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Create │ │ Normalize │ │ Reshape │ │ Materialize │
│ Tensor │───▶│ Operation │───▶│ Operation │───▶│ Bytes │
│ [GPU Mem] │ │ [CUDA] │ │ [CUDA] │ │ [CPU Mem] │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Allocate │ │ CUDA kernel │ │ CUDA kernel │ │ Copy from │
│ GPU memory │ │ for normalize│ │ for reshape │ │ GPU to CPU │
│ for tensor │ │ computation │ │ computation │ │ memory │
│ data │ │ on GPU │ │ on GPU │ │ (cudaMemcpy)│
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│ GPU Memory │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Tensor │ │ Normalized │ │ Reshaped │ │
│ │ Data │ │ Tensor │ │ Tensor │ │
│ │ [GPU] │ │ [GPU] │ │ [GPU] │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ Main Memory │
│ │
│ │
│ ┌─────────────┐ │
│ │ Materialized│ │
│ │ Bytes │ │
│ │ [CPU] │ │
│ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
IMPLEMENTATION NOTES
===================
• Environment variable RCL_TENSOR_IMPLEMENTATION controls which path is taken
• Same API calls work in both scenarios (transparent to user code)
• GPU path requires CUDA runtime and rcl_tensor_cuda package
• Memory management handled automatically by implementation
• Backward compatibility maintained for CPU-only systems
Discussion Questions
- Should we constrain tensor creation functions to using memory allocators instead? rcl::tensor implementations would need to provide custom memory allocators for externally managed memory, for example.
- Do we allow for mixed runtimes of CPU-backed and externally managed tensors in one runtime? What creation pattern would allow precompiled packages to “pick up” accelerated memory dynamically at runtime by default but also explicitly opt out of it for specific tensors?
- Do we need to expose the concepts of “streams” and “devices” through the rcl::tensor API, or can they be kept under the abstraction layer? They are generic concepts but may too strongly prescribe the underlying implementation. However, exposing them would let developers express stronger intent about how they want their code to be executed in an accelerator-agnostic manner.
- What common tensor operations should we support? The more we choose, the higher the burden on rcl::tensor implementations, but the more standardized and less fragmented our ROS 2 developer base. For example, we do not want fragmentation where packages begin to depend on rcl_tensor_cuda and thus fall back to CPU only under rcl_tensor_opencl (wlog).
- Should tensors have a multi-block interface from the get-go? Assuming one memory address seems problematic for rank-4 tensors, for example (e.g., sets of images from multiple cameras).
- Should the ROS 2 canonical implementation of rcl::tensor be inline or based on an existing, open source library? If so, which one?
Summary
- tensor as a native type in rcl, made available through all client libraries, the ROS IDL, and all RMW implementations, like string array or uint8[].
- tensor_msgs::Image is sensor_msgs::Image but with a tensor payload instead of uint8[].
- Add cv_bridge functions to create tensor_msgs::Image from cv::Mat to spur adoption.
- Implementations of tensor lifecycle and manipulation can be dynamically swapped at runtime with a package and an environment variable.
- Data for tensors can then be optionally stored in externally managed memory, eliminating the need for type adaptation in rclcpp.
- Operations on tensors can then be optionally implemented with accelerated libraries.
7 posts - 7 participants
ROS Discourse General: ROS 2 Performance Benchmark - Code Release
In our ROS 2 Performance Benchmark tests, we had interesting findings demonstrating potential bottlenecks for message transport in ROS 2 (Rolling). Now we're excited to release the code, which can be used to reproduce our results. Check it out here!
1 post - 1 participant
ROS Discourse General: ROS 2 Rust Meeting: August 2025
The next ROS 2 Rust Meeting will be Mon, Aug 11, 2025 2:00 PM UTC
The meeting room will be at https://meet.google.com/rxr-pvcv-hmu
In the unlikely event that the room needs to change, we will update this thread with the new info!
With the recent announcement about OSRF funding for adding Cargo dependency management to the buildfarm, and a few people having questions on that, I would like to reiterate that this meeting is open to everyone - working group member or not. If you want to learn what we’re trying to accomplish, please drop by! We’d love to have you!
1 post - 1 participant
ROS Discourse General: ROS 2 Cross-compilation / Multi architecture development
Hi,
I’m in the process of looking into migrating our indoor service robot from an amd64 based system to the Jetson Orin Nano.
How are you doing development when targeting aarch64/arm64 machines?
My development machine is not the newest, but it is reasonably powerful (AMD Ryzen 9 3900X, 32 GB RAM). Even so, it struggles with the officially recommended QEMU-based approach: even the vanilla osrf/ros Docker image is choppy under emulation, and building the actual image or stack, or running a simulated environment, is totally out of the question.
The different pathways I have investigated so far are (the QEMU pathway is sketched below):
- Using QEMU emulation - unusable
- Using the target platform as the development machine - slow builds, but reasonable runtime performance
- Cloud-building the development container - a bit pricey, and the question of building the actual stack still remains; maybe CMake cross-compilation in a native container
- Using Apple Silicon for development - haven't looked into it yet
I'm interested in your approach to this problem. I imagine that using ARM-based systems in production robots is a fairly common practice given the recent advances in this field.
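(For reference, the QEMU pathway above means roughly the following; the image tag is a placeholder.)
# Register QEMU binfmt handlers, then cross-build an arm64 image on an amd64 host:
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker buildx build --platform linux/arm64 -t my_robot_stack:arm64 --load .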
7 posts - 6 participants
ROS Discourse General: Why do robotics companies choose not to contribute to open source?
Hi all!
We wrote a blog post at Henki Robotics to share some of our thoughts on open-source collaboration, based on what we’ve seen and learned so far. We thought that it would be interesting for the community to hear and discuss the challenges open-source contributions pose from a company standpoint, while also highlighting the benefits of doing so and encouraging more companies to collaborate together.
We’d be happy to hear your thoughts and if you’ve had similar experiences!
1 post - 1 participant
ROS Discourse General: A Dockerfile and a systemd service for starting a rmw-zenoh server
While there's no official method for autostarting an rmw-zenoh server, this might be useful:
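A minimal unit along those lines might look like the following sketch; it assumes a Jazzy install where the rmw_zenoh_cpp package ships the rmw_zenohd router executable, and paths will differ per system.
[Unit]
Description=Zenoh router for rmw_zenoh
After=network-online.target

[Service]
# Equivalent to: ros2 run rmw_zenoh_cpp rmw_zenohd
ExecStart=/opt/ros/jazzy/lib/rmw_zenoh_cpp/rmw_zenohd
Restart=on-failure

[Install]
WantedBy=multi-user.target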
4 posts - 2 participants
ROS Discourse General: How to Implement End-to-End Tracing in ROS 2 (Nav2) with OpenTelemetry for Pub/Sub Workflows?
I’m working on implementing end-to-end tracing for robotic behaviors using OpenTelemetry (OTel) in ROS 2. My goal is to trace:
- High-level requests (e.g., “move to location”) across components to analyze latency
- Control commands (e.g., teleop) through the entire pipeline to the motors
Current Progress:
- Successfully wrapped ROS 2 Service and Action servers to generate OTel traces
- Basic request/response flows are visible in tracing systems
Challenges with Nav2:
- Nav2 heavily uses pub/sub patterns where traditional instrumentation falls short
- Difficult to maintain context propagation across:
  - Multiple subscribers processing the same message
  - Chained topic processing (the output of one node becomes the input of another)
  - Asynchronous publisher/subscriber relationships
Questions:
- Are there established patterns for OTel context propagation in ROS 2 pub/sub systems?
- How should we handle fan-out scenarios (1 publisher → N subscribers)?
- Any Nav2-specific considerations for tracing (e.g., lifecycle nodes, behavior trees)?
- Alternative approaches besides OTel that maintain compatibility with observability tools?
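One candidate pattern for pub/sub is to carry the W3C trace context (the traceparent entry) inside the message itself and re-extract it in every subscriber, which also covers fan-out since each subscriber starts its own child span from the same parent. A rough opentelemetry-cpp sketch follows; the parallel ctx_keys/ctx_values message fields it assumes are hypothetical and would have to be added to the .msg definition.
#include <map>
#include <string>
#include <opentelemetry/context/propagation/global_propagator.h>
#include <opentelemetry/context/propagation/text_map_propagator.h>
#include <opentelemetry/context/runtime_context.h>
#include <opentelemetry/nostd/string_view.h>

namespace ctxprop = opentelemetry::context::propagation;

// Adapts a std::map to the OTel carrier interface; the map's contents are
// copied into (and out of) the hypothetical ctx_keys/ctx_values msg fields.
class MapCarrier : public ctxprop::TextMapCarrier
{
public:
  explicit MapCarrier(std::map<std::string, std::string> & m) : map_(m) {}
  opentelemetry::nostd::string_view Get(
    opentelemetry::nostd::string_view key) const noexcept override
  {
    auto it = map_.find(std::string(key));
    return it == map_.end() ? "" : opentelemetry::nostd::string_view(it->second);
  }
  void Set(opentelemetry::nostd::string_view key,
           opentelemetry::nostd::string_view value) noexcept override
  {
    map_[std::string(key)] = std::string(value);
  }

private:
  std::map<std::string, std::string> & map_;
};

void example()
{
  std::map<std::string, std::string> carrier_map;
  MapCarrier carrier{carrier_map};
  auto prop = ctxprop::GlobalTextMapPropagator::GetGlobalPropagator();
  auto current = opentelemetry::context::RuntimeContext::GetCurrent();
  // Publisher side: inject, then copy carrier_map into the outgoing message.
  prop->Inject(carrier, current);
  // Subscriber side: rebuild carrier_map from the message, then use the
  // extracted context as the parent of the callback's span.
  auto restored = prop->Extract(carrier, current);
  (void)restored;
}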
2 posts - 2 participants
ROS Discourse General: Space ROS Jazzy 2025.07.0 Release
Hello ROS community!
The Space ROS team is excited to announce that Space ROS Jazzy 2025.07.0 was released last week and is available as osrf/space-ros:jazzy-2025.07.0 on DockerHub.
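For example, to pull and enter the release image:
docker pull osrf/space-ros:jazzy-2025.07.0
docker run -it --rm osrf/space-ros:jazzy-2025.07.0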
Release details
This release includes a significant refactor of our base image build, making the main container over 60% smaller! Additionally, development images are now pushed to DockerHub to make building with Space ROS and an underlay easier than ever. For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.
Code
Current versions of all packages released with Space ROS are available at:
-
GitHub - space-ros/space-ros: The Space ROS meta operating system for space robotics.
-
GitHub - space-ros/docker: Docker images to facilitate Docker-based development.
-
GitHub - space-ros/simulation: Simulation assets of space-ros demos
-
GitHub - space-ros/process_sarif: Tools to process and aggregate SARIF output from static analysis.
What’s Next
This release comes 3 months after the last release. The next release is planned for October 31, 2025. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.
All the best,
The Space ROS Team
1 post - 1 participant
ROS Discourse General: Bagel, the Open Source Project | Guest Speakers Arun Venkatadri and Shouheng Yi | Cloud Robotics WG Meeting 2025-08-11
Please come and join us for this coming meeting at Mon, Aug 11, 2025 4:00 PM UTC→Mon, Aug 11, 2025 5:00 PM UTC,
where guest speakers Arun Venkatadri and Shouheng Yi will be presenting Bagel. Bagel is a new open source project that lets you chat with your robotics data by using AI to search through recorded data. Bagel was recently featured in ROS News for the Week, and there’s a follow-up post giving more detail.
Last meeting, we tried out the service from Heex Technologies, which allows you to deploy agents to your robots or search through recorded data for set events. The software then records data around those events and uploads to the cloud, allowing you to view events from your robots. If you’d like to see the meeting, it is available on YouTube.
The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
2 posts - 2 participants
ROS Discourse General: What if your Rosbags could talk? Meet Bagel🥯, the open-source tool we just released!
Huge thanks to @Katherine_Scott and @mrpollo for hosting us at the Joint ROS / PX4 Meetup at Neros in El Segundo, CA! It was an absolute blast connecting with the community in person!
Missed the demo? No worries! Here’s the scoop on what we unveiled (we showed it with PX4 ULogs, but yes, ROS2 and ROS1 are fully supported!)
The problem? We felt the pain of wrestling with robotics data and LLMs. Unlike PDF files, we’re talking about massive sensor arrays, complex camera feeds, dense LiDAR point clouds – making LLMs truly useful here has been a real challenge… at least for us.
The solution? Meet Bagel ( GitHub - shouhengyi/bagel: Bagel is ChatGPT for physical data. Just ask questions. No Fuss. )! We built this powerful open-source tool to bridge that gap. Imagine simply asking questions about your robotics data, instead of endless parsing and plotting.
With Bagel, loaded with your ROS2 bag or PX4 ULog, you can ask things like:
- “Is this front left camera calibrated?”
- “Were there any hard decelerations detected in the IMU data?”
Sound like something that could change your workflow? We’re committed to building Bagel in the open, with your help! This is where you come in:
- Dive In! Clone the repo, give Bagel a spin, and tell us what you think.
- Speak Your Mind! Got an idea? File a feature request. Your insights are crucial to Bagel’s evolution.
- Code with Us! Open a PR and become a core contributor. Let’s build something amazing together.
- Feeling the Love? If Bagel sparks joy (or solves a big headache!), please consider giving us a star on GitHub
. It’s a huge motivator!
Thanks a lot for being part of this journey. Happy prompting!
1 post - 1 participant
ROS Discourse General: ROS Naija LinkedIn Group
Exciting News for Nigerian Roboticists!
We now have a ROS Naija Community group on LinkedIn, a space for engineers, developers, and enthusiasts passionate about ROS (Robot Operating System) and robotics.
Whether you’re a student, hobbyist, researcher, or professional, this is the place to:
Connect with like-minded individuals
Share knowledge, resources, and opportunities
Collaborate on robotics and ROS-based projects
Ask questions and learn from others in the community
If you’re interested in ROS and robotics, you’re welcome to join:
Join here: LinkedIn Login, Sign in | LinkedIn
Let’s build and grow the Nigerian robotics ecosystem together!
#ROS #robotics #ROSNaija #NigeriaTech #Engineering #ROSCommunity #RobotOperatingSystem
1 post - 1 participant
ROS Discourse General: [Case Study] Cross-Morphology Policy Learning with UniVLA and PiPER Robotic Arm
We’d like to share a recent research project where our AgileX Robotics PiPER 6-DOF robotic arm was used to validate UniVLA, a novel cross-morphology policy learning framework developed by the University of Hong Kong and OpenDriveLab.
Paper: Learning to Act Anywhere with Task-Centric Latent Actions
arXiv: [2505.06111] UniVLA: Learning to Act Anywhere with Task-centric Latent Actions
Code: GitHub - OpenDriveLab/UniVLA: [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
Motivation
Transferring robot policies across platforms and environments is difficult due to:
- High dependence on manually annotated action data
- Poor generalization between different robot morphologies
- Visual noise (camera motion, background movement) causing instability
UniVLA addresses this by learning latent action representations from videos, without relying on action labels.
Framework Overview
UniVLA introduces a task-centric, latent action space for general-purpose policy learning. Key features include:
- Cross-hardware and cross-environment transfer via a unified latent space
- Unsupervised pretraining from video data
- Lightweight decoder for efficient deployment
Figure 2: Overview of the UniVLA framework. Visual-language features from third-view RGB and the task instruction are tokenized and passed through an auto-regressive transformer, generating latent actions that are decoded into executable actions across heterogeneous robot morphologies.
PiPER in Real-World Experiments
To validate UniVLA’s transferability, the researchers selected the AgileX PiPER robotic arm as the real-world testing platform.
Tasks tested:
- Store a screwdriver
- Clean a cutting board
- Fold a towel twice
- Stack the Tower of Hanoi
These tasks evaluate perception, tool use, non-rigid manipulation, and semantic understanding.
Experimental Results
- Average performance improved by 36.7% over baseline models
- Up to 86.7% success rate on semantic tasks (e.g., Tower of Hanoi)
- Fine-tuned with only 20–80 demonstrations per task
- Evaluated using a step-by-step scoring system
About PiPER
PiPER is a 6-DOF lightweight robotic arm developed by AgileX Robotics. Its compact structure, ROS support, and flexible integration make it ideal for research in manipulation, teleoperation, and multimodal learning.
Learn more: PiPER
Company website: https://global.agilex.ai
Click the link below to watch the experiment video using PiPER:
🚨 Our PiPER robotic arm was featured in cutting-edge robotics research!
Collaborate with Us
At AgileX Robotics, we work closely with universities and labs to support cutting-edge research. If you’re building on topics like transferable policies, manipulation learning, or vision-language robotics, we’re open to collaborations.
Let’s advance embodied intelligence—together.
1 post - 1 participant
ROS Discourse General: [Demo] Remote Teleoperation with Pika on UR7e and UR12e
Hello ROS developers,
We’re excited to share a new demo featuring Pika, AgileX Robotics’ portable and ergonomic teleoperation gripper system. Pika integrates multiple sensors to enable natural human-to-robot skill transfer and rich multimodal data collection.
Key Features of Pika:
- Lightweight design (~370g) for comfortable extended handheld use
- Integrated multimodal sensors including fisheye RGB camera, Intel RealSense depth camera, 6-DoF IMU, and high-precision gripper encoders
- USB-C plug-and-play connectivity supporting ROS 1 and ROS 2
- Open-source Python and C++ APIs for easy integration and control
- Compatible with URDF models, suitable for demonstration-based and teleoperation control
In this demo, the Pika teleoperation system remotely controls two collaborative robot arms, the UR7e (7.5 kg payload, 850 mm reach) and the UR12e (12 kg payload, 33.5 kg robot weight), to complete several everyday manipulation tasks:
Task Set:
- Twist open a bottle cap
- Pick up a dish and place it in a cabinet
- Grab a toy and put it in a container
System Highlights:
- Precise gripper control with high-resolution encoder feedback
- 6-DoF IMU for accurate motion tracking
- Synchronized multimodal data capture (vision, 6D pose, gripper status)
- Low-latency USB-C connection ensuring real-time responsiveness
- Ergonomic and lightweight design for comfortable long-duration use
Application Scenarios:
- Human-in-the-loop teleoperation
- Learning from Demonstration (LfD) and Imitation Learning (IL)
- Vision-based dexterous manipulation and robot learning
- Remote maintenance and industrial collaboration
- Bimanual coordination and complex task execution
Watch the demo here: Pika Remote Control Demo
Learn more about Pika: https://global.agilex.ai/products/pika
Feel free to contact us for GitHub repositories, integration guides, or collaboration opportunities — we look forward to your feedback!
1 post - 1 participant
ROS Discourse General: TecGihan Force Sensor Amplifier for Robot Now Supports ROS 2
I would like to share that Tokyo Opensource Robotics Kyokai Association (TORK) has supported the development and release of the ROS 2 / Linux driver software for the DMA-03 for Robot, a force sensor amplifier manufactured by TecGihan Co., Ltd.
- GitHub – tecgihan_driver
The DMA-03 for Robot is a real-time output version of the DMA-03, a compact 3-channel strain gauge amplifier, adapted for robotic applications.
- TecGihan Website (English)
As of July 2025, tecgihan_driver supports the following Linux / ROS environments:
- Ubuntu 22.04 + ROS 2 Humble
- Ubuntu 24.04 + ROS 2 Jazzy
A bilingual (Japanese/English) README with detailed usage instructions is available on the GitHub repository:
If you have any questions or need support, feel free to open an issue on the repository.
–
Yosuke Yamamoto
Tokyo Opensource Robotics Kyokai Association
1 post - 1 participant
ROS Discourse General: RobotCAD 9.0.0 (Assembly WB -> RobotCAD converter)
Improvements:
- Added a converter from FreeCAD Assembly WB (the default assembly workbench) structure to RobotCAD structure.
- Added a tool for changing a Joint Origin without touching the downstream kinematic chain (moves only the target Joint Origin).
- Optimized the performance of the Set Placement tools; they no longer require intermediate scene recalculation in the process.
- Decreased the size of joint arrows to 150.
- Created collisions are now added to a Collision group (folder). Unified the collision part prefix.
- Fixed Set Placement by orienteer for the root link (aligns it to zero Placement).
- Refactored the Set Placement tools.
Fixes:
- Fixed an error when creating a collision for an empty part.
- Fixed getting the wrapper for an LCS body container; this fixes adding an LCS to some objects.
- Changed NotImplementedError (units for some joint types) to a warning: instead of raising an error, it now warns and still allows setting values for the other joint types.
https://vkvideo.ru/video-219386643_456239081 - the Assembly WB → RobotCAD converter in action
1 post - 1 participant
ROS Discourse General: 🚀 [New Release] BUNKER PRO 2.0 – Reinforced Tracked Chassis for Extreme Terrain and Developer-Friendly Integration
Hello ROS community,
AgileX Robotics is excited to introduce the BUNKER PRO 2.0, a reinforced tracked chassis designed for demanding off-road conditions and versatile field robotics applications.
Key Features:
- Christie suspension system + Matilda four-wheel independent balancing suspension provide excellent terrain adaptability and ride stability.
- Easily traverses 30° slopes.
- Maximum unloaded range: 20 km; maximum loaded range: 15 km.
- Capable of crossing 40 cm trenches and clearing obstacles up to 180 mm in height.
- IP67-rated enclosure ensures robust protection against dust, water, and mud.
- Rated payload capacity: 120 kg, supporting a wide range of sensors, manipulators, and payloads.
- Maximum speed at full load: 1.5 m/s.
- Minimum turning radius: 67 cm.
- Developer-ready interfaces and ROS compatibility.
Intelligent Expansion, Empowering the Future
- Supports customizable advanced operation modes.
- Communication via CAN bus protocol.
- Open-source SDK and ROS packages for easy integration and development.
Typical Use Cases:
- Outdoor Inspection & Patrol
- Agricultural Transport
- Engineering & Construction Operations
- Specialized Robotics Applications
AgileX Robotics provides full ROS driver support and SDK documentation to accelerate your development process. We welcome collaboration opportunities and field testing partnerships with the community.
For detailed technical specifications or to discuss integration options, please contact us at sales@agilex.ai.
Learn more at https://global.agilex.ai/
4 posts - 2 participants
ROS Discourse General: Cloud Robotics WG Meeting 2025-07-28 | Heex Technologies Tryout and Anomaly Detection Discussion
Please come and join us for this coming meeting at Mon, Jul 28, 2025 4:00 PM UTC→Mon, Jul 28, 2025 5:00 PM UTC, where we will be trying out Heex Technologies' service offering from their website and discussing anomaly detection for Logging & Observability.
Last meeting, we heard from Bruno Mendes De Silva, Co-Founder and CEO of Heex Technologies, and Benoit Hozjan, Project Manager in charge of customer experience at Heex Technologies. The two discussed the company and purpose of the service they offer, then demonstrated a showcase workspace for the visualisation and anomaly detection capabilities of the server. If you’d like to see the meeting, it is available on YouTube.
The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
2 posts - 2 participants