Autoware.Auto
3D perception stack

Overview

This demo explains the general process for running the Autoware.Auto 3D perception stack, a set of nodes that compute and publish object bounding boxes. The minimal stack for doing so is:

  1. point_cloud_filter_transform_node: Transforms output of the velodyne_node to a common frame. Documentation can be found here: point_cloud_filter_transform_nodes.
  2. ray_ground_classifier_node: Classifies lidar points to indicate whether they belong to a ground or non-ground surface. Documentation can be found here: ray_ground_classifier.
  3. euclidean_cluster_node: Clusters the non-ground points into object detections. Documentation can be found here: euclidean_cluster.

There are also optional nodes, not covered in this tutorial, that can be used to augment the stack:

  1. point_cloud_fusion: Fuses point clouds from multiple sources into a single message. It is currently used to fuse the front and rear lidar data into a single message stream; this tutorial uses only the front lidar data. Documentation can be found here: point_cloud_fusion_nodes.
  2. voxel_grid_nodes: This can be used to down-sample point cloud data through a voxel grid representation. This tutorial does not perform down-sampling. Documentation can be found here: voxel_grid_filter.

Launch Perception Nodes

Prerequisites

This demo assumes that instructions are run inside the ADE environment. For installation instructions, see Install ADE. Depending on which mode is chosen to generate data, a different .aderc file may be needed to ensure the correct environment is configured.

Generating Sensor Data

To run the perception stack, sensor data needs to be generated and published along with the corresponding "robot state", which is used to determine the sensor positions with respect to the vehicle. This section covers three different ways of achieving this:

  • Running the simulator
  • Replaying sensor data
  • Connecting to the physical sensor

Running the Simulator

  1. Enter ADE using the LGSVL configuration:
    $ ade --rc .aderc-lgsvl start --update --enter
  2. See Running the SVL Simulator alongside Autoware.Auto
  3. Publish the robot state description:
    $ ade enter
    ade$ ros2 run robot_state_publisher robot_state_publisher /opt/AutowareAuto/share/lexus_rx_450h_description/urdf/lexus_rx_450h.urdf

Replaying Sensor Data

  1. Download the Dual VLP-16 Hi-Res PCAP file.
  2. Enter ADE using the default configuration:
    $ ade --rc .aderc start --update --enter
  3. Move the downloaded file into your adehome folder.
  4. Replay the file using udpreplay:
    $ ade enter
    ade$ udpreplay -r -1 route_small_loop_rw.pcap
  5. Launch the velodyne_node for the front lidar:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run velodyne_nodes velodyne_cloud_node_exe --ros-args -p model:=vlp16 --remap __ns:=/lidar_front --params-file /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test.param.yaml
  6. Launch the velodyne_node for the rear lidar:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run velodyne_nodes velodyne_cloud_node_exe --ros-args -p model:=vlp16 --remap __ns:=/lidar_rear --params-file /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test_rear.param.yaml
  7. Publish the robot state description:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run robot_state_publisher robot_state_publisher /opt/AutowareAuto/share/lexus_rx_450h_description/urdf/lexus_rx_450h_pcap.urdf
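Steps 5 and 6 run the same executable twice with only the namespace and parameter file changed, so a small wrapper can start both lidars from one terminal. The sketch below is our own convenience, not part of Autoware.Auto: the `launch_lidar` helper and its `DRY_RUN` switch are hypothetical, while the underlying `ros2 run` command matches the steps above. It defaults to printing the commands; set `DRY_RUN=0` (inside a sourced ADE terminal) to actually execute them.

```shell
#!/usr/bin/env bash
# Sketch: start both velodyne_node instances from one terminal.
# launch_lidar and DRY_RUN are hypothetical helpers, not Autoware.Auto tools.
PARAM_DIR=/opt/AutowareAuto/share/velodyne_nodes/param

launch_lidar() {
  ns="$1"; params="$2"
  cmd="ros2 run velodyne_nodes velodyne_cloud_node_exe \
--ros-args -p model:=vlp16 --remap __ns:=/${ns} \
--params-file ${PARAM_DIR}/${params}"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"            # preview only; set DRY_RUN=0 to execute
  else
    $cmd &                 # background, so both lidars can stream at once
  fi
}

launch_lidar lidar_front vlp16_test.param.yaml
launch_lidar lidar_rear  vlp16_test_rear.param.yaml
```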

Connecting to the Physical Sensor

  1. Enter ADE using the default configuration, with extra Docker flags so the container can access the host network and devices:
    $ ade --rc .aderc start --update --enter -- --net=host --privileged
  2. Update the IP address and port arguments in the parameter file for the velodyne_node, then launch the node:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run velodyne_nodes velodyne_cloud_node_exe --ros-args -p model:=vlp16 --remap __ns:=/lidar_front --params-file /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test.param.yaml
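The exact edit depends on how the parameter file names the sensor endpoint. Purely as an illustration, the helper below rewrites `ip:` and `port:` entries in a copy of the file; both key names, the helper itself, and the example address are assumptions, so check vlp16_test.param.yaml for the real keys before using it.

```shell
# Hypothetical helper: rewrite the sensor endpoint in a velodyne param file.
# The YAML keys "ip:" and "port:" are assumptions -- verify the real key
# names in vlp16_test.param.yaml first. Edit a copy, not the installed file.
set_sensor_endpoint() {
  file="$1"; ip="$2"; port="$3"
  sed -i \
    -e "s|^\([[:space:]]*ip:\).*|\1 \"${ip}\"|" \
    -e "s|^\([[:space:]]*port:\).*|\1 ${port}|" \
    "$file"
}

# Example (hypothetical address and port):
# cp /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test.param.yaml my.param.yaml
# set_sensor_endpoint my.param.yaml 192.168.1.201 2368
```

Pass the edited copy to the node via `--params-file` instead of the installed file.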

Launch Visualization

rviz2 can be used to visualize perception data as it is published. To start the visualizer, open a new terminal, then:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ rviz2 -d /opt/AutowareAuto/share/autoware_auto_examples/rviz2/autoware_perception_stack.rviz

The rviz config has displays for all topics in this tutorial. As nodes are launched, they will be displayed in rviz. The check-boxes next to topic names can be checked and unchecked to toggle which perception outputs are visualized.
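Before enabling a display it can help to confirm that its topic is actually being published. `ros2 topic list` is the standard ROS 2 CLI for this; the `wait_for_topic` wrapper and its timeout below are our own sketch, not an Autoware.Auto tool.

```shell
# Sketch: block until a topic shows up in `ros2 topic list`, or time out.
# wait_for_topic is a hypothetical helper, not part of Autoware.Auto.
wait_for_topic() {
  topic="$1"; tries="${2:-30}"            # default: roughly 30 s
  while [ "$tries" -gt 0 ]; do
    if ros2 topic list 2>/dev/null | grep -qx "$topic"; then
      echo "found: $topic"
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "timed out waiting for $topic" >&2
  return 1
}

# Example, inside a sourced ADE terminal:
# wait_for_topic /lidar_front/points_filtered
```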

Launch Perception Nodes

  1. Launch the point_cloud_filter_transform_node node. This node transforms point clouds from the velodyne_node to a common frame. In a new terminal, do:

    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run point_cloud_filter_transform_nodes point_cloud_filter_transform_node_exe --ros-args --remap __ns:=/lidar_front --params-file /opt/AutowareAuto/share/point_cloud_filter_transform_nodes/param/vlp16_sim_lexus_filter_transform.param.yaml --remap __node:=filter_transform_vlp16_front --remap points_in:=/lidar_front/points_xyzi
    Autoware.Auto transformed points snapshot
  2. Launch the ray_ground_classifier_node node. This node classifies point cloud points according to whether they are ground or non-ground. In a new terminal, do:

    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run ray_ground_classifier_nodes ray_ground_classifier_cloud_node_exe --ros-args --params-file /opt/AutowareAuto/share/ray_ground_classifier_nodes/param/vlp16_lexus_pcap.param.yaml --remap points_in:=/lidar_front/points_filtered
    Autoware.Auto ray ground filter snapshot
  3. Launch the euclidean_cluster_node node. This node clusters non-ground points into objects and publishes either bounding boxes or, optionally, convex polygon prisms. To publish bounding boxes, in a new terminal, do:

    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run euclidean_cluster_nodes euclidean_cluster_node_exe --ros-args --params-file /opt/AutowareAuto/share/euclidean_cluster_nodes/param/vlp16_lexus_cluster.param.yaml --remap points_in:=/points_nonground
    Autoware.Auto bounding boxes segmentation snapshot

Alternatively, publish the convex polygon prisms from the euclidean clustering algorithm. In a new terminal, do:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ ros2 run euclidean_cluster_nodes euclidean_cluster_node_exe --ros-args --params-file /opt/AutowareAuto/share/euclidean_cluster_nodes/param/vlp16_lexus_cluster_as_polygon.param.yaml --remap points_in:=/points_nonground
Autoware.Auto convex polygon prisms segmentation snapshot


Convenience Launch Files

To simplify the process of launching these nodes, there are convenience launch files that bring up the robot_state_publisher along with the rest of the perception stack using a single command. To bring up the perception stack, use the following launch files:

PCAP Sensor Data

Note
See the steps above for replaying sensor data if you don't have a copy of the PCAP file.
  1. Replay the file using udpreplay:
    $ ade enter
    ade$ udpreplay -r -1 route_small_loop_rw.pcap
  2. Launch the velodyne_node for the front lidar in another terminal:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run velodyne_nodes velodyne_cloud_node_exe --ros-args -p model:=vlp16 --remap __ns:=/lidar_front --params-file /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test.param.yaml
  3. Launch the velodyne_node for the rear lidar in another terminal:
    $ ade enter
    ade$ source /opt/AutowareAuto/setup.bash
    ade$ ros2 run velodyne_nodes velodyne_cloud_node_exe --ros-args -p model:=vlp16 --remap __ns:=/lidar_rear --params-file /opt/AutowareAuto/share/velodyne_nodes/param/vlp16_test_rear.param.yaml
  4. Enter ADE and run the launch file.

    To publish convex polygon prisms describing the spatial location of objects:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ ros2 launch autoware_demos lidar_polygon_prisms_pcap.launch.py

Alternatively, to publish bounding boxes describing the spatial location of objects:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ ros2 launch autoware_demos lidar_bounding_boxes_pcap.launch.py

SVL Simulator

Note
If the SVL Simulator is already running, skip step 1.
  1. See Running the SVL Simulator alongside Autoware.Auto
  2. Enter ADE and run the launch file.

    To publish convex polygon prisms describing the spatial location of objects:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ ros2 launch autoware_demos lidar_polygon_prisms_lgsvl.launch.py

Alternatively, to publish bounding boxes describing the spatial location of objects:

$ ade enter
ade$ source /opt/AutowareAuto/setup.bash
ade$ ros2 launch autoware_demos lidar_bounding_boxes_lgsvl.launch.py