[ Software Division : Getting Started with Perception ROS Package ]

Created by Rowan Dempster, last modified by Anita Hu on Sep 29, 2020

Software Setup

Platform requirements are:

OS: Ubuntu 16.04
ROS Version: Kinetic Kame

Do the Software Onboarding Tutorial to get your machine set up, then do some ROS tutorials to familiarize yourself with the framework.

Create a catkin workspace with the following structure.

~/dev/catkin_ws
    - perception-models (master branch): https://git.uwaterloo.ca/WATonomous/perception-models
    - src
        - perception-year-2 (master branch): https://git.uwaterloo.ca/WATonomous/perception-year-2
        - ros_msgs (master branch): https://git.uwaterloo.ca/WATonomous/ros_msgs


[NOTE:] If you get a 502 proxy error when trying to git clone, try cloning with SSH on GitLab.
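The layout above can be created with something like the following sketch (the `|| true` just lets the script continue if a clone fails, e.g. while offline; swap in SSH URLs if HTTPS gives you trouble):

```shell
# Create the workspace skeleton, then clone each repo into place.
mkdir -p ~/dev/catkin_ws/src
cd ~/dev/catkin_ws
git clone https://git.uwaterloo.ca/WATonomous/perception-models || true
cd src
git clone https://git.uwaterloo.ca/WATonomous/perception-year-2 || true
git clone https://git.uwaterloo.ca/WATonomous/ros_msgs || true
```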

Before you can compile the ROS package code, you need a few more dependencies. We are using OpenVINO 2019 R3.1 or 2018 R5 (use 2018 R5 if available), a library made by Intel to accelerate computer vision and AI models in real time. Follow the instructions here (2019 R3.1) or here (2018 R5) to download and install it on Linux.

[IMPORTANT:]

We need a symbolic link "computer_vision_sdk" to the current openvino folder:

cd /opt/intel
sudo ln -s openvino computer_vision_sdk

Next, OpenVINO 2019 does not ship the ubuntu_16.04 folder (which contains intel64 in the 2018 version) under /opt/intel/openvino/inference_engine/lib. We need a symbolic link "ubuntu_16.04" that points back to the directory itself.

cd /opt/intel/openvino/inference_engine/lib
sudo ln -s ./ ubuntu_16.04
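If the self-referential link looks odd, here is a throwaway demonstration in a scratch directory (the real link lives in /opt/intel/openvino/inference_engine/lib):

```shell
# Demo: a directory symlinked to "./" makes paths through the link
# resolve back into the directory itself.
demo=$(mktemp -d)
cd "$demo"
ln -s ./ ubuntu_16.04
touch intel64.marker
# "$demo/ubuntu_16.04/intel64.marker" now resolves to "$demo/intel64.marker":
ls ubuntu_16.04/intel64.marker
```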

A dependency, libiomp5.so, is missing from OpenVINO 2019 and must be added manually. Download libiomp5.so from here, then move it into place:

cd /opt/intel/openvino/deployment_tools/inference_engine/external
mkdir -p omp/lib
mv location_to_libiomp5.so omp/lib

Check ~/.bashrc and make sure the following paths are in LD_LIBRARY_PATH:

/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/omp/lib
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64
~/inference_engine_samples_build/intel64/Release/lib

It should look something like

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/omp/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:~/inference_engine_samples_build/intel64/Release/lib
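A quick way to sanity-check the result after sourcing ~/.bashrc (a sketch; adjust the list to your install):

```shell
# Report any required library path missing from LD_LIBRARY_PATH.
missing=0
for p in \
    /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/omp/lib \
    /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64 \
    "$HOME/inference_engine_samples_build/intel64/Release/lib"
do
    case ":$LD_LIBRARY_PATH:" in
        *":$p:"*) echo "ok: $p" ;;
        *) echo "missing from LD_LIBRARY_PATH: $p"; missing=$((missing + 1)) ;;
    esac
done
echo "$missing path(s) missing"
```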

After you have installed OpenVINO, compile the samples to produce the libraries required by the ROS perception code to talk to the hardware:

# Run this script to compile libraries (the path might not be exactly the same depending 
# on the Openvino version you are using, so be prepared to dig around for documentation or a different path):
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build_samples.sh
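If the script is not at that exact path, you can search for it first (assuming the install lives under /opt/intel):

```shell
# The samples build script moves between OpenVINO releases; locate it before running.
script=$(find /opt/intel -name build_samples.sh 2>/dev/null | head -n 1)
echo "${script:-build_samples.sh not found under /opt/intel}"
```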


[NOTE:] If you get an error like /usr/bin/ld: cannot find -lformat_reader, make sure you add ~/inference_engine_samples_build/intel64/Release/lib to your LD_LIBRARY_PATH environment variable, as well as any other path OpenVINO requires (see the Perception CMakeLists for a full list of library paths).
[NOTE:] If the apt commands spit out warnings like /sbin/ldconfig.real: /opt/intel/mediasdk/lib64/libva-glx.so is not a symbolic link, this is a mistake on Intel's part. It occurred for the .2 and .1 libraries and can be fixed by manually recreating the symlinks:

sudo rm /opt/intel/mediasdk/lib64/libva-glx.so.2
sudo ln -s /opt/intel/mediasdk/lib64/libva-glx.so /opt/intel/mediasdk/lib64/libva-glx.so.2
sudo rm /opt/intel/mediasdk/lib64/libigdgmm.so.1
sudo ln -s /opt/intel/mediasdk/lib64/libigdgmm.so /opt/intel/mediasdk/lib64/libigdgmm.so.1

Testing your setup

Edit the rosparam paths in ./src/perception-year-2/perception-pc.launch to point to your own files.
The important paths to change are the four model paths on lines 3, 5, 7, and 9:

<param name="perception/nn_segmentation_model_path" type="str" value="path to /perception-models/segnet-skip-lanelines-01-29-2019/intel-model/frozen_model" />

In perception-pc.launch, fake camera images are sent using a directory of images you specify as a rosparam in the launch file. They are published by the frame_publisher_node. You can test out the obstacle detection node by downloading images here.
In the launch file, uncomment and update this param value (line 20) to the path where you downloaded the image folder.

<param name="perception/img_test_dir" type="str" value="path to /img_test_dir/obstacle_test" />

Uncomment and change the value of perception/frame_pub_topic (line 18) to the correct camera topic (currently /camera/right/image_color).

<param name="perception/frame_pub_topic" type="str" value="/camera/right/image_color" />

Make sure the nodes are uncommented (lines 75 and 80):

<node pkg="perception" type="frame_publisher_node" name="frame_publisher_node" output="screen"/>
<node pkg="perception" type="obstacle_detection_node" name="obstacle_detection_node" output="screen"/>

Usage

Here are some commands to start running the Perception module:

# Optional: choose number of threads to run your network (choose 1 or 2 if your computer is slow)
export OMP_NUM_THREADS=2
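If you are unsure what value to pick, one sketch (assuming roughly half your cores is a safe default) is:

```shell
# Derive a conservative OpenMP thread count: half the cores, minimum 1.
cores=$(nproc)
export OMP_NUM_THREADS=$(( cores / 2 > 0 ? cores / 2 : 1 ))
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```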

cd ~/dev/catkin_ws
catkin build
source devel/setup.sh

# Launch perception nodes on PC:
roslaunch ./src/perception-year-2/perception-pc.launch

# Or run this instead if you are on the vehicle:
roslaunch ./src/perception-year-2/perception-vehicle.launch

# Perform the next commands each in separate terminals:
# Visualizer for detections, see perception topics link below:
rosrun image_view image_view image:=/obstacle_detection_visualizer

# Raw camera images
rosrun image_view image_view image:=/camera/right/image_color


[NOTE:] You can find all perception topics here.

Contributing

Go through these ROS Tutorials (1-6, 8, 11, 17).

Let's try to follow the ROS C++ style guide.

Make changes on a separate branch and submit a merge request.

Document generated by Confluence on Dec 10, 2021 04:02
