RViz is a visualization tool provided by ROS for inspecting ROS messages, both the standard types and the custom types defined by the different sub-teams. Using it will improve development efficiency for the software teams and make debugging faster on test track days.
Path Planning
How to run RViz with the Path Planning Pipeline
- Ensure your catkin_ws is set up properly with the decision and ros_msgs repositories
- Run the path planning pipeline:
  - `roslaunch path_planning planner_main.launch`
  - The path planning pipeline should start, and RViz should pop up as a separate window
- Run the desired ROS bag in a new terminal window
How to run the Path Planning RViz Manually
Ensure your catkin workspace is set up and RViz is installed.
- In ~/catkin_ws/src, clone the decision and ros_msgs repositories
- Run roscore in a new terminal window
- Run the RViz publisher and any other nodes you need in separate terminal windows
  - `rosrun path_planning rviz_publisher`
  - `rosrun ...` (e.g. `rosrun path_planning local_planner`)
- Run RViz in a new terminal window
- Run the desired ROS bag in a new terminal window
  - `rosbag play <bag_name> -l`
In the RViz GUI, either:
- File→Open `catkin_ws/src/decision/rviz_publisher/environment.rviz`
Or configure it manually:
- Change the fixed_frame to world
- Change the target_frame to odom
- Click Add to add a display, open the By topic tab, scroll down to marker_array, and select it to visualize the markers
- Click the Zero button
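For reference, here is a minimal sketch of a node that publishes a visualization_msgs/MarkerArray on the marker_array topic in the odom frame, matching the topic and frame names used in the steps above. It is not the actual rviz_publisher implementation, just an illustration of the pattern such a publisher follows.

```python
#!/usr/bin/env python
# Minimal sketch of a MarkerArray publisher for RViz.
# Assumptions: the topic name "marker_array" and frame "odom" come from the
# configuration steps above; the shape, rate, and node name are illustrative.
import rospy
from visualization_msgs.msg import Marker, MarkerArray

def make_marker(marker_id, x, y):
    m = Marker()
    m.header.frame_id = "odom"          # matches the target_frame set in RViz
    m.header.stamp = rospy.Time.now()
    m.ns = "example"
    m.id = marker_id
    m.type = Marker.CUBE
    m.action = Marker.ADD
    m.pose.position.x = x
    m.pose.position.y = y
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.5
    m.color.r, m.color.g, m.color.b, m.color.a = 0.0, 1.0, 0.0, 1.0
    return m

if __name__ == "__main__":
    rospy.init_node("example_rviz_publisher")
    pub = rospy.Publisher("marker_array", MarkerArray, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        msg = MarkerArray()
        msg.markers = [make_marker(i, float(i), 0.0) for i in range(5)]
        pub.publish(msg)
        rate.sleep()
```

With a publisher like this running, the manual configuration above (fixed_frame world, Add → By topic → marker_array) shows the published cubes.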
Perception
Ensure your catkin workspace is set up and RViz is installed. Perception has an RViz configuration file that launches a preconfigured RViz when you launch perception-vehicle-exp.launch
(https://git.uwaterloo.ca/WATonomous/perception-year-2/blob/master/perception-vehicle-exp.launch).
Run `roslaunch perception-vehicle-exp.launch` to launch the perception pipeline (I do not recommend doing this on your own computer, as it is very computationally expensive: it runs neural networks, LiDAR data processing, and RViz all at once).
On your own computer, it is best to run one or two perception ROS nodes at a time with the perception-pc.launch file, which does not automatically launch RViz, and to visualize detection results using `rosrun image_view image_view image:=/insert_topic_name`.
From a Rosbag
- In RViz, to visualize raw LIDAR data, switch the Fixed Frame value to /velodyne and then add the /velodyne_points topic. If you don't know what RViz is or how to use it, see the tutorial here: http://wiki.ros.org/rviz/Tutorials.
Team Requirements
General
- Super helpful for real-time detections, to help diagnose issues instantly. The system would help determine which sub-team has bugs
- It would also help validate model accuracy
- Helps fine-tune parameters when combined with the hyperparameter server
- A visualization will help us figure it out. We cannot retrain the model on the fly, but we can take notes; you could never really know if you don't visualize it
- Example: the car runs a red light. The visualization shows whether the car didn't detect the light, or detected it but classified it as green
Path Planning
What they have
What they want
- Make the system run faster
- Visualize the cost map (a costmap-colouring sketch follows this list)
  - Visualize a 2D representation of the cost map on the floor that is accurately placed with respect to the car and the other visualized obstacles
  - It should be easy to differentiate the cost values at a glance
  - Research whether it is better to use a point cloud or a voxel grid
  - Values of the cost map range from 0-255, so use different colours to signify the cost of visiting a certain location
    - Ex. the lane lines will have a cost of 150 and the car itself will have a cost of 255
  - The cost map is passed as a matrix representation
  - The message is sent over the RolloutCostmapAndConfig topic
- Visualize feedback controller outputs: steering angle, torque, desired velocity, gear (a stdout subscriber sketch follows this list)
  - The outputs of the controller are the steering angle of the front wheel and the longitudinal force
  - The output topic of the feedback controller is called desired_feedback_output
  - The message type sent over that topic is DesiredOutput.msg
  - Output in the form of a text dialog or stdout is sufficient
- Global planner waypoints (not being published currently)
- Fix the velodyne issues from Y1
- Cutting layer RViz / other internal path planning logic (nothing being published currently)
- Visualize turn signals (not being published currently)
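Below is a minimal sketch of how the cost map could be rendered in RViz as a coloured grid on the floor, as requested above. It assumes the cost map is available as a 2D array of 0-255 values with a known cell resolution and origin; the node name, the field layout of RolloutCostmapAndConfig, and the colour ramp are assumptions for illustration, not the team's implementation.

```python
#!/usr/bin/env python
# Sketch: publish a costmap (0-255 matrix) as a flat CUBE_LIST marker so RViz
# draws it on the floor, coloured by cost. The resolution, origin, and demo
# matrix below are assumptions -- swap in the real RolloutCostmapAndConfig data.
import numpy as np
import rospy
from std_msgs.msg import ColorRGBA
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker

RESOLUTION = 0.2  # metres per cell (assumed)

def costmap_to_marker(costmap, origin_x=0.0, origin_y=0.0):
    m = Marker()
    m.header.frame_id = "odom"            # same frame as the other markers
    m.header.stamp = rospy.Time.now()
    m.ns = "costmap"
    m.id = 0
    m.type = Marker.CUBE_LIST
    m.action = Marker.ADD
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = RESOLUTION
    m.scale.z = 0.01                      # keep the grid flat on the floor
    rows, cols = costmap.shape
    for r in range(rows):
        for c in range(cols):
            cost = costmap[r, c] / 255.0  # normalise 0-255 to 0-1
            m.points.append(Point(origin_x + c * RESOLUTION,
                                  origin_y + r * RESOLUTION, 0.0))
            # simple colour ramp: green (cheap) -> red (expensive)
            m.colors.append(ColorRGBA(cost, 1.0 - cost, 0.0, 0.8))
    return m

if __name__ == "__main__":
    rospy.init_node("costmap_rviz_sketch")
    pub = rospy.Publisher("costmap_marker", Marker, queue_size=1)
    rate = rospy.Rate(2)
    demo = np.random.randint(0, 256, size=(40, 40))  # stand-in for the real matrix
    while not rospy.is_shutdown():
        pub.publish(costmap_to_marker(demo))
        rate.sleep()
```

A CUBE_LIST marker keeps all cells in a single message, which is cheaper for RViz to render than one marker per cell and makes the point-cloud-vs-voxel-grid comparison easy to prototype.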
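And a minimal sketch of the stdout option for the feedback controller outputs: a subscriber on desired_feedback_output that prints each message. The topic and message type come from the notes above, but the import path and field names of DesiredOutput are assumptions.

```python
#!/usr/bin/env python
# Sketch: print feedback-controller outputs to stdout, which the notes above say
# is sufficient. The package containing DesiredOutput.msg and its field names
# (steering_angle, longitudinal_force) are assumed, not confirmed.
import rospy
from path_planning.msg import DesiredOutput  # assumed import path

def callback(msg):
    rospy.loginfo("steering angle: %.3f rad, longitudinal force: %.1f N",
                  msg.steering_angle, msg.longitudinal_force)

if __name__ == "__main__":
    rospy.init_node("feedback_output_logger")
    rospy.Subscriber("desired_feedback_output", DesiredOutput, callback)
    rospy.spin()
```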
Perception
What they have
- Visualizes LiDAR raw data and 3D bounding boxes using the Euclidean
clustering algorithm
What they want
- Real-time camera feeds
- Representation of a pedestrian and stop sign in 3D space
- 2D bounding box visualizations (a detection-overlay sketch follows this list)
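As a starting point for the camera feed and 2D bounding box items above, here is a sketch that draws detection boxes on an incoming camera image and republishes the result. The topic names and the use of the standard vision_msgs/Detection2DArray type are stand-ins for whatever the perception pipeline actually publishes.

```python
#!/usr/bin/env python
# Sketch: overlay 2D detection boxes on a camera feed and republish the image.
# Topic names and the Detection2DArray type are assumptions standing in for the
# real perception outputs.
import cv2
import rospy
import message_filters
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray

bridge = CvBridge()

def callback(img_msg, det_msg):
    frame = bridge.imgmsg_to_cv2(img_msg, desired_encoding="bgr8")
    for det in det_msg.detections:
        cx, cy = det.bbox.center.x, det.bbox.center.y
        w, h = det.bbox.size_x, det.bbox.size_y
        top_left = (int(cx - w / 2), int(cy - h / 2))
        bottom_right = (int(cx + w / 2), int(cy + h / 2))
        cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)
    pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))

if __name__ == "__main__":
    rospy.init_node("detection_overlay_sketch")
    pub = rospy.Publisher("detections/image_overlay", Image, queue_size=1)
    image_sub = message_filters.Subscriber("camera/image_raw", Image)
    det_sub = message_filters.Subscriber("detections", Detection2DArray)
    # loosely synchronise the camera feed with the detection messages
    sync = message_filters.ApproximateTimeSynchronizer([image_sub, det_sub],
                                                       queue_size=5, slop=0.1)
    sync.registerCallback(callback)
    rospy.spin()
```

The overlaid feed can then be viewed with the image_view command from the section above or added as an Image display in RViz.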
Processing
What they have
- Currently using the visualizer that Charles developed
- Current process: the output is coordinates represented as a string, and they print out the string to approximate what is happening
- Example: they wouldn't know how far away they were from a stop sign; they had to guess what was going on and why certain decisions were made
What they want
- The ability to visualize Environment.msg using Charles' RViz tool
- An RViz tool that visualizes the environment messages (a text-marker sketch follows this list)
- High priority for testing day
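Since the current process prints the environment coordinates as a string, one low-effort bridge until Environment.msg gets a full visualizer is to push that same string into RViz as a text marker. This is only a sketch; the input topic name and the assumption that the string arrives as a std_msgs/String are illustrative.

```python
#!/usr/bin/env python
# Sketch: show the environment string that Processing currently prints as a
# floating text marker in RViz. Topic name and std_msgs/String input are assumed.
import rospy
from std_msgs.msg import String
from visualization_msgs.msg import Marker

def callback(msg):
    m = Marker()
    m.header.frame_id = "odom"
    m.header.stamp = rospy.Time.now()
    m.ns = "environment_text"
    m.id = 0
    m.type = Marker.TEXT_VIEW_FACING
    m.action = Marker.ADD
    m.pose.position.z = 2.0      # float the text above the vehicle
    m.pose.orientation.w = 1.0
    m.scale.z = 0.5              # text height in metres
    m.color.r = m.color.g = m.color.b = m.color.a = 1.0
    m.text = msg.data
    pub.publish(m)

if __name__ == "__main__":
    rospy.init_node("environment_text_sketch")
    pub = rospy.Publisher("environment_text_marker", Marker, queue_size=1)
    rospy.Subscriber("environment_string", String, callback)
    rospy.spin()
```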
Process Flow of RViz Tool