Lidar To Camera Calibration: Projection Matrix

Created by Rowan Dempster on Dec 28, 2019

This process produces the projection matrix needed for lidar-to-camera conversion. It works by collecting many lidar points together with their corresponding camera pixel points; a Python script then computes an approximation of the projection matrix from these correspondences.
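For context, the projection matrix is the standard 3x4 pinhole-camera map from homogeneous 3D points to homogeneous pixel coordinates:

s * [u, v, 1]^T = P * [X, Y, Z, 1]^T

where (X, Y, Z) is a lidar point expressed in the camera frame, (u, v) is the matching pixel, and s is an arbitrary scale. Each lidar/pixel pair constrains P, which is why many pairs are collected below.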

Run the cameras and lidars

Running Camera Drivers: https://phabricator.watonomous.ca/w/electrical/sensor-fusion/camera_setup/camera_drivers/
Running Lidar Drivers: https://phabricator.watonomous.ca/w/electrical/sensor-fusion/lidar_setup/lidar_drivers/

Alternatively, there is a script that runs both the cameras and lidars:

roslaunch state_machine sensor_launch.launch


Run perception node

We need the perception node to merge the lidar point clouds together.
Find the perception-pc.launch file in the src folder of the perception repo.
Link to phab: https://phabricator.watonomous.ca/w/software/perception/perception_ros_integration/

roslaunch perception-pc.launch

Check with Rviz

Check that it is running by displaying both sensor inputs in RViz:

rosrun rviz rviz

In RViz click Add, then By topic, and find /lidar_merged_visualised (PointCloud2).


It should look something like this:

[screenshot]

Calibration of camera and lidar using a chessboard

Place the chessboard in front of the sensors

Download the necessary files from the WATonomous Google Drive

Download this entire folder into your catkin workspace:

Download this anywhere but remember this path:

The downloaded package should catkin_make without any errors.
Source your devel/setup.bash.
Enter this command to run the lidar-to-camera calibration:

roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/PATH/TO/YOUR/camera_right_intrinsic.yaml image_src:=/camera/right/image_color

It is very important to change the path to your own path.

If working on the rugged, path is: /home/autodrive/camera_right_intrinsic_parameters.yaml
Something like this should pop up:

[screenshot]

Collect at least 25 corresponding point pairs from the camera and lidar. Collect a lidar point by clicking Publish Point in RViz and clicking on a lidar point on the chessboard. Then go to the autoware picture viewer and click the corresponding point of the chessboard, this time in the camera view. Repeat these steps a few times, move the chessboard to a different location, and keep repeating.

You should see the terminal looking like this (only 4 points in this example):

[screenshot]


Enter the points into the Python code

Download the Python script from the processing Git repo, in the high_level_fusion folder.
Enter each corresponding pair of points (x, y, z from the lidar and x, y from the camera) into the file lidar_to_camera.py.
Insert your coordinates into lidar_points and camera_points in order.


[IMPORTANT:] We get (lidarX, lidarY, lidarZ) from the RViz Publish Point tool. We need to input it into the Python code in this order: (-lidarY, -lidarZ, lidarX)

e.g. given (6.46856, -4.14217, 0.522161) from the lidar, in the Python code you should input: np.array([4.14217, -0.522161, 6.46856])
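As a minimal sketch of that reordering (the helper rviz_to_camera_axes is hypothetical and only illustrates the rule above; lidar_points and camera_points are the names used in the script, while the pixel values are placeholders):

import numpy as np

# Hypothetical helper, not part of lidar_to_camera.py: applies the
# reorder (lidarX, lidarY, lidarZ) -> (-lidarY, -lidarZ, lidarX).
def rviz_to_camera_axes(point):
    x, y, z = point
    return np.array([-y, -z, x])

# One corresponding pair; the (u, v) pixel values here are placeholders.
lidar_points = [rviz_to_camera_axes((6.46856, -4.14217, 0.522161))]
camera_points = [np.array([512.0, 384.0])]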

Execute the file, and it should return the projection matrix: a linear transform from world coordinates to image coordinates.

python lidar_to_camera.py

Expected output below:

[screenshot]
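For reference, the core computation such a script can perform is the classic direct linear transform (DLT): each lidar/pixel pair contributes two linear constraints on the 12 entries of P, and the least-squares solution is the singular vector for the smallest singular value. This is a sketch of the general technique under that assumption, not necessarily what lidar_to_camera.py does internally:

import numpy as np

def estimate_projection_matrix(lidar_points, camera_points):
    # lidar_points: (X, Y, Z) tuples in the camera frame;
    # camera_points: (u, v) pixel tuples. Needs at least 6 pairs.
    rows = []
    for (X, Y, Z), (u, v) in zip(lidar_points, camera_points):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The right singular vector for the smallest singular value is the
    # least-squares solution for the entries of P (defined up to scale).
    _, _, vt = np.linalg.svd(np.array(rows))
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]  # normalize the scale for readability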

Update Intrinsic Parameters

The intrinsic parameters stored in the Google Drive only work for the current (Aug 2019) camera-right.
If you want to follow these steps for a different camera, for example camera-left, then you need a different intrinsic parameter yaml file.

Run the cameras.
Go to the autoware ros package that you downloaded from the google drive.
Run in a sourced terminal:

rosrun autoware_camera_lidar_calibrator cameracalibrator.py --square 0.108 --size 8x6 image:=/camera/right/image_color

Move the checkerboard around within the field of view of the camera until the bars turn green.
Press the Calibrate button.
A new yaml file containing the new intrinsic parameters should be created in your home directory.
Use this new yaml file as the input for the above camera-to-lidar calibration.
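As a quick sanity check on the new file, you can load it and print the camera matrix. The key names below assume the common ROS camera_calibration YAML layout (a camera_matrix entry with a flat data list); the autoware tool may write different field names:

import numpy as np
import yaml

# File name and key names are assumptions; adjust to your actual output.
with open("camera_right_intrinsic.yaml") as f:
    calib = yaml.safe_load(f)

K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
print(K)  # 3x3 intrinsic matrix: focal lengths and principal point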

Comments:

A little out of date. (Posted by henry.wang at Jan 17, 2020 08:12)
