Intrinsic camera calibration in ROS Melodic

Pranjal Paul
6 min read · Apr 13, 2021


Before starting the task, it is better to get some prior knowledge about what we are going to do. If you are directly or indirectly working with Computer Vision, I recommend reading this article by OpenCV and watching the photogrammetry lectures by Cyrill Stachniss.

Resources used -

  1. Logitech C170 5MP XVGA Webcam
  2. Chessboard pattern with 9x7 squares, each 10 mm wide
  3. Ubuntu 18.04 + ROS Melodic
  4. 2 Cups of Tea :D

Prerequisites

The calibration process estimates a camera's lens and image-sensor parameters. These can then be used to correct lens distortion, to measure the size of real-world objects in world coordinates through the camera frame, and so on.

In other words, it is about finding the true parameters of your camera, so that the information extracted from its images carries as little error as possible.

By calibration, we mean calibration of the following parameters-

  1. Intrinsic parameters- the camera’s internal parameters, such as focal length, principal point (optical center), and distortion coefficients.
  2. Extrinsic parameters- the camera’s position and orientation with respect to global coordinates.

So, we are basically looking for an accurate relationship between 3D real-world points and their corresponding 2D projections (pixels) in an image.

However, in this article, we will calibrate only intrinsic parameters.
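
For reference, the quantities above are tied together by the standard pinhole projection model (standard notation, as in the OpenCV documentation; a 3D world point (X, Y, Z) maps to a pixel (u, v)):

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K \text{ (intrinsics)}}
\underbrace{\begin{bmatrix} R & t \end{bmatrix}}_{\text{extrinsics}}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

Here f_x and f_y are the focal lengths in pixels and (c_x, c_y) is the principal point. Intrinsic calibration estimates K together with the distortion coefficients; R and t are the extrinsics we are not calibrating here.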

[Suggested] Head over to this link for more technical understanding- https://learnopencv.com/camera-calibration-using-opencv/

Chessboard Pattern

To perform the process, we need a pattern that holds key information. There are multiple patterns available, each with unique properties and purposes depending on the calibration algorithm, such as circle grids, asymmetric circle grids, and ChArUco targets, with the chessboard pattern being the most popular.

Why Chessboard pattern ?

As said, each pattern holds unique characteristics, so -

  1. Chessboard corners lie at “line intersections”, which are robust and accurate to estimate: fitting line equations to the square edges yields sub-pixel accuracy.
  2. Each vertex can be localized easily due to the sharp change in gradient between the black and white squares.

I recommend reading this Stack Exchange answer for a better understanding- https://dsp.stackexchange.com/questions/24734/camera-calibration-why-chessboards

Generating Pattern

For the calibration process with a checkerboard pattern, we seek two pieces of information-

  1. Size of each square (in mm)
  2. Number of interior vertices, not the number of squares! A board with 9x7 squares, like ours, has 8x6 interior vertices (see the sketch right after this list).
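
To make the vertex convention concrete, here is a minimal OpenCV sketch in Python (the image path is a placeholder; the 8x6 count corresponds to the 9x7-square board listed above):

import cv2

# Interior vertices of a 9x7-square board: (9-1) x (7-1) = 8 x 6
pattern_size = (8, 6)

img = cv2.imread("board.jpg")  # placeholder: a photo of your printed board
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# found is True only if all 8 x 6 = 48 interior corners were detected
found, corners = cv2.findChessboardCorners(gray, pattern_size)
print(found, corners.shape if found else None)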

Though it is very easy to generate a chessboard pattern, I’m still too lazy to do it. So, assuming you feel the same, you can download ready-made checkerboard patterns from this collection- https://markhedleyjones.com/projects/calibration-checkerboard-collection

OR, you can generate other pattern types from calib.io- https://calib.io/pages/camera-calibration-pattern-generator

Optimal Chessboard Size

But how do we know what size fits our purpose? Well, it can be figured out in two steps-

  1. Decide the volume of space that you want calibrated- this space can be visible from one or more cameras. Let’s say the dimensions are 0.4 x 0.5 x 0.6 meters from the camera frame.
  2. Choose a pattern size that is half the minimum dimension of that volume: min(0.4, 0.5, 0.6) = 0.4 m, so the checkerboard size will be 0.4 / 2 = 0.2 meters. Why half? The left-over space gives you room to move the pattern into different positions and orientations within the camera’s field of view (a quick sketch of this rule follows the list).
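
The same rule of thumb as a tiny Python sketch (the dimensions are the example values above):

# Rule of thumb: pattern edge = half the smallest dimension of the calibrated volume
def pattern_edge_m(volume_dims_m):
    """volume_dims_m: (width, height, depth) of the space to calibrate, in meters."""
    return min(volume_dims_m) / 2.0

print(pattern_edge_m((0.4, 0.5, 0.6)))  # -> 0.2 meters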

So, does that mean the choice of pattern size depends on how far from the camera (near or far) we want to calibrate? Yes, absolutely! The farther you go, the smaller the pattern appears, and hence you will have to shift to a larger size. If you started with an A4-size print, you might need to move up to A3 or maybe A2.

Okay, that was enough theory. Now we shall start our actual task, which begins with cloning the GitHub repo.

ROS Package

There is a camera_calibration package under the image_pipeline stack that supports most camera drivers; however, for other USB cameras, you need to clone the usb_cam repo and build it:

$ cd ~
$ mkdir -p usb_ws/src
$ cd usb_ws/src
$ git clone https://github.com/ros-drivers/usb_cam.git
$ cd .. && catkin_make
$ source devel/setup.bash

STEP-1:

Open a terminal and launch usb_cam-test.launch (source the workspace first so ROS can find the package).

$ cd ~/usb_ws
$ source devel/setup.bash
$ roslaunch usb_cam usb_cam-test.launch

You should now see a window open showing the stream published on the topic /usb_cam/image_raw

Fig 1. Camera window
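
If you would rather consume the stream from your own node than through the test launch window, here is a minimal rospy + cv_bridge sketch (the node and window names are placeholders):

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS Image message to an OpenCV BGR array and display it
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("usb_cam", frame)
    cv2.waitKey(1)

rospy.init_node("image_viewer")
rospy.Subscriber("/usb_cam/image_raw", Image, on_image)
rospy.spin()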

The default device is set to /dev/video0. If you want to use another camera, open the launch file:

$ nano ~/usb_ws/src/usb_cam/launch/usb_cam-test.launch

And edit /dev/video0 to the other camera’s device- /dev/videoX, X = 0, 1, 2, … (you can list the available devices with ls /dev/video*)

<param name="video_device" value="/dev/video4" />

Before Calibration

For an uncalibrated camera, default (zeroed) parameters are assigned. The following parameters will be calibrated:

$ rostopic echo /usb_cam/camera_info -n1
header: 
  seq: 1652
  stamp: 
    secs: 1618307015
    nsecs: 383231835
  frame_id: "usb_cam"
height: 480
width: 640
distortion_model: ''
D: []
K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
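
The same check can be done from Python, if you prefer- a minimal rospy sketch (the node name is a placeholder):

import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("camera_info_probe")

# Block until a single CameraInfo message arrives on the topic echoed above
msg = rospy.wait_for_message("/usb_cam/camera_info", CameraInfo)
print("K =", msg.K)  # 3x3 intrinsic matrix, row-major; all zeros before calibration
print("D =", msg.D)  # distortion coefficients; empty before calibration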

Start Calibration Window

STEP-2:

Keep the camera window OPEN. In a new terminal, run the calibration script on the same topic, /usb_cam/image_raw:

$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.010 image:=/usb_cam/image_raw camera:=/usb_cam --no-service-check

where,

8x6 = number of interior vertices (not the squares)- our 9x7-square board has 8x6 vertices

0.010 = size of each square in meters- 10 mm here; note that --square expects meters, not millimeters

So, now you have 2 windows open on your screen: one shows the camera RGB input and the other the calibration GUI, which looks like this-

Fig 2. Calibration Window

Note that “Calibrate”, “Save” and “Commit” are disabled.

Run Calibration

Now, to start calibrating your camera, hold the chessboard pattern in front of it.

[HIGHLY RECOMMENDED] Stick the pattern to a piece of cardboard or another hard, flat surface. This keeps all the squares in a single plane, preventing the paper from bending and distorting the pattern.

You will see colored lines drawn through the vertices and, in parallel in the terminal window, you should see samples being registered for each unique X, Y, Size, and Skew.

Fig 3. Generating samples

Now, as we noted before, the “Calibrate”, “Save” and “Commit” buttons were disabled; after the successful collection of 40 samples, the “Calibrate” button becomes active.

STEP-3:

Click on “Calibrate” and wait for some time. In the terminal, you should see that the calibrated values have been generated, but they aren’t saved yet!

Fig 4. Activated CALIBRATE button after 40 samples

STEP-4:

Click on “Commit”. This will generate the calibration file head_camera.yaml under the hidden folder ~/.ros/camera_info/ (“Save”, by contrast, writes the raw samples and results to /tmp/calibrationdata.tar.gz).

Do not skip “Commit”. Without it, the file won’t be generated.

You can check the generated file; it should look similar to this:

$ cat ~/.ros/camera_info/head_camera.yaml
image_width: 640
image_height: 480
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [729.1016874494248, 0, 277.5382540847794, 0, 728.3444988325477, 218.6703499234103, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.00898402793551297, -0.04505947514194832, -0.006922247877124037, -0.01114510192485125, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [724.7064819335938, 0, 271.8295085543687, 0, 0, 730.1134643554688, 216.3038060841536, 0, 0, 0, 1, 0]
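
To actually use the result, here is a minimal Python sketch that loads the committed file and undistorts a frame with OpenCV (the frame path is a placeholder):

import os

import cv2
import numpy as np
import yaml

# Load the calibration that "Commit" wrote to ~/.ros/camera_info/
with open(os.path.expanduser("~/.ros/camera_info/head_camera.yaml")) as f:
    calib = yaml.safe_load(f)

K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
D = np.array(calib["distortion_coefficients"]["data"])

img = cv2.imread("frame.jpg")  # placeholder: any frame captured by this camera
undistorted = cv2.undistort(img, K, D)
cv2.imwrite("frame_undistorted.jpg", undistorted)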

If you found it useful, give a Clap and also share!
