Eye-in-Hand Camera Calibration Guide (Using RGB + Pose Data)
Objective
The goal of this calibration is to determine the intrinsic and extrinsic parameters of an RGB camera that is mounted on a robotic arm (eye-in-hand configuration).
- Intrinsic Parameters: Describe the internal characteristics of the camera (e.g., focal length, principal point, distortion).
- Extrinsic Parameters: Describe the spatial relationship (rotation and translation) between the camera and the robot end-effector.
Dataset Description
- Data Format: Each calibration sample includes:
- One RGB image
- One 6-DOF pose of the robot end-effector
- Total Samples: 37
- Pose Representation:
- Translation: $ (x, y, z) $, in millimeters
- Rotation: $ (r_x, r_y, r_z) $, in radians
- Rotation Order: XYZ (fixed angles)
- Coordinate System: Right-hand coordinate system
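Given this pose representation, each sample's end-effector pose can be converted to a 4x4 homogeneous transform before calibration. The sketch below is a minimal example, assuming the recorded pose describes the end-effector in the robot base frame and that SciPy is available; the numeric values at the end are placeholders, not real data.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Convert (x, y, z) in mm and (rx, ry, rz) in radians (XYZ fixed angles)
    into a 4x4 base->end-effector transform with translation in meters."""
    T = np.eye(4)
    # Lowercase 'xyz' in SciPy means extrinsic (fixed-axis) rotations, matching "XYZ fixed angles".
    T[:3, :3] = R.from_euler('xyz', [rx, ry, rz]).as_matrix()
    T[:3, 3] = np.array([x, y, z]) / 1000.0  # mm -> m, to match board points expressed in meters
    return T

# Placeholder values for one recorded sample (not real data)
T_base_ee = pose_to_matrix(512.3, -104.7, 389.0, 0.01, -1.57, 3.10)
```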
Calibration Setup
- Camera Mounting: Eye-in-hand (camera is fixed on the robot end-effector)
- Target: Usually a checkerboard or AprilTag board fixed in the world coordinate frame
- Assumption: The transformation between the robot base and the calibration target is static
Calibration Procedure
Step 1: Image and Pose Collection
- Move the robotic arm to various poses while ensuring the target is visible in the camera view.
- For each pose:
- Record the RGB image
- Record the 6-DOF pose of the robot end-effector
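As a rough sketch of this collection loop: the `camera.read()` and `robot.get_tcp_pose()` interfaces below are hypothetical stand-ins for whatever camera driver and robot API are actually in use; only the file layout chosen here is illustrative, and the 37-sample count comes from the dataset description above.

```python
import json
import cv2

# `camera` and `robot` are hypothetical handles; replace them with your actual drivers.
for i in range(37):  # 37 samples, as listed in the dataset description
    input(f"Jog the arm to pose {i} with the target in view, then press Enter...")
    frame = camera.read()          # RGB image (hypothetical API)
    pose = robot.get_tcp_pose()    # (x, y, z, rx, ry, rz) as described above (hypothetical API)
    cv2.imwrite(f"calib_images/{i:03d}.png", frame)
    with open(f"calib_poses/{i:03d}.json", "w") as f:
        json.dump({"xyz_mm": list(pose[:3]), "rxyz_rad": list(pose[3:])}, f)
```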
Step 2: Detect Features in Image
- Detect corner points or tag centers (e.g., checkerboard corners) in each image.
- These 2D image points correspond to known 3D points on the target.
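For a checkerboard target, this step might look like the following OpenCV sketch. The board geometry (9x6 inner corners, 25 mm squares) and the file name are assumptions; adjust them to the actual target.

```python
import cv2
import numpy as np

pattern_size = (9, 6)   # inner corners per row/column (assumed board)
square_size = 0.025     # square edge length in meters (assumed)

# Known 3D corner coordinates in the target frame (z = 0 on the board plane)
object_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
object_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

gray = cv2.cvtColor(cv2.imread("calib_images/000.png"), cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
```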
Step 3: Estimate Intrinsic Parameters
- Use OpenCV or similar toolboxes to compute:
- Focal lengths $ f_x, f_y $
- Principal point $ p_x, p_y $
- Lens distortion coefficients
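A minimal intrinsic-calibration sketch with OpenCV, reusing `pattern_size` and `object_points` from the Step 2 snippet and the assumed `calib_images/` folder:

```python
import glob
import cv2

obj_points_list, img_points_list = [], []   # matched 3D board points / 2D image corners
for path in sorted(glob.glob("calib_images/*.png")):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points_list.append(object_points)
        img_points_list.append(corners)

# Image size is passed as (width, height)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points_list, img_points_list, gray.shape[::-1], None, None)
print("Reprojection RMS error (px):", rms)
print("K =\n", K, "\ndistortion =", dist.ravel())
```

The per-image `rvecs`/`tvecs` returned here are the board-to-camera poses, which become the `R_target2cam`/`t_target2cam` inputs for the hand-eye step.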
Step 4: Estimate Extrinsic Parameters
- Solve the Hand-Eye Calibration problem:
- Input: Robot poses + camera poses relative to the calibration board
- Output: Transformation from the camera frame to the robot end-effector (flange) frame, $ \mathbf{T}_{\text{camera}}^{\text{end-effector}} $
- Common method: Tsai-Lenz or a dual-quaternion-based approach (a sketch using OpenCV follows below)
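One way to solve this step is OpenCV's `cv2.calibrateHandEye` (available since OpenCV 4.1). The sketch below assumes `ee_poses` is the list of 4x4 base-to-end-effector transforms built with the pose-conversion helper above, kept only for the images whose corners were actually detected, and reuses `rvecs`/`tvecs` from Step 3; all translations must share one unit (meters here).

```python
import cv2
import numpy as np

# End-effector poses in the robot base frame (only for images with successful detections)
R_gripper2base = [T[:3, :3] for T in ee_poses]
t_gripper2base = [T[:3, 3] for T in ee_poses]

# Board poses in the camera frame, from cv2.calibrateCamera in Step 3
R_target2cam = [cv2.Rodrigues(rvec)[0] for rvec in rvecs]   # rotation vector -> 3x3 matrix
t_target2cam = list(tvecs)

# Tsai-Lenz method; cv2.CALIB_HAND_EYE_DANIILIDIS is the dual-quaternion alternative
R_cam2ee, t_cam2ee = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI)

# Assemble the 4x4 camera-to-end-effector transform
T_cam2ee = np.eye(4)
T_cam2ee[:3, :3] = R_cam2ee
T_cam2ee[:3, 3] = t_cam2ee.ravel()
```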
Output
- Intrinsic Matrix $ K $:
  $$
  K = \begin{bmatrix} f_x & 0 & p_x \\ 0 & f_y & p_y \\ 0 & 0 & 1 \end{bmatrix}
  $$
- Distortion Coefficients: $ k_1, k_2, p_1, p_2, k_3 $
- Extrinsic Matrix $ [\mathbf{R} \mid \mathbf{t}] $: rotation and translation of the camera frame relative to the robot end-effector frame, i.e., $ \mathbf{T}_{\text{camera}}^{\text{end-effector}} $ from Step 4
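As a sanity check on how the result is used: a point observed in the camera frame can be mapped into the robot base frame by chaining the calibrated transform with the current end-effector pose. The variable names below refer to the earlier sketches and are assumptions, not part of the calibration output itself.

```python
import numpy as np

# T_base_ee: current base->end-effector pose; T_cam2ee: hand-eye result from Step 4
p_cam = np.array([0.10, 0.02, 0.45, 1.0])   # homogeneous point in the camera frame (meters), placeholder
p_base = T_base_ee @ T_cam2ee @ p_cam       # same point expressed in the robot base frame
```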
Notes
- Accuracy improves with more diverse poses and clear feature detection
- Avoid positions with poor visibility or insufficient viewpoint change
- Double-check unit consistency (e.g., mm vs m, degrees vs radians)
Recommended Libraries
- OpenCV (`cv2.calibrateCamera`, `cv2.calibrateHandEye`) for intrinsic and hand-eye calibration
- NumPy/SciPy for the pose and transform arithmetic shown in the sketches above
Application
This calibration result can be used in:
- 3D vision-based grasping
- Visual servoing
- Pose estimation
- Robotic SLAM and mapping