Camera Calibration with OpenCV

This notebook demonstrates camera calibration using OpenCV with a chessboard pattern. We’ll:

  1. Detect chessboard corners in multiple calibration images
  2. Calibrate the camera to find intrinsic parameters and distortion coefficients
  3. Undistort images using the calibration results
  4. Visualize the results to verify calibration quality

Run the code below to extract object points and image points for camera calibration.

from datasets import Dataset, Features, Value, Image
import os

# Load Hugging Face token from .env
from dotenv import load_dotenv
from huggingface_hub import login, hf_api

load_dotenv()
hf_token = os.getenv("HF_TOKEN")
if hf_token is None:
    raise RuntimeError("HF_TOKEN not found in .env file.")

# Login to Hugging Face Hub
login(token=hf_token)

calib_dir = "calibration_wide"

dataset_name = "pantelism/wide-camera-calibration"

try:
    hf_api.dataset_info(dataset_name)
    print(f"Dataset '{dataset_name}' already exists in Hugging Face Hub.")
except Exception as e:
    print(f"Dataset '{dataset_name}' was not found on the Hub ({e}). Creating and uploading...")
    image_files = sorted(
        [os.path.join(calib_dir, fname) for fname in os.listdir(calib_dir) if fname.lower().endswith(".jpg")]
    )

    data = {"image": image_files, "filename": [os.path.basename(f) for f in image_files]}

    features = Features(
        {
            "image": Image(),
            "filename": Value("string"),
        }
    )

    hf_dataset = Dataset.from_dict(data, features=features)

    # Upload to Hugging Face Hub
    hf_dataset.push_to_hub(dataset_name)
    print(f"Created and Uploaded Hugging Face dataset with {len(hf_dataset)} images.")
Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured.
Dataset 'pantelism/wide-camera-calibration' already exists in Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("pantelism/wide-camera-calibration")
print(dataset)
# Access images: dataset["train"][0]["image"]
DatasetDict({
    train: Dataset({
        features: ['image', 'filename'],
        num_rows: 44
    })
})
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import os
import warnings

warnings.filterwarnings("ignore")

# Set matplotlib to inline for Jupyter compatibility
%matplotlib inline

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)

# Prepare object points for 8x6 chessboard (inner corners)
# Object points are 3D points in real world space (z=0 for planar chessboard)
objp = np.zeros((6 * 8, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all images
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# Alternative: load calibration images from disk instead of the Hub dataset
# images = glob.glob(os.path.join(calib_dir, "GO*.jpg"))
images = dataset["train"]["image"]  # List of PIL images from the Hub dataset

print(f"Found {len(images)} calibration images")
OpenCV version: 4.11.0
NumPy version: 1.26.4
Found 44 calibration images
if len(images) == 0:
    print("No calibration images found! Please check the directory path.")
else:
    # Initialize visualization
    fig, axes = plt.subplots(2, 5, figsize=(20, 8))
    axes = axes.flatten()

    successful_detections = 0

    # Step through the images and search for chessboard corners
    # (only the first 10 images are processed and visualized here)
    for idx, img in enumerate(images[:10]):
        try:
            # Convert the PIL image to a NumPy array (RGB channel order)
            img = np.array(img)
            if img is None or img.size == 0:
                print(f"Warning: Could not read image {idx + 1}")
                continue

            gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

            # Find the chessboard corners with enhanced detection
            ret, corners = cv2.findChessboardCorners(
                gray,
                (8, 6),
                flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE,
            )

            # If found, refine corner positions and add to arrays
            if ret:
                objpoints.append(objp)

                # Refine corner positions for sub-pixel accuracy
                corners2 = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
                )
                imgpoints.append(corners2)
                successful_detections += 1

                # Draw corners for visualization
                img_with_corners = img.copy()
                cv2.drawChessboardCorners(img_with_corners, (8, 6), corners2, ret)

                # The array came from PIL, so it is already in RGB order for matplotlib
                img_rgb = img_with_corners

                # Display in subplot
                if idx < 10:
                    axes[idx].imshow(img_rgb)
                    axes[idx].set_title(f"Image {idx + 1}: ✓ Found", fontsize=10, color="green")
                    axes[idx].axis("off")
            else:
                # Show failed detection
                if idx < 10:
                    axes[idx].imshow(img)  # already RGB (from PIL)
                    axes[idx].set_title(f"Image {idx + 1}: ✗ Failed", fontsize=10, color="red")
                    axes[idx].axis("off")

        except Exception as e:
            print(f"Error processing {fname}: {e}")
            continue

    # Hide unused subplots
    for idx in range(len(images[:10]), 10):
        axes[idx].axis("off")

    plt.tight_layout()
    plt.suptitle(
        f"Chessboard Corner Detection Results\n"
        f"Successfully detected corners in {successful_detections}/{len(images)} images",
        fontsize=14,
        y=1.02,
    )
    plt.show()

    print(f"\nCalibration data summary:")
    print(f"- Total images processed: {len(images)}")
    print(f"- Successful corner detections: {successful_detections}")
    print(f"- Success rate: {successful_detections / len(images) * 100:.1f}%")

    if successful_detections < 10:
        print("Warning: Less than 10 successful detections. Consider adding more images for better calibration.")
    else:
        print("Ready for camera calibration!")


Calibration data summary:
- Total images available: 44
- Images processed: 10
- Successful corner detections: 10
- Success rate: 100.0%
Ready for camera calibration!

Camera Calibration and Undistortion

If the above cell ran successfully, you should now have objpoints and imgpoints needed for camera calibration.

What happens next:

  1. Camera Calibration: Use cv2.calibrateCamera() to find the parameters of the pinhole-plus-distortion model summarized after this list:

    • Camera matrix (K): Contains focal lengths (fx, fy) and principal point (cx, cy)
    • Distortion coefficients: Correct for lens distortion (radial and tangential)
    • Rotation/translation vectors: Camera pose for each calibration view
  2. Image Undistortion: Apply the calibration to remove lens distortion from images

  3. Results Visualization: Compare original vs. undistorted images
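
For reference, the model that cv2.calibrateCamera() fits is the standard pinhole camera with Brown-Conrady distortion (a compact summary of the OpenCV model, not an output of this notebook). A point in camera coordinates is first normalized (x = X/Z, y = Y/Z), then distorted, then mapped to pixels with the camera matrix:

x_dist = x * (1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x*y + p2*(r^2 + 2*x^2)
y_dist = y * (1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y^2) + 2*p2*x*y
u = fx * x_dist + cx,   v = fy * y_dist + cy,   with r^2 = x^2 + y^2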

Run the cell below to calibrate and test undistortion!

import pickle
import os
from pathlib import Path

# Check if we have calibration data
if "objpoints" not in locals() or "imgpoints" not in locals():
    print("Error: No calibration data found. Please run the previous cell first.")
elif len(objpoints) == 0:
    print("Error: No successful corner detections found. Cannot proceed with calibration.")
else:
    print(f"Starting calibration with {len(objpoints)} successful detections...")

    # Load test image (fall back to the first calibration image if it is missing)
    test_image_path = os.path.join(calib_dir, "test_image.jpg")
    if os.path.exists(test_image_path):
        img = cv2.imread(test_image_path)  # BGR, as returned by cv2.imread
    else:
        # Use the first calibration image (a PIL image from the dataset) as fallback
        img = cv2.cvtColor(np.array(images[0]), cv2.COLOR_RGB2BGR)
        print("Using the first calibration image as test image")
    if img is None:
        print("Error: Could not load test image")
    else:
        img_size = (img.shape[1], img.shape[0])  # (width, height)
        print(f"Image size: {img_size}")

        # Perform camera calibration
        print("Performing camera calibration...")
        ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)

        print(f"Calibration successful! RMS error: {ret:.4f}")

        # Print calibration results
        print("\n=== Camera Calibration Results ===")
        print("Camera Matrix (K):")
        print(mtx)
        print(f"\nFocal lengths: fx={mtx[0, 0]:.2f}, fy={mtx[1, 1]:.2f}")
        print(f"Principal point: cx={mtx[0, 2]:.2f}, cy={mtx[1, 2]:.2f}")
        print(f"Aspect ratio: {mtx[0, 0] / mtx[1, 1]:.4f}")

        print(f"\nDistortion Coefficients:")
        print(f"k1={dist[0, 0]:.6f}, k2={dist[0, 1]:.6f}, p1={dist[0, 2]:.6f}")
        print(f"p2={dist[0, 3]:.6f}, k3={dist[0, 4]:.6f}")

        # Test undistortion on the image
        print(f"\nApplying undistortion to test image...")
        dst = cv2.undistort(img, mtx, dist, None, mtx)

        # Save undistorted image
        output_path = os.path.join(calib_dir, "test_undist.jpg")
        cv2.imwrite(output_path, dst)
        print(f"Saved undistorted image to: {output_path}")

        # Save calibration results
        calib_data = {
            "mtx": mtx,
            "dist": dist,
            "rvecs": rvecs,
            "tvecs": tvecs,
            "rms_error": ret,
            "image_size": img_size,
            "num_images": len(objpoints),
        }

        pickle_path = os.path.join(calib_dir, "wide_dist_pickle.p")
        with open(pickle_path, "wb") as f:
            pickle.dump(calib_data, f)
        print(f"Saved calibration data to: {pickle_path}")

        # Visualize undistortion results
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))

        # Convert images from BGR to RGB for matplotlib
        img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        dst_rgb = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)

        ax1.imshow(img_rgb)
        ax1.set_title("Original Image", fontsize=16)
        ax1.axis("off")

        ax2.imshow(dst_rgb)
        ax2.set_title("Undistorted Image", fontsize=16)
        ax2.axis("off")

        plt.suptitle(f"Camera Calibration Results (RMS Error: {ret:.4f})", fontsize=18)
        plt.tight_layout()
        plt.show()

        # Calculate and display reprojection error statistics
        total_error = 0
        total_points = 0
        max_error = 0

        for i in range(len(objpoints)):
            # Project 3D points back to image plane
            projected_points, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
            # Calculate error
            error = cv2.norm(imgpoints[i], projected_points, cv2.NORM_L2) / len(projected_points)
            total_error += error
            total_points += len(projected_points)
            max_error = max(max_error, error)

        mean_error = total_error / len(objpoints)

        print(f"\n=== Reprojection Error Analysis ===")
        print(f"Mean reprojection error: {mean_error:.4f} pixels")
        print(f"Maximum reprojection error: {max_error:.4f} pixels")
        print(f"RMS reprojection error: {ret:.4f} pixels")

        if ret < 1.0:
            print("✓ Excellent calibration quality (RMS < 1.0)")
        elif ret < 2.0:
            print("✓ Good calibration quality (RMS < 2.0)")
        else:
            print("⚠ Consider recalibrating with more/better images (RMS >= 2.0)")

        print(f"\nCalibration complete! You can now use 'mtx' and 'dist' to undistort images.")
Starting calibration with 10 successful detections...
Image size: (1280, 960)
Performing camera calibration...
Calibration successful! RMS error: 0.4605

=== Camera Calibration Results ===
Camera Matrix (K):
[[560.41573382   0.         650.84218311]
 [  0.         561.65699486 498.55268095]
 [  0.           0.           1.        ]]

Focal lengths: fx=560.42, fy=561.66
Principal point: cx=650.84, cy=498.55
Aspect ratio: 0.9978

Distortion Coefficients:
k1=-0.244031, k2=0.075706, p1=0.000043
p2=0.000314, k3=-0.012066

Applying undistortion to test image...
Saved undistorted image to: calibration_wide/test_undist.jpg
Saved calibration data to: calibration_wide/wide_dist_pickle.p


=== Reprojection Error Analysis ===
Mean reprojection error: 0.0643 pixels
Maximum reprojection error: 0.0805 pixels
RMS reprojection error: 0.4605 pixels
✓ Excellent calibration quality (RMS < 1.0)

Calibration complete! You can now use 'mtx' and 'dist' to undistort images.
# Additional Analysis: Test calibration on multiple images
if "mtx" in locals() and "dist" in locals():
    print("=== Testing Calibration on Additional Images ===")

    # Test on a few more images
    test_images = images[:4]  # Test on first 4 images

    fig, axes = plt.subplots(2, 4, figsize=(16, 8))

    for idx, img in enumerate(test_images):
        img = np.array(img)
        if img is not None:
            # Apply undistortion
            undist_img = cv2.undistort(img, mtx, dist, None, mtx)

            # The dataset images came from PIL, so they are already RGB for matplotlib
            img_rgb = img
            undist_rgb = undist_img

            # Show original
            axes[0, idx].imshow(img_rgb)
            axes[0, idx].set_title(f"Original {idx + 1}", fontsize=10)
            axes[0, idx].axis("off")

            # Show undistorted
            axes[1, idx].imshow(undist_rgb)
            axes[1, idx].set_title(f"Undistorted {idx + 1}", fontsize=10)
            axes[1, idx].axis("off")

    plt.suptitle("Calibration Results on Multiple Images", fontsize=14)
    plt.tight_layout()
    plt.show()

    # Calculate field of view
    fx, fy = mtx[0, 0], mtx[1, 1]
    width, height = img_size

    fov_x = 2 * np.arctan(width / (2 * fx)) * 180 / np.pi
    fov_y = 2 * np.arctan(height / (2 * fy)) * 180 / np.pi

    print(f"\n=== Camera Parameters Summary ===")
    print(f"Image resolution: {width} x {height}")
    print(f"Focal length: fx={fx:.1f}, fy={fy:.1f} pixels")
    print(f"Field of view: {fov_x:.1f}° x {fov_y:.1f}°")
    print(f"Principal point: ({mtx[0, 2]:.1f}, {mtx[1, 2]:.1f})")

    # Distortion visualization
    fig, ax = plt.subplots(1, 1, figsize=(10, 6))

    # Create radial distance array
    r_max = np.sqrt((width / 2) ** 2 + (height / 2) ** 2)
    r = np.linspace(0, r_max, 1000)

    # Calculate the radial distortion factor. The k coefficients apply to normalized
    # image coordinates, so convert the pixel radius by dividing by the focal length.
    k1, k2, p1, p2, k3 = dist[0]
    r_norm = r / fx  # normalized radius (approximately r / focal length)
    distortion_factor = 1 + k1 * r_norm**2 + k2 * r_norm**4 + k3 * r_norm**6

    ax.plot(r, distortion_factor, "b-", linewidth=2, label="Radial distortion")
    ax.axhline(y=1, color="r", linestyle="--", alpha=0.7, label="No distortion")
    ax.set_xlabel("Radial distance from center (pixels)")
    ax.set_ylabel("Distortion factor")
    ax.set_title("Radial Distortion Profile")
    ax.legend()
    ax.grid(True, alpha=0.3)
    plt.show()

    print("✓ Camera calibration analysis complete!")

else:
    print("No calibration data available. Please run the calibration cell first.")
=== Testing Calibration on Additional Images ===


=== Camera Parameters Summary ===
Image resolution: 1280 x 960
Focal length: fx=560.4, fy=561.7 pixels
Field of view: 97.6° x 81.0°
Principal point: (650.8, 498.6)

✓ Camera calibration analysis complete!

OpenCV Functions for Camera Calibration Using Zhang's Method

initCameraMatrix2D

Finds an initial camera intrinsic matrix from 3D-2D point correspondences.

This function provides a good initial estimate for camera calibration by finding the camera matrix that minimizes reprojection error for a set of calibration points.

Parameters:

  • objectPoints: Vector of vectors of 3D calibration pattern points

  • imagePoints: Vector of vectors of corresponding 2D image points

  • imageSize: Image size used only to initialize camera intrinsic matrix

  • aspectRatio: If it’s zero, both fx and fy are estimated independently. Otherwise, fx = fy * aspectRatio
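
For example, a rough initial intrinsic matrix could be computed from the objpoints, imgpoints, and img_size collected in this notebook and then refined with calibrateCamera using CALIB_USE_INTRINSIC_GUESS (a sketch, not part of the calibration run above):

# Sketch: assumes objpoints, imgpoints and img_size from the cells above are in scope
K_init = cv2.initCameraMatrix2D(objpoints, imgpoints, img_size, aspectRatio=1.0)
print("Initial camera matrix estimate:\n", K_init)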

findChessboardCorners

Finds the positions of internal corners of the chessboard.

This function detects the positions of internal corners in a chessboard calibration pattern. The chessboard is one of the most commonly used calibration patterns.

Parameters:

  • image: Source chessboard view (8-bit grayscale or color)

  • patternSize: Number of inner corners per chessboard row and column

  • corners: Output array of detected corners

  • flags: Various operation flags:

    • CALIB_CB_ADAPTIVE_THRESH: Use adaptive thresholding

    • CALIB_CB_NORMALIZE_IMAGE: Normalize image gamma

    • CALIB_CB_FILTER_QUADS: Use additional criteria to filter out false quads

    • CALIB_CB_FAST_CHECK: Run fast check on image to quickly determine if pattern is present

Returns: True if all corners are found and properly ordered

findChessboardCornersSB

Finds the positions of internal corners of the chessboard using a sector-based approach.

This is an improved version of corner detection that’s more robust to lighting conditions and partial occlusions.

Parameters:

  • image: Source chessboard view

  • patternSize: Number of inner corners per chessboard row and column

  • corners: Output array of detected corners

  • flags: Operation flags (similar to findChessboardCorners)

  • meta: Optional output metadata about detected corners
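
A minimal sketch on the first calibration image loaded above (not executed in this notebook); the sector-based detector already returns sub-pixel-accurate corners, so no cornerSubPix refinement step is needed:

# Sketch: sector-based detection on the first calibration image
gray_sb = cv2.cvtColor(np.array(images[0]), cv2.COLOR_RGB2GRAY)
found_sb, corners_sb = cv2.findChessboardCornersSB(
    gray_sb, (8, 6), flags=cv2.CALIB_CB_EXHAUSTIVE | cv2.CALIB_CB_ACCURACY
)
print("Sector-based detector found pattern:", found_sb)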

estimateChessboardSharpness

Estimates the sharpness of a detected chessboard.

This function can be used to assess the quality of calibration images by measuring how sharp the chessboard pattern appears.

Parameters:

  • image: Source chessboard view

  • patternSize: Size of the chessboard pattern

  • corners: Chessboard corners detected by findChessboardCorners

  • rise_distance: Rise distance 0.8 means 10% … 90% of the final signal strength

  • vertical: By default, the function checks the vertical edges of the chessboard
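
A sketch of how this might be used to screen calibration images, assuming corners were detected with findChessboardCorners as in the detection cell and that the Python binding returns the sharpness statistics (average edge rise width in pixels, lower is sharper) followed by the per-edge array:

# Sketch: assess sharpness of one calibration view (gray and corners2 from the detection cell)
stats, per_edge = cv2.estimateChessboardSharpness(gray, (8, 6), corners2)
print(f"Average edge rise width: {stats[0]:.2f} px (lower is sharper)")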

drawChessboardCorners

Renders the detected chessboard corners.

This function is useful for visualizing detected corners and verifying the correctness of corner detection.

Parameters:

  • image: Destination image (color or grayscale, 8-bit)

  • patternSize: Number of inner corners per chessboard row and column

  • corners: Array of detected corners

  • patternWasFound: Parameter indicating whether complete board was found

drawFrameAxes

Draw axes of the world/object coordinate system from pose estimation.

This function draws the 3D coordinate system axes to visualize object pose, commonly used to verify pose estimation results.

Parameters:

  • image: Input/output image

  • cameraMatrix: Input camera intrinsic matrix

  • distCoeffs: Input distortion coefficients

  • rvec: Rotation vector

  • tvec: Translation vector

  • length: Length of the drawn axes in the same unit as tvec

  • thickness: Line thickness of the drawn axes
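
A sketch combining solvePnP with drawFrameAxes to visualize the pose of the first calibration view (this assumes the first image was among the successful detections, as it was in this run, and that mtx and dist come from the calibration cell):

# Sketch: estimate the pose of one view and draw its coordinate axes
ok, rvec, tvec = cv2.solvePnP(objpoints[0], imgpoints[0], mtx, dist)
vis = cv2.cvtColor(np.array(images[0]), cv2.COLOR_RGB2BGR)  # BGR so the axis colors follow the usual OpenCV convention
cv2.drawFrameAxes(vis, mtx, dist, rvec, tvec, 3.0, 3)  # axis length of 3 chessboard squares
plt.imshow(cv2.cvtColor(vis, cv2.COLOR_BGR2RGB))
plt.axis("off")
plt.show()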

findCirclesGrid

Finds centers in the grid of circles.

This function detects a grid of circles pattern, which is an alternative to chessboard patterns for camera calibration.

Parameters:

  • image: Grid view of circles (8-bit grayscale or color)

  • patternSize: Number of circles per row and column

  • centers: Output array of detected centers

  • flags: Various operation flags:

    • CALIB_CB_SYMMETRIC_GRID: Grid is symmetric

    • CALIB_CB_ASYMMETRIC_GRID: Grid is asymmetric

    • CALIB_CB_CLUSTERING: Use k-means clustering to find grid

Returns: True if grid is found
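
A minimal sketch for a hypothetical 4x11 asymmetric circle-grid image (circle_img is a placeholder; the dataset used in this notebook contains a chessboard, not a circle grid):

# Sketch: detect an asymmetric circle grid in a hypothetical grayscale image `circle_img`
found, centers = cv2.findCirclesGrid(circle_img, (4, 11), flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
if found:
    cv2.drawChessboardCorners(circle_img, (4, 11), centers, found)  # also works for circle grids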

calibrateCamera

Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

This is the main camera calibration function that estimates all camera parameters from multiple views of a known calibration pattern.

Parameters:

  • objectPoints: Vector of vectors of 3D calibration pattern points

  • imagePoints: Vector of vectors of corresponding 2D image points

  • imageSize: Size of the image used only to initialize camera intrinsic matrix

  • cameraMatrix: Input/output 3x3 camera intrinsic matrix

  • distCoeffs: Input/output vector of distortion coefficients

  • rvecs: Output vector of rotation vectors for each pattern view

  • tvecs: Output vector of translation vectors for each pattern view

  • flags: Different flags for calibration:

    • CALIB_USE_INTRINSIC_GUESS: cameraMatrix contains valid initial values

    • CALIB_FIX_PRINCIPAL_POINT: Principal point is not changed

    • CALIB_FIX_ASPECT_RATIO: Fix fx/fy ratio

    • CALIB_ZERO_TANGENT_DIST: Tangential distortion coefficients are set to zeros

    • CALIB_FIX_K1, CALIB_FIX_K2, etc.: Fix specific distortion coefficients

  • criteria: Termination criteria for iterative optimization algorithm

Returns: Overall RMS re-projection error
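
As a sketch of how the flags and criteria arguments can be combined (reusing objpoints, imgpoints, and img_size from this notebook; here tangential distortion is forced to zero and k3 is fixed at zero):

# Sketch: constrained calibration with explicit termination criteria
flags = cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K3
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
rms, mtx2, dist2, rvecs2, tvecs2 = cv2.calibrateCamera(
    objpoints, imgpoints, img_size, None, None, flags=flags, criteria=criteria
)
print(f"Constrained calibration RMS: {rms:.4f}")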