New CodeProject.AI License Plate Recognition (YOLO11) Module

I see 7 people downloaded the appsettings.json file; I am curious how the new module is working for everyone.
I've had it going for about 24 hours and it is working great. I think it is more accurate. It is also picking up everything on a vehicle, such as phone numbers, advertising, etc. But that is good!!
 
Watching this thread closely. No free time currently, but as soon as I have some, sometime in November, I'll be all over this!
 
Below are the inner workings of the ALPRYOLO11 module.

ALPRYOLO11 Module - High-Level Pipeline Description

Overview


The ALPRYOLO11 module is an Automatic License Plate Recognition (ALPR) system that detects and reads license plates from images using YOLO11-based ONNX models. The module integrates with CodeProject.AI Server and provides license plate detection, character recognition, state classification, and optional vehicle detection.

Architecture Layers

1. Entry Point Layer (alpr_adapter.py)

  • ALPRAdapter class extends CodeProject.AI's ModuleRunner
  • Handles HTTP requests to /v1/vision/alpr endpoint
  • Manages thread-safe request processing with locks
  • Detects and configures hardware acceleration (CUDA, MPS, DirectML)
  • Initializes the core ALPR system
  • Tracks statistics (plates detected, histogram of license plates)
  • Manages module lifecycle (initialization, cleanup, status reporting)
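
A minimal sketch of the adapter's thread-safe request handling is below. The ModuleRunner hooks and the process_image() call are assumptions based on the description above, not the module's actual signatures.

Code:
# Illustrative sketch of thread-safe request handling in the adapter.
# The process_image() call and statistics fields are assumptions, not the
# module's actual API.
import threading
from collections import Counter

class ALPRAdapterSketch:
    def __init__(self, alpr_system):
        self._alpr = alpr_system           # core ALPR pipeline built at initialization
        self._lock = threading.Lock()      # serializes access to shared ONNX sessions
        self._plates_detected = 0          # running statistics
        self._plate_histogram = Counter()  # plate string -> occurrence count

    def process(self, image, min_confidence=0.4):
        # Only one request at a time runs the pipeline and updates statistics.
        with self._lock:
            result = self._alpr.process_image(image, min_confidence)  # hypothetical call
            plates = result.get("predictions", [])
            self._plates_detected += len(plates)
            self._plate_histogram.update(p.get("plate", "") for p in plates)
            return result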

2. Configuration Layer (alpr/config.py)

  • ALPRConfig dataclass loads settings from environment variables
  • Configures feature flags (state detection, vehicle detection, debug mode)
  • Sets confidence thresholds for all models
  • Manages model paths and validates file existence
  • Defines character organization parameters
  • Validates all configuration values
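
A minimal sketch of how such a config dataclass can pull settings from environment variables with defaults and basic validation. The variable names follow the Configuration Parameters section later in this post, but the real ALPRConfig may differ in detail.

Code:
# Illustrative config loader; the actual ALPRConfig may differ.
import os
from dataclasses import dataclass

def _env_bool(name, default):
    return os.getenv(name, str(default)).strip().lower() in ("1", "true", "yes")

def _env_float(name, default):
    return float(os.getenv(name, str(default)))

@dataclass
class ALPRConfigSketch:
    enable_state_detection: bool
    enable_vehicle_detection: bool
    plate_aspect_ratio: float
    corner_dilation_pixels: int
    plate_detector_confidence: float

    @classmethod
    def from_env(cls):
        cfg = cls(
            enable_state_detection=_env_bool("ENABLE_STATE_DETECTION", False),
            enable_vehicle_detection=_env_bool("ENABLE_VEHICLE_DETECTION", False),
            plate_aspect_ratio=_env_float("PLATE_ASPECT_RATIO", 4.0),
            corner_dilation_pixels=int(os.getenv("CORNER_DILATION_PIXELS", "5")),
            plate_detector_confidence=_env_float("PLATE_DETECTOR_CONFIDENCE", 0.45),
        )
        # Basic validation, mirroring the "validates all configuration values" step.
        if not 0.0 <= cfg.plate_detector_confidence <= 1.0:
            raise ValueError("PLATE_DETECTOR_CONFIDENCE must be between 0.0 and 1.0")
        if cfg.plate_aspect_ratio <= 0:
            raise ValueError("PLATE_ASPECT_RATIO must be positive")
        return cfg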

3. Core Processing Layer (alpr/core.py)

  • ALPRSystem orchestrates the complete detection pipeline
  • Coordinates all detector and classifier components
  • Manages the multi-stage processing workflow
  • Handles debug image generation at each stage
  • Converts results to API-compatible format

Processing Pipeline

Stage 1: Image Input & Preprocessing


Code:
HTTP Request → ALPRAdapter.process() → Image extraction → RGB to BGR conversion

  • Receives image from API request
  • Converts PIL image to OpenCV numpy array (BGR format)
  • Saves input debug image (optional)
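
For reference, the PIL-to-OpenCV conversion in this stage typically looks like the following (a generic sketch, not necessarily the module's exact code):

Code:
# Convert a PIL RGB image to the BGR numpy array OpenCV expects.
import numpy as np
import cv2
from PIL import Image

def pil_to_bgr(pil_image: Image.Image) -> np.ndarray:
    rgb = np.array(pil_image.convert("RGB"))     # HxWx3, RGB channel order
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  # reorder channels for OpenCV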

Stage 2: License Plate Detection (YOLO/plate_detector.py)

Code:
Image → PlateDetector.detect() → Day/Night plate bounding boxes

  • Detects license plates using plate_detector.onnx model
  • Returns separate lists for day plates and night plates
  • Outputs bounding box corners for each detected plate
  • Applies confidence threshold filtering
  • Saves plate detection debug images (optional)

Stage 3: Plate Extraction & Transformation

Code:
Plate corners → four_point_transform() → Cropped & warped plate image

  • Uses 4-point perspective transform to extract plate region
  • Corrects for viewing angle and skew
  • Applies configured aspect ratio (default 4.0)
  • Dilates corners by configured pixels (default 5px)
  • Saves cropped plate to alpr.jpg
  • Saves cropped plate debug image (optional)
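
The extraction itself is the standard OpenCV perspective warp. A simplified sketch (corner ordering and dilation omitted) might look like:

Code:
# Simplified four-point perspective transform; assumes corners are already
# ordered top-left, top-right, bottom-right, bottom-left.
import numpy as np
import cv2

def four_point_transform(image, corners, aspect_ratio=4.0, out_height=80):
    out_width = int(out_height * aspect_ratio)      # e.g. 320x80 for ratio 4.0
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0],
                    [out_width - 1, 0],
                    [out_width - 1, out_height - 1],
                    [0, out_height - 1]], dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography
    return cv2.warpPerspective(image, matrix, (out_width, out_height))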

Stage 4: State Classification (YOLO/state_classifier.py)

Code:
Plate image → StateClassifier.classify() → US state label

  • Only for day plates when enable_state_detection=True
  • Uses state_classifier.onnx model
  • Identifies US state from license plate design
  • Returns state code and confidence score
  • Saves state classification debug image (optional)

Stage 5: Character Detection (YOLO/character_detector.py)

Code:
Plate image → CharacterDetector.detect() → Character bounding boxes

  • Uses char_detector.onnx model
  • Detects individual character locations on plate
  • Returns bounding boxes for each character
  • Applies confidence threshold filtering

Stage 6: Character Classification (YOLO/char_classifier_manager.py)

Code:
Character crops → CharClassifierManager.classify_characters() → Character labels

  • Uses char_classifier.onnx model
  • Recognizes each character (A-Z, 0-9)
  • Returns character label and confidence for each detection
  • Supports multiple prediction alternatives

Stage 7: Character Organization (YOLO/char_organizer.py)

Code:
Character detections → CharacterOrganizer.organize() → Sorted character sequence

  • Critical stage for correct license plate reading
  • Implements deterministic, stable multi-key sorting
  • Handles multiple plate layouts:
    • Single-line horizontal (e.g., "ABC1234")
    • Multi-line plates (e.g., "ABC" over "1234")
    • Vertical characters (e.g., "ABC123MD")
    • Mixed layouts
    • Overlapping characters
  • Sorts purely by spatial coordinates (X, Y)
  • Does NOT use YOLO detection order (which is arbitrary)
  • Filters invalid characters by height threshold
  • Generates reading order debug visualizations (optional):
    • Numbered sequence image
    • Arrow flow diagram
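
As a rough illustration of the spatial-sorting idea only (the module's actual organizer also handles vertical characters, overlaps, and clustering): group characters into lines by vertical center, then read each line left to right.

Code:
# Rough sketch of line-aware spatial ordering for character boxes.
# Each detection is (x_min, y_min, x_max, y_max, label).
def organize_characters(detections, line_separation_threshold=0.6):
    if not detections:
        return []
    # Sort by vertical center first so line grouping is stable.
    dets = sorted(detections, key=lambda d: (d[1] + d[3]) / 2.0)
    avg_height = sum(d[3] - d[1] for d in dets) / len(dets)
    lines, current = [], [dets[0]]
    for det in dets[1:]:
        prev_cy = (current[-1][1] + current[-1][3]) / 2.0
        cy = (det[1] + det[3]) / 2.0
        # Start a new line when the vertical gap exceeds a fraction of the char height.
        if cy - prev_cy > line_separation_threshold * avg_height:
            lines.append(current)
            current = [det]
        else:
            current.append(det)
    lines.append(current)
    # Within each line, read left to right by x_min.
    ordered = []
    for line in lines:
        ordered.extend(sorted(line, key=lambda d: d[0]))
    return [d[4] for d in ordered]

# Example: "ABC" over "1234" comes out as A, B, C, 1, 2, 3, 4.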

Stage 8: License Plate Assembly

Code:
Sorted characters → Concatenate → Final license plate string

  • Combines organized characters into final license number
  • Generates top-N alternative readings
  • Calculates overall confidence score
  • Saves character detection debug image (optional)
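
A toy version of this assembly step: concatenate the best label per position and derive alternatives by swapping in each position's runner-up prediction. The real module may combine alternatives and score confidence differently.

Code:
# Toy plate assembly: each position has [(label, confidence), ...] alternatives
# sorted best-first. Overall confidence here is the mean of per-char scores.
def assemble_plate(per_char_predictions, top_n=3):
    best = "".join(p[0][0] for p in per_char_predictions)
    best_conf = sum(p[0][1] for p in per_char_predictions) / len(per_char_predictions)
    candidates = [{"plate": best, "confidence": round(best_conf, 3)}]
    # Alternatives: replace one position at a time with its runner-up.
    for i, preds in enumerate(per_char_predictions):
        if len(preds) > 1:
            alt = best[:i] + preds[1][0] + best[i + 1:]
            alt_conf = best_conf - (preds[0][1] - preds[1][1]) / len(per_char_predictions)
            candidates.append({"plate": alt, "confidence": round(alt_conf, 3)})
    candidates.sort(key=lambda c: c["confidence"], reverse=True)
    return candidates[:top_n]

# assemble_plate([[("A", 0.95)], [("B", 0.90), ("8", 0.40)], [("C", 0.92)]])
# -> [{'plate': 'ABC', ...}, {'plate': 'A8C', ...}]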

Stage 9 (Optional): Vehicle Detection (YOLO/vehicle_detector.py)

Code:
Original image → VehicleDetector.detect_and_classify() → Vehicle make/model

  • Only when enable_vehicle_detection=True AND day plates detected
  • Uses vehicle_detector.onnx and vehicle_classifier.onnx
  • Detects vehicles in the image
  • Classifies vehicle make and model
  • Saves vehicle detection debug image (optional)

Stage 10: Response Assembly

Code:
All results → Format API response → Return JSON

  • Aggregates day plates, night plates, and vehicles
  • Formats as CodeProject.AI compatible JSON
  • Includes:
    • License plate numbers
    • Bounding boxes
    • Confidence scores
    • State information
    • Top alternative readings
    • Vehicle make/model (if enabled)
    • Processing time metrics
  • Updates module statistics
  • Saves final annotated debug image (optional)

Key Components

ONNX Session Manager (YOLO/session_manager.py)

  • Singleton pattern for managing ONNX Runtime sessions
  • Handles DirectML fallback for Windows GPU acceleration
  • Provides session reuse across requests
  • Manages cleanup and resource deallocation
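
A minimal sketch of session reuse with ONNX Runtime (provider fallback simplified; the actual session_manager.py likely does more):

Code:
# Minimal ONNX Runtime session cache with provider fallback.
import onnxruntime as ort

class SessionManagerSketch:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._sessions = {}
        return cls._instance

    def get_session(self, model_path, preferred=("CUDAExecutionProvider",
                                                  "DmlExecutionProvider",
                                                  "CPUExecutionProvider")):
        if model_path not in self._sessions:
            available = ort.get_available_providers()
            providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]
            self._sessions[model_path] = ort.InferenceSession(model_path, providers=providers)
        return self._sessions[model_path]

    def cleanup(self):
        self._sessions.clear()  # drop references so sessions can be released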

Base YOLO Model (YOLO/base.py)

  • Abstract base class for all YOLO-based detectors
  • Handles ONNX model loading and inference
  • Provides common preprocessing and postprocessing
  • Manages hardware acceleration (CPU, CUDA, MPS, DirectML)
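
A generic sketch of the shared preprocessing and inference path for a YOLO-style ONNX model; the input size and output decoding are assumptions and vary per model.

Code:
# Generic YOLO-style ONNX preprocessing and inference; input size and output
# layout are assumptions and will vary per model.
import numpy as np
import cv2
import onnxruntime as ort

def run_yolo_onnx(session: ort.InferenceSession, bgr_image: np.ndarray, input_size=640):
    # Preprocess: resize, BGR->RGB, scale to [0,1], HWC -> NCHW float32.
    resized = cv2.resize(bgr_image, (input_size, input_size))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    blob = np.ascontiguousarray(np.transpose(rgb, (2, 0, 1))[np.newaxis, ...])  # 1x3xHxW

    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: blob})  # raw model outputs
    return outputs  # decoding boxes, NMS, and thresholding happen downstream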

Image Processing Utilities (utils/image_processing.py)

  • Four-point perspective transformation
  • Debug image generation with annotations
  • Drawing utilities for bounding boxes and labels

Hardware Acceleration

Priority order:

  1. CUDA (NVIDIA GPUs) - Highest priority
  2. MPS (Apple Silicon) - macOS only
  3. DirectML (Windows GPUs) - Windows fallback
  4. CPU - Universal fallback

Debug Mode Features

When SAVE_DEBUG_IMAGES=True, generates:

  • input_*.jpg - Original input images
  • plate_detector_*.jpg - Plate detections with boxes
  • plate_crop_*.jpg - Extracted plate regions
  • state_classifier_*.jpg - State classification results
  • char_detector_*.jpg - Character detections
  • char_organizer_reading_order.jpg - Numbered character sequence
  • char_organizer_character_flow.jpg - Arrow-based reading path
  • vehicle_detector_*.jpg - Vehicle detections
  • final_*.jpg - Complete annotated results

Performance Characteristics

  • Thread-safe: Uses locks for concurrent request handling
  • Stateful: Maintains statistics across requests
  • Efficient: Reuses ONNX sessions for multiple inferences
  • Configurable: Extensive environment variable configuration
  • Robust: Comprehensive error handling and cleanup

Data Flow Summary

Code:
HTTP Request
    ↓
[ALPRAdapter] → Thread-safe processing
    ↓
[ALPRSystem] → Orchestration
    ↓
[PlateDetector] → Detect plates (day/night)
    ↓
[four_point_transform] → Extract plate region
    ↓
[StateClassifier] → Identify state (day plates only)
    ↓
[CharacterDetector] → Detect characters
    ↓
[CharClassifierManager] → Recognize characters
    ↓
[CharacterOrganizer] → Sort spatially (NOT by detection order)
    ↓
[Assembly] → Build license plate string + alternatives
    ↓
[VehicleDetector] → Detect & classify vehicle (optional)
    ↓
[Response] → JSON with all results
    ↓
HTTP Response
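
In code, the orchestration plausibly reduces to something like the sketch below. Component and method names are assumptions drawn from the stage descriptions above, and four_point_transform is the helper sketched under Stage 3.

Code:
# Illustrative per-image orchestration; method names are assumptions.
def process_image(system, bgr_image, min_confidence=0.4):
    predictions = []
    day_plates, night_plates = system.plate_detector.detect(bgr_image, min_confidence)

    for plate, is_day in [(p, True) for p in day_plates] + [(p, False) for p in night_plates]:
        crop = four_point_transform(bgr_image, plate.corners)            # Stage 3

        state = None
        if is_day and system.config.enable_state_detection:
            state = system.state_classifier.classify(crop)               # Stage 4

        boxes = system.char_detector.detect(crop)                        # Stage 5
        chars = system.char_classifier.classify_characters(crop, boxes)  # Stage 6
        ordered = system.char_organizer.organize(chars)                  # Stage 7
        plate_text = "".join(c["label"] for c in ordered)                # Stage 8

        predictions.append({"plate": plate_text, "is_day_plate": is_day, "state": state})

    if system.config.enable_vehicle_detection and day_plates:            # Stage 9
        predictions.extend(system.vehicle_detector.detect_and_classify(bgr_image))

    return {"success": True, "count": len(predictions), "predictions": predictions}  # Stage 10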

Models Used

  • Plate Detector (plate_detector.onnx) - Detects license plates (day/night)
  • State Classifier (state_classifier.onnx) - Identifies US state from plate design
  • Character Detector (char_detector.onnx) - Detects individual character locations
  • Character Classifier (char_classifier.onnx) - Recognizes characters (OCR)
  • Vehicle Detector (vehicle_detector.onnx) - Detects vehicles (optional)
  • Vehicle Classifier (vehicle_classifier.onnx) - Classifies vehicle make/model (optional)

Configuration Parameters

Core Settings


  • ENABLE_STATE_DETECTION - Enable/disable state identification (default: false)
  • ENABLE_VEHICLE_DETECTION - Enable/disable vehicle detection (default: false)
  • PLATE_ASPECT_RATIO - License plate aspect ratio (default: 4.0)
  • CORNER_DILATION_PIXELS - Corner dilation for plate extraction (default: 5)

Confidence Thresholds

  • PLATE_DETECTOR_CONFIDENCE - Plate detection threshold (default: 0.45)
  • STATE_CLASSIFIER_CONFIDENCE - State classification threshold (default: 0.45)
  • CHAR_DETECTOR_CONFIDENCE - Character detection threshold (default: 0.40)
  • CHAR_CLASSIFIER_CONFIDENCE - Character recognition threshold (default: 0.40)
  • VEHICLE_DETECTOR_CONFIDENCE - Vehicle detection threshold (default: 0.45)
  • VEHICLE_CLASSIFIER_CONFIDENCE - Vehicle classification threshold (default: 0.45)

Character Organization (Advanced)

  • LINE_SEPARATION_THRESHOLD - Multi-line detection threshold (default: 0.6)
  • VERTICAL_ASPECT_RATIO - Vertical character threshold (default: 1.5)
  • OVERLAP_THRESHOLD - IoU threshold for overlaps (default: 0.3)
  • MIN_CHARS_FOR_CLUSTERING - Minimum chars for clustering (default: 6)
  • HEIGHT_FILTER_THRESHOLD - Height ratio filter (default: 0.6)
  • CLUSTERING_Y_SCALE_FACTOR - Y-coordinate scaling (default: 3.0)

Debug Options

  • SAVE_DEBUG_IMAGES - Enable debug image saving (default: false)
  • DEBUG_IMAGES_DIR - Debug images directory (default: "debug_images")

Error Handling

The pipeline includes comprehensive error handling at each stage:

  • Model loading errors trigger initialization failures
  • Individual plate processing errors are caught and logged without stopping the entire batch
  • Invalid configurations are validated at startup
  • Resource cleanup is guaranteed through destructors and cleanup methods
  • Thread-safe operations prevent race conditions
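
For example, per-plate failures can be contained with a pattern like this (generic, not necessarily the module's exact code):

Code:
# Generic per-item error containment: one bad plate doesn't abort the batch.
import logging

logger = logging.getLogger("ALPRYOLO11")

def process_plates_safely(plates, process_one):
    results = []
    for index, plate in enumerate(plates):
        try:
            results.append(process_one(plate))
        except Exception as exc:  # log and continue with the remaining plates
            logger.warning("Plate %d failed: %s", index, exc)
    return results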

API Endpoint

Code:
POST /v1/vision/alpr

Parameters:

  • image - Image file (required)
  • min_confidence - Minimum confidence threshold 0.0-1.0 (optional, default: 0.4)
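
A quick way to exercise the endpoint from Python, assuming the server is on its default port 32168 and using the parameter names listed above:

Code:
# Minimal client call to the ALPR endpoint using the requests library.
import requests

with open("test_image.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:32168/v1/vision/alpr",
        files={"image": f},
        data={"min_confidence": 0.4},
        timeout=30,
    )

result = response.json()
for prediction in result.get("predictions", []):
    print(prediction["plate"], prediction["confidence"])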

Response:

Code:
{
  "success": true,
  "processMs": 150,
  "inferenceMs": 120,
  "count": 1,
  "message": "Found 1 license plates: ABC1234",
  "predictions": [
    {
      "confidence": 0.85,
      "is_day_plate": true,
      "label": "ABC1234",
      "plate": "ABC1234",
      "x_min": 100,
      "y_min": 150,
      "x_max": 300,
      "y_max": 200,
      "state": "CA",
      "state_confidence": 0.92,
      "top_plates": [
        {"plate": "ABC1234", "confidence": 0.85},
        {"plate": "ABC1Z34", "confidence": 0.78}
      ]
    }
  ]
}


 
I see 7 people downloaded the appsettings.json file; I am curious how the new module is working for everyone.
I downloaded the appsettings.json file and followed your instructions, but the new module will not stay running on my machine. I will get more details on the exact symptoms and errors this evening. I am away from the PC right now.
 
Can you please post your System Info like the one below?

View attachment 229931
See below:

Server version: 2.9.5
System: Windows
Operating System: Windows (Windows 10 Redstone)
CPUs: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 630 (1,024 MiB) (Intel Corporation)
Driver: 27.20.100.9664
System RAM: 32 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 9.0.0
.NET SDK: Not found
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 630:
Driver Version 27.20.100.9664
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168