Full ALPR Database System for Blue Iris!

Okay, I've found the issue and I'm adding it here in case someone runs into this in the future. On this system I was still running the older 0.18 code, and upgrading to 0.19 partially fixed the problem. It seems that in newer versions of BI (certainly in 5.9.9.73, but NOT in 5.9.9.33), the AI handling was changed slightly.

Essentially, if you have object detection enabled in the BI main settings, it still runs even on cameras where you did not enable it, and the results are sent via the &JSON macro. In 0.18 of the ALPR Database this produced the error above, but 0.19 handles it fine. If you have BI burn the detection into the image, you will see:

View attachment 229371

The fix is:

1) Upgrade to 0.19 of ALPR Database, and/or
2) In BI, turn off object detection in the main settings menu, and enable it on individual cameras if you need it, but not on your ALPR camera.
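For anyone writing their own handler for the &JSON payload, here is a minimal Python sketch (function name is hypothetical) of the kind of guard that avoids the 0.18 error: filter the array down to the ALPR entries before looking for plates, so pure object-detection results are ignored rather than crashing the parser.

```python
import json

def extract_plate_entries(ai_dump: str):
    """Return only the ALPR entries from a Blue Iris &JSON payload.

    With object detection enabled globally, BI includes plain
    object-detection results in the same array, which the 0.18
    parser tripped over. Filtering on the "api" field sidesteps that.
    """
    entries = json.loads(ai_dump)
    if isinstance(entries, dict):  # defensively handle a bare object
        entries = [entries]
    return [e for e in entries if e.get("api") == "alpr"]
```

The `"api"` field is visible in the dump posted later in this thread, where object-detection entries carry `"api":"objects"` and plate entries carry `"api":"alpr"`.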

Recent similar GitHub issue: getting error in alpr · Issue #72 · algertc/ALPR-Database

As you mention, and as I say in that GitHub issue, I made changes in the last update that seem to fix part of this. It seems I still need to make the parsing more robust; the case of multiple plate numbers in one image is what makes it a bit tricky. Shouldn't be hard to fix, though.

If you could paste the AI_dump that causes the issue, that would be very helpful.
 

Sure, how do I get the AI_dump data?
 
Maybe try adding an email alert with &JSON and see if you can email it to yourself. I think that should work.

It won't expand &AI_dump; it just literally prints that text, whereas &CAM expands to the camera name. I didn't go back to older versions of BI or ALPR Database; this is with 0.19 and 5.9.9.73.
 
I have been working on a totally new ALPR module based on YOLO11. I have been running it in parallel with the current ALPR module and I am seeing better results. Below is one example where the old ALPR module could not read the plate, and Plate Recognition could not read it either. The new module uses ONNX models that I trained; the ONNX models are fast and can run on Intel iGPUs. For this new module I am planning to add vehicle color, make/model, and also state recognition. I am thinking of creating a patch for CP.AI that will allow users to install new modules using a GitHub link.

View attachment 229136


View attachment 229137

Can't wait to test this out.
 
It's because the macro is called &JSON in Blue Iris. AI_dump was just my random name for it within the app.

Here is the dump and the corresponding image.

[{"api":"objects","found":{"message":"Found car, car, car","count":3,"predictions":[{"confidence":0.91733,"label":"car","x_min":0,"y_min":1,"x_max":1387,"y_max":618},{"confidence":0.885463,"label":"car","x_min":1476,"y_min":0,"x_max":1919,"y_max":476},{"confidence":0.778288,"label":"car","x_min":1037,"y_min":1,"x_max":1662,"y_max":480}],"success":true,"processMs":61,"inferenceMs":55,"moduleId":"ObjectDetectionYoloRKNN","moduleName":"Object Detection (YOLOv5 RKNN)","code":200,"command":"detect","requestId":"011aafb1-ed85-4b3b-9d6f-3231e2025fff","inferenceDevice":"NPU","analysisRoundTripMs":95,"processedBy":"localhost","timestampUTC":"Wed, 08 Oct 2025 18:54:35 GMT"}},{"api":"alpr","found":{"success":true,"inferenceMs":343,"processMs":387,"predictions":[{"confidence":0.99442,"label":"Plate: 9UJW535","plate":"9UJW535","x_min":1091,"y_min":262,"x_max":1232,"y_max":365}],"message":"Found Plate: 9UJW535","moduleId":"ALPR-RKNN","moduleName":"License Plate Reader (RKNN)","code":200,"command":"alpr","requestId":"c82a1a62-522a-47f7-a4e8-5177e93dc4c8","inferenceDevice":"NPU","analysisRoundTripMs":1519,"processedBy":"localhost","timestampUTC":"Wed, 08 Oct 2025 18:54:36 GMT"}}]

Screenshot from 2025-10-08 11-57-33.png
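As a sketch of how a dump like the one above could be parsed to support the multi-plate case, here is a short Python example (function name is hypothetical, not the app's actual code): it walks every prediction in every `"alpr"` entry instead of stopping at the first match.

```python
import json

def plates_from_dump(ai_dump: str):
    """Collect every plate prediction from a Blue Iris &JSON dump.

    Handles multiple plates per image by iterating over all
    predictions in every "alpr" entry, skipping plain
    object-detection entries ("api":"objects") entirely.
    """
    plates = []
    for entry in json.loads(ai_dump):
        if entry.get("api") != "alpr":
            continue
        for pred in entry.get("found", {}).get("predictions", []):
            plate = pred.get("plate")
            if plate:
                plates.append({
                    "plate": plate,
                    "confidence": pred.get("confidence"),
                    "box": (pred.get("x_min"), pred.get("y_min"),
                            pred.get("x_max"), pred.get("y_max")),
                })
    return plates
```

Run against the dump above, this returns a single entry for plate 9UJW535 with its confidence and bounding box; a dump with two plate predictions would yield two entries.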
 
Thank you @PeteJ - will check with that.

I built the AI secondary verification feature yesterday and it works right now, but not if there are multiple plates in the same image. I have an idea of how to get that worked out.

The way it works right now:
  • Enable secondary verification and add your OpenAI or Gemini API key
  • Set a confidence threshold below which you want plates to be sent for secondary verification
  • Optionally add individual plates to always be sent for secondary

While hopefully the new models will reduce misreads, that last option could get expensive if you have a ton of traffic. I think I’ll add something like a lazy mode that will act sort of like a cached result for plates consistently misread. If the app receives a plate that is consistently misread and has received a corrected plate number back from OpenAI more than a few times, no need to keep sending it, just assume it’s the same value it has already been getting back. This should keep costs very very low.

This is nice because not only is it a solid hybrid to accomplish the requested forwarding functionality, but it will also automatically identify the plates that need that behavior instead of you manually having to add them.
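A rough sketch of how that lazy mode might look (Python; the class name and threshold are hypothetical, not the app's actual implementation): once the cloud verifier has returned the same correction for a misread plate enough times, the correction is served locally and no further API calls are made for that plate.

```python
from collections import defaultdict

class LazyCorrectionCache:
    """Sketch of the 'lazy mode' described above.

    After the secondary verifier returns the same corrected plate for
    a given misread `threshold` times, reuse that correction locally
    instead of paying for another OpenAI/Gemini call.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts = defaultdict(int)  # (misread, corrected) -> times seen
        self.cached = {}                # misread -> corrected

    def lookup(self, misread: str):
        """Return a cached correction, or None if we still need to verify."""
        return self.cached.get(misread)

    def record(self, misread: str, corrected: str):
        """Record a verifier result; promote to the cache once consistent."""
        self.counts[(misread, corrected)] += 1
        if self.counts[(misread, corrected)] >= self.threshold:
            self.cached[misread] = corrected
```

Counting per (misread, corrected) pair rather than per misread means an inconsistent verifier never gets promoted, which matches the "consistently misread" condition described above.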


Also, I did successfully make an Apple TV Blue Iris UI. Just a viewer, so nothing too fancy, but it works well and the cameras load even faster than UI3.

I have to update my MacOS version in order to sign the app archive. I’m a major tab hoarder and leave a lot of code editors open too, so might take a while before I end up closing everything to do that update…

It’s cool tho. I’ll post a screen recording.
 

Nice! Looking forward to checking it out.
 
Just a quick question, as I was very confused: which one should I use with the CodeProject.AI License Plate Reader in Blue Iris to successfully make this ALPR Database work?

{"ai_dump":&JSON, "Image":"&ALERT_JPEG", "camera":"&CAM", "ALERT_PATH":"&ALERT_PATH", "ALERT_CLIP":"&ALERT_CLIP", "timestamp":"&ALERT_TIME"}
or
{"plate_number":"&PLATE", "Image":"&ALERT_JPEG", "camera":"&CAM", "ALERT_PATH": "&ALERT_PATH", "ALERT_CLIP": "&ALERT_CLIP", "timestamp":"&ALERT_TIME"}