Full ALPR Database System for Blue Iris!

So is your TPMS idea dead, or just on the back burner?

Not dead. I'd actually really like to have it work. It's stalled because my SDR setup can't seem to receive and decode the transmissions anymore. It worked at one point. I've tried three different SDRs, and I have a dedicated cable run for the radio box all the way out to my street. No idea why it stopped working, since I was able to receive them just fine before.

If I can't get data from the SDR to use while building and testing it, I'm basically guessing in the dark. The functionality itself isn't very complex to implement, but my failed attempts to get a reliable stream of data back from the SDR stalled my progress. If anyone knows why this is happening or how to fix it, please do comment. I'd love to get it decoding properly and add the feature to the ALPR database. I've used both Airspy and RTL-SDR v4 devices with rtl_433 to decode. I have two different RPis, each with antennas from Digi-Key tuned specifically for 315 and 433.95 MHz. Neither seems to be able to pick up anything anymore. I get the occasional decode, but it's maybe 1 out of 100 cars.
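For anyone who wants to poke at this, here is roughly how I've been sanity-checking the receive chain. It's only a sketch: it assumes rtl_433 is installed and on the PATH, and the frequencies, hop interval, and gain handling are examples rather than known-good settings.

# Minimal sketch: run rtl_433, hop between 315 MHz and 433.92 MHz, and count
# decodes per sensor model so it's obvious whether anything is being received at all.
import json
import subprocess
from collections import Counter

CMD = [
    "rtl_433",
    "-f", "315M",      # first frequency to monitor
    "-f", "433.92M",   # second frequency to monitor
    "-H", "30",        # hop between frequencies every 30 seconds
    "-M", "level",     # include RSSI/SNR so weak signals are visible
    "-F", "json",      # one JSON object per decoded transmission
]

def main() -> None:
    counts = Counter()
    proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip any non-JSON status output
            counts[event.get("model", "unknown")] += 1
            print(event.get("model"), "id:", event.get("id"),
                  "rssi:", event.get("rssi"), "snr:", event.get("snr"))
    except KeyboardInterrupt:
        pass
    finally:
        proc.terminate()
        print("Decodes seen:", dict(counts))

if __name__ == "__main__":
    main()

If nothing at all shows up on 433.92 MHz over an hour or two (neighbors' TPMS and weather sensors usually produce at least a few decodes), I'd start suspecting the antenna/feed line or a new local noise source rather than the decoder itself.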


I still believe this is a really powerful piece of data to have, so if I could figure it out, I'd be very keen to get it working. This and the OCR model have been my personal top wishlist items for a long while. I tried so many different things to no avail, and, frustrated knowing that it did in fact work at one point, I basically rage-quit and halted troubleshooting after spending several hours crouching in the foliage in my yard. I haven't given up on or back-burnered the functionality within the ALPR database; I just can't seem to set up the systems I need in order to develop it. It's really frustrating because I don't know why it stopped working, and the timing was completely random. If I could get it working, or at least understand why I'm facing this issue, I'd build the TPMS functionality in an evening.


Any SDR/433 decoding insight would be greatly appreciated.
 
  • Haha
Reactions: djangel
Furthermore, I really like the idea of the DB being a "brain" that can help us filter out the plates that are worthy of our review. A plate I haven't seen before or recently, one that has been going by at odd hours, one that I've tagged, etc. would be ones I'd want to see immediately.

Absolutely agree and would love to hear any further suggestions from anyone for how this should be implemented. I see this as a traffic intelligence tool, and that's a critical piece in order to understand and archive all traffic passing by your property.


Expecting the OCR to be 100% correct is unrealistic.

This is true, and even Flock Safety and Motorola have a material number of incorrect reads. The percentage of misreads, however, is relatively low. We are challenged by the fact that the model we all use was trained with the minimal data that was available at the time.


At a minimum, if there were a way to get a report of plates that have few reads but that are close to other plates that have many reads, we could have a way to fix these ourselves.

There's a reason it's an oft requested feature.

Like I mentioned, I experience the exact same behavior with a large number of misreads. It's bothersome to me too. I've just been apprehensive about implementing the requested functionality because it feels principally incorrect.

I'd like to start an initiative to prepare the data and improve the model, but if that seems like it's going to take a while, I think I might have a sensible solution.


Instead of hardcoding "redirects", fuzzy match on consistently misread plates and assign them for secondary inspection. This can use OpenAI or Gemini, which are both highly capable and accurate. When a plate read comes in that is similar to one seen many times in the DB, it gets flagged for secondary confirmation. I think this is a solid middle-ground solution that doesn't meddle with the overall flow of the camera system.
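Something along these lines is what I mean by the fuzzy match. This is just a local sketch using Python's standard difflib; the thresholds are guesses, and the actual secondary confirmation would go out to OpenAI/Gemini:

# Sketch of the "flag for secondary confirmation" idea, not the actual DB code.
# known_plates maps plate text -> number of times it has been seen.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings are identical.
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

def secondary_check_candidate(new_plate, known_plates, min_sightings=10, threshold=0.8):
    # Return a frequently seen plate that the new read closely resembles,
    # or None if the read doesn't look like a likely misread.
    if known_plates.get(new_plate, 0) > 0:
        return None  # exact match already in the DB, nothing to do
    best, best_score = None, 0.0
    for plate, count in known_plates.items():
        if count < min_sightings:
            continue
        score = similarity(new_plate, plate)
        if score >= threshold and score > best_score:
            best, best_score = plate, score
    return best

# "AQT123" has never been seen, but "AOT123" has 42 sightings, so the read gets
# flagged and its crop is sent off for secondary confirmation.
print(secondary_check_candidate("AQT123", {"AOT123": 42, "BXY987": 3}))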


Should cost nearly nothing unless you have a massive amount of traffic passing by your cameras. There's really no alternative as of Oct 2025. Hopefully, we will have something self-hostable in the future, but for now, this is definitely the best approach.
 
I think the band-aid approach is still very valuable as a built-in feature to correct misreads. OCR will always give some inaccurate readings, especially when we are trying to do this in various weather and lighting conditions. And given the challenges with CPAI's slow development, maybe there isn't a way to get better OCR models anytime soon.

For the images to be cropped, since you already have the coordinates for the bounding box, couldn't the plate just be cropped and then stored via some additional code, maybe with OpenCV? In the live view the plate is already cropped, which is what made me think saving that as the training image wouldn't be that difficult.
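Something like this is what I was imagining, just as a rough sketch. I'm assuming the bounding box comes through in pixel coordinates, and the field names here are made up rather than the real payload schema:

# Crop the detected plate out of the full frame with OpenCV and save it.
import cv2

def crop_plate(frame_path, box, pad=5):
    img = cv2.imread(frame_path)
    if img is None:
        raise FileNotFoundError(frame_path)
    h, w = img.shape[:2]
    # Clamp the padded box to the image bounds.
    x1 = max(0, box["x_min"] - pad)
    y1 = max(0, box["y_min"] - pad)
    x2 = min(w, box["x_max"] + pad)
    y2 = min(h, box["y_max"] + pad)
    return img[y1:y2, x1:x2]

crop = crop_plate("frame.jpg", {"x_min": 830, "y_min": 410, "x_max": 1010, "y_max": 470})
cv2.imwrite("plate_crop.jpg", crop)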
 
  • Like
Reactions: algertc
I think the band-aid approach is still very valuable as a built-in feature to correct misreads. OCR will always give some inaccurate readings, especially when we are trying to do this in various weather and lighting conditions. And given the challenges with CPAI's slow development, maybe there isn't a way to get better OCR models anytime soon.

For the images to be cropped, since you already have the coordinates for the bounding box, couldn't the plate just be cropped and then stored via some additional code, maybe with OpenCV? In the live view the plate is already cropped, which is what made me think saving that as the training image wouldn't be that difficult.

I'll start by trying the secondary confirmation through Google/OpenAI. If that isn't sufficient (it really should be), I'll build the direct forwarding functionality.
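Roughly what I have in mind for that verification call, using the OpenAI Python SDK. The prompt and model name are placeholders, and Gemini would work the same way:

# Sketch of the secondary-confirmation call; the crop, OCR guess, and fuzzy
# candidate would come from the existing pipeline.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def confirm_plate(crop_path, ocr_guess, fuzzy_candidate):
    with open(crop_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"This is a cropped license plate. Local OCR read '{ocr_guess}', "
                         f"but a frequently seen plate '{fuzzy_candidate}' is very similar. "
                         f"Reply with ONLY the characters on the plate."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()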

As far as the training data goes, the images needed for the OCR are actually cropped in even tighter than what the license plate model marks. See the images below; it crops all the way into the characters, so the crops I have now are not the correct ones. Additionally, we need the actual characters in a special annotation format. The images collected so far can absolutely be used to train the OCR, but a significant amount of transformation/cleaning/preparation is required.

The necessary images look like this:
fe05721f-5bb3-4ff5-9233-6bcfdeaa576b.jpg
fbad4925-b0cc-45c7-9fcd-bc1af7462bb4.jpg
f5972aba-dd9c-444e-8358-5bfe29334175.jpg
 
  • Like
Reactions: PeteJ
I have been working on a totally new ALPR module based on YOLO11. I have been running it in parallel with the current ALPR module and I am seeing better results; below is one example where the old ALPR module could not read the plate and Plate Recognizer could not read it either. The new module uses ONNX models that I trained. The ONNX models are fast and can run on Intel iGPUs. For this new module I am planning on adding vehicle color, make/model, and also state recognition. I am thinking of creating a patch for CP.AI that will allow users to install new modules using a GitHub link.

1759432009604.png



1759432696002.png
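For anyone curious how the ONNX models end up on an Intel iGPU: here is a generic sketch using ONNX Runtime's OpenVINO execution provider (pip install onnxruntime-openvino). This is not the module's actual code; the model path and the 640x640 input size are placeholders.

# Run a YOLO-style ONNX detector through ONNX Runtime, preferring the Intel iGPU.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "plate_detector.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

def detect(image_chw):
    # image_chw: float32 array shaped (1, 3, 640, 640), normalized to [0, 1].
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: image_chw})
    return outputs[0]  # raw detections; decoding/NMS depends on how the model was exported

dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
print(detect(dummy).shape)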
 
I'll start by trying the secondary confirmation through Google/OpenAI. If that isn't sufficient (it really should be), I'll build the direct forwarding functionality.

As far as the training data goes, the images needed for the OCR are actually cropped in even tighter than what the license plate model marks. See the images below; it crops all the way into the characters, so the crops I have now are not the correct ones. Additionally, we need the actual characters in a special annotation format. The images collected so far can absolutely be used to train the OCR, but a significant amount of transformation/cleaning/preparation is required.

The necessary images look like this:
View attachment 229133 View attachment 229134 View attachment 229135

Thanks Charlie, I wasn't aware that it needed to be that tight. This type of work seems like it'd be ideal for AI vision to do the cropping! :)
 
  • Like
Reactions: algertc
I have been working on a totally new ALPR module based on YOLO11. I have been running it in parallel with the current ALPR module and I am seeing better results; below is one example where the old ALPR module could not read the plate and Plate Recognizer could not read it either. The new module uses ONNX models that I trained. The ONNX models are fast and can run on Intel iGPUs. For this new module I am planning on adding vehicle color, make/model, and also state recognition. I am thinking of creating a patch for CP.AI that will allow users to install new modules using a GitHub link.

View attachment 229136


View attachment 229137

WOW! Can't wait to see this. The vehicle data (make, model, year, color) would be super helpful!
 
  • Like
Reactions: algertc
Absolutely agree and would love to hear any further suggestions from anyone for how this should be implemented.

I've just been apprehensive about implementing the requested functionality because it feels principally incorrect.
I don't feel it's "principally incorrect" to use additional information from the database.

The recognition can always be changed to a better model, a more accurate model, or even a different service (e.g. PlateRecognizer). That change should be independent of what additional steps the database can do on ingestion of the plate.

On plate read, I think you should always keep what characters were sent in with the original read.

For plate reads, especially those that didn't match any existing plates, do an additional "fuzzy" match and if there are potential matches, flag that record.

As for what to do with that specifically, at a minimum we should be able to view those flagged plates and be able to pick the corrected plate from the fuzzy match(es). We should also be able to revert a plate back to its originally read characters if this was done in error.

For a further enhancement, if I choose to enable a new "autocorrect" preference, have it pick the best fuzzy match automatically. Again, since the originally read characters are still stored, I should be able to bring up the "review" page and be able to revert it to the original read or a different fuzzy match.
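To make that concrete, here is roughly how I picture a plate read record. This is illustrative only, not the actual schema; the point is that the originally read characters are never overwritten, so both autocorrects and manual picks stay revertible:

# Illustrative record shape for the flag/correct/revert flow described above.
from dataclasses import dataclass, field

@dataclass
class PlateRead:
    original_plate: str                    # exactly what the OCR sent in; never changed
    corrected_plate: str | None = None     # set by a manual pick or autocorrect
    fuzzy_candidates: list = field(default_factory=list)
    flagged: bool = False                  # shows up on the review page

    @property
    def effective_plate(self):
        return self.corrected_plate or self.original_plate

    def apply_correction(self, plate, automatic=False):
        self.corrected_plate = plate
        self.flagged = automatic           # autocorrected reads stay flagged for review

    def revert(self):
        self.corrected_plate = None        # back to the original read

read = PlateRead("AQT123", fuzzy_candidates=["AOT123"], flagged=True)
read.apply_correction("AOT123", automatic=True)
print(read.effective_plate)  # AOT123
read.revert()
print(read.effective_plate)  # AQT123 again after revert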

Would something like that be helpful and relatively easy to implement?
 
  • Like
Reactions: algertc
but because of the status with CPAI and the slow updates, almost no images for the actual OCR. Mike added the necessary data for the character recognition a while ago, but it took quite a while to be available for update through the codeproject UI.
I just made a new tab that will give users the ability to install new modules from a GitHub link; I still need to test it. To add this, all that is needed is to replace two files.

1759454032382.png
 
Hey @algertc

Did you see this post? Would be cool as an addon feature to get AI to give us vehicle make, model, color, year
Yes, I did see it. The AI agent branch I was working on actually has this full functionality, with the ability to backfill all past recognitions.

Seems like Mike has something in the works. Like I’ve said, we should aim to address the functionality at its roots. As a fallback, if we can’t get the same data, I will add the public foundation models as a verification option.

Again, apologies for my absence. I’d love to work on this all day long but things have picked up recently and I just don’t have the bandwidth.

My focus now is to try to improve the CV, as I think it will have the greatest impact and improvement across the board.
 
  • Like
Reactions: prsmith777
Seems like Mike has something in the works.

My focus now is to try to improve the CV, as I think it will have the greatest impact and improvement across the board.
@MikeLud1 can correct me, but I think the screenshot you saw where it was detecting car details like the make and model was coming from PlateRecognizer.

He was showing an example where the plate was at an angle where his updated YOLO11 model was doing better than PlateRecognizer. But I don't think he was implying the updated model provides car details.

I still don't think improving CV should be the main focus of ALPR DB. In parallel, we can improve the detection of plates that are misread by matching them to ones that were read correctly.

And rich notification based on patterns of detection (new read, unusual hour, specific tags, etc.) should be another primary focus.
 
@MikeLud1 can correct me, but I think the screenshot you saw where it was detecting car details like the make and model was coming from PlateRecognizer.

He was showing an example where the plate was at an angle where his updated YOLO11 model was doing better than PlateRecognizer. But I don't think he was implying the updated model provides car details.

I still don't think improving CV should be the main focus of ALPR DB. In parallel, we can improve the detection of plates that are misread by matching them to ones that were read correctly.

And rich notification based on patterns of detection (new read, unusual hour, specific tags, etc.) should be another primary focus.
I reread @MikeLud1 's post and he does indeed say he is thinking of adding color, make and model. So ignore what I just said about that. Impressive if he can do that.

But, that's yet another piece of data that we could use to match misread plates. If a Red Ford Bronco with plates AOT123 is seen for the first time, but our neighbor drives a Red Ford Bronco with plates AQT123...
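Just to sketch the idea (the weights are arbitrary, and this assumes make/model/color actually become available from whichever module ends up providing them):

# Combine plate-text similarity with vehicle attributes to score a possible misread.
from difflib import SequenceMatcher

def match_score(new_read, known_vehicle):
    plate_sim = SequenceMatcher(None, new_read["plate"], known_vehicle["plate"]).ratio()
    attrs = ("make", "model", "color")
    attr_sim = sum(new_read.get(a) == known_vehicle.get(a) for a in attrs) / len(attrs)
    return 0.7 * plate_sim + 0.3 * attr_sim

new_read = {"plate": "AOT123", "make": "Ford", "model": "Bronco", "color": "Red"}
neighbor = {"plate": "AQT123", "make": "Ford", "model": "Bronco", "color": "Red"}
print(round(match_score(new_read, neighbor), 2))  # high score -> probably the neighbor's Bronco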
 
I just made a new tab that will give users the ability to install new modules from a GitHub link; I still need to test it. To add this, all that is needed is to replace two files.

View attachment 229152
Will this be added to a CPAI build, or is Chris Maunder no longer making changes, meaning we have to do our own patches? I would love to see a loadable YOLO11 module while we're at it. I know you (@MikeLud1) were working with Chris to get one added, but I haven't seen any traction on that request in the CPAI discussion thread. YOLO v11 · codeproject/CodeProject.AI-Server · Discussion #286
 
Anyone running Blue Iris 5.9.9.73 (8/28/2025)? My BI system crashed and when it rebooted, it updated to 5.9.9.73, and suddenly I am no longer getting any saved plates in ALPR Dashboard. And I started to see this in the log:

10/4/2025, 8:43:55 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:55 PM [INFO] Database connection established
10/4/2025, 8:43:55 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:55 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint
10/4/2025, 8:43:55 PM [INFO] Fetching latest plate reads
10/4/2025, 8:43:56 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:56 PM [INFO] Database connection established
10/4/2025, 8:43:56 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:56 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint

If I go back to the previous version of Blue Iris (5.9.9.33) everything works perfectly. No errors in the log, etc.

Anyone know what could be causing this? I suspect the payload from BI has changed in this version.

Thanks
 
Anyone running Blue Iris 5.9.9.73 (8/28/2025)? My BI system crashed and when it rebooted, it updated to 5.9.9.73, and suddenly I am no longer getting any saved plates in ALPR Dashboard. And I started to see this in the log:

10/4/2025, 8:43:55 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:55 PM [INFO] Database connection established
10/4/2025, 8:43:55 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:55 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint
10/4/2025, 8:43:55 PM [INFO] Fetching latest plate reads
10/4/2025, 8:43:56 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:56 PM [INFO] Database connection established
10/4/2025, 8:43:56 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:56 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint

If I go back to the previous version of Blue Iris (5.9.9.33) everything works perfectly. No errors in the log, etc.

Anyone know what could be causing this? I suspect the payload from BI has changed in this version.

Thanks
Try this:

Then run these commands in your Docker container (the db one, not the app one):
psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql
psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/migrations.sql
 
Will this be added to a CPAI build, or is Chris Maunder no longer making changes, meaning we have to do our own patches? I would love to see a loadable YOLO11 module while we're at it. I know you (@MikeLud1) were working with Chris to get one added, but I haven't seen any traction on that request in the CPAI discussion thread. YOLO v11 · codeproject/CodeProject.AI-Server · Discussion #286

Don't want to hijack this thread, but I was wondering the same thing. I thought I read in the GitHub discussions that Chris is no longer the owner and only a contributor now.

Also, I don't remember the details, but I did read something about YOLO changing its licensing and, as a result, a lot of people abandoning the YOLO models. It sucks because it seemed like they were the best to work with (from my very limited attempts at building custom models).
 
Try this:

Then run these commands in your Docker container (the db one, not the app one):
psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql
psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/migrations.sql

Thanks, I'm the guy that posted that originally :)
 
Anyone running Blue Iris 5.9.9.73 (8/28/2025)? My BI system crashed and when it rebooted, it updated to 5.9.9.73, and suddenly I am no longer getting any saved plates in ALPR Dashboard. And I started to see this in the log:

10/4/2025, 8:43:55 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:55 PM [INFO] Database connection established
10/4/2025, 8:43:55 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:55 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint
10/4/2025, 8:43:55 PM [INFO] Fetching latest plate reads
10/4/2025, 8:43:56 PM [INFO] Received plate read data: [object Object]
10/4/2025, 8:43:56 PM [INFO] Database connection established
10/4/2025, 8:43:56 PM [INFO] [FileStorage] Successfully saved image
10/4/2025, 8:43:56 PM [ERROR] Error processing request: error: null value in column "plate_number" of relation "plate_reads" violates not-null constraint

If I go back to the previous version of Blue Iris (5.9.9.33) everything works perfectly. No errors in the log, etc.

Anyone know what could be causing this? I suspect the payload from BI has changed in this version.

Thanks

Okay, I've found the issue and am adding it here in case someone runs into this in the future. On this system, I was still running the older 0.18 code, and upgrading to 0.19 partially fixed the problem. It seems that in newer versions of BI (certainly in 5.9.9.73, but NOT in 5.9.9.33), the AI handling was changed slightly.

Essentially, if you have object detection enabled in the BI main settings, then even if you did not enable it on a camera, it still runs, and the results will be sent via the &JSON macro. In 0.18 of the ALPR Database, this would result in the error above, but 0.19 handles it fine. If you have BI burn the detection into the image, you will see:

Screenshot from 2025-10-06 18-49-09.png


The fix is:

1) Upgrade to 0.19 of ALPR Database, and/or
2) On BI, turn off object detection in the main settings menu, and enable it on individual cameras if you need it, but not on your ALPR camera.
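Conceptually, the guard that keeps this from reaching the database looks something like the snippet below. This is purely illustrative, not the ALPR Database code, and the payload shape is simplified:

# Skip payloads that don't actually contain plate text, so a null plate_number
# never hits the NOT NULL column. The payload fields here are hypothetical.
def extract_plate(payload):
    plate = (payload.get("plate") or "").strip()
    return plate.upper() if plate else None

def handle_plate_read(payload):
    plate = extract_plate(payload)
    if plate is None:
        print("Ignoring non-plate detection payload")
        return
    # ...insert into plate_reads with a guaranteed non-null plate_number...
    print("Storing read for", plate)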