Full ALPR Database System for Blue Iris!

So is your TPMS idea dead, or completely back-burnered?

Not dead. I'd actually really like to have it work. It's stalled because my SDR can't seem to receive and decode the transmissions anymore. It worked at one point. I've tried three different SDRs, and I have a dedicated cable run for the radio box all the way out to my street. No idea why it stopped working, because I was able to receive just fine before.

If I can't get data from the SDR to use when building/testing, it's basically guessing in the dark. The functionality itself isn't very complex to implement, but my failed troubleshooting efforts to get a reliable stream of data back from the SDR stalled my progress. If anyone knows why this is happening or how to fix it, please do comment. I'd love to get it decoding properly and add the feature to the ALPR database. I've used both Airspy and RTL-SDR v4 devices with rtl_433 to decode. I have two different Raspberry Pis, each with antennas from DigiKey tuned for 315 and 433.92 MHz. Neither seems to pick up anything anymore. I get the occasional decode, but it's maybe 1 out of 100 cars.


I still believe this is a really powerful piece of data to have, so if I could figure it out, I'd be very keen to get it working. This and the OCR model have been my top personal wishlist items for a long while. I tried so many different things to no avail, and, frustrated knowing that it did in fact work at one point, I basically rage-quit and halted troubleshooting after spending several hours crouching in the foliage in my yard. I haven't given up on or back-burnered the functionality within the ALPR database; I just can't seem to set up the systems I need in order to develop it. It's really frustrating because I don't know why it stopped working, and the timing was completely random. If I could get it working, or understand why I'm facing this issue, I'd build the TPMS functionality in an evening.


Any SDR/433 decoding insight would be greatly appreciated.
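For anyone comparing setups: the decode side I have in mind is roughly the sketch below. It assumes rtl_433 is on the PATH and uses its frequency-hop and JSON output flags; the `type`/`model`/`pressure_kPa` field names follow rtl_433's typical TPMS JSON output, but individual decoders can differ.

```python
import json
import subprocess

# Hop between the two TPMS bands and emit one JSON object per decoded packet.
# The 60-second dwell time per frequency is an illustrative choice.
RTL433_CMD = [
    "rtl_433",
    "-f", "315M", "-f", "433.92M",  # both common TPMS bands
    "-H", "60",                      # seconds to dwell on each frequency
    "-F", "json",                    # machine-readable output, one line per packet
]

def parse_tpms(line):
    """Return (model, sensor_id, pressure_kPa) for TPMS packets, else None."""
    try:
        pkt = json.loads(line)
    except json.JSONDecodeError:
        return None
    # Most rtl_433 TPMS decoders tag their reports with "type": "TPMS".
    if pkt.get("type") != "TPMS":
        return None
    return pkt.get("model"), pkt.get("id"), pkt.get("pressure_kPa")

def stream_tpms():
    """Run rtl_433 and print each decoded TPMS report as it arrives."""
    proc = subprocess.Popen(RTL433_CMD, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        report = parse_tpms(line)
        if report:
            print(report)
```

From there, each `(model, id, pressure)` tuple would be what gets associated with a plate read in the database.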
 
Furthermore, I really like the idea of the DB being a "brain" that can help us filter out the plates that are worthy of our review. A plate I haven't seen before or recently, one that has been going by at odd hours, one that I've tagged, etc., would be ones I'd want to see immediately.

Absolutely agree and would love to hear any further suggestions from anyone for how this should be implemented. I see this as a traffic intelligence tool, and that's a critical piece in order to understand and archive all traffic passing by your property.
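To make the "brain" idea concrete, here's a rough sketch of the kind of filter described above, written against a hypothetical `sightings`/`tags` schema. The table and column names, the 30-day window, and the odd-hours range are all illustrative, not the actual ALPR DB's design.

```python
import sqlite3
from datetime import datetime

ODD_HOURS = set(range(0, 5))  # 00:00-04:59 counts as an odd-hour pass

def needs_review(conn, plate, seen_at):
    """True when a read deserves immediate attention: the plate is tagged,
    hasn't been seen recently, or is passing by at odd hours."""
    cur = conn.cursor()
    # Tagged plates always surface immediately.
    cur.execute("SELECT 1 FROM tags WHERE plate = ?", (plate,))
    if cur.fetchone():
        return True
    # Plates with no sighting in the last 30 days count as new or stale.
    cur.execute(
        "SELECT COUNT(*) FROM sightings "
        "WHERE plate = ? AND seen_at >= datetime(?, '-30 days')",
        (plate, seen_at.strftime("%Y-%m-%d %H:%M:%S")),
    )
    if cur.fetchone()[0] == 0:
        return True
    # Anything going by in the small hours gets flagged too.
    return seen_at.hour in ODD_HOURS
```

Each rule is independent, so more could be layered in (speed, direction, repeat-pass frequency) without touching the others.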


Expecting the OCR to be 100% correct is unrealistic.

This is true, and even Flock Safety and Motorola have a material number of incorrect reads. The percentage of misreads, however, is relatively low. We are challenged by the fact that the model we all use was trained on the minimal data available at the time.


At a minimum, if there were a way to get a report of plates that have few reads but that are close to other plates that have many reads, we could have a way to fix these ourselves.

There's a reason it's an oft-requested feature.

Like I mentioned, I experience the exact same behavior with a large number of misreads. It's bothersome to me too. I've just been apprehensive about implementing the requested functionality because it feels principally incorrect.

I'd like to start an initiative to prepare the data and improve the model, but if that seems like it's going to take a while, I think I might have a sensible solution.


Instead of hardcoding "redirects", fuzzy match on consistently misread plates and assign them for secondary inspection. This can use OpenAI or Gemini, both of which are highly capable and accurate: when a plate read comes in that's similar to one seen many times in the DB, it gets flagged for secondary confirmation. I think this is a solid middle-ground solution that doesn't meddle with the overall flow of the whole camera system.


Should cost nearly nothing unless you have a massive amount of traffic passing by your cameras. There's really no alternative as of Oct 2025. Hopefully, we will have something self-hostable in the future, but for now, this is definitely the best approach.
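The local fuzzy-match step before any API call could look roughly like this; pure stdlib, with the 0.75 similarity ratio and the 5-sighting floor as illustrative thresholds rather than tuned values:

```python
from difflib import SequenceMatcher

def flag_for_confirmation(read, known_counts, min_sightings=5, threshold=0.75):
    """Return established plates this read may be a misread of.

    known_counts maps plate -> number of prior sightings. A read is flagged
    when it is close to, but not identical to, a plate seen many times.
    """
    candidates = []
    for plate, count in known_counts.items():
        if plate == read or count < min_sightings:
            continue  # exact matches and rarely-seen plates can't anchor a fix
        ratio = SequenceMatcher(None, read, plate).ratio()
        if ratio >= threshold:
            candidates.append((plate, ratio))
    # Best candidate first, ready for secondary confirmation or review.
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```

Only reads that come back with candidates would be sent out for the secondary AI confirmation, which is what keeps the cost near zero.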
 
I think the band-aid approach is still very valuable as a built-in feature to correct misreads. OCR will always give some inaccurate readings, especially when we're trying to do this in varying weather and lighting conditions. And given the challenges with CPAI's slow development, maybe there isn't a way to get better OCR models anytime soon.

As for the images to be cropped: since you already have the coordinates for the bounding box, couldn't that region just be cropped and stored via some additional code, maybe with OpenCV? In the live view the plate is already cropped; that's what made me think saving it as the training image wouldn't be that difficult.
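Something like this is what I had in mind. An OpenCV frame is just a NumPy array, so given the detector's bounding box the crop itself is a slice; the `(x, y, w, h)` box convention and the names here are assumptions, not the ALPR DB's actual API, and `cv2.imwrite("plate.jpg", plate_img)` would then persist the result for training.

```python
import numpy as np

def crop_plate(frame, box):
    """Crop a plate region out of a frame given an (x, y, w, h) box."""
    x, y, w, h = box
    # Clamp so a box touching the frame edge doesn't wrap to negative indices.
    x, y = max(x, 0), max(y, 0)
    return frame[y:y + h, x:x + w]
```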
 
I think the band-aid approach is still very valuable as a built-in feature to correct misreads. OCR will always give some inaccurate readings, especially when we're trying to do this in varying weather and lighting conditions. And given the challenges with CPAI's slow development, maybe there isn't a way to get better OCR models anytime soon.

As for the images to be cropped: since you already have the coordinates for the bounding box, couldn't that region just be cropped and stored via some additional code, maybe with OpenCV? In the live view the plate is already cropped; that's what made me think saving it as the training image wouldn't be that difficult.

I'll start with trying the secondary confirmation to google/openai. If that isn't sufficient (it really should be), I'll build the direct forwarding functionality.

As far as the training data goes, the images needed for the OCR are actually even more cropped in than what the license plate model marks. See image below. It crops all the way into the characters. So the crops I have now are not the correct ones. Additionally, we need the actual characters in a special annotation format. The images collected so far can absolutely be used to train the OCR, but a significant amount of transformation/cleaning/preparation is required.

The necessary images look like this:
[attached: three examples of character-tight plate crops]
 
I have been working on a totally new ALPR module based on YOLO11. I have been running it in parallel with the current ALPR module and am seeing better results; below is one example of a plate that the old ALPR could not read, and that Plate Recognizer could not read either. The new module uses ONNX models that I trained. The ONNX models are fast and can run on Intel iGPUs. I am planning on adding vehicle color, make/model, and also state recognition to this new module. I am thinking of creating a patch for CP.AI that will allow users to install new modules via a GitHub link.

[attached: screenshots of example plate reads from the new module]
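For anyone curious what feeding a frame into an ONNX YOLO detector involves, a rough sketch follows. The 640x640 NCHW float32 input is a typical YOLO11 export default, not a confirmed detail of this module, and the onnxruntime lines are left as comments since the model file isn't published.

```python
import numpy as np

def to_yolo_input(frame, size=640):
    """Pad a BGR frame to square, resize, and scale to a (1, 3, size, size)
    float32 tensor in [0, 1], the usual YOLO ONNX input layout."""
    h, w = frame.shape[:2]
    side = max(h, w)
    canvas = np.zeros((side, side, 3), dtype=frame.dtype)
    canvas[:h, :w] = frame                 # top-left letterbox padding
    # Nearest-neighbour resize via index sampling (avoids a cv2 dependency).
    idx = np.arange(size) * side // size
    resized = canvas[idx][:, idx]
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[np.newaxis]                 # add the batch dimension

# Inference would then run through onnxruntime, e.g. with the OpenVINO
# execution provider for Intel iGPUs:
# import onnxruntime as ort
# sess = ort.InferenceSession("plates.onnx",
#     providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"])
# outputs = sess.run(None, {sess.get_inputs()[0].name: to_yolo_input(frame)})
```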
 
I'll start with trying the secondary confirmation to google/openai. If that isn't sufficient (it really should be), I'll build the direct forwarding functionality.

As far as the training data goes, the images needed for the OCR are actually even more cropped in than what the license plate model marks. See image below. It crops all the way into the characters. So the crops I have now are not the correct ones. Additionally, we need the actual characters in a special annotation format. The images collected so far can absolutely be used to train the OCR, but a significant amount of transformation/cleaning/preparation is required.

The necessary images look like this:
[attached: three examples of character-tight plate crops]

Thanks Charlie, I wasn't aware that it needed to be that tight. This type of work seems like it'd be ideal for AI vision to do the cropping! :)
 
I have been working on a totally new ALPR module based on YOLO11. I have been running it in parallel with the current ALPR module and am seeing better results; below is one example of a plate that the old ALPR could not read, and that Plate Recognizer could not read either. The new module uses ONNX models that I trained. The ONNX models are fast and can run on Intel iGPUs. I am planning on adding vehicle color, make/model, and also state recognition to this new module. I am thinking of creating a patch for CP.AI that will allow users to install new modules via a GitHub link.

[attached: screenshots of example plate reads from the new module]

WOW! Can't wait to see this. The vehicle data (make, model, year, color) would be super helpful!
 
Absolutely agree and would love to hear any further suggestions from anyone for how this should be implemented.

I've just been apprehensive about implementing the requested functionality because it feels principally incorrect.
I don't feel it's "principally incorrect" to use additional information from the database.

The recognition can always be changed to a better model, a more accurate model, or even a different service (e.g. PlateRecognizer). That change should be independent of what additional steps the database can do on ingestion of the plate.

On plate read, I think you should always keep what characters were sent in with the original read.

For plate reads, especially those that didn't match any existing plates, do an additional "fuzzy" match, and if there are potential matches, flag that record.

As for what to do with that specifically, at a minimum we should be able to view those flagged plates and be able to pick the corrected plate from the fuzzy match(es). We should also be able to revert a plate back to its originally read characters if this was done in error.

For a further enhancement, if I choose to enable a new "autocorrect" preference, have it pick the best fuzzy match automatically. Again, since the originally read characters are still stored, I should be able to bring up the "review" page and be able to revert it to the original read or a different fuzzy match.

Would something like that be helpful and relatively easy to implement?
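To make that workflow concrete, the record shape it implies might look like the sketch below: the original OCR characters are never overwritten, corrections live in a separate field, and revert just clears it. Field names are illustrative, not the actual DB schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlateRead:
    original: str                     # characters as read; never overwritten
    corrected: Optional[str] = None   # fuzzy-match pick or manual fix
    candidates: list = field(default_factory=list)  # flagged fuzzy matches
    autocorrected: bool = False       # True when the autocorrect pref applied it

    @property
    def display(self):
        """What the UI shows: the correction when present, else the raw read."""
        return self.corrected or self.original

    def correct(self, plate, auto=False):
        self.corrected, self.autocorrected = plate, auto

    def revert(self):
        """Undo a correction; the original read is always still here."""
        self.corrected, self.autocorrected = None, False
```

Since `original` survives every correction, the review page can always offer "revert to original" or switch to a different candidate, whether the fix was manual or automatic.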