Full ALPR Database System for Blue Iris!

Can someone please confirm what these two buttons do (and do not do)?
View attachment 238194

If I'm guessing correctly:
  1. Correct Plate updates the plate in the database, but does NOT feed anything back to the AI model.
  2. Confirm AI Label tells the AI model, "Yes, you got this correct," where "this" is the AI read, i.e. the text in the orange box.
  3. If one uses Correct Plate first, and then clicks Confirm AI Label, that would not do what we want, because Confirm AI Label does not use the corrected data, only the original read.
Can anyone confirm :p if I've got that correct :p:p?

If yes, is there a way to feed the corrected plate back to the model, e.g. "You got it wrong; here's what it actually is?" Or am I wrong on #3? If I'm wrong, perhaps that button could change to Correct AI Label if Correct Plate has previously been used.
As I understand it, when you correct a plate, it is also marked as confirmed. You'll see it turn green, meaning it is confirmed as correct and can be used by the AI for training.
 
Ah, I did not notice the Confirm button turning green after correction. So this software is smarter than I thought!

Now if we could only get multiple user logins . . . :p
 
@algertc is apparently pretty busy, but I too hope we see some of the outstanding items addressed in the near future. I guess we were spoiled by the rapid pace early on.
 

I have indeed had a lot going on for the past few months. Checking on the thread, it seems like there’s been a lot of Blue Iris config-related discussion and troubleshooting. Anything else that I’ve missed?

Any specific items ^ that you’re thinking of, @VideoDad? I did also see the mention about multiple user accounts.

I can make some changes soon. I’m curious to see how well Claude Code might do with some of the Featurebase requests. I’ve had this in mind because it seems like a perfect use case, and it would be fantastic if I could send the agent to crank those out on its own and just review pull requests.
 
Welcome back! IMNSHO it's mature enough to warrant exposure to users other than the system admins, which is why I think multiple user logins would be useful. I'd like to give my office neighbors access to the feed, so they can tell me to add/hide/ignore their known plates, etc.

Thank you for your incredible project!
 
My biggest annoyances are:
1. The system jumps back to the login screen constantly when editing plates, searching, etc.
2. Date and time filters don't correctly account for local time. They seem to calculate a UTC date or hour but then don't show the correct local times for that date, and hours can be off for events before and after a daylight saving change (quick example after this list).
3. Notifications need better rules before they're really useful.
4. Login should have a "keep me logged in" option like UI3.
5. The field for editing a plate is too short, especially on mobile.
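To illustrate the daylight saving part: the UTC offset itself changes across the DST boundary, so converting timestamps with a fixed offset ends up an hour off on one side. A quick demo with GNU date and made-up timestamps:
Code:
# Same calendar day, either side of the 2025-03-09 US DST change (made-up timestamps).
TZ=America/Los_Angeles date -d '2025-03-09 09:30:00 UTC'   # 01:30 PST (UTC-8)
TZ=America/Los_Angeles date -d '2025-03-09 11:30:00 UTC'   # 04:30 PDT (UTC-7)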

I think most of these are entered into the roadmap/bug database already.
 

Correct on Correct Plate. Confirm AI Label means you've fixed the incorrect read. It doesn't do anything else besides setting that flag in the database.

There is no automated function to retrain the AI model--this is difficult to do.
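
If you want to see which reads have that flag set, you can query the database directly. The table and column names below are just guesses to show the idea; the actual schema may differ:
Code:
# Hypothetical names: "plate_reads" and "validated" may not match the real schema.
docker compose exec db psql -U postgres -d postgres \
  -c "SELECT * FROM plate_reads WHERE validated = true LIMIT 10;"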
 
The computer I use to host this has had some catastrophic failures. Is anyone able to send like 20-50 images that I can use for testing? It would also be helpful to have the request BI sends, in whatever new BI6 format has been causing problems.
 
Ah yes, I keep forgetting that the database and the actual LPR are two completely different modules, and that the communication is one-way.
 
Do you want a set of .jpgs and their matching .dat files?
 
I don’t even need the .dat files. Just the images.

A sample from BI would be great too. You could get this either by having BI email you the same payload it sends to this app, or by using the write-to-file option.
Sorry; I don't understand what the "sample from BI" constitutes. The only data I have is the .jpgs with the burnt-in plates:

View attachment 238517

and their accompanying .dat files.
 

What I mean by this is the HTTP request that BI sends. You could see this by creating an alert notification in BI with the same content that we use for the ALPR database, but selecting email or something else instead of the HTTP request. Emailing it to yourself will show you what BI actually sends to this app.

If it’s too much trouble, don’t worry. I’m mostly asking for those who said they had issues after upgrading. But yes, just a zip of some images would be great.
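
Another generic way to capture it, if email is a hassle: listen on a spare port with netcat and temporarily point a copy of the alert's HTTP request at that port. The port number is arbitrary:
Code:
# Print whatever BI sends to this port (Ctrl+C to stop).
# Flag syntax varies by netcat variant; on some systems it's just "nc -l 8081".
nc -l -p 8081
Then set the alert's URL to http://<host>:8081/ and trigger a test alert; the full request, headers and body, will show up in the terminal.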
 
My plan is to more or less start from scratch and use Claude Code to remake the whole app in parallel in a more principled way, while integrating all of the requested features. I’m pretty excited and hoping this can be done this weekend.

I might also make a publicly accessible staging site that can show live versions for each feature added by the AI and allow you to tinker with it and approve or deliberate before updating your own system.
 
Would be greatly helpful if anyone is willing to send me a dump of their database (ideally more than one person) so I can ensure this will migrate smoothly. I am going to upgrade Postgres to version 16. With this, there will be no more migrations.sql bullcrap and a proper ORM will be used. My plan is to add a new empty pg16 database container alongside the current database, use the ORM to apply the correct schema, and then copy just the data (instead of restoring the whole schema) from the old database over to this new pristine database. I'm like 95% sure that this should work with no issue for everyone running this, regardless of what state their database is in, because the differences seem to mostly be with constraints, functions, and other secondary things - the actual columns and types should all be the exact same, otherwise the app wouldn't be working.

By doing this, everyone will have the exact same database configuration that is all type checked and can be tested by the app, your data will transfer over, and the schema.sql and migrations.sql will be deleted, leaving all of the headache caused by that behind. Any future updates to the database will be tracked by the ORM, which is idiot-proofed and isn't supposed to let me do anything that will break it, and will apply the changes for you and verify that they're synced and correct.
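
Roughly, that copy step would look something like this ("db16" is just a placeholder name for the new Postgres 16 service, and -T keeps Docker from mangling the binary stream when piping):
Code:
# Sketch only: "db16" is a hypothetical service name for the new Postgres 16 container.
# Dump data only from the old database and restore it straight into the new one.
docker compose exec -T db pg_dump -U postgres --data-only -Fc postgres \
  | docker compose exec -T db16 pg_restore -U postgres -d postgres --disable-triggers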

You can run this to get a data-only dump of your db:
Code:
docker compose exec db pg_dump -U postgres --data-only -Fc postgres > alpr_data.dump

Or in pgAdmin:
  1. Right-click the database name (probably postgres) in the left tree
  2. Click Backup...
  3. In the dialog:
  • Filename: pick a save location, name it something like alpr_data.dump
  • Format: select Custom (this is the -Fc equivalent)
  • Go to the Data/Objects tab (or Dump options depending on pgAdmin version)
  • Toggle Only data to Yes (this is --data-only)
  • Leave everything else default
  4. Click Backup
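
Either way, you can sanity-check the dump before sending it; pg_restore can list the archive's contents without restoring anything (this needs the Postgres client tools wherever you run it):
Code:
# Confirm the custom-format dump is readable and not empty.
pg_restore --list alpr_data.dump | head -n 20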

You don't need to save these or do anything manually when actually installing the update. I am just asking for these so I can verify that everything transfers over properly with this approach. TIA
 
@VideoDad RE: complex automations/notification rules - doing this well is actually pretty complicated. Home Assistant has a pretty advanced rules engine from what I see. If you could have every recognition sent to an MQTT topic with all of the metadata that comes from ALPR DB, plus the ability for HA to query for things like known plates, tags, etc. if needed, would this be adequate?
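For example, something like this would let you (or HA) watch the payloads as they arrive; the broker address and topic name here are only illustrative:
Code:
# Hypothetical broker/topic: adjust to whatever the integration actually ends up using.
mosquitto_sub -h localhost -t 'alprdb/recognitions/#' -v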
 
Given I don't have Home Assistant, it would mean adding yet another thing to the box to get intelligent notifications. BI is pretty dumb in this respect. I was hoping that it could be a module in ALPR DB directly at some point.
 
Once the foundational updates are complete, I can try to give some different AI models a spec and see if any of them come back with something workable. I'll send the spec when I get there so you can see how it goes about doing it.