Full ALPR Database System for Blue Iris!

I can also try and send a db dump late tonight if you need another. Happy to help support this project!

Since you are starting from scratch a few thoughts I’ve had:
  • Ability to handle when a car gets a new plate. Would be nice to see all reads for that car and its prior plates. Basically if I search/view ABC123, also return records for XYZ123.
  • Attach common misreads to a saved plate and auto-correct if the misread is captured. We have a few cars with a “D” on the plate that always gets read as an “O”. I try to do manual corrections but fall behind fast.
  • Maybe it’s there already, but plates/strings to ignore/not save to the db. Sometimes the side of the garbage truck gets read as a plate.
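
The misread idea could be modeled pretty simply, as a rough sketch: keep a per-plate mapping of known misreads and rewrite a read to its canonical plate before saving. The file name `misreads.txt` and the mapping format here are made up for illustration, not anything from the project:

```shell
# Hypothetical sketch: misreads.txt maps "MISREAD CANONICAL", one pair per line,
# e.g. a saved plate ABD123 whose "D" usually gets OCR'd as an "O".
printf 'ABO123 ABD123\n' > misreads.txt

correct_plate() {
  # Print the canonical plate if this read is a known misread, else the read itself.
  local read="$1" fixed
  fixed=$(awk -v p="$read" '$1 == p { print $2 }' misreads.txt)
  echo "${fixed:-$read}"
}

correct_plate ABO123   # -> ABD123 (auto-corrected)
correct_plate XYZ789   # -> XYZ789 (no mapping, unchanged)
```

The same lookup table, with an extra "ignore" flag, could also cover the don't-save-this-string case (the garbage truck problem).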

No idea the reality of these but thought I’d toss the ideas out there! These might have been mentioned before but this thread is getting too long to scour!

Thanks!


 
It would be greatly helpful if anyone is willing to send me a dump of their database (ideally more than one person) so I can ensure this will migrate smoothly. I am going to upgrade Postgres to version 16. With this, there will be no more migrations.sql bullcrap and a proper ORM will be used. My plan is to add a new empty pg16 database container alongside the current database, use the ORM to apply the correct schema, and then copy just the data (instead of restoring the whole schema) from the old database over to this new pristine database. I'm like 95% sure this will work with no issue for everyone running this, regardless of what state their database is in, because the differences seem to mostly be with constraints, functions, and other secondary things; the actual columns and types should all be the exact same, otherwise the app wouldn't be working.

By doing this, everyone will have the exact same database configuration that is all type checked and can be tested by the app, your data will transfer over, and the schema.sql and migrations.sql will be deleted, leaving all of the headache caused by that behind. Any future updates to the database will be tracked by the ORM, which is idiot-proofed and isn't supposed to let me do anything that will break it, and will apply the changes for you and verify that they're synced and correct.
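
For the curious, the data-only copy described above might look roughly like this. This is a sketch, not the actual update script: the service names `db` and `db16`, and the `postgres` user/database, are assumptions standing in for whatever the real compose file uses.

```shell
# Sketch: stream a data-only dump from the old Postgres container straight
# into the new, empty pg16 container whose schema the ORM already applied.
# Run from a POSIX shell (WSL/Git Bash on Windows); PowerShell pipes can
# mangle the binary custom-format stream.
docker compose exec db pg_dump -U postgres --data-only -Fc postgres \
  | docker compose exec -T db16 pg_restore -U postgres -d postgres --data-only --disable-triggers
# -T disables the pseudo-TTY so the pipe carries binary data cleanly;
# --disable-triggers sidesteps foreign-key ordering problems during the load.
```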

You can run this to get the data only dump of your db:
Code:
docker compose exec db pg_dump -U postgres --data-only -Fc postgres > alpr_data.dump
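
One caveat for Windows users: older PowerShell re-encodes output on `>` redirects, which corrupts the binary custom-format dump. A workaround (same assumed service/user names as above) is to write the file inside the container and copy it out, so no shell redirection ever touches it:

```shell
# Write the dump to a file inside the container with pg_dump's -f flag...
docker compose exec db pg_dump -U postgres --data-only -Fc -f /tmp/alpr_data.dump postgres
# ...then copy it out to the host with docker compose cp.
docker compose cp db:/tmp/alpr_data.dump ./alpr_data.dump
```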

Or in pgAdmin:
  1. Right-click the database name (probably postgres) in the left tree
  2. Click Backup...
  3. In the dialog:
  • Filename: pick a save location, name it something like alpr_data.dump
  • Format: select Custom (this is the -Fc equivalent)
  • Go to the Data/Objects tab (or Dump options depending on pgAdmin version)
  • Toggle Only data to Yes (this is --data-only)
  • Leave everything else default
  4. Click Backup

You don't need to save these or do anything manually when actually installing the update. I am just asking for these so I can verify that everything transfers over properly with this approach. TIA
DM Sent!
 
Lately I've noticed that the ALPR Database quits working every few days. Requests to it from other machines time out, as does a request from the Windows 11 Pro machine it's running on, which is also the same machine running BI and CPAI, both of which are still running.

Stopping and starting the Container fixes it.

But today, I couldn't restart the Container, because it wasn't even showing in Docker Desktop. There was a pending update to 4.62.0 anyway, so I asked it to update/restart, and now it's running again, but I can't tell what the root cause of the failure was. Memory leak within the container? I wish I'd screen capped before restarting Docker Desktop, but here's what it looks like right now, less than 5 minutes after a Docker restart:

[attached image: 1772152676521.png]


I'll check it again over the next few days until it dies again. What does everyone else's look like? Host machine is i7-6700/3.40GHz w/32 GB of RAM.
 
Actually, I just discovered that during today's failure, while CPAI itself was running, ALPR was not:

Code:
07:39:48:Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (...ad51a8) ['Found DayPlate']  took 393ms
07:39:48:Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (...c69e9c) ['Found DayPlate']  took 558ms
07:39:48:Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (...89d4b6) ['Found DayPlate']  took 718ms
07:39:48:Response rec'd from Object Detection (YOLOv5 .NET) command 'custom' (...010d3d) ['Found DayPlate']  took 802ms
07:39:55:Connection id "0HNJAJFH6M2SM", Request id "0HNJAJFH6M2SM:00000008": An unhandled exception was thrown by the application.
07:40:05:Module ALPR has shutdown
07:40:05:ALPR_adapter.py: has exited
16:43:31:Update ALPR. Setting AutoStart=true
16:43:31:Restarting License Plate Reader to apply settings change
16:43:31:
16:43:31:Module 'License Plate Reader' 3.3.4 (ID: ALPR)

I just restarted ALPR and YOLOv5, and now I'm getting plates again.

Does the ALPR Database container barf if the ALPR module quits in the middle of a call?
 
Hello everyone

I created this project as an alternative to the super expensive options from PlateMinder and Rekor. This still depends on your own CodeProject or DeepStack AI, but offers a nice all-in-one solution to actually use and make sense of the data, which is half the point of having the AI read the plates to begin with. It has been working great for me so far, really huge upgrade, so I wanted to share it.

I know there was a NodeRed app created a while ago that had some of this functionality. I took some inspiration from that and tried to bring it to the next level.

Would love to hear if anyone tries it out.

Project link
Nice idea pulling everything into one place. The big pain with ALPR setups is not detection, it is actually storing, searching, and making the data usable later without paying for a cloud service. If this keeps it local and lets you query plates quickly or trigger alerts without a ton of manual glue, that is a huge win. Curious how it handles larger databases over time and whether performance stays decent once you have months of reads stacked up.
 
I have AI alerts saved to my boot drive (C) instead of my video drive (V) for now. I've been running for about a year, starting in February 2025 (with some downtime when things weren't working), saving LPR alerts from 2 cameras at first and increasing to 4 over time. Here's my Alerts folder today:

[attached image: 1772386227910.png]


The ALPR Database dashboard is pretty much instantly responsive to any query, except in the cases where it bogs down for reasons unknown, and its container needs to be rebooted (see my post above).

I suppose I ought to move that back to a dedicated folder on the V drive, as my boot drive is a small SSD that's getting full.

Is there a way to move all the data so BI and ALPR Database can still see it?
 
I'm on Windows, not having any issues with ALPR database crashing.
@VideoDad Which version of Docker are you running? I'm guessing something other than 4.62? @TheWaterbug what version are you running?

Seems like newer Docker Desktop versions added a Resource Saver Mode. I haven't tested whether this fixes it yet, but to start:

Docker Desktop → Settings → Resources → Advanced and then Turn OFF Resource Saver Mode

Then also update the settings-store.json
C:\Users\XXXXXX\AppData\Roaming\Docker\settings-store.json

Code:
{
"AnalyticsEnabled": false,
"AutoStart": true,
"DisableUpdate": true,
"DisplayedOnboarding": true,
"EnableCLIHints": true,
"EnableDockerAI": false,
"InferenceCanUseGPUVariant": true,
"LastContainerdSnapshotterEnable": 1744875473,
"LicenseTermsVersion": 2,
"OpenUIOnStartupDisabled": true,
"SettingsVersion": 43,
"ShowInstallScreen": true,
"SilentModulesUpdate": false,
"UpdateInstallTime": 0,
"UseContainerdSnapshotter": true,
"UseResourceSaver": false,
"WslUpdateRequired": false,
"autoPauseTimedActivitySeconds": 0,
"autoPauseTimeoutSeconds": 0
}

Will update once I've tested this for a few hours/days
 
Making good progress so far. Lot of stuff hardened, completely fresh new DB with query builder, new auth with multiple users and roles, tons of data fetching stuff fixed, and many other improvements.

Agents are requiring more supervision than I initially expected, and I have a bunch of other tasks I need to get done before tomorrow, so this won't be done today.

I also found two different libraries that might be able to solve the hard part of the automation rule builder @VideoDad brought up.

Going to redo a bunch of UI too. Might use this:
Overkill for this but really cool.
 
No shot. Still crashing for me after a few hours, then I can only access it locally on the PC via localhost:9999. After that Docker just locks up and I have to task-kill and restart it.
 
I was running a Docker Desktop version older than 4.62.0, but I just updated to 4.62.0, which is what prompted the restart of the container. :idk: