Full ALPR Database System for Blue Iris!

So I just tried installing this using the script, and everything seems to have completed correctly, except that it keeps throwing "incorrect password" errors when I try to log into the app. I triple-checked that the password is correct. I even deleted/reinstalled the container (again using the script) with a very simple password to make sure. Am I missing something here?

Do you have an auth directory?
 
Sorry I am a complete newb when it comes to docker. I just followed the instructions on the github to install on the BI pc. What is an auth directory?
 

Take a look at your docker compose file; you should have lines in there that look like this:

Code:
volumes:
  - app-auth:/app/auth
  - app-config:/app/config
  - app-plate_images:/app/storage

volumes:
  db-data:
  app-auth:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./auth

Check if this directory exists and has the correct permissions.
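A quick host-side sanity check for that, sketched in Python (the paths are assumed relative to the directory containing docker-compose.yml, and the names are taken from the volume definitions above; adjust if yours differ):

```python
import os

def check_dirs(paths):
    """Return {path: (exists, writable)} for each bind-mount source."""
    return {
        d: (os.path.isdir(d), os.path.isdir(d) and os.access(d, os.W_OK))
        for d in paths
    }

# Host-side directories the compose file binds into the container;
# adjust the list if your compose file uses different device paths.
for d, (exists, writable) in check_dirs(["./auth", "./config", "./storage"]).items():
    print(f"{d}: exists={exists} writable={writable}")
```

Note that on Windows `os.access` is only a rough check; NTFS ACLs can still block the Docker engine even when it reports writable.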



... it's step 3 if you were installing manually:

Quick Start:

  1. Ensure you have Docker and Docker compose installed on your system.
  2. In a new directory, create a file named docker-compose.yml and paste in the content below, changing the variables to the passwords you would like to use.
  3. Create three new directories / folders in this directory called "config", "storage", and "auth". These will ensure that your data is saved separately and not lost during updates.
 
OK, thank you. I verified all of those directories exist and have 'admin' permissions (the administrator account for the PC), the same user I installed Docker and the ALPRDB script with. The docker-compose.yml matches what the manual install instructions say, with the correct passwords. What worked for me was stopping the container, deleting it (from Docker Desktop), deleting all folders in the alprdb directory, and rerunning the script. That seems to have gotten everything working as expected.

Thanks again for all the selfless work by the devs and forum members... I very much appreciate your help!
 
Quick question: how do I ensure Docker and the container will restart after a reboot?

I saw in the Docker Desktop settings a checkbox for "start Docker Desktop when user logs in", but that didn't sound like 'starting as a service', so I'm not sure if it would work.
 

I am not sure how Windows does it, but in your docker compose file you need to have the restart line:

Code:
services:
  app:
    restart: unless-stopped
 
I do have "unless-stopped" in my compose file. Not sure if that has to be "always", and I need to somehow get docker desktop started as a service.

On another note, I got some recognized plates with it running, but they aren't going into the database and the logs show the error:
'[ERROR]Error processing request: error: null value in column "id" of relation "plate_reads" violates not-null constraint'

Not sure what's going on there. I am using the Blue Iris REST macro as shown in the guide.
...edit: I just ran the update script after reading the troubleshooting section in the docs. Hopefully that gets the plates flowing to the database... nope, alerts are still not going to the database. Here are more logs that may help with troubleshooting:

Code:
9/10/2025, 1:00:19 PM  [INFO] POST /api/plate-reads
9/10/2025, 1:00:19 PM  [INFO] Received plate read data: [object Object]
9/10/2025, 1:00:19 PM  [INFO] Database connection established
9/10/2025, 1:00:19 PM  [INFO] [FileStorage] Successfully saved image
9/10/2025, 1:00:19 PM  [ERROR] Error processing request: error: null value in column "id" of relation "plate_reads" violates not-null constraint
 

You might be having the same issue as this user.
 
Weird, my issue was slightly different, as I did see a successful DB connection in the logs. I've actually got it fully working now after several attempts to remove and reinstall the container. Some observations I made along the way...

1) The first issue I ran into, the admin password not working, I'm fairly certain was due to using a '$' in the password during the setup script. To get past it, I had to use a very basic password, then, after logging in to the ALPRDB web UI, change the password to a proper one (containing the '$'). Then it works as expected. Twice in a row when using the complex password in the script, I got the same result. There must be some issue with passing a '$' as part of a password to that script.
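For what it's worth (a guess; I can't confirm what the ALPRDB script does internally): a '$' is special both to PowerShell, which expands `$word` inside double-quoted strings, and to Docker Compose, which treats `$VAR` as a variable reference. If the password ends up in the compose file, any literal '$' has to be doubled:

```yaml
environment:
  # Hypothetical variable name, for illustration only. In a compose
  # file each literal "$" must be written as "$$"; otherwise compose
  # performs variable substitution on "$ecret" (usually yielding an
  # empty string):
  ADMIN_PASSWORD: "my$$ecret"   # the container sees "my$ecret"
```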

2) The "[ERROR]Error processing request: error: null value in column "id" of relation "plate_reads" violates not-null constraint" error happens every time after finishing the install script and BI sends an alert. Running the ALPRDB update.ps1 script (and selecting "release version") while it was installed in a directory under 'My Documents' didn't seem to fix it. After a complete wipe of the container, volumes, and files, then reinstalling ALPRDB in a root folder (c:\alprdb\), the same error came up. But this time, running the update.ps1 script and choosing "latest development release" got the DB to start working as expected.

So now everything seems to be working, but when I reboot the PC it doesn't restart itself. Circling back to the subject of 'running ALPRDB as a service'... any tips/reading are greatly appreciated (in the context of using Docker Desktop on Windows, i.e. a newbie install).
 
I asked about running Docker as a service in Windows a while ago, but wasn't able to find a solution. If you figure something out, let me know.
 
Roger that. I tried using the manual install method, and it works, but same issue with it starting as a service. I randomly tried enabling the 'docker desktop service' with autostart in windows services, but that didn't work either. Since my BI pc is headless, alprdb becomes pretty much useless to me without being able to start it as a service.
 

Man.. it's 2025 and this is still an issue? (former MCSE from the 90s).
 
I do have Linux boxes on my LAN that could run this, but I'd rather keep it all on the same PC. My HA RPi seems inappropriate for this (especially running HassOS), my NAS is off limits for this sort of thing, which leaves the Pi 5 I use only for ham radio stuff... and if I put ALPRDB on that, it stops working whenever I take that Pi out for a park activation or similar.

I was hoping for a way to tweak this to work in Windows. I tried a bunch of really hacky stuff, but nothing has worked so far. Seems it should be doable with a batch script, no?
 
Seems it should be doable with a batch script no?
There's a "Start Docker Desktop when you log in" option in the Docker Desktop settings that works well.

 
As I found, that doesn't seem to work until a user is manually logged in. So I have to RDP into the BI machine, which logs in a user and then starts the Docker engine.

Again, if anyone knows of a way to do this automatically in Windows (short of disabling the login password), several of us would appreciate it.
 
Poor recognition is typically a camera or model issue, with the latter usually being the problem.

With all of the plate data captured and sent anonymously I was hoping for an improved license plate model.

I've gotten lots of images for the plate detection model, which just identifies that there is a license plate in the frame, but because of the state of CPAI and the slow updates, almost no images for the actual OCR. Mike added the necessary data for character recognition a while ago, but it took quite a while to become available for update through the CodeProject UI. Furthermore, lots of users never really look at the CPAI web UI unless they're tinkering, so the lag in update distribution is heavy.

That being said, the images from the plate detection model are still quite valuable and absolutely could be used to make a significant improvement in the OCR. To do this, every single image would have to be annotated with the correct characters and a bounding box (pixel coordinates within the image) around the actual numbers and letters. The images then need to be cropped down to just the characters in the plate using those coordinates in order to train the OCR model. If anyone has suggestions for how to tackle that, or wants to take on that task, I'm all ears. I can deal with training it if we can somehow accomplish that.
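To make the annotation task concrete, here is a rough sketch (the annotation field names are made up for illustration; a real pipeline would use an image library rather than nested lists):

```python
import json

# Hypothetical annotation format: one record per image, with the plate
# text and a pixel bounding box [left, top, right, bottom] around the
# characters.
annotation = json.loads("""
{
  "image": "plate_0001.jpg",
  "text": "456LOB9",
  "char_box": [112, 40, 298, 96]
}
""")

def crop(pixels, box):
    """Crop a 2D list of pixel rows to the bounding box.

    Keeps rows top..bottom and columns left..right, which is the same
    slicing an image library would do under the hood.
    """
    left, top, right, bottom = box
    return [row[left:right] for row in pixels[top:bottom]]

# Tiny synthetic "image": 100 rows x 300 columns of zeros.
image = [[0] * 300 for _ in range(100)]
patch = crop(image, annotation["char_box"])
print(len(patch), len(patch[0]))  # → 56 186
```

Annotation tools like this are the mechanical part; the hard part is the labeling effort itself, which is exactly what a freelancer could be handed along with a spec like the JSON above.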


So @algertc, what's in the next update? Are you working on the AI integration? Or auto correction of plates? I'd also like to put in a reminder request for a method of creating rules for notifications, etc.

When you correct a plate and enable "Correct all occurrences of this plate number", does it also correct future errors as they occur?

Seems like I am making the same corrections over and over again, which suggests that future errors are not being corrected.

If this is the case, I would really like to have this as an option for future releases, understanding that it is possible there could be a real plate that has this corrected version and would be classified incorrectly in the database.


The "forward to correct plate" functionality is one of the most requested features. I understand the frustration with the detections because I experience the same thing, but it just feels wrong to me to add a workaround solution like this instead of addressing the problem where it's coming from. I'm not entirely opposed, but it would be a band-aid solution. It would be much better if we could deal with it either by improving the model or in a more flexible way.

I had been thinking about the notification rules/conditions and how to make them more granular and precise. I think I have a good approach and will focus on that as top priority.

Finishing off the requested MQTT improvements will come with that. Apologies for the absence. I haven't had any time to attend to this in a while. Still quite busy, but I'm going to try to make some improvements.


I think the most valuable improvement, by far, would be improving the computer vision. I'd pay a foreign freelancer to annotate them if someone can advise on how exactly that needs to be done and what instructions they would need to be provided.


Lastly, I've had to deal with some mobile development recently, and have found that the framework used to convert Next.js apps to mobile has improved massively since I last used it. Since I already dealt with all the requirements for PWA compatibility, it's actually fairly simple to package a proper app store app without much refactoring. I likely will do this because it would allow for push notifications and better integration for users who do not use Home Assistant (I don't). It could also enable much richer notifications, especially with the future complex notification rules functionality. A definite nice-to-have.


Lastly lastly, I've always been kind of annoyed that I can't watch my cameras on my TVs without using a clunky browser app on the smart TV. I use Apple TV, and I've always wished that there was a tvOS app for Blue Iris. I'm going to try to make this since I personally want it. If anyone else has Apple TVs, a BI app might be available on the store sometime soon.
 
I'm using PlateRecognizer for OCR and it does a pretty decent job of character recognition. But even that can have issues with a slight change in lighting, an obstruction, change in angle of travel, front vs. rear, etc. Expecting the OCR to be 100% correct is unrealistic.

The big benefit I see with ALPR DB is that we have a large personal database of plates that have already been read by our cameras. If I've had 100 plates go by that are 456LOB9 and one day it sees 45GL089, the fuzzy match logic could easily flag it for review, or even automatically correct the plate, based on a setting, perhaps.
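That fuzzy-match idea can be sketched with nothing but the Python standard library (a toy illustration of the concept, not how ALPRDB actually works; the threshold and read-count cutoff are arbitrary):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def suggest_correction(new_plate, known_counts, threshold=0.5):
    """If a new read closely matches a frequently-seen plate, return
    that plate as a suggested correction, else None.

    known_counts: dict mapping plate -> number of prior reads.
    """
    best, best_score = None, 0.0
    for plate, count in known_counts.items():
        if count < 10 or plate == new_plate:
            continue  # only trust plates with many prior reads
        score = similarity(new_plate, plate)
        if score > best_score:
            best, best_score = plate, score
    return best if best_score >= threshold else None

history = {"456LOB9": 100, "ABC1234": 40, "45GL089": 1}
print(suggest_correction("45GL089", history))  # → 456LOB9
```

A production version would want an OCR-aware distance (treating 0/O, 8/B, 5/S as near-equal) rather than a generic string ratio, which is exactly where the "flag for review vs. auto-correct" setting would come in.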

At a minimum, if there were a way to get a report of plates that have few reads but are close to other plates that have many reads, we would have a way to fix these ourselves.

There's a reason it's an oft requested feature.

Furthermore, I really like the idea of the DB being a "brain" that can help us filter out the plates that are worthy of our review. A plate I haven't seen before or recently, one that has been going by at odd hours, one that I've tagged, etc. would be ones I'd want to see immediately.