Algertc ALPR database and VPNs

Gimmons

Jan 7, 2022
I've been trying to set up algertc's ALPR database to handle the LPR data from Blue Iris, with no luck.

My first setup had the database running on my BI computer. The program seemed to be installed correctly, and I could see jpeg images and thumbnails going to the correct folders in the database directory, but the db program was not displaying the data. When I triggered an alert within BI, I got an "Error: Web: No stream: 500" message.

I then tried setting up the database on a separate computer. Similar results, except the jpegs and thumbnails were not making it to the lpr directory, and the error message changed to "Web: No stream: 1200" (or sometimes 1202).

I'm wondering if this has to do with using Tailscale, which changes the BI computer's IP address. I tried a number of variations: putting the db computer on Tailscale, taking the BI computer off Tailscale. I read through all 51 pages of the main ALPR thread and tried everything that seemed relevant.

So any insight as to how to get it working with a VPN involved would be much appreciated.
 
I would go back to having the ALPR database on the same machine. Verify you have correctly set the API key in the settings.

From another PC on the LAN, confirm you can reach the login screen. You want to use the internal IP of the BI/DB machine (leave the VPN out of the equation).

Then ensure that you are making the API call correctly and with the correct API key. Again, use the internal IP of the BI/DB machine, or even just localhost:3000, since you're calling the DB from the BI machine itself.
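To sanity-check the call outside of BI, you can build the same POST by hand. The /api/plate-reads route comes from the stack traces later in this thread; the "x-api-key" header name and the payload field are assumptions for illustration — check your install's docs for the exact names:

```python
import json
import urllib.request

# Sketch of the API call BI makes to the ALPR DB app. The /api/plate-reads
# route is taken from the stack traces in this thread; the "x-api-key"
# header name and payload field are assumptions, not the app's documented API.
def build_plate_read_request(host: str, api_key: str, plate: str) -> urllib.request.Request:
    payload = json.dumps({"plate_number": plate}).encode()
    return urllib.request.Request(
        f"http://{host}:3000/api/plate-reads",
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

# Build (but don't send) a request against localhost:3000:
req = build_plate_read_request("localhost", "MY_KEY", "ABC1234")
```

Sending it with urllib.request.urlopen(req) from the BI machine should return something other than a connection error if the app is listening.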

Verify the call in the BI logs. Also check the ALPR DB logs.

Post back and we'll help you diagnose the symptoms.
 
I don't use Tailscale, so I'm not sure exactly what's involved, but one thing you should confirm is connectivity from BI to CPAI and from BI to the ALPR DB. Not just ping: verify TCP connectivity, since Tailscale might be looking at layer 4 info.
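A minimal way to test TCP reachability (ping only proves ICMP works; a VPN can pass ICMP but block TCP, or vice versa) is a short socket check. The hosts below are placeholders; 3000 is the ALPR DB app's web port per this thread, and 32168 is CodeProject.AI's default port — substitute your own addresses:

```python
import socket

# Returns True only if a full TCP connection to host:port succeeds,
# which is a stronger check than ping (ICMP).
def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder IPs):
#   tcp_check("192.168.1.50", 3000)   # BI -> ALPR DB app
#   tcp_check("192.168.1.50", 32168)  # BI -> CodeProject.AI
```

Run it from the BI machine against both the LAN IP and the Tailscale IP to see which path is actually open.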
 
I'm still struggling. I've fixed a few errors, so the lpr-app-1 container is receiving an image and data from BI/CodeProject. The log for lpr-app-1 shows an error which seems to be Postgres failing to resolve a conflict. Here's what I think is the relevant bit. I'd love it if someone could tell me "Change x to y and it will work." I feel like it's getting close:

Code:
Error processing request: error: there is no unique or exclusion constraint matching the ON CONFLICT specification
    at <unknown> (/app/node_modules/pg/lib/client.js:535:17)
    at async p (/app/.next/server/app/api/plate-reads/route.js:1:6064)
    at async te.do (/app/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:18:17826)
    at async te.handle (/app/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:18:22492)
    at async doRender (/app/node_modules/next/dist/server/base-server.js:1455:42)
    at async responseGenerator (/app/node_modules/next/dist/server/base-server.js:1814:28)
    at async NextNodeServer.renderToResponseWithComponentsImpl (/app/node_modules/next/dist/server/base-server.js:1824:28)
    at async NextNodeServer.renderPageComponent (/app/node_modules/next/dist/server/base-server.js:2240:24)
    at async NextNodeServer.renderToResponseImpl (/app/node_modules/next/dist/server/base-server.js:2278:32) {
  length: 148,
  severity: 'ERROR',
  code: '42P10',
  detail: undefined,
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'plancat.c',
  line: '870',
  routine: 'infer_arbiter_indexes'
}
 
See if this helps:
 
Looks like the same thing. I know I read that before, but my mind glazed over. I started this thread because I found the first one unwieldy. I'll try it and report. Thanks, VideoDad.
 
I use Tailscale and it shouldn't have anything to do with the ALPR database. Like @VideoDad mentioned, you need to set up the ALPR database app on your LAN using the local IP address of your BI PC.
 
I tried following Janhaus's instructions, running psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql and the equivalent migrations commands inside the database container, and got a message that everything after ...-initdb was a superfluous argument and would be ignored. I uninstalled Docker, deleted everything in my lpr directory, reinstalled Docker and the algertc ALPR files, and am exactly where I was before.

BI is still showing "Web:No stream: 400" errors upon a plate alert, even though the jpegs are getting to the storage folder and are being seen by docker.

A couple of possible red herrings, but maybe they mean something:

1. In the On alert action panel, the Post/payload line begins

{"ai_dump":&JSON,...

Shouldn't &JSON be in quotes like all the other predicates?

2. My lpr-app-1 docker logs include many massive Image strings. I'm guessing these are representations of the jpeg images sent in the alert payload. Should these be in the log files?

3. Although BI and CodeProject use the system time (UTC-8), docker uses UTC. I've had problems in the past where BI and cameras were on different times. Could this be causing the "No stream" errors?

That's all I can think of at the moment. I appreciate your help, gentlemen.
 
1. The JSON should not be in quotes.
2. The path and file names for the images are long; probably not an issue.
3. The timezone shouldn't affect the reading of plates; timestamps are actually sent and stored in UTC.

Can you check your DB logs again? If you're still getting the ON CONFLICT error, that's the problem.

It sounds like the API call is made correctly. It is parsed and the images are written to storage on your DB machine. That's a further indication that the call made it through from BI.

But the next step writes the plate details into the database, and that's apparently where it's failing, because of an incorrect database schema.
 
Here's perhaps the problem. The db1 log shows the following error, apparently repeating with each new jpeg:

Code:
2025-05-24 19:22:02.967 PDT [11662] ERROR:  there is no unique or exclusion constraint matching the ON CONFLICT specification
2025-05-24 19:14:28.945 PDT [11662] STATEMENT:  WITH new_plate AS (
              INSERT INTO plates (plate_number)
              VALUES ($1)
              ON CONFLICT (plate_number) DO NOTHING
            ),
            new_read AS (
              INSERT INTO plate_reads (
                plate_number,
                image_data,
                image_path,
                thumbnail_path,
                timestamp,
                camera_name,
                bi_path,
                confidence,
                crop_coordinates,
                ocr_annotation,
                plate_annotation
              )

So somewhere in these lines:
STATEMENT: WITH new_plate AS (
INSERT INTO plates (plate_number)
VALUES ($1)
ON CONFLICT (plate_number) DO NOTHING
),
there needs to be a unique or exclusion constraint. I keep reading that these terms have to do with tables. What tables? Some commenters say an "index predicate" solves the problem, but I'm not sure what that would look like here.
 
The problem, I think, is that the database schema is different from what the code is expecting.

From that prior post I linked, I'd try those separate db container calls.

Try them one by one and report back if you can get through them without an error.
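For reference, the specific thing the 42P10 error is asking for is a unique (or exclusion) constraint on the column named in ON CONFLICT. A sketch of what that would look like, with the table and column names taken from the logged statement; the constraint name is arbitrary, and you should verify against the project's actual schema.sql rather than running this blindly:

```sql
-- Assumes the "plates" table and "plate_number" column from the logged
-- INSERT; the constraint name is illustrative only.
ALTER TABLE plates
    ADD CONSTRAINT plates_plate_number_key UNIQUE (plate_number);
```

Once a constraint like that exists, ON CONFLICT (plate_number) DO NOTHING has an arbiter index to infer, which is exactly what the infer_arbiter_indexes routine in the error is failing to find.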
 
Just wanted to let you know that I have Tailscale working with the ALPR database just fine. Once you get it going locally, you should have no problem using Tailscale.
 
And just like that, it's working. Thank you, VideoDad, particularly for sticking with me; also PeteJ, Vettster, and prsmith777. I actually ended up learning a lot about Blue Iris, as well as SQL, through this process. You guys were very helpful, and I appreciate it.
 