[tool] [tutorial] Free AI Person Detection for Blue Iris

Tinbum

Pulling my weight
Joined
Sep 5, 2017
Messages
448
Reaction score
126
Location
UK
Have you tried reinstalling .NET 8?
 

chumoface

n3wb
Joined
Mar 26, 2022
Messages
4
Reaction score
1
Location
dc
Have you tried reinstalling .NET 8?
Yes, I also reinstalled the Visual Studio installer, and .NET 6 in addition to 8

AI Tool version 2.2.24.8133 works fine

running
dotnet --list-runtimes

returns
Microsoft.AspNetCore.App 6.0.20 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
96
Reaction score
119
Location
massachusetts

Yes, I also reinstalled the Visual Studio installer, and .NET 6 in addition to 8

AI Tool version 2.2.24.8133 works fine

running
dotnet --list-runtimes

returns
Microsoft.WindowsDesktop.App 6.0.16 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Sorry, missed this.

When I look at the installer code, it is literally calling 'dotnet --list-runtimes' and simply making sure '
Code:
Microsoft.NETCore.App 6.0.
' appears in the list.

It outputs the result of that command to "%TMP%\dotnet.txt", then deletes it when it's done.

Perhaps if your %TMP% environment variable (as opposed to %TEMP%) is not set to a valid folder, that could be a factor.

Can you get to it if you enter %TMP% in a file explorer address bar, or do you get an error?
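The check described above boils down to a substring search on the command's output. Here's a minimal Python sketch of that logic, assuming (as described) the installer only looks for the literal prefix "Microsoft.NETCore.App 6.0." in a temp file it writes and then deletes; this is an illustration of the described behavior, not the installer's actual code:

```python
import os
import subprocess

def runtime_listed(listing: str, needle: str = "Microsoft.NETCore.App 6.0.") -> bool:
    """Return True if the required runtime prefix appears in the listing text."""
    return needle in listing

def check_dotnet(tmp_dir: str) -> bool:
    """Mimic the installer: dump the runtime list to tmp_dir, scan it, delete it."""
    out_path = os.path.join(tmp_dir, "dotnet.txt")
    result = subprocess.run(["dotnet", "--list-runtimes"],
                            capture_output=True, text=True)
    with open(out_path, "w") as f:
        f.write(result.stdout)
    with open(out_path) as f:
        found = runtime_listed(f.read())
    os.remove(out_path)
    return found
```

If %TMP% points at a folder that doesn't exist, the write to `out_path` fails before the check ever runs, which would match the symptom.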

You can force a log file to be created for the install that might help. Run this from an administrative command prompt:

Code:
"C:\PathToSetup\AIToolSetup.2.6.53.exe" /LOG="%TEMP%\AITOOLSETUP.LOG"
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
Guys - I'm getting "deepstack: timeout" on all my alerts, which are ending up in cancelled alerts. My BI 5.5.5.13 x64 and DeepStack are from 2022. How do I find my DeepStack version to post here, and would this script running on a daily basis possibly work? Saw it on reddit:

del C:\DeepStack\redis\*.rdb

del C:\Users\username\appdata\Local\Temp\DeepStack\. /q
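For reference, the two del commands above amount to "delete Redis snapshots and clear the DeepStack temp folder". A hedged Python sketch of the same cleanup, parameterized so you can point it at your own paths (the paths in the reddit snippet are assumptions about a default install, not official maintenance guidance):

```python
import glob
import os

def cleanup_deepstack(redis_dir: str, temp_dir: str) -> int:
    """Delete Redis .rdb snapshots and DeepStack temp files; return count removed."""
    removed = 0
    # Equivalent of: del C:\DeepStack\redis\*.rdb
    for rdb in glob.glob(os.path.join(redis_dir, "*.rdb")):
        os.remove(rdb)
        removed += 1
    # Equivalent of: del ...\Temp\DeepStack\. /q  (files only, leaves subfolders)
    if os.path.isdir(temp_dir):
        for name in os.listdir(temp_dir):
            path = os.path.join(temp_dir, name)
            if os.path.isfile(path):
                os.remove(path)
                removed += 1
    return removed
```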
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
Sorry, meant to say this folder has 80 GB of data - C:\Users\username\appdata\Local\Temp\DeepStack\.
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
I'm also open to updating DeepStack to the latest, as long as it's a good move and it'll work with an old 2022 BI 5.5.5.13, vs. it just being safer to simply stay with an older version
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
96
Reaction score
119
Location
massachusetts
I'm also open to updating DeepStack to the latest, as long as it's a good move and it'll work with an old 2022 BI 5.5.5.13, vs. it just being safer to simply stay with an older version
Deepstack is a dead product and has been for a few years.

Uninstall deepstack, delete it and never look back. Use CodeProject.AI. They keep it up to date and it has a nice web interface for installing updates and components:

The latest version of AITOOL works great with it. Or use the latest version of BI directly with CodeProject and skip AITOOL if you like. It's not as powerful or flexible, but it gets the job done.
 

dohat leku

Getting the hang of it
Joined
May 19, 2018
Messages
329
Reaction score
34
Location
usa
So I have to install CodeProject, then AITOOL. Got it - do you think it would work well with an older version of BI from 2022 - 5.5.5.13? I guess it would, because all BI is doing is simply sending it alerts and awaiting a response, right?
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
96
Reaction score
119
Location
massachusetts
So I have to install CodeProject, then AITOOL. Got it - do you think it would work well with an older version of BI from 2022 - 5.5.5.13? I guess it would, because all BI is doing is simply sending it alerts and awaiting a response, right?
Should work fine with older versions of BI
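Right - both DeepStack and CodeProject.AI expose the same style of HTTP detection endpoint (`POST /v1/vision/detection`): the client sends an image and parses a JSON reply. A minimal sketch of parsing such a reply (the sample below is illustrative data in the shape their published examples use, not a captured response):

```python
import json

# Illustrative /v1/vision/detection response (field names per the
# DeepStack/CPAI published examples; values are made up).
sample_response = json.dumps({
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.92,
         "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "car", "confidence": 0.31,
         "x_min": 0, "y_min": 0, "x_max": 50, "y_max": 40},
    ],
})

def detections_above(response_text: str, min_conf: float = 0.5):
    """Return (label, confidence) pairs at or above the confidence floor."""
    data = json.loads(response_text)
    if not data.get("success"):
        return []
    return [(p["label"], p["confidence"])
            for p in data.get("predictions", [])
            if p["confidence"] >= min_conf]
```

Since the contract is just image-in, JSON-out, the BI version on the sending side matters very little.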
 

whoami ™

Pulling my weight
Joined
Aug 4, 2019
Messages
232
Reaction score
224
Location
South Florida
Is it possible to run multiple instances of CodeProject AI like it is with Deepstack?

I've been using Deepstack all this time running 4 instances at once, but figured CPAI has been around long enough to be a better option. With my first testing, though, I can't see how to run multiple instances of the same model, and with a single instance it built up 100+ images in the queue on the first day and is unusable.
 

Schrodinger's Cat

Young grasshopper
Joined
Nov 17, 2020
Messages
49
Reaction score
22
Location
USA
Is the GitHub monthly/one-time sponsor setup the best way to support this project, or is there a more direct way to support whoever is actively doing the development at the moment? AITool is awesome and I really want to kick a few bucks into the pot in hopes it sticks around.
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
96
Reaction score
119
Location
massachusetts
Is it possible to run multiple instances of CodeProject AI like it is with Deepstack?

I've been using Deepstack all this time running 4 instances at once, but figured CPAI has been around long enough to be a better option. With my first testing, though, I can't see how to run multiple instances of the same model, and with a single instance it built up 100+ images in the queue on the first day and is unusable.
Currently AITOOL handles the queue and only allows one request at a time to CPAI. I don't see an easy way, like with Deepstack, to run multiple instances.

However, I now see CPAI has its own queuing system that Deepstack never had, so it may be beneficial to update AITOOL to just hit CPAI with everything we get and let it handle what it can. This will take some doing in the code, so I'm not sure when I'll have time.

So the only option for now is to set up more CPAI servers to handle the requests:
  • [Same machine] Install Docker, then the CodeProject.AI Docker image on it. (easiest, least resources)
  • [Same machine] Install Windows Subsystem for Linux (WSL), then install the Linux version
  • [Same machine] Install another virtual machine tool (VMware/VirtualBox), install Windows, then install CodeProject.AI normally
  • Install on most any other machine in your network.
  • Get a cheap Raspberry Pi, Jetson, or Intel NUC and install the appropriate Docker version
  • Or of course keep running Deepstack if you don't mind dealing with its instability
 

Chris Dodge

Pulling my weight
Joined
Aug 9, 2019
Messages
96
Reaction score
119
Location
massachusetts
@whoami ™ @Pentagano @Tinbum

New version:

  • Support CPAI Mesh / Queuing! Settings > AI SERVERS > Edit server > "Allow AI Server based queue". Because CodeProject.AI manages its own queue, it can handle concurrent requests (unlike Deepstack), so we ignore the fact that it is "in use" and just keep giving it as many images as we get to process. So far this actually seems to work really well. It should prevent some cases of the default 100-image queue error from happening. Note: when you enable this, it will be rarer that a server OTHER THAN THE FIRST is used.
  • If you still want other AITOOL AI servers to be used there are a few things you can do:
    1) Reduce the AI SERVER > Edit URL > 'Max AI server queue length' setting. CPAI defaults to 1024, so if, for example, you dropped that down to 4, it would only try the next server in line when the queue was above 4. You will have to test in your environment to see if this makes sense, as it may not.
    2) Reduce 'AI Server Queue Seconds larger than'. If a server's queue gets too high, you can force it to go to the next AITOOL server in the list.
    3) Reduce the 'Skip if AITOOL Img Queue Larger Than' setting. If the AITOOL image queue is larger than this value, and the AI server has at least 1 item in its queue, skip to the next server to give it a chance to help lower the queue.
    4) Or, in AITOOL > Settings, enable the "queued" checkbox. This way AITOOL will take turns and always use the server that was used the longest ago. This may not be ideal if some of the servers are much slower than others.
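Roughly, the failover rules above can be sketched like this. This is a simplified reading of the description, not the actual AITOOL code; the field names and the threshold default are made up for illustration:

```python
def pick_server(servers, max_queue_len=4, round_robin=False):
    """Pick an AI server following the options described above.

    servers: list of dicts with 'name', 'queue_len', 'last_used' (epoch secs),
             in priority order.
    """
    if round_robin:
        # Option 4: "queued" mode -- always take the least recently used server.
        return min(servers, key=lambda s: s["last_used"])["name"]
    # Options 1-3: walk the list in order, skipping servers whose queue is too deep.
    for s in servers:
        if s["queue_len"] <= max_queue_len:
            return s["name"]
    # Everyone is over threshold: fall back to the shortest queue.
    return min(servers, key=lambda s: s["queue_len"])["name"]
```

With a high threshold (CPAI's 1024 default), the first server almost always wins, which is why lowering it is what lets later servers see traffic.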

    Tip: In the CPAI settings web page, enable MESH and make sure it can talk to the other servers you may have configured (all have to be on the same network with open/forwarded UDP ports; docker-to-docker-to-physical instances may take some work to get to see each other). This way, CPAI will do the work of offloading to the next server in line!

    Tip: For faster queue processing, enable as many modules as you can (YOLOv5 6.2, YOLOv5.NET, YOLOv8, etc.). It will help spread the workload out, so in some cases you don't even need more than one CPAI server.

    Tip: If you use IPCAM Animal and a few others as 'linked servers', you will get errors if you have anything other than YOLOv5 6.2 enabled, because the models have not been built for the others yet. I haven't found a good way around this yet.

    Tip: If the MESH cannot see DOCKER or VM based instances of CPAI servers, edit your C:\ProgramData\CodeProject\AI\serversettings.json file and manually add the servers it cannot automatically find. For example:

    "KnownMeshHostnames": [ "prox-docker", "pihole"],

  • Then stop and restart the CodeProject.AI service.
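That serversettings.json edit can also be scripted. A hedged sketch that merges hostnames into `KnownMeshHostnames` without clobbering the rest of the file (the key name is taken from the example above; back the file up first):

```python
import json

def add_mesh_hosts(settings_text: str, hosts) -> str:
    """Return settings JSON with hosts merged into KnownMeshHostnames."""
    cfg = json.loads(settings_text)
    known = cfg.setdefault("KnownMeshHostnames", [])
    for h in hosts:
        if h not in known:   # avoid duplicate entries on repeat runs
            known.append(h)
    return json.dumps(cfg, indent=2)
```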


A few more:
  • Some new columns in Edit AI URL screen related to queue time, min, max, etc. AIQueueLength, AIQueueLengthCalcs, AIQueueTimeCalcs, etc.
  • Update setup to only check for .NET 8 rather than 6
  • Implement new easier to use version of Threadsafe classes. This should also shrink the json settings file a bit and make the code easier to read.
  • If you enable 'Ignore if offline' for a CPAI server that is running in mesh mode and the mesh returns an error (e.g. the mesh computer was turned off), you will not see an error.
  • Fixed a bug where, when using linked servers, there could be duplicate or disabled URLs in the list, slowing down the overall response time.
  • Gotham City's corruption problem is still a work in progress. I'm Batman.
 
