Acceleration for Built-in AI?

TheWaterbug

Known around here
Oct 20, 2017
1,332
2,315
Palos Verdes
I'm doing LPR from 4 cameras in an office park, and feeding that into the ALPR Database.

I also have 20+ other cameras doing regular monitoring, and my CPU hovers around 50-60% when there's not a lot of traffic, but during peak traffic it can peg, especially when the Docker container for the ALPR database starts misbehaving.

I was thinking that offloading some of the AI burden to a GPU might help. I was running CPAI on the same box as BI, and recently switched to BI 6's built-in AI.

Given that the built-in AI runs on the ONNX runtime, which uses DirectML, which requires only DirectX 12 support, it seems like there should be lots of options.
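If you want to confirm which execution providers your ONNX Runtime build actually exposes (DirectML shows up as "DmlExecutionProvider"), a quick Python check is possible. This is just a sketch assuming the optional onnxruntime-directml package; it degrades gracefully if onnxruntime isn't installed at all:

```python
# Hedged sketch: list ONNX Runtime's available execution providers.
# Assumes the onnxruntime (or onnxruntime-directml) package is installed;
# falls back to an empty list if it isn't.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []

# On a DirectX 12 machine with onnxruntime-directml, this should print True.
print("DmlExecutionProvider available:", "DmlExecutionProvider" in providers)
```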

How much VRAM do I need for LPR? This card is only $50 on Amazon, but it has only 2 GB. For $57 I can get 4 GB. edit: that one's only DirectX 11.

What's a decent minimum spec for this type of application? Does performance fall off a cliff with insufficient VRAM, or does it degrade gracefully?

Going to 8 GB puts me into far more expensive territory for a science experiment.

Thanks!
 
Is there a way to see how much BI's ONNX-based AI is using? I expanded everything in Task Manager, and I don't see it as a discrete process.
 
Nvidia 700 series graphics cards are 13 years old and out of driver support. Those are particularly low end models too with low memory bandwidth. I wouldn't expect them to accelerate anything. Best you could hope for is to offload some of the work from the CPU and system RAM, but even then I wouldn't hold my breath.
 
You might want to try one of the laptops with a built-in nvidia GPU, like this one:


It's $150 (assuming you already have a spare 256 GB NVMe drive, 8 GB of RAM, and a USB-C charger that does 60 W). These have an Nvidia T550 with 4 GB of VRAM and a 12th-gen Intel CPU with an iGPU, so you could run CPAI on either GPU. The T550 is fairly new, with full driver support.
 
I am not sure getting a different GPU is going to help when 3090s and 5090s aren't cutting it. I think BI uses DirectML, which doesn't use CUDA directly, so it will still bang quite a bit on the CPU. I think I am going to go back to CodeProject, which can use CUDA directly. Is it possible to get all the same models working on that?
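For what it's worth, DirectML and CUDA are separate ONNX Runtime execution providers, and a session just uses the first available provider from whatever preference list you hand it. A rough sketch of that fallback logic (the provider names are real ORT identifiers; the preference order here is illustrative, not what BI or CPAI actually does):

```python
# Hedged sketch: pick an ONNX Runtime execution provider by preference.
# Prefer CUDA, then DirectML, then plain CPU. Creating a real
# InferenceSession needs an .onnx model file, so this only builds the
# provider list you would pass to it.
preferred = ["CUDAExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]

try:
    import onnxruntime as ort
    available = set(ort.get_available_providers())
except ImportError:
    # No onnxruntime installed; CPU is the only safe assumption.
    available = {"CPUExecutionProvider"}

# Keep preference order, but only providers this build actually supports.
chosen = [p for p in preferred if p in available]
print("Provider order:", chosen)
```

On a CUDA build of onnxruntime this would put CUDAExecutionProvider first; on the DirectML build, DmlExecutionProvider; everything falls back to CPU.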