Blue Iris run as a VM or Docker

ALias

Aug 31, 2025
I am consolidating servers and moving to virtualization and containers for the various services I run locally.

On the topic of Blue Iris, I have some questions. Any input would be greatly appreciated, especially from those who have run it from a VM or container (Docker, etc.).

1. Can BI be run from a VM or container?
1a. If yes which would provide the best performance (equal or as close to bare metal)?

2. The server will be set up with a Z490 main board, 64 GB memory, a 12 GB NVIDIA GPU, a 750 W PSU, an i7-10700K CPU, 8 TB of storage in an SSD/HDD config, a 256 GB NVMe drive for OS boot, a GPU yet to be spec'd for BI graphics processing, and a 4-port 1 Gbps Intel NIC. This spec has been verified as sufficient with some future-proofing built in. Any suggestions to make it tighter/more robust are welcome.

3. Total number of cameras will be +/-10.

4. FYI, host system resources (CPU, memory, graphics, storage) will be allocated to the container/VM per requirements. BI will get a dedicated amount of resources to ensure consistent availability and performance. If you have any suggestions on min/max resource allocation, please share.

Thank you in advance for your input.
 
That server will be more than powerful enough.

Blue Iris is a traditional Windows GUI program, so it would probably be a waste of time trying to run it in a container/Docker. It will work fine in a virtual machine. For a load of 10 cameras, I would allocate at least 10 GB of RAM and 4 to 8 CPU cores. You can always adjust as needed later.

In a well-optimized Blue Iris installation, you'll be using sub streams for most or all of the cameras, so "idle" CPU usage, with nobody watching the cameras, should be fairly low. CPU usage will rise when someone is viewing a high-resolution camera in "solo" or maximized mode, because that requires Blue Iris to do additional video decoding. Remote viewing in particular can be CPU intensive because it requires Blue Iris to encode an H.264 video stream in real time. You'll want to allocate enough CPU cores to handle the peak load your system gets, even if that means Blue Iris only uses around 10% of the allocated CPU most of the time.
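If the host ends up running Proxmox (just an assumption; the OP hasn't named a hypervisor, and any of them will work), the allocation above might look roughly like this. The VM ID, storage pool name, and ISO path are placeholders:

```shell
# Hypothetical Proxmox VM for Blue Iris: 10 GB RAM, 6 cores.
# VM ID 101, the "local-lvm" pool, and the ISO path are examples only.
qm create 101 --name blueiris --ostype win10 \
  --memory 10240 --cores 6 --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:120 \
  --ide2 local:iso/Win10.iso,media=cdrom --boot order=ide2
qm start 101
```

Since the cores and memory are easy to change later (`qm set 101 --cores 8`, etc.), starting modest and adjusting from observed peak load is the safer approach.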

Quick Sync is mostly irrelevant these days. Don't worry about passing through a GPU unless you want to use it for AI processing (which is optional; the models used for AI analytics can run on CPU). I don't use AI in any of my Blue Iris installations, so I'm not really up to date on the system requirements for that, but you will have plenty of CPU and RAM to spare if needed, and you can always add a GPU later.

For video recording, I would recommend passing through an entire hard drive to the VM (or more than one if you need more capacity). If you are going to record to a virtual disk, I'd suggest making it a separate virtual disk from the one you put the OS on, and consider making it thick provisioned, because it will fill up relatively fast anyway and stay nearly full at all times. Thin provisioning would just be added overhead, I suspect.
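As a sketch of the whole-disk passthrough idea (Proxmox syntax assumed again; the VM ID and disk serial below are made up):

```shell
# Pass a whole physical disk through to VM 101 as a second SCSI disk.
# Use the stable /dev/disk/by-id/ path rather than /dev/sdX, which can
# change between boots. The serial shown here is a placeholder.
ls -l /dev/disk/by-id/                                  # find the disk's by-id name
qm set 101 --scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL_12345
```

Inside the Windows guest the disk then shows up as a regular drive that Blue Iris can record to directly.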

I should also note there are utilities available to remove built-in Windows components to bring its RAM usage down and have less crap consuming CPU intermittently. Here's one I've used on an old tablet, and it worked fine: Revision
 
Overkill. I use an OptiPlex Micro PC with an i9-9900T CPU, 32 GB of RAM, and a Synology NAS. The PC runs Proxmox with 4 VMs: Windows 10 with BI; Debian with CodeProject.AI (on CPU) and ALPR in Docker; Home Assistant; and another Debian, CLI only.
I've got 15 cameras, including 4 LPR cameras.
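For anyone wanting to copy the CodeProject.AI-in-Docker part of this setup, a minimal CPU-only sketch using the official image looks like this (the container name and host-side port are up to you; 32168 is the server's default port):

```shell
# Minimal CPU-only CodeProject.AI Server container (official image).
# Point Blue Iris's AI settings at this host on port 32168.
docker run -d --name codeproject-ai \
  -p 32168:32168 \
  codeproject/ai-server
```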
 
I run mine in a VM. I give it 10 CPU cores and 6 GB of RAM to start (dynamic), and I could get by with less CPU; I just give it more cuz I can.