New CodeProject.AI Object Detection (YOLO11 .NET) Module

I'm glad users like @Vettester post snapshots of their screens. I installed the downloaded appsettings.json file, stopped and restarted the CP.ai service, and I still don't see the YOLO11 items. :(
I had to shut down the machine and restart it, then I also had to restart CodeProject a couple of times before it started working. Now I am good to go! Works GREAT!!
 
Yup, been there, done that. Even waited ten or fifteen minutes, and did it again. :(
Thanks, Mike!
Sounds like a problem with the appsettings.json file. You mentioned you “installed” the file. Did you save the old file or just overwrite it?
 
I'm glad users like @Vettester post snapshots of their screens. I installed the downloaded appsettings.json file, stopped and restarted the CP.ai service, and I still don't see the YOLO11 items. :(
There are several files called appsettings.json in the codeproject folder structure, did you replace the one in the "C:\Program Files\CodeProject\AI\Server" folder?
 
Do you still need video cards for the higher YOLO models to work?

AMD processors seem to be getting some pretty serious graphics capability built in, e.g. the 8500G has 256 stream processors and the 8600G has 512. The next-gen 9000G series is supposed to be more powerful still, although I couldn't find any specs to indicate whether the SP count has increased.

Even older Intel processors aren't short of shader units, e.g. the 10400 has 184.

Will these handle the higher YOLO models? E.g. I have a 10400 with 184 SUs. Will it process YOLO11 without being maxed out or causing issues? What's the SU / CUDA-core requirement for each of the YOLO models? Anyone know?

I feel like using the higher models, but the longstanding advice from CP seems to be to use YOLO 4 unless you have a dedicated GPU. However, CPUs have gained more powerful integrated GPUs in the meantime.
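One practical way to answer this for a specific box is to time the model you actually plan to run on CPU vs. the iGPU through DirectML. A rough sketch, assuming Python with the onnxruntime-directml wheel and an exported YOLO ONNX file (the yolo11m.onnx name and the 1x3x640x640 input shape here are assumptions; adjust to whatever model and size you run):

```
# Rough throughput test: same YOLO ONNX model on CPU vs. the integrated GPU via
# DirectML. Model file name and input shape are assumptions; adjust as needed.
import time
import numpy as np
import onnxruntime as ort

MODEL = "yolo11m.onnx"                       # hypothetical exported model
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

for providers in (["CPUExecutionProvider"], ["DmlExecutionProvider"]):
    sess = ort.InferenceSession(MODEL, providers=providers)
    inp = sess.get_inputs()[0].name
    sess.run(None, {inp: x})                 # warm-up run
    t0 = time.perf_counter()
    for _ in range(20):
        sess.run(None, {inp: x})
    ms = (time.perf_counter() - t0) / 20 * 1000
    print(f"{providers[0]}: {ms:.1f} ms per frame")
```

If the DirectML number comfortably beats your camera trigger rate, the iGPU is enough; if not, drop to a smaller model or a dedicated card.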
 
There are several files called appsettings.json in the codeproject folder structure, did you replace the one in the "C:\Program Files\CodeProject\AI\Server" folder?
@Bruce_H was on the right track. Checked it again this morning, and somewhere between Windows 11 and me (I'm blaming me) the file I downloaded had a hidden '.txt' extension on it. DOH!
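For anyone who hits the same thing, a quick way to spot a hidden extension and confirm the server's copy parses as JSON (a minimal sketch, assuming Python is handy and the default install path mentioned above):

```
# Minimal sketch: list anything that looks like appsettings.json in the
# CodeProject.AI server folder and confirm it parses as JSON.
# Assumes the default install path from this thread; adjust if yours differs.
import json
from pathlib import Path

server_dir = Path(r"C:\Program Files\CodeProject\AI\Server")

for f in server_dir.glob("appsettings*"):
    print(f.name)                      # a hidden extension shows up here, e.g. appsettings.json.txt
    if f.name == "appsettings.json":
        try:
            json.loads(f.read_text(encoding="utf-8-sig"))
            print("  -> parses as valid JSON")
        except json.JSONDecodeError as e:
            print(f"  -> JSON error: {e}")
```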
 
You can ignore the timeout; it should fully install. I get the same thing and it installs correctly.
I've been trying for days, but the installation doesn't start, while the license plate reading module installs correctly in both the .NET and ONNX versions.
 

[Attachment: Screenshot 2025-10-29 175934.png]
Hey Mike, great job on everything, all working quite well on my end.

I am curious if you plan on adding another module or model to support animals? Or that delivery model I've seen floating around?

If not, what would you recommend in addition to these fine v11 modules? Run v5 .NET but have that one NOT look for people/vehicles?

Cheers
 
Forgive my ignorance, but this will only work on Windows, correct?
I have not tested it on other OSs. If you do not have an Nvidia GPU, it is best to run CP.AI on Windows using the .NET modules with DirectML. The .NET module with DirectML will work with almost any GPU and is faster than CUDA.

The DirectML execution provider requires a DirectX 12 capable device. Almost all commercially-available graphics cards released in the last several years support DirectX 12. Here are some examples of compatible hardware:
  • NVIDIA Kepler (GTX 600 series) and above
  • AMD GCN 1st Gen (Radeon HD 7000 series) and above
  • Intel Haswell (4th-gen core) HD Integrated Graphics and above
  • Qualcomm Adreno 600 and above
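For what it's worth, a quick way to confirm DirectML is actually usable on a given box (a sketch, assuming Python and the onnxruntime-directml package, which is not something CP.AI itself needs) is to ask ONNX Runtime which execution providers it can see:

```
# Minimal check: is the DirectML execution provider available to ONNX Runtime?
# Assumes: pip install onnxruntime-directml   (Windows, DirectX 12 capable GPU)
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)                                  # e.g. ['DmlExecutionProvider', 'CPUExecutionProvider']
print("DirectML available:", "DmlExecutionProvider" in providers)
```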
 
I have not tested it on other OSs. If you do not have an Nvidia GPU, it is best to run CP.AI on Windows using the .NET modules with DirectML. The .NET module with DirectML will work with almost any GPU and is faster than CUDA.

The DirectML execution provider requires a DirectX 12 capable device. Almost all commercially-available graphics cards released in the last several years support DirectX 12. Here are some examples of compatible hardware:
  • NVIDIA Kepler (GTX 600 series) and above
  • AMD GCN 1st Gen (Radeon HD 7000 series) and above
  • Intel Haswell (4th-gen core) HD Integrated Graphics and above
  • Qualcomm Adreno 600 and above

The reason I am asking is that I just got a 5060 Ti (previously had a 1070 that was working fine), and it is unsupported in the current version of CPAI.
I have been running on Docker and cannot get CPAI to work with the 5060 Ti. I tried running this YOLO11 add-on in my Docker container, but it did not work either; it acted as if it couldn't see the GPU even though nvidia-smi is functioning in the container (see the system info output from CPAI below).
I think DirectML is only available on bare-metal Windows?
I run Blue Iris in a Windows 11 VM on Proxmox, and CPAI in Docker in an Ubuntu VM with the GPU passed through.
Unfortunately I need the GPU passed through to the Ubuntu VM, otherwise I'd move it to the Windows VM that BI is running on.

Just trying to figure out a way to get CPAI working again for Blue Iris with my new 5060 Ti... unfortunately it's looking like I might just need to suck it up and move to Frigate..


Server version: 2.9.5
System: Docker (Ubuntu-Plex)
Operating System: Linux (Ubuntu 22.04 Jammy Jellyfish)
CPUs: AMD Ryzen 9 5950X 16-Core Processor (AMD)
1 CPU x 12 cores. 12 logical processors (x64)
GPU (Primary): NVIDIA GeForce RTX 5060 Ti (16 GiB) (NVIDIA)
Driver: 580.95.05, CUDA: 13.0 (up to: 13.0), Compute: 12.0, cuDNN: 8.9.6
System RAM: 59 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 9.0.0
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 13%
GPU RAM Usage 2.8 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
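One way to narrow down where the break is from inside the container itself (a sketch only; it assumes Python plus the onnxruntime-gpu wheel are present in the container, which may take a pip install first, and that the .NET YOLO modules run their inference through ONNX Runtime, which is my understanding):

```
# Rough sanity check from inside the CP.AI container: nvidia-smi sees the card,
# but does ONNX Runtime? Assumes Python and onnxruntime-gpu are installed in
# this environment (they may not be in the stock image).
import subprocess
import onnxruntime as ort

print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout.strip())
print("ONNX Runtime providers:", ort.get_available_providers())
# 'CUDAExecutionProvider' needs to appear above, and the CUDA/cuDNN versions it
# was built against must match what the container ships, or inference silently
# falls back to CPU.
```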
 
The reason I am asking is that I just got a 5060 Ti (previously had a 1070 that was working fine), and it is unsupported in the current version of CPAI.
I have been running on Docker and cannot get CPAI to work with the 5060 Ti. I tried running this YOLO11 add-on in my Docker container, but it did not work either; it acted as if it couldn't see the GPU even though nvidia-smi is functioning in the container (see the system info output from CPAI below).
I think DirectML is only available on bare-metal Windows?
I run Blue Iris in a Windows 11 VM on Proxmox, and CPAI in Docker in an Ubuntu VM with the GPU passed through.
Unfortunately I need the GPU passed through to the Ubuntu VM, otherwise I'd move it to the Windows VM that BI is running on.

Just trying to figure out a way to get CPAI working again for Blue Iris with my new 5060 Ti... unfortunately it's looking like I might just need to suck it up and move to Frigate..


Try moving CP.AI to the Windows 11 VM; it looks like you can get DirectML to work on the VM, see below.

 
Try moving CP.AI to the Windows 11 VM; it looks like you can get DirectML to work on the VM, see below.

Yes, but then it won't have GPU acceleration; my Windows 11 VM doesn't have a GPU passed through to it. And I'm running things on the Ubuntu VM that require a GPU, so I can't just remove it from the Ubuntu VM and attach it to the Windows 11 VM BI is running on.
 
Yes, but then it won't have GPU acceleration; my Windows 11 VM doesn't have a GPU passed through to it. And I'm running things on the Ubuntu VM that require a GPU, so I can't just remove it from the Ubuntu VM and attach it to the Windows 11 VM BI is running on.
This weekend I will try to get a CUDA version, and this should work in your Ubuntu VM. Does the Object Detection (YOLOv5 .NET) 1.14.0 module work in your Ubuntu VM using CUDA?
 
This weekend I will try to get a CUDA version, and this should work in your Ubuntu VM. Does the Object Detection (YOLOv5 .NET) 1.14.0 module work in your Ubuntu VM using CUDA?
That would be awesome and greatly appreciated. I've tried updating PyTorch in the container but can't get it working; my knowledge just isn't there, unfortunately.

YOLOv5 .NET is CPU-only in the container on my Ubuntu VM; it doesn't even try to use the GPU.
 
That would be awesome and greatly appreciated. I've tried updating PyTorch in the container but can't get it working; my knowledge just isn't there, unfortunately.

YOLOv5 .NET is CPU-only in the container on my Ubuntu VM; it doesn't even try to use the GPU.
Did you try Enable GPU? If GPU (CUDA) does not work with the Object Detection (YOLOv5 .NET) 1.14.0 module in your Ubuntu VM, then most likely the new Object Detection (YOLO11 .NET) 1.3.0 module will not work either.

 
Did you try Enable GPU? If GPU (CUDA) does not work with the Object Detection (YOLOv5 .NET) 1.14.0 module in your Ubuntu VM, then most likely the new Object Detection (YOLO11 .NET) 1.3.0 module will not work either.


Like I said, YOLOv5 .NET is CPU-only in the container on my Ubuntu VM; it doesn't even try to use the GPU.

I do, however, have the enable/disable GPU options for YOLOv5 6.2, but PyTorch is not compatible and needs to be updated to support sm_120.
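For anyone curious what "needs to be updated to support sm_120" means in practice, here is a minimal check, assuming it is run with the Python environment the YOLOv5 6.2 module uses:

```
# Why the GPU is reported as unsupported: the bundled torch wheel must include
# kernels for the card's compute capability (the RTX 5060 Ti / Blackwell
# reports sm_120). Minimal sketch; run it with the module's own Python/venv.
import torch

built_for = torch.cuda.get_arch_list()           # archs this torch wheel was compiled for
print("torch build supports:", built_for)
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    needed = f"sm_{major}{minor}"                # e.g. sm_120
    print(f"GPU needs {needed}; present in build: {needed in built_for}")
```

If sm_120 (or a matching compute_* PTX target) is missing from that list, the card will not work until the module ships a newer torch build.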
 
That would be awesome and greatly appreciated. I've tried updating PyTorch in the container but can't get it working; my knowledge just isn't there, unfortunately.

YOLOv5 .NET is CPU-only in the container on my Ubuntu VM; it doesn't even try to use the GPU.

Try installing CUDA 12.4.0_551.61?