Sharing setups with CodeProject AI on Coral hardware (Spring 2024)

Just upgraded my M.2 single Edge TPU to an M.2 Dual Edge TPU. No need even for a driver re-installation; it works right out of the ESD carrier :) Other hardware is an ASRock DeskMini A300 with a Ryzen 3 3200 and 32 GB RAM. Real powerhouse!

Were you able to get both TPUs working in CodeProject? The dual code won't use both of my TPUs. I even moved CodeProject to my Linux server in Docker, and still the same result. So for now I'm still using a single TPU, even though the OS sees both.
 


I just upgraded and added a dual-TPU chip into my server, in addition to its original single TPU. I was occasionally getting timeouts when the single TPU was busy.

Multi-TPU is now working as far as I can tell; both CodeProject and Blue Iris show multi-TPU in use.

I now have the original single TPU (M.2 M key) and the dual TPU (M.2 E key) via a cheap AliExpress PCIe wireless adapter card (can't paste a link to this).
The dual TPU only shows as a single device in Windows (confirmed by having the dual TPU as the only TPU and checking available devices).

The only modification I had to do was edit the options.py file. There were a heap of red messages within the CPAI log window (a "No multi-TPU interpreters" error).

The workaround was to edit C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\options.py and change the 60 seconds to 1 day.

Exact line:

self.MAX_IDLE_SECS_BEFORE_RECYCLE = 86400.0 # To be added to non-multi code
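If editing the file by hand feels error-prone, a small script can make the same change. Only the variable name, the 86400.0 value, and the options.py path are from this thread; the helper itself is just a sketch:

```python
import re

def set_recycle_seconds(source: str, seconds: float) -> str:
    """Return options.py source text with the MAX_IDLE_SECS_BEFORE_RECYCLE
    assignment rewritten to the given number of seconds."""
    return re.sub(
        r"(self\.MAX_IDLE_SECS_BEFORE_RECYCLE\s*=\s*)[0-9.]+",
        rf"\g<1>{seconds}",
        source,
    )

# Example: a 60-second recycle becomes a one-day (86400 s) recycle.
line = "self.MAX_IDLE_SECS_BEFORE_RECYCLE = 60.0"
print(set_recycle_seconds(line, 86400.0))
# → self.MAX_IDLE_SECS_BEFORE_RECYCLE = 86400.0
```

Read the file at the path quoted above, pass its text through this function, and write it back (stop CPAI first so the module doesn't overwrite it).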


Anyone know of any logging option that could tell which TPU a detection job is being sent to?
 

AFAIK, the only way to get both TPUs working (for the DUAL TPU card) is to use one of these adapters:

PCIe adapter

m.2 B+M adapter

I don't remember the exact reason, but IIRC it had to do with the number of PCIe lanes the motherboard provides to the Wi-Fi slot.

As far as logging to see which TPU is being used, I think you'd have to modify the Python code yourself. I forget the exact file, but I think it's objectdetection_coral_multitpu.py.
 
Thanks!

Yeah, I did see those. Just ordered one now. I'll have a play with the logging and see if that TPU instance information is available.

Edit:
As far as logging to see which TPU is being used, I think you'd have to modify the Python code yourself. I forget the exact file, but I think it's objectdetection_coral_multitpu.py.
You were right. It's only seeing 2 TPUs, not 3.

Had a look around and modified near line #519 of tpu_runner.py (in the logs it shows as objectdetection_coral_adapter.py, but it's not) to put in:

show_tpu_list = edgetpu.list_edge_tpus()
logging.critical(f"TPU list {tpu_count}")
logging.critical(f"TPU list {show_tpu_list}")

Which then shows the detected TPUs on start-up:

16:28:42:Started Object Detection (Coral) module
16:28:44 objectdetection_coral_adapter.py: CRITICAL:root:TPU list 2
16:28:44 objectdetection_coral_adapter.py: CRITICAL:root:TPU list [{'type': 'pci', 'path': '\\\\?\\ApexDevice0'}, {'type': 'pci', 'path': '\\\\?\\ApexDevice1'}]
16:28:44 objectdetection_coral_adapter.py: TPU detected
16:28:44 objectdetection_coral_adapter.py: Attempting multi-TPU initialisation
16:28:44 objectdetection_coral_adapter.py: Supporting multiple Edge TPUs
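For anyone wanting the same visibility, a small helper can turn the list returned by pycoral's edgetpu.list_edge_tpus() into one log line per device. The entry shape ({'type': ..., 'path': ...}) matches the log output above; the helper itself is a sketch and not part of the CPAI code:

```python
# Sketch: format the device list from pycoral's edgetpu.list_edge_tpus()
# into one line per TPU so individual devices are identifiable in a log.
# Entry shape matches the log output above; this is not CPAI code.
def describe_tpus(tpu_list):
    lines = [f"TPU count: {len(tpu_list)}"]
    for i, tpu in enumerate(tpu_list):
        lines.append(f"TPU {i}: type={tpu['type']} path={tpu['path']}")
    return lines

# Fed with the two devices seen in the log above:
print("\n".join(describe_tpus([
    {"type": "pci", "path": "\\\\?\\ApexDevice0"},
    {"type": "pci", "path": "\\\\?\\ApexDevice1"},
])))
```

In tpu_runner.py you would call it with the real edgetpu.list_edge_tpus() result and pass each line to logging.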
 
I have not looked at this for a few months. I had opened a Git issue about it and nothing has been done since posting it:

There are a bunch of open issues with multi-TPU. So you are seeing both of your TPUs from the single PCIe card? What version of CPAI are you running?
 
So you are seeing both of your TPUs from the single PCIe card? What version of CPAI are you running?
No, I'm seeing 1 x single TPU (M.2) and 1 x dual TPU (M.2 E key on a PCIe card), for a total of 2 available TPUs when I should have 3.

So multi-TPU has been working for me for a day or so now, but not with all 3 (or so I believe).

For reference, I'm using CPAI 2.9.5 and Object Detection (Coral) 2.4.0.
 
Just to clarify, koops was talking about 2 separate issues.

1. Getting the Dual TPU card (M.2 Accelerator with Dual Edge TPU | Coral) to get both TPUs recognized by the OS. That is where you need one of the two adapters I linked to. That has nothing to do with CPAI. FYI - I have one of each of those cards and they both worked well for me. It did take about 10 days to get since it's coming from China.

2. Getting the CPAI software to work with multiple TPUs. This is the problem everyone has with the Assets (model files), and it goes back to the version after 2.5.1 (that is the version I kept on my BI PC, even though it's only single-TPU). I opened an issue on the CodeProject site, but that's gone now. I opened it later on IPCamTalk as well, but that didn't help, so I gave up. However, I've seen a few people post issues about it on Git, so we'll see if that helps. FYI - the developer for the Coral module is very helpful, but he doesn't control the released code or the assets, so we can't blame him.

The Coral code isn't bad and I can figure it out. You can run it standalone, which is what the developer does (executing from the CLI).

Me venting now...
I tried looking at the server code with each release, and it just keeps getting more convoluted, in my opinion. Instead of focusing on simplicity and reliability, it seems to be going the route of fancy and elaborate (and if you read the release notes, it even says this may cause some problems). There also hasn't been any development in 4-5 months. I get that this is FOSS, so I can't complain; just venting...
 
With #2, I'll keep an eye on the CPAI log window to see if anything shows up, but it does seem to be running in multi-TPU mode.
It's entirely possible that's only due to me having the single standalone TPU installed. Blue Iris also gets sent back "Multi-TPU".
I can't prove that more than one is in use, apart from the lack of the earlier messages that seemed to indicate the TPU was busy.

I was hoping to hijack that message and see if I could include the TPU instance within it, so I could spot-check that different TPUs were in fact being used.

[screenshot]
 
@koops
I haven't looked at the code in a while, but I think he was using the "Pipeline" method, so I don't think you would see the different TPUs. In other words, you are splitting the work of a single inference between the 2 (or more) TPUs. A segment of a single model is loaded into each TPU. An inference starts on the 1st TPU against the 1st segment and then, once complete, moves to the next TPU to perform the next segment (and so on for X number of TPUs). This is especially handy for larger models (I think from size Medium and up).

It is not using each TPU like a "thread", which sounds like what you are thinking. For example, if you sent 2 pictures to run inference on, it would not send picture #1 to one TPU and picture #2 to the other TPU.

At least that was my understanding the last time I looked at it.
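The contrast between those two schemes can be sketched with plain functions standing in for devices. This is a toy illustration only; none of it is from the actual CPAI source:

```python
# Toy contrast between the two multi-device schemes described above.
# Plain functions stand in for TPUs; this is not CPAI code.

def pipelined_inference(image, segments):
    # Pipeline: each device holds one segment of the model, and every
    # image passes through all devices in order.
    x = image
    for run_segment in segments:   # segment i lives on "TPU" i
        x = run_segment(x)
    return x

def round_robin_dispatch(images, devices):
    # Thread-style dispatch (what the pipeline is NOT doing): each image
    # goes whole to a single device.
    return [devices[i % len(devices)](img) for i, img in enumerate(images)]

# Two "TPUs" holding halves of a model that doubles, then adds one:
segments = [lambda x: x * 2, lambda x: x + 1]
print(pipelined_inference(10, segments))  # → 21
```

In the pipelined case every inference touches every device, so a per-request "which TPU" log line would not be meaningful.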

Also, I don't know if the latest version (2.9.5) has the assets correct, but I think in a previous version the assets for the segmented files were incorrect for at least one of the models. I think someone else noticed that too and posted it as an issue on GitHub, if you want to look. This is how I found out the developer for the Coral module was not editing the releases.

BTW which model and size are you using?

For my situation, EfficientDet Lite and Medium seemed to be the most accurate at reasonable speed.
 
I think I was using MobileNet Small. When I tried to use anything else, it would either not work at all or revert to CPU-based detection.
 
I think I was using MobileNet Small. When I tried to use anything else, it would either not work at all or revert to CPU-based detection.
Sounds like the assets/models are still broken then.

If using the pipeline method, I don't think you would see any advantage with the dual TPU and the Small model, since the whole model would fit on one TPU, but I don't know.
 
Hi guys,

I need your help.
I've just installed a Coral Dual TPU chip with a PCIe x1 adapter that supports the dual TPU, so I can use it with Blue Iris.
I've installed the drivers, and in Windows both chips are detected:
[screenshot]

I've already tried to activate the Multi-TPU Support in CPAI:
[screenshot]

But when CPAI restarts, I still get the following log messages.
What do I need to do to use both TPU chips?
[screenshot]

Should I use YOLO Modules?
Or do I need to change some configuration in CPAI?

Thanks in advance!
 
Never got it to work with both TPUs. I tried Windows and Docker under Linux. The Coral TPU code is stagnant right now. I couldn't get License Plate to work with the Coral TPU either.

I switched to a Yeston GeForce RTX 3050 6GB GDDR6.
The AI detection quality went way up using the GPU, with much better detection. I set the alert for a 90% or higher detection rate; I was lucky to get 70% with the TPU. Object, Face, and License all work using the Huge model, and I'm seeing 25-30 ms for each detection. So the GPU vastly outperforms the TPU, and I won't be going back.
 
I think if you look at the GitHub discussions, there are some comments about the dual TPU. IIRC, there was a problem with the models (Assets) for the Dual TPU. I'm not sure if anyone has fixed them, though.

I can't help you because I'm still running an old version with a single TPU, because it just worked (even though I have a dual TPU and would like to use it). Accuracy was also very good with the version I have, thanks to the model it used. I keep saying I will look into the new version, but time is not my friend.

FYI - If you get timeout errors that is a config that is also discussed on Github.

The TPU is a capable device and many still use it (especially with Frigate), but Google has abandoned it, making it difficult to use unless you want to recompile their code and troubleshoot. I personally prefer it over a GPU because it is low power, and my electric rates almost doubled this year.
 
I have two open tickets on GitHub about the TPU. They've been open for many months with nothing but crickets. Others have open tickets and it's mostly the same. I believe that CPAI is on life support now and I really hope it can be revived as AI with Blue Iris is a game changer.

That Yeston card is a bit of a unicorn. It's a single-slot, low-profile card that doesn't need external power and works fine within the 75 W a PCIe slot provides. It idles around 4 watts and seems to burst to around 30 watts when actively doing inference, and that is with the Huge model. One note: if you go down this path, you need CUDA 11.7 and cuDNN 8.9.7.29 for CPAI to work correctly (I spent many hours figuring that out).

So the TPU is extremely low power, but CPAI support is lacking and seems to be abandoned. I could not get it to work with the License model, but it worked fine for Object and Face processing. Accuracy was "okay" compared to the CPU: I had to lower the detection threshold from 85%+ down to 60%+ for the TPU to detect objects, though its inference speed was much faster than the CPU's - 35 ms for the Small model and 250 ms for the Medium model vs. 500-1500 ms for the CPU.

So using the TPU is better than the CPU if you don't need License support, and it is extremely low power. The Nvidia GPU seems to be the best solution, and it's also fairly low power. My TPU and daughter card cost about $90 total; the Yeston GPU, about $205.

I wanted to really like the Coral TPU. In my head it ticked off all the boxes. If the TPU support was better, I wouldn't have even explored the GPU road. I hope CPAI picks up development again as it took Blue Iris to the next level.
 
I'll search GitHub later, but I think the post is listed under something like "Incorrect/broken models or assets".

I have mixed feelings on the status of CPAI but hope it revives. As far as the Coral Module, I definitely feel that may be abandoned. In fact I "think" the Coral developer moved on. It's sad because it really had great potential and checked off all the boxes like you said which is why many people, including me, jumped on it. Time will tell.

I actually bought the MSI RTX 3050 just for LPR; it's similar to your Yeston. I have it in a Proxmox rig, so I don't know the exact power usage. Since the older version of CPAI/TPU (2.5.1, IIRC) is working well for me, I still use the TPU for Object Detection. FYI - it can't be used for LPR; that would need a new model. I tried creating one but never succeeded.
 
Which Yeston card are you referring to? They are a manufacturer, so which one is this low-power/high-performance model you are referencing?