Any reason to still have 3rd party AI now that BI has it built in?

dannieboiz

Getting the hang of it
May 13, 2015
I'm currently using CodeProject and it's been working great. Now that BI 6 has AI built in, is it better to keep using CodeProject or just use the built-in AI?
 
One large difference is that Coral is not supported by the integrated AI. I tried using integrated AI but the analysis time per frame was from 700-1200 ms on my Dell OptiPlex 3070, which is unusable. With CPAI and two USB Coral modules plugged in, detection times range from 16 to 25 ms.
 
That's also my reason on one of my setups. No real graphics card on it and an older CPU. The built-in AI does work, but I'd rather save the CPU usage. The Coral chip is doing great. I really wish he'd add hardware decoder support.
 
Which version of CPAI are you using that you got 2 Corals to work? Most people that I know that tried more than 1 had issues and gave up (me included). I use the Coral because of the power efficiency in a location that loses power a lot and runs on battery backups.
 
For me, running CPAI and BI in separate VMs, I see faster times keeping them separate vs using the built-in AI. I don't have the ability to do GPU pass-through on my hypervisor...
 
I am always using the latest available version, currently 2.9.5, but I had to increase the MAX_IDLE_SECS_BEFORE_RECYCLE value - see GitHub commit. It has been running fine for more than a year. I also tried to implement using MikeLud's fine-tuned models (GitHub branch), but never took the time to finish the proper integration.
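For anyone curious what that setting does: the idea is that the Coral module recycles idle TPU interpreters after a timeout, and with two USB TPUs a short timeout can mean constant re-initialisation. Below is a minimal sketch of the pattern, not the actual CPAI source - the constant name matches the post, but the default value and the env-var override are illustrative assumptions; the real value lives in the module code the linked commit changes.

```python
import os

# Assumed stock value, purely for illustration - check the module source.
DEFAULT_MAX_IDLE_SECS = 10.0

def max_idle_secs_before_recycle() -> float:
    """Read the recycle timeout from the environment, falling back to a
    default. Raising it keeps the TPU interpreters warm between detections."""
    return float(os.environ.get("MAX_IDLE_SECS_BEFORE_RECYCLE",
                                DEFAULT_MAX_IDLE_SECS))

# A larger value avoids tearing down and re-initialising the TPUs
# between bursts of camera triggers:
os.environ["MAX_IDLE_SECS_BEFORE_RECYCLE"] = "300"
print(max_idle_secs_before_recycle())  # 300.0
```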
 
@bojanpotocnik Hate to bug you again but not a lot of people using Coral TPU so info is limited. I'm also running on old SFF hardware so the Coral has been a HUGE help and power saver (especially when on battery backup). I would love to get a DUAL Coral working (since I already have 2). What type of CPAI install are you using? Windows, Linux (version), or Docker?

I've also tried creating a custom model a few times but was never successful. All I care about is people, vehicles (cars, trucks, buses, motorcycles), bicycles, license plates, cats, and dogs. I may try again, but it's probably even harder now with newer versions of the apps/libraries.
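One workaround that avoids training a custom model entirely is to keep the stock COCO model and just filter its output down to the classes you care about. A minimal sketch, assuming the CPAI-style response shape (`{"predictions": [{"label", "confidence", ...}]}`) - verify the field names against your server's actual JSON:

```python
# Classes of interest from the post above. Note these are COCO class names,
# so the spellings must match what the model actually emits.
WANTED = {"person", "car", "truck", "bus", "motorcycle",
          "bicycle", "cat", "dog"}

def filter_predictions(response: dict, min_confidence: float = 0.4) -> list:
    """Keep only detections whose label is wanted and confident enough."""
    return [p for p in response.get("predictions", [])
            if p["label"] in WANTED and p["confidence"] >= min_confidence]

# Example response shaped like a CPAI detection result:
sample = {"success": True, "predictions": [
    {"label": "person", "confidence": 0.91},
    {"label": "potted plant", "confidence": 0.88},  # filtered: not wanted
    {"label": "dog", "confidence": 0.35},           # filtered: low confidence
]}
print(filter_predictions(sample))  # keeps only the "person" detection
```

One caveat: license plates are not a COCO class, so those would still need a custom or fine-tuned model (such as MikeLud's) regardless of filtering.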
 
This is a machine dedicated only to BI, so I am running the latest Windows installation: Windows 10 Enterprise LTSC 10.0.19044 (Build 19044)
Code:
Server version:   2.9.5
System:           Windows
Operating System: Windows (Windows 10 Redstone)
CPUs:             Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz (Intel)
                  1 CPU x 6 cores. 6 logical processors (x64)
GPU (Primary):    Intel(R) UHD Graphics 630 (1.024 MiB) (Intel Corporation)
                  Driver: 31.0.101.2135
System RAM:       32 GiB
Platform:         Windows
BuildConfig:      Release
Execution Env:    Native
Runtime Env:      Production
Runtimes installed:
  .NET runtime:     9.0.0
  .NET SDK:         Not found
  Default Python:   3.11.2
  Go:               Not found
  NodeJS:           Not found
  Rust:             Not found
Video adapter info:
  Intel(R) UHD Graphics 630:
    Driver Version     31.0.101.2135
    Video Processor    Intel(R) UHD Graphics Family
System GPU info:
  GPU 3D Usage       7%
  GPU RAM Usage      0
Global Environment variables:
  CPAI_APPROOTPATH = <root>
  CPAI_PORT        = 32168
Code:
Module 'Object Detection (Coral)' 2.4.0 (ID: ObjectDetectionCoral)
Valid:            True
Module Path:      <root>\modules\ObjectDetectionCoral
Module Location:  Internal
AutoStart:        True
Queue:            objectdetection_queue
Runtime:          python3.9
Runtime Location: Local
FilePath:         objectdetection_coral_adapter.py
Start pause:      1 sec
Parallelism:      16
LogVerbosity:     
Platforms:        all
GPU Libraries:    installed if available
GPU:              use if supported
Accelerator:      
Half Precision:   enable
Environment Variables
   CPAI_CORAL_MODEL_NAME = MobileNet SSD
   CPAI_CORAL_MULTI_TPU  = True
   MODELS_DIR            = <root>\modules\ObjectDetectionCoral\assets
   MODEL_SIZE            = medium
Status Data:  {
  "inferenceDevice": "Multi-TPU",
  "inferenceLibrary": "TF-Lite",
  "canUseGPU": "false",
  "successfulInferences": 325102,
  "failedInferences": 246953,
  "numInferences": 572055,
  "averageInferenceMs": 11.380655917219826
}
Status:       Started
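One thing worth a second look in that Status Data: nearly half of the inferences are reported as failed. Whether those represent real errors or something benign (timeouts, empty frames) is worth checking in the logs, but the numbers themselves are at least self-consistent:

```python
# Sanity-check the Status Data figures quoted above.
successful = 325_102
failed = 246_953
total = 572_055

assert successful + failed == total  # the counters add up

failure_rate = failed / total
print(f"failure rate: {failure_rate:.1%}")  # prints "failure rate: 43.2%"
```

Even with that failure rate, an 11 ms average inference is a huge margin over the 700-1200 ms reported for the built-in AI.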
Code:
15:36:31:Update ObjectDetectionCoral. Setting AutoStart=true
15:36:31:Restarting Object Detection (Coral) to apply settings change
15:36:31:Running module using: C:\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python
15:36:31:Starting C:\CodeProject...dows\python39\venv\Scripts\python "C:\CodeProject...bjectdetection_coral_adapter.py"
15:36:31:
15:36:31:Attempting to start ObjectDetectionCoral with C:\CodeProject\AI\modules\ObjectDetectionCoral\bin\windows\python39\venv\Scripts\python "C:\CodeProject\AI\modules\ObjectDetectionCoral\objectdetection_coral_adapter.py"
15:36:31:
15:36:31:Module 'Object Detection (Coral)' 2.4.0 (ID: ObjectDetectionCoral)
15:36:31:Valid:            True
15:36:31:Module Path:      <root>\modules\ObjectDetectionCoral
15:36:31:Module Location:  Internal
15:36:31:AutoStart:        True
15:36:31:Queue:            objectdetection_queue
15:36:31:Runtime:          python3.9
15:36:31:Runtime Location: Local
15:36:31:FilePath:         objectdetection_coral_adapter.py
15:36:31:Start pause:      1 sec
15:36:31:Parallelism:      16
15:36:31:LogVerbosity:
15:36:31:Platforms:        all
15:36:31:GPU Libraries:    installed if available
15:36:31:GPU:              use if supported
15:36:31:Accelerator:
15:36:31:Half Precision:   enable
15:36:31:Environment Variables
15:36:31:CPAI_CORAL_MODEL_NAME = MobileNet SSD
15:36:31:CPAI_CORAL_MULTI_TPU  = True
15:36:31:MODELS_DIR            = <root>\modules\ObjectDetectionCoral\assets
15:36:31:MODEL_SIZE            = medium
15:36:31:
15:36:31:Started Object Detection (Coral) module
15:36:37:objectdetection_coral_adapter.py: Using PIL for image manipulation (Either OpenCV or numpy not available for this module)
15:36:37:objectdetection_coral_adapter.py: MODULE_PATH:           C:\CodeProject\AI\modules\ObjectDetectionCoral
15:36:37:objectdetection_coral_adapter.py: MODELS_DIR:            C:\CodeProject\AI\modules\ObjectDetectionCoral\assets
15:36:37:objectdetection_coral_adapter.py: TPU detected
15:36:37:objectdetection_coral_adapter.py: Attempting multi-TPU initialisation
15:36:37:objectdetection_coral_adapter.py: Running init for Object Detection (Coral)
15:36:37:objectdetection_coral_adapter.py: CPAI_CORAL_MODEL_NAME: mobilenet ssd
15:36:37:objectdetection_coral_adapter.py: MODEL_SIZE:            medium
15:36:37:objectdetection_coral_adapter.py: CPU_MODEL_NAME:        ssdlite_mobiledet_coco_qat_postprocess.tflite
15:36:37:objectdetection_coral_adapter.py: TPU_MODEL_NAME:        ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
15:36:37:objectdetection_coral_adapter.py: Supporting multiple Edge TPUs

Another option would be to run multiple server instances (VMs/Docker containers) using one TPU each, then use the CPAI Mesh function to combine them.