I've run a few tests and found that processing times are the same for the main stream (1920x1080) and the sub-stream (856x480). I'm attaching the images - two for mode=low and two for mode=high.
I also tried sending a sample image directly to the API endpoint, gradually reducing its resolution, and got the same result: processing times are identical for 1920x1024 and, say, 100x100 pixels.
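For reference, this is roughly the kind of test I ran. It's a minimal sketch assuming the standard /v1/vision/detection endpoint on port 80 and a hypothetical sample.jpg; adjust host, port, and image path to your setup. Note that the wall-clock time includes HTTP overhead, so it's only a rough comparison, not a pure inference benchmark.

```python
import io
import time

import requests
from PIL import Image

DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"  # adjust to your install
SOURCE_IMAGE = "sample.jpg"                                 # hypothetical test image

def time_detection(width, height):
    """Resize the sample image and time a single detection request."""
    img = Image.open(SOURCE_IMAGE).convert("RGB").resize((width, height))
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    buf.seek(0)

    start = time.perf_counter()
    resp = requests.post(DEEPSTACK_URL, files={"image": buf})
    elapsed = time.perf_counter() - start

    resp.raise_for_status()
    preds = resp.json().get("predictions", [])
    print(f"{width}x{height}: {elapsed:.3f}s, predictions={len(preds)}")

# Same image at several resolutions; the timings came out essentially identical.
for w, h in [(1920, 1024), (856, 480), (100, 100)]:
    time_detection(w, h)
```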
Am I correct in my assumption that DeepStack resizes the images according to the mode setting, and that their original sizes don't really matter? Or am I missing something? I'm quite puzzled, because I've heard that people get different results when they switch detection to the sub-stream.
Does anyone know what resolutions the modes correspond to?
I'm running the Windows GPU version on a T400.