Best practice for I Frames setting based on FPS

oldspyguy

Young grasshopper
Jan 8, 2026
31
12
Cold North
Just getting my head around I Frames in terms of impact on video.

Let's just keep it simple for now lol

So I have a 1080p EmpireTech with H.264 encoding, and I have the FPS at 20 and I Frames at 40. This is probably wrong, but I understand it like this:
for every 40 frames, an I-frame is inserted to give the video file reference points that assist in searching?

I know I changed it during testing and got a video stutter every second and a half on the live stream as well, then I changed it to 20 FPS and 40 I Frames.

Thank you. Once I get this camera in its proper placement I will send a field-of-view shot and probably have some questions on dialing it in. The distance to the target car, side on, will probably need fast shutter speeds and a whole lot of other settings.
Again Thank you.....
 
Matching the iframes and FPS was more of a requirement before cameras had AI, when one would use BI motion detection: BI motion detection only starts on an iframe, so motion could be completely missed if they didn't match.

So if you are using the camera AI, the iframes isn't as important as it relates to BI.
So if you're using onboard IVS only, the camera is looking at every frame to validate, say, a line-crossing event even if the I frame interval is, say, 40?
Got ya, I think...
 
Just getting my head around I Frames in terms of impact on video.

Let's just keep it simple for now lol

So I have a 1080p EmpireTech with H.264 encoding, and I have the FPS at 20 and I Frames at 40. This is probably wrong, but I understand it like this:
for every 40 frames, an I-frame is inserted to give the video file reference points that assist in searching?

I know I changed it during testing and got a video stutter every second and a half on the live stream as well, then I changed it to 20 FPS and 40 I Frames.
Compressed video formats typically encode the difference between frames. However, to start a stream/video or allow for reasonably responsive seeking, key frames (a.k.a. I-frames) are needed to draw the entire picture every so often as a reference point. The problem with key frames is that they're very large compared to delta frames (a.k.a. P-frames and B-frames), which increases data usage and can cause stutters. To address this, their quality is often reduced—but if by too much, a pulsing effect appears.
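To put numbers on that, the relationship between the camera's I Frame Interval setting and how far apart key frames land in time is simple arithmetic. A minimal Python sketch (the function name is mine, not from any camera SDK):

```python
# Convert a camera's "I Frame Interval" setting (expressed in frames)
# into seconds between consecutive key frames.
def keyframe_spacing_seconds(iframe_interval: int, fps: float) -> float:
    return iframe_interval / fps

# The settings from the original post: 20 fps, I Frame Interval 40
# -> a full key frame every 2 seconds.
print(keyframe_spacing_seconds(40, 20))   # 2.0
```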

Ideally, you want key frames to be as far apart as possible while still providing a reasonable seek speed and cutting granularity. Personally, I always use the maximum value these cameras support, which is 150. At 30fps, that is just five seconds (150 frames ÷ 30fps = 5 seconds). Most modern PCs can seek anywhere in such an encoded video in well under a second; this reduces video stuttering/pulsing and significantly improves the amount of recorded video time your storage can hold. The other consideration is Blue Iris, which is written to outdated standards from when PCs were far less powerful and didn't have much memory, so it will flag such cameras with an error—which I promptly ignore as it proceeds to work just fine. Just beware of three things:
  1. When you click on a live camera in BI/UI3 to maximize it, the image will start out blurry (using the camera's sub stream) until the next keyframe, where it will switch to the main stream. If your keyframe interval is 150 @ 30fps, this could take up to 5 seconds—depending on where in the keyframe sequence the camera was the moment you clicked. If you can't tolerate that delay, reduce your keyframe interval to shorten it.
  2. Recorded clips must start with a keyframe, so make sure you set Blue Iris' Stream buffer time and Pretrigger record time to a number sufficiently high that it can go back in time to the previous keyframe (or the one before it) when receiving a recording trigger, so your recordings don't start several seconds after an event begins. I typically use a value of 20 seconds for both (right-click camera in BI -> Camera settings -> Record).
  3. During playback in BI/UI3, seeking in recorded clips will be slower on videos with larger key frame intervals. If you find it too slow, reduce your key frame interval (changes will take effect on future recorded clips). A faster CPU on the BI machine also helps with this (when you seek, it jumps to the nearest key frame, and then has to decode out to the spot you selected before it can begin playback).
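The worst-case delays in points 1 and 2 both fall out of that same key frame spacing. Here's a hedged sketch (the function names are mine; the two-intervals-back margin mirrors the "previous keyframe, or the one before it" advice above):

```python
def keyframe_spacing(iframe_interval: int, fps: float) -> float:
    """Seconds between consecutive key frames."""
    return iframe_interval / fps

def worst_case_stream_switch_delay(iframe_interval: int, fps: float) -> float:
    # Point 1: after maximizing a camera, the blurry sub-stream image
    # can persist for up to one full key frame interval.
    return keyframe_spacing(iframe_interval, fps)

def min_pretrigger_seconds(iframe_interval: int, fps: float) -> float:
    # Point 2: to guarantee BI can rewind to the key frame before the
    # previous one, buffer at least two key frame intervals.
    return 2 * keyframe_spacing(iframe_interval, fps)

print(worst_case_stream_switch_delay(150, 30))  # 5.0 seconds
print(min_pretrigger_seconds(150, 30))          # 10.0 seconds
```

The 20-second value recommended above comfortably exceeds the 10-second minimum for a 150 @ 30fps setup.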

So if you're using onboard IVS only, the camera is looking at every frame to validate, say, a line-crossing event even if the I frame interval is, say, 40?
Yes, the camera always uses all frames for motion detection, as does Blue Iris, regardless of your key frame interval. The only potential issue is what I wrote in #2 above. To further illustrate, let's say your key frame interval is 600, and your camera is running at 10fps (a very extreme example). This puts your key frames a whopping 60 seconds apart, and let's say they're happening on the turn of every minute of the clock. 12:45:00 PM you have a key frame, 12:46:00 PM you have another key frame, etc. At 12:46:15 PM, a person walks by. Your camera will send an ONVIF trigger to Blue Iris right away (or Blue Iris will detect the motion right away with its built-in motion detector), but it cannot start actually recording until the next key frame comes in at 12:47:00 PM—long after the person is gone! With a sufficiently large Stream buffer time and Pretrigger record time, Blue Iris can go back in time and start your recording at the 12:46:00 PM key frame, or even at the 12:45:00 PM key frame—despite the motion trigger happening at 12:46:15 PM. Too far back though, and it'll use more memory (to remember all those frames in case it needs to go back), and your clips will have a lot of nothing happening at the beginning.
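The 600-frame / 10fps example can be checked with the same arithmetic; a small sketch, assuming key frames land exactly on multiples of the spacing as in the example:

```python
def seconds_since_last_keyframe(trigger_s: float, spacing_s: float) -> float:
    # How far back the most recent key frame sits from a trigger time,
    # assuming key frames occur at exact multiples of the spacing.
    return trigger_s % spacing_s

spacing = 600 / 10         # key frame interval 600 at 10 fps -> 60 s apart
trigger = 46 * 60 + 15     # 12:46:15 PM, measured from 12:00:00 PM
print(seconds_since_last_keyframe(trigger, spacing))  # 15.0
# So the buffer must cover at least 15 s to reach the 12:46:00 PM key
# frame, or 75 s to reach the one at 12:45:00 PM.
```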

For Blue Iris, what's more important than your key frame interval is that your main+sub stream frame rates match each other, and your main+sub stream key frame intervals match each other. The rest is a matter of preference.
 
This should be a sticky note :goodpost::thumb:
Thanks.......
 
Yes, it should be a sticky note. Where is Chuck Norris when Ipcamtalk needs a Sticky Note Roundhouse to the web page?
 
So if you're using onboard IVS only, the camera is looking at every frame to validate, say, a line-crossing event even if the I frame interval is, say, 40?
Do any of us here have a clue how the internals of IVS work?
 
Do any of us here have a clue how the internals of IVS work?
I stumbled on a post reply somewhere from wittaj that touched on the logic flow between MD, SMD, and IVS.
As a new guy I started seeing unicorns at that point. :wow:
I am attempting to get some Hikvision clones' IVS to play nicely and the vendor doesn't seem to have a clue. :idk: