Video Stream Bit Rate vs Image Bit Rate, Reference Bit Rate vs Custom Bit Rate

Jahua95

Can someone with extensive knowledge explain the importance of video stream bit rate vs image bit rate, and how this relates to custom vs reference bit rate?
 
I have noticed that my NVR will not allow changing the bit rate beyond the camera's reference bit rate; however, the camera's custom setting allows a bit rate higher than that (not all cameras, but some). Does a bit rate beyond the reference improve video quality when recording?
 
Not an expert and don't know about extensive knowledge, but I'll tell you the basics.

The camera's video stream from the sensor is raw, i.e. it contains all the data representing the picture. The camera's processor then compresses the video before writing it to storage, because raw files are huge and would take up huge amounts of space. It compresses footage by identifying areas with repetitive patterns or similar colour, throwing that individual data away, and replacing it with data that represents the area generally. So, for example, an area that's roughly blue might contain hundreds of subtle shades of blue; the camera might discard that data and replace it with data saying the area should be filled with only 2 or 3 different shades of blue. Instead of storing data for every single pixel in the picture, it simply defines an area with a colour reference, saving thousands of individual references. This gets rid of large amounts of data and makes the file's storage requirements smaller. The same happens with patterns: leaves are often replaced with green shapes that represent leaves but aren't leaves, so close inspection of the video or a still shows a regular pattern used instead of leaves. That is broadly how compression works: it throws away data and replaces it with something else that represents it.

How much data it throws away depends on the amount of compression, i.e. how much it has to reduce the file size by, and that is dictated by the bit rate, which represents the maximum amount of data the camera is permitted to transmit or save per second according to what has been set. The lower the bit rate is set, the more the camera has to compress the picture to fit and the more data it discards. The higher the bit rate, the less information it has to discard. Therefore lower compression, or a higher bit rate, results in a more accurate and, some may say, better picture.

The argument over bit rate is that there's a point beyond which the discarded information is so subtle you cannot see any improvement unless you forensically examine the picture in detail. Below that is a point where it becomes hard to notice the compression at all. It's the latter point that most people who aren't concerned about storage but want the best possible picture aim for: the maximum visible quality without a noticeable reduction, since this still allows compression but maintains a good picture. Above it is a more perfect picture but very large files; below it is a trade-off band where picture quality slowly degrades as the bit rate and file size decrease, and thus storage capacity increases. Most people using CCTV fall in this latter band, since they have certain storage needs and are prepared to accept a lower-quality picture to gain more storage space. Some enthusiasts without too many cameras, such as myself, aim for the high-bit-rate "not noticeable" point. It's a personal choice, driven by storage requirements.

As far as custom vs reference bit rate is concerned: the reference bit rates are simply presets the manufacturer is suggesting you might want to use. The custom bit rate is simply the ability to set any bit rate you desire within the limits the manufacturer has set as the lowest and highest available. The highest is probably set due to camera processing limitations, and the lowest at the point beyond which the manufacturer deems the picture becomes unacceptable and most people would no longer wish to record it, but I wouldn't know.
Ultimately what you choose comes down to your personal requirements for picture quality and storage space.

Hopefully that fills in a little until an expert can explain the true technicalities of it all. However, I'd suggest all you really need to know is that higher bit rates generally give a higher-quality picture but require more storage space, and there is a point beyond which improvement becomes hard to see and arguably pointless. Lower bit rates allow you to store more, but involve some compromise on picture quality. Ultimately, whatever you set, it's a compromise between picture and storage requirements.
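Since the whole trade-off above is really bit rate versus disk space, a rough back-of-the-envelope conversion can help when picking a number. This is just a sketch in plain Python with made-up example bit rates; it assumes a constant bit rate, so a camera on VBR in a quiet scene will actually write less than this.

```python
# Rough storage estimate from bit rate -- a sketch only; real usage varies
# with codec, scene activity, and whether the camera is set to CBR or VBR.
def storage_per_day_gb(bitrate_mbps: float, hours_recorded: float = 24.0) -> float:
    """Approximate gigabytes written per day for a constant bit rate in Mbps."""
    megabits = bitrate_mbps * hours_recorded * 3600   # total megabits recorded
    return megabits / 8 / 1000                        # megabits -> megabytes -> gigabytes

for rate in (2, 4, 8, 16):                            # example bit rates in Mbps
    print(f"{rate:>2} Mbps  ~ {storage_per_day_gb(rate):6.1f} GB/day")
```

So roughly 21 GB/day per camera at 2 Mbps versus about 170 GB/day at 16 Mbps, which is why the storage-versus-quality decision matters so much once you have several cameras.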
 
The NVR bit rate setting for an individual camera won't allow anything beyond the reference bit rate, but it will record at a higher bit rate if that is set through the camera GUI. Does it make a difference to the NVR? In the BPS section, the stream BPS will equal the camera's bit rate while the image BPS equals the maximum of the reference bit rate.
 
Not sure I want to go down the rabbit hole here, but generally:

The camera has a fixed upper bit rate that is typically shown in the specs. I run my 4K-X at 16-20 Mbps.

My older 5216-16P-4K2SE NVR does not spec an individual channel bit rate. It says it can handle 24MB live or playback and 320 Mbps incoming. I don't think it cares how that's divvied up between channels.
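If it helps to see how that incoming figure divides up, here's a trivial sketch. It assumes the NVR really does just pool the 320 Mbps across channels (which is how I read it, not something the spec spells out), and the per-camera rates are just the examples mentioned above.

```python
# Hypothetical sanity check of an NVR's incoming-bandwidth budget, using the
# figures quoted above (320 Mbps total incoming, cameras at 16-20 Mbps each).
NVR_INCOMING_MBPS = 320

def channels_supported(per_camera_mbps: float, budget_mbps: float = NVR_INCOMING_MBPS) -> int:
    """How many cameras at a given bit rate fit within the incoming budget."""
    return int(budget_mbps // per_camera_mbps)

for rate in (8, 16, 20):
    print(f"{rate} Mbps per camera -> up to {channels_supported(rate)} channels")
```

At 20 Mbps per camera that works out to 16 channels, which is exactly the channel count of that NVR, so running every camera near its maximum still fits the incoming budget.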

Yet another reason I make ALL camera settings ON the camera GUI.
 
Adding to what BigRedFish has said about letting the camera handle the compression: as a general principle, compressing footage a second time should always be avoided.
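You can see why with a toy demonstration of generational loss. This sketch uses a still image and Pillow rather than video, and the "frame.png" path and quality value are made up for illustration, but re-encoding already-compressed footage degrades it in the same way: each pass starts from an already-degraded copy and discards a little more.

```python
# Toy demonstration of generational loss from repeated lossy compression.
# "frame.png" is a placeholder for any snapshot on disk; the quality value is
# arbitrary. Each generation re-compresses the previous output, so artefacts
# accumulate -- the same reason re-compressing CCTV footage is best avoided.
from PIL import Image

img = Image.open("frame.png")
for generation in range(1, 4):
    out = f"generation_{generation}.jpg"
    img.save(out, quality=60)      # lossy save discards detail every time
    img = Image.open(out)          # next pass starts from the degraded copy
```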
 