Delta decoding



Hello expert,

I am confused by the following snippet from the FAST specification about delta decoding.

“The size of the integer required for the delta may be larger than the specified size for the field type. For example, if a field of type uInt32 has the base 4294967295 and the new value is 17 an int64 is required to represent the delta -4294967278. However, this does not affect how the delta appears in the stream.”

In order to decode a signed integer, the decoder has to know what type of integer it is: int32 or int64. But the paragraph above says that even when the field is an int32, an int64 may be needed to hold the delta. In that case, how does the decoder know what type of signed integer is used for the delta?

please advise

thanks & regards


I’m trying to remember this from some time ago, but I think with stop-bit encoding you don’t know in advance how many bytes of integer data you may have, so you have to decode into an int64 even if the field type is an int32. IIRC, in the case of a delta for an int64 you could even have 9 bytes of integer spread over 11 bytes of protocol.
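To make the stop-bit mechanics concrete, here is a minimal decoding sketch. The function name and interface are mine, not from the spec. Python integers are unbounded, so a delta wider than the field type decodes without overflow; in C or Java you would accumulate into an int64 (or wider) regardless of the field's declared type, which is exactly the point above.

```python
def decode_signed(data, offset=0):
    """Decode one FAST stop-bit encoded signed integer.

    Each byte carries 7 data bits; the high bit (0x80) set marks the
    final byte. The sign comes from bit 6 of the first byte.
    Returns (value, next_offset).
    """
    # Seed the accumulator with the sign: all-ones (i.e. -1) if the
    # first byte's bit 6 is set, otherwise 0.
    value = -1 if data[offset] & 0x40 else 0
    while True:
        byte = data[offset]
        offset += 1
        value = (value << 7) | (byte & 0x7F)  # shift in 7 more data bits
        if byte & 0x80:                       # stop bit set: last byte
            return value, offset

# The spec's example delta, -4294967278, wider than an int32,
# encoded as five stop-bit bytes:
value, end = decode_signed(bytes([0x70, 0x00, 0x00, 0x00, 0x92]))
assert value == -4294967278 and end == 5
```

Note that nothing in the wire format announces a width; the decoder just keeps consuming bytes until it sees the stop bit, which is why the answer to the original question is "always decode into something at least as wide as int64 (or wider, for 64-bit field types)".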

Once you have the value, you need to sign-extend it, always treating it as signed: the delta is a signed integer even when the field itself is unsigned.

In practice, at my organisation we don’t handle this scenario and assume that the delta will be 8 bytes or less. This is not such a huge hole: the whole idea of a delta is that the difference between values is small, which is what saves bits when encoding.


It’s quite simple. Just math.

The type of the field is int32, with values in the range roughly [-2e9, 2e9] (exactly [-2^31, 2^31 - 1]). If the previous value was -2^31 and the new value is 2^31 - 1, the delta is 2^32 - 1, about 4e9, which does not fit into an int32. The other extreme is -(2^32 - 1). So you need one additional bit to represent these values, and since there is no int33, the next data type at hand is int64.
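Checking the arithmetic, including the spec's own uInt32 example (base 4294967295, new value 17):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

# Widest possible delta between two int32 values: one extreme to the other.
delta = INT32_MAX - INT32_MIN
assert delta == 4294967295              # 2**32 - 1, about 4e9
assert delta > INT32_MAX                # does not fit in an int32
assert delta.bit_length() + 1 == 33     # needs 33 bits including the sign

# The spec's uInt32 example: base 4294967295, new value 17.
spec_delta = 17 - 4294967295
assert spec_delta == -4294967278        # an int64 (or wider) is required
```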

The same goes for int64 and uInt64, where the delta does not fit into an int64 in the extreme cases.
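The same arithmetic one level up (Python's unbounded integers make this easy to demonstrate; a fixed-width implementation would need a 128-bit or arbitrary-precision type here):

```python
U64_MAX = 2**64 - 1
INT64_MAX = 2**63 - 1

# Extreme case: a uInt64 field goes from 0 to its maximum value.
delta = U64_MAX - 0
assert delta.bit_length() == 64   # plus a sign bit -> 65 bits needed
assert delta > INT64_MAX          # too big for an int64
```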