Network compression

Originally posted to Shawn Hargreaves Blog on MSDN, Saturday, December 22, 2007

Because network bandwidth is so limited, it is critically important to compress all the data you send over the wire.

Generalized compression algorithms like zip tend not to be much use here. To get good results, that kind of compression needs a reasonably large piece of data to sink its teeth into, but network packets are small. Zip is no help when you need to squeeze a 20 byte packet down to just 10 bytes.
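To see why, here is a quick sketch (using Python's zlib rather than zip itself, and a made-up packet payload): the fixed header and block overhead of a general-purpose compressor can actually make a packet-sized payload bigger.

```python
import zlib

# A hypothetical tiny game packet: 17 bytes of mixed text.
packet = b"x:100 y:200 hp:95"

compressed = zlib.compress(packet)

# The "compressed" form is larger than the original: the stream
# header, checksum, and block framing swamp any savings on data
# this small.
print(len(packet), len(compressed))
```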

Beginners are often tempted to think they can get better results by compressing many packets in a row, using the compressor state from the previous packet to prime the algorithm for the next. It's true this can dramatically increase the compression ratio, but there is a catch: in order to decompress any one packet, you need all the previous packets as well as the current one. That means you have to send everything with SendDataOptions.ReliableInOrder, which can increase latency.
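A sketch of this tradeoff, again using Python's zlib on made-up packets: keeping one compressor object alive across packets lets later packets borrow history from earlier ones, which shrinks them dramatically, but the decompressor only stays in sync if it sees every packet, in order.

```python
import zlib

packets = [b"pos player7 x=100 y=200 hp=95",
           b"pos player7 x=101 y=200 hp=95",
           b"pos player7 x=102 y=201 hp=94"]

# Independent compression: each packet stands entirely alone.
independent = [zlib.compress(p) for p in packets]

# Primed compression: one compressor keeps its history across packets,
# flushing after each so every packet is individually sendable.
comp = zlib.compressobj()
primed = [comp.compress(p) + comp.flush(zlib.Z_SYNC_FLUSH) for p in packets]

# The third packet compresses far better once the history is primed...
print(len(independent[2]), len(primed[2]))

# ...but decompressing it requires feeding in every earlier packet first.
decomp = zlib.decompressobj()
restored = [decomp.decompress(c) for c in primed]
print(restored == packets)
```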

You can sometimes achieve useful compression by sending deltas (differences relative to a previous object state) instead of the complete state. But this suffers from the same problem as generalized compression: it only works if you send everything reliably and in order.
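For illustration, a minimal delta-encoding sketch (the field layout and names are invented): each coordinate is sent as a signed one-byte offset from the previous state, three bytes instead of twelve, but the receiver can only apply the delta if it already holds the previous state.

```python
import struct

def encode_delta(prev, curr):
    # Send each coordinate as a signed byte offset from the last
    # state we sent: 3 bytes instead of 12 for three floats.
    return struct.pack("3b", *(int(c - p) for p, c in zip(prev, curr)))

def apply_delta(prev, delta):
    # The receiver must have prev -- i.e. every earlier update --
    # or this reconstruction falls apart.
    return tuple(p + d for p, d in zip(prev, struct.unpack("3b", delta)))

prev_state = (100, 200, 50)
curr_state = (103, 198, 50)

wire = encode_delta(prev_state, curr_state)   # 3 bytes on the wire
print(apply_delta(prev_state, wire))
```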

It is usually better to rely on old-skool tricks such as bit packing and quantization. These techniques were common in the days of 8 bit microcomputers with 4k RAM, but have been largely forgotten in the land of .NET and gigabyte working sets.
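As a sketch of what bit packing and quantization look like in practice (the field layout here is hypothetical): a heading angle quantized to 8 bits, a health value to 7 bits, and a boolean flag in the remaining bit, so the whole update fits in two bytes.

```python
import math
import struct

def pack_update(heading_radians, health, is_alive):
    # Quantize the heading to 8 bits: ~1.4 degrees of precision,
    # which is plenty for rendering a distant player.
    heading_q = round(heading_radians / (2 * math.pi) * 255) & 0xFF
    # Health fits in 7 bits (0-127), leaving 1 bit for the flag.
    health_q = min(health, 127)
    flags = (health_q << 1) | (1 if is_alive else 0)
    return struct.pack("BB", heading_q, flags)

def unpack_update(data):
    heading_q, flags = struct.unpack("BB", data)
    heading = heading_q / 255 * 2 * math.pi
    return heading, flags >> 1, bool(flags & 1)

wire = pack_update(math.pi, 95, True)        # 2 bytes total
heading, health, alive = unpack_update(wire)
print(len(wire), heading, health, alive)
```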

For instance, don't send strings over the wire. Use integer IDs or enums instead.
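A quick sketch of the difference (message names are made up): a one-byte enum ID versus the UTF-8 string it replaces.

```python
import struct
from enum import IntEnum

class MsgType(IntEnum):
    FIRE_WEAPON = 0
    PLAYER_POSITION = 1
    CHAT = 2

as_string = "PLAYER_POSITION".encode("utf-8")      # 15 bytes on the wire
as_id = struct.pack("B", MsgType.PLAYER_POSITION)  # 1 byte on the wire

print(len(as_string), len(as_id))
```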

If you have a matrix that combines a rotation and translation, and you happen to know there will never be any scaling, shear, or projection, don't send the whole matrix. Instead, send Matrix.Translation, which is a 12 byte Vector3, and Quaternion.CreateFromRotationMatrix(matrix), which is a 16 byte Quaternion. You just compressed a 64 byte matrix into 28 bytes.
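The same trick sketched in pure Python rather than the XNA API (and simplified: this quaternion extraction only handles the common case where the matrix trace is positive; a full implementation needs the other branches): decompose a rigid-body transform into a quaternion plus a translation, pack 7 floats instead of 16, and rebuild the rotation on the receiving end.

```python
import math
import struct

def quat_from_matrix(m):
    # m is a 3x3 row-major rotation matrix. Assumes trace > -1,
    # which holds for the test rotation below.
    s = math.sqrt(m[0][0] + m[1][1] + m[2][2] + 1.0) * 2
    return ((m[2][1] - m[1][2]) / s,
            (m[0][2] - m[2][0]) / s,
            (m[1][0] - m[0][1]) / s,
            0.25 * s)

def matrix_from_quat(q):
    x, y, z, w = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
            [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
            [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]]

# A 30 degree rotation about Z, plus a translation.
rotation = matrix_from_quat((0.0, 0.0, math.sin(math.pi / 12),
                             math.cos(math.pi / 12)))
translation = (10.0, 5.0, -3.0)

# 7 floats = 28 bytes on the wire, instead of 16 floats = 64.
wire = struct.pack("7f", *quat_from_matrix(rotation), *translation)
print(len(wire))

qx, qy, qz, qw, tx, ty, tz = struct.unpack("7f", wire)
rebuilt = matrix_from_quat((qx, qy, qz, qw))
```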

Stay tuned for more...
