Code Poetry
and Text Adventures

by catid posted 11:57pm Tue. Mar 26th 2013 PDT
For lossless compression, the filtering has to be done byte-wise in the decoder.  This prompted me to consider alternative pixel representations to do the math word-wise instead.  I came up with three options that all failed.  But read on if you're interested...

The basic operations to speed up would be similar to this:

A[0] += B[0] + (C[0] - D[0]) / 2
A[1] += B[1] + (C[1] - D[1]) / 2
A[2] += B[2] + (C[2] - D[2]) / 2

This is what an un-filtering step looks like at the end of lossless image decompression.  The terms on the right are the filter's prediction of the pixel value, and A[0..2] holds the residuals that come out of the entropy decoder.  So how do we make this go faster, or maybe better?  Let's explore some alternative pixel representations:
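
Concretely, the per-pixel un-filter might look something like this in C.  A minimal sketch, assuming a 3-channel layout and a function name of my own; B, C, D would be the neighboring pixels the filter picks:

#include <stdint.h>

/* Un-filter one pixel in place: "a" holds the decoded residuals and
   becomes the output pixel; b, c, d are the predictor inputs. */
static void unfilter_pixel(uint8_t *a, const uint8_t *b,
                           const uint8_t *c, const uint8_t *d) {
    for (int ch = 0; ch < 3; ++ch) {
        a[ch] = (uint8_t)(a[ch] + b[ch] + (c[ch] - d[ch]) / 2);
    }
}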


GF256

Galois fields like GF(256) are finite fields built from polynomials over GF(2): addition is XOR, and multiplication is a table lookup when implemented in C code.  This can be really fast if addition is a common operation.

So the example above becomes: A ^= B ^ TABLE[C ^ D], where TABLE is a 256-entry lookup that divides by 2 in GF(256).  That may be faster.
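
Here's a minimal sketch of that in C, assuming the AES reduction polynomial 0x11B (the table name is mine).  One caveat: "divide by 2" in GF(256) is not the integer average, so this filter predicts different values than the binary one, which is part of why the symbol statistics change:

#include <stdint.h>

static uint8_t HALF[256]; /* HALF[v] = v / 2 in GF(256) */

static void init_half_table(void) {
    for (int v = 0; v < 256; ++v) {
        /* Dividing by x in GF(2^8) mod 0x11B: shift right and, if the
           low bit was set, fold in (0x11B >> 1) = 0x8D. */
        HALF[v] = (uint8_t)((v >> 1) ^ ((v & 1) ? 0x8D : 0));
    }
}

/* The filter step: addition and subtraction are both XOR in GF(256). */
static inline uint8_t gf_unfilter(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    return (uint8_t)(a ^ b ^ HALF[c ^ d]);
}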

But then you have to do a table lookup for each byte, which is slow again.  It seems like SOME filters that lean heavily on addition could still come out faster.  Another hurdle is that the Huffman decoder would need to output two different symbol sets depending on which filter is active: the GF256 symbols or the binary ones.

In the end it's way too much work and probably won't be any better.


Gray codes

Gray codes are useful because numerically close values have a small Hamming distance, so XORing two similar codes leaves only a few bits set.  That makes XOR a good potential filter operation.  The problem is that it's the ONLY filter operation I know of for them.  No one really does math on Gray codes, just bitwise parity- and majority-type filters.  It would basically reduce to doing everything with 24 bitplanes instead of RGB bytes, and all the bitwise conversion would kill the advantage of fast filtering.

Another disadvantage is that a low Hamming distance isn't easy to take advantage of for compression.  Real neighboring pixel values look like 45, 46 or 23, 24, so the binary difference will be "1" or "-1" fairly often.  The Gray-code residuals, however, scatter: the few set bits can land anywhere in the word, so the residual isn't a numerically small value, and I don't know of a way to compress words when "a small number of bits are set" is the only information you're given.  I suspect it would be slower than Huffman coding.  Dead end for me.
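
To make that concrete, here's a quick sketch with the standard binary-reflected Gray code; the example values are mine:

#include <stdint.h>
#include <stdio.h>

static uint8_t to_gray(uint8_t v)   { return (uint8_t)(v ^ (v >> 1)); }
static uint8_t from_gray(uint8_t g) {
    g ^= g >> 4; g ^= g >> 2; g ^= g >> 1; /* prefix XOR undoes v ^ (v >> 1) */
    return g;
}

int main(void) {
    /* Adjacent values XOR to a single set bit, but that bit can land
       anywhere in the byte, so the residual value is not always small. */
    printf("gray(45) ^ gray(46) = 0x%02X\n", (unsigned)(to_gray(45) ^ to_gray(46))); /* 0x02 */
    printf("gray(63) ^ gray(64) = 0x%02X\n", (unsigned)(to_gray(63) ^ to_gray(64))); /* 0x40 */
    printf("round trip: %u\n", (unsigned)from_gray(to_gray(200)));                   /* 200  */
    return 0;
}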


Word-wise addition bit hacks

A = ((A & 0x7F7F7F7F) + (B & 0x7F7F7F7F)) ^ ((A ^ B) & 0x80808080)

is an alternative to

A[0] += B[0], A[1] += B[1], A[2] += B[2], A[3] += B[3]

but not a very good one.  I think it starts getting interesting with 64-bit registers, where each operation covers eight byte lanes.  But on 32-bit cellphones, not so much.
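
For reference, here's the standard SWAR byte-add as a self-contained C function; a minimal sketch, nothing project-specific:

#include <stdint.h>

/* Four independent byte sums in one 32-bit operation.  Masking off
   bit 7 of each byte keeps carries from crossing byte boundaries;
   the XOR term restores each byte's top bit. */
static inline uint32_t add_bytes(uint32_t a, uint32_t b) {
    uint32_t low = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu);
    return low ^ ((a ^ b) & 0x80808080u);
}

Each result byte comes out as (a_i + b_i) mod 256, the same wraparound you'd get from plain uint8_t addition.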


So tonight I struck out with novel pixel formats.  Going to stick with good old-fashioned byte-wise addition, but it was fun to think about Gray codes and GF256 and bit hacks again in the context of a real-world problem like image compression. =)
last edited by catid 12:05am Wed. Mar 27th 2013 PDT