Code Poetry
and Text Adventures

by catid posted (>30 days ago) 11:53am Tue. Jan 14th 2014 PST
Car updates over RF are a great application for fountain codes.

Using cheap per-unit GPRS radios, the cars can periodically dial home to check for updates and log into an update server when one is available.  The protocol would run over GPRS and be UDP-based, with each datagram containing:

[CRC(8 bytes)][Control Header(1 byte)][Error Correction Header(8 bytes)][Error Correction Code Block]

The CRC rejects corrupted error correction data, indicating a total loss of that block.  When enough blocks arrive, the data can be decoded.  I imagine a simple control protocol that lets the car check for updates and then initiate a download.
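A minimal sketch of the datagram layout and the CRC rejection step, in Python.  The builder and parser functions are hypothetical names, and an 8-byte SHA-256 prefix stands in for the real 8-byte CRC (a deployed system would use an actual CRC-64):

```python
import hashlib
import struct

def checksum(payload: bytes) -> bytes:
    # Stand-in for the 8-byte CRC described above; a real
    # implementation would use CRC-64 rather than a hash prefix.
    return hashlib.sha256(payload).digest()[:8]

def build_datagram(control: int, ec_header: bytes, ec_block: bytes) -> bytes:
    """[CRC(8 bytes)][Control Header(1 byte)][EC Header(8 bytes)][EC Code Block]"""
    body = struct.pack("<B", control) + ec_header + ec_block
    return checksum(body) + body

def parse_datagram(datagram: bytes):
    """Return (control, ec_header, ec_block), or None if the CRC
    rejects the datagram -- a total loss of that block."""
    crc, body = datagram[:8], datagram[8:]
    if checksum(body) != crc:
        return None
    return body[0], body[1:9], body[9:]
```

A corrupted control byte or error-correction payload changes the checksum, so the whole datagram is dropped rather than fed to the decoder.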

On the encoder side, a new block is generated periodically, and the same datagram gets multicast to all the GPRS receivers.  This is the advantage of a fountain code: with high probability, the same datagram fills a different gap for each receiver, depending on what that receiver already has.
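To make the encoder side concrete, here is a toy fountain-style block generator: each output block is the XOR of a pseudo-random subset of source blocks, with the seed carried in the error-correction header so every receiver can reproduce the subset.  This is only illustrative -- a real fountain code (e.g. RaptorQ) uses a carefully designed degree distribution, not uniform selection:

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def fountain_block(source_blocks: list, seed: int) -> bytes:
    """One encoded block: XOR of a pseudo-random subset of the source
    blocks.  The seed travels in the datagram's error-correction header,
    so the same multicast datagram is useful to every receiver."""
    rng = random.Random(seed)
    picked = [b for b in source_blocks if rng.random() < 0.5]
    if not picked:  # never emit an empty combination
        picked = [source_blocks[rng.randrange(len(source_blocks))]]
    out = picked[0]
    for b in picked[1:]:
        out = xor(out, b)
    return out
```

Because the subset is derived from the seed, a receiver that already holds some source blocks can XOR them back out and recover a block it is missing.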

To prevent tampering, I see two good ways of implementing the scheme:

(1) Use non-cryptographic hashes on each message block.

This allows someone to actively deny updates to the car, though eventually the updates may make it through.  This is the easiest way but has a trivial DoS attack.

When the final message is decoded the system can use a cryptographic signature to verify the update.
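A sketch of that final verification step.  Note the stand-in: HMAC-SHA256 plays the role of the cryptographic signature here so the example is self-contained; a deployed system would use a public-key scheme such as Ed25519, so cars only need to hold the vendor's public key:

```python
import hashlib
import hmac

def sign_update(update: bytes, key: bytes) -> bytes:
    # Stand-in for a real public-key signature over the decoded update.
    return hmac.new(key, update, hashlib.sha256).digest()

def verify_update(update: bytes, tag: bytes, key: bytes) -> bool:
    """Check the fully decoded update before installing it."""
    expected = hmac.new(key, update, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```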

(2) Use cryptographic signatures in addition to hashes, checked only when an update appears to have been tampered with, at a cost of 64 bytes of overhead per packet.

Using ECC, this could be done by precomputing a table for the data source's public key; the signature check then runs in about half the best general-purpose time.  With a minimum of 64 bytes of overhead per packet, this needs <20 usec per packet (roughly 50 MB/s -- far more than GPRS rates require).

The decoder could ignore the signatures and just buffer valid-looking data, and if the updates fail it could start checking the signatures.
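That lazy-verification strategy can be sketched as a small control-flow helper.  All four callables are hypothetical hooks standing in for the real fountain decoder and signature routines:

```python
def decode_with_lazy_checks(packets, decode, verify_update, verify_packet):
    """Optimistic path: try decoding from everything that passed the CRC.
    Only if the final update check fails do we pay for per-packet
    signature verification and retry with the survivors."""
    update = decode(packets)
    if update is not None and verify_update(update):
        return update
    good = [p for p in packets if verify_packet(p)]  # expensive slow path
    update = decode(good)
    if update is not None and verify_update(update):
        return update
    return None  # not enough clean packets yet; keep receiving
```

When nothing is tampered with (the common case), the per-packet signatures are never touched, so the 64-byte overhead buys insurance without costing verification time.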

The encoder would need to do about the same amount of work, but since any receiver can use any piece of the data to recover the file, each datagram only needs to be signed once before it is broadcast to all users.  Signing each message is therefore not too hard on the encoder side.

Applicability to BitTorrent?

The big issue with deploying fountain codes for BitTorrent has been this validation problem.  When a BitTorrent+FountainCode client gathers enough pieces to reconstruct a file block and the resulting block's signature check fails, each piece's signature must be checked individually.

The overhead is again 64 bytes per packet, which works out to 5% overhead for a file transfer (assuming packets around 1,280 bytes).

The Ed25519 "batch" verification doesn't help here: we are in this situation precisely because a batch check already failed, and it cannot tell us which signature was bad -- what's needed is fast per-signature validation.  Again, the best approach I know is to compute a table on the fly and then use it to verify each message to weed out the bad ones.
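The batch-then-individual pattern looks like this, where `verify_one` is a hypothetical per-message verifier (e.g. Ed25519 with a precomputed table for the sender's public key):

```python
def weed_out_bad(messages, verify_one):
    """Batch check first; only if the batch fails, verify each message
    individually to find the bad ones."""
    if all(verify_one(m) for m in messages):  # stands in for a real batch
        return messages, []                   # verifier: faster, but
                                              # all-or-nothing
    good, bad = [], []
    for m in messages:
        (good if verify_one(m) else bad).append(m)
    return good, bad
```

The batch verifier is only a speedup for the all-good case; once it fails, every message has to be checked one by one, which is exactly the cost the precomputed table is meant to reduce.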

This is a very simplistic take on things, of course.  In reality, the original sender eventually stops seeding and leaves the network.  Then, of the remaining users, whose public key should sign new packets?  And if only old packets are cached and redistributed, then every packet received would effectively need to be stored alongside the data -- taking twice as much space on disk.  That alone makes it impractical.  The other issue is what happens when the old pieces run out: does a new signer take over, and how is that chosen?

So basically per-message signing doesn't make sense in this case.
last edit by catid edited (>30 days ago) 10:23pm Tue. Jan 14th 2014 PST