### Huffman codes, Huffman's algorithm, Huffman compression

I'm not going to explain here how the algorithm works, since that is easy to find on the internet. Before I started writing this little tool, I thought about the pessimistic case. It is quite easy to create a file that generates a tree in which some of the Huffman codes are longer than 32 bits... e.g.

abcddddeeeeeeeeffffffffffffffff... First three different single letters, then runs of 4, 8, 16, and subsequent powers of 2. It is quite easy to show that the Huffman code tree for such a file has codes up to (n-1) bits, where n is the number of distinct letters in the file. It is also easy to show that if we produce a file like the one above with a size of 8GB-1B [if I haven't made a mistake], the longest Huffman code for that file would be 33 bits. [If we produced such a file from all 256 byte values, it would be 53919893334301279589334030174039261347274288845081144962207220498432 GIGABYTES, and the longest Huffman code for it would of course be 255 bits.]

Once I noticed that, I didn't want my program to be limited to 32 bits, so I decided to use the BIGNUM type implemented in OpenSSL (and also in Eric Young's SSLeay, IIRC). Maybe this is rather pointless, and it would be better to write my own simple and faster library for big numbers, but I wanted to get familiar with the OpenSSL API. [BTW: I've noticed that BN_print_fp from OpenSSL on my Debian box allocates 220 bytes and doesn't seem to free them later. I haven't looked into OpenSSL's code, because I don't have the time, but I suppose it creates some temporary buffer or something. The good sign is that this number (220) is constant and doesn't depend on the number of calls to BN_print_fp.]

I've played with valgrind for a while [that's how I know about those 220 bytes in BN_print_fp], and it says my program doesn't have any memory leaks, but I'm not entirely sure of that...

OK, that's enough talk. Here you can find the package with the sources, some tests and scripts...

Pure sources: huffman.c heap.c heap.h