Compression algorithm that allows random reads/writes in a file?


Does anyone have a better compression algorithm that would allow random reads and writes?

I think you could use any compression algorithm if you write it in blocks, but ideally I would not want to have to decompress a whole block at a time. If you have suggestions on an easy way to do this, and on how to know the block boundaries, please let me know. And if it is part of your solution, please also tell me what happens when the data I want to read crosses a block boundary.
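A minimal sketch of that block approach, assuming 64 KiB uncompressed blocks and plain zlib as the codec; the function names, block size, and index layout here are illustrative choices, not any particular library's API. Each block is compressed independently, and an index records where each compressed block lives, so a read that crosses a block boundary simply inflates every block it touches and stitches the pieces together:

    import zlib

    BLOCK_SIZE = 64 * 1024  # uncompressed bytes per block (an assumed tunable)

    def compress_blocks(src, dst):
        """Compress src into dst as independently inflatable zlib blocks.

        Returns an index mapping block number -> (offset, compressed length)
        inside dst, so any single block can be located and inflated alone.
        """
        index = []
        while True:
            chunk = src.read(BLOCK_SIZE)
            if not chunk:
                break
            comp = zlib.compress(chunk)
            index.append((dst.tell(), len(comp)))
            dst.write(comp)
        return index

    def read_range(dst, index, offset, length):
        """Read `length` uncompressed bytes starting at byte `offset`."""
        out = bytearray()
        block, skip = divmod(offset, BLOCK_SIZE)
        while length > 0 and block < len(index):
            pos, clen = index[block]
            dst.seek(pos)
            plain = zlib.decompress(dst.read(clen))
            piece = plain[skip:skip + length]  # may end mid-block...
            out += piece
            length -= len(piece)
            skip = 0                           # ...then continue in the next one
            block += 1
        return bytes(out)

The index itself is tiny relative to the data (one entry per block), so it can be held in memory or serialized at the end of the file.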

In the context of your answers, please assume the file in question is 100 GB, and that sometimes I'll want to read the first 10 bytes, sometimes the last 19 bytes, and sometimes 17 bytes somewhere in the middle.
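Hypothetical usage of the sketch above against exactly that scenario; `bigfile.dat` and `bigfile.cz` are placeholder names:

    with open("bigfile.dat", "rb") as src, open("bigfile.cz", "wb+") as dst:
        index = compress_blocks(src, dst)
        size = src.tell()                             # uncompressed size
        head = read_range(dst, index, 0, 10)          # first 10 bytes
        mid  = read_range(dst, index, size // 2, 17)  # 17 bytes in the middle
        tail = read_range(dst, index, size - 19, 19)  # last 19 bytes

No matter how large the file is, each of these reads decompresses at most two 64 KiB blocks.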

Have these people never heard of "compressed file systems", which have been around since before Microsoft was sued by Stac Electronics in 1993 over compressed file system technology?

I hear LZS and LZJB are popular algorithms for people implementing compressed file systems, which necessarily require both random-access reads and random-access writes.
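Random-access writes are the harder half, because a rewritten block rarely recompresses to exactly its old size. One common workaround, sketched here as a continuation of the reader above (still zlib, standing in for LZS or LZJB), is to append the updated block at the end of the file and repoint the index, reclaiming the stale space in an occasional compaction pass:

    import io

    def write_range(dst, index, offset, data):
        """Overwrite uncompressed bytes at `offset` with `data`.

        Extending the file past its current end is out of scope for this
        sketch; the index is updated in place.
        """
        block, skip = divmod(offset, BLOCK_SIZE)
        while data and block < len(index):
            pos, clen = index[block]
            dst.seek(pos)
            plain = bytearray(zlib.decompress(dst.read(clen)))
            take = min(len(data), len(plain) - skip)
            plain[skip:skip + take] = data[:take]
            comp = zlib.compress(bytes(plain))
            dst.seek(0, io.SEEK_END)                # append-and-repoint: the
            index[block] = (dst.tell(), len(comp))  # old block becomes dead space
            dst.write(comp)
            data = data[take:]
            skip = 0
            block += 1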

Perhaps the simplest and best option is to turn on file system compression for that file and let the OS deal with the details. But if you insist on handling it manually, perhaps you can pick up some tips by reading about NTFS transparent file compression.
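For instance, on Windows a single file can be switched to NTFS transparent compression with the documented FSCTL_SET_COMPRESSION control code; NTFS then compresses in compression units (16 clusters, typically 64 KB) and handles random reads and writes itself. A Windows-only sketch via ctypes, with `bigfile.dat` as a placeholder path:

    import ctypes
    from ctypes import wintypes

    FSCTL_SET_COMPRESSION = 0x0009C040
    COMPRESSION_FORMAT_DEFAULT = 1
    GENERIC_READ = 0x80000000
    GENERIC_WRITE = 0x40000000
    OPEN_EXISTING = 3
    INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
        wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
    ]
    kernel32.DeviceIoControl.argtypes = [
        wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
        wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
        wintypes.LPVOID,
    ]
    kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

    def set_ntfs_compression(path):
        # Open the file read/write, then ask the file system to compress it.
        handle = kernel32.CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                                      0, None, OPEN_EXISTING, 0, None)
        if handle == INVALID_HANDLE_VALUE:
            raise ctypes.WinError(ctypes.get_last_error())
        fmt = ctypes.c_ushort(COMPRESSION_FORMAT_DEFAULT)
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(handle, FSCTL_SET_COMPRESSION,
                                      ctypes.byref(fmt), ctypes.sizeof(fmt),
                                      None, 0, ctypes.byref(returned), None)
        kernel32.CloseHandle(handle)
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())

    set_ntfs_compression("bigfile.dat")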

