Random access gzip stream

I'd like to be able to do random access into a gzipped file. I can afford to do some preprocessing on it (say, build some kind of index), provided that the result of the preprocessing is much smaller than the file itself.

Any advice?

My thoughts were:

  • Hack on an existing gzip implementation and serialize its decompressor state every, say, 1 megabyte of compressed data. Then, to do random access, deserialize the decompressor state and read from the megabyte boundary. This seems hard, especially since I'm working with Java and I couldn't find a pure-Java gzip implementation :(
  • Re-compress the file in chunks of 1 MB and do the same as above. This has the disadvantage of doubling the required disk space.
  • Write a simple parser of the gzip format that doesn't do any decompressing and only detects and indexes block boundaries (if there even are any blocks: I haven't yet read the gzip format description).
Encamp answered 26/3, 2010 at 21:39 Comment(0)

Have a look at zlib's zran.c (a C code example):

/* zran.c -- example of zlib/gzip stream indexing and random access
...

Gzip is just zlib with an envelope.
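
For the curious, the core idea in zran.c is compact enough to sketch. Here is an illustrative Java rendering (class and field names are invented for this sketch; zran.c itself is the reference): build a list of access points, each recording its position in both streams plus the 32 KB of uncompressed history that DEFLATE needs as a dictionary to resume mid-stream.

import java.util.List;

// Illustrative sketch of the index zran.c builds (names invented here).
class AccessPoint {
    long compressedOffset;    // byte position of this point in the .gz file
    int bitOffset;            // DEFLATE blocks need not start on a byte boundary
    long uncompressedOffset;  // corresponding position in the uncompressed data
    byte[] window;            // preceding 32 KB of output: the dictionary needed
                              // to restart decompression from this point
}

class GzipIndex {
    List<AccessPoint> points; // sorted by uncompressedOffset, e.g. one per megabyte

    // Random access to position pos: take the last access point at or before
    // pos, prime an inflater with its window (inflateSetDictionary in zlib),
    // then inflate forward until pos is reached.
    AccessPoint find(long pos) {
        AccessPoint best = points.get(0);
        for (AccessPoint p : points)
            if (p.uncompressedOffset <= pos) best = p;
        return best;
    }
}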

Mallet answered 26/3, 2010 at 21:46 Comment(3)
@jkff: If you don't need cross-platform deployment, check out JNA. It's surprisingly easy to use as a way to call C libraries. – Nereid
Thanks again, I did so and it works like a charm! Rex, thanks to you, too: I used JNA :) – Encamp
Whoohoo, now my unlimited-size-log viewer supports gzip'd logs! – Encamp

The BGZF file format, which is compatible with gzip, was developed by biologists.

(...) The advantage of BGZF over conventional gzip is that BGZF allows for seeking without having to scan through the entire file up to the position being sought.

In http://picard.svn.sourceforge.net/viewvc/picard/trunk/src/java/net/sf/samtools/util/, have a look at BlockCompressedOutputStream.java and BlockCompressedInputStream.java.
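
A rough sketch of how those two classes fit together (written from memory against the linked source, so check the actual signatures before relying on it):

import java.io.File;
import net.sf.samtools.util.BlockCompressedInputStream;
import net.sf.samtools.util.BlockCompressedOutputStream;

public class BgzfDemo {
    public static void main(String[] args) throws Exception {
        // Write a BGZF file: still a valid gzip stream, but cut into
        // independently compressed blocks so it can be seeked later.
        BlockCompressedOutputStream out =
                new BlockCompressedOutputStream(new File("data.bgzf"));
        out.write("header...".getBytes());
        // A "virtual file pointer" packs (compressed block offset, offset
        // within the uncompressed block) into one long.
        long pointer = out.getFilePointer();
        out.write("record of interest...".getBytes());
        out.close();

        // Read back: jump straight to the saved position, no scanning.
        BlockCompressedInputStream in =
                new BlockCompressedInputStream(new File("data.bgzf"));
        in.seek(pointer);
        int b = in.read(); // first byte of "record of interest..."
        in.close();
    }
}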

Fine answered 22/4, 2010 at 10:2 Comment(1)
Thanks, that's nice, but I need my tool to be immediately applicable to existing logfiles, and they're usually archived in .zip or .gzip by a third-party archiver. Besides, I've already got a working solution :) – Encamp

FWIW: I've developed a command-line tool, built upon zlib's zran.c source code, that can do random access to gzip files by creating indexes for them: https://github.com/circulosmeos/gztool

It can even create an index for a still-growing gzip file (for example, a log created by rsyslog directly in gzip format), thus reducing index-creation time to practically zero. See the -S (Supervise) option.
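
Typical usage looks something like this (check gztool -h for the exact, current option letters):

# Build an index (creates foo.gzi alongside the gzip file):
gztool -i foo.gz
# Extract data starting at uncompressed byte 10000000, using the index:
gztool -b 10000000 foo.gz
# Supervise a still-growing gzip file, indexing it as data arrives:
gztool -S growing.gz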

Belabor answered 24/7, 2019 at 21:19 Comment(0)

Interesting question. I don't understand why your 2nd option (re-compress the file in chunks) would double the disk space. It seems to me it would be about the same, plus a small amount of overhead. If you have control over the compression piece, then that seems like the right idea.

Maybe what you mean is that you don't have control over the input, and therefore it would double.

If you can do it, I'm imagining modelling it as a CompressedFileStream class that uses a series of 1 MB gzipped blobs as its backing store. When reading, a Seek() on the stream would move to the appropriate blob and decompress. A Read() past the end of a blob would cause the stream to open the next blob.
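
A minimal sketch of that idea in Java (CompressedFileStream, the chunk-file naming, and the 1 MB chunk size are all just this answer's hypotheticals): each chunk is its own small gzip file, and seeking picks the right one.

import java.io.*;
import java.util.zip.GZIPInputStream;

// Sketch: the backing store is a directory of independently gzipped
// 1 MB chunks named chunk-0.gz, chunk-1.gz, ...
class CompressedFileStream {
    private static final int CHUNK_SIZE = 1 << 20; // 1 MB uncompressed per chunk
    private final File dir;
    private InputStream current; // decompressor over the current chunk
    private int chunkIndex = -1;

    CompressedFileStream(File dir) { this.dir = dir; }

    // Seek: open the chunk containing pos, then inflate-and-discard up to it.
    void seek(long pos) throws IOException {
        openChunk((int) (pos / CHUNK_SIZE));
        long toSkip = pos % CHUNK_SIZE;
        while (toSkip > 0) {
            long n = current.skip(toSkip);
            if (n <= 0) break; // sought past the end of the chunk
            toSkip -= n;
        }
    }

    // Read one byte, rolling over to the next chunk at a chunk boundary.
    int read() throws IOException {
        int b = current.read();
        if (b == -1) {
            File next = new File(dir, "chunk-" + (chunkIndex + 1) + ".gz");
            if (!next.exists()) return -1; // true end of stream
            openChunk(chunkIndex + 1);
            b = current.read();
        }
        return b;
    }

    private void openChunk(int index) throws IOException {
        if (current != null) current.close();
        chunkIndex = index;
        current = new GZIPInputStream(
                new FileInputStream(new File(dir, "chunk-" + index + ".gz")));
    }
}

The worst case is decompressing almost a full chunk to satisfy one seek, so the chunk size trades index granularity against seek latency.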

P.S.: GZIP is described in IETF RFC 1952, but it uses DEFLATE (RFC 1951) for the compression format. There'd be no reason to use the GZIP envelope if you implemented this CompressedFileStream class as I've imagined it.

Confirmatory answered 26/3, 2010 at 21:51 Comment(1)
I don't like the 2nd option because I'm not going to delete the original files, and I don't have control over how they are generated. However, for now that's how I actually implemented it (much as you described), but I was not satisfied with that, and that's why I asked the question :) – Encamp
