So, I'm trying to get read-only access to a zip file in Java and decompress it in a multithreaded manner, because my standard simple single-threaded solution of ZipFile/ZipEntry (with the usual enumeration, input streams, and what-not) takes about five full seconds just to decompress a 50 MB zip file into memory, while my disk takes one second AT MOST to read that file without decompressing.
However, the entire Java zip library is synchronized to an obnoxious degree, no doubt because reading, writing, etc. are all abstracted into the same code instead of there being a nice, efficient, unsynchronized read-only path.
I've looked at third-party Java libraries, and they are either massive VFS libraries (worse than using an elephant gun to shoot a fly), or they only gain performance by multithreading to the point that most of their threads are blocking on disk I/O anyway.
All I want to do is pull a zip file into a byte[], fork some threads, and work on it. There's no reason any synchronization should be needed anywhere, because each unzipped file is used separately in memory with no interaction between them.
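Roughly what I'm after is something like the sketch below: list the entries once, then hand each entry to its own task, where each task opens its own ZipFile instance so the per-instance locks in java.util.zip never actually contend across threads. (This is just a sketch under those assumptions; ParallelUnzip and unzipParallel are placeholder names I made up, and it sidesteps rather than removes the synchronization.)

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.zip.*;

public class ParallelUnzip {
    // Decompress every entry of the zip at `path` into memory, one task per
    // entry. Each task opens its OWN ZipFile, so the synchronized methods in
    // java.util.zip only ever lock uncontended, thread-local instances.
    static Map<String, byte[]> unzipParallel(Path path, int threads) throws Exception {
        List<String> names = new ArrayList<>();
        try (ZipFile zf = new ZipFile(path.toFile())) {
            for (Enumeration<? extends ZipEntry> e = zf.entries(); e.hasMoreElements();) {
                ZipEntry entry = e.nextElement();
                if (!entry.isDirectory()) names.add(entry.getName());
            }
        }
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            Map<String, Future<byte[]>> futures = new LinkedHashMap<>();
            for (String name : names) {
                futures.put(name, pool.submit(() -> {
                    // A fresh ZipFile per task: no shared state, no contention.
                    try (ZipFile zf = new ZipFile(path.toFile());
                         InputStream in = zf.getInputStream(zf.getEntry(name))) {
                        return in.readAllBytes();
                    }
                }));
            }
            Map<String, byte[]> out = new LinkedHashMap<>();
            for (Map.Entry<String, Future<byte[]>> e : futures.entrySet())
                out.put(e.getKey(), e.getValue().get());
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a small test zip so the sketch is self-contained.
        Path zip = Files.createTempFile("demo", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zip))) {
            for (int i = 0; i < 4; i++) {
                zos.putNextEntry(new ZipEntry("file" + i + ".txt"));
                zos.write(("payload-" + i).getBytes());
                zos.closeEntry();
            }
        }
        Map<String, byte[]> result = unzipParallel(zip, 4);
        for (Map.Entry<String, byte[]> e : result.entrySet())
            System.out.println(e.getKey() + " = " + new String(e.getValue()));
        Files.deleteIfExists(zip);
    }
}
```

The obvious downside is re-opening the file per entry, which is why I'd rather get the whole thing into a byte[] once, but that seems to require parsing the central directory by hand.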
Why must this be so difficult?