Is there a faster lossy compression than JPEG?

Is there a compression algorithm that is faster than JPEG yet well supported? I know about JPEG 2000, but from what I've heard it isn't really that much faster.

Edit: for compressing.

Edit2: It should run on 32-bit Linux, and ideally it should be written in C or C++.

Laquitalar answered 29/12, 2010 at 16:50 Comment(4)
for decompressing or compressing?Anatolian
Just curious, why do the images need to be compressed? And by how much?Heterogamy
@Mark Ransom: Well, I need them compressed to send them from a small humanoid robot with a 500 MHz CPU and 256 MB RAM over UDP to a PC for processing. I need at least 20 images per second, and the Wi-Fi stick is not fast enough to send that much raw data per second, so I am using JPEG to reduce the bandwidth.Laquitalar
A video codec would be more appropriate than managing individual full frames.Christensen

JPEG encoding and decoding should be extremely fast. You'll have a hard time finding a faster algorithm. If it's slow, your problem is probably not the format but a bad implementation of the encoder. Try the encoder from libavcodec in the ffmpeg project.
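
For anyone going this route, here is a minimal sketch of compressing one frame with libavcodec's MJPEG encoder. It uses the current send/receive packet API (the libavcodec API at the time of this question looked quite different), assumes the input is already planar YUV420, and trims error handling, so treat it as a starting point rather than a drop-in solution.

```c
/* Sketch: JPEG-compress one YUV420P frame with libavcodec (MJPEG encoder).
 * Assumes a recent FFmpeg; most error handling omitted for brevity. */
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>

int encode_jpeg_frame(const uint8_t *yuv420, int w, int h, AVPacket *out)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);
    ctx->width     = w;
    ctx->height    = h;
    ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;      /* full-range YUV, as JPEG expects */
    ctx->time_base = (AVRational){1, 25};
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    AVFrame *frame = av_frame_alloc();
    frame->format = ctx->pix_fmt;
    frame->width  = w;
    frame->height = h;
    /* Point the frame's planes at the caller's contiguous YUV420 buffer. */
    av_image_fill_arrays(frame->data, frame->linesize, yuv420,
                         ctx->pix_fmt, w, h, 1);

    int ret = avcodec_send_frame(ctx, frame);
    if (ret >= 0)
        ret = avcodec_receive_packet(ctx, out); /* out->data / out->size hold the JPEG */

    av_frame_free(&frame);
    avcodec_free_context(&ctx);
    return ret < 0 ? -1 : 0;
}
```

In a real capture loop the codec context would be opened once and reused for every frame; only the send/receive pair runs per image.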

Prosector answered 29/12, 2010 at 17:44 Comment(8)
JPEG is designed for fast decoding. That does not always mean encoding is fast as well (in fact, encoding is often much slower than decoding).Photoflood
Both are extremely fast if you're not striving for the optimal encoding. A low-end x86 from within the last few years should be able to encode jpeg at a rate of 30 megapixels per second or better (rough estimate off the top of my head).Prosector
An encoder meant for video encoding is bound to be optimized for speed. I know that MJPEG has been plenty fast for years, although I always thought it took specialized hardware to achieve that.Heterogamy
Well, I am using OpenCV to encode raw images to JPEG on a robot with a 500 MHz CPU and 256 MB RAM. It currently takes 0.25 s to encode one 640x480 RGB image, which is not acceptable; I need 20+ images per second.Laquitalar
I think OpenCV is your problem. Even my old K6 at 450 MHz could encode 640x480 JPEG at 25-30 fps. Of course, I had a YUV source rather than RGB. If there's any way you could arrange for the source images to be YUV, that would help a lot. If not, make sure you're using a fast conversion routine; libswscale from ffmpeg is the fastest I know of (see the sketch after these comments).Prosector
The problem is that I don't really have the option of messing with the embedded Linux on the robot; I don't think I will be allowed to install ffmpeg there. Do you think that if I get the camera to return YUV422 images (according to the manual it can), OpenCV will encode them to JPEG faster?Laquitalar
In principle it should be considerably faster. The only problem would be if OpenCV is really stupid and converts them to RGB and back to YUV for no reason.Prosector
Well, I'm hoping to get to 0.05 s for encoding a 640x480 image (YUV422), which would mean 20 fps. I hope that's realistic on a 500 MHz CPU.Laquitalar
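
To illustrate the libswscale suggestion above, here is a minimal sketch of converting a packed RGB24 frame to YUV420P before handing it to the JPEG encoder. The function name and the choice of SWS_FAST_BILINEAR are illustrative, not prescribed by the thread; in a real loop the SwsContext would be created once and reused for every frame.

```c
/* Sketch: RGB24 -> YUV420P conversion with libswscale, done before JPEG encoding. */
#include <libswscale/swscale.h>

void rgb_to_yuv420(const uint8_t *rgb, int w, int h,
                   uint8_t *dst[3], int dst_stride[3])
{
    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_RGB24,
                                            w, h, AV_PIX_FMT_YUV420P,
                                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
    const uint8_t *const src[1] = { rgb };
    const int src_stride[1]     = { 3 * w };   /* tightly packed RGB24 rows */

    sws_scale(sws, src, src_stride, 0, h, dst, dst_stride);
    sws_freeContext(sws);
}
```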

Do you have MMX/SSE2 instructions available on your target architecture? If so, you might try libjpeg-turbo. Alternatively, can you compress the images with something like zlib and then offload the actual reduction to another machine? Is it imperative that actual lossy compression of the images take place on the embedded device itself?
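
If the offload route were taken, the zlib side might look like the following one-shot sketch; compress2 and compressBound are standard zlib calls, the buffer handling is simplified, and (as noted in the comments below) zlib is not fast, so this only helps if the PC does the real reduction.

```c
/* Sketch: one-shot lossless compression of a raw frame with zlib,
 * if the lossy step were offloaded to the receiving PC instead. */
#include <stdlib.h>
#include <zlib.h>

int deflate_frame(const unsigned char *raw, unsigned long raw_len,
                  unsigned char **out, unsigned long *out_len)
{
    *out_len = compressBound(raw_len);   /* worst-case compressed size */
    *out = malloc(*out_len);
    if (!*out)
        return -1;
    /* Level 1 = fastest; higher levels cost CPU a 500 MHz robot doesn't have. */
    return compress2(*out, out_len, raw, raw_len, 1) == Z_OK ? 0 : -1;
}
```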

Hiding answered 29/12, 2010 at 17:13 Comment(4)
the license for libjpeg-turbo is LGPL, which is not right for commercial or truly open-source projects.Osteophyte
zlib compression is several times slower than jpeg compression.Prosector
PNG uses zlib compression. zlib is painful on embedded systems: the code is not really 32/64-bit clean, it cross-compiles poorly, and it requires a lot of RAM in its default configuration. It depends on how embedded you are.Drynurse
You could use the busybox implementation for embedded systems, but I'm not sure how well it performs.Prosector

In what context? On a PC or a portable device?

In my experience you've got JPEG, JPEG 2000, PNG, and... uh, that's about it for "well-supported" image formats in a broad context (lossy or not!)

(Hooray that GIF is on its way out.)

Abnaki answered 29/12, 2010 at 16:54 Comment(5)
I'd go so far as to say JPEG2000 isn't universal, so the list is really down to just JPEG and PNG.Involutional
The patents on LZW have expired, at least in parts of Europe, so there's no real reason to avoid GIF except for its limited colorspace. And that can be circumvented (rather ugly, though).Decaliter
It's for an embedded linux robot.Laquitalar
TIFF is still around somehow; I keep running into it with scanners. Also non-lossy.Drynurse
DCT-compressed images can be put in a TIFF container, so technically TIFF can be either lossy or non-. Doesn't change the baseline observation that DCT is just about the only game in town for lossy image compression, though.Moskow

JPEG 2000 isn't faster at all. Is it encoding or decoding that's not fast enough with JPEG? You could probably be a lot faster by doing only a 4x4 FDCT and IDCT on JPEG.

It's hard to find any documentation on IJG libjpeg, but if you use it, try lowering the quality setting; that might make it faster. There also seems to be a fast FDCT option.

Someone mentioned libjpeg-turbo, which uses SIMD instructions and is compatible with the regular libjpeg. If that's an option for you, I think you should try it.
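
As an illustration of the quality setting and fast-FDCT option mentioned above, a minimal libjpeg compression loop might look like this; the same code runs unchanged against libjpeg-turbo since it is API-compatible. Note that jpeg_mem_dest requires libjpeg 8+ or libjpeg-turbo; older IJG releases only offer jpeg_stdio_dest.

```c
/* Sketch: compress one packed RGB frame with libjpeg / libjpeg-turbo,
 * using a lower quality setting and the fast integer DCT. */
#include <stdio.h>
#include <jpeglib.h>

void compress_rgb(const unsigned char *rgb, int w, int h,
                  unsigned char **jpeg_buf, unsigned long *jpeg_size)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_mem_dest(&cinfo, jpeg_buf, jpeg_size);  /* write the JPEG to memory */

    cinfo.image_width      = w;
    cinfo.image_height     = h;
    cinfo.input_components = 3;
    cinfo.in_color_space   = JCS_RGB;
    jpeg_set_defaults(&cinfo);

    jpeg_set_quality(&cinfo, 70, TRUE);   /* lower quality = fewer bits, less work */
    cinfo.dct_method = JDCT_FASTEST;      /* the "fast FDCT" option */

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = (JSAMPROW)(rgb + cinfo.next_scanline * w * 3);
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
}
```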

Decaliter answered 29/12, 2010 at 16:57 Comment(3)
It's the encoding; converting binary images to JPEG is too slow on my embedded Linux robot.Laquitalar
@Richard Knop: Binary? As in black/white with no gray and no color? That changes things considerably.Heterogamy
@Mark Ransom I meant binary images as in "raw" images. They are in color.Laquitalar

I think wavelet-based compression algorithms are in general slower than the ones using DCT. Maybe you should take a look at the JPEG XR and WebP formats.

Assyrian answered 29/12, 2010 at 17:01 Comment(0)

You could simply resize the image to a smaller one if you don't require the full image fidelity. Averaging every 2x2 block into a single pixel will reduce the size to 1/4 very quickly.
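
A rough sketch of that 2x2 box average for a packed RGB24 image, assuming even width and height; whether plain C like this actually beats a well-optimized JPEG encoder is exactly the question debated in the comments below.

```c
/* Sketch: halve a packed RGB24 image in each dimension by averaging 2x2 blocks.
 * Assumes even width and height; no filtering beyond the box average. */
#include <stdint.h>

void downscale_2x2_rgb24(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; y++) {
        const uint8_t *row0 = src + (2 * y)     * w * 3;
        const uint8_t *row1 = src + (2 * y + 1) * w * 3;
        for (int x = 0; x < w / 2; x++) {
            for (int c = 0; c < 3; c++) {
                int i = 2 * x * 3 + c;
                /* Average the four source pixels of this 2x2 block, per channel. */
                dst[(y * (w / 2) + x) * 3 + c] =
                    (uint8_t)((row0[i] + row0[i + 3] + row1[i] + row1[i + 3] + 2) / 4);
            }
        }
    }
}
```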

Heterogamy answered 29/12, 2010 at 17:40 Comment(7)
Unless you write some extremely optimized code to do the downscaling, performing the jpeg compression with libavcodec will probably take less time than your downscaling code.Prosector
@R, isn't the algorithm I suggested capable of being extremely optimized very easily?Heterogamy
Yes if you write it in asm, but I doubt a pure C implementation of that downscaling algorithm can beat libavcodec's jpeg encoder, at least not with current compiler technology.Prosector
@R, interesting. I might have to try it for myself.Heterogamy
You need (low pass) filtering prior to downsampling, and that will be the expensive part. (Averaging is a very poor low pass filter.)Disembarrass
@Paul R, Averaging might not be the best low pass filter, but I would hardly call it "very poor". Other methods would certainly be much slower.Heterogamy
@Mark Ransom: using averaging will typically leave visible artefacts due to significant energy above Nyquist. If you don't care about image quality and visible artefacts then it might be good enough, but for a computer vision application it might cause problems with things like edge detection.Disembarrass
