Is it possible to tell the quality level of a JPEG?
Asked Answered

13

91

This is really a two part question, since I don't fully understand how these things work just yet:

My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (e.g. 640x480-ish) and saves the file for later use.

My questions:

  1. Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
  2. Does this even matter?? Should I be saving all the images at a decent level (eg: 75-80), regardless of the original quality?

I'm not so sure about this because, as I figure it (let's take an extreme example): if someone had a 5 megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and barely noticeable... until I saved it with quality 0 again...

On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.

Am I even close?

I'm using the GD2 library with PHP, if that is of any use.
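
In case it helps, the resize step is roughly this (simplified sketch; the file names and the 85 are just placeholders, the 85 being exactly the number I'm unsure about):

<?php
// Resize an uploaded JPEG to fit inside 640x480 and save it with GD2.
$src  = imagecreatefromjpeg('uploads/original.jpg');
$srcW = imagesx($src);
$srcH = imagesy($src);

// Fit inside 640x480 while keeping the aspect ratio (never upscale).
$scale = min(640 / $srcW, 480 / $srcH, 1);
$dstW  = (int) round($srcW * $scale);
$dstH  = (int) round($srcH * $scale);

$dst = imagecreatetruecolor($dstW, $dstH);
imagecopyresampled($dst, $src, 0, 0, 0, 0, $dstW, $dstH, $srcW, $srcH);
imagejpeg($dst, 'uploads/resized.jpg', 85);   // <-- what should this quality be?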

Homunculus answered 8/1, 2010 at 1:50 Comment(1)
nickf Could you please accept @DuyLuc's answer which uses identify -format '%Q'? It answers the problem, unlike the currently accepted answer. Just tried it on my files and it works. Thanks!Ruhnke
27
  1. JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of the quality level, you will reduce the actual image quality. Therefore even if you did obtain a quality level from the file, you could not maintain that same quality when you save the JPEG again (even at quality=100).

  2. You should save your JPEG at as high a quality as you can afford in terms of file size. Or use a lossless format such as PNG.

Low quality JPEG files do not simply become more blocky. Instead, colour depth is reduced and detail is removed from sections of the image. You can't rely on lower quality images being blocky and looking OK at smaller sizes.

According to the JFIF spec, the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is.
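
To see point 1 for yourself, here is a quick sketch (GD, since the question mentions it; the file names are placeholders) that re-saves the same JPEG ten times at quality 80 and prints the size and hash after each round. With the same encoder and the same setting the file often stabilises after a few generations, as the comments below discuss, but the first rounds show the loss clearly:

<?php
// Re-save the same JPEG ten times at quality 80 and watch it change.
copy('original.jpg', 'generation.jpg');
for ($i = 1; $i <= 10; $i++) {
    $img = imagecreatefromjpeg('generation.jpg');
    imagejpeg($img, 'generation.jpg', 80);
    imagedestroy($img);
    clearstatcache();                     // filesize() caches its result
    printf("generation %2d: %d bytes, md5 %s\n",
           $i, filesize('generation.jpg'), md5_file('generation.jpg'));
}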

Cleancut answered 8/1, 2010 at 1:57 Comment(6)
+1, it's true in general that resaving a JPEG will tend to reduce quality, but in some restricted cases this can be worked around -- see en.wikipedia.org/wiki/Jpeg#Lossless_editing. E.g. Irfanview has a plugin for lossless JPEG cropping/rotating.Osculum
do you know if the same applies to mpeg videos?Hatpin
@johnny, MPEG2 (and I believe MPEG4) also uses lossy compression, so re-encoding multiple times will reduce the quality. Not sure if the compression level is stored in the MPEG headers though.Cleancut
The only places in the compression process where loss actually happens are (potentially) rounding error and the quantization step. If the quantization can be performed identically and the math is done with enough precision, then it is possible to guarantee that the quality does not decrease. The major problem here is that, AFAIK, the standard does not precisely define how the quantization step is to be performed. We'd have to make certain guesses for files not encoded with a known encoder. There are of course other factors in practice, but for the OP's use case it is at least technically feasible.Ostracod
While it is technically true that every cycle of re-saving a JPEG file reduces the quality further, the quality loss is minimal after the first save. Saving a RAW image with 80%-90% JPEG quality will cause an easily visible quality loss. However, if you then re-open and re-save the file many times, the additional quality loss is almost invisible (if you stick to the same quality setting).Jemimah
How can this answer be the solution? It doesn't answer the question at all.Nolita
151

You can view the compression level using the identify tool from ImageMagick. Download and installation instructions can be found on the official website.

After you install it, run the following command from the command line:

identify -format '%Q' yourimage.jpg

This will return a value from 0 (low quality, small filesize) to 100 (high quality, large filesize).

Information source
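
If you want to call this from PHP (as in the question), a minimal sketch that shells out to identify could look like the following; it assumes the identify binary is on the PATH and returns null when no estimate can be read:

<?php
// Call ImageMagick's identify and read back its quality estimate.
function jpeg_quality_estimate(string $path): ?int
{
    $out = trim((string) shell_exec('identify -format %Q ' . escapeshellarg($path)));
    return is_numeric($out) ? (int) $out : null;
}

$q = jpeg_quality_estimate('yourimage.jpg');   // e.g. 92, or null on failure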

Taeniafuge answered 22/8, 2013 at 10:35 Comment(6)
The specific item you're looking for in the output is the Quality: item which appears right before the Properties: section with all the EXIF data. With maximum quality this value will be 100.Bugle
@OranDennison Your comment was ambiguous enough to make me look for a "Quality" tag in the EXIF data. But there is none: "Quality" is a feature of identify -verbose's output.Aksoyn
I don't understand why this answer has not been marked as solution??? It really nails it down without any blabla.Nolita
@Nolita because although ImageMagick returns a quality assessment number that doesn't mean it's correct. See faqs.org/faqs/jpeg-faq/part1/section-5.html from the jpeg creators themselves, and photo.stackexchange.com/questions/88167/…Perfume
If I use software to open a .bmp file with the original image quality and save it as a JPEG at quality level 50, then open that JPEG and save it as a second JPEG at quality level 60, then open the second JPEG and save it as a third JPEG at quality level 70. Now, if I use the ImageMagick identify tool to check the quality of the third JPEG file, would it tell me its quality is "70"? or "21" (0.5*0.6*0.7)?Stridulous
One other thing to be aware of: a while ago I discovered that ImageMagick uses "92" as a fallback value if it is unable to estimate the JPEG quality! This means you might get this value even for very low quality images. I reported this as a bug some time ago, but it hasn't been fixed so far. Details here: <github.com/ImageMagick/ImageMagick6/issues/260>Flexuous
16

For future visitors checking the quality of a given JPEG, you can just use the ImageMagick tooling:

$> identify -format '%Q' filename.jpg
   92%
Skinflint answered 2/4, 2015 at 17:35 Comment(5)
Wow, ImageMagick even gives you RGB channel kurtosis and skewness and whether it is interlaced! I use identify all the time, but never with the -verbose option. Very useful.Kelter
How is this answer different to DuyLuc's ?Coxalgia
@Coxalgia If you check carefully, you'll see that answer was updated, with a similar solution, only after my post here..Skinflint
@Skinflint How awful! :(Coxalgia
@jtlz2—The thing this answer added to DuyLuc's was a grep Quality pipe, which extracted the "Quality" item mentioned in Oran Dennison's comment. (Both answers have since been edited to use -format '%Q' instead.)Philbrick
12

The JPEG compression algorithm has some parameters which influence the quality of the resulting image.

One such parameter is the set of quantization tables, which define how many bits will be used for each coefficient. Different programs use different quantization tables.

Some programs allow the user to set a quality level of 0-100, but there is no common definition of this number. An image saved with Photoshop at 60% quality takes 46 KB, while the same image saved with GIMP at 60% takes only 26 KB.

Quantization tables are also different.
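
If you want to see this for yourself, here is a rough sketch in plain PHP (no extensions needed; the file name is a placeholder) that pulls the 8-bit quantization tables out of a file's DQT segments. Run it on two files saved at the "same" quality by different programs and the tables will usually differ:

<?php
// Dump the 8-bit quantization tables (DQT segments) of a JPEG file.
function dump_quant_tables(string $path): array
{
    $data   = file_get_contents($path);
    $tables = [];
    $pos    = 2;                                    // skip the SOI marker (FF D8)
    while ($pos + 4 <= strlen($data)) {
        if (ord($data[$pos]) !== 0xFF) {
            break;                                  // lost sync; give up
        }
        $marker = ord($data[$pos + 1]);
        if ($marker === 0xDA || $marker === 0xD9) {
            break;                                  // SOS or EOI: no more tables
        }
        $len = (ord($data[$pos + 2]) << 8) | ord($data[$pos + 3]);
        if ($marker === 0xDB) {                     // DQT segment
            $p   = $pos + 4;
            $end = $pos + 2 + $len;
            while ($p < $end) {
                $pqTq = ord($data[$p]);             // precision (high nibble) + table id
                if ($pqTq >> 4) {                   // 16-bit table: skip for brevity
                    $p += 1 + 128;
                    continue;
                }
                $tables[$pqTq & 0x0F] = array_values(unpack('C*', substr($data, $p + 1, 64)));
                $p += 1 + 64;
            }
        }
        $pos += 2 + $len;                           // jump to the next segment
    }
    return $tables;
}

foreach (dump_quant_tables('image.jpg') as $id => $table) {
    printf("table %d: average coefficient %.1f\n", $id, array_sum($table) / count($table));
}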

There are other parameters as well, such as chroma subsampling, the DCT method, and so on.

So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But you can create such a number, as Photoshop or GIMP do, to describe a compromise between size and quality.

More information: http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html

Common practice is to resize the image to the appropriate size first and apply JPEG compression after that. In this case huge and medium-sized source images will end up with the same size and quality.

Soothsay answered 11/1, 2010 at 17:38 Comment(0)
7

As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):

If you wish to have a simple output of filename: quality for use on multiple images, you can use

identify -format '%f: %Q' *

to show the filename + compression of all files within the current directory.

Parkway answered 9/5, 2017 at 9:20 Comment(0)
6

Here is a formula I've found to work well:

  1. jpg100size (the size it should not exceed in bytes for 98-100% quality) = width*height/1.7

  2. jpgxsize = jpg100size*x (x = quality as a fraction, e.g. 0.65 for 65%)

So you could use these to estimate, statistically, what quality your JPEG was last saved at. If you want to get it down to, let's say, 65% quality, and you want to avoid resampling, you should first compare the size to make sure it's not already too low, and only then reduce the quality.
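
A small sketch of how this heuristic could be used from PHP (the file name and the 0.65 threshold are placeholders, and the result is only the rough statistical estimate described above, not an exact reading):

<?php
// Estimate the quality fraction a JPEG was last saved at, using the
// width*height/1.7 rule of thumb from this answer.
function estimate_quality_fraction(string $path): float
{
    [$width, $height] = getimagesize($path);
    $jpg100size = ($width * $height) / 1.7;      // approx. size at 98-100% quality
    return filesize($path) / $jpg100size;        // e.g. 0.65 ~ 65% quality
}

if (estimate_quality_fraction('upload.jpg') > 0.65) {
    // Probably worth recompressing; below the threshold it is likely
    // already more compressed than the target, so leave it alone.
}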

Finitude answered 22/10, 2014 at 14:13 Comment(3)
Cool to see that someone else has been playing around with the same idea as me. I'm trying to figure out what a "good" size is for a JPEG (or PNG) image when used in our CMS. I've done some statistics on what's already in our database, and I've also been playing around with some figures. As of now I will start warning users uploading a JPEG image which is larger than 4K and 32x32 pixels and has an Area:Bytes ratio < 1.Echinus
Raul, how'd you calculate the 1.7 coefficient?Hocker
more like trial and error, but it was very close for a straight linear formulaFinitude
3

If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.

Fustanella answered 8/1, 2010 at 2:9 Comment(0)
3

So, there are basically two cases you care about:

  1. If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.

  2. If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that's had a large amount of data discarded won't magically take up more space when the quality is raised -- blocky images will compress very nicely even at high quality settings. So, in my opinion it's perfectly OK to raise incoming q=1 to q=85.

From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.

Scalf answered 11/1, 2010 at 18:39 Comment(7)
I realize this is a rather old thread, but your second point isn't correct. Compressed images will often eat more space when re-compressed.Parada
@BenD If you increase the quality between compressions you don't magically get frequency components which weren't in the first compressed file. If this happens in practice, then the compressor is not being very intelligent, since the data is a priori already appropriately quantized. I think the issue here is that the standard allows for small differences in the decoded image, which then has a potentially different DCT the second time. With an appropriately coded JPEG library, point #2 should be possible in theory AND in practice.Ostracod
@Tim I may be misunderstanding your critique, but I believe that you're wrong on this point. Because JPEG compression often results in noise, and the resulting JPEG does not store its current compression level, secondary compressions (i.e. Original -> Quality=60 -> Quality=80) will often be larger than the primary compression. I've run this test using both GD and desktop compressors (see my original link)... Because the compressor cannot know the image's current compression state, it cannot be intelligent when running subsequent compressions. If you know of a way around this I'd love to know.Parada
As a test, try compressing an image to very low quality (say, q=10). The result will be a mess (blurry blocks, etc.), but it will have a small file size. Now try recompressing your new JPG to q=90... the new compression will result in a substantially larger file with worse quality, because it is trying to faithfully preserve all the new artifacts. I'm sure you could get around this with a simple if filesize(original)<filesize(new) {use original;}, but the compression itself will often result in larger files with multiple compressions.Parada
@BenD The biggest problem here is the quantization step in the compression. Unless you know intimate details about the compressor used, it might be impossible to guess the tradeoffs made during quantization. Also, the fact that the standard allows for a 1-bit difference per block (IIRC) during decompression, and that the DCT isn't required to be done with perfect rounding accuracy, means that you are mostly right. But these are pretty much practical problems only. If you decompress a JPEG with accuracy up to the machine epsilon and then perform the DCT again without quantization,Ostracod
the resulting coefficients should be almost identical to the ones in the compressed file. Again, all I really meant is that the coefficients shouldn't change in the absence of other factors. I don't know of a specific way of getting such results, because I think all encoders perform quantization, and not necessarily in the same way as the first encoder.Ostracod
This is a fascinating discussion. I almost don't mind the fact that it means my original answer is wrong. :)Scalf
2

Every new save of the file will further decrease the overall quality. By using higher quality values you will preserve more of the image, regardless of what the original image quality was.

Fetation answered 8/1, 2010 at 1:56 Comment(0)
2

This may be a silly question, but why would you be concerned about micromanaging the quality of the document? I believe if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for best effect. http://www.php.net/manual/en/intro.imagick.php
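
For what it's worth, a minimal sketch with the Imagick extension (file names and the 85 cap are placeholders): it reads back the same quality estimate that identify reports and does the resize in one go:

<?php
// Resize with Imagick and reuse its quality estimate, capped at 85.
$im = new Imagick('upload.jpg');
$estimated = $im->getImageCompressionQuality();   // 0 when Imagick has no estimate
$im->thumbnailImage(640, 480, true);              // bestfit: keep aspect ratio
$im->setImageCompressionQuality(min($estimated ?: 85, 85));
$im->writeImage('resized.jpg');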

Wenda answered 11/1, 2010 at 18:32 Comment(1)
I sometimes want to make sure an artist actually delivered an uncompressed image (several times they have accidentally given me a compressed JPEG).Colleen
2

Here are some ways to achieve your (1) and get it right.

  1. There are ways to do this by fitting to the quantization tables. Sherloq - for example - does this:

    https://github.com/GuidoBartoli/sherloq

    The relevant (python) code is at https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py

  2. There is another algorithm written up in https://arxiv.org/abs/1802.00992 - you might consider contacting the author for any code etc.

  3. You can also simulate file_size(image_dimensions,quality_level) and then invert that function/lookup table to get quality_level(image_dimensions,file_size). Hey presto!

  4. Finally, you can adopt a brute-force https://en.wikipedia.org/wiki/Error_level_analysis approach by calculating the difference between the original image and recompressed versions, each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized (see the sketch further down this answer). Seems to work reasonably well, but the cost is linear in the number of quality levels tried.

Most often the quality factor used seems to be 75 or 95 which might help you to get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.
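
A rough GD-only sketch of the brute-force idea from point 4 (the file name and the candidate grid are placeholders; as the comment on this answer notes, a coarse grid can land on the wrong level, so refine it if you need more precision):

<?php
// Re-encode at each candidate quality and keep the one whose re-encoded
// pixels differ least from the original (sampled on an 8x8 grid for speed).
function guess_jpeg_quality(string $path, array $candidates = []): int
{
    if (!$candidates) {
        $candidates = range(60, 100, 5);          // coarse grid; refine as needed
    }
    $orig = imagecreatefromjpeg($path);
    $w = imagesx($orig);
    $h = imagesy($orig);

    $bestQ = 0;
    $bestDiff = PHP_FLOAT_MAX;
    foreach ($candidates as $q) {
        ob_start();
        imagejpeg($orig, null, $q);               // re-encode to memory at quality $q
        $re = imagecreatefromstring(ob_get_clean());

        $diff = 0;
        for ($y = 0; $y < $h; $y += 8) {
            for ($x = 0; $x < $w; $x += 8) {
                $a = imagecolorat($orig, $x, $y);
                $b = imagecolorat($re, $x, $y);
                $diff += abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF))
                       + abs((($a >> 8) & 0xFF) - (($b >> 8) & 0xFF))
                       + abs(($a & 0xFF) - ($b & 0xFF));
            }
        }
        if ($diff < $bestDiff) {
            $bestDiff = $diff;
            $bestQ = $q;
        }
    }
    return $bestQ;
}

echo guess_jpeg_quality('upload.jpg'), "\n";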

I can add other links for this as they become available - please put them in the comments.

Coxalgia answered 25/3, 2021 at 13:39 Comment(1)
I can confirm that Sherloq's method for determining JPEG quality is the same as the one described in point 4. Also, this brute-force approach fails if you are looking at intervals of 5 (75, 80, 85 and so on) and the quality of your image is not a multiple of 5, say 84. The predicted quality in this case will be 100.Legislature
0

If you trust IrfanView's estimation of the JPEG compression level, you can extract that information from the info text file created by the following Windows command line (your path to i_view32.exe might be different):

"C:\Program Files (x86)\IrfanView\i_view32.exe" <image-file> /info=txtfile
Aranyaka answered 13/2, 2022 at 9:50 Comment(0)
0

The JPEG compression level is recorded in the IPTC data of an image.

Use exiftool (it's free) to get the EXIF data of an image, then search the returned string for "Photoshop Quality". Or at least put the data returned into a text document and check to see what's recorded. It may vary depending on the software used to save the image.

"Writer Name : Adobe Photoshop Reader Name : Adobe Photoshop CS6 Photoshop Quality : 7"

Gradey answered 25/11, 2022 at 19:34 Comment(0)
