Detect which image is sharper

I'm looking for a way to detect which of two (similar) images is sharper.

I'm thinking this could use some measure of overall sharpness to generate a score (hypothetical example: image1 has a sharpness score of 9, image2 has a sharpness score of 7, so image1 is sharper).

I've done some searches for sharpness detection/scoring algorithms, but have only come across ones that will enhance image sharpness.

Has anyone done something like this, or have any useful resources/leads?

I would be using this functionality in the context of a webapp, so PHP or C/C++ is preferred.

Crossover asked 11/7, 2011 at 6:08. Comments (3):
Are they two images of the same object/distance, but one is sharper than the other? (Humanist)
Interesting paper: ieeexplore.ieee.org/xpl/… ("Image sharpness measure using eigenvalues") (Humanist)
@gigantt, thanks, I will check it out. For the most part, I imagine the images will be mostly similar. Perhaps slight changes in distance could cause small variations in subject size, or a narrow depth of field could cause different parts to be in or out of focus. (Crossover)

As shown, for example, on this Matlab Central page, sharpness can be estimated by the average gradient magnitude.

I used this in Python as follows:

from PIL import Image
import numpy as np

im = Image.open(filename).convert('L')  # convert to grayscale
array = np.asarray(im, dtype=np.int32)

gy, gx = np.gradient(array)       # gradients along y and x
gnorm = np.sqrt(gx**2 + gy**2)    # per-pixel gradient magnitude
sharpness = np.average(gnorm)     # mean gradient magnitude as the score

A similar number can be computed with the simpler numpy.diff instead of numpy.gradient. There, the resulting array sizes need to be adapted so the two difference arrays line up:

dx = np.diff(array)[1:,:] # remove the first row
dy = np.diff(array, axis=0)[:,1:] # remove the first column
dnorm = np.sqrt(dx**2 + dy**2)
sharpness = np.average(dnorm)
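
To turn this into the comparison the question asks for, a minimal sketch (the file names are placeholders, not from the answer):

from PIL import Image
import numpy as np

def gradient_sharpness(filename):
    # average gradient magnitude, as computed above
    im = Image.open(filename).convert('L')
    array = np.asarray(im, dtype=np.int32)
    gy, gx = np.gradient(array)
    return np.average(np.sqrt(gx**2 + gy**2))

s1 = gradient_sharpness('image1.jpg')  # placeholder file name
s2 = gradient_sharpness('image2.jpg')  # placeholder file name
print('image1 is sharper' if s1 > s2 else 'image2 is sharper')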
Yoshieyoshiko answered 24/9, 2014 at 10:38. Comments (3):
Yes, less sharpness means more blur. (Yoshieyoshiko)
Does this need to be done on the grayscale image, as in the Matlab code? Or should it work on a color image as well? (I am assuming that array = list(img.getdata()); is this correct?) (Septillion)
@faerubin, my code is for grayscale. I have now extended the snippet to show this. However, a similar method would work on color image data. (Yoshieyoshiko)

The simple method is to measure contrast: the image with the largest differences between pixel values is the sharpest. You can, for example, compute the variance (or standard deviation) of the pixel values, and whichever produces the larger number wins. That looks for maximum overall contrast, though, which may not be what you want; in particular, it will tend to favor pictures with maximum depth of field.
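
As a rough sketch of this variance heuristic in Python (assuming PIL and NumPy; the function name is mine, not from the answer):

from PIL import Image
import numpy as np

def contrast_score(filename):
    # variance of grayscale pixel values: larger = more overall contrast
    gray = np.asarray(Image.open(filename).convert('L'), dtype=np.float64)
    return gray.var()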

Depending on what you want, you may prefer to use something like an FFT, to see which image has the highest-frequency content. This lets you favor a picture that's extremely sharp in some parts (but less so in others) over one that has more depth of field, so that more of the image is reasonably sharp but the maximum sharpness is lower (which is common, due to diffraction at smaller apertures).
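
One way to sketch the FFT idea in Python: measure what fraction of the spectrum's energy lies above some cutoff frequency. The 0.25 cutoff below is an arbitrary assumption you would tune, not something from the answer:

from PIL import Image
import numpy as np

def high_freq_ratio(filename, cutoff=0.25):
    gray = np.asarray(Image.open(filename).convert('L'), dtype=np.float64)
    # magnitude spectrum with the DC term shifted to the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    # distance of each bin from the center = spatial frequency magnitude
    r = np.hypot(y - h / 2, x - w / 2)
    high = spectrum[r > cutoff * r.max()].sum()
    return high / spectrum.sum()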

Armil answered 11/7, 2011 at 6:29. Comments (3):
About the FFT method: interesting approach! Do you mean to compare the brightnesses of certain parts of the FFT-transformed image? Would higher frequencies be located in the centre or at the edges of the image? (Dung)
@ellockie: After the FFT, what you have is data describing the image, but no longer an actual image. The higher frequencies depend on the content of the image, not on a location in the image (i.e., they could come from anywhere; the idea is that they come from the parts that were sharpest). (Armil)
So in the FFT-transformed image, can you say that the pixels further from the centre represent higher frequencies, related to more detailed features? Thank you for the explanation. (Dung)

A simple practical approach would be to use edge detection (more edges == a sharper image).

Quick and dirty hands-on example using PHP GD:

function getBlurAmount($image) {
    $size = getimagesize($image);
    $image = imagecreatefromjpeg($image);
    imagefilter($image, IMG_FILTER_EDGEDETECT); // highlight edges
    $blur = 0;
    for ($x = 0; $x < $size[0]; $x++) {
        for ($y = 0; $y < $size[1]; $y++) {
            // sum the low byte (blue channel) of each edge-filtered pixel
            $blur += imagecolorat($image, $x, $y) & 0xFF;
        }
    }
    return $blur;
}

$e1 = getBlurAmount('http://upload.wikimedia.org/wikipedia/commons/thumb/5/51/Jonquil_flowers_at_f32.jpg/800px-Jonquil_flowers_at_f32.jpg');
$e2 = getBlurAmount('http://upload.wikimedia.org/wikipedia/commons/thumb/0/01/Jonquil_flowers_at_f5.jpg/800px-Jonquil_flowers_at_f5.jpg');

echo "Relative blur amount: first image " . $e1 / min($e1, $e2) . ", second image " . $e2 / min($e1, $e2);

(The image with less blur is sharper.) A more efficient approach would be to detect edges in your code using the Sobel operator (PHP example; rewriting it in C++ should give a huge performance boost, I guess).
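
To illustrate the Sobel suggestion (a sketch in Python with plain NumPy rather than PHP; scoring by mean gradient magnitude is my choice, not the answer's):

from PIL import Image
import numpy as np

def sobel_sharpness(filename):
    a = np.asarray(Image.open(filename).convert('L'), dtype=np.float64)
    p = np.pad(a, 1, mode='edge')  # pad so the output matches the input size
    # 3x3 Sobel responses built from shifted views of the padded image
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy).mean()  # higher = more edge content = sharper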

Bedazzle answered 11/7, 2011 at 9:29. Comments (2):
The last byte returned from imagecolorat() contains the blue component. To consider red and green as well, apply imagefilter($image, IMG_FILTER_GRAYSCALE); first. (Medeah)
imagefilter($image, IMG_FILTER_EDGEDETECT) returns values around 127. If there is more contrast in the picture, the values differ more from that value locally; nevertheless, the mean is always close to 127. To fix this, calculate the variance of the grey values instead. (Medeah)

This paper describes a method for computing a blur factor using the DWT (discrete wavelet transform). It looked pretty straightforward, but instead of detecting sharpness it detects blurriness. It seems to detect edges first (a simple convolution) and then uses the DWT to accumulate and score them.
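
The link is broken, so the paper's exact scoring can't be confirmed; as a crude sketch in the same spirit, one could score the energy of the first-level DWT detail coefficients (this uses PyWavelets, my choice of library, not necessarily the paper's):

from PIL import Image
import numpy as np
import pywt

def dwt_detail_energy(filename):
    gray = np.asarray(Image.open(filename).convert('L'), dtype=np.float64)
    # single-level 2-D DWT: approximation plus three detail subbands
    _, (ch, cv, cd) = pywt.dwt2(gray, 'haar')
    # mean detail energy: blurrier images put less energy in these subbands
    return np.mean(ch**2 + cv**2 + cd**2)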

Dewdrop answered 11/7, 2011 at 7:07. Comments (1):
Link is broken. (Elsieelsinore)
