Is there a robust metric of image sharpness or blurriness? I have various sets of images with different saturation levels, captured from different optical systems, and I need to show the user something like the "quality" of focusing. To pick the most focused image I use a metric obtained with the Sobel-Tenengrad operator (a sum over high-contrast pixels), but the problem is that different objects produce quite different ranges for this metric (it depends on unknown image intensity parameters and on the optical system). I need a metric that can say an image is poorly focused without comparing it against a reference image, i.e. one that labels an image as "badly" or "well" focused on its own.
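For reference, here is a minimal sketch of the Sobel-Tenengrad measure described above, assuming OpenCV and NumPy; the function name and the threshold value are illustrative choices, not taken from the question.

```python
import cv2
import numpy as np

def tenengrad(image_gray, threshold=100.0):
    """Sobel-Tenengrad focus measure: sum of squared gradient magnitudes
    over pixels whose gradient exceeds a threshold. The absolute value
    depends on scene content, exposure and optics, which is exactly the
    normalization problem described above."""
    gx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=3)
    mag_sq = gx * gx + gy * gy
    mask = mag_sq > threshold ** 2      # keep only high-contrast pixels
    return float(np.sum(mag_sq[mask]))

# Usage: higher values mean sharper, but the scale is scene dependent.
# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# score = tenengrad(img.astype(np.float64))
```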
You can calculate the acutance of the image by taking the mean of the gradient magnitude over the whole image.
Reference this StackOverflow answer to a similar question.
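A minimal sketch of that idea, assuming NumPy and OpenCV; the use of 3x3 Sobel kernels is an assumption, not prescribed by the linked answer.

```python
import cv2
import numpy as np

def acutance(image_gray):
    """Rough acutance estimate: mean gradient magnitude over the image.
    Sharper images tend to have a higher mean gradient, but the value
    still depends on scene contrast."""
    gx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(np.sqrt(gx * gx + gy * gy)))
```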
Autofocus is an interesting problem on its own, and evaluating sharpness across arbitrary images adds another level of complexity.
On sharpness evaluation, I suggest this paper from Cornell. Their conclusion was that the variance metric provided the best evaluation of a given image. And it doesn't hurt that it's really easy to calculate!
For creating a consistent metric across different images, you'll need a way to normalize. The metric might be expressed in units of variance per pixel. You could also take advantage of the fact that a lack of focus imposes an upper bound on local variance, and look for how the local-variance values cluster near that maximum.
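A minimal sketch of a variance-based measure with a per-pixel local-variance map, assuming NumPy and SciPy; the window size and the percentile used to summarize the map are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance_map(image_gray, window=9):
    """Per-pixel variance inside a (window x window) neighbourhood."""
    img = image_gray.astype(np.float64)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    return np.clip(sq_mean - mean * mean, 0.0, None)

def variance_focus_score(image_gray, window=9, percentile=99):
    """Global variance is the simple metric the paper favours (already
    normalized per pixel); a high percentile of the local-variance map is
    one way to look for clustering near the maximum achievable local
    variance."""
    img = image_gray.astype(np.float64)
    global_var = float(np.var(img))
    local_var = local_variance_map(img, window)
    return global_var, float(np.percentile(local_var, percentile))
```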
You need a no-reference sharpness metric, such as:
- Cumulative probability of blur detection (CPBD) https://ivulab.asu.edu/software/quality/cpbd
- S3 http://vision.eng.shizuoka.ac.jp/pubs/pdfs/S3_preprint.pdf
- Just noticeable blur (JNB) https://ivulab.asu.edu/software/quality/jnbm
- LPC-SI https://ece.uwaterloo.ca/~z70wang/publications/TIP_LPCSharpness.pdf
Here's a short paper describing a method for detecting blurriness using the Haar wavelet transform.
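A heavily simplified sketch of that idea, assuming PyWavelets: it only compares fine-scale to coarse-scale detail energy across Haar decomposition levels as a rough blur indicator, and does not reproduce the edge-classification rules of the actual paper.

```python
import numpy as np
import pywt

def haar_blur_indicator(image_gray, levels=3):
    """Ratio of fine-scale to coarse-scale detail energy in a Haar
    wavelet decomposition. Blurred images lose fine-scale detail first,
    so smaller ratios suggest stronger blur. This is only a crude proxy
    for the edge-based analysis in the referenced paper."""
    coeffs = pywt.wavedec2(image_gray.astype(np.float64), 'haar', level=levels)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    energies = []
    for cH, cV, cD in coeffs[1:]:
        energies.append(np.sqrt(cH ** 2 + cV ** 2 + cD ** 2).mean())
    coarse, fine = energies[0], energies[-1]   # coarsest level first, finest last
    return fine / (coarse + 1e-12)
```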
The other answers to this question may also be helpful.