Normalize histogram (brightness and contrast) of a set of images using Python Image Library (PIL)
I have a script that uses the Google Maps API to download a sequence of equal-sized square satellite images and generates a PDF. The images need to be rotated beforehand, and I already do that using PIL.

I noticed that, due to different light and terrain conditions, some images are too bright and others are too dark, so the resulting PDF ends up a bit ugly, with less-than-ideal reading conditions "in the field" (which is backcountry mountain biking, where I want to have a printed thumbnail of specific crossroads).

(EDIT) The goal, then, is to make all images end up with similar apparent brightness and contrast. So the images that are too bright would have to be darkened, and the dark ones lightened. (By the way, I once used ImageMagick's autocontrast, or auto-gamma, or equalize, or autolevel, or something like that, with interesting results on medical images, but I don't know how to do any of these in PIL.)
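For reference, PIL has rough counterparts to those ImageMagick operations in its ImageOps module. A minimal sketch (the function names and the cutoff value are my own choices, not anything from the question):

```python
from PIL import Image, ImageOps

def auto_fix(im):
    """Clip the darkest/brightest 2% of pixels and stretch the rest to the
    full 0-255 range (similar in spirit to ImageMagick contrast stretching)."""
    return ImageOps.autocontrast(im, cutoff=2)

def equalize_fix(im):
    """Flatten the histogram (similar in spirit to ImageMagick -equalize)."""
    return ImageOps.equalize(im)
```

Both functions work on grayscale and RGB images, so they could be applied per tile before assembling the PDF.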

I already tried some image corrections after converting to grayscale (I had a grayscale printer some time ago), but the results weren't good either. Here is my grayscale code:

#!/usr/bin/python
from PIL import ImageEnhance

def myEqualize(im):
    im = im.convert('L')               # convert to grayscale
    contr = ImageEnhance.Contrast(im)
    im = contr.enhance(0.3)            # reduce contrast
    bright = ImageEnhance.Brightness(im)
    im = bright.enhance(2)             # double brightness
    #im.show()
    return im

This code works independently for each image. I wonder if it would be better to analyze all images first and then "normalize" their visual properties (contrast, brightness, gamma, etc).

Also, I think it would be necessary to perform some analysis on each image (histogram?), so as to apply a custom correction depending on the image, rather than an equal correction for all of them (although any "enhance" function implicitly considers the initial conditions).

Has anybody had this problem, and/or do you know a good alternative for doing this with color images (not grayscale)?

Any help will be appreciated, thanks for reading!

Guardrail answered 19/8, 2011 at 1:45
Good question! Some clarification is needed, however. Also, posting example images would be very helpful for people to use as test cases. First, is the problem that the tile edges don't match well when you download them? Or are you looking for a way to brighten the dark tiles and dim the bright ones? Or do you need to do the latter while also maintaining edge continuity? – Seaman
Edges are not a problem, because the set of images is not continuous. The goal is to darken the brightest and brighten the dark ones, as you said. – Guardrail
What you are probably looking for is a utility that performs "histogram stretching". Here is one implementation. I am sure there are others. I think you want to preserve the original hue and apply this function uniformly across all color bands.

Of course there is a good chance that some of the tiles will have a noticeable discontinuity in level where they join. Avoiding this, however, would involve spatial interpolation of the "stretch" parameters and is a much more involved solution. (...but would be a good exercise if there is that need.)
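A hand-rolled percentile-based stretch might look like the sketch below. The function name and the 1%/99% cutoffs are my own choices; as suggested above, the same LUT is applied uniformly across all bands to keep the hue roughly intact:

```python
from PIL import Image

def stretch(im, lo_pct=1, hi_pct=99):
    """Linearly stretch levels so that the lo_pct..hi_pct percentile range
    of the luminance histogram maps to 0..255.  One LUT, reused per band."""
    h = im.convert("L").histogram()
    total = sum(h)
    lo_count, hi_count = total * lo_pct / 100, total * hi_pct / 100
    acc = lo = hi = 0
    for level, count in enumerate(h):
        acc += count
        if acc <= lo_count:
            lo = level          # last level below the low percentile
        if acc <= hi_count:
            hi = level          # last level below the high percentile
    hi = max(hi, lo + 1)        # avoid division by zero on flat images
    lut = [min(255, max(0, round((v - lo) * 255 / (hi - lo))))
           for v in range(256)]
    return im.point(lut * len(im.getbands()))
```

Levels below the low percentile clamp to 0 and levels above the high percentile clamp to 255, which is what discards the outlier brightness in each tile.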

Edit:

Here is a tweak that preserves image hue:

def equalize(im):
    h = im.convert("L").histogram()
    lut = []
    for b in range(0, len(h), 256):
        # step size: number of pixels per output grey level
        step = sum(h[b:b+256]) // 255
        # create equalization lookup table
        n = 0
        for i in range(256):
            lut.append(n // step)
            n = n + h[i+b]
    # map image through lookup table, one copy of the LUT per colour band
    return im.point(lut*im.layers)
Seaman answered 24/8, 2011 at 3:11
Wow, it seems exactly what I wanted. I'll give it a quick try and post some feedback very soon! – Guardrail
Actually, this implementation seems to work one image at a time, and I was thinking about analyzing all images first and then applying the equalization. Also, the images will not be tiled; they are from different locations and don't usually overlap. I'll test your suggestion and see what I get. Thanks! – Guardrail
I tried it on regular digital images from my camera with great results. However, when I tried it on a screen capture of a Google satellite image, it was terrible. I think the satellite images are highly posterized or something. – Seaman
I tried to run your code, but there is an error: a nonexistent attribute layers on class Image (last line of your code). I am using Python 2.7, but could not find this attribute in PIL's docs. Any idea? :o( – Guardrail
@Guardrail im.layers will be 3 for an RGB image, 4 for an RGBA image. Just substitute the appropriate value. – Seaman
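Putting the comments above together, a variant that runs on current Python/Pillow, using the public len(im.getbands()) in place of the im.layers attribute, might look like this (a sketch, not the answer's exact code):

```python
from PIL import Image

def equalize(im):
    """Histogram-equalize im while roughly preserving hue: build one LUT
    from the luminance histogram and apply it to every band."""
    h = im.convert("L").histogram()
    step = max(1, sum(h) // 255)          # pixels per output grey level
    lut, n = [], 0
    for count in h:
        lut.append(min(255, n // step))   # cumulative count -> output level
        n += count
    # replicate the LUT once per band (3 for RGB, 4 for RGBA, 1 for L)
    return im.point(lut * len(im.getbands()))
```

The max(1, ...) guard is my addition, to avoid a zero step on very small images.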
The following code works on images from a microscope (which are all similar), preparing them prior to stitching. I used it on a test set of 20 images, with reasonable results.

The brightness-averaging function is from another Stack Overflow question.

from PIL import Image
from PIL import ImageStat
import math

# function to return the average brightness of an image
# Source: https://mcmap.net/q/395926/-what-are-some-methods-to-analyze-image-brightness-using-python

def brightness(im_file):
    im = Image.open(im_file)
    stat = ImageStat.Stat(im)
    r, g, b = stat.mean
    # weighted average of the r, g, b means, approximating perceived brightness
    return math.sqrt(0.241*(r**2) + 0.691*(g**2) + 0.068*(b**2))

myList = [0.0]
deltaList = [0.0]
num_images = 20                         # number of images

# loop to auto-generate image names and run the function above
for i in range(1, num_images + 1):      # image numbers 1 through 20
    a = str(i).zfill(2)                 # naming convention of files: 01.jpg, 02.jpg ... 11.jpg etc.
    image_name = 'twenty/' + a + '.jpg'
    myList.append(brightness(image_name))

avg_brightness = sum(myList[1:]) / num_images
print(myList)
print(avg_brightness)

for i in range(1, num_images + 1):
    deltaList.append(avg_brightness - myList[i])

print(deltaList)

At this point, the "correction" values (i.e. the difference between each image's brightness and the mean) are stored in deltaList. The following section applies this correction to all the images, one by one.

for k in range(1, num_images + 1):         # image numbers 1 through 20
    a = str(k).zfill(2)                    # same naming convention: 01.jpg, 02.jpg ...
    image_name = 'twenty/' + a + '.jpg'
    img_file = Image.open(image_name)
    img_file = img_file.convert('RGB')     # make sure the image is in RGB mode
    pixels = img_file.load()               # create the pixel access object
    delta = int(deltaList[k])
    for i in range(img_file.size[0]):
        for j in range(img_file.size[1]):
            r, g, b = pixels[i, j]         # r, g, b values of the (i, j)-th pixel
            # apply the correction, clamped to the valid 0-255 range
            pixels[i, j] = (max(0, min(255, r + delta)),
                            max(0, min(255, g + delta)),
                            max(0, min(255, b + delta)))
    new_image_name = 'twenty/image' + str(k) + '.jpg'   # new filename
    img_file.save(new_image_name)          # save output under the new name
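As a side note, a per-pixel Python loop is slow for large images; the same constant offset can be applied through a lookup table with Image.point. A sketch (the function name is mine, not part of the answer):

```python
from PIL import Image

def apply_offset(im, delta):
    """Add a constant brightness offset to every band via a lookup table,
    clamping to 0-255; much faster than looping over pixels in Python."""
    d = int(delta)
    lut = [min(255, max(0, v + d)) for v in range(256)]
    # replicate the LUT once per band (3 for RGB, 4 for RGBA)
    return im.point(lut * len(im.getbands()))
```

With this, the inner double loop above collapses to a single call per image, e.g. apply_offset(img_file, deltaList[k]).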
Digamma answered 1/7, 2016 at 5:59
