After processing a previously optimized indexed-color PNG image with transparency (see here for some background, since this question refers to the same image file) with the code below, the PLTE chunk seems to be expanded with many more colors than are actually used.
My current code:
#!/usr/bin/env python3
import os
from PIL import Image, ImageFile

source_file = os.path.expanduser("~/Desktop/prob.png")
dest_file = os.path.expanduser("~/Desktop/processed_img.png")

img = Image.open(source_file)

# Convert every color in the palette to grayscale and store the new palette
pal = img.getpalette()
for i in range(len(pal) // 3):
    # Using the ITU-R 601-2 luma transform
    g = (pal[3*i] * 299 + pal[3*i+1] * 587 + pal[3*i+2] * 114) // 1000
    pal[3*i: 3*i+3] = [g, g, g]
img.putpalette(pal)

try:
    img.save(dest_file, optimize=True, format="PNG")
except IOError:
    ImageFile.MAXBLOCK = img.size[0] * img.size[1]
    img.save(dest_file, optimize=True, format="PNG")
Using pngcheck, I get a small 16-color palette for the original file:
$ pngcheck -tc7pv ~/Desktop/prob.png
File: /Users/victor/Desktop/prob.png (12562 bytes)
chunk IHDR at offset 0x0000c, length 13
825 x 825 image, 8-bit palette, non-interlaced
chunk PLTE at offset 0x00025, length 48: 16 palette entries
0: ( 0, 0, 0) = (0x00,0x00,0x00)
1: (230,230,230) = (0xe6,0xe6,0xe6)
2: (215,215,215) = (0xd7,0xd7,0xd7)
3: (199,199,199) = (0xc7,0xc7,0xc7)
4: (175,175,175) = (0xaf,0xaf,0xaf)
5: (143,143,143) = (0x8f,0x8f,0x8f)
6: (111,111,111) = (0x6f,0x6f,0x6f)
7: ( 79, 79, 79) = (0x4f,0x4f,0x4f)
8: ( 22, 22, 22) = (0x16,0x16,0x16)
9: ( 0, 0, 0) = (0x00,0x00,0x00)
10: ( 47, 47, 47) = (0x2f,0x2f,0x2f)
11: (254,254,254) = (0xfe,0xfe,0xfe)
12: (115, 89, 0) = (0x73,0x59,0x00)
13: (225,176, 0) = (0xe1,0xb0,0x00)
14: (255,211, 0) = (0xff,0xd3,0x00)
15: (254,204, 0) = (0xfe,0xcc,0x00)
chunk tRNS at offset 0x00061, length 1: 1 transparency entry
0: 0 = 0x00
chunk IDAT at offset 0x0006e, length 12432
zlib: deflated, 32K window, maximum compression
chunk IEND at offset 0x0310a, length 0
No errors detected in /Users/victor/Desktop/prob.png (5 chunks, 98.2% compression).
Then, after processing the image with the code sample above, pngcheck displays a much bigger PLTE chunk, filled with lots of (probably unused) color values:
$ pngcheck -tc7pv ~/Desktop/processed_img.png
File: /Users/victor/Desktop/processed_img.png (14680 bytes)
chunk IHDR at offset 0x0000c, length 13
825 x 825 image, 8-bit palette, non-interlaced
chunk PLTE at offset 0x00025, length 768: 256 palette entries
0: ( 0, 0, 0) = (0x00,0x00,0x00)
1: (230,230,230) = (0xe6,0xe6,0xe6)
2: (215,215,215) = (0xd7,0xd7,0xd7)
3: (199,199,199) = (0xc7,0xc7,0xc7)
4: (175,175,175) = (0xaf,0xaf,0xaf)
5: (143,143,143) = (0x8f,0x8f,0x8f)
6: (111,111,111) = (0x6f,0x6f,0x6f)
7: ( 79, 79, 79) = (0x4f,0x4f,0x4f)
8: ( 22, 22, 22) = (0x16,0x16,0x16)
9: ( 0, 0, 0) = (0x00,0x00,0x00)
10: ( 47, 47, 47) = (0x2f,0x2f,0x2f)
11: (254,254,254) = (0xfe,0xfe,0xfe)
12: ( 86, 86, 86) = (0x56,0x56,0x56)
13: (170,170,170) = (0xaa,0xaa,0xaa)
14: (200,200,200) = (0xc8,0xc8,0xc8)
15: (195,195,195) = (0xc3,0xc3,0xc3)
16: ( 16, 16, 16) = (0x10,0x10,0x10)
17: ( 17, 17, 17) = (0x11,0x11,0x11)
18: ( 18, 18, 18) = (0x12,0x12,0x12)
19: ( 19, 19, 19) = (0x13,0x13,0x13)
20: ( 20, 20, 20) = (0x14,0x14,0x14)
(...) --- and it goes on listing all values up to 255:
254: (254,254,254) = (0xfe,0xfe,0xfe)
255: (255,255,255) = (0xff,0xff,0xff)
chunk tRNS at offset 0x00331, length 1: 1 transparency entry
0: 0 = 0x00
chunk IDAT at offset 0x0033e, length 13830
zlib: deflated, 32K window, maximum compression
chunk IEND at offset 0x03950, length 0
No errors detected in /Users/victor/Desktop/processed_img.png (5 chunks, 97.8% compression).
Is this behavior normal in Pillow? Is there any way to save a shorter PLTE chunk, similar to the one in the original file (I am trying to optimize for smaller file sizes)?
If Pillow can't do it, is there any other simple way to do it? Preferably in pure Python, but numpy or an additional pure-Python package such as PyPNG or PurePNG would also be fine, if that helps.
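One workaround I have been considering (a rough sketch only; `trim_palette` is just a name I made up, and I am not sure every Pillow version will actually write the shorter PLTE on save) is to remap the pixel data so that only the palette indices actually in use remain, and then install the shortened palette before saving:

```python
from PIL import Image

def trim_palette(img):
    """Remap a P-mode image so its palette keeps only the entries
    whose indices actually appear in the pixel data."""
    used = sorted(set(img.getdata()))                  # indices in use
    mapping = {old: new for new, old in enumerate(used)}
    pal = img.getpalette()
    new_pal = []
    for old in used:
        new_pal.extend(pal[3 * old:3 * old + 3])
    # For P images, point() builds a lookup table by calling the
    # function once per possible index (0..255)
    out = img.point(lambda i: mapping.get(i, 0))
    out.putpalette(new_pal)
    return out
```

I suspect the tRNS entry would also need attention: if the transparent index is remapped to a different position, the transparency information would have to be moved accordingly (in my file it is index 0, which sorting happens to keep at 0). And whether the written PLTE ends up shorter presumably still depends on how Pillow serializes the palette, so this may only help together with optimize=True.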