Using PySpark to read/write 2D images on a Hadoop file system (HDFS)
I want to be able to read/write images on an HDFS file system and take advantage of HDFS data locality.

I have a collection of images where each image is composed of

  • 2D arrays of uint16
  • basic additional information stored as an XML file.

I want to create an archive on the HDFS file system and use Spark to analyze the archive. Right now I am struggling with the best way to store the data on HDFS in order to take full advantage of the Spark+HDFS structure.

From what I understand, the best way would be to create a SequenceFile wrapper. I have two questions:

  • Is creating a SequenceFile wrapper the best way? (See the sketch after this list.)
  • Does anybody have a pointer to examples I could start from? I can't be the first one who needs to read something other than a text file on HDFS through Spark!
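For reference, here is a minimal sketch of the SequenceFile idea, assuming a pyspark shell where sc is the SparkContext and the source images sit on a local disk; the paths are illustrative. PySpark converts (str, bytearray) pairs to Text/BytesWritable records automatically:

import glob

# collect (file name, raw bytes) pairs from local .tif files
pairs = []
for path in glob.glob('/local/images/*.tif'):
    with open(path, 'rb') as f:
        pairs.append((path, bytearray(f.read())))

# write the pairs to HDFS as a SequenceFile (bytearray -> BytesWritable)
sc.parallelize(pairs).saveAsSequenceFile('hdfs://localhost:9000/images_seq')

# read them back as an RDD of (key, value) pairs
rdd = sc.sequenceFile('hdfs://localhost:9000/images_seq')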
Gambia answered 25/2, 2015 at 22:46

I have found a solution that works: using binaryFiles from PySpark 1.2.0 does the job. It is flagged as experimental, but I was able to read TIFF images with a suitable combination of OpenCV calls.

import cv2
import numpy as np

# `sc` is the SparkContext provided by the pyspark shell.
# Build an RDD of (path, content) pairs and take one element for testing.
L = sc.binaryFiles('hdfs://localhost:9000/*.tif').take(1)

# L[0][1] is the raw file content; convert it to a numpy uint8 buffer.
file_bytes = np.asarray(bytearray(L[0][1]), dtype=np.uint8)

# Use OpenCV to decode the byte buffer (flag 1 = load as 3-channel color).
R = cv2.imdecode(file_bytes, 1)
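To decode every image in the archive rather than a single test element, the same logic can be mapped over the RDD. A minimal sketch, assuming cv2 and numpy are installed on every worker node; passing -1 to imdecode keeps the original bit depth instead of forcing 8-bit color:

def decode(raw):
    # imports inside the function so they run on the workers
    import cv2
    import numpy as np
    buf = np.asarray(bytearray(raw), dtype=np.uint8)
    # flag -1 means "unchanged": keeps the uint16 depth of the TIFFs
    return cv2.imdecode(buf, -1)

images = sc.binaryFiles('hdfs://localhost:9000/*.tif').mapValues(decode)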

Note the pyspark help for binaryFiles:

binaryFiles(path, minPartitions=None)

    :: Experimental

    Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

    Note: Small files are preferred, large file is also allowable, but may cause bad performance.
Gambia answered 26/2, 2015 at 21:40
Thanks - this is interesting. Just curious to know if you've used Spark for TIFF file analysis too? I'm working with large (~800 MB) TIFF files and would like to create a PySpark RDD from the numpy array, but I'm not sure how to go about it. – Liaotung
From my experiments, it is much easier if I prepare the data correctly: I transform the image files into Avro files that contain overlapping image tiles. I have to deal with very large images (400 Mpixels), and this is the best solution for me. – Gambia
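A minimal sketch of the tiling approach mentioned in the last comment, assuming the fastavro package; the tile size, overlap, and schema fields are illustrative guesses, not the commenter's actual layout:

import numpy as np
from fastavro import writer, parse_schema

# one record per tile: its position, shape, and raw uint16 pixel bytes
schema = parse_schema({
    'name': 'Tile', 'type': 'record',
    'fields': [
        {'name': 'y', 'type': 'int'},
        {'name': 'x', 'type': 'int'},
        {'name': 'height', 'type': 'int'},
        {'name': 'width', 'type': 'int'},
        {'name': 'pixels', 'type': 'bytes'},
    ],
})

def tiles(img, size=1024, overlap=64):
    # step by (size - overlap) so neighbouring tiles share a border
    step = size - overlap
    for y in range(0, img.shape[0], step):
        for x in range(0, img.shape[1], step):
            t = img[y:y + size, x:x + size]
            yield {'y': y, 'x': x, 'height': t.shape[0],
                   'width': t.shape[1], 'pixels': t.tobytes()}

img = np.zeros((4096, 4096), dtype=np.uint16)  # placeholder image
with open('tiles.avro', 'wb') as out:
    writer(out, schema, tiles(img))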
