What Haskell representation is recommended for 2D, unboxed pixel arrays with millions of pixels?

I want to tackle some image-processing problems in Haskell. I'm working with both bitonal (bitmap) and color images with millions of pixels. I have a number of questions:

  1. On what basis should I choose between Vector.Unboxed and UArray? They are both unboxed arrays, but the Vector abstraction seems heavily advertised, particularly around loop fusion. Is Vector always better? If not, when should I use which representation?

  2. For color images I will wish to store triples of 16-bit integers or triples of single-precision floating-point numbers. For this purpose, is either Vector or UArray easier to use? More performant?

  3. For bitonal images I will need to store only 1 bit per pixel. Is there a predefined datatype that can help me here by packing multiple pixels into a word, or am I on my own?

  4. Finally, my arrays are two-dimensional. I suppose I could deal with the extra indirection imposed by a representation as "array of arrays" (or vector of vectors), but I'd prefer an abstraction that has index-mapping support. Can anyone recommend anything from a standard library or from Hackage?

I am a functional programmer and have no need for mutation :-)

Cylinder answered 15/5, 2011 at 3:29 Comment(8)
I think there's only Repa that meets number 4, see cse.unsw.edu.au/~chak/papers/repa.pdf.Thrombosis
@stephen: the standard Array interface supports multi-dimensional arrays. You can simply use a tuple for the index.Schreck
The fact that this question is highly upvoted and favorited (including by me) seems to indicate that Haskell handling of arrays is not very well documented.Licastro
@Alexandre C.: Handling of basic everyday arrays is well documented; handling large blocks of memory holding mutable data is as straightforward as it would be in C; handling large immutable multidimensional arrays as efficiently as possible is somewhat less obvious. This is about performance tuning a scenario where subtle, less-well-documented details would be an issue in any language.Fagot
@camccann: My bet is that Haskell is expressive enough that these multidimensional arrays should be handled seamlessly.Licastro
@Alexandre C.: For most applications, it is seamless. And it's not really Haskell itself at issue, it's the library and compiler. A plain UArray indexed by a tuple of Ints is simple to work with and often good enough, but even GHC's deep magic isn't going to optimize code using its minimal API into something competitive with a library tweaked for fast parallelized bulk data processing.Fagot
@camccann: Sure, I do agree with you. However, Haskell main strength is the ability to provide the end user with such tweaked libraries as simply as if they were run-of-the-mill arrays. My point is that there is room for improvement. Repa seems very good in this respect.Licastro
A working link to the repa paper: cse.unsw.edu.au/~keller/Papers/repa.pdfBlinker

For multi-dimensional arrays, the current best option in Haskell, in my view, is repa.

Repa provides high-performance, regular, multi-dimensional, shape-polymorphic parallel arrays. All numeric data is stored unboxed. Functions written with the Repa combinators run in parallel automatically, provided you supply +RTS -N<cores> on the command line when running the program.

Recently, it has been used for some image-processing problems.

I've started writing a tutorial on the use of repa, which is a good place to start if you already know Haskell arrays, or the vector library. The key stepping stone is the use of shape types instead of simple index types, to address multidimensional indices (and even stencils).

The repa-io package includes support for reading and writing .bmp image files, though support for more formats is needed.

Addressing your specific questions, here is a summary of the trade-offs, with discussion:


All three of UArray, Vector, and Repa support unboxing. Vector and Repa have a rich, flexible API, but UArray does not. UArray and Repa have multi-dimensional indexing, but Vector does not. They all have support for bit-packing, although Vector and Repa have some caveats in that regard. Vector and Repa interoperate with C data and code, but UArray does not. Only Repa supports stencils.


On what basis should I choose between Vector.Unboxed and UArray?

They have approximately the same underlying representation; the primary difference is the breadth of the API for working with vectors: Vector supports almost all the operations you'd normally associate with lists (backed by a fusion-driven optimization framework), while UArray has almost no API beyond indexing.
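To make the contrast concrete, here is a minimal sketch, assuming the vector and array packages are installed; bump, brightenV and brightenA are hypothetical helpers, not part of either library:

```haskell
import qualified Data.Vector.Unboxed as U
import Data.Array.Unboxed (UArray, listArray, amap, elems)
import Data.Word (Word16)

-- clamp-add brightness; computed via Int to avoid Word16 wraparound
bump :: Word16 -> Word16
bump p = fromIntegral (min 65535 (fromIntegral p + 1000 :: Int))

-- Vector: a list-like, fusion-friendly API
brightenV :: U.Vector Word16 -> U.Vector Word16
brightenV = U.map bump

-- UArray: essentially only amap, indexing and bounds are available
brightenA :: UArray Int Word16 -> UArray Int Word16
brightenA = amap bump

main :: IO ()
main = do
  print (U.toList (brightenV (U.fromList [100, 65000])))  -- [1100,65535]
  print (elems (brightenA (listArray (0, 1) [100, 65000])))
```

Both produce the same pixels; the difference shows up as soon as you need folds, zips, filters or slicing, which Vector provides and UArray does not.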

For color images I will wish to store triples of 16-bit integers or triples of single-precision floating-point numbers.

UArray has better support for multi-dimensional data, as it can use arbitrary data types for indexing. While something similar is possible with Vector (by writing an Unbox instance for your element type), multi-dimensional data isn't the primary goal of Vector -- instead, this is where Repa steps in, making it very easy to use custom data types stored in an efficient manner, thanks to the shape indexing.

In Repa, your triple of shorts would have the type:

Array DIM3 Word16

That is, a 3D array of Word16s.
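For comparison, plain Vector's Unbox instance for tuples already stores a vector of triples as three parallel unboxed arrays (a structure-of-arrays layout), so Word16 triples need no custom instance. A minimal sketch, with pixels and toGray as hypothetical helpers:

```haskell
import qualified Data.Vector.Unboxed as U
import Data.Word (Word16)

type RGB16 = (Word16, Word16, Word16)

-- four pixels of a tiny image, flattened row-major; the tuple Unbox
-- instance stores this as three parallel Word16 arrays, fully unboxed
pixels :: U.Vector RGB16
pixels = U.fromList [(255,0,0), (0,255,0), (0,0,255), (65535,65535,65535)]

-- naive grayscale: average the channels (via Int to avoid overflow)
toGray :: U.Vector RGB16 -> U.Vector Word16
toGray = U.map (\(r, g, b) ->
  fromIntegral ((fromIntegral r + fromIntegral g + fromIntegral b) `div` 3 :: Int))

main :: IO ()
main = print (U.toList (toGray pixels))  -- [85,85,85,65535]
```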

For bitonal images I will need to store only 1 bit per pixel.

UArrays pack Bools as bits; Vector's instance for Bool does not do bit packing, instead using a representation based on Word8. However, it is easy to write a bit-packing implementation for vectors -- here is one, from the (obsolete) uvector library. Under the hood, Repa uses Vector, so I think it inherits that library's representation choices.

Is there a predefined datatype that can help me here by packing multiple pixels into a word

You can use the existing instances for any of the libraries, for different word types, but you may need to write a few helpers using Data.Bits to roll and unroll packed data.
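Such helpers only need base. A sketch of what rolling and unrolling might look like, packing 8 bitonal pixels per Word8 (packBits8 and unpackBits8 are hypothetical names):

```haskell
import Data.Bits (setBit, testBit, zeroBits)
import Data.Word (Word8)

-- pack up to 8 bitonal pixels into one Word8, least significant bit first
packBits8 :: [Bool] -> Word8
packBits8 bs = foldl (\w (i, b) -> if b then setBit w i else w)
                     zeroBits
                     (zip [0 .. 7] bs)

-- unroll one packed word back into its 8 pixels
unpackBits8 :: Word8 -> [Bool]
unpackBits8 w = [testBit w i | i <- [0 .. 7]]

main :: IO ()
main = print (packBits8 [True, False, True, False, False, False, False, False])
-- bits 0 and 2 set, so this prints 5
```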

Finally, my arrays are two-dimensional

UArray and Repa support efficient multi-dimensional arrays. Repa also has a rich interface for doing so. Vector on its own does not.
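For instance, a UArray indexed by an (Int, Int) tuple gives flat 2D storage with no array-of-arrays indirection; the index mapping is handled by the Ix instance for tuples. A small sketch (mkGradient and pixelAt are hypothetical helpers):

```haskell
import Data.Array.Unboxed
import Data.Word (Word16)

-- a 2D grayscale image, row-major, indexed (row, col) from (0, 0)
type Image = UArray (Int, Int) Word16

-- build an h-by-w test gradient in one flat, unboxed allocation
mkGradient :: Int -> Int -> Image
mkGradient h w =
  listArray ((0, 0), (h - 1, w - 1))
            [ fromIntegral (r + c) | r <- [0 .. h - 1], c <- [0 .. w - 1] ]

pixelAt :: Image -> (Int, Int) -> Word16
pixelAt = (!)

main :: IO ()
main = print (pixelAt (mkGradient 3 4) (2, 3))  -- prints 5
```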


Notable mentions:

  • hmatrix, a custom array type with extensive bindings to linear algebra packages. Ideally it would be built on the vector or repa types.
  • ix-shapeable, getting more flexible indexing from regular arrays
  • chalkboard, Andy Gill's library for manipulating 2D images
  • codec-image-devil, read and write various image formats to UArray
Geyer answered 15/5, 2011 at 7:39 Comment(4)
Also, you can now do image IO of 3D repa arrays in many formats, thanks to repa-devil.Geyer
Could you please explain how Repa can interoperate with C code? I didn't find Storable instances for Data.Array.Repa...Cryptogram
Copying to pointers is probably the easiest path to storable data, but clearly not a long term solution. For that we'll need Storable vectors under the hood.Geyer
An example of doing image desaturation with repa and repa-devilGeyer
C
17

I once reviewed the features of Haskell array libraries that matter to me, and compiled a comparison table (spreadsheet only: direct link). So I'll try to answer.

On what basis should I choose between Vector.Unboxed and UArray? They are both unboxed arrays, but the Vector abstraction seems heavily advertised, particularly around loop fusion. Is Vector always better? If not, when should I use which representation?

UArray may be preferred over Vector if one needs two-dimensional or multi-dimensional arrays, but Vector has a nicer API for manipulating, well, vectors. In general, Vector is not well suited to simulating multi-dimensional arrays.

Vector.Unboxed cannot be used with parallel strategies. I suspect that UArray cannot either, but at least it is very easy to switch from UArray to the boxed Array and see whether the benefits of parallelization outweigh the boxing costs.

For color images I will wish to store triples of 16-bit integers or triples of single-precision floating-point numbers. For this purpose, is either Vector or UArray easier to use? More performant?

I tried using Arrays to represent images (though I needed only grayscale images). For color images I used the Codec-Image-DevIL library (bindings to the DevIL library) to read and write images; for grayscale images I used the pgm library (pure Haskell).

My major problem with Array was that it provides only random-access storage: it offers few means of building Array algorithms and doesn't come with ready-to-use libraries of array routines (it doesn't interface with linear algebra libraries and doesn't let one express convolutions, FFTs or other transforms).

Almost every time a new Array has to be built from an existing one, an intermediate list of values has to be constructed (as in the matrix multiplication from the Gentle Introduction). The cost of array construction often outweighs the benefit of faster random access, to the point that a list-based representation is faster in some of my use cases.
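To make that construction cost concrete, here is the Gentle-Introduction-style matrix multiply adapted to UArray; every element of the result is funneled through the intermediate list that listArray consumes (a sketch, not tuned code):

```haskell
import Data.Array.Unboxed

-- matrix product in the Gentle Introduction style: each result element
-- passes through an intermediate list before listArray copies it in
matMult :: UArray (Int, Int) Double
        -> UArray (Int, Int) Double
        -> UArray (Int, Int) Double
matMult a b =
  listArray ((i0, j0), (i1, j1))
    [ sum [ a ! (i, k) * b ! (k, j) | k <- range (k0, k1) ]
    | i <- range (i0, i1), j <- range (j0, j1) ]
  where
    ((i0, k0), (i1, k1)) = bounds a
    ((_,  j0), (_,  j1)) = bounds b

main :: IO ()
main = print (elems (matMult (listArray ((0,0),(1,1)) [1,0,0,1])
                             (listArray ((0,0),(1,1)) [1,2,3,4])))
```

Multiplying by the identity returns the other matrix unchanged, but only after the whole result has been rebuilt element by element through that list.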

STUArray could have helped me, but I didn't like fighting cryptic type errors and the effort needed to write polymorphic code with STUArray.

So the problem with Arrays is that they are not well suited to numerical computations. hmatrix's Data.Packed.Vector and Data.Packed.Matrix are better in this respect, because they come with a solid matrix library (attention: GPL license). Performance-wise, on matrix multiplication, hmatrix was sufficiently fast (only slightly slower than Octave), but very memory-hungry (it consumed several times more than Python/SciPy).

There is also the blas library for matrices, but it doesn't build on GHC 7.

I don't have much experience with Repa yet, and I don't understand Repa code well. From what I can see, it has a very limited range of ready-to-use matrix and array algorithms written on top of it, but at least the important algorithms can be expressed by means of the library. For example, there are already routines for matrix multiplication and for convolution in repa-algorithms. Unfortunately, convolution currently seems to be limited to 7×7 kernels (not enough for me, but it should suffice for many uses).

I didn't try the Haskell OpenCV bindings. They should be fast, because OpenCV is really fast, but I am not sure whether the bindings are complete and good enough to be usable. Also, OpenCV is by nature very imperative and full of destructive updates; I suppose it's hard to design a nice, efficient functional interface on top of it. Anyone going the OpenCV way is likely to use the OpenCV image representation everywhere and manipulate it with OpenCV routines.

For bitonal images I will need to store only 1 bit per pixel. Is there a predefined datatype that can help me here by packing multiple pixels into a word, or am I on my own?

As far as I know, unboxed arrays of Bool take care of packing and unpacking bit vectors. I remember looking at the implementation of arrays of Bools in other libraries, and I didn't see this elsewhere.

Finally, my arrays are two-dimensional. I suppose I could deal with the extra indirection imposed by a representation as "array of arrays" (or vector of vectors), but I'd prefer an abstraction that has index-mapping support. Can anyone recommend anything from a standard library or from Hackage?

Apart from Vector (and simple lists), all the other array libraries are capable of representing two-dimensional arrays or matrices. I suppose they avoid unnecessary indirection.

Cryptogram answered 16/5, 2011 at 8:44 Comment(4)
The opencv bindings mentioned below are incomplete. It is really not possible for one person to create and maintain a complete set for such a huge library. However, it can still be cost-effective to use opencv even if you have to build a wrapper for the function you need yourself, since it implements some really complex stuff.Ubald
@Ubald Yes, I understand that it's really a huge amount of work for one person. BTW, if you are a maintainer, could you please publish the haddock docs somewhere, so it would be possible to evaluate the library and the bindings' coverage without installing locally? (The docs are not available on Hackage due to a build error, and it doesn't build for me with either GHC 6.12.1 or GHC 7.0.2 because M_PI is undeclared.)Cryptogram
@jextee Hey, thanks for the tip! I've uploaded a new version that might fix both issues.Ubald
@Ubald Thanks, now it builds cleanly.Cryptogram

Although this doesn't exactly answer your question, and isn't really even Haskell as such, I would recommend taking a look at the CV or CV-combinators libraries on Hackage. They bind many of the rather useful image-processing and vision operators from the opencv library and make working on machine vision problems much faster.

It would be rather great if someone figured out how repa or some such array library could be used directly with opencv.

Ubald answered 15/5, 2011 at 8:56 Comment(0)

Here is a new Haskell Image Processing library that can handle all of the tasks in question and much more. Currently it uses the Repa and Vector packages for its underlying representations, and consequently inherits fusion, parallel computation, mutation and most of the other goodies that come with those libraries. It provides an easy-to-use interface that is natural for image manipulation:

  • 2D indexing and unboxed pixels of arbitrary precision (Double, Float, Word16, etc.)
  • all essential functions like map, fold, zipWith, traverse ...
  • support for various color spaces: RGB, HSI, grayscale, bitonal, Complex, etc.
  • common image processing functionality:
    • Binary morphology
    • Convolution
    • Interpolation
    • Fourier transform
    • Histogram plotting
    • etc.
  • Ability to treat pixels and images as regular numbers.
  • Reading and writing common image formats through JuicyPixels library

Most importantly, it is a pure Haskell library, so it does not depend on any external programs. It is also highly extensible: new color spaces and image representations can be introduced.

One thing it does not do is pack multiple binary pixels into a Word; instead it uses one Word per binary pixel. Maybe in the future...

Cranial answered 17/4, 2016 at 7:25 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.