Efficient way to partially read large numpy file?
I have a huge numpy 3D tensor stored in a file on my disk, which I normally read using np.load. It is a binary .npy file. When I use np.load, I quickly end up consuming most of my memory.

Luckily, at every run of the program I only need a certain slice of the huge tensor. The slice is of a fixed size, and its dimensions are provided by an external module.

What's the best way to do this? The only approach I could come up with is somehow storing this numpy matrix in a MySQL database, but I'm sure there are much better / easier ways. I'd also be happy to build my 3D tensor file differently if it would help.


Does the answer change if my tensor is sparse in nature?

Delimitate answered 10/3, 2017 at 20:44 Comment(5)
The file type would help. – Ferrin
It's a binary file, .npy. Saved using np.save – Delimitate
Good question. I don't know of any tool for this (but there may well be one). Is the slice always along the same axis? – Ferrin
Here's a place to start. What are the dimensions / dtype of the tensor? – Palladic
chunky3d works well for sparse 3D data. Documentation is scarce, but there are some neat functionalities in it. – Pteridology

Use numpy.load as normal, but be sure to specify the mmap_mode keyword so that the array is kept on disk, and only the necessary bits are loaded into memory upon access.

mmap_mode : {None, 'r+', 'r', 'w+', 'c'}, optional
    If not None, then memory-map the file, using the given mode (see numpy.memmap for a detailed description of the modes). A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory.

The modes are described in numpy.memmap:

mode : {'r+', 'r', 'w+', 'c'}, optional
    The file is opened in this mode:
    'r'   Open existing file for reading only.
    'r+'  Open existing file for reading and writing.
    'w+'  Create or overwrite existing file for reading and writing.
    'c'   Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only.

Note: be sure not to use 'w+' mode, as it will erase your file's contents.
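For instance, a minimal sketch of this (the filename and tensor shape here are made up for illustration):

```python
import os
import tempfile

import numpy as np

# Hypothetical setup: a (1000, 50, 50) tensor saved with np.save,
# standing in for the large file from the question.
path = os.path.join(tempfile.mkdtemp(), "tensor.npy")
np.save(path, np.zeros((1000, 50, 50)))

# mmap_mode='r' keeps the array on disk; indexing reads only what you touch.
big = np.load(path, mmap_mode="r")
slab = big[10:20]           # a (10, 50, 50) slice backed by the file
chunk = np.asarray(slab)    # copy into RAM only if you need a real ndarray
print(chunk.shape)          # (10, 50, 50)
```

The slice itself is still file-backed; copying with np.asarray (or np.array) is only needed when you want a regular in-memory array.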

Palladic answered 10/3, 2017 at 21:9 Comment(8)
Amazing! I didn't even know about that. This is such an impressive feature of numpy, given that SSDs are so popular today. :) – Loux
Note, unfortunately, that if you need to read the entire file eventually, just not all at once, mmap is not much help. For example, suppose you create a generator that yields chunks of the data, hoping that your program never consumes more memory than the cost of one chunk: with mmap, memory use grows and grows as you request more and more chunks to be loaded, without 'releasing' the older chunks you might be done with. – Farrica
@Farrica True, though using a generator is a little out of character for numpy anyway, since the preferred approach is to take advantage of vectorization rather than iteration. In that case I would likely use struct to pack the data into a binary file, and numba to JIT-compile a fast function to read and analyze the data. – Palladic
@Farrica It would be great if you could specify a cache size for what to keep in memory with mmap before flushing to disk. (Anyone want to write a pull request?) – Palladic
@Palladic I think you are mixing up two different concepts. Using a generator has no connection to vectorization. You could load NumPy data in batches (where each batch is yielded by a generator) just to save memory, instead of loading it all at once. But for each batch, you might still use heavily vectorized operations applied to the whole batch using normal NumPy idioms. Note that I am not suggesting a generator that yields one record of data at a time from some NumPy file; rather, as many records as efficiently fit in memory for your use case. – Farrica
@Palladic I agree it would be awesome if NumPy's interface to mmap had easier-to-use semantics for manually specifying which segments of the data to unload from memory. Given that, it would be easy to stream from a disk-backed NumPy array at constant memory, solely using mmap, without needing any heavier machinery like a Keras generator or Dask. – Farrica
@Farrica I may be projecting here, but I think that in any case where you're going to read the entire file anyway, you wouldn't be limited by processing, and either approach would be fine. I will say, however, that I can see a problem with where to stop with regard to the database-like capabilities of mmap: next someone will want multiprocess-safe access, and sharding across a network... – Palladic
For example, when pre-processing very large data sets to feed in as the input to training a neural network. You might not be able to load the whole thing into memory all at once, but you will have to pass every part of the contents through memory at some point, and you might need to perform linear algebra, data cleaning, etc., in vectorized fashion, even for the sub-portions of the data that do fit into memory. – Farrica
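The batch-plus-vectorization pattern discussed in these comments can be sketched like this (the file, shape, and batch size are all made up for illustration):

```python
import os
import tempfile

import numpy as np

# Hypothetical sketch: a generator yields memory-sized copies of slabs from
# a memmapped array, and each slab is processed with ordinary vectorized
# NumPy operations.
path = os.path.join(tempfile.mkdtemp(), "big.npy")
np.save(path, np.arange(1000.0).reshape(100, 10))

def batches(mmapped, batch_size):
    """Yield in-memory copies of successive slabs along axis 0."""
    for start in range(0, mmapped.shape[0], batch_size):
        # np.array() copies the slab out of the mapping, so the working set
        # stays bounded by one batch (plus whatever pages the OS caches).
        yield np.array(mmapped[start:start + batch_size])

arr = np.load(path, mmap_mode="r")
total = sum(batch.sum() for batch in batches(arr, 25))  # vectorized per batch
print(total)  # 499500.0
```

Copying each slab out of the mapping is what keeps the working set bounded; whether the OS actually evicts the already-read pages is up to its page cache, which is the limitation the comments above are pointing at.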
