How can I make a really long string using IndexedDB without crashing the browser?
I'm writing a web app that generates a potentially large text file that the user will download, and all the processing is done in the browser. So far I'm able to read a file over 1 GB in small chunks, process each chunk, generate a large output file incrementally, and store the growing output in IndexedDB. My more naïve attempt, which kept all the results in memory and then serialized them to a file at the very end, was crashing every browser.

My question is two-fold:

  1. Can I append to an entry in IndexedDB (either a string or an array) without reading the whole thing into memory first? Right now, this:

    task.dbInputWriteQueue.push(output);
    var transaction = db.transaction("files", "readwrite");
    var objectStore = transaction.objectStore("files");
    var request = objectStore.get(file.id);
    request.onsuccess = function()
    {
        request.result += nextPartOfOutput;
        objectStore.put(request.result, file.id);
    };
    

    is causing crashes once the output gets big. I could just write a bunch of small entries into the database, but then I'd have to read them all into memory later anyway to concatenate them. See part 2 of my question...

  2. Can I make a data object URL to reference a value in IndexedDB without loading that value into memory? For small strings I can do:

    var url = window.URL.createObjectURL(new Blob([myString], {type: 'text/plain'}));
    

    But for large strings this doesn't work well. In fact, it crashes before the string even finishes loading. It seems that big reads using get() from IndexedDB cause Chrome, at least, to crash (even the developer tools crash).
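To illustrate the small-entries idea from part 1, here is a rough sketch. The store name "chunks" and both helper names are mine, not from the app; the point is that writing each chunk under its own autoIncrement key needs no read-modify-write, so no write ever touches the whole output:

```javascript
// Split a large string into fixed-size pieces.
function splitIntoChunks(text, chunkSize) {
    var chunks = [];
    for (var i = 0; i < text.length; i += chunkSize) {
        chunks.push(text.slice(i, i + chunkSize));
    }
    return chunks;
}

// Browser-only sketch: append chunks without reading existing data back.
// Assumes db has an object store "chunks" created with
// { autoIncrement: true }, so keys preserve insertion order.
function appendChunks(db, chunks, onDone) {
    var tx = db.transaction("chunks", "readwrite");
    var store = tx.objectStore("chunks");
    chunks.forEach(function (chunk) {
        store.put(chunk); // each put costs O(chunk), not O(total output)
    });
    tx.oncomplete = onDone;
}
```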

Would it be faster if I were using Blobs instead of strings? Is that conversion cheap?

Basically I need a way, with JavaScript, to write a really big file to disk without loading the whole thing into memory at any one point. I know that you can give createObjectURL a File, but that doesn't work in my case since I'm generating a new file from one the user provides.
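Regarding part 2, one avenue worth noting: the Blob constructor accepts an array of other Blobs, and the resulting Blob is a lazy reference, so composing parts does not require building one giant string. Whether the parts stay on disk is up to the engine, so this is a sketch of the API shape rather than a memory guarantee:

```javascript
// Sketch: build the final file as a Blob of smaller Blobs.
var parts = [];
for (var i = 0; i < 3; i++) {
    // In the real app, each part would be one processed output chunk.
    parts.push(new Blob(["chunk-" + i + "\n"], { type: "text/plain" }));
}

// combined.size is the sum of the part sizes; no single in-memory
// string was ever concatenated to construct it.
var combined = new Blob(parts, { type: "text/plain" });
```

In a browser, `URL.createObjectURL(combined)` would then give a downloadable link to the composed file.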

Terina answered 22/11, 2014 at 19:59 Comment(13)
Is using the file system API (which was deprecated before it was released) out of the question?Mariken
Yeah, browser support for writing files using the file system API is abysmal, and our customers expect a file to download.Terina
What kind of file is this?Pug
@JuniusRendel In my case it's a delimited text file which the customer can import into a spreadsheet or a database...Terina
Why not give a number to each chunk (maybe the line number, if it's CSV-like) and use that number as the key in the DB?Halfhardy
@dystroy Storing the output, even in pieces, in the database is not a problem. That would certainly work. However, when the Blob is created to produce the file, all the chunks must be concatenated in memory -- as far as I know -- and my question is how to do that without loading the whole thing into memory, if possible. If not, maybe we need to wait for browser technologies to mature a little more.Terina
Hi Matt, I'm afraid this is possible only with the file system API on Chrome; I've implemented such a solution and was able to create files up to 4 GB. Another option would be to send the data to the server and create the file there. Until they add a write function to the File API, this won't be possible in the browser.Yeo
As @DeniSpasovski suggests, try to "send the data to the server and create the file there". Also, POST it rather than using GET, as POST allows sending more data.Atmolysis
One other direction that might be an option if you target only desktop browsers: using SWF to generate files - #8151016Yeo
@AgiHammerthief Yeah .. I'm gonna wait for that 1-4Gb upload ;<Fugger
Similar question: #20624115. How about compressing the file in the browser so it doesn't exceed the user's memory?Outofdoor
I know it is a very bad way to implement this, but you could give the user the ability to download the file in parts (2 to 4 parts), and then they must download a script file that runs on their machine to concatenate the parts.Atal
Maybe this #20624115 will help with your first questionLandri

Storing a Blob will use far less space and fewer resources, since there is no longer a need for conversion to base64. You can even store "text/plain" objects as Blobs:

var blob = new Blob(['blob object'], {type: 'text/plain'});
var store = db.transaction(['entries'], 'readwrite').objectStore('entries');

// Store the object  
var req = store.put(blob, 'blob');
req.onerror = function(e) {
    console.log(e);
};
req.onsuccess = function(event) {
    console.log('Successfully stored a blob as Blob.');
};

You can see more info here: https://hacks.mozilla.org/2012/02/storing-images-and-files-in-indexeddb/

Chrome has supported this only since summer of 2014: http://updates.html5rocks.com/2014/07/Blob-support-for-IndexedDB-landed-on-Chrome-Dev so you cannot use this on older versions of Chrome.
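To round this out, a hedged sketch of the read side: fetch the stored Blob back (store and key names follow the snippet above) and hand it to the user without ever converting it to a string. The download-link helper is illustrative, not part of the original answer:

```javascript
// Browser-only sketch: read the Blob stored under key 'blob' and
// offer it as a download, keeping it a Blob the whole way.
function downloadStoredBlob(db, filename) {
    var store = db.transaction(['entries'], 'readonly').objectStore('entries');
    var req = store.get('blob');
    req.onsuccess = function () {
        var blob = req.result;            // still a Blob, not a string
        var url = URL.createObjectURL(blob);
        var a = document.createElement('a');
        a.href = url;
        a.download = filename;
        a.click();
        URL.revokeObjectURL(url);         // release the reference when done
    };
}
```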

Insincere answered 18/12, 2014 at 21:17 Comment(3)
Storing files as Blobs fails after you store about 1 GB of data.Yeo
I think that, for now, you can't offer a file for download in JS without loading it into memory. It'll crash anywayLandri
Using file storage you can, but that API is only available on ChromeYeo

I just reopened the Chrome bug I submitted two years ago and filed another bug with the FF team, both related to the browser crashing when creating a large Blob. Generating large files shouldn't be an issue for browsers.

Yeo answered 10/1, 2015 at 17:59 Comment(1)
Still crashes! damn...Waistcoat

© 2022 - 2024 — McMap. All rights reserved.