Best practice for using localStorage to store a large number of objects

Currently I'm experimenting with localStorage to store a large number of objects of the same type, and I am getting a bit confused.

One approach is to store all the objects in a single array. But then every read/write of a single object requires deserialising/serialising the whole array.
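
For illustration, a minimal sketch of that first approach (the "items" key and the record shape are just assumptions):

    // Read: the whole array must be parsed even to touch one record.
    var items = JSON.parse(localStorage.getItem('items') || '[]');

    // Update one record, then serialise everything again.
    var target = items.find(function (item) { return item.id === 42; });
    if (target) target.name = 'updated';
    localStorage.setItem('items', JSON.stringify(items));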

The other way is to store each object directly under its own key in localStorage. This makes accessing a single object much easier, but I'm worried about the number of objects that will be stored (tens of thousands). Also, getting all the objects requires iterating over the whole of localStorage!
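
And a sketch of the second approach, assuming a hypothetical "obj:" key prefix to tell these records apart from anything else in storage:

    // Read/write a single record directly by key; no full-array parse.
    localStorage.setItem('obj:42', JSON.stringify({ id: 42, name: 'foo' }));
    var one = JSON.parse(localStorage.getItem('obj:42'));

    // Getting all records means scanning every key in localStorage.
    var all = [];
    for (var i = 0; i < localStorage.length; i++) {
      var key = localStorage.key(i);
      if (key.indexOf('obj:') === 0) all.push(JSON.parse(localStorage.getItem(key)));
    }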

I'm wondering which way is better in your experience? Also, would it be worthwhile to try a more sophisticated client-side database like PouchDB?

Grandpapa answered 1/7, 2015 at 9:43 Comment(4)
Is there a chance that your project may collect more than 5 MB of data offline? If so, then you definitely need PouchDB and its WebSQL and IndexedDB adapters (while still having a localStorage option for very old browsers)Smutty
I'm aware of the 5 MB limit on localStorage. It has been fine so far, so I'm not too worried about it. PouchDB is a 100k+ dependency; I don't really want to add it unless it really helps.Grandpapa
Well, if space is not the problem, then why not go with PouchDB and avoid all that wondering about how to store data and how to update it later. Let the PouchDB API take care of that for you. You also have very good support for CouchDB-style map/reduce, or even SQL, GQL and Mongo plugins to write queries in SQL, Google QL, or Mongo style. Good luckSmutty
PouchDB is ~50KB min+gz, whereas LocalForage is ~20KB. :)Continuator

If you want something simple for storing a large number of key/value pairs, and you don't want to have to worry about the types, then I recommend LocalForage. You can store strings, numbers, arrays, objects, Blobs, whatever you want. It uses IndexedDB and WebSQL where available, so the storage limits are much higher than localStorage.
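
A minimal sketch of what that looks like (the key name and stored value are made-up examples):

    var localforage = require('localforage');

    // Same get/set style as localStorage, but async (Promise-based),
    // and values keep their types instead of being forced into strings.
    localforage.setItem('user:42', { id: 42, name: 'foo', tags: ['a', 'b'] })
      .then(function () { return localforage.getItem('user:42'); })
      .then(function (user) { console.log(user.name); }); // a real object comes back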

PouchDB works too, but the API is more complex, and it's better-suited for when you want to sync data with CouchDB on the server.
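
For comparison, a rough sketch of the PouchDB flavour (the database name and server URL are placeholders):

    var db = new PouchDB('mydb');

    // Every document needs an _id; PouchDB tracks revisions for sync.
    db.put({ _id: 'user:42', name: 'foo' }).then(function () {
      // Continuous two-way replication with a CouchDB server.
      db.sync('http://localhost:5984/mydb', { live: true, retry: true });
    });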

Continuator answered 2/7, 2015 at 15:14 Comment(0)

If you do not want to have a lot of keys, you can:

  • concatenate the JSON rows with \n and store them under a single key
  • build and update one or more indexes, stored under separate keys, each mapping a record key to a particular row number.

In this case, splitting into rows is just .split('\n'), which is ~2 orders of magnitude faster than JSON.parse.
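
A minimal sketch of those two points, since a code example was requested in the comments; the key names 'rows' and 'index' and the id field are assumptions for illustration:

    // 1. Store all records as newline-separated JSON rows under one key.
    //    (JSON.stringify escapes newlines inside strings, so '\n' is a safe separator.)
    function saveAll(records) {
      var rows = records.map(function (r) { return JSON.stringify(r); });
      localStorage.setItem('rows', rows.join('\n'));

      // 2. Store an index under a separate key: record id -> row number.
      var index = {};
      records.forEach(function (r, i) { index[r.id] = i; });
      localStorage.setItem('index', JSON.stringify(index));
    }

    // Fetch one record: the cheap split finds the row, and JSON.parse
    // runs on that single row only, not on the whole dataset.
    function getById(id) {
      var rows = localStorage.getItem('rows').split('\n');
      var index = JSON.parse(localStorage.getItem('index'));
      return JSON.parse(rows[index[id]]);
    }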

Please note that you will possibly need special effort to synchronize simultaneously open tabs. It can be a challenge in complex cases.
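
One building block for that (a sketch, not a complete solution): the storage event fires in every other tab of the same origin when a key changes, so each tab can refresh its in-memory state:

    window.addEventListener('storage', function (e) {
      // e.key is the changed key; e.newValue is its new contents.
      if (e.key === 'rows' || e.key === 'index') {
        reloadFromLocalStorage(); // hypothetical function that re-reads the data
      }
    });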

localStorage has both good and bad parts.

Good parts:

  • synchronous;
  • extremely fast, both read and write are just memcpy – it's 100+ MB/s of throughput even on weak devices (for example, JSON.stringify is in general 5-20 times slower than localStorage.setItem);
  • thoroughly tested and reliable.

Bad news:

  • no transactions, so you need an engineering effort to sync tabs;
  • assume you have no more than 2 MB (because there exist systems with this limit);
  • 2 MB of storage actually means only 1M characters you can save (strings are stored as two-byte UTF-16 units).

These points mark the borders of localStorage's applicability as a DB. localStorage is good for tasks where you need synchronicity and speed, and where you can trim your DB to fit into the quota.

So localStorage is good for caches and logs. Nothing more.

Cassidycassie answered 8/7, 2015 at 11:55 Comment(4)
This answer is very interesting to me; however, I wish there were a code example, as I'm having a bit of trouble wrapping my head around the idea.Claimant
@Claimant which idea in particular?Cassidycassie
I think I'm starting to get it after thinking about it more but it's this idea: "1. concat row JSONs with \n and store them as a single key 2. build and update an index(es) stored under separate keys, each linking some key with a particular row number" Can you give a short code example of what it would look like?Claimant
Sorry, I can not.Cassidycassie

I haven't personally used localStorage to manage so many elements.

However, the pattern I usually use to manage data is to load the complete database into a JavaScript object, manage it in memory during the process, and save it back to localStorage when the process is finished.
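
A minimal sketch of that pattern (the 'db' key and the record shape are assumptions):

    // Load once at startup, then work purely on the in-memory object.
    var db = JSON.parse(localStorage.getItem('db') || '{}');

    db['42'] = { id: 42, name: 'changed' }; // reads/writes touch memory only

    // Persist once when the process is finished (localStorage is synchronous,
    // so this still works in an unload handler).
    window.addEventListener('beforeunload', function () {
      localStorage.setItem('db', JSON.stringify(db));
    });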

Of course, this pattern may not be a good approach for your needs, depending on your project's specifications.

If you need to save data constantly, data access could become a problem, and using some type of small database is probably a better option.

If your data volume is exceptionally high, it could also be a problem to manage it in memory; however, depending on the data model, you may be able to build it into efficient structures that allow you to load and save data only when needed.

Glottis answered 1/7, 2015 at 12:52 Comment(0)
