How should I auto-expire entries in an ETS table, while also limiting its total size?

I have a lot of analytics data which I'm looking to aggregate every so often (let's say once a minute). The data is being sent to a process which stores it in an ETS table, and every so often a timer sends that process a message telling it to process the table and remove old data.

The problem is that the amount of data that comes in varies wildly, and I basically need to do two things to it:

  • If the amount of data coming in is too big, drop the oldest data and push the new data in. This could be viewed as a fixed-size queue: once the amount of data hits the limit, the queue starts dropping things from the front as new data arrives at the back.
  • If the queue isn't full, but the data has been sitting there for a while, automatically discard it (after a fixed timeout).

If these two conditions are kept, I could basically assume the table has a constant size, and everything in it is newer than X.

The problem is that I haven't found an efficient way to do these two things together. I know I could use match specs to delete all entries older than X, which should be pretty fast if the index is the timestamp, though I'm not sure it's the best way to periodically trim the table.
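For reference, here's roughly what I have in mind (just a sketch, assuming an ordered_set table keyed on an integer timestamp; ets:select_delete/2 does the bulk removal, and the function name is made up):

    %% Sketch: delete every entry whose timestamp key is older than Cutoff.
    %% Assumes an ordered_set table keyed on an integer timestamp.
    trim_older_than(Table, Cutoff) ->
        MatchSpec = [{{'$1', '_'}, [{'<', '$1', Cutoff}], [true]}],
        ets:select_delete(Table, MatchSpec).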

The second problem is keeping the total table size under a certain limit, which I'm not really sure how to do. One solution that comes to mind is to use an auto-increment field with each insert, and when the table is being trimmed, look at the first and last index, calculate the difference, and again use match specs to delete everything below the threshold.
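In code, that idea might look something like this (again just a sketch; it assumes the table is an ordered_set keyed on the counter, and that the first/last difference approximates the entry count, which only holds if the keys are dense):

    %% Sketch: keep at most MaxSize entries, assuming an ordered_set
    %% table keyed on a monotonically increasing integer counter.
    %% Last - First + 1 approximates the count; exact only if keys
    %% are dense (no gaps from earlier deletions).
    trim_to_size(Table, MaxSize) ->
        case ets:last(Table) of
            '$end_of_table' ->
                ok;
            Last ->
                First = ets:first(Table),
                Excess = (Last - First + 1) - MaxSize,
                if
                    Excess > 0 ->
                        Threshold = First + Excess,
                        MS = [{{'$1', '_'}, [{'<', '$1', Threshold}], [true]}],
                        ets:select_delete(Table, MS);
                    true ->
                        0
                end
        end.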

Having said all this, it feels that I might be using the ETS table for something it wasn't designed to do. Is there a better way to store data like this, or am I approaching the problem correctly?

Ilianailine answered 30/5, 2015 at 18:10 Comment(2)
How are the data normally accessed? Are you using ets because you normally require data lookup by key?Liggett
@SteveVinoski no, I'm using ETS simply because storing a lot of data in the process state didn't seem like a reasonable idea.Ilianailine

You can determine the amount of memory a table occupies using ets:info(Tab, memory). The result is in machine words; for the size in bytes, multiply by erlang:system_info(wordsize). But there is a catch: if you are storing binaries, only heap binaries (small ones, up to 64 bytes) are counted, since the payload of large reference-counted binaries lives off-heap. So if you are storing mostly ordinary Erlang terms, this check, combined with the timestamp approach you described, is the way to go.
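A one-liner along those lines (ets:info/2 and erlang:system_info/1 are standard calls; the function name is just for illustration):

    %% Sketch: approximate memory use of an ETS table, in bytes.
    %% Payloads of large (reference-counted) binaries are not counted.
    table_bytes(Tab) ->
        ets:info(Tab, memory) * erlang:system_info(wordsize).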

Rival answered 31/5, 2015 at 12:22 Comment(0)

I haven't used ETS for anything like this, but in other NoSQL databases (such as DynamoDB) an easy solution is to use multiple tables: if you're keeping 24 hours of data, keep 24 tables, one for each hour of the day. When you want to drop data, drop one whole table.
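Translated to ETS, it could look roughly like this (the table-naming scheme and function names are invented for the example; ets:delete/1 drops an entire table in one call):

    %% Sketch: one named ETS table per hour of the day; expiring old
    %% data means deleting the table for the hour about to be reused.
    table_for_hour(Hour) ->
        list_to_atom("metrics_" ++ integer_to_list(Hour)).

    %% Called at the top of every hour, before new writes begin.
    rotate(Hour) ->
        Tab = table_for_hour(Hour),
        catch ets:delete(Tab),   %% ignore badarg if it doesn't exist yet
        ets:new(Tab, [named_table, public, ordered_set]).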

Medea answered 31/5, 2015 at 11:43 Comment(0)

I would do the following: Create a server responsible for

  • receiving all the data storage messages. These messages should be timestamped by the client process (so it doesn't matter if a message waits a little in the server's queue). The server then stores them in an ETS table configured as ordered_set, using the timestamp, converted to an integer, as the key. (If the timestamps come from erlang:now/0 within a single VM they are guaranteed unique; if you are using several nodes, you will need to add some extra information, such as the node name, to guarantee uniqueness.)
  • receiving a tick (sent, for example, by timer:send_interval), then processing the messages received in the last N µsec: compute Key = current time - N, find the first newer entry with ets:next(Table, Key), and walk forward to the last message. Finally, discard everything via ets:delete_all_objects(Table). If you had to add information such as a node name, ets:next/2 still works: for example, if the keys are {TimeStamp :: integer(), Node :: atom()}, you can start the walk from {Time, 0}, since a number compares smaller than any atom. A sketch of such a server follows this list.
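A minimal gen_server sketch of this design (the module name, interval, and empty aggregation step are assumptions; it uses erlang:system_time/1 plus a unique integer for key uniqueness, in place of the now-deprecated erlang:now/0):

    -module(metrics_buffer).
    -behaviour(gen_server).

    -export([start_link/0, store/1]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

    -define(INTERVAL_MS, 60000).    %% process the table once a minute
    -define(MAX_AGE_US, 60000000).  %% aggregate the last minute of data

    start_link() ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    %% Clients timestamp the data themselves, so time spent waiting in
    %% the server's message queue does not skew the timestamps.
    store(Data) ->
        Key = {erlang:system_time(microsecond),
               erlang:unique_integer([positive, monotonic])},
        gen_server:cast(?MODULE, {store, Key, Data}).

    init([]) ->
        ets:new(?MODULE, [named_table, ordered_set, protected]),
        timer:send_interval(?INTERVAL_MS, tick),
        {ok, nostate}.

    handle_cast({store, Key, Data}, State) ->
        ets:insert(?MODULE, {Key, Data}),
        {noreply, State}.

    handle_info(tick, State) ->
        Cutoff = erlang:system_time(microsecond) - ?MAX_AGE_US,
        %% 0 sorts before any positive unique integer, so {Cutoff, 0}
        %% is a safe starting point for the walk.
        aggregate(ets:next(?MODULE, {Cutoff, 0})),
        ets:delete_all_objects(?MODULE),
        {noreply, State}.

    handle_call(_Request, _From, State) ->
        {reply, ok, State}.

    aggregate('$end_of_table') ->
        ok;
    aggregate(Key) ->
        [{_Key, _Data}] = ets:lookup(?MODULE, Key),
        %% ... fold _Data into the running aggregate here ...
        aggregate(ets:next(?MODULE, Key)).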
Liturgist answered 31/5, 2015 at 21:18 Comment(0)
