I'm using MongoDB with Mongoskin. In a collection I'm saving events. Among other fields, these events have a start and an end, saved as Dates in MongoDB.
```
events {
    start: "Date1",
    end: "Date2",
    ...
}
```
When inserting new documents into this collection I need a constraint that forbids inserting a document whose start-end dates overlap an event already created. In short, I don't want any two events to share the same time span.
Question: Is there a way to handle this constraint through MongoDB with some kind of unique index? I think not, but please correct me if I'm wrong!
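For reference, two events overlap exactly when each one starts before (or at the moment) the other ends. A minimal sketch of that predicate (the function name `overlaps` is my own, not from any library):

```javascript
// Two {start, end} intervals overlap when each starts before the other ends.
// With inclusive comparisons, an event ending at 12:00 and one starting at
// 12:00 also count as overlapping.
function overlaps(a, b) {
  return a.start <= b.end && b.start <= a.end;
}
```

This is the condition any overlap check, in code or in a query, has to enforce.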
If not:
Question: Do I have to check for possible overlaps through code before inserting new events? Do I need to set up some kind of write lock, so that another user can't squeeze in an event between the time I check for overlaps and the time I insert my own event? How is this done in MongoDB?
EDIT
This is the best way I have come up with so far, and it actually seems to work pretty well.
```javascript
var input = getPostInput();
var query = {$and: [
    {start: {$lte: input.end}},
    {end: {$gte: input.start}}
]};
db.events.findAndModify(query, {}, {$setOnInsert: input}, {new: true, upsert: true}, callback);
```
It uses `findAndModify` as a kind of "findOrCreate" operator. `$setOnInsert` adds the POST input properties only if `findAndModify` doesn't find a matching document, and `upsert: true` says it should create a document if none is found. These two options in combination seem to create a findOrCreate operator.
EDIT
Problems arise when updating (PUT) an event. I can't reuse the code above because it relies on `upsert` and `$setOnInsert`.
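For the update case, the best I can think of is a non-atomic two-step: count the overlapping events while excluding the event being updated, and apply the update only if the count is zero. `buildOverlapGuard` is a name I made up, and the race window between the count and the update remains, so this is a sketch rather than an atomic solution:

```javascript
// Build a query matching every *other* event whose span overlaps [start, end].
function buildOverlapGuard(eventId, start, end) {
  return {
    _id: {$ne: eventId},   // ignore the event we are updating
    start: {$lte: end},    // the other event starts before we end...
    end: {$gte: start}     // ...and ends after we start
  };
}

// Usage sketch (another writer can still sneak in between count and update):
// db.events.count(buildOverlapGuard(id, input.start, input.end), function (err, n) {
//   if (err) return callback(err);
//   if (n > 0) return callback(new Error('time span already taken'));
//   db.events.update({_id: id}, {$set: input}, callback);
// });
```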
EDIT
@wdberkeley:
I'm still struggling with this main problem: ensuring uniqueness on a range. The more I think about it, the more it seems that the "array of time slices" might be the least problematic solution. For example, let's say that 5 minutes is chosen as the smallest time period, and the average booking is 45 minutes. This would require me to save 9 numbers (probably Dates): `timespan = [0,5,10,15,20,25,30,35,40]`, instead of two: `start=0, end=45`.
That is more than four times as much saved data for the average booking.
I don't mean to be harsh, but don't you see this as a problem? Or does it only become a problem once the saved data is 10 times or 100 times larger? I do realise that this is also relative to the total number of bookings actually made...
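To make the overhead concrete, here is the arithmetic from the example above as a small helper (`sliceCount` is my own name): a booking spanning `end - start` minutes at a given granularity stores one array entry per slice, versus just two values for start/end.

```javascript
// Number of slice timestamps needed to cover [startMin, endMin) minutes
// at the given granularity: one entry per slice start.
function sliceCount(startMin, endMin, sliceMin) {
  return Math.ceil((endMin - startMin) / sliceMin);
}

// The 45-minute average booking at 5-minute granularity:
// 9 slice entries versus 2 numbers (start/end), i.e. 4.5x the stored values.
```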