How to save batch of data in Parse Cloud Code?
In my cloud code, I would like to update all of my records (around 50k) with new data. But I noticed that my job fails even though I stay within the 1000-record limit: I get a "success/error was not called" error for this job. Any idea how I can resolve this?

Parse.Cloud.job("hello", function(request, response) {
    Parse.Cloud.useMasterKey();
    var results = [];
    var limit = 1000;

    var saveUpdatedQueries = function(queries) {
        console.log("updating records " + queries.length);

        Parse.Object.saveAll(queries, {
            success: function(lists) {
                console.log("lists ok " + lists.length);

                if (!results.length) {
                    response.success("finished");
                    return;
                }

                updatingRecords(lists.length);
            },
            error: function(reason) {
                console.log("error");
            }
        });
    };

    var updatingRecords = function(skip) {
        var tempRecords = [];

        if (skip) {
            results = results.slice(skip);
        }

        console.log("skip: " + skip + " Results length: " + results.length);

        for (var i = 0; i < results.length; i++) {
            var today = new Date();
            var newObject = results[i];
            newObject.set('newCulumn', today);
            tempRecords.push(newObject);

            if (i === results.length - 1 || tempRecords.length === limit) {
                break;
            }
        }

        saveUpdatedQueries(tempRecords);
    };

    var processCallback = function(res) {
        results = results.concat(res);
        if (res.length === limit) {
            process(res[res.length - 1].id);
            return;
        }

        updatingRecords(0);
    };

    var process = function(skip) {
        var query = new Parse.Query(Parse.Installation);

        if (skip) {
            query.greaterThan("objectId", skip);
        }

        query.limit(limit);
        query.ascending("objectId");
        query.find().then(function querySuccess(res) {
            processCallback(res);
        }, function queryFailed(reason) {
            if (reason.code == 155 || reason.code == 141) { // exceeded Parse timeout
                console.log("time out error");
                process(skip);
            } else {
                response.error("query unsuccessful, length of result " + results.length + ", error: " + reason.code + " " + reason.message);
            }
        });
    };

    process(false);
});
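For comparison, here is a promise-chained sketch of the same batch loop in plain JavaScript. The `saveInBatches` helper is hypothetical, and `saveAll` is a stub standing in for `Parse.Object.saveAll`; this only illustrates running one request at a time instead of nesting success/error callbacks.

```javascript
// Sketch: "update, then save in batches of `limit`" with promise chaining.
// `saveAll` is a placeholder for Parse.Object.saveAll (assumption, not the
// real SDK); it must return a promise.
function saveInBatches(records, saveAll, limit) {
    var batches = [];
    for (var i = 0; i < records.length; i += limit) {
        batches.push(records.slice(i, i + limit));
    }
    // Chain one saveAll per batch so only one request is in flight at a time.
    return batches.reduce(function (chain, batch) {
        return chain.then(function (count) {
            return saveAll(batch).then(function () {
                return count + batch.length;
            });
        });
    }, Promise.resolve(0)); // resolves with the total number of records saved
}
```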
Trent answered 15/12, 2015 at 11:47 Comment(11)
Why are you not using Promises?Mohsen
@MoNazemi I tried with saveAll promises, but I still get the same result.Trent
How does it fail? Does it time out? A job will be cut off after 15 seconds...Waxbill
It usually stops after about 3 minutes and shows "success/error was not called". When I look at the data in the cloud, only 1750 records were updated. And I noticed the request rate already exceeds 30 calls per second.Trent
You will hit the free plan limit if you make more than 1800 requests per minute.Mohsen
I meant 15 minutes, of course. Not seconds. You will probably need to rethink your logic. Why do you need to update 50K records?Waxbill
@LonelyPenguin Every time I send a push notification, I need to update an attribute in each row.Trent
That sounds like a candidate for a redesigned model... Maybe if you describe your use case, we might offer you ideas on how you can do this differently. Your solution does not sound scalable.Waxbill
@LonelyPenguin So I'm trying to send a reminder notification to users who have not used the app for more than, let's say, 3 days. This background job needs to run every day. To avoid sending the same message to users I already messaged yesterday, I need to update an attribute on their records. Thus I came up with this idea of updating their records and then sending the push notification.Trent
It's good practice to embed retention pushes (e.g. after 1/3/7 days of inactivity) inside your application, rather than using Parse. Your app knows exactly when it was last launched, and you avoid the situation where you send a notification to a user who has launched your app between processing time and push-receive time. For Android you can use AlarmManager; iOS and Windows should support something similar too.Ephialtes
@Ephialtes The reason I want this done in cloud code is that I might want to change the reminder interval from, say, 3 days to 5 days, and I might want to send different reminder messages to users.Trent

Basically, in a cloud architecture the request timeout is around 60 seconds, but you are trying to insert thousands of records in one transaction. That takes more than 60 seconds, which is why your request always fails.

There are better ways to insert a large number of records:

  1. Task queues
  2. Cron or scheduled tasks

I think a task queue is the better fit for your problem. Watch this video to get a good overview of task queues:

Task queue & cron jobs
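As an illustration, here is a minimal in-process sketch of the task-queue idea in plain JavaScript. The queue itself is hypothetical (not a Parse API), and `saveBatch` is a placeholder for whatever actually persists a batch, e.g. `Parse.Object.saveAll`; the point is that each batch is saved only after the previous one finishes.

```javascript
// Minimal batch queue sketch (assumption: `saveBatch` returns a promise).
function makeBatchQueue(saveBatch, batchSize) {
    var pending = [];
    var running = false;

    function drain() {
        if (running || pending.length === 0) return Promise.resolve();
        running = true;
        var batch = pending.splice(0, batchSize); // take the next batch
        return saveBatch(batch).then(function () {
            running = false;
            return drain(); // process the next batch only after this one saved
        });
    }

    return {
        // Enqueue records; resolves once everything queued so far is saved.
        push: function (records) {
            pending = pending.concat(records);
            return drain();
        }
    };
}
```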

Animation answered 19/11, 2017 at 5:16 Comment(0)

Workaround: You could schedule a cron job that works in batches of an acceptably low number of records, limited by your hosting service's rate limit. For example, if you can only process 10 requests per minute, you would first request all the IDs that need to be updated, then split them into chunks that the server will accept and process within the time limit. It's just a workaround.
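The splitting step can be sketched as follows; the helper name `chunkIds` and the chunk size of 10 are illustrative, echoing the "10 requests every minute" example, not part of any Parse API.

```javascript
// Split a list of object IDs into fixed-size chunks so each cron run only
// touches what the rate limit allows.
function chunkIds(ids, chunkSize) {
    var chunks = [];
    for (var i = 0; i < ids.length; i += chunkSize) {
        chunks.push(ids.slice(i, i + chunkSize));
    }
    return chunks;
}
```

Each scheduled run would then pop one chunk and update only those records.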

Long-Term: A better solution would be to design your app to request as little data as possible from the server, rather than forcing the server to do all the heavy lifting. This also allows your business logic to be exposed through a convenient public API, rather than sitting as a hidden process on your server.

Howitzer answered 28/8, 2017 at 20:14 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.