How to perform WorkManager tasks in sequence (when you don't have all the work at the same time)
I want to upload some objects to a server. I'm using WorkManager with unique work to avoid uploading duplicate objects. The work has a network constraint so that it only runs when there is internet connectivity. The problem is that I want these objects to upload one at a time, but all the work happens at once.

I know that I can use beginWith and WorkContinuations to perform work in sequence, but unfortunately the objects can be created at different times, so I don't have access to all the work when the first work request is created.

val workRequest = OneTimeWorkRequestBuilder<UploadWorker>()
            .setConstraints(networkConstraint)
            .build()
WorkManager.getInstance()
            .enqueueUniqueWork(uniqueName, ExistingWorkPolicy.KEEP, workRequest)

I assumed enqueue meant that all the work would happen one at a time, like a queue. Is there a way to make it work that way?

Foreignborn answered 14/6, 2019 at 19:12 Comment(0)
You can use WorkManager's unique work construct to enqueue work so that only one instance of it exists at a time. The unique work is identified by its uniqueName.

In your case you should use a different ExistingWorkPolicy, as explained in the documentation: ExistingWorkPolicy.APPEND. This ensures that your new request is appended after any other requests that use the same uniqueName (this actually creates a chain of work).

val workRequest = OneTimeWorkRequestBuilder<UploadWorker>()
            .setConstraints(networkConstraint)
            .build()
WorkManager.getInstance()
            .enqueueUniqueWork(uniqueName, ExistingWorkPolicy.APPEND, workRequest)
Igneous answered 23/6, 2019 at 16:59 Comment(3)
The problem with this solution is that if your work fails or is cancelled, then all the appended work is cancelled too. That's not what one would usually expect from a queue of (independent) tasks. Polygamist
That's a good point. The newer ExistingWorkPolicy.APPEND_OR_REPLACE improves the situation a bit, since work enqueued after a failure creates a new chain, but it doesn't solve the problem for work already enqueued at the time of the failure. Maybe the best solution is for the app to keep track of the work itself, scheduling each upload at the end of the previous one. Igneous
Yep, having our own queue of some sort is probably the best we can do for now. It would be great if Google added a ready-to-use solution for this in a future release of the framework, though. Polygamist
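A minimal sketch of the APPEND_OR_REPLACE variant mentioned above (available since WorkManager 2.4.0), assuming the same UploadWorker, uniqueName, and networkConstraint from the question, and that a context is in scope:

val workRequest = OneTimeWorkRequestBuilder<UploadWorker>()
            .setConstraints(networkConstraint)
            .build()
// APPEND_OR_REPLACE appends to the existing chain like APPEND, but starts
// a fresh chain if the previous one failed or was cancelled, so one failed
// upload does not block uploads enqueued afterwards.
WorkManager.getInstance(context)
            .enqueueUniqueWork(uniqueName, ExistingWorkPolicy.APPEND_OR_REPLACE, workRequest)

Note that requests already sitting in a failed chain are still cancelled; only work enqueued after the failure gets a new chain.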
