Equivalent of GCD serial dispatch queue in iOS 3.x
Apple's Grand Central Dispatch (GCD) is great, but only works on iOS 4.0 or greater. Apple's documentation says, "[A] serialized operation queue does not offer quite the same behavior as a serial dispatch queue in Grand Central Dispatch does" (because the queue is not FIFO, but order is determined by dependencies and priorities).

What is the right way to achieve the same effect as GCD's serial dispatch queues while supporting OS versions before GCD was released? Or put another way, what is the recommended way to handle simple background processing (doing web service requests, etc.) in iOS apps that want to support versions less than 4.0?

Groth answered 27/5, 2011 at 22:50 Comment(0)
Seems like people are going to a lot of effort to rewrite NSRunLoop. Per the NSRunLoop documentation:

Your application cannot either create or explicitly manage NSRunLoop objects. Each NSThread object, including the application’s main thread, has an NSRunLoop object automatically created for it as needed.

So surely the trivial answer would be to create a usable queue:

- (void)startRunLoop:(id)someObject
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Attach a dummy port so the run loop has an input source;
    // without one, -run returns immediately and the thread exits.
    [[NSRunLoop currentRunLoop] addPort:[NSMachPort port]
                                forMode:NSDefaultRunLoopMode];
    [[NSRunLoop currentRunLoop] run];

    [pool release];
}

...

NSThread *serialDispatchThread = [[NSThread alloc] 
                   initWithTarget:self 
                   selector:@selector(startRunLoop:) 
                   object:nil];
[serialDispatchThread start];

To add a task to the queue:

[object
    performSelector:@selector(whatever:) 
    onThread:serialDispatchThread
    withObject:someArgument
    waitUntilDone:NO];

Per the Threading Programming Guide section on Run Loops:

Cocoa defines a custom input source that allows you to perform a selector on any thread. ... perform selector requests are serialized on the target thread, alleviating many of the synchronization problems that might occur with multiple methods being run on one thread.

So you've got an explicitly serial queue. Of course, mine isn't fantastically written because I've told the run loop to run forever, and you may prefer one you can terminate later, but those are easy modifications to make.
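One hedged sketch of that modification, for the record: drive the run loop one pass at a time and re-check a flag between passes. The `shouldKeepRunning` BOOL ivar and the method name are invented for illustration, not framework API.

```objc
// Sketch: a stoppable variant of startRunLoop:. Set shouldKeepRunning
// to NO (on this thread, e.g. via performSelector:onThread:) to let
// the loop end after the current pass.
- (void)startStoppableRunLoop:(id)someObject
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    shouldKeepRunning = YES;
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    // Dummy port keeps the run loop alive even when no work is queued.
    [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
    while (shouldKeepRunning &&
           [runLoop runMode:NSDefaultRunLoopMode
                 beforeDate:[NSDate distantFuture]])
        ;  // each pass handles one source, then re-checks the flag
    [pool release];
}
```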

Millan answered 5/7, 2011 at 21:25 Comment(0)
How about this PseudoSerialQueue? It is a minimal implementation of a GCD-style serial dispatch queue.

#import <Foundation/Foundation.h>

@interface PseudoTask : NSObject
{
    id target_;
    SEL selector_;
    id queue_;
}

@property (nonatomic, readonly) id target;

- (id)initWithTarget:(id)target selector:(SEL)selector queue:(id)queue;
- (void)exec;
@end

@implementation PseudoTask

@synthesize target=target_;

- (id)initWithTarget:(id)target selector:(SEL)selector queue:(id)queue;
{
    self = [super init];
    if (self) {
        target_ = [target retain];
        selector_ = selector;
        queue_ = [queue retain];
    }
    return self;
}

- (void)exec
{
    [target_ performSelector:selector_];
}

- (void)dealloc
{
    [target_ release];
    [queue_ release];
    [super dealloc];
}
@end

@interface PseudoSerialQueue : NSObject
{
    NSCondition *condition_;
    NSMutableArray *array_;
    NSThread *thread_;
}
- (void)addTask:(id)target selector:(SEL)selector;
@end

@implementation PseudoSerialQueue
- (id)init
{
    self = [super init];
    if (self) {
        array_ = [[NSMutableArray alloc] init];
        condition_ = [[NSCondition alloc] init];
        thread_ = [[NSThread alloc]
            initWithTarget:self selector:@selector(execQueue) object:nil];
        [thread_ start];
    }
    return self;
}

- (void)addTask:(id)target selector:(SEL)selector
{
    [condition_ lock];
    PseudoTask *task = [[PseudoTask alloc]
        initWithTarget:target selector:selector queue:self];
    [array_ addObject:task];
    [condition_ signal];
    [condition_ unlock];
}

- (void)quit
{
    [self addTask:nil selector:nil];
}

- (void)execQueue
{
    for (;;) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        [condition_ lock];
        while (array_.count == 0)
            [condition_ wait];
        PseudoTask *task = [array_ objectAtIndex:0];
        [array_ removeObjectAtIndex:0];
        [condition_ unlock];

        if (!task.target) {
            [task release]; // don't leak the sentinel quit task
            [pool drain];
            break;
        }

        [task exec];
        [task release];

        [pool drain];
    }
}

- (void)dealloc
{
    [array_ release];
    [condition_ release];
    [thread_ release];
    [super dealloc];
}
@end

How to use:

PseudoSerialQueue *q = [[[PseudoSerialQueue alloc] init] autorelease];
[q addTask:self selector:@selector(test0)];
[q addTask:self selector:@selector(test1)];
[q addTask:self selector:@selector(test2)];
[q quit];
Nollie answered 6/6, 2011 at 21:43 Comment(1)
Sort of complicated, and goes all the way down to the NSThread level, but looks like it would work (haven't tried it out). It still seems like there should be a less complex way to do this, though...Groth
You can simulate it using NSOperationQueue by setting the maximum concurrent operation count to one.

EDIT

Oops, should have read more carefully. A FIFO solution follows.

I can't think of an approach that the majority of iOS devs would use in your situation.

I'm not afraid of writing threaded programs, so here is one solution:

  • create a FIFO worker queue that:
    • supports locking
    • holds one NSOperationQueue
    • holds an NSOperation subclass, designed to pull workers from the FIFO queue in its implementation of main (only one may exist at a time)
    • holds an NSArray of workers to be run (defining a worker is up to you: an NSInvocation, a class, an operation, ...)

The NSOperation subclass pulls workers from the FIFO worker queue until the queue is exhausted.

When the FIFO worker queue has workers and no active child operation, it creates a child operation and adds it to its operation queue.

There are a few pitfalls if you aren't comfortable writing threaded programs, so this solution is not ideal for everybody, but it would not take long to write if you are already comfortable with the technologies required.

Good luck.
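As a hedged sketch, the design above might look something like this. The class and method names (`FIFOWorkerQueue`, `drainWorkers`) are invented for illustration, and NSInvocation is chosen here as the worker type; it is not the only option the answer allows for.

```objc
// Sketch only: a FIFO worker queue drained by at most one child
// operation at a time, per the design described above.
@interface FIFOWorkerQueue : NSObject
{
    NSMutableArray   *workers_;   // pending NSInvocation workers, FIFO order
    NSOperationQueue *opQueue_;   // hosts at most one child operation
    BOOL              draining_;  // YES while a child operation exists
}
- (void)addWorker:(NSInvocation *)invocation;
@end

@implementation FIFOWorkerQueue
- (id)init
{
    self = [super init];
    if (self) {
        workers_ = [[NSMutableArray alloc] init];
        opQueue_ = [[NSOperationQueue alloc] init];
        [opQueue_ setMaxConcurrentOperationCount:1];
    }
    return self;
}

- (void)addWorker:(NSInvocation *)invocation
{
    @synchronized (self) {
        [workers_ addObject:invocation];
        if (!draining_) {
            draining_ = YES;
            // Spawn one child operation to drain the queue, then exit.
            NSInvocationOperation *op = [[NSInvocationOperation alloc]
                initWithTarget:self
                      selector:@selector(drainWorkers)
                        object:nil];
            [opQueue_ addOperation:op];
            [op release];
        }
    }
}

- (void)drainWorkers
{
    for (;;) {
        NSInvocation *next;
        @synchronized (self) {
            if ([workers_ count] == 0) { draining_ = NO; return; }
            next = [[[workers_ objectAtIndex:0] retain] autorelease];
            [workers_ removeObjectAtIndex:0];
        }
        [next invoke];  // run outside the lock, strictly in FIFO order
    }
}

- (void)dealloc
{
    [workers_ release];
    [opQueue_ release];
    [super dealloc];
}
@end
```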

Wigan answered 27/5, 2011 at 22:57 Comment(5)
The sentence I quoted is in the paragraph explaining how you can "serialize" a NSOperationQueue by setting the max concurrent operations to 1. But it seems to say, in contrast to a serial dispatch queue, there's no guarantee tasks will be executed first in, first out.Groth
@Groth you could make a subclass of NSOperationQueue that guarantees that none of its added operations have any dependencies... and really, unless you're explicitly setting them yourself, you are guaranteeing it'll be FIFO.Epimenides
Is it guaranteed to be FIFO if I don't have any dependencies/priorities? I don't see that in the documentation.Groth
yes - but you have to design it that way. clients add workers to the fifo worker queue. the fifo worker queue then vends the workers to the NSOperation subclass until it has no workers to vend. at that time, the NSOperation subclass tells the fifo worker queue that it is exiting. the fifo worker queue ensures that only one (or zero) NSOperation subclasses are running at any given time. design the workers so that they have no support for dependencies or priority. this is free if you write your own worker or use NSInvocation. if you choose NSOperations for (cont)Wigan
(cont) as your workers, then they may be require a priority or dependency. note that the NSOperation subclass which pulls workers from the fifo worker queue is not a worker. using NSInvocation as the worker, the NSOperation subclass says "fifo worker queue, give me the next NSInvocation to run". you can also significantly reduce the complexity of this design (200 lines?) if you know that you only need to create an array of workers which need to be executed in order, rather than relying on a central worker queue that is designed for insertion at arbitrary points in time.Wigan
There are things the NSOperationQueue documentation doesn't mention, which make such an implementation seem trivial when in fact it's not.

Setting the maximum concurrent operation count to 1 is guaranteed to be serial only if the NSOperations are added to the queue from the same thread.

I'm using another option because it just works.

Add NSOperations from different threads, but use NSCondition to manage the queuing. startOperations can (and should; you don't want to block the main thread with locks) be called with performSelectorInBackground:withObject:.

startOperations method represents single job that consists of one or more NSOperations.

- (void)startOperations
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    [[AppDelegate condition] lock];

    // Wait until the queue is empty before starting this job.
    while ([[[AppDelegate queue] operations] count] > 0)
    {
        [[AppDelegate condition] wait];
    }

    NSOperation *newOperation = ...; // alloc/init your first operation
    [[AppDelegate queue] addOperation:newOperation];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    NSOperation *newOperation1 = ...; // alloc/init the next operation
    [[AppDelegate queue] addOperation:newOperation1];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    NSOperation *newOperation2 = ...; // alloc/init the next operation
    [[AppDelegate queue] addOperation:newOperation2];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    // Add whatever number of operations you need for this single job.

    [[AppDelegate condition] signal]; // signal/unlock the condition, not the queue
    [[AppDelegate condition] unlock];

    [NotifyDelegate orWhatever];

    [pool drain];
}

That's it!

Nick answered 3/6, 2011 at 4:56 Comment(2)
Looks good if all the operations were being added at one time, but I'm thinking of a case where operations are arbitrarily being added at different times from different places.Groth
It doesn't matter, 1 or 10 operations. Three operations do different things on the same object because fragmenting tasks isn't logical for this specific example I pulled the code out. If you need to perform one specific task per object, go on...Nick
If the processing is in the background anyway, do you really need it to be strictly in-order? If you do, you can achieve the same effect simply by setting up your dependencies so 1 depends on 0, 2 on 1, 3 on 2, etc. The operation queue is then forced to handle them in order. Set the maximum concurrent operation count to 1, and the queue is also guaranteed to be serial.
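A hedged sketch of that chaining scheme, which also handles operations added at arbitrary later times: remember the last operation enqueued and make each new one depend on it. The method name `enqueueInOrder:onQueue:` and the `lastOperation_` ivar are invented for illustration.

```objc
// Sketch: force serial, in-order execution by chaining dependencies.
// Works even when operations arrive at different times, because each
// new operation depends on whatever was enqueued last.
- (void)enqueueInOrder:(NSOperation *)op onQueue:(NSOperationQueue *)queue
{
    if (lastOperation_) {
        [op addDependency:lastOperation_];  // op N waits for op N-1
        [lastOperation_ release];
    }
    lastOperation_ = [op retain];           // new tail of the chain
    [queue addOperation:op];
}
```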

Are answered 2/6, 2011 at 20:4 Comment(1)
I can think of cases where it would need to be strictly in order. Setting dependencies like that would work if all the tasks were available and queued up at the same time, but I'm thinking about arbitrarily adding them down the road.Groth

© 2022 - 2024 — McMap. All rights reserved.