Mike Ash Singleton: Placing @synchronized

I came across this in Mike Ash's "Care and Feeding of Singletons" and was a little puzzled by his comment:

This code is kind of slow, though. Taking a lock is somewhat expensive. Making it more painful is the fact that the vast majority of the time, the lock is pointless. The lock is only needed when foo is nil, which basically only happens once. After the singleton is initialized, the need for the lock is gone, but the lock itself remains.

+(id)sharedFoo {
    static Foo *foo = nil;
    @synchronized([Foo class]) {
        if(!foo) foo = [[self alloc] init];
    }
    return foo;
}

My question is (and there is no doubt a good reason for this): why can't you write the following, to limit the lock to the case when foo is nil?

+(id)sharedFoo {
    static Foo *foo = nil;
    if(!foo) {
        @synchronized([Foo class]) {
            foo = [[self alloc] init];
        }
    }
    return foo;
}

cheers gary

Sortition answered 10/3, 2010 at 16:53 Comment(4)
Ah ok, so basically you need a check inside the @synchronized block?Sortition
That’s the whole point of the @synchronized: Allow one thread at a time to make the check.Yestreen
Try dispatch_once() instead: https://mcmap.net/q/92999/-create-singleton-using-gcd-39-s-dispatch_once-in-objective-c/290295Kierkegaardian
Here are some benchmarks comparing @synchronized "singleton" implementations vs. dispatch_once() (dispatch_once wins handily) bjhomer.blogspot.com/2011/09/synchronized-vs-dispatchonce.htmlKierkegaardian

Because then the test is subject to a race condition. Two different threads might independently test that foo is nil, and then (sequentially) create separate instances. This can happen in your modified version when one thread performs the test while the other is still inside +[Foo alloc] or -[Foo init], but has not yet set foo.

By the way, I wouldn't do it that way at all. Check out the dispatch_once() function, which lets you guarantee that a block is only ever executed once during your app's lifetime (assuming you have GCD on the platform you're targeting).
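
Here is a minimal sketch of the dispatch_once() approach recommended above (it reuses the Foo/sharedFoo names from the question as an illustration; it is not code from the original answer):

+(id)sharedFoo {
    static Foo *foo = nil;
    static dispatch_once_t onceToken;
    // dispatch_once runs the block at most once per app lifetime
    // and handles the memory-visibility issues for us.
    dispatch_once(&onceToken, ^{
        foo = [[self alloc] init];
    });
    return foo;
}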

Garnishee answered 10/3, 2010 at 17:1 Comment(7)
That’s of course true. But wouldn’t the best solution be to test twice (inside and outside of the @synchronized)? Then there would be no race condition and no performance penalty.Yestreen
@Nikolai: tell me there's a performance penalty after you've run Shark. :-)Garnishee
@Graham: There’s no doubt that performance is bad in the original version that always takes the lock. I had it in my code and I did run Shark ;). Also, Mike Ash pointed it out in his original blog post.Yestreen
Hi Graham, at the moment I am developing for the iPhone, so I really just wanted to understand what was happening.Sortition
@Nikolai: Mike doesn't always run Shark. But if performance is bad for you, how frequently are you accessing this singleton? Must be thousands of times a second or more to get any noticeable cost.Garnishee
@Graham: In my case I searched a database of 800,000 records. In the hot loop, I accessed the database using the singleton pattern. Searching was twice as fast after moving the [MyDB sharedDB] out of the loop. I came to the conclusion that synchronization is really slow (at least on the ARM platform I was using) and I looked for alternatives. Mike Ash (and @mfazekas below) explained why double-checked locking is not a good idea.Yestreen
developer.apple.com/library/mac/#documentation/Darwin/Reference/…Fang

This is called the double-checked locking "optimization". As documented everywhere, it is not safe. Even if it's not defeated by a compiler optimization, it will be defeated by the way memory works on modern machines, unless you use some kind of fence/barrier.

Mike Ash also shows the correct solution using volatile and OSMemoryBarrier().

The issue is that when one thread executes foo = [[self alloc] init]; there is no guarantee that, when another thread sees foo != 0, all memory writes performed by init are visible too.

Also see DCL and C++ and DCL and Java for more details.
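
For reference, the barrier-based fix referred to above looks roughly like the sketch below. This is an approximation of the volatile + OSMemoryBarrier() variant from Mike Ash's post, not a verbatim copy; strictly speaking, a matching read-side barrier may also be needed on weakly ordered CPUs, and OSMemoryBarrier() has since been deprecated in favor of C11 atomic fences.

#import <libkern/OSAtomic.h>   // OSMemoryBarrier()

+(id)sharedFoo {
    static Foo * volatile foo = nil;
    if(!foo) {
        @synchronized([Foo class]) {
            if(!foo) {
                Foo *temp = [[self alloc] init];
                // Make sure every write performed inside init is visible
                // to other CPUs before the pointer itself is published.
                OSMemoryBarrier();
                foo = temp;
            }
        }
    }
    return foo;
}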

Tractarianism answered 11/3, 2010 at 8:34 Comment(2)
+1 Thanks for making this clear. Instruction reordering and out-of-order memory access are both concepts that most programmers are not aware of.Yestreen
dispatch_once is the real solution, just use that and quit hackingFang

In your version, the check for !foo could occur on multiple threads at the same time, allowing both to enter the alloc block: one waits for the other to finish, then allocates another instance itself.

Firelock answered 10/3, 2010 at 17:4 Comment(0)

You can optimize by only taking the lock if foo==nil, but after that you need to test again (within the @synchronized) to guard against race conditions.

+ (id)sharedFoo {
    static Foo *foo = nil;
    if(!foo) {
        @synchronized([Foo class]) {
            if (!foo)  // test again, in case 2 threads doing this at once
                foo = [[self alloc] init];
        }
    }
    return foo;
}
Oquendo answered 10/3, 2010 at 17:4 Comment(0)

Best way if you have Grand Central Dispatch:

+ (MySingleton *)instance {
    static dispatch_once_t _singletonPredicate;
    static MySingleton *_singleton = nil;

    // Executed at most once for the lifetime of the app.
    dispatch_once(&_singletonPredicate, ^{
        _singleton = [[super allocWithZone:nil] init];
    });

    return _singleton;
}

+ (id)allocWithZone:(NSZone *)zone {
    // Redirect plain alloc/init calls to the shared instance.
    return [self instance];
}
Reincarnation answered 7/3, 2014 at 11:21 Comment(0)
