How to implement a readonly property

  • I have a property:

    @property (readonly)  NSDictionary *someDictionary;

    This property should be computed on demand, and should be accessible by several threads.

    My current implementation is:

    - (NSDictionary *)someDictionary;
    {
        static NSDictionary *someDictionary;
        static dispatch_once_t justOnce;
        dispatch_once( &justOnce, ^
        {
            // create a temp dictionary (might take some time)
            someDictionary = temp;
        } );

        return someDictionary;
    }

    The first thread which needs someDictionary will trigger its creation. Ok.

    But what happens when another thread wants to access someDictionary while it is still being created? I guess it will just receive nil.
    That would not be correct; it really should wait until the dictionary is ready.

    How to achieve this? Use a lock? Use @synchronized?

    10.8.2 with ARC.

    Gerriet.
  • Looking at the docs, dispatch_once takes care of the synchronization for you:

    https://developer.apple.com/library/mac/ipad/#documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html


    It should therefore be thread safe to use without any additional synchronization code.

    Sent from my New iPad. I blame all typos on the Fruit Company. May have been dictated.

  • On 12 Nov 2012, at 12:56, "Gerriet M. Denkmann" <gerriet...> wrote:

    > - (NSDictionary *)someDictionary;
    > {
    > static NSDictionary *someDictionary;
    > static dispatch_once_t justOnce;
    > dispatch_once( &justOnce, ^
    > {
    > // create a temp dictionary (might take some time)
    > someDictionary = temp;
    > }
    > );
    >
    > return someDictionary;
    > }

    This is completely the wrong way to implement a property.  The static variable will be shared between all instances.  Here's how you should be doing a lazy loaded var:

    @implementation MyClass
    {
        NSDictionary *_someDictionary;
    }

    - (NSDictionary *)someDictionary
    {
        static dispatch_once_t justOnce;
        dispatch_once(&justOnce, ^
            {
                _someDictionary = [[NSDictionary alloc] initWithObjectsAndKeys: …… nil];
            });
        return _someDictionary;
    }

    In answer to your threading question.  If multiple threads ask for the dictionary at once, the second one to hit the dispatch_once will block until the first one has finished the dispatch_once block, and then continue to execute (without touching the contents).  Thus, both threads will receive the same dictionary (assuming it's the same instance it's called on), and it will be allocated only once.

    Tom Davie
    > This is completely the wrong way to implement a property.  The static variable will be shared between all instances.  Here's how you should be doing a lazy loaded var:
    >
    > @implementation MyClass
    > {
    >     NSDictionary *_someDictionary;
    > }
    >
    > - (NSDictionary *)someDictionary
    > {
    >     static dispatch_once_t justOnce;
    >     dispatch_once(&justOnce, ^
    >         {
    >             _someDictionary = [[NSDictionary alloc] initWithObjectsAndKeys: …… nil];
    >         });
    >     return _someDictionary;
    > }

    I don't think this does what you think it does; my understanding is that dispatch_once will execute only once for the lifetime of an app, so the code you posted will only run once, for the first object that requires it, and then never run again, resulting in _someDictionary being nil in all other instances.

    I understood the OP's request as wanting to implement a singleton, which, based on your reading, may not be the case. dispatch_once will be fine for a singleton, but if you need a thread-safe, lazily-instantiated read-only property, maybe something like this will do the trick:

    @implementation MyClass {
      NSDictionary *_someDictionary;
    }

    - (NSDictionary *)someDictionary {
      @synchronized(self) {
        if (!_someDictionary) {
          _someDictionary = [[NSDictionary alloc] initWith… ];
        }
      }

      return _someDictionary;
    }
  • On 12 Nov 2012, at 13:39, Marco Tabini <mtabini...> wrote:

    > I don't think this does what you think it does; my understanding is that dispatch_once will execute only once for the lifetime of an app, so the code you posted will only run once, for the first object that requires it, and then never run again, resulting in _someDictionary being nil in all other instances.

    Very good point!  My bad.

    > I understood the OP's request as wanting to implement a singleton, which, based on your reading, may not be the case. dispatch_once will be fine for a singleton, but if you need a thread-safe, lazily-instantiated read-only property, maybe something like this will do the trick:

    I'm pretty sure he doesn't want a singleton, as he was talking about a property.  And yeh, sorry about the misinformation there!

  • You can use dispatch_sync. The blog post by Oliver Drobnik (Cocoanetics) sums that up quite nicely:
    http://www.cocoanetics.com/2012/02/threadsafe-lazy-property-initialization/

    Cheers, Jörg
  • On 12 Nov 2012, at 14:18, Joerg Simon <j_simon...> wrote:

    > You can use dispatch_sync. The blog post by Oliver Drobnik (Cocoanetics) sums that up quite nicely:
    > http://www.cocoanetics.com/2012/02/threadsafe-lazy-property-initialization/

    Or you can use dispatch_once, but make sure the once token is an ivar, unlike what I did.

    Tom Davie
  • As you can read in the blog too, the developer documentation of dispatch_once states:

    "The predicate must point to a variable stored in global or static scope. The result of using a predicate with automatic or dynamic storage is undefined."

    So, no, you cannot. Actually it works most of the time, but you cannot rely on it...

    Cheers, Jörg

  • On Nov 12, 2012, at 8:41 AM, Joerg Simon wrote:

    > As you can read in the blog too, the developer documentation of dispatch_once states:
    >
    > "The predicate must point to a variable stored in global or static scope. The result of using a predicate with automatic or dynamic storage is undefined."
    >
    > so, no, you can not. Actually it works most of the time, but you can not rely on it...

    Far be it from me to discourage people from paying attention to the docs, but I'm pretty sure that the docs are excessively restrictive in this case.

    From working with similar constructs in other APIs, I believe the actual requirements are:

    1) All of the threads which are to coordinate on doing a task exactly once must be referring to the same storage for the once predicate.

    2) The predicate storage has to be guaranteed to have been allocated and initialized to zero before any threads access it.

    3) The storage must not be deallocated until after it is guaranteed that no threads will reference it again.

    Obviously, automatic storage violates rule 1.  Most schemes which try to dynamically allocate the storage just before it's needed would normally run into the same race condition that dispatch_once() is trying to solve.  And the most common anticipated use case is for ensuring that a given task is only performed once, globally.  So, avoiding dynamic storage is a nice simple rule.  (It's also easy to forget to set dynamically allocated storage to zero before using it.)

    But an instance variable still satisfies all three requirements for the case where a task needs to be performed only once per instance.

    And the blog's speculation that there's some special characteristic of statically allocated memory vs. dynamically allocated memory that is important to dispatch_once()'s synchronization strikes me as very, very improbable.

    Regards,
    Ken
  • On Nov 12, 2012, at 8:36 AM, Ken Thomases <ken...> wrote:
    > 2) The predicate storage has to be guaranteed to have been allocated and initialized to zero before any threads access it.
    >
    > And the blog's speculation that there's some special characteristic of statically allocated memory vs. dynamically allocated memory that is important to dispatch_once()'s synchronization strikes me as very, very improbable.

    There is something special about statically-allocated memory. Statically-allocated memory has always been zero for the life of the process. Dynamically-allocated memory may have been non-zero at some point in the past (i.e. if it was previously part of a now-freed allocation).

    The problem is your condition #2. If the memory was previously non-zero and you set it to zero, you need appropriate memory barriers on some architectures to prevent a race where the caller of dispatch_once() sees the old non-zero value. Neither dispatch_once() nor the malloc system nor the Objective-C runtime promise to provide the correct barriers.

    In some cases you might be able to add an appropriate memory barrier to your -init... method, assuming that no calls to dispatch_once() occur before then.

    In practice this is a difficult race to hit, but it's not impossible.

    --
    Greg Parker    <gparker...>    Runtime Wrangler
  • On 12 Nov 2012, at 20:45, Greg Parker <gparker...> wrote:

    > There is something special about statically-allocated memory. Statically-allocated memory has always been zero for the life of the process. Dynamically-allocated memory may have been non-zero at some point in the past (i.e. if it was previously part of a now-freed allocation).
    >
    > The problem is your condition #2. If the memory was previously non-zero and you set it to zero, you need appropriate memory barriers on some architectures to prevent a race where the caller of dispatch_once() sees the old non-zero value. Neither dispatch_once() nor the malloc system nor the Objective-C runtime promise to provide the correct barriers.
    >
    > In some cases you might be able to add an appropriate memory barrier to your -init... method, assuming that no calls to dispatch_once() occur before then.
    >
    > In practice this is a difficult race to hit, but it's not impossible.
    >

    Sorry, I'm a bit late to the party here but I've just read this and I don't understand it.

    If this race condition really exists, you couldn't assume that *any* instance variables of a newly initialised object have been zeroed out.

    What am I missing?
  • At 7:56 PM +0700 11/12/12, Gerriet M. Denkmann wrote:
    > - (NSDictionary *)someDictionary;
    > {
    > static NSDictionary *someDictionary;
    > static dispatch_once_t justOnce;
    > dispatch_once( &justOnce, ^
    > {
    > // create a temp dictionary (might take some time)
    > someDictionary = temp;
    > }
    > );
    >
    > return someDictionary;
    > }

    Here's what I usually do:

    assume that _someDictionary is an instance variable initialized to nil and never changed once initialized to non-nil

    - (NSDictionary *)someDictionary;
    {
      if (!_someDictionary)
      {
        @synchronized (self)
        {
          if (!_someDictionary)
          {
            // create a temp dictionary (might take some time)
            _someDictionary = temp;
          }
        }
      }

      return _someDictionary;
    }

    the outer if avoids the overhead of @synchronized if _someDictionary is already created -- this is just an optimization

    the inner if is necessary to resolve the race condition if multiple threads make it past the outer one

    HTH,

    -Steve
  • On Dec 7, 2012, at 8:01 PM, Steve Sisak wrote:

    > the outer if avoids the overhead of @synchronized if _someDictionary is already created -- this is just an optimization
    >
    > the inner if is necessary to resolve the race condition if multiple threads make it past the outer one
    > the outer if avoids the overhead of @synchronized if _someDictionary is already created -- this is just an optimization
    >
    > the inner if is necessary to resolve the race condition if multiple threads make it past the outer one

    This is a classic anti-pattern called double-checked locking.  It is not safe.  Don't rely on it.
    https://en.wikipedia.org/wiki/Double-checked_locking
    http://erdani.com/publications/DDJ_Jul_Aug_2004_revised.pdf

    Regards,
    Ken
  • At 8:57 PM -0600 12/7/12, Ken Thomases wrote:
    >> the outer if avoids the overhead of @synchronized if _someDictionary is already created -- this is just an optimization
    >>
    >> the inner if is necessary to resolve the race condition if multiple threads make it past the outer one
    >
    > This is a classic anti-pattern called double-checked locking.  It is not safe.  Don't rely on it.
    > https://en.wikipedia.org/wiki/Double-checked_locking
    > http://erdani.com/publications/DDJ_Jul_Aug_2004_revised.pdf

    Hi Ken,

    From the first link you cite:

    > The pattern, when implemented in some language/hardware combinations, can be unsafe. At times, it can be considered an anti-pattern.[2]

    That is far different from being "a classic anti-pattern".

    In this example:

    1) The language is Obj-C
    2) I explicitly used @synchronized(self) and an instance variable

    So, in this case, what I'm doing is explicitly supported by the language.

    Your second article is explicitly focused on C++ (and singletons) -- it's also dated 2004.

    On Mac OS X, the correct implementation of a singleton is dispatch_once() -- in fact, that is the function's raison d'être.

    So, while I support the position that double-checked locking can be unsafe in some language/hardware combinations, in this case we're using language features specifically designed for the purpose.

    That said, it's worth noting that you need to understand your compiler when dealing with synchronization.

    I'm interested if there are any issues I'm missing in the Obj-C, @synchronized(self), instance variable case.

    -Steve
  • On Dec 7, 2012, at 8:18 PM, Steve Sisak <sgs-lists...> wrote:

    > I'm interested if there are any issues I'm missing in the Obj-C, @synchronized(self), instance variable case.

    Your pattern can fail if this line
            _someDictionary = temp;
    isn't atomic.
  • On Dec 7, 2012, at 10:18 PM, Steve Sisak wrote:

    > At 8:57 PM -0600 12/7/12, Ken Thomases wrote:
    >>> the outer if avoids the overhead of @synchronized if _someDictionary is already created -- this is just an optimization
    >>>
    >>> the inner if is necessary to resolve the race condition if multiple threads make it past the outer one
    >>
    >> This is a classic anti-pattern called double-checked locking.  It is not safe.  Don't rely on it.
    >> https://en.wikipedia.org/wiki/Double-checked_locking
    >> http://erdani.com/publications/DDJ_Jul_Aug_2004_revised.pdf
    >
    > Hi Ken,
    >
    > From the first link you cite:
    >
    >> The pattern, when implemented in some language/hardware combinations, can be unsafe. At times, it can be considered an anti-pattern.[2]
    >
    > That is far different from being "a classic anti-pattern".

    So Wikipedia waffled.  That doesn't make what I said false.  Double-checked locking is widely regarded as unsafe in C-based languages.  Only in certain languages which have much stronger memory models than C can it be made safe.  If in doubt, avoid it.

    > In this example:
    >
    > 1) The language is Obj-C

    That's part of the problem.  Objective-C is a C-based language and doesn't have any stronger guarantees about the memory model than C.  If we were working in Java, you might be safe.

    > 2) I explicitly used @synchronized(self) and an instance variable

    So?  The use of a lock is exactly part of the double-checked _locking_ technique.  It doesn't make it safe, it only makes it seem safe.  Do you imagine that @synchronized is fundamentally different from any other locking mechanism, such as a pthreads mutex?

    > So, in this case, what I'm doing is explicitly supported by the language.

    Nope.  The compiler, the CPU, and/or the cache can reorder the execution (or apparent execution) of the instructions within the lock such that the check that's outside of the lock will skip the lock even though the object hasn't been created and fully initialized yet.

    The problem is precisely that the first check is outside of the lock and outside of any protection provided by memory barriers.

    > Your second article is explicitly focused on C++ (and singletons) -- it's also dated 2004.

    It explains the principles that are applicable to all C-based languages (and even addresses Java).  And the fact that it's from 2004 doesn't make it wrong.  The passage of time didn't erode its accuracy.

    It also directly explains that they used the implementation of a singleton as an example but that the problems with double-checked locking are not specific to singletons.

    > On Mac OS X, the correct implementation of a singleton is dispatch_once() -- in fact, that is the function's raison d'être.

    The function's raison d'être is not singletons, it's doing something once and only once.

    It's also the correct mechanism for what you're using double-checked locking for.

    > So, while I support the position that double-checked locking can be unsafe in some language/hardware combinations, in this case we're using language features specifically designed for the purpose.

    No, you're not.  Since you're checking the instance variable outside of the lock, and failing to take the lock based on its state, you are not using the lock in all cases.

    Regards,
    Ken
  • On Dec 7, 2012, at 8:38 PM, Marco S Hyman <marc...> wrote:

    > On Dec 7, 2012, at 8:18 PM, Steve Sisak <sgs-lists...> wrote:
    >
    >> I'm interested if there are any issues I'm missing in the Obj-C, @synchronized(self), instance variable case.
    >
    >
    > Your pattern can fail if this line
    > _someDictionary = temp;
    > isn't atomic.

    The real issue with double-checked locking is whether the compiler promises to generate the proper memory barriers such that other threads are guaranteed to see the assignment to _someDictionary *after* the object has been constructed. C makes no such guarantee; other threads might see a non-nil value for _someDictionary before the first thread is done constructing the object.

    --Kyle Sluder
  • At 8:35 AM -0800 12/8/12, Kyle Sluder wrote:
    > On Dec 7, 2012, at 8:38 PM, Marco S Hyman <marc...> wrote:
    >
    >> On Dec 7, 2012, at 8:18 PM, Steve Sisak <sgs-lists...> wrote:
    >>
    >>> I'm interested if there are any issues I'm missing in the Obj-C, @synchronized(self), instance variable case.
    >>
    >> Your pattern can fail if this line
    >> _someDictionary = temp;
    >> isn't atomic.
    >
    > The real issue with double-checked locking is whether the compiler promises to generate the proper memory barriers such that other threads are guaranteed to see the assignment to _someDictionary *after* the object has been constructed. C makes no such guarantee; other threads might see a non-nil value for _someDictionary before the first thread is done constructing the object.

    I'm fairly sure that @synchronized, being a compiler built-in rather than a function, makes that guarantee -- specifically, that the values of instance variables can change across entry and exit of an @synchronized block.

    Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.

    In any case, removing the outer check makes the code correct regardless of compiler guarantees.
  • On Dec 8, 2012, at 10:06 AM, Steve Sisak <sgs-lists...> wrote:

    > At 8:35 AM -0800 12/8/12, Kyle Sluder wrote:
    >> On Dec 7, 2012, at 8:38 PM, Marco S Hyman <marc...> wrote:
    >>
    >>> On Dec 7, 2012, at 8:18 PM, Steve Sisak <sgs-lists...> wrote:
    >>>
    >>>> I'm interested if there are any issues I'm missing in the Obj-C, @synchronized(self), instance variable case.
    >>>
    >>>
    >>> Your pattern can fail if this line
    >>> _someDictionary = temp;
    >>> isn't atomic.
    >>
    >> The real issue with double-checked locking is whether the compiler promises to generate the proper memory barriers such that other threads are guaranteed to see the assignment to _someDictionary *after* the object has been constructed. C makes no such guarantee; other threads might see a non-nil value for _someDictionary before the first thread is done constructing the object.
    >
    > I'm fairly sure that @synchronized, being a compiler built-in rather than a function, makes that guarantee -- specifically, that the values of instance variables can change across entry and exit of an @synchronized block.

    If you actually understood the problem with double-checked locking, you'd understand that the problem is with the early return that happens before the @synchronized block.

    Please reread the Wikipedia article Ken linked to; the Java example exhibits *exactly* the same flaw as your Objective-C code.

    >
    > Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.

    Again, the problem exists *before* entering the @synchronized block.

    >
    > In any case, removing the outer check makes the code correct regardless of compiler guarantees.

    Yes, which is why your "optimization" is a classic anti-pattern.

    --Kyle Sluder
  • On Dec 8, 2012, at 10:06 AM, Steve Sisak <sgs-lists...> wrote:

    >
    > Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.

    By the way, you're wrong about this too. All @synchronized does is act as a mutex around a code block. It does not cause the compiler to reorder instructions and issue memory barriers in such a way that initialization is guaranteed to precede assignment from the perspective of all threads.

    --Kyle Sluder
  • At 10:24 AM -0800 12/8/12, Kyle Sluder wrote:
    > On Dec 8, 2012, at 10:06 AM, Steve Sisak <sgs-lists...> wrote:
    >
    >> Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.
    >
    > By the way, you're wrong about this too. All @synchronized does is act as a mutex around a code block. It does not cause the compiler to reorder instructions and issue memory barriers in such a way that initialization is guaranteed to precede assignment from the perspective of all threads.

    Please cite a source for this assertion.

    From:

    <https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Multithreading/ThreadSafety/ThreadSafety.html>

    "If you are already using a mutex to protect a section of code, do not automatically assume you need to use the volatile keyword to protect important variables inside that section. A mutex includes a memory barrier to ensure the proper ordering of load and store operations."

    I acknowledge that, without proper memory barriers, double-checked locking is problematic, but am providing an example using a construct which I'm fairly sure uses proper memory barriers.

    -Steve
  • On Dec 8, 2012, at 1:17 PM, Steve Sisak wrote:

    > At 10:24 AM -0800 12/8/12, Kyle Sluder wrote:
    >> On Dec 8, 2012, at 10:06 AM, Steve Sisak <sgs-lists...> wrote:
    >>
    >>> Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.
    >>
    >> By the way, you're wrong about this too. All @synchronized does is act as a mutex around a code block. It does not cause the compiler to reorder instructions and issue memory barriers in such a way that initialization is guaranteed to precede assignment from the perspective of all threads.
    >
    > Please cite a source for this assertion.
    >
    > From:
    >
    > <https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Multithreading/ThreadSafety/ThreadSafety.html>
    >
    > "If you are already using a mutex to protect a section of code, do not automatically assume you need to use the volatile keyword to protect important variables inside that section. A mutex includes a memory barrier to ensure the proper ordering of load and store operations."
    >
    > I acknowledge that, without proper memory barriers, double-checked locking is problematic, but am providing an example using a construct which I'm fairly sure uses proper memory barriers.

    The memory barrier is only at the boundaries of the @synchronized block.  Your extra check is not protected by the memory barriers of the @synchronized block because it's not within it.

    If you search for examples of "fixed" double-checked locking using explicit memory barriers, you'll see that one of the barriers occurs in the always-taken code path.  Your code does not have that.

    Regards,
    Ken
  • Speaking of Double-Checked Locking, there is this interesting article from Scott Meyers and Andrei Alexandrescu, written in 2004, which states:
    This article explains why Singleton isn’t thread safe, how DCLP attempts to address that problem, why DCLP may fail on both uni- and multiprocessor architectures, and why you can’t (portably) do anything about it. Along the way, it clarifies the relationships among statement ordering in source code, sequence points, compiler and hardware optimizations, and the actual order of statement execution. Finally, it concludes with some suggestions regarding how to add thread-safety to Singleton (and similar constructs) such that the resulting code is both reliable and efficient.

    DCLP = Double-Checked Locking Pattern
    Link: http://erdani.com/publications/DDJ_Jul_Aug_2004_revised.pdf

    Jean

    -----------
    Jean Suisse
    Institut de Chimie Moléculaire de l’Université de Bourgogne
    (ICMUB) — UMR 6302

    U.F.R. Sciences et Techniques, Bâtiment Mirande
    Aile B, bureau 413
    9, avenue Alain Savary — B.P. 47870
    21078 DIJON CEDEX
  • On Dec 8, 2012, at 11:17 AM, Steve Sisak <sgs-lists...> wrote:
    > At 10:24 AM -0800 12/8/12, Kyle Sluder wrote:
    >> On Dec 8, 2012, at 10:06 AM, Steve Sisak <sgs-lists...> wrote:
    >>
    >>> Further, if writes were not complete at the end of the block, the construct would be essentially useless for its intended purpose.
    >>
    >> By the way, you're wrong about this too. All @synchronized does is act as a mutex around a code block. It does not cause the compiler to reorder instructions and issue memory barriers in such a way that initialization is guaranteed to precede assignment from the perspective of all threads.
    >
    > Please cite a source for this assertion.

    Source: me, the author of the current @synchronized implementation. @synchronized performs the same synchronization as a pthread mutex.

    > From:
    >
    > <https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Multithreading/ThreadSafety/ThreadSafety.html>
    >
    >
    > "If you are already using a mutex to protect a section of code, do not automatically assume you need to use the volatile keyword to protect important variables inside that section. A mutex includes a memory barrier to ensure the proper ordering of load and store operations."

    To a close approximation, you should pretend that `volatile` does not exist in C-based languages.

    The above says that if you already have mutexes then you do not need `volatile`. The mutex alone does all of the work.

    Conversely, `volatile` with no mutex is also not a safe multithreading pattern.

    > I acknowledge that, without proper memory barriers, double-checked locking is problematic, but am providing an example using a construct which I'm fairly sure uses proper memory barriers.
    >
    > - (NSDictionary *)someDictionary;
    > {
    >     if (!_someDictionary)
    >     {
    >         @synchronized (self)
    >         {
    >             if (!_someDictionary)
    >             {
    >                 // create a temp dictionary (might take some time)
    >                 _someDictionary = temp;
    >             }
    >         }
    >     }
    >
    >     return _someDictionary;
    > }

    The example provided does not use proper memory barriers.

    In general, memory barriers need to occur in pairs, one on each thread. The coordination of the two memory barriers achieves the desired synchronization, so that both sides observe events occurring in the same order.

    Mutexes and similar constructs achieve this. The mutex lock() and unlock() procedures form a barrier pair. Code running with the mutex held is therefore correctly synchronized with respect to other code that runs with the mutex held. But code running outside the mutex is not protected, because it didn't call the barrier inside the lock() procedure.

    In faulty double-checked locking code, the problem is that the writer has a memory barrier but the reader does not. Because it has no barriers, the reader may observe events occur out of the desired order. That's why it fails. (Here the "writer" is the thread actually calling the initializer and the "reader" is a second thread simultaneously performing the double-check sequence.)

    (Faulty double-checked locking code has a second problem because the writer-side barrier inside the mutex unlock is the wrong barrier to use with a reader that is not locking the mutex.)

    You need to do one of two things to fix the reader side of a double-checked lock:
    * add appropriate barriers to the reader side, or
    * cheat in a way that is guaranteed to work on all architectures you care about.

    dispatch_once() actually cheats. It performs a very expensive barrier on the writer side (much more expensive than the barriers used in ordinary mutexes and @synchronized), which guarantees that no barrier is needed on the reader side on the CPUs that run OS X and iOS. The expensive barrier on the writer side is an acceptable trade-off because the writer path runs only once.

    --
    Greg Parker    <gparker...>    Runtime Wrangler
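    [The barrier pair Greg describes can be sketched in portable C++11 atomics. This is a hypothetical illustration, not code from this thread; the names `gShared` and `sharedString` are invented. The writer publishes with a release store, and the reader's always-taken path performs an acquire load, so both sides carry a barrier.]

    ```cpp
    #include <atomic>
    #include <mutex>
    #include <string>

    // Writer side: release store pairs with the reader's acquire load.
    // Reader side: the acquire load sits in the always-taken path, which is
    // exactly what the faulty double-checked locking pattern is missing.
    static std::atomic<std::string*> gShared{nullptr};
    static std::mutex gLock;

    std::string* sharedString() {
        std::string* p = gShared.load(std::memory_order_acquire); // reader barrier
        if (!p) {
            std::lock_guard<std::mutex> guard(gLock);
            p = gShared.load(std::memory_order_relaxed);          // re-check under lock
            if (!p) {
                p = new std::string("expensive to build");
                gShared.store(p, std::memory_order_release);      // writer barrier
            }
        }
        return p;
    }
    ```

    Every caller pays only a relaxed-to-acquire load once initialization is done, which is why this form is correct where the bare double-check is not.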
  • Greg,

    So, from what you are saying, either of these snippets should be valid, right?

    > +(id)sharedInstance{
    >     static id _sharedInstance = nil;
    >
    >     if (!_sharedInstance){
    >         @synchronized([self class]){
    >             if (!_sharedInstance){
    >                 id sharedInstance = [[super allocWithZone:NULL] init];
    >                 OSMemoryBarrier();
    >                 _sharedInstance = sharedInstance;
    >             }
    >         }
    >     }
    >
    >     OSMemoryBarrier();
    >     return _sharedInstance;
    > }

    vs

    > +(id)sharedInstance{
    >     static id _sharedInstance = nil;
    >
    >     static dispatch_once_t onceToken;
    >     dispatch_once(&onceToken, ^{
    >         _sharedInstance = [[super allocWithZone:NULL] init];
    >     });
    >
    >     return _sharedInstance;
    > }

    Any massive advantages / disadvantages with either approach?

    -Richard

    On 08/12/2012, at 4:45:37 PM, Greg Parker <gparker...> wrote:

  • On Dec 8, 2012, at 5:27 PM, Richard Heard <heardrwt...> wrote:

    > Greg,
    >
    > So, from what you are saying, either of these snippets should be valid, right?
    >
    >> +(id)sharedInstance{
    >>     static id _sharedInstance = nil;
    >>
    >>     if (!_sharedInstance){
    >>         @synchronized([self class]){
    >>             if (!_sharedInstance){
    >>                 id sharedInstance = [[super allocWithZone:NULL] init];
    >>                 OSMemoryBarrier();
    >>                 _sharedInstance = sharedInstance;
    >>             }
    >>         }
    >>     }
    >>
    >>     OSMemoryBarrier();
    >>     return _sharedInstance;
    >> }
    >
    > vs
    >
    >> +(id)sharedInstance{
    >>     static id _sharedInstance = nil;
    >>
    >>     static dispatch_once_t onceToken;
    >>     dispatch_once(&onceToken, ^{
    >>         _sharedInstance = [[super allocWithZone:NULL] init];
    >>     });
    >>
    >>     return _sharedInstance;
    >> }
    >
    >
    > Any massive advantages / disadvantages with either approach?

    Use dispatch_once(). It is shorter, simpler, and more obviously correct. The memory barrier implementation may also be correct, but why take the risk?

    --
    Greg Parker    <gparker...>    Runtime Wrangler
  • On Dec 8, 2012, at 18:51 , Greg Parker <gparker...> wrote:

    > Use dispatch_once(). It is shorter, simpler, and more obviously correct. The memory barrier implementation may also be correct, but why take the risk?

    What about for similar non-static properties?

    --
    Rick
  • On Sat, Dec 8, 2012, at 05:27 PM, Richard Heard wrote:
    > Greg,
    >
    > So, from what you are saying, either of these snippets should be valid,
    > right?
    >
    >> +(id)sharedInstance{
    >>     static id _sharedInstance = nil;
    >>
    >>     if (!_sharedInstance){
    >>         @synchronized([self class]){
    >>             if (!_sharedInstance){
    >>                 id sharedInstance = [[super allocWithZone:NULL] init];
    >>                 OSMemoryBarrier();
    >>                 _sharedInstance = sharedInstance;
    >>             }
    >>         }
    >>     }
    >>
    >>     OSMemoryBarrier();
    >>     return _sharedInstance;
    >> }

    Greg's advice notwithstanding, I'm not certain this is correct. You need
    to ensure Thread B's read of _sharedInstance is atomic with respect to
    both Thread A's assignment to _sharedInstance *and* the construction of
    the object to which _sharedInstance points. By putting the always-taken
    memory barrier _after_ the conditional, you've failed to guarantee the
    order of Thread B's branch relative to Thread A's assignment to
    _sharedInstance.

    The first example of corrected double-checked locking in this paper
    issues the memory barrier before reading the value of _sharedInstance
    from outside the critical section:
    http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html

    If dispatch_once() really is unsuitable for use with a dispatch_once_t
    stored in Objective-C instance storage, then the correct example in the
    paper I've cited might be a sufficient workaround.

    Of course, it's just as simple to push the entire contents of your
    +sharedInstance method into the critical section and not have to worry
    about memory barriers at all by relying on the coarser, higher-order
    synchronization primitive @synchronized provides. The performance hit is
    probably negligible.

    --Kyle Sluder
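    [Kyle's coarser alternative can be sketched hypothetically in C++ with a plain mutex; the `instanceMap` name and its contents are invented. Every caller takes the lock, so no memory-barrier reasoning is needed at all.]

    ```cpp
    #include <map>
    #include <mutex>
    #include <string>

    // The whole accessor runs inside the critical section: no double check,
    // no explicit barriers. The mutex's lock/unlock pair does all the
    // synchronization work, at the cost of taking the lock on every call.
    static std::map<std::string, int>* instanceMap() {
        static std::mutex lock;
        static std::map<std::string, int>* shared = nullptr;
        std::lock_guard<std::mutex> guard(lock);   // every caller locks
        if (!shared) {
            shared = new std::map<std::string, int>{{"answer", 42}};
        }
        return shared;
    }
    ```

    For an accessor that is not on a hot path, the uncontended lock acquisition is, as Kyle says, probably negligible.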
  • Le 9 déc. 2012 à 02:27, Richard Heard <heardrwt...> a écrit :

    > Greg,
    >
    > So, from what you are saying, either of these snippets should be valid, right?
    >
    >> +(id)sharedInstance{
    >> static id _sharedInstance = nil;
    >>
    >> …

    >> OSMemoryBarrier();
    >> return _sharedInstance;
    >> }

    OSMemoryBarrier() is not cheap. If dispatch_once uses a barrier only on the write side, then it should be faster than forcing a full barrier on every access.

    By the way, since nobody has mentioned it yet: this whole discussion looks to me like pointless premature optimization. If accessing your singleton is a bottleneck (and nothing tells us it is), there is probably a simpler way to avoid this cost.



    -- Jean-Daniel
  • On Dec 9, 2012, at 1:27 AM, Kyle Sluder wrote:

    > If dispatch_once() really is unsuitable for use with a dispatch_once_t
    > stored in Objective-C instance storage, then the correct example in the
    > paper I've cited might be a sufficient workaround.

    I thought we had established that, in all sane use cases, an instance variable once predicate is fine.  The cases where an instance variable once predicate would be unsafe are exactly the cases where it would be unsafe to access any instance variable, including the isa pointer.  So, if you're using the instance in any way, you've already assumed conditions that make the once predicate fine. (And, hopefully, you've more than assumed it, you've ensured it by proper inter-thread communication of the object pointer.)

    Regards,
    Ken
  • On Dec 9, 2012, at 6:53 AM, Ken Thomases <ken...> wrote:

    > On Dec 9, 2012, at 1:27 AM, Kyle Sluder wrote:
    >
    >> If dispatch_once() really is unsuitable for use with a dispatch_once_t
    >> stored in Objective-C instance storage, then the correct example in the
    >> paper I've cited might be a sufficient workaround.
    >
    > I thought we had established that, in all sane use cases, an instance variable once predicate is fine.

    Hence the hedge. ;-)

    > The cases where an instance variable once predicate would be unsafe are exactly the cases where it would be unsafe to access any instance variable, including the isa pointer.  So, if you're using the instance in any way, you've already assumed conditions that make the once predicate fine. (And, hopefully, you've more than assumed it, you've ensured it by proper inter-thread communication of the object pointer.)

    Yes, but as Greg pointed out the real danger comes from not understanding all the nuances of this, and assuming that dispatch_once is a more powerful synchronization primitive than it really is.

    --Kyle Sluder
  • On Dec 9, 2012, at 10:37 AM, Kyle Sluder wrote:

    > On Dec 9, 2012, at 6:53 AM, Ken Thomases <ken...> wrote:
    >
    >> On Dec 9, 2012, at 1:27 AM, Kyle Sluder wrote:
    >>
    >>> If dispatch_once() really is unsuitable for use with a dispatch_once_t
    >>> stored in Objective-C instance storage, then the correct example in the
    >>> paper I've cited might be a sufficient workaround.
    >>
    >> I thought we had established that, in all sane use cases, an instance variable once predicate is fine.
    >
    > Hence the hedge. ;-)
    >
    >> The cases where an instance variable once predicate would be unsafe are exactly the cases where it would be unsafe to access any instance variable, including the isa pointer.  So, if you're using the instance in any way, you've already assumed conditions that make the once predicate fine. (And, hopefully, you've more than assumed it, you've ensured it by proper inter-thread communication of the object pointer.)
    >
    > Yes, but as Greg pointed out the real danger comes from not understanding all the nuances of this, and assuming that dispatch_once is a more powerful synchronization primitive than it really is.

    I'm still not understanding the circumspection.  The use of dispatch_once() never _contributes_ to unsafe access.  It is unsafe if the situation is already unsafe.  If you try to avoid using dispatch_once() using other techniques like @synchronized(self), etc., that doesn't help anything.

    Intellectually, I understand the concern that people will assume that dispatch_once() introduces safety where it doesn't, but as a practical matter I'm finding it hard to imagine a scenario where a) that comes up or b) avoiding dispatch_once() for some vague (to the naive developer) notion that it's unsafe would lead to better safety.  Can you or Greg illustrate with an example?

    I feel that dispatch_once() with an instance variable once predicate _is_ the right answer for the class of problems where people would be tempted to use it and that we should be encouraging developers to rely on it rather than invariably worse alternatives.

    Regards,
    Ken
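    [A hypothetical C++ analogue of Ken's point, with all names invented: a once predicate stored in instance storage, here a `std::once_flag` member playing the role of a `dispatch_once_t` instance variable. It is safe exactly when accessing any other member of the object is safe.]

    ```cpp
    #include <mutex>
    #include <vector>

    struct Cache {
        std::once_flag built;      // per-instance "once predicate"
        std::vector<int> values;

        // Lazily computes `values` exactly once per instance, even if get()
        // is called from several threads that already share this object safely.
        const std::vector<int>& get() {
            std::call_once(built, [this] {
                values = {1, 2, 3};    // stands in for an expensive computation
            });
            return values;
        }
    };
    ```

    As with the dispatch_once_t case, the flag carries no magic of its own: if two threads could race on the object pointer itself, a once flag inside that object cannot rescue them.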
  • This is a somewhat older but quite interesting thread - nonetheless I felt the final conclusion was still too vague.

    So, I did my best to put up a simple "worst case" sample and tried to trick dispatch_once into a race, but I failed. That is, dispatch_once was doing what one would like to expect. While this is no proof, it at least increases the probability that certain use cases are "quite safe" in a given environment.

    Maybe someone else will detect a race? Or perhaps somebody is able to find an even worse worst case? ;)

    Note: if a race had occurred, it would have printed "x" to stdout.

    #include <cstdio>
    #include <new>
    #include <type_traits>
    #include <dispatch/dispatch.h>

    struct Bar {

        void setResult(long value) {
            dispatch_once(&_once, ^{
                _result = value;
            });
        }

        long getResult() const {
            return _result;
        }

        dispatch_once_t _once;
        long _result;
    };

    int main(int argc, const char * argv[])
    {
        dispatch_semaphore_t finished_sem = dispatch_semaphore_create(0);

        const int N = 1000000;
        int n = N;
        typedef std::aligned_storage<sizeof(Bar), std::alignment_of<Bar>::value>::type storage_t;
        storage_t storage;

        while (n) {
            //memset(&storage, -1, sizeof(storage));
            Bar* bar = new (&storage) Bar();
            dispatch_async(dispatch_get_global_queue(0, 0), ^{
                bar->setResult(n);
                dispatch_semaphore_signal(finished_sem);
            });
            dispatch_semaphore_wait(finished_sem, DISPATCH_TIME_FOREVER);
            if (bar->getResult() != n) {
                dispatch_async(dispatch_get_global_queue(0, 0), ^{
                    printf("x");
                });
            }
            --n;
        }

        printf("finished\n");

        return 0;
    }

    Andreas

    On 09.12.2012, at 17:52, Ken Thomases wrote:
