NSNotFound signed???

  • I've just been shocked to find out that NSNotFound is defined by the
    framework as NSIntegerMax, that is, the maximal value of a *SIGNED*
    integer, which on a 64-bit architecture is 0x7fffffffffffffff.

    I used to think it should be NSUIntegerMax, that is, the maximal
    value of an *UNSIGNED* integer, corresponding to 0xffffffffffffffff.

    I assumed this because the NSNotFound value is used in many places in
    the system frameworks for working with array indexes; for example,
    -[NSArray indexOfObject:] returns unsigned NSUInteger indexes, *BUT*
    it returns NSNotFound if the element is not found.

    But the NSNotFound value should be a valid index, since it sits in
    the middle of the possible unsigned integer range! It means that even
    though NSArray nominally allows up to NSUIntegerMax elements, you can
    actually use only NSIntegerMax of them, because starting at index
    NSIntegerMax you will get screwed up by things like indexOfObject:.

    How come?
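    The pitfall above can be shown in plain C; a minimal sketch, with
    int64_t/uint64_t standing in for NSInteger/NSUInteger on a 64-bit
    system (kNotFound is an invented stand-in, not Foundation API):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        /* Stand-ins for a 64-bit system:
           NSUInteger -> uint64_t, NSNotFound == NSIntegerMax -> INT64_MAX. */
        const uint64_t kNotFound = INT64_MAX;

        /* Index INT64_MAX is a perfectly legal uint64_t value, yet it is
           indistinguishable from the "not found" sentinel. */
        uint64_t index = INT64_MAX;
        assert(index == kNotFound); /* valid index, or failed lookup? */
        return 0;
    }
    ```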
  • On Jun 12, 2013, at 00:11 , Oleg Krupnov <oleg.krupnov...> wrote:

    > I've just been shocked to find out that NSNotFound is defined by the
    > framework as NSIntegerMax, that is, the maximal value of a *SIGNED*
    > integer, which on a 64-bit architecture is 0x7fffffffffffffff.
    >
    > I used to think it should be NSUIntegerMax, that is, the maximal
    > value of an *UNSIGNED* integer, corresponding to 0xffffffffffffffff.
    >
    > I assumed this because the NSNotFound value is used in many places in
    > the system frameworks for working with array indexes; for example,
    > -[NSArray indexOfObject:] returns unsigned NSUInteger indexes, *BUT*
    > it returns NSNotFound if the element is not found.
    >
    > But the NSNotFound value should be a valid index, since it sits in
    > the middle of the possible unsigned integer range! It means that even
    > though NSArray nominally allows up to NSUIntegerMax elements, you can
    > actually use only NSIntegerMax of them, because starting at index
    > NSIntegerMax you will get screwed up by things like indexOfObject:.
    >
    > How come?

    It's the only extreme-ish compile-time constant that's representable unambiguously in both signed and unsigned expressions. C has funny rules for conversions in compile-time expressions, so that makes using (for example) 0xFFFFFFFFFFFFFFFF treacherous in other ways.
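    Those conversion rules can be seen in plain C; a minimal sketch
    (int64_t standing in for NSInteger; none of this is Foundation API):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        int64_t i = -1; /* stand-in for a signed NSInteger */

        /* 0xFFFFFFFFFFFFFFFF fits no signed type, so the literal is
           unsigned; the usual arithmetic conversions then convert i to
           unsigned too, turning -1 into 0xFFFFFFFFFFFFFFFF. */
        assert(i == 0xFFFFFFFFFFFFFFFF); /* "equal" purely by accident */

        /* INT64_MAX (the NSIntegerMax stand-in) is representable in both
           signed and unsigned 64-bit types, so this comparison means
           what it says regardless of signedness. */
        assert(i != INT64_MAX);
        return 0;
    }
    ```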

    As long as you're aware of it, it's not a major problem in the NSArray case, since it would be extremely rare to actually want array indexes that don't fit into 63 unsigned bits, as opposed to 64. In particular, there isn't enough address space to create an NSArray containing even NSIntegerMax pointers. The easiest conceptual adjustment here is to think of NSUInteger as a 63-bit number, not a 64-bit number.

    Actually, the "weird" value of NSNotFound is not really its most treacherous aspect. Far more dangerous is the fact that it's not generally safe to archive (i.e. use the NSCoder protocol on) NSNumber objects whose value represents NSNotFound, because the value is architecture-dependent. If you archive a 64-bit NSNotFound, it's no longer NSNotFound when unarchived on a 32-bit architecture, and vice versa.
  • On Jun 12, 2013, at 00:42 , Quincey Morris <quinceymorris...> wrote:

    > If you archive a 64-bit NSNotFound, it's no longer NSNotFound when unarchived on a 32-bit architecture, and vice versa.

    Oops, just to clarify:

    I don't mean there's anything wrong with the archiving or unarchiving per se.

    It's no longer NSNotFound when you encode/decode an NSUInteger/NSInteger variable, because there'll be truncation or "incorrect" sign extension of the scalar value, in one direction or the other.
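    A C sketch of that truncation / sign-extension hazard, with
    int64_t/int32_t standing in for NSInteger on 64-bit and 32-bit
    architectures (no actual NSCoder calls involved):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        /* NSNotFound is NSIntegerMax, so its value depends on the
           architecture's integer width. */
        int64_t notFound64 = INT64_MAX; /* NSNotFound in a 64-bit process */
        int32_t notFound32 = INT32_MAX; /* NSNotFound in a 32-bit process */

        /* Encode on 64-bit, decode into a 32-bit integer: truncation.
           The low 32 bits of 0x7FFFFFFFFFFFFFFF are 0xFFFFFFFF, i.e. -1. */
        int32_t decoded32 = (int32_t)notFound64;
        assert(decoded32 != notFound32);

        /* Encode on 32-bit, decode into a 64-bit integer: the value
           survives intact, but it is no longer NSIntegerMax there. */
        int64_t decoded64 = notFound32;
        assert(decoded64 != notFound64);
        return 0;
    }
    ```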
  • > there isn't enough address space to create an NSArray containing even NSIntegerMax pointers.

    I knew someone would say this. Who needs more than 640 KB of RAM,
    after all? :) © Bill Gates

    What if I have a sparse array, etc.?

    One extra bit actually means twice as many elements. Why have
    NSUInteger at all if you can't use more than NSIntegerMax? This
    doesn't seem right.

    Anyway thanks for your time, Quincey!

    On Wed, Jun 12, 2013 at 10:52 AM, Quincey Morris
    <quinceymorris...> wrote:
    > On Jun 12, 2013, at 00:42 , Quincey Morris
    > <quinceymorris...> wrote:
    >
    > If you archive a 64-bit NSNotFound, it's no longer NSNotFound when
    > unarchived on a 32-bit architecture, and vice versa.
    >
    >
    > Oops, just to clarify:
    >
    > I don't mean there's anything wrong with the archiving or unarchiving per
    > se.
    >
    > It's no longer NSNotFound when you encode/decode an NSUInteger/NSInteger
    > variable, because there'll be truncation or "incorrect" sign extension of
    > the scalar value, in one direction or the other.
    >
  • Le 12 juin 2013 à 10:14, Oleg Krupnov <oleg.krupnov...> a écrit :

    >> there isn't enough address space to create an NSArray containing even NSIntegerMax pointers.
    >
    > I knew someone will say this. Who needs more than 640 KB RAM after
    > all? :) © Bill Gates
    >

    This has nothing to do with the amount of available RAM. The limitation is in the virtual memory system.
    Even if you have 2^64 bytes of address space, you can store at most 2^64 / sizeof(void *) pointers.
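    A back-of-the-envelope check of that bound, sketched in C (assuming
    8-byte pointers):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        /* 2^64 bytes of address space / 2^3 bytes per pointer
           = 2^61 pointers -- far below NSUIntegerMax (2^64 - 1),
           and below NSIntegerMax (2^63 - 1) as well. */
        uint64_t max_pointers = (uint64_t)1 << (64 - 3);
        assert(max_pointers < (uint64_t)INT64_MAX);
        return 0;
    }
    ```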

    > What if I have a sparse array, etc.?

    If you have a sparse array, then just don't write it using NSNotFound. We are talking about a well-defined class and a well-defined API, not a hypothetical, nonexistent API that might use the wrong constant.

    > One extra bit actually means twice as many elements. Why have
    > NSUInteger at all if you can't use more than NSIntegerMax? This
    > doesn't seem right.

    Because you want to use an unsigned number to define the number of elements (having a negative number of elements does not make sense), and it would be pointless and inefficient to define and use a 63-bit integer type on current architectures.

    -- Jean-Daniel
  • > This has nothing to do with the amount of available RAM. The limitation is in the virtual memory system.
    > Even if you have 2^64 bytes of address space, you can store at most 2^64 / sizeof(void *) pointers.

    You misunderstood; I wasn't talking about RAM. It was merely an
    illustration of how an assumption that seemed justified based on the
    technical limits of its time can, in hindsight, turn out to be
    ridiculous and crippling once those limits are pushed further. Never
    mind.

    > If you have a sparse array, then just don't write it using NSNotFound. We are talking about a well-defined class and a well-defined API, not a hypothetical, nonexistent API that might use the wrong constant.

    That's exactly the pitfall that I fell into, and that's the point of
    my original post. I thought it was safe to use NSNotFound with my own
    classes because it is so commonly used with NSUInteger array indexes
    in the framework, but it turns out to be unsafe, and without any
    documented cautions.

    >> One extra bit actually means twice as many elements. Why have
    >> NSUInteger at all if you can't use more than NSIntegerMax? This
    >> doesn't seem right.
    >
    > Because you want to use an unsigned number to define the number of elements (having a negative number of elements does not make sense), and it would be pointless and inefficient to define and use a 63-bit integer type on current architectures.

    You seem to be ardently defending the idea that it's somehow okay to
    lose half of the possible array indexes, just because. I admit there
    may be good reasons for this, but I have yet to see what they are.
  • On Jun 12, 2013, at 3:01 AM, Oleg Krupnov <oleg.krupnov...> wrote:

    > You misunderstood; I wasn't talking about RAM. It was merely an
    > illustration of how an assumption that seemed justified based on the
    > technical limits of its time can, in hindsight, turn out to be
    > ridiculous and crippling once those limits are pushed further.

    It’s not a technical limit, it’s a mathematical limit. An n-bit address space simply cannot hold 2^n pointers. (The limit is 2^n / (n/8) pointers, i.e. 2^61 for n = 64 … but of course even at that limit you wouldn’t have room for any objects for the pointers to point to!)

    If you want to implement a sparse array that can support 2^64 items, go for it, but you’ll need some mechanism other than NSNotFound to indicate when a value isn’t found.

    —Jens
  • On Jun 12, 2013, at 01:14 , Oleg Krupnov <oleg.krupnov...> wrote:

    > What if I have sparse array etc.
    >
    > One higher bit is actually twice as many elements. Why having
    > NSUInteger at all if you can't use more than NSIntegerMax? This
    > doesn't seem right.

    -- If you're talking about NSArray, there's no such thing as a sparse array. That's why NSNotFound == NSIntegerMax works -- it's a value you can *never* use as an NSArray index, *not* an artificial limitation on the range of indexes.

    -- The same is true for (non-sparse) C arrays.

    -- If you implement a sparse array yourself, and you want to use the full index range, then you likely wouldn't use NSNotFound as a marker at all. If you need a marker, you'd likely use 0 or NSUIntegerMax, right?
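    A sketch of that idea in plain C: a toy sparse container that
    reserves UINT64_MAX (the NSUIntegerMax stand-in) as its own
    not-found marker instead of reusing NSNotFound. All names here are
    invented for illustration:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical marker: reserve the top value of the index range. */
    #define SPARSE_NOT_FOUND UINT64_MAX

    typedef struct { uint64_t index; int value; } Entry;

    /* Return the index of the first entry holding `value`,
       or SPARSE_NOT_FOUND if no entry matches. */
    static uint64_t index_of_value(const Entry *entries, size_t count,
                                   int value) {
        for (size_t i = 0; i < count; i++)
            if (entries[i].value == value)
                return entries[i].index;
        return SPARSE_NOT_FOUND;
    }

    int main(void) {
        /* Two entries with indexes far apart -- sparse by construction;
           note that an index above NSIntegerMax is perfectly usable. */
        Entry sparse[] = { { 7, 42 }, { (uint64_t)INT64_MAX + 5, 17 } };
        assert(index_of_value(sparse, 2, 17) == (uint64_t)INT64_MAX + 5);
        assert(index_of_value(sparse, 2, 99) == SPARSE_NOT_FOUND);
        return 0;
    }
    ```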