floats & Color APIs

  • I'm just curious...

    What is the reasoning behind moving to floats to represent color in
    Apple's Cocoa color APIs?

    For example,

    <http://developer.apple.com/documentation/Cocoa/Reference/
    ApplicationKit/Classes/NSColor_Class/Reference/Reference.html#//
    apple_ref/occ/instm/NSColor/blueComponent>

    Why wasn't 1 byte or even 2 bytes considered to be enough?
    Were the reasons purely having to do with the speed of certain
    calculations?
    Is there any reason not to just convert these numbers to single-byte
    components for the purpose of file storage?
  • Even QuickDraw used 16-bit shorts for representing color.

    I think there are good reasons to support a higher range of color
    values than what an average monitor can display (256 unique shades of
    red, green or blue). A good printer can probably use 10-12 bits of
    color information per channel. Also, any time you are compositing
    images, having "extra" resolution above and beyond what you strictly
    need can make the final output look better. In a similar vein, audio
    editing tools usually work at 96 kHz with 24- or 32-bit precision,
    even though that's much more precision than the ear can distinguish;
    this provides a lot of "slop" for the error that normally accumulates
    during composition.

    For compact storage, you could certainly scale down the values if you
    want. Unless your customers demand exact color precision, it should
    be OK.
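
    For example, here is a minimal sketch (a hypothetical helper, not an
    AppKit API) of packing an NSColor's float components into 8-bit
    values for storage:

        #import <Cocoa/Cocoa.h>

        // Convert to a known RGB color space first, then round each
        // 0.0-1.0 float component to the nearest of 256 levels.
        // (colorUsingColorSpaceName: can return nil for pattern colors;
        // error handling is omitted in this sketch.)
        static void PackColorToBytes(NSColor *color, unsigned char out[4])
        {
            NSColor *rgb =
                [color colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
            float r = 0.0f, g = 0.0f, b = 0.0f, a = 0.0f;
            [rgb getRed:&r green:&g blue:&b alpha:&a];
            out[0] = (unsigned char)(r * 255.0f + 0.5f);
            out[1] = (unsigned char)(g * 255.0f + 0.5f);
            out[2] = (unsigned char)(b * 255.0f + 0.5f);
            out[3] = (unsigned char)(a * 255.0f + 0.5f);
        }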

    On Oct 23, 2006, at 11:47 AM, Eric Gorr wrote:

    > I'm just curious...
    >
    > What is the reasoning behind moving to floats to represent color in
    > Apple's Cocoa color APIs?
    >
    > For example,
    >
    > <http://developer.apple.com/documentation/Cocoa/Reference/
    > ApplicationKit/Classes/NSColor_Class/Reference/Reference.html#//
    > apple_ref/occ/instm/NSColor/blueComponent>
    >
    > Why wasn't 1 byte or even 2 bytes considered to be enough?
    > Were the reasons purely having to do with the speed of certain
    > calculations?
    > Is there any reason not to just covert these numbers to single byte
    > components for the purpose of file storage?
  • This is just speculation, but it could be because Apple's color API
    sits on top of OpenGL, and OpenGL prefers floats. OpenGL may prefer
    floats because the hardware acceleration behind it prefers floats.

    Another reason may be to facilitate alpha compositing with
    pre-multiplied alpha (from CoreImage.pdf):

    "Premultiplied alpha is a term used to describe a source color, the
    components of which have already been multiplied by an alpha value.
    Premultiplying speeds up the rendering of an image by eliminating the
    need to perform a multiplication operation for each color component.
    For example, in an RGB color space, rendering an image with
    premultiplied alpha eliminates three multiplication operations (red
    times alpha, green times alpha, and blue times alpha) for each pixel
    in the image...

    By default, Core Image assumes that processing nodes are 128
    bits-per-pixel, linear light, premultiplied RGBA floating-point
    values..."
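
    As a rough sketch of the arithmetic (not actual Core Image or OpenGL
    code), the "over" compositing step with premultiplied float
    components needs no extra per-channel multiply by the source alpha:

        // Source-over compositing with premultiplied alpha.
        // Each component is a float in [0, 1]; r, g and b have already
        // been multiplied by a, so the blend is one multiply-add per
        // channel instead of an additional multiply by src.a.
        typedef struct { float r, g, b, a; } PremulColor;

        static PremulColor OverPremultiplied(PremulColor src, PremulColor dst)
        {
            float k = 1.0f - src.a;  /* how much destination shows through */
            PremulColor out;
            out.r = src.r + dst.r * k;
            out.g = src.g + dst.g * k;
            out.b = src.b + dst.b * k;
            out.a = src.a + dst.a * k;
            return out;
        }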
  • Quickly moving into the realm of an off-topic thread... but I just
    had a thought...

    Perhaps the need for a great deal of precision in color, beyond what
    would seem useful, also has to do with the topic of watermarking,
    where one needs to be able to specify a color that is nearly
    identical to another so a person won't see the difference, but a
    sensitive piece of hardware could...

    (btw, does anyone happen to know if more than a single byte per RGBA
    component is useful when only considering what the human eye can
    perceive?)
  • I think it is pretty well established that 256 levels for each color
    component have nothing to do with what the eye can see.  HDR
    rendering uses more like 16-32 bits per component to approach what
    the human eye can perceive.

    Michael

    > -----Original Message-----
    > From: cocoa-dev-bounces+lattam=<mac.com...> [mailto:cocoa-dev-
    > bounces+lattam=<mac.com...>] On Behalf Of Eric Gorr
    > Sent: Tuesday, October 24, 2006 11:33 AM
    > To: <cocoa-dev...>
    > Subject: Re: floats & Color APIs
    >
    > Quickly moving into the realm of an off-topic thread....but, I just
    > had a thought...
    >
    > Perhaps the need for a great deal of precision in color beyond what
    > would seem useful has also to do with topic of watermarking...where
    > one needs to be able to specify a color that is nearly identical to
    > another so a person won't see the difference, but a sensitive piece
    > of hardware could...
    >
    > (btw, anyone happen to know if more then a single byte per RGBA
    > component is useful when only considering what the human eye can
    > perceive?)
  • I wouldn't be surprised if the human eye could perceive 10-12 bits of
    fidelity, but that's pushing it. 16 bits is almost certainly overkill.

    Anyway, HDR rendering isn't just about adding bits of precision. It's
    got a lot more to do with how we blend and mix colors. For example,
    the effects a camera gives when an image is overexposed are difficult
    to render on a GPU without HDR techniques, because it's hard to
    represent the intermediate colors when there's no way to
    differentiate between, say, a plain white T-shirt and the red-hot
    whiteness of the surface of the sun.

    Once rendering is done, you can downsample an image generated with
    HDR techniques to a standard 24-bit RGB image and it looks identical.
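
    A minimal sketch of that last step, as a plain clamp-and-quantize of
    each float component:

        // Downsample one HDR float component to an 8-bit value.
        // HDR buffers can hold values well above 1.0; anything brighter
        // than full white simply clamps to 255 in the 24-bit image.
        static unsigned char ToByte(float c)
        {
            if (c < 0.0f) c = 0.0f;
            if (c > 1.0f) c = 1.0f;
            return (unsigned char)(c * 255.0f + 0.5f);
        }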

    On Oct 24, 2006, at 11:50 AM, Michael Latta wrote:

    > I think it is pretty well established that 256 levels for each color
    > component has nothing to do with what the eye can see.  HDR
    > rendering uses
    > more like 16-32 bits per component to approach what the human eye can
    > perceive.
    >
    > Michael
    >
    >> -----Original Message-----
    >> From: cocoa-dev-bounces+lattam=<mac.com...>
    >> [mailto:cocoa-dev-
    >> bounces+lattam=<mac.com...>] On Behalf Of Eric Gorr
    >> Sent: Tuesday, October 24, 2006 11:33 AM
    >> To: <cocoa-dev...>
    >> Subject: Re: floats & Color APIs
    >>
    >> Quickly moving into the realm of an off-topic thread....but, I just
    >> had a thought...
    >>
    >> Perhaps the need for a great deal of precision in color beyond what
    >> would seem useful has also to do with topic of watermarking...where
    >> one needs to be able to specify a color that is nearly identical to
    >> another so a person won't see the difference, but a sensitive piece
    >> of hardware could...
    >>
    >> (btw, anyone happen to know if more then a single byte per RGBA
    >> component is useful when only considering what the human eye can
    >> perceive?)
  • On Oct 23, 2006, at 1:47 PM, Eric Gorr wrote:

    > I'm just curious...
    >
    > What is the reasoning behind moving to floats to represent color in
    > Apple's Cocoa color APIs?

    Since nobody has mentioned it, how about the simple fact that Cocoa
    is based on technology that was built on top of Display PostScript,
    which uses floats for color components?  Just like "why is the origin
    in the bottom left", "what's up with those wacky font names", etc...

    Glenn Andreas                      <gandreas...>
      <http://www.gandreas.com/> wicked fun!
    quadrium | flame : flame fractals & strange attractors : build,
    mutate, evolve, animate
  • Please keep this specific to Cocoa.
  • On Oct 23, 2006, at 8:47 PM, Eric Gorr wrote:

    > I'm just curious...
    >
    > What is the reasoning behind moving to floats to represent color in
    > Apple's Cocoa color APIs?

    First and foremost because representing color components as floats
    makes it easy to support device-independent specification of color
    values. An application specifies a color as floating-point values in
    the range zero to one, and the graphics library (Quartz in our case)
    maps the color value to the closest color that the current output
    device is capable of displaying. This process is called quantization
    and may or may not include dithering.
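
    As a sketch of the arithmetic (not Quartz's actual implementation),
    quantization without dithering amounts to snapping the 0-1 float to
    the nearest level the device can represent:

        // Map a float component in [0, 1] to the nearest of the device's
        // 2^bits levels, e.g. bits = 8 for a typical display or 10-12
        // for a good printer. The float itself stays device-independent.
        static unsigned int Quantize(float component, unsigned int bits)
        {
            unsigned int maxLevel = (1u << bits) - 1u;
            if (component < 0.0f) component = 0.0f;
            if (component > 1.0f) component = 1.0f;
            return (unsigned int)(component * (float)maxLevel + 0.5f);
        }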

    Another important part of a device-independent color model is support
    for color transformations. The application tells Quartz which color
    space the colors it sends are defined in, and Quartz then transforms
    those colors from application (user) space to the color space of the
    output device. Quartz uses the color matching services provided by
    ColorSync to do this.
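
    In Cocoa terms that round trip is visible directly on NSColor. A
    small sketch (the 0.25/0.50/0.75 values are just an example):

        #import <Cocoa/Cocoa.h>

        // Specify a device-independent (calibrated) color with floats,
        // then ask AppKit for the current device's version of it.
        // ColorSync performs the matching, so the resulting float
        // components may differ slightly from the originals.
        static void LogDeviceBlue(void)
        {
            NSColor *calibrated = [NSColor colorWithCalibratedRed:0.25f
                                                            green:0.50f
                                                             blue:0.75f
                                                            alpha:1.0f];
            NSColor *device =
                [calibrated colorUsingColorSpaceName:NSDeviceRGBColorSpace];
            NSLog(@"device blue component: %f", [device blueComponent]);
        }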

    This TN has a very good description of how colors are handled by
    Quartz and ColorSync:
    <http://developer.apple.com/technotes/tn/tn2035.html>

    > ApplicationKit/Classes/NSColor_Class/Reference/Reference.html#//
    > apple_ref/occ/instm/NSColor/blueComponent>
    >
    > Why wasn't 1 byte or even 2 bytes considered to be enough?

    Because that would make it impossible to describe all the colors
    supported by an output device that offers more than 8 or 16 bits per
    color component. Consider, for example, a modern graphics card with
    floating-point frame buffer support.

    > Were the reasons purely having to do with the speed of certain
    > calculations?

    This may have been a factor. It is certainly faster to keep data in
    one representation than to keep it in different representations and
    convert it whenever you need to combine your data with other data.
    Consider for example compositing operations. However, actual practice
    is much more complicated.

    > Is there any reason not to just covert these numbers to single byte
    > components for the purpose of file storage?

    If device-independence is important to you, then store them as
    floating-point values.

    Regards,

    Dietmar Planitzer
  • The human eye can definitely tell a difference at 8 bits and maybe a
    few more, particularly when areas are side by side.  You can easily
    see contouring at that level in near-solid areas and even perceive
    Mach banding that occurs due to pre-processing in the eye's retina.
    16 bits is overkill, but it is the next convenient level and allows
    headroom for re-quantization during processing.  Floating point
    allows easy scaling, fading, etc.  Modern processors compute floating
    point as easily as (or even more easily than) integer arithmetic.
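
    For instance, fading between two colors with floats is just a
    per-component interpolation, with no rescaling or overflow concerns
    (a sketch, not any particular API):

        // Linear fade between two float RGB colors: t = 0 gives 'from',
        // t = 1 gives 'to'. With 0.0-1.0 floats there is no fixed-point
        // rescaling and no integer overflow to worry about.
        typedef struct { float r, g, b; } FloatRGB;

        static FloatRGB Fade(FloatRGB from, FloatRGB to, float t)
        {
            FloatRGB out;
            out.r = from.r + (to.r - from.r) * t;
            out.g = from.g + (to.g - from.g) * t;
            out.b = from.b + (to.b - from.b) * t;
            return out;
        }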

    --
    Gordon Apple
    Ed4U
    Little Rock, AR
    <ga...>

    > I wouldn't be surprised if the human eye could perceive 10-12 bits of
    > fidelity, but that's pushing it. 16 bits is almost certainly overkill.
    >
  • Not to get super nitpicky, but I think 10 bits per component plus two
    bits of slop (R10 G10 B10 X2) is another useful step up from regular
    24-bit. It keeps each pixel at 32 bits, which is a great size for
    working with standard CPUs, and it's still a lot smaller than 16-bit
    integer or floating-point formats, which require 48 bits per pixel.
    There is some GPU support, but I'm not sure how widespread it is.
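
    For what it's worth, here is a sketch of how those components pack
    into a single 32-bit word (the bit layout is assumed for
    illustration; actual formats vary by API and platform):

        // Pack three 10-bit components plus 2 unused bits into 32 bits.
        // Assumed layout: B in bits 0-9, G in 10-19, R in 20-29.
        // Inputs are floats already clamped to [0, 1].
        static unsigned int PackR10G10B10(float r, float g, float b)
        {
            unsigned int ri = (unsigned int)(r * 1023.0f + 0.5f) & 0x3FFu;
            unsigned int gi = (unsigned int)(g * 1023.0f + 0.5f) & 0x3FFu;
            unsigned int bi = (unsigned int)(b * 1023.0f + 0.5f) & 0x3FFu;
            return (ri << 20) | (gi << 10) | bi;
        }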

    Unfortunately—keeping things relevant to Cocoa—I don't think there is
    a way in OS X to drive your monitor at greater than 8 bits per color
    component... which makes most of this discussion academic :(

    On Oct 25, 2006, at 10:12 AM, Gordon Apple wrote:

    > The human eye can definitely tell a difference at 8 bits and
    > maybe a few
    > more, particularly when areas are side-by-side.  You can easily see
    > contouring at that level in near-solid areas and even perceive mach-
    > banding
    > that occurs due to pre-processing in the eye's retina.  16 is
    > overkill, but
    > is the next convenient level and allows for processing re-
    > quantization.
    > Floating point allows easy scaling, fading, etc.  Modern processors
    > compute
    > floating point as easily (or even easier) than integer arithmetic.
    >
    >
    > --
    > Gordon Apple
    > Ed4U
    > Little Rock, AR
    > <ga...>
    >
    >
    >
    >
    >> I wouldn't be surprised if the human eye could perceive 10-12 bits of
    >> fidelity, but that's pushing it. 16 bits is almost certainly
    >> overkill.