access to the pixel data of an image?

  • hi all,

    for my current Cocoa project (a pure 2D graphics app) I need to look
    at each and every pixel of a given image (mostly jpg and raw formats)
    because I have to know their color values (preferably in the HSV or
    HLS color space).

    do you have a pointer for me what docs I should study, which classes
    might be helpful etc.?

    is the Cocoa wrapper for Quartz 2D adequate to use, or should I
    use Quartz 2D function calls directly?

    I had a look at NSImage and CIImage so far but couldn't find any
    pixel-data-related functions such as "give me a 2D array of color
    values that represents the whole image".

    Excuse me if that's a dumb question, I'm still a newbie,
    unfortunately... :-(

    Thank you so much,
    Stefan.
  • On Sep 6, 2007, at 2:44 PM, Stefan Wolfrum wrote:

    > for my current Cocoa project (a pure 2D graphics app) I need to look
    > at each and every pixel of a given image (mostly jpg and raw
    > formats) because I have to know their color values (preferably in
    > the HSV or HLS color space).
    >
    > do you have a pointer for me what docs I should study, which classes
    > might be helpful etc.?

    A popular way to do that is to use Quartz 2D to create a
    CGBitmapContext and then draw the image you are interested in
    examining into that context.  You can then examine the memory you've
    allocated for the CGBitmapContext to extract pixel information.
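    A minimal sketch of that approach in C (Core Graphics is a C API; this
    is macOS-only, so it can't be compiled elsewhere, and the function name
    is my own):

```c
// macOS-only sketch: draw a CGImage into a CGBitmapContext, then read
// the pixel bytes back out of the buffer we supplied.
// Compile with: cc pixels.c -framework CoreGraphics
#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

void DumpFirstPixel(CGImageRef image)   /* function name is hypothetical */
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;     /* RGBA, 8 bits per component */
    uint8_t *buffer = calloc(height, bytesPerRow);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height,
                                             8, bytesPerRow, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    /* Drawing is what actually resolves the image into pixels. */
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

    /* Quartz's default origin is the bottom-left corner, so buffer[0..3]
       holds the R, G, B, A of the bottom-left pixel. */
    printf("RGBA = %u %u %u %u\n",
           buffer[0], buffer[1], buffer[2], buffer[3]);

    CGContextRelease(ctx);
    free(buffer);
}
```

    Because you own the buffer you pass in, you also control its layout
    (component order, bytes per row), which is exactly what makes the
    pixels easy to examine afterwards.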

    > Is the Cocoa wrapper for Quartz 2D adequate to use? Or should I
    > directly use Quartz 2D function calls?

    As someone else pointed out, if you can convince NSImage to create an
    NSBitmapImageRep (or if you can create one of these yourself) then you
    can get at the pixel buffer and examine it.

    > I had a look at NSImage and CIImage so far but couldn't find any
    > pixel-data-related functions such as "give me a 2D array of color
    > values that represents the whole image".

    That is because both of these entities are high-level abstractions,
    though different from one another in the abstractions they represent.

    NSImage represents "something that can be drawn".  It can contain
    pixel images, but it can also include images that are not pixel-based
    (like PDF items).  An NSImage can contain a large number of
    representations of a particular graphic.  There's no "give me a pixel
    array" method because NSImage doesn't know which representation it
    should return the pixels for (assuming that representation has pixels
    at all).

    CIImage represents a formula, or recipe, for drawing an image.  A
    CIImage, for example, might implement the recipe "take a source image,
    apply these filters, crop out everything except this part, and then
    draw the result".  Because it is based on drawing commands (filters),
    a CIImage can represent graphics that can't otherwise be represented.
    For example, it is quite common to have a CIImage that is infinite in
    extent.  If you wanted to get that image's pixel buffer you'd be
    asking for an infinite number of pixels.  I can guarantee that if you
    asked for an infinite number of pixels, you'd run out of memory...
    even on a 64-bit machine :-)

    The point is that in the case of an NSImage, a CIImage, and a CGImage,
    the actual pixels that will make up the image are not definitively
    determined until you actually draw them into a pixel environment.  The
    mechanism I described above (drawing the image into a CGBitmapContext)
    is such an operation.

    Scott