drawing an array of pixels to the screen

  • I am working on a computer vision based app that detects objects in
    video images.  I have to implement custom algorithms to locate and
    track objects, so I need to write code that has direct access to the
    pixel values and keep track of the x,y coordinates of certain
    objects.  This is easiest to do with the image represented as a 2D
    array of int values.  However, it seems that the only way to draw
    bitmaps to the screen is with NSImage and NSBitmapImageRep or
    CIImage.  Is there another way?  It seems like a waste (and a
    performance hit) to convert a 2D array of pixel values into a byte
    stream every time you want to display the image.  Is there a way to
    draw the pixel values directly to the screen (and do it quickly)?

    Thanks for any ideas.
  • OpenGL.  It's easy to use on OS X and it's a great 2D drawing
    mechanism.  As Ben at VMware said, it's as close to BitBlt as you're
    going to get on OS X :)

    Alex Kac - President and Founder
    Web Information Solutions, Inc. - Microsoft Certified Partner

    "If at first you don't succeed, skydiving is not for you."
    -- Francis Roberts
  • +NSBitmapImageRep
    initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel:
    is pretty efficient.  You're not copying the whole byte array, it
    just takes a reference to it.

    However, another poster hit the nail on the head—if performance is
    important to you, might as well go OpenGL. You can't get any faster
    than that on OS X.

  • This has come up before.  Maybe I already answered this?

    In any case, the next best thing to OpenGL for blasting pixels to the
    screen is NSRectFillList() with a set of 1x1 rectangles.  Damned fast
    and you can use NSBitmapImageReps to cache the results.

    See:

    http://svn.red-bean.com/restedit/trunk/source/HopView.py

    Specifically:

    ...
        def drawRect_(self, aRect):
            if self.backingStore:
                self.backingStore.draw()
            else:
                self.eraseView_(aRect)

            for pointArray, rectCount, color in self.pointsCountsAndColors:
                color.set()
                NSRectFillList(pointArray, len(pointArray) / 4)

            self.pointsCountsAndColors = []
            self.backingStore = NSBitmapImageRep.alloc().initWithFocusedViewRect_(self.bounds())

            if self.pointCount > 100000:
                self.pointCount = 0
                self.passCount = self.passCount + 1
                if self.passCount > 100:
                    self.stopCalculation()
    ...
    Where pointArray is, quite literally, an array of floating-point
    values that is treated as an array of NSRects.  As long as you can
    break down your sets of points by color it works great.  I plot a few
    thousand points of each color and then cache the results in an
    NSBitmapImageRep that is blitted down on each drawRect_() invocation.
    (Python via PyObjC... but the Obj-C is basically the same.)

    b.bum
  • John,

    Thanks for your thoughts here, but you've hit exactly on the problem.
    NSBitmapImageRep takes a byte array, NOT a 2d array of pixel values.
    That means after every analytical operation I perform on the 2d pixel
    array, I have to then transform it back into a byte stream (or at
    least a flat array) before drawing it.  This slows everything down.
    AFAIK, NSBitmapImageRep will not take a 2d array of values.  (I've
    tried.)  The crux of my question was, once you have a 2d array (x,y)
    arrangement of pixels, what's the fastest way to get it onto the
    screen?

    - Jason

  • You can also use (x,y) coords with a flat array:

    array2d[y][x] is the same as array[x + y * width].

    AFAIK, a 1D array will be the fastest representation, as it is the
    native representation, and any function that takes a 2D array will
    probably convert it into a flat array to display it.


  • There seems to be some confusion.

    In C, the following two arrays are interchangeable:

    int  twoDArray[100][500];
    int  oneDArray[100 * 500];

    Both occupy the same amount of memory.  They can be cast back and
    forth harmlessly.  Both of the following assignments are legal and
    meaningful:

    int  *array = (int *)twoDArray;
    int  *array = oneDArray;

    Both styles of array declaration are usable with
    -initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel:
    and with OpenGL.
  • On 12 Nov 07, at 15:38, Erik Buck wrote:

    > In C, the following two arrays are interchangeable:
    >
    > int  twoDArray[100][500];
    > int  oneDArray[100 * 500];
    >
    > Both occupy the same amount of memory.  They can be cast back and
    > forth harmlessly.

    It's true only for a fixed-size array.  The following will crash,
    because an int ** is an array of pointers, not a contiguous block:

    int width = 100, height = 500;
    int **array = malloc(width * height * sizeof(int));
    array[50][50] = 0;  /* array[50] is an uninitialized pointer */
  • On 12-Nov-07, at 6:08 AM, Jean-Daniel Dupas wrote:

    > You can also use (x,y) coords with a flat array:
    >
    > array2d[y][x] is the same as array[x + y * width].

    > AFAIK, a 1D array will be the fastest representation, as it is the
    > native representation, and any function that takes a 2D array will
    > probably convert it into a flat array to display it.

    Agreed.

    I would adopt the 1d array as the internal representation.  Granted
    a[x][y] is nicer than a[x+y*w], but one can easily provide an
    equivalent syntax.

    For those functions expecting a 2d array which you can't or don't want
    to change, you can create a 2d array from a 1d array by creating a
    column of row pointers...
      pixel **rows = malloc(n_rows * sizeof(pixel *));
      for (i = 0; i < n_rows; ++i)
        rows[i] = &bitmap[i * w];

    dave

  • What you say is true, but it seems unlikely to me that an imaging
    operation would be coded with fixed dimensions.  More likely that each
    operation is a function taking image arguments with specified
    dimensions.  In the 2d case

        fun(pixel **image, ...)

    the pixels are accessed as image[y][x], and in the 1d case

        fun(pixel *image, int w, ...)

    one would have to write image[y * w + x].

    dave

    On 12-Nov-07, at 7:38 AM, Erik Buck wrote:

    > In C, the following two arrays are interchangeable:
    >
    > int  twoDArray[100][500];
    > int  oneDArray[100 * 500];
  • Dave,

    Thanks for your input.  Wouldn't something like array[x + y * width]
    be slower than array2d[x][y] when you are iterating over a large
    image?  Does the extra multiplication operation slow you down?

    - Jason

  • The generated code is the same. The [][] syntax is simply syntactic
    sugar which hides the multiplication.

  • On 12-Nov-07, at 2:11 PM, John Stiles wrote:

    > The generated code is the same. The [][] syntax is simply syntactic
    > sugar which hides the multiplication.

    I believe this is only true when the array dimensions are known.  For
    T a[H][W], a[y][x] is effectively *(a + y*W + x), but for T **a,
    a[y][x] is *(*(a + y) + x).  Of course there is multiplication
    implicit in the pointer arithmetic...

    > On Nov 12, 2007, at 12:58 PM, Jason Horn wrote:
    >
    >> Dave,
    >>
    >> Thanks for your input.  Wouldn't something like  array[x + y
    >> *width] be slower than array2d[x][y] when you are iterating over a
    >> large image?  Does the extra multiplication operation slow you down?
    >>
    >> - Jason

    I am not concerned by the multiplication, but I don't claim to be an
    expert in things optimal...

    dave
  • Yes, you're correct. If you declare the array as
    int a[10][10];

    Then really 'a' is a 100-element array and [Y][X] is syntactic sugar
    for doing a[Y*10+X];

    But if you declare the array as
    int **a;

    Then [Y][X] does two memory lookups. First it looks up a[Y] and
    assigns it to a temporary (let's call it TEMP), and then it does a
    second lookup to find TEMP[X].
    This is getting pretty far off-topic, though, since none of this is
    specific to ObjC.
