fast, threaded drawing to NSImageView from C ?

  • I am trying to wrap an existing C/C++ program with a Cocoa UI. The gist of it is that my C/C++ code, which is itself multithreaded, makes asynchronous calls to a function called SetBucket(), which basically hands the computed results (pixels) back to Cocoa for drawing.

    First, I use detachDrawingThread to run my main C/C++ work in a separate thread. That worker thread may itself create several other threads, each of which might call SetBucket(). In SetBucket(), I create a new bitmap rep, fill it with the bucket's pixels, then draw the new bucket/tile onto the static NSImage I keep. Then I hand the updated NSImage to the NSImageView and ask the view to redraw the NSRect I just updated. The reason we keep a static NSImage is that we want to allocate the image only once and "accumulate" all the tiles throughout the processing, which eventually results in the complete picture.

    This code "sorta" works if my worker runs as a single thread. However, if the worker itself spawns multiple threads, I get all sorts of erratic behavior: crashes, "class NSImage autoreleased with no pool in place - just leaking" messages, no redraw, partial redraw, etc. We have tried various kinds of thread locking in SetBucket(), but that does not seem to make any difference.

    So for starters, does my code below remotely make sense? Is this the right approach? Our goal is to get the fastest possible tile updates in our view.

    any suggestions welcome.
    thanks in advance!
    -r

    static NSWindow*    g_window = NULL;
    static NSImageView* g_view  = NULL;
    static NSImage*    g_image  = NULL;

    WindowCreate(...)
    {
      ....
      g_image = [[NSImage alloc] initWithSize:NSMakeSize(hsize, vsize)];
    }

    SetBucket(...)
    {
      NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                                initWithBitmapDataPlanes:nil ...];

      unsigned char* p = [rep bitmapData];

      for (y)
          for (x)
            *p++ = col;

      [g_image lockFocus];
      [rep drawAtPoint: NSMakePoint(bucket_origin[0], bucket_origin[1])];
      [g_image unlockFocus];
      [rep release];

      [g_view performSelectorOnMainThread:@selector(setImage:)
                                withObject:g_image
                            waitUntilDone:NO];

      [g_view setNeedsDisplayInRect:
              NSMakeRect(bucket_origin[0], bucket_origin[1], bucket_size[0], bucket_size[1])];

    }

    - (void)doWork:(id)obj
    {
      //window & view come in via nib
      g_window = window;
      g_view  = view;

      // call C/C++ to do work
      goDoWork();
    }

    - (void)applicationDidFinishLaunching:(NSNotification*)notification
    {
      //launch new thread to do our C/C++ work
      [NSApplication detachDrawingThread:@selector(doWork:) toTarget:self withObject:nil];
    }
  • On Oct 30, 2007, at 1:10 PM, Rene Limberger wrote:
    > So for starters, does my code below remotely make sense? Is this the
    > right approach? Our goal is to get the fastest possible tile updates
    > in our view.

    No. :)

    In general, randomly spawning threads and pounding on the drawing
    infrastructure from said threads simply isn't going to work.  Even if
    it did work, it isn't likely to be terribly fast as you are going to
    end up seriously contending for scarce resources like the connection
    to the window server, etc...

    NSImage is an incredibly heavyweight class for just rendering to
    screen.  Very likely not what you want.  As well, -lockFocus/
    -unlockFocus contend for high-level resources that shouldn't be
    arbitrarily diddled from threads.

    In any case, I suggest taking a step back and *first* solving the
    problem of how to most quickly get bits to the screen.  If you are
    just drawing a bunch of points and the sets of points are going to be
    interlaced, then using NSRectFillList() is the fastest way I found to
    get points to the screen without resorting to OpenGL.
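
    As a rough illustration of the NSRectFillList() approach -- the helper
    function and its arguments here are hypothetical, and this assumes a
    focused drawing context (inside -drawRect: or between -lockFocus/
    -unlockFocus):

    ```objc
    #import <Cocoa/Cocoa.h>

    // Sketch: batch-fill many 1x1 pixel rects in one call rather than
    // issuing one fill per point.
    static void drawPoints(const NSPoint *points, unsigned count, NSColor *color)
    {
        // One NSRect per point; real code would reuse this buffer
        // across calls instead of allocating each time.
        NSRect *rects = malloc(count * sizeof(NSRect));
        for (unsigned i = 0; i < count; i++)
            rects[i] = NSMakeRect(points[i].x, points[i].y, 1.0, 1.0);

        [color set];                  // all rects share one fill color
        NSRectFillList(rects, count); // one call instead of `count` fills
        free(rects);
    }
    ```

    The batching is the point: NSRectFillList() amortizes the per-primitive
    overhead, which is why it beats filling rects one at a time.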

    See:

    http://svn.red-bean.com/restedit/trunk/source/HopView.py

    As an example.  It is a simple HopView fractal which, with each
    iteration, has to draw a couple of thousand more pixels over an
    existing image.  It accumulates the rendered image into
    NSBitmapImageRep instances, avoiding the overhead of NSImage.  It
    isn't threaded, but the calculation portion could be trivially broken
    out into a thread (while the drawing code would remain relatively
    untouched).  It happens to be implemented in Python, but it uses the
    stock Cocoa APIs -- there isn't anything in there that can't be
    directly translated to Cocoa (save for the generator used in the
    actual math bits).
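
    Translated to Objective-C, the accumulation pattern the HopView example
    uses looks roughly like this (the class and ivar names are made up for
    illustration):

    ```objc
    #import <Cocoa/Cocoa.h>

    // Sketch: accumulate pixels into a single NSBitmapImageRep, allocated
    // once, and blit it to the view in -drawRect: -- no NSImage involved.
    @interface AccumulatingView : NSView
    {
        NSBitmapImageRep *accumRep; // created once, drawn into repeatedly
    }
    @end

    @implementation AccumulatingView
    - (void)drawRect:(NSRect)dirtyRect
    {
        // -drawAtPoint: comes from NSImageRep; it composites the
        // accumulated bitmap directly into the focused view.
        [accumRep drawAtPoint:NSZeroPoint];
    }
    @end
    ```

    The calculation code writes into the rep's -bitmapData and then calls
    -setNeedsDisplayInRect: on the view; the view never owns an NSImage.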

    Now, if your image is truly tiled -- if each set of points are
    isolated from each other -- then this may not be the fastest way to
    get the bits to the screen.  Nor is it a particularly convenient way
    if your points are going to be varying in color.

    Looking at your code, the easiest possible modification would be to
    move the NSImage out of the thread.  Create NSBitmapImageReps in the
    thread, pass 'em to the main thread for rendering.  And be very
    careful about memory management.  That, at least, should move things
    to "working".  Maybe the performance will be good enough (but I doubt
    it -- easier to optimize than debug threading issues, though).
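
    A minimal sketch of that modification, assuming a hypothetical
    -drawBucket: method on the view and a dictionary to package the rep
    with its origin. Note the per-thread autorelease pool -- its absence
    is what produces the "no pool in place" messages:

    ```objc
    // Worker thread side: build the rep, hand it to the main thread.
    void SetBucket(...)
    {
        // Every thread that touches Cocoa needs its own pool.
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSBitmapImageRep *rep = /* ... fill pixels as before ... */;

        // Package the rep and its destination; the dictionary retains both.
        NSDictionary *bucket = [NSDictionary dictionaryWithObjectsAndKeys:
            rep, @"rep",
            [NSValue valueWithPoint:origin], @"origin",
            nil];

        // Do ALL drawing on the main thread; the runloop retains
        // `bucket` until the selector has been performed.
        [g_view performSelectorOnMainThread:@selector(drawBucket:)
                                 withObject:bucket
                              waitUntilDone:NO];
        [rep release];   // the dictionary keeps it alive until delivery
        [pool release];
    }
    ```

    On the main thread, the hypothetical -drawBucket: would composite the
    rep (e.g. via -drawAtPoint:) and then call -setNeedsDisplayInRect:,
    both of which are safe there.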

    b.bum