CoreImage problems with very large images

  • I'm using CoreImage in an app, but it is unusable with very large
    images, e.g. a 52-megabyte 18,000 x 14,000 jpg.  A 5000 x 4000 jpg
    image loads and processes with great performance, but trying to
    process the large image leads to very high memory use and much
    swapping to disk.  I tried to load the large image in the Core Image
    Fun House example app, and ended up force-quitting after waiting 10-15
    minutes.

    The old implementation using NSImage could load and process (scale,
    rotate, watermark, broken grayscale) the large images, and while it
    wasn't fast, it was at least usable.

    Is CoreImage just not meant to handle images this big?  Or are there
    ways to gain acceptable performance with these large images with
    CoreImage?  Will the problem possibly go away on a Mac Pro?

    Thanks,

    Jim
  • Hey,

    The image you are trying to process decompresses to an image that is
    about 968 megabytes in size. This is almost certainly bigger than your
    graphics card's VRAM. Assuming that Core Image is programmed cleverly,
    it will take the image, break it down into tiles that overlap
    slightly, and reconstruct the image afterwards. This won't be really
    slow, but it may well chew up a gig of RAM in the process.

    If Core Image is not implemented to deal with large images, it might
    just give up on the GPU, load the whole lot into RAM, and process it
    using AltiVec or SSE. That may well be very slow indeed.

    You might want to try tiling and reconstructing the image yourself so
    you never overstep your VRAM (a rough sketch of that approach follows
    at the end of this message).

    Chris

    On 16 Nov 2007, at 08:07, Jim Crate <jim...> wrote:

    > I'm using CoreImage in an app, but it is unusable with very large
    > images, e.g. a 52-megabyte 18,000 x 14,000 jpg.  A 5000 x 4000 jpg
    > image loads and processes with great performance, but trying to
    > process the large image leads to very high memory use and much
    > swapping to disk.  I tried to load the large image in the Core
    > Image Fun House example app, and ended up force-quitting after
    > waiting 10-15 minutes.
    >
    > The old implementation using NSImage could load and process (scale,
    > rotate, watermark, broken grayscale) the large images, and while it
    > wasn't fast, it was at least usable.
    >
    > Is CoreImage just not meant to handle images this big?  Or are there
    > ways to gain acceptable performance with these large images with
    > CoreImage?  Will the problem possibly go away on a Mac Pro?
    >
    > Thanks,
    >
    > Jim
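
    A minimal sketch of the tiling approach described above, written as a
    category method on CIImage. The 2048-pixel tile size and the method
    name are illustrative assumptions, not from the original post:

    - (CGImageRef)createCGImageByTiling
    {
        CGRect extent = [self extent];
        const CGFloat tileSize = 2048.0;  // keep each render well under VRAM

        CGColorSpaceRef colorSpace =
            CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        size_t rowBytes = (size_t)extent.size.width * 4;
        CGContextRef destContext = CGBitmapContextCreate(NULL,
            (size_t)extent.size.width, (size_t)extent.size.height,
            8, rowBytes, colorSpace, kCGImageAlphaPremultipliedLast);
        CIContext *ciContext = [CIContext contextWithCGContext:destContext
                                                       options:nil];

        for (CGFloat y = 0; y < extent.size.height; y += tileSize) {
            for (CGFloat x = 0; x < extent.size.width; x += tileSize) {
                CGRect tileRect = CGRectIntersection(
                    CGRectMake(extent.origin.x + x, extent.origin.y + y,
                               tileSize, tileSize), extent);
                // Render only this tile on the GPU, then composite it into
                // the full-size bitmap held in main memory.
                CGImageRef tile = [ciContext createCGImage:self
                                                  fromRect:tileRect];
                CGContextDrawImage(destContext,
                    CGRectMake(x, y, tileRect.size.width,
                               tileRect.size.height), tile);
                CGImageRelease(tile);
            }
        }

        CGImageRef result = CGBitmapContextCreateImage(destContext);
        CGContextRelease(destContext);
        CGColorSpaceRelease(colorSpace);
        return result;  // caller is responsible for CGImageRelease()
    }

    Only one tile at a time has to fit on the GPU this way, but the
    assembled bitmap still occupies roughly a gigabyte of main memory for
    an 18,000 x 14,000 image.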
  • On Nov 16, 2007, at 9:17 AM, Chris Blackburn wrote:

    > Hey,
    >
    > The image you are trying to process decompresses to an image that is
    > about 968 megabytes in size. This is almost certainly bigger than
    > your graphics card's VRAM. Assuming that Core Image is programmed
    > cleverly it will take the image and break it down into tiles that
    > overlap slightly and reconstruct the image afterwards. This won't be
    > really slow but it may well chew up a gig of ram in the process.
    >
    > If core image is not implemented to deal with large images it might
    > just give up on the GPU, load the whole lot into RAM and process
    > using Altivec or SSE. This may well be very very slow indeed.
    >
    > You might want to try tiling and reconstructing the image yourself
    > so you never overstep your VRAM.

    This sounds right to me, so I'm wondering if this figure is affected
    by having multiple layers and/or images handled simultaneously by the
    graphics card. That is, will Core Image punt to CPU and main memory
    even if that one image would fit, but there is just too much of an
    overall load on the graphics card?

    In other words, is it merely a matter of finding the amount of VRAM in
    the user's machine and making sure that one image will fit in it, or
    are there other things to take into consideration?

    regards,

    Ralph

    Raffael Cavallaro, Ph.D.
    <raffaelcavallaro...>
    As I recall from the documentation (I can't find the reference at the
    moment), there's a 2000 x 2000 limit on image size for some classes;
    it corresponds to a limit of the GPU.  It's possible NSImage deals
    with this, but some classes don't.  That's why there's CATiledLayer.

    Scott
  • On Nov 16, 2007, at 3:07 AM, Jim Crate wrote:

    > I'm using CoreImage in an app, but it is unusable with very large
    > images, e.g. a 52-megabyte 18,000 x 14,000 jpg.  A 5000 x 4000 jpg
    > image loads and processes with great performance, but trying to
    > process the large image leads to very high memory use and much
    > swapping to disk.  I tried to load the large image in the Core
    > Image Fun House example app, and ended up force-quitting after
    > waiting 10-15 minutes.

    Someone suggested trying CocoaSlides, which I did, and CocoaSlides can
    handle the large image in a slide show, even with a CoreImage
    transition, in about 15 seconds on my MacBook Pro.  That is definitely
    acceptable performance, which means I'm apparently doing something
    wrong.  I'll do more digging and see what I find out.

    When debugging, the big hit in my app is when CoreImage is rendering
    the image.  Creating the CGImageRef and setting up the CIFilters takes
    no time at all.  I started off taking the easy way to render the image
    into the NSImageView, by creating an NSImage with an NSCIImageRep, so
    I'm guessing that is probably not the best way to do the rendering
    (see the sketch at the end of this message).  It works well for what
    would normally be considered large images, up to 10000 x 8000, and I
    didn't run into problems until hitting my largest test images.

    Thanks,

    Jim
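
    One alternative to wrapping the CIImage in an NSImage/NSCIImageRep is
    to draw it directly through the current graphics context's CIContext
    in a custom view.  A minimal sketch, assuming an ivar named
    filteredImage holds the output of the filter chain (the name is an
    assumption, not from the original post):

    // In a custom NSView subclass, draw the CIImage without going
    // through NSImage or NSCIImageRep.
    - (void)drawRect:(NSRect)dirtyRect
    {
        CIContext *context = [[NSGraphicsContext currentContext] CIContext];
        // Scale the image to the view bounds; with a screen-sized preview
        // this is close to a 1:1 blit.
        [context drawImage:filteredImage
                    inRect:NSRectToCGRect([self bounds])
                  fromRect:[filteredImage extent]];
    }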
  • If the image processing required lends itself well to parallel
    processing and you've got a bunch of fast computers lying around, you
    may want to investigate the possibility of using Xgrid to send the
    tiles out to multiple machines.

    -- Ilan

    On Nov 16, 2007, at 9:17 AM, Chris Blackburn wrote:

    > Hey,
    >
    > The image you are trying to process decompresses to an image that
    > is about 968 megabytes in size. This is almost certainly bigger
    > than your graphics card's VRAM. Assuming that Core Image is
    > programmed cleverly it will take the image and break it down into
    > tiles that overlap slightly and reconstruct the image afterwards.
    > This won't be really slow but it may well chew up a gig of ram in
    > the process.
    >
    > If core image is not implemented to deal with large images it might
    > just give up on the GPU, load the whole lot into RAM and process
    > using Altivec or SSE. This may well be very very slow indeed.
    >
    > You might want to try tiling and reconstructing the image yourself
    > so you never overstep your VRAM.
    >
    > Chris
    >
    >
    > On 16 Nov 2007, at 08:07, Jim Crate <jim...> wrote:
    >
    >> I'm using CoreImage in an app, but it is unusable with very large
    >> images, e.g. a 52-megabyte 18,000 x 14,000 jpg.  A 5000 x 4000 jpg
    >> image loads and processes with great performance, but trying to
    >> process the large image leads to very high memory use and much
    >> swapping to disk.  I tried to load the large image in the Core
    >> Image Fun House example app, and ended up force-quitting after
    >> waiting 10-15 minutes.
    >>
    >> The old implementation using NSImage could load and process
    >> (scale, rotate, watermark, broken grayscale) the large images, and
    >> while it wasn't fast, it was at least usable.
    >>
    >> Is CoreImage just not meant to handle images this big?  Or are
    >> there ways to gain acceptable performance with these large images
    >> with CoreImage?  Will the problem possibly go away on a Mac Pro?
    >>
    >> Thanks,
    >>
    >> Jim


    Ilan Volow
    "Implicit code is inherently evil, and here's the reason why:"
  • Thanks for all replies to my previous question.

    I've gotten acceptable performance with large images in my program by
    not using the large images until necessary.
    CGImageSourceCreateThumbnailAtIndex will create thumbnails at any
    size, so a simple approach is to create a preview the size of the
    screen, since the image will always be scaled down to fit the view
    anyway.  However, I now have a different problem.
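
    For reference, a minimal sketch of that preview step; the fileURL
    variable and the 2048-pixel cap are illustrative assumptions:

    // Build a screen-sized preview with Image I/O instead of decoding
    // the full-resolution image.
    CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)fileURL,
                                                         NULL);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
        (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
        [NSNumber numberWithInt:2048], (id)kCGImageSourceThumbnailMaxPixelSize,
        nil];
    CGImageRef preview = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                   (CFDictionaryRef)options);
    CFRelease(source);
    // Wrap the preview in a CIImage for filtering; release it when done.
    CIImage *previewImage = [CIImage imageWithCGImage:preview];
    CGImageRelease(preview);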

    I left the view as a normal NSImageView, and I create an NSImage with
    an NSCIImageRep.  This works well: performance is great and the image
    displays correctly, although there are memory leaks that have to be
    chased down.  Googling turned up several people mentioning that this
    method leaks lots of memory.

    I also have to save these files, and since the existing code used an
    NSBitmapImageRep, I decided a category on CIImage might be nice:

    - (NSImage *)NSImageWithNSCIImageRep
    {
        NSImage *image = [[[NSImage alloc]
            initWithSize:NSMakeSize([self extent].size.width,
                                    [self extent].size.height)] autorelease];
        [image addRepresentation:[NSCIImageRep imageRepWithCIImage:self]];
        return image;
    }

    However, the images wrote out with lines and other artifacts.  I
    thought I had seen someone mention this before, but I cannot find that
    now.  If I change the code that updates the view to create the view's
    NSImage using this method instead of the NSCIImageRep, the image
    displays incorrectly as well, whether it comes from the preview image
    or the full-sized image.

    Since my end goal is to save the images using CGImage and Image I/O, I
    set up that code to see if it worked any better.  As a category on
    CIImage:

    - (CGImageRef)createCGImage
    {
        int width = [self extent].size.width;
        int rows = [self extent].size.height;
        int rowBytes = (width * 4);

        CGColorSpaceRef colorSpace =
            CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast |
            kCGBitmapByteOrderDefault;

        CGContextRef cgContext = CGBitmapContextCreate(NULL, width, rows, 8,
                                                       rowBytes, colorSpace,
                                                       bitmapInfo);

        // get the CIContext and render the CIImage
        CIContext *ciContext = [CIContext contextWithCGContext:cgContext
                                                       options:nil];
        CGImageRef cgImage = [ciContext createCGImage:self
                                              fromRect:[self extent]];

        CGContextRelease(cgContext);
        CGColorSpaceRelease(colorSpace);

        return cgImage;
    }

    I also tried using this to create the CGImage in this method:

    [ciContext drawImage:self atPoint:CGPointZero fromRect:[self extent]];
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
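
    For completeness, the Image I/O step that writes the resulting
    CGImageRef out would look roughly like this; the destination URL
    variable and the JPEG type are assumptions on my part:

    // Save the CGImageRef produced above via Image I/O.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((CFURLRef)destURL, kUTTypeJPEG,
                                        1, NULL);
    CGImageDestinationAddImage(dest, cgImage, NULL);
    if (!CGImageDestinationFinalize(dest))
        NSLog(@"Failed to write image to %@", destURL);
    CFRelease(dest);
    CGImageRelease(cgImage);  // createCGImage follows the Create rule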

    However, the file saved has the same type of artifacts.  The overlaid
    image may not display at all, or the file may have anything from a few
    thin white lines to being completely illegible, like this one:

    http://www.quevivadev.com/test.jpg

    In addition, while a large image (18000 x 14400) will render using
    NSCIImageRep, or (incorrectly) into an NSBitmapImageRep, trying to
    create a CGImage from it crashes deep in the rendering code when
    calling [ciContext createCGImage:self fromRect:[self extent]].  Images
    up to 10000 x 8000 do render into a CGImage, albeit with artifacts
    similar to the test image above.

    Has anyone seen this kind of problem before, or have any pointers on
    where to continue searching for a solution?

    Thanks,

    Jim