Leopard: real amount of RAM used by application

  • Hi,

    I've got an application that does some very heavy Core Data
    processing. By "heavy" I mean that the application does nothing but
    process things inside Core Data for up to an hour, not that this
    processing is particularly complex.

    While this processing is going on, the application starts to use a
    lot of RAM. I've kept memory utilization down by:

    * breaking the process down into stages
    * using an autorelease pool for each stage
    * committing frequently during a stage (every 5000 or every 500
    records processed, depending on what is being done); a rough sketch
    of the loop follows below.
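
    The loop for one stage looks roughly like this (illustrative only;
    "context", "records" and "processRecord:" are stand-ins for my
    actual code):

        unsigned i, j, count, end;
        NSError *error;

        count = [records count];
        for (i = 0; i < count; i += 500) {        // one 500-record batch
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            end = MIN(i + 500, count);
            for (j = i; j < end; j++)
                [self processRecord:[records objectAtIndex:j]];

            error = nil;
            if (![context save:&error])            // commit the batch
                NSLog(@"save failed: %@", error);

            [pool drain];  // release objects autoreleased in this batch
        }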

    On Tiger I got the memory utilization down to around 128 MB for the
    entire 60-minute run.

    Now with Leopard, during the early stages memory utilization goes up
    to 1 GB and stays there for the entire run.

    The thing is, I don't know whether this means there's a leak or
    whether it's a problem with the way I measure memory utilization.

    I simply use Activity Monitor and look at the "Real Memory" and
    "Virtual Memory" columns. The utilization I refer to is the "Real
    Memory" column.

    This is no doubt simplistic, and I don't really know how the memory
    allocation schemes of Mac OS X work, much less what has changed in
    Leopard.

    Is this 1 GB simply used because it's there and no other process
    needs it? Will it go down again if Leopard needs the RAM? Is the
    program going to run out of memory and crash?

    What makes this a bit of a drag is that each test takes ages to
    complete, so I want to avoid using instrumented code or a managed
    execution environment if it's going to mean that the code takes 10
    times longer. Surely there's a better way?
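
    For what it's worth, I suppose I could log the figure from inside
    the app via Mach instead of eyeballing Activity Monitor. An untested
    sketch (the function name is just made up):

        #import <Foundation/Foundation.h>
        #include <mach/mach.h>

        // Untested: log this process's resident and virtual sizes.
        // resident_size should match Activity Monitor's "Real Memory".
        static void LogResidentSize(void)
        {
            struct task_basic_info info;
            mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

            if (task_info(mach_task_self(), TASK_BASIC_INFO,
                          (task_info_t)&info, &count) == KERN_SUCCESS)
                NSLog(@"real: %.1f MB  virtual: %.1f MB",
                      info.resident_size / (1024.0 * 1024.0),
                      info.virtual_size / (1024.0 * 1024.0));
        }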

    Some details that might help make sense of all this:

    Test Rig: Power Mac, 3 GHz, 2 GB RAM, Leopard 10.5.0
    Code: 10.4 SDK compiled with Xcode 3.0

    Any help would be much appreciated.

    Best regards,

    Frank
  • On Nov 13, 2007, at 12:19 PM, Frank Reiff wrote:

    > I simply use Activity Monitor and look at the "Real Memory" and
    > "Virtual Memory" columns. The utilization I refer to is the "Real
    > Memory" column.
    >
    > This is no doubt simplistic, and I don't really know how the memory
    > allocation schemes of Mac OS X work, much less what has changed in
    > Leopard.
    >
    Check out the following thread from a couple of weeks ago, especially
    Ben's and Chris's responses:

    http://lists.apple.com/archives/Cocoa-dev/2007/Nov/msg00241.html

    > Is this 1 GB simply used because it's there and no other process
    > needs it? Will it go down again if Leopard needs the RAM? Is the
    > program going to run out of memory and crash?
    >
    > What makes this a bit of a drag is that each test takes ages to
    > complete, so I want to avoid using instrumented code or a managed
    > execution environment if it's going to mean that the code takes 10
    > times longer. Surely there's a better way?
    >

    Since the memory utilization peaks early, can you analyze just the
    first few seconds or minutes? Check out the ObjectAlloc instrument
    in Instruments, and "heap" and "leaks" in the Terminal.

    Aaron Burghardt
  • Frank,

    > On Tiger I got the memory utilization down to around 128 MB for the
    > entire 60-minute run.
    >
    > Now with Leopard, during the early stages memory utilization goes up
    > to 1 GB and stays there for the entire run.

    That sounds like a bug, but it's hard to diagnose from your
    description.  If you can reproduce the problem in a sample project,
    please file a report at bugreport.apple.com.

    > Is this 1 GB simply used because it's there and no other process
    > needs it? Will it go down again if Leopard needs the RAM?

    No, not if you're doing most of your processing with Core Data.  On
    Tiger and Leopard, memory usage with Core Data should be fairly
    deterministic (what you do to yourself with threads is another
    story).  We don't automagically preheat any caches; we simply fetch
    what you ask us to.

    So it could be a leak, it could be that an autorelease pool has moved
    and you need another one, or something else has gone awry.
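
    If it's the context's registered-object graph that's growing, one
    thing to try after each save is turning the objects you're finished
    with back into faults, along these lines (a rough sketch; "context"
    and "fetchedObjects" are placeholders):

        // Sketch: after a save, turn processed objects back into faults
        // so the context can release their row data.  mergeChanges:NO
        // discards unsaved changes, so only do this after saving.
        NSEnumerator *e = [fetchedObjects objectEnumerator];
        NSManagedObject *object;

        while ((object = [e nextObject]) != nil)
            [context refreshObject:object mergeChanges:NO];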

    > What makes this a bit of a drag is that each test takes ages to
    > complete, so I want to avoid using instrumented code or a managed
    > execution environment if it's going to mean that the code takes 10
    > times longer. Surely there's a better way?

    'heap' can provide some useful data, especially if your batch process
    pauses at the end of an iteration.  Presumably, you have some idea of
    which objects should still be live at the beginning of the next
    iteration.
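
    Something as crude as pausing between iterations gives you a stable
    window to sample in.  A sketch (the "stage" counter is illustrative):

        #include <unistd.h>

        // Sketch: pause at the end of a stage so you can run
        // "heap <pid>" (and "leaks <pid>") from Terminal while the
        // process is quiescent.
        NSLog(@"stage %u done, pid %d; pausing for heap inspection",
              stage, getpid());
        sleep(30);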

    'leaks' is also handy, but only for a first pass.  It's easy to use,
    but suffers from many false negatives.

    You could also check whether there are any low-hanging performance
    optimizations with Shark or Instruments.  If your test runs faster,
    it may be easier to debug, or just nicer to use.  And you might get
    lucky and find it's slow because of the memory allocations, and get
    a 2:1 win.

    Instruments on Leopard has a Core Data template, and it will provide
    events for particularly expensive operations (faults that incur I/O,
    fetches, saves).

    It's pretty easy to fire up Instruments, sample your app, and then go
    "uhm, why is my code fetching from the database when the user
    breathes on it?" or "didn't I already fetch all those objects?  Why
    is it doing it again and again and ..."

    The detail inspector in Instruments will give you a stack trace, so
    you can see how notifications, KVO, or accessor methods go off and
    trigger more work than you expected.
    --

    -Ben