Shadow Scaling

  • I know that you are going to tell me that NSShadow works as advertised,
    but I had hoped that my fears wouldn't be realized.  I create miniature
    views of my main view for various reasons and also allow the user to set the
    scale on the main view.  The problem is that the shadow offsets do not scale
    with the views.  Everything else scales properly, but in small views the
    shadows can be so far off as to be separated completely from the object.
    That is not good.  I was hoping that even though shadow offsets are
    independent of applied transforms, they might at least not be independent
    of the main view's scaling transform.

        The drawn objects have no knowledge of their environment, and I was
    hoping to keep it that way.  It appears that I'm going to have to do
    something to pass the view scaling to the drawing routines for the objects
    so they can scale the shadow offsets appropriately.

        I already had to subclass NSShadow in order to bind the joystick's
    polar-coordinate offset to NSShadow's Cartesian offsets.  I simply added
    the polar ivars and property declarations and then overrode NSShadow's
    -set method to do that.  I suppose that override might also be the place
    to scale the offset for the view scaling.
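
        Roughly, the override looks like this (a minimal sketch; the class
    and property names are illustrative, not my actual code):

        #import <Cocoa/Cocoa.h>

        // Hypothetical subclass that binds polar joystick values to
        // NSShadow's Cartesian offset.
        @interface GAPolarShadow : NSShadow
        @property (nonatomic) CGFloat offsetAngle;   // radians
        @property (nonatomic) CGFloat offsetRadius;  // points
        @end

        @implementation GAPolarShadow
        // -set applies the shadow to the current graphics context, so the
        // Cartesian offset is derived from the polar values just before.
        - (void)set {
            self.shadowOffset = NSMakeSize(self.offsetRadius * cos(self.offsetAngle),
                                           self.offsetRadius * sin(self.offsetAngle));
            [super set];
        }
        @end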

        Is there any way to specify a shadow-offset scaling factor just once
    for a scaled view?  The current behavior is clearly unacceptable.

        NSShadow, unlike NSGradient, is a mutable object, so you really need
    only one.  One option might be to use a single (subclassed) NSShadow per
    window (or view) instead of attaching one to each shadowed object as I am
    now doing.  The scale factor could be embedded in the shared NSShadow
    subclass and updated when the window is scaled (i.e., the frame/bounds
    relationship).  My own shadow object could just load the shared
    NSShadow's params when needed.  Maybe that's a solution.  I just hate
    having to do a workaround for something that, IMHO, should have been
    included (at least as an option) in the system's (or Cocoa's)
    shadow-rendering process, and I really don't want to go back to doing my
    own shadows.

        Another possibility -- is there any way to offset the transparency
    layer when drawing a shadow?
  • Shadows draw relative to what Core Graphics calls the 'base' CTM,
    which is a transformation matrix independent of the one that applies
    to normal drawing.  This matrix isn't directly accessible.  It's
    usually set up to be the identity matrix, but if there's a user-space
    scale factor (as used for resolution independence, settable in Quartz
    Debug), then it scales by that factor.  This is the same as the
    transformation applied to the contentView of a window by default.
    (The scale factor can also be retrieved with -[NSWindow
    userSpaceScaleFactor].)

    You want your shadows to behave as if they drew in the space of your
    view itself.  That isn't how it works, but you can emulate the effect
    by keeping track of what coordinate space everything is in.  Instead
    of setting offset and blur radius on your shadow object, have clients
    of your shadow set new properties, userSpaceShadowOffset and
    userSpaceShadowBlurRadius.  Then, when the -set method is called on
    your shadow object, you can use a method like -[NSView
    convertSize:toView:] (via [NSView focusView]) to convert
    userSpaceShadowOffset and userSpaceShadowBlurRadius into parameters
    in the space of the contentView, then set a shadow with that offset
    and blur radius.  That will work, because that's the space in which
    Core Graphics interprets the numbers.  Make sense?
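
    In code, that might look something like the following (just a sketch:
    the class name is made up and sign handling for flipped views may
    need care; treat it as an illustration, not tested code):

        @interface GAUserSpaceShadow : NSShadow
        @property (nonatomic) NSSize userSpaceShadowOffset;
        @property (nonatomic) CGFloat userSpaceShadowBlurRadius;
        @end

        @implementation GAUserSpaceShadow
        - (void)set {
            NSView *focusView = [NSView focusView];
            NSView *contentView = [[focusView window] contentView];
            if (focusView != nil && contentView != nil) {
                // Convert from the focused view's space to the
                // contentView's space, which is where Core Graphics
                // interprets shadow parameters.
                NSSize offset = [focusView convertSize:self.userSpaceShadowOffset
                                                toView:contentView];
                NSSize blur = [focusView convertSize:NSMakeSize(self.userSpaceShadowBlurRadius, 0.0)
                                              toView:contentView];
                self.shadowOffset = offset;
                self.shadowBlurRadius = fabs(blur.width);
            }
            [super set];
        }
        @end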

    Hope that helps.
    -Ken

    PS - If it seems clearer to you, you can work directly with matrices
    instead of converting between view coordinate spaces.
    CGContextGetCTM([[NSGraphicsContext currentContext] graphicsPort])
    gets you the current transformation matrix inside -[NSShadow set].
    You'd have to calculate the 'base' transform matrix yourself using
    the info in the first paragraph of this message.
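
    For instance (again just a sketch: it assumes a uniform scale with no
    rotation in the CTM, and the helper name is made up):

        #import <Cocoa/Cocoa.h>

        static void GASetScaledShadow(NSWindow *window, NSSize userOffset,
                                      CGFloat userBlur)
        {
            CGContextRef ctx =
                (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
            CGAffineTransform ctm = CGContextGetCTM(ctx);

            // Scale of the current drawing space (uniform scale assumed).
            CGFloat drawScale = hypot(ctm.a, ctm.b);

            // The 'base' CTM is the identity except for the user-space
            // scale factor, so its scale is just that factor.
            CGFloat baseScale = [window userSpaceScaleFactor];
            CGFloat relative = drawScale / baseScale;

            NSShadow *shadow = [[NSShadow alloc] init];
            [shadow setShadowOffset:NSMakeSize(userOffset.width * relative,
                                               userOffset.height * relative)];
            [shadow setShadowBlurRadius:userBlur * relative];
            [shadow set];
        }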

  • Ah, thank you, thank you.  That is the type of information I was hoping
    to elicit.  I haven't had a chance to delve into it yet, but it should
    prove useful.

        I re-architected my code to use a single NSShadow in my rendering
    code, rather than putting one into each of my shadowed objects.  In one
    case I use the scaling scroll view from Sketch, and I lifted and modified
    the related Sketch code for part of my own sizing controller.  Since that
    controller had the bound scale factor, I set an outlet in my renderer to
    access the factor and then use it to scale the shadow offset and blur
    radius at render time.  Not as clean as I would like, since it broke some
    of my encapsulation, but it works.  That same sizing controller does
    double duty (depending on how it is connected) in my down-scaled preview
    windows, but I haven't addressed that case yet.
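
        In outline, the renderer side looks something like this (a rough
    sketch with illustrative names, not my actual code):

        // One shared, mutable shadow, reconfigured per shadowed object.
        @interface GARenderer : NSObject {
            NSShadow *_sharedShadow;
        }
        @property (nonatomic) CGFloat viewScaleFactor; // bound via the sizing controller
        - (void)applyShadowWithOffset:(NSSize)offset blurRadius:(CGFloat)blur;
        @end

        @implementation GARenderer
        - (void)applyShadowWithOffset:(NSSize)offset blurRadius:(CGFloat)blur {
            if (_sharedShadow == nil)
                _sharedShadow = [[NSShadow alloc] init];
            // Scale the user-space parameters by the view's scale factor
            // before applying the shadow to the current context.
            [_sharedShadow setShadowOffset:NSMakeSize(offset.width * self.viewScaleFactor,
                                                      offset.height * self.viewScaleFactor)];
            [_sharedShadow setShadowBlurRadius:blur * self.viewScaleFactor];
            [_sharedShadow set];
        }
        @end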

        One thing I noticed, unless it's an optical illusion, is that the
    visible blur radius scales, but less than linearly.  I suspect that
    something about the density of the algorithm's computations causes that.
    The disparity becomes more obvious as the scale increases.

        BTW, is there some reason why NSGradient is (at least color-wise)
    immutable?  I currently attach one to each shaded object but would prefer
    to use only one in my renderer (as described above for shadows).  If done
    in the renderer, every draw operation requires a new NSGradient object,
    which should keep the garbage collector employed.  (Sometimes I really
    miss stack objects.  :-)
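
        In the meantime, one way to keep allocation down would be to cache
    gradients keyed by their colors instead of building one per draw
    (again a sketch with made-up names):

        @interface GAGradientCache : NSObject {
            NSMutableDictionary *_cache;
        }
        - (NSGradient *)gradientFromColor:(NSColor *)start toColor:(NSColor *)end;
        @end

        @implementation GAGradientCache
        - (id)init {
            if ((self = [super init]))
                _cache = [[NSMutableDictionary alloc] init];
            return self;
        }

        - (NSGradient *)gradientFromColor:(NSColor *)start toColor:(NSColor *)end {
            // NSArray and NSColor both conform to NSCopying, so the color
            // pair can serve directly as a dictionary key.
            NSArray *key = [NSArray arrayWithObjects:start, end, nil];
            NSGradient *gradient = [_cache objectForKey:key];
            if (gradient == nil) {
                gradient = [[NSGradient alloc] initWithStartingColor:start
                                                         endingColor:end];
                [_cache setObject:gradient forKey:key];
            }
            return gradient;
        }
        @end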
