CATransform3D perspective question

  • I'm having some trouble using Core Animation's 3D transforms to get
    a perspective effect.  Part of the problem is that the exact nature
    of the transformations is not documented as far as I can tell, and
    they don't seem to behave the way I've seen such transforms used
    before.

    The CATransform3D type is a 4 by 4 matrix.  Typically these matrices
    are multiplied by the vector (x,y,z,1) to return a vector (x', y', z',
    s) and the display co-ordinates are (x'/s, y'/s), with z'/s being used
    for z-buffer values if drawing is done with z-buffering (which I don't
    think CA supports).
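
    For concreteness, here is a minimal sketch of that convention as it
    would apply to a CATransform3D, assuming row vectors (which is
    consistent with translations landing in the m4{1,2,3} slots, as
    noted below); the helper is illustrative, not part of the API:

    #include <QuartzCore/QuartzCore.h>
    #include <stdio.h>

    /* Illustrative helper: apply a CATransform3D to the row vector
       (x, y, z, 1) and perform the homogeneous divide by s to get
       display co-ordinates. */
    static void projectPoint(CATransform3D m, CGFloat x, CGFloat y, CGFloat z)
    {
        CGFloat xp = x*m.m11 + y*m.m21 + z*m.m31 + m.m41;
        CGFloat yp = x*m.m12 + y*m.m22 + z*m.m32 + m.m42;
        CGFloat s  = x*m.m14 + y*m.m24 + z*m.m34 + m.m44;
        printf("(%g, %g)\n", (double)(xp / s), (double)(yp / s));
    }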

    My initial tests supported my view that this was the way CA was going
    to use the matrix.  Rotations about the z axis put sin(t) and cos(t)
    values into the m11, m12, m21 and m22 slots; translations affect the
    m4{1,2,3} slots; scales work as expected.  Most importantly, the only
    example I could find for applying a perspective transformation was
    three lines of code in the CA "Layer Geometry and Transforms" docs,
    Listing 2, which uses the standard technique of putting -(1/distance)
    into the m34 slot (this exact same code also appears in the CoverFlow
    example).  Adjusting the zPosition value for a layer zooms the layer
    in and out.  So far so good.
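
    For reference, the technique in question amounts to something like
    this (the 500-point eye distance and the parentLayer name are
    assumptions for illustration):

    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / 500.0;               /* -(1/distance) */
    parentLayer.sublayerTransform = perspective;

    With this in place a sublayer's s component works out to 1 - z/500,
    so raising zPosition shrinks the divisor and the layer appears to
    zoom in.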

    The problem is simply this: applying a 90 degree rotation about the y
    axis does NOT turn the image edge-on.  I've hooked up a rotary
    NSSlider to an action that makes a transform thus:
    flip = CATransform3DMakeRotation(rotation, 0, 1, 0);
    I'm printing this transform and then setting the transform on a layer
    containing an image.  Rotating the slider rotates the image, and if
    the slider is set to 90 degrees then the transform is as expected:
    1.0 in m13, m22, m31 and m44, with zeros everywhere else.  This
    _should_ produce a transform where the output 'x' co-ordinates are
    independent of the 'x' input (and in fact should be solely dependent
    on the 'z' position).  Unfortunately, when I do this I get an image
    which looks like it's turned by about 75 degrees, definitely not
    edge-on, and very much with the output x co-ordinates dependent on
    the input x value.  I have to turn the slider to about 115 degrees to
    get the image edge-on.
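
    Spelling out that expectation with the row-vector convention sketched
    earlier (signs aside, since the rotation may put -1.0 rather than 1.0
    in one of the m13/m31 slots):

    flip = CATransform3DMakeRotation(M_PI / 2, 0, 1, 0);
    /* Row-vector multiply: x' = x*m11 + y*m21 + z*m31 + m41.
       At 90 degrees m11 = 0 and m31 = ±1, so x' = ±z, which is
       independent of the input x. */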

    So, my question is: what exactly is the process by which the layer
    co-ordinates are converted to the display co-ordinates?  The
    documentation on this seems to be missing, and while it looks like it
    should be fairly standard, it does not function as expected.  Any
    help would be much appreciated.

    Nicko
  • Presumably you did what the samples do and set the perspective matrix
    as the sublayerTransform property of the superlayer of the layer
    you're rotating?

    Both the sublayerTransform and transform properties are applied
    relative to the anchor point (typically the center, though AppKit
    sets the anchor of layers it creates to the bottom-left corner,
    IIRC) of the layer the property is set on.  So if you have two
    layers, one with a perspective matrix, and its child rotated around
    the Y axis by 90°, you will only see the child layer exactly edge-on
    when its center is aligned with the center of its superlayer.  Does
    that explain what you're seeing?
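
    A sketch of that alignment (the layer names are illustrative):

    /* Center the rotating child on its superlayer so the two anchor
       points coincide before the Y rotation is applied. */
    child.position = CGPointMake(CGRectGetMidX(parent.bounds),
                                 CGRectGetMidY(parent.bounds));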

    Other than that, your expectations seem correct.

    John

  • On 1 Dec 2007, at 17:10, John Harper wrote:

    > Presumably you did what the samples do and set the perspective
    > matrix as the sublayerTransform property of the superlayer of the
    > layer you're rotating?

    Yes, that's right.

    > Both the sublayerTransform and transform properties are applied
    > relative to the anchor point (typically the center, though AppKit
    > sets the anchor of layers it creates to the bottom-left corner,
    > IIRC) of the layer the property is set on.  So if you have two
    > layers, one with a perspective matrix, and its child rotated around
    > the Y axis by 90°, you will only see the child layer exactly
    > edge-on when its center is aligned with the center of its
    > superlayer.  Does that explain what you're seeing?

    Aha.  Thank you.  That explains exactly what I'm seeing.  The image
    that was being rotated was off-centre in the layer that contained the
    perspective transformation, and turning it through 90 degrees left it
    "pointing towards the origin" in the parent layer.  Adding an
    intermediate (empty) layer, moving the perspective transform up to
    the intermediate layer, and then centring the layer containing the
    image to be rotated within that intermediate layer now allows the
    image to rotate as I would expect.
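
    In outline, the working hierarchy looks something like this (the
    layer names and the 500-point eye distance are illustrative):

    CALayer *perspectiveLayer = [CALayer layer];  /* empty intermediate */
    perspectiveLayer.frame = container.bounds;

    CATransform3D p = CATransform3DIdentity;
    p.m34 = -1.0 / 500.0;
    perspectiveLayer.sublayerTransform = p;
    [container addSublayer:perspectiveLayer];

    /* Centre the image layer so its anchor point lines up with the
       centre of the perspective layer before the rotation is set. */
    imageLayer.position = CGPointMake(CGRectGetMidX(perspectiveLayer.bounds),
                                      CGRectGetMidY(perspectiveLayer.bounds));
    [perspectiveLayer addSublayer:imageLayer];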

    I've filed a bug report on the documentation for this, since there is
    currently no description of the nature of the transformations that
    take place.

    Thanks again for the help,

    Nicko