There are a couple of different routes here:
Obtaining data about the “actual” camera (Z.camera, without focalLength overridden)
TriggerRegion provides one way to map between the Screen (orthographic) and Camera (perspective) transforms. You can add a TriggerRegion in Z.screen (shape: none, so all triggers are reported regardless of their position) and place a couple of objects with the TriggerRegion’s tag set at some known z position relative to Z.camera - I’d go for an “origin” trigger at [0, 0, -1] and an “up” trigger at [0, 1, -1].
The triggers event will then let you examine the localPosition of those triggers - i.e. their coordinates in the TriggerRegion space (which is the same as screen space, assuming your TriggerRegion position is [0,0,0] and scale is [1,1,1]). The z value of the local position doesn’t mean much (the depth buffer is used very differently for orthographic and perspective cameras), but the x and y values will correspond to the coordinates in Z.screen that would line up with those triggers.
The origin one will usually be very close to [0,0] in screen space, but that is not guaranteed - for example ARKit World Tracking uses optical stabilisation which means the camera origin actually moves around a bit.
The distance between the positions of those triggers in screen space then lets you calculate the scale factor between x/y coordinates on the camera z=-1 plane and screen space coordinates, which should be enough to calculate camera space coordinates corresponding to any screen position. To get the top/bottom coordinates in portrait you’ll need the screen aspect too - the resize event of Z.screen is the easiest way to get that.
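To make the steps above concrete, here's a minimal sketch of the mapping (the function and type names are my own, not part of the Zappar API; it assumes the mapping is a uniform scale plus offset, i.e. no rotation between the two spaces):

```typescript
type Vec2 = [number, number];

// Given the screen-space (localPosition x/y) values reported by the triggers
// event for the "origin" trigger at [0, 0, -1] and the "up" trigger at
// [0, 1, -1] relative to Z.camera, map any screen coordinate back onto the
// camera-space z = -1 plane.
function screenToCameraPlane(
  originScreen: Vec2, // screen-space position of the z=-1 "origin" trigger
  upScreen: Vec2,     // screen-space position of the z=-1 "up" trigger
  point: Vec2         // the screen coordinate we want to map
): [number, number, number] {
  // The two triggers are 1 camera-space unit apart, so their screen-space
  // separation is the scale factor between the two spaces:
  const scale = Math.hypot(
    upScreen[0] - originScreen[0],
    upScreen[1] - originScreen[1]
  );
  // Subtract the (possibly non-zero) projected camera origin, then convert
  // back to camera-space units:
  return [
    (point[0] - originScreen[0]) / scale,
    (point[1] - originScreen[1]) / scale,
    -1,
  ];
}
```

For example, if the origin trigger lands at [0, 0] and the up trigger at [0, 0.5] in screen space, then the top of the landscape screen ([0, 1] in Z.screen) maps to [0, 2, -1] in camera space.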
Using a separate Z.CameraTransform with fixed focal length
Z.CameraTransform still has a relationship with the physical camera, even if focalLength is overridden - the origin and the aspect ratio are still taken from the real camera, which makes it harder to calculate exactly how coordinates map back to screen positions.
We have added a new mode to Z.CameraTransform to make it easier to obtain a perspective view with a known mapping to screen space. Right now this isn’t exposed through the ZapWorks properties panel or the TypeScript definitions, but you can still access it from a script as shown below:
const Camera_Transform = symbol.nodes.Camera_Transform;
Z.RequiresVersion.v400; // Ensure the scene is published with the right required version
(Camera_Transform as any).focalLengthMode((Z as any).CameraTransform.FocalLengthMode.screen_units);
The relationship between focal length and fov is shown by the diagram below. The units we use for focal length in screen_units mode are the same as Z.screen space - 1 unit is equal to half the height of the screen (in landscape).
From the diagram, the relationship is:
half_vertical_fov = atan(1.0 / focalLength)
So let’s say you set focal length to 3 in screen_units mode - that is equivalent to a vertical field of view of 2 * atan(1.0 / 3) = 36.8 degrees.
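That calculation, expressed as a small runnable helper (the function name is my own, for illustration):

```typescript
// half_vertical_fov = atan(1.0 / focalLength), so the full vertical fov is
// twice that, converted here from radians to degrees.
function verticalFovDegrees(focalLength: number): number {
  return 2 * Math.atan(1.0 / focalLength) * (180 / Math.PI);
}

verticalFovDegrees(3); // ≈ 36.87 degrees, matching the example above
```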
If you now place objects at z=-3 relativeTo this camera transform, they will exactly line up with things in Z.screen. So [0, 1, -3] will be at the top of the screen in landscape mode (for portrait you’ll need to know the screen aspect - see above for that). For other z values you’ll need to scale the x/y coords as it’s a perspective transform - so [0, 2, -6] would also be at the top of the landscape screen, for example.
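The depth scaling can be wrapped up in a helper like this (again, a sketch with a name of my own; it assumes a Z.CameraTransform in screen_units mode with the given focalLength):

```typescript
// Map a Z.screen coordinate to camera-space coordinates at an arbitrary depth.
// At depth === focalLength the mapping is 1:1; at other depths x/y scale
// proportionally, because it's a perspective transform.
function screenToCameraSpace(
  screenX: number,
  screenY: number,
  focalLength: number,
  depth: number // distance in front of the camera transform (positive)
): [number, number, number] {
  const s = depth / focalLength;
  return [screenX * s, screenY * s, -depth];
}

screenToCameraSpace(0, 1, 3, 3); // → [0, 1, -3], top of landscape screen
screenToCameraSpace(0, 1, 3, 6); // → [0, 2, -6], also top of landscape screen
```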
It’s pretty complicated in general - but hope that helps!