Camera focal length to FOV in degrees?

Basic idea of what I want to do: put a compass readout at the top of the screen as a Head-Up Display (HUD), preferably in portrait mode, that works across different devices.

The obvious thing I tried was to create the compass as a ring centered in the camera view and rotate it as the attitude orient rotates along the Y axis (something I figured out a long time ago). But this only works if I want the readout in the center of the screen, since moving it along the Y axis changes its height depending on the camera focal length and screen ratio, so I can't guarantee it sits at the top of the screen, or even on the screen, on all devices.

I then decided to try it as a texture that I scaled and translated on a plane placed relative to the screen where I wanted it, with the texture translation again driven by the attitude orient. This fixed a lot of things and even makes the HUD look more accurate, since it removes the curvature. But it is only accurate at the direct center of view: the FOV changes depending on the device, which means the speed of the compass doesn't match the speed of rotation. That looks weird and doesn't give a true measurement for the surrounding image.

So is there any way I can either figure out the device's exact field of view in degrees, or use a specific focal length camera transform to set that field of view to a known number that I can use for scaling?

Hi Bradley,

There are a couple of different routes here:

Obtaining data about the “actual” camera (Z.camera, without focalLength overridden)

TriggerRegion provides one way to map between the Screen (orthographic) and Camera (perspective) transforms. You can add a TriggerRegion in Z.screen (shape: none, so all triggers are reported regardless of their position) and place a couple of objects with the TriggerRegion's tag set, at some known z position relative to Z.camera - I'd go for an "origin" trigger at [0, 0, -1] and an "up" trigger at [0, 1, -1].

The TriggerRegion's triggers event will then let you examine the localPosition of those triggers - i.e. their coordinates in the TriggerRegion space (which is the same as screen space, assuming your TriggerRegion's position is [0,0,0] and scale is [1,1,1]). The z value of the local position doesn't mean much (the depth buffer is used very differently for orthographic and perspective cameras), but the x and y values will correspond to the coordinates in Z.screen that line up with those triggers.

The origin one will usually be very close to [0,0] in screen space, but that is not guaranteed - for example, ARKit World Tracking uses optical stabilisation, which means the camera origin actually moves around a bit.

The distance between the positions of those triggers in screen space then lets you calculate the scale factor between x/y coordinates on the camera's z=-1 plane and screen-space coordinates, which should be enough to calculate the camera-space coordinates corresponding to any screen position. To get the top/bottom coordinates in portrait you'll need the screen aspect too - the resize event of Z.screen is the easiest way to get that.
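Here's a rough sketch of that measurement as a script, in the same style as the snippet further down. The node name Trigger_Region, the trigger tags, and the exact shape of the triggers event payload (entries with tag and localPosition fields) are assumptions you'd adapt to your own hierarchy - log the event once to confirm the payload:

const Trigger_Region = symbol.nodes.Trigger_Region;

// Assumes two tagged objects relativeTo Z.camera: "origin" at [0, 0, -1]
// and "up" at [0, 1, -1], with the TriggerRegion at [0,0,0], scale [1,1,1].
Trigger_Region.on("triggers", (triggers: any) => {
    let origin: number[] | undefined;
    let up: number[] | undefined;
    for (const t of triggers) {
        if (t.tag === "origin") origin = t.localPosition;
        if (t.tag === "up") up = t.localPosition;
    }
    if (!origin || !up) return;

    // The triggers are 1 unit apart on the camera z=-1 plane, so their
    // screen-space separation (ignoring z) is the scale factor:
    const dx = up[0] - origin[0];
    const dy = up[1] - origin[1];
    const screenUnitsPerCameraUnit = Math.sqrt(dx * dx + dy * dy);

    // The top of the landscape screen is 1 screen unit from the centre,
    // i.e. 1 / scale camera units at z=-1, giving the vertical FOV:
    const halfVerticalFov = Math.atan(1.0 / screenUnitsPerCameraUnit);
    const fovDegrees = 2 * halfVerticalFov * 180 / Math.PI;
});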

Using a separate Z.CameraTransform with fixed focal length

Z.CameraTransform still has a relationship with the physical camera even if focalLength is overridden - the origin and the aspect ratio are still taken from the real camera, which makes it harder to calculate exactly how coordinates map back to screen positions.

We have added a new mode to Z.CameraTransform to make it easier to obtain a perspective view with a known mapping to screen space. Right now this isn't exposed through the ZapWorks properties panel or the TypeScript definitions, but you can still access it from a script as shown below:

const Camera_Transform = symbol.nodes.Camera_Transform;
Z.RequiresVersion.v400; // Ensure the scene is published with the right required version
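// focalLengthMode isn't in the TypeScript definitions yet, hence the casts: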
(Camera_Transform as any).focalLengthMode((Z as any).CameraTransform.FocalLengthMode.screen_units);

The relationship between focal length and FOV is shown in the diagram below. The units we use for focal length in screen_units mode are the same as Z.screen space - 1 unit is equal to half the height of the screen (in landscape).

[focal-diagram: focal length vs vertical field of view]

From the diagram, the relationship is:
half_vertical_fov = atan(1.0 / focalLength)

So let's say you set the focal length to 3 in screen_units mode - that is equivalent to a vertical field of view of 2 * atan(1.0 / 3) ≈ 36.87 degrees.
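As a quick sanity check in script form (plain math, no Zappar API needed):

// Vertical FOV in degrees (landscape) for a focal length in screen_units mode
function verticalFovDegrees(focalLength: number): number {
    return 2 * Math.atan(1.0 / focalLength) * 180 / Math.PI;
}

verticalFovDegrees(3); // ≈ 36.87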

If you now place objects at z=-3 relativeTo this camera transform, they will exactly line up with things in Z.screen. So [0, 1, -3] will be at the top of the screen in landscape mode (for portrait you'll need to know the screen aspect - see above for that). For other z values you'll need to scale the x/y coordinates, since it's a perspective transform - so [0, 2, -6] would also be at the top of the landscape screen, for example.
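And a sketch of what that placement might look like in a script - Compass_Plane is a hypothetical node name, and the portrait aspect value is a placeholder you'd keep updated from Z.screen's resize event (treat the exact aspect convention as something to verify for your target devices):

const compass = symbol.nodes.Compass_Plane; // hypothetical node name

// In screen_units mode with focalLength = 3, the z=-3 plane relativeTo the
// camera transform lines up 1:1 with Z.screen coordinates.
// 1 unit = half the screen height in landscape = half the width in portrait,
// so the top of a portrait screen is at y = height / width in those units.
let portraitHeightOverWidth = 16 / 9; // placeholder; derive from the resize event

compass.position([0, portraitHeightOverWidth, -3]);

// Perspective scaling: the same screen point at twice the distance needs
// doubled x/y, e.g. [0, 2 * portraitHeightOverWidth, -6].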

It’s pretty complicated in general - but hope that helps!

Simon

Thanks, this has been really helpful.
I actually use trigger regions to determine which direction the camera is facing relative to the photosphere, since I can't get the photosphere's rotation directly (unless there is a way to do that, which would reduce the workload on the user's device as well as the lines of code).

But I hadn't thought about using the trigger regions to measure the camera. I'm going to use that method for this project, but I also think the focal length mode could be extremely helpful for many things, and I'll play with it a bit while waiting for it to be exposed in the editor.