World tracking & Extended tracking


This is something that has always been a little unclear to me. Please forgive me if I’m off base, or repetitive, because I know I’ve asked something similar before.

While I’ve experimented a lot with world tracking, I’ve always wondered if, instead of anchoring a world tracking experience to the place you tap (i.e. moving the table around, then tapping to make it stationary), you could anchor it to a tracking image?

I think that “anchors” do this, but I haven’t a clue how to start with them.

To elaborate with an example: there is a symbol on the ground where you want players to start, and this is made into a tracking image. The program would then stick the world tracking environment to that image instead of to the ground plane, and would continue to evaluate the ground plane relative to it in order to expand the experience as normal.

The benefit of this is that all players would have the environment anchored in the same place. Instead of one person’s room being here and another person’s room being there, two people would be looking at approximately the same place and could, in effect, work together with more realism.

If this is possible, can you give me a clue as to how to begin using a tracking image like this?



Good question.
I asked before if we could have the size preset — like the table always being the same size.
I could see you making a group world-relative to the anchors from a tracking image, but like you I don’t understand the anchors yet.



I have been trying to figure this out too. Essentially, from what I understand, I need the location of the tracking image to set the X, Y, Z coordinates of the world tracked model/scene. Could @George, @connell, or someone else at Zappar please help us with this question?
It would be ideal if an object could still appear as if it is tracked to an image target, but anchored to the ground to prevent glitching or jumping.
Essentially, I want to use a 6 ft tall sign as an image target, but I want the model that appears in front of the sign (when triggered) to be anchored to the ground (through world tracking) rather than to the sign (through image tracking), as the latter tends to glitch at certain times of day (it is an outdoor scene). I want the sign to trigger the world tracking and tell the app where to anchor the world tracked AR experience by setting the XYZ coordinates according to the location of the sign relative to the user.

I hope this makes sense.


Hi guys,

This is generally a feature called “Extended Tracking” and is something we’re working on bringing to Studio soon through an easy-to-use subsymbol to sit alongside the ground placement one.

It’s not quite ready yet, but as you asked so nicely here’s a zapcode that shows it in action (as a PDF so it can definitely be printed at 100% scale) and the zpp that I published to it:
extended-demo-75mm.pdf (7.7 KB)
ZapcodeExtendedTracking-Demo.zpp (780.9 KB)

This demo uses zapcode tracking, with the assumption that the code is horizontal. Zapcode tracking is convenient for the extended tracking use case as detection can work when the code is much smaller in the camera view than an image-tracked target would need to be. You can go into the symbol, delete the circle-likely-horiz.zpt file, and train your own tracking image if you want (you’ll then get a red squiggly underline in the targetFinder script showing where you’ll need to swap the zpt path for your target).

It’s important to set the Target Scale referenced property in the parent symbol. This is the size in meters of 1 target unit (half the target height). For the demo PDF the zapcode is 75mm in diameter, so this is set to 0.0375 in the demo scene.
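The arithmetic is simple enough to sketch. Here’s a hedged example (the helper function is mine, not part of the Studio API) that reproduces the 0.0375 value for the 75mm demo code:

```javascript
// One target unit is half the target height, and Target Scale is that
// half-height expressed in meters. Function name is illustrative only.
function targetScale(targetHeightMm) {
  return (targetHeightMm / 2) / 1000; // half the height, converted mm -> m
}

console.log(targetScale(75)); // 0.0375, matching the demo scene's setting
```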

You can also set the Target Identifier referenced property to lock tracking to a particular zapcode. If this is blank it will lock on to the first code it sees (and show in the debug log the identifier for that code).

Essentially this uses a TriggerRegion to identify the position of the target relative to the camera (which is why it is important to set the scale correctly, so the distance to the target is measured in meters). It updates a world tracking anchor whenever the target is seen, along with applying some additional smoothing. The “up” direction always comes from the world tracker, so it will be vertical. The rotation around that “up” axis is driven by the target though - the content group’s X axis will be aligned with the target’s X axis (again with some more smoothing).
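As a rough illustration of the smoothing idea (not the actual subsymbol code — the function and blend factor are assumptions), each sighting of the target can nudge the stored anchor toward the new measurement rather than snapping to it:

```javascript
// Exponential smoothing sketch: blend the stored anchor position a fraction
// (alpha) of the way toward each new target measurement. Alpha is illustrative.
function smoothAnchor(current, measured, alpha = 0.2) {
  return current.map((c, i) => c + alpha * (measured[i] - c));
}

let anchor = [0, 0, 0];
anchor = smoothAnchor(anchor, [1, 0, 2]); // target seen 1m right, 2m ahead
console.log(anchor); // [0.2, 0, 0.4] - moved 20% toward the measurement
```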

There are 3 different ways the tracking can work:

  • Full world-tracking devices: Updates a custom anchor in a Z.WorldTracker, allows full movement when the target is not seen.
  • Gyro-devices: Still updates a custom anchor in a world tracker, but when target is not seen only the orientation of the device (driven by the gyro) will be updated.
  • Non-gyro devices (low-end Android phones): Doesn’t use a world tracker at all, just visual tracking (with some additional smoothing). When the target is lost, the content will simply remain on screen at the last seen position.
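The three tiers above amount to a capability-based fallback. A minimal sketch (the flag names and mode labels are mine, not Zappar’s API):

```javascript
// Pick the best available tracking tier for a device, mirroring the
// three-way fallback described above. All names here are illustrative.
function trackingMode(device) {
  if (device.hasWorldTracking) return "world"; // full 6DoF anchor updates
  if (device.hasGyro) return "gyro";           // orientation-only when target unseen
  return "visual";                             // visual tracking + smoothing only
}

console.log(trackingMode({ hasWorldTracking: false, hasGyro: true })); // "gyro"
```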

The subsymbol should manage all this complexity for you, and just exposes a single state for the tracking quality. If this is extended_limited or not_seen then you probably want to fade out the content group and ask the user to point back towards the target.
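In script terms, reacting to that state could look something like this (the `extended_limited` and `not_seen` state names come from the post above; the helper and the “tracking” state name are assumptions for illustration):

```javascript
// Decide whether to show the content group based on the subsymbol's single
// tracking-quality state. Poor-quality states per the post; helper illustrative.
function contentVisible(state) {
  return state !== "extended_limited" && state !== "not_seen";
}

console.log(contentVisible("tracking")); // true
console.log(contentVisible("not_seen")); // false -> fade out, re-prompt the user
```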

There are some app-level changes that will improve how well this works in the next Zappar app update; the details aren’t massively important but it is certainly a use-case we have in mind and are trying to optimise for.

WebAR will also support the same thing (the gyro-only world tracking mode from above, for now). Right now, due to the way we sandbox user javascript on the web, there is an extra frame delay which introduces too much wobbling to be useful, but we will work around that before making extended tracking a fully supported feature.

Happy experimenting, and let us know how you get on!



Thanks @simon
It was very helpful


Hi all,

Thank you for the information. Extended tracking is the exact capability I need. I am currently trying to modify the provided zpp file to understand how it works. I have a few issues/areas of confusion that hopefully someone can assist me with.

As shown in the image, I am receiving 4 errors when I go to preview this project. I am struggling to find where these references are/should be.

Additionally, it was mentioned that this uses zapcode tracking which can be replaced with a target image. How am I able to access this zapcode? If I drag the “ZapcodeExtendedTracking” subsymbol into a new project’s media library a tracking image is imported, but I can’t view it in the original project Hierarchy.

Can anyone provide some insight or suggestions?
Thank you for your time and any help you can provide,


I haven’t looked at it yet, but Simon said to go into the symbol and delete the circle-likely-horiz.zpt file, then train your new one. Don’t forget to update any error spots with the new tracking image file name. Don’t know if I’m any help - I won’t get to look at it until tomorrow.



Sorry for that - I’d uploaded an earlier zpp file by mistake. I’ve fixed the link above now (and tested that re-importing as a new project can be previewed correctly).

The TargetFinder is all handled in the targetFinder script node rather than through the hierarchy. That ensures only the custom anchor is exposed from the symbol for users to place their content; and internally the symbol handles making the content group relative to the target or the custom anchor as required. As I mentioned just deleting the zpt from inside the symbol will highlight where the script needs adjusting. It’s just line 4 of the targetFinder script that sets the tracking file to use:
let tf = Z.TargetFinder(symbol.mediaFiles.circle_likelyhoriz_zpt);


Thanks @simon I really appreciate it. I’ll check it out tomorrow. Do you guys mind if we use this in our projects, or would you prefer us to wait until it is released officially?

Thanks Again,



Feel free to use it - it’s just a combination of the public TargetFinder, TriggerRegion and WorldTracker APIs. That also means it should be “safe” in future, it doesn’t depend on any private internal details so I’d expect it to keep working fine with any future app updates.


Thank you so much for all the help!



Haven’t tried it yet but can we use this with a vertical tracking image?



Yes, vertical tracking works for me. I just followed the steps Simon suggested, replacing the tracking file referenced in the targetFinder script with a vertical one, and then rotated the content group by 90 degrees.

Awesome that there is a demo of this available, thanks for sharing!


@stevesanerd sorry for missing your post earlier - if you’re talking about image tracking targets, then vertical ones should mostly work just by inserting them into the symbol. The X axis of the anchor that is exposed is aligned with the X axis of the target, so any rotation of the target around that X axis (i.e. horizontal to vertical) should still work OK. The anchor will be at the origin of the target with Z pointing upwards, which is less useful for vertical targets, but as @sgi mentions you can just add a rotation in the parent symbol.
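For intuition on why that extra rotation is needed, here’s the underlying vector math (plain JavaScript, not the Studio API; the sign convention is an assumption and may be mirrored in your scene):

```javascript
// Rotate a vector about the X axis. For a vertical target, the target's Z
// axis points out of the wall; a 90-degree rotation about X maps it back
// to world "up". Sign convention here is illustrative.
function rotateAboutX([x, y, z], degrees) {
  const r = (degrees * Math.PI) / 180;
  return [x, y * Math.cos(r) - z * Math.sin(r), y * Math.sin(r) + z * Math.cos(r)];
}

console.log(rotateAboutX([0, 0, 1], -90)); // ≈ [0, 1, 0]: out-of-wall becomes up
```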

One thing that won’t work is the final fallback for non-world tracking and non-gyro devices; there’s an upTrigger node in the hierarchy of the project that you can move from [0, 0, 2] to [0, 2, 0] if you care about those devices.


Thank you Simon



Hi Simon

I was wondering if you had made any progress with the extended tracking subsymbol?
Very keen to use it:)


Hi Paddy,

For our own projects we’ve just been using variations on the zpp that I shared earlier in the thread. Have you looked into trying to use that, or is it missing some features that you’re looking for?

After some more thinking we’re not sure a subsymbol is the right way to expose this; it probably makes sense to be something you can just opt-in to on a target basis (a bit like the “use gyro” checkbox we have at the moment).

There are also interactions with our work on Instant Tracking, which has further complicated things; we’d like our extended tracking to work even on the web, where we don’t have access to ARKit and ARCore.

I won’t promise anything “soon” this time :) - but extended tracking that is easy to adopt and works on both the web and in native app Zappar runtimes is definitely a high priority for us.

Hopefully the subsymbol I’ve shared already is enough to get something working in the interim.




Hi Simon,

Trying to get the vertical extended tracking to work properly…
I uploaded a new tracking image and turned the content 90 degrees.

In Studio it looks good…
But when looking in the app, tracking works but the content is still lying down (horizontal).

Can you help me out?


Thanks

Tracking.zpp (1.3 MB)


Hi All,

When I tried to publish an altered version of the extended tracking demo I also ran into some compiling issues that had to do with the clipping script. I blanked out the following lines in the ‘clipping planes’ file:

Not only did it fix the compiling issue, but the project also runs much smoother and more precisely when viewing it on my phone. I am too new to this to fully understand the purpose of the clipping planes script, though.



I’ve read everything above and downloaded the examples.

For a test, I copied the target image but made the zapcode 180mm instead of 75mm. My theory was that the bigger the image, the more easily a device could pick it up from further away should you look back. I also set my Target Scale to 0.090 after retraining my version of the target image.

Trying it out, I seem to get a better experience with the smaller, original zapcode than with the one I’ve set up. Is this correct? Am I doing something wrong, or have I misread the above?