Percentage of Tracking Image Seen



Your tracking image recognition is really robust! It even works on just a portion of the tracking image.

So, I’m wondering if your algorithm is able to compute the percentage of the tracking image that is actually “seen”?

If so, would it be possible to set a threshold value that controls whether or not the experience is displayed?
For instance, if less than 50% of the tracking image is seen then don’t show the experience.

What I have in mind is to be able to build a tracking image from different images, but with the experience showing up only if all the images are present.
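To make the idea concrete, here is a purely hypothetical sketch of the gate I have in mind. It assumes the platform exposed a “fraction of the tracking image currently seen” value, which (as far as I know) it does not today — the names here are mine, not a real API:

```typescript
// Hypothetical gate: only show the experience when enough of the
// tracking image is visible. `seenFraction` is an assumed value in
// the range 0..1 -- the platform does not currently expose this.
function shouldShowExperience(seenFraction: number, threshold: number = 0.5): boolean {
  return seenFraction >= threshold;
}

// With a 50% threshold:
console.log(shouldShowExperience(0.3)); // false -- keep the experience hidden
console.log(shouldShowExperience(0.8)); // true  -- show the experience
```

With a composite tracking image made of four tiles, a threshold near 1.0 would mean the experience only appears once all the tiles are in place.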

Great job, anyway! :slight_smile:


I would love the answer to this as well!

Here is my tracking image:

This is made up of four images superimposed. However, any two of the images, for example this:

…will trigger the actions.

And sometimes, even a single image, such as this:

…will trigger the actions.

This is a shame because the puzzle I’m working on is dependent on how the player places the images together, superimposed.


A post was split to a new topic: How does a zapcode work?


Hi @shawnjoh,

Thank you for your question - this idea is very creative and not something we have really seen before! There are, however, a few issues that will occur when building a tracking image in this way.

To start with, the tracking image’s composition doesn’t follow best practice, and even with all four images superimposed we still wouldn’t advise using it. The best tracking images have a good amount of high-contrast detail spread across the whole image. Images that are too simple or unbalanced may not track as well, and where possible you should avoid repetitive high-contrast patterns. The more diverse the areas of the image are, the better the tracking will be. Your tracking image is relatively simple with repetitive patterns, meaning it doesn’t follow our ‘What makes a good tracking image’ guidelines.

You have also found out that only a portion of a tracking image needs to be visible for the ‘seen’ state to be activated (and its actions triggered). Our system tries its best to track an image even when not all of it is in view, so even a single one of your component images is recognized as the target and initiates the experience.

What we suggest is that you add some complexity to the tracking image (colors and a background wouldn’t hurt) and keep testing to see if you can get this idea working as you expected. Alternatively, you could ask players to only scan the zapcode once they have all four images in place, removing the issue of a single image triggering the experience.

Please post the finished project on the show and tell page as we are excited to see how this ends up. :slight_smile:

Hope this helps,



Great idea - actually something I was wondering about today. I’d also like to have this in a future release. If the user loses the image (for instance, by rotating it too far away from the camera), we could show something like “Please be sure you keep the image in sight!”.


Hey again Ferdy,

We do actually have a template for what we refer to as a Look For subsymbol. This subsymbol plays an animation that prompts the user to point their device at the tracking image if it’s not in view. This is done through the target image’s seen and notseen events.

Please note that while the Target Events video makes use of scripting, the same can be accomplished through Actions.

You can grab our Look For template, as well as instructions on its use, over on the forum post shown below:

Hope this helps.


Thank you so much for the clarification of how things work with a zap code! That makes a lot of sense.