Thanks for getting in touch.
There are a few things to watch out for with the experience you’ve described:
A normal sign won’t provide sufficient contrast and detail to function as an effective tracking image, leading to jitter.
Using the signs as tracking images would require you to add zapcodes to the physical signs themselves.
Only one tracking image is recognised per experience, so the app won’t be able to transition between displaying content from one tracking image and another. There is a possible workaround described in this thread, but it requires a substantial amount of coding.
Lastly, the Zappar app does not recognise or track real-world objects, so your on-screen arrows won’t necessarily point to the object you’d like; where they appear to point will depend on the user’s position when scanning.
Regarding your ‘target being seen’ functionality: in Studio you can use a target image node’s ‘seen’ event (via an attached script) to define any actions that should occur when the target image comes into view, and the corresponding ‘notseen’ event for when it is lost.
Hope this helps, all the best.