Scanning an image triggers more than one image tracker

I’m using Universal AR in Unity and I have 9 images that I want to track. My problem is that when I scan the first image (I scan it from my screen), it triggers 8-9 image trackers depending on how far from the screen I am. So if I move my mobile device back and forth, I can trigger all 9 image trackers with a single image.

My images have similar shapes and colors, but changing the colors doesn’t fix the problem and changing the shapes isn’t an option (it’s an Easter game and the images are egg-shaped with different patterns inside each egg).

I’ve read that scanning from a screen can cause issues with other AR tools, and I have no way of printing the images. Could this be the problem?

EDIT: I tried making my images more distinct (img1, img2, img3) by changing colors, adding things in the corners, and removing the whole egg to keep just the bunny. Nothing worked.

So as an additional question, what makes 2 images different enough?

Hi,

Firstly, our image tracking (and most other AR systems that I’m aware of) only uses a greyscale view for tracking, so switching colours doesn’t do much to distinguish images. That said, swapping dark/light colours would provide a visible difference to the tracker.

Image tracking tries quite hard to keep tracking even when the camera image doesn’t match exactly - for example due to partial occlusion (a user’s hand covering part of the target), shadows, imperfect printing, a non-flat printout, etc.

That’s generally what you want when you know which target is being looked at, but it isn’t helpful when you’re trying to work out which one of a set of similar images is in view. As an extreme example, if 2 images are identical apart from one corner, then you want the tracker to use that corner to “tell them apart”, but to continue tracking if that corner is covered up by the user’s hand.

Historically at Zappar we’ve used zapcodes to solve the “which target is this” problem, so our tracking is geared towards trying really hard to find a known target in an image rather than towards solving the “are you sure it’s this target” problem. Zapcodes are not yet available through Universal AR though, so that’s not a solution you can use right now.

We are working on improvements to image tracking to make it better suited to telling similar targets apart, and to multiple-target applications in general (9 independent trackers isn’t great from a performance point of view, though it’s the only way to handle multiple targets for now).

Until then, you unfortunately can’t rely on a target being reported as “seen” in a single frame as a reliable indicator of the target actually being there. You could monitor how many frames it is found for versus the others, wait until it is reported at a certain size in the image (close-up is less likely to mismatch), or encourage people to scan straight on and only accept targets where a vector pointing straight out of the target is within 15 degrees or so of the camera viewing direction.
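As a rough illustration of that straight-on check, here’s a minimal Unity C# sketch. It only uses standard Transform maths - the component and field names (e.g. `targetAnchor`) are placeholders, and how you obtain the posed anchor transform and where you call the check from depends on your Universal AR setup:

```csharp
using UnityEngine;

// Sketch of the "only accept targets viewed straight on" idea above.
// Assumption: `targetAnchor` is the Transform your image tracker poses for
// this target, with its forward axis pointing out of the image. Depending
// on how your anchor is oriented you may need to flip the sign.
public class StraightOnFilter : MonoBehaviour
{
    public Transform targetAnchor;    // posed by the image tracker (assumed)
    public Camera arCamera;           // the camera rendering the AR view
    public float maxAngleDegrees = 15f;

    public bool IsFacingCamera()
    {
        // Direction from the camera to the target.
        Vector3 cameraToTarget =
            (targetAnchor.position - arCamera.transform.position).normalized;

        // When viewed head-on, the vector pointing straight out of the
        // target is roughly opposite to cameraToTarget.
        float angle = Vector3.Angle(targetAnchor.forward, -cameraToTarget);
        return angle <= maxAngleDegrees;
    }

    // Call this from wherever you currently react to the target being seen.
    public void TryActivate(GameObject content)
    {
        if (IsFacingCamera())
            content.SetActive(true);
    }
}
```

You could combine this with the frame-count idea, i.e. only activate content once the target has both been seen for several frames and passed the angle test.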

Alternatively, as you suggest, you can try to add more differences between the images. Text often provides strong, unique texture - perhaps you could add a number to each target too? The 3 images you’ve shown look like they should be “different enough” to me, though.

Hope that helps a bit!

The images work “fine” if I’m just tracking them (by “fine” I mean that if I’m too close to an image when I scan it, I can see flickers of the other GameObjects). My problem is that I use the images to activate GameObjects that are then instant tracked, and I don’t deactivate them when their image is no longer seen. So activating all my targets by scanning one image is kind of a deal breaker.

I’ve tried adding black bars in the corners of the images, but it didn’t help much. I’ll try adding numbers to the pictures instead of a bunny, and requiring an image to be seen for multiple frames before activating its GameObject. Thank you for the idea.

Would it help to hide a different QR code in each picture?
Would scaling the images to a higher resolution help?

EDIT: I wanted to point out that the closer I am to the target, the more error-prone the scanning is. It obviously gets worse if the whole image is not in the frame.

EDIT 2: After trying different things, the best result I got was by requiring the image to be on screen for at least 40 frames and preventing activation of the other GameObjects while one is already active. Hopefully that can help someone else in the future.
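A minimal sketch of that approach, in case it helps. The OnSeen/OnNotSeen methods are placeholders - wire them to whatever seen/not-seen callbacks your image tracking target component exposes; the rest is plain Unity:

```csharp
using UnityEngine;

// Sketch of the EDIT 2 approach: activate a target's content only after its
// image has been seen for a number of consecutive frames, and block all
// other targets while one is already active.
public class DebouncedActivator : MonoBehaviour
{
    public GameObject content;        // the instant-tracked content for this image
    public int requiredFrames = 40;   // frames the image must stay visible

    static bool anyTargetActive;      // global lockout shared by all targets

    int seenFrames;
    bool isSeen;

    // Wire these to the tracker's seen / not-seen callbacks (placeholders).
    public void OnSeen()    { isSeen = true; }
    public void OnNotSeen() { isSeen = false; seenFrames = 0; }

    void Update()
    {
        if (anyTargetActive || !isSeen)
            return;

        seenFrames++;
        if (seenFrames >= requiredFrames)
        {
            content.SetActive(true);
            anyTargetActive = true;   // stop the other targets from firing
        }
    }
}
```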