Image analyzer outside of Designer

Is there a way to get some sort of image analyzer tool put into Studio, or maybe into the my.zap.works dashboard? It’s super helpful, but as far as I can tell the only way to get to it right now is to create a new project in the old Designer tool and upload the tracking image there.

I think a separate image analyzer tool would be really useful for checking graphics as we create them, so we know a design tracks well before we sink a lot of time into one that doesn’t. Sure, I have a general sense of what works, but the other members of my team don’t have the same technical understanding of AR and image tracking that I do, so it’s getting old having to constantly explain why I need an image tweaked or what have you.


Hi Atlas,

The problem with the analyser is that it’s just scoring the image with a very basic metric (looking for a decent spread of “detail”) - the output of that is usually correlated with tracking quality, but not always.
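
To give a rough idea of what that kind of metric looks like (this is just an illustrative OpenCV sketch, not the actual analyser code - the corner detector, grid size and thresholds are all stand-ins of my own), you could approximate a “spread of detail” score like this:

```python
import cv2
import numpy as np

def detail_spread_score(path, grid=8, max_corners=500):
    # Detect corner-like features, then report what fraction of the grid
    # cells contains at least one - a crude "spread of detail" score.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    corners = cv2.goodFeaturesToTrack(img, max_corners, 0.01, 10)
    covered = np.zeros((grid, grid), dtype=bool)
    if corners is not None:
        for x, y in corners.reshape(-1, 2):
            covered[min(int(y * grid / h), grid - 1),
                    min(int(x * grid / w), grid - 1)] = True
    return covered.mean()  # 1.0 = detail in every cell, 0.0 = none found

print(detail_spread_score("postcard.jpg"))  # hypothetical test image
```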

It can also encourage people to add extra detail where it’s not really needed (desperately trying to get the whole image “green”). The problem there is that people often add repetitive patterns to chase a “green” score, but repetitive patterns aren’t actually that helpful for tracking - unique features are much more useful. A checkerboard image would probably show up completely green on the analyser but is a bit of a nightmare for tracking, as each intersection seen in the camera could match any of the intersections in the target.
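
Just to illustrate the checkerboard point: the problem is that the features all look alike, so matching becomes ambiguous. One way to see that (again a sketch using OpenCV’s ORB features, nothing to do with our actual trainer - the Hamming threshold is an arbitrary number I’ve picked) is to ask, for each feature, whether there’s a near-identical feature elsewhere in the same image:

```python
import cv2

def lookalike_fraction(path, n_features=300, hamming_threshold=30):
    # For each ORB feature, find the closest *other* feature in the same
    # image. If it's nearly identical, that feature is ambiguous to match -
    # a checkerboard scores close to 1.0 here, a detailed photo much lower.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_features)
    _, desc = orb.detectAndCompute(img, None)
    if desc is None or len(desc) < 2:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Match the image against itself: matches[i][0] is the feature matched
    # to itself (distance 0), matches[i][1] is its nearest lookalike.
    matches = matcher.knnMatch(desc, desc, k=2)
    ambiguous = sum(1 for m in matches
                    if len(m) == 2 and m[1].distance < hamming_threshold)
    return ambiguous / len(matches)
```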

My rule of thumb is: more high-contrast areas = better, and try to ensure they’re spread out to the edges / corners. Small text in the centre of a plain white postcard isn’t going to track that well, but adding a few small bits of detail towards the corners will probably make it track fine, as long as the use case has the entire image in view most of the time.
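
In the same spirit, you could sanity-check that rule of thumb by asking what fraction of detected features sit in the outer band of the image (again just a rough sketch - the 20% band width is a number I picked, not anything the analyser uses):

```python
import cv2

def edge_coverage(path, band=0.2, max_corners=500):
    # Fraction of detected features sitting in the outer band of the image -
    # a low value means all the detail is huddled in the middle.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    pts = cv2.goodFeaturesToTrack(img, max_corners, 0.01, 10)
    if pts is None:
        return 0.0
    pts = pts.reshape(-1, 2)
    near_edge = ((pts[:, 0] < band * w) | (pts[:, 0] > (1 - band) * w) |
                 (pts[:, 1] < band * h) | (pts[:, 1] > (1 - band) * h))
    return float(near_edge.mean())
```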

In the end I encourage people to just try training the image in Studio, with nothing more than a plane attached to it, and not obsess over the green score on the analyser.

There’s also this docs article which discusses in general terms how to choose a good tracking image: https://docs.zap.works/general/design/what-makes-good-tracking-image/

Thank you for explaining how the analyzer works! I didn’t realize it was going off of such a basic metric.

I’ve read through that docs page on tracking images many, many times and referenced it a lot, so like I said, I tend to have a pretty good eye for what works and what doesn’t, haha. I just wasn’t sure how robust the analyzer actually was, and whether it was something I could tell the other designers to use to cut down on some of the back-and-forth as they make images that might end up as tracking images. But I can leave it out of the “guide” I’m putting together for them if the analyzer isn’t actually that reliable.

There are a few interesting nuggets about the training in this thread, but yep - it’d be great to be able to get some feedback from the image trainer to see how well it did with what we fed it.

I guess there must have been some secret diagnostic outputs available when the training code was being written – image coordinates of the features identified, for example – but while that’d appeal to the more techy among us (me, me, me), it’d probably take a fair bit of work to turn it into something presentable (a nice app) for non-hackers. Excellent feature request, though.
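
For what it’s worth, the kind of output I’m picturing is roughly this - nothing to do with Zappar’s actual trainer, just a generic OpenCV sketch that dumps feature coordinates and draws them over the target image:

```python
import cv2

# Dump the coordinates of detected features and draw them over the target -
# purely a stand-in for the sort of diagnostic the trainer could expose.
img = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
orb = cv2.ORB_create(nfeatures=400)
keypoints = orb.detect(img, None)
for kp in keypoints:
    print(f"feature at ({kp.pt[0]:.1f}, {kp.pt[1]:.1f}), strength {kp.response:.4f}")
overlay = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("target_features.png", overlay)
```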

+1
