World tracking & Extended tracking


#21

Hi Frank,

I did exactly the same. With a bigger target image my model is still stable, but it starts floating above the target image. It could be that the model size is calibrated according to the target image size. And perhaps when the target image is smaller, the smartphone is closer when initializing the space, hence more exact?

I am going to do some extra experiments today; if I find out something, I'll post it here.

After doing the same (going from a target image of 11cm to 180cm and changing the target scale to half the height, =0.090), it works very well for me. The stability might even be slightly better than with the smaller target. So I'm not sure what went wrong in your case…


#22

Hey everyone, does anyone happen to know how to accomplish this with the React Three.js SDK? Which attributes to get from the image tracker, which attributes in the instant world tracker to assign those values to, etc.? Any insight is much appreciated. Thank you!


#23

This would be a @simon question, I think.
I don't know if we can do this with the UAR SDKs yet, but it's something I would like to do with Unity as well.

Steve


#24

Where can I get a hold of @simon ? :smile:


#25

I pop up every now and again! :slight_smile:

These extended tracking examples only work properly when viewed in the native Zappar app. They rely on Studio's World Tracking API, which wraps ARKit on iOS and ARCore on Android.

Our current Instant World Tracking in UAR (also exposed as Instant Tracking in Studio) needs the anchor to remain in view the whole time, so it’s not really suitable as a fallback when you look away from the target.
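To illustrate the difference, here's a minimal sketch of the fallback pattern that "extended tracking" implies: follow the image target's pose while it's in view, cache that pose in world space, and keep content pinned to the cached pose once the target is lost. The types and class here are hypothetical for illustration — this is not the real Zappar/UAR API, and as noted above, the current Instant World Tracking can't actually supply the world-anchored fallback pose.

```typescript
// Hypothetical sketch of an extended-tracking fallback (NOT the Zappar API).
// Positions only, for brevity; a real implementation would carry full poses.
type Pose = { x: number; y: number; z: number };

interface ImageTrackerFrame {
  visible: boolean;    // is the image target in view this frame?
  pose: Pose | null;   // target pose in world space, when visible
}

class ExtendedTracker {
  private lastWorldPose: Pose | null = null;

  // Returns the pose to render content at, or null if the target has
  // never been seen (nothing to anchor to yet).
  update(frame: ImageTrackerFrame): Pose | null {
    if (frame.visible && frame.pose) {
      this.lastWorldPose = frame.pose; // cache while the target is tracked
      return frame.pose;
    }
    // Target lost: fall back to the cached world-space pose. This step is
    // exactly what requires a world tracker that keeps working when the
    // image anchor leaves the camera's view.
    return this.lastWorldPose;
  }
}
```

The key point is the last branch: without full world tracking, there is no stable world frame to hold `lastWorldPose` in once the anchor leaves view, which is why the current instant tracking isn't a suitable fallback.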

A future update will bring "full world tracking" to our Instant Tracking implementation, and that will then be more suitable for enabling Extended Tracking in UAR and on the web. The plan is to make Extended Tracking a really simple thing to opt into for any UAR tracker, but the details aren't fully thought through yet.

Our focus for now is on getting the full world tracking update in shape to share as a public beta with you all. It’s not something I can give any estimated dates for yet, but it is the primary focus and top priority for our computer vision team at the moment.

Once that first beta release is out we can look at adding additional features like extended tracking support.

Sorry it’s not great news for now, but hope it helps clarify the current status for you!