Introducing Instant Tracking


#1

Hi all,

We’ve been working on a new tracking type for the Zappar platform over the last few months which we’re calling “Instant Tracking”. I’m excited to be able to share an early preview of the technology with the ZapWorks community.

So, what is Instant Tracking?

Instant Tracking sits between orientation-only 360 experiences and full world tracking. It allows content to be placed on a horizontal surface in the world (no specific markers or image targets required), and maintains the position of that content in the world as the user moves.

[Demo animation: content placed on a surface with Instant Tracking]

What’s the difference between World Tracking and Instant Tracking?

A full world tracking solution aims to build up a complex understanding of all of the surfaces in the world and the exact motion of the device, with everything measured in absolute real-world units such as metres. ARCore and ARKit offer this type of full world tracking, and you can make use of these in Zappar experiences through our World Tracking API (which is implemented using ARCore and ARKit internally).

In contrast, Instant Tracking doesn’t build such a complex model of the world, but instead focuses on keeping the position of the content consistent from one camera image to the next.

This inherently simpler approach has a number of advantages:

  • Instant Placement
    A world tracking approach working in real-world scale requires some device motion to determine the absolute scale of things in the camera view, and it often takes a noticeable few seconds until those estimates become stable and content can be placed without jitter. Likewise, device motion is required to detect surfaces, and surfaces must be detected before content can be placed on them. Instant Tracking allows content to be placed straight away - no camera motion required.

  • Wide Device Support
    Instant Tracking requires a device with a gyroscope, but doesn’t need accurate device-specific calibration outside of this - any device with a gyroscope should work. This is especially important on Android where ARCore only supports around 200 devices out of the more than 10,000 distinct device models with access to Google Play.

  • Works in WebAR
    Zappar Instant Tracking is our own implementation that doesn’t rely on ARCore or ARKit. This allows Instant Tracking to be used anywhere Zappar content is supported, including both native app runtimes and WebAR deployments. For WebAR, Zappar Instant Tracking targets the mobile browsers that are already in widespread use - no unusual browsers or support for upcoming web standards is required.

  • High Performance
    Instant Tracking’s simpler approach means there is less computation required to update the position of the content in each frame. This enables high frame rate experiences even on lower-end devices or in WebAR.
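To make the frame-to-frame idea concrete, here's a deliberately simplified sketch (illustrative Python, not our actual implementation - the real tracker is considerably more sophisticated). The content anchor is nudged each frame by the average displacement of feature points matched between consecutive camera images:

```python
def frame_to_frame_update(anchor, matches):
    """Shift the content anchor by the average displacement of feature
    points matched between the previous and current camera frame.
    `matches` is a list of ((x_prev, y_prev), (x_cur, y_cur)) pairs."""
    if not matches:
        return anchor  # no information this frame; keep the last pose
    dx = sum(cur[0] - prev[0] for prev, cur in matches) / len(matches)
    dy = sum(cur[1] - prev[1] for prev, cur in matches) / len(matches)
    return (anchor[0] + dx, anchor[1] + dy)

# Simulate the camera panning right: every tracked feature moves 5px left.
anchor = (160.0, 120.0)
matches = [((x, y), (x - 5.0, y)) for x in (40, 80, 200) for y in (60, 180)]
anchor = frame_to_frame_update(anchor, matches)
print(anchor)  # (155.0, 120.0) - the content shifts with the scene
```

Because each update only relates the current frame to the previous one, no global map or absolute scale is needed - which is what makes instant placement, wide device support, and high performance possible.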

Sounds awesome! Any downsides?

Instant Tracking works best when the anchor point used to initially place the content remains in the camera view throughout the experience - it’s great for things like placing characters in a photo feature or simple visualization of 3D objects.

Experiences that involve the user looking away from the initial placement point are not well-suited to our initial implementation of Instant Tracking. For example, we would not recommend using Instant Tracking to build an experience where a user explores a virtual environment by freely walking around a large area.

Instant Tracking also aims to maintain relative scale and position as the user moves, but does not know the absolute scale of the world. Using “relative scale” is a key reason Instant Tracking can offer instant and stable placement before any device motion, but it means Instant Tracking is not suitable for use cases where correct real-world scale is critical.
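A toy example of what “relative scale” means in practice (illustrative Python using a simple pinhole-projection assumption - this is not our API, just a sketch):

```python
# "Relative scale" placement: the anchor's true distance in metres is
# unknown, so we simply *define* it to be 1 unit and size everything
# relative to that.
def place_content(desired_apparent_height_px, focal_px):
    anchor_distance = 1.0  # arbitrary unit, NOT metres
    # pinhole projection: apparent_px = focal_px * real_height / distance
    content_height = desired_apparent_height_px * anchor_distance / focal_px
    return content_height  # expressed in the same arbitrary units

# A character that should appear 300px tall with a 600px focal length
# gets height 0.5 units - stable and jitter-free from the first frame.
print(place_content(300, 600))  # 0.5
```

The 0.5 stays consistent relative to the anchor as the user moves, but it cannot be converted to metres without knowing the real anchor distance - which is fine for a photo feature, and exactly why real-world-scale use cases need a different approach.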

When can I use it?

Intrepid ZapWorks users can experiment with an open beta of our Instant Tracking implementation for WebAR right now! There are more technical details and a sample ZapWorks Studio project over on this thread. During the beta you’ll need to view any projects making use of Instant Tracking through our beta WebAR site at https://beta.zappar.app.

You can try out the example content shown above by visiting https://beta.zappar.app/?zid=z/wkgp1c on your mobile device, or scanning the zapcode below from the https://beta.zappar.app site.

[Instant Tracking demo zapcode]

Over the coming weeks we’ll be refining and finalizing the first full release of the Instant Tracking API and implementation, and then rolling out support for Instant Tracking in both our Zappar native apps and our main https://web.zappar.com WebAR page.

If you’ve got a WebAR-only campaign ready to roll before then, we can deploy a Custom WebAR Site to support your launch with the current pre-release implementation. Get in touch if you want to talk about that option.

What’s the future roadmap?

Instant Tracking emerged from the Zappar R&D team’s work towards a full World Tracking solution for all of our supported platforms. As we continue working towards that goal we will also be folding in further improvements to Instant Tracking.

These improvements will address some of the use cases that are not well-supported by the initial Instant Tracking implementation - for example adding the ability to continue to update the content position when the original anchor point is no longer in view, and relaxing the requirement for placement on horizontal surfaces.

You won’t need to make any changes to your Instant Tracking content to benefit from these future updates, and you won’t even need to republish - they will just be improvements to the underlying implementation of the simple Instant Tracking placement API.

We’ll also be looking at how we can combine the instant placement benefit of Instant Tracking with the robust World Tracking provided by ARKit and ARCore on supported devices and platforms.

So you’ve got all that good stuff to look forward to in future, but for now please have a play with Instant Tracking and let us know what you think!


#2

This would be killer! I would love to use this feature to “bring to life” old machinery “in place” inside a historic building, for instance.

For example, imagine an old factory converted into a “museum” of sorts, with pieces of equipment throughout. If a user could wander about the space, point their device at a piece of equipment and an animated 3D model would appear “in place” of that machine, that would be ultra sweet.

Even if the experience had to be triggered from a certain perspective initially in order to anchor the AR overlay, that would be totally workable. For instance, the user would stand on a particular marked spot on the floor and point their device at the machine to trigger and anchor the experience. Then, they could move freely around the machine (while keeping their device pointed at it) in order to see it “in action” from different angles.

Might something like that be possible at some point? :crossed_fingers:

EDIT: Thinking out loud some more… If a photo of the machine from a particular perspective could serve as a “trigger image” as well as a way to scale the 3D model overlay, that would be awesome!


#3

Interesting ideas @shot.

The immediate roadmap for Instant Tracking is to allow it to continue updating the content position when the initial anchor point is not in view. That will enable most world tracking use cases, at least those that don’t require absolute scale to be known.

Then we’ll look into enabling “extended tracking” where an image target or zapcode could be used to initialise the content position, orientation, and scale but Instant Tracking will take over when the target leaves the view or stops tracking for any other reason.
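That handover could be sketched as a tiny state machine (illustrative Python - the class and field names here are hypothetical, not a real Zappar API):

```python
# Hypothetical "extended tracking" handover: an image target supplies an
# absolute pose and scale while it is visible; a frame-to-frame tracker
# takes over whenever the target is lost.
class ExtendedTracker:
    def __init__(self):
        self.pose = None        # (x, y, scale), or None before first init
        self.source = "none"

    def update(self, target_pose, frame_delta):
        if target_pose is not None:
            # Target visible: trust its absolute pose and (re)initialise.
            self.pose = target_pose
            self.source = "image-target"
        elif self.pose is not None:
            # Target lost: integrate relative frame-to-frame motion,
            # keeping the scale established by the target.
            x, y, s = self.pose
            dx, dy = frame_delta
            self.pose = (x + dx, y + dy, s)
            self.source = "instant"
        return self.pose, self.source

t = ExtendedTracker()
t.update((0.0, 0.0, 2.0), (0, 0))        # zapcode in view: absolute init
pose, src = t.update(None, (3.0, -1.0))  # zapcode lost: instant takes over
print(pose, src)  # (3.0, -1.0, 2.0) instant
```

The key point is that the absolute scale established by the image target survives the handover, so content keeps its real-world size even once only relative tracking is available.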

We’ve done some real-world positioned AR content using extended tracking in combination with world tracking before, using zapcodes on plinths to launch the experience and initialise the content position. Something similar could work for your idea @shot. Using a 2D image to recognise a 3D scene doesn’t tend to work that well in practice - it would ideally require a different detection algorithm, which isn’t in our near-term roadmap.

One of the biggest problems with experiences that involve pointing at something static and making that very thing appear to come to life is actually how you convincingly cover up the original real-world object - otherwise it will still be visible in the camera image in its original static position.


#4

I played around with an app called “Aurasma” a few years ago which did just that. You could photograph a scene in the real world, and then anyone else at that location could point their device in that same direction to trigger the “aura” (their name for AR overlay). The photo was apparently used to “train” the software to recognize the real-world scene. It also used GPS to filter experiences to those relevant to your current location - i.e. nearby.

It actually worked reasonably well - especially given that it was several years ago. The cool thing is that there was no “code” to scan at all and no intermediate screen to tap/click through. You simply pointed your phone at a real-world scene, and the “aura” appeared. However, there was nothing like world or extended tracking.

The technology was bought by Hewlett-Packard and rebranded “HP Reveal”, but I have no idea what the status is now. I lost interest when it was purchased and its future was uncertain. Zappar offers much more functionality anyway.

Is there any chance you guys will provide access to location info in Zappar at some point? Or is it in the docs and I’m missing it?

Yeah, I wouldn’t expect it to be perfectly concealed, but it seems the overlay should be able to remain reasonably well anchored such that the user could move around it. It would be fun to experiment with.


#5

We haven’t added a geo-location API at the moment; we’re privacy-conscious here, so we would want some sort of per-zap permission management before making it available to third-party content. It’s something we’ve thought about, but it isn’t currently on the short-term roadmap for the general Zappar app. We have integrated geo-location into some custom app deployments where the content is more controlled.

In terms of matching a 3D scene - it certainly is possible to match a photo to a 3D scene, but the algorithm to do that is probably different from Image Tracking (where the target image is assumed to be a flat plane viewed from some angle in the camera view). The geometric constraints for photo -> scene matching are a bit different.


#6

Thanks for the reply, @simon. I totally understand and agree with the privacy concerns regarding location services. I’m not as familiar with Android, but I thought iOS had pretty good built-in controls for that, no? The user must opt in on a per-site basis. At least, that’s the default setting. It would be useful in creating AR “geocaching” type of experiences. So maybe the concern’s not as great for WebAR, given the built-in controls; but I understand the desire to keep the API consistent across Web and app.


#7

Browsers manage permissions per-page, but even in WebAR we have the rescan button, so you can load two different third-party zaps into the same page. We would need some explicit per-zap permission management. For white-label apps and WebAR deployments that don’t have a scanning screen to access third-party content, this is less of a concern.


#8

What’s the time frame for the release of Instant Tracking? Also, what’s the ETA on World Tracking for WebAR?


#9

Hi @jbuscemi,

The Instant Tracking API will move out of beta when we have it available across all supported platforms, and all being well it should be part of the next Zappar app release. We don’t have a concrete release date for the next Zappar release at the moment but I’d expect it to be out within the next month or thereabouts.

As I mentioned in the post, if you are only targeting WebAR then we can deploy a custom WebAR site with the current Instant Tracking implementation even during this beta phase.

The roadmap beyond Instant Tracking is harder to put an ETA on, as it is still very much an active R&D project. What I can say, as I mentioned in the “roadmap” section, is that as we continue pushing the computer vision code towards supporting additional world tracking features, we will be able to use those updates to improve Instant Tracking and to broaden the range of world tracking use cases it is suitable for. I’d hope some of those changes will be ready to roll out in the next few months, but I can’t give any more accurate estimates than that right now.

Hope that helps a bit.


#10

Hi @simon just wondering if there has been an update on when instant tracking may come out of Beta? I am thinking of using it for an upcoming project, would I still have to go the custom WebAR site route for now?


#11

A custom site would guarantee that the API wouldn’t change on that site.

The other option for now is to direct your users to the beta site. We don’t have any compatibility-breaking changes to the API planned, but there are no cast-iron guarantees around that.


#12

Thank you for the quick reply @simon - yes, this was something I was just testing. Are there any disadvantages to going the beta route? I understand that there may be API changes; my question is more regarding user traffic - can the beta site handle traffic the same as the official WebAR site?


#13

Yup, there’s no difference in terms of handling the traffic - the sites are hosted and served via a CDN in exactly the same way, and the ZapWorks project hosting is also on a CDN that is the same regardless of the app or site used to view the content.


#14

Is there any update on this? I would love to see updated code on Instant Tracking. When I try the original forum demo, the placed object changes scale over time, until I place it again and it resets.

Perhaps there is a way to instruct the user to aim the camera straight down at the surface, and the captured image could be immediately converted internally into an image target. Then you would get all of the solid and stable tracking we have experienced in our Zappar Image Tracking projects. Or is that how it currently works?


#15

The roadmap I outlined in the initial post is still being worked through, but that’s the best guide of where things are going.

Instant Tracking uses quite different techniques from image tracking, and needs to, as the planar assumption will need to be relaxed to allow for larger-environment world tracking. The current implementation just uses a frame-to-frame update, which does mean it can be susceptible to drift (how much drift occurs depends a bit on the type of surface texture). Part of the work we’re doing now involves maintaining the identities of tracked points and computing their reference positions over longer time windows to keep things more consistent.
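The difference between chaining frame-to-frame updates and matching against longer-lived reference points can be seen in a toy simulation (illustrative Python, not our tracker - the noise model is a deliberate oversimplification):

```python
import random

def simulate(n_frames, noise, use_reference):
    """The camera is actually stationary at position 0; each frame we
    estimate the content position with a little measurement noise."""
    est = 0.0
    for _ in range(n_frames):
        if use_reference:
            # Match against a fixed reference keyframe: the error is
            # bounded by a single frame's measurement noise.
            est = random.gauss(0, noise)
        else:
            # Chain frame-to-frame displacement estimates: each frame's
            # noise is added on top of the last, so the error random-walks
            # away from the truth - this is drift.
            est += random.gauss(0, noise)
    return abs(est)

random.seed(0)
chained = sum(simulate(500, 0.1, False) for _ in range(100)) / 100
random.seed(0)
anchored = sum(simulate(500, 0.1, True) for _ in range(100)) / 100
print(chained, anchored)  # the chained error is far larger on average
```

This is why keeping track of reference positions over longer time windows reduces drift: the error stops accumulating frame over frame.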

This is a really high priority project for us, but I’m afraid I’m not able to give any timeframe estimates for releasing any future milestones at this stage.