Support for Multiple Targets

Sorry, fixed link.

http://rentanerdcomputers.com/wp-content/uploads/ZapWorks/Multi%20Tracking%20Image.zpp

3 Likes

Thank you Steve for sharing! :grin::+1:

I was wondering if there’s any news about this implementation? Seems kind of crucial to a lot of experiences.

This thread has been super-helpful already, many thanks to everyone contributing!

Hi @polygongraphics,

While our API has supported multi-tracked experiences for quite a while (hence why experiences such as the one mentioned on this thread are possible), Studio does not currently have native support for creating these types of experiences. Whether it’s something we want to add to our roadmap, and how it would be achieved if it were, are still part of ongoing internal discussions.

As mentioned by George in his post above, the recommended method for linking between multiple experiences is to use deep links, which let you launch a second experience from your first.
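As a rough illustration (the button node name and URL below are just placeholders, and this assumes the launchUrl helper on Z.device), launching a deep link from a script node looks something like this:

// Hypothetical button node in the hierarchy
const nextButton = symbol.nodes.nextButton;

// Placeholder deep link URL for the second experience
const secondExperienceDeepLink = "https://zap.works/your-deep-link";

// Launch the second experience when the button is tapped
// (assumes Z.device.launchUrl is available in the scripting API)
nextButton.on("pointerdown", () => {
    Z.device.launchUrl(secondExperienceDeepLink);
});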

Once we have some more information to share on the matter, you can be sure we’ll share it with you all on this thread.

All the best,
Seb

P.S. I’ve moved your second question regarding recognising a target image’s size to a new topic, and we’ll be replying there shortly.

As a note to everyone, please remember to keep your comments directly related to the thread’s topic, as it avoids important discussion being lost in the midst of other separate requests/topics.

Thanks again everyone :slight_smile:.

2 Likes

Has something changed since the last posts on this thread? I’ve just started using ZapWorks Studio and want to implement multiple tracking images for a signage project for a zoo but I can’t seem to pull it off.

The @stevesanerd project doesn’t seem to work at all when previewed from Studio. I tried replicating his methodology as best I could understand it from reading the code, but without success.

I’ve also tried the method described by @jvouillon. The concept is sound, but it doesn’t work - not in the latest version of Studio at least.

As @jvouillon describes, I have created 2 subsymbols and am dynamically loading them into an empty group in the hierarchy. If these subsymbols don't contain tracking images, it works fine. As soon as the subsymbols do contain tracking images, the project fails to work - no subsymbol ever gets loaded.

I understand from the moderators' comments that Zappar recommends the use of deep links, but that doesn't serve the purpose of what I would like to achieve. Being able to scan multiple tracking images as you wander around a room/space really would be a killer feature for those of us creating AR experiences for signage. In this instance the work is for information boards at a zoo, but the same would apply to museums and other similar visitor attractions.

Many thanks for the help proffered so far in this thread, and thanks in advance for any further guidance provided.

1 Like

Actually, I should be more specific than that…

Even without the code to track whether subsymbols are seen or not, doing a .push() of a subsymbol with a tracking image to the empty group in the hierarchy will fail if there is another subsymbol in the project that also contains a tracking image. If it’s the only subsymbol with a tracking image then it works fine.
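For reference, the pattern I'm using is roughly the sketch below (the node and symbol names are just the ones from my own project):

// Empty Group node in the root hierarchy that subsymbols get loaded into
const holder = symbol.nodes.holder;

// Instantiate a subsymbol (which contains its own tracking image)
// and push it into the group so it becomes part of the scene
const trackingSymbol = Z.Symbol("trackingImage_0");
holder.push(trackingSymbol);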

1 Like

Hi Dave,
Here is a version of my app, with just the basic code necessary to work with 2 images. Check the 'code' node in both the main hierarchy and the symbols. From there, you can easily increase the number of tracking images. The other archive contains the 2 tracking images. Cheers.

Multi_Targets_Light.zpp (4.5 MB)
Archive.zip (113.1 KB)

3 Likes

Thanks for that. Ironically, I had just got multi-image tracking to work before you posted!

I managed to get it to work using Steve’s methodology after revisiting his code and deconstructing it again.

However, I think I prefer the implementation in the sample app you have uploaded. Having the tracking images in the subsymbols is really neat and avoids polluting the root hierarchy.

It also means you have direct interaction with the tracking image in the Studio GUI via the subsymbol, rather than having to flub it in the main project where you can’t drag in the trained target images.

I can’t see immediately what you have done differently to my initial attempt, so I’m not sure why my first try didn’t work.

Anyway, many thanks for the sample project and the informative comments in the source code. It's much appreciated!

3 Likes

Glad to hear you got it working.

Steve

1 Like

@stevesanerd Your code was an eye-opener as to what is possible with Studio, and has sparked loads of ideas for cool signage implementations we can offer to our customers.

Not to mention cool ideas for bringing Pokemon cards to life for my kids! :joy:

2 Likes

Really exciting to hear it’s sparked some inspiration - looking forward to seeing what you come up with!

1 Like

Sorry guys I haven’t read the whole thread, but wanted to add some details around this topic.

New versions of the Zappar app have better support for “round robin” detection of multiple target finders, so you do not need to reset the source attribute to get it to work.

To opt in, you'll need to make sure your scene targets the version of Zappar in which we made this the default - you can just add Z.RequiresVersion.v440; in one of your script nodes.
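For clarity, the opt-in is just a single line in any script node:

// Declare that this scene targets app version 4.4.0 or later,
// which enables the round-robin TargetFinder behaviour described below
Z.RequiresVersion.v440;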

Then you can have multiple TargetFinders in the scene - if no targets are found it will loop through all of the ones that have tracking enabled, giving each one a few frames to find a target. When one is detected it will continue tracking with that one. When tracking is lost it will try to re-find it for a few frames before starting to loop through the different TargetFinders again.

This will give much better performance than loading the target file again each time you need to switch targets. However please note it doesn’t support tracking more than one target at once.

Internally the TargetFinder instances really represent separate tracking algorithms (eg zapcode vs image tracking) - for true multiple target image tracking with the best efficiency we’ll need to expose the ability to create a combined target file with multiple targets in it, that would be loaded into a single TargetFinder. That is on the future feature roadmap, but not in the near term I’m afraid.

5 Likes

Hi @jvouillon,
This is really cool! I'm playing with it a bit and have added a video to one of the targets; however, it doesn't seem to loop with the usual:

// Restart and play the video again whenever playback finishes,
// so the clip loops
Swimming_dolphin_mp4.on("finish", () => {
    Swimming_dolphin_mp4.restart();
    Swimming_dolphin_mp4.start();
});

I tried the above code both inside the parent's 'seen' handler and outside of it.

Do you know what could be causing this issue?

2 Likes

Never mind. The video is looping successfully; it just doesn't display sometimes. Not sure if it's a processor issue?

Even if I don’t play the video, it appears in the experience for a while (a min or 2) and then disappears.
Does this experience not support videos?

2 Likes

Hey jvouillon, thanks for sharing this. It's really useful for me!

I have a question: how can I increase the number of tracking images detected? I've created a third trackingImage symbol, but it only detects trackingImage_0 and trackingImage_1.

I read the code, but I'm not sure where you increase the number of tracked targets.

Cheers,

Pablo.

3 Likes

Change the number in "++trackingImageIndex>1" to one less than the number of your tracking images, since it's zero-indexed.

function nextTrackingImage() {
    // Wrap the index back to 0 after the last tracking image
    // (7 here, for 8 images named trackingImage_0 to trackingImage_7)
    trackingImageIndex = (++trackingImageIndex > 7) ? 0 : trackingImageIndex;
    trackingImageDisplayed = Z.Symbol("trackingImage_" + trackingImageIndex);
}

2 Likes

Hello everyone. I have a general question I’d like to get input from staff and users in regards to multiple targets.

If you have a project such as an intended AR picture storybook that you want to release in multiple languages, would you recommend creating one overall zpp project and one zapcode for all of the different versions or having a zpp project and corresponding zapcode for each of the different versions?

Many thanks,
Richard

Hi everyone, would it be possible to track multiple images while using the Universal SDK and frame, for example? From what I've understood it should be, right?

So - can I have multiple trained images, each of which goes to a different experience (not played simultaneously), within one Zappar code?

A post was split to a new topic: Stream videos on multiple pages