What tools are you using for debugging / profiling on iOS and Android?


My profiling so far consists of an FPS counter in the corner of the screen (!). But I need to up my game.

My first AR thing is going to go live shortly, and there’s a ton of optimisation possible, but it’d be great to identify where I should focus my efforts. On top of that, I’ve rolled my own multi-target tracking code—huge chunks of which are cribbed from @stevesanerd’s marvellous work—but it does crash the experience if left without a target to find for a few minutes. Not necessarily a show-stopper for us, but I’d love to be able to debug the thing better. It smells memory-leaky, but I’ve no visibility into what’s actually happening on the phone.

And I’d love a more quantitative way of assessing things like the impact of multiple Z.every handlers, versus having a single one that handles all timing-related tasks.
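To make that concrete, the comparison I have in mind is between these two shapes. This is only a sketch against a hand-rolled stub (the stub's `handlers` array and `tick()` are my inventions, not the real Zappar `Z` object), but it shows the structural difference:

```javascript
// Stub standing in for Zappar's Z object: handlers/tick are inventions here,
// just enough to make the handler-count difference visible.
const Z = {
  handlers: [],
  every(ms, fn) { const h = { ms, fn }; this.handlers.push(h); return h; },
  tick() { this.handlers.slice().forEach(h => h.fn()); } // fire everything once
};

// Shape A: one Z.every per timing task -- N handlers stay registered.
function registerPerTask(tasks) {
  return tasks.map(task => Z.every(30, task));
}

// Shape B: one Z.every that dispatches every task itself -- 1 handler.
function registerDispatcher(tasks) {
  return Z.every(30, () => tasks.forEach(task => task()));
}
```

Shape A leaves N handlers for the runtime to service every interval; shape B leaves exactly one. Whether that difference is measurable on a phone is exactly what I'd like a profiler to tell me.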

The “Preview in Desktop” mini-browser thing seems like it could help, as it seems to have a working console and so on - but (at least on my Mac) it doesn’t seem to be able to use the camera, so it’s no use for image-tracking AR at all.

What tools / approaches / techniques are you using? Suggestions / thoughts?


Hey @howiemnet
I’m glad you were able to use my code!! Just a side note about it: I found the refresh scan was slow, anywhere from 0.5 to 1 sec depending on your device. If you have a lot to scan, it can take some time as well. Mine had 7 cards, and if it missed one it would have to loop; that could take up to 12 sec to lock on.

As for debugging, I believe I had a console.log set to post which image is being tracked.
I used the Windows ZapWorks Preview to test. You should be able to use a Chrome browser with debugging enabled as well.



Your code was invaluable :slight_smile: I’m very curious how you arrived at it, though; the reference docs do list all the functions you’re using, but how you managed to get it all working, in the right order, etc… [mind blown]


The only reason I rewrote it was because I couldn’t quite grok why you were setting up multiple persistent Z.every event handlers; it looked like they’d still be triggered, swapping the targets, even after a target had been found. In my case I’m only looking for two targets - the front and back of a postcard - just so I can spot if a user is pointing their phone at the wrong side and tell them to flip it over. As soon as I’ve found a target I don’t want any extraneous Z.every handlers stealing cycles.

So rather than setting up Z.every “loops”, I’m using a Z.after “timeout” that swaps the targets if nothing’s been recognised. Once a target’s acquired I no longer need to think about targets and swapping stuff.
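In sketch form, that swap-on-timeout pattern looks something like this. The scheduler below is a manual stub standing in for Z.after (so the logic can be exercised without real timers), and treating the scheduled timeout as cancellable is my assumption, not documented Zappar behaviour:

```javascript
// Manual one-shot scheduler standing in for Z.after, so the swap logic is
// testable without real timers. cancel() is an assumption, not Zappar API.
function manualScheduler() {
  const queue = [];
  return {
    schedule(ms, fn) {
      const t = { fn, cancelled: false, cancel() { this.cancelled = true; } };
      queue.push(t);
      return t;
    },
    fire() { // expire all currently pending timeouts
      const batch = queue.splice(0);
      batch.forEach(t => { if (!t.cancelled) t.fn(); });
    }
  };
}

// Swap-on-timeout: if nothing is recognised before the timeout, move on to
// the next target; once a target is seen, cancel the pending swap entirely.
function makeSwapLoop(targets, timeoutMs, schedule) {
  let index = 0;
  let pending = null;
  let locked = false;

  function arm() {
    pending = schedule(timeoutMs, () => {
      index = (index + 1) % targets.length;
      arm(); // re-arm for the next target
    });
  }

  arm();
  return {
    current: () => targets[index],
    isLocked: () => locked,
    onSeen() { locked = true; if (pending) pending.cancel(); pending = null; },
    onNotSeen() { locked = false; arm(); }
  };
}
```

The key property is that after `onSeen()` there is nothing left ticking at all; the hunt only restarts when the target is lost again.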

I like to push things until they break so I know what the limits are, and I found that even at 100ms (!) per target my two test phones were able to acquire a target, which made it feel very snappy. Trouble is that if they weren’t pointing at a target, after a couple of minutes it’d crash the process. Extending the timeout to 500ms per target also extended the time before crashing, and for 2 targets, it’s still fast enough.

Faster swapping leading to faster crashes… hence my thought that there’s a memory leak or something similar going on.

The thing I can’t get my tiny head round is quite what happens to existing targetInstances when you swap the targetFinder. If you’ve found a target, a targetInstance is born, and you can tell one of your nodes to be positioned relative to it. But if the target is then lost, and you swap targets and then find another one, what happens to the previous instance? Does it just stop existing? And the nodes that were using the previous instance as a relativeTo parent - what becomes of them? What’s their “parent” at that point?

These are probably questions I need to explore rather than asking to be tutored on JS here :wink: (Still trying to get my head round that weird (function(){…})() code pattern too - never seen that before. It doesn’t seem to stop the memory leaks, though.)
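For anyone else puzzled by it: that pattern is an immediately-invoked function expression (IIFE). The function runs once, straight away, and its variables stay private to the closure. A minimal, self-contained example:

```javascript
// An IIFE runs immediately; `count` is private to the closure and never
// leaks into the surrounding scope.
const counter = (function () {
  let count = 0;
  return {
    increment() { return ++count; },
    value() { return count; }
  };
})();
```

Worth noting: an IIFE only scopes names. Anything still referenced from outside it, like a registered Z.every handler, stays alive regardless, which would explain why it doesn’t help with the leak.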

Thanks for the info re using Windows / Chrome for debugging - shame the Mac doesn’t quite work on this front, but I can dig out a windows machine to play with.

Thanks again, man, you’re blazing a trail for us


Yeah, it took a bit, but I did get a lot of the code from Zappar themselves. I was one of the first ZapBox Kickstarter backers back in the day. I got their developer kit and, with that, their demo code. Back then they used Z codes to find the controllers. I changed it to look for each tracking image and, if it wasn’t seen after a set time, move on.

Now, yes, on an Apple phone it could lock on very fast, but the Android phones at the time couldn’t, so I had to make the scan time longer.

As for the targetInstance: from what I remember, Zappar can only have one targetInstance locked at a time. I would change my group’s relativeTo based on what was locked at that time. When it’s lost, I think I set it back to something like Z.screen or root and hid it so it didn’t just float in the air. So yes, I believe the instance stops existing, or rather gets rewritten to the new one.

As for the weird (function(){…})() code pattern, I can’t remember; it’s been years. I would have to look at it.

So, with all that said, you’re only using this to tell them to flip the card over?
This is a lot of work just to tell someone to flip the card over. Have you seen the “Look for Graphic” subsymbol in ZapWorks? Adding a look for prompt to a project



re the targetInstance stuff - thanks, that’s interesting to know.

This is a lot of work just to tell someone to flip the card over. Have you seen the “Look for Graphic” subsymbol in ZapWorks?

Heh - yep - I’m already using the Look For subsymbol as well. But tracking for the wrong side of the card as well lets me keep the instructions simple: scan the QR code, point it at the card.

One of our team complained she couldn’t get it to work - she’d scanned the QR code on the back of the card and then not noticed the little “look for” thumbnail was of the other side. If she missed it, others might too. Rather than write more words words words on the card, why not just use it as an excuse to add another cheeky “turn me over” animation…

And besides, in for a penny, in for a pound - I’ve no idea how well the AR thing’ll work for us so I might as well try as many challenges as I can, while I can… it’s fun :slight_smile:


I can see that. We love our cheeky :smile:
I just worked on a photo op and the users didn’t understand how to pick one of the 4 ops.

Too bad you can’t just add the QR to the front of the card.



Originally planned to put a QR code on the front, but then figured we could use stickers to put different codes on the same base card (same AR experience, but the queryString encoded into the QR codes tells me which venue and which rep handed the card out so the AR knows what text / CTA to display).

Hard to get a sticker perfectly positioned each time, tho. So keeping the QR code / instructions on the back keeps it all clean and tidy. In theory. It’s not just because I’m overly precious about the work and artistry I put into the front design, no, not that at all.

(it is exactly that)


I was just thinking: you could set your QR code as the main tracked image.
Then play your flip animation, and use the not-seen event to switch to the 2nd tracking image once the animation has played.

Just a thought.



Thought about that kind of approach, but it’d make it weird/time-wastey for users who were smart enough to read the instructions in the first place, or people who’d already fired up the experience and knew the good stuff happened on the front of the card. Good call, though; it was worth considering.

Ahhh… the perennial problem of making things simple and hand-holdy for novice users without restricting the speed for more advanced ones. Goodness, if there’s one thing that makes my blood absolutely boil these days, it’s when you request a new feature in a professional-level app and get the response “but that’ll add complexity for users”… :wink:


So true!!
Keeping it simple for the novice but still engaging for the rest.