How to optimize AR for better UX


When you’re fully invested in creating your next AR masterpiece, it can sometimes be difficult to see beyond your own vision. After all, when a project is carrying your name, you want it to look and behave in the manner you’ve always imagined - that’s why you tweak, iterate...and end up putting on another pot of coffee.

This is a companion discussion topic for the original entry at


Thanks for your useful articles.:hibiscus::+1:t2:


This is great! Well done to all the team that was involved :+1::ok_hand:


You’re so welcome, really glad you found it useful :smiley:!


Loved the article! Will have to use the new tools!

I do have a question.
PEZ | DC SuperHero Girls video:
At the end you show that some are locked until you scan them. Then you show it scanning the next one. Is each one its own app under its own scan code, or is it one app? How are you linking or saving the ones scanned? I would like to do something like that, but I thought we couldn't share data across scan codes.

Yorkshire Tree experience:
At the end of the video you have the photo op. My question is about the video of the tree at the end: what did you use to make the recording of the tree growing?



Hi Steve, much appreciated, super glad you enjoyed it! I’m chatting to Chris and other members of the creative team to try and find out for you :slight_smile:!


Hi Steve!

Right, for the end part of the Yorkshire Tree experience: the photo op is based on taking an animation of the tree model and adding it to a modified version of the 3D photo feature template in Studio (which lets you create an object that users can manipulate and position in 3D space).

On the PEZ | DC Super Hero Girls video, that's very well spotted! It's actually based on a technique we're doing some ongoing work on, so it's pretty in-depth, but I'll send you a DM with some further details!
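For anyone else curious about the "sharing data across scan codes" part of the question: one common pattern (this is a general WebAR sketch, not the project's actual implementation) is that if each scan code launches a page on the same origin, the experiences can share unlock state through the browser's `localStorage`. The key name and character ids below are purely illustrative:

```javascript
// Hedged sketch: persisting which characters a user has unlocked across
// separately launched AR experiences, assuming a WebAR setup where every
// scan code opens a page on the same origin (so localStorage is shared).
const KEY = "unlockedCharacters"; // illustrative key name

// Fall back to an in-memory store so this sketch also runs outside a browser.
const store =
  typeof localStorage !== "undefined"
    ? localStorage
    : (() => {
        const m = new Map();
        return {
          getItem: (k) => (m.has(k) ? m.get(k) : null),
          setItem: (k, v) => m.set(k, String(v)),
        };
      })();

// Read the list of unlocked character ids (empty list on first visit).
function loadUnlocked() {
  const raw = store.getItem(KEY);
  return raw ? JSON.parse(raw) : [];
}

// Record a newly scanned character; ignores repeat scans of the same one.
function unlock(characterId) {
  const unlocked = loadUnlocked();
  if (!unlocked.includes(characterId)) {
    unlocked.push(characterId);
    store.setItem(KEY, JSON.stringify(unlocked));
  }
  return unlocked;
}

// Each experience calls unlock() with its own id when its code is scanned,
// then reads loadUnlocked() to decide which models to show as unlocked.
unlock("wonder-woman");
unlock("batgirl");
console.log(loadUnlocked()); // logs the unlocked ids, e.g. ["wonder-woman", "batgirl"]
```

Because `localStorage` is scoped per origin, this only works when all the experiences are served from the same domain; separate native apps would need a backend or deep-link parameters instead.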