Now that I think about it, approach 2 is simply unworkable, since the “live preview” would be flipped, thereby making it very difficult and counter-intuitive for the person taking the selfie to compose the shot. I mean, that’s the whole reason the view through the user-facing camera is mirrored to begin with!
I suppose the flipped plane could be displayed just before the screen is captured, but that would be a weird UX as well.
Unless there’s a way to share an arbitrary image through the ZapWorks API via approach 1, I think I’m stuck releasing this experience with no selfie capability.
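For what it’s worth, if approach 1 can share an arbitrary image, the un-mirroring step itself is trivial: leave the live preview mirrored (so composing the shot still feels natural) and flip only the captured frame horizontally before handing it off for sharing. A minimal sketch, assuming the capture yields a raw RGBA buffer like the one returned by canvas `getImageData().data` — the function name is mine, and the actual ZapWorks capture API may differ:

```javascript
// Horizontally flip a raw RGBA pixel buffer so the captured selfie is
// un-mirrored before sharing. `pixels` is a flat buffer of
// width * height * 4 bytes (RGBA), e.g. from getImageData().data.
// (Illustrative only — not a ZapWorks API call.)
function unmirrorPixels(pixels, width, height) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (y * width + x) * 4;                 // source pixel offset
      const dst = (y * width + (width - 1 - x)) * 4;   // mirrored column
      for (let c = 0; c < 4; c++) out[dst + c] = pixels[src + c];
    }
  }
  return out;
}
```

The flipped buffer could then be drawn back into a canvas and exported (e.g. via `toDataURL()`) as the shareable image, so the final shot reads correctly even though the preview was mirrored.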
It’s interesting how a use-case so obvious to me apparently never surfaced before. I see huge potential there. In fact, I had planned a number of similar experiences. To reiterate, the experience I’m referring to is:
Imagine a company wishing to create an interactive experience using a billboard as the tracked image, allowing people to take and share selfies in front of the billboard along with AR content (which itself might also contain text). In that case, it would be critical for both the AR content and real world to be properly oriented in the final shareable image. A company certainly wouldn’t want their branding to appear backward in a bunch of social media posts.