Actual dimensions of a played scene

Hi fellow designers :dizzy:

Are the actual (real-life) dimensions of a played scene relative to the target image?
Meaning, if I print a 30cm x 50cm target image, will the played scene have 30cm x 50cm dimensions in the room (and what about the depth)?
And if I print a 100cm x 160cm target image, will the played scene have 100cm x 160cm dimensions?

If I am wrong, how do I control the actual (real-life) dimensions of a played scene?

Welcome your input :pray:

You’re right - it’s all relative. The actual physical dimensions don’t really have any relevance, so you could print the same target image as a small poster or a big one and the AR experience will “scale” to match.

If you’re trying to show something where its apparent size does matter - maybe a 3D model of an actual product so customers can see how it’d look on their desk - use your target image’s printed size as a reference. You’d need to know what size your target image was going to be printed at, but then you can use that to get the scales correct. If your target image is a postcard, 15cm across, and you want to present a laptop that’s 30cm wide, scale the model up so it’s twice as wide as the target image.
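The postcard/laptop arithmetic above can be spelled out as a tiny sketch. This is illustrative only - `scaleForRealSize` isn't a Zappar API, just the ratio written as a function:

```typescript
// Illustrative sketch, not a Zappar API: with image tracking, the target's
// printed size is the only real-world reference, so a model's scale is
// simply the ratio of its real size to the target's printed size.
function scaleForRealSize(modelWidthCm: number, targetPrintedWidthCm: number): number {
  return modelWidthCm / targetPrintedWidthCm;
}

// Postcard target 15 cm across, laptop model that should look 30 cm wide:
const laptopScale = scaleForRealSize(30, 15); // 2 - twice as wide as the target
```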


@howiemnet thanks for your comments.
I’m just trying to show a “real person size” model when scanning a trigger image on the floor or a banner on a wall.

Reading what you mentioned, could I place a 4′ x 4′ trigger and have the person at, say, 6 ft tall?

@arnon.ilani did it work for you?

Could it be possible using WebAR?

Thanks in advance.

Thanks for your input :+1:
I must say that if this is true, it’s a limiting constraint - it doesn’t make sense to me to tie a virtual object’s size to the size of a physical artifact.

I didn’t quite understand the quote above. Where do I set up/configure the scaling factor (is it done in Studio by modifying the scene dimensions manually)?

That’s kinda the point of image tracking, though - you want the virtual stuff to line up with the physical stuff. We’ve made a poster that comes alive in AR, so the printed design becomes 3D and animates, and image-tracking is the way the virtual stuff gets lined up with the physical printed image. We need all our virtual content to be shown relative to the physical print :slight_smile:

Apart from more recent phones which may have LiDAR and other gadgetry, most AR tracking at the moment relies on the camera image - a moving picture - to connect to the real world, and although we humans can make educated guesses at the physical size of things in a video, the phone has no idea. It can spot an image it recognises, and it can track how it moves in the frame and “stick” stuff to it, but it has no way to tell from the camera how near or far away it is, so it has no way to judge how large it is.

If you don’t want your AR to be tied to an image, you can use instant tracking instead, where virtual objects are superimposed so they look like they’re part of the real world, standing on the floor, or attached to a wall, but again, the phone can’t infer scale from the video feed, so there’s no way to tell it “this must look like it’s 1 metre tall”. The phone can do a good job (thanks Zap) of “locking” your 3D scene to a part of the incoming camera feed, a surface, but it has no idea if the user’s pointing at the floor, a metre away, or a desk that’s a foot away, or if they’re high up in stadium seating and tracking something football-pitch sized.

There isn’t a global scale control, but you can do it yourself with a Group node, which ends up being more flexible.

When you import your tracking image, it’ll appear 1 unit wide or 1 unit tall (whichever is the longer side) - you’ll see it in the viewport. As you add 3D things (even flat planes), you can position / rotate / scale them freely in the viewport, or you can type specific values into the properties panel.
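To make that concrete (reusing the 30cm x 50cm print from earlier in the thread, as an assumed example): since the imported image spans 1 unit along its longer side, the printed size of that side tells you how many real-world centimetres one scene unit represents:

```typescript
// Sketch using the 30 cm x 50 cm printed target from earlier in the thread.
// The imported target image spans 1 scene unit along its longer side:
const printedLongestSideCm = 50;
const cmPerSceneUnit = printedLongestSideCm; // 1 unit == 50 cm in the room

// So a life-size 180 cm person would need to be this many units tall:
const personHeightUnits = 180 / cmPerSceneUnit; // 3.6 units
```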

If you have a whole bunch of stuff that you want to scale in one go, create a Group node first and put all the objects in it: then you can scale that Group node itself to make everything it contains bigger or smaller together. Within that group you can treat all the contents’ positions as inches, metres, nautical miles - whatever you like, as long as you’re consistent. To scale the whole scene down to the sort of size you want it to appear on people’s phones (relative to the target image :wink:), you only have to scale that main “parent” group. You can carry on adding and moving things inside the group, and their coordinates will still be in whatever units you decided to call them.
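As a sketch of that group trick (plain data with made-up names, not Zappar Studio's actual node API): author everything inside the group in real-world units, then apply a single scale to the parent so it maps onto the target image:

```typescript
// Illustrative only - not Zappar Studio's node API. Content inside the
// group is positioned in centimetres, real-world units:
const contentsCm = [
  { name: "feet", y: 0 },
  { name: "head", y: 180 }, // a 180 cm tall person
];

// If the printed target's longest side is 4 ft (~122 cm) and that side
// spans 1 scene unit, one parent-group scale converts cm to scene units:
const targetLongestSideCm = 122;
const groupScale = 1 / targetLongestSideCm;

// The person ends up ~1.48 target-widths tall - real-world proportion:
const personHeightUnits = contentsCm[1].y * groupScale;
```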

Hope this helps. It can seem a bit abstract at first, but it will start to make sense.


@howiemnet :+1: for your comment.

How do you do that in Designer? I didn’t find it under Designer | Components.

My apologies @arnon.ilani - I hadn’t noticed which subforum we were in. I use Studio rather than Designer, so I’m not sure if/how groups are handled in Designer. The general points about units - everything in AR is measured in relative units rather than real-world ones - still apply, but having never used Designer I’m not sure whether the Groups approach will work for resizing things.

If no one else is able to chip in with help here, I’d drop an email to support.