Adding a 3D model to a zapcode


hey all,

I've finally managed to get videos and text loading in a zapcode… but my main goal is to have a 3D model appear…

A brief outline of what I'm trying to achieve:
I want to build a LEGO display, like a museum. In each "section" of the museum, I want a zapcode on the floor or wall which will bring up a 3D model of that particular subject (dinosaur, bones, artifact, etc.) when the app scans it.

Can someone point me in the right direction on how to do this please?

Also… I'm trying to figure out a way to get the zapcodes to auto-load when the viewer sees them. So if I have 2 or 3 codes in an area, I'd like to be able to move the Zappar app across the area and have it auto-switch between the codes depending on which is on screen. Is this possible?

Thanks in advance :slight_smile:


Hey there,

ZapWorks has some really quick and helpful tutorials and documents on how you can achieve this :slight_smile:
They have documentation on how to import 3D models, how to light them and how to make them interactive. It's helped me loads with my current 3D project.


Hey @eddpearson80,

To expand on what @Nythim mentioned (thank you :slight_smile: ), the 3D section of our documentation is definitely the right place to start. It was recently updated so there’s even more helpful information there.

Regarding bringing up a 3D model of a subject when an image is scanned, the best way to do this would be to create a separate experience for each subject with the corresponding model shown in each one, and publish each experience to a separate zapcode with its own tracking image.

You can prompt the Zappar app to re-enter scanning mode from within an experience by calling Z.Device.reset() from within a script in your experience.
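As a minimal sketch of that call from within a ZapWorks Studio script (the node name `backButton` is illustrative, not from your project):

```typescript
// ZapWorks Studio script (TypeScript).
// Hypothetical: a plane named "backButton" in the Hierarchy
// acts as an exit button for the experience.
const backButton = symbol.nodes.backButton;

backButton.on("pointerdown", () => {
    // Return the Zappar app to scanning mode so the
    // next zapcode can be picked up.
    Z.Device.reset();
});
```

The script fragment above assumes it lives inside a Studio experience, where `symbol`, node events and the `Z` global are provided by the runtime.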

Hope this helps.

All the best,


Thank you both for the responses :slight_smile: much appreciated.

However, I'm literally useless when it comes to coding…
Before I try to get my head around it, do you think my idea would be possible?


Achieving similar functionality is definitely possible, though it's important to note that the Zappar app won't automatically switch between target images. A specific action would need to be taken by the user in order to set the app back into scanning mode.

This could be done through a simple button in the experience, or within the target image node’s notseen target event. The latter would set the Zappar app back into scanning mode whenever the tracking image was outside of the camera’s view.
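To sketch the latter approach: a ZapWorks Studio script placed under the target image node can listen for its notseen event and reset the device (this assumes the Studio runtime provides `parent` and `Z` as usual; it isn't standalone code):

```typescript
// ZapWorks Studio script (TypeScript), attached under the
// target image node, so "parent" refers to that node.
parent.on("notseen", () => {
    // The tracking image has left the camera's view:
    // send the Zappar app back into scanning mode so the
    // user can scan the next zapcode.
    Z.Device.reset();
});
```

With one of these scripts in each experience, sweeping the phone across the display would let each zapcode take over as soon as the previous one's image is out of view.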

Whichever of the two methods you choose, very little code would be required, so it should be achievable :slight_smile:



Ooh, that sounds perfect! Having it trigger on the notseen event is the best option, I think… I will have a go at this tonight.

If I'm unable to do this, do you offer a paid service to set things up for people?


Hi @eddpearson80,

How did you get along with implementing the functionality?

If you'd like to discuss the Zappar team creating the experience for you, feel free to email us with some more information about the project, the time frame, etc.

All the best,