Hi, I have used Studio to load a photosphere in my cardboard headset once the zapcode is scanned. Can someone please give me an example of how to easily change the displayed photosphere so that I can transition through a series of images? Thanks!
I’m not sure if you’re familiar with this tutorial, but basically I would create states that change the material properties of the Photo Sphere (image 1 to image 2).
We’ve put together a little experience to show you a way to switch from one photo sphere to another in AR and VR modes. You can download the corresponding zpp file here.
In this project, we’ve used two photo spheres (but feel free to use as many as you want!). Within each photo sphere, we’ve inserted a plane (the red “Tap” button) and a raycaster, which tracks the direction in which the user is looking. We’ve “highlighted” the raycaster by using a target image. After launching the experience, the user can then switch from one photo sphere to the other by positioning the raycaster target onto the “Tap” button and then tapping it (both steps are needed, not just one or the other).
How does this work in Studio?
We’ve created a group called “Spheres” made up of two photo spheres “Photo Sphere1” and “Photo Sphere2”, and a button per sphere.
Each sphere has a default size of [1, 1, 1]. We’ve chosen to place the center of the first photo sphere at [0, 0, 0] and the center of the other one at [3, 0, 0]. This way, we know that the two spheres do not intersect each other. (Please note that from here on I’ll call the “position of the sphere center” the “position of the sphere” to keep it simple.)
We wanted the camera view to move from the center of one sphere to the center of the other when the user presses a “Tap” button. That’s why we’ve used one controller called “sphere_transition” with two states, “sphere 1” and “sphere 2”. Each state is associated with a specific position of the “Spheres” group. (Feel free to read the Controllers and States article for more information.)
We have set the states as indicated below:
- If a user sees the inside of “Photo Sphere1” and taps on the button => they should see the inside of “Photo Sphere2”. This means that “Photo Sphere2” needs to match the position of the camera view, which is currently matched by “Photo Sphere1”.
The two photo spheres are inside the “Spheres” group. It’s important to understand that, to switch from one sphere to the other, we need the “Spheres” group to move while we keep the user/camera view at its position [0, 0, 0].
Initially, within the “Spheres” group, “Photo Sphere1” is centered at [0, 0, 0] and “Photo Sphere2” at [3, 0, 0]. To move “Photo Sphere1” out of the camera view and make “Photo Sphere2” match the position of the camera view ([0, 0, 0]), the “Spheres” group needs to be moved by -3 along the X axis. That’s what the “sphere 2” state does. As a result, “Photo Sphere1” is then centered at [-3, 0, 0] and “Photo Sphere2” at [0, 0, 0].
- Whereas if the user taps the button when in “Photo Sphere2”, the “Spheres” group needs to be moved back to its original position, with “Photo Sphere1” at [0, 0, 0] and “Photo Sphere2” at [3, 0, 0]. That’s what the “sphere 1” state does.
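To make the arithmetic above concrete, here is a plain-TypeScript sketch of the position logic (this runs outside Studio; the constant and function names are ours, not part of the project):

```typescript
// Distance between the two sphere centers along the X axis (3 units in the project above).
const SPHERE_SPACING = 3;

type SphereState = "sphere 1" | "sphere 2";

// Returns the position the "Spheres" group should take for a given state,
// so that the matching sphere's center lands on the camera view at [0, 0, 0].
function sphereGroupPosition(state: SphereState): [number, number, number] {
  // "sphere 1": group at the origin  -> Photo Sphere1 centered at [0, 0, 0].
  // "sphere 2": group shifted -3 on X -> Photo Sphere2 centered at [0, 0, 0].
  return state === "sphere 1" ? [0, 0, 0] : [-SPHERE_SPACING, 0, 0];
}

console.log(sphereGroupPosition("sphere 2")); // [ -3, 0, 0 ]
```

In Studio the equivalent of this lookup is simply the group position stored in each state of the “sphere_transition” controller.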
Please note that, in the pictures above, I’ve placed the “Spheres” group below a tracking image “circle-visual.zpt” node, so that you can observe how the spheres are positioned in the coordinate system. To position the camera view back inside one of the spheres, you’ll need to move the “Spheres” group below the “attitudeOrient” node, as you can see below.
I hope this helps but if you have any questions, feel free to ask!
Virtual tour and VR cardboard button
We’ve created a new version of the project. You can download the corresponding zpp file here.
Description of the new version of the experience:
The experience now starts in the traditional (non-VR) view. In this view, when you tap on a red button, you will move from one sphere to the next (the crosshair has been removed to avoid confusion).
In the traditional view, we’ve added a button so that you can switch from this view (headset mode not activated) to the VR view (headset mode activated). When in the VR view, the crosshair needs to be pointing at the red button when you tap on the screen so that you can navigate from one sphere to the other.
Updates made in the Studio project:
We’ve added a black button (VR_switchbutton.png) to switch from the traditional view to the VR view.
We’ve created a mode_controls controller with 2 states:
- headset mode off as the default state, as we now want to start with the traditional view. Within this controller, the button to switch to headset mode is enabled and visible, the raycasters are disabled and the crosshair (raycaster_target) is not visible.
- headset mode on: the button to switch to headset mode is disabled and invisible, the raycasters are enabled and the crosshair (raycaster_target) is visible.
The headset mode off state is activated when the experience is launched (as it’s been set up as the default state) and in the headset Manager’s headsetleave() script (when the user leaves the VR view).
The headset mode on is activated in the headset Manager’s headsetenter() script (when the headset mode is launched).
- We’ve added a pointerdown() script under each sphere’s switch button. Thus, when the user taps on a red button in the traditional view, they move to the other sphere.
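As a rough model of what the two mode_controls states toggle, here is a plain-TypeScript sketch (outside Studio; the field names are illustrative stand-ins for the node properties changed in each state, not API names):

```typescript
// Illustrative model of the "mode_controls" states described above.
interface ModeFlags {
  switchButtonEnabled: boolean; // the black VR switch button (VR_switchbutton.png)
  raycastersEnabled: boolean;   // the raycasters used for gaze navigation
  crosshairVisible: boolean;    // the crosshair (raycaster_target)
}

// headset mode on  -> hide the switch button, enable raycasters, show the crosshair.
// headset mode off -> show the switch button, disable raycasters, hide the crosshair.
function modeControls(headsetOn: boolean): ModeFlags {
  return headsetOn
    ? { switchButtonEnabled: false, raycastersEnabled: true, crosshairVisible: true }
    : { switchButtonEnabled: true, raycastersEnabled: false, crosshairVisible: false };
}

// In the project, headsetenter() activates the "on" state and headsetleave() the "off" state.
console.log(modeControls(false).switchButtonEnabled); // true
```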
Hello, I just tried it. In VR mode, I cannot navigate from the first panorama to the other.
Is there anything I need to add in VR mode to have the ability to navigate from the first panorama to the other?
As we received your questions through support as well, we’ve replied there.
This is fantastic, and amusingly enough /almost/ exactly what I wanted. I can’t wait to give it a try actually! The question I have, however, is about setting up buttons for each sphere. I actually have made a little silly walk-around zap that lets you move around from sphere to sphere. I did what the first user suggested, changing materials on the sphere. However, I came across the problem of “buttons.” If I change a sphere, the same buttons are there. I’ve worked around this by turning buttons on or off depending on the sphere materials. If I move the spheres… I feel like the buttons from the previous sphere will still be there if I look in their direction. This is not desirable either.
My current example
We’re glad to hear you found here what you were looking for!
Could you please send us the zpp file of your project so that we can take a look and point you in the right direction?
Hello, I’m new to Zapworks, have a gazillion ideas… and I’m preparing some “demos” to present to potential clients… I have a gazillion questions as well LOL!!!
First off, thx for your help and tuts, they are really helpful…
I have a couple of questions regarding this particular project…
I can’t seem to understand the raycaster, where it is in the project, and what it does exactly…
If I understand correctly, if I have multiple images, I need to create multiple sphere, and controller states, correct?
How do you manage to get both images to point in the right direction? What I’m asking is: I tried to add the Headset manager to your 360 tutorial, and the images were pointing in different directions.
My headset does not have a button (I’m using a Homido), so can I get the triggers to work by looking at the target for X milliseconds?
And lastly, can I use portions, or certain areas, of the images to trigger AR (e.g. as you look around, different elements get animated, or videos start playing in picture frames on the wall)?
Thank you in advance for your help!!
Thanks for letting us know you find the documentation and tutorials helpful!
To answer your questions:
That’s correct. Considering the project posted above, you would need one photosphere per image and one controller state per photosphere.
To make sure the user is presented with a consistent view (default view) at the start of an experience, we use the attitudeOrient’s resetHeading() function, as explained in the Gyro-oriented Environments documentation.
In the project that we’ve posted in this thread, we’ve used the resetHeading() function in the ‘show’ script to ensure the view is set to its default value as soon as the experience is launched.
Note: If you’d like to see that default view within Studio, you can use the ‘Reset view’ button in Studio:
- Yes, in general that’s what we use raycasters for in headset mode.
A raycaster can be set up so that when it intersects with one or more objects, an action (which can vary depending on the object intersected) is triggered.
To do so, you’ll need to create a colliderTag for the raycaster and add it as a tag to each of the objects you’ll use to trigger an action. You will then need to add an intersectionenter script to the raycaster in order to define the actions (e.g. playing a timeline, activating a state) that should be triggered depending on the object intersected.
You should find all the information you need in the Raycaster documentation.
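The tag-based dispatch described above can be sketched in plain TypeScript like this (the tag names and actions are made up for illustration; in Studio the lookup would live in the raycaster’s intersectionenter script):

```typescript
// Sketch of dispatching an action based on which tagged object the raycaster hit.
// Tag names and action strings here are illustrative, not from the project above.
const actionsByTag: Record<string, () => string> = {
  tap_button_sphere1: () => "activate state 'sphere 2'",
  tap_button_sphere2: () => "activate state 'sphere 1'",
  video_frame: () => "play video timeline",
};

// Analogous to an intersectionenter handler: look up the hit object's tag
// and run the matching action, if any.
function onIntersectionEnter(tag: string): string {
  const action = actionsByTag[tag];
  return action ? action() : "no action for this tag";
}

console.log(onIntersectionEnter("tap_button_sphere1")); // activate state 'sphere 2'
```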
- What you’re looking to do here could be done using a raycaster as well.
Feel free to ask if you have any questions.
Thank you!! Finally got the raycasters to sink into my brain.
I’m almost done with my first experience, where a user can navigate through different rooms in a house!!
Very robust software!!!
RoomWalk.zpp (1.3 MB)
So there’s the .zpp of what I have. Sorry it took so long. Real work took over my life. =P Yay, production.
It all works pretty well for now as you can see from scanning the zap image I have up there. =) It’s just not clean to me is all.
So I just discovered symbols, and I was personally hoping to use those as my rooms. The idea would be to build a symbol that is a photosphere with room button attachments, and I want to be able to throw in some content like things you can click on and read, things that will activate videos… yada yada. I was hoping that by having a “symbol” for each room, I wouldn’t need to change so many things in my current controller state under root. That, and for design purposes, it would be easier to design a room without having to hide the other rooms in the current experience. Does that make sense? That way I can say “Ah, I want to edit room one!”, open that symbol, do the edits, and not have to be concerned with the hidden trap door, the other 4 irrelevant buttons, and whatever information signs and videos I might be playing in the other 3 rooms.
Now, I haven’t implemented /any/ of this in the program I just uploaded for you. It was just a thought as to how to make it a bit cleaner, and how to build it so I can edit each sphere individually without distractions from the others.
Yes, implementing your project using a subsymbol per photosphere would work as well.
I went through the 360 photosphere tutorial and created the project. The only problem is that when I view in VR mode the left and right screens are flipped, so the VR mode is unusable. I must have made a mistake along the way. Any suggestions for flipping the left and right screens so it works well with VR goggles? Thanks in advance.