I'm checking out WebVR frameworks and looking for a way to change the projection settings / camera settings for the left and right eye. Is this somehow possible, or is it done automatically by the browser?
I checked the A-Frame framework from Mozilla and react-vr, but I could not find such options.
The camera projections are provided by the browser through the WebVR API and are specific to each headset. It's probably something you don't want to change.
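For illustration, here is a minimal sketch (plain WebVR 1.1, no framework) of where those matrices come from: the browser fills in one projection matrix per eye every frame, and content only reads them, it never configures them.

navigator.getVRDisplays().then(function (displays) {
  var display = displays[0];
  var frameData = new VRFrameData();

  display.requestAnimationFrame(function onFrame() {
    // Populated by the browser from the connected headset's optics/configuration.
    display.getFrameData(frameData);
    console.log('left eye projection:', frameData.leftProjectionMatrix);
    console.log('right eye projection:', frameData.rightProjectionMatrix);
    display.requestAnimationFrame(onFrame);
  });
});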
Related
I am trying to change the value of eyeSeparation in A-Frame code. How can I do it?
Referring to the three.js documentation:
effect = new THREE.StereoEffect( renderer );
effect.eyeSeparation = 0;
This would result in no difference in rendering between the left and right images. I guessed that the A-Frame camera documentation at https://aframe.io/docs/0.8.0/components/camera.html
would explain how to change the eyeSeparation setting, but I cannot find a way to do it.
Best,
Shunji
Eye separation is provided by the WebVR / WebXR APIs and is configured by the user through the headset, either in software (e.g. the Oculus settings) or in hardware (the Rift slider or Vive dial). It's not something that content can control. StereoEffect is not used by A-Frame or by the three.js VR render path; it's an old module that predates WebVR.
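As an illustration of the "provided by the API" part, here is a small read-only sketch (WebVR 1.1); the values in the comments are made-up examples, and the page can only read them, not set them:

navigator.getVRDisplays().then(function (displays) {
  var display = displays[0];
  // Per-eye offsets reflect the user's IPD setting on the headset.
  var left = display.getEyeParameters('left').offset;   // e.g. [-0.032, 0, 0]
  var right = display.getEyeParameters('right').offset; // e.g. [0.032, 0, 0]
  console.log('eye separation (m):', right[0] - left[0]);
});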
I'm really new to A-Frame / WebVR. When I'm on my mobile in VR mode, how can I prevent the head-tracked control / camera movement? For example, when I switch to VR mode I want to look around with my USB mouse, just like in "normal mode", rather than with the device's sensors. Is that possible?
Thanks.
I don't think this is currently possible. Another potential issue with this approach is that it will likely cause simulator sickness:
The most important guideline in designing for virtual reality is to always maintain head tracking. Never stop tracking the user’s head position inside of the application. Even a short pause in head tracking will cause some users to feel ill.
https://designguidelines.withgoogle.com/cardboard/designing-for-google-cardboard/physiological-considerations.html
That said, if your purpose for this functionality is spectating on a 2D screen, A-Frame 0.8.0 allows you to add a spectator camera:
https://aframe.io/docs/0.8.0/components/camera.html#properties
There are no tutorials for this yet, but you could try asking another question here or on the A-Frame Slack about how to get it up and running.
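If you do go the spectator-camera route, an untested sketch based on the camera docs linked above might look like this (verify the exact property names against the 0.8.0 docs; the position is an arbitrary example):

// A second, non-active camera marked as spectator renders the 2D screen view
// while the headset camera keeps normal head tracking.
var spectator = document.createElement('a-entity');
spectator.setAttribute('camera', 'spectator: true; active: false');
spectator.setAttribute('position', '0 1.6 3');
document.querySelector('a-scene').appendChild(spectator);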
Hi, I'm really struggling to find an answer to this. I've made a basic 3D environment which the user can move around. However, inside GearVR it doesn't appear that you can move around the environment by default; you can only turn to look around from a fixed position.
Is there any way this is achievable, either by using the trackpad on the GearVR or a bluetooth controller?
I've not tried the latest update, but check out this component --
https://github.com/chenzlabs/gearvr-controls
GearVR has the same locomotion constraints as mobile/polyfill. Common options include A-Frame's checkpoint-controls (teleporting to a point via gaze) and touch-controls (tap to move).
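As a very rough, unverified sketch of the checkpoint approach (the attribute names are assumptions taken from the checkpoint-controls / aframe-extras READMEs and may differ between versions, so treat this as something to check, not a recipe):

// Let the camera rig travel between checkpoints; a gaze cursor on the camera
// is also needed so a checkpoint can be selected.
var rig = document.querySelector('[camera]');
rig.setAttribute('checkpoint-controls', '');

// Mark a spot in the scene the user can travel to.
var checkpoint = document.createElement('a-entity');
checkpoint.setAttribute('checkpoint', '');
checkpoint.setAttribute('geometry', 'primitive: cylinder; radius: 0.5; height: 0.01');
checkpoint.setAttribute('position', '0 0 -4');
document.querySelector('a-scene').appendChild(checkpoint);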
I'm working on a Daydream app using the Google VR SDK/NDK. To submit the app to Google Play, I need a 360-degree stereo photosphere. I've seen directions for creating this with Unity, but is there any way to create this without Unity?
I've taken a screenshot of the app in stereo mode, but I don't think that will satisfy the requirement.
Google doesn't provide any tools to capture in-app photospheres in non-Unity apps at this time. Some devs produce photospheres in modeling apps like Maya and Blender.
You could always cheat and make your "Daydream 360 degree stereoscopic image" in Photoshop. Just use the same image twice, once on top and once on the bottom.
I think others have already done this, because I've noticed a few wrong-looking previews in the store where, if I close one eye, parts of the image disappear.
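If you'd rather script the "same image twice" trick than do it in Photoshop, a small browser sketch can stack a mono equirectangular image into the over-under layout (the file name here is hypothetical):

var img = new Image();
img.src = 'photosphere-mono.jpg'; // hypothetical mono equirectangular capture
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height * 2;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);          // top half: "left eye"
  ctx.drawImage(img, 0, img.height); // bottom half: "right eye" (identical, so zero parallax)
  // Export with canvas.toDataURL('image/png') and save the result as the store image.
};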
If you change your mind and make one with Unity, this plugin worked nicely for me: https://www.assetstore.unity3d.com/en/#!/content/38755
My HTC Vive is set up in a different room to my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers
This is not yet released, but we're working on tools to record the camera and controllers, output the recording to a file, and then load it up on any device and replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.