I am trying to change the value of eyeSeparation in A-Frame coding. How can I do it?
Referring to the documents of three.js:
effect = new THREE.StereoEffect( renderer );
effect.eyeSeparation = 0;
This would result in no difference in rendering between the left and right images. I expected the camera documentation at https://aframe.io/docs/0.8.0/components/camera.html to describe how to change the eyeSeparation setting, but I cannot find a way to do it.
Best,
Shunji
Eye separation is provided by the WebVR / WebXR APIs and is configured by the user through the headset, either in software (e.g. the Oculus settings) or in hardware (the Rift slider or the Vive dial). It's not something that content can control. StereoEffect is not used by A-Frame or by the three.js VR path; it's an old module that predates WebVR.
Is there any way to zoom in on a video with vlcj, like VLC's built-in feature? I think it is called magnify or interactive zoom.
It is under Tools >> Effects and filters >> Video >> Geometry >> Magnify.
I use vlcj with JavaFX 9, rendering frames to a canvas with an EmbeddedMediaPlayer.
I also tried adding the magnify filter to the MediaPlayerFactory, like new MediaPlayerFactory("--video-filter=magnify"), but I have no idea how to control this feature or set the zoom level, since "-zoom 2.0" is not working.
I tried cropping, but that hasn't worked for me, or maybe I tried it really badly.
Thank you for your help!
As a bare minimum, this should work for zooming:
mediaPlayer.video().setScale(factor);
where factor is a float like 2.0f for double size, 0.5f for half, and so on.
In my experience, it can be a bit glitchy, and you probably do need to use it in conjunction with crop - and by the way, cropping does work.
But if you want an interactive zoom, then you build that yourself, invoking setCrop and setScale in response to UI interactions you control.
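As a rough sketch of what "build it yourself" could look like: the helper below is hypothetical (only setScale is confirmed above), and it assumes LibVLC's "WxH+X+Y" crop-geometry string format. Given the source video size, a focus point (e.g. where the user clicked), and a zoom factor, it computes the region to crop so that scaling magnifies around that point.

```java
// Hypothetical helper for an interactive zoom in a vlcj application.
// Computes a LibVLC crop-geometry string of the form "WxH+X+Y" (an assumption
// about the accepted format) for the region to magnify.
public class ZoomHelper {

    public static String cropGeometry(int videoWidth, int videoHeight,
                                      int focusX, int focusY, float zoom) {
        // The visible region shrinks as the zoom factor grows.
        int cropW = Math.round(videoWidth / zoom);
        int cropH = Math.round(videoHeight / zoom);
        // Centre the region on the focus point, clamped to the frame edges.
        int x = Math.max(0, Math.min(focusX - cropW / 2, videoWidth - cropW));
        int y = Math.max(0, Math.min(focusY - cropH / 2, videoHeight - cropH));
        return cropW + "x" + cropH + "+" + x + "+" + y;
    }
}
```

You would then pass the result to the player's crop method together with setScale(zoom) whenever the user clicks or scrolls, recomputing on every interaction.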
For the picture-in-picture type of zoom, if you're using VLC itself you do something like this:
vlc --video-filter=magnify --avcodec-hw=none your-filename.mp4
It shows a small overlay where you can drag a rectangle and change the zoom setting.
In theory, that would have been possible to use in your vlcj application by passing arguments to the MediaPlayerFactory:
List<String> vlcArgs = new ArrayList<String>();
vlcArgs.add("--avcodec-hw=none");
vlcArgs.add("--video-filter=magnify");
MediaPlayerFactory factory = new MediaPlayerFactory(vlcArgs);
The problem is that it seems like you need "--avcodec-hw=none" (to disable hardware decoding) for the magnify filter to work - BUT that option is not supported (and does not work) in a LibVLC application.
So unfortunately you can't get that native "magnify" working with a vlcj application.
A final point - you can actually enable the magnify filter if you use LibVLC's callback rendering API (in vlcj this is the CallbackMediaPlayer) as this does not use hardware decoding. However, what you would see is the video with the magnify overlays painted on top but they are not interactive and your clicks will have no effect.
So in short, there's no satisfactory solution for this really.
In theory you could build something yourself, but I suspect it would not be easy.
I'm checking WebVR frameworks and looking for a way to change the projection settings / camera settings for the left and right eye. Is it somehow possible, or is this done automatically by the browser?
I checked the A-Frame framework from Mozilla and React VR, but I could not find such options.
The camera projections are provided by the browser through the WebVR API and are specific to each headset. It's probably something you don't want to change.
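To illustrate: with the (now deprecated) WebVR 1.1 API the browser hands content a per-eye projection matrix (e.g. VRFrameData.leftProjectionMatrix, a column-major Float32Array of 16 values) that content can inspect but not change. As a sketch under the assumption of a symmetric projection, element m[5] equals 1 / tan(fov/2), so you can recover the vertical field of view the headset was given:

```javascript
// Sketch: recover the vertical field of view from a per-eye projection
// matrix (column-major, 16 elements), assuming a symmetric frustum where
// m[5] = 1 / tan(fov / 2). Real headset projections are often asymmetric.
function verticalFovDegrees(projectionMatrix) {
  return 2 * Math.atan(1 / projectionMatrix[5]) * 180 / Math.PI;
}
```

This is read-only introspection; the matrices themselves come from the headset's configuration, not from the page.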
I'm really new to A-Frame / WebVR. I'd like to ask: when I am on my mobile in VR mode, how can I prevent the camera from following head movement? For example, when I switch to VR mode I want to look around with my USB mouse, just like in "normal mode", and not with the device's sensors. Is that possible?
Thanks.
I don't think this is currently possible. Another potential issue with this approach is that it will likely cause simulator sickness:
The most important guideline in designing for virtual reality is to always maintain head tracking. Never stop tracking the user’s head position inside of the application. Even a short pause in head tracking will cause some users to feel ill.
https://designguidelines.withgoogle.com/cardboard/designing-for-google-cardboard/physiological-considerations.html
That said, if your purpose for this functionality is spectating on a 2D screen, A-Frame 0.8.0 allows you to add a spectator camera:
https://aframe.io/docs/0.8.0/components/camera.html#properties
There are no tutorials for this yet, but you could try asking another question here or on the A-Frame Slack about how to get it up and running.
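As an untested sketch based on the camera docs linked above (the positioning values here are arbitrary): a second, non-active camera flagged as spectator renders a third-person view to the 2D display while the user in the headset sees through the default camera.

```html
<a-scene>
  <!-- The headset user looks through the default active camera. -->
  <a-entity camera look-controls wasd-controls></a-entity>
  <!-- Sketch: a fixed spectator camera rendering the 2D screen in VR mode. -->
  <a-entity camera="active: false; spectator: true"
            position="0 3 5" rotation="-20 0 0"></a-entity>
</a-scene>
```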
My HTC Vive is set up in a different room to my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers
This is not yet released, but we're working on tools to record the camera and controllers, output the recording to a file, and then load it up on any device and replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.
Presently, I have a Rails CMS that allows a user to upload an image. Before saving it to the database, the page displays a 2D preview of their image. After saving, the preview is removed and the image is instead rendered to the user as a 3D panorama inside an iframe.
Ideally, I would like to create a whitelist of browsers that allow the user to view the saved image as a 3D panorama - otherwise the image remains 2D.
My question is that, given A-Frame does not support Internet Explorer (and potentially earlier versions of other browsers), how do I detect which specific browser a user is viewing the website with?
I've read through the Utils documentation and this thread on device detection, but neither yields any insights.
Any advice would be appreciated :)
You can use navigator.userAgent to determine browser information. There are tons of utilities for it online, not only specific to A-Frame.
A-Frame has several utilities, currently not documented:
AFRAME.utils.isMobile
AFRAME.utils.isIOS
AFRAME.utils.isGearVR
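As a minimal sketch of the user-agent approach for your whitelist (the regular expression is an assumption based on common Internet Explorer user-agent strings, not an A-Frame API):

```javascript
// Sketch: decide whether to render the 3D panorama based on the user agent.
// Internet Explorer identifies itself with "MSIE " (versions <= 10) or
// "Trident/" (version 11); everything else stays on the whitelist here.
function supportsPanorama(userAgent) {
  var isInternetExplorer = /MSIE |Trident\//.test(userAgent);
  return !isInternetExplorer;
}

// In the page you would call it with the real value:
// if (supportsPanorama(navigator.userAgent)) { /* render 3D panorama */ }
```

In practice you would likely extend the check with the old-browser versions you care about, or lean on a maintained parsing library instead of hand-rolled regexes.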