I'm trying to make an A-Frame scene using a landscape that the user can navigate, but I have had a lot of issues with the aframe-extras nav mesh. Even after downloading the aframe-extras repo, installing, building the bundle, and then running http://localhost:8000/examples/castle, I get the same error as in my scene:
Uncaught TypeError: this.zones[t] is undefined
getClosestNode bundle.js:8722
getNode bundle.js:57166
tick bundle.js:55425
Aframe 57
I've looked into other posts, like "Uncaught TypeError using A-Frame 1.0.4 + A-Frame Extras nav-mesh and movement-controls", but their error is different from mine.
Before I go on the journey to solve this issue, can anyone weigh in on whether I'd just be better off using a different program/approach to constrain my camera to a 3D surface that is a landscape model?
If it is just a landscape, you are better off using a physics system like Ammo.js.
ref:
https://github.com/n5ro/aframe-physics-system/blob/master/AmmoDriver.md
https://www.youtube.com/watch?v=SKYfYd3pk4I
For your use case, as far as the landscape is concerned, you can try giving the camera (or third-person camera rig) an ammo-body of type kinematic, while the landscape and other 3D models get an ammo-body of type static; see the sketch below.
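A minimal sketch of that setup, assuming the aframe-physics-system Ammo driver linked above (the #landscape asset and the rig structure are placeholders, not something from the original question):

<a-scene physics="driver: ammo">
  <!-- the landscape never moves: static body with a mesh-shaped collider -->
  <a-entity gltf-model="#landscape"
            ammo-body="type: static"
            ammo-shape="type: mesh"></a-entity>

  <!-- kinematic rig: collides with the ground but is driven by the
       movement controls rather than by the physics engine -->
  <a-entity id="rig" movement-controls position="0 1.6 0"
            ammo-body="type: kinematic"
            ammo-shape="type: capsule">
    <a-entity camera look-controls></a-entity>
  </a-entity>
</a-scene>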
Try finding out more about physics systems; there are a lot of tutorials and plenty of documentation available.
Is there any way to zoom into a video with vlcj, like the feature VLC has? I think it is called magnify or interactive zoom.
In VLC it is under Tools >> Effects and Filters >> Video >> Geometry >> Magnify.
I use vlcj with JavaFX 9, rendering frames to a canvas with an EmbeddedMediaPlayer.
I also tried adding the magnify filter to the MediaPlayerFactory, as in new MediaPlayerFactory("--video-filter=magnify"), but I have no idea how to navigate this feature or set the zoom level, since "-zoom 2.0" is not working.
I tried cropping, but that hasn't worked for me, or I did it really badly.
Thank you for your help!
As a bare minimum this should work for zooming:
mediaPlayer.video().setScale(factor);
where factor is a float: 2.0f for double size, 0.5f for half, and so on.
In my experience, it can be a bit glitchy, and you probably do need to use it in conjunction with crop - and by the way, cropping does work.
But if you want an interactive zoom, then you build that yourself, invoking setCrop and setScale in response to whatever UI interactions you control.
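A minimal sketch of what that might look like, assuming the vlcj 4 API; the zoom step, the bounds, and the example crop geometry are this example's own inventions, not anything vlcj prescribes:

import uk.co.caprica.vlcj.player.embedded.EmbeddedMediaPlayer;

class ZoomController {
    private final EmbeddedMediaPlayer mediaPlayer;
    private float zoom = 1.0f;

    ZoomController(EmbeddedMediaPlayer mediaPlayer) {
        this.mediaPlayer = mediaPlayer;
    }

    // Bind this to whatever gesture you like, e.g. a scroll listener on the canvas.
    void zoom(boolean in) {
        zoom = Math.max(0.25f, zoom + (in ? 0.25f : -0.25f));
        mediaPlayer.video().setScale(zoom); // setScale(0) restores fit-to-window
        // Optionally combine with a crop to keep a region of interest centred:
        // mediaPlayer.video().setCropGeometry("640x360+320+180");
    }
}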
For the picture-in-picture type of zoom, if you're using VLC itself you do something like this:
vlc --video-filter=magnify --avcodec-hw=none your-filename.mp4
It shows a small overlay where you can drag a rectangle and change the zoom setting.
In theory, that would have been possible to use in your vlcj application by passing arguments to the MediaPlayerFactory:
List<String> vlcArgs = new ArrayList<String>();
vlcArgs.add("--avcodec-hw=none");      // disable hardware decoding
vlcArgs.add("--video-filter=magnify"); // enable the magnify filter
MediaPlayerFactory factory = new MediaPlayerFactory(vlcArgs);
The problem is that it seems like you need "--avcodec-hw=none" (to disable hardware decoding) for the magnify filter to work - BUT that option is not supported (and does not work) in a LibVLC application.
So unfortunately you can't get that native "magnify" working with a vlcj application.
A final point - you can actually enable the magnify filter if you use LibVLC's callback rendering API (in vlcj this is the CallbackMediaPlayer), as this does not use hardware decoding. However, what you would see is the video with the magnify overlays painted on top, but they are not interactive and your clicks will have no effect.
So in short, there's no satisfactory solution for this really.
In theory you could build something yourself, but I suspect it would not be easy.
I'm using A-Frame in an app for the Pico Goblin device.
Unsurprisingly, I'm getting the "No DPDB device match." error and the camera moves at the wrong speed and appears upside down.
I realise this is because this device is unlikely to be in the official webvr-polyfill DPDB.json.
Is it possible to add devices to this file and use it within A-Frame?
Thanks
This is the repo where the dpdb file lives: https://github.com/immersive-web/webvr-polyfill-dpdb
You can indeed submit PRs there. A-Frame will pick it up in the next version.
In the meantime you can make a custom build of A-Frame that points to a fork of the webvr-polyfill that fetches your own DPDB.json; a rough sketch of the steps follows.
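Something along these lines, where the fork URLs are placeholders and the exact build script name should be checked against the A-Frame version you are on:

# clone your A-Frame fork (placeholder URL)
git clone https://github.com/your-user/aframe.git
cd aframe
# edit package.json so the webvr-polyfill dependency points at your fork, e.g.
#   "webvr-polyfill": "github:your-user/webvr-polyfill#your-branch"
npm install
npm run dist   # writes the custom builds into dist/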
For Daydream games, Google has a requirement that says:
"Cursor displays at the same depth as objects being targeted"
Description here: https://developers.google.com/vr/distribute/daydream/design-requirements#UX-C4
I have tested the Google demos and tried implementing this myself, but I have no idea how to proceed. Does anyone have an idea how to implement this in Unity using the default scripts provided by Google in the demos?
Your interactive objects have probably been placed too far from the camera. Since the cursor's maximum distance from the controller is about 0.75 meters, it will not be placed on your item.
Moving the objects closer to the camera should fix this.
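For reference, the requirement itself boils down to drawing the cursor at the depth of the raycast hit. A minimal Unity sketch of that idea - this is not Google's script, and the class name, fields, and scale factor are all this example's own assumptions:

using UnityEngine;

// Keeps a cursor object at the depth of whatever the pointer ray hits,
// scaling it with distance so it keeps a roughly constant angular size.
public class DepthMatchedCursor : MonoBehaviour
{
    public Transform pointer;          // controller or camera transform
    public float defaultDistance = 2f; // rest depth when nothing is hit

    void Update()
    {
        RaycastHit hit;
        float depth = Physics.Raycast(pointer.position, pointer.forward, out hit)
                ? hit.distance
                : defaultDistance;
        transform.position = pointer.position + pointer.forward * depth;
        transform.localScale = Vector3.one * depth * 0.02f;
    }
}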
Hi, I'm really struggling to find an answer to this. I've made a basic 3D environment which the user can move around. However, inside GearVR, by default it doesn't appear that you can move around the environment, only turn to look on a fixed axis.
Is there any way this is achievable, either by using the trackpad on GearVR or a bluetooth controller?
I've not tried the latest update, but check out this component --
https://github.com/chenzlabs/gearvr-controls
GearVR's locomotion constraints are the same as for mobile/polyfill. Common options include A-Frame's checkpoint-controls (teleporting via gaze) and touch-controls (tap to move); a sketch of the checkpoint approach follows.
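A minimal sketch of the checkpoint approach, using component names from aframe-extras as I recall them, so verify against the version you have installed:

<!-- the rig teleports to any entity that carries the checkpoint component -->
<a-entity movement-controls="controls: checkpoint"
          checkpoint-controls="mode: teleport">
  <a-entity camera look-controls position="0 1.6 0">
    <a-cursor></a-cursor>
  </a-entity>
</a-entity>
<a-cylinder checkpoint radius="0.5" height="0.1" position="5 0 5"></a-cylinder>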
I am working on a 3D product demo where I need to show the product's reflection in a mirror. I tried the example http://threejs.org/examples/webgl_m and made a replica of it before actually implementing it in my project. When I publish it to the web, nothing is displayed on the screen, and the inspector shows the error "TypeError: this.renderTarget.texture is undefined" in mirror.js.
How can I resolve it? Thanks.