I have a requirement to build multiple scenes in WebVR with A-Frame. Each scene will have a button which, when clicked, loads a new scene with a different background.
What is the recommended way to build scenes in A-frame?
1. Create just one scene and swap the entities at runtime. Each entity can correspond to a button, background, etc.
2. Create multiple pages (e.g. index.html per scene), each having its own scene, and load each page on a click event.
Approach (1) seems to be the preferred one, as the second approach would mean the browser's 'Back' button becomes enabled when new pages load, which is undesirable and hurts the user experience in VR.
Can anyone confirm that approach (1) is preferred?
Definitely a better experience if you can keep everything in a "single-page app" (1) and swap out entities. Especially if you are doing something as simple as swapping a background. Only a couple browsers have implemented proper in-VR link traversal, and the link traversal UX at the moment is non-existent.
https://aframe.io/docs/0.6.0/guides/building-a-360-image-gallery.html
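For reference, a minimal sketch of approach (1), along the lines of that guide (untested; the asset IDs, image paths, and the box standing in for a button are all placeholders):

<a-scene>
  <a-assets>
    <img id="sky1" src="sky1.jpg">
    <img id="sky2" src="sky2.jpg">
  </a-assets>

  <!-- The background to swap at runtime. -->
  <a-sky id="background" src="#sky1"></a-sky>

  <!-- Gaze cursor so the "button" can be clicked in VR. -->
  <a-camera><a-cursor></a-cursor></a-camera>

  <a-box id="sceneButton" position="0 1.5 -3" color="tomato"></a-box>
</a-scene>

<script>
  // Swap the sky texture instead of navigating to a new page.
  document.querySelector('#sceneButton').addEventListener('click', function () {
    document.querySelector('#background').setAttribute('src', '#sky2');
  });
</script>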
I'm looking for the most concise way to deal with focus in an application which renders a map in a canvas component. You can pan the map location using arrow keys or ASWD keys. So far, I've been giving the canvas focus at startup and handling key pressed events via canvas.setOnKeyPressed().
This works fine, but I've always known a problem was on the horizon once other components entered the picture. As soon as you interact with another component, it gains focus, and you're unable to scroll around the canvas map. I can prevent this with some components like Hyperlinks or Buttons (I don't need tab-navigation) with something like this:
sidePanel.getChildren().forEach(node -> node.setFocusTraversable(false));
But when we get to things like TextArea or TextField, those do need to hold focus while they're being edited. And I'll need some way to return focus to the canvas (or at least unfocus those components) without being an annoyance to the user. (I don't want to have to click the canvas for it to regain focus.)
The options I see for returning focus back to the canvas after the user is done with those fields seem to be:
1. Add a key handler (e.g. ESC or ENTER) on EACH of these components which returns focus to the canvas. Maybe not so concise, and a bit of a pain... it also feels fragile; if I miss something and the canvas loses focus, the scheme fails, and I need a 100% reliable solution.
2. Extend each of these components and add similar code to return focus to the canvas. Even nastier, and it requires using custom components in Scene Builder, which is not ideal.
3. Add a global event handler on the Scene and forward events to the controller which owns the canvas. I believe an event filter would accomplish this - but on the other hand, if the user is simply using arrow keys to move around a TextArea, I wouldn't want the canvas map to move! To solve that, the global event filter could perhaps ignore ASWD and arrow keypresses while focus is on certain types of components (see the sketch after this list). Is this worth trying, or am I neglecting a problem this would create?
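A rough sketch of option 3 (untested; panMap is a placeholder for the map-panning logic):

import javafx.scene.control.TextInputControl;
import javafx.scene.input.KeyEvent;

// In the controller, once the scene exists:
scene.addEventFilter(KeyEvent.KEY_PRESSED, event -> {
    // Text inputs keep their own arrow-key behavior.
    if (scene.getFocusOwner() instanceof TextInputControl) {
        return;
    }
    switch (event.getCode()) {
        case UP: case DOWN: case LEFT: case RIGHT:
        case A: case S: case W: case D:
            panMap(event.getCode()); // placeholder for the map-panning method
            event.consume();         // keep the event from reaching other nodes
            break;
        default:
            break;
    }
});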
Are there any other simple options out there that I've missed - and what would you suggest as the best option here? I'd like an automatic solution that doesn't require remembering to add some workaround code every time a UI component is added.
I'm using A-Frame and I'm trying to figure out how to easily support multiple types of controllers at once (Oculus Touch, HTC Vive controllers, and Windows Mixed Reality controllers), preferably with controller models rendered in the scene and with lasers that would allow the user to click on things.
How do I do this?
I figured out how to do this, so here's my solution.
In your HTML, you can add these two entities to create the controllers (they go inside the a-scene element):
<a-entity laser-controls="hand: left" raycaster="showLine: true; objects: .clickable;"></a-entity>
<a-entity laser-controls="hand: right" raycaster="showLine: true; objects: .clickable;"></a-entity>
These should also render with the actual controller models in the scene, and each has a laser pointer. With the Oculus Touch controllers, for example, both Touch models appear in view with lasers attached.
As new types of controllers come out and gain A-Frame support (e.g. the Valve Index controllers aren't supported yet), the laser-controls component should pick them up automatically.
See the docs for a bit more information on how to use controllers in your A-Frame scene.
I still haven't figured out exactly how to make buttons or objects in the environment clickable with the laser; I'll need to figure that out next.
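If it helps anyone: since the raycaster above only targets .clickable entities, and laser-controls attaches a cursor to the controller, intersected entities should receive 'click' events when the trigger is pressed. A sketch (myButton is a placeholder):

<a-box id="myButton" class="clickable" position="0 1.5 -3" color="tomato"></a-box>

<script>
  // Entities matching the raycaster's objects selector (.clickable)
  // receive cursor events such as 'click' from the laser.
  document.querySelector('#myButton').addEventListener('click', function () {
    this.setAttribute('color', 'seagreen');
  });
</script>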
I am attempting to implement the Daydream keyboard in an app built in Unity and cannot get it to work. I have added the keyboard prefab as a sibling of the main camera and added two input fields with the OnPointerClick function wired up as instructed. However, I get a NullReferenceException, which I assume is because the Daydream keyboard delegate field is blank. The example scene in the SDK shows the Daydream delegate example prefab, but I am unsure how to implement this for two input fields. Also, does the keyboard render in the Unity editor, or must it be built and run on a phone?
This is an old question and has probably already been answered, but I figured I'd publicize my answer anyway.
For those reading, if you haven't checked out the Keyboard Demo scene that can be found within the Demos folder of the Google VR Unity package, I would highly recommend doing so. Following this object hierarchy has worked for me in the past.
To answer your first question, it seems that they have included a KeyboardDelegateExample object within the scene's hierarchy, and then used this object as the Keyboard Delegate in the GVRKeyboardManager.
They manage to fake an Input Field by creating a background and overlaying a Text object on top. If this method does not suffice and using an Input Field is crucial in your particular case, then drop your Input Fields into two separate GVRKeyboardCanvas objects.
Clicking on either canvas will activate the GVR Keyboard. You may have to add a small script to manage switching between the two input fields.
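As a rough illustration of such a script (a sketch; the class and method names here are my own assumptions, and the wiring to the GVR keyboard delegate depends on your setup):

using UnityEngine;
using UnityEngine.UI;

// Tracks which InputField was selected last so keyboard output can be
// routed to it. Hook SetActiveField up to each field's click handler,
// and call OnKeyboardUpdate from your keyboard delegate's text-update
// callback.
public class ActiveFieldRouter : MonoBehaviour
{
    private InputField activeField;

    public void SetActiveField(InputField field)
    {
        activeField = field;
    }

    public void OnKeyboardUpdate(string text)
    {
        if (activeField != null)
            activeField.text = text;
    }
}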
Lastly, no, the GVR Keyboard does not render in the Unity Editor; it only appears while running a build. Hopefully this will be addressed in later releases. There are also keyboard plugins on the Asset Store that you may find useful.
So I am having an issue with the Google VR reticle where I cannot click a button. In my hierarchy, PlayButton is what I am trying to click. The Canvas has a Graphic Raycaster, and the button has an Event Trigger that calls the method to navigate to the next scene. The UpScrollPanel and DownScrollPanel work just fine. The EventSystem has the Gaze Input Module, as well as the Event System and Touch Input Module components.
Any ideas on how to get this working? I have watched a few videos from NurFACEGAMES and while they helped a little, I haven't gotten the click to work yet.
Oh, and I am using Unity 5.3.4f
Sometimes things can get in the way of the button; make sure no other UI elements overlap it, for example text borders (which are actually larger than they appear). You can also fix this by reordering the button among its siblings: Unity draws later siblings on top, so the last child renders top-most.
Also try giving the button a different parent if possible; sometimes certain parents keep UI elements from receiving events.
The Canvas object should also have a Graphic Raycaster component.
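One concrete way to rule out an overlapping label swallowing the raycast (a sketch, not from the original answer; raycastTarget is available on every UI Graphic in this Unity version):

using UnityEngine;
using UnityEngine.UI;

// Attach to the Button: its child Text labels will no longer intercept
// the Graphic Raycaster, so gaze clicks reach the Button itself.
public class DisableLabelRaycast : MonoBehaviour
{
    void Awake()
    {
        foreach (var label in GetComponentsInChildren<Text>())
            label.raycastTarget = false;
    }
}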
I found the issue to be unrelated to anything I thought it was. The menu I was using is a prefab that I also use in another, non-VR view. The ScrollRect was loading that prefab instead of the modified one I was using in the VR menu, and therefore the triggers I had added to the button were not being used when the app loaded.
What is the best way to do a loader in a Flex application? I have an animated .gif that is to be used as our loader (whenever I need to wait for an action to complete), and I am not sure of the best way to do it.
This is how I am thinking (see the sketch after this list):
Have the loader be a custom component.
On the parent application, add an event listener for my custom event AceEvent.SHOW_LOADER.
In the event listener, use the PopUpManager to show the loader.
Listen for AceEvent.HIDE_LOADER.
Get rid of the loader via PopUpManager.
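Roughly, as a sketch (LoaderComponent stands in for my custom loader component; assume this code lives in the parent application):

import mx.managers.PopUpManager;

private var loader:LoaderComponent;

private function onShowLoader(event:AceEvent):void {
    loader = new LoaderComponent();
    // Modal popup, centered over the application.
    PopUpManager.addPopUp(loader, this, true);
    PopUpManager.centerPopUp(loader);
}

private function onHideLoader(event:AceEvent):void {
    if (loader) {
        PopUpManager.removePopUp(loader);
        loader = null;
    }
}

// In the parent application's initialization:
// addEventListener(AceEvent.SHOW_LOADER, onShowLoader);
// addEventListener(AceEvent.HIDE_LOADER, onHideLoader);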
What do you think about this? Is there a better way to do it?
Thanks!
Andrew
Well, last I checked, animated GIFs don't work in Flex unless you have a workaround. Still, I wouldn't use an animated GIF for the animation because of the low quality; I would just recreate it in Flash.
The way I would do the loader, however, is very different. Personally, I don't believe in 'system loaders' unless it's your application's preloader. The reason is that more than one thing could be loading at the same time (without knowing about each other), which means the loader popup could disappear before everything is loaded: the first load finishes, dispatches its event, and removes the popup while the other is still loading.
What I like to do is create a custom component for the popup loader (since it will be reused quite a bit). From there I can either use states that are appropriate for my view, or have a boolean flag bound to show the popup when true (this can easily be done using frameworks like Parsley). The popup would only cover the part of the UI that's actually loading data (since I doubt your whole app loads data at the same time), which makes for a better UX.
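To make the overlapping-loads point concrete, a counting guard along these lines avoids hiding the popup too early (illustrative sketch, names are mine):

private var pendingLoads:int = 0;

private function showLoader():void {
    pendingLoads++;
    if (pendingLoads == 1) {
        // First load in flight: actually show the popup here.
    }
}

private function hideLoader():void {
    if (pendingLoads > 0) {
        pendingLoads--;
    }
    if (pendingLoads == 0) {
        // Last load finished: remove the popup here.
    }
}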
I ended up using as3gif (until I can get this recreated as a .swf). The way I do it is with my custom event class (AceEvent.SHOW_LOADER and AceEvent.HIDE_LOADER), whose events bubble up to the top. I then use the PopUpManager to add/remove the loader as a modal popup to disable the application.