A-Frame: how to simulate tracked controllers when developing on desktop?

My HTC Vive is set up in a different room from my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers

This is not yet released, but we're working on tools to record the camera and controllers, output the result to a file, and then load it up on any device to replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.
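In the meantime, you can fake a controller yourself: register a component that moves a stand-in entity from the keyboard and emits the same events (triggerdown / triggerup) that real tracked controllers fire, so your interaction code runs unchanged on desktop. A minimal sketch; the component name and key bindings are made up for illustration:

```javascript
// Keyboard-driven stand-in for a tracked controller. The component name
// and key bindings are made up for this sketch; the events it emits
// (triggerdown / triggerup) are the real ones A-Frame controllers fire.
AFRAME.registerComponent('fake-tracked-controller', {
  init: function () {
    var el = this.el;
    window.addEventListener('keydown', function (evt) {
      switch (evt.key) {
        // IJKL nudges the stand-in controller around the scene.
        case 'i': el.object3D.position.z -= 0.05; break;
        case 'k': el.object3D.position.z += 0.05; break;
        case 'j': el.object3D.position.x -= 0.05; break;
        case 'l': el.object3D.position.x += 0.05; break;
        // Spacebar stands in for the controller trigger.
        case ' ': el.emit('triggerdown'); break;
      }
    });
    window.addEventListener('keyup', function (evt) {
      if (evt.key === ' ') { el.emit('triggerup'); }
    });
  }
});
```

Attach it, along with some visible geometry, to the entity that would normally carry vive-controls, and grab/drop listeners keyed to trigger events should work unchanged on your desktop monitor.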

Related

A-Frame - VR Mode mouse control / move in browser

I'm really new to A-Frame / WebVR. When I am on my mobile in VR mode, how can I prevent the camera control / movement? For example, when I switch to VR mode I want to look around using my USB mouse, just like in "normal mode", and not with the device's sensors. Is it possible?
Thanks.
I don't think this is currently possible. Another potential issue with this approach is that it will likely cause simulator sickness:
The most important guideline in designing for virtual reality is to always maintain head tracking. Never stop tracking the user’s head position inside of the application. Even a short pause in head tracking will cause some users to feel ill.
https://designguidelines.withgoogle.com/cardboard/designing-for-google-cardboard/physiological-considerations.html
That said, if your purpose for this functionality is spectating on a 2D screen, A-Frame 0.8.0 allows you to add a spectator camera:
https://aframe.io/docs/0.8.0/components/camera.html#properties
There are no tutorials for this yet, but you could try asking another question here or on the A-Frame Slack about how to get it up and running.
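For reference, a minimal sketch of what the spectator setup might look like, assuming the "spectator" property behaves as the 0.8.0 docs describe (the selectors and position here are illustrative):

```javascript
// Run after the scene is in the DOM. A second, non-active camera flagged
// as spectator renders to the 2D browser canvas while the user's
// head-tracked camera keeps driving the VR view.
var sceneEl = document.querySelector('a-scene');
var spectatorCam = document.createElement('a-entity');
spectatorCam.setAttribute('camera', {active: false, spectator: true});
spectatorCam.setAttribute('position', '0 1.6 3');  // illustrative vantage point
sceneEl.appendChild(spectatorCam);
```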

Creating the stereo photosphere without Unity

I'm working on a Daydream app using the Google VR SDK/NDK. To submit the app to Google Play, I need a 360-degree stereo photosphere. I've seen directions for creating this with Unity, but is there any way to create this without Unity?
I've taken a screenshot of the app in stereo mode, but I don't think that will satisfy the requirement.
Google doesn't provide any tools to capture in-app photospheres in non-Unity apps at this time. Some devs produce photospheres in modeling apps like Maya and Blender.
You could always cheat and make your "Daydream 360 degree stereoscopic image" in Photoshop. Just use the same image twice, once on top and once on the bottom.
I think others have already done this, because I have noticed a few wrong-looking previews in the store where, if I close one eye, parts of the image disappear.
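If you take that route, you don't even need Photoshop: the stacking can be done with the browser canvas API. A sketch (the file name is a placeholder; both eyes get the identical image, so the result has zero parallax, which is exactly the cheat described above):

```javascript
// Stack one mono equirectangular panorama into a top/bottom layout:
// the same image for both eyes, i.e. the cheat described above.
var img = new Image();
img.src = 'photosphere-mono.jpg';  // placeholder file name
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height * 2;      // room for both eyes
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);            // top half: left eye
  ctx.drawImage(img, 0, img.height);   // bottom half: right eye
  var link = document.createElement('a');
  link.href = canvas.toDataURL('image/jpeg', 0.95);
  link.download = 'photosphere-stereo.jpg';
  link.click();                        // saves the combined image
};
```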
If you change your mind and make one with Unity, this plugin worked nicely for me: https://www.assetstore.unity3d.com/en/#!/content/38755

Can we use iOS technologies in an Apple Watch app?

I want to create a music app in which the Watch extension shows an audio waveform, so my question is: can we use iOS technologies like OpenGL in a Watch app?
You can't run any code on the watch. You can only run code in a Watch extension in your iOS app and update a relatively static UI on the watch. You could generate images in your extension for the audio wave, put them together into an animation and then update the UI with that.
It would be possible to pass some information from your iOS app to the Watch extension running on the phone, which could then update a pre-defined interface on the Watch app. However, if you are wanting to provide a real-time audio waveform, I think this could face major problems regarding latency.
Note that, as Stephen Johnson states, you could only do this by rendering static images which would then be sent to the watch for display, or by having pre-installed images in your watch interface that you rapidly show or hide to give the impression of levels changing. The latter would be a much more promising approach latency-wise, and given that Apple demonstrates a circular progress indicator made up of 360 images, perhaps it would even appear to animate smoothly. However, the key question would be whether the peaks would appear on the Watch screen close enough to when they actually occurred in the music for the user to perceive them as linked.
It might be possible to pre-process the audio and build a delay into both the display of the peaks and the audio playback to manage the communication latency, but testing that would really only be possible once you had Watch hardware in your hand.

TV ergonomics in Flex

I'm having fun toying with AIR and want to use it to create an application for my TV, but I'm facing a serious (and dumb) problem: TV ergonomics. Indeed, without a mouse, it is all about focusing elements and moving that focus in a natural fashion.
In HTML this is handled perfectly by the browsers, but in ActionScript I'm having a really hard time! For instance, I don't even know how to autofocus an element, so that when I load the app there is already something to click on (without it I just can't interact with my app at all!).
Do you have any idea of the best way to create a listener for the remote control's arrow and OK buttons (that should be enough) so that I never get stuck in the app?
So whether you have already struggled with this, or simply happen to know how to play with the focus and setFocus() parts of Flex, your help is very welcome!
I recommend you look at the Google TV Flash template. It's all about controls and navigation. I'm not sure whether this works for Flex, as I have not done any Google TV development yet.

Flex on touch screen systems: do web sites built in Flex work on touch screen systems?

I built a web site in Flex that sometimes takes input. Will this website work in a touch screen environment (kiosk)? My question is: do we have to make any changes to handle input, such as prompting an on-screen keyboard when input fields get focus, or will the device and OS of the system (kiosk, touch screen) manage it itself?
I'm unclear exactly what your question is. But, yes, Flex should work on touch screens and traditional screens with a mouse and keyboard.
It is likely that you'll need project modifications for each environment. The size of a button to accept a mouse click is going to be much smaller than what you need to accept a finger click, for example.
You can use a library project to share code between these two projects / interfaces.
I remember using Flex applications on touch screen monitors running Windows/Linux. These operating systems convert touches into mouse actions, and hence Flex apps don't know the difference.
Whether you'll be able to run it on a kiosk depends on the underlying OS. If the OS converts touches to mouse events, then yeah, your app will run seamlessly.
