Can we use iOS technologies in an Apple Watch app? - watchkit

I want to create a music app in which the watch extension shows an audio waveform, so my question is: can we use iOS technologies like OpenGL in a watch app?

You can't run any code on the watch. You can only run code in a Watch extension in your iOS app and update a relatively static UI on the watch. You could generate images in your extension for the audio waveform, assemble them into an animation, and then update the UI with that.
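A minimal sketch of that generate-and-animate approach in Swift, assuming a storyboard outlet named waveformImage; the levels array is a placeholder for your real audio data:

import WatchKit
import UIKit

class WaveformInterfaceController: WKInterfaceController {

    // Hypothetical outlet wired up in the storyboard.
    @IBOutlet var waveformImage: WKInterfaceImage!

    /// Renders one waveform frame from an array of levels in 0...1.
    /// The levels themselves would come from your own audio analysis.
    func waveformFrame(levels: [CGFloat], size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }

        let barWidth = size.width / CGFloat(levels.count)
        UIColor.green.setFill()

        for (index, level) in levels.enumerated() {
            let barHeight = max(1, level * size.height)
            let bar = CGRect(x: CGFloat(index) * barWidth,
                             y: (size.height - barHeight) / 2,
                             width: barWidth * 0.8,
                             height: barHeight)
            UIRectFill(bar)
        }
        return UIGraphicsGetImageFromCurrentImageContext()
    }

    /// Combines pre-rendered frames into an animated UIImage and hands
    /// it to WatchKit, which plays the animation on the watch.
    func show(frames: [UIImage]) {
        let animation = UIImage.animatedImage(with: frames, duration: 1.0)
        waveformImage.setImage(animation)
        waveformImage.startAnimating()
    }
}

The rendering happens in the extension; WatchKit then ships the finished frames to the watch for display, which is where the latency concerns discussed below come in.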

It would be possible to pass some information from your iOS app to the Watch extension running on the phone, which could then update a pre-defined interface on the Watch app. However, if you want to provide a real-time audio waveform, I think this could face major problems with latency.
Note that, as Stephen Johnson states, you could only do this by rendering static images which would then be sent to the watch for display, or by having pre-installed images in your watch interface that you rapidly show or hide to give the impression of levels changing. The latter would be a much more promising approach from a latency standpoint, and given that Apple demonstrates a circular progress indicator made up of 360 images, it might even appear to animate smoothly. However, the key question would be whether the peaks would appear on the Watch screen close enough to when they actually occur in the music for the user to see them as linked.
It might be possible to pre-process the audio and build in a delay to both the display of the peaks and the audio playback to manage the communication latency—but testing that would really only be possible once you had Watch hardware in your hand.
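If you go the pre-installed-images route, the extension only ever sends an image name rather than image data. A rough sketch, assuming frames named meter0 through meter10 are bundled in the Watch app target (those names and the outlet are placeholders):

import WatchKit

class LevelMeterInterfaceController: WKInterfaceController {

    // Hypothetical outlet; the images "meter0"..."meter10" live in the
    // Watch app bundle, so no image data crosses the wire at runtime.
    @IBOutlet var meterImage: WKInterfaceImage!

    /// Switches between pre-installed frames to suggest a changing level.
    /// `level` is expected to be in 0...1.
    func update(level: Double) {
        let frame = Int((level * 10).rounded())
        meterImage.setImageNamed("meter\(min(max(frame, 0), 10))")
    }
}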

Related

Audio unable to play in Aframe unless the inspector is opened

I have been working on adding background music to a Glitch project. I followed the Aframe.io documentation to create a sound component and set it to play automatically on load; however, when I run the project, the sound does not play automatically on my laptop. I have to open the A-Frame inspector and un-pause the project for the music to begin. For some reason, I did not encounter this issue when I tried the same project on my phone: the sound played as expected once the program was up and running. I am not sure what is causing the sound to malfunction, but any help is greatly appreciated. Here is the project I am working on: https://glitch.com/edit/#!/animation-animate?path=index.html%3A15%3A23
When working with sound natively in A-Frame, I had the same problem. One recommendation I can give you is to play sounds with the Howler.js sound library, which is compatible with most frameworks, including A-Frame.
Here are their page and documentation; the library is very easy to use and to integrate into a project:
https://howlerjs.com/
https://github.com/goldfire/howler.js#documentation
Most modern browsers block audio autoplay until there has been some user interaction. You can put a div in front of your scene that, when clicked, starts playing your background audio.
Check out a related SO question / answer for video autoplay that should help:
Autoplaying videosphere from A-frame is not working on any browser(Safari/Chrome)
Another user on A-Frame discord provided this example demo:
https://aframe-autoplay-background-music.glitch.me/

Creating the stereo photosphere without Unity

I'm working on a Daydream app using the Google VR SDK/NDK. To submit the app to Google Play, I need a 360-degree stereo photosphere. I've seen directions for creating this with Unity, but is there any way to create this without Unity?
I've taken a screenshot of the app in stereo mode, but I don't think that will satisfy the requirement.
Google doesn't provide any tools to capture in-app photospheres in non-Unity apps at this time. Some devs produce photospheres in modeling apps like Maya and Blender.
You could always cheat and make your "Daydream 360 degree stereoscopic image" in Photoshop. Just use the same image twice, once on top and once on the bottom.
I think others have already done this, because I have noticed a few wrong-looking previews in the store, where if I close one eye, parts of the image disappear.
If you change your mind and make one with Unity, this plugin worked nicely for me: https://www.assetstore.unity3d.com/en/#!/content/38755

A-Frame: how to simulate tracked controllers when developing on desktop?

My HTC Vive is set up in a different room to my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers
This is not yet released, but we're working on tools to record the camera and controllers and output them to a file; you can then load that file up on any device and replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.

Is it possible to combine 2 images together into one on watchOS?

I am upgrading my watch app from the first version of watchOS. In my first version I was placing UIImageViews on top of each other, rendering them with UIImagePNGRepresentation(), converting the result to NSData, and transferring it across to the watch. As we know, there are limited layout options on the Apple Watch, so if you want cool blur effects behind images, or images on top of images, they have to be flattened off-screen.
Now that I have re-created my targets for watchOS 2, the images transferred as NSData through [[WCSession defaultSession] sendMessage:replyHandler:errorHandler:] suddenly come back with an error saying the payload is too large!
So as far as I can see, I either have to work out how to combine images strictly via WatchKit libraries, or use the transferFile option on WCSession and still render them on the iPhone. The transferFile option sounds really slow and clumsy, since I will have to render the image, save it to disk on the iPhone, transfer it to the watch, and load it into something I can set on a WatchKit component.
Does anyone know how to merge images on the watch? QuartzCore doesn't seem to be available as a dependency in watch land.
Instead of sendMessage, use transferFile. Note that the actual transfer will happen in a background thread at a time that the system determines to be best.
Sorry, but I have no experience with manipulating images on the watch.
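If you do go the transferFile route, here is a rough sketch of the sending side in Swift, assuming the flattening still happens on the iPhone as described in the question; the function name, file name, and metadata key are placeholders, and the WCSession is assumed to already be activated elsewhere:

import UIKit
import WatchConnectivity

/// Flattens two images on the iPhone (e.g. a blurred background plus an
/// overlay), writes the result to disk, and queues it for transfer to
/// the watch.
func sendFlattenedImage(background: UIImage, overlay: UIImage) {
    guard WCSession.isSupported(),
          WCSession.default.activationState == .activated else { return }

    // 1. Composite the images off-screen with UIKit drawing.
    UIGraphicsBeginImageContextWithOptions(background.size, false, 0)
    background.draw(in: CGRect(origin: .zero, size: background.size))
    overlay.draw(in: CGRect(origin: .zero, size: background.size),
                 blendMode: .normal, alpha: 1.0)
    let flattened = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    guard let data = flattened?.pngData() else { return }

    // 2. Write to a temporary file, since transferFile works with URLs.
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("flattened.png")
    do {
        try data.write(to: url)
    } catch {
        print("Could not write image: \(error)")
        return
    }

    // 3. Queue the transfer; WatchConnectivity sends it in the background
    //    at a time the system chooses.
    _ = WCSession.default.transferFile(url, metadata: ["kind": "flattenedImage"])
}

On the watch side, the delegate's session(_:didReceive:) callback hands you the file, and you can load it with UIImage(contentsOfFile:) and set it on a WKInterfaceImage.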

Triggering Windows Store background task from Band tile opened events

Is there a good way to trigger a Windows Store background task when a band tile is opened? And are there any examples for working with the band from a background task with the latest SDK? I have seen mentions of the ability to do so but can't find any code examples of this.
I have a scenario where the tile's content is only valid for a short time (~30 seconds) and would like to wake up a background service on the phone while the band's tile is open to update the content as needed.
I was hoping to find an IBackgroundTrigger in the SDK that would do the trick, but no luck there. The best I can think of to fill this need would be a task that uses a system trigger and hooks up listeners for the tile opened/closed events. That seems like a lot of unnecessary work for the task, though, and could end up causing unnecessary battery usage on the phone.
Thanks,
Tony
I'm afraid that is not possible. As far as I can see from the SDK, the app on the phone has to be running to allow a tile on the band to send an event back to the phone.
An alternative would be to open your app from the Band using a voice command. Would that solve your problem?