Registering new motion gestures in Tizen for wearable

Got a bit of a last-minute project for uni. I'm trying to develop an application for a Samsung wearable where you can use motion gestures to interact with IoT devices (like lights and music players).
I'm looking at this sample (https://github.com/Samsung/Tizen-CSharp-Samples/tree/master/Wearable/Xamarin.Forms/GestureSensor) using gesture detector. Not sure how to register new motion gestures. Any help would be much appreciated.

Do you mean that you want to define a new gesture?
Only specific gesture types are supported.
In a Tizen C# app you can use the "face down", "pick up", and "wrist up" gestures.
(https://samsung.github.io/TizenFX/stable/api/Tizen.Sensor.Sensor.html)
If you develop a Tizen native app, there are more gesture types, such as shaking, tilting, and snapping.
(https://docs.tizen.org/application/native/api/wearable/5.0/group__CAPI__CONTEXT__GESTURE__MODULE.html#ga260f6752298cdd6c8235fd2922c147bf)
If these are not enough, you will have to detect the motion yourself from the raw data of an acceleration sensor or a gyroscope sensor.
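As a rough illustration of both options, here is a hedged C# sketch based on the TizenFX `Tizen.Sensor` namespace: it subscribes to the built-in wrist-up gesture detector, and also reads raw accelerometer samples with a crude, made-up magnitude threshold standing in for your own custom-gesture logic (the `25.0f` threshold and the "shake" interpretation are assumptions you would tune yourself, not part of the API):

```csharp
using System;
using Tizen.Sensor;

class GestureExample
{
    public static void Run()
    {
        // Built-in gesture: wrist-up. Check support before constructing.
        if (WristUpGestureDetector.IsSupported)
        {
            var wristUp = new WristUpGestureDetector();
            wristUp.DataUpdated += (s, e) =>
                Console.WriteLine($"Wrist up detected: {e.WristUp}");
            wristUp.Start();
        }

        // Custom gesture: read raw accelerometer samples and apply
        // your own detection logic (here: a crude "shake" check).
        if (Accelerometer.IsSupported)
        {
            var acc = new Accelerometer { Interval = 100 }; // sampling interval in ms
            acc.DataUpdated += (s, e) =>
            {
                float magnitude =
                    MathF.Sqrt(e.X * e.X + e.Y * e.Y + e.Z * e.Z);
                if (magnitude > 25.0f) // m/s^2, tune for your gesture
                    Console.WriteLine("Shake-like motion detected");
            };
            acc.Start();
        }
    }
}
```

For anything beyond simple thresholds (e.g. distinguishing a flick from a tap), you would typically buffer a short window of samples and match it against a recorded template, but that logic is entirely up to your application.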

Related

A-frame: How to replay 3DoF controllers on desktop?

My goal is to record motion with the Oculus Go and replay this recording on a desktop computer. Currently I can record, save the recording, and replay the recording on the Oculus Go itself. However, when I try to replay on the computer nothing happens, because A-frame is clever enough to see there is no controller connected:
"The controller components following are only activated if they detect the controller is found and seen as connected in the Gamepad API." (Aframe.io)
What would be the best way to tackle this?

Using GNSDK within a Video Camera Mobile Application

Is it possible to use the GNSDK recognition feature within a video camera application? I.e., starting to record a video would concurrently trigger the GNSDK recognition function.
GNSDK does not work with video. You need to feed a decoded audio stream to GNSDK. As long as you provide the correct audio data, when or how you start the recognition is totally up to you, the developer.

Is it possible to create a smart TV application that manipulates the TV input

I'm new to smart-TV development and wonder if it's possible to:
Create an app as a layer, and still display the user's TV channel?
Manipulate the sound or the video channels?
Example:
Is it possible to create an app that displays my TV channel, only with the sound turned into a helium voice?

Access GPS through watchOS 3

The Apple Watch Series 2 has a built-in GPS.
I'm looking at the latest SDK, but I didn't find any API for that function.
My question:
Can I track my path while swimming (without an iPhone), and which API should I use?
Core Location is the correct way to do this. The watch will automatically determine whether the phone is connected or not, and switch between the built-in GPS and the phone's GPS based on the connection.

Sending a RESTful url (endpoint) from Band

I just have a general question. Can you send a URL from a button on the Band? I have a home automation system where you can trigger events by sending a RESTful URL (endpoint). Basically, I can put the URL in any web browser and trigger the event. It would be great if this could be done through the Band. I don't really need a response from the URL, just to send it.
Does that make sense?
Thanks,
Scott
No, the Band communicates only via Bluetooth to (applications on) its paired device. On Windows (Phone), the application must be running, with a connection to the Band, and subscribed to the Tile button pressed event in order to receive such notifications. This generally rules out scenarios that require ad-hoc input from the Band unless you're willing to use voice commands via Cortana.
But I think it's possible by creating a custom tile and handling custom tile events. I haven't tried it in my project, but you can see it in the SDK documentation.
For Android you can implement a broadcast receiver and listen to tile events. Check the SDK doc,
chap. 9, page 51.
In short, yes it is possible.
However, the problem is that the button would be limited to sending that one URL command, and it wouldn't actually be sent from the Band itself.
You can create custom layouts for your applications with the Microsoft Band SDK, which will allow you to create a button. You'll then need to register for the click event from the Band, which would then fire on the device the app is running on. From there, you'd be able to send the URL, but it would be sent from the Windows Phone or Windows PC rather than the Band, so you'd need to be connected. The documentation covers how you can do this here: http://developer.microsoftband.com/Content/docs/Microsoft%20Band%20SDK.pdf
A downside to doing this with WinRT is that as soon as the app is closed and the connection to the Band is lost, your button click won't have any effect. The best way around this is to create the connection to the Band in a background task, but unfortunately you can't hold on to the connection to the Band for an infinite amount of time, and you'd have to accept that there may be times when it doesn't work. I have a GitHub sample which shows you how to connect to the Band in a background task for an indefinite amount of time.
The Microsoft Band has really been developed for the health aspect and collecting data rather than interaction with other apps, although it does support that to some degree.
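To make the flow above concrete, here is a hedged C# sketch against the Microsoft Band SDK (`Microsoft.Band` namespaces as in the SDK PDF): the code runs on the paired Windows device, subscribes to the custom tile's button-press event, and fires an HTTP GET from the phone/PC. The endpoint URL is a placeholder for your own home-automation endpoint, and a custom tile is assumed to already exist on the Band:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Band;
using Microsoft.Band.Tiles;

// Sketch: runs on the paired Windows device, not on the Band itself.
// Assumes a custom tile has already been added to the Band.
class BandButtonToRest
{
    static readonly HttpClient http = new HttpClient();

    public static async Task ListenAsync()
    {
        var pairedBands = await BandClientManager.Instance.GetBandsAsync();
        using (var bandClient =
            await BandClientManager.Instance.ConnectAsync(pairedBands[0]))
        {
            bandClient.TileManager.TileButtonPressed += async (s, e) =>
            {
                // Fire the home-automation endpoint from the phone/PC.
                // Placeholder URL: substitute your own endpoint.
                await http.GetAsync("http://192.168.1.10/api/trigger/lights");
            };

            // Start receiving tile events from the Band.
            await bandClient.TileManager.StartReadingsAsync();

            // Keep the connection (and the subscription) alive for as
            // long as you want button presses to work.
            await Task.Delay(TimeSpan.FromMinutes(5));
            await bandClient.TileManager.StopReadingsAsync();
        }
    }
}
```

Note that, as the answers above say, the request goes out from the connected device, not the Band: once the app (or background task) holding `bandClient` is torn down, the button press has no effect.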
