How to use Core ML to analyse device motion values

I would like to implement a Core ML app to analyze device motion. I'm recording device motion values for some time and capturing the details in a JSON file. Now I want to analyze the x, y, and z values and report how the user is using the device.

Use Turi Create. It has an Activity Classification module that makes this very easy: https://apple.github.io/turicreate/docs/userguide/activity_classifier/
To learn more about this in detail, check out the book Machine Learning by Tutorials (disclaimer: I'm a co-author but did not write the chapters on activity classification).
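For a rough idea of what that looks like in practice, here is a minimal Python sketch of the Turi Create workflow. The file name and the column names (session_id, activity, x, y, z) are assumptions standing in for your own recorded data:

    import turicreate as tc

    # Load the recorded motion samples. Column names are assumptions:
    # x/y/z motion values, a per-recording session id, and an activity
    # label for each sample.
    data = tc.SFrame.read_json('device_motion.json', orient='records')

    # Hold out some sessions for evaluation.
    train, test = tc.activity_classifier.util.random_split_by_session(
        data, session_id='session_id', fraction=0.8)

    # prediction_window is how many consecutive samples are grouped
    # into a single prediction.
    model = tc.activity_classifier.create(
        train, session_id='session_id', target='activity',
        features=['x', 'y', 'z'], prediction_window=50)

    print(model.evaluate(test))

    # Export a Core ML model to drop into the iOS app.
    model.export_coreml('ActivityClassifier.mlmodel')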

Related

Read data from Mitutoyo gauge

I need to write a program that will read the data from the indicator once a minute. Unfortunately, I cannot find the communication protocol for this indicator (the official representative's responses are extremely unhelpful, and there is no way to get even basic information from him). Another programmer (with whom I have no contact) wrote a similar program earlier but did not leave the source code. In Device Manager, the indicator is identified as USB-ITN. I will be grateful for any information that helps solve this problem.
Indicator model: ID-U1025M
Indicator Serial Number: 13063340
USB cable: ITN - 60010409
You should ask your nearest Mitutoyo branch.
Mitutoyo Worldwide
The published information for the product you are using appears to be the following:
Japanese version document for ID-U1025
English version document for similar product
A description of the tool and its data format can be found in the following documents (a rough polling sketch follows the list):
USB Input Tool Direct/Input Tool SERIES
U-WAVE
U-WAVE/Common Optional Software
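I don't know the USB-ITN protocol either, but if the cable turns out to expose a virtual COM port, a once-a-minute poll in Python with pyserial could look like the sketch below. The port name, baud rate, and request command are placeholders; the real values would have to come from the documents above.

    import time
    import serial  # pyserial

    # All of these values are assumptions to verify against the
    # Mitutoyo documents.
    PORT = 'COM3'
    BAUD = 9600
    REQUEST = b'1\r'  # hypothetical "send one measurement" command

    with serial.Serial(PORT, BAUD, timeout=2) as gauge, \
            open('readings.csv', 'a') as log:
        while True:
            gauge.write(REQUEST)  # ask the indicator for one reading
            reading = gauge.readline().decode('ascii', 'replace').strip()
            log.write(f'{time.time()},{reading}\n')
            log.flush()
            time.sleep(60)  # once a minute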

Can you develop an app for the Microsoft Band without a corresponding mobile app always being connected?

I have several Microsoft Bands to be used as part of a group health initiative. I intend to develop a single app on a tablet which will pull the data from the bands. This will be a manual process: there will be no constant connection to the tablet and no connection to Microsoft Health.
Does anyone know if this is possible?
Thanks
Emma
The general answer is no: Historical sensor values are not stored or buffered on the Band itself.
It does, however, depend on which sensors you are interested in. Since sensor values are not buffered, you can only read the current (real-time) value of each sensor.
But sensors such as the pedometer and distance increment over time, so those values remain meaningful even if you only connect once in a while, whereas for, e.g., heart rate and skin temperature, you will only get the current (real-time) value (see the sketch below).
So it depends on your use case.
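To illustrate why the cumulative sensors still work with occasional manual syncs, here is a purely schematic Python sketch; read_total_steps() is a hypothetical stand-in for whatever the Band SDK call is on your platform (simulated here so the snippet runs):

    import random

    def read_total_steps():
        # Hypothetical stand-in for the Band SDK's pedometer read;
        # simulated as a monotonically increasing counter.
        read_total_steps.total += random.randint(0, 500)
        return read_total_steps.total
    read_total_steps.total = 10_000

    # Because the pedometer only ever counts up, two reads taken days
    # apart still yield the steps walked in between.
    first = read_total_steps()
    # ... band is out of range until the next group session ...
    second = read_total_steps()
    print(f'steps since last sync: {second - first}')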

Scilab - real time plot from serial port

I am trying to make a program in Scilab that plots data received from a serial port in real time.
My idea is to call the plot function again after each portion of data received, but I suspect that is too much work for the computer, and Scilab will not keep up and will miss data.
Do you know of a way to plot real-time data from a serial COM port, in Scilab or another free program?
It is perfectly possible to run such a hardware setup from a Scilab session and plot live data.
We do this at ENSIM, for practicals in optics: we move a translation actuator step by step (Scilab driver at https://fileexchange.scilab.org/toolboxes/255000, plugged into port #1), and for each step we read the transmitted signal with an optical power meter (driver at https://fileexchange.scilab.org/toolboxes/223000, plugged into port #2). The refresh rate of the power meter is 1-2 Hz, so there is no problem getting a live plot of the received data.
We have also written a Scilab driver for the very popular M38XR multimeter (at https://fileexchange.scilab.org/toolboxes/232000). A syntax is implemented to continuously display the live data coming from the multimeter (same low refresh rate, ~1 Hz).
Etc.
Two new Scilab drivers are coming soon for new instruments (a furnace and another popular multimeter). All our drivers currently on File Exchange will be updated for Scilab 6 and gathered into a single ATOMS module (which will be easier to document and maintain).
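If "another free program" is an option, Python with pyserial and matplotlib is a common way to do the same thing, and at a 1-2 Hz data rate the plotting load is negligible. A minimal sketch, assuming the port settings and a one-number-per-line data format:

    import serial                    # pyserial
    import matplotlib.pyplot as plt

    PORT, BAUD = 'COM1', 9600        # assumed port settings
    ser = serial.Serial(PORT, BAUD, timeout=1)

    xs, ys = [], []
    plt.ion()                        # non-blocking, interactive plot
    fig, ax = plt.subplots()
    line, = ax.plot(xs, ys)

    while True:
        raw = ser.readline().decode('ascii', 'replace').strip()
        if not raw:                  # timeout with no data
            continue
        ys.append(float(raw))        # assumes one numeric value per line
        xs.append(len(ys))
        line.set_data(xs, ys)
        ax.relim()                   # rescale axes to the new data
        ax.autoscale_view()
        fig.canvas.draw_idle()
        plt.pause(0.01)              # let the GUI process events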

Real-time anomaly detection

I would like to do anomaly detection in R on a real-time stream of sensor data. I would like to explore the use of either Twitter's AnomalyDetection package or the anomalous package.
I am trying to think of the most efficient way to do this, as some online sources suggest R is not suitable for real-time anomaly detection. See https://anomaly.io/anomaly-detection-twitter-r. Should I use the stream package to implement my own data stream source? If I do so, is there any "rule-of-thumb" as to how much data I should stream in order to have a sufficient amount of data (perhaps that is what I need to experiment with)? Is there any way of doing the anomaly detection in-database rather than in-application to speed things up?
My experience is that if you want real-time anomaly detection, you need to apply an online learning algorithm (rather than a batch one), ideally running on each sample as it is collected or generated. To do this, you would need to modify the existing open-source packages to run in online mode and adapt the model parameters as each sample is processed.
I'm not aware of an open source package that does it though.
For example, if you're computing a very simple anomaly detector, using the normal distribution, all you need to do is update the mean and variance of each metric with each sample that arrives. If you want the model to be adaptive, you'll need to add a forgetting factor (e.g., exponential forgetting), and control the "memory" of the mean and variance.
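As a concrete version of that idea, here is a small Python sketch of a Gaussian detector with an exponential forgetting factor; the alpha, threshold, and warm-up values are arbitrary choices you would tune for your data:

    import math

    class OnlineGaussianDetector:
        """Flags a sample that is more than k standard deviations from
        an exponentially weighted running mean."""

        def __init__(self, alpha=0.05, k=3.0, warmup=10):
            self.alpha = alpha    # forgetting factor: higher = shorter memory
            self.k = k            # anomaly threshold in std deviations
            self.warmup = warmup  # samples to see before flagging
            self.mean = None
            self.var = 0.0
            self.n = 0

        def update(self, x):
            self.n += 1
            if self.mean is None:  # first sample initializes the model
                self.mean = x
                return False
            diff = x - self.mean
            is_anomaly = (self.n > self.warmup and
                          abs(diff) > self.k * math.sqrt(self.var + 1e-12))
            # Exponentially forgetful updates of mean and variance.
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
            return is_anomaly

    # Feed each sample as it arrives from the stream.
    det = OnlineGaussianDetector()
    for sample in [10, 11, 9, 10] * 4 + [45]:
        if det.update(sample):
            print('anomaly:', sample)   # flags the 45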
Another algorithm which lends itself to online learning is Holt-Winters. There are several R implementations of it, though you would still have to make it run in online mode to be real time.
I gave a talk on this topic at the Big Data, Analytics & Applied Machine Learning - Israeli Innovation Conference last May. The video is at:
https://www.youtube.com/watch?v=SrOM2z6h_RQ
(DISCLAIMER: I am the chief data scientist for Anodot, a commercial company doing real time anomaly detection).

SoundMixer.computeSpectrum with microphone

Flex has the SoundMixer.computeSpectrum function that lets you compute an FFT from the currently playing sound. What I'd like to do is compute an FFT without playing the sound. Since Flash 10.1 lets us access the microphone bytes directly, it seems like we should be able to compute the FFT directly off of what the user is speaking.
Unfortunately this doesn't work as far as I know. As stated on the Adobe help pages:
The SoundMixer.computeSpectrum() method lets an application read the raw sound data for the waveform that is currently being played. If more than one SoundChannel object is currently playing, the SoundMixer.computeSpectrum() method shows the combined sound data of every SoundChannel object mixed together.
This implies two drawbacks:
It only works on the output (SoundChannel).
It only works on the mix of all outputs.
If you don't need the output channel at all, you could try turning its volume down to zero or near zero; I don't know whether that would work.
At the moment, I don't see any option other than implementing the FFT myself to compute a spectrum from the microphone data.
I'm not sure if there's a way to pass that data, but if all else fails, you can always compute the FFT yourself.
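For what it's worth, the FFT itself is the easy part. The question is about ActionScript, but the shape of the computation is the same everywhere; here it is in Python with NumPy, using a synthetic 440 Hz tone and an assumed 44.1 kHz rate in place of the bytes you would pull from the Microphone's sample data event:

    import numpy as np

    RATE = 44100                  # assumed microphone sample rate
    N = 2048                      # FFT window size

    # Stand-in for a block of microphone samples (floats in [-1, 1]).
    t = np.arange(N) / RATE
    samples = 0.5 * np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone

    window = np.hanning(N)        # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(N, d=1 / RATE)

    peak = freqs[np.argmax(spectrum)]
    print(f'dominant frequency: {peak:.1f} Hz')  # ~440 Hz, within one bin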
