Build a cumulative variable/aggregate from raw data - azure-timeseries-insights

I have a power consumption sensor (kWh) sending data to my TSI Gen2 environment, and it is malfunctioning in a way that it loses its accumulated measurement value when it is shut down. I need to create a new aggregate/variable that would "stack" the measurements, never letting the value drop to zero, but always adding to the last greatest value.
I thought about creating a dataset of the differences between consecutive readings over a fixed timespan, keeping only the positive ones, and then creating a SUM aggregation over the bucket period on top of it. I am clueless about how to do such a thing based on the poor official documentation provided by Microsoft. Any ideas?
Here are a couple of pictures illustrating my problem and what I am trying to accomplish:
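In code terms, what I have in mind is roughly this pandas sketch (the readings are made up, and this runs outside TSI entirely, just to illustrate the diff-and-sum idea):

```python
# Rough sketch of the "positive differences, then SUM" idea, using
# made-up readings in which the counter resets to zero once.
import pandas as pd

raw = pd.Series(
    [10.0, 12.5, 15.0, 0.0, 2.0, 5.5],
    index=pd.date_range("2021-01-01", periods=6, freq="60min"),
)

deltas = raw.diff()            # a reset shows up as a large negative step
deltas = deltas.clip(lower=0)  # keep only the positive increments
deltas.iloc[0] = raw.iloc[0]   # seed with the first reading

cumulative = deltas.cumsum()   # monotonically increasing kWh total
print(cumulative)              # 10.0, 12.5, 15.0, 15.0, 17.0, 20.5
```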

You probably need to add something in the middle (before the IoT Hub/Event Hub) to save the last state of the sensor and do the appropriate sum if it detects that the device was rebooted.
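A minimal sketch of what that middle piece could do, assuming something like an Azure Function sitting between the hub and TSI (all names here are hypothetical, and real code would persist the per-device state durably rather than in memory):

```python
# Hypothetical pre-ingestion step that turns the resetting raw counter
# into a monotonically increasing one before it reaches TSI. State is
# kept in memory here for brevity; a real function would persist it
# (table storage, Redis, ...) across invocations.
state = {}  # device_id -> {"last_raw": ..., "offset": ...}

def accumulate(device_id: str, raw_kwh: float) -> float:
    """Return a kWh value that never drops, even across sensor reboots."""
    s = state.setdefault(device_id, {"last_raw": 0.0, "offset": 0.0})
    if raw_kwh < s["last_raw"]:
        # The counter went backwards: the sensor rebooted, so fold the
        # old total into a running offset.
        s["offset"] += s["last_raw"]
    s["last_raw"] = raw_kwh
    return s["offset"] + raw_kwh

for reading in (10.0, 12.5, 0.0, 2.0):
    print(accumulate("sensor-1", reading))  # 10.0, 12.5, 12.5, 14.5
```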

Related

Best approach for transferring large data chunks over BLE

I'm new to BLE and hope you will be able to point me towards the right implementation approach.
I'm working on an application in which the peripheral (battery-operated) device continuously aggregates sensor readings.
On the mobile side there will be a "sync" button; upon button press, I would like to transfer all the sensor readings that have accumulated in the peripheral to the mobile application.
The maximal duration between syncs can be several days, so the accumulated data can reach a size of 20 KB.
Now, I'm wondering what will be the best approach to perform the data transfer from the peripheral to the central application.
I thought about creating an array of characteristics where each characteristic contains a fixed number of samples (e.g., representing one hour of readings).
Then, upon sync, I will:
1. Read the characteristics count (how many one-hour cells there are).
2. Read the characteristics (one-hour cells) one by one.
However, I have no idea if this is a valid approach. I'm not sure whether it is the most power-efficient way, and I'm not sure whether a characteristic READ is the way to go or whether I should use indications instead.
Any help here will be highly appreciated :)
Thanks in advance, Moti.
I would simply use notifications.
Use one characteristic which you write something to in order to trigger the transfer start.
Then have another characteristic which you simply stream data over by sending 20 bytes at a time. Most SDKs for BLE systems-on-a-chip have some way to control the flow of data so you don't send too fast, normally by triggering a callback when the stack is ready to take the next notification.
In order to know the size of the data being sent, you can for example let the first notification contain the size, and rest of them the data.
This is the most time- and power-efficient way, since many notifications can be sent per connection interval, whereas reads normally require two round trips each. Don't use indications, since they also require basically two round trips per indication; they're also quite useless anyway.
You could also increase the speed by some percentage by exchanging a larger MTU (which lowers the L2CAP/ATT header overhead).
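For illustration, the central side of that scheme could look roughly like this with Python's bleak library. The UUIDs and the 4-byte little-endian size prefix in the first notification are assumptions, not part of any standard profile:

```python
# Sketch of a notification-based bulk transfer from the central side.
import asyncio
import struct
from bleak import BleakClient

TRIGGER_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"  # hypothetical
DATA_UUID    = "0000bbbb-0000-1000-8000-00805f9b34fb"  # hypothetical

async def sync(address: str) -> bytes:
    received = bytearray()
    expected = None
    done = asyncio.Event()

    def on_notify(_sender, data: bytearray):
        nonlocal expected
        if expected is None:
            # First notification carries the total size of the payload.
            expected = struct.unpack("<I", data[:4])[0]
        else:
            received.extend(data)
        if expected is not None and len(received) >= expected:
            done.set()

    async with BleakClient(address) as client:
        await client.start_notify(DATA_UUID, on_notify)
        # Writing the trigger characteristic starts the transfer.
        await client.write_gatt_char(TRIGGER_UUID, b"\x01")
        await done.wait()
        await client.stop_notify(DATA_UUID)
    return bytes(received)
```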

How do you sum a statsd counter over a large time range with correct values?

Background
A basic use case for statsd & grafana is learning how many times a function has been called over a time range, whether that is "last 6h", "since the beginning of today", "since the beginning of time", etc.
What I'm struggling to find is the correct function to achieve this. I'm using a hosted solution; however, I can confirm that data is being flushed from StatsD to Graphite in 10s intervals.
Current Setup
StatsD Flush: 10s
Graph Function: hitcount(counters.login.employer.count, "10seconds")
Time Range: 24h
Problem
When using hitcount(counters.login.employer.count, "10seconds"), the data returned is incorrect. In fact, if I query with ranges of 24h, 23h, and 22h, the returned values actually increase as the range shrinks.
I've performed all testing here in a controlled environment, only my machine is sending metrics to StatsD. This is not yet in production code.
Any idea what could be going on here?
The way counters work is that on each interval the value of the counter is sent to graphite and reset in statsd, so what you're looking for is the sum of the series.
You can do that using consolidateBy('sum') combined with maxDataPoints=1.
Be aware that if your series is being aggregated in graphite you'll need to make sure that the aggregation is by sum, otherwise when values get rolled up from the individual values reported by statsd into aggregated buckets they'll be averaged, and your sum won't work across longer intervals. You can read more about configuring aggregation in Graphite here.
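For example, against Graphite's render API the suggested query could look like this (the host is a placeholder; the target and parameters are the ones described above):

```python
# Fetch the 24h total of a statsd counter as a single summed data point.
import requests

resp = requests.get(
    "https://graphite.example.com/render",
    params={
        "target": "consolidateBy(counters.login.employer.count,'sum')",
        "from": "-24h",
        "maxDataPoints": 1,  # collapse the whole range into one bucket
        "format": "json",
    },
)
# With maxDataPoints=1 the series comes back as one (value, timestamp) pair.
total = resp.json()[0]["datapoints"][0][0]
print(total)
```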

Can you develop an app for the Microsoft band, without a corresponding mobile app always being connected?

I have several Microsoft Bands, to be used as part of a group health initiative. I intend to develop a single app on a tablet which will pull the data from the bands. This will be a manual process; there will not be a constant connection to the tablet, and no connection to Microsoft Health.
Does anyone know if this is possible?
Thanks
Emma
The general answer is no: Historical sensor values are not stored or buffered on the Band itself.
It does however depend on what sensors you are interested in. The sensor values are not buffered, so you can only read the current (realtime) value of the sensors.
But sensors such as pedometer and distance are incrementing over time, so these values will make sense even though you are only connected once in a while. Whereas for, e.g., the heart rate and skin temperature, you will only get the current (realtime) value.
So it depends on your use case.

Client-side prediction & server reconciliation

I've read some articles about client-side prediction and server reconciliation, but I'm missing some parts. I understand the client-side prediction part, but I don't understand how exactly reconciliation is done. I'll take these two pieces of well-known articles as reference:
http://www.gabrielgambetta.com/fpm2.html
#2. So applying client-side prediction again, the client can calculate the “present” state of the game based on the last authoritative state sent by the server, plus the inputs the server hasn’t processed yet
http://gafferongames.com/networking-for-game-programmers/what-every-programmer-needs-to-know-about-game-networking/
In effect the client invisibly “rewinds and replays” the last n frames of local player character movement while holding the rest of the world fixed
Ok, I understand that the client receives an acknowledgement from the server, but how exactly are the inputs re-applied? I can interpret this in two ways.
Both are from the client's point of view, where the game loop is executed 'x' times per second (frames per second):
First: the non-processed inputs are re-applied within the same frame. Here the expression "invisibly rewind and replay" fits perfectly, because in the end what you see on the screen is the result of the last input re-applied.
I don't see the benefit of doing this, because I see no difference between re-applying the last n inputs from the server update up to the present time and keeping the client state as it was before the update; we know in advance that the result will be the same.
Second: the inputs are re-applied one by one over the consecutive frames. A human being couldn't notice a few frames being replayed, but I can't help thinking that if the client were experiencing significant latency, he could notice himself going back to the past and replaying the last 'n' frames.
Can anyone point me in the right direction, please? Thanks.
I know it's been quite a while since you posted this question, but it is in Google's feed, so I'll answer.
I don't see the benefit of doing this because I see no difference between re-applying the last n inputs from the server update to the present time and keeping the client state as it was before the update, we know in advance that the result will be the same.
The whole point of reconciliation is to sync with the server. We don't really know what the result of our actions will be. We just predict it. Sometimes the result actually is different and we still want to get an image of what's going on on the server.
The first way is definitely the way to go.
The second way doesn't really make any sense. Remember that the player receives updates on a regular basis. That means that with a latency of 200 ms, he would see his character about 200 ms in the past all the time.
I don’t see the benefit of doing this because I see no difference between re-applying the last n inputs from the server update to the present time and keeping the client state as it was before the update, we know in advance that the result will be the same.
You are correct if and only if the predicted GameState matches the server GameState (the one we just received), in which case there is no reason to do any reconciliation. However, if they don't match, re-applying the inputs gives a different result. That's when you apply server reconciliation.
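A minimal sketch of that first interpretation, with a toy 1-D movement function standing in for the game's deterministic simulation step (all names here are illustrative, not engine API):

```python
# Client-side prediction plus "rewind and replay" reconciliation.
from dataclasses import dataclass

@dataclass
class Input:
    seq: int
    dx: float

def apply_input(x: float, inp: Input) -> float:
    return x + inp.dx  # the deterministic simulation step

pending: list[Input] = []  # inputs sent to the server but not yet acknowledged
player_x = 0.0
next_seq = 0

def local_step(dx: float):
    """Prediction: apply the input immediately and remember it."""
    global player_x, next_seq
    inp = Input(next_seq, dx)
    next_seq += 1
    pending.append(inp)
    player_x = apply_input(player_x, inp)
    # ... also send inp to the server here ...

def on_server_state(server_x: float, last_processed_seq: int):
    """Reconciliation: rewind to the authoritative state, replay the rest."""
    global player_x, pending
    pending = [i for i in pending if i.seq > last_processed_seq]
    x = server_x
    for inp in pending:          # replayed within the same frame, so the
        x = apply_input(x, inp)  # player never sees the rewind happen
    player_x = x
```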

Siemens DICOM Individual Slice Time (Private_0019_1029)

I'm seeing that the individual slice time information from the Private_0019_1029 field of the DICOM header sometimes has negative values and sometimes only positive values.
I assumed that these times are with respect to the Volume Acquisition time recorded in the header.
Going by that assumption, it would mean that the Acquisition time varies. But upon checking the difference between successive volume acquisition times, I see that it's equal to TR.
So I'm at a loss about what's happening.
I'm trying to look at the raw fMRI data without slice time correction; hence it's necessary to have the individual slice times.
Does the moco series do time shifting in addition to motion correction? (I don't believe it used to, but your experience may show otherwise.)
That would indicate how their slice timing is measured. Try the computations with both the raw and the moco series and see if the times line up. That may give you your answer.
When dealing with private tags, you should really include the Private Vendor ID; in your case, the value of tag (0019,0010).
You may also want to have a look at the output of:
gdcmdump --csa input.dcm
This will dump the SIEMENS CSA header directly from the DICOM attribute.
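If you'd rather read the tag programmatically, pydicom can do it too. Note that the private creator string "SIEMENS MR HEADER" is the usual one for group 0019 on Siemens MR data, but verify it against the value of tag (0019,0010) in your own files:

```python
# Inspect the per-slice timing tag (0019,1029) with pydicom.
import pydicom

ds = pydicom.dcmread("input.dcm")
print(ds[0x0019, 0x0010].value)  # the private creator ID for group 0019

block = ds.private_block(0x0019, "SIEMENS MR HEADER")
slice_times = block[0x29].value  # element (0019,1029)
print(slice_times)
```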
