BLE Cycling Speed and Cadence Service - Crank Timing Data - bluetooth-lowenergy

A Bluetooth LE Cycling Speed and Cadence sensor sends measurement data according to the GATT CSC Measurement characteristic. For crank cadence this consists of:
Cumulative Crank Revolutions - an unsigned 16-bit integer
Last Crank Event Time - an unsigned 16-bit integer with 1/1024 s resolution
I'd like to understand how the Last Crank Event Time is defined. The documentation makes it sound like a timestamp, but because it is a 16-bit integer at 1/1024 s resolution it overflows after about a minute, so I suspect it is actually a time interval. Below is a sequence of events on a time scale. Message B sends n+2 for the number of crank revolutions, but what is the Last Crank Event Time for B?

In section "4.4 CSC Measurement" of the Cycling Speed and Cadence Profile document it says:
The Collector shall take into account that the Wheel Event Time and
the Last Crank Event Time can roll over during a ride.
So my reading of this is that it is a timestamp, but since you only need the difference between the last two readings, the cadence can still be calculated even if the value rolls over.
There is more information in the Cycling Speed and Cadence Service (CSCS) document that states:
The ‘crank event time’ is a free-running-count of 1/1024 second units
and it represents the time when the crank revolution was detected by
the crank rotation sensor. Since several crank events can occur
between transmissions, only the Last Crank Event Time value is
transmitted. This value is used in combination with the Cumulative
Crank Revolutions value to enable the Client to calculate cadence.
The Last Crank Event Time value rolls over every 64 seconds.
Calculation of cadence at the Collector can be derived from data in two successive measurements. The Collector calculation can be performed as shown below:
Cadence = (Difference in two successive Cumulative Crank Revolution
values) / (Difference in two successive Last Crank Event Time values)
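Putting the spec text and that calculation together, below is a minimal sketch in C of how a Collector could compute cadence. The 16-bit field widths come from the characteristic definition above; the function name and the example readings are just for illustration. Because both fields are unsigned 16-bit values, plain uint16_t subtraction (modulo-65536 arithmetic) handles a single rollover of either field automatically.

#include <stdint.h>
#include <stdio.h>

/* Returns cadence in revolutions per minute, or -1 if no new crank event. */
static double csc_cadence_rpm(uint16_t prev_revs, uint16_t prev_event_time,
                              uint16_t revs,      uint16_t event_time)
{
    uint16_t rev_delta  = (uint16_t)(revs - prev_revs);
    uint16_t time_delta = (uint16_t)(event_time - prev_event_time); /* 1/1024 s units */

    if (time_delta == 0)
        return -1.0;                  /* same event time: no new crank event */

    /* revolutions per tick -> per second -> per minute */
    return (double)rev_delta / ((double)time_delta / 1024.0) * 60.0;
}

int main(void)
{
    /* Example: message A reports 100 revs at 60000 ticks, message B reports
     * 102 revs at 1072 ticks (the event timer rolled over in between).      */
    printf("%.1f rpm\n", csc_cadence_rpm(100, 60000, 102, 1072)); /* ~18.6 rpm */
    return 0;
}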

Related

Is it possible to read in data from multiple pins of a microcontroller at the same time?

I am using a PIC24 microcontroller and have multiple inputs. Via these I would like to obtain analog voltage data as fast as possible. I have 8 different signals arriving at the microcontroller and I am a bit confused about how to solve the problem.
My first idea was to read in the data sequentially, first from AN0, then AN1 and so forth, but this may take quite a while and I am not at all sure it would be fast enough without some other trick. Especially because I do not want to read only a single value per pin, but an array of voltages, then store them, numerically integrate, and send the results through USB to the PC. While doing so, new data should be constantly received via the aforementioned pins.
Is it feasible at all what I'm trying to achieve here?
Thanks in advance :)
You should think through your requirements a little more, especially the "at the same time" and "as fast as possible" statements. If you sample each channel within 10 to 100 microseconds of the next, would that be satisfactory? What is the maximum frequency of the input signal that you need to detect? Your sampling frequency should be at least double the maximum signal frequency of interest.
Use a single ADC with enough input channels. Configure the ADC so that each time it is triggered to take a sample it will sample all of the channels in sequence (multichannel scan). It won't sample all 8 channels at literally "the same time", but it will cycle through each channel and sample them one after the other at nearly the same time. This could be within a few microseconds depending on the clock rate of the ADC and the channel setup time that you configure.
Now, you could configure the ADC to sample in continuous mode, where it starts the next sample scan immediately after finishing the previous scan. That would be "as fast as possible", but it might be faster than you need and produce more data than can be processed. Instead, you should choose the sampling rate based on the input signal frequency of interest and set up the ADC to sample at that rate. This rate might be much less than "as fast as possible". You might configure the ADC to collect one sample per channel when it is triggered (single conversion mode) and also set up a hardware timer to expire at the desired sampling rate and trigger the ADC to take a sample scan. The sample period (time between samples) must be greater than the time required to scan all the channels, because you won't be able to trigger the ADC again before it has completed the previous channel scan.
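As a rough sketch of the timer-triggered scan idea, the fragment below targets a PIC24F-class part with the common 10-bit ADC. Register and bit names (AD1CON1, AD1CSSL, the Timer3 trigger, and so on) vary between PIC24 families, so treat this as an outline and check the datasheet for your exact device; the 1 kHz rate also assumes a 16 MIPS instruction clock. Note that on this ADC each Timer3 period match converts the next channel in the scan list, so a full pass over AN0-AN7 takes eight trigger events and the interrupt fires once per pass.

#include <xc.h>

volatile unsigned int samples[8];          /* latest result for AN0..AN7      */

void adc_scan_init(void)
{
    AD1PCFG = 0xFF00;                      /* AN0..AN7 analog, rest digital
                                              (AD1PCFGL on some parts)        */
    AD1CON1bits.SSRC = 2;                  /* Timer3 match ends sampling and
                                              starts conversion               */
    AD1CON1bits.ASAM = 1;                  /* sampling restarts automatically */
    AD1CON2bits.CSCNA = 1;                 /* scan the inputs selected below  */
    AD1CON2bits.SMPI = 7;                  /* interrupt after 8 conversions   */
    AD1CSSL = 0x00FF;                      /* scan list: AN0..AN7             */
    AD1CON3 = 0x0002;                      /* ADCS=2 -> Tad = 3*Tcy           */

    /* Timer3 sets the per-conversion trigger rate */
    T3CON = 0;
    TMR3  = 0;
    PR3   = 15999;                         /* ~1 kHz trigger at 16 MIPS       */
    T3CONbits.TON = 1;

    IFS0bits.AD1IF = 0;
    IEC0bits.AD1IE = 1;
    AD1CON1bits.ADON = 1;
}

void __attribute__((interrupt, no_auto_psv)) _ADC1Interrupt(void)
{
    /* One full pass through the scan list has completed */
    samples[0] = ADC1BUF0;  samples[1] = ADC1BUF1;
    samples[2] = ADC1BUF2;  samples[3] = ADC1BUF3;
    samples[4] = ADC1BUF4;  samples[5] = ADC1BUF5;
    samples[6] = ADC1BUF6;  samples[7] = ADC1BUF7;
    IFS0bits.AD1IF = 0;                    /* clear the interrupt flag        */
}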
If you really need to sample all channels at literally the same time then you probably need a separate ADC for each channel and then trigger all the ADCs to collect a sample at once.

Time measurements for function in microcontroller

I am using two microcontrollers for a project, and I want to measure the execution time of some code using the internal timer of each microcontroller. One microcontroller's timer counts up to a 32-bit value, but the second microcontroller's timer only counts up to a 16-bit value before it restarts. I know that the execution time of the code is longer than the 16-bit range can hold. Could you suggest a solution for this problem? (Turning a GPIO pin on and off doesn't provide useful results.)
You should be able to measure execution time using either type of timer, assuming that the execution time is less than hours or days. The real problem is how to configure the timer to meet your needs. How you configure the timer will control the precision or granularity of the measurement, as well as the maximum interval that can be measured.
The general approach will be thus:
Identify required precision and estimate the longest interval to be measured
Given the precision, determine the timer clock prescaler or divider that will meet your precision requirements. For example, if the clock speed is 50 MHz, and you need microsecond precision, then select a prescaler such that (Prescaler) / (Clock speed) ~ 1 microsecond. A spreadsheet helps with this. For this case, a divider value of 64 gives us about 1.28 microseconds per timer increment.
Determine if your timer register is large enough. For a 16-bit timer, you can measure (1.28 microseconds) * (2^16 - 1) = 0.084 seconds, or about a tenth of a second. If the thing you are measuring takes longer than this, you will need to rethink your precision requirements.
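As a quick numeric check of steps 2 and 3, the short program below reproduces the 50 MHz / divide-by-64 / 16-bit example above. The constants are placeholders for whatever your timer actually provides, and the start/stop readings are made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define TIMER_CLOCK_HZ   50000000UL   /* input clock to the timer            */
#define TIMER_PRESCALER  64UL         /* divider chosen in step 2            */
#define TIMER_BITS       16           /* counter width                       */

int main(void)
{
    double tick_s = (double)TIMER_PRESCALER / TIMER_CLOCK_HZ;
    double max_s  = tick_s * ((1UL << TIMER_BITS) - 1);

    printf("tick period : %.2f us\n", tick_s * 1e6);   /* ~1.28 us           */
    printf("max interval: %.3f s\n",  max_s);          /* ~0.084 s           */

    /* Converting a measured tick count back to time (example readings): */
    uint16_t start = 1200, stop = 48000;
    uint16_t elapsed_ticks = (uint16_t)(stop - start); /* wrap-safe for at
                                                          most one rollover   */
    printf("elapsed     : %.2f us\n", elapsed_ticks * tick_s * 1e6);
    return 0;
}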
You should by now have identified the key parameters for configuring the timer, keeping in mind the limitations. If you update your question with more specifics, such as the microcontrollers you plan to use and what you're trying to measure, I can be more specific.

CSMA/CD: Minimum frame size to hear all collisions?

Question from a networking class:
"In a csma/cd lan of 2 km running at 100 megabits per second, what would be the minimum frame size to hear all collisions?"
Looked all over and can't find info anywhere on how to do this. Is there a formula for this problem? Thanks for any help.
The bandwidth-delay product is the amount of data in transit on the link.
The propagation delay is the time it takes for the signal to propagate over the network:
propagation delay = (length of wire) / (speed of signal)
Assuming copper wire, the signal speed is about 2/3 of the speed of light, so:
propagation delay = 2000 / ((2/3) * 3*10^8) = 10 us
The round trip time is the time taken for a message to travel from sender to receiver and back from receiver to sender:
round trip time = 2 * propagation delay = 20 us
The minimum frame size is the bandwidth-delay product computed with the round trip time:
minimum frame size = bandwidth * RTT = 100 Mbps * 20 us = 2000 bits
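For reference, the little program below just re-runs that arithmetic; the 2 km length, 100 Mbit/s rate, and 2/3 c signal speed are the assumptions stated above.

#include <stdio.h>

int main(void)
{
    double length_m    = 2000.0;              /* LAN length                  */
    double bitrate_bps = 100e6;               /* 100 Mbit/s                  */
    double v_prop      = 2.0 / 3.0 * 3e8;     /* signal speed in copper      */

    double t_prop = length_m / v_prop;        /* one-way propagation delay   */
    double rtt    = 2.0 * t_prop;             /* worst-case round trip       */
    double min_frame_bits = bitrate_bps * rtt;

    printf("propagation delay: %.1f us\n", t_prop * 1e6);     /* 10 us       */
    printf("round trip time  : %.1f us\n", rtt * 1e6);        /* 20 us       */
    printf("min frame size   : %.0f bits\n", min_frame_bits); /* 2000 bits   */
    return 0;
}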

Bandwidth estimation with multiple TCP connections

I have a client which issues parallel requests for data from a server. Each request uses a separate TCP connection. I would like to estimate the available throughput (bandwidth) based on the received data.
I know that for a single TCP connection I can do so by dividing the amount of data that has been downloaded by the time it took to download it. But given that there are multiple concurrent connections, would it be correct to sum up all the data downloaded by the connections and divide the sum by the duration between sending the first request and the arrival of the last byte (i.e., the last byte of the download that finishes last)? Or am I overlooking something here?
[This is a rewrite of my previous answer, which was getting too messy]
There are two components that we want to measure in order to calculate throughput: the total number of bytes transferred, and the total amount of time it took to transfer those bytes. Once we have those two figures, we just divide the byte-count by the duration to get the throughput (in bytes-per-second).
Calculating the number of bytes transferred is trivial; just have each TCP connection tally the number of bytes it transferred, and at the end of the sequence, we add up all of the tallies into a single sum.
Calculating the amount of time it takes for a single TCP connection to do its transfer is likewise trivial: just record the time (t0) at which the TCP connection received its first byte, and the time (t1) at which it received its last byte, and that connection's duration is (t1-t0).
Calculating the amount of time it takes for the aggregate process to complete, OTOH, is not so obvious, because there is no guarantee that all of the TCP connections will start and stop at the same time, or even that their download-periods will intersect at all. For example, imagine a scenario where there are five TCP connections, and the first four of them start immediately and finish within one second, while the final TCP connection drops some packets during its handshake, and so it doesn't start downloading until 5 seconds later, and it also finishes one second after it starts. In that scenario, do we say that the aggregate download process's duration was 6 seconds, or 2 seconds, or ???
If we're willing to count the "dead time" where no downloads were active (i.e. the time between t=1 and t=5 above) as part of the aggregate-duration, then calculating the aggregate-duration is easy: Just subtract the smallest t0 value from the largest t1 value. (this would yield an aggregate duration of 6 seconds in the example above). This may not be what we want though, because a single delayed download could drastically reduce the reported bandwidth estimate.
A possibly more accurate way to do it would be say that the aggregate duration should only include time periods when at least one TCP download was active; that way the result does not include any dead time, and is thus perhaps a better reflection of the actual bandwidth of the network path.
To do that, we need to capture the start-times (t0s) and end-times (t1s) of all TCP downloads as a list of time-intervals, and then merge any overlapping time-intervals as shown in the sketch below. We can then add up the durations of the merged time-intervals to get the aggregate duration.
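For example, a small C sketch of that merge: collect each connection's (t0, t1) pair, sort by start time, fold overlapping intervals together, and sum the merged lengths. The intervals in main() are the five-connection example from above; timestamps are plain seconds stored as doubles.

#include <stdio.h>
#include <stdlib.h>

typedef struct { double t0, t1; } interval_t;

static int cmp_start(const void *a, const void *b)
{
    double d = ((const interval_t *)a)->t0 - ((const interval_t *)b)->t0;
    return (d > 0) - (d < 0);
}

/* Total time during which at least one download was active */
static double active_duration(interval_t *iv, int n)
{
    qsort(iv, n, sizeof(*iv), cmp_start);

    double total = 0.0;
    double cur_start = iv[0].t0, cur_end = iv[0].t1;

    for (int i = 1; i < n; ++i) {
        if (iv[i].t0 <= cur_end) {                  /* overlaps current run  */
            if (iv[i].t1 > cur_end) cur_end = iv[i].t1;
        } else {                                    /* gap: close current run */
            total += cur_end - cur_start;
            cur_start = iv[i].t0;
            cur_end   = iv[i].t1;
        }
    }
    return total + (cur_end - cur_start);
}

int main(void)
{
    /* Four downloads during 0..1 s, one delayed download during 5..6 s:
     * the "dead time" from 1 to 5 is excluded, giving 2 s, not 6 s.       */
    interval_t iv[] = { {0, 1}, {0, 1}, {0, 1}, {0, 1}, {5, 6} };
    printf("active duration: %.1f s\n", active_duration(iv, 5));
    return 0;
}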
You need to do a weighted average. Let B(n) be the bytes processed for connection 'n' and T(n) be the time required to process those bytes. The total throughput is:
double throughput = 0;
for (int n = 0; n < Nmax; ++n)
{
    throughput += B(n) / T(n);   /* per-connection throughput */
}
throughput /= Nmax;              /* average over the Nmax connections */

Graphite: show change from previous value

I am sending Graphite the time spent in Garbage Collection (getting this from the JVM via JMX). This is a counter that increases. Is there a way to have Graphite graph the change every minute, so I can see a graph that shows time spent in GC per minute?
You should be able to turn the counter into a hit-rate with the derivative function, then use the summarize function to sum the counter into the time frame that you're after.
&target=summarize(derivative(java.gc_time), "1min") # time spent per minute
derivative(seriesList)
This is the opposite of the integral function. This is useful for taking a
running total metric and showing how many requests per minute were handled.
&target=derivative(company.server.application01.ifconfig.TXPackets)
Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic.)
By applying the derivative function, you can get an idea of the packets per minute sent or received, even though you’re only recording the total.
summarize(seriesList, intervalString, func='sum', alignToFrom=False)
Summarize the data into interval buckets of a certain size.
By default, the contents of each interval bucket are summed together.
This is useful for counters where each increment represents a discrete event and
retrieving a “per X” value requires summing all the events in that interval.
Source: http://graphite.readthedocs.org/en/0.9.10/functions.html
