How to calculate asynchronous bus bandwidth

To calculate synchronous bus bandwidth we can rely on the clock cycle and on how long each event takes in terms of clock cycles.
Asynchronous bus cycles rely on handshakes instead. How can we calculate the total amount of time required to read a single byte of data?
Thank you
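As a sketch of the calculation being asked about: for a handshake-based bus, the time to read one byte is the sum of the individual handshake step delays, and the bandwidth follows from bytes transferred per unit of that total time. The step durations below are made-up placeholders, not figures for any real bus.

#include <stdio.h>

int main(void)
{
    /* Hypothetical handshake step delays for one asynchronous read, in ns. */
    double steps_ns[] = {
        40.0,   /* master asserts ReadReq and puts the address on the bus */
        200.0,  /* slave/memory access time                               */
        40.0,   /* slave asserts DataRdy with valid data                  */
        40.0,   /* master latches data and deasserts ReadReq              */
        40.0    /* slave deasserts DataRdy, bus returns to idle           */
    };

    double total_ns = 0.0;
    for (unsigned i = 0; i < sizeof steps_ns / sizeof steps_ns[0]; ++i)
        total_ns += steps_ns[i];

    /* One byte per complete handshake cycle: 1 byte/us equals 1 MB/s. */
    printf("time per byte: %.0f ns, bandwidth: %.2f MB/s\n",
           total_ns, 1.0 / (total_ns * 1e-3));
    return 0;
}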

Related

BLE Cycling Speed and Cadence Service - Crank Timing Data

A Bluetooth LE Cycling Speed and Cadence sensor sends measurement data according to the GATT CSC Measurement characteristic. For the crank cadence this is:
Cumulative Crank Revolutions - an unsigned 16-bit integer
Last Crank Event Time - an unsigned 16-bit integer with 1/1024 s resolution
I'd like to understand how the Last Crank Event Time is defined. The documentation makes it sound like a timestamp, but because it is a 16-bit integer at 1/1024 s resolution it overflows after about a minute, so I suspect it is actually a time interval. Below is a sequence of events on a time scale. Message B sends n+2 for the number of crank revolutions, but what is the Last Crank Event Time for B?
In section "4.4 CSC Measurement" of the Cycling Speed and Cadence Profile document it says:
The Collector shall take into account that the Wheel Event Time and
the Last Crank Event Time can roll over during a ride.
So my reading of this is that it is a timestamp, but as you only need to know the difference between the last two readings, it can still be calculated even if it overflows.
There is more information in the Cycling Speed and Cadence Service (CSCS) document that states:
The ‘crank event time’ is a free-running-count of 1/1024 second units
and it represents the time when the crank revolution was detected by
the crank rotation sensor. Since several crank events can occur
between transmissions, only the Last Crank Event Time value is
transmitted. This value is used in combination with the Cumulative
Crank Revolutions value to enable the Client to calculate cadence.
The Last Crank Event Time value rolls over every 64 seconds.
Calculation of cadence at the Collector can be derived from data in two successive measurements. The Collector calculation can be performed as shown below:
Cadence = (Difference in two successive Cumulative Crank Revolution
values) / (Difference in two successive Last Crank Event Time values)
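As a sketch of that calculation (my own code, not from the spec): taking both differences in unsigned 16-bit arithmetic handles the 64-second rollover automatically, and the event-time difference is converted from 1/1024-second units to get revolutions per minute.

#include <stdint.h>
#include <stdio.h>

/* Cadence in RPM from two successive CSC measurements. Unsigned 16-bit
   subtraction copes with the rollover of both the revolution counter and
   the Last Crank Event Time. */
double cadence_rpm(uint16_t revs_prev, uint16_t time_prev,
                   uint16_t revs_cur,  uint16_t time_cur)
{
    uint16_t rev_diff  = (uint16_t)(revs_cur - revs_prev);
    uint16_t time_diff = (uint16_t)(time_cur - time_prev);  /* 1/1024 s units */

    if (time_diff == 0)
        return 0.0;                     /* no new crank event since the last message */

    double seconds = time_diff / 1024.0;
    return (rev_diff / seconds) * 60.0; /* revolutions per minute */
}

int main(void)
{
    /* Example: 2 revolutions in 1.5 s -> 80 RPM, with the 16-bit timer wrapping. */
    printf("%.1f RPM\n", cadence_rpm(100, 65000, 102, (uint16_t)(65000 + 1536)));
    return 0;
}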

Is it possible to read in data from multiple pins of a microcontroller at the same time?

I am using a PIC24 microcontroller and have multiple inputs, via which I would like to obtain analog voltage data as fast as possible. I have 8 different signals arriving at the microcontroller and I am a bit confused about how to solve the problem.
My first idea was to read in the data sequentially: first from AN0, then AN1 and so forth, but this may take quite a while and I am not at all sure it would be fast enough without some other trick. Especially because I do not only want to read in one single value per pin, but an array of voltages, then store them, numerically integrate, and send the results through USB to the PC. While doing so, new data should be constantly received via the aforementioned pins.
Is it feasible at all what I'm trying to achieve here?
Thanks in advance :)
You should think through your requirements a little more, especially the "at the same time" and "as fast as possible" statements. If you sample each channel within 10 to 100 microseconds of the next would that be satisfactory? What is the maximum frequency of the input signal that you need to detect? Your sampling frequency should be at least double the maximum signal frequency of interest.
Use a single ADC with enough input channels. Configure the ADC so that each time it is triggered to take a sample it will sample all of the channels in sequence (multichannel scan). It won't sample all 8 channels at literally "the same time", but it will cycle through each channel and sample them one after the other at nearly the same time. This could be within a few microseconds depending on the clock rate of the ADC and the channel setup time that you configure.
Now you could configure the ADC to sample in continuous mode where it would start the next sample scan immediately after finishing the previous scan. That would be "as fast as possible" but that might be faster than you need and produce more data than can be processed. Instead you should choose the sampling rate based upon the input signal frequency of interest and set up the ADC to sample at that rate. This rate might be much less than "as fast as possible". You might configure the ADC to collect one sample per channel when it is triggered (single conversion mode) and also set up a hardware timer to expire at the desired sampling rate and trigger the ADC to take a sample scan. The sample period (time between samples) must be greater than the time required to scan all the channels because you won't be able to trigger the ADC again before it has completed the previous channel scan.
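A rough register-level sketch of that timer-triggered multichannel scan, assuming a PIC24F-style 10-bit ADC (module ADC1) and XC16 register names; bit fields and trigger options differ between PIC24 families, so the device datasheet is the authority here.

#include <xc.h>

/* Scan AN0..AN7 once per Timer3 period; results land in ADC1BUF0..ADC1BUF7.
   The pins must also be configured as analog inputs (AD1PCFG / ANSx,
   family-dependent) and Timer3 must be set to the desired sample period. */
void adc_scan_init(void)
{
    AD1CON1bits.ADON  = 0;      /* module off while configuring                     */
    AD1CON1bits.SSRC  = 2;      /* Timer3 compare ends sampling, starts conversion  */
    AD1CON1bits.ASAM  = 1;      /* sampling restarts automatically after conversion */

    AD1CON2bits.CSCNA = 1;      /* scan the channels selected in AD1CSSL            */
    AD1CON2bits.SMPI  = 7;      /* interrupt after every 8th conversion (one scan)  */

    AD1CON3bits.ADCS  = 15;     /* ADC conversion clock divider (clock-dependent)   */
    AD1CSSL           = 0x00FF; /* include AN0..AN7 in the scan                     */

    IFS0bits.AD1IF = 0;
    IEC0bits.AD1IE = 1;         /* in the ADC ISR, read ADC1BUF0..ADC1BUF7          */
    AD1CON1bits.ADON  = 1;
}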
If you really need to sample all channels at literally the same time then you probably need a separate ADC for each channel and then trigger all the ADCs to collect a sample at once.

CSMA/CD: Minimum frame size to hear all collisions?

Question from a networking class:
"In a csma/cd lan of 2 km running at 100 megabits per second, what would be the minimum frame size to hear all collisions?"
Looked all over and can't find info anywhere on how to do this. Is there a formula for this problem? Thanks for any help.
The bandwidth-delay product is the amount of data in transit.
The propagation delay is the time it takes for the signal to propagate over the network:
propagation delay = length of wire / speed of signal
Assuming copper wire, i.e. signal speed = 2/3 * speed of light = 2*10^8 m/s:
propagation delay = 2000 / (2*10^8) = 10 us
The round-trip time is the time taken for a message to travel from sender to receiver and back from receiver to sender:
round-trip time = 2 * propagation delay = 20 us
The minimum frame size is the bandwidth-delay product with the round-trip time as the delay:
minimum frame size = bandwidth * RTT = 100 Mbps * 20 us = 2000 bits
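The same arithmetic as a small self-contained check (numbers from the question: a 2 km copper run at 2/3 the speed of light, 100 Mbit/s):

#include <stdio.h>

int main(void)
{
    double length_m   = 2000.0;           /* cable length                  */
    double signal_mps = 2.0 / 3.0 * 3e8;  /* ~2e8 m/s in copper            */
    double bitrate    = 100e6;            /* 100 Mbit/s                    */

    double prop_delay = length_m / signal_mps;  /* one-way delay, seconds  */
    double rtt        = 2.0 * prop_delay;       /* worst-case round trip   */
    double min_frame  = bitrate * rtt;          /* bits still on the wire  */

    printf("propagation delay = %.1f us\n", prop_delay * 1e6);  /* 10.0 us   */
    printf("minimum frame     = %.0f bits\n", min_frame);       /* 2000 bits */
    return 0;
}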

Bandwidth estimation with multiple TCP connections

I have a client which issues parallel requests for data from a server. Each request uses a separate TCP connection. I would like to estimate the available throughput (bandwidth) based on the received data.
I know that for a single TCP connection I can do so by dividing the amount of data that has been downloaded by the time it took to download it. But given that there are multiple concurrent connections, would it be correct to sum up all the data that has been downloaded over the connections and divide the sum by the duration between sending the first request and the arrival time of the last byte (i.e., the last byte of the download that finishes last)? Or am I overlooking something here?
[This is a rewrite of my previous answer, which was getting too messy]
There are two components that we want to measure in order to calculate throughput: the total number of bytes transferred, and the total amount of time it took to transfer those bytes. Once we have those two figures, we just divide the byte-count by the duration to get the throughput (in bytes-per-second).
Calculating the number of bytes transferred is trivial; just have each TCP connection tally the number of bytes it transferred, and at the end of the sequence, we add up all of the tallies into a single sum.
Calculating the amount of time it takes for a single TCP connection to do its transfer is likewise trivial: just record the time (t0) at which the TCP connection received its first byte, and the time (t1) at which it received its last byte, and that connection's duration is (t1-t0).
Calculating the amount of time it takes for the aggregate process to complete, OTOH, is not so obvious, because there is no guarantee that all of the TCP connections will start and stop at the same time, or even that their download-periods will intersect at all. For example, imagine a scenario where there are five TCP connections, and the first four of them start immediately and finish within one second, while the final TCP connection drops some packets during its handshake, and so it doesn't start downloading until 5 seconds later, and it also finishes one second after it starts. In that scenario, do we say that the aggregate download process's duration was 6 seconds, or 2 seconds, or ???
If we're willing to count the "dead time" where no downloads were active (i.e. the time between t=1 and t=5 above) as part of the aggregate-duration, then calculating the aggregate-duration is easy: Just subtract the smallest t0 value from the largest t1 value. (this would yield an aggregate duration of 6 seconds in the example above). This may not be what we want though, because a single delayed download could drastically reduce the reported bandwidth estimate.
A possibly more accurate way to do it would be say that the aggregate duration should only include time periods when at least one TCP download was active; that way the result does not include any dead time, and is thus perhaps a better reflection of the actual bandwidth of the network path.
To do that, we need to capture the start-times (t0s) and end-times (t1s) of all TCP downloads as a list of time-intervals, and then merge any overlapping time-intervals as shown in the sketch below. We can then add up the durations of the merged time-intervals to get the aggregate duration.
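A minimal sketch of that merge step (the struct and example values are my own, not from the original answer): sort the intervals by start time, merge any that overlap, and add up the merged durations.

#include <stdio.h>
#include <stdlib.h>

typedef struct { double t0, t1; } Interval;   /* start / end of one download */

static int cmp_start(const void *a, const void *b)
{
    double d = ((const Interval *)a)->t0 - ((const Interval *)b)->t0;
    return (d > 0) - (d < 0);
}

/* Total time covered by at least one interval (dead time excluded). */
double aggregate_duration(Interval *iv, int n)
{
    if (n == 0) return 0.0;
    qsort(iv, n, sizeof *iv, cmp_start);

    double total = 0.0, start = iv[0].t0, end = iv[0].t1;
    for (int i = 1; i < n; ++i) {
        if (iv[i].t0 <= end) {            /* overlaps the current merged interval */
            if (iv[i].t1 > end) end = iv[i].t1;
        } else {                          /* gap: close out and start a new one   */
            total += end - start;
            start  = iv[i].t0;
            end    = iv[i].t1;
        }
    }
    return total + (end - start);
}

int main(void)
{
    /* The example above: four downloads within [0,1] s and one late one in [5,6] s. */
    Interval iv[] = { {0, 1}, {0.1, 0.9}, {0.2, 1.0}, {0.0, 0.8}, {5, 6} };
    printf("aggregate duration = %.1f s\n", aggregate_duration(iv, 5));  /* 2.0 s */
    return 0;
}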
You need to do a weighted average. Let B(n) be the bytes processed for connection n and T(n) be the time required to process those bytes. The total throughput is:
double throughput = 0;
for (int n = 0; n < Nmax; ++n)
{
    throughput += B(n) / T(n);   /* per-connection throughput */
}
throughput /= Nmax;              /* average over the Nmax connections */

Why wait DIFS in order to sense if the channel is idle

A station waits for DIFS in order to sense whether the channel is idle and then starts its transmission. My question is why wait for DIFS and not only SIFS.
What problems or issues might it cause to sense for only SIFS instead of DIFS?
Short answer: SIFS is not long enough to detect whether the channel is indeed idle. The implication of waiting just SIFS instead of DIFS is that the MAC protocol would no longer be able to detect a busy channel, so collisions could happen all the time, and thus poor channel efficiency.
Long answer:
What is SIFS? The standard defines SIFS (Short Inter-Frame Space) as the gap used to separate DATA and ACK frames. A station (STA) receiving DATA will wait for SIFS before sending the ACK. It should be as short as possible, basically just enough time to decode the frame, do the MAC processing, and prepare the ACK for transmission. For 802.11n/ac, SIFS = 16 microseconds.
What is DIFS? DIFS = SIFS + 2*slot_time. Like SIFS, slot_time is PHY-dependent. For 802.11n/ac, slot_time = 9 microseconds. slot_time is defined to be long enough to account for, among other things, propagation delays, thus enabling neighbouring STAs to detect a transmitting STA's preamble.
Having said that, if a STA just waits for SIFS before transmitting, there is no way it can detect a possible ACK frame being sent by a neighbouring STA at the exact same time - and that leads to collisions and poor channel efficiency.
Others:
If one slot_time is long enough to detect a transmitting STA's preamble, why not just wait for SIFS + slot_time? Well, you could, but that interval is actually PIFS, which is normally used only by the AP (to give it higher access priority than normal STAs).
Why wait at least DIFS before sending? Given that DIFS is enough to determine whether the channel is busy or not, why not just wait for DIFS and no more? Because there can be multiple STAs sensing the channel at the same time; if every STA just waited for DIFS and then sent immediately, that would be another collision. That's why the standard mandates that if a STA senses the channel idle for DIFS, it can transmit immediately, but if a STA senses the channel busy, it must wait for DIFS plus a random backoff time to avoid collisions. What is the random backoff time? Time to google 802.11 CSMA/CA then..
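Plugging in the 802.11n/ac numbers quoted above (SIFS = 16 us, slot_time = 9 us) as a quick check of how the inter-frame spaces relate:

#include <stdio.h>

int main(void)
{
    double sifs_us = 16.0;                    /* 802.11n/ac SIFS               */
    double slot_us = 9.0;                     /* 802.11n/ac slot time          */

    double pifs_us = sifs_us + slot_us;       /* PIFS = SIFS + 1 slot  = 25 us */
    double difs_us = sifs_us + 2.0 * slot_us; /* DIFS = SIFS + 2 slots = 34 us */

    printf("PIFS = %.0f us, DIFS = %.0f us\n", pifs_us, difs_us);
    return 0;
}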
For reference, there is a similar Q that deals with SIFS and touched a bit on other channel access timing.
DIFS (DCF Interframe Space) is the overall time a station spends sensing the channel before it sends an RTS to the other station. The station first senses that the channel is not being used by other stations and then sends the RTS (Request To Send).
If the channel is idle, the receiving station may also have to wake up from power-saving mode to accept the RTS from a station, so some time is spent on this process as well.
Suppose three stations are sensing a busy medium. If the medium becomes idle at time t, then none of the three stations will be able to realise that the medium is idle at time t. They will realise it only at time (t + DIFS).
So when the medium becomes idle, all stations will realise it only after a DIFS duration. It is a kind of lag, not a waiting period.
