How to calculate the loss event rate p in TFRC (TCP Friendly Rate Control) when there is no packet loss? - networking

When there is packet loss, I know how to calculate p (I read the method in the RFC document).
But when there is no packet loss, how is it calculated? The document says nothing about this case.
If the loss event rate p is zero, the denominator of the TFRC equation is 0.
The equation is as follows:
(the image showed the throughput equation from RFC 5348, Section 3.1)

X_Bps = s / ( R*sqrt(2*b*p/3) + t_RTO*(3*sqrt(3*b*p/8))*p*(1+32*p^2) )
and the document is rfc5348 : https://www.rfc-editor.org/rfc/rfc5348

I know nothing about TFRC, so this is pure guesswork.
I suppose the available-bandwidth calculation is based on the loss event rate. If no packets have been lost so far, you have zero information about the available bandwidth. Usually in this case the congestion avoidance algorithm keeps increasing the bitrate until packet drops start to occur.
In other words, if there have been no packet drops so far, you can assume your available bandwidth is unlimited and use the maximum possible value of the rate's data type. This also follows from the formula, since division by zero gives you infinity in math.
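For what it's worth, here is a minimal Python sketch of the RFC 5348 throughput equation with a guard for the p == 0 case, in the spirit of the reasoning above; the infinity return value is a simplification of mine, not the RFC's reference pseudocode, and t_RTO uses the common 4*R approximation:

    from math import sqrt

    def tfrc_rate_bps(s, R, p, b=1, t_rto=None):
        # Throughput equation from RFC 5348, Section 3.1.
        #   s     : segment size in bytes
        #   R     : round-trip time in seconds
        #   p     : loss event rate (0 < p <= 1)
        #   b     : packets acknowledged per ACK (RFC 5348 uses b = 1)
        #   t_rto : retransmission timeout, commonly approximated as 4*R
        if t_rto is None:
            t_rto = 4 * R
        if p <= 0:
            # The equation is undefined for p == 0: no loss has been seen yet,
            # so (per the answer above) treat the equation-based limit as
            # unbounded and let the slow-start / receive-rate rules of
            # RFC 5348 Section 4.3 cap the actual sending rate.
            return float("inf")
        denom = (R * sqrt(2 * b * p / 3)
                 + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
        return s / denom  # X_Bps, bytes per second

    # Example: 1460-byte segments, 100 ms RTT, 1% loss event rate.
    print(tfrc_rate_bps(1460, 0.1, 0.01))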

Related

Flimsy error detection methods (computer networks)

I was studying error detection in computer networks and came across the following methods -
Single-bit parity check
2D parity check
Checksum
Cyclic Redundancy Check (CRC)
But after studying only a bit (pun intended), I came across cases where they fail.
The methods fail when -
Single-bit parity check - an even number of bits has been inverted.
2D parity check - an even number of bits are inverted in the same positions.
Checksum - adding 0 to a frame does not change the result, and the order of the frames is not checked
(e.g., in the data 10101010 11110000 11001100 10111001, adding 0 to any of the
four frames, or swapping two of them, goes undetected).
CRC - an n-bit CRC for g(x) = (x+1)*p(x) can detect:
All burst errors of length less than or equal to n.
All burst errors affecting an odd number of bits.
All burst errors of length equal to n + 1 with probability (2^(n-1) − 1)/2^(n-1).
All burst errors of length greater than n + 1 with probability (2^n − 1)/2^n
[the CRC-32 polynomial will detect all burst errors of length greater than 33 with
probability (2^32 − 1)/2^32; this is equivalent to a 99.99999998% detection rate]
Copied from here - https://stackoverflow.com/a/65718709/16778741
As we can see, these methods fail because of some fairly obvious shortcomings.
So my question is - why were these still allowed and not rectified, and what do we use these days?
It's as if the people who made them forgot to cross-check.
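To make the parity and checksum failure modes above concrete, here is a toy Python sketch of my own (not part of the original question): flipping an even number of bits preserves single-bit parity, and a plain sum over frames does not notice reordering.

    def even_parity(bits):
        # Parity bit for a string of '0'/'1' characters.
        return bits.count("1") % 2

    def simple_sum(frames):
        # Order-insensitive 16-bit sum, standing in for a basic checksum.
        return sum(int(f, 2) for f in frames) & 0xFFFF

    original = "10101010"
    two_flips = "10011010"  # bits 2 and 3 inverted
    print(even_parity(original) == even_parity(two_flips))  # True -> undetected

    frames = ["10101010", "11110000", "11001100", "10111001"]
    reordered = [frames[1], frames[0]] + frames[2:]
    print(simple_sum(frames) == simple_sum(reordered))      # True -> undetected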
It is a tradeoff between effort and risk. The more redundant bits are added, the smaller the risk of an undetected error.
Extra bits mean additional memory or network bandwidth consumption. Which additional effort is justified depends on the application.
Complicated checksums add some computational overhead as well.
Modern checksum or hash functions can drive the remaining risk down to ranges small enough to be tolerable for the vast majority of applications.
Only 0.00000002% of burst errors will be missed. What is not stated, though, is the likelihood of these burst errors occurring in the first place. That number depends on the network implementation; in most cases the likelihood of an undetectable burst error is very close to zero, or exactly zero for an ideal network.
Multiplying almost zero by almost zero gives something really close to zero.
Undetected errors in CRCs are more of an academic concern than a practical reality.
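A back-of-the-envelope illustration of that last point; the burst-error rate below is a made-up assumption, while the 2^-32 miss rate follows from the CRC-32 figure quoted in the question:

    # Assume (purely for illustration) that 1 in 10^6 frames arrives with a
    # long burst error; CRC-32 misses such a burst with probability 2^-32.
    p_burst = 1e-6
    p_missed = 2 ** -32

    p_undetected_frame = p_burst * p_missed
    print(p_undetected_frame)  # ~2.3e-16: "almost zero times almost zero"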

Determining a formula for a packet switching network?

Let's say we have a packet of length L bits. It is transmitted from system A through three links to system B. The three links are connected by two packet switches. d_i, s_i and R_i are the length, propagation speed and transmission rate of link i in the example network. Each packet switch delays each packet by d_proc (processing time).
Let's also say that there are no queuing delays; how would I go about writing a formula for the end-to-end delay of a packet of length L on this theoretical network?
This is what I have so far:
End-to-end delay = L/R_1 + L/R_2 + L/R_3 + d_1/s_1 + d_2/s_2 + d_3/s_3 + 2*d_proc
Is this correct? If not, what is the correct formula and why?
Yes, your formula is correct, assuming the processing time of each switch is the same. Also, when calculating an actual delay, be sure to use consistent units - bits and bits/s for size and transmission rate, metres and metres/s for distance and propagation speed. Note that if the switches are connected by fibre-optic links, you have to divide the speed of light by the refractive index of the fibre to get the propagation speed.
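If it helps, here is a small Python sketch of that formula, generalised to any number of links; the numbers in the example are assumptions, not values from the question:

    def end_to_end_delay(L, rates, lengths, speeds, d_proc, n_switches):
        # L        : packet length in bits
        # rates    : transmission rate of each link, bits/s
        # lengths  : length of each link, metres
        # speeds   : propagation speed of each link, metres/s
        # d_proc   : processing delay per switch, seconds
        transmission = sum(L / R for R in rates)
        propagation = sum(d / s for d, s in zip(lengths, speeds))
        return transmission + propagation + n_switches * d_proc

    # Three 10 Mb/s links of 1000 km each, 2e8 m/s propagation,
    # 12,000-bit packet, 1 ms processing at each of the two switches.
    print(end_to_end_delay(12_000, [10e6] * 3, [1e6] * 3, [2e8] * 3, 0.001, 2))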

AIMD congestion window halving

The AIMD (Additive Increase Multiplicative Decrease) congestion avoidance algorithm halves the size of the congestion window when a loss has been detected. But what experimental/statistical or theoretical evidence is there to suggest that dividing by 2 is the most efficient choice (instead of, say, another value), other than "intuition"? Can someone point me to a publication or journal paper that supports or investigates this claim?
All of the algorithms listed here
https://en.wikipedia.org/wiki/TCP_congestion_control#Algorithms
alter the congestion window in one form or another, and they all have varying results, which is to be expected.
Can someone point me to a publication or journal paper that supports this or investigates this claim?
Yang Richard Yang & Simon S. Lam investigate it in this paper:
http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf
We refer to this window adjustment strategy as general additive increase
multiplicative decrease (GAIMD). We present the (mean) sending rate of a GAIMD
flow as a function of α, β.
The authors parameterized the additive and multiplicative parts of AIMD and then studied them to see if they could be improved on for various TCP flows. The paper goes into a fair amount of depth on what they did and what the effects were. To summarize...
We found that the GAIMD flows were highly TCP-friendly. Furthermore, with β
at 7/8 instead of 1/2, these GAIMD flows have reduced rate fluctuations
compared to TCP flows.
If we believe the paper's conclusion, then there is no reason to think that 2 is a magic number. Personally I doubt there is a single best factor, because it depends on too many variables: protocol, types of flow, etc.
Actually, the factor of 2 also appears in another part of the algorithm: slow start, where the window is doubled every RTT. Slow start is essentially a binary search for the optimal value of the congestion window, where the upper bound is infinity.
When you exit slow start due to packet loss, it is natural to halve the congestion window (since the value from the previous RTT did not cause congestion); in other words, you revert the last iteration of slow start and then fine-tune with a linear search. This is the main reason for halving when exiting slow start.
However, the 1/2 factor is also used in congestion avoidance when the transfer is in steady state, long after slow start has ended. There is no equally good justification for this. I see it also as a binary search, but downwards, with a finite upper bound equal to the current congestion window; one could say, informally, that it is the opposite of slow start.
You can also read the paper by Van Jacobson (one of the main designers of TCP's congestion control), "Congestion Avoidance and Control", 1988; appendix D discusses exactly how the halving factor was chosen.
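Purely as an illustration of the β parameter discussed above, here is a toy Python sketch with a crude per-RTT loss model of my own (not the paper's analysis). Note that the paper pairs β = 7/8 with a correspondingly smaller additive increase to stay TCP-friendly; α is held fixed here just to show the effect on fluctuations.

    import random

    def aimd_trace(rtts=2000, alpha=1.0, beta=0.5, loss_prob=0.02, seed=1):
        # cwnd grows by alpha each RTT and is multiplied by beta on a loss.
        random.seed(seed)
        cwnd, trace = 1.0, []
        for _ in range(rtts):
            if random.random() < loss_prob:
                cwnd = max(1.0, cwnd * beta)   # multiplicative decrease
            else:
                cwnd += alpha                  # additive increase
            trace.append(cwnd)
        return trace

    for beta in (1 / 2, 7 / 8):
        t = aimd_trace(beta=beta)
        mean = sum(t) / len(t)
        swing = max(t) - min(t)
        print(f"beta={beta:.3f}  mean cwnd={mean:.1f}  peak-to-trough={swing:.1f}")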

Networking and CRC confusion

I am currently working on a project that requires data to be sent from A to B. Once B receives the data, it needs to be able to determine if an error occurred during transmission.
I have read about CRC and have decided that CRC16 is right for my needs; I can chop the data into chunks and send a chunk at a time.
However, I am confused about how B will be able to tell whether an error occurred. My initial thought was to have A generate a CRC and then send the data to B. Once B receives the data, it generates the CRC and sends it back to A; if the CRCs match, the transmission was successful. BUT what if the transmission of the CRC from B to A is itself corrupted? It seems redundant to send the CRC back, because it can become corrupted in the same way the data can.
Am I missing something or over-complicating the scenario?
Any thoughts would be appreciated.
Thanks,
P
You usually send the checksum along with the data. Then you calculate the checksum from the data on the receiving end and compare it with the checksum that came with it. If they don't match, either the data or the checksum was corrupted (unless you're unlucky enough to get a collision), in which case you should ask for a retransmission.
CRC is error detection and, note, it can only detect a finite set of errors. However, you can calculate the probability of a CRC16 collision (it is small enough for most practical purposes).
CRC works by polynomial division. Your CRC value is the remainder, a polynomial of degree at most 15 for CRC16. A polynomial is represented in binary by its coefficients; for example, x^3 + 0*x^2 + x + 1 = 1011 is a polynomial of degree 3. You divide your data chunk (treated as a polynomial) by the generator polynomial, and the remainder is the CRC value. When B performs the same division on the received chunk together with the appended remainder, the division should come out even, i.e. leave a remainder of 0. If it does not, you have a transmission error.
Note that this covers corruption of the CRC value itself: if bits are corrupted anywhere in the chunk, the CRC check will detect the failure (assuming no collision). If the CRC check does not pass, simply send a retransmission request to A; otherwise, continue processing as normal. If a collision occurred, there is no way to tell that the data is corrupted until you inspect the received data manually (or send several, hopefully error-free, copies - but that incurs a lot of overhead, and redundancy again only works to finite precision).
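A minimal Python sketch of that one-way arrangement; the polynomial and initial value below are the common CCITT choices, but any CRC-16 variant works as long as both ends agree on it:

    def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
        # Bit-by-bit CRC-16 (CCITT polynomial); table-driven versions are faster.
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def frame(payload: bytes) -> bytes:
        # Sender A: append the CRC to each chunk.
        return payload + crc16_ccitt(payload).to_bytes(2, "big")

    def check(received: bytes) -> bool:
        # Receiver B: recompute over the payload and compare with the trailer.
        payload, trailer = received[:-2], int.from_bytes(received[-2:], "big")
        return crc16_ccitt(payload) == trailer

    chunk = frame(b"hello world")
    print(check(chunk))                             # True: intact chunk
    damaged = bytes([chunk[0] ^ 0x01]) + chunk[1:]
    print(check(damaged))                           # False: B requests a resend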

Data propagation time (single link)

This is not homework!
I am preparing for a test in networking.
I had this question on the midterm test and only got half the points; I can't figure it out.
In this question I have a receiver-sender connection:
link data rate is R (b/s)
packet size is S (b)
window size is W (pkts)
link distance is D (m)
medium propagation speed is p (m/s)
I need to write the utilisation formula using those letters.
This is what I wrote:
Tp (propagation time) is D/p ===> this got me a big X on the test page.
I wrote that the frame transmission time (Tt) is the window size in bits (W*S)
divided by the link data rate, i.e. (W*S)/R,
and that is why the formula is U = Tt/(Tt + 2*Tp) ==> ((W*S)/R) / (((W*S)/R) + 2*(D/p))
(again an X).
I guess something is wrong with the propagation time calculation.
All the slides referring to sliding windows do not mention utilisation
in relation to distance and propagation delay.
I would love some help with this.
Thank you.
It depends on how propagation time is supposed to be measured [1], but the general formula is:
Propagation time = (Frame Serialization Time) + (Link Media Delay)
Link Media Delay = D/p
Frame Serialization Time = S/R
I don't see the relevance of TCP's sliding window in this question yet; sometimes professors include extra data to discern how well you understand the principles.
END-NOTES:
[1] Does the professor measure propagation time at the bit level or at the frame level? My answer assumes a frame-level calculation (measured from the first bit transmitted until the last bit of the frame is received), so I include the frame serialization time.
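If it helps, here is a small Python sketch combining the two components above with the textbook sliding-window utilisation result U = W*(S/R) / (S/R + 2*D/p), capped at 1. Whether that is the formula your professor expects depends on the measurement convention in the end-note, and the example numbers are assumptions of mine:

    def link_utilisation(W, S, R, D, p):
        # W : window size in packets
        # S : packet size in bits
        # R : link data rate in bits/s
        # D : link distance in metres
        # p : medium propagation speed in metres/s
        t_frame = S / R          # frame serialization time
        t_prop = D / p           # link media delay (one way)
        return min(1.0, W * t_frame / (t_frame + 2 * t_prop))

    # Example: 8-packet window, 1000-bit packets, 1 Mb/s link,
    # 2000 km of medium at 2e8 m/s.
    print(link_utilisation(8, 1000, 1e6, 2_000_000, 2e8))   # ~0.38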
