Maybe it's a stupid question!
Assume a P2P network in which peers independently try to find and connect to good nodes.
Good nodes are those that are closer (in terms of RTT) and have higher bandwidth.
In this scenario RTT is more important (for example, RTT has 90% weight).
I want to obtain a linear combination of RTT and bandwidth in a meaningful way,
but obviously the two metrics are of a different nature (different units and scales).
How can I combine these two metrics?
One of the key unicast routing protocols (it is point-to-point rather than P2P), EIGRP, uses several metrics to come up with a single composite metric. EIGRP is a very popular routing protocol. Delay (aka RTT) and bandwidth are typically its two key parameters. So your question is a relevant one!
This should help:
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6554/ps6599/ps6630/whitepaper_C11-720525.html
Given the nature of bandwidth and delay, you could have multiple peers scoring the same final metric. For example, you could have one peer, P1, with values B and D, and another peer, P2, with values 2B and 2D. Assuming the bandwidth represents the capacity of the pipe, you would be able to transport a given amount of data with both of these peers in the same time!
You should certainly start with a default pair of weights, let us say 0.5/0.5, for these two parameters. But you might find that for a given goal (faster routing convergence, say) a different pair of weights works better.
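As a starting point, here is a minimal sketch of one way to build such a linear combination: normalize both metrics to a common 0..1 cost scale first, then apply the weights (90/10 as in the question). This is my own illustration, not EIGRP's composite-metric formula; the peer values, function names, and min-max normalization are assumptions.

    # Minimal sketch (not EIGRP's actual formula): score peers by a weighted
    # combination of RTT and bandwidth after normalizing both to a 0..1 range,
    # so the two incompatible units become comparable. Peer data and weights
    # are illustrative assumptions.

    def score_peers(peers, w_rtt=0.9, w_bw=0.1):
        """peers: list of (name, rtt_ms, bandwidth_mbps); lower score is better."""
        rtts = [p[1] for p in peers]
        bws = [p[2] for p in peers]

        def norm(value, lo, hi):
            return 0.0 if hi == lo else (value - lo) / (hi - lo)

        scored = []
        for name, rtt, bw in peers:
            rtt_cost = norm(rtt, min(rtts), max(rtts))     # higher RTT -> worse
            bw_cost = 1.0 - norm(bw, min(bws), max(bws))   # lower bandwidth -> worse
            scored.append((w_rtt * rtt_cost + w_bw * bw_cost, name))
        return sorted(scored)

    peers = [("P1", 40, 100), ("P2", 10, 20), ("P3", 25, 50)]
    for score, name in score_peers(peers):
        print(f"{name}: {score:.3f}")

Min-max normalization is just one choice; dividing by a fixed reference RTT and bandwidth also works and keeps scores comparable across runs.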
In TCP, the sequence number refers to bytes rather than acting as a segment counter. The sequence number is a 32-bit integer (~4.2 GB).
If I am sending a file directly over TCP, I can't exceed this number.
This was okay with old file systems, but now we have files exceeding this size.
I believe application-layer protocols have been modified to bypass this limit. Can anyone provide an example of this, or at least list the techniques used?
For reference, the question was based on the following problem
Textbook: Computer Networking: A Top-Down Approach by James F. Kurose, Keith W. Ross.
P26. Consider transferring an enormous file of L bytes from Host A to Host B. Assume an MSS of 536 bytes.
a. What is the maximum value of L such that TCP sequence numbers are not exhausted? Recall that the TCP sequence number field has 4 bytes.
If I am sending a file directly over TCP, I can't exceed this number.
Yes you can. You are mistaken. It wraps around.
P26. Consider transferring an enormous file of L bytes from Host A to Host B. Assume an MSS of 536 bytes. a. What is the maximum value of L such that TCP sequence numbers are not exhausted? Recall that the TCP sequence number field has 4 bytes.
'Sequence numbers are not exhausted' is a constraint for the purposes of this question, but the authors aren't necessarily thereby claiming that such a limit applies to any TCP transmission. If they are, they're manifestly wrong. Consider that the initial sequence number is chosen randomly, and therefore can be 2^32-1. Does that imply a limit on that connection of one byte? Of course it doesn't.
I also note that the MSS of 536 bytes is entirely irrelevant to the question. Possibly this is just a substandard text.
EDIT: I've now located this source. You didn't misunderstand it. There is nothing in the book about TCP sequence number exhaustion except for this stupid question. Nothing about it wrapping around either, which is a curious omission. The MSS is used in the second part of the book problem, not quoted here.
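To make the wrap-around and the book's part (a) concrete, here is a small sketch (my illustration, not something from the book or from any TCP implementation): sequence numbers simply continue modulo 2^32, and comparisons use signed modular arithmetic, so a transfer larger than 4 GiB is no problem.

    # Part (a) of the book problem just asks for the size of the
    # sequence-number space; the "seq_after" / "seq_cmp" helpers below show
    # how counting continues past 2^32 because sequence numbers wrap and are
    # compared modulo 2^32 (the names are mine, not a real API).

    SEQ_SPACE = 2 ** 32

    # Part (a): maximum L so that sequence numbers are not exhausted.
    L_max = SEQ_SPACE            # 4,294,967,296 bytes, roughly 4.29 GB
    print(f"L_max = {L_max} bytes")

    def seq_after(seq, nbytes):
        """Sequence number after sending nbytes, wrapping modulo 2^32."""
        return (seq + nbytes) % SEQ_SPACE

    def seq_cmp(a, b):
        """Signed comparison of sequence numbers (negative if a precedes b)."""
        return ((a - b + SEQ_SPACE // 2) % SEQ_SPACE) - SEQ_SPACE // 2

    isn = 4_294_967_290          # initial sequence number near the wrap point
    nxt = seq_after(isn, 100)    # after 100 bytes we are past 2^32
    print(nxt, seq_cmp(isn, nxt))  # 94 -100: isn still compares as "earlier"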
A lot of ISPs sell their products saying: 100 Mbit/s speed.
However, compare the internet to a parcel service, UPS for example.
The amount of packages you can send every second (bandwidth) is something different from the time it takes them to arrive (speed).
I know there are multiple meanings of the term 'bandwidth', so is it wrong to advertise with speed?
Wikipedia (http://en.wikipedia.org/wiki/Bandwidth_(computing)):
In computer networking and computer science, bandwidth,[1] network bandwidth,[2] data bandwidth,[3] or digital bandwidth[4][5] is a measurement of bit-rate of available or consumed data communication resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
This part tells me that bandwidth is measured in Mbit/s, Gbit/s, and so on.
So does this mean the majority of ISPs are advertising wrongly, and that they should advertise 'bandwidth' instead of speed?
Short answer: Yes.
Long answer: There are several aspects of data transfer that can be measured on an amount-per-time basis; amount of data per second is one of them, but it is perhaps misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
Bandwidth - maximum rate that information can be transferred
Throughput - the actual rate that information is transferred
Latency - the delay between the sender transmitting the information and the receiver decoding it
Jitter - variation in the time of arrival at the receiver of the information
Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mbit/s connection, but if 50% of the sent packets are corrupted, your final throughput is actually just 5 Mbit/s (even less, if you consider that a substantial part of the data may be control structures instead of payload).
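As a rough sketch of that calculation (the helper name and figures are illustrative, not a standard formula):

    # Minimal sketch of the point above: effective throughput is the nominal
    # bandwidth minus whatever is lost to corrupted packets and protocol
    # overhead. The numbers are illustrative assumptions.

    def effective_throughput(bandwidth_mbps, error_rate, overhead_fraction=0.0):
        """Rough usable data rate in Mbit/s after errors and protocol overhead."""
        return bandwidth_mbps * (1.0 - error_rate) * (1.0 - overhead_fraction)

    print(effective_throughput(10, 0.5))        # 5.0 -> the 10 Mbit/s, 50% error case
    print(effective_throughput(10, 0.5, 0.06))  # 4.7 -> also subtracting ~6% header overhead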
Latency may be affected by mechanisms such as Nagle's algorithm and ISP-side buffering:
As specified in RFC 1149, an ISP could sell you an IPoAC package with 9 Gbit/s, and still be true to its word, if it sent you 16 pigeons with 32 GB SD cards attached to them, with an average air time around 1 hour, or ~3,600,000 ms of latency.
I'm reading about the distance-vector protocol RIP and have learned that the maximum hop count it uses is 15 hops. My doubt is: why is 15 used as the maximum hop count, and not some other number like 10, 12, or maybe 8?
My guess is that 15 is 16 - 1, that is 2^4 - 1, or put otherwise: the biggest unsigned value that fits in 4 bits of information.
However, the metric field is 4 bytes long, and the value 16 denotes infinity.
I can only guess, but I would say that it allows fast checks with a simple bit-mask operation to determine whether the metric is infinity or not.
Now the real question might be: "Why is the metric field 4 bytes long when, apparently, only five bits are used?" and for that, I have no answer.
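For what it's worth, here is a tiny sketch of the kind of bit-mask check meant above (my illustration, not anything from the RIP specification):

    # If valid metrics are 1..16 and 16 means "unreachable", then bit 4 is
    # set only for the infinity value, so a single mask suffices.

    RIP_INFINITY = 16  # 0b10000

    def is_unreachable(metric):
        """True when the metric is RIP's 'infinity' (16), assuming metric <= 16."""
        return metric & 0b10000 != 0

    print([m for m in range(1, 17) if is_unreachable(m)])  # [16]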
Protocols often make arbitrary decisions. RIP is a very basic (and rather old) protocol; you should keep that in mind when reading about it. As said above, the hop count is carried in a 4-byte metric field, where 16 is equivalent to infinity. 10 is not a power-of-2 number, and 8 was probably deemed too small to reach all the routers.
The rationale behind keeping the maximum hop count low is the count-to-infinity problem: higher max hop counts lead to higher convergence time (I'll leave you to look up the count-to-infinity problem on Wikipedia). Certain versions of RIP use split horizon, which addresses this issue.
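To see why a low "infinity" value bounds convergence time, here is a small, deliberately simplified simulation of two routers bouncing a stale route back and forth (an illustration of the count-to-infinity behaviour, not RIP itself):

    # After a destination becomes unreachable, two routers that still
    # advertise the stale route to each other keep incrementing the metric
    # by one per exchange until it hits "infinity". A lower infinity value
    # therefore bounds how long this takes. Starting metrics are assumptions.

    def rounds_to_converge(infinity):
        a, b = 2, 1          # hop counts A and B believed before the link died
        rounds = 0
        while a < infinity or b < infinity:
            # Each round, a router picks up the other's stale route and adds one hop.
            a, b = min(b + 1, infinity), min(a + 1, infinity)
            rounds += 1
        return rounds

    for inf in (16, 32, 256):
        print(f"infinity={inf}: {rounds_to_converge(inf)} exchanges to give up")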
I usually hear that network bandwidth is measured in bits per second, for example 500 Mbps. But when reading some network-related text, I saw:
"Coaxial cable supports bandwidths up to 600 MHz."
Why do they say that? And what is the relationship between MHz and Kb?
Coaxial cable is not a "network" but a transmission medium. For physical reasons its bandwidth must be measured in Hz; consider that we are talking here about electromagnetic signals, not bits.
When you move to the "network" side, in particular digital networks, the capacity is measured in bps. Note that while a bandwidth (MHz) increase will generally lead to a bps increase, the final bps depends on many factors, for example the digital modulation scheme (at a low level) and the network protocol (at a higher level). A typical case is the "symbol" representation, which tells you how many bits are sent in a single "pulse".
But the subject is really huge and cannot be covered in a single answer here; I recommend you read a good book on electrical communications to get a clear picture of it.
That's the bandwidth of the signal that can be sent through the cable. You might want to read about the Nyquist-Shannon sampling theorem to see how that relates to the data that can be transmitted.
How the MHz relate to the Kb depends on the method used for transmitting the data, which is why you'll see cables rated with a bandwidth in MHz.
We are dealing with a bit of an abuse of terminology. Originally, "bandwidth" meant the width of the band that you have available to transmit (or receive) on. The term has been co-opted to also mean the amount of digital data you can transmit (or receive) on a line per unit time.
Here's an example of the original meaning. FM radio stations are spaced 200 kHz apart. You can have a station on 95.1 MHz and another one on 94.9 MHz and another one on 95.3 MHz, but none in between. The bandwidth available to any given FM radio station is 200 kHz (actually it may be less than that if there is a built-in buffer zone of no-man's-land frequencies between stations, I don't know).
The bandwidth rating of something like a coaxial cable is the range of frequencies of the electromagnetic waves that it is designed to transmit reliably. Outside that range the physical properties of the cable cause it to not transmit signals reliably.
With (digital) computers, bandwidth almost always has the alternate meaning of data capacity per unit time. The two are related, though, because if you have more available analog bandwidth, you can use a technology that transmits more (digital) data at the same time over that carrier.
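For a concrete link between the two meanings, the Shannon-Hartley theorem gives the upper bound on data rate for a given analog bandwidth and signal-to-noise ratio. A small sketch (the SNR values are illustrative assumptions, not properties of any particular coax cable):

    # Shannon-Hartley capacity: C = B * log2(1 + S/N), relating bandwidth in
    # Hz to a maximum data rate in bit/s for a given signal-to-noise ratio.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        """Upper bound on data rate for an analog bandwidth and SNR."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    bandwidth_hz = 600e6       # the 600 MHz coax figure from the question
    for snr_db in (10, 20, 30):
        snr = 10 ** (snr_db / 10)
        print(f"SNR {snr_db} dB -> about {shannon_capacity_bps(bandwidth_hz, snr)/1e9:.1f} Gbit/s")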
I've been creating a reliable networking protocol similar to TCP, and was wondering what a good default value for a retransmit threshold should be for a packet (the number of times I resend the packet before assuming the connection is broken). How can I find the optimal number of retries on a network? Also, not all networks have the same reliability, so I'd imagine this 'optimal' value would vary between networks. Is there a good way to calculate the optimal number of retries? And how many milliseconds should I wait before retrying?
This question cannot be answered as presented, as there are far, far too many real-world complexities that must be factored in.
If you want TCP, use TCP. If you design a custom protocol for the transport layer, you will do worse than the 40 years of cumulative experience coded into TCP.
If you don't look at the existing literature, you will miss a good hundred design considerations that will never occur to you sitting at your desk.
I ended up allowing the application to set this value, with a default value of 5 retries. This seemed to work across a large number of networks in our testing scenarios.
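For illustration, here is a minimal sketch of that kind of configurable retry policy, combined with the exponential backoff that TCP's own retransmission timer uses. The default of 5 retries mirrors the answer above, but the function, parameter names, and timeout values are my own assumptions:

    # Send a datagram, wait for any reply, and double the timeout on each
    # retry (exponential backoff). Give up after max_retries attempts.

    import socket

    def send_with_retries(sock, payload, addr, max_retries=5, base_timeout=0.2):
        """Send a datagram and wait for any reply, doubling the timeout per retry."""
        timeout = base_timeout
        for attempt in range(max_retries):
            sock.sendto(payload, addr)
            sock.settimeout(timeout)
            try:
                reply, _ = sock.recvfrom(2048)   # treat any reply as an ACK here
                return reply
            except socket.timeout:
                timeout *= 2                     # back off before resending
        raise ConnectionError(f"no reply after {max_retries} attempts")

    # Usage (illustrative):
    #   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    #   ack = send_with_retries(sock, b"hello", ("198.51.100.7", 9000))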