Is it possible that in a network, delay from router A to B is different from delay from router B to A? - networking

Considering that the metric is delay in a distance vector routing algorithm, is it possible that the delay from router A to B is different from the delay from router B to A?
If yes, under which conditions?
Thanks.

The algorithm assumes the graph is bidirectional. Of course, it's possible for the delays to be different in each direction in practice: for example, if B is transmitting heavily to A, then traffic from A to B is likely to be faster than from B to A, since traffic from B will have to get in line at the end of a queue.

Delay and metric are two different things.
Delay is the time it takes for a packet to traverse the network. If a link is heavily utilized in one direction and there is some kind of buffering device (such as a switch) on the link, you might see different delays depending on direction.
Metrics are values associated with entries in a routing table that indicate the "cost" of different routes. If A and B have static routing entries, they can definitely be configured with different metrics for each direction of the same link.
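To make that concrete, here is a minimal distance-vector (Bellman-Ford) sketch in Python; the topology and costs are made-up example values. The cost table is directed, so cost(A->B) and cost(B->A) are independent entries, and the computed distances come out asymmetric:

```python
# A minimal distance-vector sketch over a *directed* graph. The link costs
# are hypothetical; the point is that costs[u][v] and costs[v][u] are
# independent entries, so asymmetric metrics fall out naturally.

INF = float("inf")

# Directed link metrics: costs[u][v] is u's configured cost to reach v directly.
costs = {
    "A": {"B": 10, "C": 2},
    "B": {"A": 3,  "C": 4},
    "C": {"A": 2,  "B": 4},
}
nodes = list(costs)

# dist[u][v]: u's current estimate of its distance to v.
dist = {u: {v: (0 if u == v else costs[u].get(v, INF)) for v in nodes}
        for u in nodes}

# Repeatedly exchange vectors with neighbours (Bellman-Ford relaxation).
for _ in range(len(nodes) - 1):
    for u in nodes:
        for neighbour, link in costs[u].items():
            for v in nodes:
                dist[u][v] = min(dist[u][v], link + dist[neighbour][v])

print(dist["A"]["B"])  # 6  (A -> C -> B, since 2 + 4 < 10)
print(dist["B"]["A"])  # 3  (B -> A directly)
```

Nothing in the relaxation step requires the two directions of a link to agree, which is exactly why asymmetric static metrics are legal.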

Are you assuming both hypothetical circumstances run at the exact same time? If not, I suppose there could be a spike in traffic at one of the routers at any given moment that bogs down your 'wanted' traffic.

Certainly this is possible, but to give you more details you probably need to be more specific with the question.
With regard to your specific question about metrics and distance vector routing algorithms: yes, A can be configured to think that B is further away than B thinks A is. As one of the other answers mentions, that doesn't necessarily mean the delay is different, although it may in fact be.
In practice though, there are lots of questions to consider:
Is router A adjacent to router B? If not, then you certainly could have different delays because inbound packets may take a different path than outbound packets.
If they are adjacent, what kind of connectivity do they have? Are they the same kind of router? Imagine a router at the end of an asymmetric DSL line. The propagation delay wouldn't be asymmetric, of course, but the delay could be higher in one direction as a result of traffic congestion. (This scenario also gives a concrete example of why you might want A to think the link to B has a higher cost than B thinks the link to A has.)
In practice, the definition of delay makes a big difference too. Are you thinking of the computed cost? Just propagation delay? Or just the link cost? If router B is sending more traffic than router A, it may take longer for responding packets from B to A to be processed by B than A takes when sending the packets (the same may apply to intermediate switches, especially in the case of things like multicast packets--some routers and/or switches take longer to process multicast and other "special" packets). So in this scenario the actual delay may be different, even though the cost the distance vector protocol uses is the same.
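As a back-of-envelope illustration of why the definition matters, here is a small sketch (all numbers are hypothetical) that splits one-way delay into propagation, transmission, and queueing components. On the same physical link only the propagation term is inherently symmetric; queueing in particular can differ wildly by direction:

```python
# Back-of-envelope decomposition of one-way delay (all numbers hypothetical):
#   one_way = propagation + transmission + queueing

link_length_m = 1_000_000   # 1000 km of fiber
prop_speed    = 2e8         # ~2/3 of c in fiber, m/s
bandwidth_bps = 100e6       # 100 Mbit/s link
packet_bits   = 1500 * 8    # one full-size Ethernet frame

propagation  = link_length_m / prop_speed    # 5 ms, same in both directions
transmission = packet_bits / bandwidth_bps   # 0.12 ms
queueing_ab  = 0.0002                        # A -> B: lightly loaded queue
queueing_ba  = 0.0150                        # B -> A: congested queue

print((propagation + transmission + queueing_ab) * 1000)  # ~5.3 ms
print((propagation + transmission + queueing_ba) * 1000)  # ~20.1 ms
```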
Hope this answer helps. Good luck,
--jed

Related

Varying network delays between two fixed hosts

Consider two hosts A and B on a wired network. Both hosts send data packets to each other. In a real-world scenario, the delays experienced in the direction A->B can be different from B->A. One primary reason could be different routes the packets travel: for example, A->B might take longer than B->A, possibly because it passes through a different set of routers or a longer route. Now let's assume that the packets from A->B and B->A take the same route. Can there still be potential causes for different delays in the two directions? If yes, it would be great if someone could elaborate.
They will never be identical. There are lots of factors that will affect the delay. I might not cover all the possible cases, but at least I will try to cover what I remember.
First of all, they often won't take the same path. Even when the reverse direction does take the same path, traffic conditions in that direction could be different at routers in the core network, and the routers could have different queuing policies.
The delays depend on packet size (the larger the packet, the larger the one-way delay), and routers may treat packets of different sizes differently.
Don't forget the time of day. Holidays, working hours, and rush hours matter.
It also depends on the measurement layer: assuming the packets have made it to your machine, the time it takes for a packet to travel from the Ethernet card up to the transport layer (TCP/UDP) or application layer will not be the same on two different machines. It depends on the machine's configuration, load, OS, kernel, etc.
Practically, they can't be the same; you can treat them as equal only as an approximation or in theory.
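For completeness, here is a minimal sketch (hypothetical UDP probe; the address and port are made-up) of how you might observe the asymmetry: the sender embeds a departure timestamp, the receiver compares it with its own clock on arrival, and you then swap roles and compare the two directions. Note that one-way delay measured this way is only meaningful if the two clocks are synchronized (e.g. via NTP/PTP), which is itself a hard problem.

```python
# Hypothetical one-way delay probe over UDP. The address is a documentation
# address; real measurements need synchronized clocks (NTP/PTP) on both ends.
import socket, struct, time

HOST_B = ("192.0.2.10", 9000)   # made-up receiver endpoint

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Embed the departure timestamp in the payload.
    sock.sendto(struct.pack("!d", time.time()), HOST_B)

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9000))
    payload, addr = sock.recvfrom(64)
    (sent_at,) = struct.unpack("!d", payload)
    # Only meaningful if both clocks are synchronized.
    print(f"one-way delay from {addr}: {(time.time() - sent_at) * 1000:.2f} ms")
```

Running sender() on A and receiver() on B, then swapping roles, will generally show two different one-way delays even on the same route.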

How do clients on wireless networks decide who can transmit at any given time?

I've been thinking about wireless networking a little bit recently, and I came upon a realization last night that I can't find an answer to: how do clients know when they can transmit without stomping on another client's transmission?
I assume there is documentation for this sort of thing available, but I've been unable to find anything useful in over half an hour of casual Google queries, probably because I don't know the right terms. Apologies in advance if this is a silly question . . .
Here's why I'm confused: based on my understanding of how RF hardware works, we can model the transmission medium as a safe shared register between different RF clients (because what one client broadcasts can be overwritten by other clients, leaving a muddle of the two). But safe registers only have consensus number 1, so how can we establish who can transmit at any given point? I'm assuming that only one client can transmit at once -- perhaps this is my fundamental misunderstanding?
Even the use of a randomized consensus protocol seems unwieldy, because the only ones I know of use atomic registers, not safe registers, and also have no upper bound on running time, so two identical devices with the same random seed could proceed for a very long time.
Thanks!
Please check: Carrier sense multiple access with collision avoidance
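In short, 802.11 stations don't solve consensus at all: each one listens before transmitting (carrier sense) and, when the medium is busy or a transmission goes unacknowledged, waits a random backoff drawn from a contention window that doubles on each retry. Collisions still happen; they're just detected indirectly (no ACK) and made increasingly unlikely. A simplified sketch of that binary exponential backoff, with illustrative parameters rather than the exact 802.11 values:

```python
# Simplified CSMA/CA-style binary exponential backoff. Parameters are
# illustrative; real 802.11 uses PHY-specific slot times and CW bounds.
import random, time

SLOT_TIME_US   = 9          # assumed slot duration
CW_MIN, CW_MAX = 15, 1023   # assumed contention window bounds

def backoff_slots(retry):
    """Draw a random backoff; the window doubles with every retry."""
    cw = min((CW_MIN + 1) * (2 ** retry) - 1, CW_MAX)
    return random.randint(0, cw)

def transmit(channel_busy, send, max_retries=7):
    """channel_busy() and send() are hypothetical radio hooks;
    send() returns True when an ACK comes back."""
    for retry in range(max_retries):
        slots = backoff_slots(retry)
        while slots > 0:
            time.sleep(SLOT_TIME_US / 1e6)
            if not channel_busy():   # countdown pauses while the medium is busy
                slots -= 1
        if send():                   # ACK received: success
            return True
    return False                     # still colliding after max_retries: give up
```

Because each station draws its backoff independently at random, two stations that collide once are unlikely to keep colliding, which sidesteps the consensus problem entirely.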

Compensating for jitter

I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms but the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40 ms would mean your application would wait up to 40 ms between packets and so would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, and hence there is a relationship between jitter and the effective packet loss value for your connection. Packet loss also impacts users' perception of VoIP quality - different codecs have different tolerances - a common target might be that it should be lower than 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers will either be static or dynamic (adaptive) - in either case, the bigger they get the greater the chance they will introduce delay into the call and you get back to the delay issue above. A typical jitter buffer might be between 20 and 50ms, either set statically or adapting automatically based on network conditions.
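If you are developing the application yourself, a fixed jitter buffer is the simplest starting point. Here is a rough sketch (the 40 ms depth, 20 ms frame interval, and packet shape are assumptions for illustration): frames are reordered by sequence number and released at a scheduled playout time, and anything arriving after its deadline is treated as lost.

```python
# Minimal fixed jitter buffer sketch. The 40 ms depth, 20 ms frame interval,
# and (seq, arrival_time, payload) shape are assumptions, not a spec.
import heapq

PLAYOUT_DELAY  = 0.040   # jitter buffer depth in seconds
FRAME_INTERVAL = 0.020   # nominal audio frame spacing (e.g. 20 ms codec frames)

class JitterBuffer:
    def __init__(self):
        self.heap = []          # min-heap ordered by sequence number
        self.base_time = None   # reference arrival time for scheduling

    def playout_time(self, seq):
        # Each frame is scheduled relative to the first arrival plus the depth.
        return self.base_time + PLAYOUT_DELAY + seq * FRAME_INTERVAL

    def push(self, seq, arrival_time, payload):
        if self.base_time is None:
            self.base_time = arrival_time - seq * FRAME_INTERVAL
        if arrival_time > self.playout_time(seq):
            return False        # arrived after its deadline: counts as lost
        heapq.heappush(self.heap, (seq, payload))
        return True

    def pop_due(self, now):
        """Release, in order, every frame whose playout time has arrived."""
        out = []
        while self.heap and now >= self.playout_time(self.heap[0][0]):
            out.append(heapq.heappop(self.heap))
        return out
```

An adaptive buffer would grow PLAYOUT_DELAY when late arrivals (the `return False` path) become frequent and shrink it when the network is calm, trading delay against effective loss.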
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common internet connection online speed tests available as many will have specific VoIP test that will give you an idea if your local connection is good enough for VoIP (although bear in mind that these tests only indicate the conditions at the exact time you are running your test).

How many times should I retransmit a packet before assuming that it was lost?

I've been creating a reliable networking protocol similar to TCP, and was wondering what a good default value for a retransmit threshold should be on a packet (the number of times I resend the packet before assuming that the connection was broken). How can I find the optimal number of retries on a network? Also, not all networks have the same reliability, so I'd imagine this 'optimal' value would vary between networks. Is there a good way to calculate the optimal number of retries? And how many milliseconds should I wait before retrying?
This question cannot be answered as presented as there are far, far too many real world complexities that must be factored in.
If you want TCP, use TCP. If you design a custom transport-layer protocol, you will do worse than the 40 years of cumulative experience coded into TCP.
If you don't look at the existing literature, you will miss a good hundred design considerations that will never occur to you sitting at your desk.
I ended up allowing the application to set this value, with a default value of 5 retries. This seemed to work across a large number of networks in our testing scenarios.
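For what it's worth, a common pattern (and roughly what TCP itself does) is to derive the timeout from a smoothed RTT estimate and back off exponentially between attempts. A hedged sketch loosely following RFC 6298's SRTT/RTTVAR estimator; `send` and `wait_for_ack` are hypothetical hooks:

```python
# Retransmission with an RTT-derived timeout and exponential backoff,
# loosely following RFC 6298. send() and wait_for_ack() are hypothetical.
import time

ALPHA, BETA = 1/8, 1/4   # RFC 6298 smoothing gains
MAX_RETRIES = 5          # the default the answer above settled on

class Retransmitter:
    def __init__(self):
        self.srtt, self.rttvar = None, None
        self.rto = 1.0                      # initial retransmission timeout (s)

    def on_ack(self, rtt_sample):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt_sample, rtt_sample / 2
        else:
            # Update the variance with the old SRTT, then smooth the SRTT.
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt_sample)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt_sample
        # RFC 6298 suggests a 1 s floor; many stacks use a smaller one.
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def send_reliably(self, packet, send, wait_for_ack):
        rto = self.rto
        for attempt in range(MAX_RETRIES):
            sent_at = time.time()
            send(packet)
            if wait_for_ack(timeout=rto):
                self.on_ack(time.time() - sent_at)
                return True
            rto *= 2                        # exponential backoff on timeout
        return False                        # assume the connection is broken
```

Adapting the timeout to measured RTT is what lets one retry count behave sensibly across networks of very different latency.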

defining the time it takes to do something (latency, throughput, bandwidth)

I understand latency - the time it takes for a message to go from sender to recipient - and bandwidth - the maximum amount of data that can be transferred over a given time - but I am struggling to find the right term to describe a related thing:
If a protocol is conversation-based - the payload is split up over many to-and-fros between the ends - then latency affects 'throughput' [1].
[1] What is this called, and is there a nice concise explanation of it?
Surfing the web trying to optimize the performance of my NAS (nas4free), I came across a page that describes the answer to this question (IMHO). Specifically, this section caught my eye:
"In data transmission, TCP sends a certain amount of data then pauses. To ensure proper delivery of data, it doesn’t send more until it receives an acknowledgement from the remote host that all data was received. This is called the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together. This “windowing” happens so fast we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant. Thus, the further away the two hosts, the longer it takes for the sender to receive the acknowledgement from the remote host, reducing overall throughput. This effect is called “Bandwidth Delay Product,” or BDP."
This sounds like the answer to your question.
BDP as wikipedia describes it
To conclude, it's called Bandwidth Delay Product (BDP) and the shortest explanation I've found is the one above. (Flexo has noted this in his comment too.)
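As a worked example (the numbers are assumed), the bandwidth-delay product is the amount of unacknowledged data that must be "in flight" to keep the pipe full; a fixed TCP window then caps throughput at window/RTT:

```python
# Worked example with assumed numbers: a 100 Mbit/s link with 80 ms RTT.
bandwidth_bps = 100e6          # link capacity
rtt_s         = 0.080          # round-trip time

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")   # ~977 KiB must be in flight

# With a classic 64 KiB TCP window, throughput is capped well below capacity:
window_bytes = 64 * 1024
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"max throughput: {max_throughput_bps / 1e6:.1f} Mbit/s")   # ~6.6 Mbit/s
```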
Could goodput be the term you are looking for?
According to wikipedia:
In computer networks, goodput is the application level throughput, i.e. the number of useful bits per unit of time forwarded by the network from a certain source address to a certain destination, excluding protocol overhead, and excluding retransmitted data packets.
Wikipedia Goodput link
The problem you describe arises in communications which are synchronous in nature. If there were no need to acknowledge receipt of information and it were certain to arrive, then the sender could send as fast as possible and the throughput would be good regardless of the latency.
When things do need to be acknowledged, it is this synchronisation that causes the drop in throughput, and the degree to which the communication (i.e. the sending of acknowledgments) is allowed to be asynchronous controls how much it hurts the throughput.
'Round-trip time' links latency and the number of turns.
Or: the latency cost an application experiences is a function of two things:
(i) round-trip time (the time it takes to complete a trip across the network); and
(ii) the number of times the application has to traverse it (aka turns).
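A quick worked example with assumed numbers shows how turns multiply RTT into total waiting time for a chatty protocol:

```python
# Assumed numbers: a protocol that needs 50 request/response turns
# to move its payload, over links with different RTTs.
turns = 50
for rtt_ms in (1, 20, 100):
    total_ms = turns * rtt_ms
    print(f"RTT {rtt_ms:>3} ms -> {total_ms} ms spent waiting on the network")
```

On a 1 ms LAN the turns are invisible; at 100 ms they dominate the transfer, which is exactly the throughput collapse the question describes.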
