Does the server being closer to the user decrease the latency?

To what extent does the server being closer to the user decrease the latency between the user and the service, and help the overall experience?

That depends greatly on what you mean by "closer."
If the two devices are on the same wire, the latency is negligible, since the signal travels in copper at roughly two-thirds the speed of light. The latency due to distance can be calculated based on the lengths of the various media in the path. Moving devices closer together, in terms of cabling distance, may make a noticeable difference in latency, depending on how far apart they were to begin with.
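As a back-of-the-envelope illustration, here is a minimal Python sketch of that distance-based calculation; the 0.67 velocity factor is an assumed typical value for copper cabling, not a universal constant:

```python
# Rough one-way propagation-delay estimate from cable distance.
SPEED_OF_LIGHT_M_PER_S = 299_792_458
VELOCITY_FACTOR = 0.67  # assumed fraction of c for copper media

def propagation_delay_ms(distance_m: float) -> float:
    """One-way propagation delay over a given cable distance."""
    return distance_m / (SPEED_OF_LIGHT_M_PER_S * VELOCITY_FACTOR) * 1000

print(propagation_delay_ms(100))        # ~0.0005 ms for a 100 m run
print(propagation_delay_ms(5_000_000))  # ~25 ms across 5000 km
```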
Having multiple devices (switches, routers, etc.) in between the two will increase the latency further, due to serialization/deserialization, encapsulation/de-encapsulation, switching, buffering, and similar delays. This is much more complex, and it probably contributes the more significant share of the latency.

Related

Would a device physically closer to me transfer a file quicker (P2P)?

Hello all, I am new to networking and a question arose in my head. Would a device that is physically closer to another device transfer a file quicker than a device across the globe, if a P2P connection were used?
Thanks!
No, not generally.
The maximum throughput between any two nodes is limited by the slowest interconnect in their path. When acknowledgments are used (e.g. with TCP), throughput is also limited by congestion, the send/acknowledgment window size, round-trip time (RTT) - you cannot transfer more than one full window in each RTT period - and packet loss.
Distance as such doesn't matter. However, a long-distance path likely crosses a large number of interconnects, increasing the chance of a weak link, congestion, or packet loss. Also, RTT inevitably increases, requiring a large send window (TCP window scale option).
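To make the one-window-per-RTT limit concrete, here is a small Python sketch; the window sizes and the 100 ms RTT are illustrative assumptions:

```python
# Max TCP throughput is bounded by one full window per round trip:
# throughput <= window_size / RTT, no matter how fast the links are.
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# Unscaled 64 KiB window over an assumed 100 ms intercontinental path:
print(max_tcp_throughput_mbps(65_535, 100))     # ~5.2 Mb/s
# 1 MiB window (window scale option) over the same path:
print(max_tcp_throughput_mbps(1_048_576, 100))  # ~84 Mb/s
```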
Whether the links are copper, fiber, or wireless doesn't matter - though wireless does carry some risk of additional packet loss. P2P versus classic client-server doesn't matter either.

How to improve the speed of MPI_scatter/MPI_gather?

I found that the time used for MPI_scatter/MPI_gather increases continuously (somewhat linearly) as the number of workers increases, especially when the workers are spread across different nodes.
I thought that MPI_scatter/MPI_gather is a parallel operation, and I wonder what leads to this increase. Is there any trick to make it faster, especially for workers distributed across CPU nodes?
The root rank has to push a fixed amount of data to the other ranks. As long as all ranks reside on the same compute node, the process is limited by the memory bandwidth available. Once more nodes become involved, the network bandwidth, usually much lower than the memory bandwidth, becomes the limiting factor.
Also, the time to send a message is roughly divided into two parts: the initial latency (network setup and MPI protocol handshake), and then the time it takes to physically transfer the actual data bits. As the total amount of data is fixed, the physical transfer time remains the same (as long as the transport type, and therefore the bandwidth, stays the same), but more setup/latency overhead is added with each new rank that data is scattered to or gathered from - hence the linear increase in the time it takes to complete the operation.
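That cost model is easy to write down; a small Python sketch, with purely illustrative latency and bandwidth figures (2 µs per message, 100 Gb/s link):

```python
# Naive linear scatter cost model: the root sends each of the other
# N-1 ranks its chunk in turn. The transfer term stays ~constant while
# the per-message latency term grows linearly with the rank count.
def scatter_time_s(num_ranks: int, total_bytes: int,
                   per_msg_latency_s: float = 2e-6,
                   bandwidth_bytes_per_s: float = 12.5e9) -> float:
    chunk = total_bytes / num_ranks
    return (num_ranks - 1) * (per_msg_latency_s + chunk / bandwidth_bytes_per_s)

for n in (2, 4, 8, 16):
    print(n, scatter_time_s(n, 100 * 2**20))  # 100 MiB total payload
```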
How MPI_Scatter/Gather works varies between implementations. Some MPI implementations may choose to use a series of MPI_Send calls as the underlying mechanism.
The parameters that may affect how MPI_Scatter works are:
1. Number of processes
2. Size of data
3. Interconnect
For example, an implementation may avoid using a broadcast when a very small number of ranks is sending/receiving very large data.
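For reference, a minimal mpi4py sketch of the scatter/gather pattern under discussion, timed with MPI.Wtime (the payload here is made up for the example):

```python
# Run as: mpiexec -n 4 python scatter_gather.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# The root prepares one chunk per rank; all other ranks pass None.
data = [[i] * 4 for i in range(size)] if rank == 0 else None

t0 = MPI.Wtime()
chunk = comm.scatter(data, root=0)        # each rank receives its chunk
result = comm.gather(sum(chunk), root=0)  # root collects partial sums
t1 = MPI.Wtime()

if rank == 0:
    print(f"{size} ranks, scatter+gather took {t1 - t0:.6f} s: {result}")
```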

Does in/out bandwidth share the same limit on a network card?

This may be a very rookie question. Say I have a network card with a bandwidth limit of 100 MB/s. Is it possible for the in and out bandwidth to reach that limit at the same time? Or will I have this inequality at any point in time: in bandwidth + out bandwidth <= 100 MB/s?
First, your network card is probably 100 Mb/s, not 100 MB/s. Ethernet is the most common wired network type by far, and it commonly comes in 10, 100, and 1000 megabits per second. A 100 megaBIT/s Ethernet interface is capable of roughly 12.5 megaBYTES per second.
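The bit-to-byte arithmetic, as a one-liner sketch:

```python
# Link speeds are advertised in bits per second; divide by 8 for bytes.
def link_mbytes_per_s(megabits_per_s: float) -> float:
    return megabits_per_s / 8

print(link_mbytes_per_s(100))   # 12.5 MB/s for Fast Ethernet
print(link_mbytes_per_s(1000))  # 125 MB/s for Gigabit Ethernet
```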
If you're plugged into an Ethernet switch, you're most likely connecting in full duplex mode. This allows both ends to speak simultaneously without affecting each other's performance.
You'll never quite reach the full advertised speed, though: a Gigabit network interface (1000 Mb/s) will usually be able to transfer in the high 900s (Mb/s) in each direction without problems. A few sources of overhead prevent you from reaching the full speed, and many lower-end network cards or computers struggle to reach it at all.
If you're plugged into an Ethernet hub, only one end can talk at a time. There, in + out can't exceed the link speed, and is typically far lower because of collisions. It's really unlikely you'll find a hub anymore unless you go looking for one; switches are pretty much the only thing you can buy now outside of exotic applications.
TL;DR: You're almost always using full duplex mode, which allows up to (but usually less than) the advertised link speed in both directions simultaneously.

Compensating for jitter

I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms, it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms, but the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40 ms would mean your application waits up to 40 ms between packets, so it would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, so there is a relationship between jitter and the effective packet loss value for your connection. Packet loss also impacts users' perception of VoIP quality - different codecs have different tolerances - a common target is to keep it below 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers are either static or dynamic (adaptive) - in either case, the bigger they get, the greater the chance they introduce delay into the call, and you are back at the delay issue above. A typical jitter buffer might be between 20 and 50 ms, either set statically or adapting automatically based on network conditions.
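If this is an application you are developing yourself, a minimal sketch of a static jitter buffer may help frame the idea; the class name, the 40 ms depth, and the interface are illustrative, not a reference implementation:

```python
import heapq
import time

class StaticJitterBuffer:
    """Hold each packet for depth_ms after arrival so late or
    reordered packets can catch up before playout."""

    def __init__(self, depth_ms: float = 40.0):
        self.depth_s = depth_ms / 1000.0
        self._heap = []  # (sequence_number, release_time, payload)

    def push(self, seq: int, payload: bytes) -> None:
        release = time.monotonic() + self.depth_s
        heapq.heappush(self._heap, (seq, release, payload))

    def pop_ready(self):
        """Return the lowest-sequence packet whose hold time has
        expired, else None (a gap the playout code must conceal)."""
        if self._heap and self._heap[0][1] <= time.monotonic():
            return heapq.heappop(self._heap)[2]
        return None
```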
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online speed tests, as many have a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (though bear in mind that these tests only indicate the conditions at the exact time you run them).

Is it misuse to use "bandwidth" to describe the speed of a network?

I often hear people talking about a network's speed in terms of "bandwidth", and I read the following definition in Computer Networks: A Systems Approach:
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time.
AFAIK, the word "bandwidth" is used to describe the width of the frequency range that can be passed over some kind of medium, while the above definition describes something more like throughput. So is it misuse?
I have been thinking about this question for some time. I don't know where to post it. So forgive me if it is off topic.
Thanks.
Update - 1 - 9:56 AM 1/13/2011
I recall that if a signal's cycle is shorter in the time domain, its frequency band is wider in the frequency domain. So if the bit rate (digital bandwidth) is high, the signal's cycle must be quite short, and thus the analog bandwidth required will be quite wide. But a medium has a physical limit: there is a widest frequency range it can pass, and therefore a largest bit rate it can carry. From this point of view, I think the "misuse" of bandwidth in the digital world is acceptable.
The word bandwidth has more than one definition:
Bandwidth has several related meanings:
Bandwidth (computing) or digital bandwidth: a rate of data transfer, throughput or bit rate, measured in bits per second (bps), by analogy to signal processing bandwidth
Bandwidth (signal processing) or analog bandwidth, frequency bandwidth or radio bandwidth: a measure of the width of a range of frequencies, measured in hertz
...
With both definitions, having more bandwidth means that you can send more data.
In computer networking and other digital fields, the term bandwidth often refers to a data rate measured in bits per second, for example network throughput, sometimes denoted network bandwidth, data bandwidth or digital bandwidth. The reason is that according to Hartley's law, the digital data rate limit (or channel capacity) of a physical communication link is proportional to its bandwidth in hertz, sometimes denoted radio frequency (RF) bandwidth, signal bandwidth, frequency bandwidth, spectral bandwidth or analog bandwidth. For bandwidth as a computing term, less ambiguous terms are bit rate, throughput, maximum throughput, goodput or channel capacity.
(Source)
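A small Python sketch of the Hartley relation mentioned in the quote; the 4 kHz channel and 16 signal levels are assumed example figures:

```python
import math

# Hartley's law: a channel of analog bandwidth B hertz, carrying symbols
# with M distinguishable levels, supports at most 2*B*log2(M) bits/s.
def hartley_bit_rate(bandwidth_hz: float, levels: int) -> float:
    return 2 * bandwidth_hz * math.log2(levels)

# Hypothetical 4 kHz voice channel with 16 distinguishable levels:
print(hartley_bit_rate(4_000, 16))  # 32000.0 bits per second
```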
Bandwidth is only one aspect of network speed. Delay is also important.
The term "bandwidth" is not a precise term, it may mean:
the clock frequency multiplied by the no-of-bits-transmitted-in-a-clock-tick - physical bandwidth,
minus bytes used for low-level error corrections, checksums (e.g. FEC in DVB),
minus bytes used by transmit protocol for addressing or other meta info (e.g. IP headers),
minus the time overhead of the handshake/transmit control (see TCP),
minus the time overhead of the administration of connection (e.g. DNS),
minus time spent on authentication (seeking user name on the host side),
minus time spent on receiving and handling the packet (e.g. an FTP server/client writes out the block of data received) - effective bandwidth, or throughput.
The best we can do is always explain what kind of bandwidth we mean: with or without protocol overhead, etc. Also, users are often interested only in the bottom-line value: how long does it take to download that stuff?
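As an illustration of "physical bandwidth minus overhead", a sketch of the classic Ethernet/IP/TCP header arithmetic for full-size frames (no retransmissions, handshakes, or higher-level overhead accounted for):

```python
# Per-frame overhead for a full-size TCP segment over Ethernet.
PREAMBLE_AND_GAP = 8 + 12   # preamble/SFD + inter-frame gap (bytes)
ETH_HEADER_FCS = 14 + 4     # Ethernet header + frame check sequence
IP_HEADER = 20
TCP_HEADER = 20
MTU = 1500

payload = MTU - IP_HEADER - TCP_HEADER             # 1460 bytes of data
on_wire = MTU + ETH_HEADER_FCS + PREAMBLE_AND_GAP  # 1538 bytes on the wire

efficiency = payload / on_wire
print(f"{efficiency:.1%} of the physical rate")          # ~94.9%
print(f"~{1000 * efficiency:.0f} Mb/s goodput on GigE")  # ~949 Mb/s
```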
