How to measure the response time of a network? - networking

We can use the ping command to measure the response time of a network. But is that the pure response time of the network, or does it include processing time?
Kindly help to resolve this issue.

"The network" wouldn't exist without processing time being spent. Nothing happens for free, routing, TTL handling, and upholding all the protocols of course requires processing time on all nodes touched by a given path through the network.
And in the case of 'ping', then yes there is some processing required by the target machine's IP stack to detect the incoming request, and create and send the proper response. But that time is probably more or less constant (assuming constant background load), and often very small when compared to the pure transmission delays.
You can test this out by pinging localhost, then hosts on a local network, and comparing the differences in response times, assuming your ping implementation reports them with high enough precision.
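A quick sketch of that comparison in Python, assuming a Linux-style iputils ping; the LAN address is a placeholder:

```python
import subprocess

# Ping localhost, then a LAN host (placeholder address), and compare the
# kernel-reported round-trip times from the ping summary line.
for host in ("127.0.0.1", "192.168.1.1"):
    out = subprocess.run(["ping", "-c", "5", host],
                         capture_output=True, text=True).stdout
    # The last line of iputils ping output is the "rtt min/avg/max/mdev = ..." summary.
    summary = out.strip().splitlines()[-1] if out.strip() else "no reply"
    print(f"{host}: {summary}")
```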

That does indeed include processing time. Otherwise you'd be pretty close to the speed of light.
If your question is whether the processing time on the pinged host is included too: yes, it is, but it should be only a small amount.

ping is so simple that its processing time is very low compared to the network time (unless you are testing a LAN with 1 Gb Ethernet). In any case, processing time should be taken into account, because actual network traffic also needs time to be processed.

Related

Network Time Protocol vs Real Time Clock

After doing some research related to the Network Time Protocol (NTP), I was wondering what a real use of it could be. From my knowledge, almost all devices have a real time clock that keeps running even when the machine is shut down. Then, when the machine boots up, the software clock takes its value from this hardware clock (the real time clock).
What is the point of taking the time from a server and thereby exposing the machine to some kind of time attack? Is the goal to keep a set of hosts strictly synchronized and prevent their times from drifting too far apart (and how much, in reality, is "too far")? Also: if a host is configured to use NTP, is its software clock still initialized from the real time clock and then simply corrected according to the received NTP packets, or not?

How do I measure ping?

I get how to calculate ping - current time minus the time stamp of the packet - but how do I create a time stamp in the first place? What synchronized concept of time can I use? Note: I use .NET 2.0.
It could be as simple as this: when you issue your ping request (I will explain this in more detail in a moment), you make a note of the current time, and then, when the server/client responds with a pong, you note the time again. Subtracting the ping time from the pong time gives you the amount of time the communication took to travel between the two applications.
Wikipedia describes ping in the following way:
In multiplayer online video games, MMOs, MMORPGs, MMOFPSs and FPSs ping (not to be confused with frames per second) refers to the network latency between a player's computer (client), and either the game server or another client (i.e. peer). This could be reported quantitatively as an average time in milliseconds... Rather than using the traditional ICMP echo request and reply packets to determine ping times, game programmers often instead build their own latency detection into existing game packets
What I like to do, is when I make a client and a server, I always write in a simple 'ping/pong' command. In short, a ping request is made by one application, when the other application receives it, and sends back a pong confirmation command. This is good for debugging, but for actual development and depending on the game, I usually piggy back this with a heart beat to make sure everything is running as it should. Hope that helps!
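A minimal sketch of that ping/pong timing, here in Python (the original question targets .NET 2.0, but the same stopwatch approach carries over). The server address is a placeholder and is assumed to answer every PING with a PONG. Because both timestamps are taken on the same machine, no synchronized clock between the hosts is needed.

```python
import socket, time

SERVER = ("127.0.0.1", 5005)   # placeholder address of the pong-replying peer

def measure_ping(sock):
    start = time.perf_counter()          # note the time the ping goes out
    sock.sendto(b"PING", SERVER)
    sock.recvfrom(64)                    # wait for the PONG to come back
    return (time.perf_counter() - start) * 1000.0   # round trip in milliseconds

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
print(f"ping: {measure_ping(sock):.1f} ms")
```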

Lossy network versus Congested network

Suppose there is a network that gives a lot of timeout errors when packets are transmitted over it. Timeouts can happen either because the network itself is inherently lossy (say, poor hardware) or because the network is highly congested, in which case network devices drop packets along the way, leading to timeouts. What additional statistics about the traffic being transmitted (like missing-packet errors, etc.) might help us find out whether the timeouts are happening due to poor hardware or due to too much network load?
Please note that we have access only to one node in the network (the one from which we are transmitting packets), so we cannot know the load being put on the network by other nodes. Similarly, we don't really have any information about the hardware being used in the network. Statistics are all we have.
A network node only has hardware information about its local collision domain, which on a standard network will be the cable that links the host to the switch.
All the TCP stack will know about lost packets is that it is not receiving acknowledgements and so needs to resend; there is no mechanism for the devices (e.g. switches and routers) between a source and destination to tell the source that there is a problem.
Without access to any other nodes, the only way to ascertain whether your problem is load-based is to run a test that sends consistent traffic over the network for a long period. If the packet retry count per second/minute/hour remains the same, that suggests a hardware issue; if the losses only occur during peak traffic periods, the issue could be load-related. Of course, misconfigured hardware might only show problems during high-traffic periods, which brings things back to the main problem: you need access to network stats from beyond your single node.
In practice, nearly all loss on terrestrial network paths is due to either congestion or firewalls. Loss due to bit-errors is extremely rare. Even on wireless networks, forward error correction handles most bit/media/transmission errors. Congestion can be caused by a lot of different factors: any given network path will involve dozens of devices and if any one of them becomes overloaded for even a moment, packets will be dropped.
The only way to tell the difference between congestion-induced packet loss and media errors is that media errors occur independently of load. In other words, the loss rate will be the same whether you are sending a lot of data or only a little data.
To test that, you will need some control, or at least knowledge, of the load on the path. Since you don't have control and the only knowledge you have is from source-node observation, the best you can do is to take test samples (using ping is the easiest) around the clock and throughout the week, recording loss rates and latencies. These should give you an idea of when the path is relatively idle. If loss rates remain significant even when the path is (probably) idle, then there might be a media-loss issue. But again, that is extremely rare.
For background, I have written a few articles on the subject:
Loss, Latency, and Speed, discussing what statistics you can observe about a path and what they mean.
Common Network Performance Problems, discussing the most common components in a network path and how they affect performance (congestion).
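For illustration, here is a rough Python sketch of the around-the-clock sampling described above: it periodically runs ping against a placeholder target, pulls the loss rate and average latency out of the iputils summary line, and appends them to a CSV so busy and idle hours can be compared later.

```python
import csv, re, subprocess, time
from datetime import datetime

TARGET = "192.0.2.10"     # placeholder target address
INTERVAL = 300            # seconds between samples
COUNT = 20                # probes per sample

with open("path_stats.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        out = subprocess.run(["ping", "-c", str(COUNT), TARGET],
                             capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        avg = re.search(r"= [\d.]+/([\d.]+)/", out)   # avg field of the rtt summary
        writer.writerow([datetime.now().isoformat(),
                         loss.group(1) if loss else "",
                         avg.group(1) if avg else ""])
        f.flush()
        time.sleep(INTERVAL)
```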

Determine asymmetric latencies in a network

Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them.
Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static.
Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. Getting the round-trip time is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host.
What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency?
As a related question - is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? I'm certainly aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.
Added: After considering the comment below, the second portion of the question is probably better off on serverfault.
To the best of my knowledge, asymmetric latencies -- especially "last mile" asymmetries -- cannot be automatically determined, because any network time synchronization protocol is equally affected by the same asymmetry, so you don't have a point of reference from which to evaluate the asymmetry.
If each endpoint had, for example, its own GPS clock, then you'd have a reference point to work from.
In Fast Measurement of LogP Parameters for Message Passing Platforms, the authors note that latency measurement requires clock synchronization external to the system being measured. (Boldface emphasis mine, italics in original text.)
Asymmetric latency can only be measured by sending a message with a timestamp ts, and letting the receiver derive the latency from tr - ts, where tr is the receive time. This requires clock synchronization between sender and receiver. Without external clock synchronization (like using GPS receivers or specialized software like the network time protocol, NTP), clocks can only be synchronized up to a granularity of the roundtrip time between two hosts [10], which is useless for measuring network latency.
No network-based algorithm (such as NTP) will eliminate last-mile link issues, though, since every input to the algorithm will itself be uniformly subject to the performance characteristics of the last-mile link and is therefore not "external" in the sense given above. (I'm confident it's possible to construct a proof, but I don't have time to construct one right now.)
There is a project called One-Way Ping (OWAMP) specifically aimed at solving this issue. There has also been activity on the LKML around adding high-resolution timestamps to incoming packets (SO_TIMESTAMP, SO_TIMESTAMPNS, etc.) to assist in calculating this statistic.
http://www.internet2.edu/performance/owamp/
There's even a Java version:
http://www.av.it.pt/jowamp/
Note that packet timestamping really needs hardware support, and many current-generation NICs only offer millisecond resolution, which may be out of sync with the host clock. There are MSDN articles in the DDK about synchronizing host and NIC clocks that demonstrate the potential problems. Nanosecond timestamps from the TSC are problematic due to differences between cores, and may require the Nehalem architecture to work properly at the required resolutions.
http://msdn.microsoft.com/en-us/library/ff552492(v=VS.85).aspx
You can measure asymmetric latency on a link by sending different-sized packets to a port that returns a fixed-size packet - for example, send some UDP packets to a port that replies with an ICMP error message. The ICMP error message is always the same size, but you can adjust the size of the UDP packet you're sending.
see http://www.cs.columbia.edu/techreports/cucs-009-99.pdf
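For illustration, here is a hedged Python sketch of that probe idea: it sends UDP packets of different sizes toward a (presumably closed) high port and times the fixed-size ICMP "port unreachable" replies. The target address and port are placeholders, the raw ICMP socket needs root, and a real tool would match each reply to its probe by inspecting the quoted UDP header inside the ICMP payload.

```python
import socket, time

TARGET = "192.0.2.10"             # placeholder target address
PORT = 33434                      # traceroute-style high port, assumed closed

# Raw socket to receive the ICMP error replies (requires root privileges).
icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                     socket.getprotobyname("icmp"))
icmp.settimeout(2.0)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for size in (64, 512, 1400):
    start = time.perf_counter()
    udp.sendto(b"\x00" * size, (TARGET, PORT))   # variable-sized probe
    try:
        icmp.recvfrom(1500)                      # fixed-size ICMP error reply
        rtt = (time.perf_counter() - start) * 1000.0
        print(f"probe {size:5d} bytes: rtt {rtt:.2f} ms")
    except socket.timeout:
        print(f"probe {size:5d} bytes: no ICMP reply (filtered?)")
```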
In the absence of a synchronized clock, the asymmetry cannot be measured, as proven in the 2011 paper "Fundamental Limits on Synchronizing Clocks Over Networks".
https://www.researchgate.net/publication/224183858_Fundamental_Limits_on_Synchronizing_Clocks_Over_Networks
The sping tool is a new development in this space, which uses clock synchronization against nearby NTP servers, or an even more accurate source in the form of a GNSS box, to estimate asymmetric latencies.
The approach is covered in more detail in this blog post.

How Many Network Connections Can a Computer Support?

When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html
In general, modern servers can handle very large numbers of concurrent connections. I've worked on systems with over 8,000 concurrently open TCP/IP sockets.
You will need a high-quality servicing interface to handle that kind of load; check out libevent or libev.
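Not libevent/libev itself, but an analogous event-driven sketch using Python's standard selectors module: one thread multiplexes many sockets instead of dedicating a thread per connection. The port is a placeholder and the service is a trivial echo.

```python
import selectors, socket

sel = selectors.DefaultSelector()
listener = socket.socket()                      # TCP listening socket
listener.bind(("0.0.0.0", 9000))                # placeholder port
listener.listen(1024)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data=None)

while True:
    for key, _events in sel.select():
        if key.data is None:                    # the listening socket is ready
            conn, _addr = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:                                   # a client socket is readable
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)       # echo it straight back
            else:                               # peer closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()
```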
That is a good question, and it is definitely situational. What is your computer? Do you have a 4-socket machine filled with quad-core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response time is acceptable?
These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit test scenario is that you can put response time expectations in your unit tests and increase the load until you fall outside of your response time. If you have a requirement to support X users with a Y-second response time, you will be able to demonstrate it with your load tests.
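For illustration, here is a minimal load-test sketch along those lines. The host, port, and trivial PING request are placeholder assumptions; substitute whatever request your server actually understands.

```python
import socket, time
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "127.0.0.1", 9000     # placeholder server address
RESPONSE_TIME_LIMIT = 0.5          # the "Y second response" requirement, in seconds

def one_request(_):
    # Time one connect/request/response cycle against the server.
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(b"PING\n")
        s.recv(64)                 # assumes the server sends back some reply
    return time.perf_counter() - start

for users in (10, 100, 1000):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(one_request, range(users)))
    worst = max(times)
    verdict = "OK" if worst <= RESPONSE_TIME_LIMIT else "over limit"
    print(f"{users:5d} users: worst response {worst:.3f}s ({verdict})")
```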
One of the biggest setbacks with highly concurrent connections is actually the routers involved. Home-user-oriented routers usually have a small NAT table, which prevents the router from actually delivering all of the connections to the server.
Be sure to research your router/network infrastructure setup as well.
I think you shouldn't limit the number of connections your server will allow; just catch and properly handle any exceptions that might occur when accepting and closing connections and you should be fine. Leave that kind of lower-level resource management to the underlying OS layers - that way you can port your server more easily, etc.
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high limits such as 32768.
A typical per-user limit is 8192, but it can usually be set higher.
I think Windows is more limiting, but the server editions may have higher limits.
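As a small illustration of those per-process limits, here is a Unix-only Python sketch that inspects the file descriptor limit (which is what caps how many sockets one process can hold open) and raises the soft limit up to the hard limit.

```python
import resource

# Read the current per-process file descriptor limits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"current fd limit: soft={soft}, hard={hard}")

# The soft limit can be raised up to the hard limit without privileges;
# raising the hard limit itself needs root (or ulimit/limits.conf changes).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("soft limit raised to", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```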
