Why are ping times so long? [closed] - networking

Assuming the speed of light ~ 186000 mi/sec, and the farthest from anywhere on earth you can be without leaving earth ~ 16,000 mi, that means the time it takes for light to reach any point on earth and come back <= ~172 msec. So why can ping times exceed this?

A few reasons:
Your assumption about speed is wrong: electrical signals in a wire travel at roughly 2/3 the speed of light.
You are not traveling in a straight line from Point A to Point B, so the path could be longer.
Your assumption about leaving Earth is wrong: satellite links are often used for intercontinental network links.
(The biggest culprit.) The packet must pass through many computers (run the program tracert and you can see them), and a computer does not forward a packet instantaneously from the time it receives it to the time it sends it on to the next hop. If the computer doing the forwarding is under a very heavy load, the packet may sit in a queue for a while waiting to be processed.
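As a rough sanity check on the first point, here is a back-of-the-envelope lower bound on RTT using the ~2/3-of-c propagation speed. The 20,000 km half-circumference path length is an illustrative assumption, not a measured route:

```python
# Rough lower bound on round-trip time, assuming signals in fiber/copper
# propagate at ~2/3 the speed of light in vacuum.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
MEDIUM_FACTOR = 2 / 3        # typical propagation factor in the medium

def min_rtt_ms(path_km: float) -> float:
    """Lower bound on RTT (ms) over a one-way path of path_km kilometres."""
    speed = C_VACUUM_KM_S * MEDIUM_FACTOR     # km/s in the medium
    return 2 * path_km / speed * 1000         # out and back, in milliseconds

# Half of Earth's circumference (~20,000 km) is the longest great-circle path.
print(round(min_rtt_ms(20_000), 1))  # ≈ 200 ms before any queueing delay
```

So even before a single router queues the packet, the propagation floor is higher than the naive vacuum-speed estimate suggests.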

That is a completely wrong comparison, for a few reasons:
The ping travels as electrical signals in copper or light pulses in fiber, both of which propagate well below the vacuum speed of light, so you can't use 186,000 mi/sec as the signal speed.
The servers your ping request hops through do not process it in zero time. It actually takes time to process the ping packet and send it where it's supposed to go.
Your link across the internet is not a direct link. You may have to consult a DNS server (if you ran ping with a hostname rather than an IP) and pass through many routers and different types of links (satellite, copper, fiber optic). So it is not like a beam going straight from one side of the planet to the other.

No means of detecting collision at the application layer? Hmm [closed]

Given the degree of uselessness one has come to expect from both tutorials
https://inet.omnetpp.org/docs/tutorials/wireless/doc/step5.html
and manual pages:
https://doc.omnetpp.org/omnetpp/manual/#sec:ned-lang:warmup:network
how can collision be modelled at the application layer?
You did not find a tutorial on how collisions can be modelled at the application layer simply because collisions do not occur at the application layer.
Generally, a collision may occur when some medium (or layer) cannot be accessed simultaneously by many elements. However, there is no such limitation for the application layer. An application may send a packet at any time; that packet will be processed by the transport layer (TCP or UDP) and then handed to the network layer. The network layer has a buffer, so even when two or more applications send packets at the same time, no conflict occurs.
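That queueing behavior can be sketched in a few lines. This is a toy illustration, not the OMNeT++/INET API; the names `app_send` and `send_buffer` are made up:

```python
from collections import deque

# Toy sketch: packets from several applications are queued at the lower
# layer and transmitted one at a time, so "simultaneous" sends never collide.
send_buffer = deque()

def app_send(app_name: str, payload: str) -> None:
    """An application hands a packet down; enqueueing never blocks."""
    send_buffer.append((app_name, payload))

# Two applications "send at the same time":
app_send("appA", "hello")
app_send("appB", "world")

# The lower layer drains the buffer serially, one packet after another:
transmitted = [send_buffer.popleft() for _ in range(len(send_buffer))]
print(transmitted)  # [('appA', 'hello'), ('appB', 'world')]
```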
Regarding the details presented in your question:
How can hostSink check whether hostA or hostB are still sending packets [originally: signals]? Answer: hostSink cannot determine whether hostA is still sending packets. The simulation reflects the behavior of a real network, and in a real network a host does not know whether another host is still sending packets.
How does time "pass" in a simulation? Answer: OMNeT++ is a discrete event simulator, and according to the Simulation Manual:
A discrete event system is a system where state changes (events) happen at discrete instances in time, and events take zero time to happen.
This means that the simulation internally maintains a variable called currentSimtime. At the beginning, currentSimtime = 0. When the first event (for example, sending an ARP packet) is scheduled at, say, t = 0.003 s, currentSimtime is set to 0.003 s and the ARP send is executed.
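Here is a minimal sketch of such an event loop in plain Python, not the OMNeT++ API; the event times and descriptions are invented for illustration:

```python
import heapq

# Minimal discrete-event loop: simulated time jumps straight to the
# timestamp of the next scheduled event; events themselves take zero time.
events = []  # min-heap of (time, description)
heapq.heappush(events, (0.003, "send ARP request"))
heapq.heappush(events, (0.010, "receive ARP reply"))

current_simtime = 0.0
log = []
while events:
    current_simtime, what = heapq.heappop(events)  # clock advances in jumps
    log.append((current_simtime, what))

print(log)  # [(0.003, 'send ARP request'), (0.01, 'receive ARP reply')]
```

Note that no wall-clock time passes between events: the clock teleports from 0.003 to 0.010.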

Why traceroute's three packets with same TTL always go to same router? [closed]

I was studying traceroute in the book "Computer Networking: A Top-Down Approach" recently, and a few questions struck me. The book says, and I quote:
Trace-route actually repeats the experiment just described three
times, so the source actually sends 3 • N packets to the destination.
My question is: if the source sends 3 packets with the same TTL value, why do all packets with the same TTL reach the same router every time (by "every time" I mean for all 3 packets with the same TTL value during one single traceroute execution)? I mean, why does it not happen that a packet with TTL=n goes to one router n hops from the source, another packet with the same TTL goes to a different router n hops from the source, and so on? Due to different congestion at different times, it seems very likely that two packets to the same destination may take different routes. Why does that not happen with traceroute's 3 packets with the same TTL? And if it does happen, how come only a single router is shown for each TTL value?
Yet another question
RFC1393 says:
The purpose behind this is to record the source of each ICMP TTL
exceeded message to provide a trace of the path the packet took to
reach the destination.
Let's say that for TTL=3 a packet took the path of routers A-B-C, and, due to different congestion, the packet with TTL=4 took the path A-X-Y-D. What can we conclude about the trace here?
Or am I missing something more obvious here?
Your premise is wrong. For instance, if there is load balancing in the path, you don't always get the same router at the same number of hops. There was a recent question in Network Engineering about this.
If you have a stable, well-designed network, and no load balancing, you should get the same router at the same number of hops. This is because the routing tables should be stable and deterministic. Each router will have a best route toward the destination in its routing table, and, absent network instability (or something else like load balancing), the routing tables will not change. The packets are independently routed by each router, and should follow the same path.
In fact, random or varying hops on the path may be a real concern since it can point to serious network problems. If there is churn in one or more of the routing tables, the source of the problem needs to be identified and corrected.
You keep assuming that a router will route around congestion. That is almost never true. Routers typically manage congestion with shaping, queuing, and/or policing, not by routing around it. The whole reason for QoS is to deal with network congestion and to try to impose fairness and order on which traffic gets dropped and which must wait.
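The deterministic "best route" lookup described above can be illustrated with a toy longest-prefix-match table. The prefixes and next-hop names here are hypothetical, and real routers use far faster data structures than a linear scan:

```python
import ipaddress

# Sketch of deterministic forwarding: each router picks the single best
# (longest-prefix) matching route, so identical packets take the same path.
ROUTING_TABLE = {                   # hypothetical routes: prefix -> next hop
    "10.0.0.0/8": "routerA",
    "10.1.0.0/16": "routerB",
    "0.0.0.0/0": "default-gw",
}

def next_hop(dst: str) -> str:
    """Return the next hop for dst via longest-prefix match."""
    nets = [(ipaddress.ip_network(p), hop) for p, hop in ROUTING_TABLE.items()]
    matches = [(net.prefixlen, hop) for net, hop in nets
               if ipaddress.ip_address(dst) in net]
    return max(matches)[1]          # longest prefix wins, deterministically

print(next_hop("10.1.2.3"))   # routerB (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # default-gw
```

As long as the table does not change, every packet to the same destination gets the same answer, which is why stable networks show the same router at each hop count.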
TTL refers to the maximum number of hops that a packet can travel before it is discarded. Therefore, a TTL of 5 does not mean that the packet needs to travel 5 hops every time it is sent.
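A toy model of the stable case, where every probe with the same TTL expires at the same hop (the router names and path are invented for illustration):

```python
# Toy model of traceroute over a stable path: with fixed routing tables,
# every probe with TTL = n expires at the same n-th router.
PATH = ["R1", "R2", "R3", "destination"]   # hypothetical, fixed route

def probe(ttl: int) -> str:
    """Return the hop that answers a probe sent with the given TTL."""
    hops = min(ttl, len(PATH))             # packet is dropped after ttl hops
    return PATH[hops - 1]

for ttl in range(1, 4):
    replies = [probe(ttl) for _ in range(3)]   # three probes per TTL
    assert len(set(replies)) == 1              # all three hit the same router
    print(ttl, replies[0])
```

With load balancing, `probe` would no longer be a pure function of the TTL, and the three replies per line could differ, which is exactly what real traceroute output then shows.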

Calculating Real Network Round-trip [closed]

I would like to understand network latency a bit better.
Let's say there's one Client and two Servers. The Client sends 1000 bytes to each of the Servers, and each Server responds instantly with 1000 bytes.
Ping round trip times from Client:
To Server 1 - 2ms
To Server 2 - 20ms
Assume both Client and Servers are connected to quality 1 Gbps pipe (but not via dedicated line between them).
Question: how do I calculate the real time from when the Client starts sending its 1000 bytes to when it fully receives the last byte of the response data? Will it be something close to 2 ms for Server 1 and 20 ms for Server 2?
Yes, that's exactly right!
The ping round-trip delay measures how long it takes a small packet of data to travel from one host on the network to another, and back to the original host.
You should keep in mind that the numbers you get fluctuate a bit based on network conditions and load on the processors of the hosts. You should average the round-trip delay over a few samples but be prepared that any other packet may experience an unusual delay for a variety of reasons.
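To see why the answer is "yes", compare the serialization delay of 1000 bytes on a 1 Gbps link with the measured ping RTTs. This is a back-of-the-envelope sketch that ignores processing and queueing delays:

```python
# Transmission (serialization) delay of 1000 bytes on a 1 Gbps link.
LINK_BPS = 1_000_000_000    # 1 Gbps
PAYLOAD_BITS = 1000 * 8

tx_delay_ms = PAYLOAD_BITS / LINK_BPS * 1000  # time to clock the bytes out
print(tx_delay_ms)  # 0.008 ms -- negligible next to a 2 ms or 20 ms ping

def approx_round_trip_ms(ping_rtt_ms: float) -> float:
    """Ping RTT plus serialization of the request and the response."""
    return ping_rtt_ms + 2 * tx_delay_ms

print(approx_round_trip_ms(2.0))    # ~2.016 ms for Server 1
print(approx_round_trip_ms(20.0))   # ~20.016 ms for Server 2
```

At 1 Gbps the payload adds only microseconds, so the ping RTT dominates; on a slow link (say 1 Mbps, where 1000 bytes take 8 ms to serialize) the same calculation would give a very different answer.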

Choosing a Server For my game [closed]

I'm about to release an MMORPG. In my game, every second each player sends 30 TCP messages and gets 30 back from the server. The messages are not really long, around 20 chars each.
The point is that I've never worked on multiplayer games before. I have programmed the whole server and client, but I don't know what server I'm going to need in terms of RAM, CPU, etc. I still don't know what to be ready for, but let's say 15K concurrent clients. As said, every second every client needs to send and receive 30 TCP messages, and in most cases I also need to update my non-SQL DB with the data.
Update: it's a multiplayer game, so I must have 30 msgs/sec. Most of the messages carry the player's current position. I'm also using C++.
It depends on what your (already implemented) server requires. You'll never know until you try some particular hardware. Rent a powerful dedicated server for a month and profile your game server; at the very least, check CPU usage. You'll need multithreaded asynchronous networking.
The details you provided only help to calculate how much bandwidth you need:
~94 bytes (TCP + IP + Ethernet headers) + ~20 bytes (your data) = 114 bytes per packet × 30 per second × 15,000 users ≈ 50 MB/s × 8 bits ≈ 400 Mbps of both incoming and outgoing traffic. It seems you're in trouble here. Consider something smarter than sending every message in a separate TCP segment. E.g., implement a buffer that collects data ready to be sent, filled by your game logic threads, with a separate thread that writes it to the network. That way several small messages can be combined into one TCP packet, greatly reducing your bandwidth requirements.
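The arithmetic above, plus a minimal sketch of the batching buffer, shown in Python for brevity even though the asker uses C++. The `SendBuffer` class is an illustrative assumption, not a drop-in implementation; real code would also need length-prefix framing so the receiver can split the batch back into messages:

```python
# Per-client bandwidth with and without batching, using the answer's numbers.
HEADER_OVERHEAD = 94          # ~TCP + IP + Ethernet headers per packet
MSG_SIZE = 20                 # bytes of game data per message
MSGS_PER_SEC = 30

unbatched = (HEADER_OVERHEAD + MSG_SIZE) * MSGS_PER_SEC  # one packet per message
batched = HEADER_OVERHEAD + MSG_SIZE * MSGS_PER_SEC      # one packet per second
print(unbatched, batched)   # 3420 vs 694 bytes per client per second

class SendBuffer:
    """Collects outgoing messages; flush() returns one combined payload."""
    def __init__(self) -> None:
        self.pending: list[bytes] = []
    def queue(self, msg: bytes) -> None:
        self.pending.append(msg)          # called by game logic threads
    def flush(self) -> bytes:
        out = b"".join(self.pending)      # handed to a single socket send()
        self.pending.clear()
        return out

buf = SendBuffer()
buf.queue(b"pos:10,20")
buf.queue(b"pos:11,20")
print(buf.flush())   # b'pos:10,20pos:11,20'
```

Batching cuts the per-client overhead by roughly 5x in this model, at the cost of up to one second of added latency, so a real game would flush more often than once per second.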
But even after this you may still be in trouble. I'd recommend waiting for users before investing in a complicated solution. Once you have them, you'll need to implement some kind of clustering; that is a separate story, much more complicated than simple networking.
Try to handle at least 1K users with a single server first. That can bring you some money to hire somebody experienced in game networking/clustering.
If you know you're sending 30 messages every second, why not bundle them into one request per second? It makes a lot of difference in terms of server resources.
And in which language are you going to run your server? I hope you've written something dedicated to process/manage these connections. If so, do some profiling and just measure what you need.
And what is your processor doing every second to process those 30 × 15K messages?
There is no generic answer to your question. It all depends on your software.

Why do some switches have uplink ports? [closed]

So, this appears, on the surface, to be a network admin (serverfault) question, but I'm looking for a lower level answer from a network hacker type.
I was pretty much oblivious to how networks actually work in real life until I started my summer internship. Then, having no other option (the internship is at a pretty networking-centric place, and I have to put together testbeds for testing, among other things, networks), I became familiar with them. For one thing, the fact that there's no "this goes out to the internet!" port on commercial switches was kind of surprising, until you reason about how it works (it starts out like a hub until it 'learns' where IPs are in terms of physical ports, I guess?).
And after this home-crafted self-discovery (or possibly an error in thinking), I'm back at the extended-stay hotel looking at my cheap little home switch, and it has an uplink port.
Now my question to you, network hackers (in the good way), is: why?
The "uplink" port on your SOHO switch is internally crossed over. It relieves you of having to use a crossover cable to connect two switches. That is the only difference.
BTW: There isn't a "this goes out to the internet" port on SOHO switches either. You're confusing switches and routers/gateways. This confusion may be encouraged by manufacturers putting the two logically separate devices in one piece of hardware, e.g., a router with a 4-port switch. While we're at it, a wireless router w/ 4 port switch is actually logically three separate devices (router, switch, and access point).
BTW #2: A switch (well, except for layer 3 switches, which arguably are only switches to the marketing department) actually learns where MAC addresses are. It neither knows about nor cares about IP.
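A toy model of that MAC learning behavior. The port numbers and MAC addresses are invented, and real switches also age entries out of the table:

```python
# Toy MAC-learning switch: it associates source MAC addresses with the
# port a frame arrived on, and floods frames whose destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports: int) -> None:
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}   # MAC address -> port

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port     # learn where src lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # forward out one known port
        # Unknown destination: flood out every port except the ingress one.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive("aa:aa", "bb:bb", 0))   # flood: [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", 2))   # learned aa:aa on port 0 -> [0]
```

Nothing in the model ever looks at an IP address, which is the point: the "starts like a hub" behavior is just the flooding branch before the table fills in.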
Uplink ports can be thought of as special ports for inter-switch connections. Sometimes they have a higher speed (1 Gbps instead of 100 Mbps, for example), or they are interchangeable (laid out as modules).
Some switches have multiple uplink ports (I had one with two), so you can get redundancy, or have multiple switches connected this way with the same logic (which MAC address lives behind which other switch?).
