How to connect people who are many miles away - graph

How should we connect to people who are many miles away from us,
by using social distance and mutual connections?

Related

How does the peer-to-peer model work in online games?

As mentioned in the title above: I know how the client/server model actually works, with game state sent and received between the server and its clients (easy to understand). But in a P2P architecture, where each "peer" controls its own game state by sending and receiving a simulation of player inputs, how can the peers stay in sync once network latency is taken into account? Say there were 4 players acting simultaneously in the same world area: will each peer receive the same actions with the same network latency as the laggy player, or is each peer's latency independent, assuming they live far away from each other?
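One common answer to this question is deterministic lockstep: each peer schedules its local input for a few frames in the future, broadcasts it, and only advances the simulation once it holds every peer's input for the current frame, so all peers apply identical inputs in identical order. The practical consequence is that the laggiest peer sets the pace for everyone. A minimal sketch (class names, the input-delay value, and the message format are all illustrative assumptions, not any specific engine's API):

```python
# Sketch of deterministic lockstep, a common P2P synchronization model.
INPUT_DELAY = 3  # frames of scheduling delay used to hide network latency


class LockstepPeer:
    def __init__(self, peer_ids):
        self.peer_ids = peer_ids
        self.frame = 0
        # inputs[frame][peer_id] -> that peer's input for that frame
        self.inputs = {}

    def record_input(self, peer_id, frame, action):
        """Store an input, whether produced locally or received from the network."""
        self.inputs.setdefault(frame, {})[peer_id] = action

    def local_input(self, my_id, action):
        """Schedule our own input for a future frame; return the message to broadcast."""
        target = self.frame + INPUT_DELAY
        self.record_input(my_id, target, action)
        return (my_id, target, action)

    def try_advance(self):
        """Advance one frame only if every peer's input for it has arrived."""
        arrived = self.inputs.get(self.frame, {})
        if all(p in arrived for p in self.peer_ids):
            self.frame += 1  # the deterministic simulation step would run here
            return True
        return False  # still waiting on the laggy peer
```

If a peer's input for the current frame hasn't arrived, `try_advance` returns `False` and everyone stalls, which is exactly the "waiting for player" hitch lockstep games show when one connection lags.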

Improving EC2 ping times from home

I've been trying to run a gaming machine in EC2, following the excellent blog post by Larry Land here. The problem I have is latency from my home to my nearest AWS region: I get a ping of around 35 ms, and I'm looking to improve on that. Is there anything I can do? I'm using Steam In-Home Streaming over a Hamachi VPN, on Windows Server 2012.
My internet connection is roughly 120 Mbps down and 35 Mbps up, and sadly there's nothing I can do to improve on that.
In some cases the nearest region geographically isn't the one with the lowest latency. This is due to routing agreements that sometimes result in non-optimal routes.
A common example is Eastern Australia and Singapore: routes often go via the US and/or Japan before finally coming back to Singapore.
Besides this, you should not be using Wi-Fi on your local network. Depending on how noisy the environment is, Wi-Fi can cause dropped packets that need to be retransmitted, increasing overall latency.
Routers can have an effect on this too, but unless yours is heavily loaded, it's probably not adding much latency.
You may want to do some research with traceroute to see how each data center performs and where the slow spots are.
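Alongside traceroute, a quick way to compare regions is to time a TCP handshake against each region's public endpoint and pick the lowest. A rough sketch (not official AWS tooling; the `ec2.<region>.amazonaws.com` hostname pattern is the public EC2 service endpoint scheme, and the helper names are my own):

```python
import socket
import time


def tcp_connect_ms(host, port=443, timeout=3.0):
    """Time one TCP handshake to host:port, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0


def best_region(latency_by_region):
    """Given {region: latency_ms}, return the region with the lowest latency."""
    return min(latency_by_region, key=latency_by_region.get)


# Example usage (hits the network, so run it from your home connection):
#   probes = {r: tcp_connect_ms(f"ec2.{r}.amazonaws.com")
#             for r in ("us-east-1", "us-west-2", "eu-west-1")}
#   print(best_region(probes))
```

A handshake time is one full round trip, so it's directly comparable to a ping, with the advantage that it measures TCP traffic rather than ICMP, which some routes treat differently.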

Trying to trace the time it takes for my website hosted in New Zealand to be accessed from the US

I live in New Zealand and I just started a website, which I also have hosted in New Zealand, at novazeal.com and novazeal.co.nz. I am hoping to target clients overseas as well as in New Zealand, so I am trying to decide whether to start a second website hosted in the US and point the .com domain at that website instead.
I have heard from friends in the States that a site I had hosted here in New Zealand was slow to access, so what I really need is the time it takes for a traceroute to hop through a location in the US. A normal tracert from my computer here will hop through servers in NZ only, so I can't get the measurement I am looking for that way. Does anyone know of an alternative I could use, such as an application that forces hops through a distant ISP, or a proxy service that reports the time it takes to retrieve a page from a distant location?
Of course if anyone in the States is willing to run the trace for me and send me the hop time stats I would be most grateful. I could ask the friends I mentioned, but they are not particularly technical, so it would probably be a confusing thing to try to explain to them by email.
There are web-based route tracing utilities, some of which are hosted in the US, that will show you the route and latency between that service and your site (at a point in time).
However, traceroute doesn't give you a full picture of the network latency effects to which you're subject: these days routers do all sorts of sophisticated traffic shaping and traceroute probes just won't be treated the same as your HTTP traffic even if you specify that probes should use TCP and port 80.
Not to mention that network latency itself is just a tiny piece of the puzzle. ISPs perform all sorts of caching using (sometimes transparent) HTTP proxies, from which you won't benefit unless/until your site is visited by their customers.
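Since what you actually care about is page retrieval time, it's worth measuring that directly from a remote vantage point rather than inferring it from hop times. A minimal sketch using only the standard library (the function name is mine; run it from a US machine against your site):

```python
import time
import urllib.request


def fetch_time_ms(url, timeout=10.0):
    """Return (time to first byte, total download time) in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)   # first byte of the body arrives: that's the TTFB
        ttfb = (time.monotonic() - start) * 1000.0
        resp.read()    # drain the rest of the body
    total = (time.monotonic() - start) * 1000.0
    return ttfb, total
```

Time to first byte roughly reflects DNS, connection setup, and server response latency, while the total includes the transfer itself; comparing the two from NZ and from the US would tell you how much of the slowness is distance versus page weight.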

comet server and network latency time

For a social site we use a node.js based comet server as the instant messenger. Everything is working great; we have only one problem: how to solve the latency to Australia and New Zealand, where we see RTTs between 310 ms and 440 ms.
One idea is to have local servers, but in that case they must connect to the main server so that a user in Australia is able to communicate with one in the UK. This comet-to-comet connection will have higher latency too, but local users can chat fast, which will be the common case.
Does anyone have a better idea than using local comet servers?
If your latency is due to geographical distance, there is no way to shorten it. The only thing you can do is try to find upstream network providers who have more "straight" cables. But you can never achieve latency shorter than the direct air distance between those two countries/servers allows.
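A back-of-the-envelope check of that physical floor: light in fibre travels at roughly two-thirds of c, about 200 km/ms, so the great-circle distance sets a hard lower bound on RTT. The distance figure below is an approximate assumption for illustration:

```python
SPEED_IN_FIBRE_KM_PER_MS = 200.0  # roughly 2/3 of the speed of light


def min_rtt_ms(distance_km):
    """Theoretical minimum round-trip time over a perfectly straight fibre path."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS


# London to Sydney is roughly 17,000 km as the crow flies, so even a
# perfectly straight cable could not beat ~170 ms RTT:
print(f"{min_rtt_ms(17000):.0f} ms")  # → 170 ms
```

Against that ~170 ms floor, the observed 310-440 ms is only a factor of two away; better routing can close some of that gap, but never all of it.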
If you have users in Australia communicating with each other, then yes, connecting to a local server will make a difference for them. But for communication between one user in the UK and one in AU, a local server won't matter.
In any case, for an instant messenger latency is not so important, IMHO. The recipient does not know the moment the sender finished the message and hit the send button, so they can't perceive the latency. And a human cannot send multiple messages per second, so I don't think the difference between 400 ms and 10 ms of latency would be noticeable. If it were over 1 second, it could be visible.
So, to summarize: I would only bother with local servers once there are enough local users communicating among themselves.
(Please let me know if some of my assumptions about your setup were incorrect.)

Minimize network time to exchange messages with distant server

The situation I seek help with is this: A business on the east coast of the US, at random intervals, posts messages via the public internet to a set of listening subcontractors who subscribe to these messages. Each message announces availability of a unit of work to be subcontracted. The first subscriber who responds with an acceptance message indicating it has immediate capacity to perform the work is then awarded that work.
One subcontractor is located in the US midwest. Another is on the US west coast. Due to the slightly longer time it takes for the messages to reach the west coast subcontractor via the internet, and for its responses to get back to the east coast, the west coast subcontractor's attempts to accept an offered unit of work are often too late (i.e. the nearer subcontractor has already signaled acceptance and been awarded the work), even though the west coast subcontractor also has capacity to do the work.
I'm looking for the best way to improve transit time to overcome the distance disadvantage for the west coast subcontractor (connected to the internet via a T1 line). Any suggestions? (If this is the wrong forum for this question, suggestions for a better one would be welcomed.)
You will not be happy with the answer.
There is no actual way to improve the speed of your packets over the internet. If the routers along the path are not under your control, there is just no way to reliably get more speed. The internet is based on best effort, which means no router guarantees that your packets arrive at all, let alone when or in which order. This is why TCP was invented. If you send two packets, there is a good chance they will take two different routes to the destination. There is just no way to tell the routers between you and the remote end to handle your packets with priority, or faster. There are some protocols that would theoretically speed up transmission, but most of those headers are stripped along the way (in most cases, after the last router under your control). There is QoS (Quality of Service) and the TCP Urgent flag, but none of these really guarantee anything. You can try setting these headers and using these protocols, but there is just no way to ensure that your packets get prioritized.
I know that this is not satisfying in any way, but think about it the other way around: if packets actually were prioritized and handled faster based on their flags, everyone would just set them, and everything would be as fast as it is now. You can try, but I can tell you that most hops just blatantly ignore the flags.
Honestly, the only way to get there faster is to get a server physically close and reduce the network hops in between. If you can get a server in the same data center, great; on the same street, good; in the same city, OK; in the same country, also OK. There is not really any other way. The closer you can get, physically and network-wise, the better.
