I've been trying to run a gaming machine in EC2, following the excellent blog post by Larry Land here. The problem I have is latency from my home to my nearest AWS region. I get a ping of around 35ms, and I'm looking to improve on that. Is there anything I can do? I'm using Steam In-Home Streaming over a Hamachi VPN, on Windows Server 2012.
My internet connection is roughly 120Mbps down and 35Mbps up, and sadly there's nothing I can do to improve on that.
In some cases the nearest region geographically isn't the one with the lowest latency. This is due to routing agreements that sometimes result in non-optimal routes.
A common example is eastern Australia and Singapore: routes often go via the US and/or Japan before finally coming back to Singapore.
Besides this, you should not be using Wi-Fi on your local network. Depending on how noisy the environment is, it can drop packets that then need to be retransmitted, which increases the overall latency.
Routers can have an effect on this too, but unless yours is heavily loaded it's probably not adding much latency.
You may want to do some research with traceroute to see how each data center performs and where the slow spots are.
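A quick way to compare regions without installing anything is to time a TCP handshake against each region's public endpoint; the handshake costs roughly one round trip. A minimal sketch, assuming the usual ec2.<region>.amazonaws.com endpoint names (the region list is just an example):

```python
# Rough latency comparison across AWS regions by timing TCP handshakes.
# Endpoint names assume the usual ec2.<region>.amazonaws.com pattern.
import socket
import time

REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-2"]

def tcp_rtt_ms(host, port=443, samples=5):
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                pass
        except OSError:
            continue  # skip unreachable samples
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2] if times else None

for region in REGIONS:
    rtt = tcp_rtt_ms(f"ec2.{region}.amazonaws.com")
    label = f"{rtt:.1f} ms" if rtt is not None else "unreachable"
    print(f"{region:<16} {label}")
```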
If Chris and Pat want to exchange a text message, they send and receive via their network providers, which charge them for a connection.
If Chris and Pat are both located in New York City, and there are enough wireless devices between Chris and Pat all close enough to each other to form a continuous chain, is it possible for all those devices to be programmed to cooperatively forward packets amongst each other, bypassing the need for network providers?
It would seem the "address" of each device would have to include current geographic coordinates, and devices would have to report their movements frequently enough so routing attempts could still find them, but the speed and capacity of devices nowadays could handle that, right?
Would such a network be viable? Does it already exist or has it been attempted? Is there some kind of inherent programming problem that is difficult to overcome?
There are a few interesting things here:
Reachability. At a minimum you need a technology that can do ad-hoc, peer-to-peer networking. Of those technologies, only Bluetooth, NFC and Wi-Fi are implemented at all widely, and of those only Wi-Fi currently has the range to reach devices in other houses or on the street; even there, typical ranges are 30-60m (and that's for APs, it may be lower for UEs).
Mobility. ANY short-range wireless protocol has difficulties with fast-moving devices. It's simple math: suppose your coverage is 50m in diameter and you move at about 20km/h, or 5.5m/s; you then have less than 10s to detect, connect and send data while passing through that cell. And that doesn't even cover receiving traffic: you would also have to let all devices know that, for the next 10s, you want to receive data via this particular access network. To give an example, Wi-Fi association with decent authentication (which you need for something like this) alone takes a few seconds. 10s might be doable, but as soon as we talk about cars, trains, ... it becomes almost impossible with current technology. And if you can't connect to those, what are the odds you can bridge a wide boulevard with such limited reachability?
Hop-to-hop delays. You need a lot of hops. We can fairly assume you need a hop every 20-30m, so call it 40 hops/km on average; to send a packet over, say, 5km you'd need 200 hops. Each hop has to take the packet in (L2 processing), route it (L3 processing) and send it out again (L2 processing). Mobile devices are relatively powerful these days, but I wouldn't assume they can do that in the microseconds routers take. On top of that, in a wireless network you have to wait for a transmission slot, which can take on the order of milliseconds (at each hop!). All in all, the odds are high this would be a terribly slow network.
Loss. This depends a bit on the wireless protocol: either it has its own reliable-delivery mechanism (which makes the previous point worse) or it doesn't. In the latter case, suppose each wireless link has about 0.1% loss (99.9% delivery); over the 200 hops considered above that compounds to an 18.1% end-to-end loss rate, since 1 - 0.999^200 ≈ 0.181 (the sketch after this list works through these numbers). That is nearly impossible to work with in day-to-day communications.
Routing. Let's say you need a few million devices and thus routes. Traditional routing at that scale usually takes some very heavy multicore routers with loads of processing power; let's just say mobile devices can't cut that today. A purely geography-based routing mechanism might work, but I can't personally think of any system for this (even a theoretical one) that works today. You would still have to distribute those routes, deal with (VERY) frequent route updates, avoid routing loops, and so on, so even then I'd guess you'd hit the same scaling issues as with, for example, OSPF. All in all, though, I think this is something mobile devices will be able to handle in the not-so-far future; it's just a matter of computing capacity.
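A small sketch, using the same assumed numbers as the points above, that works through the contact window, the hop count and the compounded loss:

```python
# Back-of-envelope numbers from the points above: the contact window for a
# moving device, the hop count over a given distance, and how a small per-hop
# loss compounds over many hops. All inputs are the assumptions stated above.

coverage_m = 50                      # assumed coverage diameter of one device
speed_m_s = 20 / 3.6                 # 20 km/h in m/s (~5.5 m/s)
print(f"contact window: {coverage_m / speed_m_s:.1f} s")       # ~9 s

hops_per_km = 40                     # one hop every ~25 m
hops = hops_per_km * 5               # 5 km path -> 200 hops
print(f"hops over 5 km: {hops}")

per_hop_delivery = 0.999             # 0.1% loss on each wireless link
print(f"end-to-end loss: {1 - per_hop_delivery ** hops:.1%}")  # ~18.1%
```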
There are other reasons why such a network is very hard to build today, but these are the major ones I know of. Is it impossible? No, of course not, but I wanted to show why I think it is nearly impossible with current technologies and would require some very significant improvements, not just building the network.
If everyone has a device with sufficient receive/process/send capabilities, then backbones (ISPs) aren't really necessary. Start at mesh networking to find the huge web of implementations, devices, projects, etc., that have already been in development. The early ARPANET was essentially true peer-to-peer, but the number of nodes grew faster than the nodes' individual capabilities, hence the growth of backbones and those damn fees everyone's paying to phone and cable companies.
Eventually someone will realize there are a million teenagers in NYC that would be happy to text and email each other for free. They'll create a 99-cent download to let everyone turn their phones and laptops and discarded devices into routers and repeaters, and it'll go viral.
Someday household rooftop repeaters might become as common as TV antennas used to be.
Please check: Wireless sensor network
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location.
I have a question regarding WebSocket communications over mobile connections.
I was wondering how long-lived TCP connections are handled in mobile networks when the user migrates between different networks. What happens to already established TCP connections when a handover (hand-off) occurs?
Do different technologies (3G, 4G, etc.) behave differently in this case?
I would appreciate it if you could point me to some online sources or articles where I can read more about this.
Thank you in advance :)
The hand-off is always transparent to the user: all TCP and voice connections are kept active when transitioning between towers on a commercial mobile network such as LTE or UMTS. You might experience some periods where the data stops flowing, but that's about it.
I've had several opportunities to verify this myself through an interesting experiment on T-Mobile USA's nationwide HSPA+ network. Take a 12-hour-plus drive from one major city to another without turning your phone off, then look at the area where your external IPv4 address terminates, by using traceroute (a small sketch for checking this follows below). You will likely notice that it's still in the area where you started your trip. Now reboot the phone and see where the external IPv4 address is routed; it will probably terminate in a major metro area closer to where you are now. In other words, your connection within the operator's core network follows you along, not just within a given city, metro or state, but also between states and time zones.
The reason for this is that the carrier has a Core Network, and all external connections are handled by the Packet Gateway of the Core Network, which keeps track of all the connections. More on this is documented in Chapter 7 of the book called High Performance Browser Networking (HPBN.co).
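One way to run the experiment above without a traceroute tool is to record your external IPv4 address and its reverse-DNS name before and after the reboot; on many carrier networks the PTR record hints at the metro area of the packet gateway. A rough sketch, assuming a public "what is my IP" service such as api.ipify.org:

```python
# Record the external IPv4 address and its reverse-DNS name; run this before
# and after rebooting the phone (e.g. from a laptop tethered to it) and compare.
# api.ipify.org is just one example of a public "what is my IP" service.
import socket
import urllib.request

external_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()
print("external IP:", external_ip)

try:
    hostname, _, _ = socket.gethostbyaddr(external_ip)
    print("reverse DNS:", hostname)   # the PTR name often hints at the gateway's metro area
except socket.herror:
    print("no PTR record for this address")
```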
This is not really a Stack Overflow question but more of a Programmers one, and I don't see what you have researched yourself, but you certainly can't rely on a connection staying alive, mobile or not.
In fact, mobile operators kill long-lived connections by resetting them after a certain amount of time or data, so you should be ready to reconnect upon a socket exception anyway.
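A minimal sketch of that reconnect logic, shown here with a plain TCP socket and exponential backoff (the host, port and handler are placeholders; a real WebSocket client would be wrapped the same way):

```python
# A reconnect loop with exponential backoff around a plain TCP socket; a real
# WebSocket client would be wrapped the same way. Host/port are placeholders.
import socket
import time

HOST, PORT = "chat.example.com", 443   # hypothetical endpoint
MAX_BACKOFF = 60                       # seconds

def handle(data: bytes) -> None:
    pass                               # placeholder for message handling

def run_forever():
    backoff = 1
    while True:
        try:
            with socket.create_connection((HOST, PORT), timeout=10) as conn:
                backoff = 1            # reset after a successful connect
                while True:
                    data = conn.recv(4096)
                    if not data:       # peer (or the operator) closed/reset the connection
                        raise ConnectionResetError("connection closed")
                    handle(data)
        except OSError as exc:         # covers timeouts, resets, unreachable network
            print(f"connection lost ({exc}); retrying in {backoff}s")
            time.sleep(backoff)
            backoff = min(backoff * 2, MAX_BACKOFF)
```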
I live in New Zealand and I just started a website, novazeal.com and novazeal.co.nz, which is also hosted in New Zealand. I am hoping to target clients overseas as well as in New Zealand, so I am trying to decide whether to set up a second site hosted in the US and point the .com domain at that instead.
I have heard from friends in the States that a site I previously hosted here in New Zealand was slow to access, so what I really need is a measure of the time it takes to reach my site from a location in the US. A normal tracert from my computer here only hops through servers in NZ, so I can't get the measure I'm looking for that way. Does anyone know of an alternative I could use, such as an application that runs the trace from a distant ISP, or a proxy service that reports the time it takes to retrieve a page from a distant location?
Of course if anyone in the States is willing to run the trace for me and send me the hop time stats I would be most grateful. I could ask the friends I mentioned, but they are not particularly technical, so it would probably be a confusing thing to try to explain to them by email.
There are web-based route tracing utilities, some of which are hosted in the US, that will show you the route and latency between that service and your site (at a point in time).
However, traceroute doesn't give you a full picture of the network latency effects to which you're subject: these days routers do all sorts of sophisticated traffic shaping and traceroute probes just won't be treated the same as your HTTP traffic even if you specify that probes should use TCP and port 80.
Not to mention that network latency itself is just a tiny piece of the puzzle. ISPs perform all sorts of caching using (sometimes transparent) HTTP proxies, from which you won't benefit unless/until your site is visited by their customers.
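If you can borrow any US-based machine (a friend's box or a cheap VPS), timing a full page fetch tells you more than a traceroute ever will. A small sketch along these lines, with the URL taken from the question as a placeholder:

```python
# Time a full HTTP fetch (DNS + TCP + response body) from a remote vantage
# point; run this from any US-based machine you can borrow. Adjust the URL
# and scheme to match the actual site.
import time
import urllib.request

URL = "http://novazeal.com/"            # the NZ-hosted site from the question

samples = []
for _ in range(5):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    samples.append(time.perf_counter() - start)

print(f"median fetch time: {sorted(samples)[len(samples) // 2] * 1000:.0f} ms")
```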
For a social site we use a node.js-based comet server as the instant messenger. Everything is working great; the only problem we have is how to deal with the latency to Australia and New Zealand, where we see RTTs between 310 ms and 440 ms.
One idea is to have local servers, but in that case they must connect to the main server so that a user in Australia can still communicate with one in the UK. This comet-to-comet connection will have higher latency too, but local users can chat quickly, which will be the most common case.
Does anyone have a better idea than using local comet servers?
If your latency is due to geographical distance, there is no way to shorten it. The only thing you can do is try to find upstream network providers with more direct cables, but you can never get latency below what the straight-line distance between those two countries/servers dictates.
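As a rough lower bound, assume signals travel through fibre at about two-thirds of the speed of light and take the great-circle distance between the endpoints; the numbers below are approximations for a UK to eastern-Australia path:

```python
# Lower bound on RTT imposed by distance alone; the distance and the ~2/3 c
# propagation speed in fibre are rough approximations.
SPEED_IN_FIBRE_KM_S = 200_000           # roughly two-thirds of the speed of light
distance_km = 17_000                    # rough UK <-> eastern Australia path length

one_way_ms = distance_km / SPEED_IN_FIBRE_KM_S * 1000
print(f"one-way: {one_way_ms:.0f} ms, minimum RTT: {2 * one_way_ms:.0f} ms")
# ~85 ms one way, ~170 ms RTT; real cable paths are longer and add switching
# delay, which is why 310-440 ms RTTs to AU/NZ are not unusual.
```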
If you will have users in Australia communicating with each other, then yes, it will make a difference for them to connect to a local server. But for communication between a user in the UK and one in AU, a local server won't help.
But anyway, for an instant messenger the latency is not that important IMHO. The recipient doesn't know the moment the sender finished the message and hit the send button, so they can't perceive the latency. And a human can't send many messages per second, so I don't think the difference between 400 ms and 10 ms of latency would be noticeable. If it were over 1 second, it might be...
So to summarize, I would only bother with local servers once there are enough local users communicating among themselves.
(Please let me know if some of my assumptions about your setup were incorrect.)
The situation I seek help with is this: A business on the east coast of the US, at random intervals, posts messages via the public internet to a set of listening subcontractors who subscribe to these messages. Each message announces availability of a unit of work to be subcontracted. The first subscriber who responds with an acceptance message indicating it has immediate capacity to perform the work is then awarded that work. One subcontractor is located in the US midwest. Another on the US west coast. Due to the slightly longer time it takes for the messages to reach the west coast subcontractor via the internet, and for its responses to get back to the east coast, the west coast subcontractor's attempts to accept an offered unit of work are often too late (i.e. the nearer subcontractor has already signaled acceptance and been awarded the work) even though the west coast subcontractor also has capacity to do the work. I'm looking for the best way to improve transit time to overcome the distance disadvantage for the west coast subcontractor (connected to the internet via a T1 line). Any suggestions? (If this is the wrong forum for this question, suggestions for a better one would be welcomed.)
You will not be happy with the answer.
There is no real way to make your packets travel faster over the internet. If the routers along the path are not under your control, there is simply no reliable way to gain speed. The internet is based on best-effort delivery, which means no router guarantees that your packets arrive, nor when, nor in which order; that is why TCP was invented. If you send two packets, there is a good chance they take two different routes to the destination, and there is no way to tell the routers between you and the remote site to handle your packets with priority or faster. There are some mechanisms that could theoretically speed up transmission, but the relevant headers are usually stripped along the way (in most cases right after the last router under your control). There are QoS markings and the TCP Urgent pointer, but none of these really guarantee anything. You can try setting them, but there is simply no way to ensure your packets actually get prioritized.
I know this is not satisfying in any way, but think about it the other way around: if packets really were prioritized based on those flags, everyone would just set them, and everything would be exactly as fast as it is now. You can try, but I can tell you that most hops just blatantly ignore the flags.
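For completeness, this is roughly how you would mark traffic on the sending side; a minimal sketch setting the IP TOS/DSCP byte on a socket (the destination is a placeholder, and as noted, routers outside your own network are free to ignore or strip the marking):

```python
# Mark outgoing traffic with a DSCP value (EF, "expedited forwarding").
# Routers outside your own network are free to ignore or rewrite the marking,
# and OS support for IP_TOS varies (it may require privileges on some systems).
import socket

DSCP_EF = 46                  # "expedited forwarding" class
TOS_VALUE = DSCP_EF << 2      # DSCP sits in the top 6 bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.connect(("example.com", 80))   # placeholder destination
sock.close()
```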
Honestly, the only way to get there faster is to get a server physically close and reduce the number of network hops in between. If you can get a server in the same data center, great; on the same street, good; in the same city, OK; in the same country, also OK. There really is no other way. The closer you can get, physically and network-wise, the better.