I have a question regarding WebSocket communication over mobile connections.
I was wondering how long-lived TCP connections are handled in mobile networks when the user migrates among different networks. What happens to already established TCP connections when a handover (hand-off) occurs?
Do different technologies (3G, 4G, etc.) behave differently in this case?
I would appreciate it if you could also point me to some online sources or articles where I can read more about this.
Thank you in advance :)
The hand-off is always transparent to the user: all TCP and voice connections are kept active when transitioning between towers on a commercial mobile network like LTE or UMTS. You might experience some periods where the data stops flowing, but that's about it.
I've had several opportunities to verify this myself through an interesting experiment on T-Mobile USA's nationwide HSPA+ network. Take a 12-hour-plus drive from one major city to another without turning your phone off, then look at where your external IPv4 address terminates (using traceroute). You will likely notice that it is still in the same area where you started your trip. Now reboot the phone and see where the external IPv4 address is routed now. You'll notice that it is now likely terminated in a major metro area closer to where you are. That is, your connection within the operator's core network follows you along not just within a given city, metro, or state, but also across states and time zones.
The reason for this is that the carrier has a Core Network, and all external connections are handled by the Core Network's Packet Gateway, which keeps track of all the connections. This is documented in more detail in Chapter 7 of the book High Performance Browser Networking (HPBN.co).
This is not really a Stack Overflow question but more of a Programmers one, and I don't see what you have researched for yourself, but you certainly can't rely on a connection staying alive, mobile or not.
In fact, mobile operators kill long-lived connections by resetting them after a certain amount of time or data. So you should be ready to reconnect upon a socket exception anyway.
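As a rough illustration, a minimal reconnect loop with exponential backoff might look like this (the hostname, port, and retry policy are placeholders of my choosing, not anything prescribed by a carrier):

```python
import socket
import time

def connect_with_retry(host, port, max_backoff=60):
    """Keep retrying until a TCP connection succeeds, backing off exponentially."""
    backoff = 1
    while True:
        try:
            return socket.create_connection((host, port), timeout=10)
        except OSError:
            # Connection reset, refused, or timed out: wait, then retry.
            time.sleep(backoff)
            backoff = min(backoff * 2, max_backoff)

# Usage: wrap all reads/writes so a dropped socket triggers a reconnect.
sock = connect_with_retry("example.com", 8080)
```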
Related
I want multiple IoT devices (say, 50) communicating with a server directly and asynchronously via TCP. Assume all of them have a heartbeat pulse every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices are communicating simultaneously?
TCP by itself ensures no data loss during communication between a client and a server. It does that through sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is created between the client (which can be an IoT device or any other device) and the server. The data is then split into multiple packets and sent over the network through that connection. All TCP mechanisms such as flow control, error detection, and congestion control take place once the data starts to flow.
The Wikipedia page for TCP is a pretty good start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to support the flow of requests coming from the devices, everything should work (at least in theory).
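As a rough sketch of the server side, here is one way to accept many concurrent heartbeat connections. The port, the newline-terminated heartbeat format, and the 90-second idle timeout are all assumptions for illustration:

```python
import asyncio

async def handle_device(reader, writer):
    """Serve one device; TCP guarantees in-order, lossless delivery per connection."""
    peer = writer.get_extra_info("peername")
    try:
        while True:
            # Allow generous slack beyond the 30-second heartbeat interval.
            line = await asyncio.wait_for(reader.readline(), timeout=90)
            if not line:
                break  # device closed the connection cleanly
            print(f"heartbeat from {peer}: {line.decode().strip()}")
    except (asyncio.TimeoutError, ConnectionError):
        pass  # device dropped off; it is expected to reconnect on its own
    finally:
        writer.close()

async def main():
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```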
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked. Networks do not always work (that is why the word "work" is in "network": to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves viewing interruption and congestion as part of normal operation, and building your software accordingly.
There is a timeless paper from the late 70s/early 80s (likely "End-to-End Arguments in System Design" by Saltzer, Reed, and Clark) that invigorated the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most middle-to-middle guarantees amount to best effort. If you rely upon those guarantees, you are bound to fail. It is widely cited.
If Chris and Pat want to exchange a text message, they send and receive via their network providers, which charge them for a connection.
If Chris and Pat are both located in New York City, and there are enough wireless devices between Chris and Pat all close enough to each other to form a continuous chain, is it possible for all those devices to be programmed to cooperatively forward packets amongst each other, bypassing the need for network providers?
It would seem the "address" of each device would have to include current geographic coordinates, and devices would have to report their movements frequently enough that routing attempts could still find them, but the speed and capacity of devices nowadays could handle that, right?
Would such a network be viable? Does it already exist or has it been attempted? Is there some kind of inherent programming problem that is difficult to overcome?
There are a few interesting things here:
Reachability. At a minimum you need a technology that can do ad-hoc, peer-to-peer networking. Of those technologies, only Bluetooth, NFC, and Wi-Fi are implemented at all widely. Of those, again, only Wi-Fi currently has enough range to reach devices in other houses or on the street, and even there typical ranges are 30-60 m (and that's for APs; it may be lower for UEs).
Mobility. ANY short-range wireless protocol has difficulties with fast-moving devices. It's simple math: suppose your coverage is 50 m in diameter and you move at about 20 km/h (5.5 m/s); you then have less than 10 s to detect, connect, and send data while passing through that link. And that doesn't even consider receiving traffic: you also have to let all other devices know that, for the next 10 s, you want to receive data via this access network. To give an example, Wi-Fi connection setup with decent authentication (which you need for something like this) alone takes a few seconds. 10 s might be doable, but as soon as we talk about cars, trains, and so on, it becomes almost impossible with current technology. And if you can't connect to those, what are the odds you can bridge a wide boulevard with your limited range?
Hop-to-hop delays. You need a lot of hops. We can fairly assume you need at least one hop every 20-30 m, so let's average at 40 hops/km. To send a packet over, say, 5 km, you'd need 200 hops. Each hop has to take in a packet (L2 processing), route it (L3 processing), and send it out again (L2 processing). While mobile devices are relatively powerful these days, I wouldn't assume they can do that in the microseconds routers manage. On top of that, in a wireless network you have to wait for a transmission slot, which can take on the order of milliseconds (at each hop!). All in all, odds are high this would be a terribly slow network.
Loss. This depends a bit on the wireless protocol: either it has its own reliable-delivery mechanism (which makes the previous point worse) or it doesn't. In the latter case, suppose each wireless link has about 0.1% loss (99.9% delivery); that compounds to an 18.1% loss rate over the 200 hops considered previously, i.e. (1 - 0.999**200) * 100. (A worked sketch of this arithmetic follows the list.) That is nearly impossible to work with in day-to-day communications.
Routing. Let's say you need a few million devices and thus routes. For traditional routing this usually takes very heavy multicore routers with loads of processing power; let's just say mobile devices (today) can't manage that yet. A purely geography-based routing mechanism might work, but I can't personally think of any (even theoretical) system for this that works today. (A toy sketch of the greedy-forwarding idea also follows the list.) You would still have to distribute those routes, deal with (VERY) frequent route updates, avoid routing loops, and so on. So even with that, I'd guess you'd hit the same scaling issues as with, for example, OSPF. All in all, though, I think this is something mobile devices will be able to handle somewhere in the not-so-far future; we're just talking about computing capacity here.
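To make the loss compounding explicit, here is the arithmetic from the Loss point as a quick sketch (the hop count and per-hop loss rate come straight from the estimates above):

```python
# End-to-end delivery over n independent hops, each with loss rate p.
hops = 200            # ~40 hops/km over 5 km, from the estimate above
per_hop_loss = 0.001  # 0.1% loss per wireless hop

delivered = (1 - per_hop_loss) ** hops
print(f"end-to-end loss: {(1 - delivered) * 100:.1f}%")  # -> 18.1%
```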
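For the geographic-routing point, the simplest scheme I know of is greedy forwarding, where each node hands a packet to whichever neighbor is geographically closest to the destination. A toy sketch (coordinates and neighbor table are hypothetical) shows both the idea and where it breaks down:

```python
import math

def next_hop(my_pos, neighbors, dest_pos):
    """Pick the neighbor closest to the destination, if any is closer than us.

    neighbors maps a node id to its (x, y) position. Real systems (e.g. GPSR)
    also need a fallback for the 'local minimum' case signaled with None here.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    closer = {n: pos for n, pos in neighbors.items()
              if dist(pos, dest_pos) < dist(my_pos, dest_pos)}
    if not closer:
        return None  # no neighbor makes progress; the packet is stuck
    return min(closer, key=lambda n: dist(closer[n], dest_pos))
```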
There are other reasons such a network is very hard to build today, but these are the major ones I know of. Is it impossible? No, of course not; I just wanted to show why I think it is nearly impossible with current technologies and would require some very significant improvements, not just building the network.
If everyone has a device with sufficient receive/process/send capabilities, then backbones (ISPs) aren't really necessary. Start at mesh networking to find the huge web of implementations, devices, and projects already in development. The early ARPANET was essentially true peer-to-peer, but the number of network nodes grew faster than the nodes' individual capabilities, hence the growth of backbones and those damn fees everyone pays to phone and cable companies.
Eventually someone will realize there are a million teenagers in NYC who would be happy to text and email each other for free. They'll create a 99-cent download to let everyone turn their phones, laptops, and discarded devices into routers and repeaters, and it'll go viral.
Someday household rooftop repeaters might become as common as TV antennas used to be.
Please check: Wireless sensor network
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound, and pressure, and cooperatively pass their data through the network to a main location.
My knowledge of network programming is limited, so all comments are more than welcome. Essentially my question boils down to the following:
Q1. Is there really such a thing as decentralized asynchronous cross-platform peer-to-peer communication?
Let me explain myself.
If we have two HTTP servers running on computers with actual IP addresses, then clearly the answer is yes, assuming one writes a protocol for the interaction.
To go one step further, if one of them (or both) is behind a router, then with port forwarding the communication can still be established. However, here the problems start, because if someone wants to run such a server in the background, say on a mobile phone, an app relying on this server really only works when one is at home (we cannot realistically request port forwarding everywhere we go).
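To make the first case concrete, here is a minimal sketch of such a peer: each party runs one of these on a publicly reachable address, and they exchange messages over plain HTTP (the port and message format are arbitrary choices for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PeerHandler(BaseHTTPRequestHandler):
    """Accept a plain-text message POSTed by the other peer."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print("message from peer:", self.rfile.read(length).decode())
        self.send_response(200)
        self.end_headers()

# Each peer listens on a port the other can reach directly.
HTTPServer(("0.0.0.0", 8080), PeerHandler).serve_forever()
```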
But even beyond that,
Q2. Do mobile phones obtain an actual IP address from telecommunication companies when not using Wi-Fi?
If this is true, then clearly one can have cross-platform asynchronous peer-to-peer communication, at the expense of not using Wi-Fi, by running an HTTP server on a smartphone. (I understand that this is not convenient, but it is certainly doable.)
Concluding, the two (perhaps there are more) relevant questions that I can think of are:
Q3. How does Skype really work?
Q4. How does Viber really work?
Based on the answer for Skype, it says: if one or both of the parties do not have a public IP, they send voice traffic to another online Skype node over UDP or TCP.
So it appears there is no direct communication in Skype in that scenario, because it has to use an intermediary node as a relay in the middle.
Regarding Viber, I could not find a thorough answer to this particular question. Do people talk to each other through a centralized Viber server, or do they establish a direct connection? Of course, if they do establish a direct connection, then I really want to know how they manage such a thing, since a mobile phone may or may not have a public address. How is a Viber message routed to my cell phone from a friend of mine even when Viber is not running and I am behind a router?
I guess the answer for Viber is really push notifications, but as far as I can understand, all variations of push notifications rely on open connections: the application's servers send the notifications to the clients through such connection(s). So this approach gives the feeling of being asynchronous, but essentially it is not. We are cheating, in the sense that there is a constantly open connection to a server, and moreover, as far as I can understand, the application server has to push the notification through that server. Schematically:
A > Central App Server > Central Server w/ open connection to my cellphone > me
So, this seems to be once again a centralized approach.
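For illustration, the "constantly open connection" above boils down to something like this on the device side (hostname and port are hypothetical; real push services add authentication, framing, and reconnection on top):

```python
import socket

def listen_for_pushes(host="push.example.com", port=5000):
    """Hold one long-lived TCP connection and print whatever the push server sends."""
    with socket.create_connection((host, port)) as sock:
        sock.settimeout(None)  # block indefinitely waiting for pushed data
        while True:
            data = sock.recv(4096)
            if not data:
                break  # server closed the connection; real code would reconnect
            print("push received:", data.decode(errors="replace"))
```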
Honestly, the only approach I can think of that is both decentralized and asynchronous (on mobile phones as well) is to run an HTTP server on every platform/device, but this comes at the expense of not using Wi-Fi and assumes that a telecommunication company really assigns a public IP address to every mobile phone (which I do not know to be true; do you?).
What about WASTE, darknets, F2Fs, etc.? Do they offer advantages in the sense of more direct asynchronous communication between interested parties? Are there real-world applications (including on mobile phones) using such approaches for communication?
Really, this is not the actual problem I would like to work on, but I would like to know what the state of the art is so I can figure out how to proceed from there. So all comments are really more than welcome. If you have references for the state of the art, I would like to know about them as well, but a brief description would also be nice.
I appreciate all your time and effort in advance.
You asked many questions, here is the beginning of the answers:
Q1: Yes. For example, take BitTorrent's very successful 10 million+ node network. Aside from the bootstrapping process, the protocol is entirely decentralized and asynchronous. See here for more info.
Q2: Yes! Go to www.whatismyip.com on your mobile phone and you will see your assigned IP. However, you are likely to be heavily filtered (e.g., incoming traffic on port 80 is likely to be blocked), and many carriers place phones behind carrier-grade NAT, so the address you see may be shared and unreachable from the outside.
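You can also check this programmatically; a quick sketch using one public IP-echo service (api.ipify.org here, though any similar service works):

```python
import urllib.request

# Ask a public echo service which address your carrier presents
# to the internet on your behalf.
with urllib.request.urlopen("https://api.ipify.org") as resp:
    print("public IP:", resp.read().decode())
```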
Q3: It has elements of P2P and clever tricks to get around NAT issues - see here for more info.
Q4: I don't know.
We published the game on a Russian server, and 1% of players couldn't connect to the server on a 46xx port over raw TCP, while they could load its HTML page (over HTTP). Most of these players live in Germany, Israel....
Why is this? What policy decisions lie behind it? We discovered that these ports (which are unassigned by IANA) are closed on their networks. Does this mean that such people cannot run Steam (and therefore play all the games bought through it), play WoW, and play many other modern games that use TCP on 4xxx ports?
Thank you.
ISPs have been known to filter certain ports for various reasons. Users should complain loudly to them (or switch providers) in order to send a signal that this is not to be tolerated. You can encourage them to do so, but of course that doesn't solve your problem (or really answer your question).
Common reasons are:
- trying to block bittorrent traffic
- limit bandwidth usage (largely related to previous reason)
- security (a mistaken belief)
- control (companies often don't want employees goofing off)
The easiest thing for you to do is run your game over port 443 (perhaps as an alternate). That's HTTPS, so it will not generally be blocked. And because HTTPS is encrypted, there's no way to inspect the stream to know whether it's web traffic or something else, so you can run any data stream (encrypted or not) that you wish over it.
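A client-side sketch of that fallback idea, with the primary port and hostname as placeholders:

```python
import socket

# Try the game's primary port first, then fall back to 443,
# which is rarely filtered. Port numbers are illustrative.
PORTS = [4660, 443]

def connect(host):
    for port in PORTS:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            continue  # port filtered or unreachable; try the next one
    raise ConnectionError(f"all candidate ports blocked for {host}")
```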
That's precisely correct. In fact, most public-facing servers block, by default, all ports except the ones on which they expect to run traffic.
This is the reason many applications try to encapsulate their traffic over port 80, which can't be blocked as long as someone wants HTTP traffic to run.
They simply don't want any application they haven't approved running through their servers. If you have a sensitive server on the public internet, you surely don't want anyone using your machine for apps you don't allow. A common target is applications that eat up bandwidth, such as BitTorrent, eDonkey, and Gnutella, as well as streaming, VoIP, and other high-bandwidth apps.
It appears that cheap consumer routers are fairly easy to crash: hanging around various backup/sync software forums, I see this mentioned from time to time. Developers seem to put a fair amount of effort into making sure they don't crash the routers.
What are the "do"s and "don't"s for my network-heavy application to ensure it doesn't cause issues with badly designed routers? Especially one that intends to connect to a number of peers?
IMO, trying to work around bad hardware is the road to nowhere, because every router fails in its own remarkable way :).
What you can do in a network-heavy application is assume that the network is not a stable medium (routers can crash, etc.) and design the application's network operations accordingly.
For instance, provide reconnect logic, connection timeouts, and some sort of state caching to allow users to keep working with the app even if network connectivity is gone.
Concerning faulty routers: they usually crash under a large number of simultaneous connections (e.g., downloading via BitTorrent or another P2P protocol), which can overflow the router's NAT table. So keeping the number of simultaneous connections to a minimum can help; one way to enforce that is sketched below.
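A minimal sketch of capping simultaneous outbound connections (the limit of 8 is an arbitrary illustration; tune it for your peer count):

```python
import socket
import threading

# Cap simultaneous outbound connections so a cheap home router's
# NAT table isn't flooded.
MAX_CONNECTIONS = 8
_slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def connect_limited(host, port):
    """Open a connection only when a slot is free; blocks otherwise."""
    _slots.acquire()
    try:
        return socket.create_connection((host, port), timeout=10)
    except OSError:
        _slots.release()  # give the slot back if the connect failed
        raise

def close_limited(sock):
    sock.close()
    _slots.release()
```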