Is this a reasonable technique to establish P2P connections between two hosts behind a NAT? - networking

In order for two hosts behind two different NATs to be able to establish a P2P connection, they must both have each other's public endpoints (address/port). In order for each host to become aware of its own public endpoint, it can communicate with a known public server that returns to the host its public endpoint as seen by the server. See STUN (RFC5389).
In order for this mechanism to work, the NAT must consistently translate the same private endpoint to the same public endpoint regardless of the destination endpoint.
Would it be a good idea to implement a P2P application that depends on this particular NAT behavior (i.e. is it common enough)? Do Skype and/or other popular P2P applications have fallback mechanisms?
EDIT: It looks like a NAT that chooses a different port per destination is called a "symmetric NAT". In addition, some symmetric NATs assign ports incrementally, which opens the possibility of guessing the proper port.
Now I suppose the question is: Roughly what percentage of NATs are symmetric NATs, and what percentage of those use random port assignments vs. something like incrementing?

You can use standard STUN/TURN/ICE protocols from IETF:
http://rtcbits.blogspot.com/2012/10/what-is-ice-interactive-connectivity.html
According to Google's GTalk statistics, these mechanisms establish a P2P connection without relaying traffic in 92% of cases.
https://developers.google.com/talk/libjingle/important_concepts
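For reference, the STUN step itself is tiny: the host fires a binding request at a public STUN server and reads its public address/port back out of the response. Here is a minimal sketch (Python, raw UDP; the Google STUN server address is just a commonly used example, and the helper name is mine):

# Minimal RFC 5389 binding request; only the XOR-MAPPED-ADDRESS attribute is parsed.
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442

def get_public_endpoint(stun_host="stun.l.google.com", stun_port=19302, timeout=2.0):
    txn_id = os.urandom(12)
    # Binding Request: type 0x0001, zero-length body, magic cookie, transaction ID
    request = struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txn_id)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (stun_host, stun_port))
        data, _ = sock.recvfrom(2048)
    msg_type, msg_len, cookie = struct.unpack("!HHI", data[:8])
    assert msg_type == 0x0101 and cookie == MAGIC_COOKIE  # Binding Success Response
    pos = 20
    while pos < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 4 bytes
    return None

print(get_public_endpoint())  # e.g. ('203.0.113.20', 54321)

Whether that mapped port stays the same for other destinations is exactly the NAT-behaviour question asked above; ICE answers it empirically by simply trying the candidates.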

STUN alone can only handle about 85% of cases! TURN can handle 100%, but it is expensive because all traffic is relayed. So which should you use? Neither on its own.
Use ICE. It is simply the combination of STUN and TURN servers: you get 100% reliability, yet only about 15% of the traffic has to be relayed through a server.

Related

Is it possible to restrict ForceBindIP to only inbound/outbound traffic?

I'm using ForcebindIP to point an app at a specific network adapter, like this:
forcebindip -i 192.168.0.5 MyCSharpApp.exe
This works fine and the app isn't aware of (or doesn't access) any of the other network adapters on the PC.
Is it possible to restrict ForceBindIP to outbound traffic only leaving the app to receive data from any local network adapter? Or even to specify a network adapter for outbound and another for inbound traffic?
I can't find an extra startup parameter for ForceBindIP that does this.
I'd appreciate any help with this.
If I understand your problem correctly, you want your application to listen for packets on all available interfaces but return packets through only one given interface. I also assume it's a server application and that you have neither its source code nor control over its behaviour.
Disclosure: I do not know how ForceBindIP works internally, I'm basing my understanding of it on this passage from the website:
it will then inject a DLL (BindIP.dll) which loads WS2_32.DLL into memory and intercepts the bind(), connect(), sendto(), WSAConnect() and WSASendTo() functions, redirecting them to code in the DLL which verifies which interface they will be bound to and if not the one specified, (re)binds the socket
Problems to overcome
I don't believe your desired configuration is possible with just one application level DLL injector. I'll list a few issues that ForceBindIP will have to overcome to make it work:
To listen on a socket, an application first has to bind() it to a unique protocol-address-port combination. An application can bind either to a specific address or to the wildcard (i.e. listen on all interfaces). Apparently one can bind to the wildcard and to a specific address simultaneously, as outlined in this SO question; however, these will be two different sockets from the application's standpoint, so your application would have to know how to handle this sort of traffic (see the sketch after this list).
When accepting a client connection, accept() creates a new socket whose parameters are managed by Windows; I don't believe there's an API to intercept binding here, because by this time the connection is considered established.
Now imagine we somehow got a magic socket that receives packets on one interface and sends them out another. The client (and all routing equipment along the way) would have to recognize that packets originating from two different source IP addresses are actually part of the same connection, and be able to assemble the TCP session (or correctly merge the UDP streams).
You can have multiple default gateways with different priorities and rules (a whole different topic to explore), but as far as I'm aware that's not going to solve your particular issue: the majority of routing setups assume links are symmetric and expect packets to stay on the same interface. There are special cases like asymmetric routing and network interface teaming, but they have to be implemented at the per-interface level.
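To make the first point concrete, here is a quick sketch (Python for brevity; the port and address are hypothetical, and the exact rules for sharing a port with SO_REUSEADDR differ per platform) of why "listen on everything but answer from one adapter" already implies two sockets:

import socket

PORT = 5000                          # hypothetical port
SPECIFIC_IP = "192.168.0.5"          # the adapter ForceBindIP pins the app to

wildcard = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
wildcard.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
wildcard.bind(("0.0.0.0", PORT))     # receives traffic arriving on any interface

specific = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
specific.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
specific.bind((SPECIFIC_IP, PORT))   # receives only traffic addressed to 192.168.0.5

# Two sockets, two file descriptors: the application itself has to read from both,
# which an injected DLL cannot arrange behind the application's back.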
One potential solution
One way to achieve what you're after (I don't know enough about your environment to claim it will work) would be to create a virtual interface, place it in yet another IP network, bind your application to it, then use the firewall (to, say, allow multicast packets into the "virtual" network) and route from that network to the required default gateway with the metric set to 1. I also suspect that not just any edition of Windows will be that flexible, so you might need a Server edition.
I'm sorry this didn't turn out to be a ready-to-fly solution; I hope it nevertheless gives you more context on the problem you're facing and points you toward other directions to explore.
You can use the Set-NetAdapterAdvancedProperty cmdlet in PowerShell to set the flow control of a specified adapter.
To get the names and properties of all network adapters:
Get-NetAdapterAdvancedProperty -Name "*"
Suppose you want the network adapter named "Ethernet 2" to be used only to receive data from the internet; then type:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Rx Enabled"
You can find more at:
https://learn.microsoft.com/en-us/powershell/module/netadapter/set-netadapteradvancedproperty?view=win10-ps
The Microsoft Winsock example shows how to limit a socket to send-only or receive-only mode. It might help.
https://learn.microsoft.com/en-us/windows/win32/winsock/complete-client-code
Outbound and inbound limits are not imposed at bind time, but later, once the connection is established.
The line of code pertaining to this in the client code is toward the end.
// shutdown the connection since no more data will be sent
iResult = shutdown(ConnectSocket, SD_SEND);
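For comparison, the same one-directional idea expressed in Python (a quick sketch; the peer address is hypothetical). Note that this restricts the direction of an already-established connection, not which adapter the traffic uses:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.0.5", 9000))   # hypothetical server endpoint
s.shutdown(socket.SHUT_WR)         # like SD_SEND: stop sending, keep receiving
data = s.recv(4096)                # the receive side still works afterwards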

Reply with unsupported protocol when writing custom network stack

I have been writing my own version of the 802.11 protocol together with a network stack. This is mostly a learning experience to see in more depth how networks work.
My question is: is there a standard way of replying to client devices that a certain protocol is unsupported?
I have an Android device connecting to my custom wifi device and immediately sending a ton of requests to the DNS port over my UDP protocol. Since I would like to test out other protocols, I would very much like a way for my wifi device to tell the Android device that DNS is not available and get it to quiet down a little.
Thanks in advance!
I don't see a way to send a reply saying that a service is not available.
I can't find anything about this case in the UDP specification.
One part of the DNS specification assumes that there are multiple DNS servers and defines how to handle communication with them. This explains part of the behavior in your network, but does not provide much information on how to handle it.
4.2.1 Messages - format - UDP usage
The optimal UDP retransmission policy will vary with performance of the
Internet and the needs of the client, but the following are recommended:
The client should try other servers and server addresses
before repeating a query to a specific address of a server.
The retransmission interval should be based on prior
statistics if possible. Too aggressive retransmission can
easily slow responses for the community at large. Depending
on how well connected the client is to its expected servers,
the minimum retransmission interval should be 2-5 seconds.
7.2 Resolver Implementation - sending the queries
If a resolver gets a server error or other bizarre response
from a name server, it should remove it from SLIST, and may
wish to schedule an immediate transmission to the next
candidate server address.
According to this, you could try to send an error (or just garbage) back to the client, but that is rather a hack, and what does an error even look like? Such a solution assumes knowledge of the very service you don't support.
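For what it's worth, a negative DNS reply is structurally simple: echo the query's transaction ID and question section, set the QR (response) bit, and put an error code such as REFUSED (5) or SERVFAIL (2) in the RCODE field. A rough sketch (simplified; it assumes a plain one-question query with no EDNS record):

import struct

def dns_refused(query: bytes) -> bytes:
    txid = query[:2]                           # echo the client's transaction ID
    flags = struct.pack("!H", 0x8005)          # QR=1, opcode=0, RCODE=5 (REFUSED)
    counts = struct.pack("!HHHH", 1, 0, 0, 0)  # one question echoed, no answer records
    return txid + flags + counts + query[12:]  # reuse the original question section

Whether the Android resolver backs off on seeing REFUSED is its own decision; per the RFC excerpt above it may simply move on to the next server in its list.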
I believe the DNS requests can be avoided by using DHCP. DHCP allows you to specify DNS servers, as listed in the linked page. This is the usual way I know of for a DNS resolver in a LAN to get its initial DNS servers, although I don't find anything about this in the DNS specification. You can give the Android device a DNS server via DHCP so that it does not need to query your device; querying your device could remain a fallback.
In addition to DNS there is mDNS, which uses multicast on the network to send queries. That does not seem to be the protocol you are dealing with, because it uses the dedicated port 5353.
It is not possible to stop DNS in the way you intend. However, just for your tests, you can inspect the UDP messages and find out which names the device is looking for. Then you update the hosts file (how to do it: http://www.howtogeek.com/140576/how-to-edit-the-hosts-file-on-android-and-block-web-sites/) and map those names to some loopback IP address. That might work for your test.
Another possibility is to change the DNS server to some loopback IP address: http://xslab.com/2013/08/how-to-change-dns-settings-on-android/
Again, this is only to avoid having all the DNS messages go through the wifi connection.

Under which conditions and how does Webrtc PeerConnection work without a TURN server?

Reading about WebRTC, I get the feeling that "it will drop server bandwidth usage dramatically" except for "a few corner enterprise-firewall cases" where one needs a TURN server, which relays all traffic between the peers.
For example (although not WebRTC related, the idea is similar), the Wikipedia article on Chatroulette states: The website uses Adobe Flash to display video and access the user's webcam. Flash's peer-to-peer network capabilities (via RTMFP) allow almost all video and audio streams to travel directly between user computers, without using server bandwidth. However, certain combinations of routers will not allow UDP traffic to flow between them, and then it is necessary to fall back to RTMP.
Similar articles on WebRTC also tend to say "yeah, there might be problems with firewalls, so you need a TURN server, but ignore this and look at my awesome PeerConnection JavaScript code".
What I don't understand:
A connection between two peers requires a server socket to be open so the peers can connect to it; even UDP requires the concept of a UDP server socket. But nearly all non-server, internet-connected peers sit behind some kind of router: every smartphone uses a wifi router, desktop PCs use the service provider's router, and so on.
It shouldn't be possible to connect to a server socket hosted on a smartphone (a browser WebRTC server socket) or a desktop because of the router/firewall.
Thus my understanding is that practically no two peers that need to send their traffic over the internet will be able to use a direct P2P connection, right?
So the only useful case for WebRTC is a LAN-like environment, right?
Furthermore, a video chat service like Chatroulette based on WebRTC would need a bunch of TURN servers to relay nearly ALL traffic, which makes WebRTC as costly in server bandwidth as hosting my own solution.
So my question is: am I right? If not, what is the technical detail that allows a PeerConnection to be used without a TURN server between two nodes separated by the internet? How is the connection established at Layer 4, the TCP/UDP transport layer? Is it using UDP, with all wifi routers allowing UDP server sockets or some such? That wouldn't make much sense because of NAT and security.
UPDATE 1:
Digging a bit further, I found what "symmetric NAT" means and what it has to do with enterprises: in most enterprises it seems the device connected to the internet implements symmetric NAT. This means the mapping table that translates "internal-ip:internal-port" tuples to "internet-ip:internet-port" also stores "destination-ip:destination-port". So such routers/NATs keep a row for every (TCP?) connection with six columns: "internal-ip:internal-port:internet-ip:internet-port:destination-ip:destination-port". This means no one but that destination is allowed to communicate with internal-ip:internal-port.
Non-enterprise routers, in contrast, seem to store only the "internal-ip:internal-port:internet-ip:internet-port" combination. That is also what is meant by "poking a hole in the firewall".
You're not right. All peers have IP addresses in order to communicate, and can be reached on those same addresses, provided a firewall allows it.
NATs tend to be optimized for client-initiated client-server traffic only. That typically means they initially allow outbound traffic only, and only allow inbound traffic on the same line after outbound traffic has happened. Perfect for reaching servers, not for being reached. See this WebRTCHacks article for an intro to the problem.
This is where ICE comes in to attempt to poke holes in the firewall from the inside (client-side), in order to establish a line of communication directly between two peers, without needing any "server" socket, whatever that means.
How ICE works is quite complicated, and is explained in detail in the RFC.
But in broad terms it works in a number of steps:
Each peer (e.g. browser) has an "ICE agent" that collects candidates. Candidates are addresses (IP:port numbers) at which this peer can be reached, including:
Host candidates: e.g. immediate LAN/wifi/VPN IPs of the machine.
Server-reflexive candidates: public (outside-NAT) addresses of the machine, obtained by bouncing requests off mirroring (STUN) servers on the internet.
Relay candidates: addresses to a shared TURN server to forward data if all else fails.
Once discovered, candidates are inserted into the local SDP and trickled over the signaling channel to the other peer, where they are inserted into its remote description, so the other agent sees them.
Once an ICE agent has both local and remote candidates, it starts pairing local and remote candidates, and checks them for connectivity by sending STUN requests on them (effectively attempts at reaching the peer).
Successful pairs are ones both ICE agents have gotten a response back on (a 4-way handshake if you will).
If there's more than one successful pair, they're sorted by some metric, and the best pair becomes selected.
The selected pair is then used to send media over. One pair is needed for each track of (video or audio) media.
If a better pair is found later, the selected pair may change, affecting what address media is sent on.
TURN should only be needed in cases either where both clients are behind symmetrical NATs, or UDP traffic is blocked entirely.
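If you want to see these candidates for yourself from Python, the aiortc library (a Python WebRTC implementation) exposes the same PeerConnection API; the sketch below is only illustrative, and the STUN server URL is just a common public example:

import asyncio
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

async def gather_candidates():
    config = RTCConfiguration(iceServers=[RTCIceServer(urls="stun:stun.l.google.com:19302")])
    pc = RTCPeerConnection(configuration=config)
    pc.createDataChannel("probe")                          # at least one m-line is needed to gather for
    await pc.setLocalDescription(await pc.createOffer())   # aiortc gathers candidates here (no trickle by default)
    for line in pc.localDescription.sdp.splitlines():
        if line.startswith("a=candidate:"):                # host and server-reflexive candidates
            print(line)
    await pc.close()

asyncio.run(gather_candidates())

On a typical home connection you should see host candidates plus a server-reflexive one; only when the connectivity checks between two such peers all fail does a relay (TURN) candidate get used.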

Nat punch, MasterServer/Server/Client. Client can't talk to Server on known public ip and port

I have 3 applications: a MasterServer, a Server and a Client.
The MasterServer is running on: 70.105.155.5:15555 (port forwarded with UPnP)
I create a Server and let the MasterServer know it exists. The MasterServer keeps the Server's public IP and port. The port the MasterServer sees is randomly assigned by my router (let's say 70.105.155.5:16666). The Server keeps messaging the MasterServer every 10 seconds to keep that same port open.
I open up the Client, which asks the MasterServer for the public IP and port of a Server. The MasterServer returns 70.105.155.5:16666. I know 100% that the Server's public port 16666 is still open because I can check that in my logs.
But messages sent from Client => Server are never received. At the same time, the Server is still getting messages from the MasterServer through 16666.
So this is really puzzling. Am I forgetting something? Is my understanding of NAT punch flawed?
Thanks for any help!
There are multiple issues here, and it depends on the router's security configuration too, often in ways the user cannot control. The general excuse is that it's a security precaution, but really firewalling and NAT are two separate concerns. Anyway, most home users are stuck with whatever they've got. They do usually have the option to explicitly map a port, and UPnP can help you too if the router supports it.
But going back to NAT, to begin with you're likely to have a problem if your server and client are sitting behind the same NAT, which seems to be the case given the addresses you quoted above. Most NATs only rewrite incoming packets from the public interface - so packets that physically arrive on the private interface, even if they're addressed to the public IP, won't be forwarded. To support this configuration in the wild you need the devices to advertise their private addresses to the MasterServer, and detect when they want to talk to other devices behind the same NAT, and if so, use the private address rather than going through the NAT. This is flawed in many ways, especially with nested NATs, but I think it's the best you can do.
Beyond that, in the more common case where all the devices are behind different NATs, some routers will only allow incoming traffic on a forwarding port if it's from the place they originally sent the outgoing traffic to (which resulted in the port opening up in the first place). Some also require it to come from the same source port on the remote device.
The workaround is for the MasterServer to do a bit more work. The gist is that it should tell both peers to send a packet to each other; these may or may not get through, but simply sending the packet out through the Server's NAT to the Client's public IP address may be enough to get the Server's NAT to correctly forward later packets from the Client. And vice versa.
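A minimal sketch of that simultaneous-send step (Python; the peer endpoint and port numbers are hypothetical, and in a real application you would reuse the very socket that already talks to the MasterServer so the NAT mapping is preserved):

import socket
import time

LOCAL_PORT = 16666                       # local port whose NAT mapping we want to keep using
PEER = ("203.0.113.7", 17777)            # other side's public endpoint, as reported by the MasterServer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for _ in range(10):
    sock.sendto(b"punch", PEER)          # outbound packet opens/refreshes our NAT mapping toward PEER
    try:
        data, addr = sock.recvfrom(1024) # succeeds once the peer's packets are allowed in
        print("hole punched, peer reachable at", addr)
        break
    except socket.timeout:
        time.sleep(0.5)                  # early packets are often dropped; keep trying

Both peers run the same loop at roughly the same time, coordinated by the MasterServer.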
In practice it's even more complicated, because the Server's NAT may use a different port when talking to the Client than it used when talking to the MasterServer. So while this may open up a port for the Client to talk back to the Server, it might not be the one the Client is expecting to use. If the NAT behaves like this, you need to look at how predictable its choice of port numbering is. Some just increase one by one (but bear in mind there may be other devices behind the same NAT, causing the number to jump more than one step at a time). For those, the Client needs to spam a range of Server ports to figure out which one got opened up. Again, the MasterServer is in the best position to coordinate this.
Others seem totally random in port allocations, and there's not much you can do with those. But it's only terminal if both ends are behind these random NATs. So long as one end is more amenable to being opened up, the random end won't matter.
Also note that some NATs use a different outgoing port for each port on the target; this makes prediction much harder still, even when the ports are not randomly assigned. Again, as long as one end of the connection is flexible you can tolerate these NATs, but in a peer-to-peer context two such NATs are a nightmare, because those peers simply can't talk to each other directly.

HTTP push over 100,000 connections

I want to use a client-server protocol to push data to clients which will always remain connected, 24/7.
HTTP is a good general-purpose client-server protocol; I don't think the semantics could be very different for any other protocol, and many good HTTP servers exist.
The critical factor is the number of connections: the application will gradually scale up to a very large number of clients, say 100,000. They cannot be servers because they have dynamic IP addresses and may be behind firewalls. So, a socket link must be established and preserved, which leads us to HTTP push. Only rarely will data actually be pushed to a given client, so we want to minimize the connection overhead too.
The server should handle this by accepting the connection, inserting the remote IP and port into a table, and leaving the connection idle. We don't want 100,000 threads running, just that many table entries and file descriptors.
Is there any way to achieve this using an off-the-shelf HTTP server, without writing at the socket layer?
Use Push Framework: http://www.pushframework.com.
It was designed for that goal of managing a large number of long-lived asynchronous full-duplex connections.
Lightstreamer (http://www.lightstreamer.com/) is a tool made specifically for HTTP push operations.
It should solve this problem.
You could also look at Jetty + Continuations.
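Whichever product you pick, the underlying shape is the same: a single event loop (or a small pool) parks the idle connections as plain file descriptors and only wakes up when there is something to push. A rough sketch of that shape, in Python asyncio purely for illustration (the port, framing and client-ID handshake are all made up):

import asyncio

clients = {}  # client_id -> StreamWriter: the "table of idle connections"

async def handle(reader, writer):
    client_id = (await reader.readline()).decode().strip()  # client identifies itself once
    clients[client_id] = writer                              # park the connection; no thread blocks on it
    try:
        await reader.read()                                  # returns only when the client disconnects
    finally:
        clients.pop(client_id, None)

async def push(client_id, payload: bytes):
    writer = clients.get(client_id)
    if writer is not None:
        writer.write(payload)                                # the rare push to one idle client
        await writer.drain()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())

An off-the-shelf HTTP server gives you the same property as long as it handles connections with asynchronous I/O (Jetty continuations, servlet async, or an evented server) rather than a thread per connection.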
