Scanning LAN game servers using winsock - networking

I'm trying to figure out how to use Winsock to turn my game into a LAN-playable game. I've read some of the Winsock documentation, but I can't figure out how a client can find all the games that were created on the LAN.
Does it have to try to 'connect' to each IP on LAN, like trying to connect to 192.168.0.1, then 192.168.0.2, etc? Is there a better way?

You would use broadcasting to advertise your servers on the LAN. Clients can then listen for these broadcasts to 'find' servers.
See here for more info:
http://tangentsoft.net/wskfaq/intermediate.html#broadcast
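As a rough sketch of what the advertising side can look like with Winsock (the port 27015 and the announcement string are placeholders made up for this example):
// Sketch: advertise a game server once per second via UDP broadcast.
// Port 27015 and the announcement payload are made-up placeholders.
#include <winsock2.h>
#include <windows.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    // Without SO_BROADCAST, sendto() to a broadcast address fails.
    BOOL enable = TRUE;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&enable, sizeof(enable));

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(27015);
    dest.sin_addr.s_addr = INADDR_BROADCAST;   // 255.255.255.255: local subnet only

    const char announce[] = "MYGAME 1.0 \"My server name\"";
    for (;;) {
        sendto(s, announce, (int)sizeof(announce), 0,
               (const struct sockaddr *)&dest, (int)sizeof(dest));
        Sleep(1000);   // re-announce every second so new clients find us quickly
    }
}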

Typically these game servers use a local UDP broadcast, which every client on the subnet receives and can process as long as it is listening for it.
Here is some sample client and server code I found that may be of interest to you: http://visual-c.itags.org/visual-c-c++/29424/
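In the same spirit, here is a rough sketch of the client side that pairs with the broadcast sender above (it binds the same made-up port, 27015, and prints every announcement it hears together with the sender's address):
// Sketch: listen for server broadcasts and print each one heard.
#include <winsock2.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(27015);          // same port the servers broadcast to
    local.sin_addr.s_addr = INADDR_ANY;
    bind(s, (const struct sockaddr *)&local, (int)sizeof(local));

    char buf[512];
    struct sockaddr_in from;
    int fromlen;
    for (;;) {
        fromlen = (int)sizeof(from);
        int n = recvfrom(s, buf, (int)sizeof(buf) - 1, 0,
                         (struct sockaddr *)&from, &fromlen);
        if (n <= 0)
            continue;
        buf[n] = '\0';
        // from.sin_addr is the advertising server; add it to the server list.
        printf("Server at %s says: %s\n", inet_ntoa(from.sin_addr), buf);
    }
}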

I think there are two possible ways to do this.
1. Make a "lobby" server that both game servers and clients connect to, so they can find each other through it.
2. Have servers broadcast UDP packets, while clients listen and maintain a list of the servers they have heard from.
If you need a quick and easy way, the 2nd option is great, but remember that most of the UDP broadcast packets are wasted, since each client only needs one of them to discover a server.
The 1st option is a more general and extensible solution to this problem. However, it takes more time to design and implement.

First off, I suggest you get Wireshark for any networking development. It will show you what packets go over the wire, and it will let you see how other games do this, since there are many ways of doing it.
Using a UDP broadcast is one way. Simply change the last byte of the target IP to 255 (e.g. 192.168.0.255, assuming a /24 subnet) and you should be OK.

Related

Serial COM port data over WebRTC

I'm currently looking at options to allow me to build a remote COM-port solution.
The idea is to be able to access, from my remote PC, another PC that is directly connected to a device locally via its serial COM port.
I know that the obvious answer is to use a VPN between the 2 Internet-connected PCs.
However, I need this solution to be as seamless to the end-user as possible.
i.e. no installing and configuring VPN software, etc.
So I was thinking that WebRTC would be great because the end-user can simply use their web-browser and not have to install any additional software.
My question is, is it possible to stream the COM port data between the 2 PCs via WebRTC?
If so, can you please point me in the right direction as to how I can go about achieving this?
Sorry if this is a ridiculous question, I'm very new to WebRTC, just exploring my options.
Thanks.
That should work great!
Networking wise you get NAT Traversal. That means the two computers can be in completely different networks, and still communicate. You may have to run a TURN server if P2P isn't possible.
Data-wise you can exchange anything you want via data channels. They are message based and can carry binary data. You can watch the channel's bufferedAmount (and its bufferedamountlow event) to see how much is still queued, which lets you detect backpressure.
Are you OK with installing software on the remote host? You could do something like Pion WebRTC's data-channels example, which shows a browser connecting to a Go process via WebRTC. Then use tarm/serial on the remote host to talk to the device.
If you want a browser on both ends, there is the Web Serial API, though I haven't used it myself. That locks you into Chromium-based browsers only, which might be an issue.

Real-world cross-platform decentralized asynchronous peer-to-peer communication

My knowledge of network programming is limited, so all comments are more than welcome. Essentially, my question boils down to the following:
Q1. Is there really such a thing as decentralized asynchronous cross-platform peer-to-peer communication?
Let me explain myself.
If we have two http servers running on computers with actual IP addresses, then clearly the answer is yes, assuming one writes a protocol for the interaction.
To go one step further, if one of them (or both) is behind a router, then with port forwarding the communication can still be established. However, here the problems start, because if someone wants to run such a server in the background, say on a mobile phone, the app relying on that server really only works when one is at home (we cannot really expect to request port forwarding everywhere we go).
But even beyond that,
Q2. Do mobile phones obtain a public IP address from telecommunication companies when someone is not using Wi-Fi?
If this is true, then clearly one can have cross-platform asynchronous peer-to-peer communication, at the expense of not using Wi-Fi, by running an HTTP server on the smartphone. (I understand that this is not convenient, but it is certainly doable.)
Concluding, the two (perhaps there are more) relevant questions that I can think of are:
Q3. How does Skype really work?
Q4. How does Viber really work?
Based on the answer for Skype, it says: if the caller, the callee, or both do not have a public IP, then they send voice traffic to another online Skype node over UDP or TCP.
So it appears that there is no direct communication in Skype in that case, because the two parties have to go through an intermediary node.
Regarding Viber, I could not find a good, thorough answer to this particular question. Do people talk to each other through a centralized Viber server, or do they establish a direct connection? Of course, if they do establish a direct connection, then I really want to know how they manage such a thing, since a mobile phone may or may not have a public IP address. How is a Viber message routed to my cell phone from a friend of mine even when Viber is not running and I am behind a router?
I guess the answer to Viber is really push notifications, but as far as I can understand, all the variations of push notifications rely on open connections, and then the servers of the applications send the notifications to the clients through such connection(s). So, this approach gives us the feeling that it is asynchronous, but essentially it is not. We are cheating, in the sense that there is a constantly open connection to a server, and moreover, as far as I can understand, the application server has to push the notification through that server. Schematically:
A > Central App Server > Central Server w/ open connection to my cellphone > me
So, this seems to be once again a centralized approach.
Honestly, the only approach that I can think of that is both decentralized and asynchronous (on mobile phones as well) is to run an HTTP server on every platform/device, but this comes at the expense of not using Wi-Fi and assumes that the telecommunication company really assigns a public IP address to every mobile phone (which I do not know to be true, do you?).
What about WASTE, darknets, F2Fs, etc.? Do they offer advantages in the sense of more direct asynchronous communication between the interested parties? Are there real-world applications (including on mobile phones) using such approaches for communication?
Really, this is not the actual problem that I would like to work on, but I would like to know what the state of the art is so that I can figure out how I can proceed from there. So, all comments are really more than welcome. If you have references for the state of the art I would like to know about them as well, but a brief description would also be nice.
I appreciate all your time and effort in advance.
You asked many questions, here is the beginning of the answers:
Q1: Yes. For example, take BitTorrent's very successful 10 million+ node network. Aside from the bootstrapping process, the protocol is entirely decentralized and asynchronous. See here for more info.
Q2: Yes! Go to www.whatismyip.com on your mobile telephone and you will see your assigned IP. However, you are likely to be heavily filtered (e.g. incoming traffic on port 80 is likely to be blocked).
Q3: It has elements of P2P and clever tricks to get around NAT issues - see here for more info.
Q4: I don't know.

Discovering free ports

I wrote a server application in Erlang and a client in C#. They communicate through 3 TCP ports. The port numbers are hardcoded; now I'd like to assign them dynamically. This is my first time doing network programming, so please pardon my inability to use proper terminology :-D
What I would like to do is make a supervisor which would accept a TCP connection from a client on a previously known port (say, 10000, or whatever), then find 3 free ports, start a server application on those 3 ports and tell the client those port numbers so client can connect to the server.
My particular problem is: how do I find 3 ports which are not in use? (clarification: which module:fun() to use to find a free port?)
My general problem is: I'm sure this kind of stuff (one server allocating ports and redirecting clients) is quite common network programming problem and there should be a bunch of (erlang-specific or general) resources about this, but I just don't have the terminology to google it out.
According to the Erlang documentation here, if the Port argument to the gen_tcp:listen/2 function is 0, the OS will assign any available port to the socket. That port can then be retrieved using inet:port/1.
You can therefore do something like this:
{ok, Listen} = gen_tcp:listen(0, Options),   % Options is your list of socket options
{ok, Port} = inet:port(Listen).
Just in case you didn't know: you don't have to allocate new ports for each client; it's perfectly fine to have all clients connect to the same ports.
UPDATE:
If there is a reason to allocate new ports for incoming clients, then it's far beyond your first "introduction to network programming" program.
Separate ports could mean you want to completely isolate the environments of different groups of clients; it's comparable to providing completely different IP addresses to connect to. If you want to write a simple ping-pong program, you don't need it, and I honestly believe you will never need such a solution in your whole life; that's how rarely it is needed.
Regarding CPU/port overhead: allocating ports and starting a server that listens on each new port is already far more overhead than accepting all clients on the same port.
You need to avoid commonly known ports (FTP, HTTP, SMTP, etc.), but I don't think there is any master list of which ports other software uses that you should avoid. I think your best bet is to come up with a range of ports you want to use, check at runtime whether anybody else answers (i.e. is already using the port) on the numbers you choose dynamically, and if not, issue that port to the client.
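For what it's worth, the runtime check that answer describes looks roughly like this (sketched in C here rather than Erlang): try to bind each candidate port in a range of your choosing and take the first one that succeeds.
// Sketch: find a free port by probing an arbitrary candidate range.
// Note the race: another process could still grab the port between this
// check and your server actually binding it, which is why the
// gen_tcp:listen(0, ...) trick above is usually the safer choice.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int find_free_port(int lo, int hi)
{
    for (int port = lo; port <= hi; port++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons((unsigned short)port);
        a.sin_addr.s_addr = INADDR_ANY;
        if (bind(s, (struct sockaddr *)&a, sizeof(a)) == 0) {
            close(s);
            return port;    // nothing else was bound to this port
        }
        close(s);
    }
    return -1;              // nothing free in the range
}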

Sending UDP packets over the Internet

I'm trying to learn some of the ins and outs of P2P/decentralized networks. My question is the following. Say I have two machines named comp1 and comp2. Now comp1 is set up on my home network behind a router, and comp2 is located at my office, also behind a router. Is it possible for me to send UDP packets back and forth across the Internet like this, assuming of course that the ports are forwarded properly? To offer more insight into what I'm investigating: I'm trying to figure out how a new node would discover existing nodes without the use of a central server.
Thank you!
Assuming, as you said, that the ports are forwarded correctly, you can send UDP packets between two clients that are each behind a router.
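For example, here is a bare-bones sketch with BSD sockets (203.0.113.5 and port 40000 are placeholders for comp2's public address and the UDP port its router forwards to it):
// Sketch: comp1 sends a datagram to comp2's public address; comp2 simply
// binds UDP port 40000 and calls recvfrom(). Addresses are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(40000);                       // forwarded on comp2's router
    inet_pton(AF_INET, "203.0.113.5", &peer.sin_addr);  // comp2's public IP

    const char hello[] = "hello from comp1";
    sendto(s, hello, sizeof(hello), 0,
           (struct sockaddr *)&peer, sizeof(peer));

    // comp1 can receive a reply on the same socket: comp2 just answers the
    // source address/port that the NAT rewrote onto the incoming packet.
    char buf[512];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&from, &fromlen);

    close(s);
    return 0;
}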
A good way of detecting clients on a local intranet may be to use multicast; however, this does not have widespread support on ISPs (at least here in the UK), so it cannot be relied upon across the Internet. Multicast is used by many device discovery platforms, such as mDNS (used in Apple's Bonjour).
http://en.wikipedia.org/wiki/Multicast
(It basically works by clients subscribing to groups, and then sending messages to that group)
I think the best way of discovering new clients over the Internet is to have one server which new clients contact to let it know they exist; the centralised server then tells all the other clients about you. This is used, for example, in P2P games such as Modern Warfare 2, and it is what the "trackers" do in the BitTorrent protocol.
This isn't entirely decentralised but it's probably the easiest to implement, and the most reliable.
To add to Dotmister's response: if the ports are not forwarded correctly (e.g. the router isn't statically configured to forward them), you will have to look into something like UDP hole punching. Either way, in order to discover a new node without some sort of central server you'll have to rely on some form of multicast.

How to retain one million simultaneous TCP connections?

I am to design a server that needs to serve millions of clients that are simultaneously connected with the server via TCP.
The data traffic between the server and the clients will be sparse, so bandwidth issues can be ignored.
One important requirement is that whenever the server needs to send data to any client it should use the existing TCP connection instead of opening a new connection toward the client (because the client may be behind a firewall).
Does anybody know how to do this, and what hardware/software is needed (at the least cost)?
What operating systems are you considering for this?
If you're using a Windows OS, something later than Vista, then you shouldn't have a problem with many thousands of connections on a single machine. I've run tests (here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html) with a low-spec Windows Server 2003 machine and easily achieved more than 70,000 active TCP connections. Some of the resource limits that affect the number of connections possible have been lifted considerably on Vista (see here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html), so you could probably achieve your goal with a small cluster of machines. I don't know what you'd need in front of those to route the connections.
Windows provides a facility called I/O Completion Ports (see: http://msdn.microsoft.com/en-us/magazine/cc302334.aspx) which allow you to service many thousands of concurrent connections with very few threads (I was running tests yesterday with 5000 connections saturating a link to a server with 2 threads to process the I/O...). Thus the basic architecture is very scalable.
If you want to run some tests then I have some freely available tools on my blog that allow you to thrash a simple echo server using many thousands of connections (1) and (2) and some free code which you could use to get you started (3)
The second part of your question, from your comments, is more tricky. If the client's IP address keeps changing and there's nothing between you and them that is providing NAT to give you a consistent IP address then their connections will, no doubt, be terminated and need to be re-established. If the clients detect this connection tear down when their IP address changes then they can reconnect to the server, if they can't then I would suggest that the clients need to poll the server every so often so that they can detect the connection loss and reconnect. There's nothing the server can do here as it can't predict the new IP address and it will discover that the old connection has failed when it tries to send data.
And remember, your problems are only just beginning once you get your system to scale to this level...
This problem is related to the so-called C10K problem. The C10K page lists a large number of good resources for addressing the problems you will encounter when you try to allow thousands of clients to connect to the same server.
I came across the APE Project a while back. It seems like a dream come true. They can support up to 100k concurrent clients on a single node. Spread them across 10 or 20 nodes, and you can serve millions. Perfect for RESTful applications. Might want to look deeper for any shared namespace. One drawback is that this is a standalone server, i.e. supplementary to a web server. This server is of course open source, so any cost is hardware/ISP related.
You cannot use UDP. If the client sends a request and you don't reply immediately, a router is going to forget the reverse route in 30 seconds or less, so your server will never be able to reply to the client.
TCP is the only option, and it, too, will give you headaches. Most routers are going to forget the route and/or drop the connection after a few minutes, so your client/server code is going to have to send "keep alives" fairly often.
I recommend setting up a "sniffer", to see how the phone companies are staying in touch with your smartphone for their "push" technology. Copy whatever they're doing, because that stuff works!
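On the keep-alive point: besides application-level pings (which work anywhere you control both ends), you can lean on the kernel's TCP keepalive and shorten its timers. A sketch using the Linux-specific socket options; the intervals are arbitrary:
// Sketch: enable TCP keepalive so an idle connection (and the NAT/firewall
// state along the path) keeps getting refreshed. Values are arbitrary.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void enable_keepalive(int sock)
{
    int on = 1;
    int idle = 60;      // seconds of idleness before the first probe
    int interval = 30;  // seconds between probes
    int count = 4;      // failed probes before the connection is declared dead

    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}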
As Greg mentioned, the problem you are describing is C10K (or rather "C1M" in your case).
I recently made a simple TCP echo server on Linux that scales very well with the number of sessions (only tested up to 200,000 though) by using the epoll queue. On BSD you have something similar called kqueue.
You can check out the code if you want to. Hope this helps and good luck!
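For anyone who wants to see the shape of that approach, here is a stripped-down sketch of an epoll-based echo server (not the poster's actual code; port 7777 is a placeholder and error handling is omitted):
// Sketch: single-threaded, non-blocking echo server using level-triggered epoll.
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7777);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);
    set_nonblocking(listener);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

    struct epoll_event events[1024];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(epfd, events, 1024, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                // Accept every pending connection and register it with epoll.
                int client;
                while ((client = accept(listener, NULL, NULL)) >= 0) {
                    set_nonblocking(client);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                }
            } else {
                // Echo whatever the client sent; close on EOF or error.
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) close(fd);       // closing removes it from epoll
                else        write(fd, buf, r);
            }
        }
    }
}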
EDIT: As noted in the comments below, my original assertion that there is a 64K limit based on the number of ports is incorrect; however, there is a 32K limit on the number of socket handles, so my suggested design is valid.
With a typical TCP/IP server design, you're limited in the number of simultaneous open connections you can have. The server has one listening port, and when a client connects to it the server makes an accept call, which creates a new socket handle for the rest of the connection.
To handle more than 64K simultaneous connections I think you need to use UDP instead. You only need one port for the server to listen on, and you need to manage the connections using a 32-bit client ID in the packet data instead of having a separate port for each client. The 32-bit client ID could be the client's IP address, and the client can listen on a known UDP port for messages coming back from the server. That port would be the only one that needs to be open on the firewall.
With this approach, your only limitation is how quickly you can handle and respond to UDP messages. With millions of clients, even sparse traffic could give you large spikes, and if you don't read the packets fast enough your input queue will fill up and you'll start dropping packets. The C10K page Greg points to will give you strategies for that.
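As a rough sketch of the single-socket scheme this answer describes (the header layout here is invented for the example, and a bare 32-bit client ID is trivially spoofable, so a real protocol would need authentication on top):
// Sketch: one UDP socket for everyone; the packet header carries a client ID
// so the server doesn't need a port (or a TCP connection) per client.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

struct msg_header {
    uint32_t client_id;   // identifies the client instead of a per-client port
    uint16_t opcode;      // application-defined message type
};

void serve(int udp_sock)
{
    char buf[1500];
    struct sockaddr_in from;

    for (;;) {
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(udp_sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n < (ssize_t)sizeof(struct msg_header))
            continue;                          // too short to be valid, drop it

        struct msg_header hdr;
        memcpy(&hdr, buf, sizeof(hdr));
        uint32_t client_id = ntohl(hdr.client_id);

        // Look up (or create) per-client state keyed by client_id, dispatch on
        // ntohs(hdr.opcode), and reply with sendto() to the address in 'from'.
        (void)client_id;
    }
}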
