Why do some internet providers block certain ports?

We published our game on a Russian server, and about 1% of players couldn't connect to the server on a 46xx port over raw TCP, even though they could load its HTML page over HTTP. Most of these players live in Germany, Israel, and similar countries.
Why is this? What policy decisions lie behind it? We discovered that for these users such ports (which are unassigned by IANA) are blocked. Does this mean such people cannot run Steam (and therefore play any game bought through it), play WoW, or use the many other modern games that run TCP over 4xxx ports?
Thank you.

ISPs have been known to filter certain ports for various reasons. Users should complain loudly to them (or switch providers) to send a signal that such filtering will not be tolerated. You can encourage your users to do so, but of course that doesn't solve your problem (or really answer your question).
Common reasons are:
- trying to block bittorrent traffic
- limit bandwidth usage (largely related to previous reason)
- security (mistaken)
- control (companies often don't want employees goofing off)
The easiest thing for you to do is run your game over port 443 (perhaps as an alternate). That's the HTTPS port, so it will generally not be blocked. And because HTTPS is encrypted, there's no way to inspect the stream to tell whether it's web traffic or something else, so you can run any data stream (encrypted or not) that you wish over it.
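As a minimal sketch of that idea (the port numbers are placeholders, and plain echo stands in for the real game protocol):

```python
# Accept the same protocol on the usual game port and on 443 as a fallback.
import socket, threading

def handle(conn):
    with conn:
        while data := conn.recv(4096):   # echo stands in for the real game protocol
            conn.sendall(data)

def serve(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))          # binding to 443 needs root/CAP_NET_BIND_SERVICE
    srv.listen(128)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

for port in (4660, 443):                 # game port, plus 443 for filtered networks
    threading.Thread(target=serve, args=(port,), daemon=True).start()
threading.Event().wait()                 # keep the main thread alive
```

The game client can then try the normal port first and silently fall back to 443 when the connection fails.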

That's precisely correct. In fact, most public sites block all ports by default except the ones they expect to carry traffic they actually want.
This is why many applications try to encapsulate their traffic over port 80, which can't be blocked as long as anyone wants HTTP traffic to flow.
Operators simply don't want any application they haven't approved running through their network. If you run a sensitive public server, you surely don't want anyone using your machine for apps you don't allow. A common target is applications that eat up bandwidth, such as BitTorrent, eDonkey, and Gnutella, as well as streaming, VoIP, and other high-bandwidth apps.

Related

Need to set up an RTMP stream from our server with multicast

I have a client with an audience of 1–2 thousand viewers, with daily streams drawing roughly that number of concurrent viewers.
I've got a server set up for their website etc., but I'm in the process of figuring out the best way to stream with OBS onto that server and then redistribute the stream to viewers (as an embed on the website).
From the calculations I did, serving that many concurrent viewers is very problematic, as it forces you onto a 10 Gbit link, which is very expensive; I would ideally like to fit within 1–2 Gbps if possible.
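For a rough sanity check of that calculation (the per-viewer bitrates are assumptions, not numbers from the post):

```python
# Back-of-the-envelope: concurrent viewers x per-viewer bitrate = uplink needed.
viewers = 2000
bitrate_mbps = 5                              # assuming ~5 Mbps per 1080p viewer
print(viewers * bitrate_mbps / 1000, "Gbps")  # 10.0 Gbps; ~5.0 at 2.5 Mbps (720p)
```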
A friend of mine recommended looking into multicast, which supposedly uses MUCH less bandwidth than regular live-streaming options. Is multicast doable? I've had an NGINX live stream set up on my server by a friend before, but I never looked into the config or whether multicast is supported there. Are there any other options? What would you recommend?
Also, this live stream isn't a high-profit/organisation type of deal, so pre-made services just don't make sense: they would easily cost 40+ dollars per stream, which is just too much for my client.
Thank you for any help!
Tom
Rather than multicast, P2P is the more practical solution on the Internet; it saves money, not bandwidth.
In HTML5 browsers especially, it's possible to use WebRTC DataChannels to transport the P2P data.
Multicast, by contrast, does not work across internet routers.
Multicast works by sending a single stream across the network to edge points where clients can 'join' the multicast to receive their own copy of the stream.
It requires that the network support multicast protocols and that the edges align with your users.
It is typically used when an operator has their own IP network, for services like IPTV, rather than for services over the internet.
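For illustration, this is roughly what 'joining' a group looks like from a client on a multicast-capable network; the group address and port below are made up, and none of this works across the public internet:

```python
# Join a multicast group and read the single shared stream (IGMP join).
import socket, struct

GROUP, PORT = "239.1.1.1", 5004               # placeholder group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)          # every joined client sees the one stream
    print(len(data), "bytes from", addr)
```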
For your scenario, you would usually use an origin server and a CDN. This reduces the load on your own server, as the video is cached on the network and multiple users can access the same 'chunks' of it.
You can see an AWS example for on-demand video here; other vendors and cloud providers have solutions too, so this is just an example:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-cloudfront-route53-video-streaming.html
You can find more complex on-demand and live tutorials as well, but they are likely more than you need: https://aws.amazon.com/cloudfront/streaming/
Exploring P2P may also be an option, as Winton suggests; some CDNs may leverage P2P technology internally.

Is there something I should be concerned about before port-forwarding my server?

I'm setting up my first server on a Raspberry Pi 4, but after reading some articles online I'm wondering whether my server is ready to be opened to the internet. I should preface this by saying I'm just an individual who would like to publish some programming projects on a site accessible from a browser.
Out of caution I designed a PHP page that checks the client IP and returns a 403 header until I give that user permission to access the site. Is that enough? Is it even necessary?
Also, are there ports that are safer to open than others?
You "can" open ports 80 and/or 443 for displaying webpages - depending on SSL certificates
I do it myself (not for web hosting) and restrict the open ports to certain IPs - my friends (not smart enough to levy an attack 😂). Though IPs are likely to change every so often and your firewall will need updating.
The key thing to remember is that anything is open to exploitation if it's not properly maintained and set up. Also, returning a 403 isn't a silver bullet.
An open FTP port (21), for example, would give a user access to the files on your device if proper authorisation isn't set up. Opening ports 80 and 443 gives users access to your web pages, but it exposes your device/network to DoS attacks or platform-level attacks. If there's a known exploit for your version of PHP, for your firewall/router, or possibly for the device itself, then an attacker will exploit it.
Hosting providers have layers upon layers of security and are constantly updating devices throughout their networks. Keeping your device and platform up to date will help, but it may be worth investing a little in a host instead (from about £4 a month).
There are loads more things I could touch on, but I'll leave it at that for now.
Edit after comment:
"My website is just a little project, I mean, who could casually target it?"
Strictly speaking, anyone. "Who would want to?" Again, anyone. Sure, you're a small target that wouldn't yield any useful data, but your device, once hacked, can be used as a DDoS zombie or a crypto-miner, and you probably wouldn't even realise.
"And also, can't I use whatever port I like, such as 6969 or 45688?"
Yes, strictly speaking, you can. You could tell your device to listen on that port and reply with the website data. To do this you would also need to append the port number to the URL, in the form www.example.com:6969. Though, again, this isn't a silver bullet: most security issues come not from port forwarding itself but from poor management/security and bugs in the components themselves. All a port forward does is say "oh, device X wants data on this port... here you go".
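As a minimal sketch combining both ideas from the question (a custom port plus an IP allowlist that 403s everyone else), with placeholder IPs and port:

```python
# Serve the current directory on a custom port, gated by a hand-kept allowlist.
from http.server import HTTPServer, SimpleHTTPRequestHandler

ALLOWED = {"203.0.113.7", "198.51.100.22"}    # your friends' current IPs

class GatedHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.client_address[0] not in ALLOWED:
            self.send_error(403)              # gates the app, not the rest of the device
            return
        super().do_GET()

HTTPServer(("0.0.0.0", 6969), GatedHandler).serve_forever()
```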
Another point: data sent to "well-known ports" (0–1023) tends to have its headers checked for irregularities by the firewall, which can discard any irregular packets. With a custom port the firewall doesn't really know what to expect, so it forwards the traffic anyway. Also, steer away from the "dynamic/private ports" (49152–65535); these are used as source ports, not destination ports.

Real-world cross-platform decentralized asynchronous peer-to-peer communication

My knowledge of network programming is limited, so all comments are more than welcome. Essentially my question boils down to the following:
Q1. Is there really such a thing as decentralized asynchronous cross-platform peer-to-peer communication?
Let me explain myself.
If we have two HTTP servers running on computers with public IP addresses, then clearly the answer is yes, assuming one writes a protocol for the interaction.
To go one step further, if one of them (or both) is behind a router, then with port forwarding the communication can still be established. However, here the problems start: if someone wants to run such a server in the background, say on a mobile phone, an app relying on it really only works while its user is at home (we cannot expect to request port forwarding everywhere we go).
But even beyond that,
Q2. Do mobile phones obtain an actual public IP address from telecommunication companies when not on Wi-Fi?
If so, then clearly one can have cross-platform asynchronous peer-to-peer communication, at the expense of not using Wi-Fi, by running an HTTP server on the smartphone. (I understand this is not convenient, but it is certainly doable.)
Concluding, the two (perhaps there are more) relevant questions that I can think of are:
Q3. How does Skype really work?
Q4. How does Viber really work?
Based on the answer for Skype, it says: if one of the parties, or both, does not have a public IP, they send voice traffic to another online Skype node over UDP or TCP.
So it appears there is no direct communication in Skype in that case, because the traffic has to pass through an intermediary node.
Regarding Viber, I could not find a thorough answer to this particular question. Do people talk to each other through a centralized Viber server, or do they establish a direct connection? If they do establish a direct connection, then I really want to know how they manage it, since a mobile phone may or may not have a public address. How is a Viber message routed to my cell phone from a friend of mine even when Viber is not running and I am behind a router?
I guess the answer for Viber is really push notifications, but as far as I can understand, all variations of push notifications rely on open connections: the application's servers send the notifications to the clients through such connection(s). So this approach gives the feeling of being asynchronous, but essentially it is not. We are cheating, in the sense that there is a constantly open connection to a server, and the application server has to push the notification through that server. Schematically:
A > Central App Server > Central Server w/ open connection to my cellphone > me
So, this seems to be once again a centralized approach.
Honestly, the only approach I can think of that is both decentralized and asynchronous (on mobile phones as well) is to run an HTTP server on every platform/device, but this comes at the expense of not using Wi-Fi and assumes that a telecommunication company really assigns a public IP address to every mobile phone (which I do not know to be true; do you?).
What about WASTE, darknets, F2F networks, etc.? Do they offer advantages in the sense of more direct asynchronous communication between the interested parties? Are there real-world applications (including on mobile phones) using such approaches for communication?
Really, this is not the actual problem I want to work on, but I would like to know the state of the art so that I can figure out how to proceed from there. So all comments are really more than welcome. If you have references for the state of the art, I would like to know about them as well, though a brief description would also be nice.
I appreciate your time and effort in advance.
You asked many questions; here is the beginning of some answers:
Q1: Yes. For example, take BitTorrent's very successful 10 million+ node network. Aside from the bootstrapping process, the protocol is entirely decentralized and asynchronous. See here for more info.
Q2: Yes! Go to www.whatismyip.com on your mobile phone and you will see your assigned IP. However, you are likely to be heavily filtered (e.g., incoming traffic on port 80 is probably blocked).
Q3: It has elements of P2P and clever tricks to get around NAT issues - see here for more info.
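To give a flavour of those tricks, here is a heavily reduced sketch of UDP hole punching; it assumes both peers have already learned each other's public (IP, port) mapping from some rendezvous server, and the address below is a placeholder:

```python
# Both peers run this concurrently; the first outbound packet opens a mapping
# in the local NAT, which then lets the other side's packets through.
import socket, time

LOCAL_PORT = 40000
PEER = ("198.51.100.9", 40001)     # other peer's public mapping, learned out of band

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for _ in range(10):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched, heard from", addr)
        break
    except socket.timeout:
        time.sleep(0.5)
```

This only works against NATs whose mappings don't change per destination; it fails on symmetric NATs, which is one reason Skype-style systems fall back to relay nodes.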
Q4: I don't know.

How to retain one million simultaneous TCP connections?

I am to design a server that needs to serve millions of clients that are simultaneously connected with the server via TCP.
The data traffic between the server and the clients will be sparse, so bandwidth issues can be ignored.
One important requirement is that whenever the server needs to send data to any client it should use the existing TCP connection instead of opening a new connection toward the client (because the client may be behind a firewall).
Does anybody know how to do this, and what hardware/software is needed (at the least cost)?
What operating systems are you considering for this?
If you're using a Windows OS, Vista or later, then you shouldn't have a problem with many thousands of connections on a single machine. I've run tests (here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html) with a low-spec Windows Server 2003 machine and easily achieved more than 70,000 active TCP connections. Some of the resource limits that affect the number of possible connections were lifted considerably in Vista (see here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html), so you could probably achieve your goal with a small cluster of machines. I don't know what you'd need in front of them to route the connections.
Windows provides a facility called I/O Completion Ports (see: http://msdn.microsoft.com/en-us/magazine/cc302334.aspx) which lets you service many thousands of concurrent connections with very few threads (I was running tests yesterday with 5,000 connections saturating a link to a server using 2 threads to process the I/O...). Thus the basic architecture is very scalable.
If you want to run some tests, I have some freely available tools on my blog that allow you to thrash a simple echo server with many thousands of connections (1) and (2), and some free code you could use to get started (3).
The second part of your question, from your comments, is trickier. If the client's IP address keeps changing and there's nothing between you and them providing NAT to give you a consistent address, then their connections will no doubt be terminated and need to be re-established. If the clients detect this connection teardown when their IP address changes, they can reconnect to the server; if they can't, then I would suggest the clients poll the server every so often so they can detect the connection loss and reconnect. The server can do nothing here, as it can't predict the new IP address, and it will only discover that the old connection has failed when it tries to send data.
And remember, your problems are only just beginning once you get your system to scale to this level...
This problem is related to the so-called C10K problem. The C10K page lists a large number of good resources for addressing the problems you will encounter when you try to allow thousands of clients to connect to the same server.
I came across the APE Project a while back. It seems like a dream come true: it can support up to 100k concurrent clients on a single node. Spread them across 10 or 20 nodes and you can serve millions. It's perfect for RESTful applications, though you might want to look deeper if you need any shared namespace across nodes. One drawback is that it is a standalone server, supplementary to a web server. The server is of course open source, so any cost is hardware/ISP related.
You cannot use UDP. If the client sends a request and you don't reply immediately, a router is going to forget the reverse route in 30 seconds or less, so your server will never be able to reply to the client.
TCP is the only option, and it, too, will give you headaches. Most routers are going to forget the route and/or drop the connection after a few minutes, so your client/server code will have to send keep-alives fairly often.
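For what it's worth, the kernel can also send those keep-alives for you at the TCP level; a sketch (the intervals are illustrative, and the TCP_KEEP* constants shown are Linux-specific):

```python
# Enable TCP keep-alive so NAT/firewall state along the path doesn't expire.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # probe after 60 s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)   # then every 30 s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)      # give up after 4 misses
sock.connect(("example.com", 4000))                             # placeholder server
```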
I recommend setting up a "sniffer", to see how the phone companies are staying in touch with your smartphone for their "push" technology. Copy whatever they're doing, because that stuff works!
As Greg mentioned, the problem you are describing is C10K (or rather "C1M" in your case).
I recently wrote a simple TCP echo server on Linux that scales very well with the number of sessions (though I only tested up to 200,000) by using the epoll queue. On BSD you have something similar called kqueue.
You can check out the code if you want to. Hope this helps and good luck!
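For readers who don't want to dig through the linked code, the core of the epoll pattern fits in a page; here it is sketched with Python's selectors module, which picks epoll on Linux and kqueue on BSD (the port is arbitrary):

```python
# Single-threaded echo server multiplexing all sessions over one epoll/kqueue loop.
import selectors, socket

sel = selectors.DefaultSelector()

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 7000))
srv.listen(1024)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():
        if key.fileobj is srv:
            conn, _ = srv.accept()        # new session: register it, no thread needed
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                conn.sendall(data)        # echo; a real server would buffer writes
            else:
                sel.unregister(conn)      # peer closed the connection
                conn.close()
```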
EDIT: As noted in the comments below, my original assertion that there is a 64K limit based on the number of ports is incorrect, however there is a 32K limit on the number of socket handles, so my suggested design is valid.
With a typical TCP/IP server design, you're limited in the number of simultaneous open connections you can have. The server has one listening port, and when a client connects the server makes an accept call, which creates a new socket for the rest of the connection (each connection is distinguished by the client's address and port, not by a new server-side port).
To handle more than 64K simultaneous connections I think you need to use UDP instead. You only need one port for the server to listen on, and you need to manage the connections using a 32-bit client ID in the packet data instead of having a separate port for each client. The 32-bit client ID could be the client's IP address, and the client can listen on a known UDP port for messages coming back from the server. That port would be the only one that needs to be open on the firewall.
With this approach, your only limitation is how quickly you can handle and respond to UDP messages. With millions of clients, even sparse traffic could give you large spikes, and if you don't read the packets fast enough your input queue will fill up and you'll start dropping packets. The C10K page Greg points to will give you strategies for that.
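A bare-bones sketch of that single-socket design, with the port and the 4-byte ID framing as assumptions for illustration:

```python
# One UDP socket for everyone; clients are told apart by an ID in each datagram.
import socket, struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))

clients = {}                              # client_id -> last seen (ip, port)

while True:
    data, addr = sock.recvfrom(2048)
    if len(data) < 4:
        continue                          # drop malformed datagrams
    client_id = struct.unpack("!I", data[:4])[0]
    clients[client_id] = addr             # refresh the mapping on every packet
    sock.sendto(data, addr)               # echo back; real logic goes here
```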

P2P network games/apps: Good choice for a "battle.net"-like matching server

I'm making a network game (1v1) that is P2P in-game: no need for a game server.
However, for players to be able to "find each other" without coordinating through another medium and entering IP addresses (as in the modem days of network games), I need a coordination/matching server.
I can't use regular web hosting because:
- The clients will communicate over UDP.
- Therefore I'll need to do UDP hole punching to get through NAT.
- That requires the server to talk UDP and to know each client's IP and port.
- As far as I know, with regular web hosting (PHP etc.) I can only get the client's IP address and can only communicate over TCP (HTTP).
Options I am currently considering:
- Use a hosting solution where my program can accept UDP connections. (Any recommendations?)
- UDPonNAT seems to do this, but it uses GTalk and requires each client to have a GTalk account, which probably makes it an unsuitable solution.
Any ideas? Thanks :)
First, let me say that this is well out of my realm of expertise, but I found myself very interested, so I've been doing some searching and reading.
It seems that the most commonly prescribed solution for UDP NAT traversal is to use a STUN server. I did some quick searches to see if there are any companies that will just straight-up provide you with a STUN hosting solution, but if there even were any, they were buried in piles of ads for simple web hosting.
Fortunately, it seems there are several STUN servers that are already up and running and free for public use. There is a list of public STUN servers at voip-info.org.
In addition, there is plenty more information to be had if you explore SO questions tagged "nat".
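For the curious, a STUN Binding Request is simple enough to issue by hand; the sketch below follows RFC 5389, using one commonly listed public server as an assumption:

```python
# Ask a STUN server "what do my IP and port look like from the outside?"
import os, socket, struct

STUN_SERVER = ("stun.l.google.com", 19302)   # a commonly listed public server
MAGIC_COOKIE = 0x2112A442

def stun_binding_request(server, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    txn_id = os.urandom(12)
    # Header: type=0x0001 (Binding Request), length=0, magic cookie, transaction ID
    sock.sendto(struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id, server)
    data, _ = sock.recvfrom(2048)
    msg_type, msg_len, _cookie = struct.unpack("!HHI", data[:8])
    assert msg_type == 0x0101 and data[8:20] == txn_id  # Binding Success Response
    # Walk the TLV attributes looking for XOR-MAPPED-ADDRESS (0x0020)
    pos = 20
    while pos < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:
            port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        pos += 4 + attr_len + (-attr_len % 4)            # attributes pad to 4 bytes
    return None

print(stun_binding_request(STUN_SERVER))                 # e.g. ('203.0.113.5', 54321)
```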
I don't see any other choice than to have a dedicated server running your code. The other solutions you propose are, shall we say, less than optimal.
If you start small, virtual hosting will be fine. Costs are pretty minimal.
Rather than a full-blown dedicated server, you could just get a cheap shared hosting service and have the application interface with a PHP page, which in turn interfaces with a MySQL database backend.
For example, Lunarpages has a $3/month starter package that includes 5 GB of space and 50 GB of bandwidth. For something this simple, that's all you should need.
Then you just have your application poll the web page for the list of games, and submit a POST request to add its own game to the list.
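The client side of that poll/POST flow needs nothing beyond the standard library; in Python it might look like this (the URL, form fields, and JSON response format are placeholder assumptions):

```python
# Poll a hosted page for the game list, and POST to announce a new game.
import json, urllib.parse, urllib.request

BASE = "http://example.com/games.php"         # placeholder endpoint

def list_games():
    with urllib.request.urlopen(BASE) as resp:
        return json.load(resp)                # assumes the page emits JSON

def announce_game(name, ip, port):
    form = urllib.parse.urlencode({"name": name, "ip": ip, "port": port}).encode()
    with urllib.request.urlopen(BASE, data=form) as resp:  # data= makes it a POST
        return resp.status == 200

announce_game("my-game", "203.0.113.5", 40000)
print(list_games())
```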
Of course, this method requires learning PHP and MySQL if you don't already know them. And if you do it right, you can have the PHP page enter a long-running loop that keeps the connection open and feeds updates to the client, rather than polling the page every few seconds and wasting a lot of bandwidth. That's well outside the scope of this answer, though.
Oh, and if you're looking for something absolutely free, search for a free PHP host; those exist too! Even with an ad-supported host, your app can just grab the page and ignore the ads when parsing the list of games. I know T35 used to be one of my favorites because their free plan doesn't meter space or bandwidth (it limits the per-file size to stop the service being used as a media share, but that shouldn't be a problem for PHP files). Of course, I think in the long run you'll be better off going with a paid host.
Edit: T35 also says "Free hosting allows 1 domain to be hosted, while paid offers unlimited domain hosting." So you can even just pay for a domain name and link it to them! I think in the short term, that's your best (cheapest) bet. Of course, this is all assuming you either know or are willing to learn PHP in order to make this happen. :)
There's nothing that every net connection will support. STUN is probably good; UPnP can work for this too.
However, it's rumored that most firewalls can be enticed to pass almost anything through UDP port 53 (DNS). You might have to argue with the OS about your access to that port, though.
Also, check out SIP; it's another protocol designed for this sort of thing. With the popularity of VoIP, there may be decent built-in support for it in more firewalls.
If you're really committed to UDP, you might also consider tunneling it over HTTP.
How about breaking the problem into two parts: make a game-matcher client (distinct from the game itself) that communicates via HTTP with your cheap/shared web host. All gamers who want to use the matchmaking function use this client. The matcher client then launches the actual game with the correct parameters (IP, etc.) after obtaining the info from your server.
The game then does the standard UDP NAT hole punching, as per your network code. The game doesn't actually need to know anything about the matcher client or matcher server, in the true sense of P2P (like torrents: once you can obtain your peers' IPs, you can even disconnect from the tracker).
That way, your problems become smaller.
An intermediate solution between hosting your own dedicated server and a strictly P2P networking environment is the Gnutella model. In that model, superpeers act like local servers: they have known IP addresses and are connected to (and thus have knowledge of) more clients than a typical peer. This still requires you to run at least one superpeer yourself, but it gives you the option of letting other people run their own superpeers.
