I understand the ping is there to help prevent proxies from closing idle connections. Are there any guidelines on how often to ping? Once every second? Every 10 seconds? Every minute? Does it even matter? A preliminary Google search gives me nothing, and the WebSocket spec only says what a ping is, not how often you should send one.
I know this is an old question, but I've also been searching for an answer. The previous answer does not mention the interval, so I searched through the code of some popular WebSocket frameworks. This isn't official, but at least it gives a starting point: this repo uses a 20-second interval. I'm not sure that's 100% correct, but it's better than "often" or "somewhat frequently".
The accepted answer to the following SO thread seems to answer your question pretty well:
Sending websocket ping/pong frame from browser
It sounds like you can ping fairly often (using your own custom ping/pong strings), and unless there are a ton of clients connected to your WebSocket server, the load on the system will be fairly minimal.
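If it helps, here is a minimal client-side sketch of the idea using the third-party Python websockets package; the URI is a placeholder and the 20-second interval is just borrowed from the framework mentioned above:

```python
import asyncio
import websockets  # third-party package: pip install websockets

async def keepalive(uri, interval=20):
    # Hold the connection open and send a protocol-level ping every
    # `interval` seconds; awaiting the returned future waits for the pong.
    async with websockets.connect(uri) as ws:
        while True:
            pong_waiter = await ws.ping()
            await pong_waiter
            await asyncio.sleep(interval)

asyncio.run(keepalive("ws://example.com/socket"))
```

Incidentally, if I remember correctly that library's built-in keepalive already defaults to a 20-second ping_interval, so in practice you may not need a manual loop at all.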
I am programming a toy example to do NAT traversal. Interested in how a widely used desktop application does that, I used Wireshark to try to analyze its traffic. After some study of the output I realized that server notifications (e.g., "new file added to your xxx folder") worked using some kind of Comet mechanism, with long-lived HTTP connections. But the thing that amazed me the most was that, despite the low traffic (1 HTTP GET and its response every minute), the TCP connection was never closed. I can assure you that the connection was not closed for at least 20 minutes.
So far, my understanding is that keeping a lot of long-lived TCP connections open at the same time quickly consumes the server's resources (mainly memory). So my question is: how do these kinds of applications manage to efficiently keep such a huge number of TCP and HTTP connections open at the same time for long periods? Do they use some special kind of server? Or is it only a matter of adding hardware to scale horizontally?
I googled a lot trying to find an answer, with no luck. Perhaps I am missing something pretty obvious.
Maybe you can take a look at epoll (Linux), kqueue (FreeBSD), libev, and libevent to get some ideas.
From epoll's Wikipedia page: it performs well "where the number of watched file descriptors is large". You can replace 'watched file descriptors' with TCP sockets.
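To make that concrete, here is a rough sketch of the event-driven approach in Python; the selectors module picks epoll on Linux and kqueue on FreeBSD/macOS under the hood (the port and echo behaviour are just for illustration):

```python
import selectors
import socket

# One thread watches thousands of sockets and only wakes up for the ones
# that actually have data, so idle connections cost almost nothing.
sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)        # echo back (a real server would queue unsent bytes)
    else:
        sel.unregister(conn)   # an empty read means the client closed the connection
        conn.close()

server = socket.socket()
server.bind(("0.0.0.0", 8000))
server.listen(1024)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _mask in sel.select():
        key.data(key.fileobj)  # call the handler we stored at register time
```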
So I'm writing a fairly simple game with very low networking requirements, I'm using TCP.
I'm unsure where to start in even defining/implementing a protocol for the client and server to use. I've been looking around and I've seen a few examples, for instance, Mojang's Minecraft, which uses a table of 'commands' that the client sends the server and the server sends the client, with numbers of arguments and such.
What's a good way to do this? I've heard complaints about Minecraft's protocol because if you overread by a byte you ruin the entire stream.
Game networking is a broad topic, and the answer depends on what type of problem you are solving. TCP may not even be the correct choice for you.
For example, character movement is typically sent with UDP. The reason is that character movement isn't critical to the operation of the game, so some loss of movement data is "acceptable". That may be why your character sometimes "jumps": some UDP packets were lost, or arrived severely out of order.
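As an illustration only (the address, port, and packet layout below are made up), fire-and-forget position updates over UDP might look something like this; a sequence number lets the receiver discard packets that arrive late or out of order:

```python
import socket
import struct

# Fire-and-forget position updates; no retransmission, no ordering.
SERVER = ("127.0.0.1", 9999)

def send_position(sock, seq, player_id, x, y):
    sock.sendto(struct.pack("!IIff", seq, player_id, x, y), SERVER)

def recv_position(sock, last_seq):
    data, _addr = sock.recvfrom(512)
    seq, player_id, x, y = struct.unpack("!IIff", data)
    if seq <= last_seq:
        return last_seq, None      # stale packet; just drop it
    return seq, (player_id, x, y)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_position(sock, seq=1, player_id=7, x=10.0, y=4.5)
```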
UDP is often argued to be the preferred protocol for networked games. So before you even get started, carefully consider whether you are picking the correct protocol.
Overall, I consider Glenn Fiedler's series on developing a networked game a fantastic read. I'd start here. He covers all of the basics of using UDP for gaming.
If you want to use TCP simply to get a handle on TCP, then Minecraft is a reasonable example. A known list of commands that can be sent back and forth is a simple way to start. However, as you stated, it is prone to some problems; that is more a matter of using the wrong protocol than of how it was developed.
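One common way to avoid the "overread by a byte and ruin the stream" problem is to length-prefix every message, so the reader always knows exactly how many bytes belong to the current command. A minimal sketch of that generic technique (this is not Minecraft's actual protocol):

```python
import struct

def send_msg(sock, payload: bytes):
    # 4-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    # TCP is a byte stream, so keep reading until we have exactly n bytes.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```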
Google "game networking library" and you'll get a bunch of results. GNE would be a good one to look at.
I guess it depends on what your game is, what its mechanics are, and what information is necessary. In any case, I think this Stack Exchange site, https://gamedev.stackexchange.com/, is better suited to answer your question.
Gamedev.net's networking forum has a great FAQ covering these sorts of questions and many others. However, to make this more than a 'go-there-look-at-that' answer, I'll suggest some small improvements you can make. When using TCP, delivery is guaranteed, but that comes at a speed cost, which is fine if you're not making an FPS; it does mean you need to get more out of the data you do send. A great way to do this is via deltas/differentials, that is, sending only the change in state rather than the entire game state (see the sketch below). You can also validate incoming packets for corrupt/anomalous data, over and above TCP's own checks, by predicting which values are plausible, and with the same prediction you can cut out even more data. But as others have said, this is a broad question and not suited to getting truly helpful answers.
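Here's a tiny, hypothetical sketch of the delta idea: diff the previous state snapshot against the current one and send only what changed.

```python
def state_delta(prev: dict, curr: dict) -> dict:
    # Send only the keys whose values changed since the last snapshot;
    # keys that vanished are sent as None so the client can drop them.
    delta = {k: v for k, v in curr.items() if prev.get(k) != v}
    delta.update({k: None for k in prev if k not in curr})
    return delta

prev = {"x": 10, "y": 4, "hp": 100}
curr = {"x": 10, "y": 5, "hp": 95}
print(state_delta(prev, curr))   # {'y': 5, 'hp': 95}
```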
As you're coding in Lua, the only library anyone uses is LuaSocket (though ZeroMQ is gaining ground).
You're really going to have several protocols going: TCP for data that must be received (e.g., server commands such as changemap or you_got_kicked, conversations, and such), and UDP for non-compulsory data, or data that quickly expires (e.g., character positions).
I am writing an application where the client side will be uploading data to the server through a wireless link.
The connection should be very reliable. The link is expected to break many times, and there will be many clients connected to the server.
I am confused whether to use TCP or reliable UDP.
Please share your thoughts.
Thanks.
RUDP is not, of course, a formal standard, and there's no telling if you will find existing implementations you can use. Given a choice between rolling this from scratch and just re-making TCP connections, I'd choose TCP.
To be safe, I would go with TCP just because it's a reliable, standard protocol. RUDP has the disadvantage of not being an established standard (although it's been mentioned in several IETF discussions).
Good luck with your project!
It's likely that both your TCP and RUDP links would be broken by your environment, so the fact that you're using RUDP is unlikely to help there; there will likely be times when no datagrams can get through...
What you actually need to make sure of is that a) you can handle the number of connected clients, b) your application protocol can detect reasonably quickly when you've lost connectivity with a client (or server) and c) you can handle the required reconnection and maintenance of cross connection session state for clients.
As long as you deal with b) and c) it doesn't really matter if the connection keeps being broken. Make sure you design your application protocol so that you can get things done in short batches; if you're uploading files, make sure that you're sending small blocks and that the application protocol can resume a transfer that was broken halfway through. You don't want to get 99% of the way through a 2 GB transfer, lose the connection, and have to start again.
For this to work your server needs some kind of client session state cache where you can keep the logical state of a client's connection beyond the life of the connection itself. Design from the start to expect a given session to include multiple separate connections. The session state should probably have some kind of timeout so that if the client goes away for a long time it doesn't continue to consume resources on the server, but, to be honest, it may simply be a case of saving the state off to disk after a while.
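As a purely illustrative sketch of that session-state idea (the session ids, file paths, and timeout value below are all invented), the server might track per-session upload progress so a reconnecting client can resume where it left off:

```python
import os
import time

SESSION_TIMEOUT = 3600   # drop session state after an hour of silence
sessions = {}            # session_id -> {"path": upload file, "last_seen": timestamp}

def resume_offset(session_id):
    # Tell a reconnecting client how many bytes we already have on disk.
    info = sessions[session_id]
    info["last_seen"] = time.time()
    return os.path.getsize(info["path"]) if os.path.exists(info["path"]) else 0

def append_chunk(session_id, chunk):
    # Each small block is appended as it arrives, so a broken connection
    # only costs us the block that was in flight.
    info = sessions[session_id]
    info["last_seen"] = time.time()
    with open(info["path"], "ab") as f:
        f.write(chunk)

def expire_idle_sessions():
    now = time.time()
    for sid in [s for s, i in sessions.items() if now - i["last_seen"] > SESSION_TIMEOUT]:
        del sessions[sid]
```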
In summary, I don't think the choice of transport matters and I'd go with TCP at least to start with. What will really matter is being able to manage your client's session state on the server and deal with the fact that clients will connect and disconnect regularly.
If you aren't sure, odds are that you should use TCP. For one thing, it's certain to be part of the network stack for anything supporting IP. "Reliable UDP" is rarely supported out of the box, so you'll have some extra support work for your clients.
I've just started dabbling in some game development and wanted to create a simple multiplayer game. Is it feasible to use HTTP as the primary communication protocol for a multiplayer game?
My game will not be making several requests a second, but rather a request every few seconds. The client will be a mobile device.
The reason I'm asking is, I thought it may be interesting to try using Tornado which reportedly scales well and supports non blocking requests and can handle "thousands of concurrent users".
So my client could make an HTTP request, and when the game server has something to tell it, it will respond to the request. I believe this illustrates what some people call the COMET design pattern.
I understand that working at the socket level has less overhead, but I am just wondering whether this would be feasible at all given my game's requirements. Or am I just thinking crazy?
Thanks in advance.
Q: Is it feasible to use HTTP as the primary communication protocol for a multiplayer game?
A: Using HTTP as a communication protocol may make sense for your game (probably not, but that's for you to decide). I have developed applications for Windows Mobile, BlackBerry, Android, and iPhone for just over 10 years, all the way back to CE 1.0. With that in mind, here is my opinion.
First I suggest reading RFC 3205 as Teddy suggested. It explains the reasons for my suggestions in detail.
In general HTTP is good because...
If you're developing a browser-based game (Flash or JavaScript, where you don't create the client), then use HTTP, because it's built in anyway and it's likely all you can use.
You can get HTTP server hosting with decent scripting super cheap almost anywhere.
There's a ton of tools available and lots of documentation
It's easy to get started
HTTP may be bad because...
HTTP introduces huge overhead in terms of bandwidth compared to a simple TCP service.
For example, Omegle.com sends 420 bytes of header data to deliver a 9-byte payload.
If you really need COMET / long polling, you'll waste a lot of time figuring out how to make your server talk right instead of working on what it says.
A steady stream of HTTP traffic may overload mobile devices in both processing and bandwidth, leaving you fewer resources to devote to your game's performance.
You may not think you know how to create your own TCP server, but it really is easy.
If you're writing the server AND the client, then I would go straight to TCP. If you already know Python, then use the Twisted networking library. You can get a simple server up in an hour or so just by following the tutorials.
Check out the LineReceiver example for a super simple server you can test with any telnet client.
http://twistedmatrix.com/projects/core/documentation/howto/servers.html
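For reference, a bare-bones version of that kind of server might look like this (the handler names and port are mine, not from the tutorial); run it and connect with "telnet localhost 8123" to try it:

```python
from twisted.internet import reactor, protocol
from twisted.protocols.basic import LineReceiver

class Echo(LineReceiver):
    def lineReceived(self, line):
        # Each newline-terminated line from the client ends up here.
        self.sendLine(b"you said: " + line)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8123, EchoFactory())
reactor.run()
```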
WRT:
"my client could make a HTTP Request, and when the game server has somethign to tell it, it will respond to the request."
This is not how HTTP is supposed to work. So, no, HTTP would not be a good choice here. HTTP requests time out if the response is not received within the timeout (60 seconds is a common default, but it depends on the specifics).
Please read RFC 3205: On the use of HTTP as a Substrate, which deals with this.
With the target platform being a mobile device (and the limited bandwidth that entails) HTTP wouldn't be the first tool I would pull out of the box.
If you just fancy playing with all this technology, then you could give it a go. Tornado seems like a reasonable choice, if the example on the web site is anything to go by. But any simple server-side web language would suffice for serving up the responses you need at the rate you have mentioned. The performance characteristics are likely to be irrelevant here.
The COMET method is when you leave an HTTP connection open over a long period. It is primarily there for 'server push' of data. But usually you don't need this; it's often much more straightforward to make repeated requests and handle the responses individually.
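If you do want to try the long-polling style with Tornado, a rough sketch might look like the following; the endpoint name, port, and 30-second wait are assumptions, and a real game loop would call new_event.notify_all() after updating latest_event. The client simply re-issues the request whenever it gets a 204 or a payload back.

```python
import datetime
import tornado.ioloop
import tornado.web
from tornado.locks import Condition

new_event = Condition()          # the game loop calls new_event.notify_all()
latest_event = {"msg": None}     # ...after updating this shared state

class PollHandler(tornado.web.RequestHandler):
    async def get(self):
        # Hold the request open until something happens, or give up after
        # ~30 seconds so the client just polls again.
        notified = await new_event.wait(timeout=datetime.timedelta(seconds=30))
        if notified:
            self.write(latest_event)
        else:
            self.set_status(204)   # nothing new; client should re-poll

if __name__ == "__main__":
    tornado.web.Application([(r"/poll", PollHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()
```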
Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
It's possible that a request made from the client took a particular path which happened not to work at that particular moment. These failures are unavoidable; they're simply a result of the internet, which is built upon unstable components and on top of which TCP can only provide a certain kind of guarantee.
Like someone else said: make sure that when a request hits your server, you'll be ready to reply. Everything else is out of your hands.
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along the pipeline, including from your PC to your ISP's router, which is a likely place to look first.
The bottom line is if you are having issues bringing Google.com up in your browser then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP application cannot generate the same sort of downtime experience completely on its own... "Test often" and "code defensively" are the key phrases to keep in mind.
Let's not forget browser bugs. They aren't nearly perfect applications themselves...
This problem/situation isn't only ASP-related; it covers the whole concept of keeping your apps up, informally called the "five nines" or "99.999% availability".
The Wikipedia article is here.
If you look up the five nines you'll find tons of useful information, which you can apply as needed to your apps.