Is it inefficient to open and close a WebSocket connection multiple times?

I'm building a game with realtime voice chat, for which I want to use WebSockets. My understanding is that WebSockets don't scale well, meaning they would require a lot of server infrastructure to handle. Although I plan to host the server through a serverless method (AWS), I still want to maximize efficiency to reduce cost. My idea was to minimize the number of WebSocket connections on the server by opening the connection only when voice chat is an option (i.e. in the game lobby) and closing it when voice chat is no longer an option, instead of keeping the connection open the whole time the user is in the game. Common sense tells me this is much more efficient than keeping the connection open for the duration of the game, but I thought I would ask whether my thought process is correct, since I am pretty new to using WebSockets. Is it inefficient to open and close a connection multiple times? Are there any drawbacks to this?
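As a purely illustrative sketch of that lobby-scoped idea (the wss://example.com/voice URL and the message handling are placeholders, not anything from the question), the client side might look roughly like this:

```typescript
// Hypothetical lobby-scoped voice connection: open on lobby entry, close on exit.
// The URL and message shapes are placeholders, not a real API.
let voiceSocket: WebSocket | null = null;

function enterLobby(lobbyId: string): void {
  // Open the connection only while voice chat is actually available.
  voiceSocket = new WebSocket(`wss://example.com/voice?lobby=${encodeURIComponent(lobbyId)}`);
  voiceSocket.binaryType = "arraybuffer"; // voice data will be binary
  voiceSocket.onmessage = (ev) => playAudioChunk(ev.data as ArrayBuffer);
}

function leaveLobby(): void {
  // Close cleanly so the server can release the connection immediately.
  voiceSocket?.close(1000, "left lobby");
  voiceSocket = null;
}

function playAudioChunk(chunk: ArrayBuffer): void {
  // Placeholder: decode and play the received audio however the game does it.
  console.log(`received ${chunk.byteLength} bytes of audio`);
}
```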
Also, as a side question, is it bad practice to send large payloads over WebSockets (e.g. a megabyte)? I want the client to be able to send an audio file to multiple other clients. Would it be best to stick with typical polling to handle this, or could I use WebSockets?
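For the side question, one common approach is to split a large payload into smaller binary frames rather than sending a single megabyte-sized message. A rough sketch, assuming an already-open WebSocket and an arbitrary 16 KiB chunk size:

```typescript
// Send a large audio blob as a series of smaller binary chunks.
// CHUNK_SIZE is an arbitrary example value, not a recommendation.
const CHUNK_SIZE = 16 * 1024; // 16 KiB per frame

async function sendAudio(socket: WebSocket, audio: Blob): Promise<void> {
  const buffer = await audio.arrayBuffer();
  for (let offset = 0; offset < buffer.byteLength; offset += CHUNK_SIZE) {
    socket.send(buffer.slice(offset, offset + CHUNK_SIZE));
  }
  // Tell the receiver the upload is complete (hypothetical message shape).
  socket.send(JSON.stringify({ type: "audio-complete", bytes: buffer.byteLength }));
}
```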
Thank you to all who can help!

Related

Does integrating WebRTC one-to-one audio/video calls affect the performance of a web application?

After learning about some of WebRTC's great features, I thought of using WebRTC one-to-one audio/video calls in my web application. The web application is for many organizations/entities of a category, which can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to view their details.
The purpose of using WebRTC is communication between clients and organizations, and also for daily inquiries from new people asking these organizations about products and services.
While going through articles online, I found that broadcasting, or one-to-many calls, requires very high bandwidth for users unless a media server is used.
So what is the case for one to one calls?
Will it affect the performance of the web application, or cause any critical problems, if several users are routinely making one-to-one audio/video calls to each other simultaneously?
The number of users will be very large, and users will record several entries daily as their routine work. That load is manageable and the application runs smoothly, but I am not sure about WebRTC, which is new to me. Will it require a very expensive hosting plan? Is using WebRTC suitable or advisable for this scenario?
WebRTC is by its nature peer-to-peer, meaning that the streaming data is handled client side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting happen on the client side and not on the server side. So you will be providing the pages, the client-side JS, and some data exchange (session negotiation signalling), but all in all it is not a huge amount of work. It should be easily handled without having to worry about your host machine being overworked.
All that said, here are the only performance concerns that could possibly affect your hosting server.
Signalling session startup, negotiation, and teardown. This is very minimal (only some JSON data at the beginning of a session). It should not be too much of a burden, but you should be aware that if 1000 sessions start at the same time, you will have a queue of messages to direct to the relevant parties. How you determine the parties, how you forward the messages, and what work you do server side could all affect performance. If written smartly (how you store sessions, how you forward messages, etc.) it should not be a terrible burden. This could easily be done with SignalR, since you are on ASP.NET, or with a separate server running Node.js (on the same box or not, it does not matter) if you so desired.
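To illustrate how small this signalling layer can be, here is a minimal relay sketch using Node.js and the ws package; the room handling and message shapes are assumptions made for the example, not part of the answer:

```typescript
import { WebSocketServer, WebSocket } from "ws";

// Minimal signalling relay: clients join a room, and any message they send
// is forwarded verbatim to the other members of that room.
const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  let room: string | null = null;

  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "join") {
      room = msg.room as string;
      if (!rooms.has(room)) rooms.set(room, new Set());
      rooms.get(room)!.add(socket);
      return;
    }
    // Forward SDP offers/answers and ICE candidates to the other peer(s).
    if (room) {
      for (const peer of rooms.get(room) ?? []) {
        if (peer !== socket && peer.readyState === WebSocket.OPEN) {
          peer.send(JSON.stringify(msg));
        }
      }
    }
  });

  socket.on("close", () => {
    if (room) rooms.get(room)?.delete(socket);
  });
});
```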
RTP TURN relay, if needed. This will probably run on a different server (or the same one as your hosting server if you want). For some connections a TURN server is needed, and any production-ready WebRTC solution should take this into account; there are good open-source TURN servers available. Bandwidth usage here could be very high, as RTP packets are sent to this server and then forwarded to the peer in the connection.
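For reference, pointing clients at a TURN server is just configuration on the RTCPeerConnection; the hostname and credentials below are placeholders:

```typescript
// Client-side: tell WebRTC about a STUN server and a TURN relay fallback.
// turn.example.com and the credentials are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: "turn:turn.example.com:3478",
      username: "demo-user",
      credential: "demo-pass",
    },
  ],
});
```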
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams but Chrome does not (they say it is in the works). You could use existing JS libraries to record the feeds client side and then push them wherever you want. You could also push all the data through a media server that will mux, demux, and forward the data to be recorded anywhere you like. The Janus-Gateway videoroom is a good lightweight example of a media server.
Client side is a different story.
There are higher-level concerns in the JavaScript. If you use one of the recording JS libraries, this is especially evident, as they do canvas captures numerous times a second, which is a heavy hit and degrades the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to modify the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute in the SDP) but not in Firefox, last I checked.
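The SDP attribute mentioned here is a bandwidth line; a hedged sketch of that kind of SDP munging (the cap value is only an example) could look like this:

```typescript
// Cap video bandwidth by inserting a "b=AS:<kbps>" line into the SDP
// before it is applied. The cap value is just an example.
function capVideoBitrate(sdp: string, kbps: number): string {
  const out: string[] = [];
  let inVideo = false;
  for (const line of sdp.split("\r\n")) {
    out.push(line);
    if (line.startsWith("m=")) inVideo = line.startsWith("m=video");
    // Place the bandwidth line right after the connection ("c=") line
    // of the video media section.
    if (inVideo && line.startsWith("c=")) out.push(`b=AS:${kbps}`);
  }
  return out.join("\r\n");
}
```

You would run the offer/answer SDP through something like this before calling setLocalDescription/setRemoteDescription.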

Websockets for background processing

Is it a good idea to use WebSockets (comet, server push, ...) to overcome a problem with long-running HTTP requests? Imagine you have an app built on a full-stack web framework, like Django or Rails. You want to do some background processing in the name of performance. That's easy to do from the programmer's perspective, but the problem arises in the UI.
Users demand immediate response. So my idea was to use Socket.IO + node.js + AMQP messaging to push notifications back to browsers once the background task completes. I like the idea, but it still feels like a lot of engineering, just because we don't want long-running requests in our main app. A competing idea would be to use another, more robust web server that can handle many long-running HTTP requests.
Which one do you think is better?
Is it a good idea to use WebSockets to overcome a problem with long-running HTTP requests?
Yes, it is. You can save a significant amount of data compared to other techniques, such as continuous or long polling. Try to look at this article, namely the Step 3 part.
I like the idea, but it still feels like lots of engineering, just because we don't want long-running requests in our main app. A competing idea could be to use another, more robust web server that can handle many long-running HTTP requests.
Socket.io abstracts the transport layer and fallback solutions (in case WebSockets are unavailable) for you. If you want to use the socket.io/node.js/AMQP stack only for messaging and notifications, it shouldn't be a complex or time-consuming development process, though it may depend on various things around it.
By delegating messaging/notifications to node.js you may disburden your main app to a great extent thanks to its non-blocking architecture, although you will introduce a dependency on another technology.
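A rough sketch of that delegation, using Socket.IO together with the amqplib client (the queue name, room naming, and message shape are assumptions for the example):

```typescript
import { Server } from "socket.io";
import amqp from "amqplib";

// Socket.IO server that browsers connect to; each user joins a room named
// after their user id so completed-task notifications can be targeted.
const io = new Server(3000);
io.on("connection", (socket) => {
  const userId = socket.handshake.query.userId as string;
  if (userId) socket.join(userId);
});

// Consume "task finished" events published by the main app via AMQP
// and push them to the right browser over the socket.
async function run(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("task-completed");
  await ch.consume("task-completed", (msg) => {
    if (!msg) return;
    const { userId, taskId } = JSON.parse(msg.content.toString());
    io.to(userId).emit("task-completed", { taskId });
    ch.ack(msg);
  });
}

run().catch(console.error);
```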
On the other hand, choosing a more performant web server may solve your performance concerns for some time, but you may eventually end up scaling your system anyway (either up or out).
WebSockets in themselves provide little here over e.g. XHR or jsonp long polling. From the user's perspective, messaging over either transport would feel the same. From the server's perspective, an open WebSocket connection or an open long poll isn't violently different.
What you're really doing, and should be doing regardless of the underlying technology, is building your application to be asynchronous and event-driven.

TCP vs Reliable UDP

I am writing an application where the client side will be uploading data to the server through a wireless link.
The connection should be very reliable. The link is expected to break many times, and there will be many clients connected to the server.
I am confused whether to use TCP or reliable UDP.
Please share your thoughts.
Thanks.
RUDP is not, of course, a formal standard, and there's no telling whether you will find existing implementations you can use. Given a choice between rolling this from scratch and just re-making TCP connections, I'd choose TCP.
To be safe, I would go with TCP just because it's a reliable, standard protocol. RUDP has the disadvantage of not being an established standard (although it's been mentioned in several IETF discussions).
Good luck with your project!
It's likely that both your TCP and RUDP links would be broken by your environment, so the fact that you're using RUDP is unlikely to help there; there will likely be times when no datagrams can get through...
What you actually need to make sure of is that a) you can handle the number of connected clients, b) your application protocol can detect reasonably quickly when you've lost connectivity with a client (or server) and c) you can handle the required reconnection and maintenance of cross connection session state for clients.
As long as you deal with b) and c), it doesn't really matter if the connection keeps being broken. Make sure you design your application protocol so that you can get things done in short batches; if you're uploading files, send small blocks and make sure the application protocol can resume a transfer that was broken halfway through; you don't want to get 99% of the way through a 2 GB transfer, lose the connection, and have to start again.
For this to work, your server needs some kind of client session state cache where you can keep the logical state of a client's connection beyond the life of the connection itself. Design from the start to expect a given session to include multiple separate connections. The session state should probably have some kind of timeout so that if the client goes away for a long time it doesn't continue to consume resources on the server; to be honest, it may simply be a case of saving the state off to disk after a while.
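A minimal sketch of such a session cache; the state shape and the resume handshake are invented for illustration:

```typescript
// Per-client logical session state that outlives any single TCP connection.
interface UploadSession {
  bytesReceived: number; // how much of the current file has arrived
  totalBytes: number;    // expected file size
  lastSeen: number;      // for timing out abandoned sessions
}

const sessions = new Map<string, UploadSession>();
const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // example: 30 minutes

// Called when a client (re)connects: tell it where to resume from.
function resumeOffset(sessionId: string, totalBytes: number): number {
  const existing = sessions.get(sessionId);
  if (existing && existing.totalBytes === totalBytes) {
    existing.lastSeen = Date.now();
    return existing.bytesReceived;
  }
  sessions.set(sessionId, { bytesReceived: 0, totalBytes, lastSeen: Date.now() });
  return 0;
}

// Called as each small block arrives on whatever connection is currently open.
function recordBlock(sessionId: string, blockLength: number): void {
  const s = sessions.get(sessionId);
  if (!s) return;
  s.bytesReceived += blockLength;
  s.lastSeen = Date.now();
}

// Periodically drop sessions that have been idle too long.
setInterval(() => {
  const now = Date.now();
  for (const [id, s] of sessions) {
    if (now - s.lastSeen > SESSION_TIMEOUT_MS) sessions.delete(id);
  }
}, 60 * 1000);
```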
In summary, I don't think the choice of transport matters and I'd go with TCP at least to start with. What will really matter is being able to manage your client's session state on the server and deal with the fact that clients will connect and disconnect regularly.
If you aren't sure, odds are that you should use TCP. For one thing, it's certain to be part of the network stack for anything supporting IP. "Reliable UDP" is rarely supported out of the box, so you'll have some extra support work for your clients.

How to tunnel multi-player game data over HTTP with IIS to minimize latency?

I would like to create a browser-based network game. To ensure it can be used by as many people as possible, I'd like to embed all the traffic in standard HTTP packages.
Assuming I use IIS as my back end, how should I code this to minimize latency?
Is it reasonable to start with an ASP application of some kind (ASP.NET MVC perhaps) using shared state in memory? Or should I plan from the outset on writing some kind of IIS plugin of my own? Or should I abandon IIS and write a custom server instead?
Is it reasonable to start with each client repeatedly querying, say ten times per second, or should I make the data more stream-like somehow (and if so how)?
This can work just fine, but you're going to have to do something less "conventional."
To make this work, don't do individual requests; keep the request open and then transmit data over the open connection.
You could try using a protocol like Bayeux, but there are no rules here.
Here's an example with ASP.NET using COMET.
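Since that linked example may not be available, here is a hedged client-side sketch of the same general idea using long polling, one common COMET variant; the /poll endpoint is hypothetical:

```typescript
// Long polling: the server holds the request open until it has game events
// (or a timeout elapses), then the client immediately issues the next request.
async function pollLoop(handle: (events: unknown[]) => void): Promise<void> {
  while (true) {
    try {
      const res = await fetch("/poll", { cache: "no-store" });
      if (res.ok) handle(await res.json());
    } catch {
      // Back off briefly on network errors before re-polling.
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```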
Designing an app to hit a web server 10 times a second is not a good idea. Web servers are designed for less frequent client requests, and probably for larger processing times and response sizes, than your game will be using. That's not to say a web server wouldn't be able to cope, just that it would not be an efficient client-server match.
For any type of app that requires multiple packets per second you really should think about a lighter protocol than HTTP, which is fairly verbose. For example, if your game needs to send 4 bytes to register a user's battleship coordinates, you don't really want to transmit an extra few hundred bytes of HTTP headers.
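To make the size difference concrete, packing such a move into raw bytes could look like this (the 4-byte field layout is made up for the example):

```typescript
// Pack a battleship move into 4 bytes: player id, x, y, and a flags byte.
function encodeMove(playerId: number, x: number, y: number, flags: number): ArrayBuffer {
  const buf = new ArrayBuffer(4);
  const view = new DataView(buf);
  view.setUint8(0, playerId);
  view.setUint8(1, x);
  view.setUint8(2, y);
  view.setUint8(3, flags);
  return buf;
}
```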
I'd recommend a browser plugin technology like Silverlight or Flash. Both of those support TCP socket connections. You would need to write your own server to sit at the other end of the TCP socket. With that approach you'd also have the advantage of being able to push data out to the clients without having to rely on the client polling mechanisms that HTTP requires, e.g. AJAX.
One problem you will face with standard HTTP messages is the size (and lack of control) over the header data.
I was involved in the design of a flash game which was played by several million people. We needed to communicate with the server every few seconds and ended up using ultra-lightweight JSON messages to save on bandwidth and keep it snappy.
Size of JSON message: 16 bytes
Size of HTTP Header: 200+ bytes
HTTP is not really a good protocol for high traffic, low latency requirements. It was designed for larger, less frequent request/response models and has status codes like 304 Not Modified for this very reason.
You'll probably want to drop down to a custom TCP/IP implementation.

How do I connect a pair of clients together via a server for an online game?

I'm developing a multi-player game and I know nothing about how to connect one client to another via a server. Where do I start? Are there any whizzy open source projects which provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded socket code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I would have a Flash or Qt front end and a Java server, but that may be a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server to get updates, so you could be looking at a few hundred ms to get from one client to another.
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
Typically game engines send UDP packets because of latency. The fact is that TCP is just not fast enough and reliability is less of a concern than speed is.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory depending on number of expected players. Finally, they have a large amount of payload overhead that you just don't need (xml anyone?).
There are several ways to go about this. One way is centralized messaging (client/server). This means you would have a Java server listening for UDP packets from the clients; it would then rebroadcast them to the relevant users.
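A minimal sketch of that centralized relay using Node's dgram module; the join message and world grouping are invented for the example:

```typescript
import dgram from "node:dgram";

// Centralized relay: remember which address/port pairs belong to each world
// and rebroadcast every incoming packet to the other members.
interface Peer { address: string; port: number; }
const worlds = new Map<string, Peer[]>();
const server = dgram.createSocket("udp4");

server.on("message", (msg, rinfo) => {
  // A real protocol would carry the world id in every packet header;
  // here we just assume the first packet is a "join:<world>" message.
  const text = msg.toString();
  if (text.startsWith("join:")) {
    const world = text.slice(5);
    const peers = worlds.get(world) ?? [];
    peers.push({ address: rinfo.address, port: rinfo.port });
    worlds.set(world, peers);
    return;
  }
  // Rebroadcast game packets to every other peer that shares a world
  // with the sender (simplified: scan all worlds).
  for (const peers of worlds.values()) {
    if (!peers.some((p) => p.address === rinfo.address && p.port === rinfo.port)) continue;
    for (const p of peers) {
      if (p.address === rinfo.address && p.port === rinfo.port) continue;
      server.send(msg, p.port, p.address);
    }
  }
});

server.bind(41234); // example port
```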
A second way is decentralized (peer-to-peer). A client registers with the server to state which game/world it's in, and from that it gets a list of the other clients in that world. The server maintains that list and notifies the other clients when people join or drop out.
From that point forward, clients send UDP packets directly to the other users.
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
Official web-site is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server or, if you want a Java implementation, Wowza or Red5. Those use AMF and provide native functionality for SharedObjects, including syncing of the SharedObjects among connected clients.
Those aren't peer to peer though (yet, it's coming soon I hear). They use centralized messaging managed by the server.
Good luck
