I want to use a client-server protocol to push data to clients which will always remain connected, 24/7.
HTTP is a good general-purpose client-server protocol. The semantics wouldn't be very different under any other protocol, and many good HTTP servers exist.
The critical factor is the number of connections: the application will gradually scale up to a very large number of clients, say 100,000. They cannot be servers because they have dynamic IP addresses and may be behind firewalls. So, a socket link must be established and preserved, which leads us to HTTP push. Only rarely will data actually be pushed to a given client, so we want to minimize the connection overhead too.
The server should handle this by accepting the connection, inserting the remote IP and port into a table, and leaving it idle. We don't want 100,000 threads running, just that many table entries and file descriptors.
Is there any way to achieve this using an off-the-shelf HTTP server, without writing at the socket layer?
Use Push Framework: http://www.pushframework.com.
It was designed for exactly this goal: managing a large number of long-lived, asynchronous, full-duplex connections.
LightStreamer (http://www.lightstreamer.com/) is a tool made specifically for HTTP push operations.
It should solve this problem.
You could also look at Jetty + Continuations.
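To make the "suspend the request, hold no thread" idea concrete, here is a minimal sketch using the standard Servlet 3.0 async API, which Jetty supports (and which superseded its proprietary Continuations). The servlet path, the clientId parameter, and the pending-connections map are illustrative assumptions, not part of any framework:

```java
// Minimal sketch: park each client's request without a dedicated thread,
// then push to it later. The "pending" table and push trigger are
// illustrative assumptions.
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

@WebServlet(urlPatterns = "/push", asyncSupported = true)
public class PushServlet extends HttpServlet {
    // clientId -> suspended request; this is the "table" from the question
    private static final ConcurrentHashMap<String, AsyncContext> pending =
            new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // detach from the worker thread
        ctx.setTimeout(0);                   // no timeout; hold indefinitely
        pending.put(req.getParameter("clientId"), ctx);
    }

    /** Called by application code when there is data for a client. */
    public static void pushTo(String clientId, String data) throws IOException {
        AsyncContext ctx = pending.remove(clientId);
        if (ctx != null) {
            ctx.getResponse().getWriter().write(data);
            ctx.complete();                  // finish the long-poll response
        }
    }
}
```

Each parked AsyncContext is just a table entry plus a file descriptor, as the question asks; no thread is consumed while the connection idles.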
I am wondering about the difference between HTTP and TCP data transfer protocols for online games.
I have heard many people using TCP or UDP to transfer data between client and server for online games.
But can you use HTTP at all? I know HTTP is mostly used for web browsing, but if I set up a web server and let my game applications use GET and POST methods, I can still send data back and forth, right? Is this way of communicating too slow or unnecessary?
And just one thing about TCP transmission: if I were to write a gaming application using TCP, is the data usually transferred using something called "sockets" (like the Socket classes in Java)? What about UDP?
HTTP is an additional layer on top of TCP that defines what a request looks like, what a response looks like, and how the connection is closed or maintained across requests. You can either use it or not use it, depending on what you actually need to transport. If your game consists of a series of requests that each get a reply, HTTP might make sense. If it's more like unsolicited messages in each direction, making HTTP work is like putting a square peg in a round hole.
Most platforms provide a socket interface that allows you to work with either TCP or UDP depending on the protocol specified when the socket is allocated. Some higher-level APIs look completely different for different protocols.
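For example, in Java the two transports look like this (the host and port are placeholders):

```java
// TCP vs UDP through Java's standard socket APIs.
// "game.example.com" and port 4000 are placeholders.
import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TransportDemo {
    public static void main(String[] args) throws Exception {
        // TCP: connect once, then read/write a reliable, ordered byte stream
        try (Socket tcp = new Socket("game.example.com", 4000)) {
            OutputStream out = tcp.getOutputStream();
            out.write("MOVE 10 20\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }

        // UDP: no connection; each packet is addressed individually and may
        // arrive out of order, duplicated, or not at all
        try (DatagramSocket udp = new DatagramSocket()) {
            byte[] data = "POS 10 20".getBytes(StandardCharsets.UTF_8);
            udp.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("game.example.com"), 4000));
        }
    }
}
```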
I'm creating an IoT device + server system using the .NET Micro Framework and ASP.NET WebAPI (probably in Azure).
The IoT device needs to be able to frequently update the server with stats whilst also being able to receive occasional incoming commands from the server that would change its behaviour. In this sense, the device needs to act as both client and server itself.
My concern is striking the best balance between the security of the device and the load on the server. Furthermore, there must be relatively low latency between the server issuing a command and the device carrying it out: on the order of a few seconds.
As I see it my options are:
1. Upon connection to the internet, the device establishes a persistent TCP connection to the server, which is then used for both polling and receiving commands.
2. The device listens on a port (e.g. via HttpListener) for incoming commands while updating the server via frequent HTTP requests.
3. The device only ever polls the server with HTTP requests. The server uses the responses to give the device commands.
The second option seems the least secure, as the device would have open incoming ports. The first option looks the most difficult to implement reliably, as it would require low-level socket programming. The third option seems easy and secure, but due to the latency requirement the device would need to poll every few seconds, which hurts the scalability of the system.
So at what frequency does HTTP polling create more overhead than just constantly keeping a TCP connection open? 5s? 3s? 1s? Or am I overstating the overhead of keeping a TCP connection open in ASP.NET? Or is there a completely different way that this can be implemented?
So at what frequency does HTTP polling create more overhead than just constantly keeping a TCP connection open? 5s? 3s? 1s?
There is nothing you need to do to keep a TCP connection open. The only thing you might need is TCP keep-alive (which has nothing to do with HTTP keep-alive!) if you want to keep the connection idle (i.e. no data to send) for a long time.
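In Java, for instance, this is a single socket option (the host and port are placeholders):

```java
// Enabling OS-level TCP keep-alive probes on an otherwise idle connection.
// Note: the probe interval/count are configured in the OS, not per socket.
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("server.example.com", 5000)) {
            s.setKeepAlive(true); // TCP keep-alive, unrelated to HTTP keep-alive
            // ... exchange data whenever there is something to send ...
        }
    }
}
```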
With HTTP, the overhead already starts with the first request, since your data must be encapsulated in an HTTP message. This overhead can be comparatively small if the message is large, or easily larger than the message itself for small messages. Also, HTTP servers close the TCP connection after some idle time, so you might need to re-establish the TCP connection for the next data exchange, which again costs overhead and latency.
HTTP has the advantage of passing through most firewalls and proxies, while plain TCP does not. You also get encryption more or less for free with HTTPS, i.e. there are established standards for direct encrypted connections and for tunneling through a proxy.
WebSockets are something in between: you make an HTTP request and then upgrade the connection to WebSocket. The initial overhead is thus as large as for HTTP, but for subsequent messages the overhead is not much higher than for plain TCP. You can also run WebSockets over HTTPS (i.e. wss:// instead of ws://). They pass through most simple firewalls and proxies, but deep-inspection firewalls might still have trouble with them.
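As a sketch of the pattern with the JDK 11+ built-in client: one HTTP request performs the upgrade, after which both sides exchange cheap framed messages (the wss:// URL and JSON payload are placeholders):

```java
// WebSocket: HTTP handshake once, then lightweight frames in both directions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class WsDemo {
    public static void main(String[] args) {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://server.example.com/device"),
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onText(WebSocket webSocket,
                                    CharSequence data, boolean last) {
                                // server can push commands at any time
                                System.out.println("command: " + data);
                                return WebSocket.Listener.super.onText(
                                        webSocket, data, last);
                            }
                        })
                .join(); // handshake done: HTTP-sized overhead, paid once

        ws.sendText("{\"temp\": 21.5}", true); // per-message overhead is small
    }
}
```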
Setting up a TCP listener will be a problem if your IoT device sits behind a NAT router, i.e. the usual setup in private or SoHo networks. To reach the device, one would need to open a tunnel at the router from outside the network, either by administering the router by hand or with UPnP (which is often switched off for security reasons). So you would introduce too many problems for the average user.
This means the option with the fewest problems for the customer is probably HTTP polling, but it is also the one with the highest overhead. WebSockets are still mostly compatible and have less overhead; even less overhead can be achieved with a plain TCP connection to the server. A TCP listener on the device, however, would cause too much trouble.
As for resources on the server side: each HTTP polling request might use a new TCP connection, but you can also reuse an existing one. You can thus choose between more overhead and latency on the client side (a new TCP connection for each request), which needs fewer resources on the server side, and less overhead and latency on the client side (multiple HTTP requests per TCP connection), which needs more resources on the server side. With WebSockets and plain TCP connections you always need more server-side resources, unless your client automatically re-establishes the connection on loss of connectivity.
These days you should use an IoT-specific communication protocol over TLS (e.g. TLS 1.2) for secure, lightweight connections. For example, AWS uses MQTT (http://mqtt.org/) and Azure uses AMQP (https://www.amqp.org/).
The idea is that you connect securely to a broker and then use a messaging protocol with topics to route messages to the proper devices. IBM has been using MQTT for a long time, and routers now typically come with port 8883 open, which is MQTT over TLS.
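As a rough sketch of that broker/topic pattern with the Eclipse Paho Java client (my choice of library; the broker URL, client id, and topic names are all placeholders):

```java
// Broker-based messaging: the device only makes outbound connections, yet
// can still receive commands via its subscription. Paho MQTT v3 client.
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DeviceClient {
    public static void main(String[] args) throws MqttException {
        // ssl:// on port 8883 = MQTT over TLS, as mentioned above
        MqttClient client = new MqttClient(
                "ssl://broker.example.com:8883", "device-42");
        client.connect();

        // Receive commands: the broker routes anything published on this topic
        client.subscribe("devices/device-42/commands", (topic, msg) ->
                System.out.println("command: " + new String(msg.getPayload())));

        // Report stats: publish to a topic the server side subscribes to
        client.publish("devices/device-42/stats",
                new MqttMessage("{\"temp\": 21.5}".getBytes()));
    }
}
```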
Simply use SignalR to connect the client and server. It gives you minimal latency without polling, and the API is very simple to use.
Physically, this runs over WebSockets, which scale to a large number of concurrent connections. If you don't need more than 100k connections per Windows server, this should not be a concern.
Apparently, I don't get truly parallel reads of different URLs from the same server, even when issuing genuinely concurrent requests over multiple physical interfaces (NICs).
I think the problem could be that HTTP is connection-oriented, so requests are serialized at a lower level in the TCP/IP stack (is this the correct wording?).
Does it make sense to attempt to 're-implement' a high-level HTTP request with a connectionless scheme like UDP, handling packet addressing myself, to speed up streaming?
HTTP requests are independent. They can be issued over arbitrarily many independent connections. HTTP does not impose any limits regarding concurrency.
You have hit some resource limit. Maybe your client library restricts the number of concurrent calls. Maybe the server does. Maybe the network is fully utilized. Maybe back-end resources that the server relies on are maxed out.
Find the bottleneck and eliminate it. The transport protocol is not the problem, and changing it can't help.
different URLs
Whether the URL is different or not makes no difference, unless the server implements some special throttling, which is highly unlikely.
on multiple physical interfaces (NICs).
You are probably not network-bound.
requests are serialized at a lower level in the TCP/IP stack
No. Connection management is not part of HTTP. The client decides how many connections to use. Reconfigure the client.
Does it make sense to attempt to 're-implement' a high-level HTTP request with a connectionless scheme like UDP, handling packet addressing myself, to speed up streaming?
You would have to re-implement flow control, segmentation and fragmentation, re-transmission, and other features of the TCP protocol yourself. And your HTTP implementation would then not be compatible with the standard one.
So no, it does not make much sense.
For streaming, you may want to use a protocol designed for streaming, such as WebRTC.
I was considering building a chat server using node.js/socket.io. Should I make it a TCP server or an HTTP server? I'd imagine a TCP server would be more efficient, but can you send other things to it, like file attachments? If TCP is more efficient, how much more so? Also, how many concurrent connections can one node.js server handle? Is it more work to do TCP or HTTP?
You are talking about two totally different approaches here: TCP is a transport-layer protocol and HTTP is an application-layer protocol. HTTP (usually) operates over TCP, so whichever option you choose, it will still be operating over TCP.
The efficiency question is sort of a moot point, because you are talking about different OSI layers. If you went for raw TCP sockets, your solution would probably be more efficient - in bandwidth at least - since HTTP contains a whole bunch of extra data (the headers) that would likely be irrelevant to your purposes (depending on the scale of the chat program). What you are talking about developing there is your own application layer protocol.
You can send anything you like over TCP - after all HTTP can send attachments, and that operates over TCP. FTP also operates over TCP, and that is designed purely for transferring "attachments". In order to do this, you would need to write your protocol so that it was able to tell the remote party that the following data was a file, then send the file data, then tell the remote party that the transfer is complete. Implementations of this are many and varied (the HTTP approach is completely different from the FTP approach) and your options are pretty much infinite.
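As one illustration, here is a tiny length-prefixed framing scheme in Java. The type tags and layout are invented for the example; HTTP and FTP solve the same problem in entirely different ways:

```java
// A minimal application-layer protocol over a TCP stream: a one-byte type
// tag tells the remote party what follows, and lengths delimit each frame.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {
    static final byte MSG_CHAT = 1, MSG_FILE = 2;

    static void sendFile(DataOutputStream out, String name, byte[] content)
            throws IOException {
        out.writeByte(MSG_FILE);      // "the following data is a file"
        out.writeUTF(name);           // file name (length-prefixed by writeUTF)
        out.writeInt(content.length); // so the receiver knows where it ends
        out.write(content);
        out.flush();                  // frame complete: transfer is done
    }

    static void readFrame(DataInputStream in) throws IOException {
        switch (in.readByte()) {
            case MSG_CHAT -> System.out.println(in.readUTF());
            case MSG_FILE -> {
                String name = in.readUTF();
                byte[] content = new byte[in.readInt()];
                in.readFully(content); // read exactly the advertised length
                System.out.println("received file " + name);
            }
        }
    }
}
```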
I don't know for sure about the node.js connection limit, but I can say with a fair amount of confidence that it is limited by the operating system. This might help you get to grips with the answer to that question.
It is debatable whether it is more work over TCP or HTTP; it's a lot of work either way. I would probably lean toward TCP being your best bet. While TCP would require you to design a protocol rather than (or as well as) an application, HTTP is not particularly suited to live, two-way applications like chat servers. There are many implementations of chat over HTTP that use AJAX, but I can tell you from painful experience that they are a complete pain in the rear end.
I would say that you should only be looking at HTTP if you intend the endpoint (i.e. the client) to be a browser. If you are going to write a desktop app for the endpoint, a direct TCP link would definitely be the way to go. The main reason for this is that HTTP works in a request-response manner, where the client sends a request and the server responds. Over TCP you can open a single TCP stream that can be used for bi-directional communication. This means the server can push an event to the client instantly, while over HTTP you have to wait for the client to send a request before you can respond with an event. If you intend to use a browser as the client, it will make the whole file-transfer thing much trickier (the sending, at least).
There are ways to implement this over HTTP using long-polling and server push (read this) but it can be a real pain to implement.
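For a feel of the long-polling half, here is a minimal client loop, sketched in Java for concreteness (the URL and the 200-means-event convention are assumptions):

```java
// Long-polling client: each request blocks at the server until an event is
// ready (or the server times out), then is immediately reissued.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://chat.example.com/events")).GET().build();
        while (true) {
            // the server holds this request open until it has something
            HttpResponse<String> resp =
                    http.send(req, HttpResponse.BodyHandlers.ofString());
            if (resp.statusCode() == 200) {
                System.out.println("event: " + resp.body());
            } // on a server timeout (e.g. 204), just poll again
        }
    }
}
```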
If you are going to implement this on a LAN (or possibly even over the internet), it is worth considering UDP instead of TCP: in a chat application it is not usually mission-critical that messages arrive in the right order, and even if it were, users would probably not be able to type faster than the variations in network latency (probably <100ms). For file transfers you could then either negotiate a separate TCP socket for the data exchange (like FTP), or implement some kind of UDP ACK system (like TFTP).
I feel there is a lot more to say on this subject but right now I can't put it into words - I may extend this answer at some point.
Chat servers are the "Hello World" of Node. Use HTTP.
As for how many concurrent connections it can handle, that all depends on your system. Set up a simple chat server and try benchmarking it.
Also, check out http://search.npmjs.org/ and search for chat for a few pointers.
What's the best practice for scalable servers that need to maintain a list of active users?
Should I open a persistent TCP connection for each client, on which the server sends update messages?
This could lead to many open connections with probably no traffic for many seconds at a time. Is that a problem in TCP?
Or would it be better to let the client poll for updates periodically (with a new TCP connection each time)?
How do Chat Servers or large Online Games handle this?
Personally, I'd go for a single persistent TCP connection per client to avoid (a) the extra work and latency of creating and destroying connections and all the TCP packets involved, and (b) creating lots of sockets in TIME_WAIT on either the clients or the server. There's simply no good reason to create and destroy the connections.
Depending on your platform, there may be various tricks to deal with the platform-specific problems that arise when you have lots of connections open, and by lots I mean tens of thousands. For example, on Windows, overlapped I/O and I/O completion ports would be a good design for many connections. If your connections are generally idle most of the time, you might find that the 'zero byte read' trick lets you handle more connections on lesser hardware; but it's something you can add once you know you have a problem with the amount of buffer space tied up waiting for reads that complete only infrequently.
I wouldn't have the clients polling the server. It's inefficient. Have the server publish data to the clients as and when there is data available. This would allow the server to control the workload somewhat by letting it decide how often to send the data to the clients - it could either send every time new data became available for a client or send after it had batched up some data and waited a short while, etc. If the server is pushing the data then the server (the weak point, the place that might get overwhelmed by client demand) has more control over the work that it will need to do.
If you have each client polling, then (a) you generate more network noise, as each client sends a message to ask the server whether it has anything to send, and (b) you generate more work for the server, as it needs to respond to the polls. The server knows when there's data for a client; let it be responsible for telling the clients.
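To illustrate, here is a minimal sketch of that design in Java NIO, the portable analogue of overlapped I/O and completion ports: one thread multiplexes thousands of mostly idle sockets, and each selection key is effectively the per-client table entry (the port is a placeholder):

```java
// One thread, many idle connections: the Selector sleeps until some socket
// actually needs attention, so idle clients cost a descriptor, not a thread.
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PushServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks while every client is idle
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    // the registration is the "table entry" for this client
                    client.register(selector, SelectionKey.OP_READ);
                }
                // when the server has data for a client, write it directly:
                // ((SocketChannel) key.channel()).write(buffer);
            }
            selector.selectedKeys().clear();
        }
    }
}
```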