I have a web application sitting on IIS that talks to a [remote] service machine.
I am not sure whether to choose TCP or HTTP as the main protocol.
More details:
I will have more than one service/endpoint
some of them will be one-way
others will be two-way
the web pages will work in front of the services
we are talking about a high-scale web site
I know the difference pretty well, but I am looking for a good benchmark that shows how much faster TCP is.
HTTP is a layer built on top of the TCP layer to somewhat standardize data transmission, so naturally using TCP sockets will be less heavy than using HTTP. If performance is the only thing you care about, then plain TCP is the best solution for you.
You may want to consider HTTP because of its ease of use and simplicity, which ultimately reduces development time. If you are doing something that might be directly consumed by a browser (through an AJAX call) then you should use HTTP. For a non-modern browser to directly consume TCP connections without HTTP you would have to use Flash or Silverlight, and this normally happens for rich content such as video and/or audio. However, many modern browsers now (as of 2013) support APIs to access network, audio, and video resources directly via JavaScript. The only thing to consider is the usage rate of modern web browsers among your users; see caniuse.com for the latest info regarding browser compatibility.
As for benchmarks, this is the only thing I found. See page 5; it has the performance graph. Note that it doesn't really compare apples to apples, since it compares the TCP/binary data option with the HTTP/XML data option. Which raises the question: what kind of data are your services outputting, binary (video, audio, files) or text (JSON, XML, HTML)?
In general, performance-oriented systems like those in the military or financial sectors will probably use plain TCP connections, whereas general web-focused companies will opt to use HTTP and host their services with IIS or Apache.
The question you really need an answer for is "will TCP or HTTP be faster for my application". The answer is that it depends on the nature of your application, and on the way that you use TCP and/or HTTP in your application. A generic HTTP vs TCP benchmark won't answer your question, because the chances are that the benchmark won't match your application behaviour.
In theory, an optimally designed / implemented solution using TCP will be faster than one that uses HTTP. But it may also be considerably more work to implement ... depending on the details of your application.
There are other issues that might affect your choice. For example, you are less likely to run into firewall issues if you use HTTP than if you use TCP on some random port. Another is that HTTP would make it easier to implement a load balancer between the IIS server and the backend systems.
Finally, at the end of the day it is probably more important that your system is secure, reliable, maintainable and (maybe) scalable than it is fast. A sensible strategy is to implement the simple version first, but have plans in your head for how to make it faster ... if the simple solution is too slow.
You could always benchmark it.
In general, if what you want to accomplish can be easily done over HTTP (i.e. the only reason you would otherwise think about using raw TCP is for a possible performance boost) you should probably just use HTTP. Sure, you can do socket programming, but why bother? Lots of people have spent a lot of time and effort building HTTP client libraries and servers, and they have spent waaaaaay more time optimizing and testing that code than you will ever be able to possibly spend on your TCP sockets. There are simply so many possible errors that you would have to handle, edge cases, and optimizations that can be done, that it is usually easier and safer to use a library function for HTTP.
Plus, the HTTP specs define all kinds of features (which clients and servers implement, and which you get to use "for free", i.e. with no extra implementation work) that make any third-party interoperability that much easier. "Here is my URL, here are the rules for what you send, here are the rules for what I return..."
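To make the point concrete, here is a minimal sketch in Python (standard library only; the URL is hypothetical) of how little code a correct request takes when you lean on an existing HTTP client:

    # Minimal sketch using Python's standard library; the URL is made up.
    # Redirects, header parsing, and chunked decoding are all handled for you.
    import urllib.request

    with urllib.request.urlopen("http://example.com/api/status") as resp:
        body = resp.read()
        print(resp.status, len(body), "bytes")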
I have a self-hosted Windows native C++ server application that uses the Casablanca C++ REST SDK. I can use any client (C#, JavaScript, C++, cURL, basically anything that can send POST, GET, PUT, or DEL messages) to send request messages to this self-hosted Windows app. I can also use a plain browser address bar to do GET-related requests with various parameters. Currently I only run this system on a private intranet, so it is very fast. I haven't benchmarked it against raw TCP, but on a private intranet I doubt there would be more than a few microseconds' difference. For convenience, ease of development, and the ability to expand to a full-blown internet app, it's a dream come true. It is a dedicated system with a private protocol using small JSON packets, so I am not certain whether that fits your application's needs. Another nice thing is that this native C++ Windows application code could be ported fairly easily to run on Linux/macOS, as the Casablanca REST SDK is portable to those OSes.
I have a scenario where I need to deliver realtime firehoses of events (<30-50/sec max) for dashboard and config-screen type contexts.
While I'll be using WebSockets for some of the scenarios that require bidirectional I/O, I'm having a bit of a bikeshed/architecture-astronaut/analysis paralysis lockup about whether to use Server-Sent Events or Fetch with readable streams for the read-only endpoints I want to develop.
I have no particular vested interest in picking one approach over the other, and the backends aren't using any frameworks or libraries that are opinionated about either approach, so I figure I might as well ask:
Are there any intrinsic benefits to picking SSE over streaming Fetch?
The only fairly minor caveat I'm aware of with Fetch is that if I'm manually building the HTTP response (say from a C daemon exposing some status info) then I have to manage response chunking myself. That's quite straightforward.
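For reference, a hedged sketch (Python, standard library types only) of what that manual chunk framing looks like: each chunk is its payload length in hex, CRLF, the payload, CRLF, and a zero-length chunk terminates the body, as defined by HTTP/1.1.

    # Sketch of HTTP/1.1 chunked framing, the part you must do by hand when
    # building the streaming response yourself.
    def chunk(payload: bytes) -> bytes:
        return b"%x\r\n%s\r\n" % (len(payload), payload)

    response = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        + chunk(b"hello ")
        + chunk(b"world")
        + b"0\r\n\r\n"  # zero-length chunk ends the body
    )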
So far I've discovered one unintuitive gotcha of exactly the sort I was trying to dig out of the woodwork:
When using HTTP/1.1 (aka plain HTTP), Chrome specially allows up to 255 WebSocket connections per domain, completely independently of its maximum of 15 normal (Fetch)/SSE connections.
Read all about it: https://chromium.googlesource.com/chromium/src/+/e5a38eddbdf45d7563a00d019debd11b803af1bb/net/socket/client_socket_pool_manager.cc#52
This is of course irrelevant when using HTTP/2, where you typically get 100 parallel streams to work with, which (IIUC) are shared by all types of connections: Fetch, SSE, WebSockets, etc.
I find it remarkable that almost every SO question about connection limits doesn't talk about the 255-connection WebSockets limit! It's been around for 5-6 years!! Use the source, people! :D
I do have to say that this situation is very annoying, though. It's reasonably straightforward to "bit-bang" (for want of a better term) HTTP and SSE using printf from C (accept the connection, ignore all input, immediately dprintf the HTTP header, proceed to send updates), while WebSockets requires handshake processing (SHA-1 hashing and base64-encoding the key) and XOR-unmasking of every client frame. For tinkering, prototyping, and throwaway code (that'll probably stick around for a while...), the simplicity of SSE is hard to beat. You don't even need to use chunked encoding with it.
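To illustrate, here is a rough Python translation of that bit-bang approach (the original does it with dprintf from C); it serves one client at a time and makes no claim to robustness:

    # Quick-and-dirty SSE server: accept, ignore the request, send the SSE
    # headers once, then push an event per second until the client goes away.
    import socket, time

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8000))
    srv.listen(1)

    while True:
        conn, _ = srv.accept()
        conn.recv(4096)  # read and discard the request
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/event-stream\r\n"
            b"Cache-Control: no-cache\r\n"
            b"\r\n"
        )
        try:
            n = 0
            while True:
                conn.sendall(b"data: update %d\n\n" % n)  # one SSE event
                n += 1
                time.sleep(1)
        except OSError:
            conn.close()  # client disconnected; wait for the next one

A browser can consume this with new EventSource("http://localhost:8000/") and nothing more.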
I am searching for a good method to transfer data over the internet, and I work in a C++/Windows environment. The data is binary, a compressed blob of an extracted image. Input and requirements are as follows:
6 kB/packet @ 10 packets/sec (60 kB per second)
Reliable data transfer
I am new to network programming and so far I could figure out that one of the following methods will be suitable.
Sockets
MSMQ (MS Message Queuing)
The client runs in a browser (showing realtime images), while the server runs native C++ code. Please let me know if there are any other methods for achieving the same. Which one should I go for, and why?
If the server determines the pace at which images are sent, which is what it looks like, a server push style solution would make sense. What most browsers (and even non-browsers) are settling for these days are WebSockets.
The main advantage WebSockets have over most proprietary protocols, apart from becoming a widely adopted standard, is that they run on top of HTTP and can thus permeate (most) proxies and firewalls etc.
On the server side, you could potentially integrate node.js, which allows you to easily implement WebSockets and comes with a lot of other libraries. It's written in C++ and is extensible via C++ and JavaScript, for which node.js hosts a VM. node.js's main feature is being asynchronous at every level, making that style of programming the default.
But of course there are other ways to implement WebSockets on the server side, maybe node.js is more than you need. I have implemented a C++ extension for node.js on Windows and use socket.io to do WebSockets and non-WebSocket transports for older browsers, and that has worked out fine for me.
But that was textual data. In your binary data case, socket.io wouldn't do it, so you could check out other libraries that do binary over WebSockets.
Is there any specific reason why you cannot run a server on your Windows machine? 60 kB/second looks like some kind of embedded device?
Based on your description, you need to show image information in realtime in a browser. You could possibly use HTTP, but it's stateless, meaning once the information is transferred, you lose the connection; your client needs to poll the C++/Windows machine. If you are pretty confident the information generated is periodic, you can use this approach. This requires a server, so only if the answer to my first question is yes.
A chat protocol: something like a Jabber client running on your client, and a Jabber server on your C++/Windows machine. Chat protocols allow almost realtime delivery.
While it may seem to make sense, I wouldn't use MSMQ in this scenario. You may not run into a problem now, but MSMQ messages are limited in size and you may eventually hit a wall because of this.
I would use TCP for this application; TCP is built with reliability in mind, and you can simply feed data through a socket. You may have to figure out a very simple protocol yourself (see the sketch below), but it should be the best choice.
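As a sketch of what that "very simple protocol" might look like, here is length-prefixed framing in Python (the names and the 4-byte header are my own illustrative choices, not a standard):

    # Length-prefixed frames over TCP so the 6 kB image blobs can't run
    # together: a 4-byte big-endian length header, then the payload.
    import socket
    import struct

    def send_packet(sock: socket.socket, blob: bytes) -> None:
        sock.sendall(struct.pack("!I", len(blob)) + blob)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            part = sock.recv(n - len(buf))
            if not part:
                raise ConnectionError("peer closed mid-frame")
            buf += part
        return buf

    def recv_packet(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)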
Unless you are using an embedded device that understands MSMQ out of the box, your best bet to use MSMQ would be to use a proxy and you are then still forced to play with TCP and possibly HTTP.
In my personal time I do home automation that includes security cameras, and I use the .NET Micro Framework; even if it did have MSMQ capabilities, I still wouldn't use it.
I recommend that you look into MJPEG (Motion JPEG) which sounds exactly like what you would like to do.
http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server
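To show how simple the wire format is, here is a hedged Python sketch of MJPEG framing: the server sends a single multipart/x-mixed-replace response and keeps appending JPEG parts, which most browsers will render directly in an <img> tag.

    # MJPEG over HTTP: one long multipart response, one part per frame.
    def mjpeg_headers() -> bytes:
        return (
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
            b"\r\n"
        )

    def mjpeg_part(jpeg: bytes) -> bytes:
        # Each frame is delimited by the boundary declared above.
        return (
            b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: %d\r\n"
            b"\r\n" % len(jpeg)
        ) + jpeg + b"\r\n"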
First of all, this is a conceptual question and I do not know if Stack Overflow is the appropriate place, so my apologies if I am wrong.
Nowadays the web is not only used for passing raw information; many complex web applications are in use. These web applications seem so complex that it appears irrational to use HTTP, a protocol based on very simple data exchange that is also stateless.
Would it not be more compelling to use remote invocations for these web applications? The big advantage of HTTP, to my mind, is a unified GUI via HTML. But there are applications that have no need for a graphical interface, and at that point the HTTP protocol becomes really cumbersome.
Short answer: HTTP is allowed through firewalls where other protocols would be blocked.
A short partial answer is: first, for historical reasons. HTTP has been used since the dawn of the web as the protocol for requesting documents, and has since been put to many different purposes. One reason to keep using it is that it is generally served on port 80, which you can be sure won't be blocked by firewalls between your client and the server. The statelessness of the protocol may not always be what you want, but it has at least the advantage of protecting the server side from very trivial overloading problems.
OS independence
firewall passing
the web server is already a well understood and mostly "solved" problem in terms of load balancing, server fall over, etc.
don't have to reinvent the wheel
Other protocols are being used more and more now, including remote invocations and (the one I am particularly familiar with) WCF (which allows binary TCP/IP data transfer).
This allows data to travel faster for applications which require more bandwidth. For example an n-tier application may use WCF binary transfer between application and presentation tiers. Also public web services allow multiple protocols, including binary.
For data transfer protocols, firewalls should be configured (i.e. expose a port specifically for your application), not worked around; I would not recommend choosing a protocol just because firewalls do not block it.
The protocol used really depends on who will consume it and what control you have over the consumption; e.g. external third parties may need a plain-text version with a commonly agreed data interface. On the other hand, two tiers in a single web application may be able to utilise binary data transfer for performance and security.
From all the articles I've read so far about MochiWeb, I've heard over and over that MochiWeb provides very good scalability. My question is, how exactly does MochiWeb get its scalability property? Is it from Erlang's inherent scalability properties, or does MochiWeb have additional code that explicitly enables it to scale well? Put another way, if I were to write a simple HTTP server in Erlang myself, with a simple 'loop' (recursive function) to handle requests, would it have the same level of scalability as a simple web server built using the MochiWeb framework?
UPDATE: I'm not planning to implement a full-blown web server supporting every feature possible. My requirements are very specific: to handle POST data from an HTML form with fixed controls.
Probably. :-)
If you were to write a web server that handles each request in a separate process (a lightweight thread in Erlang), you could reach the same kind of "scalability" easily. Of course the feature set would be different, unless you implement everything MochiWeb has.
Erlang also has great built-in support for distribution among many machines, which might be used to gain even more scalability.
MochiWeb isn't scalable itself, as far as I understand it. It's a fast, tiny server library that can handle thousands of requests per second. The way in which it does that has nothing to do with "scalability" (aside from adjusting the number of mochiweb_acceptors that are listening at any given time).
What you get with MochiWeb is a solid web server library, and Erlang's scalability features. If you want to run a single MochiWeb server, when a request comes in, you can still offload the work of processing that request to any machine you want, thanks to Erlang's distributed node infrastructure and cheap message passing. If you want to run multiple MochiWeb servers, you can put them behind a load balancer and use mnesia's distributed features to sync session data between machines.
The point is, MochiWeb is small and fast (enough). Erlang is the scalability power tool.
If you roll your own server solution, you could probably meet or beat MochiWeb's efficiency and "scalability" out of the box. But then you'd have to rethink everything they've already thought of, and you'd have to battle test it yourself.
I've just started dabbling in some game development and wanted to create a simple multiplayer game. Is it feasible to use HTTP as the primary communication protocol for a multiplayer game?
My game will not be making several requests a second but rather a request every few seconds. The client will be a mobile device.
The reason I'm asking is, I thought it may be interesting to try using Tornado, which reportedly scales well, supports non-blocking requests, and can handle "thousands of concurrent users".
So my client could make an HTTP request, and when the game server has something to tell it, it will respond to the request. I believe this illustrates what some people call the COMET design pattern.
I understand that working at the socket level has less overhead but I am just wondering if this would be feasible at all given my game requirements? Or am I just thinking crazy?
Thanks in advance.
Q: Is it feasible to use HTTP as the primary communication protocol for a multiplayer game?
A: Using HTTP as a communication protocol may make sense for your game (probably not, but that's for you to decide). I have developed applications for Windows Mobile, BlackBerry, Android, and iPhone for just over 10 years, all the way back to CE 1.0. With that in mind, here is my opinion.
First I suggest reading RFC 3205 as Teddy suggested. It explains the reasons for my suggestions in detail.
In general HTTP is good because...
If you're developing a browser-based game (Flash or JavaScript) where you don't create the client, then use HTTP, because it's built in anyway and it's likely all you can use.
You can get HTTP server hosting with decent scripting super cheap anywhere
There's a ton of tools available and lots of documentation
It's easy to get started
HTTP may be bad because...
HTTP introduces huge overhead in terms of bandwidth compared to a simple TCP service.
For example Omegle.com sends 420 bytes of header data to send a 9 byte payload.
If you really need comet / long polling you'll waste a lot of time figuring out how to make your server talk right instead of working on what it says.
A steady stream of HTTP traffic may overload mobile devices in both processing and bandwidth, leaving you fewer resources to devote to your game's performance.
You may not think you know how to create your own TCP server - but it really is easy.
If you're writing the server AND the client, then I would go straight to TCP. If you already know Python, use the Twisted networking library. You can get a simple server up in an hour or so just by following the tutorials.
Check out the LineReceiver example for a super simple server you can test with any telnet client.
http://twistedmatrix.com/projects/core/documentation/howto/servers.html
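For reference, a minimal sketch of that LineReceiver server, assuming Twisted is installed (pip install twisted); you can test it with any telnet client:

    # Echo back every line a client sends; one protocol instance per connection.
    from twisted.internet import protocol, reactor
    from twisted.protocols.basic import LineReceiver

    class Echo(LineReceiver):
        def lineReceived(self, line):
            self.sendLine(b"you said: " + line)  # line arrives as bytes

    class EchoFactory(protocol.Factory):
        def buildProtocol(self, addr):
            return Echo()

    reactor.listenTCP(8123, EchoFactory())  # port 8123 is an arbitrary choice
    reactor.run()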
WRT:
"my client could make a HTTP Request, and when the game server has somethign to tell it, it will respond to the request."
This is not how HTTP is supposed to work, so no, HTTP would not be a good choice here. HTTP requests time out if the response is not received within the timeout (60 seconds is a common default, but it depends on the specifics).
Please read RFC 3205: On the use of HTTP as a Substrate, which deals with this.
With the target platform being a mobile device (and the limited bandwidth that entails) HTTP wouldn't be the first tool I would pull out of the box.
If you just fancy playing with all this technology, then you could give it a go. Tornado seems like a reasonable choice, if the example on the web site is anything to go by. But any simple server-side web language would suffice for serving up the responses you need at the rate you have mentioned. The performance characteristics are likely to be irrelevant here.
The COMET method is when you leave an HTTP connection open over a long period. It is primarily there for 'server push' of data. But usually you don't need this; it's usually much more straightforward to make repeated requests and handle the responses individually.
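A sketch of that simpler repeated-request approach in Python (the URL and handler are hypothetical; the interval matches the "request every few seconds" in the question):

    # Poll the server every few seconds and handle each response individually.
    import time
    import urllib.request

    POLL_URL = "http://gameserver.example.com/state"  # hypothetical endpoint

    def handle(payload: bytes) -> None:
        print("got", len(payload), "bytes of game state")

    while True:
        with urllib.request.urlopen(POLL_URL, timeout=10) as resp:
            handle(resp.read())
        time.sleep(3)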