As the title says, could someone explain why WCF HttpsTransport using WebSocket (transportUsage=Always) is faster than not using WebSocket (transportUsage=Never), even when doing sessionless request-replies? My thinking was that not using WebSockets in this case would be faster, since we don't need a persistent connection and doing many short-lived WebSocket connects/disconnects is expensive. When I run our app, though, the database lookups (made over WCF) are slightly faster with transportUsage=Always than with Never. Using WCF tracing I can see that the durations with transportUsage=Never are roughly 3x the durations with Always. Everything works with either setting, but I don't understand why "Always" is faster even when we use WebSockets "incorrectly", as in our case.
I was expecting HttpsTransport with transportUsage=Never to be faster in our case since we use the WebSocket in a "sessionless" way.
As far as I know, WebSocket is meant for WCF scenarios that need duplex communication. As you mentioned, connecting and disconnecting costs resources, and I would expect it to affect efficiency as well.
I have a scenario where I need to deliver realtime firehoses of events (<30-50/sec max) for dashboard and config-screen type contexts.
While I'll be using WebSockets for some of the scenarios that require bidirectional I/O, I'm having a bit of a bikeshed/architecture-astronaut/analysis paralysis lockup about whether to use Server-Sent Events or Fetch with readable streams for the read-only endpoints I want to develop.
I have no particular vested interest in picking one approach over the other, and the backends aren't using any frameworks or libraries that are opinionated about one approach or the other, so I figure I might as well set my hesitancy aside and ask:
Are there any intrinsic benefits to picking SSE over streaming Fetch?
The only fairly minor caveat I'm aware of with Fetch is that if I'm manually building the HTTP response (say from a C daemon exposing some status info) then I have to manage response chunking myself. That's quite straightforward.
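For reference, here's roughly what the two client-side approaches look like to me; the /events endpoint, its event payloads, and the newline-delimited JSON framing are just placeholders I'm assuming for illustration:

```typescript
// Minimal sketch of both read-only approaches (assumed endpoint and framing).

// 1) Server-Sent Events: the browser handles framing and reconnection.
const es = new EventSource("/events");
es.onmessage = (ev: MessageEvent) => {
  console.log("SSE update:", JSON.parse(ev.data));
};

// 2) Streaming Fetch: you read raw chunks and delimit records yourself
//    (here: one JSON object per line).
async function consumeStream(): Promise<void> {
  const res = await fetch("/events");
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line
    for (const line of lines) {
      if (line.trim()) console.log("fetch update:", JSON.parse(line));
    }
  }
}
consumeStream();
```

The practical difference I can see is visible right there: EventSource gives me message boundaries and auto-reconnect for free, while streaming Fetch leaves the framing (and reconnection) to me.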
So far I've discovered one unintuitive gotcha of exactly the sort I was trying to dig out of the woodwork:
When using HTTP/1.1 (aka plain HTTP), Chrome makes a special allowance of up to 255 WebSocket connections per domain, completely independent of its maximum of 15 normal (Fetch)/SSE connections.
Read all about it: https://chromium.googlesource.com/chromium/src/+/e5a38eddbdf45d7563a00d019debd11b803af1bb/net/socket/client_socket_pool_manager.cc#52
This is of course irrelevant when using HTTP/2, where you typically get 100 parallel streams to work with, which (IIUC) are shared by all types of connections: Fetch, SSE, WebSockets, etc.
I find it remarkable that almost every SO question about connection limits doesn't talk about the 255-connection WebSockets limit! It's been around for 5-6 years!! Use the source, people! :D
I do have to say that this situation is very annoying, though. It's reasonably straightforward to "bit-bang" (for want of a better term) HTTP and SSE using printf from C (accept connection, ignore all input, immediately dprintf the HTTP header, proceed to send updates), while WebSockets require handshake processing, SHA-1 hashing, and XOR-unmasking of the input. For tinkering, prototyping and throwaway code (that'll probably stick around for a while...), the simplicity of SSE is hard to beat. You don't even need to use chunked encoding with it.
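To show what I mean by how little SSE needs, here's a rough Node/TypeScript equivalent of that bit-banged approach; the port and payload are made up for the sketch:

```typescript
// Accept a connection, ignore the request, write a raw HTTP/1.1 header,
// then keep writing "data:" lines. No chunked encoding, no frame masking.
import * as net from "node:net";

const server = net.createServer((sock) => {
  sock.on("data", () => { /* ignore the incoming request entirely */ });
  sock.write(
    "HTTP/1.1 200 OK\r\n" +
    "Content-Type: text/event-stream\r\n" +
    "Cache-Control: no-cache\r\n" +
    "Connection: keep-alive\r\n" +
    "\r\n"
  );
  const timer = setInterval(() => {
    // Each SSE event is just "data: ..." followed by a blank line.
    sock.write(`data: ${JSON.stringify({ ts: Date.now() })}\n\n`);
  }, 1000);
  sock.on("close", () => clearInterval(timer));
  sock.on("error", () => clearInterval(timer));
});

server.listen(8080);
```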
After reviewing the differences between raw TCP and WebSocket, I am thinking of using WebSocket, even though this will be a client/server system with no web browser in the picture. My motivation stems from:
WebSocket is message-oriented, so I do not have to design a protocol on top of the TCP layer to delimit messages myself.
The initial WebSocket handshake fits my use case well, as I can authenticate/authorize the user in that initial request-response exchange.
Performance does matter a lot here, though. I am wondering whether, excluding the WebSocket handshake, there would be any loss of performance with WebSocket messages versus a custom protocol written on raw TCP? If not, then WebSocket is the most convenient choice for me, even if I don't use the benefits related to the "web" part.
Also, would using wss change the answer to the above question?
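For context, this is roughly the message framing I'd otherwise have to write myself on raw TCP: a 4-byte big-endian length prefix per message. The port and payload are made up for the sketch; WebSocket gives me this delimiting (plus masking and fragmentation handling) out of the box.

```typescript
import * as net from "node:net";

// Write one length-prefixed message to the socket.
function sendMessage(sock: net.Socket, payload: Buffer): void {
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  sock.write(Buffer.concat([header, payload]));
}

const server = net.createServer((sock) => {
  let buffer = Buffer.alloc(0);
  sock.on("data", (chunk) => {
    buffer = Buffer.concat([buffer, chunk]);
    // Extract every complete length-prefixed message currently buffered.
    while (buffer.length >= 4) {
      const len = buffer.readUInt32BE(0);
      if (buffer.length < 4 + len) break;      // wait for the rest
      const message = buffer.subarray(4, 4 + len);
      buffer = buffer.subarray(4 + len);
      console.log("got message:", message.toString("utf8"));
      sendMessage(sock, Buffer.from("ack"));   // echo back an ack
    }
  });
});

server.listen(9000);
```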
You are basically asking whether using an already implemented library, which perfectly fits your requirements and even has the option of secure connections (wss), is better than designing and implementing your own message-based protocol on TCP, assuming that performance and overhead are not relevant for your use case.
If you rephrase your question this way the answer should be obvious: using an existing implementation which fits your purpose saves you a lot of time and hassle for design, implementation and testing. It is also easier to train developers to use this protocol. It is easier to debug problems since common tools like Wireshark understand the protocol already.
Apart from this, WebSockets have an established mechanism for working with proxies and use a common protocol, so they pass through firewalls more easily, etc. You will therefore likely run into fewer problems when rolling out your application.
In other words: I can see no reason why you should not use WebSockets if they fit your purpose.
I'm currently building a Koa app with the Firebase Node library. Is there a speed difference between using that and using REST?
This is something that would best be determined by some profiling or jsperf-style testing.
The simplest answer is that there would naturally be a difference, particularly at higher frequencies of transactions. The Node.js SDK works over a socket connection, whereas the REST client would have the overhead of establishing and tearing down connections with each payload.
One thing that can help narrow the performance gap is to utilize HTTP/1.1's keep-alive feature. However, it's certainly not going to be comparable to WebSockets.
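As a rough illustration of the keep-alive idea, here's a sketch that pools connections across REST calls instead of reconnecting for each one; the URL is a placeholder, not a real Firebase endpoint:

```typescript
import * as https from "node:https";

// Reuse TCP/TLS connections across requests instead of opening a new one each time.
const agent = new https.Agent({ keepAlive: true, maxSockets: 10 });

function getJson(url: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    https.get(url, { agent }, (res) => {
      let body = "";
      res.on("data", (chunk) => (body += chunk));
      res.on("end", () => resolve(JSON.parse(body)));
    }).on("error", reject);
  });
}

// Repeated calls reuse the pooled connection, avoiding a fresh TLS handshake
// per request - narrowing, but not closing, the gap with a socket-based SDK.
getJson("https://example.firebaseio.com/items.json").then(console.log);
```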
Firebase is a WebSocket-based real-time database provider, so it's much faster than HTTP REST calls, which carry the big overhead of creating a new connection for each call. You can use the link below to get an idea:
http://blog.arungupta.me/2014/02/rest-vs-websocket-comparison-benchmarks/
I am developing a group chat using the Python Twisted framework. The technique I am using is long polling with Ajax; I return server.NOT_DONE_YET to keep the connection open. The code is non-blocking and allows other requests. How scalable is this?
However, I want to move beyond streaming over open connections and implement a pure server push. How do I do that? Do I need to go in the direction of XMPP? If I open a socket on the server for each unique client, which web server would best suit the bridging, and how scalable would it be?
I want it to scale to the level of the C10K problem. I would like to stick to Twisted because it has a lot of protocol implementations available in easy steps. Please point me in the right direction. Thanks!
Long-polling works, but isn't necessarily your best option. It starts getting really nasty in terms of integration with firewalls and flaky internet connections. For example, at work, a lot of our customers' firewalls kill off any HTTP connection that isn't active for 10-20 seconds.
We've solved a lot of problems by switching over to WebSocket over SSL. WebSocket gives you a full-duplex channel, which is perfect for server push. By using SSL, firewalls are often less aggressive in their garbage collecting, and transparent proxies are often fooled by the TLS encryption. You will still need to manage the occasional disconnection at the application level, even if you're using WebSockets instead of long polling, but even that can be handled gracefully by having a decent recovery protocol, regardless of which transport protocol you use.
This being said, instead of going directly for WebSockets, we've decided to use SockJS. The main reason for this choice was that SockJS can use WebSockets when available (rfc6455, hixie-76, hybi-10), but also fall back to xhr-streaming, xdr-streaming, etc, if the client's browser does not support it (or if the connection fails). When I say that it can "fall back", I mean that the code you use on the client side remains exactly the same, SockJS takes care of the dirty work.
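To illustrate how little the client code changes, here is a minimal sketch using the sockjs-client package; the endpoint URL, message shape, and retry delay are assumptions, and the reconnect loop stands in for the recovery protocol mentioned above:

```typescript
// The SockJS object mimics the browser WebSocket API, whether it ends up on a
// real WebSocket or one of its fallback transports.
import SockJS from "sockjs-client";

function connect(): void {
  const sock = new SockJS("https://example.com/realtime");

  sock.onopen = () => sock.send(JSON.stringify({ type: "subscribe" }));
  sock.onmessage = (e) => console.log("update:", JSON.parse(e.data));

  // SockJS does not reconnect by itself, so handle disconnections explicitly.
  sock.onclose = () => setTimeout(connect, 2000);
}

connect();
```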
On the server side, the same is true. We currently use Cyclone's SockJS implementation for Twisted (in production), but we're also aware of DesertBus' implementation, which we still have to check out. There's also some other stuff that we're hoping to check out, for example WAMP, and the accompanying Autobahn|Python.
With regards to performance, we use HAProxy for SSL termination and load-balancing. HAProxy's performance is pretty amazing, on a multitude of levels.
We have migrated to WebSockets now. It works perfectly fine!!
Is it a good idea to use WebSockets (Comet, server push, ...) to overcome a problem with long-running HTTP requests? Imagine you have an app built on a full-stack web framework like Django or Rails. You want to do some background processing in the name of performance. That's easy to do from a programmer's perspective, but the problem arises in the UI.
Users demand an immediate response. So my idea was to use Socket.IO + node.js + AMQP messaging to push notifications back to browsers once the background task has completed. I like the idea, but it still feels like a lot of engineering, just because we don't want long-running requests in our main app. A competing idea could be to use another, more robust web server that can handle many long-running HTTP requests.
Which one do you think is better?
Is it a good idea to use WebSockets to overcome a problem with long-running HTTP requests?
Yes, it is. You can save a significant amount of data compared to other techniques, such as continuous or long polling. Try to look at this article, namely the Step 3 part.
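To make the saving concrete, compare a long-polling loop with a single WebSocket; the endpoints here are placeholders:

```typescript
// Long polling: every update costs a full HTTP request/response, headers included.
async function pollLoop(): Promise<void> {
  for (;;) {
    const res = await fetch("/updates?wait=30"); // server holds the request open
    console.log("update:", await res.json());
  }
}

// WebSocket: a single handshake, then each update is a small frame with only a
// few bytes of overhead on an already-open connection.
const ws = new WebSocket("wss://example.com/updates");
ws.onmessage = (ev) => console.log("update:", JSON.parse(ev.data));
```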
I like the idea, but it still feels like a lot of engineering, just because we don't want long-running requests in our main app. A competing idea could be to use another, more robust web server that can handle many long-running HTTP requests.
Socket.IO abstracts the transport layer and the fallback solutions (in the absence of WebSockets) for you. If you want to use the Socket.IO/node.js/AMQP stack only for messaging and notifications, then it shouldn't be a complex or time-consuming development process, although that may depend on various other factors.
By delegating messaging/notifications to node.js, you can take a significant load off your main app thanks to node's non-blocking architecture, although you will introduce a dependency on another technology.
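A minimal sketch of that delegation might look like the following: a small Node process consumes "task finished" messages from AMQP and pushes them to browsers via Socket.IO. The queue name, port, and message shape are assumptions, not anything prescribed by either library.

```typescript
import { Server } from "socket.io";
import * as amqp from "amqplib";

async function main(): Promise<void> {
  // Socket.IO server the dashboards connect to.
  const io = new Server(3000, { cors: { origin: "*" } });

  // AMQP consumer fed by the main app's background workers.
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue("task_results");

  // Whenever the main app (Django/Rails worker) publishes a result,
  // forward it to every connected browser.
  await channel.consume("task_results", (msg) => {
    if (!msg) return;
    io.emit("task:done", JSON.parse(msg.content.toString()));
    channel.ack(msg);
  });
}

main().catch(console.error);
```

On the browser side, a Socket.IO client listening for that assumed "task:done" event is all that's needed to update the UI when the background job completes.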
On the other hand, choosing a more performant web server may solve your performance concerns for some time, but you may eventually end up scaling your system anyway (either up or out).
WebSockets in themselves provide little here over, e.g., XHR or JSONP long polling. From the user's perspective, messaging over either transport would feel the same. From the server's perspective, an open WebSocket connection and an open long poll aren't radically different.
What you're really doing, and should be doing regardless of the underlying technology, is building your application to be asynchronous and event-driven.