Speed of Firebase Node library vs REST - firebase

I'm currently building a Koa app with the Firebase Node library. Is there a speed difference between using that compared to REST?

This is something that would best be determined by some profiling or jsperf-style testing.
The simplest answer is that there would naturally be a difference, particularly at higher frequencies of transactions. The Node.js SDK works over a socket connection, whereas a REST client has the overhead of establishing and tearing down a connection with each payload.
One technique that can narrow the performance gap is HTTP 1.1's keep-alive feature. However, it's certainly not going to be comparable to WebSockets.
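As a rough illustration of that gap (using a hypothetical local server, not Firebase), here's a Python sketch contrasting one reused keep-alive connection with a fresh TCP connection per request — the per-request setup cost is exactly what a socket-based SDK avoids:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One TCP connection reused for every request (keep-alive):
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(3):
    conn.request("GET", "/data")
    resp = conn.getresponse()
    payload = resp.read()  # must drain the body before reusing the socket
conn.close()

# A fresh connection per request pays the TCP handshake each time:
for _ in range(3):
    c = http.client.HTTPConnection("127.0.0.1", port)
    c.request("GET", "/data")
    assert c.getresponse().read() == payload
    c.close()

server.shutdown()
print(payload.decode())
```

Profiling both loops under load would show the per-request variant spending extra time in connection setup, which is the overhead keep-alive (and, more so, a persistent socket) eliminates.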

Firebase is a WebSocket-based realtime database provider, so it's much faster than HTTP REST calls, which carry the big overhead of creating a new connection for each call. The link below gives an idea of the difference:
http://blog.arungupta.me/2014/02/rest-vs-websocket-comparison-benchmarks/

Related

WCF Why HttpsTransport with WebSocket=Always faster than Never

As the title says, could someone explain to me why WCF HttpsTransport using WebSocket (transportUsage=Always) is faster than not using WebSocket (transportUsage=Never), even when doing sessionless request-replies? My thinking was that not using WebSockets in such a case would be faster, since we don't use a persistent connection, and doing multiple short-lived WebSocket connect/disconnect cycles is expensive. When I run our app, though, the database lookups (using WCF) are slightly faster when transportUsage=Always than when Never. Using WCF tracing, I can see the durations when transportUsage=Never are maybe 3x the durations when Always. Everything works with either setting, but I don't understand why "Always" is faster even when using WebSockets "incorrectly", as in our case?
I was expecting HttpsTransport with transportUsage=Never to be faster in our case since we use the Websocket in a "sessionless" way.
As far as I know, WebSocket is used in WCF scenarios where duplex communication is needed. As you mentioned, connecting and disconnecting cost resources, and I think they also affect efficiency.

What obscure/semantic differences should I be aware of when evaluating SSE against Fetch with readable streams?

I have a scenario where I need to deliver realtime firehoses of events (<30-50/sec max) for dashboard and config-screen type contexts.
While I'll be using WebSockets for some of the scenarios that require bidirectional I/O, I'm having a bit of a bikeshed/architecture-astronaut/analysis paralysis lockup about whether to use Server-Sent Events or Fetch with readable streams for the read-only endpoints I want to develop.
I have no particular vested interest in picking one approach over the other, and the backends aren't using any frameworks or libraries that are opinionated about either approach, so I figure I might as well put my hesitancy to use and ask:
Are there any intrinsic benefits to picking SSE over streaming Fetch?
The only fairly minor caveat I'm aware of with Fetch is that if I'm manually building the HTTP response (say from a C daemon exposing some status info) then I have to manage response chunking myself. That's quite straightforward.
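For reference, the chunked framing itself really is tiny: each chunk is its length in hex, CRLF, the payload, CRLF, and a zero-length chunk terminates the stream. A minimal sketch (the helper names are mine, shown in Python rather than C):

```python
def chunk(payload: bytes) -> bytes:
    # One HTTP/1.1 chunk: hex length, CRLF, payload bytes, CRLF
    return b"%X\r\n%s\r\n" % (len(payload), payload)

# A zero-length chunk ends the response body
TERMINATOR = b"0\r\n\r\n"

# What a hand-built streaming response body would look like on the wire:
body = chunk(b'{"status": "ok"}') + chunk(b'{"uptime": 42}') + TERMINATOR
print(body)
```

The daemon would write the status line and a `Transfer-Encoding: chunked` header once, then emit one `chunk(...)` per update.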
So far I've discovered one unintuitive gotcha of exactly the sort I was trying to dig out of the woodwork:
When using HTTP/1.1 (aka plain HTTP), Chrome separately allows up to 255 WebSocket connections per domain, completely independently of its maximum of 15 normal (Fetch)/SSE connections.
Read all about it: https://chromium.googlesource.com/chromium/src/+/e5a38eddbdf45d7563a00d019debd11b803af1bb/net/socket/client_socket_pool_manager.cc#52
This is of course irrelevant when using HTTP2, where you typically get 100 parallel streams to work with (that (IIUC) are shared by all types of connections - Fetch, SSE, WebSockets, etc).
I find it remarkable that almost no SO question about connection limits mentions the 255-connection WebSocket limit! It's been around for 5-6 years!! Use the source, people! :D
I do have to say that this situation is very annoying though. It's reasonably straightforward to "bit-bang" (for want of a better term) HTTP and SSE using printf from C (accept connection, ignore all input, immediately dprintf HTTP header, proceed to send updates), while WebSockets requires handshake processing, and SHA1, and MD5, and input XORing. For tinkering, prototyping and throwaway code (that'll probably stick around for a while...), the simplicity of SSE is hard to beat. You don't even need to use chunked-encoding with it.

Golang Driver or HTTP?

Just wondering if anyone can help me understand the advantages of talking to Arango via HTTP or the driver? In my case the GoLang driver.
Aside from the few milliseconds of overhead per request, would I experience a big difference in performance?
I like the concurrency attributes of using HTTP Requests, since my main tools all make HTTP communication very easy.
Thanks for any insights on this subject.
A quick scan through the three recommended drivers shows they use HTTP under the hood.
So using them will not give you a big difference in DB interaction, but they may be more comfortable for development.

Does synchronous redis call make a tornado app slower?

I am trying to add cache to a Tornado application, with data in Mongo. I am using Redis as a shared cache store.
Since Tornado is an asynchronous framework, I was thinking about using an async client for Redis that uses Tornado's IOLoop to fetch data from the Redis server. None of the existing solutions are very mature, and I've heard the throughput of these clients is not good.
So my question is, if I use a synchronous Redis client like pyredis, will it negatively impact the performance of my app?
I mean, considering the Redis instance lives on the same LAN, the latency for a redis command is very small, does it matter whether it is blocking or not?
It's difficult to say for sure without benchmarking the two approaches side-by-side in your environment, but redis on a fast network may be fast enough that a synchronous driver wins under normal conditions (or maybe not. I'm not personally familiar with the performance of different redis drivers).
The biggest advantage of an asynchronous driver is that it may be able to handle outages of the redis server or the network more gracefully. While redis is having problems, it will be able to do other things that don't depend on redis. Of course, if your entire site depends on redis there may not be much else you can do in this case. This was FriendFeed's philosophy. When we originally wrote Tornado we used synchronous memcache and mysql drivers because those services were under our control and we could count on them being fast, but we used asynchronous HTTP clients for external APIs because they were less predictable.
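If the synchronous client ever does become a bottleneck, a common middle ground is to push the blocking call onto a thread pool so the event loop keeps serving other requests. Sketched here with plain asyncio and a fake blocking call standing in for a synchronous Redis GET (modern Tornado runs on the asyncio loop, so the same pattern applies there):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_get(key):
    """Stand-in for a synchronous redis GET (~50 ms simulated round trip)."""
    time.sleep(0.05)
    return f"value-of-{key}"

async def handler(executor, key):
    loop = asyncio.get_running_loop()
    # The blocking call runs on a worker thread; the loop stays free.
    return await loop.run_in_executor(executor, blocking_get, key)

async def main():
    with ThreadPoolExecutor(max_workers=4) as pool:
        start = time.monotonic()
        results = await asyncio.gather(*(handler(pool, f"k{i}") for i in range(4)))
        elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)
print(f"4 overlapping 50 ms calls took {elapsed:.2f}s")
```

Four 50 ms lookups overlap instead of serializing, so the handler coroutines finish in roughly one round-trip time rather than four.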

Websockets for background processing

Is it a good idea to use WebSockets (comet, server push, ...) to overcome a problem with long-running HTTP requests? Imagine you have an app built on a full-stack web framework like Django or Rails. You want to do some background processing in the name of performance. That's easy to do from the programmer's perspective, but the problem arises in the UI.
Users demand immediate response. So my idea was to use Socket.IO + node.js + AMQP messaging to push notifications back to browsers once the background task completes. I like the idea, but it still feels like a lot of engineering, just because we don't want long-running requests in our main app. A competing idea could be to use another, more robust web server that can handle many long-running HTTP requests.
Which one do you think is better?
Is it a good idea to use Websockets to
overcome a problem with long running HTTP requests?
Yes it is. You can save a significant amount of data compared to other techniques such as continuous or long polling. Have a look at this article, namely the Step 3 part.
I like the idea, but it still feels like lots of engineering, just
because we don't want to long running requests in our main app.
Competing idea could be to use another, more robust, web server, that
can handle many long running HTTP requests.
Socket.io abstracts the transport layer and fallback solutions (in case WebSockets are absent) for you. If you want to use the socket.io/node.js/AMQP stack only for messaging and notifications, it shouldn't be a complex or time-consuming development process, though it may depend on various factors around your setup.
By delegating messaging/notifications to node.js you may offload your main app to a great extent, thanks to its non-blocking architecture, although you will introduce a dependency on another technology.
On the other hand, choosing a more performant web server may solve your performance concerns for some time, but you may eventually end up scaling your system anyway (either up or out).
WebSockets in themselves provide little here over, e.g., XHR or JSONP long polling. From the user's perspective, messaging over either transport would feel the same. From the server's perspective, an open WebSocket connection and an open long poll aren't wildly different.
What you're really doing, and should be doing regardless of the underlying technology, is building your application to be asynchronous and event-driven.
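Transport aside, the shape of that event-driven design is the same everywhere: the background worker emits a "done" event, and whatever is holding the browser's connection open (long poll, SSE, or WebSocket) forwards it. A transport-agnostic sketch in Python, with an in-process queue standing in for the AMQP broker and a list standing in for the push connection:

```python
import asyncio

async def background_task(queue, job_id):
    """Simulated long-running job; publishes a completion event when done."""
    await asyncio.sleep(0.05)  # stand-in for the real work
    await queue.put({"job": job_id, "status": "done"})

async def push_server(queue, delivered):
    """Stand-in for the process holding the browser connection open."""
    event = await queue.get()
    delivered.append(event)    # here you'd write to the socket/long poll

async def main():
    queue = asyncio.Queue()    # stand-in for the AMQP broker
    delivered = []
    await asyncio.gather(
        background_task(queue, "job-42"),
        push_server(queue, delivered),
    )
    return delivered

delivered = asyncio.run(main())
print(delivered)
```

The web app enqueues the job and returns immediately; only the push layer cares whether the notification travels over a WebSocket or a held-open HTTP response.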
