Using the asynchronous API to create nodes in ZooKeeper

While reading about ZooKeeper, I came across the accepted answer to the question below, which says that concurrent writes are not allowed.
Explaining Apache ZooKeeper
Now my question is: since ZooKeeper has linear writes, that doesn't stop me from using the asynchronous API to create nodes and receive the responses in a callback, right? Internally it may not allow concurrent writes, or am I missing something?

Even though ZooKeeper operates as an ensemble, writes are always served through the leader. The leader is therefore able to queue write requests and complete them sequentially.
Using the asynchronous API does no harm to the approach mentioned above. Even though the write requests are asynchronous (from the client's side), the leader will always make sure they are served sequentially. Once an asynchronous write request is served, the client is notified through the callback. It is as simple as that. Remember, the requests are asynchronous as viewed by the client; from the leader's point of view, they are served sequentially.
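For illustration, here is a minimal sketch using the kazoo Python client (the ensemble address, paths, and payload are assumptions for the example). Each create_async() call returns immediately with a handle, and the callback fires once the write has been committed, so the client never blocks even though the writes are serialized on the server side.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    def on_created(async_result):
        try:
            path = async_result.get()  # raises if the create failed
            print("created:", path)
        except Exception as exc:
            print("create failed:", exc)

    # Fire off several creates without waiting for any of them.
    for i in range(5):
        result = zk.create_async("/demo/node-%d" % i, b"payload", makepath=True)
        result.rawlink(on_created)  # invoked when the write completes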

Related

In asynchronous messaging, is client-broker communication synchronous?

While discussing asynchronous messaging on page 67 of the Microservices Patterns book by Chris Richardson (2019), the author writes:
Synchronous - The client expects a timely response from the service and might even block while it waits.
Asynchronous - The client doesn't block, and the response, if any, isn't necessarily sent immediately.
Given that, it seems that moving from "synchronous" to "asynchronous" communication actually just swaps one synchronous service (e.g., Service A) for a different synchronous service (e.g., a listening port on a message broker like ActiveMQ, Kafka, IBM MQ, AWS Kinesis, etc.).
That's because the client presumably must still block (or at least tie up a thread or connection from a pool) while communicating with the broker instead of communicating directly with Service A, especially since the client probably expects a broker response (e.g., SUCCESS) for reliability purposes.
Is that analysis correct?
Yes, your analysis is correct.
In your case, the broker's client library provides the asynchronous functionality to the calling code (Service A, for example): it doesn't block Service A's thread until the operation finishes, but lets you provide a callback that will be invoked (with the result of the async operation) when it completes.
Now the question is: who invokes that callback? Some code from the broker's client library, running on a thread that presumably does periodic checks to see whether the operation has finished (or some other logic that will eventually emit the result).
So yes, there has to be some background thread doing some synchronous work to grab those results.
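As a concrete illustration, here is a minimal sketch with the kafka-python client (the broker address, topic, and handler names are assumptions for the example). send() returns a future immediately, and a background I/O thread inside the client library invokes the appropriate callback once the broker acknowledges the write.

    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    def on_success(record_metadata):
        print("acked:", record_metadata.topic, record_metadata.offset)

    def on_error(exc):
        print("send failed:", exc)

    # Does not block the calling thread; the library's sender thread
    # handles the broker round trip and fires one of the callbacks.
    future = producer.send("orders", b"order-created")
    future.add_callback(on_success)
    future.add_errback(on_error)

    producer.flush()  # block only at shutdown, to drain pending sends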

Handling server shutdown while serving an HTTP request

Scenario: the server is in the middle of processing an HTTP request when it shuts down. The code may have executed up to any of several points. How are such cases typically handled? A typical example: some downstream HTTP calls had to be made as part of handling the incoming HTTP request. How do you find out whether or not those calls were made when the shutdown occurred? I assume that it's not possible to persist every action in the code flow. Suggestions and views are welcome.
There are two kinds of shutdowns to consider here.
There are graceful shutdowns: when the execution environment politely asks your process to stop (e.g. systemd sends a SIGTERM) and expects it to exit on its own. If your process doesn’t exit within a few seconds, the environment proceeds to kill the process in a more forceful way.
A typical way to handle a graceful shutdown is:
listen for the signal from the environment
when you receive the signal, stop accepting new requests...
...and then wait for all current requests to finish
Exactly how you do this depends on your platform/framework. For instance, Go’s standard net/http library provides a Server.Shutdown method.
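Here is a minimal sketch of that sequence in Python, assuming asyncio and a Unix-like environment (the address and handler are illustrative; a real server would parse the request rather than send a canned response).

    import asyncio
    import signal

    async def handle(reader, writer):
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        stop = asyncio.Event()
        # 1. Listen for the signal from the environment.
        asyncio.get_running_loop().add_signal_handler(signal.SIGTERM, stop.set)
        await stop.wait()           # serve until SIGTERM arrives
        server.close()              # 2. Stop accepting new requests...
        await server.wait_closed()  # 3. ...and wait for the listener to close
                                    #    (on Python 3.12+ this also waits for
                                    #    in-flight connections to finish).

    asyncio.run(main())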
In a typical system, most shutdowns will be graceful. For example, when you need to restart your process to deploy a new version of code, you do a graceful shutdown.
There can also be unexpected shutdowns: e.g. when you suddenly lose power or network connectivity (a disconnected server is usually as good as a dead one). Such faults are harder to deal with. There’s an entire body of research dedicated to making distributed systems robust to arbitrary faults. In the simple case, when your server only writes to a single database, you can open a transaction at the beginning of a request and commit it before returning the response. This will guarantee that either all the changes are saved to the database or none of them are. But if you call multiple downstream services as part of one upstream HTTP request, you need to coordinate them, for example, with a saga.
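Here is a minimal sketch of the transaction-per-request idea, using Python's built-in sqlite3 (the schema and handler are illustrative): either both writes become visible, or, if the process dies mid-request, neither does.

    import sqlite3

    db = sqlite3.connect("app.db")
    db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, state TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS audit (order_id INTEGER, note TEXT)")
    db.commit()

    def handle_request(order_id):
        # "with db" opens a transaction: commits on success, rolls back on error.
        with db:
            db.execute("INSERT INTO orders (id, state) VALUES (?, 'created')", (order_id,))
            db.execute("INSERT INTO audit (order_id, note) VALUES (?, 'handled')", (order_id,))
        return "ok"  # respond only after the commit has succeeded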
For some applications, it may be OK to ignore unexpected shutdowns and simply deal with any inconsistencies manually if/when they arise. This depends on your application.

How does non-blocking I/O actually work from the client's perspective?

So I came across the idea of blocking and non-blocking I/O. What I understood from the concept and some sample implementations is that we implement code on the server side to achieve this behavior.
But now my question is: if (for example, Postman sending an HTTP request to the server) the request has to wait for the server to respond, then what's the point of non-blocking I/O? (Please correct me if I am wrong.) Or is the whole concept just about increasing the server's throughput, rather than actual asynchrony with respect to the client?
For example, in one of my projects, I created a POST endpoint that registers a request in the system for processing and returns a transaction ID; using this transaction ID, I can query the server for the outcome.
I may sound naive, but this concept has confused me a lot, and I do not understand it clearly. Please help.
Thanks
the request has to wait for the server to respond, then what's the point of non-blocking I/O?
There's a confusion here. Waiting for a response and (non-)blocking I/O are only loosely related. You always have to wait for the response; that's why you made the request to begin with. The question is: how?
Non-blocking HTTP: "Dear server, here's my request, please process it and send me a response, I'm going to do something else in the meantime, like calculating n-th digit of Pi (I'm a weirdo)".
Blocking HTTP: "Dear server, here's my request, please process it and send me a response, I'm going to patiently wait for it doing nothing".
Or is the whole concept just about increasing the server's throughput, rather than actual asynchrony with respect to the client?
The whole concept is to be able to do other things while waiting for I/O, and to do so while minimizing the use of threads, which don't scale well.
Asynchronous systems, i.e. systems without the "I'm going to wait idly" part, tend to perform better, at the cost of complexity.
Side note: non-blocking I/O can be used on both the server side and the client side. For example, almost all JS engines in browsers are built on top of an asynchronous engine. JS is often single-threaded, so non-blocking I/O is necessary to achieve any concurrency.
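To make the contrast concrete, here is a minimal client-side sketch using Python's asyncio (the fake request is a stand-in for a real HTTP call): the non-blocking caller starts the request, does other work, and only then collects the response.

    import asyncio

    async def fake_http_request():
        await asyncio.sleep(1)  # stands in for the network round trip
        return "response"

    async def main():
        task = asyncio.create_task(fake_http_request())  # request is in flight
        digits = sum(range(10_000))  # "calculating digits of Pi" meanwhile
        response = await task        # now wait for (or just collect) the response
        print(digits, response)

    asyncio.run(main())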
What I understood from the concept and some sample implementations is that we implement code on the server side to achieve this behavior.
You implement the code wherever you are doing the non-blocking I/O. What a server does has no bearing on whether a client uses blocking or non-blocking I/O, and what a client does has no bearing on whether a server uses blocking or non-blocking I/O.
if (for example, Postman sending an HTTP request to the server) the request has to wait for the server to respond, then what's the point of non-blocking I/O?
So that you're not wasting resources.
Let's consider first a simple console application that hits the web and then does something with the results. In this case there's very little to gain with non-blocking I/O as the application is just going to be sitting around waiting for something to do anyway.
Now let's consider a simple console application that hits 50 different web resources and collates the responses. Here non-blocking I/O is more useful, because with blocking I/O the application would have to either fetch one resource after another or spin up 50 threads. With non-blocking I/O, one thread or a small number of threads is all that is needed to hit all 50 resources and respond promptly as each response arrives.
Now let's consider a GUI version of this application that wants to remain responsive to user input, while also running on low-power low-memory devices in which blocked threads are all the more expensive. The advantages of the above are increased.
Finally, consider a web application that is doing I/O both with its clients and, as a client itself, with a database, a file system, and perhaps other web applications. It may be handling multiple requests at the same time, and blocking on either the I/O it does with a client or any of the I/O it does with the database, files, or other applications would cost a thread, which would put a scalability limit on how many requests it can handle simultaneously. Not blocking on I/O allows threads to be used for other requests while the I/O is pending.
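Going back to the 50-resources case, here is a minimal sketch using asyncio and aiohttp (the URL list is illustrative). All 50 fetches are in flight concurrently on a single thread.

    import asyncio
    import aiohttp

    URLS = ["https://example.com/item/%d" % i for i in range(50)]

    async def fetch(session, url):
        async with session.get(url) as resp:
            return await resp.text()

    async def main():
        async with aiohttp.ClientSession() as session:
            # All 50 requests run concurrently; none of them blocks a thread.
            pages = await asyncio.gather(*(fetch(session, u) for u in URLS))
            print("collated", len(pages), "responses")

    asyncio.run(main())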

Tornado and asynchronous request handling

My question is two-part:
What exactly is meant by an 'asynchronous server', which is what people usually call Tornado? Can someone please provide a concrete example to illustrate the concept/definition?
In the case of Tornado, what exactly is meant by 'non-blocking'? Is this related to the asynchronous nature above? In addition, I read somewhere that it always uses a single thread to handle all requests. Does this mean that requests are handled sequentially, one by one, or in parallel? If the latter, how does Tornado do it?
Tornado uses asynchronous, non-blocking I/O to solve the C10K problem. That means all I/O operations are event-driven: they use callbacks and event notification rather than waiting for the operation to return. Node.js and nginx use a similar model. The exception is tornado.database, which is blocking. The Tornado IOLoop source is well documented if you want to look at it in detail. For a concrete example, see below.
Non-blocking and asynchronous are used interchangeably in Tornado, although in other contexts there are differences; this answer gives an excellent overview. Tornado uses one thread and handles requests sequentially, albeit very quickly, as there is no waiting on I/O. In production you'd typically run multiple Tornado processes.
As for a concrete example, say Tornado receives an HTTP request for which it must (asynchronously) fetch some data and then respond. Here's (very roughly) what happens:
Tornado receives the request and calls the appropriate handler method in your application
Your handler method makes an asynchronous database call, with a callback
The database call returns, the callback is invoked, and the response is sent.
What's different about Tornado (versus, for example, Django) is that between steps 2 and 3 the process can continue handling other requests. The Tornado IOLoop simply holds the connection open and continues processing its callback queue, whereas with Django (and any synchronous web framework) the thread will hang, waiting for the database to return.
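The same flow in code, as a minimal sketch with a modern Tornado coroutine handler (the downstream URL is illustrative; await takes the place of the explicit callback used in older Tornado versions):

    import tornado.ioloop
    import tornado.web
    from tornado.httpclient import AsyncHTTPClient

    class MainHandler(tornado.web.RequestHandler):
        async def get(self):
            # Step 2: issue the asynchronous call; while it is pending, the
            # single-threaded IOLoop is free to serve other connections.
            response = await AsyncHTTPClient().fetch("http://localhost:9000/data")
            # Step 3: execution resumes here and the response is sent.
            self.write(response.body)

    if __name__ == "__main__":
        tornado.web.Application([(r"/", MainHandler)]).listen(8888)
        tornado.ioloop.IOLoop.current().start()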
This is my test of the performance of web.py (CherryPy) and Tornado.
How does CherryPy do it? It handles requests well compared with Tornado when concurrency is low.

Connecting the HTTP request/response model with an asynchronous queue

What's a good way to connect the synchronous HTTP request/response model with an asynchronous queue-based model?
When a user's HTTP request arrives, it generates a work request that goes onto a queue (beanstalkd in this case). One of the workers picks up the work request, does the work, and prepares a response.
The queue model is not request/response - there are only requests, not responses. So the question is, how best do we get the response back into the world of HTTP and back to the user?
Ideas:
Beanstalkd supports lightweight topics or queues (they call them tubes). We could create a tube for each request, have the worker put a message on that tube, and have the HTTP process sit and wait on the tube for the response. I don't particularly like this one, since it leaves Apache processes sitting around consuming memory.
Have the HTTP client poll for the response. The user's initial HTTP request kicks off the job on the queue and returns immediately. The client (the user's browser) polls periodically for a response. On the backend, the worker puts its response into memcached, and we connect nginx to memcached so the polling is lightweight.
Use Comet. Similar to the second option, but with fancier http communication to avoid polling.
I'm leaning towards 2, since it's easy and well known (I haven't used Comet yet). I'm guessing there's probably also a much better, more obvious model I haven't thought of. What do you think?
Here's how to implement request-response efficiently over JMS, which might be helpful (though it's Java/JMS-centric). The general idea is to create a temporary queue per client/thread, then use correlation IDs to correlate requests with replies.
Polling is the simple solution; comet is the more efficient solution. You've got it nailed :)
I personally love Comet (although I'm biased, since I helped write WebSync); it lets your clients subscribe to a channel and get the message when your server process is ready. Works like a champ.
I'm looking to implement a beanstalkd and memcached system to run a number of processes following a request - in this case, looking up information when a user logs in (for example, the number of messages the user has waiting). The info is stored in memcached and then read back on the next page load.
Without knowing more about the tasks you are doing, it's not easy to say what needs to be done, or how. Option #2 is the simplest, however, and may be all you need, depending on what you are pushing back to the workers.
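Here is a minimal sketch of option #2, using the greenstalk beanstalkd client and pymemcache (the handler names, tube, and key scheme are assumptions for the example). The HTTP handler enqueues a job and returns a job ID immediately; a worker writes its result to memcached; the client polls a status endpoint until the result appears.

    import json
    import uuid

    import greenstalk
    from pymemcache.client.base import Client as Memcache

    queue = greenstalk.Client(("127.0.0.1", 11300), use="work", watch="work")
    cache = Memcache(("127.0.0.1", 11211))

    def handle_submit(payload):
        # HTTP POST handler: enqueue the work and return immediately.
        job_id = uuid.uuid4().hex
        queue.put(json.dumps({"id": job_id, "payload": payload}))
        return {"job_id": job_id}  # the client polls with this ID

    def handle_poll(job_id):
        # HTTP GET handler: a cheap memcached lookup; no worker involved.
        result = cache.get("result:" + job_id)
        if result is None:
            return {"done": False}
        return {"done": True, "result": result.decode()}

    def worker_loop():
        # Runs in a separate worker process.
        while True:
            job = queue.reserve()
            task = json.loads(job.body)
            outcome = "processed: %s" % task["payload"]  # the actual work
            cache.set("result:" + task["id"], outcome, expire=3600)
            queue.delete(job)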
