Does nginx block on long requests?

I've read a couple of similar questions but haven't gotten the answer I'm looking for.
If Nginx uses an evented model, my understanding is that any work that happens in the event loop blocks the event loop. So does it block, or does it use some other technique to avoid blocking the event loop when a request is slow? Or is it based on the idea that there are no slow requests in a typical HTTP situation?

Related

Asynchronous HTTP(S) request in existing loop

I want to send an HTTPS request to a server but am having trouble figuring out how. The best way for me to do it would be to initiate the request and regularly check back whether it has finished. How can I do this? Is it even possible? What are the alternatives?
The best way to do asynchronous I/O in Rust is to use tokio.
You can find an HTTP+TLS example in the docs: https://tokio.rs/docs/getting-started/tls/
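For what it's worth, the initiate-then-poll pattern the asker describes is not tokio-specific. Here is a minimal sketch of the same idea using Python's asyncio (the host, path, and polling interval are arbitrary placeholders, not from the question):

    import asyncio
    import ssl

    async def fetch(host: str, path: str = "/") -> bytes:
        # Open a TLS connection and issue a minimal HTTP/1.1 GET.
        reader, writer = await asyncio.open_connection(
            host, 443, ssl=ssl.create_default_context()
        )
        writer.write(
            f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()
        )
        await writer.drain()
        body = await reader.read()   # read until the server closes the connection
        writer.close()
        await writer.wait_closed()
        return body

    async def main() -> None:
        # Initiate the request as a background task in the existing loop...
        task = asyncio.create_task(fetch("example.com"))
        # ...and regularly check back whether it has finished.
        while not task.done():
            print("still waiting, doing other work...")
            await asyncio.sleep(0.5)
        print(f"got {len(task.result())} bytes")

    asyncio.run(main())

In tokio, the future returned by the request call plays the role of the task here.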

How does nginx_redis2_module achieve non-blocking operation?

I need an nginx server that receives HTTP requests and sends back responses from a Redis store, and this should be non-blocking. After Googling and going through forums, I came across the nginx_redis2_module. I tried going through the code but was not able to understand how it works. How have they achieved non-blocking operation? Have they achieved this by adding events to nginx's event loop? Is there any document or sample code showing how this is done?
Source: https://github.com/openresty/redis2-nginx-module
The essence of nginx is non-blocking modules.
It is a complex area.
Here you may find some starting points: how to write an Nginx module?
FYI:
When used in conjunction with lua-nginx-module, it is recommended to use the lua-resty-redis library instead of this module though, because the former is much more flexible and memory-efficient.
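nginx's own implementation is in C, but the core idea - the upstream (Redis) socket is registered with the same event loop that drives client connections, so waiting on Redis never blocks other requests - can be sketched in a few lines. This is a toy illustration in Python's asyncio, not nginx code; it assumes a local Redis listening on port 6379 and uses the inline form of the Redis protocol:

    import asyncio

    async def query_redis(command: bytes) -> bytes:
        # The connect, write, and read below all yield to the event loop
        # instead of blocking the process - the essence of how an evented
        # server stays responsive while an upstream is slow.
        reader, writer = await asyncio.open_connection("127.0.0.1", 6379)
        writer.write(command)
        await writer.drain()
        reply = await reader.readline()   # e.g. b"+PONG\r\n" for PING
        writer.close()
        await writer.wait_closed()
        return reply

    async def handle_client(reader, writer):
        await reader.readline()                 # toy: read request line, ignore the rest
        reply = await query_redis(b"PING\r\n")  # inline commands are valid Redis protocol
        writer.write(
            b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (len(reply), reply)
        )
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

While one handler is awaiting Redis, the loop is free to accept and serve other clients; nginx modules achieve the same by registering read/write events for the upstream connection.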

Streaming stdout to a web page

This seems like it should be a really simple thing to achieve; unfortunately, web development was never my strong point.
I have a bunch of scripts, and I would like to launch them from a webpage and see the realtime stdout text on the page. Some of the scripts take a long time to run so the normal single response isn't good enough (I have this working already).
As far as I can see, my options are
Write stdout to a file, and periodically (every couple of seconds) send a request from the client and respond with the contents of this file.
Chunked HTTP responses? I'm not sure if this is what they are used for - I tried to implement this already, but I think I may be misunderstanding their purpose.
Websockets (I'm using a Luvit server, so this isn't an option).
... Something else?
I'm sure there must be a standard way of achieving this; I see other sites doing it all the time. TeamCity, for example. Or chat rooms (vanilla TCP sockets?).
Any pointers in the right direction are appreciated. The simplest method possible; if that's just sending lots of scheduled requests from the client, then so be it.
That heavily reminds me of the Common Gateway Interface (CGI).
Your own ideas all sound like the right direction. As you are using a shell script, with some potentially nontrivial interactions with the web server, I feel it could make sense to point out where to dig for examples of this kind of code, which was common a long time ago and basically always error-prone.
Practically, your script is a CGI script, doing typical things.
In the earlier days and years of the internet, that was the "normal way" to implement web pages that are not just static files (HTML or others).
The page is basically implemented as a shell script (or any other program reading from stdin and writing to stdout).
Part of what you are doing/proposing is very similar, and I think there are useful lessons to learn from old CGI code.
For example, getting the buffering right from inside the script, over stdout, through the web server, and onto the client's page can be tricky, of course.
So digging out old examples could help a lot.
(Much of this may be obvious to you, the OP, personally, so take the "you" as addressing any potential reader.)
The tricky part in general will be the buffering, I expect. If you are used to explicitly handling stdin/stdout buffers in shell for programs that do not support it, you can imagine the kind of things to expect - but if not: I remember CGI being worse, as you have to keep the buffering of the HTTP server in sync too (let's hope it is handled automatically) - so maybe start asking questions/digging for examples early.
The CGI-style way would be exactly what you have implemented now - and if the buffering is right, that should be as real-time as it can get. But I understand that you get timeouts because of the long runtime? Or do you have strongly varying runtimes?
In terms of getting it as real-time as possible, there is nothing better than writing stdout to the HTTP stream.
(I assume we accept the overhead of going through an HTTP server.)
Also, I'm thinking of line buffering, so not flushing every char - is that good enough for the use case? (i.e. no animated progress-indicator lines/ANSI escapes that you want to see in real time)
Then maybe the best approach is to work around issues like timeouts but keep the concept. If real time is not that important, other ways may be better in many respects, of course. One point is that other methods may be required for any kind of scalability.
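To make the chunked-response option from the list above concrete, here is a minimal sketch using Python's standard library (a placeholder shell loop stands in for the real scripts) that streams a child process's stdout to the client line by line via chunked transfer encoding:

    import subprocess
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    # Placeholder for the real long-running script.
    SCRIPT = ["/bin/sh", "-c", "for i in 1 2 3 4 5; do echo line $i; sleep 1; done"]

    class StreamHandler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"   # chunked encoding requires HTTP/1.1

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Transfer-Encoding", "chunked")
            self.end_headers()
            proc = subprocess.Popen(SCRIPT, stdout=subprocess.PIPE)
            for line in proc.stdout:            # line-buffered: one chunk per line
                # Chunk format: hex length, CRLF, data, CRLF.
                self.wfile.write(b"%X\r\n%s\r\n" % (len(line), line))
                self.wfile.flush()
            self.wfile.write(b"0\r\n\r\n")      # terminating zero-length chunk
            self.close_connection = True

    ThreadingHTTPServer(("", 8000), StreamHandler).serve_forever()

The flush after every chunk is the buffering point discussed above: if any layer between the script and the client buffers, the "real-time" effect disappears.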

Continue execution asynchronously after return statement

Is there any way to achieve this?
My actual requirement is: I want to return success from my REST service as soon as I get the data and perform a basic action on it, and then continue with some more operations on the data.
I have thought of two approaches:
Threading - currently I don't know how I would pull this off with threading.
Bulk update - I will schedule a task that will do all this processing after maybe an hour or so.
But I am not very sure how should I start implementing this.
Any help?
In the context of HTTP we have to remember that once we finish the response, the conversation is over. There is really no way to keep the request alive after you have given the client a response. You would need to implement some worker process that runs outside of HTTP that processes things "queued up" or "triggered" by an HTTP request.
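One common shape for such a worker, sketched here in Python with an in-process queue and a background thread (the function names and payload are illustrative, not from the question):

    import queue
    import threading

    jobs: queue.Queue = queue.Queue()

    def basic_action(data) -> None:
        ...  # hypothetical quick step the client waits for

    def heavy_processing(data) -> None:
        ...  # hypothetical "more operations on the data" from the question

    def worker() -> None:
        # Runs outside the request/response cycle, draining the queue forever.
        while True:
            data = jobs.get()
            heavy_processing(data)
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def handle_request(data) -> dict:
        basic_action(data)   # do the basic part synchronously
        jobs.put(data)       # hand the rest to the worker; return immediately
        return {"status": "success"}

    print(handle_request({"id": 1}))   # returns at once; the worker finishes later
    jobs.join()                        # demo only: wait for the queued job to drain

An in-process queue is lost if the process dies; for anything durable, a persistent queue or scheduled task (the "bulk update" idea) is the safer variant of the same pattern.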

Is it Possible To Perform Concurrent Processing in ASP.NET 4?

I'd like to ask: is it possible to use parallel programming in ASP.NET 4? For example, PLINQ.
(All hosting servers are multi-core now - will it give better performance?)
Yes, it's possible. But I doubt it makes sense in most cases. ASP.NET is already highly parallelized, as every request runs on its own thread. If you spin off additional threads to do some of the work, that creates overhead, and this overhead will slow down other threads working on other requests. Then again, you will introduce more overhead when synchronizing the results to finish the request, which will probably also add to the time needed to answer the request.
There might be scenarios where that actually would help increase overall performance, but I think in general it's not worth it.
Of course, only stress tests of both approaches will show whether it's more efficient to go with PLINQ or not to use it.
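The trade-off described above can be illustrated platform-neutrally. In this Python sketch (the pool size and workload are made up), fanning one request's work out across a shared pool may shorten that request, at the cost of workers that could have served other requests:

    from concurrent.futures import ThreadPoolExecutor

    POOL = ThreadPoolExecutor(max_workers=4)   # stand-in for the server's shared pool

    def score(item: int) -> int:
        return item * item   # stand-in for real per-item work

    def handle_request(items: list[int]) -> int:
        # PLINQ-style: split this one request's work across the pool.
        # While these workers are busy here, they are unavailable
        # to other concurrent requests.
        return sum(POOL.map(score, items))

    print(handle_request(list(range(10))))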
The answer is YES.
Why would you think otherwise? (Hence the sarcastic comments.)
Note: unless you take special steps, every HTTP request needs to be completed on the thread that starts serving it. The special steps involve telling ASP.NET to use asynchronous processing for pages, which allows a response to be created and the request to be completed on a different thread (with, potentially, intermediate processing on other threads). If you use TPL (including PLINQ) from the request's original thread, this is not a problem.
