I am using nginx as a reverse proxy for file uploads to an external storage provider.
When processing a file upload, I need to record in my database whether the upload was successful before returning the response to the user. I would therefore like to use the ngx.location.capture method provided by the lua-nginx-module to tell my backend about the outcome of the request. Since I need to wait for the upstream server's response, I can only issue the capture in header_filter_by_lua. Unfortunately, I cannot issue any outward communication in header_filter_by_lua: ngx.location.capture, ngx.socket.* and ngx.exec are only available while the response has not yet arrived.
How can I react to an upstream response in nginx?
Other approaches I've thought about:
Have a script watch the access log and then issue a curl request (it seems like there should be an easier way).
Initially send the file via ngx.location.capture in content_by_lua (I don't think this would handle file sizes of up to 5 GB).
Help is appreciated :)
For the /upload location, use content_by_lua_file with the resty.upload module.
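A minimal sketch of that setup; the /track location, the tracking payload, and the storage-forwarding step are assumptions for illustration, not part of the original answer:

    location /upload {
        content_by_lua_file /etc/nginx/lua/upload.lua;
    }

    # hypothetical internal location that proxies to the tracking backend
    location = /track {
        internal;
        proxy_pass http://backend/track;
    }

And upload.lua:

    -- read the multipart upload in small chunks so a 5 GB file
    -- never has to fit into memory
    local upload = require "resty.upload"

    local form, err = upload:new(8192)  -- 8 KB chunk size
    if not form then
        ngx.log(ngx.ERR, "failed to init resty.upload: ", err)
        return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end
    form:set_timeout(1000)  -- socket timeout in ms

    while true do
        local typ, res, err = form:read()
        if not typ then
            ngx.log(ngx.ERR, "failed to read upload chunk: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        if typ == "body" then
            -- forward the chunk to the external storage provider here,
            -- e.g. over a cosocket or with lua-resty-http
        elseif typ == "eof" then
            break
        end
    end

    -- cosockets and subrequests are available in content_by_lua*,
    -- so the outcome can now be recorded in the database
    local res = ngx.location.capture("/track", {
        method = ngx.HTTP_POST,
        body   = "status=success",  -- hypothetical tracking payload
    })
    if res.status ~= 200 then
        ngx.log(ngx.ERR, "tracking request failed: ", res.status)
    end

    ngx.say("upload complete")

Because the capture happens in the content phase, after the upload has been fully streamed out, the outward-communication restrictions of header_filter_by_lua no longer apply.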
Related
I have a server which acts as a middleman between an HTTP client that I don't control and a remote file-hosting server I don't control. I want to expose a URL through which the client can download a chunk (specified by HTTP range headers my server provides) of a file on the remote server.
There are two important constraints here: I'd like to facilitate this partial download without having the response flow back through my server (response goes straight to client) and without writing a custom client. How can I accomplish this?
One option I tried was having my endpoint send a redirect response with the range headers set on the response, but unfortunately those do not get forwarded on to the subsequent request from the client, and as a result the entire file is downloaded. Are there any other hacky tricks / network wizardry I can employ to achieve this end given the constraints?
I have also been thinking about this for five days. The server only gives you the file when you supply the required header and denies the request without it. If the middleman makes the request with the required header, the file becomes accessible to the client through your middleman; but you want the client to get the file from the server directly, not from your custom server, which is the one passing the headers to the server on the client's behalf.
I have a requirement: when the server returns its response, I need to send a request to another server. But OpenResty says "API disabled in the context of body_filter_by_lua*". I am using the resty.http module.
Thanks
You can change the main logic.
First, issue a subrequest to your upstream (ngx.location.capture or lua-resty-http).
Upon success, you may first send the response downstream from Lua and then issue the next subrequest to your "other server" from Lua.
UPDATE: this doesn't work.
As a second approach, you may treat your "other server" as the upstream and allow the request to reach this upstream only if the subrequest to the original server succeeds.
For both scenarios you may use access_by_lua* and content_by_lua*, where the cosocket API is available.
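A rough sketch of the second approach; the /original internal location, the addresses, and the upstream name other_server are all hypothetical:

    upstream other_server {
        server 10.0.0.2:8080;  # placeholder for the "other server"
    }

    # internal location proxying to the original server
    location = /original {
        internal;
        proxy_pass http://10.0.0.1:8080;
    }

    location /notify {
        access_by_lua_block {
            -- the cosocket/subrequest API is available in this phase
            local res = ngx.location.capture("/original")
            if res.status >= 300 then
                -- block the request to the "other server"
                return ngx.exit(res.status)
            end
        }
        proxy_pass http://other_server;  # reached only on success
    }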
I am developing an Upload application.
I use Google Chrome to upload a big file (gigabytes) and nginx to pass the file to my backend application.
Using Wireshark, I found that Chrome sends the file in one connection with multiple POST requests.
But nginx splits the POST requests and sends each one over a different connection to the backend application.
How can I configure nginx to send all the POST requests over one connection, instead of one connection per POST request?
Oh my god, it's pathetic!
The solution is simply to enable nginx upstream keepalive.
To enable upstream keepalive:
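Something like the following; the backend address is a placeholder:

    # keepalive connection pool to the backend
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;  # idle keepalive connections kept per worker
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # keepalive requires HTTP/1.1
            proxy_set_header Connection "";  # drop the default "close"
        }
    }

Without proxy_http_version 1.1 and the cleared Connection header, nginx speaks HTTP/1.0 to the upstream and closes the connection after every request.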
I am trying to solve an issue with uploads to our web infrastructure.
When a user uploads media to our site, it is proxied (via our Web Proxy tier) to a Java backend with a limited number of threads. When a user has a slow connection or a large upload, this holds one of the Java threads open for a long period of time, reducing overall capacity.
To mitigate this I'd like to implement an 'upload proxy' which will accept the entire HTTP POST data of the upload, and only when it has received all of the data will it proxy that POST to the Java backend quickly, pushing the problem of the upload thread being held open to an HTTP proxy.
Initially I found Apache Traffic Server has a 'buffer_upload' plugin, but it seems a bit bleeding edge and has no support for regex in URLs, although it would solve most of my issues.
Does anyone know a proxy product that would be able to do what I am suggesting (aside from Apache Traffic Server)?
I see that nginx has fairly detailed buffer settings for proxying, but it doesn't seem (from the docs/explanations) to wait for the whole POST before opening a backend connection/thread. Do I have this right?
Cheers,
Tim
Actually, nginx always buffers requests before opening a connection to the backend. It is possible to turn off response buffering with the proxy_buffering directive, or by setting an X-Accel-Buffering response header for per-response buffering control.
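For illustration, a sketch where the backend address and size limits are placeholders:

    location /upload {
        # the request body is buffered before nginx opens the
        # backend connection (nginx's default behaviour)
        client_max_body_size     2g;
        client_body_buffer_size  1m;  # larger bodies spill to a temp file

        proxy_pass http://backend;

        # response buffering can be disabled globally here ...
        proxy_buffering off;
        # ... or per response, by the backend sending the header
        # "X-Accel-Buffering: no"
    }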
It seems most web servers support subrequests.
I found a question about subrequests here:
Subrequest for PHP-CGI
But what is the point of subrequests at all? When is that kind of thing really useful?
Are they defined in the HTTP protocol?
Apache subrequests can be used in, e.g., your PHP application with virtual() to access resources from the same server. The resource gets processed by Apache just as normal requests are, but you don't have the overhead of sending a full HTTP request over the network interface.
Less overhead is probably the only reason one would want to use it instead of a real HTTP request.
Edit: The resources are processed by Apache, which means that the Apache modules are used if configured. You can request a mod_perl- or mod_ruby-processed resource from PHP.
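For comparison, nginx/OpenResty exposes the same idea through ngx.location.capture; a minimal sketch, with made-up location names:

    # an internal resource rendered by a subrequest, invisible from outside
    location = /fragment {
        internal;
        content_by_lua_block {
            ngx.say("rendered by a subrequest")
        }
    }

    location / {
        content_by_lua_block {
            -- handled entirely inside nginx: no extra network round trip
            local res = ngx.location.capture("/fragment")
            ngx.print(res.body)
        }
    }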