Run an access_by_lua before an if - nginx

I'm trying to fetch some text from an API and then proxy_pass if it equals a certain value. After some testing, I discovered that access_by_lua is executed after the if statement.
Here's my current code:
set $protectione 'disabled';
access_by_lua_block {
    local http = require "resty.http"
    local httpc = http.new()
    local res, err = httpc:request_uri("http://127.0.0.1/ddos/fw.json", { method = "GET" })
    ngx.var.protectione = res.body
}
if ( $protectione = 'disabled' ) {
    proxy_pass http://backend;
    set $allowreq 1;
}
Is there an alternative approach that solves my problem?

You should definitely take a look at a tutorial on nginx's request-processing phases.
The core issue is that nginx directives are not executed sequentially: the if and set directives run in the rewrite phase, which is processed before the access phase (where access_by_lua runs).
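One way around this (a sketch only, reusing the lua-resty-http call and the 'disabled' flag value from the question; the forbidden-response behavior is an assumption about the intent) is to drop the if/set pair entirely and make the decision inside the access phase itself, letting the request fall through to proxy_pass when protection is disabled:

```nginx
location / {
    access_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()
        local res, err = httpc:request_uri("http://127.0.0.1/ddos/fw.json",
                                           { method = "GET" })
        -- if the API call failed or protection is not disabled, block here
        if not res or res.body ~= "disabled" then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
        -- otherwise fall through to the proxy_pass below
    }
    proxy_pass http://backend;
}
```

This keeps the whole decision in one phase, so no cross-phase variable handoff is needed.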


Swagger UI not working as expected while service behind Nginx reverse-proxy

I use the swagger-ui-express package (https://github.com/scottie1984/swagger-ui-express) (Node.js) and it works fine with this config:
const swaggerUi = require('swagger-ui-express');
const swaggerDocument = require('./swagger.json');
app.use('/api-docs',swaggerUi.serve, swaggerUi.setup(swaggerDocument));
When I go directly to /api-docs everything is fine,
but when I come through nginx, for example host/myApp/api-docs, it redirects me to host/api-docs, and obviously after the redirect I get a 404.
The problem was that the swagger-ui-express middleware redirects the user to host/api-docs without the path prefix, so I solved it with a trick: I mounted the middleware on this path:
const swaggerUi = require('swagger-ui-express');
const swaggerDocument = require('./swagger.json');
app.use('/app-prefix/api-docs',swaggerUi.serve, swaggerUi.setup(swaggerDocument));
and in nginx I defined two location :
location /app-prefix/api-docs {
    proxy_pass http://172.18.0.89:3000/app-prefix/api-docs;
}
location /app-prefix/ {
    proxy_pass http://172.18.0.89:3000/;
}
So when the user sends a request, nginx routes it to the application's second path, /app-prefix/api-docs. The swagger middleware then redirects to host/app-prefix/api-docs, which is the correct path, and now both the application routes and swagger work fine.
Add these options and test it:
const swaggerOption = {
    explorer: true,
    swaggerOptions: {
        validatorUrl: null
    }
};
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument, swaggerOption));
This is an old question, but I just ran into the same problem. I was able to resolve it without using an nginx rewrite.
// serve the swagger ui from a temporary path
app.use('/temp-api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
// the swagger-ui-express middleware that redirects the user to /api-docs
// is not aware of the path prefix added by nginx
const apiDocsRedirectPath = "application/prefix/go/here".concat('/temp-api-docs/');
app.get('/api-docs', function(req, res) {
    res.redirect(apiDocsRedirectPath);
});
I also had this problem and the marked correct answer worked for me. However, I do not understand how it works because I don't know much about nginx.
Here is my solution for future people with this issue.
this.app.use(
    "/api-docs",
    swaggerUi.serve,
    swaggerUi.setup(openapiSpecification as OpenAPIV3.Document)
);
The express app itself is behind an nginx proxy which looks like this
location /api/v1/myapp/ {
    proxy_pass http://myapp:3001/;
}
So when a request is made to example.com/api/v1/myapp/api-docs, it comes out of the proxy to myapp as myapp:3001/api-docs, which is fine, up until (I think) swagger UI express tries to load resources from example.com/api-docs, which of course 404s.
I solved it by adding this as a redirect.
location /api/v1/myapp/ {
    proxy_pass http://myapp:3001/;
}
location /api-docs/ {
    return 302 /api/v1/myapp/api-docs/;
}
So now when swagger goes off to request things at example.com/api-docs it is redirected to the correct location block and works like normal.
Again, I'm not an expert with this, but it seems to work and I think it's easy to understand.
The caveat is that you are stuck with just one /api-docs, so if you have multiple swagger endpoints this does not work.
None of the answers worked for me. I've solved it using a custom middleware.
middlewares/forwardedPrefixSwagger.js
const forwardedPrefixSwagger = async (req, res, next) => {
    req.originalUrl = (req.headers['x-forwarded-prefix'] || '') + req.url;
    next();
};
app.js
app.use('/docs/node/api/swagger/', middlewares.forwardedPrefixSwagger, swaggerUi.serve, swaggerUi.setup(swaggerFile, options));
Note: For this to work the URL must include a trailing slash.
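The rewriting this middleware does can be illustrated with a plain function (no Express required; `applyForwardedPrefix` is a hypothetical standalone name, and the header is the `x-forwarded-prefix` header used above):

```javascript
// Sketch of what forwardedPrefixSwagger does to the request object:
// prepend the proxy-supplied prefix so swagger-ui builds correct URLs.
function applyForwardedPrefix(req) {
  req.originalUrl = (req.headers['x-forwarded-prefix'] || '') + req.url;
  return req;
}

// A request proxied by nginx with the prefix header set:
const proxied = {
  url: '/swagger/',
  headers: { 'x-forwarded-prefix': '/docs/node/api' },
};
console.log(applyForwardedPrefix(proxied).originalUrl); // "/docs/node/api/swagger/"
```

Without the header (a direct, unproxied request), the URL passes through unchanged, which is why the same app works both behind the proxy and when accessed directly.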

Problem Segregating Original Request and Mirrored Request in nginx

I have 2 environments (envA, envB). envA needs to mirror its requests to envB, as well as make 2 other calls to envB containing info from the response in envA. envA is not interested in the response from envB; it's essentially a fire-and-forget situation. The objective is to make sure that the operation and performance of envA are in no way affected by the calls made to envB. We have chosen to use nginx as our proxy and have it do the mirroring, and we've also written a Lua script to handle the logic I described above.
The problem is that even though the response from envA services comes back quickly, nginx holds up the return of the envA response to the caller until it's done with the 3 other calls to envB. I want to get rid of that blockage somehow.
Our team doesn't have anyone experienced with Lua or nginx, so I'm sure that what we have isn't the best/right way to do it... but what we've been doing so far is to tweak the connection and read timeouts to make sure that we reduce any blockage to the minimum amount of time. But this is just not getting us to where we want to be.
After doing some research I found https://github.com/openresty/lua-nginx-module#ngxtimerat which, as I understood it, would be the same as creating a ScheduledThreadPoolExecutor in Java and just enqueueing a job onto it, segregating it from the flow of the original request and thus removing the blockage. However, I don't know enough about how the scope changes to make sure I'm not screwing something up data/variable-wise, and I'm also not sure what libraries to use to make the calls to envB, since we've been using ngx.location.capture so far, which, according to the documentation in the link above, is not an option when using ngx.timer.at. So I would appreciate any insight on how to properly use ngx.timer.at, or alternative approaches to accomplishing this goal.
This is the Lua code that we're using. I've obfuscated it a great deal, but the bones of what we have are there; the main part is the content_by_lua_block section.
http {
    upstream envA {
        server {{getenv "ENVA_URL"}};
    }
    upstream envB {
        server {{getenv "ENVB_URL"}};
    }

    server {
        underscores_in_headers on;
        aio threads=one;
        listen 443 ssl;
        ssl_certificate {{getenv "CERT"}};
        ssl_certificate_key {{getenv "KEY"}};

        location /{{getenv "ENDPOINT"}}/ {
            content_by_lua_block {
                ngx.req.set_header("x-original-uri", ngx.var.uri)
                ngx.req.set_header("x-request-method", ngx.var.echo_request_method)
                resp = ""
                ngx.req.read_body()
                if (ngx.var.echo_request_method == 'POST') then
                    local request = ngx.req.get_body_data()
                    resp = ngx.location.capture("/envA" .. ngx.var.request_uri, { method = ngx.HTTP_POST })
                    ngx.location.capture("/mirror/envB" .. ngx.var.uri, { method = ngx.HTTP_POST })
                    ngx.location.capture("/mirror/envB/req2" .. "/envB/req2", { method = ngx.HTTP_POST })
                    ngx.status = resp.status
                    ngx.header["Content-Type"] = 'application/json'
                    ngx.header["x-original-method"] = ngx.var.echo_request_method
                    ngx.header["x-original-uri"] = ngx.var.uri
                    ngx.print(resp.body)
                    ngx.location.capture("/mirror/envB/req3" .. "/envB/req3", { method = ngx.HTTP_POST, body = resp.body })
                end
            }
        }

        location /envA {
            rewrite /envA(.*) $1 break;
            proxy_pass https://envAUrl;
            proxy_ssl_certificate {{getenv "CERT"}};
            proxy_ssl_certificate_key {{getenv "KEY"}};
        }

        ###############################
        # ENV B URLS
        ###############################
        location /envB/req1 {
            rewrite /envB/req1(.*) $1 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req2 {
            rewrite (.*) /envB/req2 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req3 {
            rewrite (.*) /envB/req3 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
    }
}
In terms of the problems we're seeing: we get increased response times (on the order of seconds) when hitting envA through this proxy versus when we're not using it.
Pretty much five minutes after sending off the first answer I remembered that there's a proper way of doing this kind of cleanup activity.
The function ngx.timer.at allows you to schedule a function to run after a certain amount of time, including 0 for right after the current handler finishes. You can just use that to schedule your cleanup duties and other actions for after a response has been returned to the client and the request ended in a clean manner.
Here's an example:
content_by_lua_block {
    ngx.say 'Hello World!'
    ngx.timer.at(0, function(_, time)
        local start = os.time()
        while os.difftime(os.time(), start) < time do
        end
        os.execute('DISPLAY=:0 zenity --info --width 300 --height 100 --title "Openresty" --text "Done processing stuff :)"')
    end, 3)
}
Note that I use zenity to show a popup window with the message since I didn't have anything set up to check if it really gets called.
EDIT: I should probably mention that to send HTTP requests in the scheduled event you need to use the cosocket API, which doesn't support HTTP requests out of the box, but a quick Google search brings up libraries (such as lua-resty-http, already used elsewhere on this page) that do exactly that.
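Putting the two pieces together, a sketch of what that could look like (this is not tested against the asker's setup: lua-resty-http is assumed to be installed, and the URL and payload are placeholders for the envB calls):

```nginx
content_by_lua_block {
    -- answer the client first
    ngx.say("Hello World!")

    -- then schedule the fire-and-forget call to run after the request ends
    local ok, err = ngx.timer.at(0, function(premature, body)
        if premature then return end
        local http = require "resty.http"  -- lua-resty-http, assumed installed
        local httpc = http.new()
        -- cosocket-based request; placeholder URL standing in for envB
        httpc:request_uri("https://envB.example/req2",
                          { method = "POST", body = body })
    end, "payload to mirror")
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
}
```

Note that only plain values should be passed as extra timer arguments; the request context (ngx.var, ngx.req) is gone by the time the timer callback runs, which is why the body is captured and handed in explicitly.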
EDIT: It didn't take me long to find a better solution (see my other answer) but I'm leaving this one up as well because there might at the very least be some value in knowing this does technically work (and that you probably shouldn't be doing it this way)
The quickest thing I could come up with was this
content_by_lua_block {
    ngx.say 'Hello World!'
    local start = os.time()
    ngx.flush()
    ngx.req.socket:close()
    while os.difftime(os.time(), start) < 4 do
    end
}
First flush the actual output to the client with ngx.flush(), then just close the connection with ngx.req.socket:close(). Pretty sure this isn't the cleanest option, but for the most part it works. I'll post another answer if I can find a better solution though :)

Is it possible for nginx reverse proxy server to compress request body before sending it to backend servers?

I was trying to compress request body data before sending it to the backend server.
To achieve this, I added my module to the official nginx. My module has a rewrite-phase handler,
which I want to rewrite the request body; the code is shown below:
static ngx_int_t
ngx_http_leap_init(ngx_conf_t *cf)
{
    ngx_http_handler_pt        *h;
    ngx_http_core_main_conf_t  *cmcf;

    cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

    h = ngx_array_push(&cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers);
    if (h == NULL)
        return NGX_ERROR;

    *h = ngx_http_leap_rewrite_handler;
    return NGX_OK;
}
In the ngx_http_leap_rewrite_handler method, I have the following line:
rc = ngx_http_read_client_request_body(r, ngx_http_leap_request_body_read);
The ngx_http_leap_request_body_read handler is able to compress the request body when data is posted as application/x-www-form-urlencoded, but not as multipart/form-data.
Since what I really want to do is compress posted files, not posted form fields,
are there any ideas?

The following code for setting proxy in Qt fails in case of manual proxy settings

What is wrong with this code?
If I use the system proxy, the error displayed is "connection refused",
and if I use a manual proxy (the proxy address being the same), the error displayed is "Host not found".
The proxy server is squid with proxy address 172.16.28.11 and port 3128.
Besides, it also doesn't work for a localhost proxy like one created using "tor" or dynamic port forwarding!
if (settDialog.ui->no_proxy->isChecked())
{
    QNetworkProxyFactory::setUseSystemConfiguration(false);
    QNetworkProxy::setApplicationProxy(QNetworkProxy::NoProxy);
}
else if (settDialog.ui->use_s_proxy->isChecked())
{
    QNetworkProxyFactory::setUseSystemConfiguration(true);
}
else if (settDialog.ui->man_proxy->isChecked())
{
    QNetworkProxyFactory::setUseSystemConfiguration(false);
    proxy.setHostName(settDialog.ui->proxy_addr->text());
    proxy.setPort(settDialog.ui->port_num->value());
    if (settDialog.ui->proxyType->currentIndex() == 0)
        proxy.setType(QNetworkProxy::HttpProxy);
    else if (settDialog.ui->proxyType->currentIndex() == 1)
        proxy.setType(QNetworkProxy::Socks5Proxy);
    else if (settDialog.ui->proxyType->currentIndex() == 2)
        proxy.setType(QNetworkProxy::FtpCachingProxy);
    proxy.setHostName(settDialog.ui->username->text());
    proxy.setPassword(settDialog.ui->pwd->text());
    QNetworkProxy::setApplicationProxy(proxy);
}
I may be over-simplifying things, but this looks like a simple incorrect API call.
proxy.setHostName is where you set the host name of the proxy server; your second setHostName call overwrites the host with the username, which explains the "Host not found" error. You set the user name through the proxy.setUser API instead, i.e.:
proxy.setUser(settDialog.ui->username->text());

Varnish: Hiding internal backend requests

This is my scenario:
1) Varnish (172.16.217.131:80) receives a request from a client, e.g.:
http://172.16.217.131:80/a.png
2) The request is forwarded to the default backend (127.0.0.1:8000)
3) The default backend receives the request and processes it
4) That processing results in a new URL, e.g.: http://172.16.217.132:80/a.png (as you can see, the IP has changed)
5) 172.16.217.132:80 is another backend in Varnish's config file
6) The new URL points to a resource that should be provided by Varnish
(that resource is generally an image)
My problem is that the client needs to execute 2 GETs to obtain the image.
My question: how can I configure Varnish to internally receive the response from the first backend (127.0.0.1:8000), fetch the data from the second backend (172.16.217.132:80), and then send the data to the client?
Thanks.
By step 4:
4) That processing results in a new URL, i.e.:
http://172.16.217.132:80/a.png (as you can see, the IP has changed)
do you mean that it results in an HTTP redirect? If so, you could check the backend response status code in vcl_fetch (check for 301, 302, etc.), use the Location header as your new URL, and do a restart. I found a great example of this in the Varnish Book:
sub vcl_fetch {
    if (req.restarts == 0 &&
        req.request == "GET" &&
        beresp.status == 301) {
        set beresp.http.location = regsub(beresp.http.location, "^http://", "");
        set req.http.host = regsub(beresp.http.location, "/.*$", "");
        set req.url = regsub(beresp.http.location, "[^/]*", "");
        return (restart);
    }
}
