Problem Segregating Original Request and Mirrored Request in nginx

I have two environments (envA, envB). envA needs to mirror its requests to envB, as well as make two other calls to envB containing info from the response in envA. envA is not interested in envB's responses; it's essentially a fire-and-forget situation. The objective is to make sure that the operation and performance of envA are in no way affected by the calls made to envB. We have chosen to use nginx as our proxy and have it do the mirroring, and we've written a Lua script to handle the logic described above.
The problem is that even though the response from the envA services comes back quickly, nginx holds up the return of the envA response to the caller until it's done with the three other calls to envB. I want to get rid of that blockage somehow.
Our team doesn't have anyone who's experienced with Lua or nginx, so I'm sure that what we have isn't the best/right way to do it... but what we've been doing so far is tweaking the connection and read timeouts to reduce any blockage to the minimum amount of time. That's just not getting us to where we want to be.
After doing some research I found https://github.com/openresty/lua-nginx-module#ngxtimerat which, as I understood it, would be the same as creating a ScheduledThreadPoolExecutor in Java and just enqueueing a job onto it, segregating it from the flow of the original request and thus removing the blockage. However, I don't know enough about how the scope changes to be sure I'm not screwing something up data/variable-wise, and I'm also not sure what libraries to use to make the calls to envB, since we've been using ngx.location.capture so far, which, according to the documentation linked above, is not an option when using ngx.timer.at. So I would appreciate any insight on how to properly use ngx.timer.at, or alternative approaches to accomplishing this goal.
This is the Lua code that we're using. I've obfuscated it a great deal, but the bones of what we have are there; the main part is the content_by_lua_block section.
http {
    upstream envA {
        server {{getenv "ENVA_URL"}};
    }
    upstream envB {
        server {{getenv "ENVB_URL"}};
    }

    server {
        underscores_in_headers on;
        aio threads=one;
        listen 443 ssl;
        ssl_certificate {{getenv "CERT"}};
        ssl_certificate_key {{getenv "KEY"}};

        location /{{getenv "ENDPOINT"}}/ {
            content_by_lua_block {
                ngx.req.set_header("x-original-uri", ngx.var.uri)
                ngx.req.set_header("x-request-method", ngx.var.echo_request_method)
                local resp = ""
                ngx.req.read_body()
                if (ngx.var.echo_request_method == 'POST') then
                    local request = ngx.req.get_body_data()
                    -- the call whose response the caller actually needs
                    resp = ngx.location.capture("/envA" .. ngx.var.request_uri, { method = ngx.HTTP_POST, body = request })
                    -- fire-and-forget mirror calls; these block until they complete
                    ngx.location.capture("/envB/req1" .. ngx.var.uri, { method = ngx.HTTP_POST, body = request })
                    ngx.location.capture("/envB/req2", { method = ngx.HTTP_POST, body = request })
                    ngx.status = resp.status
                    ngx.header["Content-Type"] = 'application/json'
                    ngx.header["x-original-method"] = ngx.var.echo_request_method
                    ngx.header["x-original-uri"] = ngx.var.uri
                    ngx.print(resp.body)
                    ngx.location.capture("/envB/req3", { method = ngx.HTTP_POST, body = resp.body })
                end
            }
        }

        location /envA {
            rewrite /envA(.*) $1 break;
            proxy_pass https://envA;
            proxy_ssl_certificate {{getenv "CERT"}};
            proxy_ssl_certificate_key {{getenv "KEY"}};
        }

        ###############################
        # ENV B URLS
        ###############################
        location /envB/req1 {
            rewrite /envB/req1(.*) $1 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req2 {
            rewrite (.*) /envB/req2 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req3 {
            rewrite (.*) /envB/req3 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
    }
}
In terms of the problems we're seeing: response times for envA increase by seconds when requests go through this proxy versus when we're not using it.

Pretty much five minutes after sending off my first answer, I remembered that there's a proper way of doing this kind of cleanup activity.
The function ngx.timer.at allows you to schedule a function to run after a certain amount of time, including 0 for right after the current handler finishes. You can use that to schedule your cleanup duties and other actions for after the response has been returned to the client and the request has ended cleanly.
Here's an example:
content_by_lua_block {
    ngx.say 'Hello World!'
    -- the first callback argument is the 'premature' flag; 'time' is the
    -- extra argument passed after the function
    ngx.timer.at(0, function(_, time)
        local start = os.time()
        -- busy-wait to simulate a few seconds of work
        while os.difftime(os.time(), start) < time do
        end
        os.execute('DISPLAY=:0 zenity --info --width 300 --height 100 --title "Openresty" --text "Done processing stuff :)"')
    end, 3)
}
Note that I use zenity to show a popup window with the message since I didn't have anything set up to check if it really gets called.
EDIT: I should probably mention that to send HTTP requests from the scheduled event you need to use the cosocket API, which doesn't support HTTP requests out of the box, but a quick search turns up lua-resty-http, a library that does exactly that.
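Applied to the original question, a minimal sketch might look like the following. This is an illustration, not a drop-in config: the envB host is a placeholder, and it assumes lua-resty-http is installed (plus a resolver/SSL setup appropriate for your environment).
content_by_lua_block {
    -- answer the caller using only the envA call
    ngx.req.read_body()
    local request = ngx.req.get_body_data()
    local resp = ngx.location.capture("/envA" .. ngx.var.request_uri,
                                      { method = ngx.HTTP_POST, body = request })
    ngx.status = resp.status
    ngx.print(resp.body)

    -- copy out everything the timer needs; most of the ngx.req.* API
    -- (and ngx.location.capture) is unavailable inside a timer
    local uri, body, resp_body = ngx.var.uri, request, resp.body

    local ok, err = ngx.timer.at(0, function(premature)
        if premature then return end
        local httpc = require("resty.http").new()
        -- placeholder host; cosocket-based clients like lua-resty-http
        -- are allowed in timer context, unlike subrequests
        httpc:request_uri("https://envB.example.com" .. uri,
                          { method = "POST", body = body })
        httpc:request_uri("https://envB.example.com/envB/req2",
                          { method = "POST", body = body })
        httpc:request_uri("https://envB.example.com/envB/req3",
                          { method = "POST", body = resp_body })
    end)
    if not ok then
        ngx.log(ngx.ERR, "failed to schedule envB mirror calls: ", err)
    end
}
The response to the caller is sent once the handler returns, and the zero-delay timer fires after that, so the latency of the three envB calls no longer sits on the critical path.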

EDIT: It didn't take me long to find a better solution (see my other answer), but I'm leaving this one up as well because there might at the very least be some value in knowing that this technically works (and that you probably shouldn't be doing it this way).
The quickest thing I could come up with was this:
content_by_lua_block {
    ngx.say 'Hello World!'
    local start = os.time()
    ngx.flush(true)  -- force the response out to the client first
    local sock = ngx.req.socket(true)  -- grab the raw request socket...
    sock:close()                       -- ...and close the connection
    -- busy-wait to simulate slow post-processing
    while os.difftime(os.time(), start) < 4 do
    end
}
First flush the actual output to the client with ngx.flush(), then just close the connection via the raw request socket from ngx.req.socket(true). Pretty sure this isn't the cleanest option, but for the most part it works. I'll post another answer if I can find a better solution, though :)

Related

Don't we need to care about thread-safety issues when writing Lua scripts for OpenResty?

I've written a Lua script with OpenResty to test the thread-safety issue. The script is as follows:
-- counter.lua
local _M = {}
local count = 0

function _M.increment()
    count = count + 1
    return count
end

return _M
I purposely use only one worker process for testing; the nginx.conf is as follows:
worker_processes 1;
error_log logs/error.log debug;

events {
    worker_connections 1024;
}

http {
    lua_package_path "/usr/local/openresty/nginx/?.lua;;";
    server {
        listen 80;
        location / {
            content_by_lua_block {
                local counter = require "counter"
                ngx.say(counter.increment())
            }
        }
    }
}
Surprisingly, the value of the counter always matches the number of concurrent requests I've sent.
My question is: in other languages such as Go or C#, if you increment a counter from multiple threads without any lock, you eventually get strange results. From my Lua experiment, it seems the counter is always accessed by a single thread exclusively. If so, when I use a data structure such as a queue, does this mean I don't need any lock when enqueuing or dequeuing an item (I would just check the size of the queue to ensure there is an item, or space, before dequeuing or enqueuing)?

Nginx: block all traffic with a specific custom header except to some URLs

I have a service hosted in an internal network, receiving traffic on port 443 (via https) behind a custom load balancer, both from the internet and from the internal network.
Internal network requests are coming with an extra custom header, let's call it X-my-lb-header.
I want to block all external incoming traffic to all URIs (returning some HTTP response code), except to some specific ones.
E.g., let's say I want to allow traffic coming to two endpoints: /endpoint1/ (prefix match) and /endpoint2 (exact match).
What is the best way to achieve behaviour like this?
If my understanding is correct, I need something like (not correct syntax below):
location = /endpoint2 {
    if ($http_x_my_lb_header not exists) {
        pass
    } else {
        return 404
    }
    ... the rest of the directives
}
location ~ / {
    if ($http_x_my_lb_header) {
        return 404;
    }
    ... the rest of the directives
}
But since else is not supported in nginx, I cannot figure out how to do it. Any ideas?
So you need some logic like:
if (header exists) {
    if (request URI isn't whitelisted) {
        block the request
    }
}
or in other words:
if ((header exists) AND (request URI isn't whitelisted)) {
    block the request
}
Well, nginx doesn't allow nested if blocks (nor logical conditions in them). While some people invent really weird but creative workarounds (e.g. emulating AND or OR with chained rewrites), a huge part of such problems can be solved using map blocks, an extremely powerful nginx feature.
Here is an example:
# get the $block variable using the 'X-my-lb-header' value
map $http_x_my_lb_header $block {
    # if 'X-my-lb-header' doesn't exist, take the value from the second map block
    ''  $endpoint;
    # the default value (when 'X-my-lb-header' exists) will be an empty string
    # (unless explicitly defined using the 'default' keyword)
}

# get the $endpoint variable using the request URI
map $uri $endpoint {
    # endpoint1 prefix matching (using a regex)
    ~^/endpoint1  '';  # don't block
    # endpoint2 exact matching
    /endpoint2    '';  # don't block
    default       1;   # block everything else
}
Now you can use this check in your server block (don't put it inside a location; use it at the server context):
if ($block) { return 404; }
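Putting the pieces together, a minimal sketch: the map blocks live at the http level, the check at the server level, and the location content here is just a placeholder.
http {
    map $http_x_my_lb_header $block {
        ''  $endpoint;
    }
    map $uri $endpoint {
        ~^/endpoint1  '';
        /endpoint2    '';
        default       1;
    }
    server {
        listen 443 ssl;
        # rewrite-phase check, evaluated before any location content runs
        if ($block) { return 404; }
        location / {
            # ... the rest of the directives
        }
    }
}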

Is it possible to change the response at the proxy level using Varnish?

For example, we have a setup like this:
user -> api gateway -> (specific endpoint) varnish -> backend service
If the backend returns a 500 response with {"message":"error"}, I want to patch this response and return 200 "[]" instead.
Is it possible to do something like this using Varnish or some other proxy?
It is definitely possible to intercept backend errors, and convert them into regular responses.
A very simplistic example is the following:
sub vcl_backend_error {
    set beresp.http.Content-Type = "application/json";
    set beresp.status = 200;
    set beresp.body = "[]";
    return (deliver);
}

sub vcl_backend_response {
    if (beresp.status == 500) {
        return (error(200, "OK"));
    }
}
Whenever your backend fails to reply, which would normally result in an HTTP 503 error, we send an HTTP 200 response with [] as output instead. This output template for backend errors is also triggered when the backend does reply, but with an HTTP 500 error.
In real-world scenarios, I would add some conditional logic in vcl_backend_error to only return the JSON output template when specific criteria are matched, for example when a certain URL pattern is matched.
I would advise the same in vcl_backend_response: maybe you don't want to convert all HTTP 500 errors into regular HTTP 200 responses, so you may want to add conditional logic there as well.
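As a sketch of that conditional logic (the /api/ prefix is a made-up example):
sub vcl_backend_response {
    # only rewrite errors for the hypothetical /api/ endpoints
    if (beresp.status == 500 && bereq.url ~ "^/api/") {
        return (error(200, "OK"));
    }
}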

nginx lua: body_filter_by_lua_block needs to execute sleep, but the API is disabled in that context

I need to make the body filter sleep before the response is sent:
location /configure/result.php {
    body_filter_by_lua_block {
        -- ngx.arg[1] holds the response body chunk here; if the content
        -- contains some value, then sleep
        -- need to execute the sleep code before the response goes out =>
        ngx.sleep(60)  -- API disabled in the context of content_by_lua??
    }
}
But I can't execute the sleep function in the body filter ("API disabled in the context of content_by_lua*"). Is there any other method I can use?
I rebuilt the source code to be able to use the sleep function in body filters, but it did not work; the error was "no co ctx was found". Some suggestions would really help me. I found out that I can use nginx's echo_sleep 10.0; directive, but it sleeps before the content has been requested from the server.
You can use access_by_lua to make a request to "/", but then unfortunately you will send the request for the data twice.
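A rough sketch of that workaround, with the duplicate subrequest it implies; the backend location and the "somevalue" marker are placeholders taken from the question.
location /configure/result.php {
    access_by_lua_block {
        -- ngx.sleep is allowed in the access phase, unlike in body filters.
        -- fetch the response once just to inspect it; this is the duplicated
        -- request mentioned above (subrequests skip the access phase, so no loop)
        local res = ngx.location.capture(ngx.var.request_uri)
        if res.body and res.body:find("somevalue", 1, true) then
            ngx.sleep(60)
        end
    }
    proxy_pass http://backend;  # placeholder upstream
}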

Run an access_by_lua before an if

I'm trying to get some text from an API, then proxy_pass if it equals something. After some testing, I discovered that the access_by_lua block is executed after the if statement.
Here's my current code:
set $protectione 'disabled';

access_by_lua_block {
    local http = require "resty.http"
    local httpc = http.new()
    local res, err = httpc:request_uri("http://127.0.0.1/ddos/fw.json", { method = "GET" })
    ngx.var.protectione = res.body
}

if ( $protectione = 'disabled' ) {
    proxy_pass http://backend;
    set $allowreq 1;
}
Is there an alternative way to solve my problem?
You should read up on nginx's request processing phases; that's the idea you're missing. Nginx directives are not executed sequentially: the if and set directives are handled in the rewrite phase, which is processed before the access phase (where access_by_lua runs).
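One way around this, sketched below, is to make the decision inside the access phase itself rather than with a rewrite-phase if. This reuses the question's fw.json check; treating a body of 'disabled' as the pass condition is an assumption.
location / {
    access_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()
        local res, err = httpc:request_uri("http://127.0.0.1/ddos/fw.json", { method = "GET" })
        if not res then
            ngx.log(ngx.ERR, "protection check failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        -- block here instead of relying on a rewrite-phase 'if'
        if res.body ~= "disabled" then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://backend;
}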
