I am running a Docker container with a lua-nginx image.
In my Nginx conf file I call the Lua script from the server { } section:
server {
    listen 80;
    server_name _;

    location /payload {
        content_by_lua_file /etc/nginx/handler.lua;
        proxy_pass <myUrl>;
    }
}
My issue is that no matter what, after handler.lua finishes, the request goes straight to proxy_pass, even when the Lua script exits with ngx.exit()!
local method = ngx.req.get_method()

if method == "POST" then
    -- do some stuff
else
    ngx.log(ngx.ERR, "wrong event request method: ", method)
    return ngx.exit(ngx.HTTP_NOT_ACCEPTABLE)
end
So when I do a GET request, after the return ngx.exit(), the nginx config still continues to the proxy_pass.
This makes my Lua code meaningless. I want to proxy_pass only if the method is POST.
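One way to get that behaviour (an untested sketch, assuming the stock lua-nginx-module access_by_lua_block directive is available in your image): do the method check in the access phase and leave proxy_pass alone as the content handler, so ngx.exit() stops the request before proxying ever starts.

server {
    listen 80;
    server_name _;

    location /payload {
        access_by_lua_block {
            -- reject anything that is not a POST before the content phase runs
            if ngx.req.get_method() ~= "POST" then
                ngx.log(ngx.ERR, "wrong event request method: ", ngx.req.get_method())
                return ngx.exit(ngx.HTTP_NOT_ACCEPTABLE)
            end
        }
        proxy_pass <myUrl>;
    }
}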
Related
Example request - http://localhost/iframe?ip=192.168.0.237
I want to proxy-pass the request to the value of the ip argument and remove the path and args after localhost/.
Ideally the proxy_pass should point to 192.168.0.237 and the URL should be http://localhost/.
location /iframe {
    rewrite ^/(iframe/.*)$ http://localhost/ permanent;
    proxy_pass $arg_ip;
}
I'm not sure whether rewrite is the proper way to address this problem.
I would use the ip argument and a rewrite to remove the /iframe prefix:
server {
    listen 8085;

    location /iframe {
        rewrite ^/iframe(.*)$ /$1 break;
        proxy_pass http://$arg_ip;
    }
}

server {
    listen 8080;

    location / { return 200 "$host$uri"; }
}
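A quick sanity check against the test backend above (the addresses are just the ones from this example; the output is roughly what I would expect, not verified):

# point the ip argument at the 8080 test server
curl 'http://localhost:8085/iframe?ip=127.0.0.1:8080'
# expected output: 127.0.0.1/   (the /iframe prefix has been stripped)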
Security Notice
You should whitelist the upstream servers accepted as arguments. If you don't, this becomes a wildcard proxy to every single HTTP server reachable in the network, which is an easy-to-use SSRF attack vector. So please add an extra layer of security.
SSRF Explained:
Let's say we use this configuration without any further security. Given the following NGINX config:
server {
    listen 8085;

    location /iframe {
        rewrite ^/iframe(.*)$ /$1 break;
        proxy_pass http://$arg_ip;
    }
}

# Server for iframe service
server {
    listen 8080;
    root /usr/share/nginx/;

    location / { return 200 "$host$uri\n"; }
}

# Private Server Section here!
server {
    listen 8086;
    allow 127.0.0.1;
    deny all;
    .....

    location / {
        index welcome.html;
    }
}
Trying to reach the secret server directly
curl -v EXTERNALIP:8086
will fail with HTTP 403.
NGINX will only allow connections from localhost/127.0.0.1, as defined in the allow/deny directives.
But let's try the iframe with the ip argument:
$ curl localhost:8085/iframe?ip=127.0.0.1:8086
Welcome to our very secure server! Internals only!
It prints the content of the secret server. A wildcard proxy_pass like this is never a good idea, whether it happens to work or not.
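A minimal whitelisting sketch using an http-level map block; the backend addresses below are placeholders, substitute your own:

# only known backends resolve to a non-empty value
map $arg_ip $allowed_upstream {
    default           "";
    "10.0.0.5:8080"   "10.0.0.5:8080";
    "10.0.0.6:8080"   "10.0.0.6:8080";
}

server {
    listen 8085;

    location /iframe {
        # refuse anything that is not on the whitelist
        if ($allowed_upstream = "") {
            return 403;
        }
        rewrite ^/iframe(.*)$ /$1 break;
        proxy_pass http://$allowed_upstream;
    }
}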
I am using the access_by_lua_block in my nginx configuration to add/modify custom request headers (let's say ngx.req.set_header("foo", "bar")). I am accessing these headers within the header_filter_by_lua_block as ngx.var["http_foo"] just before returning any response to the client.
This works fine in case of passing the request to upstream, however it doesn't work in case of redirects.
So basically,
This works (No redirect).
location /abc {
    proxy_pass http://some_upstream;

    access_by_lua_block {
        ngx.req.set_header("foo", "bar")
    }

    header_filter_by_lua_block {
        -- this correctly gets the value of the "foo" header set above
        ngx.header["foo2"] = ngx.var["http_foo"]
    }
}
This doesn't work (With Redirect)
location /abc {
    access_by_lua_block {
        ngx.req.set_header("foo", "bar")
    }

    header_filter_by_lua_block {
        -- this does not get the value of the "foo" header set above
        ngx.header["foo2"] = ngx.var["http_foo"]
    }

    return 301 xyz.com;
}
The access_by_lua_block is only skipped in the redirect case (the return 301 statement). I don't understand why, since access_by_lua_block is supposed to run before the content phase (link).
As far as I understand, the return directive is executed within the rewrite phase, and the access phase is not executed at all in this case. You can try changing access_by_lua_block to rewrite_by_lua_block and see what happens.
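i.e. something along these lines (my sketch of that first attempt, untested):

location /abc {
    rewrite_by_lua_block {
        ngx.req.set_header("foo", "bar")
    }

    header_filter_by_lua_block {
        ngx.header["foo2"] = ngx.var["http_foo"]
    }

    return 301 xyz.com;
}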
Update
The first attempt to solve the problem gives nothing. Indeed, as the lua-nginx-module documentation for rewrite_by_lua states:
Note that this handler always runs after the standard ngx_http_rewrite_module.
What else you could try is doing the redirect inside the rewrite_by_lua_block itself:
location /abc {
    rewrite_by_lua_block {
        ngx.req.set_header("foo", "bar")
        return ngx.redirect("xyz.com", 301)
    }

    header_filter_by_lua_block {
        -- check whether this now sees the "foo" header set above
        ngx.header["foo2"] = ngx.var["http_foo"]
    }
}
I have two servers:
NGINX (it exchanges a file id for a file path)
Golang (it accepts a file id and returns its path)
Example: when the browser client makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server at https://go.example.com/getpath?file_id=123, which returns this response to NGINX:
{
data: {
filePath: "/static/..."
},
status: "ok"
}
Then NGINX should take the value of filePath and return the file from that location.
So the question is: how do I read the response (get filePath) in NGINX?
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here.
Different kinds of reverse proxies support ESI (Edge Side Includes) technology, which allows a developer to replace different parts of the response body with the content of static files or with response bodies from upstream servers.
Nginx has such technology as well. It is called SSI (Server Side Includes).
location /file {
    ssi on;
    proxy_pass http://go.example.com;
}
Your upstream server can produce a body containing <!--# include file="/path-to-static-files/some-static-file.ext" --> and nginx will replace this in-body directive with the content of that file.
But you mentioned streaming...
That means the files will be of arbitrary sizes, and building the response with SSI would eat precious RAM, so we need a Plan B.
There is a "good enough" method to feed big files to clients without exposing the static location of the file.
You can use nginx's error handler to serve static files based on information supplied by the upstream server.
The upstream server can, for example, send back a 302 redirect with a Location header containing the real path to the file.
This response never reaches the client and is fed into the error handler instead.
Here is an example config:
location /file {
    error_page 302 = @service_static_file;
    proxy_intercept_errors on;
    proxy_set_header Host $host;
    proxy_pass http://go.example.com;
}

location @service_static_file {
    root /hidden-files;
    try_files $upstream_http_location 404.html;
}
With this method you will be able to serve files without overloading your system, while keeping control over whom you give the file to.
For this to work, your upstream server should respond with status 302 and a typical "Location:" header, and nginx will use that header's content to find the file under the "new" root for static files.
The reason this method is only "good enough" (rather than perfect) is that it does not support partial requests (i.e. Range: bytes ...).
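For illustration, the intercepted exchange could look roughly like this (the path is hypothetical):

HTTP/1.1 302 Found
Location: /videos/2021/file-123.mp4

nginx intercepts this response (the client never sees it), jumps to @service_static_file, and try_files $upstream_http_location then serves /hidden-files/videos/2021/file-123.mp4 to the client with a 200.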
It looks like you want to make an API call for data to run decisions and logic against. That's not quite what proxying is about.
The core proxying ability of nginx is not designed for what you are looking to do.
Possible workaround: extending nginx...
Nginx + PHP
Your PHP code would do the legwork: act as a client connecting to the Golang server and apply additional logic to the response.
<?php
$response = file_get_contents('https://go.example.com/getpath?file_id=' . urlencode($_GET["id"]));
preg_match_all("/filePath: \"(.*?)\"/", $response, $filePath);
readfile($filePath[1][0]);
?>

location /getpath {
    # hand the request off to the PHP script (FastCGI/php-fpm wiring omitted)
    try_files /getpath.php =404;
}
This is just a pseudo-code example to get things rolling.
Some miscellaneous observations / comments:
The Golang response shown doesn't look like valid JSON; if it actually is JSON, replace preg_match_all with json_decode.
readfile is not super efficient. Consider being creative with a 302 response.
Nginx + Lua
sites-enabled:
lua_package_path "/etc/nginx/conf.d/lib/?.lua;;";
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /getfile {
        root /var/www/html;
        resolver 8.8.8.8;
        set $filepath "/index.html";
        access_by_lua_file /etc/nginx/conf.d/getfile.lua;
        try_files $filepath =404;
    }
}
Test whether Lua is behaving as expected:
getfile.lua (v1)
ngx.var.filepath = "/static/...";
Then simplify the Golang response body to return just a plain path and use it to set filepath:
getfile.lua (v2)
local http = require "resty.http"
local httpc = http.new()
local query_string = ngx.req.get_uri_args()

local res, err = httpc:request_uri('https://go.example.com/getpath?file_id=' .. query_string["id"], {
    method = "GET",
    keepalive_timeout = 60,
    keepalive_pool = 10
})

if res and res.status == ngx.HTTP_OK then
    -- strip CR/LF/NUL so the path can be used safely in try_files
    local body = string.gsub(res.body, '[\r\n%z]', '')
    ngx.var.filepath = body
    ngx.log(ngx.ERR, "[" .. body .. "]")
else
    ngx.log(ngx.ERR, "missing response: ", err)
    ngx.exit(504)
end
resty.http
mkdir -p /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http_headers.lua" -P /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http.lua" -P /etc/nginx/conf.d/lib/resty
I'm trying to serve two (OpenResty) Lua web applications as virtual hosts from NGINX, which both require their own unique lua_package_path, but I'm having a hard time getting the configuration right.
# Failing example.conf
http {
    lua_package_path "/path/to/app/?.lua;;";

    server {
        listen 80;
        server_name example.org;
    }
}

http {
    lua_package_path "/path/to/dev_app/?.lua;;";

    server {
        listen 80;
        server_name dev.example.org;
    }
}
If you define the http twice (one for each host), you will receive this error: [emerg] "http" directive is duplicate in example.conf
If you define the lua_package_path inside the server block, you will receive this error: [emerg] "lua_package_path" directive is not allowed here in example.conf
If you define the lua_package_path twice in an http block (which does not make any sense anyway), you will receive this error: [emerg] "lua_package_path" directive is duplicate in example.conf
What is the best practice for serving multiple (OpenResty) Lua applications, each with their own lua_package_path, as virtual hosts on the same IP and port?
I faced this issue several months ago.
I do not recommend using debug and release projects in the same server: running one nginx instance for both debug and release may lead to unexpected behaviour.
But nevertheless, you can:
Set package.path = './mylib/?.lua;' .. package.path inside the Lua script.
Set up your own local DEBUG = false flag and manage it inside the app (see the sketch after the config below).
Obviously, use another machine for debugging. IMO, this is the best solution.
Execute a different file, my.release.lua or my.debug.lua:
http {
    lua_package_path "./lua/?.lua;/etc/nginx/lua/?.lua;;";

    server {
        listen 80;
        server_name dev.example.org;
        lua_code_cache off;

        location / {
            default_type text/html;
            content_by_lua_file './lua/my.debug.lua';
        }
    }

    server {
        listen 80;
        server_name example.org;

        location / {
            default_type text/html;
            content_by_lua_file './lua/my.release.lua';
        }
    }
}
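For the DEBUG-flag option mentioned above, a minimal sketch (APP_ENV is an assumed variable name; it must be exposed with nginx's env directive for os.getenv to see it inside a worker):

-- near the top of my.debug.lua / my.release.lua
-- requires `env APP_ENV;` at the top level of nginx.conf,
-- otherwise os.getenv() returns nil inside the worker
local DEBUG = (os.getenv("APP_ENV") == "debug")

if DEBUG then
    ngx.log(ngx.ERR, "debug mode: verbose logging enabled")
end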
Fixed it by removing the lua_package_path from the NGINX configuration (since the OpenResty bundle already takes care of loading packages) and pointing my content_by_lua_file to the absolute path of my app: /var/www/app/app.lua
# example.conf
http {
    server {
        listen 80;
        server_name example.org;

        location / {
            content_by_lua_file '/var/www/app/app.lua';
        }
    }

    server {
        listen 80;
        server_name dev.example.org;

        location / {
            content_by_lua_file '/var/www/app_dev/app.lua';
        }
    }
}
After that I included this at the top of my app.lua file:
-- app.lua
-- Get the current path of app.lua
local function script_path()
    local str = debug.getinfo(2, "S").source:sub(2)
    return str:match("(.*/)")
end

-- Add the current path to the package path
package.path = script_path() .. '?.lua;' .. package.path

-- Load the config.lua package
local config = require("config")

-- Use the config
local redis_host = config.env()['redis']['host']
...
This allows me to read config.lua from the same directory as my app.lua:
-- config.lua
module('config', package.seeall)

function env()
    return {
        env = "development",
        redis = {
            host = "127.0.0.1",
            port = "6379"
        }
    }
end
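As a side note, module(..., package.seeall) is deprecated in newer Lua versions; if you prefer, a plain-table rewrite of the same config (my version, not the original) behaves identically:

-- config.lua, without module()
local _M = {}

function _M.env()
    return {
        env = "development",
        redis = {
            host = "127.0.0.1",
            port = "6379"
        }
    }
end

return _M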
Using this I can now use multiple virtual hosts with their own package paths.
@Vyacheslav Thank you for the pointer to package.path = './mylib/?.lua;' .. package.path! That was really helpful! Unfortunately it kept using the NGINX conf root instead of my application root, even with the . prepended to the path.
I have a service listening on myservice.mycompany.local
We're proxying requests like this:
server {
    listen 80;

    location /myservice/ {
        proxy_pass http://myservice.mycompany.local/;
    }
}
It all works fine: requests to public.mycompany.com/myservice/api/1/ping are correctly transformed into requests to http://myservice.mycompany.local/api/1/ping, since there is a trailing /.
But now, if we try to use a variable:
server {
    listen 80;

    set $MY_SERVICE "myservice.mycompany.local";

    location /acm/ {
        proxy_pass http://$MY_SERVICE/;
    }
}
the local service only receives requests to /, with the URI part being lost.
I've been able to reproduce this "problem" with several versions of nginx:
1.8.1-1~wheezy
1.4.6-1ubuntu3.5
I'm also able to reproduce it locally by replacing the proxied service with a simple nc -l 127.0.0.2 8080 and using that as the value of my variable, so it really seems to be something happening inside nginx.
And this behaviour is not covered in http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
You may have discovered an undocumented feature, but you can always use a rewrite ... break instead of proxy_pass aliasing:
server {
    listen 80;

    set $MY_SERVICE "myservice.mycompany.local";

    location /acm {
        rewrite ^/acm(/.*)$ $1 break;
        proxy_pass http://$MY_SERVICE;
    }
}
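A quick way to verify, reusing the nc trick from the question (addresses are hypothetical):

# terminal 1: fake upstream that just prints whatever it receives
nc -l 127.0.0.2 8080

# terminal 2: with $MY_SERVICE pointed at 127.0.0.2:8080
curl http://localhost/acm/api/1/ping
# nc should now show "GET /api/1/ping HTTP/1.0" instead of "GET / HTTP/1.0"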