I have a client program I cannot modify. It makes large POST (x-www-form-urlencoded) requests containing hundreds of variables across WAN links, but I only need 5 of them. I'm inserting nginx as a reverse proxy on the local client system. What's the easiest way to get nginx to strip out the extra data?
Two ways I see so far:
1. Use Lua (If I did, should I do content_by_lua, rewrite the body, and then make a subrequest? Or is there a simpler way?)
2. Use form-input-nginx-module and proxy_set_body to parse and grab a few variables out.
I'm already using OpenResty, so Lua means no extra modules. But it probably means writing more locations and so on to do subrequests.
In my opinion the easiest way is to use Lua. The choice between content_by_lua, rewrite_by_lua, access_by_lua, or a combination of them will depend on how you use the response body of your subrequest. That decision also determines whether you need additional locations.
Here are a couple of examples:
1. With content_by_lua targeting a local location (this approach requires defining the subrequest location):
location /original/url {
lua_need_request_body on;
content_by_lua '
--Lots of params but I only need 5 for the subrequest
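--note: get_post_args(5) simply limits parsing to the first 5 args received;
--to keep 5 specific named fields, read them all and re-encode only those keys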
local limited_post_args, err = ngx.req.get_post_args(5)
if not limited_post_args then
ngx.say("failed to get post args: ", err)
return
end
local subreq_uri = "/test/local"
local subreq_response = ngx.location.capture(subreq_uri, {method=ngx.HTTP_POST,
body = ngx.encode_args(limited_post_args)})
ngx.print(subreq_response.body)
';
}
location ~/test/local {
lua_need_request_body on;
proxy_set_header Accept-Encoding "";
proxy_pass http://echo.200please.com;
}
2. With rewrite_by_lua to a remote target (no additional location is needed):
location /original/url/to/remote {
lua_need_request_body on;
rewrite_by_lua '
--Lots of params but I only need 5 for the subrequest
local limited_post_args, err = ngx.req.get_post_args(5)
if not limited_post_args then
ngx.say("failed to get post args: ", err)
return
end
--setting limited number of params
ngx.req.set_body_data(ngx.encode_args(limited_post_args))
--rewriting url
local subreq_path = "/test"
ngx.req.set_uri(subreq_path)
';
proxy_pass http://echo.200please.com;
}
Sample post request with 7 args limited to 5:
curl 'http://localhost/original/url/to/remote' --data 'param1=test&param2=2&param3=3&param4=4&param5=5&param6=6&param7=7' --compressed
response:
POST /test HTTP/1.0
Host: echo.200please.com
Connection: close
Content-Length: 47
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Accept: */*
Accept-Encoding: deflate, gzip
Content-Type: application/x-www-form-urlencoded
param3=3&param4=4&param1=test&param2=2&param5=5
I am trying to rewrite custom header information like "Author" (not part of the URL) using an nginx reverse proxy.
The header "Author:" should be rewritten from "test123" to e.g. "BASIC".
command:
admin1#nginx1:~$ curl -x 192.168.175.134:80 http://home1.MyWeb.eu:8081/home1/index.html?t=1 -H "Author: test123" -vk
TCPdump on apache:
--
GET /home1/index.html?t=1 HTTP/1.0
Host: home1.MyWeb.eu
Connection: close
User-Agent: curl/7.58.0
Accept: */*
Proxy-Connection: Keep-Alive
Author: test123
wanted result:
--
GET /home1/index.html?t=1 HTTP/1.0
Host: home1.MyWeb.eu
Connection: close
User-Agent: curl/7.58.0
Accept: */*
Proxy-Connection: Keep-Alive
Author: BASIC
You can use the proxy_set_header directive in your configuration, e.g.:
proxy_set_header Author "BASIC";
I did it by setting variables. It's a little ugly, but it seems to work.
location / {
<...>
set $rewritten_header $http_myheader;
if ($http_myheader = "something") {
set $rewritten_header somethingelse;
}
proxy_set_header Myheader $rewritten_header;
}
The above will rewrite your header only if the condition matches; otherwise it keeps the original value.
A more elegant approach is to use map if you have a large mapping.
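A rough sketch of the map approach, reusing the Myheader example above (the values are placeholders; note that map must be declared at the http level, outside any server block, and using a variable as the map value needs a reasonably modern nginx):
map $http_myheader $rewritten_header {
    default     $http_myheader;    # keep the original value by default
    "something" "somethingelse";   # rewrite only this value
}
The location block then only needs the proxy_set_header Myheader $rewritten_header; line, without the if.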
I solved the issue like this:
curl -x localhost:80 https://www.dummy.com -H "Authorization: test12" -vk
NGINX configuration:
server {
listen 127.0.0.2:443 ssl;
# the included file below contains ssl certificates
include snippets/www.dummy.com.conf;
root /var/www/html;
set $MyAuthorization 'Basic bGaa9zX25ljYhhWxlcl9=';
location / {
proxy_pass https://www;
proxy_set_header Host www.dummy.com;
proxy_set_header Authorization $MyAuthorization;
}
}
I use OpenResty as my nginx server with spnego-http-auth-nginx-module.
This module replaces the request header Authorization: Negotiate YIIG... with Authorization: Basic ... and sets the REMOTE_USER header.
How can I copy the original Authorization header value to another custom header key, so that the original value is preserved?
This config snippet returns the needed data:
...
set_by_lua_block $xauth {
local inp = ngx.req.raw_header(true)
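-- pull the original "Negotiate <base64>" token out of the raw header text
-- (assumes the base64 value ends with == padding)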
return string.match(inp, "Negotiate .*==")
}
uwsgi_param XAUTH $xauth;
...
I have two servers:
NGINX (it exchanges a file id for a file path)
Golang (it accepts a file id and returns its path)
Example: when the browser makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server https://go.example.com/getpath?file_id=123, which returns this response to NGINX:
{
data: {
filePath: "/static/..."
},
status: "ok"
}
Then NGINX should take the value of filePath and return the file from that location.
So the question is: how do I read the response (get filePath) in NGINX?
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here.
Different kinds of reverse proxies support ESI (Edge Side Includes), a technology that allows a developer to replace different parts of the response body with the content of static files or with response bodies from upstream servers.
Nginx has such a technology as well. It is called SSI (Server Side Includes).
location /file {
ssi on;
proxy_pass http://go.example.com;
}
Your upstream server can produce a body containing <!--# include file="/path-to-static-files/some-static-file.ext" --> and nginx will replace this in-body directive with the content of the file.
But you mentioned streaming...
It means that files will be of arbitrary sizes, and building the response with SSI would eat precious RAM, so we need a plan B.
There is a "good enough" method to feed big files to clients without revealing the static location of the file.
You can use nginx's error handler to serve static files based on information supplied by the upstream server.
The upstream server can, for example, send back a 302 redirect with a Location header field containing the real path to the file.
This response does not reach the client; it is fed into the error handler instead.
Here is an example of config:
location /file {
error_page 302 = @service_static_file;
proxy_intercept_errors on;
proxy_set_header Host $host;
proxy_pass http://go.example.com;
}
location @service_static_file {
root /hidden-files;
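# $upstream_http_location holds the Location header from the upstream's 302 response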
try_files $upstream_http_location 404.html;
}
With this method you will be able to serve files without overloading your system while keeping control over whom you give the file to.
For this to work, your upstream server should respond with status 302 and a typical "Location:" field, and nginx will use that location value to find the file under the "new" root for static files.
The reason this method is only "good enough" (instead of perfect) is that it does not support partial requests (i.e. Range: bytes ...).
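For reference, this is the kind of upstream response the config above expects (the path is illustrative; it must resolve to a real file under /hidden-files):
HTTP/1.1 302 Found
Location: /static/some-file.bin
Content-Length: 0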
It looks like you want to make an API call for data to run decisions and logic against. That's not quite what proxying is about.
The core proxying ability of nginx is not designed for what you are looking to do.
Possible workaround: extending nginx...
Nginx + PHP
Your PHP code would do the legwork:
serve as a client connecting to the Golang server and apply additional logic to the response.
<?php
$response = file_get_contents('https://go.example.com/getpath?file_id='.$_GET["id"]);
preg_match_all("/filePath: \"(.*?)\"/", $response, $filePath);
readfile($filePath[1][0]);
?>
location /getpath {
# hand the request off to the PHP script (fastcgi/php-fpm wiring omitted in this pseudo-code)
try_files /getpath.php =404;
}
This is just a pseudo-code example to get things rolling.
Some miscellaneous observations / comments:
The Golang response shown doesn't look like valid JSON; if it actually is valid JSON, replace preg_match_all with json_decode.
readfile is not super efficient. Consider being creative with a 302 response instead (see the sketch below).
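A rough sketch of that 302 idea, borrowing the error_page interception shown in the earlier answer (the @serve_file name, the /hidden-files root, and the backend address are assumptions; getpath.php would reply with a 302 whose Location header is the real file path instead of calling readfile):
location /getpath {
    proxy_intercept_errors on;
    error_page 302 = @serve_file;
    # hypothetical HTTP backend that runs getpath.php
    proxy_pass http://127.0.0.1:8080;
}
location @serve_file {
    root /hidden-files;
    try_files $upstream_http_location =404;
}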
Nginx + Lua
sites-enabled:
lua_package_path "/etc/nginx/conf.d/lib/?.lua;;";
server {
listen 80 default_server;
listen [::]:80 default_server;
location /getfile {
root /var/www/html;
resolver 8.8.8.8;
set $filepath "/index.html";
access_by_lua_file /etc/nginx/conf.d/getfile.lua;
try_files $filepath =404;
}
}
Test if lua is behaving as expected:
getfile.lua (v1)
ngx.var.filepath = "/static/...";
Simplify the Golang response body to just return a plain path, then use it to set filepath:
getfile.lua (v2)
local http = require "resty.http"
local httpc = http.new()
local query_string = ngx.req.get_uri_args()
local res, err = httpc:request_uri('https://go.example.com/getpath?file_id=' .. query_string["id"], {
method = "GET",
keepalive_timeout = 60,
keepalive_pool = 10
})
if res and res.status == ngx.HTTP_OK then
local body = string.gsub(res.body, '[\r\n%z]', '')
ngx.var.filepath = body;
ngx.log(ngx.ERR, "[" .. body .. "]");
else
ngx.log(ngx.ERR, "missing response");
ngx.exit(504);
end
resty.http
mkdir -p /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http_headers.lua" -P /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http.lua" -P /etc/nginx/conf.d/lib/resty
Is it possible to send an HTTP subrequest in a location block and use the response in the proxy_pass directive?
use case
My upstream application needs some additional information from an API.
I've written a location block that proxies requests with the proxy_pass directive.
Before nginx sends the request to my application, I'd like to send an HTTP request to my API and use several of its response headers as request headers to my application.
This is the outline of what I want to achieve:
server {
server_name ...;
location {
# perform subrequest to fetch additional information from an api
proxy_pass myapplication;
proxy_set_header X-Additional-Info "some information from the subrequest";
}
}
The behaviour is similar to the auth_request module. However, I can't find any documented way to send an additional blocking HTTP request inside a location block using standard nginx configuration.
You can't do it using regular nginx directives but it's quite easy using lua-nginx-module.
This module embeds Lua, via the standard Lua 5.1 interpreter or LuaJIT
2.0/2.1, into Nginx and by leveraging Nginx's subrequests, allows the integration of the powerful Lua threads (Lua coroutines) into the
Nginx event model.
Here's how to accomplish what you need:
1. Create a directory conf.d/
2. Put two files, test.conf and header.lua, into it (see the contents below)
3. Run: docker run -p8080:8080 -v your_path/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
4. Run: curl http://localhost:8080/
test.conf
server {
listen 8080;
location /fetch_api {
# this is a service echoing your IP address
proxy_pass http://api.ipify.org/;
}
location / {
set $api_result "";
access_by_lua_file /etc/nginx/conf.d/header.lua;
proxy_set_header X-Additional-Info $api_result;
# this service just prints out your request headers
proxy_pass http://scooterlabs.com/echo;
}
}
header.lua
local res = ngx.location.capture('/fetch_api', { method = ngx.HTTP_GET, args = {} });
ngx.log(ngx.ERR, res.status);
if res.status == ngx.HTTP_OK then
ngx.var.api_result = res.body;
else
ngx.exit(403);
end
results
curl http://localhost:8080/
Simple webservice echo test: make a request to this endpoint to return the HTTP request parameters and headers. Results available in plain text, JSON, or XML formats. See http://www.cantoni.org/2012/01/08/simple-webservice-echo-test for more details, or https://github.com/bcantoni/echotest for source code.
Array
(
[method] => GET
[headers] => Array
(
[X-Additional-Info] => my-ip-address
[Host] => scooterlabs.com
[Connection] => close
[User-Agent] => curl/7.43.0
[Accept] => */*
)
[request] => Array
(
)
[client_ip] => my-ip-address
[time_utc] => 2018-01-23T19:25:56+0000
[info] => Echo service from Scooterlabs (http://www.scooterlabs.com)
)
Notice the X-Additional-Info header populated with the data obtained in the /fetch_api handler.
In Flask, if you place a file in a directory called static/, then any URL of the form http://localhost/static/foo.jpg will serve that file from static/foo.jpg.
This can also be accomplished via an nginx config:
location /static {
alias /var/www/mywebsite/static;
}
However, I want to do dynamic URL rewriting.
If someone requests the URL http://localhost/username/foo.jpg, I want to tell nginx to fetch the static file from an arbitrary path, say, /var/www/assets/11235/1bcd5.jpg. I want the user to see a pretty URL, and I want the location to be transparent to the user.
Is there an easy way to do this? Ideally, I would be able to do something so that nginx serves the file. However, if Flask needs to serve it, then that is fine too (it isn't like my project has any users yet!)
What am I missing here?
If the files can be stored with the names that are directly referenced in the "pretty" URL, then you can do a simple rewrite in nginx.
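A minimal sketch of that case, assuming the on-disk layout mirrors the pretty URL (the /var/www/assets root is a placeholder; keep in mind that regex locations are matched before plain prefix locations such as /static):
location ~ ^/[^/]+/[^/]+$ {
    root /var/www/assets;
    try_files $uri =404;
}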
However, it appears that you want to map URL path info to other representations on the disk, as in username -> 11235 and foo.jpg -> 1bcd5.jpg. If the content being served should be protected by authentication or sessions, then you should probably keep the mapping and rewriting inside your Flask app, since Flask provides the means to do that.
If the content can be treated as public and only needs the mapping done, then nginx can be configured to grab query string parameters, look them up in a datastore, and rewrite the URL.
Here's an example from agentzh that was originally posted on the nginx mailing list:
Consider that your SEO URI is /baz and the true URI is /foo/bar. And I have the
following table in my local MySQL "test" database:
create table my_url_map(id serial, url text, seo_url text);
insert into my_url_map(url, seo_url)values('/foo/bar', '/baz');
And I build my nginx 0.8.41 this way:
./configure \
--add-module=/path/to/ngx_devel_kit \
--add-module=/path/to/set-misc-nginx-module \
--add-module=/path/to/ngx_http_auth_request_module-0.2 \
--add-module=/path/to/echo-nginx-module \
--add-module=/path/to/lua-nginx-module \
--add-module=/path/to/drizzle-nginx-module \
--add-module=/path/to/rds-json-nginx-module
Also, I have Lua 5.1.4 and the lua-yajl library installed on my system.
And here's the central part in my nginx.conf:
upstream backend {
drizzle_server 127.0.0.1:3306 dbname=test
password=some_pass user=monty protocol=mysql;
drizzle_keepalive max=300 mode=single overflow=ignore;
}
lua_package_cpath '/path/to/your/lua/yajl/library/?.so';
server {
...
location /conv-mysql {
internal;
set_quote_sql_str $seo_uri $query_string; # to prevent sql injection
drizzle_query "select url from my_url_map where seo_url=$seo_uri";
drizzle_pass backend;
rds_json on;
}
location /conv-uid {
internal;
content_by_lua_file 'html/foo.lua';
}
location /jump {
internal;
rewrite ^ $query_string? redirect;
}
# your SEO uri
location /baz {
set $my_uri $uri;
auth_request /conv-uid;
echo_exec /jump $my_uri;
}
}
Contents of foo.lua, the essential glue:
local yajl = require('yajl')
local seo_uri = ngx.var.my_uri
local res = ngx.location.capture('/conv-mysql?' .. seo_uri)
if (res.status ~= ngx.HTTP_OK) then
ngx.throw_error(res.status)
end
res = yajl.to_value(res.body)
if (not res or not res[1] or not res[1].url) then
ngx.throw_error(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
ngx.var.my_uri = res[1].url;
Then let's access /baz from the client side:
$ curl -i localhost:1984/baz
HTTP/1.1 302 Moved Temporarily
Server: nginx/0.8.41 (without pool)
Date: Tue, 24 Aug 2010 03:28:42 GMT
Content-Type: text/html
Content-Length: 176
Location: http://localhost:1984/foo/bar
Connection: keep-alive
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx/0.8.41 (without pool)</center>
</body>
</html>