nginx lua redis cookie not setting

I am trying to set a cookie with lua+nginx+redis. This is my idea: set a cookie if one doesn't exist, and then save it to Redis.
local redis = require "resty.redis"
local red = redis:new()
local md5 = require "md5"
local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
ngx.say("failed to connect to Redis: ", err)
return
end
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)
if cookie == nil or cookie ~= red_cookie then
ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
local res, err = red:hmset("cookie:".. uid,
"uid", uid,
"date_time", date_time,
"user-agent", args["user-agent"]
)
if not res then
ngx.say("failed to set cookie: ", err)
end
end
and my nginx conf
...
location /cookie {
default_type "text/plain";
lua_code_cache off;
content_by_lua_file /lua/test.lua;
}
I am not seeing the cookie set, however. I get [error] 63519#0: *408 attempt to set ngx.header.HEADER after sending out response headers, client: 127.0.0.1, server: localhost, request: "GET /cookie HTTP/1.1", host: "localhost"
I can't figure out why this doesn't work. I also thought I could set cookies with pure nginx. I need to track users who visit my page. Any thoughts?
Thanks!!
Update
I revised my approach to issue the Redis requests through an upstream location instead. Now I keep getting an invalid reply from redis.parser.
local redis = require "resty.redis"
local md5 = require "md5"
local parser = require "redis.parser"
local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)
local test_cookie = ngx.location.capture("/redis_check_cookie", {args = {cookie_uid="cookie:"..uid}});
if test_cookie.status ~= 200 or not test_cookie.body then
ngx.log(ngx.ERR, "failed to query redis")
ngx.exit(500)
end
local reqs = {
{"hmset", "cookie:"..uid, "path=/"}
}
local raw_reqs = {}
for i, req in ipairs(reqs) do
table.insert(raw_reqs, parser.build_query(req))
end
local res = ngx.location.capture("/redis_set_cookie?" .. #reqs,
{ body = table.concat(raw_reqs, "")
})
local replies = parser.parse_replies(res.body, #reqs)
for i, reply in ipairs(replies) do
ngx.say(reply[1])
end
and my nginx conf now has:
upstream my_redis {
server 127.0.0.1:6379;
keepalive 1024 single;
}
and
location /redis_check_cookie {
internal;
set_unescape_uri $cookie_uid $arg_cookie_uid;
redis2_query hexists $cookie_uid uid;
redis2_pass my_redis;
}
location /redis_set_cookie {
internal;
redis2_raw_queries $args $echo_request_body;
redis2_pass my_redis;
}

Maybe you forgot to show some other part of your setup.
I don't have an OpenResty environment, but my environment is similar.
The code below is my test, and it runs perfectly.
This is the nginx.conf:
location /cookie {
default_type "text/plain";
lua_code_cache off;
content_by_lua_file test.lua;
}
This is the Lua script:
local redis = require "redis"
local red = redis.connect('192.168.1.51',6379)
local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = uid_key -- note: md5.sumhexa omitted in this test
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)
if cookie == nil or cookie ~= red_cookie then
ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
local res, err = red:hmset("cookie:".. uid,
"uid", uid, "date_time", date_time,
"user-agent", args["user-agent"])
if not res then
ngx.say("failed to set cookie: ", err)
end
end
Will you show more of your code?

Friends, yesterday I used my own environment and changed some of your code, and the program ran OK.
But you say your code still fails.
Just now I also tried resty.redis, and the code runs OK.
So I used your environment and your code as given in your post, and the result is OK.
I can't help you any further.
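For what it's worth, here is a minimal sketch of the same idea (assuming lua-resty-redis and the same md5 module as in the post) that follows the ordering ngx_lua requires: connect to Redis before issuing any command, and set the Set-Cookie header before anything is written to the response body, since the response headers are sent with the first output call.
local redis = require "resty.redis"
local md5 = require "md5"

local red = redis:new()
red:set_timeout(1000)

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to Redis: ", err)
    return ngx.exit(500)
end

local uid = ngx.var.cookie_uid

-- Only generate and store a new uid when the client did not send one we know about.
if not uid or red:hget("cookie:" .. uid, "uid") == ngx.null then
    uid = md5.sumhexa(ngx.var.remote_addr .. ngx.time())
    -- The cookie name comes first and attributes such as path follow;
    -- headers must be set before the first byte of the body is emitted.
    ngx.header["Set-Cookie"] = "uid=" .. uid .. "; path=/"
    local res, err = red:hmset("cookie:" .. uid,
        "uid", uid,
        "date_time", ngx.http_time(ngx.time()),
        "user-agent", ngx.req.get_headers()["user-agent"] or "")
    if not res then
        ngx.log(ngx.ERR, "failed to store cookie in Redis: ", err)
    end
end

ngx.say("uid: ", uid)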

Related

Dynamic routing with nginx, lua and redis

I am trying to make nginx perform proxying based on the URI with the help of lua and redis.
So far, I am able to successfully proxy a simple URI like '/hello' to the desired target. I achieved this by saving the mappings in a Redis hash, something like:
HGETALL "127.0.0.1:8080"
1) "/demo1/test/hello4"
2) "example.com/demo1/test/hello4"
3) "/hello"
4) "example.com/hello"
nginx.conf
worker_processes 2;
error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
server {
listen 8080;
location / {
resolver 8.8.4.4; # use Google's open DNS server
set $target '';
access_by_lua '
local http_host = ngx.var.http_host
if not http_host then
ngx.log(ngx.ERR, "no http-host found")
return ngx.exit(400)
end
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
ngx.log(ngx.ERR, "failed to connect to redis: ", err)
return ngx.exit(500)
end
local fPath, err = red:hget(http_host, ngx.var.uri)
if not fPath then
ngx.log(ngx.ERR, "No fPath: ", err)
return ngx.exit(500)
end
ngx.var.target = fPath
';
proxy_pass $target;
}
}
}
However, I also want to handle dynamic URIs, for example:
user/id/1 -> "example.com/user/id/1",
user/id/2 -> "example.com/user/id/2",
user/id/3 -> "example.com/user/id/3",
and so on....
I am not sure how to create the key-value pairs in Redis, or the Lua logic, to handle the dynamic nature of the IDs.
I have looked around but haven't found the right direction or a resource to help me figure this out.
Any help would be really great!
If you want to achieve this in production, I would recommend using a mature API gateway like Apache APISIX or Kong. To implement it yourself, you could store paths with wildcards or Lua patterns in Redis and match them against the request URI later. Applying some simple heuristics (for example, trying an exact match before the pattern scan) would help reduce the range of checking.
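As a rough illustration of that idea, here is a sketch of the access_by_lua logic. The key layout is hypothetical: alongside the exact-match hash, a second hash per host (named here with a ":patterns" suffix) whose fields are Lua patterns such as "^/user/id/%d+$" and whose values are the upstream hosts.
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return ngx.exit(500)
end

-- 1) cheap exact lookup first (the case that already works, e.g. "/hello")
local target, err = red:hget(ngx.var.http_host, ngx.var.uri)

-- 2) fall back to pattern matching only when there is no exact match
if not target or target == ngx.null then
    local res, err = red:hgetall(ngx.var.http_host .. ":patterns") -- hypothetical key
    if res and res ~= ngx.null then
        local routes = red:array_to_hash(res)
        for pattern, upstream in pairs(routes) do
            -- fields are stored as Lua patterns, e.g. "^/user/id/%d+$"
            if string.match(ngx.var.uri, pattern) then
                -- values hold only the upstream host; append the original URI
                target = upstream .. ngx.var.uri
                break
            end
        end
    end
end

if not target or target == ngx.null then
    ngx.log(ngx.ERR, "no route for ", ngx.var.uri)
    return ngx.exit(404)
end
ngx.var.target = target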

NGINX invalid port in upstream

I am using nginx as a proxy to a nodejs application. I have the same application running multiple times each on a different port. The request is directed to the correct application/port based on host name.
So
test1.domain.com would be proxied to 127.0.0.1:8000
test2.domain.com would be proxied to 127.0.0.1:8001
test3.domain.com would be proxied to 127.0.0.1:8002
When I hard-code "proxy_pass http://127.0.0.1:8000;" everything works fine.
Now I have written an njs script that reads a file in the user's home directory to get the port number based on the subdomain. Here is the script:
#inclusion of js file
js_include sites-available/port_assign.js;
js_set $myPort port;
function port(r) {
var host = r.headersIn.host;
var subdomain = host.split('.');
var fs = require('fs');
var filename = '/home/' + subdomain[0] + '/port';
var port = fs.readFileSync(filename);
port.trim();
return(port);
}
This reads the file and returns the port number correctly. I have verified this in the error logs, because I get:
2020/01/21 04:26:46 [error] 2729#2729: *6 invalid port in upstream "127.0.0.1:8001
", client: 96.54.17.234, server: *.foundryserver.com, request: "GET / HTTP/1.1", host: "test1.foundryserver.com"
Now, when I try to use the directive proxy_pass http://127.0.0.1:$myPort; I get an internal server error and the error stated above.
I am not sure what the difference between the two is. I can only think that the variable $myPort somehow contains stray characters or something.
There was some extra data in the port variable. I was able to store the port number in JSON format and parse it in the js; {"port":"8000"} is stored in the file.
function port(r) {
var host = r.headersIn.host;
var subdomain = host.split('.');
var fs = require('fs');
var filename = '/home/' + subdomain[0] + '/myport';
var jport = fs.readFileSync(filename);
var port = JSON.parse(jport);
return(port.port);
}
Parsing the JSON strips any unseen characters (such as the trailing newline visible in the error log above) from the value.

How to change request parameters before passing request to nginx reverse proxy server

I am using nginx to proxy requests to a backend server.
For this, I am using the proxy_pass directive.
Config file:
location = /hello {
proxy_pass http://abc.com;
}
I want to execute the following workflow.
Request to nginx server --> change request parameters --> pass the request to abc.com --> change the response parameters --> send response back to client.
Is this possible with nginx? Any help/pointers on this problem would be appreciated.
You should be able to change or set new parameters with this:
location /hello {
proxy_pass http://abc.com;
if ($args ~* paramToChange=(.+)) {
set $args newParamName=$1;
}
set $args otherParam=value;
}
Update:
There is no way in nginx out of the box to make a request to fetch parameters dynamically and then apply them to another request before sending the client response.
You can do this by adding the Lua module to nginx.
The module can be compiled into nginx by downloading it and adding it to the ./configure options during installation. I also like the OpenResty bundle, which ships with it and other useful modules, like echo, already included.
Once you have the Lua module, this server config will work:
server {
listen 8083;
location /proxy/one {
proxy_set_header testheader test1;
proxy_pass http://localhost:8081;
}
location /proxy/two {
proxy_pass http://localhost:8082;
}
location / {
default_type text/html;
content_by_lua '
local first = ngx.location.capture("/proxy/one",
{ args = { test = "test" } }
)
local testArg = first.body
local second = ngx.location.capture("/proxy/two",
{ args = { test = testArg } }
)
ngx.print(second.body)
';
}
}
I tested this configuration with a couple of Node.js servers like this:
var koa = require('koa');
var http = require('http');
startServerOne();
startServerTwo();
function startServerOne() {
var app = koa();
app.use(function *(next){
console.log('\n------ Server One ------');
console.log('this.request.headers.testheader: ' + JSON.stringify(this.request.headers.testheader));
console.log('this.request.query: ' + JSON.stringify(this.request.query));
if (this.request.query.test == 'test') {
this.body = 'First query worked!';
}else{
this.body = 'this.request.query: ' + JSON.stringify(this.request.query);
}
});
http.createServer(app.callback()).listen(8081);
console.log('Server 1 - 8081');
}
function startServerTwo(){
var app = koa();
app.use(function *(next){
console.log('\n------ Server Two ------');
console.log('this.request.query: ' + JSON.stringify(this.request.query));
if (this.request.query.test == 'First query worked!') {
this.body = 'It Worked!';
}else{
this.body = 'this.request.query: ' + JSON.stringify(this.request.query);
}
});
http.createServer(app.callback()).listen(8082);
console.log('Server 2 - 8082');
}
This was the output from the node console logs:
Server 1 - 8081
Server 2 - 8082
------ Server One ------
this.request.headers.testheader: "test1"
this.request.query: {"test":"test"}
------ Server Two ------
this.request.query: {"test":"First query worked!"}
Here's what happens:
1. Nginx sends server one a request query with the test parameter set.
2. Node server 1 sees the test parameter and responds with 'First query worked!'.
3. Nginx updates the query parameters with the body from the server one response.
4. Nginx sends server two a request with the new query parameters.
5. Node server 2 sees that the 'test' query parameter equals 'First query worked!' and responds to the request with response body 'It Worked!'.
And the curl response or visiting localhost:8083 in a browser shows 'It worked':
curl -i 'http://localhost:8083'
HTTP/1.1 200 OK
Server: openresty/1.9.3.2
Date: Thu, 17 Dec 2015 16:57:45 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
It Worked!

Nginx serves response with 200 even after calling ngx.exit()

I am trying to serve static content from S3. I am using nginx with a Lua script for its configuration. I am new to both nginx and Lua.
The task that I want to achieve is :
Get request URL into nginx.
Authenticate query params of url.
Serve from S3 if parameters are valid.
Send error response if parameters are not valid.
My nginx.conf file is as follows :
location ~ "^/media/(.*?)/(.*?)/(.*)$" {
set $mediaUrl "$1/$2/$3";
set $key "$2/$3"
set $target http://$1.s3.amazonaws.com
rewrite_by_lua "
local uri = '/authenticate'
local res = ngx.location.capture(uri, {args = { param = '/xmedia/'.. ngx.var.mediaUrl }})
if (res.status ~= 200) then
return ngx.exit(ngx.HTTP_GONE)
end
";
rewrite .* /$key break;
proxy_pass $target;
}
location "/authenticate" {
proxy_set_header Range "";
proxy_set_header Content-Range "";
set_by_lua $param "
local params = ngx.req.get_uri_args()
return params.param
";
set $test_url http://127.0.0.1:some_port/authenticate?url=$param;
proxy_pass $test_url;
}
In my case, if authenticate returns 200, then everything works fine. But even if authenticate returns null, nginx still returns the correct file and doesn't give the error specified in the if statement: return ngx.exit(ngx.HTTP_GONE).
Am I doing something wrong? How can I achieve the expected behavior efficiently?
Thanks.
As already mentioned in the comments on your question, the HttpRewriteModule is always executed before rewrite_by_lua; therefore you have to put the rewrite logic in the rewrite_by_lua section using ngx.req.set_uri, like this:
location ~ "^/media/(.*?)/(.*?)/(.*)$" {
set $mediaUrl "$1/$2/$3";
set $key "$2/$3"
set $target http://$1.s3.amazonaws.com
rewrite_by_lua "
local uri = '/authenticate'
local res = ngx.location.capture(uri, {args = { param = '/xmedia/'.. ngx.var.mediaUrl }})
if (res.status ~= 200) then
ngx.exit(ngx.HTTP_GONE)
else
ngx.req.set_uri(string.format('/%s', ngx.var.key))
end
";
proxy_pass $target;
}
...

nginx reverse proxy with Windows authentication that uses NTLM

Does anyone know if it is possible to do a reverse proxy with Windows authentication that uses NTLM? I can't find any example of this. What should the values of the more_set_headers field be?
location / {
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
more_set_input_headers 'Authorization: $http_authorization';
proxy_set_header Accept-Encoding "";
proxy_pass http://host/;
proxy_redirect default;
#This is what worked for me, but you need the headers-more mod
more_set_headers -s 401 'WWW-Authenticate: Basic realm="host.local"';
}
If I access the host directly, the authentication succeeds; if I go through the reverse proxy, the authentication fails every time.
To enable NTLM pass-through with Nginx:
upstream http_backend {
server 2.3.4.5:80;
keepalive 16;
}
server {
...
location / {
proxy_pass http://http_backend/;
proxy_http_version 1.1;
proxy_set_header Connection "";
...
}
}
-- Ramon
As far as I know, this is currently not possible with nginx. I investigated this in depth myself just a little while ago. The basic problem is that NTLM authentication requires the same socket to be used for the subsequent request, but the proxy doesn't do that. Until the nginx development team provides some kind of support for this behavior, the way I handled it was by authenticating in the reverse proxy itself. I am currently doing this using Apache 2.2, mod_proxy, and mod_auth_sspi (not perfect, but it works). Good luck! Sorry nginx, I love you, but we could really use some help for this common use case.
I have since come up with another solution for this. It is still not the same as nginx doing the NTLM itself (which would be nice if the nginx team ever implements it), but for now, what I'm doing works for us.
I've written some lua code that uses an encrypted cookie. The encrypted cookie contains the user's id, the time he authenticated and the ip address from which he authenticated. I'm attaching this stuff here for reference. It's not polished, but perhaps you can use it to develop your own similar scheme.
Basically, how it works is:
1. If the cookie is NOT available, or if it is expired or invalid, nginx makes a service call (pre-auth) to a backend IIS application, passing the client's IP address, and then redirects the client to an IIS web application where I have "Windows Authentication" turned on. The back-end IIS application's pre-auth service generates a GUID and stores an entry in the db for that GUID with a flag indicating it is about to be authenticated.
2. The browser is redirected by nginx to the authenticator app, passing the GUID.
3. The IIS app authenticates the user via Windows authentication and updates the db record for that GUID and client IP address with the user id and the time authenticated.
4. The IIS app redirects the client back to the original request.
5. The nginx Lua code intercepts this call and makes a back-door service call to the IIS app again (post-auth), asking for the user id and the time authenticated. This information is set in an encrypted cookie and sent to the browser. The request is allowed to pass through, and REMOTE_USER is sent along.
6. Subsequent requests from the browser carry the cookie; the nginx Lua code sees the valid cookie and proxies the request directly (without needing to authenticate again, of course), passing the REMOTE_USER request header.
access.lua:
local enc = require("enc");
local strings = require("strings");
local dkjson = require("dkjson");
function beginAuth()
local headers = ngx.req.get_headers();
local contentTypeOriginal = headers["Content-Type"];
print( contentTypeOriginal );
ngx.req.set_header( "Content-Type", "application/json" );
local method = ngx.req.get_method();
local body = "";
if method == "POST" then
local requestedWith = headers["X-Requested-With"];
if requestedWith ~= nil and requestedWith == "XMLHttpRequest" then
print( "bailing, won't allow post during re-authentication." );
ngx.exit(ngx.HTTP_GONE); -- for now, we are NOT supporting a post for re-authentication. user must do a get first. cookies can't be set on these ajax calls when redirecting, so for now we can't support it.
ngx.say("Reload the page.");
return;
else
print( "Attempting to handle POST for request uri: " .. ngx.var.uri );
end
ngx.req.read_body();
local bodyData = ngx.req.get_body_data();
if bodyData ~= nil then
body = bodyData;
end
end
local json = dkjson.encode( { c = contentTypeOriginal, m = method, d = body } );
local origData = enc.base64encode( json );
local res = ngx.location.capture( "/preauth", { method = ngx.HTTP_POST, body = "{'clientIp':'" .. ngx.var.remote_addr .. "','originalUrl':'" .. ngx.var.FrontEndProtocol .. ngx.var.host .. ngx.var.uri .. "','originalData':'" .. origData .. "'}" } );
if contentTypeOriginal ~= nil then
ngx.req.set_header( "Content-Type", contentTypeOriginal );
else
ngx.req.clear_header( "Content-Type" );
end
if res.status == 200 then
ngx.header["Access-Control-Allow-Origin"] = "*";
ngx.header["Set-Cookie"] = "pca=guid:" .. enc.encrypt( res.body ) .. "; path=/"
ngx.redirect( ngx.var.authurl .. "auth/" .. res.body );
else
ngx.exit(res.status);
end
end
function completeAuth( cookie )
local guid = enc.decrypt( string.sub( cookie, 6 ) );
local contentTypeOriginal = ngx.header["Content-Type"];
ngx.req.set_header( "Content-Type", "application/json" );
local res = ngx.location.capture( "/postauth", { method = ngx.HTTP_POST, body = "{'clientIp':'" .. ngx.var.remote_addr .. "','guid':'" .. guid .. "'}" } );
if contentTypeOriginal ~= nil then
ngx.req.set_header( "Content-Type", contentTypeOriginal );
else
ngx.req.clear_header( "Content-Type" );
end
if res.status == 200 then
local resJson = res.body;
-- print( "here a1" );
-- print( resJson );
local resTbl = dkjson.decode( resJson );
if resTbl.StatusCode == 0 then
resTbl = resTbl.Result;
local time = os.time();
local sessionData = dkjson.encode( { u = resTbl.user, t = time, o = time } );
ngx.header["Set-Cookie"] = "pca=" .. enc.encrypt( sessionData ) .. "; path=/"
ngx.req.set_header( "REMOTE_USER", resTbl.user );
if resTbl.originalData ~= nil and resTbl.originalData ~= "" then
local tblJson = enc.base64decode( resTbl.originalData );
local tbl = dkjson.decode( tblJson );
if tbl.m ~= nil and tbl.m == "POST" then
ngx.req.set_method( ngx.HTTP_POST );
ngx.req.set_header( "Content-Type", tbl.c );
ngx.req.read_body();
ngx.req.set_body_data( tbl.d );
end
end
else
ngx.log( ngx.ERR, "error parsing json " .. resJson );
ngx.exit(500);
end
else
print( "error completing auth." );
ngx.header["Set-Cookie"] = "pca=; path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; token=deleted;"
print( res.status );
ngx.exit(res.status);
end
end
local cookie = ngx.var.cookie_pca;
print( cookie );
if cookie == nil then
beginAuth();
elseif strings.starts( cookie, "guid:" ) then
completeAuth( cookie );
else
-- GOOD TO GO...
local json = enc.decrypt( cookie );
local d = dkjson.decode( json );
local now = os.time();
local diff = now - d.t;
local diffOriginal = 0;
if d.o ~= nil then
diffOriginal = now - d.o;
end
if diff > 3600 or diffOriginal > 43200 then
beginAuth();
elseif diff > 300 then
print( "regenerating new cookie after " .. tostring( diff ) .. " seconds." );
local sessionData = dkjson.encode( { u = d.u, t = now, o = d.t } );
ngx.header["Set-Cookie"] = "pca=" .. enc.encrypt( sessionData ) .. "; path=/"
end
ngx.req.set_header( "REMOTE_USER", d.u );
end
strings.lua:
local private = {};
local public = {};
strings = public;
function public.starts(String,Start)
return string.sub(String,1,string.len(Start))==Start
end
function public.ends(String,End)
return End=='' or string.sub(String,-string.len(End))==End
end
return strings;
enc.lua:
-- for base64, try something like: http://lua-users.org/wiki/BaseSixtyFour
local private = {};
local public = {};
enc = public;
local aeslua = require("aeslua");
private.key = "f8d7shfkdjfhhggf";
function public.encrypt( s )
return base64.base64encode( aeslua.encrypt( private.key, s ) );
end
function public.decrypt( s )
return aeslua.decrypt( private.key, base64.base64decode( s ) );
end
return enc;
sample nginx conf:
upstream dev {
ip_hash;
server app.server.local:8080;
}
set $authurl http://auth.server.local:8082/root/;
set $FrontEndProtocol https://;
location / {
proxy_pass http://dev/;
proxy_set_header Host $host;
proxy_redirect default;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_buffers 128 8k;
access_by_lua_file conf/lua/app/dev/access.lua;
}
OK, we wrote Lua code for nginx/OpenResty that solves the NTLM reverse-proxy issue, with some manageable limitations and without needing the commercial nginx version.
According to the nginx documentation:
Allows proxying requests with NTLM Authentication. The upstream connection is bound to the client connection once the client sends a request with the “Authorization” header field value starting with “Negotiate” or “NTLM”. Further client requests will be proxied through the same upstream connection, keeping the authentication context.
upstream http_backend {
server 127.0.0.1:8080;
ntlm;
}
But the ntlm directive is available only with a commercial subscription (NGINX Plus).
You can use this module for non-production environments.
gabihodoroaga/nginx-ntlm-module
The module is not complete but it was enough for me to solve my issues. There is also a blog post about this module at hodo.dev.
