I am trying to log the users who access our nginx site and are authenticated via Lua.
The current nginx access log format includes a $remote_user variable, but that does not expose the real user who logged in via Lua.
I think achieving this takes two steps:
obtain the authenticated user info from Lua (does Lua have any native method for this?)
write the obtained user info to the nginx access.log, ideally replacing the original $remote_user in the log format.
Can anyone share some thoughts on how to achieve this?
Any help is appreciated!
It depends on what Lua plugin(s) you are using and how the user is represented in HTTP requests. This might be via various types of tokens or cookies, so there is no out-of-the-box solution.
A common option is to write a custom Lua script:
location /mylocation {
    access_by_lua_block {
        ngx.log(ngx.INFO, 'My info')
    }
    ...
}
The info is retrieved in Lua by reading the HTTP header that contains the user credential. The required value may be available in variables such as these:
ngx.var.http_authorization
ngx.var['cookie_mycookiename']
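To get the value into the access log itself (your second step), a common pattern is to declare a custom nginx variable, assign it from Lua, and reference it in log_format. Below is a rough sketch; the $lua_user variable, the log format name and the use of the raw Authorization header are placeholders to be replaced with however your Lua auth actually identifies the user:

http {
    # log the Lua-provided user where $remote_user would normally go
    log_format lua_user_log '$remote_addr - $lua_user [$time_local] "$request" $status';

    server {
        access_log /var/log/nginx/access.log lua_user_log;

        location /mylocation {
            set $lua_user "-";
            access_by_lua_block {
                -- replace with whatever actually identifies the user in your setup
                ngx.var.lua_user = ngx.var.http_authorization or "-"
            }
        }
    }
}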
I am experimenting with a two-service docker-compose recipe, largely based on
the following GitHub project:
https://github.com/rongfengliang/keycloak-openresty-openidc
After streamlining, my configuration looks something like the following fork
commit:
https://github.com/Tythos/keycloak-openresty-openidc
My current issue is that the authorization endpoint ("../openid-connect/auth") uses the internal origin ("http://keycloak-svc:"). Obviously, if users are redirected to this URL, their browsers will need to use the external origin ("http://localhost:"). I thought the PROXY_ADDRESS_FORWARDING variable for the Keycloak service would fix this, but I'm wondering if I need to do something like an on-the-fly rewrite in the nginx/openresty configuration.
To replicate, from the project root:
docker-compose build
docker-compose up --force-recreate --remove-orphans
Then browse to "http://localhost:8090" to start the OIDC flow. Once you encounter the aforementioned origin issue, you can work around it by replacing "keycloak-svc" with "localhost" in the URL, which will forward you to the correct login interface. Once there, though, you will need to add a user to proceed. To add a user, browse to "http://localhost:8080" in a separate tab and follow these steps before returning to the original tab and entering the credentials:
Under Users > Add user:
username = "testuser"
email = "{{whatever}}"
email verified = ON
Groups > add "restybox-group"
After the user is created:
Go to the "Credentials" tab
Set the password to "mypassword"
Temporary = OFF
Authorization Servers such as Keycloak have a base / internet URL when running behind a reverse proxy. You don't need to do anything dynamic in the reverse proxy - have a look at the frontend URL configuration.
Out of interest, I just answered a similar question here that may help you understand the general pattern. Aim for good URLs (not localhost) and a discovery endpoint that returns internet URLs rather than internal URLs.
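In a docker-compose setup like the one in the question, that usually means giving the Keycloak container an explicit frontend URL so that the discovery document and redirects use the browser-facing origin. A rough sketch, assuming the legacy jboss/keycloak image (which reads the KEYCLOAK_FRONTEND_URL environment variable) and that Keycloak is published on the host at localhost:8080:

  keycloak-svc:
    image: jboss/keycloak
    environment:
      PROXY_ADDRESS_FORWARDING: "true"
      # the external origin the browser sees; adjust to your published port/path
      KEYCLOAK_FRONTEND_URL: "http://localhost:8080/auth"
    ports:
      - "8080:8080"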
Although I have been reading and testing many things, I could not get a working solution. I want to do something simple: restrict access to some folders to logged-in users only. If a user is not logged in, they should be redirected to the login page.
I do not want to serve the files through another script. I want to serve files only to authenticated users. I know it is possible because I have seen websites like Dropbox (not sure if they use nginx) and other services (with nginx in the headers) that do not allow direct access to public files without being logged in.
I guess that once the user is authenticated I should set a cookie from the backend so that I can check it in nginx, but I do not know if I can set the cookie and do the check entirely in nginx.
I need to whitelist the login and register URLs, because if I check whether the cookie exists in the request and it does not, I will end up in an infinite redirect loop on the login/register pages.
If the cookie does not exist or is not valid, the user should be redirected to the login page.
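For the whitelist part, my idea is simply to give the login/register URLs their own locations without the cookie check, something like this (the paths and the backend name are just placeholders):

# whitelisted locations: no cookie check, so no redirect loop
location = /login {
    proxy_pass http://my_backend;
}

location = /register {
    proxy_pass http://my_backend;
}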
I checked the following question, which is almost the same as mine:
And I have been trying the following config:
location ~* ^/assets/users/images/(.*)$ {
    if ($cookie_cookieafterlogin != "secret_value") {
        return 301 https://example_domain.com/login;
    }
}
I must say that I am a newbie to nginx and am just starting to learn. The above code is partially working: it blocks anonymous direct access, but it also blocks access for logged-in users, so I think I am not setting the cookie properly in my web app.
Once the user is authenticated I send my cookie data in the header, and checking the headers in the browser I can see this:
Set-Cookie cookieafterlogin=secret_value; expires=Sun, 13-Jun-2021 ....
Can anyone tell me where my mistake is?
Thanks in advance!
I have been using NGINX as a reverse proxy and recently decided I needed to add authentication to a route on a website. I realised that there are multiple NGINX modules that could allow me to handle the authentication through NGINX (see links below). So, I built a single sign-on login page that integrates auth0 to test how this would work.
All the modules are similar and allow you to specify an auth_jwt_key (for validation of the JWT) as well as the variable where the JWT is stored (auth_jwt). I decided to store the JWT in a cookie.
Unfortunately, I cannot get the validation through NGINX to work and keep seeing a 401 unauthorized return code.
Here is the part of the Flask app where I handle the auth0 login and store the JWT in a cookie:
@app.route('/callback')
def callback_handling():
    token = auth0.authorize_access_token()
    response = redirect('/dashboard')
    response.set_cookie('lt_jwt', value=token.get('id_token'), max_age=token.get('expires_in'))
    return response
Can you see something wrong with this? The resulting cookie looks something like this (I removed most of it for readability): lt_jwt eyJ0eX[...]SkRNZyJ9.eyJnaXZ[...]kzNDV9.d3Tzr[...]NzbA staging-auth0-login.scapp.io / 9/13/2018, 1:55:45 PM 1.03 KB
I can put the cookie value into jwt.io and decode it properly, but my problem is that the mentioned NGINX modules have issues decoding it.
Here is an example NGINX config, where I am setting up the authentication:
location = /dashboard {
    auth_jwt_key "AUTH0_CLIENT_SECRET";
    auth_jwt $cookie_lt_jwt;
    root /usr/src/lt/nginx;
    try_files /dashboard.html =404;
}
Basically, I never get to see dashboard.html and always get the 401 unauthorized. The NGINX error.log shows that decoding the JWT failed: [warn] 64#64: *23 JWT: failed to parse jwt, which in this case is an error log from the custom nginx module I used:
// Validate the jwt
if (jwt_decode(&jwt, jwt_data, conf->jwt_key.data, conf->jwt_key.len))
{
    ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, "JWT: failed to parse jwt");
    return NGX_HTTP_UNAUTHORIZED;
}
Reference: https://github.com/maxx-t/nginx-jwt-module/blob/d9a2ece81ca66647f81fc2586b29b348af67f8aa/src/ngx_http_auth_jwt_module.c#L124
Unfortunately, it's not super easy to debug it, but I received a similar response from another custom NGINX module that I also tested: jwt_verify: error on decode: SUCCESS, which is a result of this code segment:
jwt_t* token;
int err = jwt_decode(&token, token_data, alcf->key.data, alcf->key.len);
if (err) {
    ngx_log_error(NGX_LOG_ERR, r->connection->log, errno,
                  "jwt_verify: error on decode: %s", strerror(errno));
    return ngx_http_auth_jwt_set_realm(r, &alcf->realm);
}
Reference: https://github.com/tizpuppi/ngx_http_auth_jwt_module/blob/bf8ae5fd4b8e981b7683990378356181dee93842/ngx_http_auth_jwt_module.c#L247
So in both cases jwt_decode is called and fails (even though its error code apparently is SUCCESS).
The reason I am asking here is that I feel I might be doing something conceptually wrong, for example:
formatting the cookie (I have seen JWT cookies that look like this, with a prefix:
Bearer eyJ0eX[...]SkRNZyJ9.eyJnaXZ[...]kzNDV9.d3Tzr[...]NzbA)
assuming that an auth0-generated token can be used this way at all
...?
Please let me know if you have any insight or a good working example with NGINX + auth0. I have read this article https://auth0.com/blog/use-nginx-plus-and-auth0-to-authenticate-api-clients/ by auth0 detailing how a very similar NGINX Plus module could be used, but I don't have access to that commercial module.
To answer the main question: the cookie should really contain only the JWT, without any prefix. The Python (Flask) code in the question isn't doing anything wrong, and I was able to confirm with curl that you can in fact get authorized with a valid JWT stored in a cookie:
curl https://some.page.com/application --cookie "lt_jwt=eyJ0eX[...]SkRNZyJ9.eyJnaXZ[...]kzNDV9.d3Tzr[...]NzbA"
So what is the problem here - why can’t I authenticate in the browser, when it works via curl?
The reason was in the NGINX config, which required authentication to be disabled in the http context. This was essentially a bug in the NGINX module, which the developer has since fixed.
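For anyone hitting the same thing, the shape of the config that eventually worked looked roughly like this, with JWT auth explicitly disabled at the http level and enabled only inside the protected location (a sketch, assuming the module accepts an explicit off value for auth_jwt):

http {
    # disable JWT auth globally ...
    auth_jwt off;
    auth_jwt_key "AUTH0_CLIENT_SECRET";

    server {
        location = /dashboard {
            # ... and enable it only for the protected route,
            # reading the token from the lt_jwt cookie
            auth_jwt $cookie_lt_jwt;

            root /usr/src/lt/nginx;
            try_files /dashboard.html =404;
        }
    }
}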
Lesson learned: it's usually not your fault ;)
I have a backend which generates three JWT tokens - a reference token, an access token and a refresh token. The reference token stores a reference to the access token, which is used to access the API, and the refresh token is used to reissue the access token when it times out. The problem is that I do not want to pass the access token to the client; instead, I want nginx to store it in memcached. So, my whole task is to filter the response from the backend, which currently looks as simple as:
{"reference_token":"...","access_token":"...","refresh_token":"..."}
Nginx should filter this response, get the access token from it, and store it in memcached. Finally, it should return a new response to the client:
{"reference_token":"...","refresh_token":"..."}
As you can see, there should be no access_token any more. The access token is something I am trying to secure: I don't want to show it to the client, or even pass it to the client at all. What I do not know is the best approach to implement this and which Lua block to use for the task. I know about body_filter_by_lua, but the documentation briefly says that:
Note that the following API functions are currently disabled within this context due to the limitations in NGINX output filter's current implementation
So, it seems like body filtering is rather limited, and I'm not even sure it is possible to call a memcached API inside this block. So, how can I implement my task in the real world? At the least, which Lua (OpenResty) tricks should I use to approach this task?
You may issue a subrequest (e.g., with ngx.location.capture) to your backend within your content handler, for example.
Next you may filter the body as you want and then use lua-resty-memcached, which uses the cosocket API.
The drawback of this approach is that you would have a fully buffered proxy.
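A rough sketch of such a content handler is below; the /token and /backend-token locations, the memcached address and the choice of cache key are assumptions, not something from your setup:

location = /token {
    content_by_lua_block {
        local cjson = require "cjson.safe"
        local memcached = require "resty.memcached"

        -- 1. subrequest to an internal location that proxies to the real backend
        local res = ngx.location.capture("/backend-token", { method = ngx.HTTP_POST })
        if res.status ~= 200 then
            ngx.exit(res.status)
        end

        local body = cjson.decode(res.body)
        if not body or not body.access_token then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        -- 2. store the access token in memcached, keyed by the reference token
        local memc, err = memcached:new()
        if not memc then
            ngx.log(ngx.ERR, "failed to create memcached object: ", err)
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        memc:set_timeout(1000) -- 1 second
        local ok, err = memc:connect("127.0.0.1", 11211)
        if not ok then
            ngx.log(ngx.ERR, "failed to connect to memcached: ", err)
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        memc:set(body.reference_token, body.access_token, 3600)
        memc:set_keepalive(10000, 100)

        -- 3. strip the access token and return the filtered JSON to the client
        body.access_token = nil
        ngx.header.content_type = "application/json"
        ngx.say(cjson.encode(body))
    }
}

The /backend-token location would be a separate internal location that proxy_passes to your backend, so that ngx.location.capture can reach it.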
I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP Basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers, which they do support.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak to your Nginx configuration files. As you rightly pointed out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170 (a simplified sketch of the Basic auth parsing involved is shown below)
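For context, the core of the basicauth handling is just pulling the base64 portion out of the Authorization header (and, only if you ever need the raw key/secret back, base64-decoding it). A simplified sketch, not the exact contents of the gist:

-- extract credentials from "Authorization: Basic <base64>"
local function get_basic_auth_credentials()
    local header = ngx.var.http_authorization
    if not header then
        return nil
    end

    -- the base64 portion is what you would import into 3scale as the user_key
    local encoded = string.match(header, "Basic%s+(.+)")
    if not encoded then
        return nil
    end

    -- only needed if you want the original api_key / api_secret back
    local decoded = ngx.decode_base64(encoded)
    local api_key, api_secret
    if decoded then
        api_key, api_secret = string.match(decoded, "([^:]+):(.+)")
    end

    return encoded, api_key, api_secret
end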
I hope this approach suits your needs. You can also contact us at support@3scale.net if you need more help.