NGINX rate limiting by decoded values from a JWT token

I have a question regarding NGINX rate limiting.
Is it possible to do rate limiting based on a decoded value from the JWT token? I cannot find any information about this in the docs.
Alternatively, a way of doing rate limiting with a purely custom variable (created using LuaJIT) that is assigned a value from my decoded JWT would also do the job.
The thing is that the limit_req module seems to execute way before the request reaches the LuaJIT stage, so it's already too late.
A solution would be appreciated.

As you may know, rate limiting is normally keyed on the unique client IP address; for best results here you should use a unique JWT value (or the token itself) as the key.
You can follow any of these three methods.
Method 1
You can use the JWT token directly in limit_req_zone.
http {
    ...
    limit_req_zone $http_authorization zone=req_zone:10m rate=5r/s;
}
conf.d/default.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    if ($http_authorization = "") {
        return 403;
    }

    location /jwt {
        limit_req zone=req_zone burst=10 nodelay;
        return 200 $http_authorization;
    }
    ...
}
Method 2
You can send the decoded JWT value from the frontend in a request header such as X-Jwt-Decode-Value (available in nginx as $http_x_jwt_decode_value) and then use that in limit_req_zone.
http {
    ...
    limit_req_zone $http_x_jwt_decode_value zone=req_zone:10m rate=5r/s;
}
conf.d/default.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    if ($http_x_jwt_decode_value = "") {
        return 403;
    }

    location /jwt {
        limit_req zone=req_zone burst=10 nodelay;
        return 200 $http_x_jwt_decode_value;
    }
    ...
}
Method 3
You can decode the JWT token inside nginx through the njs (JavaScript) module, the Perl module, or the Lua module, assign the result to a variable, and then use that variable to rate limit.
Description: here I just decode the JWT payload and check that it is not empty; you can work with any decoded JWT value in the same way.
jwt_example.js
function jwt(data) {
    // keep the header and payload segments, base64url-decode them, then parse the JSON
    var parts = data.split('.').slice(0, 2)
        .map(v => String.bytesFrom(v, 'base64url'))
        .map(JSON.parse);
    return { headers: parts[0], payload: parts[1] };
}

function jwt_payload_sub(r) {
    // guard against a missing Authorization header
    var auth = r.headersIn.Authorization;
    if (!auth) {
        return '';
    }
    // strip the "Bearer " prefix and return the "sub" claim
    return jwt(auth.slice(7)).payload.sub;
}

export default {jwt_payload_sub};
nginx.conf
# njs module
load_module modules/ngx_http_js_module.so;

http {
    ...
    include /etc/nginx/conf.d/*.conf;

    js_import main from jwt_example.js;
    js_set $jwt_payload_sub main.jwt_payload_sub;

    limit_req_zone $jwt_payload_sub zone=req_zone:10m rate=5r/s;
}
conf.d/default.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    if ($jwt_payload_sub = "") {
        return 403;
    }

    location /jwt {
        limit_req zone=req_zone burst=10 nodelay;
        return 200 $jwt_payload_sub;
    }
    ...
}

JWT Auth for Nginx
nginx-jwt is a Lua script for the Nginx server (running the HttpLuaModule) that allows you to use Nginx as a reverse proxy in front of your existing set of HTTP services and secure them (authentication/authorization) using a trusted JSON Web Token (JWT) in the Authorization request header, while making little or no changes to the backing services themselves.
IMPORTANT: nginx-jwt is a Lua script that is designed to run on Nginx servers that have the HttpLuaModule installed. But ultimately its dependencies require components available in the OpenResty distribution of Nginx. Therefore, it is recommended that you use OpenResty as your Nginx server, and these instructions make that assumption.
Configuration
At the moment, nginx-jwt only supports symmetric keys (alg = HS256), which is why you need to configure your server with the shared JWT secret below.
1. Export the JWT_SECRET environment variable on the Nginx host, setting it equal to your JWT secret. Then expose it to the Nginx server:
# nginx.conf:
env JWT_SECRET;
2. If your JWT secret is Base64 (URL-safe) encoded, export the JWT_SECRET_IS_BASE64_ENCODED environment variable on the Nginx host, setting it equal to true. Then expose it to the Nginx server:
# nginx.conf:
env JWT_SECRET_IS_BASE64_ENCODED;
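With the secret exposed to nginx, a protected location would typically invoke the script from access_by_lua before proxying. The following is only a minimal sketch assuming the project's jwt.auth() entry point as documented in its README; the /secure/ path and the backend upstream are hypothetical placeholders:
# conf.d/default.conf (sketch):
server {
    listen 80;
    server_name localhost;

    location /secure/ {
        access_by_lua '
            -- reject the request unless a valid JWT is presented
            local jwt = require("nginx-jwt")
            jwt.auth()
        ';
        proxy_pass http://backend;  # hypothetical upstream
    }
}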

Related

How do I transfer POST requests to a different port with nginx?

I want to transfer all POST requests to port 5000, as that's where my Express server is running.
This is my nginx config at the moment:
server {
    if ($host = www.website.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    server_name www.website.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name www.website.com;
    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem; # managed$
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem; # manag$
I know that I should probably have thought about it before writing the code and made all POST requests go to /api, then redirected them from the config. But I haven't, and I don't want to change the code if that's not necessary.
How can I recognize whether a request is a POST request and transfer it to port 5000?
Also, what language is this config file written in? It looks like JS, but it isn't really.
You'd better change your code, but as a quick and dirty hack you can try:
server {
    ...
    if ($request_method = POST) {
        rewrite ^ /api$request_uri last;
    }

    location / {
        # your default location config here
    }

    location /api/ {
        proxy_pass http://127.0.0.1:5000/;
    }
}
NGINX config is not a programming language; it uses its own syntax and is declarative rather than imperative. Changing the order of nginx directives (except for those from ngx_http_rewrite_module) usually makes no difference.
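As a small, hedged illustration of that declarative behaviour (not taken from the answer above): the two prefix locations below match the same way regardless of which is written first, because nginx picks the longest matching prefix; only ngx_http_rewrite_module directives such as if, set, rewrite and return execute in written order.
server {
    listen 80;

    # the order of these two blocks does not matter:
    # nginx always selects the longest matching prefix
    location / {
        root /var/www/html;
    }
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;
    }
}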

nginx proxy_pass application with prefix

I need to serve multiple instances of the same application to different users.
Say I have the users user1, user2, and user3. My nginx.conf looks like this:
server {
    listen 80;
    server_name localhost;

    location /user1/ {
        proxy_pass http://myapp1;
    }
    location /user2/ {
        proxy_pass http://myapp2;
    }
    location /user3/ {
        proxy_pass http://myapp3;
    }
}
The application redirects the user back and forth several times. The userX prefix is lost at the first proxy pass, and subsequent calls are sent to /.
I am using nginx inside a Docker container and have already read and tried the related suggestions.
I simply followed the workaround below to get what I needed done.
upstream user1 {
    server myapp1;
}
upstream user2 {
    server myapp2;
}
upstream user3 {
    server myapp3;
}

server {
    listen 80;
    server_name localhost;

    location / {
        # used a Lua script to identify the user
        proxy_pass http://$userX;
    }
}
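The comment above only hints that a Lua script chooses the user; as one hedged alternative sketch, the same selection could be done with an http-level map keyed on the URI prefix (the $userX variable and upstream names simply mirror the example above):
# http-level: derive the upstream name from the path prefix
map $uri $userX {
    ~^/user1/  user1;
    ~^/user2/  user2;
    ~^/user3/  user3;
    default    user1;  # fallback so proxy_pass never gets an empty name
}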

Nginx Allow http for single resource when default is redirect to https

I configured my Nginx to redirect all HTTP requests to HTTPS:
server {
    server_name url.net;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 301 https://$host$request_uri;
}

server {
    server_name url.net;
    listen 443 ssl http2;
    ....
}
There is one specific resource at url.net/file.xml where I want to allow plain HTTP GET requests without any redirect.
How can I configure that in nginx?
Place the return statement into a location block, then add an exact-match location block for the URI exception. See the nginx documentation on the location directive for details.
For example:
server {
    server_name url.net;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        return 301 https://$host$request_uri;
    }

    location = /file.xml {
        root /path/to/directory;
    }
}
Note that this will not work if you have HTTP Strict Transport Security headers on your secure site.
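For reference, the kind of header that would break this is shown below; once a browser has seen it on the HTTPS site, it will upgrade http://url.net/file.xml to https:// on its own, so the plain-HTTP location is never reached. This line is illustrative, not part of the original answer:
# on the HTTPS server block; browsers that have seen this header
# will rewrite http:// requests to https:// before contacting nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;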

How to disable logging for illegal host headers request in nginx

In my nginx.conf, I added the following code to deny illegal requests:
server {
    ...
    ## Deny illegal Host headers
    if ($host !~* ^(www.mydomain.com|xxx.xxx.xxx.xxx)$) {
        return 444;
    }
    ...
}
But these requests are still always written to the access log. I think they are monitoring requests, because there are so many of them, they come from two unsafe sites, and they are just HEAD requests.
So how can I stop logging these illegal requests to the access log?
Thanks.
You should use separate server blocks:
server {
    listen 80;
    # valid host names
    server_name www.example.com;
    server_name xx.xx.xx.xx;
    # your site goes here
}

# requests with all other hostnames will be caught by this server block
server {
    listen 80 default_server;
    access_log off;
    return 444;
}
That would be simple and efficient.
A bit ashamed to find that the nginx (1.7.0 and later) log module already provides conditional logging, and the documentation example uses exactly the status condition:
The if parameter (1.7.0) enables conditional logging. A request will not be logged if the condition evaluates to “0” or an empty string. In the following example, the requests with response codes 2xx and 3xx will not be logged:
map $status $loggable {
    ~^[23] 0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;
The answer to "Disable logging in nginx for specific request" also mentions this; I just overlooked it.
Now I add the following to my nginx log settings:
map $status $loggable {
    ~444 0;
    default 1;
}

access_log /var/log/nginx/access.log combined if=$loggable;
Now the illegal Host header requests are no longer logged.

In Nginx, "etag" directive doesn't work for proxy_pass?

I'm using Nginx 1.9.2, and the following is my configuration:
upstream httpserver0 {
    server 127.0.0.1:35011 max_fails=3 fail_timeout=30s; #H_server0
}

server {
    listen 443 ssl;
    listen 80;
    server_name 11.22.33.44; #my_server_name
    etag on;

    location ~* \.(ts|raw)$ {
        set $server_id "0";
        if ( $uri ~ ^/(.*cfs+)/(.*)$ ){
            set $server_id $1;
        }
        if ( $server_id = "4cfs" ){
            proxy_pass http://httpserver0$request_uri;
        }
    }
}
I'm using the upstream module and proxy_pass as a reverse proxy, and I enabled the ETag function with etag on within the server block.
However, when I check the headers of the HTTP response, I don't find the ETag field at all.
Does anyone have ideas about this? Thanks!
No, it does not work for proxy_pass.
http://nginx.org/r/etag
Enables or disables automatic generation of the “ETag” response header field for static resources.
What's more, it's turned on by default.
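In other words, nginx only generates an ETag for responses it serves from disk itself; for proxy_pass locations the header must come from the upstream, and nginx passes it through unless it is explicitly hidden. A minimal sketch of that distinction, with hypothetical paths and location names:
server {
    listen 80;

    # static files served by nginx itself get an auto-generated ETag
    location /static/ {
        root /var/www;
        etag on;                    # on by default anyway
    }

    # proxied responses only carry an ETag if the upstream sends one
    location /stream/ {
        proxy_pass http://httpserver0;
        # proxy_hide_header ETag;   # this would strip an upstream ETag
    }
}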
