303 cache is preventing headers from being sent - nginx

I am configuring an Odoo server to block certain routes for people outside of some networks. I am using Nginx as a reverse proxy on this server.
My issue is with the route /web/session/logout. When I add the two following blocks to my config, browsers start caching the 303 response from that route and stop sending headers to the server. Since the headers are missing, the server cannot invalidate the session, which leads to a bunch of issues.
location ~ ^/web/ {
    add_header Content-Security-Policy upgrade-insecure-requests;
    proxy_pass http://odoo-xxx-test;
    proxy_buffering on;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    expires 864000;
    allow *some ip* ;
    allow *some ip* ;
    allow *some ip* ;
    deny all;
}

location ~ ^\/web\/(action\/|content\/|static\/|image\/|login|session\/|webclient\/) {
    add_header Content-Security-Policy upgrade-insecure-requests;
    proxy_pass http://odoo-xxx-test;
    proxy_cache_valid 200 60m;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering on;
    expires 864000;
}
From my understanding, the first location should deny access to all /web/ routes except from the three allowed IPs, and the second should allow everyone outside of those three IPs to access those seven routes. I don't understand how those two blocks affect the caching of the /web/session/logout route; removing them fixes the issue, but this behavior is in the requirements for the project.
Any help would be welcome!

Found the solution to my issue: it turns out the expires directive was causing the 303 response to be cached and served without contacting the server behind the proxy.
Removing the expires directive from both location blocks fixed our issue.
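For illustration, a minimal sketch (not the exact production config) of how the caching could be scoped instead: long-lived expires only on static assets, and an explicit no-store on the session routes so the 303 from /web/session/logout is never cached. The upstream name is the one from the question.
# Sketch only: keep long-lived caching for static assets...
location ~ ^/web/static/ {
    proxy_pass http://odoo-xxx-test;
    expires 864000;                        # long client-side caching is fine for static files
}

# ...and make sure session routes are never cached by the browser or any intermediary.
location ~ ^/web/session/ {
    proxy_pass http://odoo-xxx-test;
    add_header Cache-Control "no-store";
}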

Related

Nginx: rewrite port in URL from reverse-proxied app

So I have set up a reverse proxy to tunnel my application.
Unfortunately, the application thinks it is served via HTTP rather than HTTPS and gives out URLs with port 80.
How can I handle this in the nginx reverse proxy (by rewriting, maybe)?
When I go on the page:
https://my.server.com
index.php loads, everything is okay
after clicking something I have a URL like this:
https://my.server.com:80/page/stuff/?redirect_to
which throws an error in the browser because my reverse proxy doesn't serve SSL on port 80.
How can I mitigate this?
My current nginx ssl vhost for the site:
... ssl stuff ...
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
location / {
    proxy_pass http://localhost:22228;
    proxy_buffering off;
    proxy_redirect off;
    proxy_read_timeout 43800;
    proxy_pass_request_headers on;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass_header Content-Type;
    proxy_pass_header Content-Disposition;
    proxy_pass_header Content-Length;
    proxy_set_header X-Forwarded-Proto https;
}
(yes I know my request headers look like a christmas tree 🎄)
Also, bonus points if you can point me to the documentation addressing this issue and tell me what the mechanism is called.
For rewriting the response body you can use the http_sub_module:
location / {
    proxy_pass http://localhost:22228;
    sub_filter_once off;
    sub_filter_types text/css application/javascript; # in addition to text/html
    sub_filter "//my.server.com:80/" "//my.server.com/";
}
Many people say (1, 2) that you need to disable compression when using the sub_filter directive:
proxy_set_header Accept-Encoding "";
For me it works fine without this line in the config, but that may be a feature of OpenResty, which I use instead of nginx.
If your app generates HTTP 30x redirects that explicitly include domain:port, you can rewrite the Location header value with the proxy_redirect directive:
proxy_redirect //my.server.com:80/ //my.server.com/;
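Putting the two together, a rough sketch of how the SSL vhost location could look with both rewrites in place, reusing the upstream on port 22228 from the question (an illustration, not a drop-in config):
location / {
    proxy_pass http://localhost:22228;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;

    # Rewrite Location headers in 30x redirects that still carry :80
    proxy_redirect //my.server.com:80/ //my.server.com/;

    # Rewrite URLs embedded in response bodies
    sub_filter_once off;
    sub_filter "//my.server.com:80/" "//my.server.com/";
}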

NGINX internal redirect from URI path to JSF context root

I'm configuring a cloud server which uses NGINX as a reverse proxy to serve different applications on different URIs (all the applications are on the same WildFly standalone instance).
To be more specific, I have a JSF application with a context root, let's say /jsfcontext, and I've set up an NGINX location like /mypublicuri.
What happens is that when I navigate to https://myserver.com/mypublicuri/index.xhtml I receive the following error:
/mypublicuri/index.xhtml Not Found in ExternalContext as a Resource.
I'm pretty sure it's related to a missing internal redirect rule or some kind of "hack" that I need to specify in order to make everything work, but I'm a newbie with NGINX and I don't know how to properly set everything up.
Thanks for the help
Cheers
I've read the NGINX documentation, but my limited English makes it difficult to understand what I should do.
My current NGINX config:
server {
    server_name myserver.com www.myserver.com;
    access_log /usr/share/logs/access.log;
    error_log /usr/share/logs/error.log;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_intercept_errors on;

        location /anotherworkingapp {
            add_header Allow "GET, POST, HEAD, PUT, DELETE" always;
            proxy_pass http://127.0.0.1:8080$request_uri;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }

        location /mypublicuri {
            proxy_pass http://127.0.0.1:8080/jsfcontext$request_uri;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
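For reference, nginx's documented behavior when proxy_pass is given a URI part is to replace the matched location prefix with that URI, so a sketch along these lines (without appending $request_uri) would map the public prefix onto the context root. Note that this only covers the path mapping; the JSF application may still emit links containing /jsfcontext.
# Sketch: with trailing slashes, nginx swaps the matched /mypublicuri/
# prefix for /jsfcontext/ before passing the request upstream.
location /mypublicuri/ {
    proxy_pass http://127.0.0.1:8080/jsfcontext/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}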

Nginx (HTTP Only) Reverse Proxy Settings in Production

I am playing around with Nginx and have successfully set up a simple (for now HTTP-only) reverse proxy. As a newbie, I am wondering what I would need to modify to make this production-ready, which leads me to the following questions:
Is there a way to unify the proxy_set_header directives so that I don't need to repeat myself for every virtual host?
Am I missing any other important header modifications besides X-Forwarded-Proto, X-Url-Scheme, X-Forwarded-For and Host?
nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    gzip on;
    # skip log_format/access_log

    server {
        listen 80;
        server_name server1.company.com;
        location / {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://server1; # IP or FQDN would be better here
        }
    }

    server {
        listen 80;
        server_name server2.company.com;
        location / {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://server2; # IP or FQDN would be better here
        }
    }
}
Any feedback or pointers in the right direction would be appreciated.
If you place all of your proxy_set_header statements in the http block, they will be inherited by the server blocks and then by the location blocks. The inheritance only applies to blocks that do not contain another proxy_set_header statement. See this document for details.
Alternatively, place common statements into a separate file and pull them into any part of your configuration by using an include directive. See this document for details.
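For example, a minimal sketch of the include approach (the snippet file name is arbitrary):
# Shared proxy headers, e.g. in /etc/nginx/proxy_common.conf (example name)
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Url-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;

# Each server block then only needs:
server {
    listen 80;
    server_name server1.company.com;
    location / {
        include /etc/nginx/proxy_common.conf;
        proxy_pass http://server1;
    }
}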
Which headers you should set depends on your application. But this article discusses preventing certain headers from being passed to the proxied server, e.g.
proxy_set_header Accept-Encoding "";
And this article mitigates the HTTPoxy vulnerability with:
proxy_set_header Proxy "";

WildFly console served with nginx

I'm stuck configuring a simple reverse proxy on AWS.
Since we have one host (an nginx reverse proxy) serving the public access, I decided to follow the rules and created the following configuration.
server {
    listen 9990;
    server_name project-wildfly.domain.me;
    access_log /var/log/nginx/wildfly.access.log;
    error_log /var/log/nginx/wildfly.error.log;
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    root /var/www/;
    index index.html index.htm;

    location /console {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/console;
    }

    location /management {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/management;
    }
}
This will serve the admin console and I'm able to log in with the user. Then this message appears:
Access Denied
Insufficient privileges to access this interface.
Nothing shows up in the error log. Thanks for any hint!
I had the same issue when configuring WildFly 15 with nginx 1.10.3 as a reverse proxy.
The setup was very similar to the first post, redirecting /management and /console to wildflyhost:9990.
I was able to access the console directly via :9990, and when comparing the network traffic between direct and nginx-proxied access, I noticed that the Origin and Host headers were different.
So in my case the solution was to force the Origin and Host headers in nginx to something WildFly is expecting. I couldn't find this solution elsewhere, so I'm posting it here for future reference, although the thread is old.
location /.../ {
    proxy_set_header Host $host:9990;
    proxy_set_header Origin http://$host:9990;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass_request_headers on;
    proxy_pass http://wildflyhost:9990;
    ...
}
Maybe you need to turn on the management module.
Try this: sh standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 &

mod-security allowing only one set-cookie

Has anyone run into the problem of mod-security only allowing one Set-Cookie header through on a proxied response? We are using nginx with mod-security and seeing all but the last Set-Cookie header removed by nginx from the response from our application server. We are applying the mod-security directives in the location section:
location ~* ^/(test|securitytest|$) {
    ModSecurityEnabled on;
    ModSecurityConfig modsecurity.conf;
    create_full_put_path on;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://app;
    proxy_read_timeout 10;
    proxy_redirect off;
}
There was a bug in ModSecurity + nginx that dropped all but one Set-Cookie header from each response. It has been fixed; have a look at:
https://github.com/SpiderLabs/ModSecurity/issues/154
