nginx chaining proxy_pass

I am trying to build a reverse proxy service with nginx that first validates a session cookie and, if validation succeeds, proxies the request to the requested backend resource.
I have this config ...
# upstream services
upstream backend_production {
#ip_hash;
server cumulonimbus.foo.com:81;
}
map $mycookie $mybackend {
_SESSION_COOKIE http://backend_production/session.php;
}
# session server
server {
listen *:80;
server_name devcumulonimbus cumulonimbus.foo.com;
error_log /var/log/nginx/session.error.log;
access_log /var/log/nginx/session.access.log;
#access_log off; # turn access_log off for speed
root /var/www/;
location = /favicon.ico {
return 204;
access_log off;
log_not_found off;
}
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
proxy_cache off;
# does cookie exist?
if ($http_cookie ~* "_SESSION_COOKIE")
{
set $mycookie '_SESSION_COOKIE';
proxy_pass $mybackend;
error_page 400 403 404 500 = /denied;
post_action /reader.php;
break;
}
# no cookie
rewrite ^(.*)$ http://cumulonimbus.foo.com:81/login.php;
}
location =/reader.php {
internal;
proxy_pass http://backend_production/reader.php;
}
location /denied/ {
internal;
rewrite ^(.*)$ http://cumulonimbus.foo.com:81/login.php;
}
}
I set _SESSION_COOKIE in login.php, update the cookie value in session.php and render a page in reader.php. The problem is that I do not see the page emitted by reader.php even though syslog tells me it was hit (only the session.php page is displayed). nginx is the front end and Apache is the backend running the PHP services (this environment is used for prototyping only).
==> /var/log/apache2/error.log <==
session_manager[4350]: CONNECTED TO SESSION MANAGER
==> /var/log/syslog <==
Jul 12 19:41:06 devcumulonimbus session_manager[4350]: CONNECTED TO SESSION MANAGER
==> /var/log/apache2/access.log <==
10.10.11.113 - - [12/Jul/2011:19:41:06 -0700] "GET /session.php HTTP/1.0" 200 454 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0"
==> /var/log/apache2/error.log <==
web_reader[4352]: CONNECTED TO WEB READER
==> /var/log/syslog <==
Jul 12 19:41:06 devcumulonimbus web_reader[4352]: CONNECTED TO WEB READER
If I hit memcache first I can see the page.
I would also like to be able to capture the response of the first request (the first proxy_pass). Should I use the nginx Lua module?
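For what it's worth, here is a minimal sketch of that capture idea using the lua-nginx-module (OpenResty). It is only an illustration, not a drop-in fix: /session_check and /reader_page are hypothetical internal locations added for the example, while backend_production is the upstream from the config above. ngx.location.capture runs a subrequest and returns its status and body, so the session.php response can be inspected before the second hop to reader.php.
location /validated {
    content_by_lua_block {
        -- first hop: capture the session check response
        local session = ngx.location.capture("/session_check")
        if session.status ~= 200 then
            return ngx.redirect("http://cumulonimbus.foo.com:81/login.php")
        end
        ngx.log(ngx.INFO, "session.php said: ", session.body)
        -- second hop: fetch the reader page and return it to the client
        local reader = ngx.location.capture("/reader_page")
        ngx.status = reader.status
        ngx.print(reader.body)
    }
}
location /session_check { internal; proxy_pass http://backend_production/session.php; }
location /reader_page   { internal; proxy_pass http://backend_production/reader.php; }
Alternatively, the stock auth_request module can gate a proxy_pass on the status code of a subrequest, but it discards the subrequest's body, so it cannot capture the first response.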

Related

Nginx OSM tiles caching proxy with https upstream

I have the old nginx-based OSM tile caching proxy configured per https://coderwall.com/p/--wgba/nginx-reverse-proxy-cache-for-openstreetmap, but since the source tile server migrated to HTTPS this solution no longer works: 421 Misdirected Request.
I based a fix on the article https://kimsereyblog.blogspot.com/2018/07/nginx-502-bad-gateway-after-ssl-setup.html. Unfortunately, after days of experiments I'm still getting a 502 error.
My theory is that the root cause is the upstream servers' SSL certificate, which uses a wildcard: *.tile.openstreetmap.org. But all attempts to use $http_host, $host, proxy_ssl_name and proxy_ssl_session_reuse in different combinations didn't help: 421 or 502 every time.
My current nginx config is:
worker_processes auto;
events {
worker_connections 768;
}
http {
access_log /etc/nginx/logs/access_log.log;
error_log /etc/nginx/logs/error_log.log;
client_max_body_size 20m;
proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
proxy_temp_path /etc/nginx/cache/tmp;
proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
proxy_ssl_name *.tile.openstreetmap.org;
sendfile on;
upstream openstreetmap_backend {
server a.tile.openstreetmap.org:443;
server b.tile.openstreetmap.org:443;
server c.tile.openstreetmap.org:443;
}
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
include /etc/nginx/mime.types;
root /dist/browser/;
location ~ ^/osm-tiles/(.+) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X_FORWARDED_PROTO http;
proxy_set_header Host $http_host;
proxy_cache openstreetmap-backend-cache;
proxy_cache_valid 200 302 365d;
proxy_cache_valid 404 1m;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass https://openstreetmap_backend/$1;
break;
}
}
}
}
But it still produces an error when accessing https://example.com/osm-tiles/12/2392/1188.png:
2021/02/28 15:05:47 [error] 23#23: *1 upstream SSL certificate does not match "*.tile.openstreetmap.org" while SSL handshaking to upstream, client: 172.28.0.1, server: example.com, request: "GET /osm-tiles/12/2392/1188.png HTTP/1.0", upstream: "https://151.101.2.217:443/12/2392/1188.png", host: "localhost:3003"
The host OS is Ubuntu 20.04 (HTTPS is handled there); nginx is running in Docker from the nginx:latest image, and ca.crt is Ubuntu's default certificate bundle.
Please help.
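Not a verified fix, only a sketch of the directives that usually matter in this situation: the upstream needs SNI, and the name nginx uses for SNI and certificate verification should be a concrete host name that the wildcard *.tile.openstreetmap.org actually covers (a.tile.openstreetmap.org below is just an example, as is the Host header value):
location ~ ^/osm-tiles/(.+) {
    proxy_pass https://openstreetmap_backend/$1;
    proxy_ssl_server_name on;                        # send SNI to the upstream
    proxy_ssl_name a.tile.openstreetmap.org;         # name used for SNI and certificate verification
    proxy_set_header Host a.tile.openstreetmap.org;  # a mismatched Host is a common cause of 421
}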

How do I prevent nginx from redirecting to HTTPS in this particular setup?

I have a somewhat messy setup (no choice) where a local computer is made available to the internet through port forwarding. It is only reachable through [public IP]:8000. I cannot get a Let's Encrypt certificate for an IP address, but the part of the app that will be accessed from the internet does not require encryption. So instead, I'm planning on making the app available from the internet at http://[public IP]:8000/, and from the local network at https://[local DNS name]/ (port 80). The certificate used in the latter is issued by our network's root CA. Clients within the network trust this CA.
Furthermore, some small changes are made to the layout of the page when accessed from the internet. These changes are made by setting an embedded query param.
In summary, I need:
+--------------------------+--------------------------+----------+--------------------------------------+
| Accessed using | Redirect to (ideally) | URL args | Current state |
+--------------------------+--------------------------+----------+--------------------------------------+
| http://a.b.c.d:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://localhost:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://[local DNS name] | https://[local DNS name] | no args | Working as expected |
| https://[local DNS name] | no redirect | no args | Working as expected |
+--------------------------+--------------------------+----------+--------------------------------------+
For the two top rows, I don't want the redirection to HTTPS, and I need ?embedded to be appended to the URL.
Here's my config:
upstream channels-backend {
server api:5000;
}
# Connections from the internet (no HTTPS)
server {
listen 8000;
listen [::]:8000;
server_name [PUBLIC IP ADDRESS] localhost;
keepalive_timeout 70;
access_log /var/log/nginx/access.log;
underscores_in_headers on;
location = /favicon.ico {
access_log off;
log_not_found off;
}
location /admin/ {
# Do not allow access to /admin/ from the internet.
return 404;
}
location /static/rest_framework/ {
alias /home/docker/backend/static/rest_framework/;
}
location /static/admin/ {
alias /home/docker/backend/static/admin/;
}
location /files/media/ {
alias /home/docker/backend/media/;
}
location /api/ {
proxy_pass http://channels-backend/;
}
location ~* (service-worker\.js)$ {
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
proxy_no_cache 1;
}
location / {
root /var/www/frontend/;
# I want to add "?embedded" to the URL if accessed through http://[public IP]:8000.
# I do not want to redirect to HTTPS.
try_files $uri $uri/ /$uri.html?embedded =404;
}
}
# Upgrade requests from local network to HTTPS
server {
listen 80;
keepalive_timeout 70;
access_log /var/log/nginx/access.log;
underscores_in_headers on;
server_name [local DNS name] [local IP] localhost;
# This works; it redirects to HTTPS.
return 301 https://$http_host$request_uri;
}
# Server for connections from the local network (uses HTTPS)
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name [local DNS name] [local IP] localhost;
ssl_password_file /etc/nginx/certificates/global.pass;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.1;
ssl_certificate /etc/nginx/certificates/certificate.crt;
ssl_certificate_key /etc/nginx/certificates/privatekey.key;
keepalive_timeout 70;
access_log /var/log/nginx/access.log;
underscores_in_headers on;
location = /favicon.ico {
access_log off;
log_not_found off;
}
location /admin/ {
proxy_pass http://channels-backend/admin/;
}
location /static/rest_framework/ {
alias /home/docker/backend/static/rest_framework/;
}
location /static/admin/ {
alias /home/docker/backend/static/admin/;
}
location /files/media/ {
alias /home/docker/backend/media/;
}
location /api/ {
# Proxy to backend
proxy_read_timeout 30;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect off;
proxy_pass http://channels-backend/;
}
# ignore cache frontend
location ~* (service-worker\.js)$ {
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
proxy_no_cache 1;
}
location / {
root /var/www/frontend/;
# Do not add "?embedded" argument.
try_files $uri $uri/ /$uri.html =404;
}
}
The server serves both the frontend and an API developed using React and Django RF, in case it matters. It's deployed using Docker.
Any pointers would be greatly appreciated.
Edit: I commented out everything except the first server (port 8000), and requests are still being redirected to https://localhost:8000 from http://localhost:8000. I don't understand why. I'm using an incognito tab to rule out cache as the problem.
Edit 2: I noticed that Firefox sets an Upgrade-Insecure-Requests header with the initial request to http://localhost:8000. How can I ignore this header and not upgrade insecure requests? This request was made by Firefox, and not the frontend application.
Edit 3: Please take a look at the below configuration, which I'm now using to try to figure out the issue. How can this possibly result in redirection from HTTP to HTTPS? There's now only one server block, and there's nothing here that could be interpreted as a wish to redirect to https://localhost:8000 from http://localhost:8000. Where does the redirect come from? Notice that I replaced some parts with redirects to Google, Yahoo and Facebook. I'm not redirected to any of these. I'm immediately upgraded to HTTPS, which should not be supported at all with this configuration. It's worth mentioning that the redirect ends in SSL_ERROR_RX_RECORD_TOO_LONG. The certificate is accepted when accessing https://localhost/ (port 80) using the original configuration.
upstream channels-backend {
server api:5000;
}
# Server for connections from the internet (does not use HTTPS)
server {
listen 8000;
listen [::]:8000 default_server;
server_name localhost [public IP];
keepalive_timeout 70;
access_log /var/log/nginx/access.log;
underscores_in_headers on;
ssl off;
location = /favicon.ico {
access_log off;
log_not_found off;
}
location /admin/ {
# Do not allow access to /admin/ from the internet.
return 404;
}
location /static/rest_framework/ {
alias /home/docker/backend/static/rest_framework/;
}
location /static/admin/ {
alias /home/docker/backend/static/admin/;
}
location /files/media/ {
alias /home/docker/backend/media/;
}
location /api/ {
proxy_pass http://channels-backend/;
}
location / {
if ($args != "embedded") {
return 301 https://google.com;
# return 301 http://$http_host$request_uri?embedded;
}
return 301 https://yahoo.com;
# root /var/www/frontend/;
# try_files $uri $uri/ /$uri.html =404;
}
}
Boy, do I feel stupid.
In my docker-compose.yml file, I had accidentally mapped port 8000 to 80:
nginx-server:
image: nginx-server
build:
context: ./
dockerfile: .docker/dockerfiles/NginxDockerfile
restart: on-failure
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
- "0.0.0.0:8000:80" # Oops
So any request on port 8000 was received by nginx as a request on port 80. Thus, even a simple config like...
server {
listen 8000;
return 301 https://google.com;
}
... would result in an attempt to upgrade to HTTPS on port 80 (possibly an unexpectedly cached redirect, some default behavior, or similar). I was thoroughly confused, but fixing my compose instructions fixed the problem:
nginx-server:
image: nginx-server
build:
context: ./
dockerfile: .docker/dockerfiles/NginxDockerfile
restart: on-failure
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
- "0.0.0.0:8000:8000" # Fixed

Nginx reverse proxy issue long request

I've been stuck on a problem for a long time now with two nginx servers: the first acts as a reverse proxy and the second as the backend server.
Here is my design :
The client makes a GET request to an HTTP address from the internet
The reverse proxy handles it and forwards it to the backend server
The backend server handles it and issues a SQL request to a database server
The SQL request runs for about 15 minutes (900 MB of data returned)
PHP-FPM on the backend server compresses the data and sends it back to the reverse proxy
The reverse proxy returns the data to the client
This design works well for small SQL requests, but when the data takes too long to come back to the reverse proxy, the connection between the reverse proxy and the client gets closed (I guess) and the reverse proxy is forced to initiate a new one itself.
When I make the same request from a local client to the local HTTP address of my reverse proxy, it works fine.
Here is my nginx config
Reverse Proxy :
server {
listen 80 deferred;
server_name FQDN;
if ($request_method !~ ^(GET|HEAD|POST|PUT)$ )
{
return 405;
}
access_log /var/log/nginx/qa.access.log;
error_log /var/log/nginx/qa.error.log debug;
location /ws/ {
proxy_pass http://IP:8080/;
proxy_redirect default;
proxy_http_version 1.1;
proxy_set_header Host $host;
#proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_connect_timeout 1200s;
proxy_send_timeout 1200s;
proxy_read_timeout 1200s;
send_timeout 1200s;
#proxy_connect_timeout 300s;
#proxy_send_timeout 300s;
add_header Front-End_Https on;
}
location / {
return 444;
}
}
BACKEND server:
server {
listen 8080 deferred;
root /home/Websites/ws2.5/web;
index index.html index.htm index.nginx-debian.html index.php;
server_name NAME_BACKEND;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
#fastcgi_read_timeout 300;
fastcgi_send_timeout 1200s;
fastcgi_read_timeout 1200s;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
}
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/ws2.5_error.log debug;
access_log /var/log/nginx/ws2.5_access.log;
location ~ /\.ht {
deny all;
}
}
When it works, here is what I can see in the logs:
BACKEND - [10/Aug/2020:17:40:22 +0200] "GET /me/patents/ HTTP/1.1" 200 25441300 "-" "RestSharp/105.2.3.0"
REVERSE PROXY - [10/Aug/2020:17:41:05 +0200] "GET /ws/me/patents/ HTTP/1.1" 200 25441067 "-" "RestSharp/105.2.3.0"
And here is what is returned when it doesn't work:
BACKEND - [10/Aug/2020:16:28:39 +0200] "GET /me/patents/ HTTP/1.1" 200 25444257 "-" "RestSharp/105.2.3.0"
REVERSE PROXY - [10/Aug/2020:16:29:39 +0200] "GET /ws/me/patents/ HTTP/1.1" 200 142307 "-" "RestSharp/105.2.3.0"
In both cases the backend sends the amount of data expected, but in the second case we can see that the reverse proxy received less than the backend sent.
The only thing you can do about the situation you've described is to tune timeouts.
But it's a rather strange way of doing things. Why do you need a reverse proxy to juggle 900 MB of data between frontend and backend? Nginx is better suited to serving large static files.
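For completeness, a hedged sketch of the proxy-side knobs that govern a slow, large response; the 1200s values simply mirror the config above and are not a recommendation:
location /ws/ {
    proxy_pass http://IP:8080/;
    proxy_read_timeout 1200s;   # idle limit while waiting for the backend
    proxy_send_timeout 1200s;   # idle limit while sending the request upstream
    send_timeout       1200s;   # idle limit while sending the response to the client
    proxy_buffering    off;     # stream the body instead of spooling it to temp files
    # with buffering on, proxy_max_temp_file_size (1024m by default) caps how much
    # nginx will spool to disk while reading ahead of a slow client
}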

Jenkins - NGINX reverse proxy broken

I just moved our Jenkins to a new machine behind a reverse proxy; before, it sat directly on the intranet. I've started seeing the error "It appears that your reverse proxy setup is broken."
So I copied the recommended nginx config, modifying it slightly for our needs, but the warning remains, leaving me slightly confused and posting here.
upstream jenkins {
keepalive 32; # keepalive connections
server 127.0.0.1:8080; # jenkins ip and port
}
server {
listen 80; # Listen on port 80 for IPv4 requests
server_name jenkins.domain.tld;
#this is the jenkins web root directory (mentioned in the /etc/default/jenkins file)
root /usr/share/jenkins;
access_log /var/log/nginx/jenkins/access.log;
error_log /var/log/nginx/jenkins/error.log;
ignore_invalid_headers off; #pass through headers from Jenkins which are considered invalid by Nginx server.
location ~ "^/static/[0-9a-fA-F]{8}\/(.*)$" {
#rewrite all static files into requests to the root
#E.g /static/12345678/css/something.css will become /css/something.css
rewrite "^/static/[0-9a-fA-F]{8}\/(.*)" /$1 last;
}
location /userContent {
#have nginx handle all the static requests to the userContent folder files
#note : This is the $JENKINS_HOME dir
root /var/lib/jenkins/;
if (!-f $request_filename){
#this file does not exist, might be a directory or a /**view** url
rewrite (.*) /$1 last;
break;
}
sendfile on;
}
location @jenkins {
sendfile off;
proxy_pass http://jenkins;
proxy_redirect default;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_max_temp_file_size 0;
#this is the maximum upload size
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_request_buffering off; # Required for HTTP CLI commands in Jenkins > 2.54
proxy_set_header Connection ""; # Clear for keepalive
}
location / {
# Optional configuration to detect and redirect iPhones
if ($http_user_agent ~* '(iPhone|iPod)') {
rewrite ^/$ /view/iphone/ redirect;
}
try_files $uri @jenkins;
}
}
It's reached at jenkins.domain.tld and I'm out of ideas on how to troubleshoot this. The requests log properly, there's nothing in the error log, and Jenkins seems to work in other ways... but the proxy test gives a 404?
$: curl -iL -e http://jenkins.domain.tld/jenkins/manage http://jenkins.domain.tld/jenkins/administrativeMonitor/hudson.diagnosis.ReverseProxySetupMonitor/test
HTTP/1.1 404 Not Found
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 26 Mar 2018 06:50:30 GMT
Content-Type: text/html;charset=iso-8859-1
Content-Length: 391
Connection: keep-alive
X-Content-Type-Options: nosniff
Cache-Control: must-revalidate,no-cache,no-store
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /jenkins/administrativeMonitor/hudson.diagnosis.ReverseProxySetupMonitor/test. Reason:
<pre> Not Found</pre></p><hr>Powered by Jetty:// 9.4.z-SNAPSHOT<hr/>
</body>
</html>
Jenkins URL in system config is also set to jenkins.domain.tld.
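A hedged sketch only, not a confirmed fix: as far as I understand, the monitor behind that warning checks whether Jenkins can rebuild its public URL from the proxied request, so it may be worth forwarding the host and port explicitly in the named location (the directives are standard nginx; whether they clear this particular warning is an assumption):
location @jenkins {
    proxy_pass http://jenkins;
    proxy_http_version 1.1;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host  $host;         # assumption: helps Jenkins rebuild its URL
    proxy_set_header X-Forwarded-Port  $server_port;  # assumption: same, for the port
}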

Rails Application on Nginx and Thin, 400 Bad Request Request Header Or Cookie Too Large

The static page is served correctly; you can visit http://www.ec2.lankenow.info/, click on the image, and it will take you to the error page.
Nginx error log:
2012/03/16 01:58:41 [alert] 884#0: 768 worker_connections are not enough
2012/03/16 01:58:41 [error] 887#0: *3900 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 23.21.141.105, server: www.ec2.lankenow.info, request: "GET /leads HTTP/1.0", upstream: "http://23.21.141.105:80/leads", host: "www.ec2.lankenow.info", referrer: "http://www.ec2.lankenow.info/"
I think I might have to set client_header_buffer_size but don't know how or which file to edit.
Any direction how to go about it would be much appreciated.
EDIT: This can be added to the http section of nginx.conf, which can be found in /etc/nginx/ or /usr/local/nginx/ depending on your installation.
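A sketch of what that snippet might look like; the values are illustrative rather than tuned, and the worker_connections line addresses the separate "not enough" alert from the error log (it belongs in the events block, not http):
events {
    worker_connections 2048;             # the log also says 768 are not enough
}
http {
    client_header_buffer_size   4k;      # default is 1k
    large_client_header_buffers 4 16k;   # fallback buffers for big cookies/headers (default 4 8k)
    # ... rest of the existing http configuration ...
}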
Config file under /etc/nginx/sites-available/
upstream vdiamond {
server 0.0.0.0:3000;
server 0.0.0.0:3001;
server 0.0.0.0:3002;
}
server {
listen 80;
server_name www.ec2.lankenow.info;
access_log /home/ubuntu/vdiamond/log/access.log;
error_log /home/ubuntu/vdiamond/log/error.log;
root /home/ubuntu/vdiamond/public/;
index index.html;
location / {
# Add expires header for static content
location ~* \.(js|css|jpg|jpeg|gif|png)$ {
if (-f $request_filename) {
expires max;
break;
}
}
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_read_timeout 300;
proxy_next_upstream off;
#proxy_redirect false;
if (-f $request_filename/index.html) {
rewrite (.*) $1/index.html break;
}
if (-f $request_filename.html) {
rewrite (.*) $1.html break;
}
if (!-f $request_filename) {
proxy_pass http://www.ec2.lankenow.info;
break;
}
}
}
