I have nginx configured to act as a reverse proxy:
http {
    log_format combined '$proxy_protocol_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';
    #...
    server {
        server_name localhost;
        listen 80 proxy_protocol;
        listen 443 ssl proxy_protocol;
        ssl_certificate /etc/nginx/ssl/public.example.com.pem;
        ssl_certificate_key /etc/nginx/ssl/public.example.com.key;
        location /app/ {
            proxy_pass http://backend1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        }
    }
}
Something very similar to the example above. But I need all the traffic going to the backend server to pass through another proxy. Meaning:
Client request -> Nginx (as reverse proxy) --- all traffic ---> Proxy server -> Backend server
Is it possible?
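For illustration only (not part of the original post): if the intermediate proxy is itself an ordinary HTTP reverse proxy, for example another nginx instance, the chain can be built by pointing proxy_pass at that proxy and letting it forward to the backend. The address intermediate-proxy:8080 below is made up.

# On the nginx above: send the traffic to the intermediate proxy instead of
# straight to backend1 (intermediate-proxy:8080 is a hypothetical address).
location /app/ {
    proxy_pass http://intermediate-proxy:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $proxy_protocol_addr;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}

# On the intermediate proxy (a second nginx), forward everything to the backend.
server {
    listen 8080;
    location / {
        proxy_pass http://backend1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}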
I have a service running on https://old-server.net:8444/devs/. I set up a new service on a new server, https://new-server.net/. When accessing the new service via the web, things work as expected. But when trying to log in to the old service via curl (a POST request) or download from it (a GET request), I just get a "301 Moved Permanently" response. Here is my nginx.conf:
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server {
        listen 80;
        server_name new-server.net;
        server_tokens off;
        location / {
            return 301 https://$host$request_uri;
        }
    }
    server {
        listen 443 ssl;
        server_name new-server.net;
        server_tokens off;
        ssl_certificate /etc/nginx/certs/server.cer;
        ssl_certificate_key /etc/nginx/certs/server.key;
        location / {
            proxy_pass http://new-server.net:8081/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect http:// https://;
        }
    }
    server {
        listen 8444 ssl;
        server_name old-server.net;
        server_tokens off;
        ssl_certificate /etc/nginx/certs/server.cer;
        ssl_certificate_key /etc/nginx/certs/server.key;
        location /devs {
            rewrite ^/devs(.*) https://new-server.net$1 permanent;
        }
    }
}
I'm using rewrite because the new server doesn't have the /devs/ context path of the old server. I wasn't sure how to achieve this with a 'return 301' line. So, is it possible for me to allow devs to continue to GET and POST to the old URL and have those requests sent to the new URL?
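Not from the original thread, but one detail worth noting: a 301 (or 302) generally makes clients repeat the redirected request as a GET, so the POST method and body are lost. Two hedged alternatives, sketched against the old-server vhost above: a 308 redirect, which preserves the request method (needs a reasonably recent nginx, 1.13+), or having the old vhost proxy the request to the new server so clients never see a redirect at all.

    server {
        listen 8444 ssl;
        server_name old-server.net;
        ssl_certificate /etc/nginx/certs/server.cer;
        ssl_certificate_key /etc/nginx/certs/server.key;

        # Option 1: preserve the HTTP method across the redirect (308 instead of 301)
        location ~ ^/devs(?<rest>.*)$ {
            return 308 https://new-server.net$rest$is_args$args;
        }

        # Option 2: no redirect at all - forward the request to the new server.
        # The trailing slash on proxy_pass strips the /devs/ prefix.
        # location /devs/ {
        #     proxy_pass https://new-server.net/;
        # }
    }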
I'm trying to redirect an app that already works to HTTPS.
The app is a PWA that is running locally on port 8080.
I know it's something in my configuration, because the root works fine but the sub-apps don't.
The root is just an HTML page with the routes for the sub-apps.
This is the app that I'm trying to redirect: https://github.com/PolymerLabs/multitenant-prpl
This is my configuration file, which I put together by searching around and merging pieces into the main nginx configuration file.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    upstream node-app {
        least_conn;
        server 192.168.0.9:8080 weight=10 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        # https redirection
        location / {
            return 301 https://$host$request_uri;
        }
    }
    server {
        # http2 setup
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
        server_name node-app;
        # ssl setup
        ssl_certificate /etc/nginx/certs/nginx.crt;
        ssl_certificate_key /etc/nginx/certs/nginx.key;
        ssl_dhparam /etc/nginx/certs/dhparam.pem;
        # server redirection
        location / {
            proxy_pass http://node-app$request_uri;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
Probably something is missing.
Thanks in advance.
Screenshots (from the original post): the root page working, and another route with the dev tools open, where the error is a bit longer.
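One detail worth flagging about the config above (an observation, not a confirmed diagnosis): Connection: upgrade is sent to the backend on every proxied request, even when the client never asked for a WebSocket upgrade. The pattern from the nginx WebSocket proxying docs derives the Connection header from $http_upgrade with a map, for example:

    # in the http {} block
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # inside location /, instead of the hard-coded 'upgrade'
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;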
I'm getting an error when using the docker image for setting up an nginx proxy server: nginx-proxy. If I hit an endpoint on my site, the response is incredibly slow to come back in some instances. This happens pretty much immediately if I hit an endpoint three times, for example, in relatively quick succession. The nginx log shows the following warning:
2017/05/14 09:24:26 [warn] 26#26: *29 upstream server temporarily
disabled while connecting to upstream, client: 10.255.0.2, server: [ip
removed], request: "GET
/documents/5918206a-8da0-4deb-86b2-6b627867e0d5 HTTP/1.1", upstream:
"http://10.255.0.4:8080/documents/5918206a-8da0-4deb-86b2-6b627867e0d5",
host: "[ip removed]"
The log for my back end service doesn't show any errors, so I'm not sure what may be going on. I am guessing it is a configuration issue with nginx, which could be fixed by changing the settings, but I am not sure where to start. Does anyone have any ideas?
My configuration looks like this in the end when the docker instance runs:
nginx.conf:
# cat nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 128;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
conf.d/default.conf:
daemon off;
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
upstream [ip removed] {
    ## Can be connect with "ingress" network
    # datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
    server 10.255.0.6:8080;
    ## Can be connect with "datemo_default" network
    # datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
    server 10.0.0.5:8080;
}
server {
    server_name [ip removed];
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://[ip removed];
    }
}
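Not from the original post, but some context on the warning: nginx marks an upstream peer as temporarily unavailable after failed connection attempts (per the server directive's max_fails and fail_timeout, which default to 1 and 10s), and the generated upstream above contains two peers, one reachable over the "ingress" network and one over the internal network. If one of them is not actually reachable from the proxy container, requests that land on it hang until the connect attempt times out and the peer gets disabled. A hedged sketch of the kind of settings one might experiment with (the chosen address and values are assumptions, not a verified fix):

    upstream [ip removed] {
        # keep only the peer that is reachable from this container, or at
        # least let nginx give up on a dead peer a bit more gracefully
        server 10.0.0.5:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        # ...
        location / {
            proxy_pass http://[ip removed];
            proxy_connect_timeout 3s;          # don't hang for the default 60s
            proxy_next_upstream error timeout; # retry another peer on connect errors
        }
    }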
I mean adding an upstream, not adding a server to an existing upstream.
That is, I don't have an upstream block like:
upstream backend {
    # ...
}
I want to create an upstream block dynamically, something like:
content_by_lua_block {
    upstream_block.add('backend');
    upstream_block.add_server('backend', '127.0.0.1', 8080);
    upstream_block.add_server('backend', '127.0.0.1', 8081);
    upstream_block.add_server('backend', '127.0.0.1', 8082);
    upstream_block.del_server('backend', '127.0.0.1', 8080);
}
proxy_pass http://backend;
You may use balancer_by_lua* and https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md
You will have full control over which upstream peer is selected for a given request.
You can provision the peer list yourself in your code, or use the existing upstream config as the source via https://github.com/openresty/lua-upstream-nginx-module
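For illustration, a minimal balancer_by_lua_block sketch (my own example, not from the links above): the peer is hard-coded here, but it could just as well come from a shared dict, Redis, or a config service.

    upstream backend {
        server 0.0.0.1;   # placeholder, never actually used
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- pick a peer for this request; replace with your own selection logic
            local ok, err = balancer.set_current_peer("127.0.0.1", 8081)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }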
I found an nginx module called ngx_http_dyups_module that matches my question.
Here is my example of how to dynamically add upstream servers based on the CPU count.
The backend servers: I used OpenResty and configured it to listen on multiple ports.
worker_processes auto;
error_log logs/openresty.err;
events {
    worker_connections 1000;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/openresty.log main;
    server {
        listen 127.0.0.1:8080;
        listen 127.0.0.1:8081;
        listen 127.0.0.1:8082;
        listen 127.0.0.1:8083;
        listen 127.0.0.1:8084;
        listen 127.0.0.1:8085;
        listen 127.0.0.1:8086;
        listen 127.0.0.1:8087;
        listen 127.0.0.1:8088;
        listen 127.0.0.1:8089;
        listen 127.0.0.1:8090;
        server_name *.*;
        location / {
            content_by_lua_block {
                --[[
                local NumCores = tonumber(os.getenv("NUMBER_OF_PROCESSORS"))
                local NumCores = 10
                ]]
                -- local f = io.popen("ps -ef | grep nginx | wc -l ")
                local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ")
                ngx.print('CPU count: '..f:read())
                f:close()
            }
        }
    }
}
And the reverse proxy, which dynamically adds upstream servers based on the CPU count:
error_log logs/reverse_openresty.err;
events {
    worker_connections 1000;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/reverse_openresty.log main;
    upstream backend {
        server 0.0.0.1; # just an invalid address as a place holder
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local start_port = 8080
            local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ") -- get cpu count
            local cpu_count = tonumber(f:read())
            f:close()
            local max_port = start_port + cpu_count - 2
            repeat
                local ok, err = balancer.set_current_peer('127.0.0.1', start_port)
                if not ok then
                    ngx.log(ngx.ERR, "failed to set the current peer: ", err)
                    return ngx.exit(500)
                end
                start_port = start_port + 1
            until start_port > max_port
        }
        keepalive 10; # connection pool
    }
    server {
        listen 80;
        location / {
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_pass http://backend; # force using http, as node server.js only has http
        }
    }
}
The configuration was tested on macOS.
I'm using ngx_dynamic_upstream. It works really well in production. I forked the original from the owner and reviewed the source code, just in case.
I need to install a uWSGI app and a Kibana 4 / Elasticsearch stack on the same server. The uWSGI app only needs to be used when a user accesses the server via [server_IP]/charts/, and I'd like Kibana 4 to be accessible via [server_IP].
Both listen on port 80 via their own separate conf files and, predictably, the uWSGI app doesn't allow Kibana 4 to receive requests.
How would I adjust my conf files to allow the access I need? I'm a bit confused as to what I need to use (rewrite, redirect, something else?).
Thanks for your time.
nginx_conf_for_uwsgi:
server {
    server_name 192.168.250.37;
    listen 80;
    root /usr/local/wsgi;
    access_log /var/log/nginx/graph_server/access.log;
    error_log /var/log/nginx/graph_server/error.log;
    client_max_body_size 500M;
    proxy_read_timeout 600;
    location / {
        include uwsgi_params;
        uwsgi_pass 192.168.250.37:9091;
        uwsgi_read_timeout 600;
    }
}
kibana4.conf:
server {
    listen 80;
    server_name 192.168.250.37;
    #auth_basic "Restricted Access";
    #auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://192.168.250.37:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    index index.html index.htm;
    # Increase header buffer size (needed for PHP)
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    # Update the logs to display the real IP address after removing the IP for
    # the load balancers
    set_real_ip_from redacted; # a
    set_real_ip_from redacted; # b
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    # Custom logger to display the subdomain folder (if applicable)
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format log_thing '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         '"$http_x_forwarded_for" sub:"$subdomain"';
    log_format i_server '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        'filename:"$http_filename"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80 default_server;
        server_name localhost;
        root /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;
        location / {
        }
        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
One way is to use nginx as a reverse proxy, which is effectively what you are doing already. This way you have one nginx virtual host listening on port 80 that forwards different locations to separate nginx vhosts listening on different ports on your system.
Your nginx reverse proxy vhost would look something like the following; the three proxy_set_header lines can be moved up to the server block if all locations work with them.
server {
    listen 80;
    server_name 192.168.250.37;
    port_in_redirect off;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /charts {
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Then change your Kibana conf to listen on port 8081 and the uWSGI conf to listen on port 8082, as sketched below.
Alternatively, you can combine the two vhosts into one; you will then need to set custom aliases for the root folders under each location and rearrange things.
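For illustration, a hedged sketch of what those adjusted backend vhosts might look like (binding to 127.0.0.1 is my assumption; it just keeps the backends from being reachable directly):

    # Kibana vhost, moved from port 80 to 8081
    server {
        listen 127.0.0.1:8081;
        location / {
            proxy_pass http://192.168.250.37:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    # uWSGI vhost, moved from port 80 to 8082
    server {
        listen 127.0.0.1:8082;
        root /usr/local/wsgi;
        client_max_body_size 500M;
        location / {
            include uwsgi_params;
            uwsgi_pass 192.168.250.37:9091;
            uwsgi_read_timeout 600;
        }
    }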
server {
    listen 80;
    server_name 192.168.250.37;
    root /usr/local/wsgi;
    client_max_body_size 500M;
    proxy_read_timeout 600;
    #auth_basic "Restricted Access";
    #auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://192.168.250.37:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /charts {
        include uwsgi_params;
        uwsgi_pass 192.168.250.37:9091;
        uwsgi_read_timeout 600;
    }
}
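One follow-up worth mentioning (an assumption about the app, not something from the original answer): with either layout the uWSGI app is now served under the /charts prefix, so if it builds absolute URLs it needs to know about that prefix. A common pattern is to pass SCRIPT_NAME from nginx, for example:

    location /charts {
        include uwsgi_params;
        uwsgi_pass 192.168.250.37:9091;
        uwsgi_read_timeout 600;
        # hypothetical: tell the WSGI app it is mounted under /charts
        uwsgi_param SCRIPT_NAME /charts;
    }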