I am having a problem with my Nginx configuration.
I have an Nginx server (A) that adds custom headers and then proxy_passes to another server (B), which in turn proxy_passes to my Flask app (C) that reads the headers. If I go from A -> C, the Flask app can read the headers that are set, but if I go through B (A -> B -> C) the headers seem to be removed.
Config
events {
    worker_connections 512;
}
http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;
        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }
    # Server A
    server {
        listen 4999;
        server_name domain.com;
        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
The Flask app is running on 127.0.0.1:5000.
If I change the server A config to proxy_pass http://127.0.0.1:5000, then the Flask app can see X-Forwarded-User, but if I go through server B the header is "lost".
I am not sure what I am doing wrong. Any suggestions?
Thanks
I can not reproduce the issue. Sending the custom header X-custom-header: custom, in my netcat server I get:
nc -l -vvv -p 5000
Listening on [0.0.0.0] (family 0, port 5000)
Connection from localhost 41368 received!
GET / HTTP/1.0
Host: 127.0.0.1:5000
Connection: close
X-Forwarded-User: username
User-Agent: curl/7.58.0
Accept: */*
X-custom-header: custom
(See? The X-custom-header is on the last line.)
when I run this curl command:
curl -H "X-custom-header: custom" http://127.0.0.1:4999/
against an nginx server running this exact config:
events {
    worker_connections 512;
}
http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;
        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }
    # Server A
    server {
        listen 4999;
        server_name domain.com;
        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Thus I can only assume that the problem is in the part of your config that you aren't showing us. (You said it yourself: it's not the real config you're showing us, but a replica. Specifically, a replica that doesn't show the problem.)
Thus I have voted to close this question as "can not reproduce" - at least I can't.
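If you want to pin down where the header disappears on your real setup, one low-tech option is to log it at server B. This is only a debugging sketch layered on top of the config above (the format name and log path are my own, not from the original post):
# inside the http block, purely for debugging
log_format hdr_debug '$remote_addr -> $host "X-Forwarded-User: $http_x_forwarded_user"';
# Server B
server {
    listen 127.0.0.1:5001;
    server_name 127.0.0.1;
    # logs the incoming X-Forwarded-User so you can see whether it reaches B at all
    access_log /var/log/nginx/header_debug.log hdr_debug;
    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
If the value shows up in that log but not in Flask, something between B and C strips it; if it is already empty here, the problem is between A and B.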
I have an nginx config that looks similar to this (simplified):
http {
    server {
        listen 80 default_server;
        location /api {
            proxy_pass https://my-bff.azurewebsites.net;
            proxy_ssl_server_name on;
        }
    }
}
Essentially, I have a reverse proxy to an API endpoint that uses https.
Now, I would like to convert this to an upstream group to gain access to keepalive and other features. So I tried this:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }
    server {
        listen 80 default_server;
        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;
        }
    }
}
Yet it doesn't work. Clearly I'm missing something.
In summary, how do I correctly do this "conversion" i.e. from url to defined upstream?
I have tried switching to http instead of https in the proxy_pass directive, but that didn't work either.
I was honestly expecting this to be a simple replacement. One upstream for another, but I'm doing something wrong it seems.
Richard Smith pointed me in the right direction.
Essentially, the issue was that the Host header was being set to "bff-app" instead of "my-bff.azurewebsites.net", and this caused the remote server to close the connection.
Fixed by specifying the header manually, like below:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }
    server {
        listen 80 default_server;
        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;
            # Manually set Host header to "my-bff.azurewebsites.net",
            # otherwise it will default to "bff-app".
            proxy_set_header Host my-bff.azurewebsites.net;
        }
    }
}
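Two further points worth noting, as a sketch rather than part of the accepted fix: with an upstream block, the SNI name follows proxy_ssl_name, which defaults to the upstream name "bff-app", so for an https backend you may also need to set it explicitly; and since keepalive was the motivation for the change, the upstream can carry a keepalive directive, which only takes effect with HTTP/1.1 and an empty Connection header. The keepalive count below is an assumption:
upstream bff-app {
    server my-bff.azurewebsites.net:443;
    keepalive 16;   # assumed cache size for idle upstream connections
}
server {
    listen 80 default_server;
    location /api {
        proxy_pass https://bff-app;
        proxy_ssl_server_name on;
        # without this, SNI would carry the upstream name "bff-app"
        proxy_ssl_name my-bff.azurewebsites.net;
        proxy_set_header Host my-bff.azurewebsites.net;
        # required for the keepalive directive above to actually be used
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}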
Here's the setup:
forwarding_proxy -> server_1, server_2
server_1 -> app1.domain.com, app2.domain.com
server_2 -> app3.domain.com, app4.domain.com
Where each server is running a docker daemon with an nginx reverse-proxy based on the jwilder/nginx-proxy + letsencrypt setup.
Both servers sit behind the same router, and I need a way to route traffic to the correct one based on the host name. I've been trying to use the nginx stream module since I don't want the forwarding proxy to handle any SSL termination, but the $ssl_preread_server_name variable doesn't (seem to) capture the host name on plain HTTP traffic, and I can't do a 301 from server blocks in the stream module. What's the best way to approach this?
I've included an example of the config I'm currently working with and I've tried multiple iterations. Open to any suggestions.
(Also, as an aside, nothing logs to access.log)
Forward_proxy nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
stream {
    # bare bones content, still nothing written to the log.
    log_format main '[$time_local] $remote_addr';
    access_log /var/log/nginx/access.log main;
    map $ssl_preread_server_name $name {
        app1.domain.com server1;
        app2.domain.com server1;
        app3.domain.com server2;
        app4.domain.com server2;
    }
    upstream server1 {
        server server1:80;
    }
    upstream server2 {
        server server2:80;
    }
    upstream server1_ssl {
        server server1:443;
    }
    upstream server2_ssl {
        server server2:443;
    }
    server {
        listen 80;
        proxy_pass $name;
        ssl_preread on;
    }
    server {
        listen 443;
        proxy_pass "${name}_ssl";
        ssl_preread on;
    }
}
Came up with a solution, happy to hear of better ones.
Instead of a single forwarding proxy, I created two new nginx containers: one for HTTP traffic and one for HTTPS traffic, and put them both in a single docker-compose file for easier management.
HTTP-forwarding-proxy
http {
    map $host $name {
        default server1;
        app3.strangedreamsinc.com server2;
        app4.strangedreamsinc.com server2;
    }
    upstream server1 {
        server server1_ip:8080;
    }
    upstream server2 {
        server server2_ip:8080;
    }
    server {
        listen 80 default_server;
        server_name _;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://$name;
        }
    }
}
HTTPS-forwarding-proxy
stream {
    map $ssl_preread_server_name $name {
        default server1;
        app1.strangedreamsinc.com server1;
        app2.strangedreamsinc.com server1;
    }
    upstream server1 {
        server server1_ip:8443;
    }
    upstream server2 {
        server server2_ip:8443;
    }
    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}
I'm not convinced there isn't a better way and there's probably something I'm overlooking, but this allows me to transparently route traffic to the correct reverse-proxy and still supports the letsencrypt protocols to apply SSL to my servers.
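One limitation to be aware of with this layout: for the HTTPS path the stream proxy just forwards TCP, so the backends only ever see the forwarding proxy's address as the client IP. If the real client IP matters there, one option (a sketch, assuming the jwilder proxies are reconfigured to accept it) is the PROXY protocol:
stream {
    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
        # prepend a PROXY protocol header carrying the original client address;
        # the receiving nginx must then use "listen 443 ssl proxy_protocol;"
        proxy_protocol on;
    }
}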
I have configured nginx for redirection and it works successfully.
But now I want load balancing.
For that I have already created load-balancer.conf and added the server entries to that file, like this:
upstream backend {
    # ip_hash;
    server 1.2.3.4;
    server 5.6.7.8;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
I did the same configuration on both instances.
By default nginx uses the round-robin algorithm, so requests should be passed from one PC to the other,
but it does not seem to be working.
Can anyone suggest how I can confirm that the second request goes to the other server (5.6.7.8),
so I can check the load balancing?
Thank you so much.
Create a log file for the upstream so you can check which server each request goes to:
http {
    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';
    upstream backend {
        # ip_hash;
        server 1.2.3.4;
        server 5.6.7.8;
    }
    server {
        listen 80;
        access_log /var/log/nginx/nginx-access.log upstreamlog;
        location / {
            proxy_pass http://backend;
        }
    }
}
Then check your log file:
sudo cat /var/log/nginx/nginx-access.log
You will see log entries like:
to: 5.6.7.8:80 {GET /sites/default/files/abc.png HTTP/1.1} upstream_response_time 0.171 request_time 0.171
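One thing to keep in mind while testing: if the ip_hash line is ever uncommented, every request from the same client IP is pinned to the same backend, so testing from a single PC would always show one server. To watch plain round-robin alternate between 1.2.3.4 and 5.6.7.8, leave it commented out:
upstream backend {
    # ip_hash;    # pins each client IP to one server - disable while checking round-robin
    server 1.2.3.4;
    server 5.6.7.8;
}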
I have 2 servers on my network:
one Linux machine (192.168.0.2) with a website listening on port 8181 for service1.domain.com
one Windows machine (192.168.0.3) with a website listening on port 8080 for service2.domain.com
I want to set up an nginx reverse proxy so that I can route requests like so:
service1.domain.com --> 192.168.0.2:8181 with host header service1.domain.com
service2.domain.com --> 192.168.0.3:8080 with host header service2.domain.com
I have tried with the following config:
### General Server Settings ###
worker_processes 1;
events {
    worker_connections 1024;
}
### Reverse Proxy Listener Definition ###
http {
    server {
        listen 80;
        server_name service1.domain.com;
        location / {
            proxy_pass http://192.168.0.2:8181;
            proxy_set_header Host service1.domain.com;
        }
    }
    server {
        listen 80;
        server_name service2.domain.com;
        location / {
            proxy_pass http://192.168.0.3:8080;
            proxy_set_header Host service2.domain.com;
        }
    }
}
But that doesn't seem to work.
Is there anything blindingly obvious that I might be doing wrong here?
This works fine for me:
http {
    server {
        listen 80;
        server_name service1.domain.com;
        location / {
            proxy_pass http://192.168.0.2:8181;
            proxy_set_header Host service1.domain.com;
        }
    }
    server {
        listen 80;
        server_name service2.domain.com;
        location / {
            proxy_pass http://192.168.0.3:8080;
            proxy_set_header Host service2.domain.com;
        }
    }
}
Give it a try?
I need to keep the connection between nginx and my upstream Node.js servers alive.
I just compiled and installed nginx 1.2.0.
My configuration file:
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}
server {
    listen 9000;
    server_name dev;
    location / {
        proxy_pass http://backend;
        error_page 404 = 404.png;
    }
}
My program (dev:3001 - 3004) detects that the connection is closed by nginx after each response.
The documentation states that for http keepalive, you should also set proxy_http_version 1.1; and proxy_set_header Connection "";
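Applied to the config above, a sketch of what that would look like (only the two documented directives are added):
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}
server {
    listen 9000;
    server_name dev;
    location / {
        proxy_pass http://backend;
        # both directives are needed for nginx to keep upstream connections open
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        error_page 404 = 404.png;
    }
}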