I've configured nginx as a front-end load balancer across three nodes of a web application I've built. nginx continually returns 400 Bad Request - Invalid Hostname errors regardless of the values I use for upstream.server and server.server_name. I've tried localhost and 127.0.0.1 for both of those values and issued matching cURL/Postman requests, to no avail.
I've also tried setting server.server_name to include the port number, to better match the incoming HTTP Host header, again to no avail.
nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream myapp {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 8000;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://myapp;
        }
    }
}
cURL requests result in the following (no difference between using localhost and 127.0.0.1).
C:\>curl -v http://127.0.0.1:8000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.17.1
< Date: Mon, 22 Jul 2019 14:29:22 GMT
< Content-Type: text/html; charset=us-ascii
< Content-Length: 334
< Connection: keep-alive
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>
* Connection #0 to host 127.0.0.1 left intact
The solution was to add proxy_set_header Host <hostname> in the location block of the nginx config.
Thank you to Michael Hampton on Server Fault.
events {
    worker_connections 1024;
}

http {
    upstream myapp {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 8000;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://myapp;
            proxy_set_header Host $host;
        }
    }
}
Related
I have an nginx ingress configured in a k8s pod, with the following configuration:
location = /alert-cluster1 {
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080/alertmanager;
    proxy_set_header X-Scope-OrgID cluster1;
    proxy_pass_request_headers on;
}

location = /alert-cluster2 {
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080/alertmanager;
    proxy_set_header X-Scope-OrgID cluster2;
    proxy_pass_request_headers on;
}

location ~ /alertmanager {
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080$request_uri;
}
Basically what I need is:
when calling http://mydns/alert-cluster1, nginx should rewrite to /alertmanager with the header X-Scope-OrgID set to cluster1.
when calling http://mydns/alert-cluster2, nginx should rewrite to /alertmanager with the header X-Scope-OrgID set to cluster2.
The proxy_pass directives point to a k8s service.
When performing a cURL, the X-Scope-OrgID header is not set during the forward to /alertmanager, and Alertmanager responds with "no org id".
curl http://mydns/multitenant-cluster1 -vL
* Connected to ..... port 80 (#0)
> GET /multitenant-cluster1 HTTP/1.1
> Host: mydns
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< Content-Type: text/html; charset=utf-8
< Content-Length: 49
< Connection: keep-alive
< Server: nginx/1.22.0
< Date: Tue, 19 Jul 2022 10:08:57 GMT
< Location: /alertmanager/
< Vary: Accept-Encoding
< X-Kong-Upstream-Latency: 2
< X-Kong-Proxy-Latency: 0
< Via: kong/2.0.4
<
* Ignoring the response-body
* Connection #0 to host mydns left intact
* Issue another request to this URL: 'http://mydns/alertmanager/'
* Found bundle for host mydns: 0x6000014e40c0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host mydns
* Connected to mydns (10.228.41.23) port 80 (#0)
> GET /alertmanager/ HTTP/1.1
> Host: mydns
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Content-Type: text/plain; charset=utf-8
< Content-Length: 10
< Connection: keep-alive
< Server: nginx/1.22.0
< Date: Tue, 19 Jul 2022 10:08:57 GMT
< Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
< X-Kong-Upstream-Latency: 2
< X-Kong-Proxy-Latency: 0
< Via: kong/2.0.4
<
no org id
* Connection #0 to host mydns left intact
But if I call /alertmanager directly, with the header hardcoded into the cURL command:
curl http://mydns/alertmanager --header 'X-Scope-OrgID:cluster1' -L
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="icon" type="image/x-icon" href="favicon.ico" />
<title>Alertmanager</title>
</head>
<body>
<script>
// If there is no trailing slash at the end of the path in the url,
// add one. This ensures assets like script.js are loaded properly
if (location.pathname.substr(-1) != '/') {
location.pathname = location.pathname + '/';
console.log('added slash');
}
</script>
<script src="script.js"></script>
<script>
var app = Elm.Main.init({
flags: {
production: true,
defaultCreator: localStorage.getItem('defaultCreator'),
groupExpandAll: JSON.parse(localStorage.getItem('groupExpandAll'))
}
});
app.ports.persistDefaultCreator.subscribe(function(name) {
localStorage.setItem('defaultCreator', name);
});
app.ports.persistGroupExpandAll.subscribe(function(expanded) {
localStorage.setItem('groupExpandAll', JSON.stringify(expanded));
});
</script>
</body>
</html>
Am I missing something?
EDIT
After some testing I found out the following:
location ~ /alertmanager {
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080$request_uri;
    proxy_set_header X-Scope-OrgID cluster2;
}
If I add the header directly in the /alertmanager location, the header is set and the system works fine. It's as if proxy_pass does not pass the header across the redirect.
You are redirecting before the header is set; set the header before proxy_pass. The snippet below might help.
location = /alert-cluster1 {
    proxy_set_header X-Scope-OrgID cluster1;
    proxy_pass_request_headers on;
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080/alertmanager;
}

location = /alert-cluster2 {
    proxy_set_header X-Scope-OrgID cluster2;
    proxy_pass_request_headers on;
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080/alertmanager;
}

location ~ /alertmanager {
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080$request_uri;
}
In order to pass a header in a location block, you have to add it in that block; see the snippet below.
location ~ /alertmanager {
    proxy_set_header X-Scope-OrgID cluster2;
    proxy_pass_request_headers on;
    proxy_pass http://cma-cortex-alertmanager.cortex-aggregator.svc.cluster.local:8080$request_uri;
}
I receive a 404 error when calling the URL http://10.240.0.133/swagger. Below is the relevant snippet of the nginx.conf file; I need to append index.html to the end of the URI, so I placed a rewrite rule.
server {
    listen 80;
    listen [::]:80;

    server_name localhost;
    server_name 10.240.0.133;
    server_name 127.0.0.1;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    access_log /var/log/nginx/resources-reverse-access.log;
    error_log /var/log/nginx/resources-reverse-error.log;

    location /swagger {
        rewrite ^/swagger/index.html break;
        proxy_pass http://52.177.131.103:8082/;
    }
}
When I visit the URL with curl -v http://10.240.0.133/swagger, a 404 is thrown:
* Trying 10.240.0.133...
* TCP_NODELAY set
* Connected to 10.240.0.133 (10.240.0.133) port 80 (#0)
> GET /swagger HTTP/1.1
> Host: 10.240.0.133
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.14.0 (Ubuntu)
< Date: Wed, 18 Mar 2020 14:41:50 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host 10.240.0.133 left intact
I believe your rewrite rule is incorrect: it only supplies a regex and no replacement URI. It should look more like this.
location /swagger {
    rewrite ^\/swagger\/?.*?$ /swagger/index.html break;
    proxy_pass http://52.177.131.103:8082/;
}
But I believe this is still not correct, since you have not set a root directive for this server.
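For reference, a root directive only matters if nginx is meant to serve the files from disk itself rather than proxy them upstream; a rough sketch of that variant (with /var/www purely a placeholder path) could look like this:

location /swagger {
    root /var/www;      # a request for /swagger/index.html would map to /var/www/swagger/index.html
    index index.html;   # serve index.html for directory requests
}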
I am having a problem with my Nginx configuration.
I have an nginx server (A) that adds custom headers and proxy_passes to another server (B), which then proxy_passes to my Flask app (C), which reads the headers. If I go from A -> C, the Flask app can read the headers that are set, but if I go through B (A -> B -> C) the headers seem to be removed.
Config
events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
The Flask app runs on 127.0.0.1:5000.
If I change the server A config to proxy_pass http://127.0.0.1:5000, then the Flask app can see X-Forwarded-User, but if I go through server B the headers are "lost".
I am not sure what I am doing wrong. Any suggestions?
Thanks
I cannot reproduce the issue. Sending the custom header X-custom-header: custom, my netcat server receives:
nc -l -vvv -p 5000
Listening on [0.0.0.0] (family 0, port 5000)
Connection from localhost 41368 received!
GET / HTTP/1.0
Host: 127.0.0.1:5000
Connection: close
X-Forwarded-User: username
User-Agent: curl/7.58.0
Accept: */*
X-custom-header: custom
(see? the X-custom-header is on the last line)
when I run this curl command:
curl -H "X-custom-header: custom" http://127.0.0.1:4999/
against an nginx server running this exact config:
events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Thus I can only assume that the problem is in the part of your config that you aren't showing us. (You said it yourself: it's not the real config you're showing us, but a replica; specifically, a replica that doesn't exhibit the problem.)
Thus I have voted to close this question as "cannot reproduce" - at least I can't.
I am trying to set up nginx with the config below. To access backend.mygreat.server.com I have to go through my corporate proxy, which is myproxy.server.com:80.
Hence, I have added this to /etc/environment:
https_proxy=myproxy.server.com:80
Yet nginx is unable to reach https://backend.mygreat.server.com:443; I'm seeing 504 as the HTTP status in the nginx logs.
I can load the page fine with wget or curl (both go via the corporate proxy).
server {
    listen 443;
    server_name mygreat.server.com;

    ssl on;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!SEED:!DSS:!CAMELLIA;
    ssl_certificate /etc/nginx/ssl/mygreat.server.com.pem;
    ssl_certificate_key /etc/nginx/ssl/mygreat.server.com.key;

    access_log /var/log/nginx/access.ssl.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host-Real-IP $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-Pcol http;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
        proxy_pass https://backend.mygreat.server.com:443;
    }

    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
}
Any help is greatly appreciated.
Thanks
Update:
Here is a sample error log entry from nginx:
2017/10/18 06:55:51 [warn] 34604#34604: *1 upstream server temporarily disabled while connecting to upstream, client: <ip-address>, server: mygreat.server.com, request: "GET / HTTP/1.1", upstream: "https://<ip-of-backend>:443/", host: "mygreat.server.com"
If I run curl -v https://backend.mygreat.server.com/, below is the response:
* About to connect() to proxy corp-proxy.server.com port 80 (#0)
* Trying <some-ip-address>...
* Connected to corp-proxy.server.com (<ip-of-proxy>) port 80 (#0)
* Establish HTTP proxy tunnel to backend.mygreat.server.com:443
> CONNECT backend.mygreat.server.com:443 HTTP/1.1
> Host: backend.mygreat.server.com:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=backend.mygreat.server.com,OU=Technology Operations,O=MyCompany.,L=San Diego,ST=California,C=US
* start date: Mar 15 00:00:00 2017 GMT
* expire date: Mar 15 23:59:59 2020 GMT
* common name: backend.mygreat.server.com
* issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: backend.mygreat.server.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: openresty/1.11.2.5
< Date: Wed, 18 Oct 2017 14:03:10 GMT
< Content-Type: text/html;charset=UTF-8
< Content-Length: 5642
< Connection: keep-alive
< X-XSS-Protection: 1; mode=block
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private
< Expires: 0
< P3P: policyref="http://backend.mygreat.server.com/w3c/p3p.xml" CP="CURa OUR STP UNI INT"
< Content-Language: en
< Set-Cookie: qboeuid=127.0.0.1.1508335390550307; path=/; expires=Thu, 18-Oct-18 14:03:10 GMT; domain=.server.com
< Set-Cookie: JSESSIONID=784529AA39C10C3DB4B0ED0D61CC8F31.c23-pe2ec23uw2apu012031; Path=/; Secure; HttpOnly
< Set-Cookie: something.blah_blah=testme; Domain=.server.com; Path=/; Secure
< Vary: Accept-Encoding
<
<!DOCTYPE html>
<html>
....
</html>
First of all, I am not sure whether nginx is supposed to respect the http_proxy and https_proxy variables; I didn't find any documentation on that. So I assume your issue is that nginx is not using the proxy at all.
You now need something that actually does use the proxy. This is where socat comes to the rescue.
Running a socat forwarder
If you have a transparent proxy, then run:
socat TCP4-LISTEN:8443,reuseaddr,fork TCP:<proxysever>:<proxyport>
And if you have a CONNECT proxy, then use:
socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:yourproxy:backendserver:443,proxyport=<yourproxyport>
Then use the following in your nginx config:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host-Real-IP $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-Pcol http;
    proxy_intercept_errors on;
    proxy_set_header Host backend.mygreat.server.com;
    proxy_pass https://127.0.0.1:8443;
    proxy_redirect https://backend.mygreat.server.com https://mygreat.server.com;
}
You probably want to use a systemd service to launch socat, so it runs on startup and is managed as a service.
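A minimal sketch of such a unit, assuming socat is installed at /usr/bin/socat and reusing the CONNECT-proxy command from above with its placeholders unchanged (the unit path and name are arbitrary, e.g. /etc/systemd/system/socat-proxy.service):

[Unit]
Description=socat forwarder to the corporate proxy
After=network-online.target
Wants=network-online.target

[Service]
# same socat invocation as above; replace the proxy/backend placeholders with your values
ExecStart=/usr/bin/socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:yourproxy:backendserver:443,proxyport=<yourproxyport>
Restart=always

[Install]
WantedBy=multi-user.target

Then systemctl daemon-reload followed by systemctl enable --now socat-proxy should start the forwarder and keep it running across reboots.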
Nginx's proxy_pass does not support an HTTPS forward proxy.
An HTTP proxy can be used, but only for http:// request URLs.
Here is an example:
server {
    listen 8880;
    server_name localhost;

    location / {
        rewrite ^(.*)$ "://developer.android.com$1";
        rewrite ^(.*)$ "http$1" break;
        proxy_set_header Proxy-Connection Keep-Alive;
        proxy_set_header Host developer.android.com;
        proxy_pass http://127.0.0.1:1080;
        proxy_redirect ~^https?://developer\.android\.com(.*)$ http://$host:8080$1;
    }
}
see: https://serverfault.com/a/683955/418613
I installed GitLab with the official Docker container:
docker run -d -p 8002:80 -v /mnt/gitlab/etc/gitlab:/etc/gitlab -v /mnt/gitlab/var/opt/gitlab:/var/opt/gitlab -v /mnt/gitlab/var/log/gitlab:/var/log/gitlab gitlab/gitlab-ce
I'm using nginx as a reverse proxy:
upstream gitlab {
    server localhost:8002;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    keepalive_timeout 70;

    ssl_certificate /etc/letsencrypt/live/git.cedware.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.cedware.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    server_name git.cedware.com;
    client_max_body_size 300M;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:8002/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwared-Ssl off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This all works totally fine until I add this line to gitlab.rb:
external_url 'https://git.cedware.com';
After restarting the container, nginx can't reach GitLab. Can someone tell me what's wrong with my setup?
Edit:
This is the output of curl -v https://git.cedware.com:
* Rebuilt URL to: https://git.cedware.com/
* Trying 37.120.177.116...
* Connected to git.cedware.com (37.120.177.116) port 443 (#0)
* found 175 certificates in /etc/ssl/certs/ca-certificates.crt
* found 700 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: git.cedware.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=git.cedware.com
* start date: Wed, 04 Jan 2017 16:58:00 GMT
* expire date: Tue, 04 Apr 2017 16:58:00 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: git.cedware.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.10.0 (Ubuntu)
< Date: Thu, 05 Jan 2017 08:45:52 GMT
< Content-Type: text/html
< Content-Length: 182
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.10.0 (Ubuntu)</center>
</body>
</html>
* Connection #0 to host git.cedware.com left intact
And this is the content of the nginx error.log:
> 2017/01/05 09:47:43 [error] 26258#26258: *1 recv() failed (104:
> Connection reset by peer) while reading response header from upstream,
> client: 217.7.247.238, server: git.cedware.com, request: "GET /
> HTTP/1.1", upstream: "http://127.0.0.1:8002/", host: "git.cedware.com"
> 2017/01/05 09:47:43 [error] 26258#26258: *1 recv() failed (104:
> Connection reset by peer) while reading response header from upstream,
> client: 217.7.247.238, server: git.cedware.com, request: "GET /
> HTTP/1.1", upstream: "http://[::1]:8002/", host: "git.cedware.com"
> 2017/01/05 09:47:43 [error] 26258#26258: *1 no live upstreams while
> connecting to upstream, client: 217.7.247.238, server:
> git.cedware.com, request: "GET /favicon.ico HTTP/1.1", upstream:
> "http://localhost/favicon.ico", host: "git.cedware.com", referrer:
> "https://git.cedware.com/"
As per the nginx error shown in the log, the upstream is not responding. This is not an nginx error.
Most likely your container is either down or stuck in a restart loop.
Use docker ps to see the container status, then use docker logs <containername> to see any errors it generates.
It is possible that GitLab doesn't like your gitlab.rb modification; the log should tell you more.
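For example (the container name is whatever docker ps reports):

docker ps -a                    # -a also lists stopped or restarting containers
docker logs <containername>     # check the end of the output for errors from the last start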
You should expose the container's port 443, since you are using HTTPS for GitLab.
Also, the location in your host system's nginx config should then proxy to https://localhost:some_443_port/.
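A rough sketch of that suggestion, with 8443 as an arbitrary free host port (any unused port works):

docker run -d -p 8002:80 -p 8443:443 \
    -v /mnt/gitlab/etc/gitlab:/etc/gitlab \
    -v /mnt/gitlab/var/opt/gitlab:/var/opt/gitlab \
    -v /mnt/gitlab/var/log/gitlab:/var/log/gitlab \
    gitlab/gitlab-ce

and in the host nginx location block:

location / {
    proxy_http_version 1.1;
    proxy_pass https://localhost:8443/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}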