Kong behind nginx reverse proxy - nginx

I use Kong as my API Gateway, running in a Docker container. By executing the following command from the Docker host, I get the correct response:
root@prod-s-swarm01:~# curl -i -X GET --url http://prod-s-swarm:8000 --header 'Host: example.com' --header 'apikey: auth-key-maks'
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Thu, 24 Oct 2019 11:16:10 GMT
Server: Apache/2.4.7 (Ubuntu)
Vary: Accept-Encoding
X-RateLimit-Remaining-hour: 4
X-RateLimit-Limit-second: 2
X-RateLimit-Remaining-second: 1
X-RateLimit-Limit-hour: 5
X-Kong-Upstream-Latency: 25
X-Kong-Proxy-Latency: 139
Via: kong/1.3.0
<!DOCTYPE html>
<html lang="ru">
<head>
.......
But the same request through my nginx proxy returns an incorrect response:
root@prod-s-swarm01:~# curl -i -X GET --url https://kong.myserver.com --header 'Host: example.com' --header 'apikey: auth-key-maks'
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 24 Oct 2019 11:14:33 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 97
Connection: keep-alive
X-Powered-By: Express
ETag: W/"61-Mn0BCF+92vC7dF087oyDAFsiE"
{"Status":"ERROR","Error":"Bad authorize","ErrorDesc":"Не верная авторизация"}
My nginx proxy config:
server {
    listen 443 ssl;
    server_name kong.myserver.com;

    ssl_certificate /etc/letsencrypt/live/appgw/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/appgw/privkey.pem;

    location / {
        proxy_pass http://prod-s-swarm:8000;
        proxy_set_header Host $host;
    }
}
I also tried $http_host, but that did not work either.

Requests with any other Host header fall into nginx's default_server. Alternatively, you have to list every domain served by your Kong APIs in server_name.
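As a rough sketch of those two options (reusing the hostnames from the question; the certificate must of course cover whatever names you serve), you can either mark this server block as the default for port 443 or enumerate the Kong hostnames, and pass the client's Host header through unchanged:

server {
    listen 443 ssl default_server;                  # option 1: catch every Host name on this port
    server_name kong.myserver.com example.com;      # option 2: list the Kong route hosts explicitly

    ssl_certificate /etc/letsencrypt/live/appgw/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/appgw/privkey.pem;

    location / {
        proxy_pass http://prod-s-swarm:8000;
        proxy_set_header Host $http_host;           # forward the Host header the client sent, so Kong can match its route
    }
}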

Related

Wordpress HTTPS mode strange behavior

I'm running Wordpress on nginx+fpm (port 80) behind another nginx with SSL set up and so on.
I have some background with doing these things right, and Wordpress is working fine: it delivers the site, lets me into wp-admin, etc. However, I've stumbled over the fact that WP's 'index.php' does not honor HTTPS=on mode for some reason.
Test setup:
Wordpress is set to deliver pages at the https://example.com/info/ URL from IP 10.130.0.4 behind the proxy.
1.
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/wp-login.php
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:29:27 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/info/; secure
Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/; secure
X-Frame-Options: SAMEORIGIN
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/info/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:32:13 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Link: <https://example.com/info/wp-json/>; rel="https://api.w.org/"
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:35:52 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Link: <https://example.com/info/wp-json/>; rel="https://api.w.org/"
And number 4, the one that makes me almost literally bump my head against the table, is:
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/index.php
HTTP/1.1 301 Moved Permanently
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:39:21 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Redirect-By: WordPress
Location: https://example.com/
Nginx is set up using the most widespread gist for it:
location / {
    # This is cool because no php is touched for static content.
    # include the "?$args" part so non-default permalinks doesn't break when using query string
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi_params;
    fastcgi_intercept_errors on;
    fastcgi_pass php;
    #The following parameter can be also included in fastcgi_params file
    fastcgi_param REQUEST_SCHEME https;
    fastcgi_param HTTPS on;
    fastcgi_param HTTP_X_FORWARDED_PROTO https;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}
If I am missing anything obvious here, please point me to it.
Again, WP is working wholly fine, but the AIOSEO plugin tries to deliver robots.txt via 'http://10.130.0.4/index.php?aioseo_robots_path=root' and gets redirected to https://example.com, which is a different (static) site.
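For completeness, the SSL-terminating front proxy is not shown in the post; presumably it is something along these lines (a hypothetical sketch, with the backend address taken from the question):

server {
    listen 443 ssl;
    server_name example.com;

    # certificate directives omitted

    location / {
        proxy_pass http://10.130.0.4;
        proxy_set_header Host $host;                  # keep the public hostname
        proxy_set_header X-Forwarded-Proto $scheme;   # tell the backend the original scheme
    }
}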

Nginx running, IP access in browser works but DNS name times out

I'm running into an issue that I can't solve myself...
I'm running a Debian 10 server with nginx freshly installed on it.
IPV4: 149.56.45.129, DNS: yocha.app
Result of hostnamectl:
Static hostname: yocha.app
Icon name: computer-vm
Chassis: vm
Machine ID: d72735cff36a41f0a5326f0bb7eb1778
Boot ID: 72dd9022a4894eeea82bc74480543823
Virtualization: kvm
Operating System: Debian GNU/Linux 10 (buster)
Kernel: Linux 4.19.0-13-cloud-amd64
Architecture: x86-64
My /etc/hosts:
127.0.0.1 localhost
149.56.45.129 yocha.app
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
My nginx sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name yocha.app;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
When I access my IP address in the browser, I do get the nginx welcome page, which is good, I guess.
BUT when I try to access the DNS name, the request times out with no response...
I can log in with SSH using my DNS name, I can ping it without problems, I can even curl it, but when it comes to opening it in a browser, nothing happens.
curl -I http://149.56.45.129:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 21 Jan 2021 13:40:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 21 Jan 2021 13:05:20 GMT
Connection: keep-alive
ETag: "60097c10-264"
Accept-Ranges: bytes
me@yocha:~$ curl -I http://yocha.app:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 21 Jan 2021 13:40:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 21 Jan 2021 13:05:20 GMT
Connection: keep-alive
ETag: "60097c10-264"
Accept-Ranges: bytes
(Opening http://yocha.app in a browser just times out.)
Does anyone have a clue for me?
Thanks a lot in advance!
Your site is redirecting to https:
$ curl -v http://yocha.app
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.2
< Date: Fri, 29 Jan 2021 20:21:46 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://yocha.app/
and port 443 is not open or it's blocked:
$ telnet yocha.app 443
Trying 149.56.45.129...
telnet: Unable to connect to remote host: Connection timed out
DNS is fine: check your firewall, or make sure nginx is properly configured to listen on port 443 and to serve an SSL certificate.
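A minimal sketch of such an HTTPS server block (assuming a Let's Encrypt certificate for yocha.app; the paths are placeholders, and the firewall must also allow 443/tcp):

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name yocha.app;

    ssl_certificate /etc/letsencrypt/live/yocha.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yocha.app/privkey.pem;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}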

Nginx removes Content-Length header when acting as reverse proxy behind a WAF

I have Nginx 1.16.1 as a reverse proxy for JFrog Artifactory, and they are reachable from external networks via a web application firewall. I am trying to get the docker client working with this setup. It sends a HEAD request and expects a Content-Length in the response to check for the existence of a layer. Now I see that Content-Length is not included in the response received by the client. I can examine this by sending with curl the same request that docker sends:
$ curl -H 'User-Agent: docker/19.03.13 go/go1.13.15 git-commit/4484c46d9d kernel/4.19.128-microsoft-standard os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13 \(linux\))' \
-H "Authorization: Bearer ${TOKEN}" \
-H 'Connection: close' \
-I \
"https://${ARTIFACTORY_URL}/v2/${IMAGE}/blobs/${DIGEST}"
HTTP/1.1 200 OK
Date: Mon, 09 Nov 2020 14:57:05 GMT
Server: Secure Entry Server
Content-Type: application/octet-stream
Docker-Content-Digest: sha256:[MASKED]
Docker-Distribution-Api-Version: registry/2.0
X-Artifactory-Id: [MASKED]
X-Artifactory-Node-Id: [MASKED]
Set-Cookie: SCDID_S=[MASKED]; path=/; Secure; HttpOnly
Connection: close
However, I see in the access log of Artifactory that it sets this response header. I used tcpdump to see what data is exchanged between Nginx and Artifactory:
HEAD /v2/[MASKED]/blobs/[MASKED] HTTP/1.1
X-JFrog-Override-Base-Url: https://[MASKED]:443
X-Forwarded-Port: 443
X-Forwarded-Proto: https
Host: [MASKED]
X-Forwarded-For: 10.10.40.14
Connection: close
ClientCorrelator: 0rIKeSpqZ9E$
RequestCorrelator: 7f0100-9099-2020.11.09_1457.05.275-001
HSP_CLIENT_ADDR: [MASKED]
Hsp-ListenerUri: https://[MASKED]
HSP_HTTPS_HOST: [MASKED]:443
Accept: */*
Authorization: Bearer [MASKED]
User-Agent: docker/19.03.13 go/go1.13.15 git-commit/4484c46d9d kernel/4.19.128-microsoft-standard os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13 \(linux\))
HTTPS: on
SSLSessionID: 78ad360e9ea54f5efdb72ea223a63b6cbc7788ae9a1e876620e398040d06182c
SSLSessionTimeLeft: 3600
SSLSessionAge: 0
SSLCipher: ECDHE-RSA-AES128-GCM-SHA256
SSLCipherKeySize: 128
SSLProtocolVersion: TLSv1.2
Via: HTTP/1.1 Secure Entry Server
HTTP/1.1 200 OK
Content-Length: 2529
Content-Type: application/octet-stream
Date: Mon, 09 Nov 2020 14:57:05 GMT
Docker-Content-Digest: [MASKED]
Docker-Distribution-Api-Version: registry/2.0
Server: Artifactory/7.4.1 70401900
X-Artifactory-Id: 5a2dee84b6d80d2f:1f521881:17554c79de4:-8000
X-Artifactory-Node-Id: [MASKED]
Connection: close
The TrafficAnalyzer on the WAF shows that the Content-Length in the incoming response from Artifactory is missing. Hence it must be Nginx that is removing it.
Now, when I connect via VPN to get around the WAF, the response looks okay:
Host: [MASKED]
User-Agent: docker/19.03.13 go/go1.13.15 ...
Authorization: Bearer [MASKED]
Connection: close
Date: Fri, 06 Nov 2020 17:13:58 GMT
Content-Type: application/octet-stream
Content-Length: 2529
Docker-Content-Digest: [MASKED]
Docker-Distribution-Api-Version: registry/2.0
Server: Artifactory/7.4.1 70401900
X-Artifactory-Id: 5a2dee84b6d80d2f:1f521881:17554c79de4:-8000
X-Artifactory-Node-Id: [MASKED]
Connection: close
But I also notice that there are fewer headers set in the request. Is it some additional WAF header that causes Nginx to remove Content-Length? I don't see anything related to this in the Nginx debug log. Any thoughts?
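(The post does not include the nginx configuration sitting between the WAF and Artifactory; for orientation only, a typical reverse-proxy block for Artifactory would look roughly like the following. Everything here is hypothetical, with placeholder names and ports.)

server {
    listen 443 ssl;
    server_name artifactory.example.com;             # placeholder

    # certificate directives omitted

    location / {
        proxy_pass http://artifactory-backend:8081;   # placeholder upstream
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}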

NGINX reverse proxy to https upstream

I'm trying to set up an Nginx reverse proxy in front of an AWS Elastic Load Balancer with TLS enabled on it.
The configuration I've tried:
events {}

http {
    upstream pricing {
        server pricing-api.my-awselb.com:443;
    }

    server {
        listen 80;
        server_name localhost;

        location /pricing {
            proxy_pass https://pricing;
        }
    }
}
Now, when I run Nginx in Docker locally on port 8080 and try to test it, I get a 404:
> http http://localhost:8080/pricing
HTTP/1.1 404 Not Found
Connection: keep-alive
Content-Length: 0
Date: Wed, 21 Oct 2020 21:47:56 GMT
Server: nginx/1.19.3
The upstream itself is accessible from my local machine:
> http https://pricing-api.my-awselb.com
HTTP/1.1 302 Found
Connection: keep-alive
Content-Length: 0
Date: Wed, 21 Oct 2020 21:54:36 GMT
Location: /swagger
Server: Kestrel
What's wrong with my Nginx configuration?
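No answer is recorded here, but with this symptom the usual suspects are that the /pricing prefix is forwarded to the ELB unchanged and that the Host/SNI sent upstream is the upstream block's name ("pricing") rather than the real ELB hostname. A hedged sketch of the adjustments commonly tried:

events {}

http {
    upstream pricing {
        server pricing-api.my-awselb.com:443;
    }

    server {
        listen 80;
        server_name localhost;

        location /pricing/ {
            proxy_pass https://pricing/;                      # trailing slash strips the /pricing prefix
            proxy_set_header Host pricing-api.my-awselb.com;  # the ELB routes by Host, not by "pricing"
            proxy_ssl_server_name on;                         # send SNI during the TLS handshake...
            proxy_ssl_name pricing-api.my-awselb.com;         # ...using the real ELB hostname
        }
    }
}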

Nginx missing content-type for woff2

The problem: nginx is missing the content type for woff2.
curl -s -I -X GET https://.../Montserrat-Medium.woff2
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 10 Oct 2018 10:30:54 GMT
Content-Length: 118676
Connection: keep-alive
Keep-Alive: timeout=60
Last-Modified: Wed, 10 Oct 2018 10:27:24 GMT
ETag: "1cf94-577dd4cdf1e25"
Accept-Ranges: bytes
What I've tried:
1. Added application/woff2 woff2; to /etc/nginx/mime.types (also application/x-font-woff2, etc.).
2. Added this part to the server section, and it works:
location ~* ^.+.woff2$ {
    return 403;
}
3. Changed the part above to this, and still had no success:
location ~* ^.+\.woff2$ {
    proxy_pass https://82.202.226.111:8443;
    add_header Content-type application/woff2;
    root /var/web/public_shtml;
    access_log off;
    expires 7d;
    try_files $uri #fallback;
}
I've also looked through the nginx -T configuration to be sure there are no other conditions for woff2.
(3) is almost right. Also add the following to remove the upstream's Content-Type:
proxy_hide_header Content-Type;
Changes to mime.types file are not necessary in this case.
But Richard Smith is right: it's the upstream that returns the wrong content type.
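Putting the answer together, the location from (3) would end up roughly like this (a sketch based only on the snippets above; the root/try_files fallback lines from (3) are left out for brevity):

location ~* \.woff2$ {
    proxy_pass https://82.202.226.111:8443;
    proxy_hide_header Content-Type;               # drop the upstream's wrong Content-Type
    add_header Content-Type application/woff2;    # and send the correct one instead
    access_log off;
    expires 7d;
}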
