Nginx pod responds with its listening port in self-signed certificate example

Env:
❯ sw_vers
ProductName: macOS
ProductVersion: 11.6.1
BuildVersion: 20G224
❯ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
I made a self-signed certificate example on an NGINX pod. I'll omit the creation of the certificates and keys, since they work on my local Mac; the files are the following:
❯ ll rootCA.*
-rw-r--r--@ 1 hansuk staff 1383 1 17 12:37 rootCA.crt
-rw------- 1 hansuk staff 1874 1 17 12:02 rootCA.key
❯ ll localhost.*
-rw------- 1 hansuk staff 1704 1 17 12:09 localhost.key
-rw-r--r-- 1 hansuk staff 1383 1 17 12:37 localhost.pem
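For reference, files like these can be produced along these lines (a sketch only; the subject fields are copied from the certificate dump further down, and the exact flags originally used are assumed):
# Root CA: unencrypted key plus a self-signed CA certificate
❯ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout rootCA.key -out rootCA.crt \
    -subj "/C=KR/ST=RootState/L=RootCity/O=Root Inc./OU=Root CA/CN=Self-signed Root CA"
# Server key and CSR, then sign the CSR with the root CA, adding a SAN for localhost
❯ openssl req -newkey rsa:2048 -nodes -keyout localhost.key -out localhost.csr \
    -subj "/C=KR/ST=Seoul/L=Seocho-gu/O=Localhost/CN=localhost"
❯ openssl x509 -req -in localhost.csr -CA rootCA.crt -CAkey rootCA.key \
    -CAcreateserial -days 365 -out localhost.pem \
    -extfile <(printf "subjectAltName=DNS:localhost")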
Start up the following Kubernetes definitions on minikube (kubectl apply -f nginx.yml -n cert):
apiVersion: v1
kind: Service
metadata:
  name: nginx-cert
  labels:
    app: nginx-cert
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
    nodePort: 30080
  - port: 443
    protocol: TCP
    name: https
    nodePort: 30443
  selector:
    app: nginx-cert
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-cert
  name: nginx-cert
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-cert
  template:
    metadata:
      labels:
        app: nginx-cert
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfmap
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
Create the configmap and secret for the nginx config and the TLS files, respectively:
❯ cat default.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html;

    server_name localhost;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files / =404;
    }
}
❯ kubectl create configmap nginxconfmap --from-file=default.conf -n cert
❯ kubectl create secret tls nginxsecret --key localhost.key --cert localhost.pem -n cert
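If you want to double-check that both volumes actually land in the container (optional; the pod name is taken from the listing below, and kubectl create secret tls stores the files under the keys tls.crt and tls.key):
❯ kubectl exec nginx-cert-76f7f8748f-q2nvl -n cert -- ls /etc/nginx/ssl /etc/nginx/conf.d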
The statuses of the deployment and service, and the event logs, are all OK. No failures:
❯ kubectl get all -n cert
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-cert-76f7f8748f-q2nvl   1/1     Running   0          21m

NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/nginx-cert   NodePort   10.110.115.36   <none>        80:30080/TCP,443:30443/TCP   21m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-cert   1/1     1            1           21m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-cert-76f7f8748f   1         1         1       21m
❯ kubectl get events -n cert
22m Normal Scheduled pod/nginx-cert-76f7f8748f-q2nvl Successfully assigned cert/nginx-cert-76f7f8748f-q2nvl to minikube
22m Normal Pulling pod/nginx-cert-76f7f8748f-q2nvl Pulling image "nginx"
22m Normal Pulled pod/nginx-cert-76f7f8748f-q2nvl Successfully pulled image "nginx" in 4.345505365s
22m Normal Created pod/nginx-cert-76f7f8748f-q2nvl Created container nginx
22m Normal Started pod/nginx-cert-76f7f8748f-q2nvl Started container nginx
22m Normal SuccessfulCreate replicaset/nginx-cert-76f7f8748f Created pod: nginx-cert-76f7f8748f-q2nvl
22m Normal ScalingReplicaSet deployment/nginx-cert Scaled up replica set nginx-cert-76f7f8748f to 1
And then SSL handshaking works against the minikube service IP:
❯ minikube service --url nginx-cert --namespace cert
http://192.168.64.2:30080
http://192.168.64.2:30443
❯ openssl s_client -CAfile rootCA.crt -connect 192.168.64.2:30443 -showcerts 2>/dev/null < /dev/null
CONNECTED(00000003)
---
Certificate chain
0 s:C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost
i:C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Jan 17 03:37:15 2022 GMT; NotAfter: Jan 17 03:37:15 2023 GMT
-----BEGIN CERTIFICATE-----
MIIDzzCCAregAwIBAgIUYMe6nRgsZwq9UPMKFgj9dt9z9FIwDQYJKoZIhvcNAQEL
BQAweDELMAkGA1UEBhMCS1IxEjAQBgNVBAgMCVJvb3RTdGF0ZTERMA8GA1UEBwwI
Um9vdENpdHkxEjAQBgNVBAoMCVJvb3QgSW5jLjEQMA4GA1UECwwHUm9vdCBDQTEc
MBoGA1UEAwwTU2VsZi1zaWduZWQgUm9vdCBDQTAeFw0yMjAxMTcwMzM3MTVaFw0y
MzAxMTcwMzM3MTVaMFkxCzAJBgNVBAYTAktSMQ4wDAYDVQQIDAVTZW91bDESMBAG
A1UEBwwJU2VvY2hvLWd1MRIwEAYDVQQKDAlMb2NhbGhvc3QxEjAQBgNVBAMMCWxv
Y2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALc9retjBorw
RKbuyC1SNx1U9L5LJPPbBkBh4kg98saQxtRX0Wqs5mgswWMZYL3E6yRl0gfwBkdq
t8GVQ49dgg0QO5MbG9ylfCLS9xR3WWjAgxaDJ0W96PyvTzmg295aqqHFKPSaG/nM
JyZgFJDuGoRRgwoWNqZ1pRCDLMIENDx4qgjOnQch529pM9ZRwFQSswKpn4BVkY00
/u8jIvax67kFOg70QGY16paGEg7YfSNle7BFZY0VJ8rIiBoqwRmPH6hbF/djxe5b
yzkI9eqts9bqw8eDLC28S36x62FxkdqkK8pI/rzWAKSV43TWML1zq4vM2bI+vp0k
a06GhSsS1bUCAwEAAaNwMG4wHwYDVR0jBBgwFoAUURHNpOE9zTgXgVYAGvLt94Ym
P+8wCQYDVR0TBAIwADALBgNVHQ8EBAMCBPAwFAYDVR0RBA0wC4IJbG9jYWxob3N0
MB0GA1UdDgQWBBSS1ZHHT6OHTomYIRsmhz6hMJLGnDANBgkqhkiG9w0BAQsFAAOC
AQEAWA23pCdAXtAbdSRy/p8XURCjUDdhkp3MYA+1gIDeGAQBKNipU/KEo5wO+aVk
AG6FryPZLOiwiP8nYAebUxOAqKG3fNbgT9t95BEGCip7Cxjp96KNYt73Kl/OTPjJ
KZUkHQ7MXN4vc5gmca8q+OqwCCQ/daMkzLabPQWNk3R/Hzo/mT42v8ht9/nVh1Ml
u3Dow5QPp8LESrJABLIRyRs0+Tfp+WodgekgDX5hnkkSk77+oXB49r2tZUeG/CVv
Fg8PuUNi+DWpdxX8fE/gIbSzSsamOf29+0sCIoJEPvk7lEVLt9ca0SoJ7rKn/ai4
HxwTiYo9pNcoLwhH3xdXjvbuGA==
-----END CERTIFICATE-----
---
Server certificate
subject=C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost
issuer=C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1620 bytes and written 390 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: EED06A09B8971ADD25F352BF55298096581A490020C88BB457AB9864B9844778
Session-ID-ctx:
Master-Key: 71C686180017B4DB5D681CCFC2C8741A7A70F7364572811AE548556A1DCAC078ABAF34B9F53885C6177C7024991B98FF
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 8b 7f 76 5a c3 4a 1f 40-43 8e 00 e7 ad 35 ae 24 ..vZ.J.#C....5.$
0010 - 5c 63 0b 0c 91 86 d0 74-ef 39 94 8a 07 fa 96 51 \c.....t.9.....Q
0020 - 58 cd 61 99 7d ae 47 87-7b 36 c1 22 89 fa 8e ca X.a.}.G.{6."....
0030 - 52 c2 04 6e 7a 9f 2d 3e-42 25 fc 1f 87 11 5f 02 R..nz.->B%...._.
0040 - 37 b3 26 d4 1f 10 97 a3-29 e8 d1 37 cd 9a a3 8e 7.&.....)..7....
0050 - 61 52 15 63 89 99 8e a8-95 58 a8 e0 12 03 c4 15 aR.c.....X......
0060 - 95 bf 1e b7 48 dc 4e fb-c4 8c 1a 17 eb 19 88 ca ....H.N.........
0070 - eb 16 b0 17 83 97 04 0d-79 ca d9 7d 80 5b 96 8d ........y..}.[..
0080 - d3 bf 6f 4f 55 6d 2f ce-0b b9 24 a9 a2 d0 5b 28 ..oOUm/...$...[(
0090 - 06 10 1d 72 52 a3 ef f1-5c e3 2a 35 83 93 a1 91 ...rR...\.*5....
00a0 - cb 94 6c 4f 3e f7 2e 8d-87 76 a5 46 29 6f 0e 5f ..lO>....v.F)o._
Start Time: 1643011123
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: yes
---
But it fails to connect from the Chrome browser or from curl; each request is redirected to the pod's listening port (30080 -> 80, 30443 -> 443):
# For convenience, ignore the root CA for now; the problem is not there.
❯ curl -k https://192.168.64.2:30443
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
❯ curl -kL https://192.168.64.2:30443
curl: (7) Failed to connect to 192.168.64.2 port 443: Connection refused
❯ curl http://192.168.64.2:30080
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
❯ curl -L http://192.168.64.2:30080
curl: (7) Failed to connect to 192.168.64.2 port 80: Connection refused
❯ kubectl logs nginx-cert-76f7f8748f-q2nvl -n cert
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/24 07:33:25 [notice] 1#1: using the "epoll" event method
2022/01/24 07:33:25 [notice] 1#1: nginx/1.21.5
2022/01/24 07:33:25 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/01/24 07:33:25 [notice] 1#1: OS: Linux 4.19.202
2022/01/24 07:33:25 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/24 07:33:25 [notice] 1#1: start worker processes
2022/01/24 07:33:25 [notice] 1#1: start worker process 24
2022/01/24 07:33:25 [notice] 1#1: start worker process 25
172.17.0.1 - - [24/Jan/2022:07:44:36 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03rM&\xF2\xDD\xA3\x04(\xB0\xB2\xBF\x1CTS`\xDC\x90\x86\xF1\xEC\xBD9\x9Cz1c4\x0B\x8F\x13\xC2" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:44:48 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03'Y\xECP\x15\xD1\xE6\x1C\xC4\xB1v\xC1\x97\xEE\x04\xEBu\xDE\xF9\x04\x95\xC2V\x14\xB5\x7F\x91\x86V\x8F\x05\x83 \xBFtL\xDB\xF6\xC2\xD8\xD4\x1E]\xAE4\xCA\x03xw\x92D&\x1E\x8D\x97c\xB3,\xFD\xCD\xF47\xC4:\xF8\x00>\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:45:05 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03;J\xA7\xD0\xC2\xC3\x1A\xF9LK\xC7\xA8l\xBD>*\x80A$\xA4\xFCw\x19\xE7(\xFAGc\xF6]\xF3I \xFF\x83\x84I\xC2\x8D\xD5}\xEA\x95\x8F\xDB\x8Cfq\xC6\xBA\xCF\xDDyn\xC6v\xBA\xCC\xDC\xCC\xCC/\xAF\xBC\xB2\x00>\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] "GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] "GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
172.17.0.1 - - [24/Jan/2022:08:00:24 +0000] "GET / HTTP/1.1" 400 255 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:01:46 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:01:50 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:03:04 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:03:07 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
Actually, at first the pod redirected to the requested ports, 30080 and 30443, but now it redirects to 80 and 443. I have no idea when that changed or what I changed.
I changed server_name in the nginx config from localhost to 192.168.64.2, but it makes no difference.

I completely recreated your configuration for minikube on Linux. Your Kubernetes configuration is fine. And I got the same response - 301 Moved Permanently.
After that, I changed these lines in the default.conf file:
location / {
    try_files $uri $uri/ =404;
}
And everything is working for me now (the nginx web page from the pod is reachable using curl and a browser).
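As an editorial note on why the redirect carried the container's ports: redirects that nginx generates itself are absolute by default and include the port nginx listens on (80/443 inside the pod), not the NodePort the client used. If nginx-issued redirects ever need to survive NodePort remapping, the real nginx directives absolute_redirect and port_in_redirect control this; a sketch:
server {
    # ... existing listen/root/ssl directives ...
    absolute_redirect off;    # emit relative Location headers
    # port_in_redirect off;   # or: absolute Location, but without nginx's own port
}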

Related

Upstream http and https protocol under one upstream

I have been trying to add two targets under one upstream, one of which is on HTTP and the other on HTTPS. I am not sure how to achieve that; I tried adding a target like https:10.32.9.123:443, but that didn't work.
It seems there is a limitation that the targets must either all be on HTTP or all on HTTPS. Is there a workaround for this? My Kong config file looks like below:
_format_version: "2.1"
_transform: true
services:
- name: test-server-public
  protocol: http
  host: test-endpoint-upstream
  port: 8000
  retries: 3
  connect_timeout: 5000
  routes:
  - name: test-route
    paths:
    - /test
upstreams:
- name: test-endpoint-upstream
  targets:
  - target: target-url:8080
    weight: 999
  - target: target-https-url:443
    weight: 1
  healthchecks:
    active:
      concurrency: 2
      http_path: /
      type: http
      healthy:
        interval: 0
        successes: 1
        http_statuses:
        - 200
        - 302
      unhealthy:
        http_failures: 3
        interval: 10
        tcp_failures: 3
        timeouts: 3
        http_statuses:
        - 429
        - 404
        - 500
        - 501
        - 502
        - 503
        - 504
        - 505
    passive:
      type: http
      healthy:
        successes: 1
        http_statuses:
        - 200
        - 201
        - 202
        - 203
        - 204
        - 205
        - 206
        - 207
        - 208
        - 226
        - 300
        - 301
        - 302
        - 303
        - 304
        - 305
        - 306
        - 307
        - 308
      unhealthy:
        http_failures: 1
        tcp_failures: 1
        timeouts: 1
        http_statuses:
        - 429
        - 500
        - 503
  slots: 1000
There's no workaround for that.
You should configure your webserver to redirect HTTP to HTTPS.
The http-to-https-redirect plugin will be helpful in your case.
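For the webserver-side redirect the answer mentions, a minimal nginx sketch (server name assumed) would be:
server {
    listen 80;
    server_name _;
    # Send all plain-HTTP traffic to the HTTPS listener
    return 301 https://$host$request_uri;
}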

k8s ingress HTTP response invalid for redirect

I came across a very interesting situation. I am running an Apache server inside a Docker container, serving a mediawiki installation.
When I run curl localhost:8081 -v, I get a valid response. Visiting the website in the browser also redirects me to the main page as expected:
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Sun, 01 Nov 2020 05:18:33 GMT
< Server: Apache/2.4.38 (Debian)
< X-Powered-By: PHP/7.3.24
< X-Content-Type-Options: nosniff
< Vary: Accept-Encoding,Cookie
< Expires: Sun, 01 Nov 2020 05:18:33 GMT
< Cache-Control: private, must-revalidate, max-age=0
< Last-Modified: Sun, 01 Nov 2020 05:18:33 GMT
< Location: http://wiki.10.0.0.20.xip.io/wiki/Main_Page
< Connection: close
< Content-Encoding: none
< Content-Length: 0
< X-Request-Id: 76816afe025ea1c1ced4d30d
< Content-Type: text/html; charset=utf-8
<
* Closing connection 0
When on the other hand I make the request through the nginx ingress, I get the following:
* Trying 10.0.0.20...
* TCP_NODELAY set
* Connected to wiki.10.0.0.20.xip.io (10.0.0.20) port 80 (#0)
> GET / HTTP/1.1
> Host: wiki.10.0.0.20.xip.io
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.19.1
< Date: Sun, 01 Nov 2020 05:25:49 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< X-Powered-By: PHP/7.3.24
< X-Content-Type-Options: nosniff
< Vary: Accept-Encoding,Cookie
< Expires: Sun, 01 Nov 2020 05:25:49 GMT
< Cache-Control: private, must-revalidate, max-age=0
< Last-Modified: Sun, 01 Nov 2020 05:25:49 GMT
< Location: http://wiki.10.0.0.20.xip.io/wiki/Main_Page
< Content-Encoding: none
< X-Request-Id: d6457fbb4f7cc3d0003206eb
<
* Connection #0 to host wiki.10.0.0.20.xip.io left intact
* Closing connection 0
As you can see, the Location header is the same in both responses. The only difference in my eyes is the Connection: keep-alive header, which Apache sets to close. Is this forbidden for 301 responses?
When I try to open the page in Chrome/Safari, the pod's server works correctly, but the ingress shows the ERR_EMPTY_RESPONSE error message without any further indication. Furthermore, passing any query param to the URL, like ?a, shows the start page correctly. This is most likely because any invalid URL is caught by mediawiki and redirected to the start page.
My ingress config is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: wiki.10.0.0.20.xip.io
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: mediawiki
            port:
              number: 80
The Kubernetes environment was created via minikube. I am using the local NAT network adapter from VirtualBox, but with a bridged network it is the same from another machine. You can see that the connection is established, as I see the 301. It also routes to mediawiki/PHP, but then the response becomes invalid on the way.
I deployed the service as follows:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki
spec:
  type: NodePort
  selector:
    app: mediawiki
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30090
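Since the service is a NodePort, the backend can also be tested with the ingress taken out of the path, to narrow down where the response breaks (a sketch; node IP assumed from the hostname above):
curl -v -H "Host: wiki.10.0.0.20.xip.io" http://10.0.0.20:30090/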
And the deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediawiki
  labels:
    app: mediawiki
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mediawiki
  template:
    metadata:
      labels:
        app: mediawiki
    spec:
      containers:
      - name: mediawiki
        image: mediawiki
        ports:
        - containerPort: 80
        volumeMounts:
        - name: localsettings
          mountPath: "/var/www/html/LocalSettings.php"
          readOnly: true
          subPath: LocalSettings.php
        - name: logo
          mountPath: "/var/www/html/resources/assets/wiki.png"
          readOnly: true
          subPath: logo.png
      volumes:
      - name: localsettings
        configMap:
          name: localsettings
      - name: logo
        configMap:
          name: logo

Can't upgrade websocket connection in Kubernetes using Nginx-ingress

I'm trying to connect to my Mosquitto broker over websockets, but I'm not able to because the connection doesn't upgrade. The Mosquitto broker exposes port 9001 to allow websocket connections, and it runs behind a Kubernetes cluster with nginx-ingress controllers.
$ kubectl get ingress mosquitto
NAME HOSTS ADDRESS PORTS AGE
mosquitto * 80 14m
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mosquitto ClusterIP 10.108.206.11 <none> 9001/TCP,1883/TCP 12m
Mosquitto.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mosquitto
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      imagePullSecrets:
      - name: abb-login
      containers:
      - name: mosquitto
        image: ***/mosquitto:k8s2
        imagePullPolicy: Always
        ports:
        - containerPort: 9001
          protocol: TCP
        - containerPort: 1883
          protocol: TCP
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  ports:
  - name: "9001"
    port: 9001
    targetPort: 9001
    protocol: TCP
  - name: "1883"
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
Mosquitto.conf:
allow_duplicate_messages false
connection_messages true
log_dest stdout stderr
log_timestamp true
log_type all
persistence false
listener 1883
allow_anonymous true
listener 9001
protocol websockets
allow_anonymous false
auth_plugin /usr/lib/mosquitto-auth-plugin/auth-plugin.so
auth_opt_backends http
auth_opt_http_ip 127.0.0.1
auth_opt_http_getuser_uri /api/mosquitto/users
auth_opt_http_superuser_uri /api/mosquitto/admins
auth_opt_http_aclcheck_uri /api/mosquitto/permissions
auth_opt_acl_cacheseconds 1
auth_opt_auth_cacheseconds 0
Ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mosquitto
  annotations:
    nginx.org/websocket-services: mosquitto
spec:
  rules:
  - http:
      paths:
      - path: /mosquitto-ws
        backend:
          serviceName: mosquitto
          servicePort: 80
Error from the client:
MqttException (0) - java.io.IOException: WebSocket Response header: Incorrect upgrade.
opc-ua-adapter_1 | at org.eclipse.paho.client.mqttv3.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:38)
opc-ua-adapter_1 | at org.eclipse.paho.client.mqttv3.internal.ClientComms$ConnectBG.run(ClientComms.java:715)
opc-ua-adapter_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
opc-ua-adapter_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
opc-ua-adapter_1 | at java.base/java.lang.Thread.run(Thread.java:834)
Kubernetes ingress-nginx pod logs:
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "GET /mosquitto-ws HTTP/1.1" 308 171 "-" "-" 218 0.000 [default-mosquitto-9001] - - - - 5db23bb19698ac94612ff6ebac265bed
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "\x88\x84\xDDi+\x5C\xECY\x1Bl" 400 157 "-" "-" 0 0.000 [] - - - - 2b8f177f0f62389ba7d918f9c36ee72e
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "GET /mosquitto-ws HTTP/1.1" 308 171 "-" "-" 218 0.000 [default-mosquitto-9001] - - - - c99fe7606530ae938297e227e34084c0
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "\x88\x84dB5aUr\x05Q" 400 157 "-" "-" 0 0.000 [] - - - - 375ec1ac17cc3e0f7595cf8c1cc752c3
Try increasing proxy-read-timeout and proxy-send-timeout on your mosquitto ingress definition.
See the NGinx Ingress doc:
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#websockets
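With the kubernetes/ingress-nginx controller those timeouts are set per-Ingress via annotations; a sketch (timeout values are examples):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mosquitto
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
Note the annotation prefix: nginx.org/* belongs to NGINX Inc.'s controller, while the linked doc is for kubernetes/ingress-nginx, whose annotations use the nginx.ingress.kubernetes.io/* prefix.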

Content length limitation in http requests to pod ip and port

Env:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
km12-01 Ready master 26d v1.13.0 10.42.140.154 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
km12-02 Ready master 26d v1.13.0 10.42.104.113 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
km12-03 Ready master 26d v1.13.0 10.42.177.142 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.6.2
prod-k8s-node002 Ready node 25d v1.13.0 10.42.78.21 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.3.2
prod-k8s-tmpnode005 Ready node 24d v1.13.0 10.42.177.75 <none> Ubuntu 16.04.5 LTS 4.17.0-041700-generic docker://17.3.2
calico v3.3.1
What happened:
I have a deployment, with 2 pods scheduled on prod-k8s-node002 and prod-k8s-tmpnode005. Just like this:
# kubectl -n gitlab-prod get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api-monkey-76489bd8c9-7gbkt 1/1 Running 0 35m 192.244.199.37 prod-k8s-node002 <none> <none>
api-monkey-76489bd8c9-w2zrs 1/1 Running 0 55m 192.244.124.240 prod-k8s-tmpnode005 <none> <none>
Now I curl each pod from a master node, say it is km12-01:
# # wait 0 ms, then respond with a JSON string whose 'data' property is '1' repeated 1000 times
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=1000'
* Trying 192.244.124.240...
* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=1000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 1011
< ETag: W/"3f3-oUOV2TWikka+Y8l16Cqo/Q"
< Date: Sat, 08 Dec 2018 20:12:26 GMT
< Connection: keep-alive
<
* Connection #0 to host 192.244.124.240 left intact
{"data":"1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"}
# curl -v '192.244.199.37:3000/health/test?ms=0&content=1&repeat=1000'
* Trying 192.244.199.37...
* Connected to 192.244.199.37 (192.244.199.37) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=1000 HTTP/1.1
> Host: 192.244.199.37:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 1011
< ETag: W/"3f3-oUOV2TWikka+Y8l16Cqo/Q"
< Date: Sat, 08 Dec 2018 20:15:16 GMT
< Connection: keep-alive
<
* Connection #0 to host 192.244.199.37 left intact
{"data":"1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"}
Good, both work.
So if I make a longer response body (2000 bytes), what will happen?
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=2000'
* Trying 192.244.124.240...
* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=2000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Unfortunately, the connection hung and was then reset by the peer after a couple of minutes. But it works from the host of the requested pod.
Curl a pod from its host:
# curl -v '192.244.124.240:3000/health/test?ms=0&content=1&repeat=2000'
* Trying 192.244.124.240...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 192.244.124.240 (192.244.124.240) port 3000 (#0)
> GET /health/test?ms=0&content=1&repeat=2000 HTTP/1.1
> Host: 192.244.124.240:3000
> User-Agent: curl/7.47.0
> Accept: */*
>
{"data":"11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"}< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Content-Length: 2011
< ETag: W/"7db-ViIL+fXpsfh/YwmlqDHSsQ"
< Date: Sat, 08 Dec 2018 20:22:55 GMT
< Connection: keep-alive
<
{ [2011 bytes data]
100 2011 100 2011 0 0 440k 0 --:--:-- --:--:-- --:--:-- 490k
* Connection #0 to host 192.244.124.240 left intact
The other one is in the same situation.
Summary:
I tried many times and found an interesting and strange thing: if I request a pod from any other node (except the pod's own host), the response body cannot be longer than 1140 bytes. Otherwise, the connection hangs.
My problem:
How does this happen? And how can I get past this limitation?
This is the kubeadm-1.12.0 initialization file:
# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
controlPlaneEndpoint: 10.42.79.210:6443
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  podSubnet: 192.244.0.0/16
  serviceSubnet: 192.96.0.0/16
apiServerCertSANs:
- 10.42.140.154
- 10.42.104.113
- 10.42.177.142
- km12-01
- km12-02
- km12-03
- 127.0.0.1
- localhost
- 10.42.79.210
etcd:
  external:
    endpoints:
    - https://10.42.140.154:2379
    - https://10.42.104.113:2379
    - https://10.42.177.142:2379
    caFile: /etc/kubernetes/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
    dataDir: /var/lib/etcd
clusterDNS:
- 192.96.0.2
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
We use an external etcd cluster, and kube-proxy uses IPVS rather than iptables. We did an upgrade from 1.12.0 to 1.13.0.
# kubeadm upgrade apply 1.13.0
You can use this to test; these are the issues that came up for us after the upgrade. @Adam Otto
There are some IPVS errors in the kube-proxy logs (screenshot not reproduced here).
Seems to be an MTU issue.
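If it is MTU, a quick check (a sketch; the ConfigMap key depends on how Calico was installed) is to compare the node NIC's MTU with Calico's configured MTU, remembering that IPIP encapsulation adds 20 bytes of overhead per packet:
# ip link show | grep mtu
# kubectl -n kube-system get configmap calico-config -o yaml | grep -i mtu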

Nginx doesn't block a bot by user-agent

I'm trying to ban an annoying bot by user-agent. I put this into the server section of my nginx config:
server {
    listen 80 default_server;
    ....
    if ($http_user_agent ~* (AhrefsBot)) {
        return 444;
    }
Checking with curl:
[root@vm85559 site_avaliable]# curl -I -H 'User-agent: Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)' localhost/
curl: (52) Empty reply from server
So I check /var/log/nginx/access.log and see that some connections get 444, but other connections get 200!
51.255.65.78 - - [25/Jun/2017:15:47:36 +0300 - -] "GET /product/kovriki-avtomobilnie/volkswagen/?PAGEN_1=10 HTTP/1.1" 444 0 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498394856.155
217.182.132.60 - - [25/Jun/2017:15:47:50 +0300 - 2.301] "GET /product/bryzgoviki/toyota/ HTTP/1.1" 200 14500 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498394870.955
How is it possible?
OK, got it!
I added $server_name and $server_addr to the nginx log format, and saw that the cunning bot connects by IP, without a server name:
51.255.65.40 - _ *myip* - [25/Jun/2017:16:22:27 +0300 - 2.449] "GET /product/soyuz_96_2/mitsubishi/l200/ HTTP/1.1" 200 9974 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498396947.308
So I added this, and the bot can't connect anymore:
server {
    listen *myip*:80;
    server_name _;
    return 403;
}
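To confirm, repeat the curl check against the bare IP (keeping the *myip* placeholder from above); it should now return 403 instead of a page:
curl -I -H 'User-agent: Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)' http://*myip*/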
