The Airflow instance created by the official Helm chart does not redirect to HTTPS. It is running behind a LoadBalancer with an ingress controller service.
Here is my Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/scheme: internet-facing
    ingress.kubernetes.io/healthcheck-protocol: HTTP
    ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  name: airflow-webser
  namespace: airflow
spec:
  # secretName: tls-secret
  rules:
  - host: airflow.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: airflow-webserver
          servicePort: airflow-ui
Here is my Airflow webserver Service, which I got with kubectl -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"meta.helm.sh/release-name":"airflow","meta.helm.sh/release-namespace":"airflow"},"creationTimestamp":"2022-07-07T01:37:40Z","labels":{"app.kubernetes.io/managed-by":"Helm","chart":"airflow-1.6.0","component":"webserver","heritage":"Helm","release":"airflow","tier":"airflow"},"name":"airflow-webserver","namespace":"airflow","resourceVersion":"11826...","uid":"2ee4946c"},"spec":{"clusterIP":"","clusterIPs":["172....."],"externalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"airflow-ui","nodePort":32...,"port":80,"protocol":"TCP","targetPort":8080}],"selector":{"component":"webserver","release":"airflow","tier":"airflow"},"sessionAffinity":"None","type":"NodePort"},"status":{"loadBalancer":{}}}
    meta.helm.sh/release-name: airflow
    meta.helm.sh/release-namespace: airflow
  creationTimestamp: "2022-06-07T04:15:13Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    chart: airflow-1.6.0
    component: webserver
    heritage: Helm
    release: airflow
    tier: airflow
  name: airflow-webserver
  namespace: airflow
  resourceVersion: "12000742"
  uid: 9fe3e104-0c00-4cab-b701
spec:
  clusterIP: 172.10.....
  clusterIPs:
  - 172.10.....
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: airflow-ui
    nodePort: 32...
    port: 443
    protocol: TCP
    targetPort: airflow-ui
  selector:
    component: webserver
    release: airflow
    tier: airflow
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Another front-end app I deployed works fine listening on HTTPS with a certificate from AWS Certificate Manager. I tried the same thing for Airflow, but it didn't work; it redirects to HTTP.
❯ curl https://example.com/
* Trying IP ADDRESS:443...
* Connected to airflow.example.com (IP) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=example.com
* start date: Jul 6 00:00:00 2022 GMT
* expire date: Aug 4 23:59:59 2023 GMT
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: airflow.example.com
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 FOUND
< Content-Type: text/html; charset=utf-8
< Date: Fri, 08 Jul 2022 00:57:57 GMT
< Location: http://airflow.example.com/home
< Server: nginx/1.19.1
< Set-Cookie: session=7916bd57-21b4-4c39-ac89-c6d56c924e2a.SWJ9WmiwN849ZPoilwH5UWiXhbg; Expires=Sun, 07-Aug-2022 00:57:57 GMT; HttpOnly; Path=/; SameSite=Lax
< X-Frame-Options: DENY
< X-Robots-Tag: noindex, nofollow
< Content-Length: 217
< Connection: keep-alive
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
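Since the 302 above appears to come from the Airflow webserver itself (it sets the Airflow session cookie), my understanding is that whatever terminates TLS has to do two things: redirect plain HTTP to HTTPS, and pass the original scheme through so the app builds https:// URLs. In plain Nginx terms I would expect behaviour roughly like the sketch below (not my actual ingress-controller config; host, port and certificate paths are placeholders):

server {
    listen 80;
    server_name airflow.example.com;
    return 301 https://$host$request_uri;              # force HTTP -> HTTPS
}

server {
    listen 443 ssl;
    server_name airflow.example.com;
    ssl_certificate     /etc/ssl/placeholder.crt;      # placeholder
    ssl_certificate_key /etc/ssl/placeholder.key;      # placeholder

    location / {
        proxy_pass http://airflow-webserver:8080;      # placeholder for the webserver service/port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;    # so app-generated redirects keep https
    }
}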
I would be glad if someone could help me out.
Related
I'm testing proxying an HTTPS request from a server running Nginx (which I will call client-side) to another (server-side) that will proxy the request to a local Alertmanager. The server-side is TLS with a self-signed certificate. When I set proxy_ssl_verify to on on the client-side with the self-signed certificate in proxy_ssl_trusted_certificate, the client-side Nginx returns 503 Unavailable without logging any error.
Any help understanding why the client-side Nginx closes the connection silently and returns 503 would be much appreciated!
Server-side Nginx config
nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/sites-enabled/*.conf;
}
alertmanager.conf:
upstream alertmanager {
    server 127.0.0.1:9193;
}

server {
    listen 127.0.0.1:9093 ssl;
    listen 192.168.128.2:9093 ssl;
    server_name 172.29.49.202;

    include /etc/nginx/conf.d/common.conf;
    include /etc/nginx/conf.d/ssl.conf;

    ssl_certificate /etc/ssl/private/alertmanager-cert.pem;
    ssl_certificate_key /etc/ssl/private/alertmanager-key.pem;

    location / {
        proxy_pass http://alertmanager;
        include /etc/nginx/conf.d/common_location.conf;

        auth_basic alertmanager;
        auth_basic_user_file /etc/nginx/conf.d/alertmanager/.htpasswd;

        add_header 'Access-Control-Allow-Headers' 'Accept, Authorization, Content-Type, Origin' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header X-XSS-Protection "1; mode=block";

        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' $http_origin always;
            add_header 'Access-Control-Allow-Headers' 'Accept, Authorization, Content-Type, Origin' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            add_header X-XSS-Protection "1; mode=block";
            return 200;
        }
    }
}
ssl.conf:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/private/dhparams.pem;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
Nginx version on the server-side is 1.21.6.
Client-side Nginx config
nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
worker_rlimit_nofile 99999;

events {
    worker_connections 32768;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    limit_req_zone $server_name zone=one:10m rate=1800r/s;
    keepalive_timeout 86400s;
    keepalive_requests 150000;
    client_header_timeout 86400s;
    client_max_body_size 50M;
    include /etc/nginx/conf.d/*.conf;
}
default.conf:
upstream alertmanager {
    server 172.29.49.202:9093;
}

server {
    listen 8080 ssl;
    listen [::]:8080 ssl;
    server_name management;

    ssl_certificate /etc/ssl/nginx.crt;
    ssl_certificate_key /etc/ssl/nginx.key;
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location /alertmanager/ {
        limit_rate 1024k;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass_header Server;
        add_header X-XSS-Protection "1; mode=block";
        # Allow backend with keepalive connections
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Authorization "Basic <redacted>";
        proxy_pass https://alertmanager/;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/rbbn/ids/nginx/appsvc/alertmanager-cert.pem;
        proxy_next_upstream error timeout http_500;
    }
}
Note that alertmanager-cert.pem on both servers is the same self-signed cert. The Nginx version on the client side is 1.22.1.
Validating connection with the certificate
I can see that validating the upstream connection with the self-signed cert works fine.
# curl -v -H "Authorization: Basic <redacted>" https://172.29.49.202:9093 --cacert alertmanager-cert.pem
* Rebuilt URL to: https://172.29.49.202:9093/
* Trying 172.29.49.202...
* TCP_NODELAY set
* Connected to 172.29.49.202 (172.29.49.202) port 9093 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/rbbn/ids/nginx/appsvc/alertmanager-cert.pem
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=blabla
* start date: Jan 11 23:46:27 2023 GMT
* expire date: Jan 11 23:46:27 2024 GMT
* subjectAltName: host "172.29.49.202" matched cert's IP address!
* issuer: CN=blabla
* SSL certificate verify ok.
* TLSv1.3 (OUT), TLS app data, [no content] (0):
> GET / HTTP/1.1
> Host: 172.29.49.202:9093
> User-Agent: curl/7.61.1
> Accept: */*
> Authorization: Basic <redacted>
>
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/1.1 200 OK
...
Running the command through Nginx with proxy_ssl_verify
When trying to do it through Nginx, I get a 503 response though.
# curl -v -k https://127.0.0.1:8080/alertmanager/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: ...
* start date: Jan 11 01:52:36 2023 GMT
* expire date: Dec 18 01:52:36 2122 GMT
* issuer: ...
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* TLSv1.3 (OUT), TLS app data, [no content] (0):
> GET /alertmanager/ HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.61.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: nginx/1.22.1
< Date: Thu, 12 Jan 2023 19:18:50 GMT
< Content-Type: application/json
< Content-Length: 32
< Connection: keep-alive
< ETag: "63bf5851-20"
< Retry-After: 1
<
{"error":"Service Unavailable"}
I tried capturing traffic on the server-side and I see that the TLS handshake gets done, but then the client-side Nginx closes the connection.
(packet capture showing the TLS handshake)
I see this error in the client-side Nginx's logs, but nothing on the server-side.
[info] 17#17: *272 client closed connection while waiting for request, client: 172.29.49.112, server: 192.168.128.2:9093
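One thing I have not ruled out: since proxy_pass points at the upstream group, I believe Nginx verifies the upstream certificate against the name alertmanager rather than the IP in the cert's SAN, which would explain a handshake that completes and is then dropped. The directives below are what I understand would pin the verified name; a sketch I have not yet confirmed fixes this:

    location /alertmanager/ {
        # ... existing proxy settings ...
        proxy_pass https://alertmanager/;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/rbbn/ids/nginx/appsvc/alertmanager-cert.pem;

        # assumed fix, untested: verify against the name/IP that is actually in the certificate
        proxy_ssl_name 172.29.49.202;
        proxy_ssl_server_name on;
    }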
Testing without proxy_ssl_verify
When proxy_ssl_verify is disabled, the connection succeeds.
# curl -v -k https://127.0.0.1:8080/alertmanager/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: ...
* start date: Jan 11 01:52:36 2023 GMT
* expire date: Dec 18 01:52:36 2122 GMT
* issuer: ...
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* TLSv1.3 (OUT), TLS app data, [no content] (0):
> GET /alertmanager/ HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.61.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/1.1 200 OK
< Date: Thu, 12 Jan 2023 19:33:21 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 1381
< Connection: keep-alive
< Server: nginx/1.21.6
< Accept-Ranges: bytes
< Cache-Control: no-cache, no-store, must-revalidate
< Expires: 0
< Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT
< Pragma: no-cache
< Access-Control-Allow-Headers: Accept, Authorization, Content-Type, Origin
< Access-Control-Allow-Methods: GET, POST, OPTIONS
< Access-Control-Allow-Credentials: true
< X-XSS-Protection: 1; mode=block
< X-XSS-Protection: 1; mode=block
Description of the problem:
I have a site which distributes the configurations:
https://cliconf.aa.bb.cc/cgi-bin/get-config.cgi
It returns the config JSON and some HTTP headers for CORS:
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
access-control-allow-origin: *
access-control-allow-headers: *
I have the app site:
https://web.aa.bb.cc/
These two sites are in the same domain zone, but the app doesn't read the config and returns the error "strict-origin-when-cross-origin".
The curl call
curl -v -H "Origin: https://web.aa.bb.cc/" --url "https://cliconf.aa.bb.cc/cgi-bin/get-config.cgi"
returns 200:
* Trying x.x.x.x...
* Connected to x.x.x.x (x.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/ssl/certs/xxxxxx.crt
* CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.x.x.x.x
* start date: Jun 15 00:00:00 2022 GMT
* expire date: Jul 14 23:59:59 2023 GMT
* subjectAltName: host "x.x.x.x" matched cert's "*.x.x.x.x"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0x5573120e5e80)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET /cgi-bin/config?environment=aws_dev HTTP/2
> Host: x.x.x.x:443
> user-agent: curl/7.81.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 200
< date: Thu, 12 Jan 2023 14:55:55 GMT
< content-type: application/json
< server: nginx/1.18.0 (Ubuntu)
< vary: Origin
< vary: Access-Control-Request-Method
< vary: Access-Control-Request-Headers
< access-control-allow-headers: *
< access-control-allow-origin: *
< access-control-max-age: 3600
< access-control-expose-headers: Content-Length
<
{"logcollector": "https://x.x.x.x/gelf", "linkFrontend": "https://x.x.x.x", x.x.x.x, "name": "dev", "linkAdminApi": "https://x.x.x.x", "maintenance": false, "manifest_id": "x.x.x.x", "url_appsite": "https://x.x.x.x", "linkSupport": "https://x.x.x.x", "websocketPathFrontend": "wss://x.x.x.x/wsapi" }
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection #0 to host x.x.x.x left intact
I found the article https://developer.chrome.com/blog/referrer-policy-new-chrome-default/, but it is still not clear to me which headers I have to set, and on which side, in order for this to work.
All web servers are Nginx.
Here is the Nginx conf file from cliconf.aa.bb.cc:
server {
    listen *:80;

    add_header Access-Control-Allow-Headers *;
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Max-Age 3600;
    add_header Access-Control-Expose-Headers Content-Length;

    location / {
        try_files $uri $uri/ =404;
    }

    include fcgiwrap.conf;

    access_log /var/log/nginx/cliconf.aa.bb.cc-access.log;
    error_log /var/log/nginx/cliconf.aa.bb.cc-error.log;
}
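If I read the Chrome article right, the referrer policy is controlled by the page that makes the request, so a Referrer-Policy header (or the meta tag below) would belong to web.aa.bb.cc rather than cliconf, while the Access-Control-* headers stay on cliconf. A sketch of what I mean, untested; note that without the always flag Nginx skips add_header on 4xx/5xx responses:

# on web.aa.bb.cc (the app site) - assumed, untested
server {
    listen 443 ssl;
    server_name web.aa.bb.cc;
    # ... existing certificate and location config ...

    add_header Referrer-Policy "no-referrer-when-downgrade" always;
}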
I included this in the main index.html:
<meta name="referrer" content="unsafe-url" />
but it is still not working.
I have Nginx Open Source running on AKS. Everything was good, but it is unable to serve static content like index.html or favicon.ico.
When I open http:// it does not serve index.html by default (I get a 404), and if I try to open any static content I also get a 404 error.
The Nginx configuration was passed as a ConfigMap, and below is the part of the config that deals with serving static content.
server {
    listen 80;
    server_name localhost;
    root /opt/abc/html;   # also tried root /opt/abc/html/

    location / {
        root /opt/abc/html;   # also tried root /opt/abc/html/
        index index.html;
        try_files $uri $uri/ /index.html?$args;
        ...
        proxy_pass http://tomcat;
    }
}
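For reference, the separation I am trying to achieve is roughly the sketch below, with the static root handled by one location and only the dynamic paths proxied to Tomcat (the /api/ prefix is just a placeholder; my real config has more in the proxied block):

server {
    listen 80;
    server_name localhost;
    root /opt/abc/html;

    # static content (index.html, favicon.ico, ...) served from the root above
    location / {
        index index.html;
        try_files $uri $uri/ /index.html?$args;
    }

    # only the dynamic paths go to Tomcat; "/api/" is a placeholder prefix
    location /api/ {
        proxy_pass http://tomcat;
    }
}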
Setup:
Kubernetes on AKS
Nginx Open Source [no ingress]
configMaps to mount config.d
the static content (/opt/abc/html) was copied into the pod with the kubectl cp command. [will this work?]
ref: https://github.com/RammusXu/toolkit/tree/master/k8s/echoserver
Here's an example of mounting nginx.conf from a ConfigMap.
Make sure you run kubectl rollout restart deployment echoserver after updating the ConfigMap. The Pod only picks up the ConfigMap when it is created; it doesn't sync or auto-update it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      volumes:
      - name: config
        configMap:
          name: nginx-config
      containers:
      - name: echoserver
        # image: gcr.io/google_containers/echoserver:1.10
        image: openresty/openresty:1.15.8.2-1-alpine
        ports:
        - containerPort: 8080
          name: http
        # nginx.conf override
        volumeMounts:
        - name: config
          subPath: nginx.conf
          # mountPath: /etc/nginx/nginx.conf
          mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: echoserver
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: default
data:
  nginx.conf: |-
    events {
      worker_connections 1024;
    }
    env HOSTNAME;
    env NODE_NAME;
    env POD_NAME;
    env POD_NAMESPACE;
    env POD_IP;
    http {
      default_type 'text/plain';
      # maximum allowed size of the client request body. By default this is 1m.
      # Request with bigger bodies nginx will return error code 413.
      # http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
      client_max_body_size 10m;
      # https://blog.percy.io/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340
      keepalive_timeout 650;
      keepalive_requests 10000;
      # GZIP support
      gzip on;
      gzip_min_length 128;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_types text/css
                 text/plain
                 text/javascript
                 application/javascript
                 application/json
                 application/x-javascript
                 application/xml
                 application/xml+rss
                 application/xhtml+xml
                 application/x-font-ttf
                 application/x-font-opentype
                 application/vnd.ms-fontobject
                 image/svg+xml
                 image/x-icon
                 application/rss+xml
                 application/atom_xml
                 application/vnd.apple.mpegURL
                 application/x-mpegurl
                 vnd.apple.mpegURL
                 application/dash+xml;
      init_by_lua_block {
        local template = require("template")
        -- template syntax documented here:
        -- https://github.com/bungle/lua-resty-template/blob/master/README.md
        tmpl = template.compile([[
          Hostname: {{os.getenv("HOSTNAME") or "N/A"}}
          Pod Information:
          {% if os.getenv("POD_NAME") then %}
            node name: {{os.getenv("NODE_NAME") or "N/A"}}
            pod name: {{os.getenv("POD_NAME") or "N/A"}}
            pod namespace: {{os.getenv("POD_NAMESPACE") or "N/A"}}
            pod IP: {{os.getenv("POD_IP") or "N/A"}}
          {% else %}
            -no pod information available-
          {% end %}
          Server values:
          server_version=nginx: {{ngx.var.nginx_version}} - lua: {{ngx.config.ngx_lua_version}}
          Request Information:
          client_address={{ngx.var.remote_addr}}
          method={{ngx.req.get_method()}}
          real path={{ngx.var.request_uri}}
          query={{ngx.var.query_string or ""}}
          request_version={{ngx.req.http_version()}}
          request_scheme={{ngx.var.scheme}}
          request_uri={{ngx.var.scheme.."://"..ngx.var.host..":"..ngx.var.server_port..ngx.var.request_uri}}
          Request Headers:
          {% for i, key in ipairs(keys) do %}
            {{key}}={{headers[key]}}
          {% end %}
          Request Body:
          {{ngx.var.request_body or " -no body in request-"}}
        ]])
      }
      server {
        # please check the benefits of reuseport https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1
        # basically instructs to create an individual listening socket for each worker process (using the SO_REUSEPORT
        # socket option), allowing a kernel to distribute incoming connections between worker processes.
        listen 8080 default_server reuseport;
        listen 8443 default_server ssl http2 reuseport;
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/privateKey.key;
        # Replace '_' with your hostname.
        server_name _;
        location / {
          lua_need_request_body on;
          content_by_lua_block {
            ngx.header["Server"] = "echoserver"
            local headers = ngx.req.get_headers()
            local keys = {}
            for key, val in pairs(headers) do
              table.insert(keys, key)
            end
            table.sort(keys)
            ngx.say(tmpl({os=os, ngx=ngx, keys=keys, headers=headers}))
          }
        }
      }
    }
I am trying to get a reverse proxy working on Kubernetes using Nginx and a .NET Core API.
When I request http://localhost:9000/api/message I want something like the following to happen:
[Request] --> [nginx](localhost:9000) --> [.net API](internal port 9001)
but what appears to be happening is:
[Request] --> [nginx](localhost:9000)!
This fails because /usr/share/nginx/html/api/message is not found.
Clearly Nginx is failing to route the request to the upstream servers. This works correctly when I run the same config under docker-compose, but it fails here in Kubernetes (local, in Docker).
I am using the following configmap for nginx:
error_log /dev/stdout info;

events {
    worker_connections 2048;
}

http {
    access_log /dev/stdout;

    upstream web_tier {
        server webapi:9001;
    }

    server {
        listen 80;
        access_log /dev/stdout;

        location / {
            proxy_pass http://web_tier;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /nginx_status {
            stub_status on;
            access_log off;
            allow all;
        }
    }
}
The load-balancer yaml is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: load-balancer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: load-balancer
    spec:
      containers:
      - args:
        - nginx
        - -g
        - daemon off;
        env:
        - name: NGINX_HOST
          value: example.com
        - name: NGINX_PORT
          value: "80"
        image: nginx:1.15.9
        name: iac-load-balancer
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/nginx
          readOnly: true
          name: vol-config
        - mountPath: /tmp/share/nginx/html
          readOnly: true
          name: vol-html
      volumes:
      - name: vol-config
        configMap:
          name: load-balancer-configmap
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: vol-html
        configMap:
          name: load-balancer-configmap
          items:
          - key: index.html
            path: index.html
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9000
    targetPort: 80
  selector:
    app: load-balancer
status:
  loadBalancer: {}
Finally the error messages are:
2019/04/10 18:47:26 [error] 7#7: *1 open() "/usr/share/nginx/html/api/message" failed (2: No such file or directory), client: 192.168.65.3, server: localhost, request: "GET /api/message HTTP/1.1", host: "localhost:9000",
192.168.65.3 - - [10/Apr/2019:18:47:26 +0000] "GET /api/message HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "-",
It seems like Nginx is either not reading the config correctly for some reason, or it is failing to communicate with the webapi servers and is defaulting back to serving static local content (nothing in the log indicates a comms issue though).
Edit 1: I should have included that /nginx_status also does not route correctly and fails with the same "/usr/share/nginx/html/nginx_status" not found error.
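For what it's worth, the 404, the "server: localhost", and the /usr/share/nginx/html root in the error all match the stock default.conf that ships with the nginx image (roughly the snippet below, abridged from memory), which makes me think my mounted config is not the one actually being loaded:

# default server block shipped with the nginx image (abridged, from memory)
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}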
What I understood here is that you are requesting an API, which is giving a 404:
http://localhost:9000/api/message
I solved this issue by creating the backend Service as a NodePort, and I am accessing the API from my Angular app.
Here is my configure.conf file, which replaces the original Nginx configuration file:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}

server {
    listen 5555;

    location / {
        proxy_pass http://login:5555;
    }
}

server {
    listen 5554;

    location / {
        proxy_pass http://dashboard:5554;
    }
}
Here I have routed my external traffic coming in on ports 5554/5555 to the services [selector-Name].
Here login and dashboard are my Services with Type NodePort.
Here is my Dockerfile:
FROM nginx:1.11-alpine
COPY configure.conf /etc/nginx/conf.d/default.conf
COPY dockerpoc /usr/share/nginx/html
EXPOSE 80
EXPOSE 5555
EXPOSE 5554
CMD ["nginx", "-g", "daemon off;"]
Here I kept my frontend Service's Type as LoadBalancer, which exposes a public endpoint, and
I am calling my backend API from the frontend as:
http://loadbalancer-endpoint:5555/login
Hope this will help you.
Can you share how you created the ConfigMap? Verify that the ConfigMap has a data entry named nginx.conf. It might be related to the readOnly flag; maybe you can also try removing it, or change the mount path to /etc/nginx/ as stated in the Docker image documentation.
I installed GitLab with the official Docker container:
docker run -d -p 8002:80 -v /mnt/gitlab/etc/gitlab:/etc/gitlab -v /mnt/gitlab/var/opt/gitlab:/var/opt/gitlab -v /mnt/gitlab/var/log/gitlab:/var/log/gitlab gitlab/gitlab-ce
I'm using Nginx as a reverse proxy:
upstream gitlab {
    server localhost:8002;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    keepalive_timeout 70;

    ssl_certificate /etc/letsencrypt/live/git.cedware.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.cedware.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    server_name git.cedware.com;
    client_max_body_size 300M;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:8002/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwared-Ssl off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This all works totally fine, until I add this line to gitlab.rb:
external_url 'https://git.cedware.com';
After restarting the container, Nginx can't reach GitLab any more. Can someone tell me what's wrong with my setup?
Edit:
This is the output of curl -v https://git.cedware.com:
* Rebuilt URL to: https://git.cedware.com/
* Trying 37.120.177.116...
* Connected to git.cedware.com (37.120.177.116) port 443 (#0)
* found 175 certificates in /etc/ssl/certs/ca-certificates.crt
* found 700 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: git.cedware.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=git.cedware.com
* start date: Wed, 04 Jan 2017 16:58:00 GMT
* expire date: Tue, 04 Apr 2017 16:58:00 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: git.cedware.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.10.0 (Ubuntu)
< Date: Thu, 05 Jan 2017 08:45:52 GMT
< Content-Type: text/html
< Content-Length: 182
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.10.0 (Ubuntu)</center>
</body>
</html>
* Connection #0 to host git.cedware.com left intact
And this is the content of the nginx error.log:
2017/01/05 09:47:43 [error] 26258#26258: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 217.7.247.238, server: git.cedware.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8002/", host: "git.cedware.com"
2017/01/05 09:47:43 [error] 26258#26258: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 217.7.247.238, server: git.cedware.com, request: "GET / HTTP/1.1", upstream: "http://[::1]:8002/", host: "git.cedware.com"
2017/01/05 09:47:43 [error] 26258#26258: *1 no live upstreams while connecting to upstream, client: 217.7.247.238, server: git.cedware.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://localhost/favicon.ico", host: "git.cedware.com", referrer: "https://git.cedware.com/"
As per the Nginx error shown in the log, the upstream is not responding. This is not an Nginx error.
Most likely your container is either down or stuck in a restart loop.
Use docker ps to see the container status. Then use docker logs <containername> to see any errors it generates.
It is possible that GitLab doesn't like your gitlab.rb modification. The log should tell you more.
You should expose port 443 of the container, since you are using HTTPS for GitLab.
Also, the proxy target in your host system's Nginx config should then be https://localhost:some_443_port/.
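Alternatively, if you keep terminating TLS on the host and leave the container on plain HTTP, the reverse-proxy block usually also has to tell GitLab the original scheme, roughly like the sketch below (you would additionally need to stop the bundled Nginx from expecting HTTPS, which is a gitlab.rb setting not shown here):

location / {
    proxy_http_version 1.1;
    proxy_pass http://localhost:8002/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;            # TLS is terminated by this host Nginx
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}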