I am trying to implement a gRPC service on GKE (v1.11.2-gke.18) with mutual TLS auth.
When not enforcing client auth, the HTTP2 health check that GKE automatically creates responds, and everything connects without issue.
When I turn on mutual auth, the health check fails - presumably because it cannot complete a connection, since it lacks a client certificate and key.
As always, the documentation is light and conflicting. I require a solution that is fully programmatic (i.e. no console tweaking), but I have not been able to find one, other than manually changing the health check to TCP.
From what I can see, I will need to either:
implement a custom mTLS health check that will prevent GKE from automatically creating an HTTP2 check;
find an alternative way to do SSL termination at the container that doesn't use the proprietary service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}' annotation;
find some way to provide the health check with the credentials it needs; or
alter my Go implementation to somehow serve a health check without requiring mTLS, while enforcing mTLS on all other endpoints (a sketch of this option follows below).
Or perhaps there is something else that I have not considered? The config further below works perfectly for REST and gRPC with TLS, but breaks with mTLS.
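To illustrate the last option, here is a minimal, untested sketch of what I have in mind: request a client certificate at the TLS layer without requiring one, then enforce its presence in an interceptor for everything except the standard grpc.health.v1 health service. The names and wiring here are my assumptions, not working code from my service:

package server

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"strings"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/peer"
	"google.golang.org/grpc/status"
)

// newServer builds a gRPC server that requests, but does not require, a
// client certificate, so a health checker without credentials can still
// complete the TLS handshake. The interceptor then rejects any call
// outside the health service that arrived without a verified client cert.
func newServer(serverCert tls.Certificate, clientCAs *x509.CertPool) *grpc.Server {
	tlsCfg := &tls.Config{
		Certificates: []tls.Certificate{serverCert},
		ClientCAs:    clientCAs,
		ClientAuth:   tls.VerifyClientCertIfGiven, // verify if presented, but allow none
	}
	requireClientCert := func(ctx context.Context, fullMethod string) error {
		if strings.HasPrefix(fullMethod, "/grpc.health.v1.Health/") {
			return nil // health checks are exempt from mTLS
		}
		p, ok := peer.FromContext(ctx)
		if !ok {
			return status.Error(codes.Unauthenticated, "no peer info")
		}
		tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
		if !ok || len(tlsInfo.State.VerifiedChains) == 0 {
			return status.Error(codes.Unauthenticated, "client certificate required")
		}
		return nil
	}
	return grpc.NewServer(
		grpc.Creds(credentials.NewTLS(tlsCfg)),
		grpc.UnaryInterceptor(func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
			if err := requireClientCert(ctx, info.FullMethod); err != nil {
				return nil, err
			}
			return handler(ctx, req)
		}),
	)
}

Whether the GKE HTTP2 checker would then accept the response from a gRPC server is a separate question, which is part of why I am asking.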
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-srv
  labels:
    type: grpc-srv
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 9999
    protocol: TCP
    targetPort: 9999
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: io-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "grpc-ingress"
    kubernetes.io/ingress.allow-http: "true"
spec:
  tls:
  - secretName: io-grpc
  - secretName: io-api
  rules:
  - host: grpc.xxx.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: grpc-srv
          servicePort: 9999
  - host: rest.xxx.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: grpc-srv
          servicePort: 8080
It seems that there is currently no way to achieve this using the GKE L7 ingress. But I have been successful deploying an NGINX ingress controller; Google has a decent tutorial on how to deploy one here.
This installs an L4 TCP load balancer with no health checks on the services, leaving NGINX to handle the L7 termination and routing. This gives you a lot more flexibility, but the devil is in the detail, and the detail isn't easy to come by; most of what I found was learned trawling through GitHub issues.
What I have managed to achieve is for NGINX to handle the TLS termination, and still pass through the certificate to the back end, so you can handle things such as user auth via the CN, or check the certificate serial against a CRL.
Below is my ingress file. The annotations are the minimum required to achieve mTLS authentication, and still have access to the certificate in the back end.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grpc-ingress
namespace: master
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-secret: "master/auth-tls-chain"
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "2"
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/grpc-backend: "true"
spec:
tls:
- hosts:
- grpc.example.com
secretName: auth-tls-chain
rules:
- host: grpc.example.com
http:
paths:
- path: /grpc.AwesomeService
backend:
serviceName: awesome-srv
servicePort: 9999
- path: /grpc.FantasticService
backend:
serviceName: fantastic-srv
servicePort: 9999
A few things to note:
The auth-tls-chain secret contains three files: ca.crt is the certificate chain and should include any intermediate certificates; tls.crt contains your server certificate; and tls.key contains your private key (one way to create the secret is shown after these notes).
If this secret lives in a namespace different to the NGINX ingress, then you should give the full path in the annotation.
My verify-depth is 2, but that is because I am using intermediate certificates. If you are using self-signed certificates, you will only need a depth of 1.
backend-protocol: "GRPCS" is required to prevent NGINX terminating the TLS. If you want NGINX to terminate the TLS and run your services without encryption, use GRPC as the protocol.
grpc-backend: "true" is required to let NGINX know to use HTTP2 for the backend requests.
You can list multiple paths and direct to multiple services. Unlike with the GKE ingress, these paths should not have a forward slash or asterisk suffix.
The best part is that if you have multiple namespaces, or if you are running a REST service as well (e.g. gRPC Gateway), NGINX will reuse the same load balancer. This provides some savings over the GKE ingress, which would use a separate LB for each ingress.
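For reference, one way to create that secret (a sketch; the file paths are placeholders for your own PEM files):

kubectl create secret generic auth-tls-chain -n master \
  --from-file=ca.crt=./ca.crt \
  --from-file=tls.crt=./tls.crt \
  --from-file=tls.key=./tls.key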
The above is from the master namespace and below is a REST ingress from the staging namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api-stage.example.com
    secretName: letsencrypt-staging
  rules:
  - host: api-stage.example.com
    http:
      paths:
      - path: /awesome
        backend:
          serviceName: awesome-srv
          servicePort: 8080
      - path: /fantastic
        backend:
          serviceName: fantastic-srv
          servicePort: 8080
For HTTP, I am using LetsEncrypt, but there's plenty of information available on how to set that up.
If you exec into the ingress-nginx pod, you will be able to see how NGINX has been configured:
...
server {
    server_name grpc.example.com ;

    listen 80;

    set $proxy_upstream_name "-";
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;

    listen 442 proxy_protocol ssl http2;

    # PEM sha: 142600b0866df5ed9b8a363294b5fd2490c8619d
    ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    # PEM sha: 142600b0866df5ed9b8a363294b5fd2490c8619d
    ssl_client_certificate /etc/ingress-controller/ssl/master-auth-tls-chain.pem;
    ssl_verify_client on;
    ssl_verify_depth 2;

    error_page 495 496 = https://help.example.com/auth;

    location /grpc.AwesomeService {
        set $namespace "master";
        set $ingress_name "grpc-ingress";
        set $service_name "awesome-srv";
        set $service_port "9999";
        set $location_path "/grpc.AwesomeService";

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = true,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }

        header_filter_by_lua_block {
            plugins.run()
        }

        body_filter_by_lua_block {
        }

        log_by_lua_block {
            balancer.log()
            monitor.call()
            plugins.run()
        }

        if ($scheme = https) {
            more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
        }

        port_in_redirect off;

        set $proxy_upstream_name "master-awesome-srv-9999";
        set $proxy_host $proxy_upstream_name;

        client_max_body_size 1m;

        grpc_set_header Host $best_http_host;

        # Pass the extracted client certificate to the backend
        grpc_set_header ssl-client-cert $ssl_client_escaped_cert;
        grpc_set_header ssl-client-verify $ssl_client_verify;
        grpc_set_header ssl-client-subject-dn $ssl_client_s_dn;
        grpc_set_header ssl-client-issuer-dn $ssl_client_i_dn;

        # Allow websocket connections
        grpc_set_header Upgrade $http_upgrade;
        grpc_set_header Connection $connection_upgrade;

        grpc_set_header X-Request-ID $req_id;
        grpc_set_header X-Real-IP $the_real_ip;
        grpc_set_header X-Forwarded-For $the_real_ip;
        grpc_set_header X-Forwarded-Host $best_http_host;
        grpc_set_header X-Forwarded-Port $pass_port;
        grpc_set_header X-Forwarded-Proto $pass_access_scheme;
        grpc_set_header X-Original-URI $request_uri;
        grpc_set_header X-Scheme $pass_access_scheme;

        # Pass the original X-Forwarded-For
        grpc_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        grpc_set_header Proxy "";

        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        proxy_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 4 4k;
        proxy_request_buffering on;
        proxy_http_version 1.1;

        proxy_cookie_domain off;
        proxy_cookie_path off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout;
        proxy_next_upstream_tries 3;

        grpc_pass grpcs://upstream_balancer;
        proxy_redirect off;
    }

    location /grpc.FantasticService {
        set $namespace "master";
        set $ingress_name "grpc-ingress";
        set $service_name "fantastic-srv";
        set $service_port "9999";
        set $location_path "/grpc.FantasticService";
        ...
This is just an extract of the generated nginx.conf. But you should be able to see how a single configuration could handle multiple services across multiple namespaces.
The last piece is a Go snippet showing how we get hold of the certificate via the context (wrapped here in an authenticate helper). As you can see from the config above, NGINX adds the authenticated certificate and other details into the gRPC metadata.
import (
    "context"
    "crypto/x509"
    "encoding/pem"
    "net/url"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/metadata"
    "google.golang.org/grpc/status"
)

func authenticate(ctx context.Context) error {
    meta, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return status.Error(codes.Unauthenticated, "missing metadata")
    }
    // Check if SSL has been handled upstream
    if len(meta.Get("ssl-client-verify")) == 1 && meta.Get("ssl-client-verify")[0] == "SUCCESS" {
        if len(meta.Get("ssl-client-cert")) > 0 {
            // NGINX URL-escapes the PEM before adding it to the metadata
            certPEM, err := url.QueryUnescape(meta.Get("ssl-client-cert")[0])
            if err != nil {
                return status.Error(codes.Unauthenticated, "bad or corrupt certificate")
            }
            block, _ := pem.Decode([]byte(certPEM))
            if block == nil {
                return status.Error(codes.Unauthenticated, "failed to decode certificate PEM")
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                return status.Error(codes.Unauthenticated, "failed to parse certificate")
            }
            return authUserFromCertificate(ctx, cert)
        }
    }
    // If fallen through, try to authenticate via the peer object for gRPCS,
    // or via a JWT in the metadata for gRPC Gateway (fall-back omitted here).
    return status.Error(codes.Unauthenticated, "no credentials provided")
}
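To show where this check runs, here is a minimal wiring sketch (my own addition, assuming the snippet above is wrapped in the authenticate function as shown):

srv := grpc.NewServer(
    grpc.UnaryInterceptor(func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
        // Reject the RPC early if neither the upstream mTLS headers
        // nor any fall-back credentials check out.
        if err := authenticate(ctx); err != nil {
            return nil, err
        }
        return handler(ctx, req)
    }),
)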
HTTP/2 and gRPC support on GKE is not available yet; please see the documented limitations. There is already a feature request in the works to address the issue.
Related
I am running Celery Flower on port 5555 inside Kubernetes with the nginx-ingress controller.
I want to do a rewrite where a request to /flower/(.*) goes to /$1, according to their documentation:
https://flower.readthedocs.io/en/latest/config.html?highlight=nginx#url-prefix
server {
    listen 80;
    server_name example.com;

    location /flower/ {
        rewrite ^/flower/(.*)$ /$1 break;
        proxy_pass http://example.com:5555;
        proxy_set_header Host $host;
    }
}
I have come up with the following ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend-airflow-ingress
  namespace: edna
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    ingress.kubernetes.io/rewrite-target: /$2
    # nginx.ingress.kubernetes.io/app-root: /flower
spec:
  rules:
  - host:
    http:
      paths:
      - path: /flower(/|$)(.*)
        backend:
          serviceName: airflow-flower-service
          servicePort: 5555
Inside the pod running Flower, I successfully get a response:
curl localhost:5555/dashboard
However, if I get into the pod running the NGINX controller, then it fails:
curl localhost/flower/dashboard
I get this response from Flower:
<div class="span12">
  <p>
    Error, page not found
  </p>
</div>
This is what I see inside nginx.conf in the nginx-controller pod:
server {
    server_name _ ;
    listen 80 default_server reuseport backlog=511 ;
    listen 443 default_server reuseport backlog=511 ssl http2 ;

    set $proxy_upstream_name "-";

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location ~* "^/flower(/|$)(.*)" {
        set $namespace "edna";
        set $ingress_name "backend-airflow-ingress";
        set $service_name "";
        set $service_port "";
        set $location_path "/flower(/|${literal_dollar})(.*)";

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }

        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}

        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }

        body_filter_by_lua_block {
        }
OK, figured this out:
ingress.kubernetes.io/rewrite-target: /$2
should, in my case, be a different annotation:
nginx.ingress.kubernetes.io/rewrite-target: /$2
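For reference, a sketch of the corrected metadata block (same ingress as above; only the annotation prefix changes):

metadata:
  name: backend-airflow-ingress
  namespace: edna
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2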
I would like to be able to disable external authorization for a specific path of my app.
Similar to this SO question: Kubernetes NGINX Ingress: Disable Basic Auth for specific path.
The only difference is that I am using an external auth provider (OAuth via Microsoft Azure).
This is the path that should be reachable by the public:
/MyPublicPath
My ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myIngressName
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://externalprovider/oauth2/sign_in
    nginx.ingress.kubernetes.io/auth-url: https://externalprovider/oauth2/auth
    nginx.ingress.kubernetes.io/auth-request-redirect: https://myapp/context_root/
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User, X-Auth-Request-Email, X-Auth-Request-Access-Token, Set-Cookie, Authorization
spec:
  rules:
  - host: myHostName
    http:
      paths:
      - backend:
          serviceName: myServiceName
          servicePort: 9080
        path: /
Can I have it not hit the https://externalprovider/oauth2/auth URL for just that path?
I've tried using ingress.kubernetes.io/configuration-snippet to set auth_basic to "off", but that appears to be tied to the basic auth directives, not the external ones.
My experiment showed that it's not required to have two ingress controllers, which Crou mentioned in the previous answer.
One NGINX ingress controller and two Ingress objects are enough to do the trick.
The experiment didn't cover the whole solution: the auth provider wasn't deployed, so we'll see the auth request only, but for checking the Ingress part that's not really necessary.
Here are the details [TL;DR]:
The ingress controller was deployed according to the official manual.
Both my1service and my2service forward the traffic to the same NGINX pod.
I also added the rewrite-target annotation because the destination pod serves content on path / only.
Ingress1:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress1
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://externalprovider/oauth2/sign_in
    nginx.ingress.kubernetes.io/auth-url: https://externalprovider/oauth2/auth
    nginx.ingress.kubernetes.io/auth-request-redirect: https://myapp/context_root/
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User, X-Auth-Request-Email, X-Auth-Request-Access-Token, Set-Cookie, Authorization
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - backend:
          serviceName: my1service
          servicePort: 80
        path: /
Ingress2:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - backend:
          serviceName: my2service
          servicePort: 80
        path: /somepath
Applying them to the cluster gives us the following ingress-controller configuration
(I skipped unimportant lines from the nginx.conf content).
As we can see here, a different set of rules is used for each location, so it's possible to require auth for one path and skip it for another, or even to have different auth providers for different locations on the same HTTP host.
ingress-controller's nginx.conf:
$ kubectl exec -n ingress-nginx ingress-nginx-controller-7fd7d8df56-xx987 -- cat /etc/nginx/nginx.conf > nginx.conf
$ less nginx.conf
http {
    ## start server myhost.com
    server {
        server_name myhost.com ;

        location /somepath {
            # this location doesn't use authentication and responds with the backend content page
            set $namespace "default";
            set $ingress_name "myingress2";
            set $service_name "my2service";
            set $service_port "80";
            set $location_path "/somepath";
            set $proxy_upstream_name "default-my2service-80";
            set $proxy_host $proxy_upstream_name;
            set $pass_access_scheme $scheme;
        }

        location = /_external-auth-Lw {
            internal;
            # this location is used for executing authentication requests
            set $proxy_upstream_name "default-my1service-80";
            proxy_set_header Host externalprovider;
            proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
            proxy_set_header X-Original-Method $request_method;
            proxy_set_header X-Sent-From "nginx-ingress-controller";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Auth-Request-Redirect https://myapp/context_root/;
            set $target https://externalprovider/oauth2/auth;
            proxy_pass $target;
        }

        location @64e7eef73f135f7a304693e85336f805005c5bc2 {
            internal;
            # this location is supposed to return the authentication error page
            add_header Set-Cookie $auth_cookie;
            return 302 https://externalprovider/oauth2/sign_in?rd=$pass_access_scheme://$http_host$escaped_request_uri;
        }

        location / {
            # this location requests authentication from the external source before returning the backend content
            set $namespace "default";
            set $ingress_name "myingress1";
            set $service_name "my1service";
            set $service_port "80";
            set $location_path "/";
            set $balancer_ewma_score -1;
            set $proxy_upstream_name "default-my1service-80";
            set $proxy_host $proxy_upstream_name;
            set $pass_access_scheme $scheme;
            set $pass_server_port $server_port;
            set $best_http_host $http_host;
            set $pass_port $pass_server_port;
            set $proxy_alternative_upstream_name "";

            # this location requires authentication
            auth_request /_external-auth-Lw;
            auth_request_set $auth_cookie $upstream_http_set_cookie;
            add_header Set-Cookie $auth_cookie;
            auth_request_set $authHeader0 $upstream_http_x_auth_request_user;
            proxy_set_header 'X-Auth-Request-User' $authHeader0;
            auth_request_set $authHeader1 $upstream_http_x_auth_request_email;
            proxy_set_header 'X-Auth-Request-Email' $authHeader1;
            auth_request_set $authHeader2 $upstream_http_x_auth_request_access_token;
            proxy_set_header 'X-Auth-Request-Access-Token' $authHeader2;
            auth_request_set $authHeader3 $upstream_http_set_cookie;
            proxy_set_header 'Set-Cookie' $authHeader3;
            auth_request_set $authHeader4 $upstream_http_authorization;
            proxy_set_header 'Authorization' $authHeader4;
            set_escape_uri $escaped_request_uri $request_uri;
            error_page 401 = @64e7eef73f135f7a304693e85336f805005c5bc2;
        }
    }
    ## end server myhost.com
}
Let's test how it all works:
# The ingress-controller IP address is 10.68.0.8.
# Here I requested the / path; the internal error and the
# 'externalprovider could not be resolved (3: Host not found)' message
# tell us that authentication was required but the auth backend is not
# available. That's expected.
master-node$ curl http://10.68.0.8/ -H "Host: myhost.com"
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
#controller logs:
$ kubectl logs -n ingress-nginx ingress-nginx-controller-7fd7d8df56-xx987
10.68.0.1 - - [21/Jul/2020:13:17:06 +0000] "GET / HTTP/1.1" 502 0 "-" "curl/7.47.0" 0 0.072 [default-my1service-80] [] - - - - 158e2f959af845b216c399b939d7c2b6
2020/07/21 13:17:06 [error] 689#689: *119718 externalprovider could not be resolved (3: Host not found), client: 10.68.0.1, server: myhost.com, request: "GET / HTTP/1.1", subrequest: "/_external-auth-Lw", host: "myhost.com"
2020/07/21 13:17:06 [error] 689#689: *119718 auth request unexpected status: 502 while sending to client, client: 10.68.0.1, server: myhost.com, request: "GET / HTTP/1.1", host: "myhost.com"
10.68.0.1 - - [21/Jul/2020:13:17:06 +0000] "GET / HTTP/1.1" 500 177 "-" "curl/7.47.0" 74 0.072 [default-my1service-80] [] - - - - 158e2f959af845b216c399b939d7c2b6
# Then I sent a request to /somepath and got a reply without having
# to provide any auth headers.
$ curl http://10.68.0.8/somepath -H "Host: myhost.com"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
</body>
</html>
#controller logs show the successful reply:
10.68.0.1 - - [21/Jul/2020:13:18:29 +0000] "GET /somepath HTTP/1.1" 200 612 "-" "curl/7.47.0" 82 0.002 [default-my2service-80] [] 10.68.1.3:80 612 0.004 200 3af1d3d48c045be160e2cee8313ebf42
I had the same problem; I added the snippet below to my ingress.yaml file and it's working:
nginx.ingress.kubernetes.io/auth-snippet: |
  if ( $request_uri = "/nonmember" ) {
    return 200;
  }
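For context, a sketch of where that annotation sits, reusing the ingress from the question (the auth-snippet is injected into the external-auth location that nginx-ingress generates):

metadata:
  name: myIngressName
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://externalprovider/oauth2/sign_in
    nginx.ingress.kubernetes.io/auth-url: https://externalprovider/oauth2/auth
    nginx.ingress.kubernetes.io/auth-snippet: |
      if ( $request_uri = "/nonmember" ) {
        return 200;
      }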
Because you already have an ingress in place with path /, there is no way of disabling the auth against https://externalprovider/oauth2/auth on that ingress alone.
For the best explanation, please refer to the answer provided by @VAS below.
To do that, you need to set up another ingress and configure it without the external auth annotations.
You can also check this question on Stack Overflow: Two ingress controllers on same K8S cluster, and this one: Kubernetes NGINX Ingress: Disable external auth for specific path.
I am trying to get a reverse proxy working on Kubernetes using nginx and a .NET Core API.
When I request http://localhost:9000/api/message I want something like the following to happen:
[Request] --> [nginx](localhost:9000) --> [.NET API](internal port 9001)
but what appears to be happening is:
[Request] --> [nginx](localhost:9000)!
It fails because /usr/share/nginx/api/message is not found.
Obviously nginx is failing to route the request to the upstream servers. This works correctly when I run the same config under docker-compose, but it is failing here in Kubernetes (running locally in Docker).
I am using the following configmap for nginx:
error_log /dev/stdout info;

events {
    worker_connections 2048;
}

http {
    access_log /dev/stdout;

    upstream web_tier {
        server webapi:9001;
    }

    server {
        listen 80;
        access_log /dev/stdout;

        location / {
            proxy_pass http://web_tier;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /nginx_status {
            stub_status on;
            access_log off;
            allow all;
        }
    }
}
The load-balancer yaml is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: load-balancer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: load-balancer
    spec:
      containers:
      - args:
        - nginx
        - -g
        - daemon off;
        env:
        - name: NGINX_HOST
          value: example.com
        - name: NGINX_PORT
          value: "80"
        image: nginx:1.15.9
        name: iac-load-balancer
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/nginx
          readOnly: true
          name: vol-config
        - mountPath: /tmp/share/nginx/html
          readOnly: true
          name: vol-html
      volumes:
      - name: vol-config
        configMap:
          name: load-balancer-configmap
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: vol-html
        configMap:
          name: load-balancer-configmap
          items:
          - key: index.html
            path: index.html
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9000
    targetPort: 80
  selector:
    app: load-balancer
status:
  loadBalancer: {}
Finally the error messages are:
2019/04/10 18:47:26 [error] 7#7: *1 open() "/usr/share/nginx/html/api/message" failed (2: No such file or directory), client: 192.168.65.3, server: localhost, request: "GET /api/message HTTP/1.1", host: "localhost:9000",
192.168.65.3 - - [10/Apr/2019:18:47:26 +0000] "GET /api/message HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "-",
It seems like nginx is either not reading the config correctly for some reason, or it is failing to communicate with the webapi servers and is defaulting back to serving static local content (nothing in the log indicates a comms issue, though).
Edit 1: I should have included that /nginx_status is also not routing correctly and fails with the same "/usr/share/nginx/html/nginx_status" not found error.
What I understand is that you are requesting an API which is returning a 404:
http://localhost:9000/api/message
I solved this issue by creating the backend service as a NodePort; I am accessing the API from my Angular app.
Here is my configure.conf file, which replaces the original nginx configuration file:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}

server {
    listen 5555;

    location / {
        proxy_pass http://login:5555;
    }
}

server {
    listen 5554;

    location / {
        proxy_pass http://dashboard:5554;
    }
}
Here I route the external traffic coming in on ports 5554/5555 to the corresponding service (by selector name).
login and dashboard are my services, both with type NodePort.
Here is my Dockerfile:
FROM nginx:1.11-alpine
COPY configure.conf /etc/nginx/conf.d/default.conf
COPY dockerpoc /usr/share/nginx/html
EXPOSE 80
EXPOSE 5555
EXPOSE 5554
CMD ["nginx","-g","daemon off;"]
I kept my frontend service's type as LoadBalancer, which exposes a public endpoint, and I call my backend API from the frontend as:
http://loadbalancer-endpoint:5555/login
Hope this helps.
Can you share how you created the configmap? Verify that the configmap has a data entry named nginx.conf. It might also be related to the readOnly flag: you can try removing it, or change the mount path to /etc/nginx/ as stated in the Docker image documentation.
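For instance, a quick sanity check (using the configmap name from the deployment above):

kubectl get configmap load-balancer-configmap -o yaml
# confirm the output has a data entry named nginx.conf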
I am trying to get Kibana 6.2.4 in my GKE Kubernetes cluster running under www.mydomain.com/kibana, without success. I can, however, run it perfectly fine with kubectl proxy and the default SERVER_BASEPATH.
Here is my Kibana deployment, with the SERVER_BASEPATH removed:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.2.4
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200
        # - name: SERVER_BASEPATH
        #   value: /api/v1/namespaces/logging/services/kibana-logging/proxy
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
My nginx ingress definition (nginx-ingress-controller:0.19.0):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logging-ingress
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/kibana/(.*)$ /$1 break;
spec:
  tls:
  - hosts:
    - dev.mydomain.net
    secretName: mydomain-net-tls-secret
  rules:
  - host: dev.mydomain.net
    http:
      paths:
      - path: /kibana
        backend:
          serviceName: kibana-logging
          servicePort: 5601
This results in the following nginx location:
location /kibana {
    set $namespace "logging";
    set $ingress_name "logging-ingress";
    set $service_name "kibana-logging";
    set $service_port "5601";
    set $location_path "/kibana";

    rewrite_by_lua_block {
        balancer.rewrite()
    }

    log_by_lua_block {
        balancer.log()
        monitor.call()
    }

    port_in_redirect off;

    set $proxy_upstream_name "logging-kibana-logging-5601";

    # enforce ssl on server side
    if ($redirect_to_https) {
        return 308 https://$best_http_host$request_uri;
    }

    client_max_body_size "1m";

    proxy_set_header Host $best_http_host;

    # Pass the extracted client certificate to the backend
    # Allow websocket connections
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    proxy_set_header X-Request-ID $req_id;
    proxy_set_header X-Real-IP $the_real_ip;
    proxy_set_header X-Forwarded-For $the_real_ip;
    proxy_set_header X-Forwarded-Host $best_http_host;
    proxy_set_header X-Forwarded-Port $pass_port;
    proxy_set_header X-Forwarded-Proto $pass_access_scheme;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Scheme $pass_access_scheme;

    # Pass the original X-Forwarded-For
    proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

    # mitigate HTTPoxy Vulnerability
    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
    proxy_set_header Proxy "";

    # Custom headers to proxied server
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    proxy_buffering "off";
    proxy_buffer_size "4k";
    proxy_buffers 4 "4k";
    proxy_request_buffering "on";
    proxy_http_version 1.1;

    proxy_cookie_domain off;
    proxy_cookie_path off;

    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 3;

    rewrite ^/kibana/(.*)$ /$1 break;

    proxy_pass http://upstream_balancer;
    proxy_redirect off;
}
However, going to /kibana results in a 404.
Stackdriver
2018-10-30 08:30:48.000 MDT
GET /kibana 404 61ms - 9.0B
Web page
{
  statusCode: 404,
  error: "Not Found",
  message: "Not Found"
}
I feel as though I am missing some sort of setting with either SERVER_BASEPATH and/or my nginx ingress configuration.
I believe what you want is the nginx.ingress.kubernetes.io/rewrite-target: / annotation in your ingress.
This way the location {} block will look something like this:
location ~* ^/kibana\/?(?<baseuri>.*) {
    ...
    rewrite (?i)/kibana/(.*) /$1 break;
    rewrite (?i)/kibana$ / break;
    ...
}
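In ingress terms, that means replacing the configuration-snippet with the rewrite-target annotation (a sketch, reusing the ingress from the question):

metadata:
  name: logging-ingress
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /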
So this is my current setup.
I have a k8s cluster with the nginx controller installed; I installed nginx using Helm.
I have a simple apple service, as below:
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
  - port: 5678 # Default port for image
and then I did a kubectl apply -f apples.yaml.
Now I have an ingress.yaml as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
and then kubectl apply -f ingress.yaml.
My ingress controller doesn't have an external IP address.
But even without the external IP, I did a
kubectl exec -it nginxdeploy-nginx-ingress-controller-5d6ddbb677-774xc /bin/bash
and tried doing a curl -kL http://localhost/apples,
and it's giving me a 503 error.
Can anybody help with this?
I've tested your configuration, and it seems to be working fine for me.
The pod responds fine:
$ kubectl describe pod apple-app
Name: apple-app
Namespace: default
Node: kube-helm/10.156.0.2
Start Time: Mon, 10 Sep 2018 11:53:57 +0000
Labels: app=apple
Annotations: <none>
Status: Running
IP: 192.168.73.73
...
$ curl http://192.168.73.73:5678
apple
Service responds fine:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apple-service ClusterIP 10.111.93.194 <none> 5678/TCP 1m
$ curl http://10.111.93.194:5678
apple
Ingress also responds fine, but by default it redirects http to https:
$ kubectl exec -it nginx-ingress-controller-6c9fcdf8d9-ggrcs -n ingress-nginx /bin/bash
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
<html>
<head><title>308 Permanent Redirect</title></head>
<body bgcolor="white">
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.13.12</center>
</body>
</html>
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl -k https://localhost/apple
apple
If you check the nginx configuration in the controller pod, you will see the redirect configuration for the /apple location:
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ more /etc/nginx/nginx.conf
...
location /apple {
    set $namespace "default";
    set $ingress_name "example-ingress";
    set $service_name "apple-service";
    set $service_port "5678";
    set $location_path "/apple";

    rewrite_by_lua_block {
    }

    log_by_lua_block {
        monitor.call()
    }

    if ($scheme = https) {
        more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
    }

    port_in_redirect off;

    set $proxy_upstream_name "default-apple-service-5678";

    # enforce ssl on server side
    if ($redirect_to_https) {
        return 308 https://$best_http_host$request_uri;
    }

    client_max_body_size "1m";

    proxy_set_header Host $best_http_host;

    # Pass the extracted client certificate to the backend
    # Allow websocket connections
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Request-ID $req_id;
    proxy_set_header X-Real-IP $the_real_ip;
    proxy_set_header X-Forwarded-For $the_real_ip;
    proxy_set_header X-Forwarded-Host $best_http_host;
    proxy_set_header X-Forwarded-Port $pass_port;
    proxy_set_header X-Forwarded-Proto $pass_access_scheme;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Scheme $pass_access_scheme;

    # Pass the original X-Forwarded-For
    proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

    # mitigate HTTPoxy Vulnerability
    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
    proxy_set_header Proxy "";

    # Custom headers to proxied server
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    proxy_buffering "off";
    proxy_buffer_size "4k";
    proxy_buffers 4 "4k";
    proxy_request_buffering "on";
    proxy_http_version 1.1;

    proxy_cookie_domain off;
    proxy_cookie_path off;

    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 3;

    proxy_pass http://default-apple-service-5678;
    proxy_redirect off;
}
You can disable this default behavior by adding annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
apple