Applying TLS to a working ingress results in default backend - 404 - nginx

I have installed the Nginx Ingress Controller through helm in the ingress namespace.
helm ls --namespace ingress
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx-ingress ingress 1 2020-03-15 10:47:51.143159 +0530 IST deployed nginx-ingress-1.34.2 0.30.0
The Service and Deployment are as follows:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: '"true"'
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: test-service
    app.kubernetes.io/instance: test-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: test-service
      app.kubernetes.io/instance: test-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: test-service
        app.kubernetes.io/instance: test-service
    spec:
      containers:
        - name: test-service
          image: "<acr-url>/test-service:c93c58c0bd4918de06d46381a89b293087262cf9"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /devops/health/liveness
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /devops/health/readiness
              port: 8080
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          env:
            - name: test-serviceClientId
              valueFrom:
                secretKeyRef:
                  key: test-serviceClientId
                  name: test-service-133
            - name: test-serviceClientSecret
              valueFrom:
                secretKeyRef:
                  key: test-serviceClientSecret
                  name: test-service-133
            - name: test-serviceTenantClientId
              valueFrom:
                secretKeyRef:
                  key: test-serviceTenantClientId
                  name: test-service-133
            - name: test-serviceTenantClientSecret
              valueFrom:
                secretKeyRef:
                  key: test-serviceTenantClientSecret
                  name: test-service-133
          resources:
            limits:
              cpu: 800m
            requests:
              cpu: 300m
The ingress is configured on the service with a rewrite as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - apiexample.centralus.cloudapp.azure.com
      secretName: tls-secret
  rules:
    - host: "apiexample.centralus.cloudapp.azure.com"
      http:
        paths:
          - path: /testservice(/|$)(.*)
            backend:
              serviceName: test-service
              servicePort: 8080
The tls-secret has been generated using
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=apiexample.centralus.cloudapp.azure.com/O=apiexample.centralus.cloudapp.azure.com"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
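As a sanity check (not part of the original steps), the certificate can be decoded back out of the secret to confirm the CN matches the ingress host:
$ kubectl get secret tls-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates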
Before applying the TLS configuration to the ingress, I was able to get a response from the api endpoint. The api endpoint is secured with oauth.
API Endpoint:
http://apiexample.centralus.cloudapp.azure.com/testservice/tenant/api/v1/endpoint
After applying the TLS config on the ingress, and hitting
https://apiexample.centralus.cloudapp.azure.com/testservice/tenant/api/v1/endpoint
I am getting default backend 404.
I have tested TLS with an ingress for another sample service (which is not secured with oauth), and it works for that service.
Here's the configuration for the other service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
        - name: tea
          image: nginxdemos/nginx-hello:plain-text
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  labels: {}
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: tea
The ingress for the service is configured as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - apiexample.centralus.cloudapp.azure.com
      secretName: tls-secret
  rules:
    - host: apiexample.centralus.cloudapp.azure.com
      http:
        paths:
          - path: /teaprefix(/|$)(.*)
            backend:
              serviceName: tea-svc
              servicePort: 80
The endpoint
https://apiexample.centralus.cloudapp.azure.com/teaprefix/someurl
works fine.
Please let me know if there is anything missing in my configuration, or any potential issue I may have overlooked.
Note: The Service and the Ingress are deployed in the default namespace, and the Ingress Controller is running in the ingress namespace.
The Nginx Ingress Controller is running as 2 Pods.
Logs from Ingress Controller with TLS configuration
Pod1
10.244.0.1 - - [22/Mar/2020:06:57:12 +0000] "GET /testservice/tenant/api/v1/endpoint HTTP/1.1" 302 0 "-" "PostmanRuntime/7.23.0" 1495 0.004 [default-test-service-8080] [] 10.244.0.7:8080 0 0.004 302 f4671ede2f95148220c21fe44de6fdad
10.244.0.1 - - [22/Mar/2020:06:57:13 +0000] "GET /tenant/api/v1/endpoint HTTP/1.1" 404 21 "http://apiexample.centralus.cloudapp.azure.com/tenant/api/v1/endpoint" "PostmanRuntime/7.23.0" 1563 0.001 [upstream-default-backend] [] 10.244.0.225:8080 21 0.004 404 ed41b36bc6b89b60bc3f208539a0d44c
Pod2
10.244.0.1 - - [22/Mar/2020:06:57:12 +0000] "GET /tenant/api/v1/endpoint HTTP/1.1" 308 171 "https://apiexample.centralus.cloudapp.azure.com/testservice/tenant/api/v1/endpoint" "PostmanRuntime/7.23.0" 1580 0.000 [upstream-default-backend] [] - - - - ce955b7bb5118169e99dd4051060c897
Logs from Ingress Controller without TLS configuration
10.244.0.1 - - [22/Mar/2020:07:04:34 +0000] "GET /testservice/tenant/api/v1/endpoint HTTP/1.1" 200 276 "-" "PostmanRuntime/7.23.0" 1495 2.165 [default-test-service-8080] [] 10.244.0.4:8080 548 2.168 200 e866f277def90c398df4e509e45718b2
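For reference, the requests in these logs were made from Postman; an equivalent curl to reproduce the redirect chain from outside (the Bearer token is a placeholder) would be:
$ curl -kIL -H "Authorization: Bearer <token>" https://apiexample.centralus.cloudapp.azure.com/testservice/tenant/api/v1/endpoint
Here -k accepts the self-signed certificate and -L follows the 302/308 hops, so each Location header is printed.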
UPDATE
Disabling the authentication on the backend service (test-service) also results in the same behavior.
Without TLS, I am able to hit the endpoint over http without any Bearer token.
After applying TLS, I get a default backend - 404 when I hit the endpoint with https/http.
UPDATE
Exposing the Service via ClusterIP (without the
service.beta.kubernetes.io/azure-load-balancer-internal: '"true"'
annotation) instead of LoadBalancer also does not help. The endpoint works without TLS; with TLS applied, I get a default backend - 404.
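For reference, the ClusterIP variant of the Service I tried looks like this (a sketch of the changed fields only; labels are as in the Service above):
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: test-service
    app.kubernetes.io/instance: test-service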
UPDATE
The test-service is a Spring Boot Application with the following WebSecurityConfiguration
@Component
@EnableResourceServer
public class WebSecurityConfiguration extends ResourceServerConfigurerAdapter {

    private static final Logger LOGGER = LoggerFactory.getLogger(WebSecurityConfiguration.class);

    private final HealthCheckWebSecurity healthCheckWebSecurity = new HealthCheckWebSecurity();
    private final Oauth2Settings oauth2Settings;
    private final JwtTokenStore jwtTokenStore;
    private final TenantService tenantService;
    private final TransportGuaranteeWebSecurity transportGuaranteeWebSecurity;

    @Autowired
    public WebSecurityConfiguration(
            Oauth2Settings oauth2Settings,
            JwtTokenStore jwtTokenStore,
            TenantService tenantService,
            TransportGuaranteeWebSecurity transportGuaranteeWebSecurity) {
        this.oauth2Settings = oauth2Settings;
        this.jwtTokenStore = jwtTokenStore;
        this.tenantService = tenantService;
        this.transportGuaranteeWebSecurity = transportGuaranteeWebSecurity;
    }

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
        String resourceId = oauth2Settings.getResource("default").getResourceId();
        LOGGER.info("Resource service id: {}", resourceId);
        resources.resourceId(resourceId).tokenStore(jwtTokenStore);
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.requestMatchers().anyRequest();
        http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
        http.csrf().disable();
        healthCheckWebSecurity.configure(http);
        transportGuaranteeWebSecurity.configure(http);
        http.authorizeRequests().anyRequest().permitAll();
        http.addFilterAfter(buildTenancyContextFilter(), ChannelProcessingFilter.class);
        http.addFilterAfter(buildLongUrlFilter(), ChannelProcessingFilter.class);
    }

    private TenancyContextFilter buildTenancyContextFilter() {
        return new TenancyContextFilter(tenantService,
                new PathVariableTenantExtractor(Arrays.asList("/{tenantAlias}/api/**")));
    }

    private LongRequestHttpFilter buildLongUrlFilter() {
        return new LongRequestHttpFilter();
    }
}

public final class TransportGuaranteeWebSecurity {

    private TransportGuaranteeSettings transportGuaranteeSettings;

    TransportGuaranteeWebSecurity(TransportGuaranteeSettings transportGuaranteeSettings) {
        this.transportGuaranteeSettings = transportGuaranteeSettings;
    }

    public void configure(HttpSecurity httpSecurity) throws Exception {
        if (httpsRequired()) {
            httpSecurity.requiresChannel().anyRequest().requiresSecure();
        } else {
            httpSecurity.requiresChannel().anyRequest().requiresInsecure();
        }
    }

    private boolean httpsRequired() {
        final String transportGuarantee = transportGuaranteeSettings.getTransportGuarantee();
        return !TransportGuaranteeSettings.TRANSPORT_GUARANTEE_NONE.equalsIgnoreCase(transportGuarantee);
    }
}

@ConfigurationProperties(prefix = "web.security")
public class TransportGuaranteeSettings {

    static final String TRANSPORT_GUARANTEE_NONE = "NONE";
    static final String TRANSPORT_GUARANTEE_CONFIDENTIAL = "CONFIDENTIAL";

    private static final Logger LOGGER = LoggerFactory.getLogger(TransportGuaranteeSettings.class);
    private static final String TRANSPORT_GUARANTEE_PROPERTY = "web.security.transportGuarantee";

    private String transportGuarantee;

    public String getTransportGuarantee() {
        return transportGuarantee;
    }

    public void setTransportGuarantee(String transportGuarantee) {
        this.transportGuarantee = transportGuarantee.trim();
        logUnexpectedValue();
    }

    private void logUnexpectedValue() {
        if (!TRANSPORT_GUARANTEE_NONE.equalsIgnoreCase(transportGuarantee)
                && !TRANSPORT_GUARANTEE_CONFIDENTIAL.equalsIgnoreCase(transportGuarantee)) {
            LOGGER.debug(
                    "Unknown value '{}' for property '{}' (expected '{}' or '{}'). Defaulted to '{}'.",
                    transportGuarantee, TRANSPORT_GUARANTEE_PROPERTY, TRANSPORT_GUARANTEE_NONE,
                    TRANSPORT_GUARANTEE_CONFIDENTIAL, TRANSPORT_GUARANTEE_CONFIDENTIAL);
        }
    }
}
In my application.yaml,
web.security.transportGuarantee: NONE
The tenancy context filter extracts the Tenant information from the URL and sets a ThreadLocal. There should not be any issue with that since I am able to hit the endpoint without the TLS configuration. I also do not see any issue with the TransportGuaranteeWebSecurity for the same reason.
Some more logs for debugging:
kubectl get pods -owide --namespace ingress
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-5fcbccd545-bdh25 1/1 Running 1 15d 10.244.0.22 aks-agentpool-44086776-vmss000000 <none> <none>
nginx-ingress-controller-5fcbccd545-ptx6j 1/1 Running 0 15d 10.244.0.21 aks-agentpool-44086776-vmss000000 <none> <none>
nginx-ingress-default-backend-554d7bd77c-zxzlf 1/1 Running 0 15d 10.244.0.225 aks-agentpool-44086776-vmss000000 <none> <none>
kubectl get svc
test-service LoadBalancer 10.0.231.35 13.89.111.39 8080:31534/TCP 14d
tea-svc ClusterIP 10.0.12.216 <none> 80/TCP 17d
kubectl get ing
test-service apiexample.centralus.cloudapp.azure.com 10.240.0.4 80, 443 15d

I've reproduced your scenario in my GCP account and didn't get the same result, so I'm posting my steps to troubleshoot each component and make sure all of them are working properly. In summary, it seems the main problem is how the application handles the paths or the host.
Kubernetes: 1.15.3 (GKE)
Nginx Ingress: Installed following the official docs
Based on your yaml, I removed the readiness and liveness probes and the environment variables for testing, and changed the image to an nginx image (on port 80):
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: '"true"'
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: test-service
    app.kubernetes.io/instance: test-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: test-service
      app.kubernetes.io/instance: test-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: test-service
        app.kubernetes.io/instance: test-service
    spec:
      containers:
        - name: test-service
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
After applying this, we can check whether both the deployment and the service are running as expected, before applying the ingress spec.
To check this, we can use a curl image to curl the destination, or a dnsutils container as in the official kubernetes docs.
In this case I've used curlimages/curl to test:
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
    - name: curl
      image: curlimages/curl
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
With the curl container running, we can first check whether the container running our nginx image is up and answering requests by curling its IP directly.
The command below creates a variable named $pod holding the IP of the pod with the label app.kubernetes.io/name=test-service:
$ pod=$(kubectl get pods -ojsonpath='{.items[*].status.podIP}' -l app.kubernetes.io/name=test-service)
$ echo $pod
192.168.109.12
Using the curl pod created earlier, we can check if the pod is processing requests:
$ kubectl exec curl -- curl -Is $pod
HTTP/1.1 200 OK
Server: nginx/1.17.9
Date: Tue, 24 Mar 2020 09:08:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 03 Mar 2020 14:32:47 GMT
Connection: keep-alive
ETag: "5e5e6a8f-264"
Accept-Ranges: bytes
We got the response HTTP/1.1 200 OK; let's move forward and test the service:
$ kubectl exec curl -- curl -Is test-service
HTTP/1.1 200 OK
Server: nginx/1.17.9
Date: Tue, 24 Mar 2020 09:11:13 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 03 Mar 2020 14:32:47 GMT
Connection: keep-alive
ETag: "5e5e6a8f-264"
Accept-Ranges: bytes
Same here, HTTP/1.1 200 OK for the service.
Let's go further and deploy the ingress, without TLS for now, to test before and after.
Generating and applying the certificate:
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=apiexample.centralus.cloudapp.azure.com/O=apiexample.centralus.cloudapp.azure.com"
...
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
Ingress without TLS (I've changed the port to 80 to match my nginx image):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: "apiexample.centralus.cloudapp.azure.com"
      http:
        paths:
          - path: /testservice(/|$)(.*)
            backend:
              serviceName: test-service
              servicePort: 80
Testing from my desktop over the internet, using the IP (omitted) provided by GCP:
$ curl -ILH "Host: apiexample.centralus.cloudapp.azure.com" http://34.77.xxx.xx/testservice
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Tue, 24 Mar 2020 10:41:21 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Tue, 03 Mar 2020 14:32:47 GMT
ETag: "5e5e6a8f-264"
Accept-Ranges: bytes
Up to here everything is working fine. We can now add TLS to the ingress spec and try again:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-service
  labels:
    app.kubernetes.io/name: test-service
    helm.sh/chart: test-service-0.1.0
    app.kubernetes.io/instance: test-service
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - apiexample.centralus.cloudapp.azure.com
      secretName: tls-secret
  rules:
    - host: "apiexample.centralus.cloudapp.azure.com"
      http:
        paths:
          - path: /testservice(/|$)(.*)
            backend:
              serviceName: test-service
              servicePort: 80
Testing using curl:
curl -ILH "Host: apiexample.centralus.cloudapp.azure.com" https://34.77.147.74/testservice -k
HTTP/2 200
server: nginx/1.17.8
date: Tue, 24 Mar 2020 10:45:25 GMT
content-type: text/html
content-length: 612
vary: Accept-Encoding
last-modified: Tue, 03 Mar 2020 14:32:47 GMT
etag: "5e5e6a8f-264"
accept-ranges: bytes
strict-transport-security: max-age=15724800; includeSubDomains
OK, so that works with TLS too. Based on this we can conclude that your yaml spec is fine, and that you may instead be facing an issue between the path in your ingress definition and your application:
You are using the annotation nginx.ingress.kubernetes.io/rewrite-target: /$2 with the path /testservice(/|$)(.*)
This means that any characters captured by (.*) are assigned to the placeholder $2, which is then used as the parameter in the rewrite-target annotation.
Based on your ingress path regex:
apiexample.centralus.cloudapp.azure.com/testservice rewrites to apiexample.centralus.cloudapp.azure.com/
apiexample.centralus.cloudapp.azure.com/testservice/ rewrites to apiexample.centralus.cloudapp.azure.com/
apiexample.centralus.cloudapp.azure.com/testservice/tenant/api/v1/endpoint rewrites to apiexample.centralus.cloudapp.azure.com/tenant/api/v1/endpoint
Checking the nginx pod logs, you can see the requested url after the rewrite:
2020/03/24 10:59:33 [error] 7#7: *186 open() "/usr/share/nginx/html/tenant/api/v1/endpoint" failed (2: No such file or directory), client: 10.20.1.61, server: localhost, request: "HEAD /tenant/api/v1/endpoint HTTP/1.1", host: "apiexample.centralus.cloudapp.azure.com"
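To capture the same trace in your cluster, you can tail both controller pods from your kubectl get pods output while replaying the request:
$ kubectl logs -n ingress nginx-ingress-controller-5fcbccd545-bdh25 -f
$ kubectl logs -n ingress nginx-ingress-controller-5fcbccd545-ptx6j -f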
So, from this test I conclude that your deployment, service and ingress are working and don't have any typo or formatting problem. My advice is to double check the application:
Make sure your application handles the path correctly;
If your application does any URL validation, make sure it can handle both http and https;
In case you have CORS enabled, adjust the ingress as mentioned here.
Since you didn't post any app as an example to reproduce, my tests were limited to a generic app as the backend. If you can provide more details about the backend application, or a generic app that reproduces the same behavior, let me know and I'll be happy to improve my answer.
References:
https://kubernetes.github.io/ingress-nginx/deploy/
https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/

Related

Rabbitmq with nginx ingress

I'm using the nginx controller with Minikube. I can access the rabbitmq management UI, but when I access the queues I get this error:
Not found
The object you clicked on was not found; it may have been deleted on the server.
If I use port-forward, it works correctly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    name: rabbitmq-ingress
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /rabbit(/|$)(.*)
            backend:
              service:
                name: rabbitmq-management
                port:
                  number: 15672
Looks like you forgot to specify the host in the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
Read more: https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards

kubernetes nginx ingress controller return 404

Following this guide, I created an ingress controller on my local kubernetes server; the only difference is that it is created as a NodePort.
I have done some test deployments with their respective services, and everything works. Here are the files:
Deploy1:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: helloworld1
spec:
  selector:
    matchLabels:
      app: helloworld1
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld1
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
Deploy2:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: helloworld2
spec:
  selector:
    matchLabels:
      app: helloworld2
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld2
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:2.0
          ports:
            - containerPort: 8080
Deploy3:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: geojson-example
spec:
  selector:
    matchLabels:
      app: geojson-example
  replicas: 1
  template:
    metadata:
      labels:
        app: geojson-example
    spec:
      containers:
        - name: geojson-container
          image: "nmex87/geojsonexample:latest"
          ports:
            - containerPort: 8080
Service1:
apiVersion: v1
kind: Service
metadata:
  name: helloworld1
spec:
  # type: NodePort
  ports:
    - port: 8080
  selector:
    app: helloworld1
Service2:
apiVersion: v1
kind: Service
metadata:
  name: helloworld2
spec:
  # type: NodePort
  ports:
    - port: 8080
  selector:
    app: helloworld2
Service3:
apiVersion: v1
kind: Service
metadata:
  name: geojson-example
spec:
  ports:
    - port: 8080
  selector:
    app: geojson-example
This is the ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
When I do a GET on myServer:myPort/test1 or /test2 everything works; on /geo I get the following answer:
{
  "timestamp": "2021-03-09T17:02:36.606+00:00",
  "status": 404,
  "error": "Not Found",
  "message": "",
  "path": "/geo"
}
Why??
If I create a pod and curl geojson-example from inside it, it works, but from outside I get a 404 (I think from the nginx ingress controller).
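For reference, the in-cluster check that works looks something like this (the throwaway pod name is arbitrary):
$ kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- curl -I http://geojson-example:8080/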
This is the log of the nginx pod:
x.x.x.x - - [09/Mar/2021:17:02:21 +0000] "GET /test1 HTTP/1.1" 200 68 "-" "PostmanRuntime/7.26.8" 234 0.006 [default-helloworld1-8080] [] 192.168.168.92:8080 68 0.008 200
x.x.x.x - - [09/Mar/2021:17:02:36 +0000] "GET /geo HTTP/1.1" 404 116 "-" "PostmanRuntime/7.26.8" 232 0.013 [default-geojson-example-8080] [] 192.168.168.109:8080 116 0.012 404
What can I do?
As per the docs: this annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> and specifies a custom default backend. The <svc name> is a reference to a service in the same namespace in which you are applying the annotation, and it overrides the global default backend.
This service handles the response when the service in the Ingress rule does not have active endpoints.
You cannot use the same service as the default backend and for a path at the same time. When you do, the path /geo becomes invalid: the default backend only serves requests for rules without active endpoints, so declaring geojson-example as the default backend (for inactive endpoints) while also using geojson-example for the valid path /geo creates a deadlock-type situation.
You actually do not need the nginx.ingress.kubernetes.io/default-backend annotation here.
Your ingress should look like the first option below, without the default-backend annotation. Alternatively, keep the annotation, but then stop using geojson-example for any valid path, or use another service for the path /geo. The options are:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: <any_other_service> # here use another service except `geojson-example`
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
This is because of your default backend: you set the geojson-example service as the default backend.
The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped to an Ingress rule).
Basically a default backend exposes two URLs:
/healthz that returns 200
/ that returns 404
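Both behaviours are easy to verify by curling a default backend service directly from inside the cluster (the service name below is a placeholder; use whatever your controller's default backend service is called):
$ kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- curl -s -o /dev/null -w '%{http_code}\n' http://<default-backend-svc>/healthz
$ kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- curl -s -o /dev/null -w '%{http_code}\n' http://<default-backend-svc>/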
So, if you want the geojson-example service as the default backend, you don't need the /geo path specification. Your manifest file will then be:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or, if you want geojson-example as a valid ingress path, you have to remove the default-backend annotation. Your manifest file will then be:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080

https redirects to http and then to https

I have an application running inside EKS, with Istio as a service mesh. I am having a problem with https redirecting to http and then back to https. The problem looks to be in the istio virtual service: it momentarily switches to http, which I want to prevent.
This is how we installed istio [installed version is 1.5.1]:
istioctl -n infrastructure manifest apply \
--set profile=default --set values.kiali.enabled=true \
--set values.gateways.istio-ingressgateway.enabled=true \
--set values.gateways.enabled=true \
--set values.gateways.istio-ingressgateway.type=NodePort \
--set values.global.k8sIngress.enabled=false \
--set values.global.k8sIngress.gatewayName=ingressgateway \
--set values.global.proxy.accessLogFile="/dev/stdout"
This is our virtual service. The cluster contains two deployments:
myapps-front
myapps-api
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dev-sanojapps-virtual-service
  namespace: istio-system
spec:
  hosts:
    - "dev-mydomain.com"
  gateways:
    - ingressgateway
  http:
    - match:
        - uri:
            prefix: /myapp/
        - uri:
            prefix: /myapp
      rewrite:
        uri: /
      route:
        - destination:
            host: myapp-front.sanojapps-dev.svc.cluster.local
      headers:
        request:
          set:
            "X-Forwarded-Proto": "https"
            "X-Forwarded-Port": "443"
        response:
          set:
            Strict-Transport-Security: max-age=31536000; includeSubDomains
    - match:
        - uri:
            prefix: /v1/myapp-api/
        - uri:
            prefix: /v1/myapp-api
      rewrite:
        uri: /
      route:
        - destination:
            host: myapp-api.sanojapps-dev.svc.cluster.local
            port:
              number: 8080
    - match:
        - uri:
            prefix: /
      redirect:
        uri: /myapp/
        https_redirect: true
      headers:
        request:
          set:
            "X-Forwarded-Proto": "https"
            "X-Forwarded-Port": "443"
        response:
          set:
            Strict-Transport-Security: max-age=31536000; includeSubDomains
Below is the front-end app's yaml deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-front
  namespace: sanojapps-dev
  labels:
    app: myapp-front
spec:
  selector:
    matchLabels:
      app: myapp-front
  template:
    metadata:
      labels:
        app: myapp-front
    spec:
      containers:
        - name: myapp-front
          image: <ECR_REPO:TAG>
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 50m
              memory: 256Mi
Our gateway is configured like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: istio-system
  name: sanojapps-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: ""
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:region:account:certificate/<ACM_ID>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-2019-08
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:region:account:regional/webacl/sanojapps-acl/<ACM_ID>
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
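To observe the momentary http hop, the redirect chain can be traced from outside with curl; -I prints each hop's status line and Location header, and -L follows them, so an intermediate http:// URL is visible directly (domain as in the VirtualService above):
$ curl -IL https://dev-mydomain.com/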

Why is the certificate not recognized by the ingress?

I have installed https://cert-manager.io on my K8S cluster and created cluster issuers:
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean-dns
  namespace: cert-manager
data:
  # insert your DO access token here
  access-token: secret
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: mail@example.io
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
            #- "*.service.databaker.io"
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: mail@example.io
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
I have also created a certificate:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: hello-cert
spec:
  secretName: hello-cert-prod
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "*.tool.databaker.io"
  dnsNames:
    - "*.tool.databaker.io"
and it was successfully created:
Normal Requested 8m31s cert-manager Created new CertificateRequest resource "hello-cert-2824719253"
Normal Issued 7m22s cert-manager Certificate issued successfully
To figure out if the certificate is working, I have deployed a service:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.7
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello from the first deployment!
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: hello.tool.databaker.io
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes-first
              servicePort: 80
---
But it does not work properly.
What am I doing wrong?
You haven't specified the secret containing your certificate:
spec:
  tls:
    - hosts:
        - hello.tool.databaker.io
      secretName: <secret containing the certificate>
  rules:
    ...
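With the secret referenced, it is worth confirming cert-manager actually issued into it before testing in the browser; using the names from your Certificate, something like:
$ kubectl get certificate hello-cert
$ kubectl get secret hello-cert-prod -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates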

Problem with Kubernetes and Nginx. Error code

I'm trying to deploy my first Kubernetes application. I've set everything up, but now when I try to access it over the cluster's IP address I get this message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be? Does it have anything to do with NGINX?
Also, here is my .yaml file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
  namespace: gitlab-managed-apps
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
I would really appreciate it if anybody could help me!
Just add the path in your ingress:
rules:
  - host: ${URL}
    http:
      paths:
        - backend:
            serviceName: ${APP_NAME}
            servicePort: 80
          path: /
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
