K8s Ingress controller, cert-manager and Let's Encrypt SSL not working - nginx

I have created a brand new K8S cluster
I have created the Ingress nginx controller.
The controller created a namespace with all of the required Pods, Services, etc.
I have created an Ingress object that routes the traffic to a Deployment service with TLS enabled.
I have created a cluster issuer object.
When inspecting the certificate with kubectl describe cert, everything is OK and Ready.
The same goes for kubectl describe clusterissuer.
When running curl https://example.com/ it returns the following error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Without SSL, access from outside is enabled and works properly; when the SSL configuration is added back to the Ingress object, it fails again.
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - k8s-poc.example.com
    secretName: echo-tls
  rules:
  - host: k8s-poc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-svc
            port:
              number: 3333
test-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-depl
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: mydockeruser/test:42
        ports:
        - containerPort: 3333
      imagePullSecrets:
      - name: docker-regcred
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  selector:
    app: test
  ports:
  - name: http
    protocol: TCP
    port: 3333
    targetPort: 3333
prod-issuer.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: my@email.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
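For reference, when the cert-manager.io/cluster-issuer annotation is picked up correctly, cert-manager creates a Certificate behind the scenes and writes the signed chain into the echo-tls secret. A minimal sketch of the equivalent explicit Certificate, using the names from the manifests above (older cert-manager releases use apiVersion cert-manager.io/v1alpha2; the namespace here is an assumption), can be handy when checking whether the secret referenced by the Ingress is really the one being issued:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  # hypothetical name; cert-manager normally derives it from the Ingress TLS block
  name: echo-tls
  namespace: default          # assumes the Ingress and Service live in "default"
spec:
  secretName: echo-tls        # must match spec.tls[].secretName in the Ingress
  dnsNames:
  - k8s-poc.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
Comparing what kubectl describe secret echo-tls reports with the certificate the server actually serves helps narrow down whether the problem is on the cluster side or on the client side.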

Related

Problem with NGINX, Kubernetes and CloudFlare

I am experiencing exactly this issue: Nginx-ingress-controller fails to start after AKS upgrade to v1.22, except that none of the proposed solutions works for my case.
I am running a Kubernetes cluster on Oracle Cloud and I accidentally upgraded the cluster, and now I can no longer connect to the services through the nginx-controller. After reading the official nginx documentation I am aware of the new nginx version, so I checked the documentation and re-installed the nginx-controller following the official Oracle Cloud documentation.
I am able to follow it step by step; I run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
An ingress-nginx namespace is then created, along with a LoadBalancer. Then, as in the guide, I created a simple hello application (though not running on port 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-hello-world
  labels:
    app: docker-hello-world
spec:
  selector:
    matchLabels:
      app: docker-hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: docker-hello-world
    spec:
      containers:
      - name: docker-hello-world
        image: scottsbaldwin/docker-hello-world:latest
        ports:
        - containerPort: 8088
---
apiVersion: v1
kind: Service
metadata:
  name: docker-hello-world-svc
spec:
  selector:
    app: docker-hello-world
  ports:
  - port: 8088
    targetPort: 8088
  type: ClusterIP
and then the ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088
But when running the curl commands I only get a curl: (56) Recv failure: Connection reset by peer.
So I then tried to connect to some Python microservices that are already running by simply editing the ingress, but whatever I do I get the same error message. And when I set the host as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: SUBDOMAIN.DOMAIN.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ANY_MICROSERVICE_RUNNING_IN_CLUSTER
            port:
              number: EXPOSED_PORT_BY_MICROSERVICE
Then, after setting the subdomain on Cloudflare, I only get a 520 error.
Can you help me find what it is that I am not seeing?
This may be related to your Ingress resource.
In Kubernetes v1.19 and above, Ingress resources should use ingressClassName instead of the older annotation syntax. Additional information on what should be done when upgrading can be found in the official Kubernetes documentation.
With those changes applied, and based on the information you've provided so far, your Ingress resource should look like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
spec:
  ingressClassName: nginx
  rules:
  - host: SUBDOMAIN.DOMAIN.com
    http:
      paths:
      - backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - SUBDOMAIN.DOMAIN.com
    secretName: tls-secret
Additionally, please provide the nginx-ingress controller logs if you still have issues, as the Cloudflare error does not detail what could be wrong beyond providing a starting point.
As someone who uses Cloudflare and Nginx, I can say there are multiple reasons why you might be receiving a 520 error, so it would be better if we could narrow down what the main issue is. Let me know if you have any questions.

I want to change my WordPress pod domain name

I hope you are all doing great today.
Here is my situation:
I have 2 WordPress websites (identical):
--the 1st one is an App Service WordPress in Azure with a domain name, e.g. https://wordpress.azurewebsites.net
--the 2nd one is in an AKS cluster, as a pod with a load balancer that exposes it to the internet with a public IP
What I want to do:
I want to take the domain name from the App Service and give it to the AKS pod.
What I did:
I changed the domain name from the dashboard and changed the load balancer public IP address,
and it didn't work; now I can't access the dashboard from the load balancer IP address either.
I'm new to Kubernetes; I hope someone can guide me in the right direction on how to do it.
It seems like you are missing an ingress controller. You could, for example, install ingress-nginx and expose the ingress controller with this Service config:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 53.1.1.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
You can now create a service for your app:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: app
spec:
  type: ClusterIP
  ports:
  - name: service
    port: 80
  selector:
    app: your-app
Then you can expose your app with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - wordpress.azurewebsites.net
  rules:
  - host: wordpress.azurewebsites.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80

YAML configuration for Istio and gRPC

I am working on a POC for Istio + gRPC; the Istio version is 1.6, but I cannot see any gRPC traffic reaching my pods.
I suspect my Istio Gateway or VirtualService is missing something, but I cannot figure out what is wrong. Could anybody help review my YAML and point out what is missing or incorrect?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: syslogserver
  name: syslogserver
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: syslogserver
  replicas: 1
  template:
    metadata:
      labels:
        app: syslogserver
    spec:
      containers:
      - name: syslogserver
        image: docker.io/grpc-syslog:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5555
      imagePullSecrets:
      - name: pull-image-credential
---
apiVersion: v1
kind: Service
metadata:
  name: syslogserver
  namespace: mynamespace
  labels:
    app: syslogserver
spec:
  selector:
    app: syslogserver
  ports:
  - name: grpc
    port: 6666
    protocol: TCP
    targetPort: 5555
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: xyz-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 7777
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: v1
kind: Service
metadata:
  name: xyz-istio-ingressgateway
  namespace: istio-system
  labels:
    app: xyz-istio-ingressgateway
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
  ports:
  - protocol: TCP
    nodePort: 32555
    port: 7777
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: xyz-ingress-gateway-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - xyz-ingress-gateway
  #tls:
  http:
  - match:
    - port: 7777
    route:
    - destination:
        host: syslogserver.mynamespace.svc.cluster.local
        port:
          number: 6666
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: xyz-destinationrule
  namespace: istio-system
spec:
  host: syslogserver.mynamespace.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
Please give your guidance, thanks.
From what I can see, the Service named xyz-istio-ingressgateway should be deleted, as that is not how you communicate when using Istio.
Instead you should use the Istio ingress gateway, combined with a Gateway, a VirtualService and a DestinationRule.
If you have chosen port number 7777 on your Gateway, you have to open this port on the Istio ingress gateway; there are a few ways to do that, described in this Stack Overflow question. These are the default Istio ingress gateway values.
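Purely as an illustration of what opening the port means, and assuming the default istio-ingressgateway Service shipped with Istio 1.6, the ports list of that Service would gain an entry roughly like the following (the port name and nodePort below are my own placeholders):
# Sketch: extra entry for the ports list of the istio-ingressgateway Service
# in istio-system (apply it through whatever install method you use,
# e.g. istioctl or operator overlays, rather than hand-editing the Service).
- name: http2-syslog        # placeholder name
  protocol: TCP
  port: 7777                # intended to line up with servers[].port.number in your Gateway
  targetPort: 7777
  nodePort: 32555           # optional; reuse your chosen NodePort if you need NodePort access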
After you configure the port you can use kubectl get svc istio-ingressgateway -n istio-system to get the istio ingress gateway external IP.
If the external IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is pending, then your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
The rest of your configuration looks fine to me. Just a reminder about injecting a sidecar proxy into your pods.
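As a reminder of what that usually looks like, automatic sidecar injection is typically switched on by labelling the application namespace; a minimal sketch for the mynamespace used above:
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
  labels:
    # Istio's mutating webhook injects the Envoy sidecar into pods
    # created in namespaces carrying this label
    istio-injection: enabled
Pods that already exist need to be recreated after the label is added for the sidecar to show up.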

Can I use the nginx ingress controller with oauth2_proxy in Kubernetes with Azure Active Directory, without cookies?

I am in the process of changing from an Azure web service to Azure Kubernetes to host an API. I have the solution working with nginx, oauth2_proxy and Azure Active Directory. However, the solution requires a cookie to function.
As this is an API, the external security will be managed by an AWS API Gateway with a custom authorizer. I would like the API Gateway to authenticate using a bearer token only and not require a cookie.
I have my solution working and have so far been testing from Postman. In Postman I have the bearer token but cannot find a way to access the API without the cookie.
My application presently runs via an AWS API Gateway and an Azure App Service with Azure Active Directory. The AWS API Gateway custom authorizer does not require a cookie in this case.
I have the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
------
# oauth2_proxy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: mytennantid
        - name: OAUTH2_PROXY_CLIENT_ID
          value: my clientid
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: my client secret
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: my cookie secret
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "file:///dev/null"
        image: machinedata/oauth2_proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
-----
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: mayapp
          servicePort: 80
I would like to change this configuration so that a cookie is no longer required. If this is not possible, is there another way to achieve the same outcome?
Just drop the OAuth part on Kubernetes and let the API Gateway validate the requests; it is able to do exactly what you need. You can secure your Kubernetes cluster so that it only accepts requests from the API Gateway, so you don't need to protect your endpoint from other callers.
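As a rough sketch of that setup (assuming the ingress-nginx controller; 203.0.113.10/32 is a placeholder for the API Gateway's egress address), the application Ingress could drop the auth-url/auth-signin annotations and instead restrict callers by source IP:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    # only accept traffic originating from the API Gateway (placeholder CIDR)
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: mayapp
          servicePort: 80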

Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/*************/api
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: 8080
All set. GKE says that all deployments are okay, the pod counts are met, and the main ingress with the nginx-ingress-controller is set up as well. But I'm not able to reach any of the services, not even an application-specific 404. Nothing. It's as if nothing is resolved at all.
Another related question concerns the entrypoints. The first one is through main-ingress: it created its own LoadBalancer with its own IP address. The second one is the address from the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also does not point to the expected api service.
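For comparison, and purely as a sketch based on the manifests above (not a confirmed fix), the same rule is often written for the nginx ingress controller with a plain prefix path instead of the /api/* glob:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api          # typically treated as a prefix by the nginx controller
        backend:
          serviceName: api
          servicePort: 8080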
