I have installed the following two different ingress controllers on my DigitalOcean managed K8S cluster:
Nginx
Istio
and they have been assigned two different IP addresses. My question is: is it wrong to have two different ingress controllers on the same K8S cluster?
The reason I have done it is that Nginx is for tools like Harbor, Argo CD, etc., and Istio is for the microservices.
I have also noticed that when both are installed alongside each other, the cluster sometimes suddenly goes down during a deployment.
For example, I have deployed:
apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
namespace: dev
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.7
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: istio
name: helloworld-ingress
namespace: dev
spec:
rules:
- host: hello.service.databaker.io
http:
paths:
- path: /*
backend:
serviceName: hello-kubernetes-first
servicePort: 80
---
Then I got:
Error from server (InternalError): error when creating "istio-app.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.245.107.175:443: i/o timeout
You have raised several points - before answering your question, let's take a step back.
K8s Ingress not recommended by Istio
It is important to note that Istio does not recommend using K8s Ingress:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
Ref: https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
As noted, the Istio Gateway (Istio IngressGateway and EgressGateway) acts as the edge; you can find more in https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/.
Multiple endpoints within Istio
If you need to assign one public endpoint for business traffic and another for tooling/monitoring (such as Argo CD and Harbor, as you mentioned), you can achieve that with Istio alone. There are roughly two approaches:
Create separate Istio IngressGateways - one for main traffic, and another for monitoring
Create one Istio IngressGateway, and use Gateway definition to handle multiple access patterns
Both are valid, and depending on your requirements, you may need to choose one or the other (a rough sketch of Approach #1 follows below).
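For Approach #1, a sketch of what adding a second Istio IngressGateway could look like, assuming Istio is installed with istioctl and the IstioOperator API (the gateway name monitoring-ingressgateway and its label are placeholders, not anything from your setup):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    # default public gateway for main/business traffic
    - name: istio-ingressgateway
      enabled: true
    # hypothetical second gateway dedicated to tooling/monitoring traffic
    - name: monitoring-ingressgateway
      enabled: true
      label:
        istio: monitoring-ingressgateway
      k8s:
        service:
          type: LoadBalancer
Each gateway then gets its own LoadBalancer IP, and your Gateway definitions select the right one via its labels.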
Approach #2 is where Istio's traffic management system shines. It is a great example of Istio's power, but the setup is slightly more complex if you are new to it, so here is an example.
Example of Approach #2
When you create the Istio IngressGateway by following the default installation, it creates an istio-ingressgateway Service like the one below (the YAML definition is heavily simplified):
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
name: istio-ingressgateway
namespace: istio-system
# ... other attributes ...
spec:
type: LoadBalancer
# ... other attributes ...
This LoadBalancer Service would then be your endpoint. (I'm not familiar with the DigitalOcean K8s environment, but I suppose it handles the LB creation.)
Then, you can create a Gateway definition like the one below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: your-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
istio: ingressgateway
servers:
- port:
number: 3000
name: https-your-system
protocol: HTTPS
hosts:
- "your-business-domain.com"
- "*.monitoring-domain.com"
# ... other attributes ...
You can then create 2 or more VirtualService definitions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: business-virtsvc
spec:
gateways:
- istio-ingressgateway.istio-system.svc.cluster.local
hosts:
- "your-business-domain.com"
http:
- match:
- port: 3000
route:
- destination:
host: some-business-pod
port:
number: 3000
# ... other attributes ...
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: monitoring-virtsvc
spec:
gateways:
- istio-ingressgateway.istio-system.svc.cluster.local
hosts:
- "harbor.monitoring-domain.com"
http:
- match:
- port: 3000
route:
- destination:
host: harbor-pod
port:
number: 3000
# ... other attributes ...
NOTE: The above assumes a lot of things, such as port mappings, traffic handling, etc. Please check out the official docs for details.
So, back to the question after the long detour:
Question: [Is it] wrong to have two different ingress controllers on the same K8S cluster[?]
I believe it is OK, though this can cause errors like the one you are seeing, as the two ingress controllers fight over the K8s Ingress resource.
As mentioned above, if you are using Istio, it's better to stick with the Istio IngressGateway instead of K8s Ingress. If you need K8s Ingress for some specific reason, you could use another ingress controller, such as Nginx, for the K8s Ingress.
As to the error you saw, it comes from the admission webhook deployed by Nginx: ingress-nginx-controller-admission.nginx.svc is not reachable. You have created a K8s Ingress helloworld-ingress with the kubernetes.io/ingress.class: istio annotation, but the Nginx webhook is still intercepting K8s Ingress handling. The webhook then fails to process the resource, because the Pod/Service responsible for the webhook traffic cannot be reached.
The error itself just indicates that something is unhealthy in the cluster - potentially not enough Nodes are allocated, so Pods cannot be scheduled. It's also good to note that Istio has a non-trivial CPU and memory footprint, which may be putting extra pressure on the cluster.
Both products have distinct characteristics and solve different types of problems, so there is no issue in having both installed on your cluster.
Calling both of them ingress controllers is not quite correct:
- Nginx is a well-known web server
- The Nginx ingress controller is an implementation of a Kubernetes ingress controller based on Nginx (load balancing, HTTPS termination, authentication, traffic routing, etc.)
- Istio is a service mesh (well known in microservice architectures and used to address cross-cutting concerns in a standard way - things like logging, tracing, HTTPS termination, etc. - at the Pod level)
Can you provide more details about what you mean by "K8S suddenly goes down"? Are you talking about the cluster nodes or the Pods running inside them?
Thanks.
Have you looked at specifying the ingress class (kubernetes.io/ingress.class: "nginx"), as mentioned here? https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
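For example, an Ingress for one of your tools pinned to the Nginx controller might look roughly like this (name, namespace, host, and backend Service are placeholders):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: harbor-ingress        # placeholder name
  namespace: tools            # placeholder namespace
  annotations:
    # only the Nginx ingress controller should pick this resource up
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: harbor.example.com  # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: harbor # placeholder backend Service
          servicePort: 80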
Related
Let me explain what the deployment consists of. First of all, I created a Cloud SQL database by importing some data. To connect the database to the application I used the Cloud SQL proxy, and so far everything works.
I created a Kubernetes cluster in which there is a Pod containing the Docker container of the application that I want to deploy, and so far everything works... To reach the application over HTTPS I followed several online guides (https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#console, etc.); they all converge on using a Kubernetes Service and an Ingress. The Service maps Spring's port 8080 to port 80, while the Ingress creates a load balancer that exposes an HTTPS frontend. I configured a health check and created a (Google-managed) certificate associated with a domain that maps to the static IP assigned to the Ingress.
Apparently everything works, but as soon as you try to reach https://example.org/ from the browser, you are correctly redirected to the login page (http://example.org/login) - but as you can see it switches to the HTTP protocol, and Google obviously returns a 404 since HTTP is disabled. Forcing HTTPS on the address it redirects you to (https://example.org/login) for some absurd reason adds "www" in front of the domain name (https://www.example.org/login). If you use the static IP instead of the domain, the www problem disappears... However, every time you make an HTTPS request it keeps switching back to HTTP.
P.S. The general goal is to have HTTP from the application up to the load balancer (Google's internal network) and HTTPS between the load balancer and the client.
Can anyone help me? In case it helps, I am posting the YAML of the deployment. Thank you very much!
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: my-app # Label for the Deployment
name: my-app # Name of Deployment
spec:
minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
selector:
matchLabels:
run: my-app
template: # Pod template
metadata:
labels:
run: my-app # Labels Pods from this Deployment
spec: # Pod specification; each Pod created by this Deployment has this specification
containers:
- image: eu.gcr.io/my-app/my-app-production:latest # Application to run in Deployment's Pods
name: my-app-production # Container name
# Note: The following line is necessary only on clusters running GKE v1.11 and lower.
# For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#align_rollouts
ports:
- containerPort: 8080
protocol: TCP
- image: gcr.io/cloudsql-docker/gce-proxy:1.17
name: cloud-sql-proxy
command:
- "/cloud_sql_proxy"
- "-instances=my-app:europe-west6:my-app-cloud-sql-instance=tcp:3306"
- "-credential_file=/secrets/service_account.json"
securityContext:
runAsNonRoot: true
volumeMounts:
- name: my-app-service-account-secret-volume
mountPath: /secrets/
readOnly: true
volumes:
- name: my-app-service-account-secret-volume
secret:
secretName: my-app-service-account-secret
terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: my-app-health-check
spec:
healthCheck:
checkIntervalSec: 60
port: 8080
type: HTTP
requestPath: /health/check
---
apiVersion: v1
kind: Service
metadata:
name: my-app-svc # Name of Service
annotations:
cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
cloud.google.com/backend-config: '{"default": "my-app-health-check"}'
spec: # Service's specification
type: ClusterIP
selector:
run: my-app # Selects Pods labelled run: neg-demo-app
ports:
- port: 80 # Service's port
protocol: TCP
targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-app-ing
annotations:
kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
kubernetes.io/ingress.allow-http: "false"
spec:
backend:
serviceName: my-app-svc
servicePort: 80
tls:
- secretName: example-org
hosts:
- example.org
---
As I mentioned in the comment section, you can redirect HTTP to HTTPS (a rough sketch follows the reading list below).
Google Cloud has quite good documentation with step-by-step guides, including firewall configuration and tests. You can find this guide here.
I would also suggest reading docs like:
Traffic management overview for external HTTP(S) load balancers
Setting up traffic management for external HTTP(S) load balancers
Routing and traffic management
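If you keep the GKE Ingress, a minimal sketch of the redirect using the GKE-specific FrontendConfig resource (this assumes a reasonably recent GKE version; the FrontendConfig name is a placeholder, and kubernetes.io/ingress.allow-http must stay enabled so the load balancer can answer on port 80 and issue the redirect):
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config     # placeholder name
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
    # reference the FrontendConfig; note the allow-http: "false" annotation is removed
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
spec:
  backend:
    serviceName: my-app-svc
    servicePort: 80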
As an alternative, you could check Nginx Ingress with the proper annotation (force-ssl-redirect). Some examples can be found here.
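A minimal sketch of that alternative, assuming you deploy the Nginx ingress controller and move the Ingress over to it (host, Service, and TLS secret names mirror the manifest in the question):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # redirect any plain-HTTP request to HTTPS at the ingress
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 80
  tls:
  - hosts:
    - example.org
    secretName: example-org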
The end goal: be able to SFTP into the server at domain.com:42150, routed through Kubernetes.
The reason: this behavior is currently handled by an HAProxy config that we are moving away from, but we still need to support it in our Kubernetes setup.
I came across this and could not figure out how to make it work.
I have the IP of the sftp server and the port.
So, basically, if a request comes in at domain.com:42150, it should connect to external-ip:22.
I have created a ConfigMap like the one in the linked article:
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: nginx-ingress
data:
42150: "nginx-ingress/external-sftp:80"
Which, by my understanding, should route requests on port 42150 to this service:
apiVersion: v1
kind: Service
metadata:
name: external-sftp
namespace: nginx-ingress
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 22
protocol: TCP
And although it's not listed in that article, I know from connecting to other outside services that I need to create an Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
name: external-sftp
namespace: nginx-ingress
subsets:
- addresses:
- ip: 12.345.67.89
ports:
- port: 22
protocol: TCP
Obviously this isn't working. I never ask questions here. Usually my answers are easy to find, but this one I cannot find an answer for. I'm just stuck.
Is there something I'm missing? I'm thinking this way of doing it is not possible. Is there a better way to go about doing this?
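One detail worth noting about the tcp-services mechanism described above: per the ingress-nginx documentation, the ingress controller's own Service also has to expose the extra TCP port. A rough sketch, assuming the controller Service is called nginx-ingress in the nginx-ingress namespace (your actual name and selector will differ):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress          # hypothetical controller Service name
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress         # must match the controller Pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: sftp
    port: 42150                # the port mapped in the tcp-services ConfigMap
    targetPort: 42150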
If I have a kubernetes cluster in AKS with an nginx-ingress, can I then forward certain traffic to something external to the cluster like an App Service?
If I open my-domain.com/svc3, I want traffic to be routed to the App Service.
If this is not directly possible, what would be the best solution?
1) I could put an additional load balancer (like AppGateway) in front of both the AKS cluster and the App Service
2) I could instantiate a second nginx as a service, which then routes traffic to the app service
3) ... ?
I think you can map the external service into Kubernetes:
kind: Service
apiVersion: v1
metadata:
name: external-service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
Here is the Endpoints object:
kind: Endpoints
apiVersion: v1
metadata:
name: external-service
subsets:
- addresses:
- ip: 101.280.1.44
ports:
- port: 80
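Then, to have my-domain.com/svc3 reach the App Service through the existing nginx ingress, you could point an ingress rule at this external-service. A rough sketch (the upstream-vhost value is an assumption, since App Service routes on its own hostname; adapt the names to your setup):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: svc3-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # hypothetical App Service hostname; App Service expects its own Host header
    nginx.ingress.kubernetes.io/upstream-vhost: "my-app.azurewebsites.net"
spec:
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /svc3
        backend:
          serviceName: external-service
          servicePort: 80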
For more information, you can also check this video:
https://www.youtube.com/watch?v=fvpq4jqtuZ8
You can also read this document for more information: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
I am using Spring Boot to create a REST API that accesses MongoDB at startup.
I am creating one Deployment and Service for the REST API and one Deployment and Service for MongoDB.
But my REST API Pod is breaking and not coming up: at startup it looks for the MongoDB service, but it is not able to reach that host.
I have exposed MongoDB as a Service and the REST API as a Service as well.
The REST API is exposed as NodePort and MongoDB as ClusterIP.
I have tried everything, but no solution.
===================================MongoDB deployment========================
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: tech-hunt-mongodb
spec:
replicas: 1
template:
metadata:
name: tech-hunt-mongodb
labels:
app: tech-hunt
module: mongodb
spec:
containers:
- image: <image>
name: tech-hunt-mongodb
ports:
- containerPort: 27017
===================================MongoDB service========================
apiVersion: v1
kind: Service
metadata:
name: tech-hunt-mongodb
spec:
#type: ClusterIP
selector:
app: tech-hunt
module: mongodb
ports:
- port: 27017
targetPort: 27017
protocol: TCP
clusterIP: None
#nodePort: 30000
#protocol: TCP
===========================REST API deployment================================
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: tech-hunt-api
spec:
template:
metadata:
name: tech-hunt-api
labels:
app: tech-hunt
module: rest-api
spec:
containers:
- image: <image>
name: tech-hunt-api
ports:
- containerPort: 4000
===============================REST API service=============================
apiVersion: v1
kind: Service
metadata:
name: tech-hunt-api-client
spec:
type: NodePort
selector:
app: tech-hunt
module: rest-api-client
ports:
- port: 5000
targetPort: 5000
nodePort: 30010
clusterIP: None
clusterIP: None is almost certainly not what you want, as it places the burden of populating the Endpoints entirely on you -- or on an external controller (the StatefulSet controller is one such example).
You'll have to delete and then recreate the tech-hunt-mongodb Service in order to change its clusterIP away from None to an auto-populated value, but you should definitely do that first.
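A minimal sketch of the recreated Service, identical to the one in the question except without clusterIP: None, so Kubernetes assigns a cluster IP and populates the Endpoints from the selector:
apiVersion: v1
kind: Service
metadata:
  name: tech-hunt-mongodb
spec:
  selector:
    app: tech-hunt
    module: mongodb
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP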
But my REST API pod is breaking and not coming up as on load time it looks for mongodb service but it is not able to ping that host.
As an FYI, you will never be able to "ping" a Service IP since those addresses are "fake"; only the tuple of Service IP and its traffic port (so, 27017 or 5000 in your two examples) will respond to any packets. You should use curl or nc to test connectivity, and not ping.
I was using the wrong MongoDB image, which was not giving clients access. So the created container was not reachable by any means, via curl or otherwise.
Now I am using NodePort, and my REST API container is able to get a response from the MongoDB container.
I'm trying to integrate socket.io into an application deployed on Google Kubernetes Engine. Developing locally, everything works great. But once deployed, I am continuously getting the dreaded 400 response when my sockets try to connect. I've been searching on SO and other sites for a few days now and I haven't found anything that fixes my issue.
Unfortunately this architecture was set up by a developer who is no longer at our company, and I'm certainly not a Kubernetes or GKE expert, so I'm definitely not sure I've got everything set up correctly.
Here's our setup:
we have 5 app pods that serve our application distributed across 5 cloud nodes (GCE vm instances)
we are using the nginx ingress controller (https://github.com/kubernetes/ingress-nginx) to create a load balancer to distribute traffic between our nodes
Here's what I've tried so far:
adding the following annotations to the ingress:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
adding sessionAffinity: ClientIP to the backend service referenced by the ingress
These measures don't seem to have made any difference, I'm still getting a 400 response. If anyone has handled a similar situation or has any advice to point me in the right direction I'd be very, very appreciative!
I just set up nginx ingress with the same config where we are using socket.io.
Here is my ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: core-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/websocket-services : "app-test"
nginx.ingress.kubernetes.io/rewrite-target: /
certmanager.k8s.io/cluster-issuer: core-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/websocket-services : "socket-service"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
spec:
tls:
- hosts:
- <domain>
secretName: core-prod
rules:
- host: <domain>
http:
paths:
- backend:
serviceName: service-name
servicePort: 80
I was also facing the same issue, so I added proxy-send-timeout and proxy-read-timeout.
I'm guessing you have probably found the answer by now, but you have to add an annotation to your ingress to specify which service will provide websocket upgrades. It looks something like this:
# web socket support
nginx.org/websocket-services: "(your-websocket-service)"