The end goal: be able to sftp into the server using domain.com:42150 using routing through Kubernetes.
The reason: This behavior is currently handled by an HAProxy config that we are moving away from, but we still need to support this behavior in our Kubernetes set up.
I came across this and could not figure out how to make it work.
I have the IP of the sftp server and the port.
So, basically, if a request comes in at domain.com:42150, it should connect to external-ip:22.
I have created a config-map like the one in the linked article:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  42150: "nginx-ingress/external-sftp:80"
Which, by my understanding should route requests to port 42150 to this service:
apiVersion: v1
kind: Service
metadata:
  name: external-sftp
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 22
    protocol: TCP
And although it's not listed in that article, I know from connecting to other outside services that I also need to create an Endpoints object for the Service to use:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-sftp
  namespace: nginx-ingress
subsets:
- addresses:
  - ip: 12.345.67.89
  ports:
  - port: 22
    protocol: TCP
Obviously this isn't working. I never ask questions here; usually the answers are easy to find, but for this one I cannot find an answer. I'm just stuck.
Is there something I'm missing? I'm starting to think this approach isn't possible. Is there a better way to go about doing this?
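For reference, my understanding is that the community ingress-nginx controller only reads the tcp-services ConfigMap when it is started with the --tcp-services-configmap flag, and that the TCP port also has to be opened on the controller's own Service. A minimal sketch of that Service change, assuming a default install in the nginx-ingress namespace (the controller Service name here is an assumption, not taken from my actual setup):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name of the controller's LoadBalancer Service
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - name: sftp
    port: 42150
    targetPort: 42150
    protocol: TCP
  # ... existing http/https ports and other attributes ...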
I have installed the following two different ingress controllers on my DigitalOcean managed K8S cluster:
Nginx
Istio
and they have been assigned two different IP addresses. My question is: is it wrong to have two different ingress controllers on the same K8s cluster?
The reason I have done it this way is that Nginx serves tools like Harbor, Argo CD, etc., and Istio serves the microservices.
I have also noticed that when both are installed alongside each other, K8s sometimes suddenly goes down during a deployment.
For example, I have deployed:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
  namespace: dev
spec:
  rules:
  - host: hello.service.databaker.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
---
Then I've got:
Error from server (InternalError): error when creating "istio-app.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.245.107.175:443: i/o timeout
You have raised several points - before answering your question, let's take a step back.
K8s Ingress not recommended by Istio
It is important to note that Istio does not recommend using K8s Ingress:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
Ref: https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
As noted, Istio Gateway (Istio IngressGateway and EgressGateway) acts as the edge, which you can find more in https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/.
Multiple endpoints within Istio
If you need one public endpoint for business traffic and another for tooling (such as Argo CD and Harbor, as you mentioned), you can achieve that with Istio alone. There are roughly two approaches:
1. Create separate Istio IngressGateways - one for main traffic, and another for monitoring
2. Create one Istio IngressGateway, and use a Gateway definition to handle multiple access patterns
Both are valid, and depending on requirements, you may need to choose one way or the other.
As to Approach #2, this is where Istio's traffic management really shines. It is a great example of Istio's power, but the setup is slightly complex if you are new to it, so here is an example.
Example of Approach #2
When you create an Istio IngressGateway by following the default installation, it creates an istio-ingressgateway Service like the one below (the YAML is heavily simplified):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
  name: istio-ingressgateway
  namespace: istio-system
  # ... other attributes ...
spec:
  type: LoadBalancer
  # ... other attributes ...
This LB Service would then be your endpoint. (I'm not familiar with DigitalOcean K8s env, but I suppose they would handle LB creation.)
Then, you can create Gateway definition like below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
  - port:
      number: 3000
      name: https-your-system
      protocol: HTTPS
    hosts:
    - "your-business-domain.com"
    - "*.monitoring-domain.com"
    # ... other attributes ...
You can then create 2 or more VirtualService definitions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: business-virtsvc
spec:
  gateways:
  - istio-ingressgateway.istio-system.svc.cluster.local
  hosts:
  - "your-business-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: some-business-pod
        port:
          number: 3000
  # ... other attributes ...
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: monitoring-virtsvc
spec:
  gateways:
  - istio-ingressgateway.istio-system.svc.cluster.local
  hosts:
  - "harbor.monitoring-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: harbor-pod
        port:
          number: 3000
  # ... other attributes ...
NOTE: The above assumes a lot of things, such as port mapping and traffic handling. Please check out the official docs for details.
So, back to the question after long detour:
Question: [Is it] wrong to have two different ingress controllers on the same K8S cluster[?]
I believe it is OK, though it can cause errors like the one you are seeing, as the two ingress controllers fight over the K8s Ingress resource.
As mentioned above, if you are using Istio, it's better to stick with the Istio IngressGateway instead of K8s Ingress. If you need K8s Ingress for some specific reason, you could use another ingress controller, such as Nginx, for those Ingress resources.
As to the error you saw, it's coming from the admission webhook deployed by Nginx: ingress-nginx-controller-admission.nginx.svc is not reachable. You created the K8s Ingress helloworld-ingress with the kubernetes.io/ingress.class: istio annotation, but the Nginx validating webhook still intercepts K8s Ingress handling. The webhook then fails to process the resource, because the Pod / Svc responsible for webhook traffic cannot be found.
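For reference, the admission registration behind that error looks roughly like the sketch below. The field values are inferred from your error message and from a default ingress-nginx install, so treat them as assumptions rather than your exact manifest. The point is that its rules match every Ingress in the cluster, regardless of the ingress.class annotation:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission    # name assumed from a default install
webhooks:
- name: validate.nginx.ingress.kubernetes.io    # the webhook named in your error
  rules:
  - apiGroups: ["extensions", "networking.k8s.io"]
    apiVersions: ["v1beta1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      name: ingress-nginx-controller-admission   # the Service named in the error URL
      namespace: nginx
  failurePolicy: Fail
  # ... other attributes ...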
The error itself just says something is unhealthy in K8s - potentially there are not enough Nodes allocated to the cluster, and thus Pod allocation is not happening. It's also worth noting that Istio has a non-trivial CPU and memory footprint, which may be putting more pressure on the cluster.
Both products have distinct characteristics and solve different types of problems, so there is no issue in having both installed on your cluster.
However, calling them both "ingress controllers" is not quite correct:
- Nginx is a well-known web server
- The Nginx ingress controller is an implementation of a Kubernetes Ingress controller based on Nginx (load balancing, HTTPS termination, authentication, traffic routing, etc.)
- Istio is a service mesh (well known in microservice architectures and used to address cross-cutting concerns - things like logging, tracing, HTTPS termination, etc. - in a standard way at the Pod level)
Can you provide more details on what you mean by "K8S suddenly goes down"? Are you talking about the cluster nodes or the Pods running inside?
Thanks.
Have you looked at specifying the ingress.class (kubernetes.io/ingress.class: "nginx"), as mentioned here? https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
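For example, an Ingress meant for the Nginx controller would carry the annotation roughly like this (a fragment only, reusing the names from the question for illustration):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"   # picked up by the Nginx controller; "istio" targets Istio instead
spec:
  # ... rules as in the question ...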
I am trying to set up my app on GKE and use an internal load balancer for public access. I am able to deploy the cluster / load balancer service without any issues, but when I try to access the external ip address of the load balancer, I get Connection Refused and I am not sure what is wrong / how to debug this.
These are the steps I did:
I applied my deployment YAML file via kubectl apply -f file.yaml, and then applied my load balancer service YAML file with kubectl apply -f service.yaml. After both were deployed, I ran kubectl get service to fetch the external IP address of the load balancer.
Here is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-api
        image: gcr.io/...
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "250M"
            cpu: "250m"
          limits:
            memory: "1G"
            cpu: "500m"
      - name: my-app
        image: gcr.io/...
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "250M"
            cpu: "250m"
          limits:
            memory: "1G"
            cpu: "500m"
and here is my service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: my-app-ilb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
My deployment has two containers: a backend API and a frontend. What I want is to be able to go to [external ip address]:3000 and see my web app.
I hope this is enough information; please let me know if there is anything else I may be missing / can add.
Thank you all!
You need to allow traffic to flow into your cluster by creating a firewall rule:
gcloud compute firewall-rules create my-rule --allow=tcp:3000
Remove this annotation:
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
You need an external load balancer.
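Put together, the Service would look roughly like this - the same manifest as in the question, just without the internal annotation (a sketch, not a tested manifest):
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb
  labels:
    app: my-app-ilb
spec:
  type: LoadBalancer   # provisions an external (public) load balancer on GKE by default
  selector:
    app: my-app
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP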
I'm using minikube, and I have two pods, pod A and pod B.
I want pod A to make an HTTP request to pod B, assuming the two pods are in the same namespace (for example, namespace X).
When I write the code for pod A, which address should I use to reach pod B?
You need to expose Pod-B as a Service.
Assuming your Pod-B definition looks something like this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-b            # Kubernetes resource names must be lowercase
  labels:
    app: my-service
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
To wrap Pod-B in a higher-level abstraction, i.e. a Service, define something like this:
kind: Service
apiVersion: v1
metadata:
  name: pod-b-service    # lowercase name, resolvable through cluster DNS
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
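Pod A can then reach Pod B through the Service name via cluster DNS: http://pod-b-service from within the same namespace, or http://pod-b-service.<namespace>.svc.cluster.local from anywhere in the cluster. A hypothetical client pod (not from the question) to illustrate the call:
apiVersion: v1
kind: Pod
metadata:
  name: pod-a-client     # hypothetical name, deployed in the same namespace as pod-b-service
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: curlimages/curl:latest
    # resolves pod-b-service through cluster DNS and makes one HTTP request
    command: ["curl", "-s", "http://pod-b-service:80"]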
I am using Spring Boot to create a REST API that accesses MongoDB at load time.
I have one Deployment and Service for the REST API and one Deployment and Service for MongoDB.
But my REST API pod keeps failing and does not come up, because at load time it looks for the MongoDB service and is not able to reach that host.
I have exposed MongoDB as a service and also the REST API as a service.
The REST API is exposed as NodePort and MongoDB is exposed as ClusterIP.
I have tried everything, but found no solution.
===================================MongoDB deployment========================
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tech-hunt-mongodb
spec:
  replicas: 1
  template:
    metadata:
      name: tech-hunt-mongodb
      labels:
        app: tech-hunt
        module: mongodb
    spec:
      containers:
      - image: <image>
        name: tech-hunt-mongodb
        ports:
        - containerPort: 27017
===================================MongoDB service========================
apiVersion: v1
kind: Service
metadata:
  name: tech-hunt-mongodb
spec:
  #type: ClusterIP
  selector:
    app: tech-hunt
    module: mongodb
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  clusterIP: None
  #nodePort: 30000
  #protocol: TCP
===========================REST API deployment================================
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tech-hunt-api
spec:
  template:
    metadata:
      name: tech-hunt-api
      labels:
        app: tech-hunt
        module: rest-api
    spec:
      containers:
      - image: <image>
        name: tech-hunt-api
        ports:
        - containerPort: 4000
===============================REST API service=============================
apiVersion: v1
kind: Service
metadata:
  name: tech-hunt-api-client
spec:
  type: NodePort
  selector:
    app: tech-hunt
    module: rest-api-client
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30010
  clusterIP: None
clusterIP: None is almost certainly not what you want to happen, as that places the burden of populating the Endpoints entirely on you -- or on an external controller (the StatefulSet controller is one such example).
You'll have to delete, and then recreate, the tech-hunt-mongodb Service in order to change its clusterIP away from None over to an auto-populated value, but you should for sure do that first.
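For clarity, that would be the same Service as above with the clusterIP line dropped (a sketch, not a tested manifest):
apiVersion: v1
kind: Service
metadata:
  name: tech-hunt-mongodb
spec:
  selector:
    app: tech-hunt
    module: mongodb
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  # clusterIP is omitted so Kubernetes assigns one and manages the Endpoints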
But my REST API pod is breaking and not coming up as on load time it looks for mongodb service but it is not able to ping that host.
As an FYI, you will never be able to "ping" a Service IP since those addresses are "fake"; only the tuple of Service IP and its traffic port (so, 27017 or 5000 in your two examples) will respond to any packets. You should use curl or nc to test connectivity, and not ping.
I was using the wrong MongoDB image, one that was not allowing client access, so the created container was not reachable via curl or anything else.
I am now using NodePort, and my REST API container is able to get a response from the MongoDB container.