Let me explain what the deployment consists of. First of all, I created a Cloud SQL database by importing some data. To connect the database to the application I used cloud-sql-proxy, and so far everything works.
I created a Kubernetes cluster in which there is a pod containing the Docker container of the application that I want to deploy, and so far everything works... To reach the application over HTTPS I then followed several online guides (https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#console , etc.), all of which converge on using a Kubernetes Service and an Ingress. The Service maps Spring's port 8080 to port 80, while the Ingress creates a load balancer that exposes an HTTPS frontend. I configured a health check and created a Google-managed certificate associated with a domain that maps to the static IP assigned to the Ingress.
Apparently everything works, but as soon as you try to reach https://example.org/ from the browser, you are correctly redirected to the login page ( http://example.org/login ); as you can see, it switches to the HTTP protocol, and obviously a 404 is returned by Google since HTTP is disabled. Forcing HTTPS on the address it redirects to ( https://example.org/login ) for some absurd reason adds "www" in front of the domain name ( https://www.example.org/login ). If I skip the domain and use the static IP instead, the www problem disappears... but every request made over HTTPS still keeps being switched to HTTP.
P.S. The general goal is to have HTTP between the load balancer and the application (inside Google's internal network) and HTTPS between the load balancer and the client.
Can anyone help me? In case it helps, here is the YAML of the deployment. Thank you very much!
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app # Label for the Deployment
  name: my-app # Name of Deployment
spec:
  minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
  selector:
    matchLabels:
      run: my-app
  template: # Pod template
    metadata:
      labels:
        run: my-app # Labels Pods from this Deployment
    spec: # Pod specification; each Pod created by this Deployment has this specification
      containers:
      - image: eu.gcr.io/my-app/my-app-production:latest # Application to run in Deployment's Pods
        name: my-app-production # Container name
        # Note: The following line is necessary only on clusters running GKE v1.11 and lower.
        # For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#align_rollouts
        ports:
        - containerPort: 8080
          protocol: TCP
      - image: gcr.io/cloudsql-docker/gce-proxy:1.17
        name: cloud-sql-proxy
        command:
        - "/cloud_sql_proxy"
        - "-instances=my-app:europe-west6:my-app-cloud-sql-instance=tcp:3306"
        - "-credential_file=/secrets/service_account.json"
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: my-app-service-account-secret-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: my-app-service-account-secret-volume
        secret:
          secretName: my-app-service-account-secret
      terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-health-check
spec:
  healthCheck:
    checkIntervalSec: 60
    port: 8080
    type: HTTP
    requestPath: /health/check
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc # Name of Service
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "my-app-health-check"}'
spec: # Service's specification
  type: ClusterIP
  selector:
    run: my-app # Selects Pods labelled run: my-app
  ports:
  - port: 80 # Service's port
    protocol: TCP
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: my-app-svc
    servicePort: 80
  tls:
  - secretName: example-org
    hosts:
    - example.org
As I mentioned in the comment section, you can redirect HTTP to HTTPS.
Google Cloud has quite good documentation, and you can find step-by-step guides there, including firewall configuration and tests. You can find this guide here.
I would also suggest reading docs like:
Traffic management overview for external HTTP(S) load balancers
Setting up traffic management for external HTTP(S) load balancers
Routing and traffic management
As an alternative, you could check the Nginx Ingress with the proper annotation (force-ssl-redirect). Some examples can be found here.
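For illustration, here is a minimal sketch of that Nginx Ingress alternative, reusing the Service, host and certificate names from the manifests above (treat them as assumptions; the force-ssl-redirect annotation is the one documented by ingress-nginx, and it requires the certificate to be available as a Kubernetes TLS Secret):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Redirect plain-HTTP requests to HTTPS at the ingress controller
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: example-org # assumed TLS Secret holding the certificate and key
    hosts:
    - example.org
  rules:
  - host: example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 80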
The end goal: be able to SFTP into the server at domain.com:42150, routed through Kubernetes.
The reason: this behavior is currently handled by an HAProxy config that we are moving away from, but we still need to support it in our Kubernetes setup.
I came across this and could not figure out how to make it work.
I have the IP of the sftp server and the port.
So, basically, if a request comes in at domain.com:42150, it should connect to external-ip:22.
I have created a config-map like the one in the linked article:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  42150: "nginx-ingress/external-sftp:80"
Which, by my understanding should route requests to port 42150 to this service:
apiVersion: v1
kind: Service
metadata:
  name: external-sftp
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 22
    protocol: TCP
And although it's not listed in that article, I know from connecting to other outside services, I need to create an endpoint to use.
apiVersion: v1
kind: Endpoints
metadata:
  name: external-sftp
  namespace: nginx-ingress
subsets:
- addresses:
  - ip: 12.345.67.89
  ports:
  - port: 22
    protocol: TCP
Obviously this isn't working. I never ask questions here; usually the answers are easy to find, but for this one I cannot find an answer. I'm just stuck.
Is there something I'm missing? I'm thinking this way of doing it is not possible. Is there a better way to go about doing this?
I have installed the following two different ingress controllers on my DigitalOcean managed K8S cluster:
Nginx
Istio
and they have been assigned two different IP addresses. My question is: is it wrong to have two different ingress controllers on the same K8s cluster?
The reason I have done it is that Nginx is for tools like Harbor, Argo CD, etc., and Istio is for the microservices.
I have also noticed that when both are installed alongside each other, the cluster sometimes suddenly goes down during a deployment.
For example, I have deployed:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
  namespace: dev
spec:
  rules:
  - host: hello.service.databaker.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
---
Then I've got:
Error from server (InternalError): error when creating "istio-app.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.245.107.175:443: i/o timeout
You have raised several points - before answering your question, let's take a step back.
K8s Ingress not recommended by Istio
It is important to note that Istio does not recommend using K8s Ingress:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
Ref: https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
As noted, the Istio Gateway (Istio IngressGateway and EgressGateway) acts as the edge; you can find more in https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/.
Multiple endpoints within Istio
If you need to assign one public endpoint for business requirements and another for monitoring (such as Argo CD and Harbor, as you mentioned), you can achieve that using Istio only. There are roughly two approaches to this:
1. Create separate Istio IngressGateways - one for main traffic and another for monitoring
2. Create one Istio IngressGateway, and use a Gateway definition to handle multiple access patterns
Both are valid, and depending on requirements, you may need to choose one way or the other.
Approach #2 is where Istio's traffic management system shines. It is a great example of Istio's power, but the setup is slightly complex if you are new to it. So here goes an example.
Example of Approach #2
When you create the Istio IngressGateway following the default installation, it creates an istio-ingressgateway Service like the one below (the YAML definition is heavily simplified):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
  name: istio-ingressgateway
  namespace: istio-system
  # ... other attributes ...
spec:
  type: LoadBalancer
  # ... other attributes ...
This LB Service would then be your endpoint. (I'm not familiar with DigitalOcean K8s env, but I suppose they would handle LB creation.)
Then, you can create a Gateway definition like the one below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
  - port:
      number: 3000
      name: https-your-system
      protocol: HTTPS
    hosts:
    - "your-business-domain.com"
    - "*.monitoring-domain.com"
    # ... other attributes ...
You can then create 2 or more VirtualService definitions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: business-virtsvc
spec:
  gateways:
  - istio-system/your-gateway # reference the Gateway defined above
  hosts:
  - "your-business-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: some-business-pod
        port:
          number: 3000
  # ... other attributes ...
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: monitoring-virtsvc
spec:
  gateways:
  - istio-system/your-gateway # reference the Gateway defined above
  hosts:
  - "harbor.monitoring-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: harbor-pod
        port:
          number: 3000
  # ... other attributes ...
NOTE: The above assumes a lot of things, such as port mapping, traffic handling, etc. Please check out the official docs for details.
So, back to the question after the long detour:
Question: [Is it] wrong to have two different ingress controllers on the same K8S cluster[?]
I believe it is OK, though this can cause errors like the one you are seeing, as the two ingress controllers fight over the same K8s Ingress resources.
As mentioned above, if you are using Istio, it's better to stick with the Istio IngressGateway instead of K8s Ingress. If you need K8s Ingress for some specific reason, you could use another ingress controller for it, like Nginx.
As for the error you saw, it comes from the admission webhook deployed by Nginx: ingress-nginx-controller-admission.nginx.svc is not reachable. You created a K8s Ingress helloworld-ingress with the kubernetes.io/ingress.class: istio annotation, but the Nginx webhook still intercepts K8s Ingress handling, and it fails to process the resource because the Pod / Service responsible for webhook traffic cannot be found.
The error itself just says that something is unhealthy in K8s - potentially not enough Nodes allocated to the cluster, so Pods cannot be scheduled. It's also worth noting that Istio has a non-trivial CPU and memory footprint, which may be putting extra pressure on the cluster.
Both products have distinct characteristics and solve different types of problems, so there is no issue in having both installed on your cluster.
Calling both of them ingress controllers is not correct:
- Nginx is a well-known web server
- The Nginx ingress controller is an implementation of a Kubernetes ingress controller based on Nginx (load balancing, HTTPS termination, authentication, traffic routing, etc.)
- Istio is a service mesh (well known in microservice architectures and used to address cross-cutting concerns in a standard way - things like logging, tracing, HTTPS termination, etc. - at the Pod level)
Can you provide more details about what you mean by "K8S suddenly goes down"? Are you talking about the cluster nodes or the Pods running inside?
Thanks.
Have you looked at specifying the ingress.class annotation (kubernetes.io/ingress.class: "nginx"), as mentioned here? https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
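For illustration, a sketch of an Ingress on the Nginx side carrying that class annotation, so that each controller only reconciles its own resources (the namespace, host and backend Service below are assumptions, not taken from your cluster):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tooling-ingress # hypothetical Ingress for the tools served by Nginx
  namespace: tools
  annotations:
    kubernetes.io/ingress.class: "nginx" # picked up by ingress-nginx only; the istio-classed Ingress is left alone
spec:
  rules:
  - host: harbor.example.com # assumed host
    http:
      paths:
      - path: /
        backend:
          serviceName: harbor-portal # assumed Service name
          servicePort: 80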
I am new to Kubernetes. I followed Kubernetes The Hard Way by Kelsey Hightower and also this to set up Kubernetes in Azure. Now all the services are up and running fine, but I am not able to expose traffic using a load balancer. I tried to add a Service object of type LoadBalancer, but the external IP stays at <pending>. I need to add an ingress to expose the traffic.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: LoadBalancer
  externalIPs:
  - <ip>
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx-service
Thank you,
By default, Kubernetes The Hard Way doesn't include a solution for LoadBalancer Services, so the external IP staying pending forever is expected behavior. You need an out-of-the-box solution for that; a very commonly used one is MetalLB.
MetalLB isn't going to allocate an external IP for you; it will allocate an internal IP inside your VPC, and you have to create the necessary routing rules to route traffic to that IP.
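For illustration, a minimal sketch of MetalLB's legacy layer 2 configuration (the address range is an assumption and must be a free range from your own subnet, routable to the nodes; recent MetalLB releases use the IPAddressPool and L2Advertisement CRDs instead of this ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.240.0.200-10.240.0.220 # assumption: unused IPs from your subnet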
I have an application running in K8s. It has 3 microservices and nginx in front of them.
Each redirection goes through nginx first and is proxied as specified.
My Flask app is having issues redirecting without the port number. I run K8s locally via Minikube. Whenever I redirect to another page, the URL doesn't include the port number, which gives me an error.
if usernamedata == None:
    print("Could not log in")
else:
    if passworddata == password:
        print("Logged in")
        return redirect("/user/{0}".format(username))
Nginx is the only service exposed, and its URL is
http://192.168.99.107:31699
With my redirection in Flask I get redirected to http://192.168.99.107/user/David, which gives me connection refused.
If I add the port number and make it http://192.168.99.107:31699/user/David it works fine.
Do I need to specify the port number when redirecting? What if the service is taken down and recreated?
Also, this is my service definition for nginx:
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    svc: nginx
spec:
  selector:
    app: nginx-app
  type: LoadBalancer
  ports:
  - port: 80
How can I make the redirection within the Flask app work?
If the Service is taken down and recreated and you want to retain the same high port number, you need to specify nodePort and change the Service type to NodePort.
apiVersion: v1
kind: Service
metadata:
  name: nginx # matches the question's Service; the name determines the injected env vars (NGINX_SERVICE_HOST / NGINX_SERVICE_PORT)
  labels:
    svc: nginx
spec:
  type: NodePort # type is set to NodePort
  ports:
  - port: 80 # Service's port on its cluster IP
    targetPort: 80 # target port of the backing Pods
    nodePort: 31699 # the Service will only be available via this port on each cluster node, even if recreated
  selector:
    app: nginx-app
Within your Python code (if the Service is created before the Pod, Kubernetes injects these environment variables):
import os
...
service_host = os.environ.get("NGINX_SERVICE_HOST")
service_port = os.environ.get("NGINX_SERVICE_PORT")
...
redirect(f"http://{service_host}:{service_port}/user/{username}")