Nginx Ingress Error 413 Request Entity Too Large - nginx

I use ingress-nginx installed via its Helm chart. Previously, when I uploaded a file (50 MB), I got the error 413 Request Entity Too Large from nginx.
So I changed the proxy-body-size value in my values.yaml to 150m, which should let me upload the file.
But now I get the error "413 Request Entity Too Large openresty/1.13.6.2".
I checked the nginx.conf file on the ingress controller and the value for client_max_body_size is correctly set to 150m.
After some research I found out that OpenResty is used by the Lua module in nginx.
Does anybody know how I can set this limit for OpenResty as well, or which parameter I am missing?
My current config is the following:
values.yml:
ingress-nginx:
  defaultBackend:
    nodeSelector:
      beta.kubernetes.io/os: linux
  controller:
    replicaCount: 2
    resources:
      requests:
        cpu: 1
        memory: 4Gi
      limits:
        cpu: 2
        memory: 7Gi
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 90
      targetMemoryUtilizationPercentage: 90
    ingressClassResource:
      name: nginx
      controllerValue: "k8s.io/nginx"
    nodeSelector:
      beta.kubernetes.io/os: linux
    admissionWebhooks:
      enabled: false
      patch:
        nodeSelector:
          beta.kubernetes.io/os: linux
    extraArgs:
      ingress-class: "nginx"
    config:
      proxy-buffer-size: "16k"
      proxy-body-size: "150m"
      client-body-buffer-size: "128k"
      large-client-header-buffers: "4 32k"
      ssl-redirect: "false"
      use-forwarded-headers: "true"
      compute-full-forwarded-for: "true"
      use-proxy-protocol: "false"
ingress.yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: namespacename
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "128k"
    nginx.ingress.kubernetes.io/proxy-body-size: "150m"
spec:
  tls:
  - hosts:
    - hostname
  rules:
  - host: hostname
    http:
      paths:
      - path: /assets/static/
        pathType: ImplementationSpecific
        backend:
          service:
            name: servicename
            port:
              number: 8080

So it turns out the application that had the error had another reverse proxy in front of it (which uses Lua and OpenResty for OAuth registration).
The proxy-body-size attribute needed to be raised there too. After that, the file upload worked.
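For anyone hitting the same chain of proxies: the limit has to be raised at every hop the request passes through. Below is a minimal sketch of what that could look like, assuming the intermediate OAuth proxy is itself exposed through an ingress-nginx Ingress (the resource names, path, and port here are hypothetical; if the proxy instead ships its own nginx/OpenResty configuration, the equivalent directive there is client_max_body_size).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth-proxy                 # hypothetical name for the proxy's Ingress
  namespace: namespacename
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "150m"   # match the app's limit
spec:
  rules:
  - host: hostname
    http:
      paths:
      - path: /oauth
        pathType: ImplementationSpecific
        backend:
          service:
            name: oauth-proxy-service   # hypothetical Service for the proxy
            port:
              number: 4180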

Related

How to add a VirtualServer on top of a service in Kubernetes?

I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
      - name: write-your-own-name-container
        image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed nginx-controller using the command
helm install controller-free nginx-stable/nginx-ingress \
--set controller.ingressClass=nginx-deployment \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=30040 \
--set controller.enablePreviewPolicies=true \
--namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: write-your-own-name
    service: write-your-own-name
    port: 80
  routes:
  - path: /
    action:
      pass: write-your-own-name
  - path: /hehe
    action:
      redirect:
        url: http://www.nginx.com
        code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding the VirtualServer. But when I try accessing the page 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing this is because the VirtualServer is not working on top of the service. How do I expose my service with a VirtualServer? Or alternatively: I want rate limiting on my service, so how do I achieve that?
I used the following tutorial to get rate-limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.

Automatically update Kubernetes resource if another resource is created

I currently have the following challenge: we are using two ingress controllers in our cloud Kubernetes cluster, a custom Nginx ingress controller and a cloud ingress controller on the load balancer.
The challenge is that when an Nginx ingress resource is created, an update of the cloud ingress controller's ingress resource should be triggered automatically. The cloud provider's ingress controller does not support host specifications like *.example.com, so we have to work around that.
Cloud Provider Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cloudprovider-listener-https
  namespace: nginx-ingress-controller
  annotations:
    kubernetes.io/elb.id: "<loadbalancerid>"
    kubernetes.io/elb.port: "<loadbalancerport>"
    kubernetes.io/ingress.class: "<cloudprovider>"
spec:
  rules:
  - host: "customer1.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  - host: "customer2.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  - host: "customer3.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  tls:
  - hosts:
    - "*.example.com"
    secretName: wildcard-cert
Nginx Ingress Config for each Customer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: nginx
    # ... several nginx-ingress annotations
spec:
  rules:
  - host: "customer<x>.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: <port>
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
Currently, the cloud ingress resource is created dynamically by Helm, but the update is triggered externally, and the paths are queried by a script ("kubectl get ing -A" + magic).
Is there a way to monitor Nginx ingresses internally in the cluster and automatically trigger an update of the cloud ingress for new ingress elements?
Or am I going about this completely wrong?
Hope you guys can help.
I'll describe a solution that requires running kubectl commands from within the Pod.
In short, you can use a script to continuously monitor the .metadata.generation value of the ingress resource, and when this value is increased, you can run your "kubectl get ing -A + magic".
The .metadata.generation value is incremented for all changes, except for changes to .metadata or .status.
Below, I will provide a detailed step-by-step explanation.
To check the generation of the web ingress resource, we can use:
### kubectl get ingress <INGRESS_RESOURCE_NAME> -n default --template '{{.metadata.generation}}'
$ kubectl get ingress web -n default --template '{{.metadata.generation}}'
1
To constantly monitor this value, we can create a Bash script:
NOTE: This script compares generation to newGeneration in a while loop to detect any .metadata.generation changes.
$ cat check-script.sh
#!/bin/bash
generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
while true; do
  newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
  if [[ "${generation}" != "${newGeneration}" ]]; then
    echo "Modified !!!" # Here you can additionally add "magic"
    generation=${newGeneration}
  fi
done
We want to run this script from inside the Pod, so I converted it to a ConfigMap, which allows us to mount the script as a volume (see: Using ConfigMaps as files from a Pod):
$ cat check-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: check-script
data:
  checkScript.sh: |
    #!/bin/bash
    generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
    while true; do
      newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
      if [[ "${generation}" != "${newGeneration}" ]]; then
        echo "Modified !!!"
        generation=${newGeneration}
      fi
    done
$ kubectl apply -f check-script-configmap.yml
configmap/check-script created
For security reasons, I've created a separate ingress-checker ServiceAccount with the view Role assigned and our Pod will run under this ServiceAccount:
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-checker-binding
subjects:
- kind: ServiceAccount
  name: ingress-checker
  namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-checker
  name: ingress-checker
spec:
  selector:
    matchLabels:
      app: ingress-checker
  template:
    metadata:
      labels:
        app: ingress-checker
    spec:
      serviceAccountName: ingress-checker
      volumes:
      - name: check-script
        configMap:
          name: check-script
      containers:
      - image: bitnami/kubectl
        name: test
        command: ["bash", "/mnt/checkScript.sh"]
        volumeMounts:
        - name: check-script
          mountPath: /mnt
After applying the above manifest, the ingress-checker Deployment was created and started monitoring the web ingress resource:
$ kubectl apply -f all-in-one.yaml
serviceaccount/ingress-checker created
clusterrolebinding.rbac.authorization.k8s.io/ingress-checker-binding created
deployment.apps/ingress-checker created
$ kubectl get deploy,pod | grep ingress-checker
deployment.apps/ingress-checker 1/1 1
pod/ingress-checker-6846474c9-rhszh 1/1 Running
Finally, we can check how it works.
From one terminal window I checked the ingress-checker logs with the $ kubectl logs -f deploy/ingress-checker command.
From another terminal window, I modified the web ingress resource.
Second terminal window:
$ kubectl edit ing web
ingress.networking.k8s.io/web edited
First terminal window:
$ kubectl logs -f deploy/ingress-checker
Modified !!!
As you can see, it works as expected. We have the ingress-checker Deployment that monitors changes to the web ingress resource.

Canary rollouts with linkerd and argo rollouts

I'm trying to configure a canary rollout for a demo, but I'm having trouble getting the traffic splitting to work with Linkerd. The funny part is I was able to get this working with Istio, and I find Istio to be much more complicated than Linkerd.
I have a basic Go service defined like this:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: fish
spec:
  [...]
  strategy:
    canary:
      canaryService: canary-svc
      stableService: stable-svc
      trafficRouting:
        smi: {}
      steps:
      - setWeight: 5
      - pause: {}
      - setWeight: 20
      - pause: {}
      - setWeight: 50
      - pause: {}
      - setWeight: 80
      - pause: {}
---
apiVersion: v1
kind: Service
metadata:
  name: canary-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stable-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    cert-manager.io/cluster-issuer: letsencrypt-production
    cert-manager.io/acme-challenge-type: dns01
    external-dns.alpha.kubernetes.io/hostname: fish.local
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: fish.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stable-svc
            port:
              number: 8080
When I do the deploy (sync) via ArgoCD I can see the traffic split is 50/50:
- apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    [...]
    name: fish
    namespace: default
  spec:
    backends:
    - service: canary-svc
      weight: "50"
    - service: stable-svc
      weight: "50"
    service: stable-svc
However, running a curl command in a while loop, I only get back the stable-svc. The only time I see a change is after I have completely moved the service to 100%.
I tried to follow this: https://argoproj.github.io/argo-rollouts/getting-started/smi/
Any help would be greatly appreciated.
Thanks
After reading this: https://linkerd.io/2.10/tasks/using-ingress/ I discovered you need to modify your ingress controller with a special annotation:
$ kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -
TLDR; if you want Linkerd functionality like Service Profiles, Traffic Splits, etc, there is additional configuration required to make the Ingress controller’s Linkerd proxy run in ingress mode.
So there's a bit more context in this issue but the TL;DR is ingresses tend to target individual pods instead of the service address. Putting Linkerd's proxy in ingress mode tells it to override that behaviour. NGINX does already have a setting that will let it hit services instead of endpoints directly, you can see that in their docs here.
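For reference, the ingress-nginx setting mentioned above is the nginx.ingress.kubernetes.io/service-upstream annotation, which makes nginx proxy to the Service's cluster IP instead of the individual pod endpoints. A sketch of how it might be added to the Ingress from this question, assuming ingress-nginx is the controller in use:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    # Send traffic to the Service's cluster IP rather than pod endpoints,
    # so Linkerd's TrafficSplit can take effect on the way to the backend.
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: fish.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stable-svc
            port:
              number: 8080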

Let's Encrypt kubernetes Ingress Controller issuing Fake Certificate

Not sure why I'm getting a fake certificate, even though the certificate is properly issued by Let's Encrypt using cert-manager.
The setup is running on Alibaba Cloud ECS, where one kube-master and one kube-minion form a Kubernetes cluster.
Service Details
root@kube-master:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h20m
my-nginx ClusterIP 10.101.150.247 <none> 80/TCP 77m
Pod Details
root@kube-master:~# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-nginx-6cc48cd8db-n6scm 1/1 Running 0 46s app=my-nginx,pod-template-hash=6cc48cd8db
Helm Cert-manager deployed
root@kube-master:~# helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
cert-manager 1 Tue Mar 12 15:29:21 2019 DEPLOYED cert-manager-v0.5.2 v0.5.2 kube-system
kindred-garfish 1 Tue Mar 12 17:03:41 2019 DEPLOYED nginx-ingress-1.3.1 0.22.0 kube-system
Certificate Issued Properly
root@kube-master:~# kubectl describe certs
Name: tls-prod-cert
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-12T10:26:58Z
Generation: 2
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: nginx-ingress-prod
UID: 5ab11929-44b1-11e9-b431-00163e005d19
Resource Version: 17687
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-prod-cert
UID: 5dad4740-44b1-11e9-b431-00163e005d19
Spec:
Acme:
Config:
Domains:
zariga.com
Http 01:
Ingress:
Ingress Class: nginx
Dns Names:
zariga.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-prod
Secret Name: tls-prod-cert
Status:
Acme:
Order:
URL: https://acme-v02.api.letsencrypt.org/acme/order/53135536/352104603
Conditions:
Last Transition Time: 2019-03-12T10:27:00Z
Message: Order validated
Reason: OrderValidated
Status: False
Type: ValidateFailed
Last Transition Time: <nil>
Message: Certificate issued successfully
Reason: CertIssued
Status: True
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateOrder 27s cert-manager Created new ACME order, attempting validation...
Normal IssueCert 27s cert-manager Issuing certificate...
Normal CertObtained 25s cert-manager Obtained certificate from ACME server
Normal CertIssued 25s cert-manager Certificate issued successfully
Ingress Details
root@kube-master:~# kubectl describe ingress
Name: nginx-ingress-prod
Namespace: default
Address:
Default backend: my-nginx:80 (192.168.123.202:80)
TLS:
tls-prod-cert terminates zariga.com
Rules:
Host Path Backends
---- ---- --------
* * my-nginx:80 (192.168.123.202:80)
Annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: true
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 7m13s nginx-ingress-controller Ingress default/nginx-ingress-prod
Normal CreateCertificate 7m8s cert-manager Successfully created Certificate "tls-prod-cert"
Normal UPDATE 6m57s nginx-ingress-controller Ingress default/nginx-ingress-prod
Letsencrypt Nginx Production Definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-prod
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: 'true'
  labels:
    app: 'my-nginx'
spec:
  backend:
    serviceName: my-nginx
    servicePort: 80
  tls:
  - secretName: tls-prod-cert
    hosts:
    - zariga.com
Maybe this will be helpful for someone experiencing similar issues. In my case, I forgot to specify the hostname in the Ingress YAML file in both the rules and tls sections.
After adding the hostname to both sections, it started responding with a proper certificate.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - my.host.com # <----
    secretName: tls-secret
  rules:
  - host: my.host.com # <----
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: my-nginx
          servicePort: 80
Sometimes this can happen if you are using the staging URL in your ClusterIssuer.
Check the Let's Encrypt URL set in your issuer.yaml or clusterissuer.yaml and change it to the production URL: https://acme-v02.api.letsencrypt.org/directory
I faced the same issue once, and changing the URL to the production URL solved it.
Also check that the ingress TLS secret you are using is correct.
An actual cluster issuer for production should look something like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: dev-clusterissuer
spec:
  acme:
    email: harsh@example.com
    privateKeySecretRef:
      name: dev-clusterissuer
    server: https://acme-v02.api.letsencrypt.org/directory # <---- check this server URL; it is for prod, use this one
    solvers:
    - http01:
        ingress:
          class: nginx
If you are using server: https://acme-staging-v02.api.letsencrypt.org/directory you will face this issue; replace it with server: https://acme-v02.api.letsencrypt.org/directory
If you're convinced that everything is set up correctly and it still doesn't work, try this.
Edit the deployment of your nginx-controller. Why? Because if it doesn't find the secret in the namespace it's deployed in, the Nginx controller serves its own certificate (a fake certificate). Not knowing this (I'm new to the game) cost me a few days of my life.
So, either change to the namespace where your Nginx Ingress controller is and get the name of the deployment, then:
kubectl edit deployment nginx-ingress-ingress-nginx-controller -n nginx-ingress
Or if there is only one deployment in that namespace you can just do
kubectl edit deployment
And you should be in edit mode for your nginx controller deployment. Look for the section: spec --> containers: --> args:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --default-ssl-certificate=app-namespace/letsencrypt-cert-prod
You can add a default certificate to use if your nginx controller doesn't find one (as I have above), so it will search in a namespace for a secret by adding:
--default-ssl-certificate=your-cert-namespace/your-cert-secret
your-cert-namespace: The namespace where your certificate secret is
your-cert-secret: The name of your certificate containing secret
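For context, here is a sketch of the kind of secret that flag points at (namespace and name taken from the example flag above; the certificate data would normally be created by cert-manager or kubectl create secret tls rather than written by hand):
apiVersion: v1
kind: Secret
metadata:
  name: letsencrypt-cert-prod    # your-cert-secret
  namespace: app-namespace       # your-cert-namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>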
Once you save and close your editor, it should be updated. Then check the logs of your cert manager pod:
kubectl logs cert-manager-xxxpodxx-abcdef -n cert-manager
To make sure that things are working as normal.
You probably won't have this issue if all your resources are deployed in the same namespace.
It is important to note that the ClusterIssuer spec for solvers changed. For people using cert-manager > 0.7.2, this comment saved me so much time: https://github.com/jetstack/cert-manager/issues/1650#issuecomment-518953464, especially on how to configure the ClusterIssuer and Certificate.
In my case, the problem was accessing the domain at the wrong port; my default HTTPS port wasn't 443 but 4443.
For me, the issue was that I forgot to kubectl apply the secret (in my case 'tls-secret.yml'). When deploying K8s manually, such an error is rarely made. However, I'm using GitLab CI/CD to deploy applications, and I forgot to add - kubectl apply -f ./kube/secret to my .gitlab-ci.yml.
In my case, I mistyped the name of my TLS secret inside my ingress rules:
instead of secretName: my-homepage-tls I typed secretName: myy-homepage-tls
For me, the issue was the ingress class name; since I'm using MicroK8s, the ingress class name is public:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: "your@email.tld"
    privateKeySecretRef:
      name: letsencrypt-prod
    server: "https://acme-v02.api.letsencrypt.org/directory"
    solvers:
    - http01:
        ingress:
          class: public
This happened to me today: I had two ingresses in the same namespace and used letsencrypt-prod as the secret name for both. One worked, the other didn't. The secrets are auto-generated and need to have unique names to avoid clashing.
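A minimal sketch of the fix, with hypothetical names, giving each Ingress its own TLS secret so cert-manager stores the certificates separately:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - one.example.com
    secretName: app-one-tls      # unique per Ingress
  rules:
  - host: one.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - two.example.com
    secretName: app-two-tls      # not reused from app-one
  rules:
  - host: two.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80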

How do I get one pod to network to another pod in Kubernetes? (SIMPLE)

I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.
So, can anyone share a simple example of the following (as a yaml file)? All I want is
two pods
let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
A way to network between them.
And then an example of making an API call from one to the other.
I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.
Hopefully if this example exists on stackoverflow it will serve other people as well.
Any help would be appreciated. Thanks.
EDIT; it looks like the easiest example may be using the Ingress controller.
EDIT EDIT;
I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.
So below is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
What I believe this is doing is
Deploying a frontend and backend app - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub and then pull the images down. One open question I have is, what if I don't want to pull the images from docker hub and rather would just like to load from my localhost, is that possible? In this case I would push my code to the production server, build the docker images on the server and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private.
It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type loadBalancer because they are balancing the traffic among the (in this case 3) replicasets that I have in the deployments.
Finally, I have an ingress controller which is supposed to allow my services to route to each other through www.kubeplaytime.example and www.kubeplaytime.example/api. However this is not working.
What happens when I run this?
patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
So first, it appears to create all the parts that I need fine with no errors.
patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m
frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m
backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m
Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.
HOWEVER
I reach an issue when I try and use the ingress controller - it seemingly deployed, but I don't know how to get there.
patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
So I have no address I can use, and www.kubeplaytime.example does not appear to work.
What it appears that I have to do to route to the ingress extension I just created is to use a service and deployment on it in order to get an IP address, but this starts to look incredibly complicated very quickly.
For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.
It would appear that the necessary code to add for just the service routing to the Ingress (ie what he calls the Ingress Controller) appears to be this:
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: ingress-nginx
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
spec:
replicas: 1
template:
metadata:
labels:
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
This would seemingly need to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an ip:
patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not go to the frontend as / ought to route. /api returns the same. Navigating to my hostname www.kubeplaytime.example returns a 404 from the browser - no error handling.
QUESTIONS
Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?
I feel I am close, what am I doing wrong?
FULL YAML
Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938
Thanks for the help!
EDIT EDIT EDIT
I've attempted to use HELM. On the surface it appears to be a simple interface, and so I tried spinning it up:
patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
Seemingly this is really nice - it spins everything up and gives an example of how to add an ingress. Since I spun up Helm in a blank cluster, I used the following YAML file to add in what I thought would be required.
The file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: www.example.com
http:
paths:
- path: /api
backend:
serviceName: backend
servicePort: 80
- path: /
frontend:
serviceName: frontend
servicePort: 80
Deploying this to the cluster however runs into this error:
patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
So, the question then becomes, well crap how do I debug this?
If you spit out the code that helm produces, it's basically non-readable by a person - there's no way to go in there and figure out what's going on.
Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!
If anyone has a better way to debug a helm deploy add it to the list of open questions.
EDIT EDIT EDIT EDIT
To simplify to the extreme, I attempt to make a call from one pod to another using only the internal service name and namespace.
So here is my React code where I make the http request:
axios.get('http://backend/test')
  .then(response => {
    console.log('return from backend and response: ', response);
  })
  .catch(error => {
    console.log('return from backend and error: ', error);
  })
I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.
Here is my node code handling the get:
router.get('/test', function(req, res, next) {
  res.json({"test":"test"})
});
Here is the YAML file that I am uploading to the cluster:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
namespace: exampledeploy
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
namespace: exampledeploy
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
Uploading to the cluster appears to work, as I can see in my terminal:
patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
However, when I attempt to make the request I get the following error:
return from backend and error:
Error: Network Error
Stack trace:
createError#http://168.62.37.100/static/js/bundle.js:1555:15
handleError#http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.
EDIT X5
I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:
patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
What this means, without a doubt, is that because the front-end code is being executed in the browser, it needs Ingress to reach the backend pod: the HTTP requests coming from the front end are what break with simple pod networking. I was unsure of this before, but it means Ingress is necessary.
First of all, let's clarify some apparent misconceptions. You mentioned your front-end being a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other; rather, the browser needs to be able to connect to both of these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React app to make API calls).
To visualize:
                                            +---------+
                                        +---| Browser |---+
                                        |   +---------+   |
                                        V                 V
+-----------+     +----------+     +-----------+    +----------+
| Front-end |---->| Back-end |     | Front-end |    | Back-end |
+-----------+     +----------+     +-----------+    +----------+
     (what you asked for)               (what you need)
As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller. Have a look at the NGINX Ingress controllers deployment guide for more information.
Define services
Start by defining Service resources for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).
If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as host names to connect to that Pod.
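Here is a small sketch of what that looks like in practice, reusing the app: frontend / app: backend labels from above (BACKEND_URL is just a hypothetical environment variable the front-end code could read, and the default namespace is assumed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend          # matched by the frontend Service's selector
    spec:
      containers:
      - name: frontend
        image: patientplatypus/frontend_example
        env:
        # Pod-to-Pod traffic goes through the backend Service's DNS name:
        - name: BACKEND_URL
          value: "http://backend.default.svc.cluster.local:80"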
Define Ingresses
Next up, you can define the Ingress resources; since both services will need connectivity from outside the cluster (the users browser), you will need Ingress definitions for both services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Alternatively, you could also aggregate frontend and backend with a single Ingress resource (no "right" answer here, just a matter of preference):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
As it turns out, I was over-complicating things. Here is the Kubernetes file that works to do what I want. You can do this using two deployments (front end and back end) and one service entry point. As far as I can tell, a service can load balance across many (not just two) different deployments, meaning for practical purposes this should be a good start to microservice development. One of the benefits of the ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.
Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp
Here are the bash commands I use to get it to spin up (you may have to add a login command - docker login - to push to dockerhub):
#!/bin/bash
# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)
echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest
echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push backend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest
echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch
The actual code is just a frontend react app making an axios http call to a backend node route on componentDidMount of the starting App page.
You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication
Thanks again everyone for your help.
To use an ingress controller you need a valid domain (a DNS server configured to point at your ingress controller's IP). This is not due to any Kubernetes "magic" but due to the way vhosts work (nginx, very often used as the ingress server, is one example, but any other ingress implementation will work the same way under the hood).
If you can't configure your domain, the easiest way for dev purposes would be creating a Kubernetes service. There is a nice shortcut for doing that using kubectl expose:
kubectl expose pod frontend-pod --port=444 --name=frontend
kubectl expose pod backend-pod --port=888 --name=backend
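For reference, the first of those commands generates roughly the following Service (assuming the pod carries a run: frontend-pod label, since kubectl expose copies the pod's labels into the selector; the port is taken from the --port flag):
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    run: frontend-pod    # taken from the exposed pod's labels
  ports:
  - port: 444
    targetPort: 444      # defaults to the same value as port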
