Not sure why I'm getting the fake certificate, even though the certificate is properly issued by Let's Encrypt using cert-manager.
The setup is running on Alibaba Cloud ECS, where one kube-master and one kube-minion form a Kubernetes cluster.
Service Details
root@kube-master:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h20m
my-nginx ClusterIP 10.101.150.247 <none> 80/TCP 77m
Pod Details
root@kube-master:~# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-nginx-6cc48cd8db-n6scm 1/1 Running 0 46s app=my-nginx,pod-template-hash=6cc48cd8db
Helm Cert-manager deployed
root@kube-master:~# helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
cert-manager 1 Tue Mar 12 15:29:21 2019 DEPLOYED cert-manager-v0.5.2 v0.5.2 kube-system
kindred-garfish 1 Tue Mar 12 17:03:41 2019 DEPLOYED nginx-ingress-1.3.1 0.22.0 kube-system
Certificate Issued Properly
root@kube-master:~# kubectl describe certs
Name: tls-prod-cert
Namespace: default
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-12T10:26:58Z
Generation: 2
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: nginx-ingress-prod
UID: 5ab11929-44b1-11e9-b431-00163e005d19
Resource Version: 17687
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-prod-cert
UID: 5dad4740-44b1-11e9-b431-00163e005d19
Spec:
Acme:
Config:
Domains:
zariga.com
Http 01:
Ingress:
Ingress Class: nginx
Dns Names:
zariga.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-prod
Secret Name: tls-prod-cert
Status:
Acme:
Order:
URL: https://acme-v02.api.letsencrypt.org/acme/order/53135536/352104603
Conditions:
Last Transition Time: 2019-03-12T10:27:00Z
Message: Order validated
Reason: OrderValidated
Status: False
Type: ValidateFailed
Last Transition Time: <nil>
Message: Certificate issued successfully
Reason: CertIssued
Status: True
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateOrder 27s cert-manager Created new ACME order, attempting validation...
Normal IssueCert 27s cert-manager Issuing certificate...
Normal CertObtained 25s cert-manager Obtained certificate from ACME server
Normal CertIssued 25s cert-manager Certificate issued successfully
Ingress Details
root@kube-master:~# kubectl describe ingress
Name: nginx-ingress-prod
Namespace: default
Address:
Default backend: my-nginx:80 (192.168.123.202:80)
TLS:
tls-prod-cert terminates zariga.com
Rules:
Host Path Backends
---- ---- --------
* * my-nginx:80 (192.168.123.202:80)
Annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: true
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 7m13s nginx-ingress-controller Ingress default/nginx-ingress-prod
Normal CreateCertificate 7m8s cert-manager Successfully created Certificate "tls-prod-cert"
Normal UPDATE 6m57s nginx-ingress-controller Ingress default/nginx-ingress-prod
Letsencrypt Nginx Production Definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress-prod
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/tls-acme: 'true'
labels:
app: 'my-nginx'
spec:
backend:
serviceName: my-nginx
servicePort: 80
tls:
- secretName: tls-prod-cert
hosts:
- zariga.com
Maybe this will be helpful for someone experiencing similar issues. In my case, I forgot to specify the hostname in the Ingress YAML file in both the rules and tls sections.
After adding the hostname in both places, it started responding with the proper certificate (a quick way to verify which certificate is actually being served follows the example below).
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-web-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- my.host.com # <----
secretName: tls-secret
rules:
- host: my.host.com # <----
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: my-nginx
servicePort: 80
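To verify which certificate the ingress is actually serving for your host (my.host.com is just the placeholder from the example above):
# Print the subject and issuer of the certificate presented by the ingress;
# if it still shows the ingress controller's fake/self-signed certificate,
# the TLS secret was not picked up for that host.
echo | openssl s_client -connect my.host.com:443 -servername my.host.com 2>/dev/null | openssl x509 -noout -subject -issuer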
Sometimes this can happen if your ClusterIssuer is pointing at the Let's Encrypt staging URL.
Check the Let's Encrypt URL set in your issuer.yaml or clusterissuer.yaml and change it to the production URL: https://acme-v02.api.letsencrypt.org/directory
I faced the same issue once, and changing the URL to the production one solved it.
Also check that the Ingress TLS secret you are using is the right one.
A working ClusterIssuer for production should look something like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: dev-clusterissuer
spec:
acme:
email: harsh@example.com
privateKeySecretRef:
name: dev-clusterissuer
server: https://acme-v02.api.letsencrypt.org/directory # <----check this server URL it is for Prod and use this only
solvers:
- http01:
ingress:
class: nginx
If you are using server: https://acme-staging-v02.api.letsencrypt.org/directory you will face this issue; replace it with server: https://acme-v02.api.letsencrypt.org/directory.
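To quickly confirm which ACME server an existing issuer points at (assuming the issuer is named letsencrypt-prod, as elsewhere in this thread):
kubectl get clusterissuer letsencrypt-prod -o jsonpath='{.spec.acme.server}'
# If this prints the acme-staging-v02 URL, that explains the untrusted certificate.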
If you're convinced that everything is set up correctly and it still doesn't work, try this.
Edit the deployment of your nginx controller. Why? Because if it doesn't find the secret in the namespace it's deployed in, the NGINX controller serves its own certificate (the fake certificate). Not knowing this (I'm new to the game) cost me a few days of my life.
So, either switch to the namespace where your NGINX Ingress controller lives, get the name of the deployment, and then:
kubectl edit deployment nginx-ingress-ingress-nginx-controller -n nginx-ingress
Or if there is only one deployment in that namespace you can just do
kubectl edit deployment
And you should be in edit mode for your nginx controller deployment. Look for the section: spec --> containers: --> args:
spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --default-ssl-certificate=app-namespace/letsencrypt-cert-prod
You can add a default certificate for the nginx controller to fall back on when it doesn't find one (as I have above), so that it looks for a secret in a specific namespace, by adding:
--default-ssl-certificate=your-cert-namespace/your-cert-secret
your-cert-namespace: The namespace where your certificate secret is
your-cert-secret: The name of the secret containing your certificate
Once you save and close your editor, it should be updated. Then check the logs of your cert manager pod:
kubectl logs cert-manager-xxxpodxx-abcdef -n cert-manager
to make sure that things are working normally.
You probably won't have this issue if all your resources are deployed in the same namespace.
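A related sanity check: make sure the secret referenced by --default-ssl-certificate really exists in that namespace (the names below are the placeholders from above):
kubectl get secret your-cert-secret -n your-cert-namespace
# It should be of type kubernetes.io/tls and contain tls.crt and tls.key.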
It's important to note that the ClusterIssuer spec for solvers changed. For people using cert-manager > 0.7.2, this comment saved me so much time: https://github.com/jetstack/cert-manager/issues/1650#issuecomment-518953464, especially on how to configure the ClusterIssuer and Certificate.
In my case, the problem was accessing the domain at the wrong port: my default HTTPS port wasn't 443 but 4443.
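If you are in the same situation, a quick way to test against a non-standard HTTPS port (the hostname and port here are placeholders):
curl -vk https://my.host.com:4443
# -v prints the TLS handshake so you can see which certificate is served; -k skips verification.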
For me, the issue was that I forgot to kubectl apply the secret (in my case 'tls-secret.yml'). When deploying K8s manually, such an error is rarely made. However, I'm using GitLab CI/CD to deploy applications, and I forgot to add - kubectl apply -f ./kube/secret to my .gitlab-ci.yml.
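For illustration, the missing step would look roughly like this in .gitlab-ci.yml (the job name and stage are assumptions; only the kubectl lines matter):
deploy:
  stage: deploy
  script:
    - kubectl apply -f ./kube/secret   # the line that was missing
    - kubectl apply -f ./kube/         # the rest of the manifests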
In my case I mistyped the name of my TLS secret inside my Ingress rules:
instead of secretName: my-homepage-tls I typed secretName: myy-homepage-tls.
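A quick way to catch this kind of typo is to compare the secret name the Ingress references with the secrets that actually exist (the ingress name is a placeholder):
kubectl get ingress my-ingress -o jsonpath='{.spec.tls[*].secretName}'
kubectl get secrets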
For me, the issue was the ingress class name: since I'm using MicroK8s, the ingress class name is public:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: "your#email.tld"
privateKeySecretRef:
name: letsencrypt-prod
server: "https://acme-v02.api.letsencrypt.org/directory"
solvers:
- http01:
ingress:
class: public
This happened to me today: I had two Ingresses in the same namespace and used letsencrypt-prod as the secret name for both. One worked, the other didn't. The secrets are auto-generated and need to have unique names to avoid clashing.
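In other words, each Ingress should reference its own secret, roughly like this (the names are made up for illustration):
# first ingress
tls:
- hosts:
  - app-one.example.com
  secretName: app-one-example-com-tls
# second ingress
tls:
- hosts:
  - app-two.example.com
  secretName: app-two-example-com-tls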
Related
I am taking a course on Udemy and I am new to the world of Kubernetes. I am trying to configure the ingress-nginx controller in Kubernetes, but it returns 404 Not Found when I send a request to the specified URL. I have been trying to fix it for 10 days; I've looked at similar questions but none of their answers are working for me. I am also using Skaffold to build and deploy the image to Docker Hub automatically when I change something in the files.
My express app server:
const express = require('express'); // assumed standard Express setup, not shown in the original snippet
const app = express();

app.get('/api/users/currentuser', (req, res) => {
res.send('Hi there');
});
app.listen(3000, () => {
console.log('[Auth] - Listening on port 3000');
});
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.com
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
auth-depl.yaml (Auth deployment & srv)
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: myusername/auth:latest
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
type: ClusterIP
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
skaffold.yaml file:
apiVersion: skaffold/v2beta25
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: username/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
I also executed the command from the NGINX Ingress Controller docs:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/cloud/deploy.yaml
I also changed the hosts file on the system:
127.0.0.1 ticketing.com
Logs:
kubectl get pods
NAME READY STATUS RESTARTS AGE
auth-depl-5f89899d9f-wtc94 1/1 Running 0 6h33m
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.96.23.71 <none> 3000/TCP 23h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
kubectl get pods --namespace=ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7fm56 0/1 Completed 0 23h
ingress-nginx-admission-patch-5vflr 0/1 Completed 1 23h
ingress-nginx-controller-5c8d66c76d-89zhp 1/1 Running 0 23h
kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-srv <none> ticketing.com localhost 80 23h
kubectl describe ing ingress-srv
Name: ingress-srv
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ticketing.com
/api/users/?(.*) auth-srv:3000 (10.1.0.10:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 22m (x18 over 23h) nginx-ingress-controller Scheduled for sync
Could there be a problem with the Windows IIS web server? I previously configured something with it for another project, and in the screenshot I see:
Requested URL http://ticketing.com:80/api/users/currentuser
Physical Path C:\inetpub\wwwroot\api\users\currentuser
Also, the screenshot shows port :80 in the requested URL, but my server port is 3000? And when I request over HTTPS it returns:
502 Bad Gateway
nginx
Also, C:\inetpub\wwwroot is strange to me.
Any ideas would help me a lot with continuing the course.
After a few days of research I finally solved the problem. The problem was with the IIS web server, which I had enabled when I was working on a project in ASP.NET Core. I uninstalled it and the problem was solved.
How to uninstall IIS from Windows 10:
Go to Control Panel > Programs and Features
Click Turn Windows features on or off
Scroll down to Internet Information Services
Click on the square next to Internet Information Services so it becomes empty
Click OK and restart the PC (required).
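To double-check afterwards that nothing else is still listening on port 80, something like this in an elevated command prompt should do:
netstat -ano | findstr :80
# No unexpected listener on :80 means IIS is really gone.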
I'm trying to configure a canary rollout for a demo, but I'm having trouble getting the traffic splitting to work with Linkerd. The funny part is I was able to get this working with Istio, and I find Istio to be much more complicated than Linkerd.
I have a basic Go service defined like this:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: fish
spec:
[...]
strategy:
canary:
canaryService: canary-svc
stableService: stable-svc
trafficRouting:
smi: {}
steps:
- setWeight: 5
- pause: {}
- setWeight: 20
- pause: {}
- setWeight: 50
- pause: {}
- setWeight: 80
- pause: {}
---
apiVersion: v1
kind: Service
metadata:
name: canary-svc
spec:
selector:
app: fish
ports:
- name: http
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: stable-svc
spec:
selector:
app: fish
ports:
- name: http
port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: fish
annotations:
kubernetes.io/ingress.class: 'nginx'
cert-manager.io/cluster-issuer: letsencrypt-production
cert-manager.io/acme-challenge-type: dns01
external-dns.alpha.kubernetes.io/hostname: fish.local
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
rules:
- host: fish.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: stable-svc
port:
number: 8080
When I do the deploy (sync) via ArgoCD I can see the traffic split is 50/50:
- apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
[...]
name: fish
namespace: default
spec:
backends:
- service: canary-svc
weight: "50"
- service: stable-svc
weight: "50"
service: stable-svc
However, doing a curl command in a while loop, I only get back the stable-svc. The only time I see a change is after I have completely moved the service to 100%.
I tried to follow this: https://argoproj.github.io/argo-rollouts/getting-started/smi/
Any help would be greatly appreciated.
Thanks
After reading this: https://linkerd.io/2.10/tasks/using-ingress/ I discovered you need to modify your ingress controller with a special annotation:
$ kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -
TLDR; if you want Linkerd functionality like Service Profiles, Traffic Splits, etc, there is additional configuration required to make the Ingress controller’s Linkerd proxy run in ingress mode.
So there's a bit more context in this issue, but the TL;DR is that ingresses tend to target individual pods instead of the service address. Putting Linkerd's proxy in ingress mode tells it to override that behaviour. NGINX already has a setting that lets it hit services instead of endpoints directly; you can see that in their docs here.
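For reference, that NGINX setting is, as far as I can tell, the service-upstream annotation, which makes NGINX proxy to the Service's cluster IP instead of the individual pod endpoints:
nginx.ingress.kubernetes.io/service-upstream: "true"   # set on the Ingress resource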
I have installed the following two different ingress controllers on my DigitalOcean managed K8S cluster:
Nginx
Istio
and they have been assigned two different IP addresses. My question is: is it wrong to have two different ingress controllers on the same K8s cluster?
The reason I have done it is that nginx is for tools like Harbor, Argo CD, etc., and Istio is for microservices.
I have also noticed that, when both are installed alongside each other, sometimes during a deployment the cluster suddenly goes down.
For example, I have deployed:
apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
namespace: dev
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.7
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: istio
name: helloworld-ingress
namespace: dev
spec:
rules:
- host: hello.service.databaker.io
http:
paths:
- path: /*
backend:
serviceName: hello-kubernetes-first
servicePort: 80
---
Then I've got:
Error from server (InternalError): error when creating "istio-app.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.245.107.175:443: i/o timeout
You have raised several points - before answering your question, let's take a step back.
K8s Ingress not recommended by Istio
It is important to note that Istio does not recommend using K8s Ingress:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
Ref: https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
As noted, the Istio Gateway (Istio IngressGateway and EgressGateway) acts as the edge; you can find more in https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/.
Multiple endpoints within Istio
If you need to assign one public endpoint for business requirements and another for monitoring (such as Argo CD and Harbor, as you mentioned), you can achieve that with Istio alone. There are roughly two approaches to this.
Create separate Istio IngressGateways - one for main traffic, and another for monitoring
Create one Istio IngressGateway, and use Gateway definition to handle multiple access patterns
Both are valid, and depending on requirements, you may need to choose one way or the other.
As for Approach #2, this is where Istio's traffic management system shines. It is a great example of Istio's power, but the setup is slightly complex if you are new to it, so here is an example.
Example of Approach #2
When you create an Istio IngressGateway by following the default installation, it creates an istio-ingressgateway Service like the one below (I have greatly simplified the YAML definition):
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
name: istio-ingressgateway
namespace: istio-system
# ... other attributes ...
spec:
type: LoadBalancer
# ... other attributes ...
This LB Service would then be your endpoint. (I'm not familiar with DigitalOcean K8s env, but I suppose they would handle LB creation.)
Then, you can create Gateway definition like below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: your-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
istio: ingressgateway
servers:
- port:
number: 3000
name: https-your-system
protocol: HTTPS
hosts:
- "your-business-domain.com"
- "*.monitoring-domain.com"
# ... other attributes ...
You can then create 2 or more VirtualService definitions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: business-virtsvc
spec:
gateways:
- istio-ingressgateway.istio-system.svc.cluster.local
hosts:
- "your-business-domain.com"
http:
- match:
- port: 3000
route:
- destination:
host: some-business-pod
port:
number: 3000
# ... other attributes ...
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: monitoring-virtsvc
spec:
gateways:
- istio-ingressgateway.istio-system.svc.cluster.local
hosts:
- "harbor.monitoring-domain.com"
http:
- match:
- port: 3000
route:
- destination:
host: harbor-pod
port:
number: 3000
# ... other attributes ...
NOTE: The above is assuming a lot of things, such as port mapping, traffic handling, etc.. Please check out the official doc for details.
So, back to the question after long detour:
Question: [Is it] wrong to have two different ingress controllers on the same K8S cluster[?]
I believe it is OK, though it can cause an error like the one you are seeing, as the two ingress controllers fight over the K8s Ingress resource.
As mentioned above, if you are using Istio, it's better to stick with the Istio IngressGateway instead of K8s Ingress. If you need K8s Ingress for some specific reason, you could use another ingress controller for it, like Nginx.
As to the error you saw, it's coming from the webhook deployed by Nginx: ingress-nginx-controller-admission.nginx.svc is not available. This means you have created a K8s Ingress helloworld-ingress with the kubernetes.io/ingress.class: istio annotation, but the Nginx admission webhook is interfering with K8s Ingress handling. The webhook then fails to handle the resource, as the Pod / Svc responsible for the webhook traffic cannot be found.
The error itself just says that something is unhealthy in K8s - potentially not enough Nodes allocated to the cluster, and thus Pod allocation not happening. It's also good to note that Istio does require some CPU and memory footprint, which may be putting more pressure on the cluster.
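If you want to inspect (or, once the nginx controller is really removed, clean up) the offending webhook, you can list the validating webhook configurations:
kubectl get validatingwebhookconfigurations
# Look for one named like ingress-nginx-admission; it can be removed with
# kubectl delete validatingwebhookconfiguration <name>   # only if nginx is truly gone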
Both products have distinct characteristics and solve different types of problems, so there is no issue in having both installed on your cluster.
To call them both "ingress controllers" is not correct:
- Nginx is a well-known web server
- The Nginx ingress controller is an implementation of a Kubernetes Ingress controller based on Nginx (load balancing, HTTPS termination, authentication, traffic routing, etc.)
- Istio is a service mesh (well known in microservice architectures and used to address cross-cutting concerns in a standard way - things like logging, tracing, HTTPS termination, etc. - at the pod level)
Can you provide more details on what you mean by "K8S suddenly goes down"? Are you talking about the cluster nodes or the Pods running inside?
Thanks.
Have you looked at specifying the ingress.class (kubernetes.io/ingress.class: "nginx"), as mentioned here? https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
I have written an application that runs a FastAPI server inside a Kubernetes pod. The external communication with the pod goes through an nginx ingress controller in a separate pod. I am running nginx:1.17.0.
When it is all up and running, I can use curl calls to interact with the app server through the ingress address and access all the simple GET paths, as well as address/openapi.json in my browser. I can also access the interactive documentation page if I use the internal IP of the app service in Kubernetes.
However, trying to reach the interactive documentation page (address/docs#/default/) gives me an error regarding /openapi.json.
Since the curl calls work as expected, I do not think the problem is necessarily in the ingress definition; but since using the internal IP of the app also works fine, the issue should not be inside the app either.
I have included the ingress definition file below.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.17.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: my-host.info
http:
paths:
- path: /server(/|$)(.*)
backend:
serviceName: my-app-service # This is the service that runs my fastAPI server pod
servicePort: 80
EDIT
This is the service.yaml file
apiVersion: v1
kind: Service
metadata:
name: my-app-service
spec:
type: ClusterIP
selector:
app: server
ports:
- protocol: "TCP"
port: 80
targetPort: 80
As the service is a ClusterIP inside my local cluster, I might be able to curl straight to it, though I have not tried. When I curl, I use commands like
curl -X GET "http://my-host.info/server/subpath/" -H "accept: application/json"
curl -X POST "http://my-host.info/server/subpath/update/" -H "accept: application/json"
from outside the local cluster.
These are all the services that are running:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
default my-app-service ClusterIP 10.96.68.29 <none> 80/TCP 18h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 28d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.114.1 <none> 8000/TCP 28d
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.96.249.255 <none> 80/TCP 28d
and inside my /etc/hosts file I have connected 10.0.0.1 (cluster "external" IP) to my-host.info.
Any ideas of why this is happening?
I think you should definitely look into the official FastAPI documentation: https://fastapi.tiangolo.com/advanced/behind-a-proxy/
As you mentioned, when accessing your app internally the Swagger auto docs work fine, but when accessing it from outside your cluster you get an error regarding /openapi.json.
In your ingress definition you have:
- path: /server(/|$)(.*)
backend:
serviceName: my-app-service # This is the service that runs my fastAPI server pod
servicePort: 80
and when starting your application with uvicorn you should pass the root_path:
uvicorn main:app --root-path /server
ATTENTION: With this you will be able to access the router's endpoints but not the Swagger docs. In order to get the Swagger docs you have to edit your main.py file:
from fastapi import FastAPI, Request
app = FastAPI(openapi_prefix="/server")
#app.get("/")
def read_root(request: Request):
return {"message": "Hello World", "root_path": request.scope.get("root_path")}
I searched for why we need to explicitly pass the OpenAPI prefix, but I found only workarounds like: https://github.com/iwpnd/fastapi-aws-lambda-example/issues/2
So I suggest storing root_path in an environment variable, $ROOT_PATH=/server, on your system and passing it both to uvicorn main:app --root-path $ROOT_PATH and in main.py:
import os
from fastapi import FastAPI, Request
app = FastAPI(openapi_prefix=os.getenv('ROOT_PATH', ''))
#app.get("/")
def read_root(request: Request):
return {"message": "Hello World", "root_path": request.scope.get("root_path")}
UPDATE 07.07.2020
Currently tiangolo's ready-to-use Docker images (https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker) are "outdated" as far as the FastAPI version goes, which currently is 0.55.1 - reported here: link
"root_path" is supported since 0.56.0
Using the Kubernetes ingress rewrite feature, I could solve my problem like this:
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: my-app
http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: my-fastapi
port:
number: 80
then I just needed to add root_path to my FastAPI app:
app = FastAPI(root_path="/server")
I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.
So, can anyone share a simple example of the following (as a yaml file)? All I want is
two pods
let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
A way to network between them.
And then an example of making an API call from the back to the front.
I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.
Hopefully if this example exists on stackoverflow it will serve other people as well.
Any help would be appreciated. Thanks.
EDIT; it looks like the easiest example may be using the Ingress controller.
EDIT EDIT;
I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.
So below is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.kubeplaytime.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- path: /api
backend:
serviceName: backend
servicePort: 80
What I believe this is doing is
Deploying a frontend and backend app - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to Docker Hub and then pull the images down. One open question I have is: what if I don't want to pull the images from Docker Hub and would rather just load them from my localhost - is that possible? In this case I would push my code to the production server, build the Docker images on the server, and then upload them to Kubernetes. The benefit is that I don't have to rely on Docker Hub if I want my images to be private.
It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type LoadBalancer because they balance the traffic among the (in this case 3) replicas that I have in the deployments.
Finally, I have an Ingress which is supposed to allow my services to be reached through www.kubeplaytime.example and www.kubeplaytime.example/api. However, this is not working.
What happens when I run this?
patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
So first, it appears to create all the parts that I need fine with no errors.
patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m
frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m
backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m
Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.
HOWEVER
I reach an issue when I try and use the ingress controller - it seemingly deployed, but I don't know how to get there.
patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
So I have no address I can use, and www.kubeplaytime.example does not appear to work.
What it appears that I have to do to route to the ingress extension I just created is to use a service and deployment on it in order to get an IP address, but this starts to look incredibly complicated very quickly.
For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.
It would appear that the code necessary just for the service routing to the Ingress (i.e. what he calls the Ingress Controller) is this:
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: ingress-nginx
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
spec:
replicas: 1
template:
metadata:
labels:
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
This would seemingly need to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an ip:
patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
So ingress-nginx appears to be the entry point I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not go to the frontend as / ought to route. /api returns the same. Navigating to my host name www.kubeplaytime.example returns a 404 from the browser - no error handling.
QUESTIONS
Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?
I feel I am close, what am I doing wrong?
FULL YAML
Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938
Thanks for the help!
EDIT EDIT EDIT
I've attempted to use Helm. On the surface it appears to be a simple interface, and so I tried spinning it up:
patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
namespace: foo
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: exampleService
servicePort: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
Seemingly this is really nice - it spins everything up and gives an example of how to add an ingress. Since I spun up Helm into a blank cluster, I used the following YAML file to add in what I thought would be required.
The file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: www.example.com
http:
paths:
- path: /api
backend:
serviceName: backend
servicePort: 80
- path: /
frontend:
serviceName: frontend
servicePort: 80
Deploying this to the cluster however runs into this error:
patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
So, the question then becomes, well crap how do I debug this?
If you spit out the code that Helm produces, it's basically unreadable by a person - there's no way to go in there and figure out what's going on.
Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!
If anyone has a better way to debug a Helm deploy, add it to the list of open questions.
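For what it's worth, Helm can show you exactly what it renders, which makes the generated manifests a little less opaque (these commands assume the Helm 2 CLI used above; erstwhile-beetle is the release name from earlier):
helm install stable/nginx-ingress --dry-run --debug   # render the chart without installing it
helm get manifest erstwhile-beetle                    # dump the manifests of an already-installed release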
EDIT EDIT EDIT EDIT
To simplify in the extreme, I attempt to make a call from one pod to another using only the internal service name.
So here is my React code where I make the http request:
axios.get('http://backend/test')
.then(response=>{
console.log('return from backend and response: ', response);
})
.catch(error=>{
console.log('return from backend and error: ', error);
})
I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.
Here is my node code handling the get:
router.get('/test', function(req, res, next) {
res.json({"test":"test"})
});
Here is the YAML file that I am uploading to the cluster:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
namespace: exampledeploy
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
namespace: exampledeploy
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
Uploading to the cluster appears to work, as I can see in my terminal:
patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
However, when I attempt to make the request I get the following error:
return from backend and error:
Error: Network Error
Stack trace:
createError#http://168.62.37.100/static/js/bundle.js:1555:15
handleError#http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.
EDIT X5
I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:
patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
What this means, without a doubt, is that because the front-end code is being executed in the browser, it needs Ingress to gain entry into the pod, as HTTP requests from the front end are what break with simple pod networking. I was unsure of this, but it means Ingress is necessary.
First of all, let's clarify some apparent misconceptions. You mentioned your front end is a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other; rather, the browser needs to be able to connect to both of these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React app to make API calls).
To visualize:
+---------+
+---| Browser |---+
| +---------+ |
V V
+-----------+ +----------+ +-----------+ +----------+
| Front-end |---->| Back-end | | Front-end | | Back-end |
+-----------+ +----------+ +-----------+ +----------+
(what you asked for) (what you need)
As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller. Have a look at the NGINX Ingress controllers deployment guide for more information.
Define services
Start by defining Service resources for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 8080
Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).
If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as host names to connect to that Pod.
Define Ingresses
Next up, you can define the Ingress resources; since both services need connectivity from outside the cluster (the user's browser), you will need Ingress definitions for both services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.your-application.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: backend
spec:
rules:
- host: api.your-application.example
http:
paths:
- path: /
backend:
serviceName: backend
servicePort: 80
Alternatively, you could also aggregate frontend and backend with a single Ingress resource (no "right" answer here, just a matter of preference):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.your-application.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- path: /api
backend:
serviceName: backend
servicePort: 80
After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
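The external IP in question is simply the LoadBalancer Service sitting in front of the Ingress controller; something like this will show it (the namespace depends on how the controller was installed):
kubectl get svc -n ingress-nginx
# The EXTERNAL-IP column of the controller's LoadBalancer Service is what your DNS records should point to.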
As it turns out, I was over-complicating things. Here is the Kubernetes file that works to do what I want. You can do this using two deployments (front end and back end) and one service entry point. As far as I can tell, a service can load balance across many (not just 2) different deployments, meaning for practical development this should be a good start to microservice development. One of the benefits of an ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.
Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: exampleapp
spec:
replicas: 3
selector:
matchLabels:
app: exampleapp
template:
metadata:
labels:
app: exampleapp
spec:
containers:
- name: nginx
image: patientplatypus/kubeplayfrontend
ports:
- containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: exampleapp
spec:
replicas: 3
selector:
matchLabels:
app: exampleapp
template:
metadata:
labels:
app: exampleapp
spec:
containers:
- name: nginx
image: patientplatypus/kubeplaybackend
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: entrypt
spec:
type: LoadBalancer
ports:
- name: backend
port: 8080
targetPort: 5000
- name: frontend
port: 81
targetPort: 3000
selector:
app: exampleapp
Here are the bash commands I use to get it to spin up (you may have to add a login command - docker login - to push to dockerhub):
#!/bin/bash
# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)
echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest
echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push frontend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest
echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch
The actual code is just a frontend react app making an axios http call to a backend node route on componentDidMount of the starting App page.
You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication
Thanks again everyone for your help.
To use an ingress controller you need to have a valid domain (a DNS server configured to point to your ingress controller's IP). This is not due to any Kubernetes "magic" but due to the way vhosts work (here is an example for nginx - very often used as an ingress server, but any other ingress implementation will work the same way under the hood).
If you can't configure your domain, the easiest way for dev purposes would be creating a Kubernetes service. There is a nice shortcut for doing it using kubectl expose:
kubectl expose pod frontend-pod --port=444 --name=frontend
kubectl expose pod backend-pod --port=888 --name=backend
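Once those services exist, you can reach the pods from inside the cluster via the service names (frontend, backend), or from your own machine with a port-forward, for example:
kubectl port-forward service/frontend 8080:444
# then open http://localhost:8080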