Kubernetes: how to subdomain localhost using the nginx ingress controller?

I want to have 2 apps running on Kubernetes, and I wonder if I can serve them on 2 subdomains using the nginx ingress controller.
For example: app1.localhost:8181/cxf and app2.localhost:8181/cxf
Each of those will have different services.
How can I do that?
Some more context here:
EDIT:
Note: MySQL is working fine, so I'm not posting its YAMLs here, to keep this from getting too long.
Note too that I'm using Karaf with a KAR (that will be my app).
I was thinking that maybe I should have 2 nodes: one with MySQL and app1, and the other with MySQL and app2. That way, in one I could access the app1.localhost/cxf services and in the other the app2.localhost/cxf services... maybe that doesn't make much sense... and I was reading that I need kubeadm for that, and there is no way to install it on Windows. I think I must use minikube for that instead?
These are my YAMLs:
The load balancer:
apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  type: LoadBalancer
  selector:
    app: app1
  ports:
    - protocol: TCP
      name: app1
      port: 3306
      targetPort: 3306
    - protocol: TCP
      name: app1-8080
      port: 8080
      targetPort: 8080
    - protocol: TCP
      name: app1-8101
      port: 8101
      targetPort: 8101
    - protocol: TCP
      name: app1-8181
      port: 8181
      targetPort: 8181
status:
  loadBalancer:
    ingress:
      - hostname: localhost
app1:
apiVersion: v1
kind: Service
metadata:
  name: app1-service
spec:
  ports:
    - port: 8101
  selector:
    app: app1
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: app1-deployment
spec:
  selector:
    matchLabels:
      app: app1
  replicas: 1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: app1:latest
app2: the same as app1, but a different version (older services).
ingress-resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  #annotations:
  #  nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: app1.localhost # tried app1.127-0-0-1.sslip.io as answered below too
      http:
        paths:
          - path: /
            backend:
              serviceName: app1-service
              servicePort: 8181
    - host: app2.localhost
      http:
        paths:
          - path: /
            backend:
              serviceName: app2-service
              servicePort: 8181
I should be able to access the app1 version at app1.localhost:8181/cxf, and the app2 version at app2.localhost:8181/cxf.
There is another doubt I have: shouldn't I be able to create another LoadBalancer? I wanted to, so that the selector would be app2 in that LoadBalancer, but since I already have one, the new one just stays <pending> until I remove the first one.
That would make some sense, since if I have 2 replicas of app1 and 2 replicas of app2, there should be a LoadBalancer for each app, right?
Note that I installed the nginx ingress controller using Helm too, since the ingress resource would not work otherwise, at least that's what I have read.
Installing that also installed an nginx load balancer, and this one didn't go to pending. Do I need to use the nginx LoadBalancer, or can I delete it and use a Kubernetes-type LoadBalancer?
Hmm, I'm missing something here...
Thanks for your time!

I want to have 2 apps running on Kubernetes, and I wonder if I can serve them on 2 subdomains using the nginx ingress controller.
Yes, you just need any number of DNS records which point to your Ingress controller's IP (you used 127.0.0.1, so that's what I'll use for these examples, but you can substitute whatever IP is relevant). That's the whole point of an Ingress resource: to use the host: header to dispatch requests into the cluster.
I found a list of wildcard DNS providers, from which I confirmed that app1.127-0-0-1.sslip.io and app2.127-0-0-1.sslip.io behave as expected.
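You can verify the wildcard resolution yourself before touching the cluster; for example with nslookup (output trimmed to the relevant lines; the point is that any subdomain of 127-0-0-1.sslip.io resolves to 127.0.0.1):
$ nslookup app1.127-0-0-1.sslip.io
Name:    app1.127-0-0-1.sslip.io
Address: 127.0.0.1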
Thus:
apiVersion: networking.k8s.io/v1beta1 # added for completeness, matching the question's manifest
kind: Ingress
metadata:
  name: app1-and-2
spec:
  rules:
    - host: app1.127-0-0-1.sslip.io
      http:
        paths:
          - path: /
            backend:
              serviceName: app1-backend
              servicePort: 8181 # <-- or whatever your Service port is
    # then you can repeat that for as many hosts as you wish
    - host: app2.127-0-0-1.sslip.io
      http:
        paths:
          - path: /
            backend:
              serviceName: app2-backend
              servicePort: 8181
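You can also test the dispatch without any DNS at all by setting the Host: header yourself; a sketch with curl, assuming the controller is listening on 127.0.0.1:8181 as in your setup:
# pretend to be app1's hostname...
$ curl -H "Host: app1.127-0-0-1.sslip.io" http://127.0.0.1:8181/cxf
# ...and then app2's; the Ingress routes each to its own backend
$ curl -H "Host: app2.127-0-0-1.sslip.io" http://127.0.0.1:8181/cxf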

Related

Forward from ingress nginx controller to different nginx pods according to port numbers

In my k8s cluster I have an nginx ingress controller exposed as a LoadBalancer, and I access it through a DDNS address like hedehodo.ddns.net, which forwards web traffic to another nginx port.
Now I have deployed another nginx, which serves a Node.js app, but I cannot get the nginx ingress controller to forward requests for port 3000 to that other nginx.
Here is the nginx ingress controller YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
    - host: hedehodo.ddns.net
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80
          - path: /
            backend:
              serviceName: helloapp-deployment
              servicePort: 3000
The helloapp deployment works as a LoadBalancer and I can access it at IP:3000.
Could anybody help me?
A host cannot have duplicate paths, so in your example a request to host: hedehodo.ddns.net will always map to the first service listed: my-nginx:80.
To use another service, you have to specify a different path. That path can use any service you want. Your Ingress should always point to a Service, and that Service can point to a Deployment.
You should also use HTTPS by default for your ingress.
Ingress example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - host: my.example.net
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80
          - path: /hello
            backend:
              serviceName: helloapp-svc
              servicePort: 3000
Service example:
---
apiVersion: v1
kind: Service
metadata:
  name: helloapp-svc
spec:
  ports:
    - port: 3000
      name: app
      protocol: TCP
      targetPort: 3000
  selector:
    app: helloapp
  type: NodePort
Deployment example:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
  labels:
    app: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
        - name: node
          image: my-node-img:v1
          ports:
            - name: web
              containerPort: 3000
You can't have the same "path: /" for the same host. Change the path to a different one for the new service.
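Once the paths differ, you can check the routing by hand; for example (host taken from the example Ingress above):
# / is served by my-nginx:80
$ curl http://my.example.net/
# /hello is routed to helloapp-svc:3000
$ curl http://my.example.net/hello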

External OAuth authentication with Nginx in Kubernetes

Having trouble setting up external authentication for a web application behind nginx ingress. When I try to access the URL https://site.example.com from outside, I get no redirection to the GitHub login; the web application is accessed directly instead.
Running Pods for my environment:
NAME                              READY   STATUS
nginx-ingress-68df4dfc4f-wpj5t    1/1     Running
oauth2-proxy-6675d4b57c-cspw8     1/1     Running
web-deployment-7d4bd85b46-blxb8   1/1     Running
web-deployment-7d4bd85b46-nqjgl   1/1     Running
Active Services:
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)
nginx-ingress   LoadBalancer   10.96.156.157    192.168.1.82   80:31613/TCP,443:32437/TCP
oauth2-proxy    ClusterIP      10.100.101.251   <none>         4180/TCP
web-service     ClusterIP      10.108.237.188   <none>         8080/TCP
Two Ingress resources:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
  labels:
    app: webapp
spec:
  rules:
    - host: site.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-service
              servicePort: 8080
  tls:
    - hosts:
        - site.example.com
      secretName: example-tls
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
spec:
  rules:
    - host: site.example.com
      http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 4180
            path: /oauth2
  tls:
    - hosts:
        - site.example.com
      secretName: example-tls
Ingress output:
NAME           CLASS    HOSTS              ADDRESS        PORTS
ingress        <none>   site.example.com   192.168.1.82   80, 443
oauth2-proxy   <none>   site.example.com                  80, 443
I see these errors in Ingress oauth2-proxy events:
Events:
  Type     Reason    Age   From                      Message
  ----     ------    ----  ----                      -------
  Warning  Rejected  54m   nginx-ingress-controller  All hosts are taken by other resources
oauth2-proxy is built from the deployment here, with the Client ID, Client Secret, and SECRET set according to the OAuth app created in my GitHub account.
No entries appear in the oauth2-proxy logs; I suppose that's because it's never invoked in the process.
UPDATE:
This question was incomplete; I forgot to mention the NGINX image employed (NGINX 1.9.0 from the installation guide).
After changing the image to the one below:
NGINX Ingress controller
  Release:       v0.41.2
  Build:         d8a93551e6e5798fc4af3eb910cef62ecddc8938
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.4
the error disappears. In brief, both Ingress configurations (the one from my question and the one from the answer) work.
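For reference, the banner above is what the controller binary prints itself; you can check which build a running pod carries with something like the following (pod name taken from the listing above):
$ kubectl exec -n nginx-ingress nginx-ingress-68df4dfc4f-wpj5t -- /nginx-ingress-controller --version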
In your configuration you are using 2 Ingresses. As you described, in the Events section of your oauth2-proxy Ingress you can find this information:
All hosts are taken by other resources
The issue you have encountered here is called a Host Collision. It occurred because both of your Ingresses use:
spec:
  rules:
    - host: site.example.com
In that kind of situation, the Ingress controller uses a default algorithm called Choosing the Winner.
If multiple resources contend for the same host, the Ingress Controller will pick the winner based on the creationTimestamp of the resources: the oldest resource will win.
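Since the oldest resource wins, it helps to see the timestamps side by side; a sketch with kubectl:
# list the contending Ingresses oldest-first; the first one holds the host
$ kubectl get ingress -n nginx-ingress --sort-by=.metadata.creationTimestamp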
A quick solution for your issue is to create one Ingress with 2 paths.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
spec:
  rules:
    - host: site.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-service
              servicePort: 8080
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: 4180
  tls:
    - hosts:
        - site.example.com
      secretName: example-tls
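After applying this single Ingress, you can verify that anonymous requests are now intercepted; with the auth-signin annotation in effect, an unauthenticated request should come back as a redirect to the oauth2-proxy start endpoint rather than the app itself. A sketch (-k only because of the self-managed TLS):
$ curl -kI https://site.example.com/
HTTP/2 302
location: https://site.example.com/oauth2/start?rd=...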
Another way to resolve this issue is to use Merging Configuration for the Same Host; however, it shouldn't be applied in this scenario.
Lastly, you can follow the official Nginx Ingress tutorial: External OAUTH Authentication.

Loadbalancing/Redirecting from Kubernetes NodePort Services

I have a bare-metal cluster with a few NodePort deployments of my services (HTTP and HTTPS). I would like to access them from a single URL like myservices.local with (sub)paths.
The config could be something like the following (pseudocode):
/app1
  http://10.100.22.55:30322
  http://10.100.22.56:30322
  # browser access: myservices.local/app1
/app2
  https://10.100.22.55:31432
  https://10.100.22.56:31432
  # browser access: myservices.local/app2
/...
I tried a few things with HAProxy and nginx but nothing really worked (for someone inexperienced in these webserver/LB things, the syntax/config style is quite confusing, in my opinion).
What is the easiest solution for a case like this?
The easiest and most common way is to use an NGINX Ingress. The NGINX Ingress is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.
In the documentation we can read:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
This is exactly what you want to achieve.
The first thing you need to do is to install the NGINX Ingress Controller in your cluster. You can follow the official Installation Guide.
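For reference, with Helm 3 this can be as short as the following (repo URL and chart name per the kubernetes/ingress-nginx project; check the guide for the current commands):
# add the ingress-nginx chart repo and install the controller
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx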
An Ingress always points to a Service. So you need a Deployment, a Service, and an NGINX Ingress.
Here is an example of an application similar to your example.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: app2
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: ClusterIP
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2
  labels:
    app: app2
spec:
  type: ClusterIP
  ports:
    - port: 5001
      protocol: TCP
      targetPort: 5001
  selector:
    app: app2
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress # ingress resource
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  rules:
    - host: myservices.local # only match connections to myservices.local
      http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 5000
          - path: /app2
            backend:
              serviceName: app2
              servicePort: 5001
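Since myservices.local is not a real DNS name, you have to point it at the cluster yourself for testing. A minimal sketch, assuming the ingress controller is reachable on node 10.100.22.55 from your example:
# /etc/hosts on your workstation — map the hostname to a node running the ingress controller
10.100.22.55 myservices.local

# then the (sub)paths are dispatched to the two Services
$ curl http://myservices.local/app1
$ curl http://myservices.local/app2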

How to add an nginx-ingress custom health check behind an nginx reverse proxy

I have an nginx server outside Kubernetes: nginx -> nginx ingress. I want to know how to add a custom health check path /health/status to the nginx ingress.
This question is almost certainly solving the wrong problem, but in the spirit of answering what was asked:
You can expose the Ingress /healthz to the outside world:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-health
spec:
  type: ClusterIP
  selector:
    app: ingress-nginx # whatever labels your controller pods carry
  ports:
    - name: healthz
      port: 80
      targetPort: 10254
---
apiVersion: networking.k8s.io/v1beta1 # added for completeness
kind: Ingress
metadata:
  name: ingress-nginx-health # any name
spec:
  rules:
    - host: elb-1234.example.com
      http:
        paths:
          - path: /healthz
            backend:
              serviceName: ingress-nginx-health
              servicePort: healthz
Because if your Ingress controller falls over, it will for sure stop answering its own healthz check.
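With that Service and Ingress in place, your outside nginx can probe the controller over plain HTTP; a quick manual check might look like this (hostname from the sketch above):
# expect a 200 from the controller's own health endpoint
$ curl -i http://elb-1234.example.com/healthz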

I need help understanding kubernetes architecture best practices

I have 2 nodes on GCP in a Kubernetes cluster, and a load balancer in GCP as well. This is a regular cluster (not GKE). I am trying to expose my front-end-service to the world. I am trying nginx-ingress with a NodePort Service as a solution. Where should my load balancer be pointing? Is this a good architecture approach?
world --> GCP-LB --> nginx-ingress-resource (GCP k8s cluster) --> services (pods)
To access my site I would have to point the LB at the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: mycha-app
nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: nginx-ingress
nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: mycha-service
              servicePort: 80
This configuration is not working.
When you use an ingress in front of your workload pods, the Service type for the workload pods will always be ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, either using a NodePort type Service or a LoadBalancer type Service, and for production it's recommended to use a LoadBalancer type Service.
This is the recommended pattern.
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
An Ingress controller avoids the use of kube-proxy and the load balancing provided by kube-proxy. You can configure layer 7 load balancing in the ingress itself.
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (Mostly to redirect traffic) > Service
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue, I also couldn't obtain an IP address for the LB (it stayed in <pending> state); however, you can expose your application using NodePort and the VM's IP. I will try a few other configs to obtain an ExternalIP and will edit the answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE, your VM already has an ExternalIP. This way you can just use a Service with NodePort and an Ingress to redirect traffic to the proper services.
Deploy the Nginx Ingress using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy the Service with LoadBalancer type, but it won't get an externalIP and will be stuck in <pending> state. You have to change it to NodePort and apply the changes.
$ kubectl edit svc nginx-nginx-ingress-controller
By default it will open the Vi editor. If you want another one, you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
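If you'd rather skip the interactive editor, a one-line patch does the same thing (same Service name as above):
# switch the controller Service from LoadBalancer to NodePort in place
$ kubectl patch svc nginx-nginx-ingress-controller -p '{"spec":{"type":"NodePort"}}'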
Now you can deploy the Service, Deployment, and Ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
        - name: hello1
          image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.pod.svc
      http:
        paths:
          - path: /mycha
            backend:
              serviceName: mycha-service
              servicePort: 80
          - path: /hello
            backend:
              serviceName: fs
              servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME                             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
nginx-nginx-ingress-controller   NodePort   10.105.247.148   <none>        80:31143/TCP,443:32224/TCP   97m
Now you should use your VM's ExternalIP (the worker VM) with the port from the NodePort Service. My VM's ExternalIP: 35.228.133.12, service: 80:31143/TCP,443:32224/TCP
IMPORTANT
If you curl your VM on that port at this point, you will get:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or range.
Information about creating Firewall Rules can be found here.
If you set proper values (open ports, IP range (0.0.0.0/0), etc.), you will be able to reach the service from your machine.
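On GCE that is a gcloud one-liner; a sketch (the rule name is made up, and 0.0.0.0/0 is fine for a quick test but should be tightened for anything real):
# allow inbound TCP to the default NodePort range from anywhere
$ gcloud compute firewall-rules create allow-nodeports \
    --allow tcp:30000-32767 \
    --source-ranges 0.0.0.0/0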
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6
