I have installed minikube on a server which I can access from the internet.
I have created a kubernetes service which is available:
>kubectl get service myservice
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
myservice   10.0.0.246   <nodes>       80:31988/TCP   14h
The IP address of minikube is:
>minikube ip
192.168.42.135
I would like the URL http://myservice.myhost.com (i.e. port 80) to map to the service in minikube.
I have nginx running on the host (totally unrelated to kubernetes). I can set up a virtual host, mapping the URL to 192.168.42.135:31988 (the node port) and it works fine.
I would like to use an ingress. I've added and enabled ingress. But I am unsure of:
a) what the yaml file should contain
b) how incoming traffic on port 80, from the browser, gets redirected to the ingress and minikube.
c) do I still need to use nginx as a reverse proxy?
d) if so, what address is the ingress-nginx running on (so that I can map traffic to it)?
Setup
First of all, you need an nginx ingress controller.
The nginx instance(s) will listen on the host's ports 80 and 443, and route every HTTP request to whichever service the Ingress configuration defines, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    # by default the controller redirects (301) HTTP to HTTPS;
    # the following annotation disables that redirect.
    # ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
Use https://{host-ip}/ to visit myservice. The host should be the one the nginx controller is running on.
Outside
Normally you don't need another nginx outside kubernetes cluster.
Minikube is a little different, though: it runs Kubernetes in a virtual machine instead of directly on the host.
We need some port forwarding like host:80 => minikube:80, and running a reverse proxy (like nginx) on the host is an elegant way to do it.
It can also be done by setting up a port forward in VirtualBox's virtual networking.
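That VirtualBox option could be sketched with VBoxManage as below. The rule name and ports are hypothetical, and it assumes the minikube VM is named "minikube" and receives traffic on a NAT adapter, which is not the case for every minikube driver (the 192.168.42.x IP in this question suggests KVM, where the nginx reverse proxy approach applies instead):

```shell
# Hypothetical forwarding rule: host port 80 -> guest port 80.
VM_NAME=minikube
RULE="http,tcp,,80,,80"
# For a running VM (NAT adapter 1):
#   VBoxManage controlvm "$VM_NAME" natpf1 "$RULE"
# For a stopped VM:
#   VBoxManage modifyvm "$VM_NAME" --natpf1 "$RULE"
```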
As stated by @silverfox, you need an ingress controller. You can enable the ingress controller in minikube like this:
minikube addons enable ingress
Minikube runs on IP 192.168.42.135, according to minikube ip, and after enabling the ingress addon it listens on port 80 too. But that means a reverse proxy like nginx is required on the host, to proxy calls on port 80 through to minikube.
After enabling ingress on minikube, I created an ingress file (myservice-ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myservice.myhost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
Note that this differs from @silverfox's answer: it must contain the "host" field, which incoming requests are matched against.
Using this file, I created the ingress:
kubectl create -f myservice-ingress.yaml
Finally, I added a virtual host to nginx (running outside of minikube) to proxy traffic from outside into minikube:
server {
    listen 80;
    server_name myservice.myhost.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://192.168.42.135;
    }
}
The Host header must be passed through because the ingress uses it to match the service. If it is not passed through, minikube cannot match the request to the service.
Remember to restart nginx after adding the virtual host above.
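To check that the routing works, the ingress can be exercised directly with curl before (or instead of) going through the outer nginx. A sketch using the minikube IP and hostname from this setup; the requests hit a live cluster, so they are shown commented:

```shell
MINIKUBE_IP=192.168.42.135
HOST=myservice.myhost.com
# With the Host header set, the ingress matches its rule and forwards
# the request to myservice:
#   curl -H "Host: $HOST" "http://$MINIKUBE_IP/"
# Without the Host header, no rule matches and the controller answers
# from its default backend (typically a 404):
#   curl "http://$MINIKUBE_IP/"
```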
Use iptables to forward the host's ports to the minikube IP's ports:
# enable IP forwarding immediately (note: `sudo echo "1" > ...` would fail,
# because the redirection runs as the unprivileged user)
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
# make it persistent: set net.ipv4.ip_forward = 1 in /etc/sysctl.conf
sudo vim /etc/sysctl.conf
# enable iptables NAT:
sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# forward the host's ports 30000-32767 to the same ports on the minikube IP
sudo iptables -t nat -I PREROUTING -p tcp -d <host ip> --dport 30000:32767 -j DNAT --to <minikube ip>:30000-32767
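Rules added this way are lost on reboot. One way to persist them is sketched below, for Debian/Ubuntu with the iptables-persistent package; the host IP is a placeholder, and the commands touching the system are shown commented:

```shell
HOST_IP=203.0.113.10        # hypothetical public IP of the host
MINIKUBE_IP=192.168.42.135  # from `minikube ip`
# Build the DNAT rule from variables so the IPs are easy to swap:
RULE="PREROUTING -p tcp -d $HOST_IP --dport 30000:32767 -j DNAT --to $MINIKUBE_IP:30000-32767"
# Apply it, then save so it survives a reboot:
#   sudo iptables -t nat -I $RULE
#   sudo iptables-save | sudo tee /etc/iptables/rules.v4
```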
Related
My Kubernetes cluster runs on Ubuntu. When I run the application, the address part of the ingress is empty.
service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: docker-testmrv
  name: docker-testmrv-service
  namespace: jenkins
spec:
  selector:
    app: docker-testmrv
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8093
  type: LoadBalancer
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: docker-testmrv
  name: docker-testmrv-ingress
  namespace: jenkins
spec:
  rules:
  - host: dockertest.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-testmrv-service
            port:
              number: 80
  ingressClassName: nginx
As you can see in the picture below, the hosts part is empty. I also tried the following in the annotations section, but it didn't work. I've looked at and tried other sources as well.
nginx.ingress.kubernetes.io/rewrite-target: /$1
or
ingressclass.kubernetes.io/is-default-class: "true"
or
kubernetes.io/ingressClassName: nginx
kubectl get ing -n jenkins
First, we need to ensure the NGINX addon is enabled and the nginx-ingress-controller pod is in Running status.
Follow the steps below to verify:
To enable the NGINX Ingress controller, run the following command:
minikube addons enable ingress
Verify that the NGINX Ingress controller is running
kubectl get pods -n kube-system
As per your YAML, in the ingress rule, change the servicePort from 8093 to 80, the default HTTP port.
Now apply those files to create your pods, service and ingress rule. Wait a few moments; it takes a little while for your ingress rule to get an ADDRESS.
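A sketch for watching the ADDRESS appear, using the namespace and ingress name from the question (the kubectl calls need a live cluster, so they are shown commented):

```shell
NS=jenkins
ING=docker-testmrv-ingress
# Watch until the ADDRESS column fills in:
#   kubectl get ingress "$ING" -n "$NS" -w
# Or extract just the address once it exists:
#   kubectl get ingress "$ING" -n "$NS" \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```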
Refer to this SO link.
Updated answer:
Do Nodes have an external ip by default?
If you're using public nodes, each node will have a different public IP, and the IP can change every time a node is recreated.
So make sure you use the Service type LoadBalancer to get an external IP for your ingress. You can also use NodePort, but it might not give you an external IP; instead it opens a port (from the available range) on all the nodes.
Refer to this link for how ClusterIP, NodePort and LoadBalancer differ from each other.
Create the Service with type LoadBalancer and add the line ingressClassName: nginx to your ingress YAML definition. This will work. Refer to this SO answer.
The load balancer and ingress controller are working fine.
I installed an Nginx deployment and an Nginx service, then exposed the Nginx deployment on port 80.
#kubectl describe ing minimal-ingress
Name:             minimal-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  mywebsite.com
                 /     nginx-deployment:80 (10.52.2.38:80,10.52.2.39:80)
Annotations:     nginx.ingress.kubernetes.io/rewrite-target: /
Events:          <none>
It works fine when I access Nginx through a pod IP address:
#curl 10.52.2.38
From both the command line and the browser it works, with or without specifying port 80: just 10.52.2.38 or 10.52.2.39.
But when I try to access it using my host domain, I get:
#curl mywebsite.com
curl: (7) Failed to connect to mywebsite.com port 80: No route to host
I also cannot curl localhost or 127.0.0.1; it only works on the pod IPs.
I flushed my iptables and disabled ufw.
I added my domain to /etc/hosts, pointing at my load balancer IP, but it still doesn't work.
Thank you. I am a Kubernetes K3s user.
I finally found the answer to my problem.
What you need to add is ingressClassName: nginx, so basically it would look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress
spec:
  ingressClassName: nginx
good luck everyone
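One caveat worth checking on K3s: the bundled ingress controller is Traefik, so an nginx ingress class only exists if an nginx controller was installed. A sketch for inspecting the available classes (kubectl calls commented, as they need a live cluster):

```shell
CLASS=nginx
# List the ingress classes on the cluster; the default one carries the
# ingressclass.kubernetes.io/is-default-class=true annotation:
#   kubectl get ingressclass
#   kubectl describe ingressclass "$CLASS"
```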
I have set up an ingress for an application but want to whitelist my IP address. So I created this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
  name: ${INGRESS_NAME}
spec:
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          serviceName: ${SVC_NAME}
          servicePort: ${SVC_PORT}
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: tls-secret
But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is one of the cluster nodes' IPs, not my home IP.
I also created a configmap with this configuration:
data:
  use-forwarded-headers: "true"
In the nginx.conf in the container I can see that this has been correctly passed on and configured, but I still get a 403 Forbidden, still with only the cluster node's client IP.
I am running on an AKS cluster, and the nginx ingress controller is behind an Azure load balancer. The nginx ingress controller Service is exposed as type LoadBalancer, and the load balancer targets the NodePort opened by the Service.
Do I need to configure something else within Nginx?
If you've installed nginx-ingress with the Helm chart, you can simply configure your values.yaml file with controller.service.externalTrafficPolicy: Local, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with service.spec.externalTrafficPolicy: Local to achieve the same effect on those specific Services.
Here are some resources to further your understanding:
k8s docs - Preserving the client source IP
k8s docs - Using Source IP
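For reference, a sketch of what the controller Service ends up looking like with that setting applied; the metadata, selector, and ports here are illustrative, and the relevant line is externalTrafficPolicy:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical name of the controller Service
spec:
  type: LoadBalancer
  # Preserve the client source IP instead of SNATing it to a node IP;
  # traffic is then only routed to nodes that run a controller pod.
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```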
It sounds like you have your Nginx Ingress Controller behind a NodePort (or LoadBalancer) Service, or rather behind kube-proxy. Generally, to get your controller to see the raw connecting IP, you will need to deploy it with hostNetwork so it listens directly for incoming traffic.
I have a set of services that I want to expose behind an ingress load balancer. I selected nginx as the ingress because of its ability to force HTTP-to-HTTPS redirects.
Having an ingress config like
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-https
  annotations:
    # annotation values must be strings, so the booleans are quoted
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.org/ssl-services: "api,spa"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - api.some.com
    - www.some.com
    secretName: secret
  rules:
  - host: api.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 8080
  - host: www.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spa
          servicePort: 8081
GKE creates the nginx ingress load balancer, but also another load balancer with backends and everything, as if GCP rather than nginx had been selected as the ingress.
The screenshot below shows, in red, the two unexpected LBs and, in blue, the two nginx ingress LBs, one each for our qa and prod environments.
Output from kubectl get services:
xyz@cloudshell:~ (xyz)$ kubectl get services
NAME                            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
api                             NodePort       1.2.3.4      <none>        8080:32332/TCP,4433:31866/TCP   10d
nginx-ingress-controller        LoadBalancer   1.2.6.9      12.13.14.15   80:32321/TCP,443:32514/TCP      2d
nginx-ingress-default-backend   ClusterIP      1.2.7.10     <none>        80/TCP                          2d
spa                             NodePort       1.2.8.11     <none>        8082:31847/TCP,4435:31116/TCP   6d
Screenshot from the GCP GKE services view showing the ingress with wrong info.
Is this expected?
Did I miss any configuration that would prevent this extra load balancer from being created?
On GCP GKE, the GCP ingress controller is enabled by default and will always create a new LB for any Ingress definition, even if the ingress.class is specified:
https://github.com/kubernetes/ingress-nginx/issues/3703
So to fix it, we should remove the GCP ingress controller from the cluster, as mentioned at https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller
When you create a deployment on a GKE cluster, you have two ways to expose it:
- Use a Service of type LoadBalancer and expose it - this creates a TCP load balancer.
- Create a Service of type NodePort or ClusterIP and expose it as an Ingress - this creates an HTTP load balancer.
If you can see both of them among your load balancers, you have probably created a Service of type LoadBalancer and then also exposed it as an Ingress. You are opening the same deployment up for access from two different IPs: by the Service and by the Ingress. To confirm this, try:
$ kubectl get ingress
$ kubectl get svc
You will get two IPs from these two commands, and both will show you the same page.
The better way to configure it is to have a Service of type NodePort and expose that Service as an Ingress. This is especially useful because you can use the same Ingress to expose more services.
This way you reduce the number of exposed IPs (and save money by not running several load balancers).
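A sketch of that setup: one Ingress fanning out to two NodePort Services. The names and ports are illustrative, and the API version matches the one used elsewhere in this thread:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-and-spa   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: api.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api   # a Service of type NodePort
          servicePort: 8080
  - host: www.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spa   # a second NodePort Service behind the same Ingress
          servicePort: 8081
```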
I'm trying to set up a Let's Encrypt SSL certificate using cert-manager.
I have successfully deployed cert-manager via Helm and am stuck configuring ingress.yaml.
$ sudo kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/master/docs/tutorials/quick-start/example/ingress.yaml
I've got this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: kuard
  namespace: default
spec:
  rules:
  - host: example.example.com
    http:
      paths:
      - backend:
          serviceName: kuard
          servicePort: 80
        path: /
  tls:
  - hosts:
    - example.example.com
    secretName: quickstart-example-tls
So I just replaced the host example.example.com with my external IP and got this:
A copy of your changes has been stored to "/tmp/kubectl-edit-qx3kw.yaml"
The Ingress "kuard" is invalid: spec.rules[0].host: Invalid value: must be a DNS name, not an IP address
Is there any way to set it up using just my external IP? I haven't yet chosen a domain name for my app and want to use a plain IP for demoing and playing around.
No, you cannot use an IP address for the Ingress host. To use an IP address, you'd need to point it at your worker nodes and create a NodePort Service, which will let you browse to http://IP:NODEPORT.
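As a workaround while there is no domain, the hostname can be faked client-side so the Ingress host rule still matches. A sketch with placeholder IP and hostname; the live commands are shown commented:

```shell
HOST=example.example.com
IP=203.0.113.10   # hypothetical external IP of the ingress
# Tell curl to resolve the hostname to the IP, bypassing DNS:
#   curl --resolve "$HOST:80:$IP" "http://$HOST/"
# Or add a hosts-file entry for browser testing:
#   echo "$IP $HOST" | sudo tee -a /etc/hosts
ENTRY="$IP $HOST"
```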