I'm having a problem where my NGINX headers are not showing when I connect to the ingress controller's IP. For example, say I have an HTTP header called "x-Header" set up in my NGINX ConfigMap. If I go to the ingress controller's IP I don't see the header, but when I go to the NGINX load balancer IP, I do see the headers. Also, when I go to the ingress IP I get the correct cert, but the NGINX IP gives me a self-signed cert. I don't think I have the controller set up right.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: staticip
  labels:
    app: exam
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: tls-secret
  backend:
    serviceName: example-app
    servicePort: 80
Ingress node:
apiVersion: v1
kind: Service
metadata:
  name: exam-node
  labels:
    app: exam
spec:
  type: NodePort
  selector:
    app: example-app
    tier: backend
  ports:
  - port: 80
    targetPort: 80
To create the NGINX controller I used the command helm install stable/nginx-ingress --name nginx-ingress --set rbac.create=true
My nginx controller logs say this:
2018-07-29 19:57:11.000 CDT
updating Ingress default/example-ingress status to [{12.346.151.45 }]
It seems the nginx controller knows about the Ingress, but the Ingress doesn't know about the controller. I am probably confused about how Ingress and NGINX work together.
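If the intention is for the ingress-nginx controller itself to add response headers, that is normally wired up by pointing the controller's ConfigMap at a second ConfigMap via the add-headers key. A minimal sketch, assuming the header should come from the controller rather than a separate NGINX deployment (all names and the namespace are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  x-Header: "some-value"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Every key in the referenced ConfigMap is added as a response header
  # (the value format is <namespace>/<configmap-name>).
  add-headers: "ingress-nginx/custom-headers"
Headers defined this way only show up on responses that actually pass through the controller, not on anything served by a separate NGINX instance.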
Related
I deployed the nginx ingress controller and randomApp to a minikube cluster.
I have 2 requirements:
All traffic for "random/.*" should go to the random service
All other paths should go to nginx.
Is this configuration correct?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-rule-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  defaultBackend:
    service:
      name: ingress-nginx-controller
      port:
        number: 80
  rules:
  - host: random.localdev.me
    http:
      paths:
      - path: /random/(.*)
        backend:
          service:
            name: random
            port:
              number: 8000
        pathType: Prefix
You also need to add metadata.annotations: kubernetes.io/ingress.class: "nginx" or spec.ingressClassName: nginx to allow the nginx ingress controller to discover the Ingress.
Also, you shouldn't define the default backend service as ingress-nginx-controller: you will get 503 Service Temporarily Unavailable, because the controller's nginx.conf is not configured for this. Point the default backend at another nginx server's service instead.
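A sketch of what that could look like, assuming a separate plain-nginx Deployment exposed through a service of the same name (both the service name and the switch of pathType to ImplementationSpecific are my assumptions; the latter is how regex paths are usually declared with ingress-nginx):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-rule-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx            # lets the controller pick this Ingress up
  defaultBackend:
    service:
      name: plain-nginx              # a separate nginx service, not the controller itself
      port:
        number: 80
  rules:
  - host: random.localdev.me
    http:
      paths:
      - path: /random/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: random
            port:
              number: 8000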
I hope you are all doing great today. Here is my situation:
I have 2 identical WordPress websites:
-- the 1st one is an App Service WordPress site in Azure with a domain name, e.g. https://wordpress.azurewebsites.net
-- the 2nd one is in an AKS cluster as a pod, with a load balancer that exposes it to the internet on a public IP
What I want to do:
I want to take the domain name from the App Service and give it to the AKS pod.
What I did:
I changed the domain name from the dashboard and changed the load balancer's public IP address. It didn't work, and now I can't access the dashboard from the load balancer IP address either.
I'm new to Kubernetes; I hope someone can guide me in the right direction on how to do this.
Seems like you are missing an ingress controller. You could for example install ingress-nginx and expose the ingress with this service config:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 53.1.1.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
You can now create a service for your app:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: app
spec:
  type: ClusterIP
  ports:
  - name: service
    port: 80
  selector:
    app: your-app
Then you can expose your app with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - wordpress.azurewebsites.net
  rules:
  - host: wordpress.azurewebsites.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
I'm using ingress-nginx-controller (0.32.0) and am attempting to point an ExternalName service at a URL and yet it’s stuck in a loop of 308 Redirects. I've seen plenty of issues out there and I figure there’s just one thing off with my configuration. Is there something really small that I'm missing here?
ConfigMap for NGINX Configuration:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"
  use-forwarded-headers: "true"
  proxy-real-ip-cidr: "0.0.0.0/0" # restrict this to the IP addresses of ELB
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
  backend-protocol: "HTTPS"
  ssl-redirect: "false"
  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }
    server {
      listen 8080 proxy_protocol;
      return 308 https://$host$request_uri;
    }
NGINX Service:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "XXX"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: http
Ingress Definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-members-portal
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo-111.dev.bar.com
    http:
      paths:
      - path: /login
        backend:
          serviceName: foo-service
          servicePort: 80
ExternalName Service:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo.staging.bar.com
  selector:
    app: foo
EDIT
I figured it out! I wanted to point to a service in another namespace, so I changed the ExternalName service to this:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo-service.staging.svc.cluster.local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: foo
I believe the issue you're seeing is that your external service isn't working the way you think it is. In your ingress definition you point the backend at port 80 of foo-service. In theory, this would send you back to your ingress controller's ELB, redirect your request to https://foo.staging.bar.com, and move on.
However, ExternalName services don't really work that way. Essentially, all externalName does is have KubeDNS/CoreDNS answer a lookup for the service name with a CNAME record. It doesn't handle redirects of any kind.
For example, in this case, a request to foo.staging.bar.com:80 would come back as a redirect to foo.staging.bar.com:443. You are directing the request for that site to port 80, which in turn sends it to port 8080 on the ingress controller, which then redirects that request back out to the ELB's port 443. That redirect logic doesn't carry over to the ExternalName service.
The problem here, then, is that your app will essentially then try to do this:
http://foo-service:80 --> http://foo.staging.bar.com:80/login --> https://foo.staging.bar.com:443
My expectation is that you never actually reach the third step. Why? First, because foo-service:80 is not directing you to port 443; second, all CoreDNS is doing in the background is resolving foo-service's external name, which is foo.staging.bar.com. It's not handling any kind of redirection. So depending on how the host returned to your app is handled, your app may never actually get to that site and port. Rather than reaching it, your app just keeps looping back to http://foo-service:80 for those requests, which will always result in a 308 loop.
The key here is that foo-service is the host header being sent to the NGINX ingress controller, not foo.staging.bar.com. So on a redirect to 443, my expectation is that you are really just hitting foo-service, and any redirects are being improperly sent back to foo-service:80.
A good way to test this is to run curl -L -v http://foo-service:80 and see what happens. That will follow all redirects from that service, and provide you context as to how your ingress controller is handling those requests.
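If you want to see the CNAME behaviour for yourself, a quick check from a throwaway pod would look something like this (a sketch; the dev namespace is assumed from your Ingress definition):
# Resolve the ExternalName service from inside the cluster; it should come back
# as a CNAME to foo.staging.bar.com rather than a cluster IP.
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
    nslookup foo-service.dev.svc.cluster.local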
It's really hard to give more information, as I don't have access to your setup directly. However, if you know that your app is always going to be hitting port 443, it would probably be a good fix, in this case, to change your ingress and service to look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-members-portal
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo-111.dev.bar.com
    http:
      paths:
      - path: /login
        backend:
          serviceName: foo-service
          servicePort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo.staging.bar.com
That should ensure you don't hit any kind of HTTPS redirect. Unfortunately, this may also cause issues with SSL validation, but that would be another issue altogether. The last piece of advice I can offer is to possibly just use foo.staging.bar.com itself, rather than an ExternalName service, in this case.
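One other option, which is not part of the original answer: ingress-nginx has annotations aimed at exactly this ExternalName pattern. A sketch, assuming the upstream really serves HTTPS on foo.staging.bar.com (the annotation choice is my suggestion):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-members-portal
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Talk HTTPS to the upstream and present the real hostname to it,
    # so its own redirects aren't aimed back at foo-service.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "foo.staging.bar.com"
spec:
  rules:
  - host: foo-111.dev.bar.com
    http:
      paths:
      - path: /login
        backend:
          serviceName: foo-service
          servicePort: 443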
For more information, see: https://kubernetes.io/docs/concepts/services-networking/service/#externalname
Hope this helps!
I have 2 nodes in a Kubernetes cluster on GCP. I also have a load balancer in GCP. This is a regular cluster (not GKE). I am trying to expose my front-end service to the world, using nginx-ingress with a NodePort service as a solution. What should my load balancer be pointing to? Is this a good architecture approach?
world --> GCP-LB --> nginx-ingress-resource(GCP k8s cluster) --> services(pods)
To access my site I would have to point the LB at the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: mycha-app
nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: nginx-ingress
nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service
          servicePort: 80
This configuration is not working.
When you use an Ingress in front of your workload pods, the service for the workload pods should be of type ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, either with a NodePort service or with a LoadBalancer service; for production a LoadBalancer service is recommended.
This is the recommended pattern.
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
The ingress controller avoids relying on kube-proxy's load balancing; you can configure layer 7 load balancing in the Ingress itself.
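A minimal sketch of that wiring, reusing the labels from your manifests (the service names here are placeholders):
# Exposes the ingress controller itself; the cloud provider allocates the LB IP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
---
# The workload service stays internal; the Ingress routes to it by name.
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
spec:
  type: ClusterIP
  selector:
    app: mycha-app
  ports:
  - port: 80
    targetPort: 3000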
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (Mostly to redirect traffic) > Service
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue, I also couldn't obtain an IP address for the LoadBalancer (it stayed in the <Pending> state); however, you can expose your application using a NodePort and the VM's IP. I will try a few other configurations to obtain an ExternalIP and will edit the answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE, your VM already has an external IP. This way you can just use a NodePort Service and an Ingress to redirect traffic to the proper services.
Deploy the Nginx Ingress controller using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy the controller service with type LoadBalancer, but it won't get an external IP and will be stuck in the <Pending> state. You have to change it to NodePort and apply the change.
$ kubectl edit svc nginx-nginx-ingress-controller
By default this opens the vi editor. If you want a different one, you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
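If you prefer not to edit the service interactively, the same change can be made with kubectl patch (a sketch; the service name comes from the helm release above):
# Switch the controller service from LoadBalancer to NodePort in one step.
$ kubectl patch svc nginx-nginx-ingress-controller -p '{"spec": {"type": "NodePort"}}'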
Now you can deploy the service, deployment and ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /mycha
        backend:
          serviceName: mycha-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-nginx-ingress-controller NodePort 10.105.247.148 <none> 80:31143/TCP,443:32224/TCP 97m
Now you should use your VM's external IP (the worker VM) with the port from the NodePort service. My VM external IP: 35.228.133.12, service: 80:31143/TCP,443:32224/TCP
IMPORTANT
If you curl your VM on that port at this point, you would get this response:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or port range.
Information about creating firewall rules can be found here.
If you set the proper values (open the ports, set the source IP range to 0.0.0.0/0, etc.) you will be able to reach the service from your machine.
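For example, a rule opening the whole default NodePort range could look like this (a sketch; the rule name is a placeholder, and 0.0.0.0/0 is wide open, so restrict it for anything beyond a quick test):
# Allow external traffic to the default Kubernetes NodePort range on the nodes.
$ gcloud compute firewall-rules create allow-nodeports \
    --allow tcp:30000-32767 \
    --source-ranges 0.0.0.0/0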
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6
I'm using Kubernetes ingress-nginx and this is my Ingress spec. http://example.com works fine, as expected. But when I go to https://example.com it also works, except it points to the default backend with the Fake Ingress Controller certificate. How can I disable this behaviour? I want to disable listening on HTTPS entirely for this particular ingress, since there is no TLS configured.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: http-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-deployment
          servicePort: 80
I've tried the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation; however, it has no effect.
I'm not aware of an ingress-nginx configmap value or ingress annotation to easily disable TLS.
You could remove port 443 from your ingress controller's service definition.
Remove the https entry from the spec.ports array:
apiVersion: v1
kind: Service
metadata:
  name: mingress-nginx-ingress-controller
spec:
  ports:
  - name: https
    nodePort: NNNNN
    port: 443
    protocol: TCP
    targetPort: https
nginx will still be listening on a TLS port, but no clients outside the cluster will be able to connect to it.
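After removing that entry, the controller service would only expose HTTP, roughly like this (a sketch; the nodePort value is a placeholder and type NodePort is assumed from the nodePort field above):
apiVersion: v1
kind: Service
metadata:
  name: mingress-nginx-ingress-controller
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: NNNNN
    port: 80
    protocol: TCP
    targetPort: http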
Redirection is not involved in your problem.
The ingress controller is listening on both ports, 80 and 443. When you configure an ingress with only port 80, reaching port 443 sends you to the default backend, which is the expected behaviour.
A solution is to add another nginx controller that only listens on port 80, and then configure your ingresses with kubernetes.io/ingress.class: myingress.
When creating the new nginx controller, add --ingress-class=myingress to the daemonset's command; it will then handle only ingresses annotated with this class.
If you use helm to deploy it, simply override the controller.ingressClass value.
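A sketch of what that could look like with the old stable/nginx-ingress chart (the release name is a placeholder):
# Deploy a second controller that only watches ingresses with class "myingress".
$ helm install stable/nginx-ingress --name http-only-ingress \
    --set controller.ingressClass=myingress \
    --set rbac.create=true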