I'm trying to set up an nginx ingress (NodePort) on Google Container Engine with proxy protocol so that the real client IP can be forwarded to the backend service, but I end up with a broken header error.
2017/02/05 13:48:52 [error] 18#18: *2 broken header: "�����~��]H�k��m[|����I��iv.�{y��Z �嵦v�Ȭq���2Iu4P�z;� o$�s����"���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.50.0.1, server: 0.0.0.0:443
Without the proxy protocol everything works well. According to https://blog.mythic-beasts.com/2016/05/09/proxy-protocol-nginx-broken-header/ this happens because protocol v2 (binary) is being used, while nginx can only speak v1. Any suggestions?
GKE: With Kubernetes v1.6+ the source IP is preserved by default and can be found in the x-real-ip header without setting any extra nginx config.
AWS: The source IP can be preserved by adding this annotation to the Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: nginx-ingress
Check out this link:
https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx
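Note that when the load balancer sends proxy protocol, the nginx ingress controller itself also needs to be told to expect it, which is done in the controller's ConfigMap. A minimal sketch; the ConfigMap name and namespace are assumptions and must match whatever your controller deployment points at:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; use the ConfigMap your controller is started with
  namespace: ingress-nginx    # assumed namespace
data:
  use-proxy-protocol: "true"  # nginx will then parse the text (v1) PROXY header sent by the load balancer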
Just ran into this problem myself. For me, I wasn't behind a load balancer (other than my nginx ingress), so I did not actually need proxy-protocol set.
However, I was still getting 127.0.0.1 as the client IP. The catch is that there was a bug in the version of the nginx ingress controller I was using (0.9.0-beta.5). Updating my container image to gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.8 fixed the issue and I received the proper X-Forwarded-For header.
Note that later versions (up to beta.11 at the time of writing) still had the issue, so I've stayed on beta.8 for the time being.
You can see the versions available at https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/nginx-ingress-controller.
If you want to look at the configuration options available, check out https://github.com/kubernetes/ingress/tree/master/controllers/nginx.
I had this problem myself, and updating to version beta.8 of the nginx controller was what finally made it work.
In case some people using AWS want to learn from my mistakes: don't configure the load balancer manually through the AWS CLI. The service annotation mentioned above does it all for you. I could have saved myself a lot of headache if I had realized that.
Related
We're running an NGINX Ingress Controller as the 'front door' to our EKS cluster.
Our upstream apps need the client IP to be preserved, so I've had my ingress configmap configured to use the proxy protocol:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  custom-http-errors: 404,503,502,500
  ssl-redirect: "true"
  ssl-protocols: "TLSv1.2"
  ssl-ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES128-SHA256"
  use-proxy-protocol: "true"
  proxy-real-ip-cidr: "0.0.0.0/0"
This sends the X-Forwarded-For header with the client IP to the upstream pods. This seemed like it was working well, but once our apps started to receive heavier traffic, our monitors would occasionally report connection timeouts when connecting to the apps on the cluster.
I was able to reproduce the issue in a test environment using JMeter. Once I set use-proxy-protocol to false, the connection timeouts no longer occurred, so I started to look into the use-proxy-protocol setting.
The ingress docs describe the use-proxy-protocol setting here
However, the docs also mention the settings "enable-real-ip" and "forwarded-for-header".
At the link provided in the description of enable-real-ip, it says that I can set forwarded-for-header to the value proxy_protocol.
Based on this, I've updated my Ingress configmap to:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx-test
data:
  custom-http-errors: 404,503,502,500
  ssl-redirect: "true"
  ssl-protocols: "TLSv1.2"
  ssl-ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES128-SHA256"
  enable-real-ip: "true"
  forwarded-for-header: "proxy_protocol"
  proxy-real-ip-cidr: "0.0.0.0/0"
This configuration also properly sends the X-Forwarded-For header with the client IP to the upstream pods. However, it also seems to eliminate the connection timeout issues I was seeing: with this setup, performance does not degrade anywhere near as badly as I ramp up the thread count in JMeter.
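For reference, the real-ip keys in the second ConfigMap roughly correspond to directives from nginx's ngx_http_realip_module. This is only an illustrative sketch of what ends up in the rendered nginx.conf, not copied from an actual controller:

real_ip_header      proxy_protocol;   # take the client address from the PROXY header (forwarded-for-header)
real_ip_recursive   on;
set_real_ip_from    0.0.0.0/0;        # trust every peer, per proxy-real-ip-cidr above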
I would like to better understand the difference between these two configurations. I'd also like to know which is the best-practice, most widely adopted method of achieving this among Kubernetes shops, since this is likely a common use case.
I'm new to Kubernetes and want to use the NGINX Ingress Controller for the project I'm currently working on. I read some of the docs and watched some tutorials, but I haven't really understood:
the installation process (should I use Helm, the git repo?)
how to properly configure the Ingress. For example, the Kubernetes docs say to use an nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend), which is never mentioned in the actual NGINX docs; they say to use ConfigMaps or annotations
Does anybody know of a blog post or tutorial that makes these things clear? Out of everything I've learned so far (both frontend and backend), developing and deploying to a cloud environment has got me the most lost. I've been stuck on a problem for a week and want to figure out if Ingress can help me.
Thanks!
Answering:
How should I install nginx-ingress?
There is no one correct way to install nginx-ingress. Each way has its own advantages and disadvantages, each Kubernetes cluster could require different treatment (for example, cloud-managed Kubernetes vs. minikube), and you will need to determine which option is best suited for you.
You can choose from running:
$ kubectl apply -f ...,
$ helm install ...,
terraform apply ... (helm provider),
etc.
How should I properly configure Ingress?
Citing the official documentation:
An API object that manages external access to the services in a cluster, typically HTTP.
-- Kubernetes.io: Docs: Concepts: Services networking: Ingress
Basically Ingress is a resource that tells your Ingress controller how it should handle specific HTTP/HTTPS traffic.
Speaking specifically about nginx-ingress, the entrypoint your HTTP/HTTPS traffic should be sent to is a Service of type LoadBalancer named ingress-nginx-controller (in the ingress-nginx namespace). With the Docker Desktop Kubernetes implementation it will bind to the localhost of your machine.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
This modified example from the documentation tells your Ingress controller to pass traffic for any Host and with path / (every path) to a Service named nginx on port 80.
Once applied, the above configuration will be reflected by ingress-nginx in the /etc/nginx/nginx.conf file.
A side note!
Take a look at how part of nginx.conf looks when you apply the above definition:
location / {
    set $namespace      "default";
    set $ingress_name   "minimal-ingress";
    set $service_name   "nginx";
    set $service_port   "80";
    set $location_path  "/";
    set $global_rate_limit_exceeding n;
For what your specific Ingress manifest should look like, you'll need to consult the documentation of the software you are trying to send your traffic to, along with the ingress-nginx docs.
Addressing the part:
how to properly configure the Ingress. For example, the Kubernetes docs say to use a nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend) which is never mentioned in the actual NGINX docs. They say to use ConfigMaps or annotations.
You typically don't modify the nginx.conf that the Ingress controller is using yourself. You write an Ingress manifest and the rest is taken care of by the Ingress controller and Kubernetes. The nginx.conf in the Pod responsible for routing (your Ingress controller) will reflect your Ingress manifests.
ConfigMaps and annotations can be used to modify or alter the configuration of your Ingress controller. With the ConfigMap you can, for example, enable gzip compression, and with an annotation you can apply a specific rewrite.
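For example (an illustrative sketch only; the ConfigMap name/namespace must match your controller deployment, and rewrite-example is a made-up Ingress name):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-gzip: "true"            # controller-wide setting via the ConfigMap
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example       # hypothetical Ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # per-Ingress setting via an annotation
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80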
To make things clearer: the guide referenced here uses a frontend Pod with nginx installed that passes requests to a backend. Apart from using nginx and forwarding traffic, this example is not connected with an actual Ingress. It will not acknowledge the Ingress resource and will not act according to the manifest you've passed.
A side note!
Your traffic would be directed in the following manner (simplified):
Ingress controller -> frontend -> backend
Speaking from a personal perspective, this example is more of a guide on how to connect a frontend and a backend, and not about Ingress.
Additional resources:
Stackoverflow.com: Questions: Ingress nginx how to serve assets to application
Stackoverflow.com: Questions: How nginx ingress controller backend protocol annotation works
The guide that I wrote some time ago should help you with the idea of how to configure a basic Ingress (it could be a little outdated):
Stackoverflow.com: Questions: How can I access nginx ingress on my local
The most straightforward way to install the nginx ingress controller (or any other, for that matter) is with Helm. This requires a basic understanding of Helm and how to work with Helm charts.
Here's the repo: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx
Follow the instructions there; it is quite straightforward if you use the default values. For the configuration, you can also customize the chart before installing. Look at the README to see all the configurable options.
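Roughly, the Helm route looks like this (based on the chart's README at the time of writing; double-check there, as the repo URL and chart values can change):

# add the chart repo and install the controller into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# list all configurable values of the chart
helm show values ingress-nginx/ingress-nginx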
Hope this helps as a starting point.
GOAL
I want to access the Kubernetes dashboard through a standalone nginx service and a microk8s NodePort service.
CONTEXT
I have a linux server.
On this server, there are several running services such as:
microk8s
nginx (note: I am not using Ingress; the nginx service works independently from microk8s).
Here is the workflow that I am looking for:
http:// URL /dashboard
NGINX service (FROM http:// URL /dashboard TO nodeIpAddress:nodeport)
nodePort service
kubernetes dashboard service
ISSUE:
However, each time I request http:// URL /dashboard I receive a 502 Bad Gateway response. What am I missing?
CONFIGURATION
Please find below the nginx configuration, the NodePort service configuration, and the status of the microk8s cluster:
nginx configuration: /etc/nginx/sites-available/default
node-port-service configuration
node ip address
microk8s namespaces
Thank you very much for your help.
I'll summarize the whole problem and solutions here.
First, the service which needs to expose the Kubernetes Dashboard needs to point at the right target port, and also needs to select the right Pod (the kubernetes-dashboard Pod).
If you check your service with:
kubectl describe service <service-name>
you can easily see if it's selecting a Pod (or more than one) or nothing by looking at the Endpoints section. In general, your service should have the same selector, port, targetPort and so on as the standard kubernetes-dashboard service (which exposes the dashboard, but only internally to the cluster).
Second, your NGINX configuration proxies the location /dashboard to the service, but the problem is that the kubernetes-dashboard Pod expects requests to reach / directly, so the path /dashboard means nothing to it.
To solve the second problem there are a few ways, but they all lie in the NGINX configuration. If you read the documentation of the proxy module (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) you can see that the solution is to add a URI in the configuration, something like this:
proxy_pass https://51.68.123.169:30000/
Notice the trailing slash: that is the URI, and it means that the part of the location matching the proxy rule is rewritten to /. In other words, your_url/dashboard just becomes your_url/.
Without the trailing slash, your location is passed to the target as it is, since the target is only an endpoint.
If you need more complex URI changes, what you're looking for is a rewrite rule (they support regex and a lot more), but adding the trailing slash should solve your second problem.
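Putting it together, the relevant nginx location block could look roughly like this (a sketch using the upstream address from the question; adjust the path and upstream to your setup):

location /dashboard/ {
    # trailing slash on proxy_pass: a request for /dashboard/foo is forwarded upstream as /foo
    proxy_pass https://51.68.123.169:30000/;
}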
Indeed @AndD, you advised me to execute this command:
sudo microk8s kubectl describe service -n kube-system kubernetes-dashboard
in order to get the information below:
Labels:       k8s-app=kubernetes-dashboard
TargetPort:   8443/TCP
Thanks to this information I could fix the NodePort service; you can find a snippet below:
spec:
  type: NodePort
  selector:
    k8s-app: 'kubernetes-dashboard'
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
      nodePort: 30000
However, I did change the nginx configuration to
proxy_pass https://51.68.123.169:30000/
I do receive a successful response (the HTML), but then all the remaining requests (js, css, assets) return a 404 status.
edit
The HTML file references a set of dependencies (js/img/css):
<link rel="stylesheet" href="styles.3aaa4ab96be3c2d1171f.css"></head>
...
<script src="runtime.3e2867321ef71252064e.js" defer></script>
So it tries to fetch these assets from these URLs:
https:// URL/styles.3aaa4ab96be3c2d1171f.css
https:// URL/runtime.3e2867321ef71252064e.js
instead of using:
https:// URL/dashboard/styles.3aaa4ab96be3c2d1171f.css
https:// URL/dashboard/runtime.3e2867321ef71252064e.js
edit #2
I just changed the subpath again, from dashboard/ to dash/
new nginx conf
And it works with Chromium.
But it doesn't with Firefox (not a big deal).
Thank you very much @AndD!
Besides, I had a similar issue with Jenkins; however, the Jenkins image provides a parameter that fixes it:
docker run --publish 8080:8080 --env JENKINS_OPTS="--prefix=/subpath" jenkins/jenkins
I was expecting to find something similar with kubernetesui/dashboard but I haven't found anything:
https://hub.docker.com/r/kubernetesui/dashboard
https://github.com/kubernetes/dashboard
Well, I don't know how to configure nginx well enough to display the dashboard correctly under a subpath, and I didn't find any parameter in the kubernetesui/dashboard image to handle the subpath.
I have deployed a private Docker registry (image registry:2) in a Kubernetes cluster and exposed it via an Ingress. I am using the nginxinc/kubernetes-ingress (not: kubernetes/ingress-nginx) NGINX ingress controller.
curl https://my_registry/v2/_catalog works fine. But docker push into the registry runs into this error: Pushing ... 100.6MB/100.6MB ... 413 Request Entity Too Large.
From what I know, this can be mitigated by instructing the NGINX ingress controller to accept larger chunks of data. I have, for example, tried adding the annotation nginx.ingress.kubernetes.io/proxy-body-size: "200m" to my Ingress specification (as suggested here), but this has not worked so far.
So what is the right way for instructing an nginxinc/kubernetes-ingress NGINX ingress controller to accept sufficiently large chunks?
UPDATE: I have meanwhile concluded that nginxinc/kubernetes-ingress does not take its configuration from annotations but from a ConfigMap named nginx-config that resides in the same namespace as the NGINX ingress controller. I have now added such a ConfigMap with data client-max-body-size: "200m", but the problem still persists.
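For reference, the ConfigMap approach described above would look roughly like this (the namespace is an assumption; it has to be the ConfigMap the controller is actually pointed at):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress    # assumed; must be the namespace/name the controller reads its config from
data:
  client-max-body-size: "200m"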
You need to set the annotation:
nginx.org/client-max-body-size: "200m"
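On the Ingress resource that would be, roughly (a metadata fragment only; my-registry-ingress is a placeholder name):

metadata:
  name: my-registry-ingress   # placeholder
  annotations:
    nginx.org/client-max-body-size: "200m"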
I have switched from NGINX Inc.'s to Kubernetes' NGINX ingress controller, and there adding the following annotation to the Ingress' metadata proved sufficient:
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: 500m
Issue:
When working with K8s [Kubernetes] in development, I'm running into an issue where my Ingress/Nginx seems to keep my client side (React) from pulling data from my API (Flask/Python).
Details:
The connection between the client and the API is facilitated by an environment variable that we'll call API_URL for the sake of this post. API_URL is used so that the client knows which API routes to GET and POST.
On Minikube with K8s in dev, the Minikube IP that is provided forces https from what I understand (or maybe it's ingress/nginx?). The API_URL environment variable is set to value: api-cluster-ip-service. However, when I hit the dev site, this value ends up being assigned http://localhost (not https).
This causes: Blocked loading mixed active content “http://localhost/server/stuff". As a result, I can't pull anything from my API.
Question:
Is there a recommended approach for this? Perhaps a way to turn https off in dev (I don't even know if that's possible)? Or maybe I need a certificate for localhost? I'm fairly new to Kubernetes, so any help is much appreciated!
Ingress-server.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: client-cluster-ip-service
          servicePort: 3000
      - path: /api/
        backend:
          serviceName: server-cluster-ip-service
          servicePort: 5000
Ingress Namespace output
kubectl get ing --all-namespaces
default ingress-service * 10.0.2.15 80 4d21h
I found the cause of my problem... and the error message was fairly misleading. In a local environment, my client side talks to my API via http://localhost/api/. However, I realized that because I was on Minikube, the API is no longer on localhost (Minikube has its own IP). Once I changed my API_URL to the Minikube IP, it began working immediately.
The only challenge here is that Minikube changes its IP when it is stopped and restarted, meaning I need to grab the new IP and update API_URL each time. However, that's a separate question/answer.
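A minimal sketch of how that could be scripted (assuming the client reads API_URL at build/start time, as described in the question):

# grab the current Minikube IP and rebuild API_URL from it
export API_URL="http://$(minikube ip)"
echo "$API_URL"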
Summary:
Changed my API_URL from http://localhost to the Minikube IP. Began working immediately.