Nginx proxy on Kubernetes

I have an nginx deployment in a k8s cluster which proxies my /api calls like this:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass http://backend-dev/api;
    }
}
This works most of the time; however, sometimes when the API pods aren't ready, nginx fails with this error:
nginx: [emerg] host not found in upstream "backend-dev" in /etc/nginx/conf.d/default.conf:12
After a couple of hours exploring the internet, I found an article describing pretty much the same issue. I've tried this:
location /api {
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
Now nginx returns 502.
And this:
location /api {
    resolver 10.0.0.10 valid=10s;
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
Nginx returns 503.
What's the correct way to fix it on k8s?

If your API pods are not ready, Nginx won't be able to route traffic to them.
From the Kubernetes documentation:
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
If you are not using liveness or readiness probes, then your pod will be marked as "ready" even if the application running inside the container has not finished its startup process and is not yet ready to accept traffic.
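For illustration, a minimal readiness probe on the backend Deployment might look like the sketch below; the /healthz path, the port, and the image are placeholders rather than values from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-dev
  template:
    metadata:
      labels:
        app: backend-dev
    spec:
      containers:
      - name: api
        image: example/backend:latest     # placeholder image
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /healthz                # assumed health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10

Until this probe succeeds, the pod's address is withheld from the Service's endpoints, which is exactly the "removed from Service load balancers" behavior quoted above.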
The relevant section regarding Pods and DNS records is in the Kubernetes DNS documentation:
Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster-domain.example), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
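This is also the likely reason the resolver variant in the question failed: nginx's own resolver does not apply the pod's DNS search domains, so the bare name backend-dev never resolves. A sketch of the usual fix, assuming the Service lives in the default namespace and reusing the cluster DNS address 10.0.0.10 from the question:

location /api {
    resolver 10.0.0.10 valid=10s;
    # Fully qualify the Service name; nginx will not expand "backend-dev"
    # via the pod's DNS search domains the way the cluster resolver does.
    set $upstreamName backend-dev.default.svc.cluster.local;
    # With a variable and no URI part, the original request URI
    # (/api/...) is passed upstream unchanged.
    proxy_pass http://$upstreamName;
}

Because the name is resolved per request (and re-resolved every valid=10s), nginx also starts cleanly even while the backend pods are not yet ready.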
UPDATE: I would suggest using NGINX as an ingress controller.
When you use NGINX as an ingress controller, the NGINX service starts successfully and whenever an ingress rule is deployed, the NGINX configuration is reloaded on the fly.
This will help you avoid NGINX pod restarts.
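A minimal Ingress replacing the proxy rule above might look like this sketch (the resource name and Service port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-dev
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-dev
            port:
              number: 80          # assumed Service port

The controller watches Service endpoints, so pods that are not ready are simply skipped instead of breaking the proxy configuration.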

Related

How to create a reverse proxy that forwards traffic to a Kubernetes ingress controller such as HAProxy ingress or nginx ingress

I tried to forward traffic from server 192.168.243.71 to the domain shown by "oc get routes" / "kubectl get ingress", but it's not as simple as that. The fact is, my Nginx reverse proxy on server 192.168.243.x forwards the request to the IP address of the load balancer instead of the real domain that I wrote in nginx.conf.
I was expecting it to show the same result as when I access the domain listed in "oc get routes" or "kubectl get ingress" via a web browser.
Solved by adding set $backend mydomainname.com in the server block, plus a DNS resolver (resolver 192.168.45.213;) and proxy_pass http://$backend; in the location block, as in the answer below.
You can add set $backend mydomainname.com in the server block; you also need to add a DNS resolver (resolver 192.168.45.213;) and proxy_pass http://$backend; in the location block.
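Putting that together, the config sketched from this answer would look roughly like the following (the domain and resolver address are the ones from the question):

server {
    listen 80;

    # Resolve the route's hostname at request time, not at startup.
    set $backend mydomainname.com;

    location / {
        resolver 192.168.45.213;
        proxy_pass http://$backend;
    }
}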

Setting up Jenkins with Nginx reverse proxy

I have a Jenkins environment set up, running on an EC2 instance, and I am trying to get port 80 mapped to port 8080.
A suggestion made (and the approach most of the configurations I've seen recommend) is to use Nginx as a reverse proxy.
I have installed Nginx on the server, and added to sites-available the following:
server {
    listen 80;
    server_name jenkins.acue.io;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://localhost:8080;
        proxy_read_timeout 60s;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        # Make sure the domain name is correct
        proxy_redirect http://localhost:8080 https://jenkins.acue.io;
    }
}
When I hit the IP address of the Jenkins environment, it shows me the Nginx welcome screen, and Jenkins still loads on port 8080, not port 80.
Do I need to specify the actual URL? I've not yet pointed the jenkins.acue.io sub-domain to the EC2 instance where I have specified localhost. I've tried it, but no joy.
A few things to note.
You need to add jenkins.acue.io to your host entries and point it at the instance where you are running Nginx, then use the FQDN to access Jenkins. Also, there is a typo in your proxy_redirect: you have an https URL where it should be http://jenkins.acue.io; fix that as well. Other than that, your Nginx configuration looks fine.
If you keep getting the Nginx welcome page even though you are accessing through the FQDN, that means your configuration is not being picked up by Nginx. Try creating a new file such as jenkins.conf, add it to /etc/nginx/conf.d, and then do a sudo systemctl restart nginx.
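With the typo fixed and the file moved, /etc/nginx/conf.d/jenkins.conf would look roughly like this:

server {
    listen 80;
    server_name jenkins.acue.io;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://localhost:8080;
        proxy_read_timeout 60s;
        # Redirect target uses http, matching the scheme actually served.
        proxy_redirect http://localhost:8080 http://jenkins.acue.io;
    }
}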

Nginx proxy to ingress nginx controller

Is it possible to use nginx proxy_pass without defining any location block?
Just send all incoming requests that hit the nginx proxy on to the ingress controller.
According to the Nginx documentation, the proxy_pass directive is only allowed in the following contexts:
Context: location, if in location, limit_except
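So the closest you can get to "no location" is a single catch-all location / that every request falls into. A minimal sketch, with a placeholder address for the ingress controller:

server {
    listen 80;

    # "location /" matches any request no other location handles, so this
    # effectively forwards all incoming traffic to the ingress controller.
    location / {
        proxy_pass http://ingress-controller.example:80;   # placeholder
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}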

Trying to configure Nginx for Rancher 2.X - Migrating from Rancher 1.x

Currently I'm using Rancher 1.x at work and I am migrating to Rancher 2.x. I'm having a hard time understanding how I could migrate this setup, or whether I would need to reconfigure everything.
I used the migration tools to create my yaml files, and for each application it created two files: one Deployment and one Service.
When adding the Service files in Rancher 2.x, each Service was created with a ClusterIP; the port mapping was created with the publish service port set to my Rancher 1.x public host port, and the target port set to my Rancher 1.x private container port.
But currently I'm using Nginx for the applications on different versions, locating them by environment/stack for each application. The following is an example of my current nginx.conf:
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        # Application version 1
        location /environment1/applicationStack {
            proxy_pass http://<ipAddress for environment1 host>:3000/;
        }

        # Application version 2
        location /environment2/applicationStack {
            proxy_pass http://<ipAddress for environment2 host>:3000/;
        }

        # rancher
        location /rancher {
            rewrite ^([^.]*[^/])$ $1/ permanent;
            rewrite ^/rancher/(.*)$ /$1 break;
            proxy_pass http://<ipAddress for environment with nginx>:8080;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
So, for example, to connect to an application I would use rancherDNS:8080/environmentVersion/stackName.
How should I configure the nginx file? Should I use each Service's ClusterIP with the target port, or the publish port? Or is the ClusterIP not even what I should configure?
Another thing: we currently use CI with Travis. If Travis publishes a new pod in a deployment, this would not affect my service, right?
Environments in 1.6.x map to multiple Kubernetes clusters in 2.x.
You could convert your 1.6.x stacks to either Deployment or DaemonSet specs for 2.x. Then you can create an ingress object to access them. When creating an ingress you can specify the hostname/FQDN directly; that way you don't have to use your current nginx.
If you prefer to keep your current nginx, you can skip specifying the FQDN/hostname in the ingress object and use the host IP addresses of your cluster.
Idea (you will need to refer to the documentation to explore the various ingress options and pick the one that is right for your use case):
# Application version 1
location /app1 {
    proxy_pass http://<ipAddress k8s cluster 1 host>:80/app1;
}
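If you go the ingress route instead, a path-based rule per application might look like the following sketch (all names are placeholders; check the documentation for the pathType and class options that fit your setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-stack          # placeholder
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service     # placeholder Service name
            port:
              number: 3000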
Also, if you want to understand Ingress in detail, you might find the recordings of my talks useful:
Load Balancing with Kubernetes: concepts, use cases, and implementation details
Kubernetes Networking Master Class

Multiple Docker Containers on Port 80 with the Same Domain

My question is similar to this question but with only one domain.
Is it possible to run multiple docker containers on the same server, all of them on port 80, but with different URL paths?
For example:
Internally, all applications are hosted on the same docker server.
172.17.0.1:8080 => app1
172.17.0.2:8080 => app2
172.17.0.3:8080 => app3
Externally, users will access the applications with the following URLs:
www.mydomain.com (app1)
www.mydomain.com/app/app2 (app2)
www.mydomain.com/app/app3 (app3)
I solved this issue with an nginx reverse proxy.
Here's the Dockerfile for the nginx container:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
And this is the nginx.conf:
# The events section is required for nginx to start, even if empty.
events {
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://app1:5001/;
        }

        location /api/ {
            proxy_pass http://app2:5000/api/;
        }
    }
}
I then stood up the nginx, app1, and app2 containers inside the same docker network.
Make sure to include the trailing / in the location and proxy paths, otherwise nginx will return a '502: Bad Gateway'.
All requests go through the docker host on port 80, which hands them off to the nginx container, which then forwards them onto the app containers based on the url path.
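For reference, the wiring itself only takes a few commands (the image names are placeholders, not from the answer):

# Create a shared network so the containers can reach each other by name.
docker network create web

# Start the app containers; nginx reaches them as app1:5001 and app2:5000.
docker run -d --name app1 --network web example/app1
docker run -d --name app2 --network web example/app2

# Publish only the proxy on the host's port 80.
docker run -d --name proxy --network web -p 80:80 my-nginx-proxy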
