I have an Nginx instance, deployed as a pod inside the OpenShift cluster, that acts as a reverse proxy for a backend service. The backend service has a Kubernetes Service to load-balance traffic between the pods (we use HAProxy as the load balancer). Nginx proxy_passes all requests to the Service.
location /service-1/api {
    proxy_pass http://service-svc/api;
}
Anytime the Kubernetes Service is recreated or gets a new IP address, Nginx doesn't pick up the new address, which results in 504 gateway timeout errors. I tried Nginx's resolver directive with 127.0.0.1, 127.0.0.11, and other ways to force Nginx to refresh the DNS lookup, along with assigning the service name to a variable.
However, this doesn't solve the problem. Nginx can't resolve the service, saying it cannot resolve using 127.0.0.1:53. What is the right way to configure the resolver? Which IP should I provide to the resolver directive?
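For what it's worth, here is a minimal sketch of the resolver-plus-variable setup being described. The resolver IP should be the cluster DNS Service IP taken from the Nginx pod's /etc/resolv.conf rather than 127.0.0.1 (typically nothing listens on localhost:53 inside the pod); 172.30.0.10 and my-namespace below are placeholders.
server {
    listen 8080;

    # Placeholder: use the nameserver from the pod's /etc/resolv.conf
    resolver 172.30.0.10 valid=10s ipv6=off;

    location /service-1/api {
        # Assigning the name to a variable makes Nginx re-resolve it at
        # request time instead of only at startup
        set $backend service-svc.my-namespace.svc.cluster.local;
        # With a variable in proxy_pass the URI is not rewritten
        # automatically, so strip the /service-1 prefix explicitly
        rewrite ^/service-1(/.*)$ $1 break;
        proxy_pass http://$backend;
    }
}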
Related
I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a proxy_pass to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.
I tried setting sessionAffinity: ClientIP on the Kubernetes Service; however, this routes based on the client IP seen by the Service, which is the NGINX proxy's pod IP. Is there a way to make the Kubernetes Service do the affinity based on the actual client IP from which the request originated, and not the NGINX internal pod IP?
This is not an option with Nginx. Or rather, it's not an option with anything in userspace like this without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules in the outermost HTTP proxy layer.
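If stickiness implemented in NGINX itself (rather than in the Service) is acceptable, one common pattern is sketched below. It bypasses the ClusterIP Service's own load balancing by pointing NGINX at a hypothetical headless Service (backend-headless, my-namespace, and port 8080 are placeholders), so the name resolves to the individual pod IPs and ip_hash can key on the real client address:
upstream backend_pods {
    # NGINX is the outermost proxy here, so the connecting IP is the real client
    ip_hash;
    # Headless Service: resolves to pod IPs, expanded into one server per A record
    server backend-headless.my-namespace.svc.cluster.local:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_pods;
        # WebSocket upgrade
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Note that open-source NGINX resolves the upstream hostname only when the configuration is loaded, so pod churn still requires a reload; this is a trade-off sketch, not a drop-in fix.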
I currently have a hello world service deployed on /svc/hello, and I've added a dtab entry to my namerd internal dtab as /svc/app => /svc/hello.
I've also deployed an nginx service that will serve as my ingress controller and forward all traffic to the relevant services. Eventually it's going to do header stripping, exposing admin services only to developers in whitelisted IP ranges, etc., but for now I've kept it really simple with the following config:
server {
    location / {
        proxy_pass http://app;
    }
}
However, those nginx pods fail to start, with the error
nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:3
What do I need to do to get the nginx services to be able to forward to the app service via linkerd?
I'm not sure that this is possible, using linkerd with an nginx ingress.
Have a look at this case: https://buoyant.io/2017/04/06/a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller/
Maybe it can help you.
I was actually able to solve this by looking at a different post in the same series as what Girgoriev Nick shared:
proxy_pass http://l5d.default.svc.cluster.local;
That address does cluster-local name resolution in Kubernetes, and successfully finds the Linkerd service.
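In context, that line sits in a server block roughly like this (a sketch; it assumes the linkerd Service is named l5d in the default namespace and is reachable on port 80, so adjust the port to match the actual l5d Service definition):
server {
    listen 80;

    location / {
        # Cluster-local DNS name of the linkerd Service; Kubernetes DNS
        # resolves it to the Service's ClusterIP
        proxy_pass http://l5d.default.svc.cluster.local;
    }
}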
My use case requires pass-through SSL, so we unfortunately can't use path-based routing natively in OpenShift. Our next-best solution was to set up an internal NGINX proxy to route traffic from a path to another web UI's OpenShift route. I'm getting errors when doing so.
Here's my simplified NGINX config:
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /etc/nginx/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    upstream app1-ui-1-0 {
        server app1-1-0.192.168.99.100.nip.io:443;
    }

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            proxy_pass https://app1-ui-1-0/;
        }
    }
}
My app1 route configuration is as follows:
apiVersion: v1
kind: Route
metadata:
  name: app1-1-0
spec:
  host: app1-1-0.192.168.99.100.nip.io
  to:
    kind: Service
    name: app1-1-0
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
When I hit https://app1-1-0.192.168.99.100.nip.io, the app works fine.
When I hit the NGINX proxy route URL (https://proxier-1-0.192.168.99.100.nip.io), it properly loads nginx's standard index.html page.
However, when I try to hit app1 through the proxy via https://proxier-1-0.192.168.99.100.nip.io/apps/app1/, I get the following OpenShift error:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Via logs and testing, I know the request is getting into the /apps/app1/ location block, but it never gets to app1's NGINX. I've also confirmed this error is coming from either app1's router or service, but I don't know how to troubleshoot since neither has logs. Any ideas?
When you want to make a request to some other application running in the same OpenShift cluster, the correct solution in most cases is to use the internal DNS.
OpenShift ships with an SDN that enables communication between Pods. This is more efficient than talking to another Pod via its Route, since a request to the Route typically goes back out onto the public internet before it hits the OpenShift router again and is only then forwarded over the SDN.
Services can be reached at <service>.<pod_namespace>.svc.cluster.local, which in your case enables NGINX to proxy via server apps1-1-0.myproject.svc.cluster.local.
Routes should typically be used to route external traffic into the cluster.
See the OpenShift docs for more details on networking.
Per a comment above, I ended up dropping the route and referencing the service's internal DNS in NGINX's upstream:
upstream finder-ui-1-0 {
    server apps1-1-0.myproject.svc.cluster.local:443;
}
This suited my needs just fine and worked well.
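Putting the pieces together, the proxy config ends up roughly like this (a sketch assembled from the snippets above, trimmed to the relevant parts):
http {
    # The Service's internal DNS name replaces the external Route hostname
    upstream finder-ui-1-0 {
        server apps1-1-0.myproject.svc.cluster.local:443;
    }

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            # Trailing slash strips the /apps/app1/ prefix before proxying
            proxy_pass https://finder-ui-1-0/;
        }
    }
}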
I have a Docker swarm running Docker version 1.13.1. I am regularly deploying stacks of Docker services (via docker stack deploy) to this swarm, and I have one nginx proxy service that sits at ports 80 and 443 acting as a reverse proxy to various applications in the swarm.
I ran into a problem with nginx's upstream capability: it cached the DNS lookups of my service names. This worked fine for a while, but as more stacks were removed and deployed, those cached IP addresses became stale and nginx would start timing out or serving requests to the wrong container.
I attempted to fix this using the following technique:
[in nginx.conf]
server {
    server_name myapp.domain.com;

    resolver 127.0.0.11 valid=10s ipv6=off;
    set $myapp http://stack_myapp:80;  # stack_myapp is the DNS name of the service

    location / {
        proxy_pass $myapp;
    }
}

# other similar server blocks
127.0.0.11 appears to be the IP address of the internal DNS server the swarm sets up. This seems to work most of the time - the IP addresses of the upstream services do not get cached for long and the proxy recovers if upstream services move around. However, the proxy will occasionally still serve requests to incorrect addresses, for example, it will serve requests to http://10.0.0.12:80/... and time out or hit the wrong container. When I run docker exec proxycontainer ping stack_myapp, I get the correct IP address. Why is nginx not resolving the correct IP when ping does?
I have an nginx proxy pointing at an external server. When the external server is down, the nginx proxy returns a 502 bad gateway.
Instead, I'd like nginx to also refuse the connection. How can I do this?
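One possible approach (a sketch, not a confirmed answer): map the 502/504 errors nginx raises when the upstream is unreachable to the nginx-specific status 444, which closes the connection without sending any response. The upstream address below is a placeholder.
server {
    listen 80;

    location / {
        proxy_pass http://external-backend.example.com;  # placeholder upstream
        error_page 502 504 = @refuse;
    }

    location @refuse {
        # 444 is a non-standard nginx code: drop the connection without a reply
        return 444;
    }
}
This does not refuse the connection at the TCP level, since nginx has already accepted it, but dropping it without a reply is usually the closest equivalent.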