Openshift - Internal NGINX proxy can't connect to Openshift route hostname

My use-case requires pass-through SSL, so we unfortunately can't use path-based routing natively in OpenShift. Our next best solution was to set up an internal NGINX proxy to route traffic from a path to another web UI's OpenShift route. I'm getting errors when doing so.
Here's my simplified NGINX config:
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /etc/nginx/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    upstream app1-ui-1-0 {
        server app1-1-0.192.168.99.100.nip.io:443;
    }

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            proxy_pass https://app1-ui-1-0/;
        }
    }
}
My app1 route configuration is as follows:
apiVersion: v1
kind: Route
metadata:
  name: app1-1-0
spec:
  host: app1-1-0.192.168.99.100.nip.io
  to:
    kind: Service
    name: app1-1-0
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
When I hit https://app1-1-0.192.168.99.100.nip.io, the app works fine.
When I hit the NGINX proxy route URL (https://proxier-1-0.192.168.99.100.nip.io), it properly loads nginx's standard index.html page.
However, when I try to hit app1 through the proxy via https://proxier-1-0.192.168.99.100.nip.io/apps/app1/, I get the following OpenShift error:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Via logs and testing, I know the request is getting into the /apps/app1/ location block, but it never gets to app1's NGINX. I've also confirmed this error is coming from either app1's router or service, but I don't know how to troubleshoot since neither has logs. Any ideas?

When you want to make a request to some other application running in the same OpenShift cluster, the correct solution in most cases is to use the internal DNS.
OpenShift ships with an SDN that enables communication between Pods. Going through a route is less efficient, because a request to a route typically leaves the cluster for the public internet before it hits the OpenShift router again, and only at that point is it forwarded over the SDN.
Services can be reached at <service>.<namespace>.svc.cluster.local, which in your case lets NGINX proxy via server app1-1-0.myproject.svc.cluster.local.
Routes should typically be used to route external traffic into the cluster.
See the OpenShift networking docs for more details.
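As a sketch, the upstream from the question could point at the Service's internal DNS name instead of the route hostname (assuming, as in the accepted fix, that the namespace is myproject and the Service serves TLS itself on 443 since the route is passthrough):

```nginx
upstream app1-ui-1-0 {
    # Internal cluster DNS name of the Service -- traffic stays on the SDN
    server app1-1-0.myproject.svc.cluster.local:443;
}

server {
    listen 8443 ssl default_server;

    location /apps/app1/ {
        proxy_pass https://app1-ui-1-0/;
        # If the backend selects its certificate via SNI, this may be needed:
        proxy_ssl_server_name on;
    }
}
```

The proxy_ssl_server_name line is optional and only matters if the backend's TLS setup depends on SNI.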

Per a comment above, I ended up dropping the route and referencing the service's internal DNS in NGINX's upstream:
upstream finder-ui-1-0 {
    server apps1-1-0.myproject.svc.cluster.local:443;
}
This suited my needs just fine and worked well.


How to redirect traffic to live website if https is provided?

My localhost is running on http://localhost:8080. Now I have a requirement like this: whenever I type http://www.mywebsite.com it should load my localhost, and if I type https://www.mywebsite.com it should load the live website.
To achieve this I tried the hosts file (/etc/hosts) and Nginx, but that also stops the live website from loading on my system.
Host file content:
127.0.0.1 www.mywebsite.com
nginx config
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
Completely agree with the other answers: mapping from nginx on a remote host to your localhost can be difficult unless you know the public IP address of your local machine, and ideally it should be static.
Alternatives
I would encourage giving a try to some proxy tools that can be installed on your local machine, e.g. Charles Proxy and its Map Remote feature.
Once installed, follow these steps:
Install and trust the root certificate Help -> SSL Proxying -> Install Charles Root Certificate
Enable Map Remote feature Tools -> Map Remote -> [x] Enable Map Remote
Add a new rule, e.g. http://www.mywebsite.com -> http://localhost:8080
Now you're ready to test:
Navigate to http://www.mywebsite.com (you should see results from your localhost, proxy took over)
Navigate to https://www.mywebsite.com (you should see results from your remote server)
(Screenshots: the Map Remote rule, and the result.)
You need several pieces to make this work. Thinking through the steps of how a request could be handled:
1. DNS for www.mywebsite.com points to a single IP; there's no way around that. So all requests for that host, no matter the protocol, will come in to the machine with that IP, the public server.
2. So we need to route those requests, such that a) https requests are handled by nginx on that same machine (the public server), and b) http requests are forwarded to your local machine. nginx can do a) of course, that's a normal config, and nginx can also do b), as a reverse proxy.
3. Now the problem is how to route traffic from the public server to your local machine, which is probably at home behind a dynamic IP and a router doing NAT. There are services to do this, but using your own domain is usually a paid feature (e.g. check out ngrok; I guess Traefik probably handles this too, not sure). To do it yourself you can use a reverse SSH tunnel.
To be clear, this routes any request for http://www.mywebsite.com/ to your local machine, not just your own requests. Everyone who visits the http version of that site will end up hitting your local machine, at least while the tunnel is up.
For 1, you just need your DNS set up normally, with a single DNS record for www.mywebsite.com. You don't need any /etc/hosts tricks, remove those (and maybe reboot, to make sure they're not cached and complicating things).
For 2, your nginx config on the public server would look something like this:
# First the http server, which will route requests to your local machine
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # Route all http requests to port 8080 on this same server (the
        # public server), which we will forward back to your localhost
        proxy_pass http://127.0.0.1:8080;
    }
}

# Now the https server, handled by this, public server
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # SSL config stuff ...
    # Normal nginx config ...
    root /var/www/html;

    location / {
        # ... etc, your site config
    }
}
The nginx config on your local machine should just be a normal http server listening on port 8080 (the port you mentioned it is running on). No proxying, nothing special here.
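A minimal sketch of that local server block (the root path here is a hypothetical placeholder for wherever your dev site lives):

```nginx
server {
    listen 8080;
    server_name localhost;

    root /var/www/dev;   # hypothetical path to your dev site
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```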
For 3, lastly, we need to open a tunnel from your local machine to the public server. If you are on Linux or macOS, you can do that from the command line with something like this:
ssh user@www.mywebsite.com -nNT -R :8080:localhost:8080 &
If you're on Windows you could use something like PuTTY or the built-in SSH client on Windows 10.
The important parts of this are (copied from the SSH manpage):
-N Do not execute a remote command. This is useful for just forwarding ports.
-R Specifies that connections to the given TCP port or Unix socket on the remote
(server) host are to be forwarded to the local side.
The -R part specifies that connections to remote port 8080 (where nginx is routing http requests) should be forwarded to localhost port 8080 (your local machine). The ports can be anything of course, eg if you wanted to use port 5050 on your public server and port 80 on your local machine, it would instead look like -R :5050:localhost:80.
Of course the tunnel will fail if your public IP address (on your localhost side) changes, or if you reboot, or your local wifi goes down, etc etc ...
NOTE: you should also be aware that you really are opening your local machine up to the public internet, so will be subject to all the same security risks that any server on the public internet faces, like various scripts probing for vulnerabilities etc. Whenever I use reverse tunnels like this I tend to leave them up only while developing, and shut them down immediately when done (and of course the site will not work when the tunnel is down).
As somebody said above but in different words: I don't really get why you want to access two different locations with basically the same address (different protocols). But dude, who are we to tell you not to do it? Don't let anything or anyone stop you! 😉😁
However, we sometimes need to think outside the box and come up with different ways to achieve the same result. Why don't you go to your domain provider and set up something like this:
Create a subdomain (check if you need to set an A record for your domain) so you can have something like https://local.example.com/.
Forward the new subdomain to your local IP address (perhaps you need to open/forward ports on your router and install DDClient or a similar service to catch your dynamic local/public IP and send it to your domain provider).
Leave your @/naked record pointing to your website as it is.
Whenever you access: https://www.example.com or http://www.example.com, you'll see your website.
And if you access https://local.example.com or http://local.example.com, you'll access whatever you have on your local computer.
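For the DDClient part, a sketch of /etc/ddclient.conf might look like this. Every value below is a placeholder (protocol, update server, credentials); check your DNS provider's documentation for the exact settings it expects:

```conf
# /etc/ddclient.conf -- placeholder values, adjust for your provider
protocol=dyndns2
use=web                          # discover the current public IP via a web check
server=members.dyndns.example    # placeholder update server
login=your-username
password='your-password'
local.example.com
```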
Hope it helps, or at least, gives you a different perspective for a solution.
You have to create (or it may already be there in your nginx config files) a server section listening on 443 (https):
# 443 is the default port for https
server {
    listen 443;
    ....
}
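Note that a 443 listener normally also needs the ssl parameter plus a certificate and key, or nginx can't serve https on it. A sketch, with placeholder paths:

```nginx
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # Placeholder paths -- point these at your real certificate and key
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```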
Whatever solution you pick, it should only work exactly once for you. If you configure your live site correctly, it should do HSTS, and the next time you type "http://www.mywebsite.com" your browser will GET "https://www.mywebsite.com" and your nginx won't even hear about the insecure http request.
But if you really, really want this you can let your local nginx proxy the https site and strip the HSTS headers:
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    location / {
        proxy_pass https://ip_of_live_server;
        proxy_set_header Host $host;
        [... strip 'Strict-Transport-Security' ...]
    }
}
Of course you will need your local nginx to serve these TLS sessions with a certificate that your browser trusts. Either add a self-signed Snake Oil one to your browser, or... since we are implementing bad ideas... add a copy of your live secret key material to your localhost... ;)
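The bracketed "strip" placeholder above can be made concrete with proxy_hide_header; a sketch of such a location block (the upstream address is still a placeholder):

```nginx
location / {
    proxy_pass https://ip_of_live_server;
    proxy_set_header Host $host;
    # Drop the HSTS header so the browser keeps accepting plain http
    proxy_hide_header Strict-Transport-Security;
}
```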
You can do this by redirecting HTTP connections on your live site to localhost. First remove the record you have in your hosts file.
Then add the following to your live site's nginx.conf.
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # change this to your development machine's IP
        if ($remote_addr = 1.2.3.4) {
            rewrite ^ http://127.0.0.1:8080;
        }
    }
}

Upstream server configuration in Nginx / worker configuration in Apache inside Kubernetes

I have a k8s nginx server which connects to my StatefulSet application servers.
I am now trying to achieve sticky sessions based on the JESSIONID cookie. I have an nginx ingress controller which directs all requests to the k8s nginx service, but that nginx service is not able to maintain sticky sessions between the application server pods, so I can't maintain user sessions in my application.
If I connect the ingress controller directly to the application service with the config nginx.ingress.kubernetes.io/session-cookie-name=JESSIONID, it works as expected.
But I need a webserver, either Apache or Nginx, in front of my application servers.
Is there any way to achieve this? Or how can we configure StatefulSet pods directly inside the upstream block of Nginx, or as workers in Apache?
I need the below structure:
Ingress -> webserver -> front application
Currently, I have the below config:
nginx.ingress.kubernetes.io/session-cookie-name=JESSIONID
- backend:
    serviceName: nginx-web-svc
    servicePort: 80
In my nginx StatefulSet I have the below config in the nginx.conf file:
location / {
    proxy_pass http://app-svc:3000;
}
app-svc is the Service for the application StatefulSet, which has 3 replicas (3 pods). It's working, but not managing stickiness between the application pods. If I bypass the webserver and directly use the below ingress config, it works like a charm:
nginx.ingress.kubernetes.io/session-cookie-name=JESSIONID
- backend:
    serviceName: app-svc
    servicePort: 3000
But I need a webserver in front of my app servers. How do I achieve stickiness in that scenario?
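One way to sketch the "StatefulSet pods directly inside the upstream block" idea: if app-svc is (or is accompanied by) a headless Service, each pod gets a stable DNS name, and open-source nginx can pin a cookie to a pod with the hash directive (the nginx sticky directive itself is NGINX Plus only). The StatefulSet name app and the default namespace below are assumptions, so adjust to your setup:

```nginx
upstream app_backend {
    # Send each JESSIONID consistently to the same pod. Pod DNS names
    # assume a headless Service named app-svc in the default namespace.
    hash $cookie_JESSIONID consistent;
    server app-0.app-svc.default.svc.cluster.local:3000;
    server app-1.app-svc.default.svc.cluster.local:3000;
    server app-2.app-svc.default.svc.cluster.local:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
    }
}
```

The trade-off is that the pod list is static here, so scaling the StatefulSet means updating the upstream block.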

Nginx dns resolver refresh - Openshift (Kubernetes)

I have an Nginx, deployed as a pod inside the OpenShift cluster, that acts as a reverse proxy for a backend service. The backend service has a Kubernetes Service to load-balance traffic between the pods (we use HAProxy as the load balancer). The Nginx proxy_passes all requests to the Service:
location /service-1/api {
    proxy_pass http://service-svc/api;
}
Any time the Kubernetes Service is recreated or gets a new IP address, Nginx doesn't pick up the new address, which throws 504 timeout errors. I tried Nginx's resolver with 127.0.0.1, 127.0.0.11 and other ways to force Nginx to refresh its DNS lookup, along with assigning the service to a variable.
However, this doesn't solve the problem. Nginx can't resolve the service, saying it cannot resolve using 127.0.0.1:53. What is the right way to set the resolver? What IP should I provide in the resolver?
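A common sketch for this is to point resolver at the cluster DNS Service IP (not 127.0.0.1, which is the pod itself) and to use a variable in proxy_pass so nginx honors the DNS TTL instead of caching the IP once at startup. The IP and namespace below are placeholders; you can read the real nameserver IP from /etc/resolv.conf inside the pod:

```nginx
location /service-1/api {
    # Cluster DNS Service IP -- placeholder, copy it from /etc/resolv.conf
    resolver 172.30.0.10 valid=30s;

    # A variable in proxy_pass forces re-resolution at request time;
    # the rewrite keeps the original sub-path intact.
    set $backend "service-svc.mynamespace.svc.cluster.local";
    rewrite ^/service-1/api(.*)$ /api$1 break;
    proxy_pass http://$backend;
}
```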

Kubernetes Using Proxy without ingress

My issue is that I have a web server running on port 80. I want to use an nginx proxy (not the ingress) to redirect the connection. I want to use the link www.example.com. How should I tell nginx to proxy the connection on www.example.com (which is a different app)? I tried using a Service of type LoadBalancer, but it changes the hostname (to some AWS link); I need it to be exactly www.example.com.
If I understood your request correctly, you may just use the return directive in your nginx config:
server {
    listen 80;
    server_name www.some-service.com;
    return 301 $scheme://www.example.com$request_uri;
}
If you need something more complex, check this doc or this one.

Using namerd for DNS? (Or: how to configure an nginx ingress service with Linkerd)

I currently have a hello world service deployed on /svc/hello, and I've added a dentry to my namerd internal dtab as /svc/app => /svc/hello.
I've also deployed an nginx service that will serve as my ingress controller, and forward all traffic to the relevant services. Eventually it's going to do header stripping, exposing admin services only to developers in whitelisted ip ranges etc, but for now I've kept it really simple with the following config:
server {
    location / {
        proxy_pass http://app;
    }
}
However, those nginx pods fail to start, with the error
nginx: [emerg] host not found in upstream "app" in /etc/nginx/conf.d/default.conf:3
What do I need to do to get the nginx services to be able to forward to the app service via linkerd?
I'm not sure that this is possible, using linkerd with an nginx ingress.
Have a look at this case: https://buoyant.io/2017/04/06/a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller/
Maybe it can help you.
I was actually able to solve this by looking at a different post in the same series as the one Grigoriev Nick shared:
proxy_pass http://l5d.default.svc.cluster.local;
That address does cluster-local name resolution in Kubernetes, and successfully finds the Linkerd service.
