Nginx Reverse Proxy With Alternating Live Backend Services

I have different versions of a backend service, and would like nginx to be like a "traffic cop", sending users ONLY to the currently online live backend service. Is there a simple way to do this without changing the nginx config each time I want to redirect users to a different backend service?
In this example, I want to shut down the live backend service and direct users to the test backend service. Then, vice-versa. I'm calling it a logical "traffic cop" which knows which backend service to direct users to.
I don't think pointing proxy_pass at an upstream block containing all the backend services will work; load balancing would not give me what I'm looking for.
I also do not want user root to update the /etc/hosts file on the machine, because of security and collision concerns with multiple programs editing /etc/hosts simultaneously.
I'm thinking of doing proxy_pass http://live-backend.localhost in nginx and using a local DNS server to manage the internal IP for live-backend.localhost, which I can change (re-point to another backend IP) at any time. However, would nginx actually query the DNS server on every request, or does it resolve once and then cache the IP forever?
Am I over-thinking this? Is there an easy way to do this within nginx?

You can use the backup parameter to the server directive in an upstream block, so that the test server will only be used when the live one is down.
NGINX resolves hostnames in proxy_pass and upstream blocks once, when the configuration is loaded, and caches the result, so you'd still have to reload it to pick up a DNS change. The exception: if proxy_pass uses a variable together with a resolver directive, NGINX re-resolves the name at runtime, honoring the DNS TTL.
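A minimal sketch of both options (the addresses, ports, and the 10-second TTL cap are assumptions, not from the question):

    # Option 1: passive failover via the "backup" parameter.
    # The test server only receives traffic while the live one is down.
    upstream backend {
        server 10.0.0.10:8080;           # live backend (assumed address)
        server 10.0.0.20:8080 backup;    # test backend, used only on failure
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }

    # Option 2: runtime DNS re-resolution. Because proxy_pass gets a
    # variable, NGINX queries the resolver per the record's TTL instead
    # of caching the IP at config load, so re-pointing
    # live-backend.localhost takes effect without a reload.
    server {
        listen 80;
        resolver 127.0.0.1 valid=10s;    # your local DNS server (assumed)
        location / {
            set $backend "http://live-backend.localhost:8080";
            proxy_pass $backend;
        }
    }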

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified (as in a man-in-the-middle attack) or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080: can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
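As a sketch, the NGINX side of that might look like the following (example.com and the certificate paths are placeholders). The backend should additionally bind only to 127.0.0.1, or port 8080 should be firewalled from outside:

    # Serve nothing over plain HTTP; redirect everything to HTTPS.
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    # TLS terminates here; the hop to the backend stays on loopback.
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/example.com.crt;      # placeholder
        ssl_certificate_key /etc/ssl/private/example.com.key;    # placeholder

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }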
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by rogue software is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A won't assume an unfamiliar service X is just something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Use good security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. Don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
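In nginx.conf that is a single directive (www-data is the Debian/Ubuntu convention; other distributions use nginx or www):

    # Worker processes run as this unprivileged user; only the master
    # process, which binds ports 80/443, stays root.
    user www-data;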
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is a self-signed certificate. The other is a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that it is only recommended for development, not production. I've yet to find a good solution for localhost in production; I don't think one exists, and in my experience, I've never seen anyone worry about it.

Building Proxy Site with Nginx and Rotating Proxy Service

I'm looking to build an application similar to https://www.proxysite.com/ but am not sure about the best architecture.
Looking to have a data flow like this:
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure nginx with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office, so it sounds like the users who will use the proxy are workers.
Luminati (now BrightData) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each with whatever proxy you want (it doesn't have to be BrightData's proxy). It has a ton of different parameters that you can include for each proxy (including IP rotation), and each port can be configured with a unique setup.
Then you simply go to your user's PC, open the browser proxy settings, enter the IP address of the server the proxy manager is running on and the specific port you configured, and voila: you have central control of the proxies, and your user's browser is proxied.
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I'll deploy a bunch of Deployments with nginx images to a Kubernetes node to serve them. My main goal is that at some point users will serve their apps from a subdomain like my-blog-app.mysite.com. After some time, users can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is, for example if I have 500 nginx pods running (each is a different website), do I need a service for every pod in that node (in this case 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances based on the Host header, which perfectly matches your use case.
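A minimal sketch of such an Ingress (hostnames and Service names are made up for illustration); one Ingress resource can fan out to all the per-site Services:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: user-sites
    spec:
      rules:
      - host: my-blog-app.mysite.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blog-app    # Service in front of this user's nginx pod
                port:
                  number: 80
      - host: another-app.mysite.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: another-app
                port:
                  number: 80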
In any case, yes, with your current architecture you need a Service for each pod. Have you considered a different approach, like having a general listener (nginx instances) and serving the correct content based on authorization or something similar?

How to keep the session when using nginx as the reverse proxy to many servers with different projects

I am new to Nginx, and I am having trouble with it. We have many projects in different languages and frameworks, and they are deployed on different servers. How do I keep the session for each project?
The question is not quite clear, but from what I understood I will try to guide you a bit...
Nginx is a web server which, when used as a reverse proxy, basically just sits in front of your project's appserver. When a client tries to connect to your appserver, it first connects to nginx, and nginx then forwards that request to your appserver.
e.g.
client -Req-> nginx (port 8080) -Req-> appserver (jetty, port 9000)
Now, if you are trying to use a single nginx instance to direct requests to multiple appservers, you will either have to make nginx listen on different ports and forward each to a different appserver, or nginx can identify which request is meant for which appserver by its route, as in the sketch below.
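A sketch of the route-based variant (the addresses and ports are made up for illustration). As long as every request for a given project is consistently proxied to the same appserver, that appserver's own session handling keeps working:

    server {
        listen 8080;

        # Project A, e.g. a jetty app.
        location /project-a/ {
            proxy_pass http://127.0.0.1:9000/;
            proxy_set_header Host $host;
        }

        # Project B, a different framework on another server.
        location /project-b/ {
            proxy_pass http://10.0.0.5:3000/;
            proxy_set_header Host $host;
        }
    }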
Here is a resource that can help you learn how to configure Nginx to do this; please ask again if you need further help.
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-14-04-lts

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is a central webmail client that I can use in an ISP environment, but which has the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated for a very long time.
The one that I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl, does not give detailed information about the updated project.
So all I am trying to achieve is one webmail server that can connect to any one of my mail servers, where the mailbox location resides in a database. But the location is a hostname, not an IP.
You can tell Nginx to do auth_http with any http URL you set up.
You don't need an embedded perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header based protocol Nginx uses.
You can implement the protocol described above in any language - CGI script with apache if you like.
You do the auth and database query in this script and return the appropriate backend server.
(Personally, I use a python + WSGI server setup.)
Say you set up your script on apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py
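A minimal sketch of the mail side of that config, assuming the auth URL above. For each login, NGINX sends the credentials to auth_http, and the script answers with headers such as Auth-Status, Auth-Server, and Auth-Port. Note that Auth-Server must be an IP address, so the script has to resolve the hostname it finds in the database before returning it:

    mail {
        # Asked on every login: where should this user be proxied?
        auth_http http://localhost:9000/cgi-bin/nginx_auth.py;

        server {
            listen   143;
            protocol imap;
        }
    }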
