Server testing: 2 live servers behind an nginx load balancer -- keep customers going to one while I test and observe the other

I have an nginx droplet on DigitalOcean acting as a load balancer.
My backend consists of a further 2 droplets (servers) to which the nginx load balancer forwards requests via a fully qualified domain name.
I wish to debug the application on the live server -- i.e. I want to keep customers going to one of the servers while I debug and see what happens on the other.
The problem is that I do not know how to keep customers directed to the fully qualified domain while, at the same time, I review the behaviour of the other IP.
My nginx configuration is very simple:
http {
    ...
    upstream backend {
        # server0
        server IP address;
        # server1
        server IP address;
    }

    server {
        server_name www.domain.com domain.com;
        root /var/www/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://backend;
        }
        ....
    }
}
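One way to get this effect, sketched below: take the server under test out of the customer-facing upstream with the down flag, and reach it through a separate, test-only server block. The IPs 10.0.0.1/10.0.0.2 and the hostname test.domain.com are placeholders introduced for illustration, not taken from the setup above.

http {
    upstream backend {
        # Customers keep hitting server0 only.
        server 10.0.0.1;
        # server1 is temporarily out of rotation while being debugged.
        server 10.0.0.2 down;
    }

    # Test-only upstream pointing at the server being debugged.
    upstream backend_test {
        server 10.0.0.2;
    }

    server {
        server_name www.domain.com domain.com;
        location / {
            proxy_pass http://backend;
        }
    }

    # Point test.domain.com at the load balancer (via DNS or your local
    # hosts file) and browse it to exercise server1 directly.
    server {
        server_name test.domain.com;
        location / {
            proxy_pass http://backend_test;
        }
    }
}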

Related

NGINX reverse proxy not redirecting to HTTP. All traffic forced to HTTPS

I recently set up NGINX to direct 2 domains I own to 2 different servers on my network, utilizing the same WAN address.
I currently have my firewall rules set up to simply pass port 80 traffic to the IP of my NGINX server.
Utilizing the following conf file with NGINX, ALL attempts to connect to any of my previously accessible sites now force the URL typed into the browser to immediately change to HTTPS, which is not what I want.
server {
    listen 80;
    server_name www.domain1.net domain1.net;

    location / {
        proxy_pass http://192.168.50.226:8080;
    }
}

server {
    listen 80;
    server_name www.domain2.net domain2.net;

    location / {
        proxy_pass http://192.168.50.35:8080;
    }
}
The good news is that both domains are resolving to my WAN address, which addressed my first problem. I now want them to natively go to their respective HTTP address, rather than the current behavior of switching to HTTPS.
For those who may run into a similar issue: if you are using pfSense or a similar technology for your router, I found the issue to be that the router was also using port 80 for its web GUI, which was then forcing HTTP traffic to use HTTPS. After changing the router's default GUI port to 8080, everything fell into place and started working.

How to redirect traffic to live website if https is provided?

My localhost is running on http://localhost:8080. Now, I have a requirement like this: whenever I type http://www.mywebsite.com, it should load my localhost, and if I type https://www.mywebsite.com, it should load the live website.
To achieve this I tried the hosts file (/etc/hosts) and Nginx, but it also stops loading the live website on my system.
Host file content:
127.0.0.1 www.mywebsite.com
nginx config
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
Completely agree with the other answers: mapping from nginx on a remote host to your localhost can be difficult unless you know the public IP address of your local machine, and ideally it should be static.
Alternatives
I would encourage giving a try to proxy tools that can be installed on your local machine, e.g. Charles Proxy and its Map Remote feature.
Once installed, follow these steps:
Install and trust the root certificate Help -> SSL Proxying -> Install Charles Root Certificate
Enable Map Remote feature Tools -> Map Remote -> [x] Enable Map Remote
Add a new rule, e.g. http://www.mywebsite.com -> http://localhost:8080
Now you're ready to test:
Navigate to http://www.mywebsite.com (you should see results from your localhost, proxy took over)
Navigate to https://www.mywebsite.com (you should see results from your remote server)
(Screenshots omitted: Map Remote rule and Map Remote result.)
You need several pieces to make this work. Thinking through the steps of how a request could be handled:
1. DNS for www.mywebsite.com points to a single IP; there's no way around that. So all requests for that host, no matter the protocol, will come in to the machine with that IP, the public server.
2. So we need to route those requests, such that a) https requests are handled by nginx on that same machine (the public server), and b) http requests are forwarded to your local machine. nginx can do a) of course, that's a normal config, and nginx can also do b), as a reverse proxy.
3. Now the problem is how to route traffic from the public server to your local machine, which is probably at home behind a dynamic IP and a router doing NAT. There are services to do this, but using your own domain is usually a paid feature (e.g. check out ngrok; I guess Traefik probably handles this too, not sure). To do it yourself you can use a reverse SSH tunnel.
To be clear, this routes any request for http://www.mywebsite.com/ to your local machine, not just your own requests. Everyone who visits the http version of that site will end up hitting your local machine, at least while the tunnel is up.
For 1, you just need your DNS set up normally, with a single DNS record for www.mywebsite.com. You don't need any /etc/hosts tricks, so remove those (and maybe reboot, to make sure they're not cached and complicating things).
For 2, your nginx config on the public server would look something like this:
# First the http server, which will route requests to your local machine
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # Route all http requests to port 8080 on this same server (the
        # public server), which we will forward back to your localhost
        proxy_pass http://127.0.0.1:8080;
    }
}

# Now the https server, handled by this, the public server
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # SSL config stuff ...
    # Normal nginx config ...
    root /var/www/html;

    location / {
        # ... etc, your site config
    }
}
The nginx config on your local machine should just be a normal http server listening on port 8080 (the port you mentioned it is running on). No proxying, nothing special here.
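For illustration, a minimal sketch of what that local server block might look like (the root path and index file are placeholders, not from the question):

server {
    listen 8080;
    server_name localhost;

    # Serve your development site; adjust root and index to your project.
    root /path/to/your/dev/site;
    index index.html;
}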
For 3, lastly, we need to open a tunnel from your local machine to the public server. If you are on Linux or macOS, you can do that from the command line with something like this:
ssh user@www.mywebsite.com -nNT -R :8080:localhost:8080 &
If you're on Windows you could use something like PuTTY or the built-in SSH client on Windows 10.
The important parts of this are (copied from the SSH manpage):
-N Do not execute a remote command. This is useful for just forwarding ports.
-R Specifies that connections to the given TCP port or Unix socket on the remote
(server) host are to be forwarded to the local side.
The -R part specifies that connections to remote port 8080 (where nginx is routing http requests) should be forwarded to localhost port 8080 (your local machine). The ports can be anything of course; e.g. if you wanted to use port 5050 on your public server and port 80 on your local machine, it would instead look like -R :5050:localhost:80.
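Spelled out in full, that alternate setup would be something like this (ports chosen purely for illustration; the http server block above would then need proxy_pass http://127.0.0.1:5050; to match):

# Forward the public server's port 5050 back to port 80 on your local machine
ssh user@www.mywebsite.com -nNT -R :5050:localhost:80 &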
Of course the tunnel will fail if your public IP address (on your localhost side) changes, or if you reboot, or your local wifi goes down, etc etc ...
NOTE: you should also be aware that you really are opening your local machine up to the public internet, so will be subject to all the same security risks that any server on the public internet faces, like various scripts probing for vulnerabilities etc. Whenever I use reverse tunnels like this I tend to leave them up only while developing, and shut them down immediately when done (and of course the site will not work when the tunnel is down).
As somebody said above but in different words: I don't really get why you want to access two different locations with basically the same address (different protocols). But dude, who are we to tell you not to do it? Don't let anything or anyone stop you! 😉😁
However, we sometimes need to think outside the box and come up with different ways to achieve the same result. Why don't you go to your domain provider and set up something like this:
Create a subdomain (check if you need to set an A record for your domain) so you can have something like https://local.example.com/.
Forward the new subdomain to your local IP address (perhaps you need to open/forward ports on your router and install DDClient or a similar service to catch your dynamic local/public IP and send it to your domain provider).
Leave your @/naked record pointing to your website as it is.
Whenever you access: https://www.example.com or http://www.example.com, you'll see your website.
And if you access https://local.example.com or http://local.example.com, you'll access whatever you have on your local computer.
Hope it helps, or at least, gives you a different perspective for a solution.
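As a rough illustration of the records that approach ends up with (the domain, IPs and TTLs below are placeholders, not real values):

; Hypothetical DNS zone entries
example.com.        300  IN  A      198.51.100.10   ; the live website
www.example.com.    300  IN  CNAME  example.com.
local.example.com.  300  IN  A      203.0.113.45    ; dynamic home IP, kept current by DDClient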
You have to create a section for listen 443 (https) in your nginx config files, or it may already be there.
# 443 is the default port for https
server {
    listen 443;
    ....
}
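For reference, a TLS server block normally also needs the ssl flag and a certificate; a minimal sketch, assuming you already have a certificate and key at the (placeholder) paths shown:

server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # Placeholder paths; point these at your real certificate and key.
    ssl_certificate     /etc/ssl/certs/mywebsite.crt;
    ssl_certificate_key /etc/ssl/private/mywebsite.key;

    root /var/www/html;
}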
Whatever solution you pick, it should only work exactly once for you. If you configure your live site correctly, it should do HSTS, and the next time you type "http://www.mywebsite.com" your browser will GET "https://www.mywebsite.com" and your nginx won't even hear about the insecure http request.
But if you really, really want this you can let your local nginx proxy the https site and strip the HSTS headers:
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    location / {
        proxy_pass https://ip_of_live_server;
        proxy_set_header Host $host;
        # strip 'Strict-Transport-Security'
        proxy_hide_header Strict-Transport-Security;
    }
}
Of course you will need your local nginx to serve these TLS sessions with a certificate that your browser trusts. Either add a self-signed Snake Oil one to your browser, or... since we are implementing bad ideas... add a copy of your live secret key material to your localhost... ;)
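If you go the self-signed route, something like this generates a throwaway certificate and key (file names are arbitrary) that you can point nginx's ssl_certificate and ssl_certificate_key directives at, then trust in your browser:

# Generate a self-signed certificate/key pair valid for 365 days
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout mywebsite.key -out mywebsite.crt \
    -subj "/CN=www.mywebsite.com"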
You can do this by redirecting HTTP connections on your live site to localhost. First remove the record you have in your hosts file.
Then add the following to your live site's nginx.conf.
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # change this to your development machine's IP
        if ($remote_addr = 1.2.3.4) {
            rewrite ^ http://127.0.0.1:8080;
        }
    }
}

Subdomain is unexpectedly resolving despite Nginx not being set up to reverse proxy it

So I have been setting up my home network to host a few websites under a domain (and its subdomains), using a combination of Cloudflare to proxy and provide DDoS protection/HTTPS for the sites, and an Nginx reverse proxy running on my network to allow multiple sites to be served from behind the same gateway that the DNS records in Cloudflare are pointed at.
For the purposes of this explanation, I will replace my real domain name with [domainNameHere].
The first thing to explain is my DNS setup on cloudflare.
I have 4 CNAME records set up to do the following:
Note that I'm using CNAME records because I do not have a static home IP; instead I'm using a Dynamic DNS address that resolves to the IP address of my gateway. This same Dynamic DNS address is used in place of an A record, as I'm aiming to not need to update A records all the time and instead just have everything resolve via an automatically updating Dynamic DNS record.
[domainNameHere].net - Reverse proxy returns the root site when this domain is requested.
www.[domainNameHere].net - Behaves the same as the above, just there to handle any www requests; the reverse proxy returns the same root site for both www and the root domain name.
map.[domainNameHere].net - When this subdomain of [domainNameHere] is called, the reverse proxy instead returns a different site (a map, as you might have guessed).
test.[domainNameHere].net - This is a proxied DNS record set up in Cloudflare for future purposes; I do not yet intend for it to actually return a site.
Now, the expected behaviour is that all of these DNS records should currently return a site, except for test.[domainNameHere].net, which shouldn't - I'd expect it to just return a standard ERR_NAME_NOT_RESOLVED like any other DNS record that doesn't actually go anywhere.
Instead though, when test.[domainNameHere].net is used, it returns the root site that [domainNameHere].net and www.[domainNameHere].net resolve to.
Using map, www or the root domain name all return the expected content.
I believe that I have configured something incorrectly in the Nginx settings; below are the two configuration files that are currently in my sites-enabled directory:
server {
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.nginx-debian.html;

    server_name [domainNameHere].net www.[domainNameHere].net;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name test.[domainNameHere].net;

    location / {
        proxy_pass http://jake-server:8123;
    }
}
Is there anything wrong with my Nginx setup? I thought the behaviour would be that if I haven't set up test.[domainNameHere].net within Nginx, then it wouldn't resolve on the reverse proxy and therefore wouldn't return anything, or would just return the Nginx 404/403 page?
Is it maybe something to do with how the first configuration file is set up to point to local files on the proxy, rather than using proxy_pass like is usually done in a reverse proxy?
Or is this not an issue with Nginx, but rather with how I've set Cloudflare up?
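For context on the behaviour described above: when no server_name matches the incoming Host header, nginx serves the request from the default server for that port (the first block defined, unless another is marked default_server), which is why an unconfigured subdomain can still return the root site. A catch-all that refuses unknown hosts might look roughly like this (a sketch, not taken from the poster's config):

# Catch-all for any Host header not matched by another server block
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # 444 tells nginx to close the connection without sending a response
    return 444;
}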

how to port forward to multiple local servers?

I have purchased a server in my office to set up multiple web services like GitLab, Odoo, Elasticsearch, something like this,
and I want to access those web services externally.
So far what I've tried to do is:
installed Ubuntu 16.04 and nginx on the server
set up port forwarding from 80 to the server IP in my router
set up DNS for a domain local.example.com pointing to my public IP address, so that when I type local.example.com, it goes to the nginx web server on the server
appended the following to the file at /etc/nginx/sites-available/default
server {
    server_name local.example.com;
    listen 80;

    location / {
        # virtual web server made by VirtualBox
        proxy_pass http://192.168.0.11:8081;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
However, after all this, when I type the domain name in the browser, it shows the default nginx web page installed on the server instead of forwarding to the virtual host.
Remove the default server block and restart nginx as well, then try again. Make sure to test in a private window with no caching.
The issue is that when you have a mistake in the virtual host name or something else, nginx will silently send the request to the first server block defined, or to the one marked as the default server. So you always want to avoid that.
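On a Debian/Ubuntu-style layout, that usually amounts to something like the following (paths follow the sites-available/sites-enabled convention mentioned above):

# Disable the stock default site, validate the config, and reload nginx
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl reload nginx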

nginx redirect subdomain to separate server ip

I have a dynamic IP which I manage using ddclient. I use no-ip to maintain the hostnames to point to my IP.
I have www.somename.com, sub.somename.com and app.somename.com. Obviously, these all point to my IP. The first two are a couple of WordPress sites on a server (server1) running NGINX, with separate configs in sites-available for each site. The latter is a separate application server (server2) running GitLab.
My router does not allow me to route based on subdomain, so all port 80 traffic is routed to server1. I'm hoping there is a config I can apply in nginx that will allow me to send all traffic for app.somename.com to a local IP address on my network (192.168.0.nnn), but keep the address in the browser as app.somename.com.
Right now, I have :-
/etc/nginx/site-available$ ls
somename.com domain sub.somename.com app.somename.com
The relevant ones are linked in sites-enabled. For the app server, I have :-
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
    }
}
The problem is that in the browser address bar, this results in :-
http://192.168.1.16/some/pages
Where I want :-
http://app.somename.com/some/pages
How do I resolve this?
You could try like this!
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
        proxy_set_header Host app.somename.com;
    }
}
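If the backend still rewrites the address to its own IP, it often helps to pass the usual forwarding headers along as well; a sketch of the same block with those added (note that GitLab typically also wants its external_url set to the public hostname, which is a GitLab setting rather than an nginx one):

server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
        # Preserve the requested hostname and original client details
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}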

Resources