How to redirect traffic to the live website if https is provided? - nginx

My local site is running on http://localhost:8080. Now, I have a requirement like this: whenever I type http://www.mywebsite.com, it should load my localhost, and if I type https://www.mywebsite.com, it should load the live website.
To achieve this I tried the hosts file (/etc/hosts) and Nginx, but that also stops the live website from loading on my system.
Host file content:
127.0.0.1 www.mywebsite.com
nginx config:
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

Completely agree with the other answers: mapping from nginx on a remote host to your localhost can be difficult unless you know the public IP address of your local machine, and ideally it should be static.
Alternatives
I would encourage giving a try to one of the proxy tools that can be installed on your local machine, e.g. Charles Proxy and its Map Remote feature.
Once installed, follow these steps:
Install and trust the root certificate: Help -> SSL Proxying -> Install Charles Root Certificate
Enable the Map Remote feature: Tools -> Map Remote -> [x] Enable Map Remote
Add a new rule, e.g. http://www.mywebsite.com -> http://localhost:8080
Now you're ready to test:
Navigate to http://www.mywebsite.com (you should see results from your localhost; the proxy has taken over)
Navigate to https://www.mywebsite.com (you should see results from your remote server)
(Screenshots: the Map Remote rule and the mapped result.)

You need several pieces to make this work. Thinking through the steps of how a request could be handled:
1. DNS for www.mywebsite.com points to a single IP, there's no way around that. So all requests for that host, no matter the protocol, will come in to the machine with that IP, the public server.
2. So we need to route those requests, such that a) https requests are handled by nginx on that same machine (the public server), and b) http requests are forwarded to your local machine. nginx can do a) of course, that's a normal config, and nginx can also do b), as a reverse proxy.
3. Now the problem is how to route traffic from the public server to your local machine, which is probably at home behind a dynamic IP and a router doing NAT. There are services to do this, but using your own domain is usually a paid feature (e.g. check out ngrok; I guess Traefik probably handles this too, not sure). To do it yourself you can use a reverse SSH tunnel.
To be clear, this routes any request for http://www.mywebsite.com/ to your local machine, not just your own requests. Everyone who visits the http version of that site will end up hitting your local machine, at least while the tunnel is up.
For 1, you just need your DNS set up normally, with a single DNS record for www.mywebsite.com. You don't need any /etc/hosts tricks, remove those (and maybe reboot, to make sure they're not cached and complicating things).
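For example, the zone just needs one ordinary A record pointing at the public server (the IP below is a documentation placeholder):
www.mywebsite.com.  300  IN  A  203.0.113.10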
For 2, your nginx config on the public server would look something like this:
# First the http server, which will route requests to your local machine
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # Route all http requests to port 8080 on this same server (the
        # public server), which we will forward back to your localhost
        proxy_pass http://127.0.0.1:8080;
    }
}

# Now the https server, handled by this (public) server
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # SSL config stuff ...
    # Normal nginx config ...
    root /var/www/html;

    location / {
        # ... etc, your site config
    }
}
The nginx config on your local machine should just be a normal http server listening on port 8080 (the port you mentioned it is running on). No proxying, nothing special here.
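For reference, a minimal sketch of what that local config could look like; the root path and index file here are assumptions, adjust them to however your site is actually served:
server {
    listen 8080;
    server_name localhost;

    root /var/www/mysite;   # wherever your local site's files live
    index index.html;
}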
For 3, lastly, we need to open a tunnel from your local machine to the public server. If you are on Linux or macOS, you can do that from the command line with something like this:
ssh user@www.mywebsite.com -nNT -R :8080:localhost:8080 &
If you're on Windows you could use something like PuTTY or the built in SSH client on Win 10.
The important parts of this are (copied from the SSH manpage):
-N Do not execute a remote command. This is useful for just forwarding ports.
-R Specifies that connections to the given TCP port or Unix socket on the remote
(server) host are to be forwarded to the local side.
The -R part specifies that connections to remote port 8080 (where nginx is routing http requests) should be forwarded to localhost port 8080 (your local machine). The ports can be anything of course, eg if you wanted to use port 5050 on your public server and port 80 on your local machine, it would instead look like -R :5050:localhost:80.
Of course the tunnel will drop if your public IP address (on the localhost side) changes, if you reboot, if your local wifi goes down, and so on.
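If you want the tunnel to re-establish itself after such drops, a wrapper like autossh is one option; a rough sketch with the same ports and a placeholder user as above (-M 0 disables autossh's extra monitoring port and relies on SSH keepalives instead):
autossh -M 0 \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -N -T -R :8080:localhost:8080 user@www.mywebsite.com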
NOTE: you should also be aware that you really are opening your local machine up to the public internet, so it will be subject to all the same security risks that any server on the public internet faces, like various scripts probing for vulnerabilities. Whenever I use reverse tunnels like this I tend to leave them up only while developing, and shut them down immediately when done (and of course the site will not work while the tunnel is down).

As somebody said above, but in different words: I don't really get why you want to access two different locations with basically the same address (just different protocols). But dude, who are we to tell you not to do it? Don't let anything or anyone stop you! 😉😁
However, we sometimes need to think outside the box and come up with different ways to achieve the same result. Why don't you go to your domain provider and set up something like this:
Create a subdomain (check if you need to set an A record for your domain) so you can have something like https://local.example.com/.
Forward the new subdomain to your local IP address (perhaps you need to open/forward ports on your router and install DDClient or a similar service to catch your dynamic local/public IP and send it to your domain provider; see the sketch after this list).
Leave your @/naked record pointing to your website as it is.
Whenever you access: https://www.example.com or http://www.example.com, you'll see your website.
And if you access https://local.example.com or http://local.example.com, you'll access whatever you have on your local computer.
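A sketch of how the records could end up looking at your DNS provider; the IP addresses below are documentation placeholders and the TTLs are just a guess:
; live site, served by your hosting provider
www.example.com.    300  IN  A  203.0.113.10
; your home connection; ddclient (or similar) keeps this record updated
; whenever your dynamic public IP changes
local.example.com.  60   IN  A  198.51.100.7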
Hope it helps, or at least, gives you a different perspective for a solution.

You have to create a section for listen 443 (https) in your nginx config files, if it is not already there.
# 443 is the default port for https
server {
    listen 443 ssl;
    ....
}
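For example, a minimal sketch of such a block, assuming certificates issued by Let's Encrypt under the usual /etc/letsencrypt paths (adjust to wherever your certificate and key actually live):
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    ssl_certificate     /etc/letsencrypt/live/www.mywebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.mywebsite.com/privkey.pem;

    root /var/www/html;
}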

Whatever solution you pick, it should only work exactly once for you. If you configure your live site correctly, it should do HSTS, and the next time you type "http://www.mywebsite.com" your browser will GET "https://www.mywebsite.com" and your nginx won't even hear about the insecure http request.
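For reference, HSTS is just a response header that the live https site sends; in nginx it is typically set with something like this (the max-age value is only an example):
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;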
But if you really, really want this you can let your local nginx proxy the https site and strip the HSTS headers:
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    location / {
        proxy_pass https://ip_of_live_server;
        proxy_set_header Host $host;
        # strip the HSTS header from the proxied response
        proxy_hide_header Strict-Transport-Security;
    }
}
Of course you will need your local nginx to serve these TLS sessions with a certificate that your browser trusts. Either add a self-signed snake-oil one to your browser, or... since we are implementing bad ideas... add a copy of your live secret key material to your localhost... ;)

You can do this by redirecting HTTP connections on your live site to localhost. First remove the record you have in your hosts file.
Then add the following to your live site's nginx.conf.
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # change this to your development machine's public IP
        if ($remote_addr = 1.2.3.4) {
            # rewriting to an absolute URL sends a 302 redirect, so the
            # browser on your dev machine then loads its own localhost:8080
            rewrite ^ http://127.0.0.1:8080;
        }
    }
}

Related

NGINX server-name and root for a webserver on a local network without any DNS and without configuring etc/hosts on the client

I am completely new to Nginx and PHP. I have a self-hosted Baikal webserver on my local network and I do not use DNS. I would like to use it for synchronizing contacts and calendars with my PC and my Android phone.
I have read the Nginx tutorial and googled a lot, but I still cannot figure out what to use as the server_name in nginx and what to use as the URL on the client.
I tried:
server_name baikal in Nginx and http://baikal as the URL. This works if /etc/hosts on the client PC includes baikal. It fails otherwise.
Without configuring /etc/hosts on the client, I tried:
server_name baikal in Nginx and http://192.168.1.27/baikal as the URL.
On the client, http://192.168.1.27/baikal produces a 404 error, and in the access log Nginx looks for /baikal.
What should I use as server name and as url?
On the server side
The /etc/nginx/sites-available/baikal file must listen on the IP address:
server {
    listen 192.168.1.27:80;
    server_name baikal;
    root /var/www/baikal;
    ...
}
On the client side
Just type the IP address 192.168.1.27 in your browser and it works.
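Since Baikal is a PHP application, a fuller version of that server block also needs a PHP handler; a rough sketch, where the web root and the PHP-FPM socket path are assumptions you will likely need to adjust:
server {
    listen 192.168.1.27:80;
    server_name baikal;

    root /var/www/baikal/html;   # Baikal's docs point the web root at its html/ subdirectory
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your PHP-FPM socket
    }
}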

NGINX reverse proxy not redirecting to HTTP. All traffic forced to HTTPS

I recently set up NGINX to direct 2 domains I own to 2 different servers on my network, utilizing the same WAN address.
I currently have my firewall rules set up to simply pass port 80 traffic to the IP of my NGINX server.
With the following conf file, ALL attempts to connect to any of my previously accessible sites now force the URL typed into the browser to immediately change to HTTPS, which is not what I want.
server {
    listen 80;
    server_name www.domain1.net domain1.net;

    location / {
        proxy_pass http://192.168.50.226:8080;
    }
}

server {
    listen 80;
    server_name www.domain2.net domain2.net;

    location / {
        proxy_pass http://192.168.50.35:8080;
    }
}
The good news is that both domains are resolving to my WAN address, which addressed my first problem. I now want them to natively go to their respective HTTP addresses, rather than the current behavior of switching to HTTPS.
For those that may run into a similar issue: if you are using pfSense or a similar technology for your router, I found the issue to be that the router was also using port 80 for its web GUI, which was then forcing HTTP traffic to use HTTPS. After changing the router's default GUI port to 8080, everything fell into place and started working.

Redirect to internal local ip and port without using /etc/hosts (nginx)

I have a diy (poor man's) NAS and I can access the file-browser in my home-network by using the ip: 192.168.0.2:1111
I could modify the /etc/hosts in each of my devices to redirect my-fancy-filebrowser-url.com to 192.168.0.2:1111.
However, I want to find an alternative that does not involve modifying the /etc/hosts of each device in my network. I do not want to set up a local DNS server either, as it will probably slow down the resolution of internet domains; I am using 8.8.8.8 or 1.1.1.1 to resolve domain names quickly.
One of the alternatives I found is to use nginx. I have purchased a domain name, let's call it mydomain.com, and I have an IPv6 VPS server. I have used Cloudflare to point a URL to the server's IPv6 address, installed nginx on the VPS, and created this config file:
http {
    # redirect to my router page
    server {
        listen [d6b6:8760:97ec:ea7a:562c:c954:bb8d:6e41]:80;
        return 302 http://192.168.0.1;
    }

    # redirect to filebrowser
    server {
        listen [d6b6:8760:97ec:ea7a:562c:c954:bb8d:6e42]:80;
        return 302 http://192.168.0.2:1111;
    }
}
The redirect to my router admin page is working perfectly as expected (for anyone interested, I pointed the Cloudflare record subdomain.mydomain.com to the IPv6 address). But the filebrowser one is not. I suspect it is because I am trying to specify a port to redirect to. Is it possible to do something like this with nginx? Or is there any better alternative that does not involve modifying /etc/hosts or setting up your own DNS server?
Edit: my bad, I was actually entering the IPv6 address incorrectly in Cloudflare. It was missing one digit, so it was never going to work. I corrected the IP and it works fine. The accepted answer does it more cleverly with server names instead of hardcoding the IPv6 address, which is a good idea! Just note that if you are using an IPv6 server then you are going to listen on [::]:80.
Remove the IPv6 addresses in the listen directive and add server_name directives instead:
http {
    # redirect to my router page
    server {
        listen 80;
        server_name router.mydomain.com;
        return 302 http://192.168.0.1;
    }

    # redirect to filebrowser
    server {
        listen 80;
        server_name filebrowser.mydomain.com;
        return 302 http://192.168.0.2:1111;
    }
}
I do not want to set up a local dns server either as it will probably slow down the resolution of internet domains
This is probably a wrong assumption. Something like dnsmasq is able to resolve local names and forward all other DNS queries to upstream servers (like 8.8.8.8 or 1.1.1.1), caching the results. So when set up properly you wouldn't need a domain or a VPS in this case.
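For example, a minimal sketch of /etc/dnsmasq.conf for the setup in the question (the hostname is the one from the question; you would still add :1111 to the URL, or run a web server on the NAS listening on port 80):
# answer this name locally with the NAS's address
address=/my-fancy-filebrowser-url.com/192.168.0.2
# forward everything else to the public resolvers
server=1.1.1.1
server=8.8.8.8
# cache results so repeated lookups stay fast
cache-size=1000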

Enable reverse proxy and block access to the original port

I am hosting an app (Kibana) on port 5601. I want to restrict access to it by whitelisting IPs, so I am trying to host it behind Nginx. Below is my Nginx conf.
server {
    listen *:5700;
    server_name _;

    allow 10.20.30.40; # My IP
    deny all;

    location / {
        proxy_pass http://localhost:5601;
    }
}
It works: only I can access the app on port 5700 and everyone else gets a 403. However, others can still connect directly to port 5601 and bypass the whole setup. How do I stop direct access to port 5601?
localhost:5601 is a connection only accessible to users/processes running on the same host that is running Nginx & Kibana. It needs to be there so that Nginx can proxy_pass traffic to Kibana.
However, I think you are talking about external users also connecting to port 5601 from remote systems.
Kibana does not need to listen to traffic from external systems on port 5601. Note that by default at least some Kibana installs do not listen to external systems and you may not need to make any changes.
However to be sure:
Edit your kibana.yml file (possibly /etc/kibana/kibana.yml)
Ensure that server.host: "localhost" is the only server.host line and is not commented out
Restart Kibana
To further manage your system using best practices, I would strongly recommend operating some form of firewall and only opening access to the ports and protocols which you expect external users to need.
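For example, a rough sketch using ufw (substitute your firewall of choice; the allowed IP is the one from the question):
# allow your own IP to reach the Nginx proxy port
sudo ufw allow from 10.20.30.40 to any port 5700 proto tcp
# refuse direct external access to Kibana's own port
sudo ufw deny 5601/tcp
sudo ufw enable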

How to reroute SFTP traffic via NGINX

I'm trying to set up an FTP subdomain, such that all incoming SFTP requests to (say) ftp.myname.com get routed to a particular internal server, (say) 10.123.456, via port 22.
How do I use nginx to route this traffic?
I've already set up the SFTP server, and can SFTP directly to the server, say:
sftp username@123.456.7890, which works fine.
The problem is that when I set up nginx to route all traffic to ftp.myname.com, it connects, but the passwords get rejected. I have no problems routing web traffic to my other subdomains, say dev.myname.com (with passwords), but it doesn't work for the SFTP traffic:
server {
    listen 22;
    server_name ftp.myname.com;
    return .............
}
How do I define the return string to route the traffic with the passwords?
The connection is SFTP (via port 22).
Thanks
Answering @peixotorms: yes, you can. nginx can proxy/load balance http as well as tcp and udp traffic, see the nginx stream modules documentation (at the nginx main documentation page), and specifically the stream core module's documentation.
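A minimal sketch of such a plain TCP passthrough for SFTP, assuming nginx was built with the stream module; the listen port and the internal address below are placeholders:
stream {
    server {
        # pick a port that isn't already taken by this host's own sshd
        listen 2222;
        # forward the raw TCP connection to the internal SFTP server
        proxy_pass 10.0.0.123:22;
    }
}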
You cannot do this with nginx (http only), you must use something like HAProxy and a simple DNS record for your subdomain pointing to the server IP.
Some info: http://jpmorris-iso.blogspot.pt/2013/01/load-balancing-openssh-sftp-with-haproxy.html
Edit:
Since nginx version 1.15.2 it's now possible to do that using the variable $ssl_preread_protocol. The official blog added a post about how to use this variable for multiplexing HTTPS and SSH on the same port.
https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
Example of configuring SSH on an upstream block:
stream {
    upstream ssh {
        server 192.0.2.1:22;
    }

    upstream sslweb {
        server 192.0.2.2:443;
    }

    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" sslweb;
    }

    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
