NGINX reverse proxy from internal IP with :PORT

I'm running two apps on a raspberry pi on my local network (at 192.168.1.95). I want to access them through my browser like this:
192.168.1.95:3000 = App One
192.168.1.95:3001 = App Two
So my server blocks look like this:
# App One
server {
    listen *:3000;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

# App Two
server {
    listen *:3001;

    location / {
        proxy_pass http://127.0.0.1:3001;
    }
}
But on my laptop, when I navigate to 192.168.1.95:3000 in the browser, it just spins forever. I restarted NGINX on my Pi and it didn't report any errors. What am I missing?
Edits:
I checked the access logs and error logs; nothing is there, which makes me think my 192.168.1.95:3000 requests are not even reaching nginx.
I'm open to other solutions. I would go for a subdomain, but I don't want DNS involved: it's a single internal IP with multiple websites and tools on it, and I don't think subdomain.192.168.1.95 works.
Solution: I am an idiot. I had the UFW firewall on my Pi blocking everything except ports 22, 80, and 443. I opened the ports I wanted and I'm good to go, without even needing nginx.
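For anyone who lands here with the same symptom, the UFW fix is just opening the ports (a sketch; the port numbers are the ones from this question):

```shell
# On the Pi: allow the two app ports through UFW
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp

# Confirm the rules took effect
sudo ufw status
```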

Related

Is it possible to load balance to 2 different minikube servers?

I have 3 different VMs, 2 of which are running an application on Kubernetes (Minikube), exposed via NodePort.
On the third server, I'm trying to use Nginx as a load balancer, but I cannot seem to reach the servers.
I'm following the official Nginx guide, using something like the following (I can access the application via NodePort from my PC):
http {
    upstream backend {
        server 192.168.1.1:31200;
        server 192.168.1.2:31201;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
However, when I connect to the load balancer, it cannot find the servers.
Am I configuring Nginx the wrong way, or is it not possible to load balance this way with a local setup like Minikube?
Turns out I had configured the DNS server wrongly; now it works as expected.
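A sanity check that helps narrow problems like this down: confirm each NodePort upstream is reachable from the load-balancer VM before involving nginx at all (IPs and ports are the ones from the question):

```shell
# Run on the nginx VM; any HTTP status code proves the upstream
# is reachable at the network level.
curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.1.1:31200
curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.1.2:31201
```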

How to redirect traffic to live website if https is provided?

My local site is running on http://localhost:8080. Now I have a requirement like this: whenever I type http://www.mywebsite.com it should load my localhost, and if I type https://www.mywebsite.com it should load the live website.
To achieve this I tried the hosts file (/etc/hosts) and Nginx, but that also stops the live website from loading on my system.
Host file content:
127.0.0.1 www.mywebsite.com
nginx config
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
I completely agree with the other answers: mapping from nginx on a remote host to your localhost is difficult unless you know the public IP address of your local machine, and ideally it should be static.
Alternatives
I would encourage giving a try to some proxy tools that can be installed on your local machine, i.e. Charles Proxy and its Map Remote feature.
Once installed, follow these steps:
Install and trust the root certificate Help -> SSL Proxying -> Install Charles Root Certificate
Enable Map Remote feature Tools -> Map Remote -> [x] Enable Map Remote
Add a new rule, e.g. http://www.mywebsite.com -> http://localhost:8080
Now you're ready to test:
Navigate to http://www.mywebsite.com (you should see results from your localhost, proxy took over)
Navigate to https://www.mywebsite.com (you should see results from your remote server)
Map Remote - Rule
Map Remote - Result
You need several pieces to make this work. Thinking through the steps of how a request could be handled:
1. DNS for www.mywebsite.com points to a single IP; there's no way around that. So all requests for that host, no matter the protocol, will come in to the machine with that IP, the public server.
2. So we need to route those requests such that (a) https requests are handled by nginx on that same machine (the public server), and (b) http requests are forwarded to your local machine. nginx can do (a), of course; that's a normal config, and nginx can also do (b), as a reverse proxy.
3. Now the problem is how to route traffic from the public server to your local machine, which is probably at home behind a dynamic IP and a router doing NAT. There are services that do this, but using your own domain is usually a paid feature (e.g. check out ngrok; I'd guess Traefik handles this too, not sure). To do it yourself you can use a reverse SSH tunnel.
To be clear, this routes any request for http://www.mywebsite.com/ to your local machine, not just your own requests. Everyone who visits the http version of that site will end up hitting your local machine, at least while the tunnel is up.
For 1, you just need your DNS set up normally, with a single DNS record for www.mywebsite.com. You don't need any /etc/hosts tricks, remove those (and maybe reboot, to make sure they're not cached and complicating things).
For 2, your nginx config on the public server would look something like this:
# First the http server, which will route requests to your local machine
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # Route all http requests to port 8080 on this same server (the
        # public server), which we will forward back to your localhost
        proxy_pass http://127.0.0.1:8080;
    }
}
# Now the https server, handled by this (public) server
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    # SSL config stuff ...

    # Normal nginx config ...
    root /var/www/html;

    location / {
        # ... etc, your site config
    }
}
The nginx config on your local machine should just be a normal http server listening on port 8080 (the port you mentioned it is running on). No proxying, nothing special here.
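For reference, a minimal sketch of that local server block (the root path is a placeholder):

```nginx
server {
    listen 8080;
    server_name localhost;

    # Serve your local site; adjust the path to your project
    root /path/to/your/site;
    index index.html;
}
```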
For 3, lastly, we need to open a tunnel from your local machine to the public server. On Linux or macOS you can do that from the command line with something like this:
ssh user@www.mywebsite.com -nNT -R :8080:localhost:8080 &
If you're on Windows you could use something like PuTTY or the built-in SSH client on Windows 10.
The important parts of this are (copied from the SSH manpage):
-N  Do not execute a remote command. This is useful for just forwarding ports.
-R  Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.
The -R part specifies that connections to remote port 8080 (where nginx routes http requests) should be forwarded to localhost port 8080 (your local machine). The ports can be anything, of course; e.g. if you wanted to use port 5050 on your public server and port 80 on your local machine, it would instead look like -R :5050:localhost:80.
Of course the tunnel will fail if your public IP address (on your localhost side) changes, if you reboot, if your local Wi-Fi goes down, and so on.
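If you want the tunnel to recover from drops automatically, a supervisor such as autossh can help; a sketch, assuming autossh is installed and using the same ports as above:

```shell
# -M 0 disables autossh's monitoring port; the ServerAlive options
# let ssh itself notice a dead connection so autossh can reconnect.
autossh -M 0 -nNT \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R :8080:localhost:8080 user@www.mywebsite.com
```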
NOTE: be aware that you really are opening your local machine up to the public internet, so it will be subject to all the same security risks that any server on the public internet faces, such as scripts probing for vulnerabilities. Whenever I use reverse tunnels like this I leave them up only while developing and shut them down immediately when done (and of course the site will not work while the tunnel is down).
As somebody said above, but in different words: I don't really get why you want to access two different locations with basically the same address (different protocols). But dude, who are we to tell you not to do it? Don't let anything or anyone stop you! 😉😁
However, we sometimes need to think outside the box and come up with different ways to achieve the same result. Why don't you go to your domain provider and set up something like this:
Create a subdomain (check whether you need to set an A record for your domain) so you can have something like https://local.example.com/.
Forward the new subdomain to your local IP address (you may need to open/forward ports on your router and install DDClient or a similar service to catch your dynamic public IP and send it to your domain provider).
Leave your @/naked record pointing to your website as it is.
Whenever you access https://www.example.com or http://www.example.com, you'll see your website.
And if you access https://local.example.com or http://local.example.com, you'll reach whatever you have on your local computer.
Hope it helps, or at least, gives you a different perspective for a solution.
You need a server section for listen 443 (https); it may already be there in your nginx config files, or you have to create it.
server {
    # 443 is the default port for https
    listen 443;
    ....
}
Whatever solution you pick, it should only work exactly once for you. If you configure your live site correctly, it will send HSTS, so the next time you type "http://www.mywebsite.com" your browser will GET "https://www.mywebsite.com" instead, and your nginx won't even hear about the insecure http request.
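For context, HSTS is just a response header the https site sends; on nginx it typically looks like this (the max-age value is an example):

```nginx
# Sent on https responses; tells the browser to use https only
# for this host for the next year.
add_header Strict-Transport-Security "max-age=31536000" always;
```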
But if you really, really want this you can let your local nginx proxy the https site and strip the HSTS headers:
server {
    listen 443 ssl;
    server_name www.mywebsite.com;

    location / {
        proxy_pass https://ip_of_live_server;
        proxy_set_header Host $host;
        # Strip the HSTS header from the upstream response
        proxy_hide_header Strict-Transport-Security;
    }
}
Of course you will need your local nginx to serve these TLS sessions with a certificate that your browser trusts: either add a self-signed snake-oil certificate to your browser, or... since we are implementing bad ideas... add a copy of your live site's secret key material to your localhost... ;)
You can do this by redirecting HTTP connections on your live site back to localhost. First remove the record you added to your hosts file.
Then add the following to your live site's nginx.conf:
server {
    listen 80;
    server_name www.mywebsite.com;

    location / {
        # change this to your development machine's IP
        if ($remote_addr = 1.2.3.4) {
            rewrite ^ http://127.0.0.1:8080;
        }
    }
}

Enable reverse proxy and block access to the original port

I am hosting an app (Kibana) on port 5601. I want to restrict access to it by whitelisting IPs, so I am trying to host it behind Nginx. Below is my Nginx conf.
server {
    listen *:5700;
    server_name _;

    allow 10.20.30.40; # My IP
    deny all;

    location / {
        proxy_pass http://localhost:5601;
    }
}
It works: only I can access the app on port 5700 and everyone else gets a 403. However, others can directly go to localhost:5601 and bypass the whole security. How do I stop direct access to port 5601?
localhost:5601 is a connection only accessible to users/processes running on the same host that is running Nginx & Kibana. It needs to be there so that Nginx can proxy_pass traffic to Kibana.
However, I think you are talking about external users also connecting to port 5601 from remote systems.
Kibana does not need to listen to traffic from external systems on port 5601. Note that by default at least some Kibana installs do not listen to external systems and you may not need to make any changes.
However to be sure:
Edit your kibana.yml file (possibly /etc/kibana/kibana.yml)
Ensure that server.host: "localhost" is the only server.host line and is not commented out
Restart Kibana
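Assuming a package install, the relevant part of /etc/kibana/kibana.yml would look something like:

```yaml
# Bind Kibana to loopback only, so nothing outside this host
# can reach port 5601 directly; nginx proxies to it locally.
server.host: "localhost"
server.port: 5601
```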
To manage your system using best practices, I would also strongly recommend running some form of firewall and only opening access to the ports and protocols which you expect external users to need.
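For example, with UFW the idea would be as follows (a sketch; it assumes UFW is your firewall and that the proxy listens on 5700 as in the question):

```shell
sudo ufw deny 5601/tcp    # block direct access to Kibana
sudo ufw allow 5700/tcp   # allow the nginx proxy port
```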

Deploy Sanic server next to Nginx

I'm trying to deploy a Sanic app next to Nginx. I want Nginx to handle:
File serving (my SPA and other assets)
Certbot/letsencrypt ssl (can do without)
And I want Sanic to handle my API endpoints.
I know how to handle each separately. However, I don't know how to make them run next to each other. As far as I know, you can't have two services listening on the same TCP port. If that's the case, should I just make Nginx act as a reverse proxy to Sanic? If so, how would you go about it?
Any guidance would be appreciated.
This is my preferred way to run Sanic: behind nginx, as you described. Just proxy to Sanic, which is listening on some other port.
server {
    ...

    location /api/ {
        proxy_pass http://sanic-app:1234/;
    }
}
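Putting the question's pieces together (static SPA plus API), a fuller sketch might look like this; the paths, port, and server_name are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;

    # nginx serves the SPA bundle and other static assets
    root /var/www/spa;
    index index.html;

    location / {
        # SPA fallback: unknown paths return index.html
        try_files $uri $uri/ /index.html;
    }

    # API requests are proxied to Sanic on its own port
    location /api/ {
        proxy_pass http://127.0.0.1:1234/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note that with the trailing slash on proxy_pass, a request for /api/users reaches Sanic as /users; drop the slash if your Sanic routes include the /api prefix.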

Domain name and port based proxy

I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based websites, so the best I could figure was to run a LAMP container to replicate the current situation and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good: I can privately access those services from my Docker host. The trouble comes when I want to restrict public access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. A lot of reading later, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx, which I had never used before, because someone told me it's better than Apache at dealing with lots of small tasks and requests (not that I would know how to turn Apache into a proxy, mind you). Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
This somewhat works, until I access http://blog.bar.com:5000, which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is the de facto default server for port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block being the default server for port 5000, set up another server block using the same port, and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
See this document for more.
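One such technique is nginx's non-standard 444 code, which closes the connection without sending any response at all:

```nginx
server {
    listen 5000 default_server;
    # 444 is nginx-specific: drop the connection with no reply
    return 444;
}
```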
