nginx reverse proxy to backend running on localhost - nginx

EDIT: It turns out that my setup below actually works. Previously I was getting redirects to port 36000, but that was caused by a configuration setting in my backend application.
I am not entirely sure, but I believe what I want is to set up a reverse proxy using nginx.
I have an application running on a server on port 36000. By default, port 36000 is not publicly accessible, and my intention is for nginx to listen on a public URL and direct any request to that URL to the application running on port 36000. During this entire process, the user should not know that his/her request is being handled by an application on my server's port 36000.
To put it in more concrete terms, assume that my url is http://domain.somehost.com/
Upon visiting http://domain.somehost.com/ , nginx should pick up the request and pass it to the application already running on the server on port 36000; the application does some processing and passes the response back. Port 36000 is not publicly accessible and should not appear as part of any URL.
I've tried a setup that looks like:
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_pass http://127.0.0.1:36000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and including that inside my main nginx.conf
However, it requires me to make port 36000 publicly accessible, which I'm trying to avoid, and port 36000 also shows up as part of the forwarded URL in the web browser.
Is there any way that I can do the same thing, but without making port 36000 accessible?
Thank you.

EDIT: The config below is from a working nginx config, with the hostname and port changed.
You may be able to set the server listening on port 36000 as an upstream server (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html).
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:36000/;
        proxy_redirect http://localhost:36000/ https://$server_name/;
    }
}
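For completeness, a minimal sketch of the upstream variant the answer points to; the upstream name backend_app is an arbitrary placeholder, and the rest mirrors the working config above:

upstream backend_app {
    # the backend stays bound to localhost, so port 36000 is never exposed publicly
    server 127.0.0.1:36000;
}

server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_app;
    }
}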

Related

nginx reverse proxy sends all traffic to first defined server

I have multiple servers running on the same host. I am trying to configure nginx to route traffic based on the server_name, but all traffic is sent to the first defined server.
I have two urls:
example.domain.net
domain.net
which I have configured nginx to proxy with configuration:
server {
    listen 3978;
    listen [::]:3978;
    server_name example.domain.net:3978 example.domain.net:3978;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://127.0.0.1:8443;
    }
}

server {
    listen 3978;
    listen [::]:3978;
    server_name domain.net:3978 www.domain.net:3978;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://127.0.0.1:8020;
    }
}
But all traffic to both example.domain.net:3978 and domain.net:3978 is being sent to whichever server is defined first in the file (in this case example.domain.net)
I've seen other examples where this worked, like this post. Is this possible with one server having a subdomain and the other not?
I am using nginx version 1.18.0 with the default nginx.conf on Ubuntu 18.04
server_name should not include ports. Try removing :3978 from the server_name values.
Because the ports are there, the request's hostname never matches any of the server_name entries, so all traffic is sent to the first server block, which acts as the default when nothing matches.
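A sketch of the same two server blocks with the ports removed from server_name (everything else unchanged):

server {
    listen 3978;
    listen [::]:3978;
    server_name example.domain.net;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://127.0.0.1:8443;
    }
}

server {
    listen 3978;
    listen [::]:3978;
    server_name domain.net www.domain.net;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://127.0.0.1:8020;
    }
}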

How to use multiple NGINX reverse proxy servers

NGINX works great as a reverse proxy for my virtual servers to host applications using a single IP address and multiple domain names.
I have one particular VM that runs several node apps that work together as one web application. That server runs its own NGINX reverse proxy to handle everything and works great when exposed to the internet with a unique IP.
Since I want to use my single IP to serve this and the rest of my things, I'd like to configure my primary NGINX server to pass everything off to that server's NGINX instance to handle. This seemed pretty straightforward, but it isn't working as expected.
On my primary NGINX server (that all traffic is sent to from the firewall) I have configured a site file like this:
server {
    listen 80;
    listen [::]:80;
    server_name example.domain.com *.example.domain.com;

    client_max_body_size 1G;

    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://10.10.10.10;
    }
}
This results in a 502 error. The applications do require WebSockets, and each has a unique subdomain, some of which are pointed at directories on that server.
Am I missing some config or thinking about this incorrectly?
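One common cause of a 502 when the backend expects WebSocket connections is that the Upgrade and Connection headers are not forwarded (nginx does not pass them by default, and the upgrade requires HTTP/1.1); a sketch of the location block with those headers added:

location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://10.10.10.10;
}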

How to NGINX reverse proxy to backend server which has a self signed certificate?

I have a small network with a webserver and an OpenVPN Access Server (with its own web interface). I have only one public IP and want to be able to point subdomains to websites on the webserver (e.g. website1.domain.com, website2.domain.com) and point the subdomain vpn.domain.com to the web interface of the OpenVPN Access Server.
After some googling, I think the way to go is to set up a proxy server. NGINX seems to be able to do this with the "proxy_pass" function. I got it working for HTTP backend URLs (websites), but it does not work for the OpenVPN Access Server web interface because it forces the use of HTTPS. I'm fine with HTTPS and would prefer to use it for the websites hosted on the webserver as well. By default a self-signed certificate is installed, and I want to use self-signed certificates for the other websites too.
How can I "accept" self-signed certificates for the backend servers? I found that I need to generate a certificate and define it in the NGINX reverse proxy config, but I don't understand how this works, as for example my OpenVPN server already has an SSL certificate installed. I'm able to visit the OpenVPN web interface via https://direct.ip.address.here/admin but get a "This site cannot deliver a secure connection" page when I try to access the web interface via Chrome.
My NGINX reverse proxy config:
server {
    listen 443;
    server_name vpn.domain.com;

    ssl_verify_client off;

    location / {
        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.128.20.5:443;
        proxy_ssl_verify off;
    }

    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}

server {
    listen 80;
    server_name website1.domain.com;

    location / {
        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://10.128.11.20:80;
    }

    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}
A nearby thought...
Maybe NGINX is not the right tool for this at all (now or in the long term)? Let's assume I can fix the certificate issue I currently have and we later need more backend web servers to handle the traffic: is it possible to scale the NGINX proxy as well, like a cluster or load balancer or something? Should I look for a completely different tool?
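On the "cannot deliver a secure connection" error: the vpn.domain.com block above listens on 443 without the ssl parameter, so nginx speaks plain HTTP on that port and the browser's TLS handshake fails. A sketch, with assumed certificate paths, of a listener that terminates TLS with its own self-signed certificate while still skipping verification of the backend's certificate:

server {
    listen 443 ssl;
    server_name vpn.domain.com;

    # assumed paths to a self-signed certificate generated for vpn.domain.com
    ssl_certificate     /etc/nginx/ssl/vpn.domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/vpn.domain.com.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.128.20.5:443;
        proxy_ssl_verify off;  # do not verify the backend's self-signed certificate
    }
}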

Spawning node.js server with nginx

So this is something new to me. I have a node.js application running on my server on port 3000. I have an nginx proxy:
upstream websli_nodejs_app_2 {
    server 127.0.0.1:3000;
}

# this is the dynamic websli server
server {
    listen 80;
    server_name test.ch;
    #root /var/www/websli/public;

    # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://websli_nodejs_app_2/;
        proxy_redirect off;
    }
}
This works like a charm. Now I'm spawning (at least I believe that's what it's called) more node applications. That's where Wintersmith comes into play: I'm running wintersmith preview.
On my localhost that results in another node.js server on localhost:8000. When I then go to localhost:8000 in my browser I get the expected result, but on my localhost I don't have the nginx proxy set up.
The issue:
Now on my production setup with nginx, I'm a bit stuck because I obviously cannot access localhost:8000.
I have tried to add another upstream server, but that didn't really work out. I have then also tried to spawn on something like dev.test.ch:8000, but that results in an error like listen EADDRNOTAVAIL.
What I'm looking for
The goal is to start another server from inside my main node.js server and make it accessible from a browser. Any input is highly welcomed.
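One possible direction, sketched under the assumption that dev.test.ch from the question is the hostname meant to serve the preview: keep wintersmith preview bound to localhost:8000 and add a second upstream and server block in front of it, mirroring the existing one:

upstream websli_wintersmith_preview {
    # wintersmith preview keeps listening on localhost only
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name dev.test.ch;  # assumed hostname, taken from the question

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://websli_wintersmith_preview/;
        proxy_redirect off;
    }
}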

How do supervisord and nginx handle which tornado port is used?

I am using supervisord to spool up 2 instances of tornado on different ports, and I use nginx as a reverse proxy to these ports. I have noticed that all traffic is directed to only one port. How does supervisord or nginx decide which instance of tornado is used when a user makes a request to the web service?
nginx config:
http {
    upstream frontends {
        server xx.xxx.x.xxx:8001;
        server xx.xxx.x.xxx:8002;
    }

    server {
        listen 80;
        server_name xx.xxx.x.xxx;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
From the nginx docs:
Requests are distributed according to the servers in round-robin manner with respect of the server weight.
By default, servers are given equal weight. Are you sure all requests are going to one port?
Also note that supervisord's role is simply process management - only nginx decides how to distribute traffic to the ports you've configured.
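As an illustration of the weighting the docs refer to, the upstream block can be skewed explicitly; with no weight given, both servers default to weight=1 and requests simply alternate between them (the weights below are arbitrary):

upstream frontends {
    server xx.xxx.x.xxx:8001 weight=3;  # receives roughly three out of every four requests
    server xx.xxx.x.xxx:8002;           # default weight=1
}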
