Nginx reverse proxy to backend gives an error ERR_CONNECTION_REFUSED - nginx

I have an application running on a server, with the frontend (built with React) on port 6001 and the backend (built with Node.js) on port 6002, on an EC2 instance.
When I send a request from the Ubuntu terminal on the instance with curl -X GET http://127.0.0.1:6002/api/works, it works fine and I get the proper data.
Now I go to a browser and open the domain (http://example.com). However, only the frontend gets called. When the browser sends a request to the backend server, it gives me an error: GET http://127.0.0.1:6002/api/works net::ERR_CONNECTION_REFUSED (the domain goes through an ELB).
Here's my nginx config.
server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    root /home/www/my-project/;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:6001/;
    }

    location /api/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:6002/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
My case is similar to this question, where the author says:
My entry point via the AWS DNS takes me to the React app at 127.0.0.1:4100, which is what the port 80 nginx is listening to. The React app is the one that makes the backend calls, and it is coded to make the backend API calls on 127.0.0.1:1323/api. The client error you saw is because the browser displays what the React app is trying to call on the server, but nginx is not redirecting that call to the app running on that port. Does that make sense?
The accepted answer didn't work for me.
Also, according to a comment there, the problem was solved by having the React app send requests to http://AWS_IP/. But I'm not sure that's a good solution for me, since there would be no point in using an ELB then. If I understand the concept of an ELB correctly, the requests need to go through the ELB, don't they?
Please help, this is driving me crazy.

From your question, I understood the following things:
Your domain is pointing to an Amazon ELB.
There is a VM behind this ELB, and it has nginx and the two applications on it.
Nginx is listening on port 80, the backend application is listening on port 6002, and the frontend is listening on port 6001.
Your frontend application is calling the backend from your local browser using http://127.0.0.1:6002/api/works.
Here is the problem,
You can curl 127.0.0.1 from the same instance where the application is running because you are hitting that instance's own localhost. It is different when the web application runs in your local browser: a React application (like any JavaScript application) executes on your local machine, so its backend call hits your machine's localhost, and the connection is refused.
So the solution is to change the backend URL so that it looks something like http://yourdomain/api/works.
In addition to this, I have a couple of suggestions on your configuration.
You don't need a separate web server for your frontend, since you can serve it from the same nginx (see the sketch below).
Make sure that your ELB target port is 80, or the same port that nginx is listening on.
And close ports 6001 and 6002 if they are publicly accessible.
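As a sketch of those suggestions combined: one nginx serves the React build directly and proxies only the API. The build path /home/www/my-project/build is an assumption; adjust it to wherever your compiled frontend lives. Note also that the original proxy_pass http://127.0.0.1:6002/; has a trailing slash, which makes nginx strip the /api prefix before the request reaches the backend, while your curl test shows the backend expects /api/works; using proxy_pass without a URI part forwards the path unchanged.

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    # Serve the compiled React app directly instead of proxying to port 6001.
    root /home/www/my-project/build;   # assumed build output path
    index index.html;

    location / {
        # Fall back to index.html so client-side routes still resolve.
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # No URI part here, so /api/works is forwarded to the backend unchanged.
        proxy_pass http://127.0.0.1:6002;
    }
}

With this in place the frontend can call a relative URL such as /api/works, and the same request works whether it arrives via the ELB or directly.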

Related

NGINX Reverse proxy response

I am using an NGINX server as a reverse proxy. The NGINX server accepts a request from an external client (HTTP or HTTPS, it doesn't matter) and passes this request to a backend server. The backend server returns a URL to the client, which the client should then use for subsequent API calls. I want this returned URL to have the NGINX host and port number instead of the backend host and port number, so that my backend server details are never exposed. For example:
1) Client request:
http://nginx_server:8080
2) Nginx receives this and passes it to the backend running at
http://backend_server:8090
3) The backend server receives this request and returns another URL to the client: http://backend_server:8090/allok.
4) The client uses this URL to make subsequent API calls.
What I want is for the "backend_server:port" in the step 4 response to be replaced by the nginx server and port from the initial request, e.g.
http://nginx_server:8080/allok
However, the response goes back as
http://backend_server:8090/allok
my nginx.conf
http {
    server {
        listen 8080;              # client request port
        server_name localhost;

        location / {
            # Backend server port. The backend service and NGINX
            # will always be on the same machine.
            proxy_pass http://localhost:8090;

            # Not sure if this is correct. Doesn't seem to do
            # what I want to achieve.
            proxy_redirect http://localhost:8090 http://localhost:8080;

            # proxy_set_header Host $host;
        }
    }
}
Thanks in advance
I was able to resolve it. I had to eliminate the proxy_redirect directive from the config. With it removed, nginx falls back to proxy_redirect default, which rewrites Location headers that point at the proxy_pass address (http://localhost:8090) into ones relative to the server the client actually requested.
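For reference, a minimal sketch of the resulting configuration with the directive removed; the implicit proxy_redirect default handles the Location rewriting:

events {}

http {
    server {
        listen 8080;              # port the external client connects to
        server_name localhost;

        location / {
            # "proxy_redirect default" (implicit) rewrites Location
            # headers of the form http://localhost:8090/... so the
            # client keeps talking to this server instead.
            proxy_pass http://localhost:8090;
        }
    }
}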

how to port forward to multiple local servers?

I have purchased a server for my office to set up multiple web services, such as GitLab, Odoo, and Elasticsearch,
and I want to access these web services externally.
So far, what I've tried to do is:
installed Ubuntu 16.04 and nginx on the server
set up port forwarding from port 80 to the server IP on my router
set up DNS for a domain, local.example.com, pointing to my public IP address, so that when I type local.example.com it goes to the nginx web server on the server
appended the following server block to /etc/nginx/sites-available/default
server {
    server_name local.example.com;
    listen 80;

    location / {
        # virtual web server made by VirtualBox
        proxy_pass http://192.168.0.11:8081;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
However, after all this, when I type the domain name in the browser, it shows the default nginx page from the server itself instead of forwarding to the virtual host.
Remove the default server block and restart nginx, then try again. Make sure to test in a private window so nothing is cached.
The issue is that when you have a mistake in the virtual host name or something else, nginx will silently send the request to the first server block defined, or to the one marked as the default server. You always want to avoid that.
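To extend this to several services, a sketch: give each service its own server block selected by server_name, so nginx routes by the Host header. The hostnames and internal addresses below are made up for illustration:

# Hypothetical name-based routing for several internal services.
# One public IP, port 80 forwarded to this nginx; nginx picks the
# server block whose server_name matches the Host header.
server {
    listen 80;
    server_name gitlab.example.com;
    location / {
        proxy_pass http://192.168.0.12:8082;   # assumed GitLab address
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name odoo.example.com;
    location / {
        proxy_pass http://192.168.0.13:8083;   # assumed Odoo address
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}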

Dynamic nginx upstream doesn't work with authorization header

I have a problem with a particular nginx setup. The scenario is like this: Applications need to access a couchdb service via a nginx proxy. The nginx needs to set an authorization header in order to get access to the backend. The problem is that the backend service endpoint's DNS changes sometimes and that's causing my services to stop working until I reload nginx.
I'm trying to set up the upstream as a variable, but when I do that, authorization stops working and the backend returns 403. When I just use the upstream directive, it works just fine. The upstream variable has the correct value, and there are no errors in the logs.
The config snippet below:
set $backend url.to.backend;

location / {
    proxy_pass https://$backend/api;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
    proxy_temp_path /mnt/nginx_proxy;
}
Any help will be appreciated.
Unless you have the commercial version, nginx resolves the domain name of an upstream only once, at startup or configuration reload (proxy_pass is basically a one-server upstream), so the only way to re-resolve it is to restart or reload the configuration. This is assuming the changing DNS is the issue.
From the upstream module documentation:
Additionally, the following parameters are available as part of our
commercial subscription:
...
resolve — monitors changes of the IP addresses that correspond to a domain name of the server, and automatically modifies the upstream configuration without the need of restarting nginx (1.5.12)
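On open-source nginx there is a common workaround, which the question above already hints at: put the hostname in a variable and add a resolver directive. When proxy_pass contains variables, nginx resolves the name at request time through that resolver instead of once at startup. A minimal sketch, assuming a resolver at 10.0.0.2 (substitute your VPC or local DNS address) and a 30-second re-resolution interval:

location / {
    # A resolver is required when proxy_pass contains variables;
    # valid=30s forces re-resolution every 30 seconds regardless of TTL.
    resolver 10.0.0.2 valid=30s;

    set $backend url.to.backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
    proxy_pass https://$backend/api;
}

This addresses the re-resolution only; if the 403 from the question persists with the variable form, comparing the Host and Authorization headers actually sent in the working and non-working setups is the first thing to check.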

Nginx reverse proxy works only a few times, then fails

I deployed a Meteor app to a DigitalOcean droplet and mapped it to a domain. I'm pretty new to server management, so I followed a guide to set up a reverse proxy with nginx pointing at the correct port (the Meteor app is served on port 3000).
I created a file called trackburnr.com in /etc/nginx/sites-available with this content:
server {
    listen 80;
    server_name trackburnr.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Then I started/restarted the nginx service.
Now, here's the catch. If I navigate to trackburnr.com:3000, it always works, so I'm confident my droplet and the DNS record on the domain work fine.
If I navigate to trackburnr.com, it seems to work fine, but if I refresh the page after a few minutes, or navigate to it in another browser, it returns the "page not found" page from my internet provider.
If I restart the service, it usually works fine for another few minutes and then stops working again.
There are several guides about this as it's a popular setup for deploying meteor apps, but they all use this same approach.
Following another answer here, I tried setting proxy_pass as a variable beforehand and passing that, but with no success.
Has anyone encountered similar issues?
I think I figured it out. My domain provider had a DNS redirect set up which redirected trackburnr.com to www.trackburnr.com. Obviously that subdomain wasn't mapped in nginx.
I reversed the redirect so that www redirects to the non-www version, and that seemed to do the trick.
After that I was getting 400 Bad Request errors. I attribute this to the Google Analytics code in my header, which made the cookies too big. I fixed it by adding large_client_header_buffers 4 16k; to the server block in my nginx conf file. More info about that here.
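Putting both fixes together, the server configuration might look like the sketch below. The redirect is shown done in nginx itself rather than at the domain provider (either works), with the www-to-non-www direction from the answer; everything else is taken from the question:

# Redirect www to the bare domain so both names are handled.
server {
    listen 80;
    server_name www.trackburnr.com;
    return 301 http://trackburnr.com$request_uri;
}

server {
    listen 80;
    server_name trackburnr.com;

    # Larger header buffers to accommodate big analytics cookies.
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}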

Linode NodeBalancer Vs Nginx

I have a NodeBalancer set up to route my requests to a Tomcat server via HTTP. The NodeBalancer is doing fine, but now I have to install an Nginx server to serve static content as well as act as a reverse proxy redirecting my HTTP traffic to HTTPS.
I have the scenario below:
User ---via HTTP---> NodeBalancer(http:80) ---> Nginx ---> redirect to HTTPS ---> NodeBalancer(https:443) ---> Tomcat on HTTP:8080
Here is the flow:
1) The user sends a request over HTTP:80.
2) The NodeBalancer receives the request on HTTP:80 and forwards it to Nginx.
3) Nginx redirects the request to HTTPS.
4) The NodeBalancer receives the request on HTTPS:443, terminates SSL, and forwards it to the serving Tomcat on HTTP:8080.
Now, if I need to serve all static content (images/|img/|javascript/|js/|css/|stylesheets/), then before the HTTPS requests are forwarded via the NodeBalancer to the serving Tomcat, I need to route them through Nginx to serve the static content.
I can do that by pointing the NodeBalancer at Nginx, but then what about Tomcat clustering? The NodeBalancer would always forward all HTTPS requests to Nginx, and I would have to maintain session stickiness in Nginx, which is pretty much load balancing via Nginx. It seems everything can be done by Nginx itself: instead of terminating all user requests at the NodeBalancer, I could use Nginx directly.
I did try some scenarios by installing Nginx, redirecting HTTP to HTTPS, and independently serving static content, but I am stuck on whether the provided NodeBalancer still serves my purpose. I am planning to drop the Linode NodeBalancer and use Nginx as the load balancer as well as for serving static content.
I'm looking for some expert advice/comments on this, or suggestions if my approach is wrong.
Serving the static content and the redirect to HTTPS are two different issues. Your general approach sounds fine. I personally would do everything with Nginx and lose the NodeBalancer, but that's for a personal website. If this is for a business, then you need to consider monitoring etc., and the NodeBalancer might provide some features you want to keep.
Send all traffic from the NodeBalancer to Nginx and use Nginx both as the load balancer and to terminate all SSL traffic. Here's a simple example that terminates SSL and serves images. In this case we're routing all traffic to the tomcat upstream group on port 80, load balanced using IP hash so you get sticky sessions; the upstream block is where the load balancing happens.
upstream tomcat {
    ip_hash;                      # sticky sessions by client IP
    server 192.168.1.1:80;
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}

server {
    listen 443 ssl;
    server_name www.example.org;

    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        # Requires a proxy_cache_path zone named example_cache
        # to be defined at the http level.
        proxy_cache example_cache;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host www.example.org:80;
        proxy_pass_request_headers on;
        proxy_pass http://tomcat;
    }

    location /images/ {
        # "root" is appended to the URI, so /images/foo.png is
        # served from /var/www/images/foo.png.
        root /var/www;
        autoindex off;
    }
}
To achieve sticky sessions you have several options that you need to read up on; IP-hash load balancing is probably the simplest to set up.
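For completeness, step 3 of the flow (the redirect from HTTP to HTTPS) is not part of the example above; a companion server block along these lines could handle it:

server {
    listen 80;
    server_name www.example.org;
    # Bounce all plain-HTTP traffic to the HTTPS listener.
    return 301 https://$host$request_uri;
}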
