I am using an NGINX server as a reverse proxy. The NGINX server accepts a request from an external client (HTTP or HTTPS doesn't matter) and passes this request to a backend server. The backend server returns a URL to the client that the client should use for subsequent API calls. I want this returned URL to carry the NGINX host and port number instead of the backend server's host and port number, so that my backend server details are never exposed. For example:
1) Client request:
http://nginx_server:8080
2) Nginx receives this and passes it to the backend service running at
http://backend_server:8090
3) The backend server receives this request and returns another URL to the client: http://backend_server:8090/allok.
4) The client uses this URL to make subsequent API calls.
What I want is for the "backend_server:port" part of the returned URL to be replaced by the NGINX server and port from the initial request, so that in step 4 the client calls, e.g.:
http://nginx_server:8080/allok
However, the response goes back as
http://backend_server:8090/allok
My nginx.conf:
http {
    server {
        listen 8080;            # client request port
        server_name localhost;

        location / {
            # Backend server port. The backend service and NGINX will
            # always be on the same machine.
            proxy_pass http://localhost:8090;

            # Not sure if this is correct. Doesn't seem to do what I
            # want to achieve.
            proxy_redirect http://localhost:8090 http://localhost:8080;

            # proxy_set_header Host $host;
        }
    }
}
Thanks in advance
I was able to resolve it. I had to eliminate the proxy_redirect directive from the config.
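For anyone landing here later, a minimal sketch of the resulting config with the proxy_redirect line removed (the Host header line is an assumption on my part, not part of the original fix):

http {
    server {
        listen 8080;
        server_name localhost;

        location / {
            # Assumption: forwarding the client-facing host and port lets
            # the backend build its returned URLs from nginx's address.
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:8090;
            # explicit proxy_redirect removed, per the fix above
        }
    }
}

Note that proxy_redirect only rewrites the Location and Refresh response headers; if the backend embeds its own host in the response body, it has to build that URL from the forwarded Host header instead.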
I have an application running on a server at port 6001 (frontend, built with React.js) and port 6002 (backend, built with Node.js) on an EC2 instance.
When I send a request from the Ubuntu terminal on the instance with curl -X GET http://127.0.0.1:6002/api/works, it works fine; I get proper data.
Now I go to a browser with the domain (http://example.com). However, only the frontend gets called. When I send a request from the browser to the backend server, it gives me an error: GET http://127.0.0.1:6002/api/works net::ERR_CONNECTION_REFUSED (the domain goes through an ELB).
Here's my nginx config.
server {
    listen 80;
    listen [::]:80;

    server_name example.com;

    root /home/www/my-project/;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:6001/;
    }

    location /api/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:6002/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
My case is similar to this other question, where a comment says:
My entry point via the aws dns takes me to the React app 127.0.0.1:4100 that is the port 80 nginx is listening to. The react app is the one that makes the backend calls and it is coded to make the backend api calls on 127.0.0.1:1323/api. The client error you saw was because it is displaying what the react app is trying to call in the server, but nginx is not redirecting that call to the app that is running at that port. Does that make sense?
The selected answer didn't work for me.
Also, according to the comment, the problem is solved by sending the request to http://AWS_IP/ from the React app, but I'm not sure that's a good solution for me, since there would be no point in using the ELB then. If I understand the concept of an ELB correctly, the requests need to go through the ELB?
Please help, this is driving me crazy.
From your question, I understood the following things:
Your domain is pointing to an Amazon ELB.
There is a VM behind this ELB, and it has Nginx and the two applications on it.
Nginx is listening on port 80; the backend application is listening on port 6002 and the frontend on port 6001.
Your frontend application is calling the backend from your local browser using http://127.0.0.1:6002/api/works.
Here is the problem:
You can curl 127.0.0.1 from the same instance where the application is running (listening on port 6002) because you are hitting that instance's own localhost. It is different when your web application runs in your local browser: a React application (like any JavaScript application) executes on your local machine, so the backend call hits your machine's localhost and returns CONNECTION REFUSED.
So the solution is that you have to change the backend URL so that it looks something like http://yourdomain/api/works.
In addition to this, I have a couple of suggestions on your configuration:
You don't need a separate web server for your frontend, since you can serve the static build from the same Nginx (see the sketch below).
Make sure that your ELB target port is 80, or whatever port Nginx is listening on.
And close ports 6001 and 6002 if they are publicly accessible.
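As a rough illustration of that first suggestion, here is a minimal sketch; the build path /home/www/my-project/build is an assumption, so adjust it to wherever your React build output actually lives:

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    # Serve the static React build directly instead of proxying to :6001.
    root /home/www/my-project/build;   # assumed build output path
    index index.html;

    location / {
        # Fall back to index.html so client-side routing keeps working.
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:6002/;
    }
}

With this, the React app can call the backend with a relative URL like /api/works, and the request travels through the ELB and Nginx rather than the browser's localhost.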
I'm not an expert on networking or servers, but I need to configure an Nginx server that will listen on two different external addresses:
https://fake.net
https://example.com
I have a Node.js application deployed locally on this server at http://localhost:3020.
I'm trying to proxy from Nginx to the Node.js app, but I need the Node.js API to receive the request with the original request URL.
Is there any way to forward the request like this?
Request: https://fake.net/api/test -------> received in the Node app as: http://fake.net/api/test
Request: https://example.com/api/test1 -------> received in the Node app as: http://example.com/api/test1
The Host header defines the domain part of the proxied request. Generally $host is used to get the value from the original request. See this document for details.
The request is passed transparently if no optional URI is provided to the proxy_pass directive. See this document for details.
For example:
location /api/ {
    proxy_set_header Host $host;
    proxy_pass http://localhost:3020;
}
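If it helps, a fuller sketch along those lines for the two domains, assuming a single server block and a certificate covering both names (the certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name fake.net example.com;

    ssl_certificate     /etc/ssl/certs/your_cert.pem;     # placeholder
    ssl_certificate_key /etc/ssl/private/your_cert.key;   # placeholder

    location /api/ {
        # Pass the original host so the Node.js app sees fake.net or
        # example.com instead of localhost:3020.
        proxy_set_header Host $host;
        proxy_pass http://localhost:3020;
    }
}

A request for https://fake.net/api/test then reaches the Node.js app as /api/test with Host: fake.net, which is what the app needs to reconstruct the original URL.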
I wish to set up nginx as an HTTPS reverse proxy to a local application, failing over to remote hosts in case the local application is down, e.g. during deployment. My problem is that I need the scheme (http or https) to depend on whether the upstream host is local or remote, but I cannot find a way to set it dynamically.
Consider the below configuration.
upstream backend {
    server localhost:8080;            # scheme should be http here
    server a.example.com:443 backup;  # scheme should be https here
    server b.example.com:443 backup;  # scheme should be https here
}
server {
    listen 443;
    ...
    ...

    location / {
        # How can I set the proxy_pass scheme to https when the upstream
        # is a remote host?
        proxy_pass http://backend;
    }
}
Is there a way of making the proxy_pass scheme depend on the chosen upstream? I looked into the nginx documentation and could not find any way of defining it dynamically. Am I missing something? Do I have to set up an intermediary server for localhost which handles HTTPS and set proxy_pass https://backend? That would be a shame.
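For what it's worth, that intermediary-server idea could be sketched like this, purely as an illustration (port 8443 and the certificate paths are assumptions): a local shim terminates TLS and forwards to the plain-HTTP app, so the upstream group can use a single https scheme.

upstream backend {
    server localhost:8443;            # local TLS shim defined below
    server a.example.com:443 backup;
    server b.example.com:443 backup;
}

server {
    # Internal shim: accepts HTTPS and forwards to the local HTTP app.
    listen 8443 ssl;
    server_name localhost;
    ssl_certificate     /etc/ssl/internal.pem;   # placeholder
    ssl_certificate_key /etc/ssl/internal.key;   # placeholder

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass https://backend;
    }
}

Nginx does not verify upstream certificates by default, so a self-signed certificate on the shim would work, at the cost of an extra local TLS hop.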
I need to achieve the below test case using nginx:
www.example.com/api/ should redirect to ABC.com/api,
while www.example.com/api/site/login should redirect to XYZ.com/api/site/login
But in the browser, the user should only see www.example.com/api... (and not the redirected URL).
Please let me know how this can be achieved.
The use of ABC.com is forbidden by Stack Overflow rules, so in the example config I use the domain names ABC.example.com and XYZ.example.com:
server {
    ...
    server_name www.example.com;
    ...

    location /api/ {
        proxy_set_header Host ABC.example.com;
        proxy_pass http://ABC.example.com;
    }

    location /api/site/login {
        proxy_set_header Host XYZ.example.com;
        proxy_pass http://XYZ.example.com;
    }
    ...
}
(replace http:// with https:// if needed)
The order of location directives is of no importance because, as the documentation states, the location with the longest matching prefix is selected.
With the proxy_set_header parameter, nginx will behave exactly in the way you need, and the user will see www.example.com/api... Otherwise, without this parameter, nginx will generate HTTP 301 redirection to ABC.example.com or XYZ.example.com.
You don't need to specify a URI in the proxy_pass parameter because, as the documentation states, if proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed.
You can specify your servers ABC.example.com and XYZ.example.com as domain names or as IP addresses. If you specify them as domain names, you need to add the resolver parameter to your server config. You can use your local name server if you have one, or something external like Google Public DNS (8.8.8.8) or the DNS provided by your ISP:
server {
    ...
    server_name www.example.com;
    resolver 8.8.8.8;
    ...
}
Try this:
location /api {
    proxy_pass http://proxiedsite.com/api;
}
When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, uwsgi, SCGI, and memcached.
To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location.
Resource from NGINX Docs
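As an aside, proxying to one of those non-HTTP upstreams uses a protocol-specific directive rather than proxy_pass. For example, a minimal FastCGI sketch (the PHP-FPM socket path is an assumption):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
}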
I have a NodeBalancer created to route my requests to a Tomcat server via HTTP. The NodeBalancer is doing fine, but now I have to install an Nginx server to serve static content and also act as a reverse proxy that redirects my HTTP traffic to HTTPS.
I have the below scenario:
User -----via HTTP----> NodeBalancer (http:80) ----> Nginx ----> redirect to HTTPS ----> NodeBalancer (https:443) ----> Tomcat on HTTP:8080
Below is the sample flow:
1) The user sends a request over HTTP:80.
2) The NodeBalancer receives the request on HTTP:80 and forwards it to Nginx.
3) Nginx redirects the request to HTTPS.
4) The NodeBalancer now receives the request on HTTPS:443, terminates SSL, and forwards it to the serving Tomcat on HTTP:8080.
Now, if I need to serve all static content (images/|img/|javascript/|js/|css/|stylesheets/), then before forwarding HTTPS requests via the NodeBalancer to the serving Tomcat, I need to route them through Nginx to serve the static content.
I can do this by pointing the NodeBalancer to Nginx, but then what about Tomcat clustering? The NodeBalancer would always forward all HTTPS requests to Nginx, and I would have to maintain session stickiness in Nginx, which is pretty much load balancing via Nginx. I see that everything could be done by the Nginx server itself; instead of terminating all user requests at the NodeBalancer, I could use Nginx directly.
I did test some scenarios by installing Nginx, redirecting HTTP to HTTPS, and independently serving static content, but I am stuck on whether the provided NodeBalancer serves my purpose. I am planning to drop the Linode NodeBalancer and use Nginx as the load balancer as well as to serve static content.
Looking for some expert advice/comments on this; please let me know if my approach is wrong.
Serving the static content and the redirect to HTTPS are two different issues. Your general approach sounds fine. I personally would do everything using Nginx and lose the NodeBalancer, but that's for a personal website; if this is for business, then you need to consider monitoring etc., and the NodeBalancer might provide some features you want to keep.
Send all traffic from the NodeBalancer to Nginx and use Nginx both as the load balancer and to terminate all SSL traffic. Here's a simple example that terminates SSL and serves images. In this case we're routing all traffic to the tomcat upstream group on port 80, which is load balanced using IP hash so you get sticky sessions. In effect, Nginx is the load balancer here.
upstream tomcat {
    ip_hash;                  # sticky sessions based on client IP
    server 192.168.1.1:80;
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}

server {
    listen 443;
    server_name www.example.org;

    ssl on;   # older syntax; on modern nginx use "listen 443 ssl;" instead
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        # Note: example_cache must be defined with proxy_cache_path in
        # the http block for this directive to work.
        proxy_cache example_cache;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host www.example.org:80;
        proxy_pass_request_headers on;
        proxy_pass http://tomcat;
    }

    location /images/ {
        # alias, not root: with "root /var/www/images/" a request for
        # /images/foo.png would map to /var/www/images/images/foo.png.
        alias /var/www/images/;
        autoindex off;
    }
}
To achieve sticky sessions you have several options that you need to read up on; IP-hash load balancing is probably the simplest to set up.
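For the HTTP-to-HTTPS redirect step in your flow, a companion port-80 server block like this sketch (not part of the example above) would sit alongside it:

server {
    listen 80;
    server_name www.example.org;

    # Redirect all plain-HTTP traffic to HTTPS.
    return 301 https://$host$request_uri;
}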