how to port forward to multiple local servers? - nginx

I have purchased a server for my office to set up multiple web services like GitLab, Odoo, Elasticsearch, and so on,
and I want to access these web services externally.
So far what I've done is:
installed Ubuntu 16.04 and nginx on the server
set up port forwarding from 80 to the server's IP in my router
set up DNS for the domain local.example.com pointing to my public IP address, so that when I type local.example.com it reaches the nginx web server on the server
appended the following to the file /etc/nginx/sites-available/default:
server {
    server_name local.example.com;
    listen 80;

    location / {
        # virtual web server running in VirtualBox
        proxy_pass http://192.168.0.11:8081;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
However, after all this, when I type the domain name into the browser, it shows the default nginx page of the server itself instead of forwarding to the virtual host.

Remove the default server block, restart nginx, and try again. Make sure to test in a private window so nothing is cached.
The issue is that when you have a mistake in the virtual host name (or something else), nginx silently sends the request to the first server block defined, or to the one marked default_server. You always want to avoid that.
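One way to make that failure mode loud instead of silent (a sketch; the filename is illustrative) is an explicit catch-all default server that refuses unmatched Host headers:

```nginx
# e.g. /etc/nginx/conf.d/00-default.conf (hypothetical path)
server {
    listen 80 default_server;
    server_name _;
    # 444 is an nginx-specific code that closes the connection
    # without a response, so a server_name typo fails visibly
    return 444;
}
```

With this in place, only requests whose Host header matches a configured server_name reach your proxied sites.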

Related

nginx proxy pass ip from url

I have multiple web servers with IP addresses, say 172.18.1.1 to 172.18.1.20, each hosting a different website on 443 (HTTPS), and an nginx server which I need to use to proxy to the servers above.
Example:
My nginx server is at https://10.220.5.39:9200. By giving a web server's IP in the URL, I need to show that server's website,
i.e. https://10.220.5.39:9200/proxy/172.18.1.1 should show the website of https://172.18.1.1
AND
https://10.220.5.39:9200/proxy/172.18.1.2 should show the website of https://172.18.1.2
location ~ ^/proxy/(?<uniqueId>[^/]+) {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass https://$uniqueId/;
}
But it is not working. I cannot use a redirect, since clients will not have direct access to the web servers.
Also, some websites load CSS files from the root, i.e. href="/static/theme.css",
because of which the browser console shows:
not found https://10.220.5.39:9200/static/theme.css
Is this even possible with nginx?
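It is possible in principle, though fragile. A rough sketch (untested; the capture names `backend` and `rest` are illustrative): use a regex location with the `~` modifier, forward the remainder of the path, and rewrite root-relative links in the HTML with `sub_filter` so `/static/...` resolves back through the proxy prefix:

```nginx
location ~ ^/proxy/(?<backend>\d+\.\d+\.\d+\.\d+)(?<rest>/.+)$ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # forward everything after the IP to the backend
    proxy_pass https://$backend$rest$is_args$args;

    # disable compression so sub_filter can see the body,
    # then rewrite root-relative links to go through the proxy
    proxy_set_header Accept-Encoding "";
    sub_filter_once off;
    sub_filter 'href="/' 'href="/proxy/$backend/';
    sub_filter 'src="/'  'src="/proxy/$backend/';
}
```

Note that sub_filter only rewrites response bodies, so URLs assembled in JavaScript would still bypass the prefix; that is the inherent limitation of path-prefix proxying. A bare /proxy/172.18.1.1 (no trailing path) would also need a redirect that appends the slash.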

Nginx reverse proxy to backend gives an error ERR_CONNECTION_REFUSED

I have an application running on a server at port 6001 (frontend, built with React) and port 6002 (backend, built with Node.js) in an EC2 instance.
When I send a request from the Ubuntu terminal inside the instance with curl -X GET http://127.0.0.1:6002/api/works, it works fine and I get proper data.
Now I go to a browser with the domain (http://example.com). However, only the frontend gets called. When the browser sends a request to the backend server, it gives me the error GET http://127.0.0.1:6002/api/works net::ERR_CONNECTION_REFUSED (the domain goes through an ELB).
Here's my nginx config.
server {
    listen 80;
    listen [::]:80;

    server_name example.com;

    root /home/www/my-project/;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:6001/;
    }

    location /api/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:6002/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
My case is similar to this one, where the author says:
My entry point via the aws dns takes me to the React app 127.0.0.1:4100 that is the port 80 nginx is listening to. The react app is the one that makes the backend calls and it is coded to make the backend api calls on 127.0.0.1:1323/api. The client error you saw was because it is displaying what the react app is trying to call in the server, but nginx is not redirecting that call to the app that is running at that port. Does that make sense?
The selected answer didn't work for me.
Also, according to the comments, the problem was solved by having the React app send requests to http://AWS_IP/, but I'm not sure that's a good solution for me, since there would then be no point in using the ELB. If I understand the concept of the ELB correctly, the requests need to go through the ELB?
Please help, this is driving me crazy.
From your question, I understood the following things:
Your domain is pointing to an Amazon ELB.
There is a VM behind this ELB, with nginx and the two applications on it.
Nginx is listening on port 80, the backend application is listening on port 6002, and the frontend is listening on port 6001.
Your frontend application is calling the backend from your local browser using http://127.0.0.1:6002/api/works.
Here is the problem:
You can curl 127.0.0.1 from the same instance where the application is running because you are hitting the localhost of that instance. It is different when your web application runs in your local browser: a React application (like any JavaScript application) executes on your local machine, so the backend call hits your own localhost and returns CONNECTION REFUSED.
So the solution is to change the backend URL so that it looks something like http://yourdomain/api/works (or simply use the relative path /api/works).
In addition to this, I have a couple of suggestions on your configuration.
You don't need a separate web server for your frontend, since you can serve it from the same nginx.
Make sure that your ELB target port is 80, or whatever port nginx is listening on.
And close ports 6001 and 6002 (if they are publicly accessible).
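As a sketch of the first suggestion (the build path is illustrative), nginx can serve the React production build as static files and keep only the Node.js backend behind the proxy:

```nginx
server {
    listen 80;
    server_name example.com;

    # serve the built React app directly instead of proxying to :6001
    root /home/www/my-project/build;
    index index.html;

    location / {
        # fall back to index.html for client-side routes
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:6002/;
    }
}
```

With this layout the frontend just calls the relative path /api/works, and the same origin (the ELB's domain) handles both.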

Error: SSL Misconfiguration (Your connection to this site is not secure)

Exact error I am getting on browser:
This server could not prove that it is XXX.XX.XXX.XXX; its security certificate is from newDomain.live. This may be caused by a misconfiguration or an attacker intercepting your connection.
NGINX Config:
server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name _;

    # location of the self-signed SSL certificate
    ssl_certificate /home/ubuntu/certs/server.pem;
    ssl_certificate_key /home/ubuntu/certs/server.key;

    # write access and error logs to /var/log
    access_log /var/log/app_access.log;
    error_log /var/log/app_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
What I have done:
Ran openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr in the terminal.
Copied server.csr from the server to the SSL provider, as it asked for a CSR from the web host.
The SSL certificate issued by the provider has two parts: 1. the server certificate, 2. the CA certificates (intermediate and root).
At this point I checked, but the certificate was still unverified and an HTTPS connection couldn't be established.
Then I deleted the server.csr file from the server and created a new file by copying in the "1. Server Certificate" given by the SSL provider.
I am using an AWS EC2 instance and running nginx as a reverse proxy. How can I fix this SSL misconfiguration?
The certificate returned by the server does not match the name in the URL. Based on this description, you've created a certificate for newDomain.live, but you are trying to access the site using an IP address xxx.xxx.xxx.xxx, which is not the domain the certificate was issued for.
If the domain is not a valid domain (i.e. it has no DNS entry), you can add the domain to your local hosts file with the IP as the target, then put the domain name in your browser as the address. The browser will then resolve the name to the IP defined in your hosts file.
For more information: update hosts in Windows, update hosts in Linux.
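For example, a hosts-file entry would look like this (the IP is a placeholder):

```
203.0.113.10    newDomain.live
```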
Solution: access the website using the same domain name that you registered the certificate for.
See this thread for details of a similar error to the one you are experiencing, and this thread for details of self-signed certificate errors.
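Separately, note that nginx expects ssl_certificate to contain the server certificate followed by the intermediate CA certificates, concatenated into one file. A sketch, assuming the provider delivered files named server.crt and ca_bundle.crt (the names are illustrative):

```nginx
# build the chain first, e.g.:
#   cat server.crt ca_bundle.crt > /home/ubuntu/certs/fullchain.pem
server {
    listen 443 ssl;
    # use the name on the certificate rather than the catch-all "_"
    server_name newDomain.live;

    ssl_certificate     /home/ubuntu/certs/fullchain.pem;
    ssl_certificate_key /home/ubuntu/certs/server.key;
}
```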
An alternative approach:
This approach does not solve your NGINX problem.
Instead of using NGINX, why don't you front your EC2 instance with an Application Load Balancer.
Then use a certificate generated by AWS Certificate Manager (ACM). Not only are the certificates free, but:
they are signed by Amazon, so the certificate is trusted, and if you use DNS validation the certificates are automatically renewed when they expire.
You can find out how to do this here.
You can restrict traffic to originate from the load balancer using security groups, and you can front the load balancer with Amazon CloudFront.
ACM best practice information is available here.

nginx redirect subdomain to separate server ip

I have a dynamic IP which I manage using ddclient, and I use No-IP to maintain the hostnames that point to my IP.
I have www.somename.com, sub.somename.com, and app.somename.com. Obviously, these all point to my IP. The first two are a couple of WordPress sites on a server (server1) running nginx, with separate configs in sites-available for each site. The latter is a separate application server (server2) running GitLab.
My router cannot route based on subdomain, so all port 80 traffic goes to server1. I'm hoping there is a config I can apply in nginx that will send all traffic for app.somename.com to a local IP address on my network (192.168.0.nnn), but keep the address shown in the browser as app.somename.com.
Right now, I have :-
/etc/nginx/sites-available$ ls
somename.com domain sub.somename.com app.somename.com
The relevant ones are linked in sites-enabled. For the app server, I have :-
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
    }
}
The problem is that in the browser address bar, this results in :-
http://192.168.0.16/some/pages
Where I want :-
http://app.somename.com/some/pages
How do I resolve this?
You could try it like this:
server {
    server_name app.somename.com;

    location / {
        proxy_pass http://192.168.0.16:80;
        proxy_set_header Host app.somename.com;
    }
}
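If the address bar still switches to the LAN IP after that, the application is probably issuing redirects to its own address. A variant worth trying (a sketch, not tested against GitLab specifically) also rewrites IP-based Location headers on the way back:

```nginx
server {
    server_name app.somename.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://192.168.0.16:80;

        # rewrite redirects that point at the LAN IP back to the public name
        proxy_redirect http://192.168.0.16/ http://app.somename.com/;
    }
}
```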

Linode NodeBalancer Vs Nginx

I have a NodeBalancer created to route my requests to a Tomcat server via HTTP. I see that the NodeBalancer is doing well, but now I have to install an nginx server to serve static content and act as a reverse proxy redirecting my HTTP traffic to HTTPS.
I have the scenario below --
User-----via http---->NodeBalncer(http:80) ---->Nginx--->Redirect to HTTPS---->NodeBalancer(https:443)------> Tomcat on HTTP:8080
Below is a sample flow:
1) The user sends a request using HTTP:80
2) The NodeBalancer receives the request on HTTP:80 and forwards it to nginx
3) Nginx redirects the request to HTTPS
4) The NodeBalancer now receives the request on HTTPS:443 and forwards it to the serving Tomcat on HTTP:8080 after terminating SSL on the NodeBalancer.
Now, if I need to serve all static content (images/|img/|javascript/|js/|css/|stylesheets/), then before forwarding the HTTPS requests via the NodeBalancer to the serving Tomcat, I need to route them through nginx to serve the static content.
I can do that by pointing the NodeBalancer to nginx, but then what about Tomcat clustering? The NodeBalancer would always forward all HTTPS requests to nginx, and I would have to maintain session stickiness in nginx, which is pretty much load balancing via nginx. I see everything can be done by the nginx server itself; instead of terminating all user requests at the NodeBalancer, I could use nginx directly.
I did try some scenarios by installing nginx, redirecting HTTP to HTTPS, and independently serving static content, but I am stuck on whether the provided NodeBalancer serves my purpose. I am planning to drop the Linode NodeBalancer and use nginx as the load balancer as well as to serve static content.
I'm looking for some expert advice/comments on this; or tell me if my approach is wrong.
Serving the static content and the redirect to HTTPS are two different issues. Your general approach sounds fine. I personally would do everything using nginx and lose the NodeBalancer, but that's for a personal website. If this is for a business, then you need to consider monitoring etc., and the NodeBalancer might provide some features you want to keep.
Send all traffic from the NodeBalancer to nginx, and use nginx both as the load balancer and to terminate all SSL traffic. Here's a simple example that terminates SSL and serves images. In this case we're routing all traffic to the tomcat upstream on port 80, load balanced using IP hash so you get sticky sessions. This is where you would add your own load balancing.
upstream tomcat {
    ip_hash;
    server 192.168.1.1:80;
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}
server {
    listen 443 ssl;
    server_name www.example.org;
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        # requires a proxy_cache_path zone named example_cache to be defined
        proxy_cache example_cache;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host www.example.org;
        proxy_pass_request_headers on;
        proxy_pass http://tomcat;
    }

    location /images/ {
        # with "root", the location prefix is appended to the path,
        # so /var/www serves files from /var/www/images/
        root /var/www;
        autoindex off;
    }
}
To achieve sticky sessions you have several options that you should read up on. IP hash is probably the simplest to set up.
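Since step 3 of the flow above is the HTTP-to-HTTPS redirect, the config also needs a plain-HTTP companion server block; a minimal sketch:

```nginx
server {
    listen 80;
    server_name www.example.org;
    # permanently redirect all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}
```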
