Nginx reverse proxy returns 502 - nginx

I'm very new to nginx and the server game, and I'm trying to set up a reverse proxy. Basically, what I need is that when I enter my server IP it should open a particular website (e.g. https://example.com).
So, for example, if I enter my IP (e.g. 45.10.127.942) it should open the website example.com, but the URL should remain http://45.10.127.942.
I tried to set my server configuration as follows, but it returns a 502 error:
server {
    listen 80;

    location / {
        proxy_pass http://example.com;
    }
}
Can you please explain what I need to do?

You can have something like this in your configuration file:
server {
    root /var/www/html;
    server_name _;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
Place the index.html file in the root folder specified.
Then just restart NGINX and it should work.
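For example, on a systemd-based distribution the usual sequence is something like the following (assuming nginx was installed as a system service):
# check the configuration, then reload the running service
sudo nginx -t
sudo systemctl reload nginx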
The problem with your configuration file is that you should not use proxy_pass here.
If you want to open the other website, you should have a DNS record pointing to that IP. What you are actually trying to do is known as CLICKJACKING. For more details, search for CLICKJACKING on Google and you will find plenty of references.

Related

Nginx reverse proxy - Internal servers separated by trailing slash

I'm a newbie at Nginx, and I have been searching a lot for the right answer to my question but couldn't find it; not because it isn't there, but because my newbie condition limits me in adapting a generic solution to my issue.
The situation is this:
I have a Mantis Bug Tracker in my private LAN (http://10.111.111.12).
On the other hand, I have an OwnCloud website, also on my LAN (IP 10.111.111.5), with the URL http://10.111.111.5/owncloud/.
What I want to do is deploy an Nginx reverse proxy that handles all requests from the Internet at publicdomain.com and uses a trailing-slash path for each internal web server. The desired result would be:
http://www.publicdomain.com/bugtracker -> redirects to http://10.111.111.12/index.php
http://www.publicdomain.com/cloud -> redirects to http://10.111.111.5/owncloud/ (note that "cloud" is preferred over "owncloud")
In the future I will need to keep using this trailing-slash scheme for other web servers yet to be deployed.
My questions are:
Is this scenario possible? If so, is configuring nginx enough, or do I have to reconfigure the internal web servers as well?
I really appreciate your help, whether by suggesting a possible solution or pointing me in the right direction to previous posts.
Thanks a lot in advance.
Yes, it is possible to achieve such a configuration, and it is commonly used when NGINX is acting as a reverse proxy. You can use this configuration as inspiration for building your own:
upstream bugtracker {
    server 10.111.111.12;
}
upstream cloudupstream {
    server 10.111.111.5;
}
server {
    listen 80;

    location /bugtracker/ {
        proxy_pass http://bugtracker;
    }
    location /cloud/ {
        proxy_pass http://cloudupstream/owncloud/;
    }
}
What's happening here is that nginx listens on port 80 and, as soon as a request comes in for the path /bugtracker/, routes it to the upstream server defined above. Using this approach you can add as many upstream servers and location blocks as you want.
Reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
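A note on the paths, as a hedged sketch of how proxy_pass treats them: when proxy_pass includes a URI part (anything after the host), nginx replaces the matched location prefix with that URI; without a URI part, the original request path is passed upstream unchanged.
location /cloud/ {
    # /cloud/index.php  ->  http://cloudupstream/owncloud/index.php
    proxy_pass http://cloudupstream/owncloud/;
}
location /bugtracker/ {
    # no URI part: /bugtracker/index.php is forwarded as /bugtracker/index.php
    proxy_pass http://bugtracker;
}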
Thanks a lot, Namam, for your quick answer. However, it isn't working yet. It seems that the server directive inside upstream does not allow a slash, like this:
server 10.111.111.5/owncloud;
If I use it, I obtain:
nginx: [emerg] invalid host in upstream "10.111.111.5/owncloud" in /etc/nginx/nginx.conf:43
I started with the first upstream, bugtracker, only, and nginx.conf like this:
upstream bugtracker {
    server 10.111.111.12;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /mstic {
        proxy_pass http://bugtracker;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
After that, when accessing my Nginx reverse proxy at http://10.111.111.10/mstic/ I obtain the following:
Not Found. The requested URL /mstic/ was not found on this server.
and no further details in the error or access logs.
Thanks a lot in advance for any extra help you could bring me.
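A hedged note on that 404: with proxy_pass http://bugtracker; (no URI part) the /mstic/ prefix is forwarded unchanged, so it is the host at 10.111.111.12 that answers with its own Not Found page for /mstic/. A minimal sketch, assuming Mantis is served from the root of 10.111.111.12:
location /mstic/ {
    # the URI part ("/") makes nginx strip the matched /mstic/ prefix
    proxy_pass http://bugtracker/;
}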

Configuring nginx to direct to website in server

I have a remote CentOS server with an app running locally on 127.0.0.1:8000, which I want to proxy with nginx.
I have tested that the app runs on that address and that the nginx configuration passes its tests.
When I try to access the IP in my browser I get:
I understand I need to edit the configuration files for nginx. I just don't know how.
I went to /etc/nginx/conf.d/index.conf (I named my app index because, reasons)
and wrote the following:
server {
    listen 80;
    server_name www.<address>.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        root /home/<user>/WebApp/templates;
        index index.html;
    }
}
And index.html is in the path I put in root. What am I doing wrong?
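A hedged observation on that block: once proxy_pass is present in location /, every request there is handed to the app on 127.0.0.1:8000, so the root and index directives in the same location are effectively ignored for normal requests. A minimal sketch of the usual split, assuming the app should handle all pages and a hypothetical /static/ prefix is used for files nginx should serve itself:
server {
    listen 80;
    server_name www.<address>.com;

    # hand everything to the app
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }

    # serve files directly for a dedicated prefix only
    # (maps /static/foo.css to /home/<user>/WebApp/templates/static/foo.css)
    location /static/ {
        root /home/<user>/WebApp/templates;
    }
}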

How can I hide a file from the browser, yet still use it on the webserver with NGINX?

Here's my scenario:
I have a Vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from Vagrant to the corresponding .box files on the server.
My goal is to hide the .json file from the browser so that a surfer cannot hit it directly at, say, http://example.com/catalog.json and read the JSON output, since that output lists the URL of the box file itself. However, I still need Vagrant to be able to download and use the file so it can grab the box.
The NGINX docs mention the "internal" directive, which seems to offer what I want via try_files, but I think I'm either misinterpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;
    server_name catalog.example.com;
    server_name www.catalog.example.com;
    root /var/www/catalog;

    # Use try_files to trigger the internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve json files to scripts only with content type header application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name boxes.example.com;
    server_name www.boxes.example.com;
    root /var/www/boxes;

    # Use try_files to trigger the internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve box files to scripts only with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path on those sub-domains and then, by passing it through try_files, should make it available when called via Vagrant, yet hide it from the browser if I hit the catalog or a box URL directly.
I can confirm that the files are not accessible from the browser; however, they're also inaccessible to Vagrant.
Am I misunderstanding internal here? Is there a way to achieve my goal?
Make sure that for the sensitive calls the server listens on localhost only.
Create a tunnel between the machine running Vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example); see the sketch after this list.
Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config).
Use the details from:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Reference the tunneled server using http://<host>:<port>/path in both your catalog.json URL and your box file URL.
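A minimal sketch of such a tunnel, assuming a restricted user named tunneluser and local port 8080 (both are placeholders for this example):
# forward local port 8080 to port 80 on the IAAS machine, without opening a remote shell
ssh -N -L 8080:localhost:80 tunneluser@iaas.example.com
Vagrant on the local machine would then fetch the catalog from http://localhost:8080/catalog.json.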
Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't use server_name. You can even add default_server to it so that anything that doesn't match another virtual host will hit this block.
Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
Set regex locations for your .json and .box files and use a try_files block to accept the $uri or redirect to 444 (so you know it hit your block)
Deny the /boxes and /catalog otherwise.
See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;
    server_name www.example.com;
    root /var/www;

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name store.example.com; # I will use an eCommerce platform eventually
    root /var/www/store;
}
server {
    listen 127.0.0.1:80;
    listen [::1]:80;
    root /var/www;

    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }
    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }
    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
I think all you need here is to change the file's access level. There are three access levels (read, write and execute), and you can remove the execute access level from your file. On the server console run the command:
chmod 766 your_file_name
See the chmod documentation for more information.

Nginx reverse proxy configuration multi domains virtualhost

I'm having trouble configuring my nginx proxy despite reading a number of guides and trying for three consecutive evenings.
Here is my topology:
(From the internet) All traffic on port 80 is redirected to 192.168.1.4, an Ubuntu Server VM running nginx.
I have a NAS with a subdomain myName.surname.com which connects to its admin page. On that NAS, the Apache web server hosts a couple of sites on ports 81 and 82.
The NAS uses virtual hosts, so the domains redirect successfully (without using nginx).
I also have an ASP.NET website running on IIS on another machine at 192.168.1.3:9810.
Now here is my NGINX configuration. I tried configuring it a few times but broke it, so I've put it back to its default state:
server {
    listen 80 default_server;
    root /usr/share/nginx/html;
    index index.html index.htm;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.1; # WORKS OK
    }
}
If I go to myName.surname.com or wordpressWebsite.co.uk or myIISSiteDomain.co.uk, then with the config above I am greeted with the correct page at 192.168.1.1:8080 or 192.168.1.1:81.
It's a start.
The first problem is that when I navigate to any page other than the home page, like wordpressWebsite.co.uk/blog, it breaks with a 404. So I have tried to differentiate between URLs. I read that the config should be something like:
server {
    listen 80;
    server_name wordpressWebsite.co.uk;

    location / {
        proxy_pass http://192.168.1.1:81;
    }
}
server {
    listen 80;
    server_name myName.surname.com;

    location / {
        proxy_pass http://192.168.1.1;
    }
}
server {
    listen 80;
    server_name myIISSiteDomain.co.uk;

    location / {
        proxy_pass http://192.168.1.3:9810;
    }
}
But this is not quite right.
1) wordpressWebsite.co.uk loads the home page, but as soon as I go to any other link like wordpressWebsite.co.uk/blog it breaks, giving me my NAS error message as if it is trying to access 192.168.1.1/blog rather than the virtual host's ~/blog. It actually changes the URL in my address bar to 192.168.1.1, so why is it behaving like this?
2) If I'm using virtual hosts, I don't think I should need to pass the port via nginx for 192.168.1.1:81 (wordpressWebsite.co.uk). Surely I just need to point it to 192.168.1.1 and the virtual host should detect that the URL maps to port 81? I'm not sure how to do this, as I don't fully understand what actually gets passed from nginx to the server.
You can add try_files $uri $uri/ /index.php?$args;
See this https://www.geekytuts.net/linux/ultimate-nginx-configuration-for-wordpress/
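On question 2): by default nginx sends the upstream a Host header equal to the host from proxy_pass, so name-based virtual hosts on the NAS never see the public domain, which is also one common reason the backend's redirects end up pointing the browser at 192.168.1.1. A minimal sketch, assuming the NAS selects its sites by hostname:
server {
    listen 80;
    server_name wordpressWebsite.co.uk;

    location / {
        proxy_pass http://192.168.1.1:81;
        # forward the hostname the visitor used, so the NAS virtual host
        # (and WordPress) generate links for wordpressWebsite.co.uk
        # instead of 192.168.1.1
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}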

nginx server block not playing nicely - 'server not found'

The basic installation is working on Linux Mint. Resolving the domain on 'localhost' confirms that nginx is running.
However, the issue I am running into stems from my own server block. It's very basic:
server {
    listen 80;
    listen [::]:80;
    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from alias.
    server_name tokum.com www.tokum.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
As you can see, I have created an alias for www.tokum.com in this server block. Attempting to resolve this URL in a browser, I am greeted with the lovely 'server not found' message.
My feeling is that it is related to the 'try_files' functionality, but I cannot be sure why.
No other resources have been created on the server other than my tokum.com server block file, which is located at /etc/nginx/sites-available/tokum.com. Any help is most appreciated.
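A hedged note on that symptom: 'server not found' is the browser failing to resolve the name in DNS, before any request reaches nginx, so try_files is not involved yet. Two things worth checking, sketched below (the IP is a placeholder for wherever nginx is running):
# 1) on Debian/Ubuntu-style layouts the block must also be enabled
sudo ln -s /etc/nginx/sites-available/tokum.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# 2) for local testing, point the name at the server in /etc/hosts
#    (or create real DNS records for tokum.com)
192.168.0.10   tokum.com www.tokum.com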
