nginx server block not playing nicely - 'server not found'

The basic installation is working on Linux Mint; loading 'localhost' in a browser confirms that nginx is running.
However, the issue I am running into stems from my own server block. It's very basic:
server {
    listen 80;
    listen [::]:80;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from alias.
    server_name tokum.com www.tokum.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
As you can see, I have created an alias for www.tokum.com in this server block. Attempting to resolve this URL in a browser, I am greeted with the lovely 'server not found' message.
My feeling is that it has something to do with the 'try_files' functionality, but I cannot be sure why.
No other resources have been created on the server apart from my tokum.com server block file, which is located at /etc/nginx/sites-available/tokum.com. Any help is most appreciated.
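For reference, here is a sketch of how a block under sites-available is typically enabled on Debian-style layouts, plus a hosts entry for local name resolution; a browser's 'server not found' message is a DNS failure, meaning nginx is never reached. The commands assume the path given above and are illustrative, not a confirmed fix:

sudo ln -s /etc/nginx/sites-available/tokum.com /etc/nginx/sites-enabled/tokum.com
sudo nginx -t        # check the configuration for syntax errors
sudo nginx -s reload # reload the running server

# For local testing, the name can be pointed at this machine in /etc/hosts:
# 127.0.0.1  tokum.com www.tokum.com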

Related

How do I configure nginx correctly to work with my Sinatra app running on thin?

I have a Sinatra app (app.rb) that resides within /var/www/example. My setup is nginx, thin, and Sinatra.
I have both nginx and thin up and running, but when I navigate to my site I get a 404 from nginx. I assume the server block config is wrong. I've tried pointing root to /var/www/example/ instead of public, but that makes no difference. I don't think the request makes it as far as the Sinatra app.
What am I doing wrong?
Server block:
server {
    listen 80;
    listen [::]:80;

    root /var/www/example/public;
    index index.html index.htm index.nginx-debian.html;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
config.ru within the /var/www/example directory:
require File.expand_path('../app.rb', __FILE__)
run Sinatra::Application
config.yml within the /var/www/example directory:
---
environment: production
chdir: /var/www/example
address: 127.0.0.1
user: root
group: root
port: 4567
pid: /var/www/example/pids/thin.pid
rackup: /var/www/example/config.ru
log: /var/www/example/logs/thin.log
max_conns: 1024
timeout: 30
max_persistent_conns: 512
daemonize: true
You have to tell nginx to proxy requests to your Sinatra application. The minimum required to accomplish that is to specify a proxy_pass directive in the location block like this:
location / {
    proxy_pass http://localhost:4567;
}
The Nginx Reverse Proxy docs have more information on other proxy settings you might want to include.
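Those extra settings commonly include forwarding headers. Here is a sketch of what that often looks like; the directives below are standard nginx proxy directives, but which ones you actually need depends on the app:

location / {
    proxy_pass http://localhost:4567;

    # Forward the original host and client address to the app,
    # so Sinatra sees the real request details rather than localhost.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}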

Nginx reverse proxy returns 502

I'm very new to nginx and the server game, and I'm trying to set up a reverse proxy. Basically, what I need is for my server IP to open a particular website (e.g. https://example.com) when entered.
So, for example, if I enter my IP (e.g. 45.10.127.942) it should open the website example.com, but the URL should remain http://45.10.127.942.
I tried to set my server configuration as follows, but it returns a 502 error.
server {
    listen 80;

    location / {
        proxy_pass http://example.com;
    }
}
Can you please explain what I need to do?
You can have something like this in your configuration file:
server {
    root /var/www/html;

    server_name _;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
Place the index.html file in the specified root folder.
Then just restart NGINX and it should work.
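For example (assuming a systemd-based distro; adjust the service command to your init system):

sudo nginx -t                  # validate the configuration first
sudo systemctl restart nginx   # then restart the server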
The problem with your configuration file is that you should not use proxy_pass here.
If you want to open the other website, you should have a DNS record pointing to that IP. What you are actually trying to do is known as clickjacking; search for clickjacking on Google and you will find a lot of references.

How can I hide a file from the browser, yet still use it on the webserver with NGINX?

Here's my scenario:
I have a vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from vagrant over to their corresponding .box files on the server.
My goal is to hide the .json file from the browser so that a surfer cannot hit it directly at, say, http://example.com/catalog.json and see the json output, as that output lists the url of the box file itself. However, I still need vagrant to be able to download and use the file so it can grab the box.
The NGINX docs mention the "internal" directive, which seems to offer what I want via try_files, but I think I'm either misinterpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;

    server_name catalog.example.com;
    server_name www.catalog.example.com;

    root /var/www/catalog;

    # Use try_files to trigger internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve json files to scripts only with content type header application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name boxes.example.com;
    server_name www.boxes.example.com;

    root /var/www/boxes;

    # Use try_files to trigger internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve box files to scripts only with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path for those sub-domains and, by passing it through try_files, should make it available when called via vagrant, yet hide it from the browser if I hit the catalog or a box url directly.
I can confirm that the files are not accessible from the browser; however, they're inaccessible to vagrant as well.
Am I misunderstanding internal here? Is there a way to achieve my goal?
Make sure that, for the sensitive calls, the server listens on localhost only:
Create a tunnel between the machine running vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example); a sketch of this follows the list below.
Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config), using the details from:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Reference the tunneled server using http://<host>:<port>/path in both your catalog.json url and your box file url.
Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't set a server_name. You can even add default_server to it so that anything that doesn't match another virtual host hits this block.
Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
Set regex locations for your .json and .box files, and use a try_files block to accept the $uri or return 444 (so you know the request hit your block).
Deny /boxes and /catalog otherwise.
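A sketch of the tunnel side mentioned above; tunneluser, your-iaas-host, and local port 8080 are placeholder values, not taken from the answer:

# Forward local port 8080 to the web server port on the IAAS machine,
# logged in as the restricted, forwarding-only user:
ssh -N -L 8080:127.0.0.1:80 tunneluser@your-iaas-host

# vagrant (or curl, for a quick test) can then fetch the catalog through the tunnel:
curl http://localhost:8080/catalog.json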
See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com;
    server_name www.example.com;

    root /var/www;

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name store.example.com; # I will use an eCommerce platform eventually

    root /var/www/store;
}

server {
    listen 127.0.0.1:80;
    listen [::1]:80;

    root /var/www;

    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }

    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
I think all you need here is to change the access level of the file. There are three access levels (read, write, and execute), and you can remove the execute access level from your file. On the server console, run the command:
chmod 766 your_file_name
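Note that 766 still leaves the execute bit set for the owner; if the goal is strictly to remove execute for everyone, the symbolic form does exactly that (your_file_name is the same placeholder as above):

# remove the execute bit for user, group, and others
chmod a-x your_file_name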

simple nginx server not working

I am new to the nginx environment and am trying to host my first app using nginx.
But I am not able to get past even the first steps with nginx.
I have seen and read thousands of tutorials on basic nginx setup and have set up a basic nginx server block as anyone would have.
Here is my sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm;

    error_log /var/log/nginx/error.log debug;
    error_page 400 401 402 403 404 /40x.html;

    server_name mydomain.com;

    location / {
        root /var/www/html;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
I have done several deployments with apache, but with nginx I am experiencing peculiar behaviour.
This is how it goes:
It serves the default nginx welcome page successfully from /var/www/html on mydomain.com.
Now, if I create a new html file, say test.html, inside /var/www/html and try to open mydomain.com/test.html, it shows an internal server error, with nothing in the error log or access log.
Now, in my server block, if I add test.html to the index directive as the first option, the same /var/www/html/test.html file is served without any error on mydomain.com (so it is clear that there are no file permission problems).
Also, if index.html stays as the only default index page and I add a hyperlink to test.html (say, "Test Page") on it, clicking that link on the home page served at mydomain.com does serve test.html, but the URL in my browser does not change.
I have been banging my head on this for the last two days and have tried several things:
Increased the verbosity of the error logs to debug; still nothing shows up in the logs.
Tried a hundred other logically identical but syntactically different server configurations.
I am pretty experienced with server configurations, have done a number of deployments with apache, and have never experienced anything like this on apache.
Maybe I am skipping some of the basic concepts of nginx; I do not know much about it but assumed it would be similar to apache.
Please help me with this issue.
Thanks in advance.

Nginx configuration not updating for browser

I am trying to serve a website with nginx. I have noticed that when I make changes to /etc/nginx/sites-available/game and run sudo service nginx restart, they are not reflected when I pull the site up in the browser.
The browser just hangs waiting for a response and then times out.
However, it works perfectly fine if I make a curl request to my site on the command line; I get the normal basic nginx html file. Why is that? Here is the config (and yes, I have made a soft link from sites-enabled/game to sites-available/game):
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name my.site.uw.edu;

    location / {
        try_files $uri $uri/ =404;
    }
}
Also, I am using Ubuntu 14.04. I don't think this version of Linux uses SELinux, but could this be some sort of security-configuration issue? I have had trouble with SELinux in the past when deploying on CentOS machines.
You can disable adding or modifying the "Expires" and "Cache-Control" response headers using the expires directive:
expires off;
nginx docs
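As a sketch of where that directive sits (the server and location blocks below are illustrative, not taken from the question):

server {
    listen 80 default_server;
    root /usr/share/nginx/html;

    location / {
        # Do not add or modify the Expires and Cache-Control response headers.
        expires off;
        try_files $uri $uri/ =404;
    }
}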
