HTTP user agent block nginx restart fail - nginx

I'm trying to add the following to my nginx config.
nginx version: nginx/1.4.6 (Ubuntu)
server {
    server_name www.example.com example.com;

    access_log /var/www/logs/example_access.log;
    error_log /var/www/logs/example_error.log;

    root /var/www/html;

    # case insensitive matching
    if ($http_user_agent ~* (netcrawl|npbot|malicious|wget)) {
        return 403;
    }

    location / {
        index index.html index.htm index.php;
    }
}
service nginx reload && service nginx restart
I then ran the following from another server:
wget "http://mymainserver.com/myfile.html"
It was still able to fetch the file with a 200 OK.
Any idea what I'm doing wrong?
Thanks!

Missing "}" in your config file
nginx: [emerg] unexpected end of file, expecting "}"
As a result,
nginx reload fails and service nginx restart is not even not called.
Or: the server_name in your config file does not match the hostname used by wget, so nginx serves the request from a different server block and your if is never evaluated.
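Either way, both possibilities are easy to check (a sketch; the hostnames are the ones from the question):

# Validate the config first; a missing "}" shows up immediately
nginx -t

# Then test against the exact hostname the server block expects
wget --header="Host: www.example.com" "http://mymainserver.com/myfile.html"

If nginx -t passes and the Host-matched request still returns 200, the user-agent if block itself is the next thing to inspect.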

Related

Nginx proxy_pass doesn't accept user & password

I need to proxy_pass one URL from my internal network that looks like this:
http://<user>:<passwd>@<ipaddress>:<port>/cgi-bin/mjpg/video.cgi?channel=1&subtype=1
But this configuration always ends in an error:
nginx: [emerg] invalid port in upstream "user:passwd@192.168.133.122:8080/cgi-bin/mjpg/video.cgi?channel=1&subtype=1" in /etc/nginx/sites-enabled/default:15
nginx: configuration file /etc/nginx/nginx.conf test failed
The only way I got nginx to work is this one:
server {
    listen 80;

    root /var/www/html;
    index index.php index.html index.htm;

    location /video.cgi {
        proxy_pass http://192.168.133.122:8080/cgi-bin/mjpg/video.cgi?channel=1&subtype=1;
    }
}
But in this configuration the user and the password are not included. Is there a way to get the user & password into the proxy_pass as well?
thanks
Franz
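nginx does not accept credentials embedded in a proxy_pass URL, which is why the user:passwd@host form fails to parse as a port. A common workaround (a sketch, not from the original thread; the base64 value is a placeholder for your own user:passwd pair) is to send the credentials as an HTTP Basic auth header:

location /video.cgi {
    proxy_pass http://192.168.133.122:8080/cgi-bin/mjpg/video.cgi?channel=1&subtype=1;
    # "dXNlcjpwYXNzd2Q=" is base64("user:passwd"); generate your own with:
    #   echo -n 'user:passwd' | base64
    proxy_set_header Authorization "Basic dXNlcjpwYXNzd2Q=";
}

This only helps if the upstream device accepts Basic auth rather than insisting on credentials in the URL itself.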

How do I configure nginx correctly to work with my Sinatra app running on thin?

I have a Sinatra app (app.rb) that resides within /var/www/example. My stack is nginx, thin, and Sinatra.
I have both nginx and thin up and running, but when I navigate to my site I get a 404 from nginx. I assume that the server block config is wrong. I've tried pointing root to /var/www/example/ instead of public, but that makes no difference. I don't think the request makes it as far as the Sinatra app.
What am I doing wrong?
Server block:
server {
    listen 80;
    listen [::]:80;

    root /var/www/example/public;
    index index.html index.htm index.nginx-debian.html;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
config.ru within the /var/www/example directory:
require File.expand_path('../app.rb', __FILE__)
run Sinatra::Application
config.yml within the /var/www/example directory:
---
environment: production
chdir: /var/www/example
address: 127.0.0.1
user: root
group: root
port: 4567
pid: /var/www/example/pids/thin.pid
rackup: /var/www/example/config.ru
log: /var/www/example/logs/thin.log
max_conns: 1024
timeout: 30
max_persistent_conns: 512
daemonize: true
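For reference, a config like this is normally passed to thin at startup (a sketch, assuming the standard thin CLI):

# Start thin with the config above; it will listen on 127.0.0.1:4567
thin -C /var/www/example/config.yml start

# Sanity-check that the app answers where nginx will proxy to
curl http://127.0.0.1:4567/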
You have to tell nginx to proxy requests to your Sinatra application. The minimum required to accomplish that is to specify a proxy_pass directive in the location block like this:
location / {
    proxy_pass http://localhost:4567;
}
The Nginx Reverse Proxy docs have more information on other proxy settings you might want to include.
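In practice you will usually also want to forward the original request details to the app; a sketch of a fuller location block (this header set is conventional, not taken from the original answer):

location / {
    proxy_pass http://localhost:4567;
    # Pass the original host and client address through to the Sinatra app
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}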

Nginx ignores root and alias directives

I'm trying to serve a React app build with nginx from /opt/hdr/static/hdr. The React homepage and Router settings point to /static/hdr, as the React documentation describes.
The .conf file I'm using is the following:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;
    index index.html;

    location /static/hdr/ {
        alias /opt/hdr/static/hdr/;
        index index.html;
        expires 1d;
        add_header Cache-Control public;
    }
}
But I'm getting a 404 error when I access mywebpage.com/static/hdr. I've tried several combinations:
alias /opt/hdr/static/hdr
root /opt/hdr
root /opt/hdr/static/hdr
Nothing works. The thing is, looking at the nginx error logs I realized it is searching in /etc/nginx/html/static/hdr instead of /opt/hdr/static/hdr. If I put the site there, everything works perfectly.
Running nginx -V shows that --prefix is set to /etc/nginx. That could be why nginx searches under /etc/nginx, but I have no idea where the /html part comes from, nor why root and alias are not overriding it.
Any idea is welcome. Thanks in advance.
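One likely explanation (an inference from the symptoms, not from the original thread): the /html comes from nginx's compiled-in default for the root directive, which is html relative to --prefix; it is used whenever a request is handled with no root set. A request for /static/hdr without the trailing slash does not match the location /static/hdr/, so it falls through to that default. A sketch that matches both forms:

location /static/hdr {
    # No trailing slash on the prefix, so /static/hdr itself matches too;
    # nginx maps /static/hdr<rest> onto /opt/hdr/static/hdr<rest>
    alias /opt/hdr/static/hdr;
    index index.html;
    expires 1d;
    add_header Cache-Control public;
}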

How can I hide a file from the browser, yet still use it on the webserver with NGINX?

Here's my scenario:
I have a vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from vagrant over to their corresponding .box files on the server.
My goal is to hide the .json file from the browser so that a surfer cannot hit it directly at, say: http://example.com/catalog.json and see the json output as that output lists the url of the box file itself. However, I still need vagrant to be able to download and use the file so it can grab the box.
In the NGINX docs, it mentions the "internal" directive which seems to offer what I want to do via try_files, but I think I'm either mis-interpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;

    server_name catalog.example.com;
    server_name www.catalog.example.com;

    root /var/www/catalog;

    # Use try_files to trigger the internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve json files to scripts only, with content type application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name boxes.example.com;
    server_name www.boxes.example.com;

    root /var/www/boxes;

    # Use try_files to trigger the internal directive to serve box files
    location / {
        try_files $uri =404;
    }

    # Serve box files to scripts only, with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path on those sub-domains and, by passing it through try_files, should make it available when requested via vagrant, yet hide it from the browser if I hit a catalog or box URL directly.
I can confirm that the files are not accessible from the browser; however, they're inaccessible to vagrant as well.
Am I mis-understanding internal here? Is there a way to achieve my goal?
Make sure that for the sensitive calls the server listens on localhost only.
Create a tunnel between the machine running vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example); a sketch of this step follows the list.
Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config); see
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Reference the tunneled server using http://<host>:<port>/path in both your catalog.json URL and your box file URLs.
Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't use server_name. You can even add default_server to it so that anything that doesn't match another virtual host will hit this block.
Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
Set regex locations for your .json and .box files and use a try_files block to accept the $uri or fall back to =444 (so you know it hit your block).
Deny /boxes and /catalog otherwise.
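A sketch of the tunnel step (the host, port, and restricted user are placeholders, not from the original answer):

# On the machine running vagrant: forward local port 8080 to the web server
# port on the IAAS machine, as the restricted SSH user from the guide above
ssh -N -L 8080:127.0.0.1:80 tunneluser@iaas.example.com

# catalog.json and the box URLs then reference http://127.0.0.1:8080/...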
See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com;
    server_name www.example.com;

    root /var/www;

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name store.example.com; # I will use an eCommerce platform eventually

    root /var/www/store;
}

server {
    listen 127.0.0.1:80;
    listen [::1]:80;

    root /var/www;

    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }

    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
I think all you need here is to change the file's permissions. There are three permission bits (read, write, and execute); you can remove the execute permission from your file. On the server console run the command:
chmod 644 your_file_name

Nginx configuration not updating for browser

I am trying to serve a website with nginx. I have noticed that when I make changes to /etc/nginx/sites-available/game and run sudo service nginx restart, the change is not reflected when I try to pull the site up in the browser.
The browser just hangs waiting for a response and then times out.
However, it works perfectly fine if I make a curl request to the site from the command line: I get the normal basic nginx html file. Why is that? Here is the server block (and yes, I have made a soft link from sites-enabled/game to sites-available/game):
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name my.site.uw.edu;

    location / {
        try_files $uri $uri/ =404;
    }
}
Also, I am using Ubuntu 14.04. I don't think this version of Linux uses SELinux, but could this be some sort of security-configuration issue? I have had trouble in the past with SELinux when deploying on CentOS machines.
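Given that curl from the server works while remote browsers hang until timeout, a firewall dropping inbound port 80 is a plausible culprit; some checks worth running (an assumption, not from the original question):

# Is nginx listening on all interfaces, or only on localhost?
sudo netstat -tlnp | grep nginx

# Is a firewall dropping inbound port 80? (Ubuntu 14.04: ufw / iptables)
sudo ufw status
sudo iptables -L -n | grep 'dpt:80'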
You can disable adding or modifying the "Expires" and "Cache-Control" response headers using the expires parameter:
expires off;
See the nginx docs for details.
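For instance, applied to the location block from the question (a sketch):

location / {
    try_files $uri $uri/ =404;
    expires off;  # do not add or modify Expires/Cache-Control headers
}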
