fastcgi-mono-server4 wildcard hostname - nginx

I have configured nginx with fastcgi-mono-server4.
In my nginx config I have 2 hostnames:
server {
    listen 80;
    server_name dev.example.org;
    location / {
        root /var/www/dev.example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9001;
        include /etc/nginx/fastcgi_params;
    }
}
server {
    listen 80;
    server_name *.example.org;
    location / {
        root /var/www/example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
nginx is OK with this configuration: dev.example.org goes to one backend and all other sub-domains go to the other.
I've already tried this :
fastcgi-mono-server4 /applications=*.example.org:/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
but it throws an error (Uri parse exception)
Update:
I need to get the full host name in my application; for example, if the request was for abc.example.org, I need to get "abc".
Unfortunately, HttpContext.Current.Request.Url does not contain "abc" but "*", which causes the parse error.

If nginx is going to take care of routing the appropriate sub-domains to each fastcgi port (9000 or 9001), then you may be able to get away with a wildcard domain when you start the mono server process, e.g. just use a * instead of '*.example.org':
fastcgi-mono-server4 /applications=*:/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
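As a sketch, both backends from the original config could then be started the same way (paths and ports taken from the nginx config above; assumes the same /applications syntax works for the dev host):
fastcgi-mono-server4 /applications=*:/:/var/www/dev.example.org/ /socket=tcp:127.0.0.1:9001 &
fastcgi-mono-server4 /applications=*:/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000 &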
Update: The above works to get two Mono server apps listening via nginx, but, using the nginx config from the original question will lead to an exception if you call HttpContext.Request.Url on the catch-all server. This is due to it not liking the * in *.example.org.
There are two possible solutions, depending what you'd like to see returned from HttpContext.Request.Url when a client browses foo.example.org, bar.example.org etc.
Option 1: If you don't care about the sub-domain and want to see example.org
Configure the second (*.example.org) nginx server to be the 'default_server' and have it use a server_name without the wildcard, e.g.
server {
    listen 80 default_server;
    server_name example.org;
    access_log ...
}
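Filled in with the fastcgi settings from the original question, the whole block would look roughly like this sketch:
server {
    listen 80 default_server;
    server_name example.org;
    location / {
        root /var/www/example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}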
With these settings, browsing to foo.example.org/Default.aspx loads the page and HttpContext.Request.Url returns example.org/Default.aspx
Option 2: If you want to see the actual sub-domain e.g. foo.example.org
Removing the server_name from the second server definition works.
server {
    listen 80 default_server;
    access_log ...
}
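As a fuller sketch, again assuming the fastcgi settings from the original question:
server {
    listen 80 default_server;
    location / {
        root /var/www/example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}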
With these settings, browsing to foo.example.org/Default.aspx loads the page and HttpContext.Request.Url returns foo.example.org/Default.aspx

@stephen's answer is simpler and does not require modifying the fastcgi config.
I tried the previous answer (before the update), but it did not work.
nginx takes care of the routing, as @stephen said, and the routing part worked.
To start fastcgi I used this command to match all routes (and server names):
fastcgi-mono-server4 /applications=/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
The problem was that HttpContext.Request.Url contains the $server_name value, which in my case was "*.example.org", so trying to parse the URI threw an error.
To handle this I changed the nginx fastcgi_params and replaced this line
fastcgi_param SERVER_NAME $server_name;
with
fastcgi_param SERVER_NAME $http_host;
and added this in the sites-available conf:
proxy_set_header Host $host;
I think it is set by default.
Reload nginx:
nginx -t && service nginx reload
Reload fastcgi-mono-server to test:
fastcgi-mono-server4 /applications=/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000 /printlog=True /loglevels=Debug
In the log, SERVER_NAME contains the real (not *) subdomain.
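To exercise this quickly without DNS, a request with a forged Host header should do (a sketch; assumes nginx listens on port 80 locally):
curl -H "Host: abc.example.org" http://127.0.0.1/
The fastcgi-mono-server log should then show abc.example.org as the SERVER_NAME rather than *.example.org.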

nginx shows vhost instead of default root

I just installed nginx on my dev machine.
It automatically migrated my vhosts from lighttpd (very comfy!); I only had to adjust the TLDs (it only took \.dev, which I changed to \.(dev|test|local))
and it bound itself to port 81; after removing lighttpd, I changed the ports in /etc/nginx/sites-available to 80.
But when I call http://<ip-address>/ in the browser, I get the index page of one of my vhosts instead of the default DOCUMENT_ROOT (/var/www/).
I touched /etc/nginx/sites-available/default, changed the port number and uncommented the PHP block.
current contents (comments stripped):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
Half of the vhosts had self-references in /etc/nginx/sites-enabled; I replaced them with symlinks to /etc/nginx/sites-available and added a symlink for default. All my vhosts can now be accessed, but calling the IP address still routes to the same vhost instead of /var/www.
That vhost file is first neither alphabetically nor by mtime, but it is first when I list the directory unsorted (ls -f); it even comes before ...
How do I get nginx to deliver /var/www/ instead of /var/www/vhost/?
update: After a few clicks on my primary vhost, switching to https and back, it changed:
http://www.vhost1.test now routes to /var/www, but the other vhosts seem to work correctly.
update: I tried to solve the problem by uncommenting the server block in nginx.conf (pointing to /var/www) and linking sites-enabled/default to sites-available/vhost1. The latter resulted in both the IP address and vhost1 getting routed to another vhost. The other vhosts are still working fine.
I got it:
sites-available/vhost1 only had "listen 443 ssl;"; the "listen 80;" line was missing
(because listen 80 default_server caused a "duplicate default server" error),
so calling the domain via port 80 fell back to the default server.
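A sketch of the fixed sites-available/vhost1 (server_name and root are assumptions; the ssl certificate directives are elided):
server {
    listen 80;       # this line was missing
    listen 443 ssl;
    server_name www.vhost1.test;   # assumed
    root /var/www/vhost1;          # assumed
    [...]
}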

How can I hide a file from the browser, yet still use it on the webserver with NGINX?

Here's my scenario:
I have a vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from vagrant over to their corresponding .box files on the server.
My goal is to hide the .json file from the browser so that a surfer cannot hit it directly at, say, http://example.com/catalog.json and see the JSON output, as that output lists the URL of the box file itself. However, I still need vagrant to be able to download and use the file so it can grab the box.
The NGINX docs mention the "internal" directive, which seems to offer what I want via try_files, but I think I'm either misinterpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;
    server_name catalog.example.com;
    server_name www.catalog.example.com;
    root /var/www/catalog;
    # Use try_files to trigger the internal directive to serve json files
    location / {
        try_files $uri =404;
    }
    # Serve json files to scripts only, with content type header application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name boxes.example.com;
    server_name www.boxes.example.com;
    root /var/www/boxes;
    # Use try_files to trigger the internal directive to serve box files
    location / {
        try_files $uri =404;
    }
    # Serve box files to scripts only, with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path for those sub-domains and then, passing it through try_files, should make that available when called via vagrant, yet hide it from the browser if I hit the catalog or a box url directly.
I can confirm that the files are not accessible from the browser; however, they're also inaccessible to vagrant.
Am I mis-understanding internal here? Is there a way to achieve my goal?
Make sure that, for the sensitive calls, the server listens on localhost only:
Create a tunnel between the machine running vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example); see the SSH sketch after the config below.
Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config).
Use the details from:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Reference the tunneled server using http://<host>:<port>/path in both your catalog.json url and your box file url.
Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't use server_name. You can even add default_server to this so that anything that doesn't match another virtual host will hit this block.
Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
Set regex locations for your .json and .box files and use a try_files block to accept the $uri or redirect to 444 (so you know it hit your block).
Deny /boxes and /catalog otherwise.
See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;
    server_name www.example.com;
    root /var/www;
    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name store.example.com; # I will use an eCommerce platform eventually
    root /var/www/store;
}
server {
    listen 127.0.0.1:80;
    listen [::1]:80;
    root /var/www;
    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }
    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }
    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
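For step 2, the tunnel can be a standard SSH local port forward; a sketch, with the user, host, and local port as placeholder assumptions:
ssh -N -L 8080:127.0.0.1:80 tunneluser@your-iaas-host
Vagrant on the client machine would then fetch the catalog via http://127.0.0.1:8080/catalog.json, which lands on the server block bound to 127.0.0.1:80.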
I think all you need here is to change the access level of the file. There are 3 access levels (execute, read and write); you can remove the execute access level from your file. On the server console, run the command:
chmod 766 your_file_name

simple nginx server not working

I am new to the nginx environment and trying to host my first app using nginx.
But I have not been able to get past the first steps with nginx.
I have seen and read thousands of tutorials on basic nginx setup and have set up a basic nginx server block as anyone would have.
Here is my sites-available/default
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;
    root /var/www/html;
    # Add index.php to the list if you are using PHP
    index index.html index.htm;
    error_log /var/log/nginx/error.log debug;
    error_page 400 401 402 403 404 40x.html;
    server_name mydomain.com;
    location / {
        root /var/www/html;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
And I have done several deployments with apache, but with nginx I am experiencing peculiar behaviour.
This is how it goes.
It serves the default nginx welcome page successfully from /var/www/html on mydomain.com.
Now, if I create a new html file, say test.html, inside /var/www/html and try to open mydomain.com/test.html, it shows an internal server error with no logs in the error log or access log.
Now in my server block, if I add test.html to the index directive as the first option, the same /var/www/html/test.html file is served and seen without any error on mydomain.com (so it is clear that there are no problems with file permissions).
Also, if I have index as the default index page only, and I add a hyperlink in the default index page, say Test Page, and on the default home page served on mydomain.com I click on that hyperlink, the test.html file is served, but the url in my browser does not change.
I have been banging my head on this for the last two days and I have tried several things.
Increased the verbosity of error logs to debug; still nothing shows up in the logs.
Tried a hundred other logically same but syntactically different server configurations.
I am pretty experienced with server configurations, have done a number of deployments with apache, and have never experienced something like this.
Maybe I am skipping some of the basic concepts of nginx, as I do not know much about nginx but felt it would be similar to apache.
Please help me with this issue.
Thanks in advance

How to handle multiple websites through FastCGI server

I'm interested in serving multiple .Net sites using Nginx as the front end, proxying to a fastcgi server. I would like to know if it's possible to support 2 sites on a single fastcgi-mono-server4 port (9000), or if the accepted practice is to create a port for each site. When specifying a webapp file there seems to be nowhere to specify whether to use 9000 or 9001, so I'm confused, unless you can specify a pool of fastcgi processes. I found that when attempting 2 sites on port 9000 using a webapp configuration file with 2 hosts, the same site was served on both urls.
Thanks
Yes. fastcgi-mono-server4 (mono 3.12.1) can serve more than one webapp in a single process.
It seems that fastcgi-mono-server only uses vhost+vport+vpath to match the webapp nodes defined in the .webapp file.
Set up two webapps on different ports, 80 vs. 81:
my_nginx.conf
server {
    listen 80;
    server_name localhost;
    location / {
        root /home/test/www;
        index index.html Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
server {
    listen 81;
    server_name localhost;
    location / {
        root /home/test/www2;
        index index.html Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
two.webapp
It contains 2 web-application nodes:
<apps>
    <web-application>
        <name>www</name>
        <vhost>*</vhost>
        <vport>80</vport>
        <vpath>/</vpath>
        <path>/home/test/www/</path>
        <enabled>true</enabled>
    </web-application>
    <web-application>
        <name>www2</name>
        <vhost>*</vhost>
        <vport>81</vport>
        <vpath>/</vpath>
        <path>/home/test/www2/</path>
        <enabled>true</enabled>
    </web-application>
</apps>
I just tested using vport to distinguish them, and it succeeded. I think using vhost or vpath, or any combination of vhost+vport+vpath, should work.
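For example, a .webapp sketch distinguishing by vhost instead, with both apps on port 80 (host names are assumptions; untested):
<apps>
    <web-application>
        <name>www</name>
        <vhost>www.example.com</vhost>
        <vport>80</vport>
        <vpath>/</vpath>
        <path>/home/test/www/</path>
        <enabled>true</enabled>
    </web-application>
    <web-application>
        <name>www2</name>
        <vhost>api.example.com</vhost>
        <vport>80</vport>
        <vpath>/</vpath>
        <path>/home/test/www2/</path>
        <enabled>true</enabled>
    </web-application>
</apps>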
Start the fastcgi server listening on port 9000:
fastcgi-mono-server4 --appconfigfile=./two.webapp /socket=tcp:127.0.0.1:9000
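A quick check that the single process serves both apps (assuming the two nginx servers above are running):
curl http://localhost/     # should be served from /home/test/www
curl http://localhost:81/  # should be served from /home/test/www2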

Non-www to www domain using Nginx on Ubuntu 12.04 LTS on Ec2 instance

After seeing this post http://www.ewanleith.com/blog/900/10-million-hits-a-day-with-wordpress-using-a-15-server I changed my server from apache2 to nginx. I am no computer geek, just savvy. I followed the steps. After that, the site was perfect, except for one thing: the non-www to www thing. I searched all over the net for how to do this. I tried the mod_rewrite thing they said, but it just got worse. For now, it is directed to www because I use wordpress and set it in the general settings to http://www.pageantly.com. Yet I have static directories and they are served plain non-www. Please take a look at my default.conf in /etc/nginx/conf.d/ as well as the tutorial linked above:
server {
    server_name pageantly.com www.pageantly.com;
    root /var/www/;
    listen 8080;
    ## This should be in your http block and if it is, it's not needed here.
    index index.html index.htm index.php;
    include conf.d/drop;
    location / {
        # This is cool because no php is touched for static content
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    location ~ \.php$ {
        fastcgi_buffers 8 256k;
        fastcgi_buffer_size 128k;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
    }
    # BEGIN W3TC Page Cache cache
    location ~ /wp-content/w3tc/pgcache.*html$ {
        add_header Vary "Accept-Encoding, Cookie";
    }
    [...]
    # END W3TC Page Cache core
}
Ideally, each domain (sub-domains included) should have a separate server block. Going by that, your configuration would look like:
# Following block redirects all traffic coming to pageantly.com to www.pageantly.com
server {
    server_name pageantly.com;
    listen 8080;
    # Send a 301 permanent redirect for any request which comes to this domain
    return 301 http://www.pageantly.com$request_uri;
}
# Following block handles requests for www.pageantly.com
server {
    server_name www.pageantly.com;
    listen 8080;
    root /var/www;
    [...] # all your default configuration for the website
}
Another, unclean and inefficient, way to achieve this would be to introduce an if statement which reads the domain value and branches accordingly, either redirecting traffic (in the case of pageantly.com) or processing the request (in the case of www.pageantly.com), but I would recommend you avoid going down that route.
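For completeness, a sketch of that discouraged variant in a single combined server block:
server {
    server_name pageantly.com www.pageantly.com;
    listen 8080;
    root /var/www;
    # if with return is one of the few safe uses of if, but the check
    # still runs on every request, unlike the separate-server approach
    if ($host = pageantly.com) {
        return 301 http://www.pageantly.com$request_uri;
    }
    [...] # rest of the site configuration
}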
Hope this helps.
If you are using Route 53 on AWS, then you do NOT have to do any such thing. In Route 53 itself you can create an alias and configure it so that non-www is redirected to www.
