I've got a FastCGI application served by nginx. The site configuration is very basic:
upstream foo {
    server unix:/var/run/fastcgi/foo.sock;
}

server {
    listen 8080;
    server_name _;

    root /usr/share/nginx/www;
    index index.html index.htm;

    location / {
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_pass foo;
    }
}
The app itself is quite fast: it processes a typical request in about 3-5 milliseconds, which I can see both in the app log and in $upstream_response_time logged to access.log (this is not static files or anything like that; the app just handles ordinary HTTP POSTs and returns an application/json HTTP response). However, the total request time (the $request_time variable) is very long: 140-160 milliseconds.
What is nginx doing that takes so much time? I suspect it might be opening/closing the socket connection for each request or something like that (but 140 milliseconds?!) - how can I trace the cause of the issue?
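One way to narrow this down is to log nginx's timing variables side by side. This is only a minimal sketch (log_format belongs in the http block, and $upstream_connect_time / $upstream_header_time need a reasonably recent nginx, 1.9.2 or newer):

# in the http block:
log_format timing 'req=$request_time connect=$upstream_connect_time '
                  'header=$upstream_header_time upstream=$upstream_response_time "$request"';

# in the existing server block, next to the current access_log:
access_log /var/log/nginx/timing.log timing;

Since $request_time covers everything from reading the first byte from the client to writing the last byte of the response, while $upstream_response_time only covers the exchange with the FastCGI backend, a large gap between the two points at time spent on the client side of the connection (reading the request body, writing the response back) rather than inside the app.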
I've got an nginx server running a PHP 8.0 and Laravel 8.0 web app on an EC2 instance running Ubuntu 18.04.
The web app works just fine: I can access the DB, look up data and insert new data, and all the requests for these things are POSTs or GETs with no parameters. But when I try a GET request that has parameters, the app just shows a white screen, as if nothing is happening. For example:
This GET request doesn't work:
http://mypublicamazonurl.com/clients-update?id=2&client_name=John&client_last_name=Doe&client_email=johndoe%40gmail.com&client_phone_number=9999&action=update
This GET request does work:
http://mypublicamazonurl.com/labels
My server conf file looks like this:
server {
    listen 80;

    root /var/www/Proyecto-Final-IAW/las-olivas/public;
    index index.php index.html index.htm index.nginx-debian.html;

    server_name 'mypublicurl.com' 'www.mypublicurl.com';

    location / {
        try_files $uri $uri/ /index.php$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
This is my very first time setting up a server, so any help/tips are appreciated, and hopefully I was clear.
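For comparison only, and not claiming this is the culprit: the location block published in Laravel's own deployment docs puts a "?" between index.php and $query_string, so the original query string is passed on as a query string instead of being glued onto the path:

location / {
    try_files $uri $uri/ /index.php?$query_string;
}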
I have configured nginx with fastcgi-mono-server4.
In my nginx config I have two hostnames:
server {
    listen 80;
    server_name dev.example.org;

    location / {
        root /var/www/dev.example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9001;
        include /etc/nginx/fastcgi_params;
    }
}

server {
    listen 80;
    server_name *.example.org;

    location / {
        root /var/www/example.org/;
        fastcgi_index Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
nginx is OK with this configuration: dev goes to one backend and everything else goes to the other.
I've already tried this:
fastcgi-mono-server4 /applications=*.example.org:/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
but it throws an error (a Uri parse exception).
Update:
I need to get the full host name in my application; for example, if the request was for abc.example.org, I need to get "abc".
Unfortunately, HttpContext.Current.Request.Url does not contain "abc" but "*", which causes the parse error.
If nginx is going to take care of routing the appropriate sub-domains to each FastCGI port (9000 or 9001), then you can get away with a wildcard when you start the Mono server process, e.g. just use a * instead of '*.example.org':
fastcgi-mono-server4 /applications=*:/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
Update: The above works to get two Mono server apps listening via nginx, but using the nginx config from the original question will lead to an exception if you call HttpContext.Request.Url on the catch-all server, because it does not like the * in *.example.org.
There are two possible solutions, depending what you'd like to see returned from HttpContext.Request.Url when a client browses foo.example.org, bar.example.org etc.
Option 1: If you don't care about the sub-domain and want to see example.org
Configure the second (*.example.org) nginx server to be the 'default_server' and give it a server_name without the wildcard, e.g.:
server {
    listen 80 default_server;
    server_name example.org;
    access_log ...
}
With these settings, browsing to foo.example.org/Default.aspx loads the page and HttpContext.Request.Url returns example.org/Default.aspx
Option 2: If you want to see the actual sub-domain e.g. foo.example.org
Removing the server_name from the second server definition works.
server {
    listen 80 default_server;
    access_log ...
}
With these settings, browsing to foo.example.org/Default.aspx loads the page and HttpContext.Request.Url returns foo.example.org/Default.aspx
@stephen's answer is simpler and does not need any FastCGI config modification.
I tried the previous answer (before the update), but it did not work.
nginx takes care of the routing, as @stephen said, and the routing part worked.
To start FastCGI I used this command, which matches all routes (and server names):
fastcgi-mono-server4 /applications=/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000
The problem was that HttpContext.Request.Url contains the $server_name value; in my case that was "*.example.org", and when I tried to parse the URI there was an error.
To handle this I changed nginx's fastcgi_params and replaced this line
fastcgi_param SERVER_NAME $server_name;
with
fastcgi_param SERVER_NAME $http_host;
and added this in the sites-available conf:
proxy_set_header Host $host;
I think it is set by default.
Reload nginx:
nginx -t && service nginx reload
and reload fastcgi-mono-server to test:
fastcgi-mono-server4 /applications=/:/var/www/example.org/ /socket=tcp:127.0.0.1:9000 /printlog=True /loglevels=Debug
In the log, SERVER_NAME contains the real (not *) subdomain.
My situation is the following:
I have a VM (Ubuntu server 13.04) with PHP 5.4.9-4ubuntu2.2, nginx/1.2.6, php5-fpm and Xdebug v2.2.1.
I'm developing an app using PhpStorm 6.0.3 (which I deploy on the VM).
My problem is that whenever I try to start a debugging session, the IDE never gets a connection request from the webserver (and thus the session never starts).
I looked through a lot of recommendations about xdebug configuration and found nothing useful.
What I recently realized is that if I set the XDEBUG_SESSION cookie myself through the browser (Thanks FireCookie) I can debug my app... so my guess is there's something keeping the webserver from sending the cookie back to the client.
The thing is, I'm using the same IDE configuration in a different project, which is deployed into a different CentOS based VM (with lighttpd), and it works just fine.
I tried deploying my current project onto such a VM (changing the webserver to nginx) and it worked all right (unfortunately I lost that VM and can't check the config :()).
So... here's my NginX config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name localhost;

    location / {
        try_files $uri $uri/ /dispatch.php;
    }

    location ~ \.php$ {
        root /var/www/bresson/web;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index dispatch.php;
        fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
        include fastcgi_params;
        #fastcgi_pass 127.0.0.1:9009;
    }
}
fpm config (/etc/php5/fpm/pool.d/www.conf):
listen = /var/run/php5-fpm.sock
xdebug.ini:
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_port=9000
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_log=/var/log/xdebug.log
Any idea will be much appreciated. Thanks!
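A couple of quick checks that might narrow this down (just a sketch; the paths are the ones from the config above, and PHPSTORM is an arbitrary IDE key):

php -m | grep -i xdebug   # confirms the CLI loads the extension; a phpinfo() page confirms the FPM side
tail -f /var/log/xdebug.log
curl 'http://localhost/dispatch.php?XDEBUG_SESSION_START=PHPSTORM'

If nothing is appended to the remote log for that request, Xdebug never attempted a connection, which points at the server side rather than at the IDE.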
EDIT:
Another thing I tried was to start a session from PHP, and I saw that the session cookie was created without any problem...
2nd Edit:
I think I found where the problem is: the URI.
I wrote another, much simpler script in order to try configuration parameters and such, and it worked right away!
So eventually I figured out that the problem was that the query parameters (e.g. XDEBUG_SESSION_START=14845) were not reaching my script.
The problem is my starting URI, which is of the form /images/P/P1/P1010044-242x300.jpg. Through some virtual host configuration I should be able to route it to something like /dispatch.php/images/P/P1/P1010044-242x300.jpg and use the rest of the URI as parameters. So... I haven't found a solution per se, but now I have a viable workaround (pointing my starting URL to /dispatch.php) which will do for a while. Thanks
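For what it's worth, one way to express that routing in nginx is a dedicated location that always runs the front controller and hands it the original URI. This is only a rough sketch, assuming dispatch.php can read the path from PATH_INFO:

location /images/ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/bresson/web/dispatch.php;
    fastcgi_param PATH_INFO $uri;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}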
Just in case there's someone reading this... I got it!
The problem was nginx's configuration. I had just copied a template from somewhere, but after reading a little more I found out that my particular config was much simpler:
location / {
    root /var/www/bresson/web/;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
In my case, every request has to be forwarded to my front-controller (which then analyzes the URI), so it was really simple.
I am having trouble getting Mono to work with nginx. I installed OpenBSD 5.3 and set up the appropriate (package) ports. I built mono, mono-xsp and nginx - all without incident. All three appear to be working OK, but not in conjunction.
I am trying to run the default VS MVC3 template web app, but keep getting a 502 (Bad gateway). In the error logs, I see the following:
[crit] 31764#0: *1 connect() to unix:/tmp/fastcgi.socket failed (2: No such file or directory) while connecting to upstream
The frustrating thing is that /tmp/fastcgi.socket does exist. I tried 'touch' and made sure 'wheel' and 'www' have the appropriate permissions (chmod 775 and 777). The result of 'ls -la /tmp/fastcgi.socket' revealed nothing awry.
Here is my config:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;

        access_log /home/www/nginx.log;
        error_log /home/www/errors.log;

        # root /home/www/test;
        # index index.html index.htm index.aspx default.aspx;

        location ^~ /Scripts/ { }
        location ^~ /Content/ { }

        location / {
            root /home/www/test;
            # fastcgi_index /;
            fastcgi_pass unix:/tmp/fastcgi.socket;
            # include fastcgi_params;
            include /etc/nginx/fastcgi_params;
        }
    }
}
I'm going to hazard a guess that the OpenBSD port runs nginx jailed or chrooted. Check that first; if so, you'll need to change the socket path so the socket is created inside the chroot.
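If it is chrooted (at the time, OpenBSD's nginx chrooted itself to /var/www by default, and running it with the -u flag disabled that), the path in fastcgi_pass is resolved inside the chroot. A sketch of how the two sides would then line up, assuming a /var/www chroot and the application root from the question:

# fastcgi-mono-server creates the socket on the real filesystem, inside the chroot:
fastcgi-mono-server4 /applications=/:/home/www/test /socket=unix:/var/www/tmp/fastcgi.socket

# nginx, chrooted to /var/www, still refers to it by the path it sees:
fastcgi_pass unix:/tmp/fastcgi.socket;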
After seeing the post http://www.ewanleith.com/blog/900/10-million-hits-a-day-with-wordpress-using-a-15-server I changed my server from apache2 to nginx. I am no computer geek, just savvy. I followed the steps, and after that the site was perfect except for one thing: the non-www to www redirect. I searched all over the net on how to do this. I tried the mod_rewrite approach they suggested, but it just made things worse. For now it is directed to www because I use WordPress and set it in the general settings (http://www.pageantly.com). Yet I have static directories and they are served plain non-www. Please take a look at my default.conf in /etc/nginx/conf.d/ as well as the tutorial linked above:
server {
    server_name pageantly.com www.pageantly.com;
    root /var/www/;
    listen 8080;

    ## This should be in your http block and if it is, it's not needed here.
    index index.html index.htm index.php;

    include conf.d/drop;

    location / {
        # This is cool because no php is touched for static content
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_buffers 8 256k;
        fastcgi_buffer_size 128k;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
    }

    # BEGIN W3TC Page Cache cache
    location ~ /wp-content/w3tc/pgcache.*html$ {
        add_header Vary "Accept-Encoding, Cookie";
    }
    [...]
    }
    # END W3TC Page Cache core
}
Ideally, each domain (sub-domains included) should have a separate server block. Going by that, your configuration would look like:
# Following block redirects all traffic coming to pageantly.com to www.pageantly.com
server {
    server_name pageantly.com;
    listen 8080;

    # Send a 301 permanent redirect to any request which comes on this domain
    return 301 http://www.pageantly.com$request_uri;
}

# Following block handles requests for www.pageantly.com
server {
    server_name www.pageantly.com;
    listen 8080;
    root /var/www;

    [...] # all your default configuration for the website
}
Another, less clean and less efficient way to achieve this would be to introduce an if statement that reads the domain value and branches accordingly, either redirecting traffic (for pageantly.com) or processing the request (for www.pageantly.com), but I would recommend avoiding that route. A rough sketch of what it would look like is below.
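For completeness, that discouraged variant would look roughly like this:

# inside the single existing server block that handles both pageantly.com and www.pageantly.com
if ($host = pageantly.com) {
    return 301 http://www.pageantly.com$request_uri;
}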
Hope this helps.
If you are using Route 53 on AWS, then you do not have to do any such thing: in Route 53 itself you can create an alias and configure it so that non-www is redirected to www.