Nginx 1.4.6 throwing 503 errors
I have configured 2 websites (WordPress and Laravel v4) on a DigitalOcean droplet running nginx/1.4.6 (Ubuntu).
Both websites normally work well and respond quickly.
But whenever the Laravel website saves any data, it throws a 503 error.
The WordPress site does not throw a 503, but it takes a very long time to respond, roughly 1-3 minutes, when saving a post or any other data.
The virtual host configuration for both sites is the same, as below:
server {
    listen 80;
    listen 443 ssl;

    root /var/www/domain1.com/public_html;
    index index.php index.html index.htm;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    # Make site accessible from http://localhost/
    server_name domain1.com www.domain1.com;

    access_log off;

    # GZIP Configuration
    gzip on;
    gzip_min_length 100;
    gzip_comp_level 3;
    gzip_types text/plain;
    gzip_types text/css;
    gzip_types text/javascript;
    gzip_disable "msie6";

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        if ($host !~* ^www\.) {
            rewrite ^/(.*)$ http://www.$host/$1 permanent;
        }
        proxy_read_timeout 300;
    }

    error_page 404 /error.html;

    location ^~ /error.html {
        rewrite ^/.* http://www.domain1.com permanent;
    }

    location ~* \.(css|js|jpg|png|gif)$ {
        access_log off;
        expires 1M;
        add_header Pragma public;
        add_header Cache-Control public;
        add_header Vary Accept-Encoding;
    }

    try_files $uri $uri/ @rewrite;

    location @rewrite {
        rewrite ^/(.*)$ /index.php?_url=/$1;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_read_timeout 600s;
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I have also checked the error logs and found nothing critical.
Please guide me as to why the Laravel v4 website shows 503 errors, and why the WordPress site is slow when saving data.
A 503 means that the service is unresponsive. I guess you are getting timeouts due to Laravel using too much of your resources (e.g. memory), possibly maxing out your droplet.
From your config I can see that you had a PHP-FPM socket configured but decided to use PHP over port 9000 instead:
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_pass 127.0.0.1:9000;
I would recommend configuring PHP-FPM with 2 different pools, one for WordPress and another for Laravel, and have them listen on different sockets with different settings per pool, e.g. WordPress could have php_value[memory_limit] = 128M and Laravel php_value[memory_limit] = 64M. This is just one idea, as there are plenty of settings you can tune per pool.
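On the nginx side, each site's virtual host would then point at its own socket. A minimal sketch (the socket paths here are hypothetical; use whatever you put in each pool's listen = directive):

# in the WordPress server block
location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm-wordpress.sock;  # hypothetical socket of the WordPress pool
}

# in the Laravel server block
location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm-laravel.sock;    # hypothetical socket of the Laravel pool
}

That way a runaway Laravel request can only exhaust its own pool's workers instead of starving WordPress as well.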
Additionally, your config has very generous timeouts set up:
proxy_read_timeout 300
fastcgi_read_timeout 600s
It's very unusual to wait 5 or 10 minutes for a response.
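For illustration, something closer to a minute is usually plenty (60s is just an assumed value; pick whatever fits your slowest legitimate request):

proxy_read_timeout 60s;     # was 300
fastcgi_read_timeout 60s;   # was 600s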
The best way, in my opinion, would be to:
- Go with PHP-FPM and a different socket per site -> custom settings per site.
- Install New Relic (it's free at the basic level); this will let you see what is going on with your server, i.e. memory usage, how long it takes to execute a request, etc. Hopefully it will help you find the issue in your code, i.e. why it takes so long for your Laravel app to save data.
- Change the timeouts, i.e. lower them; 5 and 10 minutes is crazy.
- Possibly upgrade your droplet, depending on your findings.
This is a good website with instructions on how to tweak PHP-FPM and nginx.
Related
Hey everyone!
I'm having a really hard time figuring this out. When I run my website with Apache, everything works as intended; however, I recently switched to nginx. When I run my website on nginx and access the Joomla backend, I get an Error 520 from Cloudflare. I can't find the difference between the two webservers, but it seems related to SSL; running without SSL works fine.
I'm out of luck; I did a lot of testing and still have the same issue.
Something that Cloudflare cannot understand is happening when using nginx.
This is my nginx config:
server {
    listen 443 ssl http2;
    listen 80;

    server_name websitename.com www.websitename.com;
    root /var/www/html;

    ssl_certificate websitename.com.crt;
    ssl_certificate_key websitename.com.key;

    index index.php index.html index.htm default.html default.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
        error_page 403 /403_error.html;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi.conf;
    }

    location ~* \.(ico|pdf|flv)$ {
        expires 1y;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
        expires 14d;
    }
}
Finally, I solved it.
I found out that somehow the Cloudflare Railgun wasn't behaving right with nginx.
I went to Cloudflare, navigated to "Speed -> Optimizations", and disabled Railgun,
and I no longer get 520 errors.
Hope this helps anyone with the same issue; I was stuck on this for 3 days.
I am trying to move old API endpoints to new RESTful (descriptive) endpoints. I have tried the nginx configuration below for rewriting old requests to the new endpoints, but it is not working. Any help will be highly appreciated.
server {
    listen 80;

    root /path/to/api/entry/file;
    index index.php;
    server_name api.example.com;

    # Below not rewriting http://api.example.com/create/ to http://api.example.com/users/v1/create
    rewrite ^/create/ /users/v1/create last;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        access_log /var/log/nginx/example_api-access.log;
        error_log /var/log/nginx/example_api-error.log;
        fastcgi_read_timeout 600;
    }
}
An example of what I am trying to achieve is to rewrite http://api.example.com/create/ to http://api.example.com/users/v1/create and forward the request to the entry script (index.php), which will bootstrap the necessary controller to handle the request.
Your rewrite...last doesn't achieve anything, as it's an internal process which eventually ends at /index.php. Your index.php script uses the original request (probably from the REQUEST_URI parameter) to determine the API endpoint.
You need to perform an external redirection using rewrite...permanent to make it visible to index.php. See this document for details.
For example:
rewrite ^/create/ /users/v1/create permanent;
Or more efficiently, and to work with POST and GET requests:
location /create/ { return 307 /users/v1/create$is_args$args; }
If you want to support the old API without a redirection, you will need to fool index.php with a dedicated location block, for example:
location /create/ {
    include fastcgi_params;
    fastcgi_param REQUEST_URI /users/v1/create;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
Many of your fastcgi directives can be moved into the outer block, so that you do not need to write them twice. See this document for details.
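For example, here is one possible consolidated sketch based on the configuration in the question and the location block above (note that I have added an explicit SCRIPT_FILENAME to the generic PHP location, which the original config did not set; adjust paths to your setup):

server {
    listen 80;

    root /path/to/api/entry/file;
    index index.php;
    server_name api.example.com;

    # shared FastCGI tuning, inherited by the location blocks below
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_read_timeout 600;

    access_log /var/log/nginx/example_api-access.log;
    error_log /var/log/nginx/example_api-error.log;

    # old endpoint, handled directly so index.php sees the new URI
    location /create/ {
        include fastcgi_params;
        fastcgi_param REQUEST_URI /users/v1/create;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}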
I am using Drupal 8 with nginx. I have a multi-project environment with a single domain.
I have 10 Drupal sites, which work as
https://example.com/site1
https://example.com/site2
Each site has its own distinct Docker container and everything runs smoothly in production. But I noticed an issue with a few sites: they give a 404 without a trailing slash. Only 4 of them; the other six automatically append the slash when I remove it and hit the URL in the browser.
The nginx config for all of them is exactly the same.
Here is the nginx config for one site:
server {
    client_max_body_size 128m;
    root /var/www/html;
    index index.php index.html index.htm;

    location /site1 {
        try_files $uri/ $uri /site1/index.php?$query_string;
    }

    error_page 404 /404.html;
    error_page 403 /403.html;

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        add_header Access-Control-Allow-Origin *;
        proxy_set_header Access-Control-Allow-Origin $http_origin;
    }
}
I have tried everything: rewrite, try_files and returns.
Can anyone help?
I moved from a setup of Apache 2 + Varnish to nginx alone, and I'm somewhat stuck on how I should set up and use ESI as well as fastcgi_cache in this setup.
First of all, the idea of ESI was to put a reverse proxy layer in front of the server to cache the cacheable parts of a page, then use ESI to retrieve the dynamic parts. In my previous setup Varnish acted as the reverse proxy and Apache only handled the ESI requests when necessary.
My question is, now that nginx is acting as the sole server here, how do I make this work? Do I need to set up another nginx instance running as a reverse proxy server, or something like that? I couldn't find any documentation on this.
The second question is regarding fastcgi_cache. I have set it up as described below, but the cache doesn't seem to work for me: no cache files are populated and I always get "MISS". I wonder if it's because I need to set max-age/shared-max-age in each controller for it to work?
fastcgi_cache_path /run levels=1:2 keys_zone=www_mysite_com:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;

server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/mysite.com/w/w/w/www/web;
    index index.php index.html index.htm;

    # Make site accessible from http://www.mysite.com
    server_name www.mysite.com;

    # Specify a character set
    charset utf-8;

    # strip app.php/ prefix if it is present
    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    # h5bp nginx configs
    # include conf/h5bp.conf;

    location / {
        index app.php;
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    # Deny access to .htaccess
    location ~ /\.ht {
        deny all;
    }

    # Don't log robots.txt or favicon.ico files
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    # 404 errors handled by our application, for instance Symfony
    error_page 404 /app.php;

    # pass the PHP scripts to FastCGI server from upstream phpfcgi
    location ~ ^/(app|app_dev|backend/app|backend/app_dev|config)\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME web/$fastcgi_script_name;
        fastcgi_param HTTPS off;
        fastcgi_cache www_mysite_com;
        fastcgi_cache_valid 200 60m;
    }

    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    #    # For example, return an error code
    #    return 418;
    #}

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
}
By default, responses from the Symfony 2 application have a cache control header that disables caching:
Cache-Control: no-cache
If you would like nginx to cache pages you will have to change those headers.
You can find general information about caching in the documentation.
The simplest solution is to use the SensioFrameworkExtraBundle (you already have it if you use the SF2 Standard Edition) and use annotations on your controllers and/or actions to specify the cache headers. You can find more info about this approach in the docs for the #Cache annotation.
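As a quick way to verify things from the nginx side once the headers are cacheable, you can expose the cache status in a response header. A sketch against the www_mysite_com zone from your config (X-Cache-Status is just an arbitrary header name I picked):

location ~ ^/(app|app_dev|backend/app|backend/app_dev|config)\.php(/|$) {
    # ... existing fastcgi_* directives from your config ...
    fastcgi_cache www_mysite_com;
    fastcgi_cache_valid 200 60m;
    # reports HIT, MISS, EXPIRED or BYPASS for each response
    add_header X-Cache-Status $upstream_cache_status;
}

While the application keeps sending Cache-Control: no-cache you should keep seeing MISS; once your controllers send cacheable headers, repeat requests should flip to HIT and files should appear under the cache path.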
I am replacing lighttpd with nginx on my development server. I got it working with PHP and SSL, but I'm stumped by what should be a simple rewrite. I need to rewrite URLs from
http[s]://dev.foo.com/signup/123456
to
http[s]://dev.foo.com/signup/index.php?attcode=123456
The rule I am using is:
rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;
I have tried numerous variations on this, moved it around, put it inside a location block. What happens is the URL is rewritten to:
http://dev.foo.com/dev.foo.com/signup/123456
The hostname is inserted, and it seems to always lose https and go to http.
My nginx.conf server section is below. I have read and re-read the nginx docs (such as they are) and searched the nginx mailing list, but nothing I've tried has solved this problem.
Ubuntu 8.04 LTS, in case that matters.
Thanks.
server {
    listen 80;
    listen 443 default ssl;

    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    # ssl cert stuff omitted

    charset utf-8;
    access_log /var/log/www/dev.access.log main;

    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(inc|tpl|sql|ini|bak|sh|cgi)$ {
        deny all;
    }

    location ~* ^/(scripts|tmp|sql)/ {
        deny all;
    }

    rewrite ^/robots.txt$ /robots_nocrawl.txt break;

    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;

    location / {
        try_files $uri $uri/ /error_404.php;
    }

    location ~ \.php$ {
        fastcgi_pass localhost:51115;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SERVER_NAME $http_host;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    error_page 404 /error_404.php;
}
Don't put HTTP and HTTPS in the same server block. Separate them into two almost-identical server blocks, one for HTTP and one for HTTPS. Otherwise you will confuse all kinds of Nginx internals.
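A minimal sketch of that split, reusing the relevant parts of your config (keep your existing ssl_certificate lines in the 443 block; the HTTPS parameter at the end is an optional extra so PHP can tell which scheme was used):

server {
    listen 80;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;

    location / {
        try_files $uri $uri/ /error_404.php;
    }

    location ~ \.php$ {
        fastcgi_pass localhost:51115;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

server {
    listen 443 ssl;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    # ssl_certificate / ssl_certificate_key as before

    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;

    location / {
        try_files $uri $uri/ /error_404.php;
    }

    location ~ \.php$ {
        fastcgi_pass localhost:51115;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;  # optional: lets PHP know the request came over SSL
    }
}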