Very Slow Responses On Homestead - homestead

Versions:
Lumen: 5.2
Vagrant: 1.8.1
Homestead: latest
I have just installed Homestead and am attempting to use it for developing a Lumen application. The Lumen app is very fast on MAMP (as expected); however, it is extremely slow on Homestead. While I expect a performance drop when using a VM, the drop I am experiencing essentially renders Homestead unusable.
I am on OS X Yosemite and have 16GB of RAM and 8 logical cores. I am also using VirtualBox to run my VM.
Homestead.yaml
---
ip: "192.168.10.10"
memory: 4096
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
- ~/.ssh/id_rsa
folders:
- map: ~/repos
to: /home/vagrant/Code
type: nfs
sites:
- map: my-site.app
to: /home/vagrant/Code/my-site/public
databases:
- homestead
Inside my nginx config:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index /index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_intercept_errors off;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
}
While running my test suite via PHPUnit is lightning fast (running from both within Homestead and from my local file system), responses in the browser are painfully slow. For example, a simple route returning hello world takes 5 - 10 seconds to respond.
What can I do to improve the response time? Are there any other settings that I can optimize?

In my case, on Windows, it is because VirtualBox uses vboxsf as the file system to mount folders from host to guest; I don't know why, but it is slow.
If you mount the folder using CIFS instead, it should run a lot faster.
On the guest side, you can follow https://wiki.ubuntu.com/MountWindowsSharesPermanently
I don't use a Mac, so I'm not sure how to do it on the HOST side, but I guess you only need to share the folder to the network using SMB.
Good luck.
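For reference, the guest-side CIFS mount described on that wiki page comes down to an /etc/fstab entry along these lines (the host IP, share name, and credentials file here are placeholders you would need to adapt; 192.168.10.1 is just the usual VirtualBox host address for Homestead's default subnet):

```
//192.168.10.1/repos  /home/vagrant/Code  cifs  credentials=/home/vagrant/.smbcredentials,uid=vagrant,gid=vagrant,iocharset=utf8  0  0
```

The .smbcredentials file holds the SMB username and password so they don't end up in fstab itself.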

Related

How to configure MAMP PRO nginx server for symfony on Mac

Since I have a MacBook, I use MAMP PRO to run several local servers. Now I want to change from Apache to nginx, but I always get a 404.
What does a working standard nginx configuration for Symfony 3.x look like?
Thanks, Anton
The following configuration works:
Directory Index: app.php
try_files: $uri /index.php$is_args$args;
Custom:
location ~ ^/app\.php(/|$) {
fastcgi_pass unix:/Applications/MAMP/Library/logs/fastcgi/nginxFastCGI_php7.1.8.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include /Applications/MAMP/conf/nginx/fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/index.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
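The location block those trailing comments belong to appears to have been cut off; in Symfony's reference nginx configuration it is simply a catch-all that refuses every PHP file other than the front controller:

```nginx
# return 404 for all other php files not matching the front controller
location ~ \.php$ {
    return 404;
}
```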

Random "502 Error Bad Gateway" in Amazon Red Hat (Not Ubuntu) - Nginx + PHP-FPM

First of all, I already searched for the 502 error on Stack Overflow. There are a lot of threads, but the difference this time is that the error appears without a pattern, and it's not on Ubuntu.
Everything works perfectly, but about once a week my site shows: 502 Bad Gateway.
After this first error, every connection starts showing this message. Restarting MySQL + PHP-FPM + Nginx + Varnish doesn't work.
I have to clone this instance, and make another one, to get my site up again (It is hosted in Amazon EC2).
The Nginx log shows this line again and again:
[error] 16773#0: *7034 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1
There is nothing in the MySQL or Varnish logs, but PHP-FPM shows these types of lines:
WARNING: [pool www] child 18978, script '/var/www/mysite.com/index.php' (request: "GET /index.php") executing too slow (10.303579 sec), logging
WARNING: [pool www] child 18978, script '/var/www/mysite.com/index.php' (request: "GET /index.php") execution timed out (16.971086 sec), terminating
Inside PHP-FPM slowlog it was showing:
[pool www] pid 20401
script_filename = /var/www/mysite.com/index.php
w3_require_once() /var/www/mysite.com/wp-content/plugins/w3-total-cache/inc/define.php:1478
(Inside the file "define.php" at line number 1478, it has this line of code: require_once $path;)
I thought the problem was with W3 Total Cache plugin. So I removed W3 Total Cache.
About 5 days later it happened again with this error in PHP-FPM slow log:
script_filename = /var/www/mysite.com/index.php
wpcf7_load_modules() /var/www/mysite.com/wp-content/plugins/contact-form-7/includes/functions.php:283
(Inside the file "functions.php" at line number 283, it has this line of code: include_once $file;)
The other day, the first error occurred in another part:
script_filename = /var/www/mysite.com/wp-cron.php
curl_exec() /var/www/mysite.com/wp-includes/class-http.php:1510
And again a different part of code:
[pool www] pid 20509
script_filename = /var/www/mysite.com/index.php
mysql_query() /var/www/mysite.com/wp-includes/wp-db.php:1655
CPU, RAM ... everything is stable when this error occurs (less than 20% usage).
I tried everything, but nothing worked:
Moved to a better server (CPU and RAM)
Decreased timeouts in Nginx, PHP-FPM, MySQL (my page loads quickly, so I decreased timeouts to kill any outlier process)
Changed the number of PHP-FPM spare servers
Changed a lot of configuration from Nginx and PHP-FPM
I know that there is a bug with PHP-FPM and Ubuntu that could cause this error, but I don't think that bug affects Amazon instances (Red Hat). (And I don't want to migrate from TCP to Unix sockets because I've read that sockets don't work well under heavy load.)
This has been happening about every week for the past 5 months. I'm desperate.
I got to the point that I even put Nginx and PHP-FPM in Linux's crontab, to restart these services every day. But that didn't work either.
Does anyone have a suggestion for how I can solve this problem? Anything will help!!
Server:
Amazon c3.large (2 core and 3.75GB RAM)
Linux Amazon Red Hat 4.8.2 64bits
PHP-FPM:
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
listen.mode = 0664
pm = ondemand
pm.max_children = 480
pm.start_servers = 140
pm.min_spare_servers = 140
pm.max_spare_servers = 250
pm.max_requests = 50
request_terminate_timeout = 15s
request_slowlog_timeout = 10s
php_admin_flag[log_errors] = on
Nginx:
worker_processes 2;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
server_tokens off;
client_max_body_size 8m;
reset_timedout_connection on;
index index.php index.html index.htm;
keepalive_timeout 1;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
fastcgi_send_timeout 30s;
fastcgi_read_timeout 30s;
listen 127.0.0.1:8080;
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_keep_conn on;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param HTTP_HOST $host;
}
}
I would start by tuning some configuration parameters.
PHP-FPM
I think that your pm values are somewhat off, a bit higher than I've normally seen configured on servers around your specs... but you say that memory consumption is normal, so that's kind of weird.
Anyway... with pm.max_children = 480, considering that by default WordPress increases the memory limit to 40MB, you could end up using up to about 18 GB of memory, so you definitely want to lower that.
Check the fourth part on this post for more info about that: http://www.if-not-true-then-false.com/2011/nginx-and-php-fpm-configuration-and-optimizing-tips-and-tricks/
If you're using... let's say 512MB for nginx, MySQL, Varnish and other services, you would have about 3328 MB for php-fpm... divided by 40 MB per process, pm.max_children should be about 80... but even 80 is very high.
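The back-of-the-envelope arithmetic above can be checked with a quick shell calculation (the 3840 MB total, 512 MB reserve, and 40 MB per-process figures are the assumptions from the paragraph above):

```shell
# Rough pm.max_children sizing for a c3.large (3.75 GB = 3840 MB RAM)
total_mb=3840
reserved_mb=512       # nginx, MySQL, Varnish and other services (assumed)
per_proc_mb=40        # WordPress' default per-process memory limit
available_mb=$(( total_mb - reserved_mb ))
echo "available for php-fpm: ${available_mb} MB"                     # prints 3328
echo "pm.max_children ceiling: $(( available_mb / per_proc_mb ))"    # prints 83
```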
You can probably also lower the values of pm.start_servers, pm.min_spare_servers and pm.max_spare_servers. I prefer to keep them low and only increase them when necessary.
For pm.max_requests you should keep the default of 500 to avoid unnecessary worker respawns. I think it's only advisable to lower it if you suspect memory leaks.
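Put together, the pool values suggested above would look roughly like this (illustrative numbers derived from the estimate, not a tested drop-in config):

```ini
pm.max_children = 80      ; ~3328 MB available / ~40 MB per WordPress process
pm.start_servers = 10     ; keep low, raise only if needed
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500     ; the default; lower only if you suspect memory leaks
```

Note that the question's pool uses pm = ondemand, under which the start/spare-server values are ignored; they only take effect with pm = dynamic.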
Nginx
Change keepalive_timeout to 60 to make better use of keep-alive.
Other than that, I think everything looks normal.
I had this issue with Ubuntu, but request_terminate_timeout on PHP-FPM and fastcgi_send_timeout + fastcgi_read_timeout were enough to get rid of it.
I hope you can fix it!

How to consistently setup PHP-FPM 5.6 with nginx on Amazon EC2 AMI instance

I cannot find a way to set up php-fpm with nginx on an Amazon AMI EC2 instance from scratch. I know this should not be that difficult, but finding different answers based on *nix versions is confusing.
Here are the condensed steps I've taken that I thought would work, but don't. Does anyone have a set of steps to reliably setup php-fpm with nginx in Amazon AMI EC2 instance?
I've intentionally left out nginx.conf, etc from this post since they are the "stock" installations from the default yum repositories.
nginx version: 1.6.2
Does anyone have reliable steps to set up php-fpm with nginx for Amazon AMI EC2 instances? I would prefer to set it up myself instead of using the AMI in the Amazon Marketplace that charges for this setup.
Thanks
# install packages
yum install -y nginx
yum install -y php56-fpm.x86_64
# enable php in nginx.conf
vi /etc/nginx/nginx.conf
# add index.php at the beginning of index
index index.php index.html index.htm;
# uncomment the php block in nginx.conf
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
# tell php-fpm to run as same account as nginx
vi /etc/php-fpm-5.6.d/www.conf
- change user and group apache to nginx
# allow nginx user to read website files since they are typically owned by root
cd /usr/share/nginx
chown -R nginx:nginx html
# check to see if php works - doesn't with these steps
echo "<?php phpinfo(); ?>" > /usr/share/nginx/info.php
# restart services since we changed things
service nginx restart
service php-fpm-5.6 restart
# verify root path exists and is owned by nginx as we said above
# ls -l /usr/share/nginx/html
-rw-r--r-- 1 nginx nginx 3696 Mar 6 03:53 404.html
-rw-r--r-- 1 nginx nginx 3738 Mar 6 03:53 50x.html
-rw-r--r-- 1 nginx nginx 3770 Mar 6 03:53 index.html
-rw-r--r-- 1 nginx nginx 20 Apr 14 14:01 index.php
# I also verified php-fpm is listening on port 9000 and nginx is setup that way in the nginx.conf
# port 9000 usage is the default and I left it as-is for this question, but I would prefer to use sock once I get this working.
Edit
This is what I see in the nginx error log
2015/04/14 17:08:25 [error] 916#0: *9 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream,
client: 12.34.56.78, server: localhost, request: "GET /index.php HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000", host: "12.34.56.90"
What do you see in the nginx error log (/var/log/nginx/error.log)?
Added after additional info (logs) provided:
To me it looks like root should be in the server section, not the location:
server {
...
root /usr/share/nginx/html;
...
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
}
Where is your index.php file? If it is here:
/usr/share/nginx/html/index.php
then change this line
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
to:
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
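Expanding the variables by hand shows why the stock value triggers "Primary script unknown" for GET /index.php (a plain shell illustration of the string nginx sends, not actual nginx behaviour):

```shell
fastcgi_script_name="/index.php"

# Stock config: SCRIPT_FILENAME /scripts$fastcgi_script_name
echo "/scripts${fastcgi_script_name}"                 # /scripts/index.php - no such file, so php-fpm answers "Primary script unknown"

# Corrected config: SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name
echo "/usr/share/nginx/html${fastcgi_script_name}"    # /usr/share/nginx/html/index.php - the real script
```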

Unable to get error stack trace or error log while using php-fpm + nginx

I am using php-fpm 5.5.9 along with nginx 1.4.6 on my Ubuntu 14.04 machine. I installed them using the apt-get package manager. I am unable to get a stack trace of the errors that my index.php script encounters, either in the error log or in the browser. I searched for and implemented a couple of solutions from Stack Overflow and other articles, but none of them worked for me. Here are my nginx conf and my php-fpm conf files. Please point out any silly mistake I am making.
Nginx Configuration:
location ~ \.php$ {
# With php5-fpm:
#try_files $uri =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_intercept_errors on;
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
proxy_connect_timeout 600;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
fastcgi_pass 127.0.0.1:7777;
fastcgi_index index.php;
}
PHP-FPM Configuration:
error_log = /tmp/php5-fpm.log
PHP-FPM pool Configuration:
catch_workers_output = yes
slowlog = /var/log/php-fpm/$pool.log.slow
listen = 127.0.0.1:7777
php_flag[display_errors] = On
php_admin_value[error_log] = /tmp/fpm-php.www.log
php_admin_flag[log_errors] = On
Thanks in advance.
Honestly, I couldn't find a reasonable solution without using the PHP Xdebug module.
sudo apt-get install php5-xdebug
It should install the module configuration; you may have to restart php-fpm afterwards, though.
sudo service php5-fpm restart
Once that was installed, I could finally get a stack trace out of php5-fpm.
Your configuration says that the error_log file is in /tmp/... however, you could just be looking in the wrong place.
Did you check the default error log location anyway? Usually it should be:
/var/log/nginx/ - there is an error.log file in there, and possibly other log files as well.
Also note that the PHP-FPM and nginx configurations differ in syntax.
Check if you have a syntax error anywhere; the config may be parsed wrongly and thus swallow your errors.
Check file permissions for the error log: is nginx's running user able to write there?
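That last check can be scripted; run something like this as the php-fpm worker user (the /tmp path is the one from the pool config above):

```shell
# Can the current user create and write the configured error log?
log=/tmp/fpm-php.www.log
touch "$log" 2>/dev/null
if [ -w "$log" ]; then
    echo "log is writable"
else
    echo "log is NOT writable - fix ownership/permissions"
fi
```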

PHP-FPM - upstream prematurely closed connection while reading response header

Already saw this same question - upstream prematurely closed connection while reading response header from upstream, client
But as Jhilke Dai said, it was not solved at all, and I agree.
I got the same exact error on an nginx + PHP-FPM installation. Current software versions: nginx 1.2.8, PHP 5.4.13 (cli) on FreeBSD 9.1. I have isolated this error a bit and am sure it happens when trying to import large files (larger than 3 MB) to MySQL via phpMyAdmin. I also noted that the backend closes the connection when a 30-second limit is reached.
Nginx error log throwing this
[error] 49927#0: *196 upstream prematurely closed connection while reading response header from upstream, client: 7X.XX.X.6X, server: domain.com, request: "POST /php3/import.php HTTP/1.1", upstream: "fastcgi://unix:/tmp/php5-fpm.sock2:", host: "domain.com", referrer: "http://domain.com/phpmyadmin/db_import.php?db=testdb&server=1&token=9ee45779dd53c45b7300545dd3113fed"
My php.ini limits were raised accordingly:
upload_max_filesize = 200M
default_socket_timeout = 60
max_execution_time = 600
max_input_time = 600
Related my.cnf limit:
max_allowed_packet = 512M
Fastcgi limits
location ~ \.php$ {
# fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass unix:/tmp/php5-fpm.sock2;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort on;
fastcgi_connect_timeout 60s;
fastcgi_send_timeout 200s;
fastcgi_read_timeout 200s;
fastcgi_buffer_size 128k;
fastcgi_buffers 8 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
I tried changing the fastcgi timeouts as well as the buffer sizes; that hasn't helped.
The PHP error log doesn't show the problem; I enabled all notices and warnings - nothing useful.
I also tried disabling APC - no effect.
I had this same issue, got 502 Bad Gateway frequently and randomly on my development machine (OSX + nginx + php-fpm), and solved it by changing some parameters in /usr/local/etc/php/5.6/php-fpm.conf:
I had these settings:
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.max_spare_servers = 5
... and changed them to:
pm = dynamic
pm.max_children = 10
pm.start_servers = 10
pm.max_spare_servers = 10
... and then restarted the php-fpm service.
These settings are based on what I found here: https://bugs.php.net/bug.php?id=63395
How long does your script take to run? Try setting HUGE timeouts, in both PHP and Nginx, and monitor your system during the request. Then tune your values to optimise performance.
Also, lower the log level in PHP-FPM; maybe there is some warning, info or debug trace that can give you some information.
Finally, be careful with the number of children and processes available in PHP-FPM. Maybe Nginx is starving, waiting for a PHP-FPM child to become available.
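As a sketch of the "huge timeouts for debugging" idea above, on the nginx side that would mean temporarily raising values like:

```nginx
# Debug-only values; revert once the slow request is identified
fastcgi_send_timeout 3600s;
fastcgi_read_timeout 3600s;
```

On the PHP side, the matching knobs are max_execution_time in php.ini and request_terminate_timeout in the FPM pool; with everything oversized, the request should eventually finish (or fail with a real error) instead of being cut off mid-response.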
