phpinfo() showing invalid value - nginx

test.php
<?php
sleep(45);
phpinfo();
Upon executing the above code, after ~47 seconds I'm getting the response:
max_execution_time 30 | 30
Strange, but yes, phpinfo() is showing a timeout value that doesn't match the behaviour I observe.
With sleep(75); phpinfo(); I get a request timeout error in the browser after about 61 seconds.
Problem: I'm not sure why phpinfo() is showing this apparently invalid value.
PHP Version: 5.6.29
Server API: FPM/FastCGI
php-fpm: active
NGINX_VERSION: 1.11.8; linux
From the above tests it seems the server's max_execution_time is 60 seconds, but phpinfo() shows 30 seconds.

No, this is entirely expected. sleep() is a blocking call; PHP doesn't notice that it has exceeded the limit until the executing thread is scheduled again by the OS.
Try:
for ($x=0; $x<30; $x++) sleep(2);

<?php
set_time_limit(300);
sleep(45);
phpinfo();
Set a new max_execution_time with the set_time_limit() function.
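As a quick sanity check (a minimal sketch, not part of the original answer), you can confirm the new limit actually took effect before the long-running work starts:

<?php
set_time_limit(300);                 // restarts the timeout counter and sets the limit to 300 seconds
echo ini_get('max_execution_time');  // should now report 300
sleep(45);
phpinfo();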

Related

Impossible to connect to a mysql Cleardb database on Heroku

My last build failed with this error message:
Executing script cache:clear [KO]
[KO]
Script cache:clear returned with error code 255
!!
!! In ExceptionConverter.php line 117:
!!
!! An exception occurred in the driver: SQLSTATE[HY000] [1226] User 'xxxxxxxx
!! xxxxx' has exceeded the 'max_user_connections' resource (current value: 15)
And when I try to connect to my database via CLI,
mysql -u xxxxxxxxxx -pxxxxxxxx -h us-cdbr-east-xxx.cleardb.com
I am getting this error message:
ERROR 1040 (HY000): Too many connections
The problem is that the database dashboard says: "No connections are currently established to the database." Therefore, I guess a hidden process might be running.
Do you have an idea of how to fix this issue? I have already restarted all the dynos, but it didn't have any effect. Is restarting the dynos the same as restarting the app? I also read that I should stop background workers, but I have no clue how to do that...
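If a connection slot does free up, one way to see whether hidden clients are still holding connections on the server side is SHOW PROCESSLIST; without the PROCESS privilege it only lists your own user's threads, which is what matters here. A sketch (the host and credentials are the placeholders from the question):

<?php
$db = mysqli_connect('us-cdbr-east-xxx.cleardb.com', 'xxxxxxxxxx', 'xxxxxxxx');
if (!$db) {
    die('Connect failed: ' . mysqli_connect_error());
}
$result = mysqli_query($db, 'SHOW PROCESSLIST');
while ($row = mysqli_fetch_assoc($result)) {
    // One row per open connection: Id, User, Host, db, Command, Time, State, Info
    echo $row['Id'] . "\t" . $row['Host'] . "\t" . $row['Time'] . "s\t" . $row['Command'] . "\n";
}

Any entry that isn't your own CLI session is a candidate for KILL <Id>.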

is_wp_error and wp_remote_get cURL error 28

I'm using the following code. Assume $url is something like https://example.com/cron/cron.php. I'm getting the failure message, but the request is actually working, because the linked script sends out an e-mail and I am receiving the e-mails.
So if it's working, then why is is_wp_error triggering?
if (is_wp_error($wp_remote_get = wp_remote_get($url))) {
    echo "Failed to get script. Error: " . $wp_remote_get->get_error_message() . " Exiting...";
    exit;
}
Got error: Failed to get script. Error: cURL error 28: Operation timed out after 5004 milliseconds with 0 out of -1 bytes received
So it was a bit of a silly issue. wp_remote_get retrieves the HTTP response, but I thought it would just visit the URL to run the script. The script takes a while to complete, so it must be running into time limits, which is what the error message suggests.
I resolved this by just using wp_redirect instead. It would be nice if I could find a way to run the other script without redirecting, though.
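If the goal is just to trigger the remote script rather than read its output, the WP HTTP API also lets you fire the request without waiting, or raise the per-request timeout above the default 5 seconds (which matches the 5004 ms in the error). A sketch (the 30-second value is an assumption, tune it to how long cron.php actually runs):

// Option 1: fire-and-forget, return immediately instead of waiting for the body.
wp_remote_get($url, array('blocking' => false));

// Option 2: keep checking for errors, but allow more time than the default 5 seconds.
$response = wp_remote_get($url, array('timeout' => 30));
if (is_wp_error($response)) {
    echo "Failed to get script. Error: " . $response->get_error_message();
}

Note that with 'blocking' => false there is no response body to inspect, so you can't tell from is_wp_error() whether the remote script itself succeeded.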

How to fix a "URL Error #:Operation timed out after 300001 milliseconds with 0 bytes received" in WordPress

I get the following fatal error when making a cURL GET request to the WooCommerce REST API v3 in WordPress. I have tried set_time_limit(0) and also WordPress's wp_remote_get method, but that didn't solve the issue; I still get errors and I'm not able to fix them. I've set the timeout to 30 seconds, yet even with this setting I get a timeout error and a null response from the REST API.
Fatal error: cURL Error: Operation timed out after 30009 milliseconds with 0 bytes received
Does anyone have a suggestion to fix this?
I think the answer here is likely what you're looking for: https://wordpress.stackexchange.com/a/346236/31838
Increase the request timeout with an http_request_timeout filter:
function custom_http_request_timeout() {
    return 90;
}
add_filter( 'http_request_timeout', 'custom_http_request_timeout' );
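If the WooCommerce call is made with raw cURL rather than the WP HTTP API, the equivalent knob is the handle's own timeout. A sketch (the endpoint and the 90-second value are assumptions):

$ch = curl_init('https://example.com/wp-json/wc/v3/products');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // seconds allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 90);        // total seconds allowed for the whole request
$body = curl_exec($ch);
if ($body === false) {
    error_log('cURL Error: ' . curl_error($ch));
}
curl_close($ch);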

How to deal with PHP-FPM finishing state?

I have a website using NGINX & PHP-FPM. As you may know, PHP-FPM has a status page for its pools with detailed information about its processes. My problem is that, as time passes, many processes end up in the "finishing" state and stay there until I reload PHP-FPM.
The bad thing is that "finishing" processes count as active processes, and when the number of active processes exceeds pm.max_children, bad things happen on my website.
I know some PHP-FPM pool parameters for killing idle processes, but I can't find a parameter to kill finishing processes after a certain amount of time.
How do I deal with the PHP-FPM finishing state? Is there a configuration parameter to kill these "finishing" processes after some time? Could this be a misconfiguration between NGINX and PHP-FPM? What are the reasons for "finishing" states?
Here is an image of the php-fpm status page. The red rows are finishing states, which is what I'm trying to fix. The request URIs are the different pages of my site.
Thanks for your knowledge.
PS1: Right now I'm reloading PHP-FPM every 15 minutes, and that more or less "fixes" the finishing states... but I think this could become a serious performance problem with more traffic.
PS2: So far, the only solution I can think of is to read the php-fpm status page, find all processes in the finishing state, and kill by PID those that exceed an arbitrary request duration.
Had the same problem. Here's what I used as a temporary solution:
Create a PHP file with the content:
<?php
fastcgi_finish_request();
?>
edit php.ini:
auto_append_file = /path/to/your/file.php
I had a problem like that and this was my fix:
It turns out we were using invalid Memcached keys in certain situations. This was causing Memcached to die without error and the PHP process was staying alive.
https://serverfault.com/questions/626904/php-fpm-state-finishing-but-never-completes
Neither comment solved nor explained the root cause of the issue. Sample code for the PS2 approach of killing these processes would be something like this:
<?php
// Poll the PHP-FPM status page and kill workers stuck in the "Finishing" state
// for longer than 60 seconds.
while (true) {
    $data_json = file_get_contents("http://localhost/fpmstatus?json&full");
    $data = json_decode($data_json, true);
    foreach ($data['processes'] as $proc) {
        if ($proc['state'] === "Finishing") {
            $pid = $proc['pid'];
            // "request duration" is reported in microseconds; convert to seconds.
            $duration = $proc['request duration'] / 1000000.0;
            echo json_encode(compact("pid", "duration"));
            if ($duration > 60) {
                passthru("kill -9 " . $pid);
                echo " KILLED\n";
            } else {
                echo "\n";
            }
        }
    }
    echo count($data['processes']);
    echo "\n";
    sleep(30);
}
When running this code, I found that errors like this would show up in error.log:
2017/08/06 13:46:42 [error] 20#20: *9161 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 77.88.5.19, server: hostname1, request: "GET /?p=9247 HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.0-fpm.sock:", host: "hostname2"
The obvious mismatch was that the request was handled by the server block for "hostname1", while a block for "hostname2" didn't exist (anymore). I can't say for sure that this was the reason. There were still "finishing" requests even after declaring a server_name _; catch-all block, but they were less frequent than before.
My server had the enablereuse=on setting in the Apache proxy configuration. Removing it fixed the "Finishing" problem.
Also listed in the question: https://serverfault.com/questions/626904/php-fpm-state-finishing-but-never-completes
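For context, that setting is a key=value parameter on the proxy directive that hands PHP requests to PHP-FPM; removing it means each request opens a fresh FastCGI connection instead of reusing one. A sketch of the change (paths and addresses are assumptions):

# Before: FastCGI connection reuse enabled (correlated with stuck "Finishing" workers here)
ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9000/var/www/html/$1" enablereuse=on

# After: enablereuse removed, one FastCGI connection per request
ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9000/var/www/html/$1"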

Drupal site - Memcache Connection errors

We are trying to performance-tune our Drupal site.
We are using Siege to measure performance (as a Drupal visitor).
Env:
Nginx + FastCGI+ Memcache
Siege runs fine for a few seconds, and then we run into connection errors:
Example:
HTTP/1.1 200 29.18 secs: 5877 bytes ==> /
HTTP/1.1 200 29.39 secs: 5877 bytes ==> /
warning: socket: -1656235120 select timed out: Connection timed out
warning: socket: -1673020528 select timed out: Connection timed out
Using the same Siege test configuration, Nginx + FastCGI + Drupal cache seems to work fine.
Example:
HTTP/1.1 200 1.41 secs: 5868 bytes ==> /
HTTP/1.1 200 1.40 secs: 5868 bytes ==> /
As you can see, response time is much higher with Memcache, in addition to the connection errors.
Any idea what could be wrong here, and why Drupal throws errors with Memcache under load?
Memcache runs on a separate instance, with 2 GB of memory allocated to it.
My guess is that you are running out of memcached connections. Run a check of your memcached installation with a simple script every second, then start Siege; I suspect your memcached stops responding after a while.
Test memcache PHP script:
<?php
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die ('Unable to connect');
$version = $memcache->getVersion();
echo 'Server version: '.$version;
?>
What I guess is happening is that you have not disabled persistent connections in Memcache and they hang around in the PHP threads. Memcached can serve ~1023 of them at a time, and that might not be enough while sieging.
You might also try ab, the Apache benchmarking tool, with a close look at the -c switch. Play around with it and see how the results change for different values.
Finally, you should run a tcpdump on your memcached port (usually 11211) on the PHP machine to find out what is happening to the connections. Does Drupal open them? Does the other host respond with a RST, or does it time out?
There was a bug in the PHP Memcache extension documentation that said connections are non-persistent by default. They are persistent by default (well, they were at the time I had this problem).
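If that is what's happening, a quick way to test it is to force non-persistent connections: with the legacy Memcache extension (the same one used in the test script above), connect() is non-persistent while pconnect() is the persistent variant. A sketch:

<?php
$memcache = new Memcache;
// connect() opens a NON-persistent connection (pconnect() would keep it alive in the worker).
$memcache->connect('localhost', 11211) or die('Unable to connect');

// ... use the cache ...

// Close explicitly so the PHP-FPM worker does not keep the socket open.
$memcache->close();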
Feel free to comment on this answer; I'll read the comments and assist further if necessary.
