Nginx + FastCGI + PHP (php-fpm) not logging caught errors/warnings

FastCGI doesn't want to log PHP errors properly. Well, that's not entirely true: it logs errors fine, with a little fiddling; it just won't log anything else, such as warnings.
The notorious FastCGI -> Nginx log bug isn't an issue, necessarily. Errors and warnings from php-fpm go straight to Nginx--but only if they're uncaught. That is, if set_error_handler successfully intercepts an error, no log entry is appended. This means that I can see parse errors, but that's about it.
php-fpm doesn't log PHP errors by itself (separate from nginx) without a bit of a hack. php-fpm's instance configuration file includes these two lines by default:
php_admin_value[error_log] = /mnt/log/php-fpm/default.log
php_admin_flag[log_errors] = on
I changed the error_log path, obviously. I had to add the following line to get it to actually log anything:
php_admin_value[error_reporting] = E_ALL & ~E_DEPRECATED & ~E_STRICT
Version note: the E_STRICT part is unnecessary, as I'm using PHP 5.3.27, but I plan on upgrading to 5.4 at some point. With this line, it logs errors--and only errors--to /mnt/log/php-fpm/default.log. Now, this sets error_reporting to the same value that I have set in php.ini, so something is obviously wrong here. In addition, it doesn't log caught errors: the behavior is identical to that of the nginx log. I tried using the numeric value (22527) instead, but still no luck.
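(For what it's worth, the numeric value can be double-checked against a given PHP build with a one-liner; on a 5.3 install it prints 22527:)
php -r 'echo E_ALL & ~E_DEPRECATED & ~E_STRICT;'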
I don't care in which log file the entries end up (nginx versus php-fpm), but I do need caught errors to be logged somewhere. I could resort to injecting my own error and exception handlers, but that's a bit hackish, so I'd rather avoid that.
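For reference, the hackish fallback I mean would look something like this (a sketch: re-log whatever the handler intercepts, then fall through):
<?php
// Re-log intercepted errors so they still reach the error log.
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    error_log("PHP [$errno] $errstr in $errfile:$errline");
    return false; // let PHP's normal error handling run as well
});
?>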

I use this directive in the pool configuration file for PHP-FPM:
catch_workers_output = yes
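In context, the relevant lines of the pool file look roughly like this (the pool name and paths are examples, not necessarily yours):
[www]
; send workers' stdout/stderr to the main php-fpm error log
catch_workers_output = yes
php_admin_value[error_log] = /mnt/log/php-fpm/default.log
php_admin_flag[log_errors] = on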

How to set fastcgi_params_hash_max_size?

On my Ubuntu server running Nginx + FastCGI (via Webinoly), nginx -t throws the following warning:
nginx: [warn] could not build optimal fastcgi_params_hash,
you should increase either fastcgi_params_hash_max_size: 512 or
fastcgi_params_hash_bucket_size: 64; ignoring fastcgi_params_hash_bucket_size
At first I thought that I had to change server_names_hash_max_size, but it was already set to 2048 and turned out to be unrelated. To make sure, I tried lowering its value to 8, at which point nginx -t warned that I should increase it. After restoring it, I got the initial warning about fastcgi_params_hash_max_size back.
I also tried setting fastcgi_params_hash_max_size in my nginx.conf file, but then I got an error that fastcgi_params_hash_max_size is an unknown directive.
So, I guess my question is, how can I change the fastcgi_params_hash_max_size?
Thanks in advance!
There are indeed no fastcgi_params_hash_max_size or fastcgi_params_hash_bucket_size directives. This error message is "automatic" in the sense that NGINX takes the hash name (in this case fastcgi_params_hash) and constructs the suggested directive names by appending _max_size and _bucket_size.
NGINX hashes in general are used to store array-like configuration data such as maps or fastcgi_param entries (your case), and there is a separate document about them, which says:
Most hashes have the corresponding directives that allow changing these parameters, for example, for the server names hash they are server_names_hash_max_size and server_names_hash_bucket_size.
So your hash does not have the "corresponding directives", and there is nothing you can do aside from the following:
Investigate why you have so many fastcgi_param directives in your configuration (or ones with overly long values); a sketch of how they pile up follows this list. It is very likely that your configuration is suboptimal with regard to fastcgi_param entries. Remember, those are passed to the FastCGI backend (e.g. PHP-FPM), and if that data is too big, you are sure to hit performance issues simply because of the amount of data travelling between NGINX and PHP-FPM.
File a feature request with NGINX about adding these directives
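To illustrate the first point: every fastcgi_param line, including the ones pulled in via include, becomes an entry in the hash the warning refers to. A hypothetical location block:
location ~ \.php$ {
    include fastcgi_params;   # pulls in the standard set of entries
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}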

the "user" directive makes sense only if the master process runs with super-user privileges

I had to remove the folder with my project ON MY LOCAL COMPUTER (because of corrupted git files) and re-clone it with git (successfully).
It runs on MAMP's NGINX server.
Now when I try to open the project's main page in a browser, I get "HTTP ERROR 500".
/Applications/MAMP/logs/nginx_error.log:
2018/06/08 18:58:29 [warn] 3218#0: the "user" directive makes sense
only if the master process runs with super-user privileges, ignored in
/Applications/MAMP/conf/nginx/nginx.conf:7
/Applications/MAMP/conf/nginx/nginx.conf:7:
user sergeyfomin staff;
(sergeyfomin is my User name on my Mac)
I guess it has something to do with user privileges I need to reset on my project after cloning it?
Would appreciate any help.
The 500 error and the log warning are probably unrelated, since the warning says that it just ignored the directive. You'll probably have to dig elsewhere for the cause.
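If it helps, a couple of places to start digging, assuming MAMP's default log locations (adjust the paths if yours differ):
tail -n 50 /Applications/MAMP/logs/php_error.log
tail -n 50 /Applications/MAMP/logs/nginx_error.log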

Shiny failed to start

I had an application on my shiny-server working fine some months ago. Today I returned to it and got the most curious error. Whenever I try to access it, I just get this message:
An error has occurred
The application failed to start.
The application exited during initialization.
Now the natural step would be to go check the log of this error, right? But the logs created by this error are just empty, 0 bytes. So I am really puzzled about why this is happening. I also tried to run the Shiny sample apps and got the same error, but the server itself seems to be running just fine.
I know this is sort of a vague question, but honestly I don't know what other info I could put here, given the empty logs; it could be that someone has come across a similar issue.
Shiny Server defaults to sanitised error logs, which is not useful in this case. You can change this behaviour by adding the line sanitize_errors off; to /etc/shiny-server/shiny-server.conf. You may need to restart Shiny Server to see the effect.
To open /etc/shiny-server/shiny-server.conf:
sudo nano /etc/shiny-server/shiny-server.conf
To restart Shiny Server:
sudo restart shiny-server
This should give you the verbose error messages you want.
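For orientation, a minimal sketch of where the directive might sit in shiny-server.conf (the other values shown are the stock defaults, not necessarily yours):
run_as shiny;
sanitize_errors off;
server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}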

Swift Juno complains 'Account not found'

I'm new to stack so this might be a very silly mistake.
I'm trying to set up a one-node Swift configuration for a simple proof of concept. I did follow the instructions. However, something is missing. I keep getting this error:
root@lab-srv2544:/etc/swift# swift stat
Traceback (most recent call last):
File "/usr/bin/swift", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 1287, in main
globals()['st_%s' % args[0]](parser, argv[1:], output)
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 492, in st_stat
stat_result = swift.stat()
File "/usr/lib/python2.7/dist-packages/swiftclient/service.py", line 427, in stat
raise SwiftError('Account not found', exc=err)
swiftclient.service.SwiftError: 'Account not found'
Also, the syslog always complains about proxy-server:
Dec 12 12:16:37 lab-srv2544 proxy-server: Account HEAD returning 503 for [] (txn: tx9536949d19d14f1ab5d8d-00548b4d25) (client_ip: 127.0.0.1)
Dec 12 12:16:37 lab-srv2544 proxy-server: 127.0.0.1 127.0.0.1 12/Dec/2014/20/16/37 HEAD /v1/AUTH_71e79a29599149099aa98d5d276eaa0b HTTP/1.0 503 - python-swiftclient-2.3.0 8d2b0748804f4b34... - - - tx9536949d19d14f1ab5d8d-00548b4d25 - 0.0013 - - 1418415397.334497929 1418415397.335824013
Anyone seen this problem before?
When using the swift command to access Swift storage, pass the user ID and password as arguments if they are not set in environment variables.
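For example (a sketch; the auth URL and credentials are placeholders, not values from this setup):
swift --os-auth-url http://127.0.0.1:5000/v2.0 \
      --os-tenant-name demo --os-username demo --os-password secret \
      stat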
The most probable reason for this behavior is a funny order in your "pipeline" directive in /etc/swift/proxy-server.conf
To verify this hypothesis:
comment out your current pipeline, and write this one instead:
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
restart your proxy server with the command
swift-init proxy-server restart
Make sure the environment variables OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_AUTH_URL are defined
try to list your containers with
swift list
If you get a list of containers, then the diagnosis is correct.
Go back to your proxy-server.conf and try adding one element at a time to your pipeline, restarting the server each time and testing each time, until you find the right order.
For your reference see http://docs.openstack.org/developer/swift/deployment_guide.html#proxy-server-configuration
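For context, the pipeline from step 1 lives in the [pipeline:main] section of /etc/swift/proxy-server.conf, roughly like this (a sketch, not your exact file):
[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy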

DOMDocument->load external entity fail for local file with PHP-FPM

The following PHP fails with "failed to load external entity", even though it is trying to load a local XML file:
<?php
$path = "/usr/share/pear/www/horde/config";
#libxml_disable_entity_loader(false); // uncommenting this makes it work, but see below
$dom = new DOMDocument();
$v = $dom->load($path . '/conf.xml'); // emits "failed to load external entity"
echo "status = ".($v?'success':'error')."\n";
?>
The basic question is how can this be fixed?
Log file:
2014/03/10 20:07:10 [error] 26117#0: *24 FastCGI sent in stderr: "PHP message: PHP Warning: DOMDocument::load(): I/O warning : failed to load external entity "/usr/share/pear/www/horde/config/conf.xml" in /usr/share/nginx/html/test.php on line 5
PHP message: PHP Stack trace:
PHP message: PHP 1. {main}() /usr/share/nginx/html/test.php:0
PHP message: PHP 2. DOMDocument->load() /usr/share/nginx/html/test.php:5" while reading response header from upstream, client: x.x.x.x, server: example.com, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "example.com"
Uncommenting the libxml_disable_entity_loader line works, but this is not an acceptable solution for a few reasons, e.g. it is system-wide for php-fpm.
Running the PHP from a shell returns "status = success". Doing a file_get_contents and then $dom->loadXML($string) also works (i.e. the file exists and it is not a permissions issue); that workaround is sketched below. It might be acceptable, but it shouldn't be necessary and doesn't explain why the error is occurring.
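The workaround, for reference:
<?php
// Works: read the file ourselves, then hand libxml a string instead of a path.
$path = "/usr/share/pear/www/horde/config";
$xml = file_get_contents($path . '/conf.xml');
$dom = new DOMDocument();
$v = $dom->loadXML($xml);
echo "status = ".($v?'success':'error')."\n";
?>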
The XML file itself is the Horde config, but the problem does not seem to be the contents of the file, since it also occurs with this XML content:
<?xml version="1.0"?>
<configuration></configuration>
Environment is php and php-fpm 5.3.3, nginx 1.4.6, libxml2 2.7.6. My first guess is something to do with php-fpm, but I can't find any config setting that affects this. Any pearls of wisdom appreciated!
EDIT TO ADD
Restarting php-fpm causes it to work briefly. Disabling APC did not seem to help. Seems like something with php-fpm - but what?
FURTHER TESTING
Some additional info:
I tried hitting the server repeatedly, and got an error about 80% of the time. The pattern isn't random: a few seconds of successes followed by a series of errors;
I added a phpinfo() to the end of the above PHP and did a diff on the success and failure runs - there is no difference;
If I put libxml_disable_entity_loader(true), I seem to always get an error, which suggests that bug #64938 is at work.
It seems that I need to find why the XML is considered to have external entities.
I think that, if the external entity loader is disabled, it should be obvious that external entities can't be loaded. The only solution is to enable loading of external entities with libxml_disable_entity_loader(false). Since this setting is not thread-safe, I can see two approaches:
Enable it globally and use some other feature to prevent loading of unwanted entities (typically from a network):
Register your own entity loader with libxml_set_external_entity_loader; a sketch follows this list. I think that's the safest solution.
Use the parse option LIBXML_NONET. This should be enough if you simply want to disable network access of libxml2. But you have to make sure to always pass it to calls like DOMDocument::load.
Use locks to protect calls to libxml_disable_entity_loader. This is probably impractical and potentially unsafe.
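To illustrate the first option, a minimal sketch of a restrictive entity loader (note that libxml_set_external_entity_loader requires PHP >= 5.4, so on the 5.3.3 environment above it only becomes available after an upgrade):
<?php
// Permit only existing local files; returning null refuses the entity.
libxml_set_external_entity_loader(function ($publicId, $systemId, $context) {
    if ($systemId !== null && is_file($systemId)) {
        return $systemId;  // a path string tells libxml to open this local file
    }
    return null;           // anything else (e.g. a network URL) is refused
});

$dom = new DOMDocument();
// LIBXML_NONET (the second option) can be combined for defence in depth:
$v = $dom->load('/usr/share/pear/www/horde/config/conf.xml', LIBXML_NONET);
echo "status = ".($v ? 'success' : 'error')."\n";
?>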
