How to set fastcgi_params_hash_max_size? - nginx

On my Ubuntu server running Nginx + FastCGI (via Webinoly), nginx -t throws the following warning:
nginx: [warn] could not build optimal fastcgi_params_hash,
you should increase either fastcgi_params_hash_max_size: 512 or
fastcgi_params_hash_bucket_size: 64; ignoring fastcgi_params_hash_bucket_size
At first I thought I had to change server_names_hash_max_size, but it was already set to 2048 and turned out to be unrelated. To make sure, I tried lowering its value to 8, at which point nginx -t warned that I should increase it. After restoring it, I got the initial warning about fastcgi_params_hash_max_size again.
I also tried setting fastcgi_params_hash_max_size in my nginx.conf file, but then I got an error that fastcgi_params_hash_max_size is an unknown directive.
So, I guess my question is, how can I change the fastcgi_params_hash_max_size?
Thanks in advance!

There are indeed no fastcgi_params_hash_max_size or fastcgi_params_hash_bucket_size directives. The warning message is generated automatically: nginx takes the hash name (in this case fastcgi_params_hash) and constructs the suggested directive names by appending _max_size and _bucket_size.
NGINX hashes in general are used to store array-like configuration data such as maps or fastcgi_param entries (your case), and there is a separate document about them, which says:
Most hashes have the corresponding directives that allow changing these parameters, for example, for the server names hash they are server_names_hash_max_size and server_names_hash_bucket_size.
So your hash does not have the "corresponding directives", and there is nothing you can do aside from:
Investigating why you have so many fastcgi_param directives in your configuration (or ones with overly long values). It is very likely that your configuration is suboptimal with regard to fastcgi_param entries. Remember, those are passed to the FastCGI backend (e.g. PHP-FPM), and if that data is too big, you are sure to see performance issues simply because of the amount of data travelling between NGINX and PHP-FPM.
Filing a feature request with NGINX about adding these directives.
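As a starting point for the first option, you can count how many fastcgi_param entries actually end up in your configuration. A minimal sketch, assuming a stock /etc/nginx layout (Webinoly's paths may differ):
grep -Rn "fastcgi_param " /etc/nginx/ | wc -l
# also check whether a params file is included more than once:
grep -Rn "include.*fastcgi" /etc/nginx/
One frequent culprit is pulling in fastcgi_params (or fastcgi.conf) twice, which duplicates every entry that goes into the hash.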

Related

Postfix error 'TLS Library Problem' code 45

I've updated the location of certificates that Postfix uses (now served from the nginx directory since they are wildcard) and I'm getting a TLS library error:
Sep 18 13:37:22 blueberry postfix/smtps/smtpd[15717]: warning: TLS library problem: error:14094415:SSL routines:ssl3_read_bytes:sslv3 alert certificate expired:../ssl/record/rec_layer_s3.c:1544:SSL alert number 45:
I have checked the certificate paths in both dovecot and postfix (specifically in /etc/dovecot/conf.d/10-ssl.conf, /etc/postfix/main.cf and /etc/postfix/vmail_ssl.map, have I missed any?) and they all seem to be correct. The error reads ../ssl/record/rec_layer_s3.c:1544, which I assume is some sort of file, but I don't remember ever seeing (or touching) it; in fact I do not know what it is relative to and had no luck finding it. I would appreciate any help.
Edit: Should I symlink the certificates into postfix/ssl?
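SSL alert number 45 is the TLS certificate_expired alert, so a sensible first check is to compare the expiry date of the file Postfix now points at with what the running daemon actually serves. A minimal sketch, with hypothetical paths and hostname:
openssl x509 -noout -dates -in /etc/nginx/ssl/wildcard.example.com.crt
openssl s_client -connect mail.example.com:465 </dev/null 2>/dev/null | openssl x509 -noout -dates
If the two disagree, Postfix most likely needs a reload after the path change; a symlink is not required as long as the configured paths are readable.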

the "user" directive makes sense only if the master process runs with super-user privileges

I had to remove a folder ON LOCAL COMPUTER with my project (because of corrupted git files) and re-git clone it (successfully).
It runs on MAMP's NGINX server.
Now when I try to open the project's main page in a browser, I get "HTTP ERROR 500".
/Applications/MAMP/logs/nginx_error.log:
2018/06/08 18:58:29 [warn] 3218#0: the "user" directive makes sense
only if the master process runs with super-user privileges, ignored in
/Applications/MAMP/conf/nginx/nginx.conf:7
/Applications/MAMP/conf/nginx/nginx.conf:7:
user sergeyfomin staff;
(sergeyfomin is my User name on my Mac)
I guess it has something to do with user privileges I need to re-set on my project after git-cloning it?
Would appreciate any help.
The 500 error and the log warning are probably unrelated, since the warning says that it just ignored the directive. You'll probably have to dig elsewhere for the cause.
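If you want to silence the warning itself (it is harmless here), a minimal sketch for line 7 of the nginx.conf quoted above: the user directive only takes effect when the master process runs as root, which it doesn't under MAMP, so it can simply be commented out:
# /Applications/MAMP/conf/nginx/nginx.conf, line 7
# user sergeyfomin staff;
For the 500 itself, the PHP error log (typically /Applications/MAMP/logs/php_error.log) is a better place to look than the nginx log.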

Rmpi, OpenCPU, and Apparmor: DENIED request for "/"

I have an R package that sends out a job to the OpenMPI cluster I have running by means of the Rmpi package. All works as expected within an R session run from the console. However, when I try to execute the relevant function from my OpenCPU server like this (details changed to protect the innocent):
curl -XPOST http://99.999.999.99/ocpu/library/MyPackage/R/my_cluster_function
I get this error:
R call failed: process died.
(Other, non-cluster-calling functions within the package work as expected via OpenCPU.) I noticed in /var/log/kern.log a variety of requests being DENIED by apparmor, and I have been able to resolve most of them by adding entries to /etc/apparmor.d/opencpu.d/custom to allow OpenMPI to access the files it needs. However, I cannot resolve these two issues (again, IP address changed) related to "open" requests for location "/":
Oct 26 03:49:58 99.999.999.99 kernel: [142952.551234] type=1400 audit(1414295398.849:957): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22486 comm="orted" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Oct 26 03:49:58 99.999.999.99 kernel: [142952.556422] type=1400 audit(1414295398.857:958): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22485 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Adding this to my apparmor rules did not help:
/* r,
Two questions:
Why is opencpu trying to read from my root level directory (or does this mean something else)?
More urgently, how can I resolve this apparmor issue?
Thanks.
You might need to add both AppArmor rules:
/ r,
/* r,
The first rule allows directory listing of / and the second rule allows read access to any file under /.
I don't understand why Rmpi wants to read /, or why you were getting a "process died" error instead of "access denied". Are you sure the problem is completely resolved?
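Concretely, that would mean putting both lines in the custom file mentioned in the question and reloading the profile (the profile file name passed to apparmor_parser below is an assumption; adjust it to whichever file defines the opencpu-main profile on your system):
# /etc/apparmor.d/opencpu.d/custom
/ r,
/* r,
# then reload, e.g.:
sudo apparmor_parser -r /etc/apparmor.d/opencpu
Note that in AppArmor globbing, / r, permits reading the root directory itself and /* r, covers entries directly under it; * does not cross directory boundaries (that would be /** r,, which you almost certainly don't want).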

DOMDocument->load external entity fail for local file with PHP-FPM

The following PHP fails with "failed to load external entity", even though it is trying to load a local XML file:
<?php
$path = "/usr/share/pear/www/horde/config";
#libxml_disable_entity_loader(false);
$dom = new DOMDocument();
$v = $dom->load($path . '/conf.xml');
echo "status = ".($v?'success':'error')."\n";
?>
The basic question is how can this be fixed?
Log file:
2014/03/10 20:07:10 [error] 26117#0: *24 FastCGI sent in stderr: "PHP message: PHP Warning: DOMDocument::load(): I/O warning : failed to load external entity "/usr/share/pear/www/horde/config/conf.xml" in /usr/share/nginx/html/test.php on line 5
PHP message: PHP Stack trace:
PHP message: PHP 1. {main}() /usr/share/nginx/html/test.php:0
PHP message: PHP 2. DOMDocument->load() /usr/share/nginx/html/test.php:5" while reading response header from upstream, client: x.x.x.x, server: example.com, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "example.com"
Uncommenting the libxml_disable_entity_loader line works, but this is not an acceptable solution for a few reasons, e.g. it is systemwide for php-fpm.
Running the PHP from shell returns "status = success". Doing a file_get_contents and then $dom->loadXML($string) also works (i.e. file exists and not a permissions issue). This might be an acceptable workaround, but shouldn't be necessary and doesn't explain why the error is occurring.
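For reference, the workaround mentioned above looks like this (same path as the original snippet):
<?php
$path = "/usr/share/pear/www/horde/config";
// Read the file ourselves and parse from a string; the document is then
// never fetched through libxml's external entity loader.
$string = file_get_contents($path . '/conf.xml');
$dom = new DOMDocument();
$v = $dom->loadXML($string);
echo "status = ".($v?'success':'error')."\n";
?>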
The XML file itself is the Horde config, but the problem does not seem to be the contents of the file, since it also occurs with this XML content:
<?xml version="1.0"?>
<configuration></configuration>
Environment is php and php-fpm 5.3.3, nginx 1.4.6, libxml2 2.7.6. My first guess is something to do with php-fpm, but I can't find any config setting that affects this. Any pearls of wisdom appreciated!
EDIT TO ADD
Restarting php-fpm causes it to work briefly. Disabling APC did not seem to help. Seems like something with php-fpm - but what?
FURTHER TESTING
Some additional info:
I tried hitting the server repeatedly, and get an error about 80% of the time. The pattern isn't random - a few seconds of successes followed by a series of errors;
I added a phpinfo() to the end of the above PHP and diffed the success and failure runs - there is no difference;
If I put libxml_disable_entity_loader(true), I seem to always get an error, which suggests that bug #64938 is at work.
It seems that I need to find why the XML is considered to have external entities.
I think that, if the external entity loader is disabled, it should be obvious that external entities can't be loaded. The only solution is to enable loading of external entities with libxml_disable_entity_loader(false). Since this setting is not thread-safe, I can see two approaches:
Enable it globally and use some other feature to prevent loading of unwanted entities (typically from a network):
Register your own entity loader with libxml_set_external_entity_loader (see the sketch after this list). I think that's the safest solution.
Use the parse option LIBXML_NONET. This should be enough if you simply want to disable network access of libxml2. But you have to make sure to always pass it to calls like DOMDocument::load.
Use locks to protect calls to libxml_disable_entity_loader. This is probably impractical and potentially unsafe.
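A minimal sketch of the entity loader approach (note that libxml_set_external_entity_loader requires PHP >= 5.4, newer than the 5.3.3 mentioned in the question; the whitelisted directory is just an example):
<?php
// Let libxml open only files under a whitelisted local directory;
// returning null refuses the entity and makes the load fail.
libxml_set_external_entity_loader(function ($public, $system, $context) {
    $allowed = '/usr/share/pear/www/horde/config/';
    if ($system !== null && strpos($system, $allowed) === 0 && is_readable($system)) {
        return fopen($system, 'rb');
    }
    return null;
});

$dom = new DOMDocument();
$v = $dom->load('/usr/share/pear/www/horde/config/conf.xml');
echo "status = ".($v?'success':'error')."\n";
?>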

Nginx + FastCGI + PHP (php-fpm) not logging caught errors/warnings

FastCGI doesn't want to log PHP errors properly. Well, that's not entirely true: it logs errors fine, with a little fiddling; it just won't log anything else, such as warnings.
The notorious FastCGI -> Nginx log bug isn't an issue, necessarily. Errors and warnings from php-fpm go straight to Nginx--but only if they're uncaught. That is, if set_error_handler successfully intercepts an error, no log entry is appended. This means that I can see parse errors, but that's about it.
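To make that behaviour concrete, a minimal sketch of a handler that swallows a warning:
<?php
// When a custom handler returns true, PHP treats the error as handled,
// so nothing is written to php-fpm's (or nginx's) error log.
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    return true;
});
trigger_error("this warning never reaches any log", E_USER_WARNING);
?>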
php-fpm doesn't log PHP errors by itself (separate from nginx) without a bit of a hack. php-fpm's instance configuration file includes these two lines by default:
php_admin_value[error_log] = /mnt/log/php-fpm/default.log
php_admin_flag[log_errors] = on
I changed the error_log path, obviously. I had to add the following line to get it to actually log anything:
php_admin_value[error_reporting] = E_ALL & ~E_DEPRECATED & ~E_STRICT
Version note: the E_STRICT part is unnecessary, as I'm using PHP 5.3.27, but I plan on upgrading to 5.4 at some point. With this line, it logs errors--and only errors--to /mnt/log/php-fpm/default.log. Now, this sets error_reporting to the same value that I have set in php.ini, so something is obviously wrong here. In addition, it doesn't log caught errors: the behavior is identical to that of the nginx log. I tried using the numeric value (22527) instead, but still no luck.
I don't care in which log file the entries end up (nginx versus php-fpm), but I do need caught errors to be logged somewhere. I could resort to injecting my own error and exception handlers, but that's a bit hackish, so I'd rather avoid that.
I use this directive in the pool configuration file for PHP-FPM:
catch_workers_output = yes
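catch_workers_output makes php-fpm capture each worker's stdout and stderr and forward them to the master's error log, so anything the workers write there is no longer discarded. A pool snippet combining it with the lines from the question (file path varies by distro):
; e.g. /etc/php-fpm.d/www.conf
catch_workers_output = yes
php_admin_value[error_log] = /mnt/log/php-fpm/default.log
php_admin_flag[log_errors] = on
If your error handler also writes to STDERR (or calls error_log()), that output will then show up in the log as well.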
