NGINX in containerized deployment returns error "getrlimit(RLIMIT_NOFILE): 1048576:1048576" - nginx

When we moved our test deployments to Azure, NGINX reported what appears to be an OS-level error.
The same deployment works well on other cloud platforms, and the OS version is kept uniform across all of our test environments.
OS version:
Linux version 4.19.0-18-cloud-amd64 (debian-kernel@lists.debian.org) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP Debian 4.19.208-1 (2021-09-29)
Docker container error:
2022/03/04 14:42:58 [notice] 14#14: using the "epoll" event method
2022/03/04 14:42:58 [notice] 14#14: nginx/1.21.6
2022/03/04 14:42:58 [notice] 14#14: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/03/04 14:42:58 [notice] 14#14: OS: Linux 4.19.0-18-cloud-amd64
2022/03/04 14:42:58 [notice] 14#14: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/03/04 14:42:58 [notice] 14#14: start worker processes
2022/03/04 14:42:58 [notice] 14#14: start worker process 15
2022/03/04 14:42:58 [notice] 14#14: start worker process 16
Please advise.
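For reference, the getrlimit(RLIMIT_NOFILE): 1048576:1048576 line is logged at [notice] level and just reports the soft and hard file-descriptor limits the master process sees. One way to inspect or override that limit (a sketch, assuming the container is started with plain docker run; nginx:1.21.6 is only a placeholder image tag, and on a managed Kubernetes service the limit comes from the node's container runtime instead) is:
# print the soft and hard nofile limits a container on this host inherits
docker run --rm nginx:1.21.6 sh -c 'ulimit -Sn; ulimit -Hn'
# start the container with an explicit, lower nofile limit
docker run -d --ulimit nofile=65536:65536 nginx:1.21.6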

Related

Kong k8s deployment fails after seemingly innocent EKS worker AMI upgrade

After upgrading the AWS EKS worker AMI to a new version, our Kong deployment on Kubernetes fails.
kong version: 1.4
old ami version: amazon-eks-node-1.14-v20200423
new ami version: amazon-eks-node-1.14-v20200723
kubernetes version: 1.14
I see that the new AMI comes with a new Docker version, 19.03.06, while the old one ships with 18.09.09. Could this cause the issue?
In the Kong pod logs I can see a lot of signal 9 exits:
2020/08/11 09:00:48 [notice] 1#0: using the "epoll" event method
2020/08/11 09:00:48 [notice] 1#0: openresty/1.15.8.2
2020/08/11 09:00:48 [notice] 1#0: built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
2020/08/11 09:00:48 [notice] 1#0: OS: Linux 4.14.181-140.257.amzn2.x86_64
2020/08/11 09:00:48 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/08/11 09:00:48 [notice] 1#0: start worker processes
2020/08/11 09:00:48 [notice] 1#0: start worker process 38
2020/08/11 09:00:48 [notice] 1#0: start worker process 39
2020/08/11 09:00:48 [notice] 1#0: start worker process 40
2020/08/11 09:00:48 [notice] 1#0: start worker process 41
2020/08/11 09:00:50 [notice] 1#0: signal 17 (SIGCHLD) received from 40
2020/08/11 09:00:50 [alert] 1#0: worker process 40 exited on signal 9
2020/08/11 09:00:50 [notice] 1#0: start worker process 42
2020/08/11 09:00:51 [notice] 1#0: signal 17 (SIGCHLD) received from 39
2020/08/11 09:00:51 [alert] 1#0: worker process 39 exited on signal 9
2020/08/11 09:00:51 [notice] 1#0: start worker process 43
2020/08/11 09:00:52 [notice] 1#0: signal 17 (SIGCHLD) received from 41
2020/08/11 09:00:52 [alert] 1#0: worker process 41 exited on signal 9
2020/08/11 09:00:52 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:52 [notice] 1#0: start worker process 44
2020/08/11 09:00:48 [debug] 38#0: *1 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2020/08/11 09:00:48 [debug] 38#0: *1 [lua] globalpatches.lua:269: randomseed(): random seed: 255136921215 for worker nb 0
2020/08/11 09:00:48 [debug] 38#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=38, data=nil
2020/08/11 09:00:48 [notice] 38#0: *1 [lua] cache_warmup.lua:42: cache_warmup_single_entity(): Preloading 'services' into the cache ..., context: init_worker_by_lua*
2020/08/11 09:00:48 [warn] 38#0: *1 [lua] socket.lua:159: tcp(): no support for cosockets in this context, falling back to LuaSocket, context: init_worker_by_lua*
2020/08/11 09:00:53 [notice] 1#0: signal 17 (SIGCHLD) received from 38
2020/08/11 09:00:53 [alert] 1#0: worker process 38 exited on signal 9
2020/08/11 09:00:53 [notice] 1#0: start worker process 45
2020/08/11 09:00:54 [notice] 1#0: signal 17 (SIGCHLD) received from 42
2020/08/11 09:00:54 [alert] 1#0: worker process 42 exited on signal 9
2020/08/11 09:00:54 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:54 [notice] 1#0: start worker process 46
2020/08/11 09:00:55 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:55 [notice] 1#0: signal 17 (SIGCHLD) received from 43
2020/08/11 09:00:55 [alert] 1#0: worker process 43 exited on signal 9
2020/08/11 09:00:55 [notice] 1#0: start worker process 47
2020/08/11 09:00:56 [notice] 1#0: signal 17 (SIGCHLD) received from 44
2020/08/11 09:00:56 [alert] 1#0: worker process 44 exited on signal 9
2020/08/11 09:00:56 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:56 [notice] 1#0: start worker process 48
2020/08/11 09:00:56 [notice] 1#0: signal 17 (SIGCHLD) received from 45
2020/08/11 09:00:56 [alert] 1#0: worker process 45 exited on signal 9
2020/08/11 09:00:58 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:58 [notice] 1#0: start worker process 49
2020/08/11 09:00:59 [notice] 1#0: signal 17 (SIGCHLD) received from 46
2020/08/11 09:00:59 [alert] 1#0: worker process 46 exited on signal 9
2020/08/11 09:00:59 [notice] 1#0: signal 29 (SIGIO) received
2020/08/11 09:00:59 [notice] 1#0: start worker process 50
2020/08/11 09:00:59 [notice] 1#0: signal 17 (SIGCHLD) received from 47
The only critical message is:
[crit] 235#0: *45 [lua] balancer.lua:749: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
Looking at kubectl describe pod kong... I see OOMKilled.
Could this be a memory issue?
The new node AMI's ulimit (nofile) changed to 1048576, a big jump from the previous 65536. That caused memory issues with our current Kong setup, and thus the deployment failed.
Changing the new nodes' file limit back to the previous value fixed the Kong deployment.
In the end we decided to increase Kong's memory request instead, which also fixes the issue.
Relevant GitHub issue.
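For reference, a sketch of the two workarounds described above, assuming the Deployment is simply named kong in a kong namespace and using placeholder memory values (adjust both to your chart):
# Option A: raise Kong's memory request/limit so the larger nofile default no longer leads to OOM kills
kubectl -n kong set resources deployment kong --requests=memory=512Mi --limits=memory=1Gi
# Option B: pin dockerd's default nofile limit back to 65536 on the worker nodes (requires node access)
# on Amazon Linux 2 EKS AMIs, edit the OPTIONS line in /etc/sysconfig/docker, e.g.
#   OPTIONS="--default-ulimit nofile=65536:65536 ..."
sudo systemctl restart docker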

RStudio Server is not opening after stopping and restarting

My RStudio Server was hanging too often while loading an R Shiny app. After googling around, I tried stopping and starting the RStudio Server again. I also tried to kill all processes running on port 8787, but had no luck solving the issue. Now RStudio Server keeps waiting forever when I open it in a web browser.
I used the command below to kill the process running on port 8787. After running the command there was no output.
sudo kill -TERM 20647
(20647 is the PID of the rserver process listening on port 8787; I got this number by running the 'sudo netstat -ntlp | grep :8787' command).
To stop and restart RStudio Server, I used the commands below:
sudo rstudio-server stop
sudo rstudio-server start
The expected result is a working RStudio Server that doesn't hang while loading the Shiny app.
After running the status command, I found the error below logged for RStudio Server:
rstudio-server.service - RStudio Server
Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-08-28 04:50:07 CDT; 11s ago
Process: 31611 ExecStop=/usr/bin/killall -TERM rserver (code=exited, status=0/SUCCESS)
Process: 31609 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
Main PID: 31610 (code=exited, status=1/FAILURE)
CGroup: /system.slice/rstudio-server.service
└─20647 /usr/lib/rstudio-server/bin/rserver
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service holdoff time over, scheduling r...rt.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Stopped RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: start request repeated too quickly for rstudio-server....ice
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Failed to start RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
As a last resort, I restarted the VM where I am running RStudio Server. That seems to have resolved my issue.
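For reference, a less drastic sequence than rebooting the whole VM (a sketch, assuming a systemd-based install like the one shown in the status output above):
sudo systemctl stop rstudio-server
# check whether anything is still listening on port 8787; the PID is in the last column
sudo netstat -ntlp | grep :8787
# kill takes a process ID, not a port number; <PID> is whatever the previous command reported
sudo kill -TERM <PID>
sudo systemctl start rstudio-server
sudo systemctl status rstudio-server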

MAMP server not working after OS Sierra update

I recently updated to the latest version of macOS and can't get an old MAMP-dependent project of mine to run. It was working fine right before I upgraded. I googled around, thinking it might be something simple like the previous OS upgrade fix that only required renaming a file, but I couldn't find anything.
Has anyone else had any MAMP issues with the latest macOS update or what are some common local environment fixes to look into after an OS update?
apache_error_log:
[Sun Nov 27 17:40:12 2016] [notice] FastCGI: process manager initialized (pid 2517)
[Sun Nov 27 17:40:12 2016] [notice] Digest: generating secret for digest authentication ...
[Sun Nov 27 17:40:12 2016] [notice] Digest: done
[Sun Nov 27 17:40:13 2016] [notice] Apache/2.2.26 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.6 PHP/5.5.10 DAV/2 mod_ssl/2.2.26 OpenSSL/0.9.8zh mod_perl/2.0.8 Perl/v5.18.2 configured -- resuming normal operations
[Sun Nov 27 18:41:25 2016] [notice] caught SIGTERM, shutting down
[Sun Nov 27 18:41:34 2016] [notice] FastCGI: process manager initialized (pid 2928)
[Sun Nov 27 18:41:34 2016] [notice] Digest: generating secret for digest authentication ...
[Sun Nov 27 18:41:34 2016] [notice] Digest: done
[Sun Nov 27 18:41:35 2016] [notice] Apache/2.2.26 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.6 PHP/5.5.10 DAV/2 mod_ssl/2.2.26 OpenSSL/0.9.8zh mod_perl/2.0.8 Perl/v5.18.2 configured -- resuming normal operations
I was also having trouble with MAMP after upgrading macOS to High Sierra: the Apache server was not starting. My MAMP is an old version (1.9). To fix it, I just created a folder called logs inside the Library folder, and MAMP returned to normal operation.
I found this solution by looking at the log files: in the Apache log (apache_error_log) I saw that Apache was looking for a "logs" folder inside the "Library" folder, which did not exist. Creating that folder solved it.
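For reference, the fix is a one-liner; the path below assumes a default MAMP install under /Applications/MAMP (check apache_error_log for the exact directory it complains about):
# create the missing logs directory that Apache expects under its ServerRoot
mkdir -p /Applications/MAMP/Library/logs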

Nginx cache, redis_pass

I have been banging my head against a wall all day.
I am using the following nginx configuration to test something:
location /help {
    set $redis_key "cache:$scheme://$host$request_uri";
    default_type text/html;
    redis_pass 127.0.0.1:6379;
    error_page 404 = @upstream;
}
There is a key and value inside my redis instance for the cache:$scheme.... (in my case cache:http://localhost/help)
I know they exist because I can monitor redis-cli for the nginx redis request, copy the "get" "cache:http://localhost/help", paste it into another redis-cli window and get the expected response.
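For reference, that check can be reproduced from the shell like this (the key name is the one from the example above):
# terminal 1: watch the GET that nginx issues as the request comes in
redis-cli monitor
# terminal 2: fetch the same key by hand
redis-cli get "cache:http://localhost/help"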
The problem is with nginx: it's not getting the response. Again, I can see it connect from inside redis-cli -> monitor, and I know the key and value exist.
From the nginx error log I can see this:
2016/04/08 16:52:42 [notice] 9304#0: worker process 6328 exited with code 0
2016/04/08 16:52:42 [notice] 9304#0: signal 29 (SIGIO) received
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
2016/04/08 16:52:49 [notice] 9304#0: signal 17 (SIGCHLD) received
2016/04/08 16:52:49 [alert] 9304#0: worker process 7328 exited on signal 6 (core dumped)
2016/04/08 16:52:49 [notice] 9304#0: start worker process 7516
2016/04/08 16:52:49 [notice] 9304#0: signal 29 (SIGIO) received
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
2016/04/08 16:52:50 [notice] 9304#0: signal 17 (SIGCHLD) received
2016/04/08 16:52:50 [alert] 9304#0: worker process 7335 exited on signal 6 (core dumped)
2016/04/08 16:52:50 [notice] 9304#0: start worker process 7544
2016/04/08 16:52:50 [notice] 9304#0: signal 29 (SIGIO) received
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
Has this happened to anyone else, or can someone kick me in the right direction?
Thanks in advance
For anyone reading this in the future:
Firstly, hello from the past!
Secondly, it turns out the nginx PageSpeed module and this kind of caching are incompatible.
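If anyone hits the same crash, one possible workaround (a sketch; not verified against every ngx_pagespeed version) is to switch PageSpeed off for the location that uses redis_pass, so the two modules never process the same response:
location /help {
    pagespeed off;    # keep ngx_pagespeed away from responses served from Redis
    set $redis_key "cache:$scheme://$host$request_uri";
    default_type text/html;
    redis_pass 127.0.0.1:6379;
    error_page 404 = @upstream;
}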

How to solve exit signal Segmentation fault (11)

After I migrated WordPress from a shared host to my own VPS, I got the unpleasant surprise that half of the back-end pages on my websites rendered a "No data received" ERR_EMPTY_RESPONSE error.
Determined to find out what caused the problem, I started troubleshooting. Seven of my eight websites were affected, all running WordPress 4.1.5. Upgrading to 4.2.2 did not fix the problem.
The only unaffected website is an old one running WordPress 3.3.1; upgrading it to 4.2.2 results in the same errors. When I try to do a fresh WordPress install, the same error pops up after step one (both when installing 4.2.2 and 3.3.1).
The seven sites run four different themes, and I tried disabling all plugins; still no luck.
I had a look at the error logs and will copy a fragment here, since it might provide some useful info. I've been googling all these lines but can't find a solution yet.
[Sun Jun 21 10:22:41 2015] [notice] caught SIGTERM, shutting down
[Sun Jun 21 10:22:42 2015] [notice] SSL FIPS mode disabled
[Sun Jun 21 10:22:42 2015] [warn] RSA server certificate CommonName (CN) `localhost' does NOT match server name!?
[Sun Jun 21 10:22:42 2015] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sun Jun 21 10:22:43 2015] [notice] SSL FIPS mode disabled
[Sun Jun 21 10:22:43 2015] [warn] RSA server certificate CommonName (CN) `localhost' does NOT match server name!?
[Sun Jun 21 10:22:43 2015] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips DAV/2 PHP/5.2.17 configured -- resuming normal operations
[Sun Jun 21 10:22:59 2015] [notice] child pid 3943 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:00 2015] [notice] child pid 3944 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:02 2015] [notice] child pid 3945 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:03 2015] [notice] child pid 3942 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:04 2015] [notice] child pid 4080 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:05 2015] [notice] child pid 3946 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:06 2015] [notice] child pid 4083 exit signal Segmentation fault (11)
[Sun Jun 21 10:23:07 2015] [notice] child pid 4082 exit signal Segmentation fault (11)
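If the log lines alone don't narrow it down, one generic way to see what is segfaulting (a sketch, assuming a Red Hat-style VPS where the Apache binary is /usr/sbin/httpd; adjust paths as needed) is to let Apache write core dumps and read the backtrace:
# 1) in httpd.conf, tell Apache where to put core files:
#      CoreDumpDirectory /tmp/apache-cores
sudo mkdir -p /tmp/apache-cores && sudo chmod 1777 /tmp/apache-cores
# 2) allow core files for the Apache service, e.g. add "ulimit -c unlimited" to /etc/sysconfig/httpd
sudo service httpd restart
# 3) after the next segfault, open the newest core file (<pid> is a placeholder) and type "bt" at the (gdb) prompt
sudo gdb /usr/sbin/httpd /tmp/apache-cores/core.<pid>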
