What is the NGINX [notice] "signal process started" error message?

Regarding the nginx error log: what does 2020/10/23 06:51:45 [notice] 361#361: signal process started mean?
Some more context:
I have some Raspberry Pis communicating with my Django application on a DigitalOcean Ubuntu droplet running nginx as the web server. These Raspberry Pis have stopped communicating with my server, and they are physically very far from me. I can see their last communication with my server was at 2020/10/23 06:51:41, seconds before the nginx message was logged.
A user who has access to the Pis said they did not lose power and their internet is working, so they rebooted the Pis; still nothing.
I have tried:
sudo systemctl restart nginx followed by sudo systemctl restart gunicorn
This did not resolve the issue, and I can't seem to find any documentation for this message.
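For context when digging into a message like this, systemd keeps the surrounding log lines; a minimal sketch, assuming the default Ubuntu paths:

# Current nginx state plus the most recent journal lines
sudo systemctl status nginx
# Journal entries around the timestamp in question
sudo journalctl -u nginx --since "2020-10-23 06:50" --until "2020-10-23 06:55"
# Or read the error log directly (default path on Ubuntu)
sudo tail -n 50 /var/log/nginx/error.log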

Have you verified the firewall rules of your DigitalOcean instance? Your ports may not be open for HTTP and HTTPS. Cross-check ports 80 and 443: are they open or closed?
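One way to cross-check, assuming ufw as the firewall on Ubuntu and netcat installed on a remote machine (the droplet IP below is a placeholder):

# On the droplet: list the active firewall rules
sudo ufw status verbose
# From a remote machine: test whether ports 80 and 443 accept connections
nc -zv YOUR_DROPLET_IP 80
nc -zv YOUR_DROPLET_IP 443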

Related

Nginx fails with port 80, but only when I am connected to the virtual server by SSH

I first created a virtual server on Linode.com.
Then I installed Nginx and left the default configuration.
I pointed my domain to the server's IP address.
The command "systemctl status nginx" shows an error; this line seems relevant:
"Jun 18 14:40:13 localhost nginx[25837]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)"
I searched and racked my brain until I tried this:
1. Changed the config file so that Nginx/the website listens on port 8080.
2. This worked fine; I then changed the settings back to port 80 (it stopped working).
3. The process had not been halted, so step 2 allowed me to run "nginx -s reload" (the reload signal needs the process running, and I couldn't restart Nginx with systemctl due to the original error).
4. While the website was not loading, I exited the SSH connection (which, I checked, was using port 22, not 80).
5. My website then started working!
6. Re-entered the server via SSH and found Nginx listening on port 80 just fine.
I want to understand why this worked, as it makes no sense that port 80 should be unavailable to Nginx just because I have an SSH connection.
I don't think it's unique to Linode's configuration, as I have a DigitalOcean droplet too, and it was failing to listen on port 80 as well, although on that droplet I never tried the fix I discovered on Linode.
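For what it's worth, when bind() fails with "Address already in use", standard tools can show which process is actually holding port 80 (a sketch; output formats vary by distro):

# Show the process listening on port 80 (either command works)
sudo ss -tlnp 'sport = :80'
sudo lsof -i :80
# A stale nginx master left over from the port-8080 experiment would show up here
ps aux | grep [n]ginx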

serverspec test fails for listening port

I have a kitchen-ansible test that runs Serverspec as a verifier. The test runs on two containers, one running Amazon Linux 1 and the other Amazon Linux 2. The Ansible code installs a Keycloak server, which listens on ports 8080 and 8443.
In the Amazon Linux 1 container everything is fine, and Serverspec reports the ports as listening.
In the Amazon Linux 2 container the installation also finishes without errors, but Serverspec reports the ports as not listening. As I found out, Serverspec is wrong.
After logging in to the container and running netstat -tulpen | grep LISTEN, the ports show as listening. Serverspec checks with the ss command: /bin/sh -c ss\ -tunl\ \|\ grep\ -E\ --\ :8443\\\ (i.e. ss -tunl | grep -E -- :8443).
So I logged in to the Amazon Linux 1 container to check the output of the ss command there, and it showed no listening on either port.
So, does anyone have a clue why Serverspec succeeds on Amazon Linux 1 and fails on Amazon Linux 2, even though the ss command reports no listening ports in both containers?
The root cause was that the ports weren't bound quickly enough. Serverspec starts checking before the service has started completely. Logging in to the container takes more time, so by then the service has started successfully and the ports are bound.
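A common workaround is to poll until the port is actually bound before the verifier runs; a minimal sketch, assuming ss is available in the container and using port 8443 and a 60-second timeout as placeholders:

# Wait up to 60 seconds for port 8443 to be bound
for i in $(seq 1 60); do
    if ss -tunl | grep -q ':8443 '; then
        echo "port 8443 is listening"
        break
    fi
    sleep 1
done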

Kubernetes API server failing to start: TLS handshake error

Out of nowhere one of our API servers has started to fail with the following error:
http: TLS handshake error from 172.23.88.213:17244: EOF
It throws this error for every single node in the cluster, thus failing to start. This started happening this morning with no changes to any infrastructure.
Things I've tried that haven't helped:
Manually restarted the weave Docker container on the master node.
Manually killed and rescheduled the API server.
Manually restarted the Docker daemon.
Manually restarted the kubelet service.
Checked that all SSL certs are valid (they are).
Checked inodes: thousands free.
Pinged the IP addresses of the other nodes in the cluster; all return OK with 0% packet loss.
Checked the journalctl and systemctl logs of the kubelet services; the only significant errors I see relate to the TLS handshake error.
Cluster specs:
Cloud provider: AWS
Kubernetes version: 1.11.6
Kubelet version: 1.11.6
Kops version: 1.11
I'm at a bit of a loss as to how to debug this further.
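One further thing worth trying is performing the TLS handshake by hand against the API server and inspecting what comes back; a sketch, with a placeholder endpoint (kops typically exposes the API on port 443):

# Attempt a handshake and print the certificate chain
openssl s_client -connect api.YOUR-CLUSTER.example.com:443 -showcerts </dev/null
# Check the serving certificate's validity dates
openssl s_client -connect api.YOUR-CLUSTER.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -dates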

Creating a Docker repo in Artifactory with a dedicated port fails with "SocketException: Permission denied"

I am running Artifactory Pro (5.3.1) and was trying to use the Docker registry functionality.
I created a Docker repository and gave it port 5001 in the "Registry Port" config.
However, nothing is running on port 5001 ("telnet localhost 5001" refuses to connect), and the logs show this:
[http-nio-8081-exec-7] [ERROR] (o.a.s.s.SshAuthServiceImpl:210) - Failed to start SSH server
java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_72-internal]
at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_72-internal]
at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_72-internal]
at sun.nio.ch.AsynchronousServerSocketChannelImpl.bind(AsynchronousServerSocketChannelImpl.java:162) ~[na:1.8.0_72-internal]
at org.apache.sshd.common.io.nio2.Nio2Acceptor.bind(Nio2Acceptor.java:66) ~[sshd-core-0.14.0.jar:0.14.0]
Any idea what could cause a "permission denied"? Nothing is running on that port (and I get the same error for any other port). It's on Ubuntu 14.04.
I had misunderstood how the Docker registry works with Artifactory.
The Artifactory service doesn't actually open the port assigned to the repo (5001 in this case); instead, the reverse proxy listens on it and forwards requests (with the right X-Forwarded-Port header) to the "normal" Artifactory service port (e.g. 8081).
After setting up the reverse proxy for it, it worked fine.
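For reference, a minimal sketch of what such a reverse-proxy rule can look like in nginx (hypothetical hostname, a repo named docker-local assumed; Artifactory's reverse-proxy generator produces a more complete config, and a real Docker registry additionally needs TLS on this port):

server {
    listen 5001;                          # the repo's "Registry Port"
    server_name artifactory.example.com;  # hypothetical hostname
    location / {
        # Forward registry traffic to the normal Artifactory port, telling
        # Artifactory which external port the client actually used
        proxy_pass         http://localhost:8081/artifactory/api/docker/docker-local/;
        proxy_set_header   Host              $http_host;
        proxy_set_header   X-Forwarded-Port  5001;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}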

Why am I getting the error "Installation failed. Failed to receive heartbeat from agent." during Cloudera installation?

I am installing Cloudera Manager on a local machine.
When trying to add a new host, I get the following error:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server
(check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being
added
(some of the logs can be found in the installation details).
I checked the logs; they show something like "hostname differs from canonical name".
So I also changed the hostname in /etc/resolv.conf.
But I am still getting the same error.
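Since the agent log complains that the hostname differs from the canonical name, it can help to compare the two directly on the host being added (standard commands; the agent resolves the FQDN roughly the way Python's socket.getfqdn() does):

# Short hostname vs. fully-qualified (canonical) name -- these must be consistent
hostname
hostname -f
# What Python resolves, which is close to what the agent sees
python -c 'import socket; print(socket.getfqdn())'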
I had the same error due to a simple mistake in the file /etc/hosts.
Have you checked that you have DNS and reverse DNS?
Then, to check whether port 7182 is open, run telnet IP 7182 (replace IP with the host of the Cloudera Manager Server).
If there are still problems, maybe you have forgotten to deactivate the firewall (iptables).
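For example, a correct /etc/hosts entry lists the IP first, then the fully-qualified name, then the short alias (hypothetical IP and names):

192.168.1.10    host1.example.com    host1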
Regards, K.
To resolve this issue, first check all the open ports on your server and the services listening on them, using the command: sudo netstat -lpten
Check whether anything is running on port 9000 or 9001. Often a Java service required for the setup runs on port 9000, and the cloudera-scm-agent listener also runs on port 9000. To overcome this conflict, you can reconfigure the agent's port in /etc/cloudera-scm-agent/config.ini by changing it as below:
--------------------------------------------------
## It should not normally be necessary to modify these.
# Port that the CM agent should listen on.
listening_port=9001
--------------------------------------------------
Then restart the cloudera-scm-agent service with the command:
service cloudera-scm-agent restart
To verify that this port is not also in use by the sshd service, check the ports configured in /etc/ssh/sshd_config.
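A quick way to compare the two configurations side by side (plain grep over the files mentioned above):

# Port(s) sshd is configured to use
grep -i '^port' /etc/ssh/sshd_config
# Port the Cloudera agent is configured to listen on
grep -i 'listening_port' /etc/cloudera-scm-agent/config.ini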
I hope this resolution will work for others too.
Cheers,
Ankit Gupta
