Monit fails to execute the complete command in start program - syslog

I'm using Monit 5.4 on a Mac OS X 10.7.4 machine. When I try to run the example configuration
check process syslogd with pidfile /var/run/syslogd.pid
start program = "/etc/init.d/sysklogd start"
stop program = "/etc/init.d/sysklogd stop"
if 5 restarts within 5 cycles then timeout
from the Monit wiki page, I get the following error:
'syslogd' process is not running
'syslogd' trying to restart
'syslogd' start: /etc/init.d/sysklogd
'syslogd' failed to start
Monit does not execute the complete command given in the "start program" line of the monitrc file. It takes only the first word of the command, tries to execute that, and fails. Is this a known issue? If yes, is there a workaround? If not, what am I missing, and how do I get it working?
Thanks in advance.

Try this (from http://mmonit.com/wiki/Monit/FAQ#execution):
start program = "/bin/bash -c '/etc/init.d/blah start'"

Does /etc/init.d/sysklogd actually exist?
On 10.8 I have /etc/init.d/syslog and manually running /etc/init.d/syslog restart works fine.

Related

airflow scheduler: WARNING - (parsing_processes = 2) when using sqlite. So we set parallelism to 1

How can I fix this problem?
I tried to run airflow scheduler, but it does not work.
It seems to me that the warning "Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1." is not connected to the errors. In my case the problem was caused by processes using port 8793. To fix it, you could try these commands:
lsof -i :8793 (install lsof beforehand if needed)
Kill the processes that the command prints (see the sketch below)
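In shell terms, that amounts to something like this (a sketch; that port 8793 is the culprit on your machine is an assumption carried over from the answer above, so check what each process is before killing it):
# List the processes bound to port 8793
lsof -i :8793
# Terminate a PID printed by lsof
kill <PID>
# Only if it ignores SIGTERM, force it
kill -9 <PID>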

How can I stop the http server downloaded using 'npm install http-server'?

How can I stop the http server that I downloaded using the 'npm install http-server' command in the terminal (console) and then launched?
Simply press Ctrl+C. If you read the output after you launch it, you should see:
Starting up http-server, serving xxx
Available on:
http://<some ip>:<some port>
Hit CTRL-C to stop the server
It's built on Node, so if it gets stuck you can stop it by killing the Node process. List all the Node process IDs, find which PID belongs to your server, and kill that one.
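For example (a sketch using standard pgrep/kill; double-check that the PID you find really belongs to http-server before killing it):
# List Node processes with their full command lines
pgrep -fl node
# Narrow it down to the http-server process
pgrep -f http-server
# Terminate it by PID
kill <PID>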

Weird delay when using "tail -f" command

To monitor a log file, I have to connect over SSH and redirect the output of the log file (let's call it RemoteLog.txt) to a local machine so it can be read by a Java program and displayed in a GUI.
Right now I have the output redirected out of the ssh connection and onto the local machine with the command:
ssh remote@ip.address tail logs/RemoteLog.txt -f > ~/Log/LocalLog.txt
and technically everything works fine, with one exception: for some reason LocalLog.txt only gets updated with the changes to RemoteLog.txt every 35 seconds, to the millisecond.
The number of changes to RemoteLog, the number of lines specified with the tail command, and using the >> operator instead of > make no difference; there is always a 35-second delay between updates of LocalLog.txt while RemoteLog is constantly updating.
Does anyone have any clue why this might be?
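One common culprit worth ruling out (an assumption, not a confirmed diagnosis for this case) is output buffering: when stdout is a pipe or file instead of a terminal, output on the remote side may be flushed in large chunks rather than line by line. Two things to try:
# Force a pseudo-terminal so remote output stays line-buffered
ssh -t remote@ip.address tail -f logs/RemoteLog.txt > ~/Log/LocalLog.txt
# Or disable tail's stdout buffering with stdbuf (GNU coreutils)
ssh remote@ip.address 'stdbuf -oL tail -f logs/RemoteLog.txt' > ~/Log/LocalLog.txt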

Start Scrapyd as service

I would like to run scrapyd as a service, but when I start scrapyd and then close the SSH session, scrapyd stops automatically.
When I try to start it as a service like this, I get an error:
root@vps:~# service scrapyd start
scrapyd: Failed to start scrapyd.service: Unit scrapyd.service failed to load: No such file or directory.
And when I try to start scrapyd with daemon, the curl request returns:
{"status": "error", "message": "Use \"scrapy\" to see available commands", "node_name": "vps"}
Can someone help me start scrapyd as a service, please?
daemon --chdir=/home/Crawler scrapyd
I needed to set --chdir so the service loads from the Scrapy project folder.
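Alternatively, the "Unit scrapyd.service failed to load" error above just means no systemd unit exists yet. A minimal unit could look like this (a sketch; the ExecStart path, user, and /home/Crawler working directory are assumptions based on this thread, so adjust them to your install):
[Unit]
Description=Scrapyd
After=network.target

[Service]
User=root
WorkingDirectory=/home/Crawler
ExecStart=/usr/bin/scrapyd
Restart=on-failure

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/scrapyd.service, then run systemctl daemon-reload followed by systemctl start scrapyd.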
I could not get scrapyd to start through /etc/rc.local or crontab, so I found a workaround. I am sure there is a better way, but in the meantime this worked for me.
I created a Python file, start.py:
import os
# Launch scrapyd in the background, redirecting its output to a log file
os.system('/usr/bin/python3 /home/ubuntu/.local/bin/scrapyd > /home/ubuntu/scrapy.log &')
Then simply call start.py from crontab. Add this to the crontab file via crontab -e:
@reboot /usr/bin/python3 /home/ubuntu/start.py
Basically, what I did is launch scrapyd through the shell by calling the Python file from crontab.
If anyone finds something better please comment.

Autosys jobs hung

We have jobs getting stuck on the AutoSys R11 screen because the application server is down.
Is there any way to monitor whether AutoSys itself is up and running?
Note: the jobs that got stuck show as completed in the database, but the dependent jobs cannot start, even though from the front end the stuck jobs still show as running.
Please help
The chk_auto_up command checks whether the application server, event server, scheduler, and agent are working fine.
The chase command checks whether the agent is running fine.
The autoping command checks whether the agent can communicate with the application server.
Check the log files of the components with the commands below:
autosyslog -e (scheduler)
autosyslog -s (server)
autosyslog -d j (job)
Check the status of each component manually with the commands below:
unisrvcntr status waae_server.$AUTOSERV
unisrvcntr status waae_agent-$AGENT_NAME
unisrvcntr status waae_webserver.$AUTOSERV
unisrvcntr status waae_sched.$AUTOSERV
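If you want to run these checks automatically, a minimal health-check script could wrap the same status commands (a sketch; it assumes unisrvcntr status prints a line containing "running" for a healthy component, which you should verify on your release, and the mail recipient is a placeholder):
#!/bin/sh
# Alert if any WAAE component does not report as running.
# Assumption: healthy "unisrvcntr status" output contains the word "running".
for comp in waae_server.$AUTOSERV waae_agent-$AGENT_NAME waae_webserver.$AUTOSERV waae_sched.$AUTOSERV
do
    if ! unisrvcntr status "$comp" | grep -qi running; then
        echo "$comp appears to be down" | mail -s "AutoSys health check" ops@example.com
    fi
done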
