When I run the command bitcore-node start, it starts two services.
A screenshot of ps aux is attached.
I created a service in /etc/init.d:
description "Bitcoin Core for Bitcore"
author "BitPay, Inc."
limit nofile 20000 30000
start on runlevel [2345]
stop on runlevel [016]
kill timeout 300
kill signal SIGINT
# user/group for bitcore daemon to run as
setuid ubuntu
setgid ubuntu
# home dir of the bitcore daemon user
env HOME=/home/ubuntu
respawn
respawn limit 5 15
script
exec bitcore-node -conf=/home/ubuntu/love/data/bitcoin.conf -datadir=/home/ubuntu/love/data -testnet
end script
I am getting an error while running it.
Any idea?
You have written your script as an Upstart init script, but by placing it in /etc/init.d you are executing it as a SysV init script under the systemd init system.
You could try placing the script in /etc/init/ instead of /etc/init.d/. That way, it might be processed as the Upstart init script that it actually is.
However, Upstart is being replaced with systemd, so following a tutorial on translating your Upstart init script into a systemd .service file is recommended.
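For example, a rough systemd equivalent of the Upstart script above could look like this (a sketch only: the unit file name and the absolute path to bitcore-node are assumptions you would need to adapt, e.g. by checking which bitcore-node):

[Unit]
Description=Bitcoin Core for Bitcore
After=network.target

[Service]
# user/group for the bitcore daemon to run as
User=ubuntu
Group=ubuntu
Environment=HOME=/home/ubuntu
# soft:hard file descriptor limits, matching "limit nofile 20000 30000"
LimitNOFILE=20000:30000
# assumed install location of bitcore-node; adjust to your system
ExecStart=/usr/bin/bitcore-node -conf=/home/ubuntu/love/data/bitcoin.conf -datadir=/home/ubuntu/love/data -testnet
# matching "kill signal SIGINT" and "kill timeout 300"
KillSignal=SIGINT
TimeoutStopSec=300
# matching "respawn"
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as e.g. /etc/systemd/system/bitcore.service, then run sudo systemctl daemon-reload followed by sudo systemctl enable --now bitcore.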
Related
Hi, I'm using Airflow and have put my Airflow project on EC2. However, how would one keep the Airflow scheduler running when my Mac goes to sleep or I exit the SSH session?
You have a few options, but none will keep it active on a sleeping laptop. On a server:
You can use --daemon to run it as a daemon: airflow scheduler --daemon
Or run it in the background: airflow scheduler >& log.txt &
Or run it inside screen, then detach from the session using Ctrl-a d and reattach as needed using screen -r; that works across ssh connections (see the sketch below).
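For reference, the screen workflow looks roughly like this (the session name airflow is arbitrary):

screen -S airflow      # start a named screen session
airflow scheduler      # run the scheduler inside the session
# detach with Ctrl-a d; the scheduler keeps running
screen -r airflow      # reattach later from any ssh session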
I use nohup to keep the scheduler running and redirect the output to a log file like so:
nohup airflow scheduler >> ${AIRFLOW_HOME}/logs/scheduler.log 2>&1 &
Note: this assumes you are running the scheduler on your EC2 instance and not on your laptop.
In case you need more details on running it as a daemon, i.e. detaching completely from the terminal and redirecting stdout and stderr, here is an example:
airflow webserver -p 8080 -D --pid /your-path/airflow-webserver.pid --stdout /your-path/airflow-webserver.out --stderr /your-path/airflow-webserver.err
airflow scheduler -D --pid /your-path/airflow-scheduler.pid --stdout /your-path/airflow-scheduler.out --stderr /your-path/airflow-scheduler.err
The most robust solution would be to register it as a service on your EC2 instance. Airflow provides systemd and upstart scripts for that (https://github.com/apache/incubator-airflow/tree/master/scripts/systemd and https://github.com/apache/incubator-airflow/tree/master/scripts/upstart).
For Amazon Linux, you'd need the upstart scripts, and for e.g. Ubuntu, you would use the systemd scripts.
Registering it as a system service is much more robust because Airflow will be restarted upon reboot or when it crashes. This is not the case when you use e.g. nohup as suggested in the other answers here.
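If you would rather write the unit yourself than use the provided scripts, a minimal systemd unit along these lines can work (a sketch: the airflow binary path, the service user, and AIRFLOW_HOME are assumptions you would adapt to your instance):

[Unit]
Description=Airflow scheduler
After=network.target

[Service]
# assumed service user and Airflow home; adjust to your setup
User=airflow
Environment=AIRFLOW_HOME=/home/airflow/airflow
# assumed binary location; check with: which airflow
ExecStart=/usr/local/bin/airflow scheduler
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target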
I use supervisor.
I have a configuration file that launches a task (/vagrant/task.sh) in a directory that is only mounted some time after system startup.
So when the system starts, supervisor can't start. I have to run sudo service supervisor start after the system has booted and the /vagrant directory is mounted.
How can I add a delay to /etc/init.d/supervisor, or make it start after an event?
I found a solution (using the vagrant-mounted event):
cat << EOF > /etc/init/supervisor-launcher.conf
description "Supervisor Launcher"
start on vagrant-mounted
script
/usr/sbin/service supervisor start
end script
EOF
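For what it's worth, you can test the job without rebooting the VM by emitting the event manually (assuming Upstart's initctl is available):

sudo initctl reload-configuration   # pick up the new job file
sudo initctl emit vagrant-mounted   # fire the event the job starts on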
I have created a new service named some-service. The shell script is present in /etc/init.d/some-service, and I have the same shell script in /usr/local/bin/some-service, which is a copy of some-service.
I ran the command below to create a daemon service:
os-svc-daemon -i $INSTALLDIR -d some-service some-service root some-service
This created /etc/init/some-service.conf:
start on runlevel [2345]
stop on runlevel [016]
env OS_SVC_ENABLE_CONTROL=1
export OS_SVC_ENABLE_CONTROL
pre-start script
mkdir -p /var/run/some-service
chown -R root:root /var/run/some-service
end script
respawn
# the default post-start of 1 second sleep delays respawning enough to
# not hit the default of 10 times in 5 seconds. Make it 2 times in 5s.
respawn limit 2 5
exec start-stop-daemon --start -c root --exec INSTALLDIR/bin/some-service --
post-start exec sleep 1
To reload the changes, I ran the command below:
initctl reload-configuration
Then I tried to start the service, but it never runs.
initctl start some-service
What am I doing wrong here? Also, is it safe to start it with a shell script rather than a Python binary?
Use os-svc-enable servicename and then try start servicename, as sketched below.
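Applied to the service from the question, that would be something like the following (a sketch: I am assuming os-svc-enable takes the service name as its argument, mirroring the wording above):

os-svc-enable some-service
start some-service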
I have a script like this:
#!/usr/bin/php
<?php
file_put_contents('/home/vagrant/sleep.pid', getmypid());
sleep(9);
exit(0);
I want to write my Monit config so it makes sure this application is always running. This is my monit config:
set daemon 5
check process program with pidfile /home/vagrant/sleep.pid
start program = "/usr/bin/php /home/vagrant/myphp.php"
But after my program exits, Monit doesn't try to run it again. That kind of makes sense, but I was wondering if there is any way to tell Monit to rerun the process.
After playing around more with Monit, I found that I wasn't running Monit properly.
First you need to add the following config to your monit file:
set httpd port 2812 and allow monit:monit
set daemon 5
check process program with pidfile /home/vagrant/sleep.pid
start program = "/usr/bin/php /home/vagrant/myphp.php"
Then run Monit with this command:
monit -c monit.conf
Now Monit runs in the background and reruns the process if it dies. You can also start or stop a specific program with:
monit -c monit.conf start all
or
monit -c monit.conf stop my_role
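To verify that Monit has picked the process up, you can query its status with the same config file:

monit -c monit.conf status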
I am writing a file-syncing application: I collect events from the filesystem whenever a file is modified, and later I copy the file over to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from an SQLite db for synchronization and launch QProcess::start with the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
\_rsync
| \_ssh
|_rsync
\_ssh
The problem is that sometimes the application hangs and ps says that the ssh processes have become zombies. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console, or even try to ssh into another box, I can't. My idea is that somewhere ssh is blocking some I/O resources. Any idea how to solve this?
P.S. This happens randomly and not often.
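One diagnostic worth noting: a process that survives SIGKILL is usually not a zombie but stuck in uninterruptible sleep (state D in ps), typically blocked on I/O such as a hung network or aufs mount, which would fit the symptoms here. You can check the state and the kernel function each process is waiting in with something like:

ps -o pid,stat,wchan:20,cmd -C ssh,rsync    # D in STAT means uninterruptible I/O wait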