I am new to Airflow and tried to run a DAG by starting the airflow webserver and scheduler. After I closed the scheduler and the webserver, the Airflow processes are still running.
ps aux | grep airflow shows two airflow webserver processes running, and a scheduler running for all DAGs.
I tried running kill $(ps aux | grep airflow | awk '{print $2}'), but it did not help.
I don't have sudo permissions or webserver UI access.
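Note that the pipeline above also picks up the PID of the grep process itself. A hedged sketch of a more targeted variant (no sudo needed, since the processes run under your own user):
# Sketch: bracket one character so the grep pattern does not match its own command line.
kill $(ps aux | grep '[a]irflow' | awk '{print $2}')
# Or, if pkill is available, match on the full command line instead:
pkill -f airflow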
If you run Airflow locally and start it with the two commands airflow scheduler and airflow webserver, then those processes will run in the foreground. So, simply hitting Ctrl-C for each of them should terminate them and all their child processes.
If you don't have those two processes running in the foreground, there is another way. Airflow creates files with process IDs of the scheduler and gunicorn server in its home directory (by default ~/airflow/).
Running
kill $(cat ~/airflow/airflow-scheduler.pid)
should terminate the scheduler.
Unfortunately, airflow-webserver.pid contains the PID of the gunicorn server and not the initial Airflow command that started it (which is the parent of the gunicorn process). So, we will first have to find the parent PID of the gunicorn process and then kill the parent process.
Running
kill $(ps -o ppid= -p $(cat ~/airflow/airflow-webserver.pid))
should terminate the webserver.
If simply running kill (i.e., sending SIGTERM) to these processes does not work, you can always try sending SIGKILL: kill -9 <pid>. That should definitely kill them.
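Putting that together, a minimal stop script might look like the sketch below (it assumes the default ~/airflow home directory and that both PID files exist):
#!/bin/sh
# Sketch: stop the scheduler and webserver via the PID files Airflow writes.
AIRFLOW_HOME="${AIRFLOW_HOME:-$HOME/airflow}"

# The scheduler PID file points directly at the scheduler process.
kill $(cat "$AIRFLOW_HOME/airflow-scheduler.pid")

# The webserver PID file points at gunicorn, so kill its parent instead.
kill $(ps -o ppid= -p "$(cat "$AIRFLOW_HOME/airflow-webserver.pid")")

# If either one survives SIGTERM, escalate, e.g.:
# kill -9 $(cat "$AIRFLOW_HOME/airflow-scheduler.pid")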
Related
I have Airflow installed on a Kubernetes cluster. I've created some DAGs, but any time I restart the server, for example with cat $AIRFLOW_HOME/airflow-webserver.pid | xargs kill -9,
the server deletes all the DAGs I created. How can I keep my DAGs across webserver restarts?
I can start an "airflow worker" (with the Celery executor), but I don't know how to kill it properly. I have the impression that many subprocesses are created, and I don't know how to shut them down gracefully.
Thanks in advance.
I normally set up the Airflow processes as systemd services and just run systemctl stop airflow-*, but I think you can simply kill the main airflow worker process with a standard Linux kill command. That is probably what the systemd daemon does anyway.
To stop a worker running on a machine you can use:
airflow celery stop
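If airflow celery stop is not available in your Airflow version, a hedged fallback is to send SIGTERM to the main worker process and let Celery wind down its subprocesses itself. A sketch; the PID file path below is an assumption and only exists if the worker was started as a daemon:
# Sketch: warm-shutdown a Celery worker when "airflow celery stop" is unavailable.
WORKER_PID_FILE="${AIRFLOW_HOME:-$HOME/airflow}/airflow-worker.pid"

# SIGTERM triggers Celery's warm shutdown: running tasks finish,
# then the worker stops the child processes it forked.
kill "$(cat "$WORKER_PID_FILE")"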
Hi, I'm using Airflow and have put my Airflow project on EC2. However, how would one keep the Airflow scheduler running while my Mac goes to sleep or I exit the SSH session?
You have a few options, but none will keep it active on a sleeping laptop. On a server:
Use --daemon to run it as a daemon: airflow scheduler --daemon
Or run it in the background: airflow scheduler >& log.txt &
Or run it inside screen, then detach with Ctrl-a d and reattach as needed with screen -r (see the sketch below). That works over an SSH connection.
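For the screen route, a minimal sketch of the workflow (the session name "airflow" is arbitrary):
screen -S airflow            # start a named screen session
airflow scheduler            # run the scheduler inside the session
# detach with Ctrl-a d; the scheduler keeps running after you disconnect
screen -r airflow            # reattach later to check on it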
I use nohup to keep the scheduler running and redirect the output to a log file like so:
nohup airflow scheduler >> ${AIRFLOW_HOME}/logs/scheduler.log 2>&1 &
Note: Assuming you are running the scheduler here on your EC2 instance and not on your laptop.
In case you need more details on running it as a daemon, i.e. detached completely from the terminal with stdout and stderr redirected, here is an example:
airflow webserver -p 8080 -D --pid /your-path/airflow-webserver.pid --stdout /your-path/airflow-webserver.out --stderr /your-path/airflow-webserver.err
airflow scheduler -D --pid /your-path/airflow-scheduler.pid --stdout /your-path/airflow-scheduler.out --stderr /your-path/airflow-scheduler.err
The most robust solution would be to register it as a service on your EC2 instance. Airflow provides systemd and upstart scripts for that (https://github.com/apache/incubator-airflow/tree/master/scripts/systemd and https://github.com/apache/incubator-airflow/tree/master/scripts/upstart).
For Amazon Linux, you'd need the upstart scripts, and for e.g. Ubuntu, you would use the systemd scripts.
Registering it as a system service is much more robust because Airflow will be restarted after a reboot or a crash. This is not the case when you use e.g. nohup, as others suggest here.
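As a rough illustration of that route, here is a sketch of a minimal unit file and how to enable it. The user, paths, and unit name are assumptions; the upstream scripts linked above are more complete:
# Sketch only: adjust user, paths, and unit name to your setup.
sudo tee /etc/systemd/system/airflow-scheduler.service <<'EOF'
[Unit]
Description=Airflow scheduler
After=network.target

[Service]
User=airflow
Environment=AIRFLOW_HOME=/home/airflow/airflow
ExecStart=/usr/local/bin/airflow scheduler
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and start the service now and on every boot.
sudo systemctl daemon-reload
sudo systemctl enable --now airflow-scheduler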
I am writing a file-syncing application where I collect events from the filesystem whenever a file is modified, and then later copy the file over to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from a sqlite db for synchronization and call QProcess::start with the following parameters
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
 \_ rsync
 |   \_ ssh
 \_ rsync
     \_ ssh
The problem is that sometimes the application hangs, and ps says the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console, or even to ssh to another box, I can't. My idea here is that ssh blocks some IO resources somewhere. Any idea how to solve this?
P.S. This happens randomly and not often
I have a shell script that starts an ssh session to a remote host and pipes the output to another, local script, like so:
#!/bin/sh
ssh user@host 'while true ; do get-info ; sleep 1 ; done' | awk -f parse-info.awk
It works fine. I run it under the 'supervise' program from djb's daemontools. The only problem is shutting down the daemon. If I terminate the process for this shell script, the ssh and awk processes continue running as orphans. Normally I would solve this problem with exec to replace the supervising shell process, but the two processes run in their own subshells and can't replace the shell process.
What I would like to do is have the supervising shell script 'forward' any signals it receives to at least one of the child processes, so that I can break the pipe and shut down cleanly. Is there an easy way to do this?
Inter-process communication.
You should be looking at pipes, etc.
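To make that concrete, one common pattern is to start the pipeline in the background from the supervising script and forward TERM/INT to it with a trap. A sketch, assuming POSIX sh and reusing the command from the question; killing the awk end breaks the pipe, so the ssh side exits on its next write:
#!/bin/sh
# Sketch: forward SIGTERM/SIGINT from the supervising script to the pipeline.
ssh user@host 'while true ; do get-info ; sleep 1 ; done' | awk -f parse-info.awk &
pipeline_pid=$!   # PID of the last element of the pipeline (awk)

# On TERM or INT, pass the signal on and exit; breaking the awk end of the
# pipe causes the ssh process to die on its next write.
trap 'kill "$pipeline_pid" 2>/dev/null; exit' TERM INT

# wait returns when the pipeline exits or a trapped signal arrives.
wait "$pipeline_pid"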