I have searched & googled a lot but I cannot get this to work.
I want the Beaglebone to boot up into my Qt application. However, what I get is that the GUI boots up OK, but then after a few seconds the Angstrom login screen overwrites my GUI, which stays running in the background.
I set up a systemd service as follows in /etc/systemd/system:
#!/bin/sh
[Unit]
Description=Meta Systemd script
[Service]
USER=root
WorkingDirectory=/home/root
ExecStart=/bin/sh -c 'source /etc/profile ; /home/root/meta6 -qws'
After=local-fs.target
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
I activated it with:
systemctl enable meta.service
I disabled gdm with:
systemctl disable gdm
I suspect that maybe I should change the After statement to wait until some other service has completed. But which one?
Regards,
James
The following commands will disable gdm on a BeagleBone Black running Angstrom:
update-rc.d -f gdm remove
systemctl disable gdm.service
Consider removing this line:
#!/bin/sh
Also add the following to the file
[Unit]
After=getty@.service or getty.target
Also change the following
[Service]
ExecStart=/home/root/meta6 -qws
The following might not be required
[Service]
After=local-fs.target
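Putting those suggestions together, a minimal sketch of the corrected unit could look like the following (assuming the binary path from the question; whether getty@.service or getty.target is the right ordering target depends on which getty owns the display, and Type=simple is used here since the Qt application keeps running):
[Unit]
Description=Meta Qt application
After=getty.target
[Service]
# Note: the directive name is User=, not USER=
User=root
WorkingDirectory=/home/root
ExecStart=/home/root/meta6 -qws
Type=simple
[Install]
WantedBy=multi-user.target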
Reference
Creating Ångström System Services on BeagleBone Black
AIRFLOW_HOME=/path/to/my/airflow_home
I get this warning ...
>airflow trigger_dag python_dag3
/Users/alexryan/miniconda3/envs/airflow/lib/python3.7/site-packages/airflow/configuration.py:627: DeprecationWarning: You have two airflow.cfg files: /Users/alexryan/airflow/airflow.cfg and /path/to/my/airflow_home/airflow.cfg. Airflow used to look at ~/airflow/airflow.cfg, even when AIRFLOW_HOME was set to a different value. Airflow will now only read /path/to/my/airflow_home/airflow.cfg, and you should remove the other file
I complied and deleted ~/airflow/airflow.cfg, but it keeps coming back.
Is there some way to tell airflow to stop re-creating this?
Running on macOS Mojave
>pip freeze | grep air
apache-airflow==1.10.6
Have you created a systemd service for the airflow webserver?
I think the ~/airflow folder is recreated again and again because you run the webserver in daemon mode (maybe via launchctl? I am not familiar with macOS).
You should figure out how to pass the environment variables to the daemon process. On Linux, when the daemon configuration is created, "Environment=AIRFLOW_HOME=/path/to/my/airflow_home" is necessary.
[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
Environment="AIRFLOW_HOME=/path/to/my/airflow_home"
User=airflow
Group=airflow
Type=simple
ExecStart=/bin/airflow webserver
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
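If the webserver is not managed by an init system on macOS, the same principle applies to however the daemon gets launched: export AIRFLOW_HOME in the environment that starts it, for example (using the path from your question):
export AIRFLOW_HOME=/path/to/my/airflow_home
airflow webserver -D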
I have a network init script that
runs wpa_supplicant
runs dhcpcd
configureWifi.sh
pkill -9 wpa_supplicant
pkill -9 dhcpcd
wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -B
dhcpcd wlan0
I want to disable all of systemd's "network configuring" features
because they're not working and they make my system hang on boot.
I just want to start the shell script.
My systemd unit description file looks like so:
/lib/systemd/system/goodwifi.service
[Unit]
description=Good wifi service, initializes wifi without networkmanger
[Service]
ExecStart=/root/configureWifi.sh
type=oneshot
[Install]
WantedBy=network.target
# systemctl enable goodwifi
Then after a reboot the wifi is not configured.
When I run the script by hand it works 100%.
Any advice?
Your service might be running at the wrong time in the startup sequence. Try adding at least After and Before properties to configure that.
My suggestion is to let the whole network stack start and then fire your service.
This was taken from /etc/systemd/system/network.service on an openSUSE box:
[Unit]
Description=Network Manager
Documentation=man:NetworkManager(8)
Wants=remote-fs.target network.target
After=network-pre.target dbus.service
Before=network.target
Also, it would be better if your script were located in a standard PATH directory, perhaps /usr/local/sbin:
ExecStart=/usr/local/sbin/configureWifi.sh
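Putting the pieces together, a sketch of the unit that borrows the ordering from the NetworkManager unit quoted above and uses the new script location (note that systemd directive names are case-sensitive, so they must be Description= and Type=, not description= and type=):
[Unit]
Description=Good wifi service, initializes wifi without NetworkManager
Wants=network.target
After=network-pre.target dbus.service
Before=network.target
[Service]
Type=oneshot
# keep the unit marked active after the script exits
RemainAfterExit=yes
ExecStart=/usr/local/sbin/configureWifi.sh
[Install]
WantedBy=multi-user.target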
Check service start order with
systemctl list-dependencies network.service
network.service
● ├─dbus.socket
● ├─system.slice
● ├─network.target
● ├─remote-fs.target
● │ ├─iscsi.service
● │ └─remote-fs-pre.target
● └─sysinit.target
● ├─detect-part-label-duplicates.service
● ├─dev-hugepages.mount
journalctl -b0 should tell you how the script went.
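To see only your unit's messages from the current boot, filter by unit name:
journalctl -b0 -u goodwifi.service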
The docs provide instructions for the integration.
What I want is that every time the scheduler stops working, it gets restarted on its own. Usually I start it manually with airflow scheduler -D, but sometimes it stops when I'm not available.
Reading the docs, I'm not sure about the configs.
The GitHub repo contains the following files:
airflow
airflow-scheduler.service
airflow.conf
I'm running Ubuntu 16.04
Airflow is installed in:
home/ubuntu/airflow
I have the path:
etc/systemd
The docs say to:
Copy (or link) them to /usr/lib/systemd/system
Copy which of the files?
copy the airflow.conf to /etc/tmpfiles.d/
What is tmpfiles.d ?
What is # AIRFLOW_CONFIG= in the airflow file?
Or in other words... is there a more "down to earth" guide on how to do it?
Integrating Airflow with systemd files makes watching your daemons easy, as systemd can take care of restarting a daemon on failure. This also enables automatically starting the airflow webserver and scheduler on system start.
Edit the airflow file from the systemd folder in the Airflow GitHub repo as per your current configuration, to set the environment variables for AIRFLOW_CONFIG, AIRFLOW_HOME & SCHEDULER.
Copy the service files (the files with the .service extension) to /usr/lib/systemd/system on the VM.
Copy the airflow.conf file to /etc/tmpfiles.d/ or /usr/lib/tmpfiles.d/. Copying airflow.conf ensures /run/airflow is created with the right owner and permissions (0755 airflow airflow). Check whether /run/airflow exists and is owned by the airflow user and airflow group; if it doesn't, create the /run/airflow folder with those permissions.
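For reference, tmpfiles.d entries are one line each (type, path, mode, user, group); the entry that creates this directory looks roughly like:
D /run/airflow 0755 airflow airflow
and creating it by hand amounts to:
sudo mkdir -p /run/airflow
sudo chown airflow:airflow /run/airflow
sudo chmod 0755 /run/airflow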
Enable these services by issuing systemctl enable <service> on the command line, as shown below.
sudo systemctl enable airflow-webserver
sudo systemctl enable airflow-scheduler
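After enabling, start both services and check that they stay up:
sudo systemctl start airflow-webserver
sudo systemctl start airflow-scheduler
systemctl status airflow-scheduler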
The airflow-scheduler.service file should look like this:
[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=airflow
Group=airflow
Type=simple
ExecStart=/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
Your question is a little old, but I just discovered it, because I'm interested in the same subject at the moment. I think the answer to your question is here:
https://medium.com/@shahbaz.ali03/run-apache-airflow-as-a-service-on-ubuntu-18-04-server-b637c03f4722
I have some DAGs that can't seem to locate Python modules. Inside the Airflow UI, I see a ton of variations of this message:
Broken DAG: [/home/airflow/source/airflow/dags/test.py] No module named 'paramiko'
Inside a file I can directly modify the Python sys.path, and that seems to mitigate my issue.
import sys
sys.path.append('/home/airflow/.local/lib/python2.7/site-packages')
That doesn't feel right, though, having to set my path directly in my code. I've tried exporting PYTHONPATH in the Airflow user account's .bashrc, but it doesn't seem to be read when the DAG jobs are executed. What's the correct way to go about this?
Thanks.
----- update -----
Thanks for the responses.
Below are my systemd unit files.
::::::::::::::
airflow-scheduler-airflow2.service
::::::::::::::
[Unit]
Description=Airflow scheduler daemon
[Service]
EnvironmentFile=/usr/local/airflow/instances/airflow2/etc/envars
User=airflow2
Group=airflow2
Type=simple
ExecStart=/usr/local/airflow/instances/airflow2/venv/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
::::::::::::::
airflow-webserver-airflow2.service
::::::::::::::
[Unit]
Description=Airflow webserver daemon
[Service]
EnvironmentFile=/usr/local/airflow/instances/airflow2/etc/envars
User=airflow2
Group=airflow2
Type=simple
ExecStart=/usr/local/airflow/instances/airflow2/venv/bin/airflow webserver
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
This is the contents of the EnvironmentFile used above:
more /usr/local/airflow/instances/airflow2/etc/envars
PATH=/usr/local/airflow/instances/airflow2/venv/bin:/usr/local/bin:/usr/bin:/bin
AIRFLOW_HOME=/usr/local/airflow/instances/airflow2/home
AIRFLOW_CONFIG=/usr/local/airflow/instances/airflow2/etc/airflow.cfg
I had a similar issue:
Python wasn't loaded from the virtualenv when running airflow (fixing this made the airflow dependencies come from the virtualenv)
Submodules under the dags path weren't loaded because of a different base path (fixing this made importing my own modules under the dags folder work)
I added the following lines to the environment file for the systemd service
(/usr/local/airflow/instances/airflow2/etc/envars in your case):
source /home/ubuntu/venv/airflow/bin/activate
PYTHONPATH=/home/ubuntu/venv/airflow/dags:$PYTHONPATH
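If you prefer not to touch the environment file, the same idea can be expressed directly in the unit with Environment= lines (a sketch using the paths above; the PATH entry plays the role of sourcing the virtualenv's activate script, and note that systemd does not expand $PYTHONPATH inside Environment=, so spell the value out):
[Service]
Environment="PATH=/home/ubuntu/venv/airflow/bin:/usr/local/bin:/usr/bin:/bin"
Environment="PYTHONPATH=/home/ubuntu/venv/airflow/dags"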
It looks like your Python environment is degraded - you have multiple instances of Python on your VM (Python 3.6 and Python 2.7) and multiple instances of pip. There is a pip with Python 3.6 that is trying to be used, but all of your modules actually live with your Python 2.7.
This can be solved easily by using symbolic links to redirect to 2.7.
Type these commands and see which instance of Python each one uses (2.7.5, 2.7.14, 3.6, etc.):
python
python2
python2.7
or type which python to find which python instance is being used by your vm. You can also do which pip to see what pip instance is being used.
I am going to assume python and which python lead to Python 3 (which you do not want to use), but python2 and python2.7 lead to the instance you do want to use.
To create a symbolic link so that /home/airflow/.local/lib/python2.7/ is used, do the following and create the following symbolic links:
cd home/airflow/.local/lib/python2.7
ln -s python2 python
ln -s /home/airflow/.local/lib/python2.7 python2
Symbolic link structure is: ln -s #PATHDIRECTED #LINKNAME
You are essentially saying: when you run the command python, go to python2. When python2 is then run, go to /home/airflow/.local/lib/python2.7. It's all being redirected.
Now re-run the three commands above (python, python2, python2.7). All should lead to the Python instance you want.
Hope this helps!
You can add this directly to the Airflow Dockerfile, as in the example below. If you have a .env file you can do ENV PYTHONPATH "${PYTHONPATH}:${AIRFLOW_HOME}".
FROM puckel/docker-airflow:1.10.6
RUN pip install --user psycopg2-binary
ENV AIRFLOW_HOME=/usr/local/airflow
# add persistent python path (for local imports)
ENV PYTHONPATH "/home/jovyan/work:${AIRFLOW_HOME}"
COPY ./airflow.cfg /usr/local/airflow/airflow.cfg
CMD ["airflow initdb"]
I still have the same problem when I try to trigger a DAG from the UI (it can't locate local Python modules, i.e. my_module.my_sub_module, etc.), but when I test with:
airflow test my_dag my_task 2021-04-01
it works fine!
I also have this line in my .bashrc (where it is supposed to find the local Python modules):
export PYTHONPATH="/home/my_user"
Sorry guys, this topic is very old, but I had a lot of problems launching Airflow as a daemon, so I'll share my solution.
First I installed Anaconda in /home/myuser/anaconda3 and installed all the libraries that I use in my DAGs, then I created the following files:
/etc/systemd/system/airflow-webserver.service
[Unit]
Description=Airflow webserver daemon
After=network.target
[Service]
Environment="PATH=/home/ubuntu/anaconda3/envs/airflow/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
RuntimeDirectory=airflow
RuntimeDirectoryMode=0775
User=myuser
Group=myuser
Type=simple
ExecStart=/bin/bash -c 'source /home/myuser/anaconda3/bin/activate; airflow webserver -p 8080 --pid /home/myuser/airflow/webserver.pid'
Restart=on-failure
RestartSec=5s
PrivateTmp=true
[Install]
WantedBy=multi-user.target
The same for the scheduler daemon:
/etc/systemd/system/airflow-schedule.service
[Unit]
Description=Airflow schedule daemon
After=network.target
[Service]
Environment="PATH=/home/ubuntu/anaconda3/envs/airflow/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
RuntimeDirectory=airflow
RuntimeDirectoryMode=0775
User=myuser
Group=myuser
Type=simple
ExecStart=/bin/bash -c 'source /home/myuser/anaconda3/bin/activate; airflow scheduler'
Restart=on-failure
RestartSec=5s
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Next, run the systemctl commands:
sudo systemctl daemon-reload
sudo systemctl enable airflow-webserver.service
sudo systemctl enable airflow-schedule.service
sudo systemctl start airflow-webserver.service
sudo systemctl start airflow-schedule.service
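You can then confirm both daemons stay up, and follow their logs if they restart, with:
sudo systemctl status airflow-webserver.service airflow-schedule.service
journalctl -u airflow-schedule.service -f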
I'm trying to provide a splash screen for Raspbian Stretch using fbi. Based on some tutorials I found, here is my situation:
/etc/systemd/system/splashscreen.service
[Unit]
Description=Splash screen
DefaultDependencies=no
After=local-fs.target
[Service]
ExecStart=/usr/bin/fbi -T 1 -d /dev/fb0 --noverbose /opt/logo.png
[Install]
WantedBy=sysinit.target
The service is enabled (I checked the symlink under sysinit.target.wants).
/boot/cmdline.txt
dwc_otg.lpm_enable=0 console=tty1 root=PARTUUID=ee397c53-02 rootfstype=ext4 elevator=deadline rootwait quiet logo.nologo loglevel=1 fsck.mode=skip noswap ro consoleblank=0
/boot/config.txt
hdmi_drive=2
dtparam=i2c_arm=on
dtparam=spi=on
dtparam=audio=on
dtparam=i2c1=on
dtoverlay=i2c-rtc,ds1307
disable_splash=1
Executing exactly the same command (fbi -T 1 -d /dev/fb0 --noverbose /opt/logo.png) from the prompt shows the image as expected.
In the boot messages I can't find any error. Any thoughts?
I finally got this to work! Here's what I did (essentially copied from https://yingtongli.me/blog/2016/12/21/splash.html, with a few small changes that made it work for me).
Install fbi: apt install fbi
Create /etc/systemd/system/splashscreen.service with:
[Unit]
Description=Splash screen
DefaultDependencies=no
After=local-fs.target
[Service]
ExecStart=/usr/bin/fbi --noverbose -a /opt/splash.png
StandardInput=tty
StandardOutput=tty
[Install]
WantedBy=sysinit.target
The only thing I did differently from the article linked above was to remove the -d flag from the /usr/bin/fbi command (the command was originally /usr/bin/fbi -d /dev/fb0 --noverbose -a /opt/splash.png). I'm guessing fb0 was the wrong device, and leaving it out just means fbi uses the current display device and gets it right.
Put your splash image at /opt/splash.png.
Enable the service: systemctl enable splashscreen
I'm still trying to figure out how to get rid of the rest of the boot text, but this is a step in the right direction.
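As a side note, you can check the unit without a full reboot by starting it by hand and looking at its journal output for errors:
sudo systemctl start splashscreen
journalctl -b0 -u splashscreen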