Run a script with systemd timers on NixOS

I have a small shell script, scrape.sh, that scrapes a website and puts the resulting data into a new directory:
website='test.com'
dir_name="datev_data/$(date -u +'%Y-%m-%dT%H:%M:%S')"
mkdir -p "$dir_name"
wget --directory-prefix="$dir_name" "$website"
(I don't really care where the data ends up, as long as it gets a new directory every time and I can get to the data. I therefore put it into my home directory, /home/kaligule, for now.)
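A slightly hardened sketch of just the directory-naming step, with set -eu added and a temporary directory standing in for the home directory so the snippet is self-contained (the wget call is left out here):

```shell
#!/bin/sh
set -eu  # abort on unset variables and failed commands

base="$(mktemp -d)"  # stand-in for /home/kaligule in this sketch
dir_name="$base/datev_data/$(date -u +'%Y-%m-%dT%H:%M:%S')"
mkdir -p "$dir_name"  # -p also creates the datev_data parent on the first run
echo "created $dir_name"
```

With -u in effect, a misspelled variable name (e.g. $dirname vs. $dir_name) aborts the script instead of silently expanding to an empty string.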
Running this script by hand works fine, so now I want to run it every hour on my NixOS server. Therefore, I put the following into my config there (inspired by this post):
systemd.services.test_systemd_timers = {
  serviceConfig.Type = "oneshot";
  script = ''
    echo "Will start scraping now."
    {pkgs.bash}/bin/bash /home/kaligule/scrape.sh
    echo "Done scraping."
  '';
};

systemd.timers.test_systemd_timers = {
  wantedBy = [ "timers.target" ];
  partOf = [ "test_systemd_timers.service" ];
  timerConfig.OnCalendar = [ "*-*-* *:00:00" ];
};
Then I test it out:
sudo nixos-rebuild switch # everything is fine
sudo systemctl start test_systemd_timers # run it immediately for testing
I get:
Job for test_systemd_timers.service failed because the control process exited with error code.
See "systemctl status test_systemd_timers.service" and "journalctl -xe" for details.
The first suggested command gives me this:
● test_systemd_timers.service
Loaded: loaded (/nix/store/f8348svxpnn6qx08adrv5s7ksc2zy1sk-unit-test_systemd_timers.service/test_systemd_timers.service; linked; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-04-02 14:50:02 CEST; 2min 36s ago
TriggeredBy: ● test_systemd_timers.timer
Process: 5686 ExecStart=/nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start (code=exited, status=127)
Main PID: 5686 (code=exited, status=127)
IP: 0B in, 0B out
CPU: 11ms
Apr 02 14:50:02 regulus systemd[1]: Starting test_systemd_timers.service...
Apr 02 14:50:02 regulus test_systemd_timers-start[5686]: Will start scraping now.
Apr 02 14:50:02 regulus test_systemd_timers-start[5687]: /nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start: line 3: {pkgs.bash}/bin/bash: No such file or directory
Apr 02 14:50:02 regulus systemd[1]: test_systemd_timers.service: Main process exited, code=exited, status=127/n/a
Apr 02 14:50:02 regulus systemd[1]: test_systemd_timers.service: Failed with result 'exit-code'.
Apr 02 14:50:02 regulus systemd[1]: Failed to start test_systemd_timers.service.
The second suggested command gives me:
Apr 02 14:54:42 regulus systemd[1]: Starting test_systemd_timers.service...
░░ Subject: A start job for unit test_systemd_timers.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit test_systemd_timers.service has begun execution.
░░
░░ The job identifier is 34454.
Apr 02 14:54:42 regulus test_systemd_timers-start[5734]: Will start scraping now.
Apr 02 14:54:42 regulus test_systemd_timers-start[5735]: /nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start: line 3: {pkgs.bash}/bin/bash: No such file or directory
Apr 02 14:54:42 regulus systemd[1]: test_systemd_timers.service: Main process exited, code=exited, status=127/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit test_systemd_timers.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 127.
Apr 02 14:54:42 regulus systemd[1]: test_systemd_timers.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit test_systemd_timers.service has entered the 'failed' state with result 'exit-code'.
Apr 02 14:54:42 regulus systemd[1]: Failed to start test_systemd_timers.service.
░░ Subject: A start job for unit test_systemd_timers.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit test_systemd_timers.service has finished with a failure.
░░
░░ The job identifier is 34454 and the job result is failed.
Apr 02 14:54:42 regulus sudo[5731]: pam_unix(sudo:session): session closed for user root
Since "Will start scraping now." appears in the logs, I think the job started but wasn't able to run the script. My questions are:
where (and with which permissions) should I put my script?
how should I adapt my config so that the script runs as described?
(Of course, I am thankful for any feedback on my approach.)

Your problem is that, in your script, {pkgs.bash} should be ${pkgs.bash}; without the $, you won't get variable interpolation.
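As a side note on Nix string syntax: inside an indented string ('' ... ''), ${...} is interpolation; if you ever need a literal ${ (for instance for a shell variable), it has to be escaped as ''${. A small sketch:

```nix
script = ''
  ${pkgs.bash}/bin/bash --version   # Nix interpolation: expands to a store path
  echo ''${HOME}                    # escaped: the shell sees a literal ${HOME}
'';
```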

On NixOS, systemd services run with a very minimal default environment. You can add the missing packages to systemd.services.<name>.path:
systemd.services.test_systemd_timers = {
  serviceConfig.Type = "oneshot";
  path = [
    pkgs.wget
    pkgs.gawk
    pkgs.jq
  ];
  script = ''
    echo "Will start scraping now."
    ${pkgs.bash}/bin/bash /home/kaligule/scrape.sh
    echo "Done scraping."
  '';
};
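An alternative sketch (untested): since path puts packages on the service's PATH, you could avoid hard-coding the interpreter's store path entirely by adding pkgs.bash to path and invoking it by name:

```nix
systemd.services.test_systemd_timers = {
  serviceConfig.Type = "oneshot";
  path = [ pkgs.bash pkgs.wget ];
  script = ''
    echo "Will start scraping now."
    bash /home/kaligule/scrape.sh
    echo "Done scraping."
  '';
};
```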

Related

mariadb won't start suddenly

I have some trouble with my MariaDB server: it was working fine, but it won't start anymore. When I try to start the server, it fails:
root@vps45223599:/var/log# /etc/init.d/mysql start
[....] Starting mysql (via systemctl): mysql.serviceJob for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
failed!
root@vps45223599:/var/log# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1.41 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2023-01-10 21:20:58 UTC; 1min 15s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 1349 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Process: 1274 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR
Process: 1272 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 1271 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Main PID: 1349 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local sudo[1040]: pam_unix(sudo:session): session closed for user root
Jan 10 21:15:50 vps45223599.local sudo[1146]: root : TTY=pts/1 ; PWD=/var/log ; USER=root ; COMMAND=/bin/systemctl start mariadb
Jan 10 21:15:50 vps45223599.local sudo[1146]: pam_unix(sudo:session): session opened for user root by cbarca(uid=0)
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:15:51 vps45223599.local mysqld[1227]: 2023-01-10 21:15:51 139968422329728 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1227 ...
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:15:54 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:54 vps45223599.local sudo[1146]: pam_unix(sudo:session): session closed for user root
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
I don't know what happened. I also tried mysqlcheck:
root@vps45223599:/var/log# mysqlcheck --all-databases -p
Enter password:
mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory") when trying to connect
I don't know what else I should try. Can anyone help me, please?
Cheers
As @obe said, the likely issue is that Proxmox (or whatever is starting the container) is running systemd without sufficient privileges.
The devices.allow error is probably triggered by PrivateDevices=true in the systemd service file for MariaDB (seemingly confirmed by MDEV-13207, which unfortunately provides no further detail).
PrivateDevices=true restricts the service to a minimal set of device nodes:
/dev/null
/dev/zero
/dev/random
Based on this answer for a different device, doing the equivalent for these devices would be:
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:3 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:8 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:5 rwm'
The major/minor device numbers were determined with:
$ ls -la /dev/zero /dev/random /dev/null
crw-rw-rw-. 1 root root 1, 3 Jan 8 22:20 /dev/null
crw-rw-rw-. 1 root root 1, 8 Jan 8 22:20 /dev/random
crw-rw-rw-. 1 root root 1, 5 Jan 8 22:20 /dev/zero
Thanks all for the answers.
@danblack Nothing changed, it suddenly stopped working, but reading this forum I found the solution in this thread:
Can't start MariaDB on Debian 10
Basically, the solution is to delete/rename tc.log and restart the database:
mv -vi /var/lib/mysql/tc.log /root
service mysql restart
And MariaDB starts again.

FreeRADIUS & Pi - Constant failure

I'm new to FreeRADIUS and relatively familiar with Linux. I've never been this stumped by an issue before.
No matter what I do, or how I configure FreeRADIUS on my Pi, I end up with the following error when trying to start the service. The error just repeats.
I've played with permissions and wiped the Pi twice, followed many tutorials, and I still hit the same spot.
Can anyone help, please?
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit freeradius.service has begun execution.
░░
░░ The job identifier is 13806.
Mar 04 19:44:11 raspberrypi freeradius[4362]: FreeRADIUS Version 3.0.21
Mar 04 19:44:11 raspberrypi freeradius[4362]: Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
Mar 04 19:44:11 raspberrypi freeradius[4362]: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Mar 04 19:44:11 raspberrypi freeradius[4362]: PARTICULAR PURPOSE
Mar 04 19:44:11 raspberrypi freeradius[4362]: You may redistribute copies of FreeRADIUS under the terms of the
Mar 04 19:44:11 raspberrypi freeradius[4362]: GNU General Public License
Mar 04 19:44:11 raspberrypi freeradius[4362]: For more information about these matters, see the file named COPYRIGHT
Mar 04 19:44:11 raspberrypi freeradius[4362]: Starting - reading configuration files ...
Mar 04 19:44:11 raspberrypi freeradius[4362]: Debug state unknown (cap_sys_ptrace capability not set)
Mar 04 19:44:11 raspberrypi freeradius[4362]: Creating attribute Unix-Group
Mar 04 19:44:11 raspberrypi freeradius[4362]: Creating attribute LDAP-Group
Mar 04 19:44:11 raspberrypi freeradius[4362]: Please use tls_min_version and tls_max_version instead of disable_tlsv1
Mar 04 19:44:11 raspberrypi freeradius[4362]: Please use tls_min_version and tls_max_version instead of disable_tlsv1_2
Mar 04 19:44:11 raspberrypi freeradius[4362]: tls: Using cached TLS configuration from previous invocation
Mar 04 19:44:11 raspberrypi freeradius[4362]: tls: Using cached TLS configuration from previous invocation
Mar 04 19:44:11 raspberrypi freeradius[4362]: rlm_cache (cache_eap): Driver rlm_cache_rbtree (module rlm_cache_rbtree) loaded and linked
Mar 04 19:44:11 raspberrypi freeradius[4362]: rlm_detail (auth_log): 'User-Password' suppressed, will not appear in detail output
Mar 04 19:44:11 raspberrypi freeradius[4362]: rlm_ldap: libldap vendor: OpenLDAP, version: 20457
Mar 04 19:44:11 raspberrypi freeradius[4362]: rlm_ldap (ldap): Initialising connection pool
Mar 04 19:44:11 raspberrypi freeradius[4362]: rlm_mschap (mschap): using internal authentication
Mar 04 19:44:11 raspberrypi freeradius[4362]: Ignoring "sql" (see raddb/mods-available/README.rst)
Mar 04 19:44:11 raspberrypi freeradius[4362]: # Skipping contents of 'if' as it is always 'false' -- /etc/freeradius/3.0/sites-enabled/inner-tunnel:>
Mar 04 19:44:11 raspberrypi freeradius[4362]: radiusd: #### Skipping IP addresses and Ports ####
Mar 04 19:44:11 raspberrypi freeradius[4362]: Configuration appears to be OK
Mar 04 19:44:11 raspberrypi systemd[1]: freeradius.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit freeradius.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Mar 04 19:44:11 raspberrypi systemd[1]: freeradius.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit freeradius.service has entered the 'failed' state with result 'exit-code'.
Mar 04 19:44:11 raspberrypi systemd[1]: Failed to start FreeRADIUS multi-protocol policy server.
░░ Subject: A start job for unit freeradius.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit freeradius.service has finished with a failure.
░░
░░ The job identifier is 13806 and the job result is failed.
I actually found the answer. There was a minor typo in Google's instructions that I missed:
https://support.google.com/a/answer/9089736?hl=en#zippy=%2Cfreeradius
They use both .cer and .crt for the same certificate file, as shown below.
Follow these steps:
Install and configure FreeRADIUS at /etc/freeradius/3.0/.
Once FreeRADIUS is installed, you can add the LDAP configuration by installing the freeradius-ldap plugin.
sudo apt install freeradius freeradius-ldap
Copy the LDAP client key and cert files to /etc/freeradius/3.0/certs/ldap-client.key and /etc/freeradius/3.0/certs/ldap-client.crt respectively.
chown freeradius:freeradius /etc/freeradius/3.0/certs/ldap-client.*
chmod 640 /etc/freeradius/3.0/certs/ldap-client.*
Enable the LDAP module.
cd /etc/freeradius/3.0/mods-enabled/
ln -s ../mods-available/ldap ldap
Edit /etc/freeradius/3.0/mods-available/ldap.
a. ldap->server = 'ldaps://ldap.google.com:636'
b. identity = username from the application credentials
c. password = password from the application credentials
d. base_dn = 'dc=domain,dc=com'
e. tls->start_tls = no
f. tls->certificate_file = /etc/freeradius/3.0/certs/ldap-client.cer
g. tls->private_key_file = /etc/freeradius/3.0/certs/ldap-client.key
h. tls->require_cert = 'allow'
i. Comment out all fields in the breadcrumb representing the section 'ldap -> post-auth -> update'
Edit /etc/freeradius/3.0/sites-available/default.
This modifies the FreeRadius client connection. If you are not using the default client, be sure to update the relevant client (inner-tunnel or any custom client) that you have configured.
a. Modify the authorize section to add the following block at the bottom after the password authentication protocol (PAP) statement:
if (User-Password) {
update control {
Auth-Type := ldap
}
}
b. In the authorize section, enable LDAP by removing the '-' sign before it.
#
# The ldap module reads passwords from the LDAP database.
ldap
c. Modify the authenticate section by editing the Auth-Type LDAP block as follows:
# Auth-Type LDAP {
ldap
# }
d. Modify the authenticate section by editing the Auth-Type PAP block as follows:
Auth-Type PAP {
# pap
ldap
}

redis start error "Job for redis-server.service failed because a configured resource limit was exceeded."

How can I resolve this issue? Here is the error.
I tried to start redis-server on my nginx server, and it shows me the errors below.
I just followed this, but it doesn't work for me, and I am not sure how to resolve it.
root@li917-222:~# service redis restart
Job for redis-server.service failed because a configured resource limit was exceeded. See "systemctl status redis-server.service" and "journalctl -xe" for details.
root@li917-222:~# systemctl status redis-server.service
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2021-07-01 13:29:01 UTC; 39s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 13798 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
Process: 13794 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 13789 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
Process: 13784 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
Process: 13781 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
Process: 13776 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'resources'.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Service hold-off time over, scheduling restart.
Jul 01 13:29:01 li917-222 systemd[1]: Stopped Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Start request repeated too quickly.
Jul 01 13:29:01 li917-222 systemd[1]: Failed to start Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'start-limit-hit'.
root@li917-222:~# ExecStart=/usr/bin/redis-server /etc/redis/redis.conf --supervised systemd
/etc/redis/redis.conf: line 42: daemonize: command not found
/etc/redis/redis.conf: line 46: pidfile: command not found
/etc/redis/redis.conf: line 50: port: command not found
/etc/redis/redis.conf: line 59: tcp-backlog: command not found
/etc/redis/redis.conf: line 69: bind: warning: line editing not enabled
Try 'timeout --help' for more information.
/etc/redis/redis.conf: line 95: tcp-keepalive: command not found
/etc/redis/redis.conf: line 103: loglevel: command not found
/etc/redis/redis.conf: line 108: logfile: command not found
/etc/redis/redis.conf: line 123: databases: command not found
/etc/redis/redis.conf: line 147: save: command not found
/etc/redis/redis.conf: line 148: save: command not found
/etc/redis/redis.conf: line 149: save: command not found
/etc/redis/redis.conf: line 164: stop-writes-on-bgsave-error: command not found
/etc/redis/redis.conf: line 170: rdbcompression: command not found
/etc/redis/redis.conf: line 179: rdbchecksum: command not found
/etc/redis/redis.conf: line 182: dbfilename: command not found
backup.db dump.rdb exp.so root
/etc/redis/redis.conf: line 230: slave-serve-stale-data: command not found
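An aside on the tail of that transcript: pasting the unit-file line ExecStart=/usr/bin/redis-server /etc/redis/redis.conf ... into an interactive shell does not start redis. The shell parses ExecStart=/usr/bin/redis-server as a temporary variable assignment and then executes /etc/redis/redis.conf itself as a script, which is exactly what produces the cascade of "command not found" lines. A minimal reproduction with a throwaway config file:

```shell
# The word containing '=' becomes a variable assignment; the next word
# (the config file) is then executed as if it were a shell script.
conf="$(mktemp)"
printf 'daemonize no\nport 6379\n' > "$conf"
chmod +x "$conf"
ExecStart=/usr/bin/redis-server "$conf" 2>&1 | head -n 2
rm -f "$conf"
```

Each config directive is looked up as a command, so every line fails with a "not found" error, mirroring the output above.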

openstack error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error

On a compute node of the OpenStack environment, I added a bridge and restarted the network, but the network won't come up.
ovs-vsctl add-br br-eno1
ovs-vsctl add-port br-eno1 eno1
systemctl restart network.service
In the log, I can find the following errors:
Jul 09 12:58:46 sh-compute-c7k4-bay06 kvm[3000]: 1 guest now active
Jul 09 12:58:46 sh-compute-c7k4-bay06 kvm[3001]: 0 guests now active
Jul 09 12:58:47 sh-compute-c7k4-bay06 libvirtd[7510]: 2020-07-09 04:58:47.118+0000: 7510: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: openstack-nova-compute.service holdoff time over, scheduling restart.
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: Stopped OpenStack Nova Compute Server.
-- Subject: Unit openstack-nova-compute.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit openstack-nova-compute.service has finished shutting down.
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: Starting OpenStack Nova Compute Server...
-- Subject: Unit openstack-nova-compute.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit openstack-nova-compute.service has begun starting up.
Jul 09 12:58:49 sh-compute-c7k4-bay06 systemd[1]: Started OpenStack Nova Compute Server.
-- Subject: Unit openstack-nova-compute.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit openstack-nova-compute.service has finished starting up.
-- The start-up result is done.

Main process exited code=exited status=203/exec

I followed a flask+uwsgi+nginx tutorial like this one. However, it fails with this error:
Warning: The unit file, source configuration file or drop-ins of myproject.service changed on disk. Run 'systemctl daemon-reload' to reload units>
● myproject.service - uWSGI instance to serve myproject
Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-05-05 18:26:18 AEST; 20min ago
Main PID: 38304 (code=exited, status=203/EXEC)
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: Started uWSGI instance to serve myproject.
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Main process exited code=exited status=203/exec
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Failed with result 'exit-code'.
Here is /etc/systemd/system/myproject.service:
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=rh
WorkingDirectory=/home/rh/myproject
Environment="PATH=/home/rh/myproject/myprojectenv/bin"
ExecStart=/home/rh/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
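For context, status=203/EXEC means systemd itself failed to execute the ExecStart= binary, typically because the path does not exist or is not executable. A quick sanity check, using the path from the unit above:

```shell
# Does the ExecStart binary exist and carry the execute bit?
unit_bin="/home/rh/myproject/myprojectenv/bin/uwsgi"  # from ExecStart= above
if [ -x "$unit_bin" ]; then
    echo "ok: $unit_bin is executable"
else
    echo "problem: $unit_bin is missing or not executable"
fi
```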
"rh" is the user I run it as.
