I am trying to run a Flask app using uWSGI behind an nginx server. I keep getting a permission denied error every time I run my service.
/home/zerotouch/zerotouch/zerotouch.ini
[uwsgi]
module = wsgi
master = true
processes = 5
uid=nginx
gid=nginx
socket = /run/uwsgi/zerotouch.sock
chown-socket = zerotouch:nginx
chmod-socket = 660
vacuum = true
die-on-term = true
/etc/systemd/system/zerotouch.service
[Unit]
Description=uWSGI instance to serve zerotouch
After=network.target
[Service]
User=zerotouch
Group=nginx
WorkingDirectory=/home/zerotouch/zerotouch
Environment="PATH=/home/zerotouch/zerotouch/env/bin"
ExecStartPre=-/usr/bin/bash -c 'mkdir -p /run/uwsgi; chown zerotouch:nginx /run/uwsgi; chown zerotouch:nginx /home/zerotouch/zerotouch/env/bin/activate;'
ExecStart=/usr/bin/bash -c 'source /home/zerotouch/zerotouch/env/bin/activate;/home/zerotouch/zerotouch/env/bin/uwsgi --ini /home/zerotouch/zerotouch/zerotouch.ini'
[Install]
WantedBy=multi-user.target
Error
systemctl status zerotouch
● zerotouch.service - uWSGI instance to serve zerotouch
Loaded: loaded (/etc/systemd/system/zerotouch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2019-10-10 16:34:35 CEST; 8s ago
Process: 28843 ExecStart=/usr/bin/bash -c source /home/zerotouch/zerotouch/env/bin/activate;/home/zerotouch/zerotouch/env/bin/uwsgi --ini /home/zerotouch/zerotouch/zerotouch.ini (code=exited, status=1/FAILURE)
Process: 28839 ExecStartPre=/usr/bin/bash -c mkdir -p /run/uwsgi; chown zerotouch:nginx /run/uwsgi; chown zerotouch:nginx /home/zerotouch/zerotouch/env/bin/activate; (code=exited, status=127)
Main PID: 28843 (code=exited, status=1/FAILURE)
Oct 10 16:34:35 aj-poc-1 bash[28843]: detected binary path: /home/zerotouch/zerotouch/env/bin/uwsgi
Oct 10 16:34:35 aj-poc-1 bash[28843]: your processes number limit is 7259
Oct 10 16:34:35 aj-poc-1 bash[28843]: your memory page size is 4096 bytes
Oct 10 16:34:35 aj-poc-1 bash[28843]: detected max file descriptor number: 1024
Oct 10 16:34:35 aj-poc-1 bash[28843]: lock engine: pthread robust mutexes
Oct 10 16:34:35 aj-poc-1 bash[28843]: thunder lock: disabled (you can enable it with --thunder-lock)
Oct 10 16:34:35 aj-poc-1 bash[28843]: bind(): Permission denied [core/socket.c line 230]
Oct 10 16:34:35 aj-poc-1 systemd[1]: zerotouch.service: main process exited, code=exited, status=1/FAILURE
Oct 10 16:34:35 aj-poc-1 systemd[1]: Unit zerotouch.service entered failed state.
Oct 10 16:34:35 aj-poc-1 systemd[1]: zerotouch.service failed.
There were permission problems when creating and writing the socket.
So I went to /run/uwsgi and used ls -lhtr to get an overview of the file permissions.
Then I created a blank socket file zerotouch.sock using vi zerotouch.sock
and gave ownership of it to user zerotouch and group nginx:
chown zerotouch:nginx -R /run/uwsgi
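As an alternative to pre-creating /run/uwsgi by hand in ExecStartPre, systemd can manage the runtime directory itself. A minimal sketch of the [Service] section, assuming a systemd version that supports RuntimeDirectory= (v211 or later):
[Service]
User=zerotouch
Group=nginx
WorkingDirectory=/home/zerotouch/zerotouch
# systemd creates /run/uwsgi owned by User/Group on start and removes it on stop,
# replacing the ExecStartPre mkdir/chown steps
RuntimeDirectory=uwsgi
RuntimeDirectoryMode=0775
ExecStart=/home/zerotouch/zerotouch/env/bin/uwsgi --ini /home/zerotouch/zerotouch/zerotouch.ini
Invoking the uwsgi binary inside the virtualenv directly also makes the source .../activate step unnecessary, since the binary already resolves the virtualenv's interpreter.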
Related
I'm trying to run devstack on an Ubuntu 16.04 VM using ./stack.sh:
+lib/etcd3:start_etcd3:61 sudo systemctl daemon-reload
+lib/etcd3:start_etcd3:62 sudo systemctl enable devstack@etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/devstack@etcd.service to
/etc/systemd/system/devstack@etcd.service.
+lib/etcd3:start_etcd3:63 sudo systemctl start devstack@etcd.service
Job for devstack@etcd.service failed because the control process exited with error code. See "systemctl status
devstack@etcd.service" and "journalctl -xe" for details.
+lib/etcd3:start_etcd3:1 exit_trap
+./stack.sh:exit_trap:515 local r=1
++./stack.sh:exit_trap:516 jobs -p
+./stack.sh:exit_trap:516 jobs=
+./stack.sh:exit_trap:519 [[ -n '' ]]
+./stack.sh:exit_trap:525 '[' -f /tmp/tmp.0IwC5vOcG5 ']'
+./stack.sh:exit_trap:526 rm /tmp/tmp.0IwC5vOcG5
+./stack.sh:exit_trap:530 kill_spinner
+./stack.sh:kill_spinner:425 '[' '!' -z '' ']'
+./stack.sh:exit_trap:532 [[ 1 -ne 0 ]]
+./stack.sh:exit_trap:533 echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:535 type -p generate-subunit
+./stack.sh:exit_trap:536 generate-subunit 1524706916 269 fail
+./stack.sh:exit_trap:538 [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:541 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2018-04-26-014626.txt for details
+./stack.sh:exit_trap:550 exit 1
When I then run the command sudo systemctl status devstack@etcd.service:
stack@openstack-demo-vm:/opt/stack/devstack$ sudo systemctl status devstack@etcd.service
● devstack@etcd.service - Devstack devstack@etcd.service
Loaded: loaded (/etc/systemd/system/devstack@etcd.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Thu 2018-04-26 01:46:27 UTC; 1min 27s ago
Process: 122376 ExecStart=/opt/stack/bin/etcd --name openstack-demo-vm --data-dir /opt/stack/data/etcd --initial-
cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster openst
Main PID: 122376 (code=exited, status=1/FAILURE)
Apr 26 01:46:26 openstack-demo-vm systemd[1]: devstack@etcd.service: Main process exited, code=exited, status=1/FAILURE
Apr 26 01:46:26 openstack-demo-vm systemd[1]: Failed to start Devstack devstack@etcd.service.
Apr 26 01:46:26 openstack-demo-vm systemd[1]: devstack@etcd.service: Unit entered failed state.
Apr 26 01:46:26 openstack-demo-vm systemd[1]: devstack@etcd.service: Failed with result 'exit-code'.
Apr 26 01:46:27 openstack-demo-vm systemd[1]: devstack@etcd.service: Service hold-off time over, scheduling restart.
Apr 26 01:46:27 openstack-demo-vm systemd[1]: Stopped Devstack devstack@etcd.service.
Apr 26 01:46:27 openstack-demo-vm systemd[1]: devstack@etcd.service: Start request repeated too quickly.
Apr 26 01:46:27 openstack-demo-vm systemd[1]: Failed to start Devstack devstack@etcd.service.
While etcd runs on the VM:
stack@openstack-demo-vm:/opt/stack/devstack$ systemctl status etcd
● etcd.service - etcd - highly-available key value store
Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-04-26 00:12:18 UTC; 1h 58min ago
Docs: https://github.com/coreos/etcd
man:etcd
Main PID: 4425 (etcd)
Tasks: 9
Memory: 9.3M
CPU: 22.020s
CGroup: /system.slice/etcd.service
└─4425 /usr/bin/etcd
What am I missing?
Well, you may simply add the following in your local.conf:
disable_service etcd3
And then run stack.sh.
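For context, disable_service goes in the local.conf at the root of the devstack checkout. A minimal sketch, with placeholder passwords:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# skip the etcd3 service that fails to start
disable_service etcd3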
After some research and asking around in OpenStack and devstack forums, disabling etcd in stack.sh helped to resolve this.
Steps:
Edit the file /opt/stack/devstack/stack.sh
Comment out the lines below (you will find them at around line 1035):
# Start Services
# ==============
# Dstat
# -----
# A better kind of sysstat, with the top process per time slice
#start_dstat
# Etcd
# -----
# etcd is a distributed key value store that provides a reliable way
# to store data across a cluster of machines
#if is_service_enabled etcd3; then
# start_etcd3
#fi
Save the above file.
Run ./unstack.sh
Run ./stack.sh
This might be specific to openSUSE / systemd.
I'm having trouble mounting an encrypted loopback file using the procedure described on the SDB:Encrypted filesystems knowledge base. I get this behaviour:
[mjl@tesla:~]
[11:12] $ sudo systemctl start /home/mjl/key
Job for home-mjl-key.mount failed. See "systemctl status home-mjl-key.mount" and "journalctl -xe" for details.
[mjl@tesla:~]
[11:12] 1 $ sudo systemctl status home-mjl-key.mount
● home-mjl-key.mount - /home/mjl/key
Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2018-03-11 11:12:41 AEDT; 3s ago
Where: /home/mjl/key
What: /home/mjl/.tomb
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
Process: 12949 ExecMount=/usr/bin/mount /home/mjl/.tomb /home/mjl/key -t crypt -o loop,user,acl,user_xattr (code=exited, status=32)
Mar 11 11:12:41 tesla systemd[1]: Mounting /home/mjl/key...
Mar 11 11:12:41 tesla mount[12949]: mount: unknown filesystem type 'crypt'
Mar 11 11:12:41 tesla systemd[1]: home-mjl-key.mount: Mount process exited, code=exited status=32
Mar 11 11:12:41 tesla systemd[1]: Failed to mount /home/mjl/key.
Mar 11 11:12:41 tesla systemd[1]: home-mjl-key.mount: Unit entered failed state.
[mjl@tesla:~]
[11:12] 3 $
The /home/mjl/.tomb loopback file was created using YaST Partitioner; I specified that I did not want it mounted at system start time, but that users should be allowed to mount it.
So it created the file, added an entry to /etc/crypttab, and also this entry to /etc/fstab:
[mjl@tesla:~]
[11:12] 3 $ tail -n1 /etc/fstab
/home/mjl/.tomb /home/mjl/key crypt loop,user,noauto,acl,user_xattr,nofail 0 0
[mjl@tesla:~]
[11:15]$
There is the 'crypt' filesystem type.
My question is: how should I be mounting this as a user? Is systemd failing because of the filesystem type, or because I haven't told it the encryption key?
I've also tried mounting directly:
[mjl@tesla:~]
[11:16]$ sudo mount /home/mjl/key
mount: unknown filesystem type 'crypt'
[mjl@tesla:~]
The same error. So I guess I'm not mounting it correctly. Do I need to do something with cryptsetup?
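For what it's worth, a hedged sketch of the manual route, assuming YaST created the .tomb file as a LUKS container (the mapper name tomb is arbitrary, and recent cryptsetup versions attach a loop device automatically when given a regular file):
sudo cryptsetup open /home/mjl/.tomb tomb   # prompts for the passphrase
sudo mount /dev/mapper/tomb /home/mjl/key   # mount the mapped device, not the file
sudo umount /home/mjl/key                   # when done
sudo cryptsetup close tomb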
Going to my app produces a 502 Bad Gateway error. I found out that it is because my how_lit.service is failing, but I am having trouble finding out why.
I tried editing the application and the ini file, but cannot figure out what's wrong.
The Nginx and uWSGI services are up and running fine.
Service Status:
lit@digitalocean:~/howlit$ sudo service how_lit status
[sudo] password for lit:
● how_lit.service - uWSGI instance to serve how lit rest api
Loaded: loaded (/etc/systemd/system/how_lit.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2016-08-04 00:30:44 EDT; 5 days ago
Process: 14294 ExecStart=/home/lit/howlit/env/bin/uwsgi --ini /home/lit/howlit/howlit.ini (code=exited, status=1/FAILURE)
Main PID: 14294 (code=exited, status=1/FAILURE)
Aug 04 00:30:44 digitalocean systemd[1]: Started uWSGI instance to serve how lit rest api.
Aug 04 00:30:44 digitalocean uwsgi[14294]: [uWSGI] getting INI configuration from /home/lit/howlit/howlit.ini
Aug 04 00:30:44 digitalocean systemd[1]: how_lit.service: Main process exited, code=exited, status=1/FAILURE
Aug 04 00:30:44 digitalocean systemd[1]: how_lit.service: Unit entered failed state.
Aug 04 00:30:44 digitalocean systemd[1]: how_lit.service: Failed with result 'exit-code'.
Directory and Permissions:
lit@digitalocean:~/howlit$ ls -l .
total 16
drwx---r-x 6 lit www-data 4096 Jul 29 11:47 env
-rwx---r-x 1 lit www-data 202 Aug 3 23:29 howlit.ini
-rwx---r-x 1 lit www-data 1203 Aug 3 23:01 how_lit_restapi.py
-rwxr-xr-x 1 lit www-data 72 Aug 3 23:27 wsgi.py
/etc/systemd/system/how_lit.service:
lit@digitalocean:~/howlit$ cat /etc/systemd/system/how_lit.service
[Unit]
Description=uWSGI instance to serve how lit rest api
After=network.target
[Service]
User=lit
Group=www-data
WorkingDirectory=/home/lit/howlit/
Environment="PATH=/home/lit/howlit/env/bin"
ExecStart=/home/lit/howlit/env/bin/uwsgi --ini /home/lit/howlit/howlit.ini
[Install]
WantedBy=multi-user.target
howlit.ini file:
lit@digitalocean:~/howlit$ cat howlit.ini
[uwsgi]
module = wsgi:app
uid = lit
gid = www-data
master = true
processes = 5
socket = how_lit_restapi.sock
chmod-sock = 666
vacum = true
die-on-term = true
gto = /var/log/uwsgi/%n.log
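(As an aside: several option names in this file look misspelled, and uWSGI silently ignores options it does not recognize unless strict mode is on. A corrected sketch of those lines, assuming gto was meant to be logto:
chmod-socket = 666
vacuum = true
logto = /var/log/uwsgi/%n.log
# optional: make uWSGI fail loudly on unrecognized option names
strict = true
)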
Tried running it by hand:
lit@digitalocean:~/howlit$ /home/lit/howlit/env/bin/uwsgi --ini /home/lit/howlit/howlit.ini
[uWSGI] getting INI configuration from /home/lit/howlit/howlit.ini
*** Starting uWSGI 2.0.13.1 (64bit) on [Tue Aug 9 18:28:25 2016] ***
compiled with version: 5.4.0 20160609 on 29 July 2016 11:48:08
os: Linux-4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016
nodename: digitalocean
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/lit/howlit
detected binary path: /home/lit/howlit/env/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
your processes number limit is 1896
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
bind(): Permission denied [core/socket.c line 230]
Permission error again?
SOLVED IT by moving my socket into /tmp, but I was still getting the bad gateway error when navigating to my site :(
Solved my own problem.
First I checked my services.
sudo service nginx status
sudo service uwsgi status
sudo service how_lit status
Then I saw them all up and running, but I was still getting the bad gateway error. After checking the logs and finding no errors there, I had to assume the problem was in my configs.
Then I realized my mistake: I had never restarted all of it, just certain parts at certain times. So I restarted every single one, as such:
sudo service nginx restart
sudo service uwsgi restart
sudo service how_lit restart
now it works.
About the permission issue: I worked around it by putting the socket into the /tmp directory, so that www-data group users as well as root can access it. I learned that the service user needs to be able to create the socket, and nginx needs access to it.
By the way, I later moved it out of /tmp for production, as I was told that was not best practice.
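For reference, a sketch of the more conventional layout, with the socket in a dedicated runtime directory instead of /tmp (paths are illustrative, and /run/howlit would need to be created for the service user, e.g. with RuntimeDirectory=howlit as in the unit sketch earlier):
# howlit.ini
socket = /run/howlit/how_lit_restapi.sock
chown-socket = lit:www-data
chmod-socket = 660
# matching nginx server block
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/howlit/how_lit_restapi.sock;
}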
I am trying to install MariaDB and am getting the following issue.
[root@localhost ~]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details.
I tried 'systemctl status mariadb.service' and 'journalctl -xn' and follows the details.
[root@localhost ~]# systemctl status mariadb.service
mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled)
Active: failed (Result: exit-code) since Sun 2014-09-21 17:19:44 IST; 23s ago
Process: 2712 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=1/FAILURE)
Process: 2711 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
Process: 2683 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
Main PID: 2711 (code=exited, status=0/SUCCESS)
Sep 21 17:19:42 localhost.localdomain mysqld_safe[2711]: 140921 17:19:42 mysqld_safe Logging to '/var/lib/mysql/localhost.localdomain.err'.
Sep 21 17:19:42 localhost.localdomain mysqld_safe[2711]: 140921 17:19:42 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Sep 21 17:19:43 localhost.localdomain mysqld_safe[2711]: 140921 17:19:43 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdoma...d ended
Sep 21 17:19:44 localhost.localdomain systemd[1]: mariadb.service: control process exited, code=exited status=1
Sep 21 17:19:44 localhost.localdomain systemd[1]: Failed to start MariaDB database server.
Sep 21 17:19:44 localhost.localdomain systemd[1]: Unit mariadb.service entered failed state.
[root@localhost ~]# journalctl -xn
-- Logs begin at Sun 2014-09-21 02:33:29 IST, end at Sun 2014-09-21 17:20:11 IST. --
Sep 21 17:16:26 localhost.localdomain systemd[1]: Started dnf makecache.
-- Subject: Unit dnf-makecache.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dnf-makecache.service has finished starting up.
--
-- The start-up result is done.
Sep 21 17:18:11 localhost.localdomain NetworkManager[683]: <warn> nl_recvmsgs() error: (-33) Dump inconsistency detected, interrupted
Sep 21 17:19:42 localhost.localdomain systemd[1]: Starting MariaDB database server...
-- Subject: Unit mariadb.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has begun starting up.
Sep 21 17:19:42 localhost.localdomain mysqld_safe[2711]: 140921 17:19:42 mysqld_safe Logging to '/var/lib/mysql/localhost.localdomain.err'.
Sep 21 17:19:42 localhost.localdomain mysqld_safe[2711]: 140921 17:19:42 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Sep 21 17:19:43 localhost.localdomain mysqld_safe[2711]: 140921 17:19:43 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid end
Sep 21 17:19:44 localhost.localdomain systemd[1]: mariadb.service: control process exited, code=exited status=1
Sep 21 17:19:44 localhost.localdomain systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Sep 21 17:19:44 localhost.localdomain systemd[1]: Unit mariadb.service entered failed state.
Sep 21 17:20:11 localhost.localdomain NetworkManager[683]: <warn> nl_recvmsgs() error: (-33) Dump inconsistency detected, interrupted
Can anyone please help?
I have tried uninstalling and reinstalling many times but received the same error.
Thanks in advance.
Most of the time, if the system journal (journalctl) doesn't show what the problem was, the MariaDB error log (located at /var/lib/mysql/localhost.localdomain.err) does. Looking into that file, you usually see what the problem is.
Most commonly, errors that do not disappear after a reinstallation mean that your data directory (by default /var/lib/mysql/) is corrupted and the database needs to be reinitialized with mysql_install_db. To make sure you do a clean installation, remove all files located in the data directory and then run sudo mysql_install_db --user=mysql.
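A sketch of that sequence, assuming the default data directory (warning: this destroys all existing databases, so back up anything you need first):
sudo systemctl stop mariadb
sudo rm -rf /var/lib/mysql/*        # wipes every database
sudo mysql_install_db --user=mysql  # re-creates the system tables
sudo systemctl start mariadb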
I solved it as follows:
After installing
Run: > mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/
Then: > mysql_secure_installation
And then: systemctl start mariadb
With this, it was resolved.
A quick update for anyone coming here through a Web search.
I had a "failed to start" message following a Debian 9 -> Debian 10 (Buster) in-place server upgrade, and after a bit of digging I found that the following line in /etc/mysql/my.cnf needed updating:
From:
[mysqld]
: (other stuff)
:
innodb_large_prefix
To:
[mysqld]
: (other stuff)
:
innodb_large_prefix = "ON"
The clue was the following lines in /var/log/mysql/error.log:
2020-06-06 16:41:24 0 [ERROR] /usr/sbin/mysqld: option '--innodb-large-prefix' requires an argument
2020-06-06 16:41:24 0 [ERROR] Parsing options for plugin 'InnoDB' failed.
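A quick way to see which options the server will actually receive from the config files is my_print_defaults, which ships with the server packages (a sketch; the section name must match your config):
# print every option the [mysqld] section passes to the server
my_print_defaults mysqld | grep innodb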
Not sure about your case, but you can check whether the MariaDB/MySQL client was deleted accidentally.
In my case, I had deleted the MariaDB client repo from the shared file, so I reinstalled the client:
sudo apt-get install libmariadb-dev
Note: before installing the client, do one thing. For a Rails app, just change the mysql version in the Gemfile and try bundle install. If it is a MariaDB client issue, it will throw an error mentioning that you need to install the MariaDB client or the mysql2 client:
sudo apt-get install libmariadb-dev
OR
sudo apt-get install libmysqlclient-dev
Please refer to the other answers in case it's not a MariaDB client issue 😊
This solved it for me: an old mysqld process was still bound to port 3306. Find it with:
netstat -tulpn | grep LISTEN
Now look for the mysqld process in the output:
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 32029/mysqld
Kill that process; in my case its PID is 32029:
kill 32029
Now start MariaDB again with systemctl start mariadb.service
I tried to integrate Plone with a systemd-based startup (on openSUSE 12.3).
As a first attempt, I have a very simple plone.service file:
[Unit]
Description=Plone content management system
After=network.target
[Service]
Type=simple
ExecStart=/srv/plone/zeocluster/bin/plonectl start
[Install]
WantedBy=multi-user.target
Checking with systemctl status plone I see that the processes get started, but they immediately vanish again. I've also tried Type=Daemon, but the end result is the same.
Any hints where my error is?
The service actually finds and executes the plonectl script successfully; the processes just die quickly:
linux-wezo:/etc/systemd/system # systemctl start plone.service
linux-wezo:/etc/systemd/system # systemctl status plone.service
plone.service - Plone content management system
Loaded: loaded (/etc/systemd/system/plone.service; disabled)
Active: inactive (dead) since Mon, 2013-03-18 22:00:50 CET; 1s ago
Process: 25494 ExecStart=/srv/plone/zeocluster/bin/plonectl start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/plone.service
Mar 18 22:00:42 linux-wezo.site systemd[1]: Starting Plone content management system...
Mar 18 22:00:42 linux-wezo.site systemd[1]: Started Plone content management system.
Mar 18 22:00:43 linux-wezo.site plonectl[25494]: zeoserver: .
Mar 18 22:00:43 linux-wezo.site plonectl[25494]: daemon process started, pid=25502
Mar 18 22:00:46 linux-wezo.site plonectl[25494]: client1: .
Mar 18 22:00:46 linux-wezo.site plonectl[25494]: daemon process started, pid=25507
Mar 18 22:00:49 linux-wezo.site plonectl[25494]: client2: .
Mar 18 22:00:49 linux-wezo.site plonectl[25494]: daemon process started, pid=25522
I do have a SysV-style init script that works via systemctl, but I think a service file would be great, since it should be more generic than the various init scripts floating around.
The issue is that plonectl is not a daemon; it is a wrapper script that starts Zope. You need to change the type to forking and probably tell systemd where to find the PID file.
Plonectl forks the daemon. Try this in plone.service:
[Unit]
Description=Plone content management system
After=network.target
ConditionPathExists=/srv/plone/zeocluster/bin/plonectl
[Service]
Type=forking
ExecStart=/srv/plone/zeocluster/bin/plonectl start
ExecStop=/srv/plone/zeocluster/bin/plonectl stop
ExecReload=/srv/plone/zeocluster/bin/plonectl restart
[Install]
WantedBy=multi-user.target
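If systemd still cannot tell which forked process is the main one, pointing it at a PID file may help. A sketch with a hypothetical path, since the real pidfile location depends on the buildout configuration (look under zeocluster/var/):
[Service]
Type=forking
# hypothetical path; check your zeocluster's var/ directory for the actual pidfile
PIDFile=/srv/plone/zeocluster/var/zeoserver.pid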