Hello, and thanks for taking some of your time to look at my problem.
I'm following the detailed OpenStack installation steps (https://docs.openstack.org/keystone/rocky/install/keystone-install-rdo.html), to no avail.
I've tried changing the service's port 5000, but the result is the same.
Any insights are most welcome.
[root@localhost i-openstack]# systemctl enable httpd.service
[root@localhost i-openstack]# systemctl start httpd.service
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
[root@localhost i-openstack]# journalctl -xe
Oct 08 05:12:39 localhost.localdomain systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: Unit httpd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit httpd.service has failed.
--
-- The result is failed.
Oct 08 05:12:39 localhost.localdomain systemd[1]: Unit httpd.service entered failed state.
Oct 08 05:12:39 localhost.localdomain systemd[1]: httpd.service failed.
Oct 08 05:12:39 localhost.localdomain polkitd[1824]: Unregistered Authentication Agent for unix-process:4229:106865 (system bus name :1.42, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 05:27:21 localhost.localdomain polkitd[1824]: Registered Authentication Agent for unix-process:4930:195069 (system bus name :1.43 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
Oct 08 05:27:21 localhost.localdomain systemd[1]: Reloading.
Oct 08 05:27:21 localhost.localdomain polkitd[1824]: Unregistered Authentication Agent for unix-process:4930:195069 (system bus name :1.43, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 05:27:26 localhost.localdomain polkitd[1824]: Registered Authentication Agent for unix-process:4950:195568 (system bus name :1.44 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
Oct 08 05:27:26 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...
-- Subject: Unit httpd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit httpd.service has begun starting up.
Oct 08 05:27:26 localhost.localdomain httpd[4956]: (13)Permission denied: AH00072: make_sock: could not bind to address [::]:5000
Oct 08 05:27:26 localhost.localdomain httpd[4956]: (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:5000
Oct 08 05:27:26 localhost.localdomain httpd[4956]: no listening sockets available, shutting down
Oct 08 05:27:26 localhost.localdomain httpd[4956]: AH00015: Unable to open logs
Oct 08 05:27:26 localhost.localdomain systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
Oct 08 05:27:26 localhost.localdomain kill[4958]: kill: cannot find process ""
Oct 08 05:27:26 localhost.localdomain systemd[1]: httpd.service: control process exited, code=exited status=1
Oct 08 05:27:26 localhost.localdomain systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: Unit httpd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit httpd.service has failed.
--
-- The result is failed.
Oct 08 05:27:26 localhost.localdomain systemd[1]: Unit httpd.service entered failed state.
Oct 08 05:27:26 localhost.localdomain systemd[1]: httpd.service failed.
Oct 08 05:27:26 localhost.localdomain polkitd[1824]: Unregistered Authentication Agent for unix-process:4950:195568 (system bus name :1.44, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 05:34:01 localhost.localdomain polkitd[1824]: Registered Authentication Agent for unix-process:5222:235020 (system bus name :1.45 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
Oct 08 05:34:01 localhost.localdomain systemd[1]: Reloading.
Oct 08 05:34:01 localhost.localdomain polkitd[1824]: Unregistered Authentication Agent for unix-process:5222:235020 (system bus name :1.45, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 08 05:34:03 localhost.localdomain polkitd[1824]: Registered Authentication Agent for unix-process:5240:235248 (system bus name :1.46 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
Oct 08 05:34:03 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...
-- Subject: Unit httpd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
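A "Permission denied" when httpd tries to bind a non-standard port on CentOS/RHEL is very often SELinux. Assuming the audit and policycoreutils-python packages are installed (they usually are on a default install), the denial can be confirmed with:
sudo ausearch -m avc -ts recent | grep 5000
sudo semanage port -l | grep http_port_t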
SOLUTION: It seemed I had to disable SELinux.
Disable it temporarily:
sudo setenforce 0
Restart the httpd service:
service httpd restart
Disable SELinux persistently (reboot required):
nano /etc/selinux/config
SELINUX=disabled
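An alternative that keeps SELinux enabled is to label port 5000 so httpd is allowed to bind it. This is only a sketch: on CentOS 7 the port may already be defined in policy, in which case -m (modify) is needed instead of -a (add):
sudo semanage port -m -t http_port_t -p tcp 5000
sudo systemctl restart httpd.service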
Related
I have some trouble with my MariaDB server. It was working fine, but it won't start anymore. When I try to start the server, it fails:
root@vps45223599:/var/log# /etc/init.d/mysql start
[....] Starting mysql (via systemctl): mysql.serviceJob for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
failed!
root@vps45223599:/var/log# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1.41 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2023-01-10 21:20:58 UTC; 1min 15s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 1349 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Process: 1274 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR
Process: 1272 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 1271 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Main PID: 1349 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local sudo[1040]: pam_unix(sudo:session): session closed for user root
Jan 10 21:15:50 vps45223599.local sudo[1146]: root : TTY=pts/1 ; PWD=/var/log ; USER=root ; COMMAND=/bin/systemctl start mariadb
Jan 10 21:15:50 vps45223599.local sudo[1146]: pam_unix(sudo:session): session opened for user root by cbarca(uid=0)
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:15:51 vps45223599.local mysqld[1227]: 2023-01-10 21:15:51 139968422329728 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1227 ...
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:15:54 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:54 vps45223599.local sudo[1146]: pam_unix(sudo:session): session closed for user root
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
I don't know what happened. I also tried mysqlcheck:
root@vps45223599:/var/log# mysqlcheck --all-databases -p
Enter password:
mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory") when trying to connect
I don't know what else I should try. Can anyone help me, please?
Cheers
As @obe said, the issue is likely that Proxmox (or whatever is starting systemd here) is doing so without sufficient privileges.
The devices.allow error is probably triggered by the PrivateDevices=true setting in the systemd service file for MariaDB (this seems confirmed by MDEV-13207, which unfortunately didn't provide more detail).
PrivateDevices=true grants access to:
/dev/null
/dev/zero
/dev/random
Based on this answer for a different device, the equivalent for these devices would be:
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:3 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:8 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:5 rwm'
Major/minor device code determined by:
$ ls -la /dev/zero /dev/random /dev/null
crw-rw-rw-. 1 root root 1, 3 Jan 8 22:20 /dev/null
crw-rw-rw-. 1 root root 1, 8 Jan 8 22:20 /dev/random
crw-rw-rw-. 1 root root 1, 5 Jan 8 22:20 /dev/zero
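If changing the container configuration isn't an option, another (less isolated) workaround, sketched here on the assumption that the unit really is failing on PrivateDevices, is a drop-in that turns the setting off:
sudo systemctl edit mariadb.service
(in the editor, add the following two lines, then save and exit)
[Service]
PrivateDevices=false
sudo systemctl daemon-reload
sudo systemctl restart mariadb.service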
Thanks all for the answers.
@danblack Nothing changed; it suddenly stopped working, but reading this forum I found the solution in this thread:
Can't start MariaDB on debian 10
Basically, the solution is:
"Solved it by deleting/renaming the tc.log (mv -vi /var/lib/mysql/tc.log /root) and restarting the database (service mysql restart)."
And MariaDB started again.
On a compute node of my OpenStack environment, I added a bridge and restarted the network, but the network won't come up.
ovs-vsctl add-br br-eno1
ovs-vsctl add-port br-eno1 eno1
systemctl restart network.service
In the log, I can find the following errors:
Jul 09 12:58:46 sh-compute-c7k4-bay06 kvm[3000]: 1 guest now active
Jul 09 12:58:46 sh-compute-c7k4-bay06 kvm[3001]: 0 guests now active
Jul 09 12:58:47 sh-compute-c7k4-bay06 libvirtd[7510]: 2020-07-09 04:58:47.118+0000: 7510: error virNetSocketReadWire:1806 : End of f
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: openstack-nova-compute.service holdoff time over, scheduling restart.
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: Stopped OpenStack Nova Compute Server.
-- Subject: Unit openstack-nova-compute.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit openstack-nova-compute.service has finished shutting down.
Jul 09 12:58:47 sh-compute-c7k4-bay06 systemd[1]: Starting OpenStack Nova Compute Server...
-- Subject: Unit openstack-nova-compute.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit openstack-nova-compute.service has begun starting up.
Jul 09 12:58:49 sh-compute-c7k4-bay06 systemd[1]: Started OpenStack Nova Compute Server.
-- Subject: Unit openstack-nova-compute.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit openstack-nova-compute.service has finished starting up.
-- The start-up result is done.
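For reference: when a physical NIC such as eno1 is enslaved to an OVS bridge, its IP configuration normally has to move onto the bridge, otherwise connectivity is lost when the network service restarts. A minimal sketch of CentOS-style ifcfg files, assuming a static address and that the OVS network-scripts integration is installed (all names and values here are illustrative):
# /etc/sysconfig/network-scripts/ifcfg-br-eno1
DEVICE=br-eno1
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
# /etc/sysconfig/network-scripts/ifcfg-eno1
DEVICE=eno1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eno1
ONBOOT=yes
BOOTPROTO=none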
I am installing Apache CloudStack on Ubuntu 16.04, but after running the CloudStack setup, when I start the cloudstack-management service it displays the following errors. (I have installed tomcat7; tomcat6 is not installed.)
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for cloudstack-management.service failed because the control process exited with error code. See "systemctl status cloudstack-management.service" and "journalctl -xe" for details.
I checked with systemctl status cloudstack-management.service and it displays the following:
cloudstack-management.service - LSB: Start Tomcat (CloudStack).
Loaded: loaded (/etc/init.d/cloudstack-management; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-08-25 21:53:07 IST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 26684 ExecStart=/etc/init.d/cloudstack-management start (code=exited, status=1/FAILURE)
Aug 25 21:53:07 dhaval-pc systemd[1]: Starting LSB: Start Tomcat (CloudStack)....
Aug 25 21:53:07 dhaval-pc cloudstack-management[26684]: * cloudstack-management is not installed
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Control process exited, code=exited status=1
Aug 25 21:53:07 dhaval-pc systemd[1]: Failed to start LSB: Start Tomcat (CloudStack)..
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Unit entered failed state.
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Failed with result 'exit-code'.
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
What change should I make in the /etc/init.d/cloudstack-management file?
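As a first step, the warning in the output can be taken at its word: reload the unit definitions and retry. This is only a sketch of the obvious checks; it won't by itself explain the "cloudstack-management is not installed" message coming from the init script:
sudo systemctl daemon-reload
sudo systemctl start cloudstack-management
sudo systemctl status cloudstack-management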
Running on Ubuntu 16.04.
Elastic version: 6.2.4
Kibana version: 6.2.4
Elastic is up and running on port 9200.
Kibana suddenly stopped working. I am trying to run the start command sudo systemctl start kibana.service, and I get the following error on the service stdout (journalctl -fu kibana.service):
Started Kibana.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Unit entered failed state.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Failed with result 'exit-code'.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Aug 27 12:54:33 ubuntuserver systemd[1]: Stopped Kibana.
No details in this log.
My YAML configuration has only these properties:
server.port: 5601
server.host: "0.0.0.0"
I have also tried writing to a log file (hoping for more info there) by adding this configuration:
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
I gave the folder/file full access control, but nothing is being written there (it's still writing to stdout).
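If nothing ever appears in the configured log file, a common cause is that the Kibana process cannot create or write it. Assuming the deb/rpm package layout, where the service runs as the kibana user, a sketch of the usual fix would be:
sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana
sudo systemctl restart kibana.service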
I have a simple nginx config that is syntactically correct. I installed nginx using Chef, and the Chef script works fine.
But when I check the status of nginx, I see it is in a failed state. If I reload nginx, it goes into a failed state again. journalctl -xn also doesn't give much of an error, except:
[root@localhost vagrant]# journalctl -xn
-- Logs begin at Wed 2016-10-26 04:28:18 UTC, end at Wed 2016-10-26 04:45:00 UTC. --
Oct 26 04:45:00 localhost.localdomain kill[17003]: -s, --signal <sig> send specified signal
Oct 26 04:45:00 localhost.localdomain kill[17003]: -q, --queue <sig> use sigqueue(2) rather than kill(2)
Oct 26 04:45:00 localhost.localdomain kill[17003]: -p, --pid print pids without signaling them
Oct 26 04:45:00 localhost.localdomain kill[17003]: -l, --list [=<signal>] list signal names, or convert one to a name
Oct 26 04:45:00 localhost.localdomain kill[17003]: -L, --table list signal names and numbers
Oct 26 04:45:00 localhost.localdomain kill[17003]: -h, --help display this help and exit
Oct 26 04:45:00 localhost.localdomain kill[17003]: -V, --version output version information and exit
Oct 26 04:45:00 localhost.localdomain kill[17003]: For more details see kill(1).
Oct 26 04:45:00 localhost.localdomain systemd[1]: nginx.service: control process exited, code=exited status=1
Oct 26 04:45:00 localhost.localdomain systemd[1]: Unit nginx.service entered failed state.
[root@localhost vagrant]#
nginx -t is successful and I see nothing in /var/log/nginx/errors.log.
Is there any other way to troubleshoot exactly why this fails?
systemctl status nginx.service gives:
[root@localhost vagrant]# systemctl status nginx.service
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; static)
Active: failed (Result: exit-code) since Wed 2016-10-26 04:45:00 UTC; 9h ago
Process: 17003 ExecStop=/bin/kill -s QUIT $MAINPID (code=exited, status=1/FAILURE)
Process: 16999 ExecStart=/opt/nginx-1.10.1/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 16998 ExecStartPre=/opt/nginx-1.10.1/sbin/nginx -t (code=exited, status=0/SUCCESS)
Main PID: 16999 (code=exited, status=0/SUCCESS)
Oct 26 04:45:00 localhost.localdomain kill[17003]: -s, --signal <sig> send specified signal
Oct 26 04:45:00 localhost.localdomain kill[17003]: -q, --queue <sig> use sigqueue(2) rather than kill(2)
Oct 26 04:45:00 localhost.localdomain kill[17003]: -p, --pid print pids without signaling them
Oct 26 04:45:00 localhost.localdomain kill[17003]: -l, --list [=<signal>] list signal names, or convert one to a name
Oct 26 04:45:00 localhost.localdomain kill[17003]: -L, --table list signal names and numbers
Oct 26 04:45:00 localhost.localdomain kill[17003]: -h, --help display this help and exit
Oct 26 04:45:00 localhost.localdomain kill[17003]: -V, --version output version information and exit
Oct 26 04:45:00 localhost.localdomain kill[17003]: For more details see kill(1).
Oct 26 04:45:00 localhost.localdomain systemd[1]: nginx.service: control process exited, code=exited status=1
Oct 26 04:45:00 localhost.localdomain systemd[1]: Unit nginx.service entered failed state.
systemctl cat nginx.service gives:
[root@virsinplatformapi02 sysadmin]# systemctl cat nginx.service
Unknown operation 'cat'.
So I cd to /lib/systemd/system and cat nginx.service:
[root@virsinplatformapi02 system]# cat nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target
[Service]
ExecStartPre=/opt/nginx-1.10.1/sbin/nginx -t
ExecStart=/opt/nginx-1.10.1/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
If I do an echo $MAINPID, I get nothing.
That is not a very good unit file. Type is not set, so it defaults to simple, while you want forking for nginx. That may be the reason for the wrong $MAINPID value. Try the official unit:
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/opt/nginx-1.10.1/sbin/nginx -t
ExecStart=/opt/nginx-1.10.1/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
You should put it in /etc/systemd/system/nginx.service - that directory is intended for administrator-created units and takes priority.
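After placing the unit there, reload systemd and restart nginx so the new definition takes effect:
sudo systemctl daemon-reload
sudo systemctl restart nginx.service
sudo systemctl status nginx.service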