Error installing Apache CloudStack management on Ubuntu 16.04 - apache-cloudstack

I am installing Apache CloudStack on Ubuntu 16.04, but after running the CloudStack setup, starting the cloudstack-management service displays the following errors. (I have installed Tomcat 7; Tomcat 6 is not installed.)
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units. Job for
cloudstack-management.service failed because the control process
exited with error code. See "systemctl status
cloudstack-management.service" and "journalctl -xe" for details.
I have run the systemctl status cloudstack-management.service command and it displays the following:
cloudstack-management.service - LSB: Start Tomcat (CloudStack).
Loaded: loaded (/etc/init.d/cloudstack-management; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-08-25 21:53:07 IST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 26684 ExecStart=/etc/init.d/cloudstack-management start (code=exited, status=1/FAILURE)
Aug 25 21:53:07 dhaval-pc systemd[1]: Starting LSB: Start Tomcat (CloudStack)....
Aug 25 21:53:07 dhaval-pc cloudstack-management[26684]: * cloudstack-management is not installed
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Control process exited, code=exited status=1
Aug 25 21:53:07 dhaval-pc systemd[1]: Failed to start LSB: Start Tomcat (CloudStack)..
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Unit entered failed state.
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Failed with result 'exit-code'.
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
What change can I make in the /etc/init.d/cloudstack-management file?
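Before editing the init script, two basic checks suggested by the output itself may help (a minimal sketch, not a confirmed fix): the warning asks for a daemon-reload, and the "cloudstack-management is not installed" message from the init script suggests verifying that the package was actually installed cleanly.
# reload systemd's view of the unit files, as the warning requests
sudo systemctl daemon-reload
# check whether the management package is actually installed
dpkg -l | grep cloudstack-management
# then retry the service and inspect the status again
sudo systemctl start cloudstack-management
sudo systemctl status cloudstack-management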

Related

MariaDB won't start suddenly

I am having some trouble with my MariaDB server. It was working fine, but it won't start anymore. When I try to start the server, it fails:
root@vps45223599:/var/log# /etc/init.d/mysql start
[....] Starting mysql (via systemctl): mysql.service
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
failed!
root@vps45223599:/var/log# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1.41 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2023-01-10 21:20:58 UTC; 1min 15s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 1349 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Process: 1274 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR
Process: 1272 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 1271 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Main PID: 1349 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:43 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:43 vps45223599.local sudo[1040]: pam_unix(sudo:session): session closed for user root
Jan 10 21:15:50 vps45223599.local sudo[1146]: root : TTY=pts/1 ; PWD=/var/log ; USER=root ; COMMAND=/bin/systemctl start mariadb
Jan 10 21:15:50 vps45223599.local sudo[1146]: pam_unix(sudo:session): session opened for user root by cbarca(uid=0)
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:15:50 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:15:51 vps45223599.local mysqld[1227]: 2023-01-10 21:15:51 139968422329728 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1227 ...
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:15:54 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:15:54 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 10 21:15:54 vps45223599.local sudo[1146]: pam_unix(sudo:session): session closed for user root
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Failed to set devices.allow on /system.slice/mariadb.service: Operation not permitted
Jan 10 21:20:55 vps45223599.local systemd[1]: Starting MariaDB 10.1.41 database server...
-- Subject: Unit mariadb.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has begun starting up.
Jan 10 21:20:55 vps45223599.local mysqld[1349]: 2023-01-10 21:20:55 140599894461824 [Note] /usr/sbin/mysqld (mysqld 10.1.41-MariaDB-0+deb9u1) starting as process 1349 ...
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 21:20:58 vps45223599.local systemd[1]: Failed to start MariaDB 10.1.41 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Unit entered failed state.
Jan 10 21:20:58 vps45223599.local systemd[1]: mariadb.service: Failed with result 'exit-code'.
I don't know what happened. I also tried mysqlcheck:
root@vps45223599:/var/log# mysqlcheck --all-databases -p
Enter password:
mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory") when trying to connect
I don't know what else I should try. Can anyone help me, please?
Cheers
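A note that may help narrow this down: the mysqlcheck 2002 socket error is only a symptom of the server not running, so the useful detail is in MariaDB's own error output rather than in the status summary. Two places to look (the error-log path is the usual Debian default and is an assumption; it may differ on your setup):
# full journal for the failed unit, not just the last few lines
sudo journalctl -u mariadb.service -b --no-pager | tail -n 100
# MariaDB's own error log (assumed default location on Debian-based systems)
sudo tail -n 100 /var/log/mysql/error.log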
As @obe said, the issue is most likely Proxmox, or whatever else is instantiating systemd here without sufficient privileges.
The devices.allow error is probably triggered by the PrivateDevices=true setting in the systemd service file for MariaDB (this appears to be confirmed by MDEV-13207, which unfortunately did not provide more information).
PrivateDevices=true allows:
/dev/null
/dev/zero
/dev/random
Based on this answer for a different device, doing the equivalent for these devices would be:
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:3 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:8 rwm'
lxc config set mycontainer raw.lxc 'lxc.cgroup.devices.allow = c 1:5 rwm'
The major/minor device codes were determined by:
$ ls -la /dev/zero /dev/random /dev/null
crw-rw-rw-. 1 root root 1, 3 Jan 8 22:20 /dev/null
crw-rw-rw-. 1 root root 1, 8 Jan 8 22:20 /dev/random
crw-rw-rw-. 1 root root 1, 5 Jan 8 22:20 /dev/zero
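As a quick sanity check inside the container after applying the raw.lxc settings (not part of the original answer, just a probe that the three devices are usable), each of these should complete without "Operation not permitted":
head -c 16 /dev/random > /dev/null
head -c 16 /dev/zero > /dev/null
echo test > /dev/null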
Thanks all for the answers.
@danblack Nothing changed; it suddenly stopped working, but while reading this forum I found the solution in this thread:
Can't start MariaDB on Debian 10
Basically the solution is:
"Solved it by deleting/renaming tc.log (mv -vi /var/lib/mysql/tc.log /root) and restarting the database (service mysql restart)."
And MariaDB started again.
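Spelled out as commands, the quoted fix is (same paths as in the quote; the file is moved rather than deleted so it can be restored if needed):
# move the transaction coordinator log out of the data directory
sudo mv -vi /var/lib/mysql/tc.log /root
# restart MariaDB
sudo service mysql restart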

Main process exited code=exited status=203/exec

I followed a Flask + uWSGI + nginx tutorial like this one. However, it fails with the following error:
Warning: The unit file, source configuration file or drop-ins of myproject.service changed on disk. Run 'systemctl daemon-reload' to reload units>
● myproject.service - uWSGI instance to serve myproject
Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-05-05 18:26:18 AEST; 20min ago
Main PID: 38304 (code=exited, status=203/EXEC)
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: Started uWSGI instance to serve myproject.
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Main process exited, code=exited, status=203/EXEC
May 05 18:26:18 ip-10-86-162-214.ap-southeast-2.compute.internal systemd[1]: myproject.service: Failed with result 'exit-code'.
/etc/systemd/system/myproject.service
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=rh
WorkingDirectory=/home/rh/myproject
Environment="PATH=/home/rh/myproject/myprojectenv/bin"
ExecStart=/home/rh/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
The "rh" is the user I used.

RStudio Server is not opening after stopping and restarting

My RStudio Server was hanging too often while loading an R Shiny app, so after googling around I tried to stop and start RStudio Server again. I also tried to kill all processes running on port 8787, but had no luck solving the issue. Now RStudio Server just keeps waiting when I open it in the web browser.
I used the command below to kill the process listening on port 8787. After running the command there was no output.
sudo kill -TERM 20647
(20647 is the PID of the rserver process listening on port 8787; I got this number from running 'sudo netstat -ntlp | grep :8787'.)
To stop and restart RStudio Server, I used the commands below:
sudo rstudio-server stop
sudo rstudio-server start
The expected result is a working RStudio Server that doesn't hang while loading the Shiny app.
After running the status command, I found the error below logged for RStudio Server:
rstudio-server.service - RStudio Server
Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-08-28 04:50:07 CDT; 11s ago
Process: 31611 ExecStop=/usr/bin/killall -TERM rserver (code=exited, status=0/SUCCESS)
Process: 31609 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
Main PID: 31610 (code=exited, status=1/FAILURE)
CGroup: /system.slice/rstudio-server.service
└─20647 /usr/lib/rstudio-server/bin/rserver
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service holdoff time over, scheduling r...rt.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Stopped RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: start request repeated too quickly for rstudio-server....ice
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Failed to start RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
As a last resort, I restarted the VM where I am running RStudio Server. That seems to have resolved my issue.
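For anyone else who hits the same "start request repeated too quickly" state, a reboot should not be strictly necessary: systemd keeps a start-rate counter that can be cleared manually before retrying (a general systemd note, not something from the original answer):
# clear the failed/start-limit state for the unit
sudo systemctl reset-failed rstudio-server.service
# then try starting it again
sudo systemctl start rstudio-server.service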

How to restart Meteor with systemd

I need to use systemd to restart Meteor as a service, so I created cloud.service in /etc/systemd/system/. The file looks like the one below:
[Unit]
Description=cloud
After=network.target
[Service]
User=someone
Type=simple
WorkingDirectory=/home/someone/cloud/
ExecStart=/home/someone/cloud/start.sh
Restart=always
[Install]
WantedBy=multi-user.target
and start.sh looks like this:
nohup meteor &
But when the system restarts, the service cannot start:
● cloud.service - cloud
Loaded: loaded (/etc/systemd/system/cloud.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Fri 2018-11-30 03:22:51 UTC; 13min ago
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'exit-code'.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Service hold-off time over, scheduling restart.
Nov 30 03:22:51 cloud-euro systemd[1]: Stopped cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Start request repeated too quickly.
Nov 30 03:22:51 cloud-euro systemd[1]: Failed to start cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'start-limit-hit'.
I've tried using Type=forking, but the situation doesn't change. Any suggestions?
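One detail worth checking (a sketch based on how systemd treats service types, not a confirmed fix for this setup): with Type=simple, systemd takes the ExecStart process itself as the main process. Because start.sh backgrounds Meteor with "nohup meteor &" and then exits, systemd sees the service finish immediately, and Restart=always keeps retrying until the start limit is hit, which matches the "Start request repeated too quickly" messages. Keeping Meteor in the foreground avoids that, for example by changing start.sh to:
#!/bin/sh
# run Meteor in the foreground so systemd tracks it as the main process
cd /home/someone/cloud
exec meteor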

Kibana - process not starting - log not clear

Running on Ubuntu 16.04
Elastic version: 6.2.4
Kibana version: 6.2.4
Elasticsearch is up and running on port 9200.
Kibana suddenly stopped working. I am trying to run the start command sudo systemctl start kibana.service, and I get the following error in the service stdout (journalctl -fu kibana.service):
Started Kibana.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Unit entered failed state.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Failed with result 'exit-code'.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Aug 27 12:54:33 ubuntuserver systemd[1]: Stopped Kibana.
There are no details in this log.
My YAML configuration has only these properties:
server.port: 5601
server.host: "0.0.0.0"
I have also tried writing to a log file (hoping for more info there) by adding these configuration settings:
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
I gave the folder/file full access permissions, but nothing is being written there (it is still writing to stdout).
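Two things that might surface more detail (a sketch that assumes the service runs as the default kibana user created by the package; adjust the user if yours differs): pull the full journal rather than the status summary, and confirm that user can actually write to the configured log path.
# full journal output for the unit, which often contains the startup error itself
sudo journalctl -u kibana.service --no-pager | tail -n 100
# verify the configured log destination is writable by the service user
# (assumes the default "kibana" user from the package; change if needed)
sudo -u kibana touch /var/log/kibana/kibana.log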
