I've upgraded my server from stretch to buster and then to bullseye, and since then I have had problems with the MariaDB server, which restarts often. While it is restarting, my email doesn't work because the lookups against the virtual tables fail.
The MariaDB version is:
mariadb -v
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9135
Server version: 10.5.15-MariaDB-0+deb11u1-log Debian 11
Then in syslog I can see this:
cat /var/log/syslog |grep "mariadb.service"
Aug 19 10:34:45 srv systemd[1]: mariadb.service: Main process exited, code=killed, status=6/ABRT
Aug 19 10:34:45 srv systemd[1]: mariadb.service: Failed with result 'signal'.
Aug 19 10:34:50 srv systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 401.
I don't know how to resolve this problem. Should I perhaps purge all MariaDB and MySQL packages and then reinstall MariaDB?
MariaDB is restarting 6-10 times per hour.
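For reference, if I do try the purge-and-reinstall route, I assume it would look roughly like this on Debian (the package names should be checked against what dpkg actually reports, and the data directory needs a backup first, since reinstalling the binaries alone won't repair corrupted data):
dpkg -l | grep -i -E 'mariadb|mysql'           # see what is actually installed
sudo systemctl stop mariadb
sudo cp -a /var/lib/mysql /var/lib/mysql.bak   # back up the data directory
sudo apt-get purge mariadb-server mariadb-client mariadb-common
sudo apt-get install mariadb-server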
In syslog I can also see this interesting part:
Aug 19 10:34:45 srv mariadbd[22091]: 2022-08-19 10:34:45 11917 [ERROR] [FATAL] InnoDB: Page old data size 15870 new data size 8280, page old max ins size 36 new max ins size 7626
Aug 19 10:34:45 srv mariadbd[22091]: 220819 10:34:45 [ERROR] mysqld got signal 6 ;
Aug 19 10:34:45 srv mariadbd[22091]: This could be because you hit a bug. It is also possible that this binary
Aug 19 10:34:45 srv mariadbd[22091]: or one of the libraries it was linked against is corrupt, improperly built,
Aug 19 10:34:45 srv mariadbd[22091]: or misconfigured. This error can also be caused by malfunctioning hardware.
Aug 19 10:34:45 srv mariadbd[22091]: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
Aug 19 10:34:45 srv mariadbd[22091]: We will try our best to scrape up some info that will hopefully help
Aug 19 10:34:45 srv mariadbd[22091]: diagnose the problem, but since we have already crashed,
Aug 19 10:34:45 srv mariadbd[22091]: something is definitely wrong and this may fail.
Aug 19 10:34:45 srv mariadbd[22091]: Server version: 10.5.15-MariaDB-0+deb11u1-log
Aug 19 10:34:45 srv mariadbd[22091]: key_buffer_size=792723456
Aug 19 10:34:45 srv mariadbd[22091]: read_buffer_size=131072
Aug 19 10:34:45 srv mariadbd[22091]: max_used_connections=15
Aug 19 10:34:45 srv mariadbd[22091]: max_threads=2002
Aug 19 10:34:45 srv mariadbd[22091]: thread_count=12
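The [ERROR] [FATAL] InnoDB line points at a corrupted InnoDB page rather than at the packages, so reinstalling probably won't fix it on its own. As far as I understand, the usual recovery path is to start the server with innodb_force_recovery, dump all databases, and rebuild the data directory; a rough sketch (the config file path and the recovery level are assumptions, and the level should only be raised step by step if the server still crashes):
# temporary setting, e.g. in /etc/mysql/mariadb.conf.d/99-recovery.cnf
[mysqld]
innodb_force_recovery = 1

sudo systemctl restart mariadb
# dump everything while the server stays up in recovery mode
mysqldump --all-databases --routines --events > /root/all-databases.sql
# remove the recovery setting, reinitialise the data directory, then reload the dump
mysql < /root/all-databases.sql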
I had set up my nginx server fine last week, until I noticed I was receiving DDoS attacks against it. At that point I also noticed that my nginx server was failing to start. I have tried everything I can think of and am unsure how to resolve the issue, apart from reading the documentation, which has not helped.
Documentation on Nginx
The main nginx.conf appears to be empty and I cannot save to it for some reason.
root@ubuntu-vpc-do-moon:~# /etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-11-04 10:54:44 UTC; 1min 43s ago
Docs: man:nginx(8)
Process: 2550 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: [emerg] open() "/etc/nginx/sites-enabled/nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:62
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Control process exited, code=exited status=1
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Failed to start A high performance web server and a reverse proxy server.
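The [emerg] line says that an include on line 62 of /etc/nginx/nginx.conf points at /etc/nginx/sites-enabled/nginx.conf, which no longer exists (most likely a dangling symlink), so the config test fails and systemd refuses to start the service. A rough way to check and fix that, assuming the usual Debian/Ubuntu sites-available/sites-enabled layout and the default file names:
ls -l /etc/nginx/sites-enabled/                 # dangling symlinks show up here
sudo rm /etc/nginx/sites-enabled/nginx.conf     # drop the stale entry, or...
sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
sudo nginx -t                                   # re-test the configuration
sudo systemctl start nginx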
I removed nginx from Ubuntu and did a clean installation onto the server. I managed to sort the server blocks out this time, so all good.
My RStudio Server was hanging too often while starting an R Shiny app, so after googling around I tried to stop and start the RStudio Server again. I also tried to kill all processes running on port 8787, but had no luck solving the issue. Now RStudio Server keeps waiting while opening in the web browser.
I used the command below to kill the process listening on port 8787. After running the command there was no output.
sudo kill -TERM 20647
(20647 is the PID of the rserver process listening on port 8787; I got it from the output of the 'sudo netstat -ntlp | grep :8787' command.)
To stop and restart RStudio Server, I used the commands below:
sudo rstudio-server stop
sudo rstudio-server start
The expected result is a working RStudio Server that doesn't hang while loading the Shiny app.
After running the status command, I found the error below logged for RStudio Server:
rstudio-server.service - RStudio Server
Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-08-28 04:50:07 CDT; 11s ago
Process: 31611 ExecStop=/usr/bin/killall -TERM rserver (code=exited, status=0/SUCCESS)
Process: 31609 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
Main PID: 31610 (code=exited, status=1/FAILURE)
CGroup: /system.slice/rstudio-server.service
└─20647 /usr/lib/rstudio-server/bin/rserver
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service holdoff time over, scheduling r...rt.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Stopped RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: start request repeated too quickly for rstudio-server....ice
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Failed to start RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
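The status output shows Result: start-limit, i.e. systemd gave up because the service was restarted too quickly several times in a row. My understanding is that the counter can be cleared and the service retried with stock systemctl commands, and that journalctl should show why rserver keeps exiting:
sudo systemctl reset-failed rstudio-server            # clear the start-limit counter
sudo systemctl start rstudio-server
sudo journalctl -u rstudio-server -n 100 --no-pager   # see why rserver exits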
As a last resort, I restarted the VM where I am running RStudio Server. That seems to have resolved my issue.
I need to use systemd to restart Meteor as a service, so I created cloud.service in /etc/systemd/system/. The file looks like this:
[Unit]
Description=cloud
After=network.target
[Service]
User=someone
Type=simple
WorkingDirectory=/home/someone/cloud/
ExecStart=/home/someone/cloud/start.sh
Restart=always
[Install]
WantedBy=multi-user.target
and start.sh looks like this:
nohup meteor &
But when the system restarts, the service cannot start.
● cloud.service - cloud
Loaded: loaded (/etc/systemd/system/cloud.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Fri 2018-11-30 03:22:51 UTC; 13min ago
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'exit-code'.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Service hold-off time over, scheduling restart.
Nov 30 03:22:51 cloud-euro systemd[1]: Stopped cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Start request repeated too quickly.
Nov 30 03:22:51 cloud-euro systemd[1]: Failed to start cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'start-limit-hit'.
I've tried to use Type=forking, but the situation doesn't change. Any suggestions?
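One thing I suspect: start.sh backgrounds meteor with nohup and exits immediately, so with Type=simple systemd sees the main process exit, keeps restarting it, and finally hits the start limit ('start-limit-hit'). A unit that runs meteor in the foreground would look roughly like this (the /usr/local/bin/meteor path is the symlink the official installer usually creates, so treat it as an assumption):
[Unit]
Description=cloud
After=network.target

[Service]
User=someone
Type=simple
WorkingDirectory=/home/someone/cloud/
# run meteor in the foreground; do not wrap it in nohup or background it with &
ExecStart=/usr/local/bin/meteor run
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target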
I am installing Apache CloudStack on Ubuntu 16.04, but after running the CloudStack setup, when I start the cloudstack-management service it displays the following errors. (I have installed tomcat7, but tomcat6 is not installed.)
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for cloudstack-management.service failed because the control process exited with error code.
See "systemctl status cloudstack-management.service" and "journalctl -xe" for details.
I checked with the systemctl status cloudstack-management.service command and it displays the following:
cloudstack-management.service - LSB: Start Tomcat (CloudStack).
Loaded: loaded (/etc/init.d/cloudstack-management; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-08-25 21:53:07 IST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 26684 ExecStart=/etc/init.d/cloudstack-management start (code=exited, status=1/FAILURE)
Aug 25 21:53:07 dhaval-pc systemd[1]: Starting LSB: Start Tomcat (CloudStack)....
Aug 25 21:53:07 dhaval-pc cloudstack-management[26684]: * cloudstack-management is not installed
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Control process exited, code=exited status=1
Aug 25 21:53:07 dhaval-pc systemd[1]: Failed to start LSB: Start Tomcat (CloudStack)..
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Unit entered failed state.
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Failed with result 'exit-code'.
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
What change can I make in the /etc/init.d/cloudstack-management file (editing it with vi)?
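Before editing the init script, it is probably worth reloading the changed unit and finding out which test makes the script print "cloudstack-management is not installed"; a rough sketch (the grep is just a way to locate that check inside the script):
sudo systemctl daemon-reload                                 # pick up the changed unit file
dpkg -l | grep -i cloudstack                                 # confirm which packages are installed
grep -n "not installed" /etc/init.d/cloudstack-management    # find the failing check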
Running on Ubuntu 16.04
Elastic version: 6.2.4
Kibana version: 6.2.4
Elastic is up and running on port 9200.
Kibana suddenly stopped working. I am trying to run the start command sudo systemctl start kibana.service, and I get the following error in the service stdout (journalctl -fu kibana.service):
Started Kibana.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Unit entered failed state.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Failed with result 'exit-code'.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Aug 27 12:54:33 ubuntuserver systemd[1]: Stopped Kibana.
There are no further details in this log.
My YAML configuration has only these properties:
server.port: 5601
server.host: "0.0.0.0"
I have also tried writing to a log file (hoping for more info there) by adding these configuration options:
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
I gave the folder/file full access permissions, but nothing is being written there (Kibana still writes to stdout).
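When systemd shows nothing useful and the log file stays empty, running Kibana in the foreground as the service user usually surfaces the real startup error, and also shows whether that user can write to the log directory at all. A rough sketch, assuming the default paths and the kibana user created by the Debian/Ubuntu package:
ls -ld /var/log/kibana                         # the kibana user must be able to write here
sudo chown kibana:kibana /var/log/kibana
sudo -u kibana /usr/share/kibana/bin/kibana --config /etc/kibana/kibana.yml   # run in foreground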