bitnami mariadb fails with Table 'mysql.db' doesn't exist - mariadb

I spent a lot of time trying to debug and fix a database that had been working without a problem a few days earlier and was refusing to start with:
FATAL ERROR: Upgrade failed
after adding the env var:
- name: BITNAMI_DEBUG
value: "true"
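For what it's worth, if you deploy via the Bitnami Helm chart, the same variable can also be set through the chart's image.debug value; a sketch, assuming a release named my-release (the release name is just a placeholder):
helm upgrade my-release bitnami/mariadb --set image.debug=true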
I can see that the real error is:
2021-07-14 23:12:32 0 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.db' doesn't exist
2021-07-14 23:12:32 0 [ERROR] Aborting
It's crazy that Bitnami hides these errors by default...

I still haven't figured out what broke it or how to properly fix it, but I managed to start the database by exec'ing into the container and running:
# recreate the mysql user with the UID the Bitnami image runs as
adduser mysql --uid=1001
# let the mysql user write to the socket/tmp directory
chown mysql:root /opt/bitnami/mariadb/tmp/
# rebuild the mysql system tables (including the missing mysql.db) in the data directory
mysql_install_db --user=mysql --basedir=/opt/bitnami/mariadb --datadir=/bitnami/mariadb/data
# start the server with privilege checks disabled so the data can be reached
mysqld --skip-grant-tables --user=mysql --skip-external-locking --port=3306 --sock=/opt/bitnami/mariadb/tmp/mysql.sock --datadir=/bitnami/mariadb/data
With the database running, I managed to run a mysqldump, copy everything relevant to me, reinstall a new database from scratch, and restore the content.
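For reference, the dump and restore can look roughly like this (a sketch; the socket path matches the mysqld invocation above, and backup.sql is just a placeholder name):
mysqldump --socket=/opt/bitnami/mariadb/tmp/mysql.sock --all-databases > backup.sql
# ...reinstall a fresh instance, then:
mysql --socket=/opt/bitnami/mariadb/tmp/mysql.sock < backup.sql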
Why the user was not present and why I needed to change the permissions is beyond me.

I am glad you were able to recover the database. I was going to suggest the mysql_install_db command you mentioned; I am not sure there is any other approach to solving that kind of issue. In any case, I think the ideal solution would involve finding what causes the problem in the first place.

Related

Why is microk8s failing to install?

I'm trying to use microk8s for the KubernetesPodOperator in my DAG. Unfortunately I can't seem to get it to install consistently.
I'm using Homebrew to install (or reinstall) microk8s and multipass. When I execute
microk8s install --cpu=4 --mem=10000
I get the errors:
launch failed: The following errors occurred:
qemu-system-aarch64: Error: HV_BAD_ARGUMENT
launch failed: instance "microk8s-vm" already exists
An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.
(where launch failed: instance "microk8s-vm" already exists appears several times.)
I've tried reinstalling both several times and that doesn't appear to help. Any advice?
Turns out I needed to be using microk8s install --cpu=4 --mem=10, not 10000. It wants GB, not MB. My bad. I wish the error message were a little clearer, though.
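For the record, the working invocation (memory given in GB, as above):
microk8s install --cpu=4 --mem=10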

Mariadb mysql_upgrade failed

I have upgraded my DB server from 10.1.36 to 10.2.30, and after running the mysql_upgrade command it fails at step 4. Please take a look at the screenshot. Does anybody have an idea what is wrong?
I believe you have to give the mysql user read and execute permissions on the /var/log/ and /var/log/mysql/ directories before it can access the /var/log/mysql/slow_log.CSV file.
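Something along these lines should do it (a sketch; the paths and ownership are assumptions based on the error above, so adjust them to your layout):
# make sure the mysql user can traverse the log directories
sudo chmod o+rx /var/log /var/log/mysql
# and can read the slow-log file itself
sudo chown mysql:mysql /var/log/mysql/slow_log.CSV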

Mariadb not starting after homebrew upgrade

I upgraded OS X to Sierra and reinstalled my brew list; everything went well until I reinstalled mariadb.
The database is not starting; I get .ERROR! on mysql.server start.
I checked the logs, but I only have two warnings:
2016-10-09 10:56:17 140735690990528 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/var/mysql/ is case insensitive
2016-10-09 10:56:17 140735690990528 [Note] Plugin 'FEEDBACK' is disabled.
I'm pretty sure this is not the problem.
Update:
I figured out why the server couldn't start: I had set the service to auto-start by putting it in ~/Library/LaunchAgents, so when I started it I got a log saying:
mysqld is already running
So I removed the service from the LaunchAgents, but now when I try to start it, I just get:
Starting MySQL
...............................................
with infinite dots
Update && RESOLVED:
I finally found the solution. The problem here was that mysqladmin was not able to ping the server because of a problem in my.cnf, so the server actually started, but mysqladmin couldn't reach it to report SUCCESS.
My first problem also turned out to be an issue with log-error: I had many logs with different names; I deleted them all and my server works fine now.
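For reference, removing the auto-start entry can be done with launchctl; a sketch, assuming Homebrew's usual plist name (yours may differ):
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist
rm ~/Library/LaunchAgents/homebrew.mxcl.mariadb.plist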

Shiny failed to start

I had an application on my shiny-server working fine some months ago. Today I returned to it and got a most curious error. Whenever I try to access it, I just get this message:
An error has occurred
The application failed to start.
The application exited during initialization.
Now the natural step would be to go check the log of this error, right? But the logs created by this error are just empty, 0 bytes, so I am really puzzled why this is happening. I also tried to run the Shiny sample apps and got the same error, but the server itself seems to be running just fine.
I know this is sort of a vague question, but honestly, given the empty logs, I don't know what other info I could put here; maybe someone has come across a similar issue.
Shiny Server defaults to sanitised error logs, which is not useful in this case. You can change this behaviour by adding the line sanitize_errors off; to /etc/shiny-server/shiny-server.conf. You may need to restart Shiny Server to see the effect.
To open /etc/shiny-server/shiny-server.conf:
sudo nano /etc/shiny-server/shiny-server.conf
To restart Shiny Server:
sudo restart shiny-server
This should give you the verbose error messages you want.
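For context, a minimal sketch of what the config might look like after the change (the server block here is just the shipped default; keep whatever yours already contains):
# /etc/shiny-server/shiny-server.conf
sanitize_errors off;
run_as shiny;
server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}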

SVN - SQLite - disk I/O error

When trying to commit to my SVN repository, I got the following error:
Working copy 'Z:\prace-pj\projects\other\CopyRT' locked.
So I ran the cleanup command and then the commit succeeded, but at the end of the response message there was the following error:
Error bumping revisions post-commit (details follow):
disk I/O error, executing statement 'RELEASE s11'
Now when I try to, e.g., update the working copy, it says that it is still locked. When I clean up and try to update again, I get an error like this:
disk I/O error, executing statement 'RELEASE s2'
sqlite: disk I/O error
What should I do to fix this?
For others' reference, I just had this same error and found that one of my log files was taking up all my disk space (so SVN could not write to the HDD because there was no free space).
Run this to make sure you have enough disk space:
df -h
Then I just needed to run:
svn cleanup
This resolved the error for me.
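If df -h shows the disk is full, something like this can help track down the oversized file (an assumption that the logs live under /var/log; point it wherever your logs actually are):
sudo du -sh /var/log/* | sort -h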
Have you tried:
svn unlock --force path/to/workingcopy
? It seems it can be pointed at a URL if the problem is in the repository itself. I've only used an unlock operation via the TortoiseSVN GUI before, but I assume it just wraps the svn command anyway.
Hope that helps.
