Airflow drops database on reboot (docker) - airflow

So I'm trying to set up Airflow on Docker, and I'm facing a problem: whenever I reboot the Airflow container, it recreates the Postgres database and drops all the data in there.
I've set the SQL_ALCHEMY_CONN environment variable to a Postgres connection string, but it doesn't seem to help.
Any ideas how I could make it retain all the data?

Reporting the solution now:
I was using the Docker image for Airflow maintained by Puckel, as it seemed to be quite popular. It doesn't seem to be maintained anymore: there are over 100 open issues and quite a lot of pull requests waiting, and nothing has happened in the past six months.
Two things I recommend doing if you decide to go with Puckel's image of Airflow:
Change the Postgres backend database to something newer than version 9, which is in there by default. Also bind a volume to the /var/lib/postgresql/data folder to retain all the data (this volume binding didn't work with version 9 for some reason; I didn't dig any deeper into it).
Also bind the entrypoint.sh file to the root folder of your Docker container, and near the very bottom change the "initdb" command to "upgradedb". This way you shouldn't have any trouble with the setup.
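For the volume-binding part, a minimal sketch of the idea (the image tag, credentials, and host path here are examples, not taken from Puckel's compose file):

```shell
# Run a newer Postgres (13 here, purely as an example) with the data
# directory bind-mounted to the host, so the database survives container
# recreation instead of being initialized fresh each time.
docker run -d --name airflow-postgres \
  -e POSTGRES_USER=airflow \
  -e POSTGRES_PASSWORD=airflow \
  -e POSTGRES_DB=airflow \
  -v "$PWD/pgdata:/var/lib/postgresql/data" \
  postgres:13
```

With the data directory on the host, rebooting or recreating the container reuses the existing database files rather than running initialization again.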


Database Corruption - Disk Image Is Malformed - Unraid - Plex

I am not sure where a question like this really falls, as it involves an Unraid Linux server running a Plex Media Server container, which uses SQLite (I'm looking for troubleshooting at the root level). I have posted in both the Unraid and Plex forums with no luck.
My Plex container has been failing time and time again on Unraid, resulting in me doing integrity checks, rebuilds, dumps, imports, and a complete wipe and restart (completely removing the old directory and starting over). At best I get it up for a few minutes before the container fails again. The errors I am receiving have changed, but as of the last attempt (a complete wipe and reinstall of a new container) I am getting the following error in the output log:
Error: Unable to set up server:
sqlite3_statement_backend::loadOne:database disk image is malformed
(N4soci10soci_errorE)
I decided to copy the database onto my Windows machine and poke around to get a better understanding of its structure. Upon viewing a table called media_items, I get the same error.
Clearly one of what I assume to be the main tables is corrupt. The question, then, is: what, if anything, can I do to fix this or learn about the cause? I would have thought a completely new database would fix my issue, unless it's pure coincidence that two back-to-back databases became corrupted before I could even touch them. Could it be one of my media files? Could it be Unraid? Could it be my hard drive?
For context, if you're unfamiliar with Plex: once the container is up, it scans my media library and populates it with data such as metadata, posters, watch state, and ratings. I get through the full automated build, and within 30 minutes it falls apart before I can even customize my library.
Below are the shell commands I used in several troubleshooting scenarios. They may be useful to someone somewhere.
Integrity Check:
./Plex\ SQLite "$plexDB" "PRAGMA integrity_check"
Recover From Backup:
./Plex\ SQLite "$plexDB" ".output recover.out" ".recover"
Dump:
./Plex\ SQLite "$plexDB" ".output dump.sql" ".dump"
Import:
./Plex\ SQLite "$plexDB" ".read dump.sql"
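The same steps can be tried with the stock sqlite3 CLI, which accepts the same dot-commands as the bundled Plex SQLite build (the file names below are examples, not Plex's real paths):

```shell
# Create a throwaway database with a table, then run the same
# integrity-check / dump / re-import steps using the stock sqlite3 CLI.
sqlite3 test.db "CREATE TABLE IF NOT EXISTS media_items(id INTEGER PRIMARY KEY, title TEXT);"
sqlite3 test.db "PRAGMA integrity_check;"    # prints "ok" on a healthy file
sqlite3 test.db ".output dump.sql" ".dump"   # write a full SQL dump to dump.sql
sqlite3 fresh.db ".read dump.sql"            # re-import the dump into a new database
```

On a corrupt file, the integrity check (or the `.recover` command on newer sqlite3 builds) is usually the first thing to try before dumping and re-importing.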
After hours, days, and a week of all kinds of troubleshooting, including resetting the Docker image (plus the other steps mentioned in the post), it was suggested in another forum that I run a memtest. I put memtest on a bootable USB and was immediately able to conclude that one RAM stick was bad. Upon removing that stick I have zero issues and everything is completely fine... Bizarre.

How to extend docker environment generated by wp-env

I've been using wp-env for a while now to run local WordPress development environments on my Mac. With the introduction of Monterey, Apple removed PHP from macOS. There are a couple of ways I can think of to handle this situation. Many people seem to be using Homebrew and MAMP. However, I'd prefer not to use Homebrew, both because of past personal experience and because going down this path seems to create a whole other mess around handling PHP and Composer (see, for example, Using PHPCS with Homebrew On MacOS Monterey).
So, my thought was, maybe I can just start doing development inside of the docker container. The questions then:
how do I extend the wp-env npm module to add things by default to the Docker container, without modifying the wp-env source? That is, does Docker have some sort of config I can write that will run wp-env and then add other things to the image (e.g., npm, git, eslint, etc.), so that the Docker container itself becomes a development environment?
as I'm actually writing this question: does it even make sense to do it this way? I've found hints that a few people are doing it (e.g., a commenter on Using Docker in development the right way described a setup with vim/tmux/vscode/zsh configuration and shortcuts baked in, and recommended running all services as Docker containers inside that volume, which he claims is a huge performance increase over a host bind mount). Unfortunately, he linked to a git repo that either no longer exists or is at least no longer public.
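Regarding the first question, one low-effort stopgap (a sketch only, not an official wp-env feature; the "wordpress" name filter is an assumption about how wp-env names its containers, so check `docker ps`) is to layer tools onto the running container with `docker exec`:

```shell
# Find the wp-env WordPress container (the name filter is a guess).
container=$(docker ps --filter "name=wordpress" --format '{{.Names}}' | head -n 1)

# Install extra tooling inside it as root. These changes are lost whenever
# wp-env recreates the container, so keep the commands in a script you can
# re-run rather than treating this as a real image extension.
docker exec -u root "$container" bash -c 'apt-get update && apt-get install -y git vim'
```

This avoids touching the wp-env source at the cost of the changes being ephemeral.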
While I cannot help you with wp-env specifically, I would recommend using DDEV (https://ddev.readthedocs.io/en/stable/), as you will basically have the freedom of choosing custom PHP environments, plus it comes with predefined configurations for specific stacks (e.g., Laravel, WordPress, Drupal) and is dead simple to use.
I understand you might like to continue with wp-env, but maybe this will help you out.

Kolla-ansible too many open files

I am having an issue with a relatively small OpenStack cluster deployed with kolla-ansible. After a few days, the controllers stop working. When I go into the Docker container logs, I see "Too many open files" in all of them. I have tried changing limits.conf and the sysctl max-files settings for processes and users. After all of that, the issue still shows up.
One interesting thing is that this was not happening until I had to reboot all of the controllers. I rebooted them because I needed to increase their RAM after they died swapping. My first thought was that kolla-ansible sets some configuration after running deploy, but I can't find any point in the repo where kolla-ansible changes ulimits or anything similar.
Any theories on what could cause this? Could it be related to increasing the RAM? Should I run reconfigure/deploy on each controller? I've looked through kolla-ansible's docs and forums and couldn't find anyone else having this issue.
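For anyone debugging something similar, here is a sketch for finding which container is actually leaking file descriptors (assumes root access to /proc on the Docker host; the output format is just for illustration):

```shell
# Count open file descriptors per running container by reading /proc/<pid>/fd
# on the host. The container holding the leak shows a steadily rising count
# if you re-run this over time.
docker ps -q | while read -r id; do
  name=$(docker inspect -f '{{.Name}}' "$id")
  pid=$(docker inspect -f '{{.State.Pid}}' "$id")
  fds=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
  echo "$name pid=$pid open_fds=$fds"
done
```

Comparing two runs a few hours apart narrows the problem to one service before you start touching ulimits cluster-wide.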
Update: this hasn't been fixed yet. I submitted a bug report: https://bugs.launchpad.net/kolla-ansible/+bug/1901898
I don't know which versions of Kolla-Ansible and Linux you are using, but your problem seems closely related to this one:
On Ubuntu 16.04, please uninstall the lxd and lxc packages. (An issue exists with cgroup mounts: mounts increase exponentially when restarting containers.) (source: docs.openstack.org/kolla-ansible/4.0.0/quickstart.html)
I had this problem with an exponentially growing number of mount points after restarting my Docker containers too. My single-node test deployment had become very slow because of it, although I can't remember at the moment whether I also saw the "too many open files" error.
You can delete the packages with apt-get remove lxc-common lxcfs lxd lxd-client. I did this fix together with a complete reinstallation of kolla-ansible, so I don't know whether it also helps an already existing installation. You should also use docker-ce instead of the Docker from the apt repos.
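To check whether you are hitting the same cgroup-mount leak, count the cgroup entries in the mount table before and after restarting a container; on an affected host the number grows with every restart. A minimal check:

```shell
# Count cgroup entries in the mount table. Run once, restart a container
# ("docker restart <name>"), then run again and compare the two numbers.
# A healthy host shows a stable, small count.
mount | grep -c cgroup || true
```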
This was fixed with a workaround in bug https://bugs.launchpad.net/keystonemiddleware/+bug/1883659. The problem was that the Neutron server was keeping memcached connections open and never closing them, until the memcached container hit its open-files limit. There is a workaround described in the bug link.

Using Docker for Drupal Dev (Local)

So, to put it simply, I have a Drupal site that's live.
I want to work on it locally and use Docker containers to manage that.
I want to use this image:
https://index.docker.io/u/bnchdrff/nginx-php5-drupal/
And use this as my data container:
https://index.docker.io/u/bnchdrff/mariadb/
I have the database downloaded from the live site saved as an .sql file.
I need to be able to use this pre-existing database.
The best-case scenario is to run the images from the terminal, open a browser, navigate to something like 'localhost', and have the Drupal site pop up there for me to work on.
I am running Ubuntu 13.10 and have the latest version of Docker. Needless to say, I have been working on this for a while, but I don't want to complicate things with my failed attempts. Any and all suggestions welcome.
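The general pattern looks something like the sketch below (a sketch only: the environment variable names, database name, and dump filename are assumptions, so check each image's page on the Docker index for the real ones):

```shell
# Start the database container, then the web container linked to it
# ("--link" matches the Docker versions of that era).
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret bnchdrff/mariadb
docker run -d --name web --link db:db -p 80:80 bnchdrff/nginx-php5-drupal

# Import the live site's dump into the running database container.
# "drupal" as the database name and live_site.sql are examples.
docker exec -i db mysql -uroot -psecret drupal < live_site.sql
```

After that, browsing to http://localhost should serve the site, provided Drupal's settings.php points at the linked database host.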

Derby won't let me re-create a database

I'm developing a JSF/JPA application under GlassFish, which uses Derby (JavaDB) as its default database. It turns out the "DROP AND CREATE" policy of the persistence unit doesn't work reliably, so I have taken to deleting the database and then re-creating it when I change the schema.
Or at least I am trying to. If I delete the database, it won't let me create a "new" database with the same name as the deleted one. Nor will it let me open the old one.
My workaround for now is just to create a database with a new name and use that (I have to edit the GlassFish resources XML file each time), but I would like to know what is going on. Has anybody else had this problem, and/or does anyone know how to fix it?
I am not 100% sure I understand the pathology of this problem, but I have a workaround. It would be good if someone could file a bug report with the NetBeans community, or give me a link so I can.
The problem occurs on a macOS version of NetBeans 7.2, with the GlassFish 3.1 server under its management. When you start the server to run your app, it automatically starts Derby (the Java DB) with it. However, when you stop the GlassFish server or even exit NetBeans, the Derby process stays running.
When you start NetBeans and GlassFish again, you will notice that when NetBeans attempts to start the Derby server, it complains that port 1527 is already in use. Normally this is not a problem, since the application continues to run and communicates with the previously started Derby process via the port it holds open. However, I suspect that the communications path used by the NetBeans menu system to delete and create a database does NOT use this data path, and consequently it attempts a delete/create operation on a data structure held open by a process it is not communicating with. Hence the lockup and failure.
The workaround is to kill the Derby process running in the background, then do the delete/create operation, and it works fine. On macOS or Linux, open a terminal and run
ps axe | grep -i derby
and you will find a Java JVM running Derby. Just copy the process ID and run
kill <pid>
(-9 seems not to be necessary). Run the ps command again and you should see that the process is gone. Derby will be started by NetBeans again the next time it is needed.
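Before killing anything, you can also confirm what is actually holding Derby's default port (1527); lsof is available on macOS and most Linux distributions:

```shell
# Show the process bound to Derby's default port, if any.
# -nP skips DNS and port-name lookups so the output is immediate.
lsof -nP -i :1527 || echo "port 1527 is free"
```

If the listed process is a leftover Derby JVM rather than the one NetBeans just started, that is the one to kill.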
