How to handle multiple schemas in a single Flyway migrate

I'm totally new to Flyway but I'm trying to migrate a number of identical test databases using the docker-compose flyway+mysql arrangement described in https://github.com/flyway/flyway-docker
As far as I can tell, the migrate command accepts multiple schemas in its -schemas argument, but it only seems to apply the actual SQL migration to the first schema in the list.
For example, when I run migrate with schemas=test_1,test_2,test_3, Flyway creates all three schemas but only creates the tables specified in the migration file in the first one, test_1.
Is there a way to apply the SQL migration file to all the schemas in the list?

I'm going to leave this question up in case someone can still explain how the multiple-schemas option is useful if the migration file isn't applied to every schema in the list. In the meantime, I was able to handle multiple databases in docker-compose by overriding the Flyway entrypoint and command.
So now my docker-compose service looks like:
services:
  flyway:
    image: flyway/flyway:6.1.4
    volumes:
      - ./migrations:/flyway/sql
    depends_on:
      - db
    entrypoint: ["bash"]
    command: >
      -c "/flyway/flyway -url=jdbc:mysql://db -schemas=test1 migrate;
      /flyway/flyway -url=jdbc:mysql://db -schemas=test2 migrate"
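The same pattern generalizes to any number of schemas. A minimal sketch that builds the bash -c payload from a schema list (the schema names and the /flyway/flyway path are assumptions carried over from the compose file above, not from a real setup):

```shell
# Hypothetical generalization: build the bash -c payload for the compose
# `command:` above from a list of schemas, one migrate run per schema.
schemas="test1 test2 test3"
cmd=""
for s in $schemas; do
  cmd="$cmd/flyway/flyway -url=jdbc:mysql://db -schemas=$s migrate; "
done
echo "$cmd"
```

Each schema gets its own migrate invocation, so each one gets its own flyway_schema_history table and the full set of migrations.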

What worked for me was breaking the migrations up into separate Flyway services in my docker-compose file, combined with the docker-postgresql-multiple-databases init script, as follows:
version: '3.8'
services:
  postgres-db:
    image: 'postgres:13.3'
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'customers,addresses'
      POSTGRES_USER: 'pocketlaundry'
      POSTGRES_PASSWORD: 'iceprism'
    volumes:
      - ./docker-postgresql-multiple-databases:/docker-entrypoint-initdb.d
    expose:
      - '5432' # Publishes 5432 to other containers (addresses-flyway, customers-flyway) but NOT to host machine
    ports:
      - '5432:5432'
  addresses-flyway:
    image: flyway/flyway:7.12.0
    command: -url=jdbc:postgresql://postgres-db:5432/addresses -schemas=public -user=pocketlaundry -password=iceprism -connectRetries=60 migrate
    volumes:
      - ./sports-ball-project/src/test/resources/db/addresses/migrations:/flyway/sql
    depends_on:
      - postgres-db
    links:
      - postgres-db
  customers-flyway:
    image: flyway/flyway:7.12.0
    command: -url=jdbc:postgresql://postgres-db:5432/customers -schemas=public -user=pocketlaundry -password=iceprism -connectRetries=60 migrate
    volumes:
      - ./sports-ball-project/src/test/resources/db/customers/migrations:/flyway/sql
    depends_on:
      - postgres-db
    links:
      - postgres-db

Related

Can not run migrations with alembic. Socket error

Here I created a User model using SQLModel:
from typing import Optional

from sqlmodel import SQLModel, Field

class User(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=30)
    surname: str = Field(max_length=50)
docker-compose with postgres:
version: "3.4"
services:
  db:
    image: postgres:14.0-alpine
    restart: always
    container_name: test_db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=testdb
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
volumes:
  db:
Now I am trying to create migrations with "alembic revision --autogenerate -m "msg"", but it fails with:
File "C:\Python3.10\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
The error indicates that the hostname cannot be resolved. Is your database running locally? Does the Python file that holds the SQLAlchemy engine have access to it?
I also see that you are running alembic from your global Python installation (File "C:\Python3.10\); does that have the same dependencies as your Python application? In any case, it is very advisable to use virtual environments, to ensure you are developing with the same modules you are running alembic migrations with.
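A related pitfall with this setup: inside the compose network the database is reachable as the service name, while from the host (where alembic is being run here) it is localhost through the published port. A sketch of picking the right host for the SQLAlchemy URL (the helper name is hypothetical; the credentials, port, and database name come from the compose file above):

```python
# Hypothetical helper (not from the question): choose the DB host depending
# on where alembic runs. Credentials/DB name come from the compose file above.
def db_url(running_in_docker: bool) -> str:
    host = "db" if running_in_docker else "localhost"
    return f"postgresql://postgres:postgres@{host}:5432/testdb"

print(db_url(False))  # the URL to use when running alembic on the host
```

In alembic's env.py, this URL would replace the sqlalchemy.url value from alembic.ini.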

Running dotnet watch run in docker container breaks Nuget package references

I have a docker-compose file where I start the database and the asp.net server with dotnet watch run like this:
backend:
  image: mcr.microsoft.com/dotnet/sdk:6.0
  depends_on:
    - db
  environment:
    DB_HOST: db
    DB_NAME: db
    DB_USERNAME: ${DB_USER}
    DB_PASSWORD: ${DB_PW}
    ASPNETCORE_ENVIRONMENT: Development
  ports:
    - 7293:7293
  volumes:
    - ./:/app
  working_dir: /app
  command: 'dotnet watch run --urls "http://0.0.0.0:7293"'
As you can see I mount the entire project directory in the container.
Now this runs fine and reloads on changes.
But as soon as the container starts in Visual Studio, all the references to NuGet packages get marked red and can't be resolved anymore.
There is a .dockerignore file, but I don't think anything in it is really relevant. Here it is anyway:
.dockerignore:
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/azds.yaml
**/bin
**/charts
**/docker-compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md
I know this has something to do with project restore, but nothing I tried helped. --no-restore made it not run at all, with an exception that it couldn't find the reference to one of the packages.
Any ideas how to avoid this?
OK, I figured it out. Apparently mounting doesn't respect .dockerignore. I have to exclude the obj directory from the mount, so I mount an empty anonymous volume over it like this:
backend:
  ...
  volumes:
    - ./:/app
    - /app/obj # <- directory won't be mounted
  working_dir: /app
  command: 'dotnet watch run --urls "http://0.0.0.0:7293"'

How do I see what a docker container that doesn't stay running is doing/tried to do?

I'm trying to use the WordPress local development environment, which sets up a bunch of docker containers (and I'm new to Docker). One of them is a WordPress CLI, which I presume is running some scripts to do some configuration, but that particular container doesn't stay running (I believe this is intentional). I'm guessing that a script it's running is failing, and that's happening because of some configuration error, but I can't figure out how to tell what it's doing when it executes (what scripts it's running, what environment variables are set, etc...).
Is there any way to somehow "trace" what the container is trying to do?
$ docker image ls
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
wordpressdevelop/phpunit   latest   8ebb6f73d762   2 days ago    732MB
wordpress                  latest   6c2c086d9173   2 days ago    554MB
wordpress                  <none>   408627ce79b1   2 days ago    551MB
composer                   latest   ff854871a595   9 days ago    173MB
wordpress                  cli      ee6de7f71aa0   9 days ago    137MB   // I want to see what this does
mariadb                    latest   e27cf5bc24fe   12 days ago   401MB
$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                     NAMES
19007b991e08   wordpress   "docker-entrypoint.s…"   48 minutes ago   Up 48 minutes   0.0.0.0:8888->80/tcp      e6bc9159b910bda3d9b0dae2e230eabd_wordpress_1
26ac5c7ec782   wordpress   "docker-entrypoint.s…"   48 minutes ago   Up 48 minutes   0.0.0.0:8889->80/tcp      e6bc9159b910bda3d9b0dae2e230eabd_tests-wordpress_1
8ae0a4dc4f77   mariadb     "docker-entrypoint.s…"   48 minutes ago   Up 48 minutes   0.0.0.0:54989->3306/tcp   e6bc9159b910bda3d9b0dae2e230eabd_mysql_1
This is on macOS 11.2.2 running Docker Desktop, Docker version 20.10.5.
I'm not sure if it's relevant, but for completeness' sake, I've included the docker-compose.yml which wp-env generates, below.
Thanks!
Background
This used to Just Work, but I think I broke something in my environment and am trying to diagnose it. I initially asked about that on WordPress StackExchange, but have since dug deeper. The docker-compose step fails like this:
...
⠏ Configuring WordPress.Creating e6bc9159b910bda3d9b0dae2e230eabd_cli_run ... done
⠹ Configuring WordPress.mysqlcheck: Got error: 1045: Access denied for user 'username_here'#'172.19.0.5' (using password: YES) when trying to connect
The database container is left up and running, and if I go look in the database, I see it's configured to have root connect with no password:
MariaDB [(none)]> select user, host, password from mysql.user;
+-------------+-----------+----------+
| User | Host | Password |
+-------------+-----------+----------+
| mariadb.sys | localhost | |
| root | localhost | |
| root | % | |
+-------------+-----------+----------+
3 rows in set (0.003 sec)
In the main WordPress container, the wp-config.php file contains this little snippet:
...
// a helper function to lookup "env_FILE", "env", then fallback
function getenv_docker($env, $default) {
    if ($fileEnv = getenv($env . '_FILE')) {
        return file_get_contents($fileEnv);
    }
    else if ($val = getenv($env)) {
        return $val;
    }
    else {
        return $default;
    }
}
...
/** MySQL database username */
define( 'DB_USER', getenv_docker('WORDPRESS_DB_USER', 'username_here') );
/** MySQL database password */
define( 'DB_PASSWORD', getenv_docker('WORDPRESS_DB_PASSWORD', 'password_here') );
Given the error message, I assume the CLI container is trying to do something similar, but the WORDPRESS_* environment variables aren't set, so it falls back to the defaults, which aren't working. What I think I need to do is track down whatever is failing to set those variables earlier in the run process.
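That fallback order is easy to mis-read, so here it is restated as a Python sketch of the same logic (not code from wp-env; the environ parameter is added purely so the example is self-contained):

```python
import os

def getenv_docker(env: str, default: str, environ=None) -> str:
    # Mirrors wp-config.php's helper: try "<env>_FILE", then "<env>", then default.
    environ = os.environ if environ is None else environ
    file_var = environ.get(env + "_FILE")
    if file_var:
        with open(file_var) as f:
            return f.read()
    return environ.get(env, default)

# With no WORDPRESS_* variables set, the placeholder defaults win:
print(getenv_docker("WORDPRESS_DB_USER", "username_here", environ={}))  # prints "username_here"
```

Which matches the symptom: the failed login above uses 'username_here', the literal default from wp-config.php.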
docker-compose.yml
version: '3.7'
services:
  mysql:
    image: mariadb
    ports:
      - '3306'
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'mysql:/var/lib/mysql'
  wordpress:
    build: .
    depends_on:
      - mysql
    image: wordpress
    ports:
      - '${WP_ENV_PORT:-8888}:80'
    environment:
      WORDPRESS_DB_NAME: wordpress
    volumes: &ref_0
      - 'wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
  tests-wordpress:
    depends_on:
      - mysql
    image: wordpress
    ports:
      - '${WP_ENV_TESTS_PORT:-8889}:80'
    environment:
      WORDPRESS_DB_NAME: tests-wordpress
    volumes: &ref_1
      - 'tests-wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
  cli:
    depends_on:
      - wordpress
    image: 'wordpress:cli'
    volumes: *ref_0
    user: '33:33'
  tests-cli:
    depends_on:
      - tests-wordpress
    image: 'wordpress:cli'
    volumes: *ref_1
    user: '33:33'
  composer:
    image: composer
    volumes:
      - '/Users/cwr/src/cwra/foo:/app'
  phpunit:
    image: 'wordpressdevelop/phpunit:latest'
    depends_on:
      - tests-wordpress
    volumes:
      - 'tests-wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
      - 'phpunit-uploads:/var/www/html/wp-content/uploads'
    environment:
      LOCAL_DIR: html
      WP_PHPUNIT__TESTS_CONFIG: /var/www/html/phpunit-wp-config.php
volumes:
  wordpress: {}
  tests-wordpress: {}
  mysql: {}
  phpunit-uploads: {}
docker logs can help you here, since it also shows the logs of exited containers.
UPDATE: According to the OP, the actual cause was never pinned down, but starting with a fresh Docker and Node installation did the trick and got wp-env running.

Airflow: connection is not being created when using environment variables

I want to create a Mongo connection (other than default) without using the Airflow UI.
I read from the Airflow documentation:
Connections in Airflow pipelines can be created using environment
variables. The environment variable needs to have a prefix of
AIRFLOW_CONN_ for Airflow with the value in a URI format to use the
connection properly.
When referencing the connection in the Airflow pipeline, the conn_id
should be the name of the variable without the prefix. For example, if
the conn_id is named postgres_master the environment variable should
be named AIRFLOW_CONN_POSTGRES_MASTER (note that the environment
variable must be all uppercase).
I tried to apply this when using the Puckel docker image.
This is a docker compose using that image:
version: '2.1'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
  webserver:
    image: puckel/docker-airflow:1.10.6
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - AIRFLOW_CONN_MY_MONGO=mongodb://mongo:27017
    volumes:
      - ./src/:/usr/local/airflow/dags
      - ./requirements.txt:/requirements.txt
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
Note the line AIRFLOW_CONN_MY_MONGO=mongodb://mongo:27017 where I'm passing the environment variable as the Airflow documentation suggests.
The problem is that no my_mongo connection appears when I list the connections in the UI.
Any advice? Thanks!
The connection won't be listed in the UI when you create it with an environment variable.
Reason:
Airflow supports creating connections via environment variables for ad-hoc jobs in DAGs.
The connections shown in the UI are saved in and retrieved from the metadata DB; the ones created via environment variables are not stored there.
How do I test my connection?
Create a sample DAG and use your connection to run a sample job. It should work fine.
I read a Puckel issue where they mention that the connection is created but not shown in the UI. I tested it, and indeed the connection works when used in a DAG.
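The naming convention itself is mechanical, so a quick sanity check can be sketched (hypothetical helper, not part of Airflow's API):

```python
def conn_env_var(conn_id: str) -> str:
    # Airflow resolves a conn_id by looking up AIRFLOW_CONN_<CONN_ID uppercased>
    # in the environment, expecting a URI as the value.
    return "AIRFLOW_CONN_" + conn_id.upper()

print(conn_env_var("my_mongo"))  # prints "AIRFLOW_CONN_MY_MONGO"
```

So a hook or operator referencing conn_id="my_mongo" will find the AIRFLOW_CONN_MY_MONGO variable set in the compose file above, even though the UI never lists it.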

Container command could not be invoked

Just encountered this error message while trying to bring up a docker-compose stack on my local machine. I have a Dockerfile which is identical to the official Wordpress image. My docker-compose file looks like this:
wordpress:
  image: joystick/wp
  ports:
    - "8000:80"
  links:
    - wordpress_db:mysql
  environment:
    - WORDPRESS_DB_HOST=mysql
    - WORDPRESS_DB_NAME=wordpress
    - WORDPRESS_DB_USER=admin
    - WORDPRESS_DB_PASSWORD=password
wordpress_db:
  image: tutum/mysql
  environment:
    - ON_CREATE_DB=wordpress
    - MYSQL_PASS=password
When I change the "image" entry at the top to "wordpress" and use the official image, everything comes up as I'd expect. But when I try to build my own image first and then use it in this docker-compose file, I get the error "Container command could not be invoked".
I tried adding a "command" node to the "wordpress" section of the docker-compose file, but that did not work.
If you're building from the official images, e.g. https://github.com/docker-library/wordpress/tree/master/apache, note the file docker-entrypoint.sh. It must be executable; I set its permissions to 755 and managed to build the image and run the container.
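Docker's COPY preserves the permission bits from the build context, so the fix is applied before building. A self-contained sketch (the touch line only creates a stand-in file so the snippet runs anywhere; in a real checkout the script already exists):

```shell
touch docker-entrypoint.sh      # stand-in for the real entrypoint script
chmod 755 docker-entrypoint.sh  # rwxr-xr-x, as the image's ENTRYPOINT expects
test -x docker-entrypoint.sh && echo executable   # prints "executable"
```

After fixing the bits, rebuild with docker-compose build (or docker build) so the corrected file is copied into the image.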
