Here I created a User model using SQLModel.
from sqlmodel import SQLModel, Field, Index
from typing import Optional
class User(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=30)
    surname: str = Field(max_length=50)
And here is the docker-compose file with Postgres:
version: "3.4"
services:
db:
image: postgres:14.0-alpine
restart: always
container_name: test_db
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=testdb
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
volumes:
db:
Now I am trying to create a migration with
alembic revision --autogenerate -m "msg"
But it fails with:
File "C:\Python3.10\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
The error indicates that the hostname cannot be resolved. Is your database running locally? Does the Python file that holds your SQLAlchemy engine have access to it?
I also see that you are running Alembic from your global Python installation (File "C:\Python3.10\); does it have the same dependencies as your Python application? In any case, it is highly advisable to use virtual environments to ensure you are developing with the same modules that you run your Alembic migrations with.
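For reference, with the compose file above the database is published on the host at localhost:5432, so Alembic run from the host machine would typically point there. Below is a sketch of the relevant lines of an env.py, with the connection URL assembled from the compose credentials; this is an assumption about your setup, and your actual env.py/alembic.ini will differ in the details:
# env.py (sketch) - point Alembic at the Postgres container published on the host.
# The URL is built from the docker-compose file above (postgres/postgres/testdb,
# port 5432 mapped to localhost); adjust it to wherever your DB is reachable.
from alembic import context
from sqlmodel import SQLModel

config = context.config
config.set_main_option(
    "sqlalchemy.url",
    "postgresql://postgres:postgres@localhost:5432/testdb",
)

# autogenerate compares against the model metadata
target_metadata = SQLModel.metadata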
Related
Apache Airflow Docker : sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:mysqldb
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.3.3}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: mysql+mysqldb://xxx:xxx#xxxxx:xxxx/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: mysql+mysqldb://root:xxxx#xxxxxx:xxxxx/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+mysqldb://root:xxxxxx#xxxxxx:xxxxxx/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:#xxxxxx:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - airflow:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
I have the same problem. How should it be solved?
I think the problem is with the Celery result backend: Celery uses database drivers different from the Airflow drivers. You can try:
AIRFLOW__CELERY__RESULT_BACKEND: db+mysql://root:xxxxxx#xxxxxx:xxxxxx/airflow
Here is the celery configuration doc.
I'm trying, not very successfully, to figure out how to override a group/package with and via a yaml file. I'll explain my problem using the example (files and folder structure) from the Hydra documentation: https://hydra.cc/docs/tutorials/structured_config/schema/.
config.yaml as:
defaults:
  - base_config   # --> reference to dataclass
  - db: base_mysql  # --> reference to dataclass
  - _self_

debug: true
gives the expected output (printed when running myapp.py):
db:
  driver: mysql
  host: localhost
  port: 3306
  user: ???
  password: ???
Using the yaml file instead of the base_mysql dataclass is also fine, so config.yaml as:
defaults:
  - base_config
  - db: mysql  # --> reads db/mysql.yaml
  - _self_

debug: true
again prints as expected:
db:
  driver: mysql
  host: localhost
  port: 3306
  user: omry
  password: secret
Overriding individual fields works fine as well, e.g. with config.yaml like:
defaults:
  - base_config
  - db: mysql
  - _self_

debug: true

db:
  password: UpdatedPassword
What I'm not able to figure out is how to override the full db group with/via another yaml file - defining the structure via a dataclass and then overriding/setting the values like:
defaults:
  - base_config
  - db: base_mysql  # --> reference to dataclass to define the structure
  - _self_

debug: true

db: mysql  # --> mysql.yaml
throws the following error:
In 'config': Validation error while composing config:
Merge error: str is not a subclass of MySQLConfig. value: mysql
full_key:
object_type=Config
Searching the internet/Stack Overflow already showed me that moving _self_ to the first position gets rid of the error - but then the composition order is "wrong".
Keeping the order as it is and using mysql.yaml for an override works well when done via the command line (python myapp.py db=mysql, with the line "db: mysql" not present), but for my use case it is much more convenient to handle it all via the yaml file(s).
I assume the same functionality is available via the CLI and via files/code, and that I just did not manage to figure out how it works.
(hydra version 1.1 in a conda environment with python 3.9)
Thank you very much in advance for any help that you can provide.
If I understand correctly, you want to use the defaults list in your primary yaml file to merge the base_mysql config with the mysql config. This will do the trick:
defaults:
  - base_config
  - db: [base_mysql, mysql]
  - _self_

debug: true
Passing a list [base_mysql, mysql] of config names causes those configs, base_mysql and mysql, to be merged together. This is documented here -- see the "CONFIG_NAMES" alternative for specifying an option in the defaults list.
Note that passing the CLI override db=mysql (as in python myapp.py db=mysql) results in modification of the defaults list; the resulting defaults list will be the same as if you had used the following in your yaml file:
defaults:
  - base_config
  - db: mysql
  - _self_

debug: true
You can pass a list [base_mysql, mysql] of config names at the CLI like this:
python my_app.py 'db=[base_mysql, mysql]'
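For completeness, here is a minimal myapp.py in the spirit of the linked structured-config tutorial that the configs above compose against. The dataclass fields mirror the example output; the debug field on Config and the conf config_path are assumptions, so adjust them to your actual schema and layout:
# myapp.py - a sketch mirroring the Hydra structured-config tutorial setup
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import MISSING, OmegaConf


@dataclass
class MySQLConfig:
    driver: str = "mysql"
    host: str = "localhost"
    port: int = 3306
    user: str = MISSING
    password: str = MISSING


@dataclass
class Config:
    db: MySQLConfig = MISSING
    debug: bool = False  # assumed, so that "debug: true" in config.yaml validates


cs = ConfigStore.instance()
cs.store(name="base_config", node=Config)                   # referenced as "base_config" in defaults
cs.store(group="db", name="base_mysql", node=MySQLConfig)   # referenced as "db: base_mysql"


@hydra.main(config_path="conf", config_name="config")
def my_app(cfg: Config) -> None:
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    my_app()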
I've set up a docker container with PhotoPrism, which is a self-hosted replacement for Google Photos, and imported all my pictures into its database. Now I think it would be really cool to get access to the MariaDB database that the program uses. The database is in a separate docker container:
CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS                                       NAMES
bc9832e0cac1   mariadb:10.5                   "docker-entrypoint.s…"   5 seconds ago   Up 4 seconds   0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   pictures_mariadb_1
27eb740da6bd   photoprism/photoprism:latest   "/entrypoint.sh phot…"   6 weeks ago     Up 4 seconds   0.0.0.0:2342->2342/tcp, :::2342->2342/tcp   pictures_photoprism_1
The docker-compose file contains the info to access the database:
PHOTOPRISM_DATABASE_DRIVER: "mysql" # Use MariaDB (or MySQL) instead of SQLite for improved performance
PHOTOPRISM_DATABASE_SERVER: "mariadb:3306" # MariaDB database server (hostname:port)
PHOTOPRISM_DATABASE_NAME: "photoprism" # MariaDB database schema name
PHOTOPRISM_DATABASE_USER: "photoprism" # MariaDB database user name
PHOTOPRISM_DATABASE_PASSWORD: "XXXXXXXX" # MariaDB database user password
And further down in the file is the docker-compose part for mariadb:
mariadb:
  image: mariadb:10.5
  restart: unless-stopped
  ports:
    - 3307:3306 # 3306 is already in use
  security_opt:
    - seccomp:unconfined
    - apparmor:unconfined
  command: mysqld --transaction-isolation=READ-COMMITTED --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=512 --innodb-rollback-on-timeout=OFF --innodb-lock-wait-timeout=50
  volumes: # Don't remove permanent storage for index database files!
    - "~/.mariadb/database:/var/lib/mysql"
  environment:
    MYSQL_ROOT_PASSWORD: YYYYYYYY
    MYSQL_DATABASE: photoprism
    MYSQL_USER: photoprism
    MYSQL_PASSWORD: XXXXXXXX
I tried to translate this into R code, but I have very limited experience with the underlying packages and also don't know enough about Docker. So my guess was that one of these code chunks should work (but they don't):
con <- DBI::dbConnect(
  drv = RMariaDB::MariaDB(),
  dbname = "photoprism",
  username = "photoprism",
  password = "XXXXXXXX",
  host = "mariadb",
  port = 3307
)
#> Error: Failed to connect: Unknown MySQL server host 'mariadb' (-2)

con <- DBI::dbConnect(
  drv = RMariaDB::MariaDB(),
  dbname = "photoprism",
  username = "photoprism",
  password = "XXXXXXXX",
  host = "localhost",
  port = 3307
)
#> Error: Failed to connect: Access denied for user 'photoprism'#'localhost'
Created on 2022-01-22 by the reprex package (v2.0.1)
I'm trying to access the container on the same machine which is running the containers. So localhost should be correct.
I'm totally new to Flyway but I'm trying to migrate a number of identical test databases using the docker-compose flyway+mysql arrangement described in https://github.com/flyway/flyway-docker
As far as I can tell, the migrate command can take multiple schemas in its -schemas argument but it only seems to apply the actual SQL migration to the first schema in the list.
For example, when I run the migrate with schemas=test_1,test_2,test_3, flyway creates all three databases but only creates the tables specified in the migration file on the first test_1 database.
Is there a way to apply the SQL migration file to all the schemas in the list?
I'm going to leave this question up in case someone can still answer how multiple schemas are useful if the migration file isn't applied to all databases in the list. But I was able to handle multiple databases in a docker-compose by overriding the Flyway entrypoint and command.
So now my docker-compose service looks like:
services:
  flyway:
    image: flyway/flyway:6.1.4
    volumes:
      - ./migrations:/flyway/sql
    depends_on:
      - db
    entrypoint: ["bash"]
    command: >
      -c "/flyway/flyway -url=jdbc:mysql://db -schemas=test1 migrate;
          /flyway/flyway -url=jdbc:mysql://db -schemas=test2 migrate"
For me what worked was breaking up my migrations into separate executions in my docker-compose file along with docker-postgresql-multiple-databases as follows:
version: '3.8'
services:
  postgres-db:
    image: 'postgres:13.3'
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'customers,addresses'
      POSTGRES_USER: 'pocketlaundry'
      POSTGRES_PASSWORD: 'iceprism'
    volumes:
      - ./docker-postgresql-multiple-databases:/docker-entrypoint-initdb.d
    expose:
      - '5432' # Publishes 5432 to other containers (addresses-flyway, customers-flyway) but NOT to host machine
    ports:
      - '5432:5432'
  addresses-flyway:
    image: flyway/flyway:7.12.0
    command: -url=jdbc:postgresql://postgres-db:5432/addresses -schemas=public -user=pocketlaundry -password=iceprism -connectRetries=60 migrate
    volumes:
      - ./sports-ball-project/src/test/resources/db/addresses/migrations:/flyway/sql
    depends_on:
      - postgres-db
    links:
      - postgres-db
  customers-flyway:
    image: flyway/flyway:7.12.0
    command: -url=jdbc:postgresql://postgres-db:5432/customers -schemas=public -user=pocketlaundry -password=iceprism -connectRetries=60 migrate
    volumes:
      - ./sports-ball-project/src/test/resources/db/customers/migrations:/flyway/sql
    depends_on:
      - postgres-db
    links:
      - postgres-db
I want to create a Mongo connection (other than default) without using the Airflow UI.
I read from the Airflow documentation:
Connections in Airflow pipelines can be created using environment
variables. The environment variable needs to have a prefix of
AIRFLOW_CONN_ for Airflow with the value in a URI format to use the
connection properly.
When referencing the connection in the Airflow pipeline, the conn_id
should be the name of the variable without the prefix. For example, if
the conn_id is named postgres_master the environment variable should
be named AIRFLOW_CONN_POSTGRES_MASTER (note that the environment
variable must be all uppercase).
I tried to apply this when using the Puckel docker image.
This is a docker compose using that image:
version: '2.1'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
  webserver:
    image: puckel/docker-airflow:1.10.6
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - AIRFLOW_CONN_MY_MONGO=mongodb://mongo:27017
    volumes:
      - ./src/:/usr/local/airflow/dags
      - ./requirements.txt:/requirements.txt
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
Note the line AIRFLOW_CONN_MY_MONGO=mongodb://mongo:27017 where I'm passing the environment variable as the Airflow documentation suggests.
The problem is that no my_mongo connection appears when I list the connections in the UI.
Any advice? Thanks!
The connection won't be listed in the UI when you create it with an environment variable.
Reason:
Airflow supports the creation of connections via environment variables for ad-hoc jobs in the DAGs.
The connections shown in the UI are actually saved in the DB and retrieved from it; the ones created by environment variables are not stored in the DB.
How do I test my connection?
Create a sample DAG and use your connection to run a sample job. It should work fine.
I read a Puckel issue where they mention that the connection is created but not shown in the UI. I tested it, and indeed the connection works when used in a DAG.
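For illustration, here is a minimal sketch of such a test DAG that exercises the my_mongo connection. It assumes Airflow 1.10.x (as in the Puckel image), where MongoHook lives under airflow.contrib.hooks, and that pymongo/the mongo extra is installed; the DAG and task names are made up:
# sample_mongo_dag.py - a sketch; assumes pymongo (the "mongo" extra) is available in the image
from datetime import datetime

from airflow import DAG
from airflow.contrib.hooks.mongo_hook import MongoHook
from airflow.operators.python_operator import PythonOperator


def ping_mongo():
    # conn_id matches the AIRFLOW_CONN_MY_MONGO environment variable, without the prefix
    hook = MongoHook(conn_id="my_mongo")
    client = hook.get_conn()
    print(client.list_database_names())


dag = DAG(
    "mongo_connection_test",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
)

ping = PythonOperator(task_id="ping_mongo", python_callable=ping_mongo, dag=dag)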