I've set up a Docker container with PhotoPrism, which is a self-hosted replacement for Google Photos, and imported all my pictures into its database. Now I think it would be really cool to get access to the MariaDB database that the program uses. The database runs in a separate Docker container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc9832e0cac1 mariadb:10.5 "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp pictures_mariadb_1
27eb740da6bd photoprism/photoprism:latest "/entrypoint.sh phot…" 6 weeks ago Up 4 seconds 0.0.0.0:2342->2342/tcp, :::2342->2342/tcp pictures_photoprism_1
The docker-compose file contains the info to access the database:
PHOTOPRISM_DATABASE_DRIVER: "mysql" # Use MariaDB (or MySQL) instead of SQLite for improved performance
PHOTOPRISM_DATABASE_SERVER: "mariadb:3306" # MariaDB database server (hostname:port)
PHOTOPRISM_DATABASE_NAME: "photoprism" # MariaDB database schema name
PHOTOPRISM_DATABASE_USER: "photoprism" # MariaDB database user name
PHOTOPRISM_DATABASE_PASSWORD: "XXXXXXXX" # MariaDB database user password
And further down in the file is the docker-compose section for mariadb:
mariadb:
  image: mariadb:10.5
  restart: unless-stopped
  ports:
    - 3307:3306 # 3306 is already in use
  security_opt:
    - seccomp:unconfined
    - apparmor:unconfined
  command: mysqld --transaction-isolation=READ-COMMITTED --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=512 --innodb-rollback-on-timeout=OFF --innodb-lock-wait-timeout=50
  volumes: # Don't remove permanent storage for index database files!
    - "~/.mariadb/database:/var/lib/mysql"
  environment:
    MYSQL_ROOT_PASSWORD: YYYYYYYY
    MYSQL_DATABASE: photoprism
    MYSQL_USER: photoprism
    MYSQL_PASSWORD: XXXXXXXX
I tried to translate this into R code, but I have very limited experience with the underlying packages and also don't know enough about Docker. So my guess was that one of these code chunks should work (but they don't):
con <- DBI::dbConnect(
  drv = RMariaDB::MariaDB(),
  dbname = "photoprism",
  username = "photoprism",
  password = "XXXXXXXX",
  host = "mariadb",
  port = 3307
)
#> Error: Failed to connect: Unknown MySQL server host 'mariadb' (-2)

con <- DBI::dbConnect(
  drv = RMariaDB::MariaDB(),
  dbname = "photoprism",
  username = "photoprism",
  password = "XXXXXXXX",
  host = "localhost",
  port = 3307
)
#> Error: Failed to connect: Access denied for user 'photoprism'@'localhost'
Created on 2022-01-22 by the reprex package (v2.0.1)
I'm trying to access the container from the same machine that is running the containers, so localhost should be correct.
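From what I've read about Docker port mappings, my current guess is that the hostname "mariadb" only resolves inside the compose network, and that some MySQL clients treat host = "localhost" as a request for a local Unix socket rather than a TCP connection. So maybe something like the following sketch, which forces TCP via 127.0.0.1 and uses the published host port 3307, is closer to what is needed; I haven't been able to verify this, and the password is of course the placeholder from the compose file:

# Untested sketch: connect from the host through the published 3307 -> 3306 mapping.
library(DBI)
library(RMariaDB)

con <- DBI::dbConnect(
  drv = RMariaDB::MariaDB(),
  dbname = "photoprism",
  username = "photoprism",
  password = "XXXXXXXX",  # PHOTOPRISM_DATABASE_PASSWORD / MYSQL_PASSWORD placeholder
  host = "127.0.0.1",     # not "mariadb": that name only exists inside the compose network
  port = 3307             # host side of the 3307:3306 port mapping
)
DBI::dbListTables(con)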
Related
Here I created a User model using sqlmodel.
from sqlmodel import SQLModel, Field, Index
from typing import Optional

class User(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=30)
    surname: str = Field(max_length=50)
docker-compose with postgres:
version: "3.4"
services:
db:
image: postgres:14.0-alpine
restart: always
container_name: test_db
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=testdb
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
volumes:
db:
Now I am trying to create migrations with alembic revision, using "alembic revision --autogenerate -m "msg"".
But it fails with:
File "C:\Python3.10\lib\socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
The error indicates that the hostname cannot be resolved. Is your database running locally? Does the Python file that holds the SQLAlchemy engine have access to it?
I also see that you are running alembic from your global Python installation (File "C:\Python3.10\); does that have the same dependencies as your Python application? In any case, it is very advisable to use virtual environments to ensure you are developing with the same modules that you run the alembic migrations with.
I'm trying to use the WordPress local development environment, which sets up a bunch of docker containers (and I'm new to Docker). One of them is a WordPress CLI, which I presume is running some scripts to do some configuration, but that particular container doesn't stay running (I believe this is intentional). I'm guessing that a script it's running is failing, and that's happening because of some configuration error, but I can't figure out how to tell what it's doing when it executes (what scripts it's running, what environment variables are set, etc...).
Is there any way to somehow "trace" what the container is trying to do?
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
wordpressdevelop/phpunit latest 8ebb6f73d762 2 days ago 732MB
wordpress latest 6c2c086d9173 2 days ago 554MB
wordpress <none> 408627ce79b1 2 days ago 551MB
composer latest ff854871a595 9 days ago 173MB
wordpress cli ee6de7f71aa0 9 days ago 137MB // I want to see what this does
mariadb latest e27cf5bc24fe 12 days ago 401MB
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19007b991e08 wordpress "docker-entrypoint.s…" 48 minutes ago Up 48 minutes 0.0.0.0:8888->80/tcp e6bc9159b910bda3d9b0dae2e230eabd_wordpress_1
26ac5c7ec782 wordpress "docker-entrypoint.s…" 48 minutes ago Up 48 minutes 0.0.0.0:8889->80/tcp e6bc9159b910bda3d9b0dae2e230eabd_tests-wordpress_1
8ae0a4dc4f77 mariadb "docker-entrypoint.s…" 48 minutes ago Up 48 minutes 0.0.0.0:54989->3306/tcp e6bc9159b910bda3d9b0dae2e230eabd_mysql_1
This is on MacOS 11.2.2 running Docker Desktop, Docker version 20.10.5.
I'm not sure if it's relevant, but for completeness' sake, I've included the docker-compose.yml which wp-env generates, below.
Thanks!
Background
This used to Just Work, but I think I broke something in my environment, and am trying to diagnose that. I initially asked about that on WordPress StackExchange, but have since dug deeper. The Docker compose step fails thusly:
...
⠏ Configuring WordPress.Creating e6bc9159b910bda3d9b0dae2e230eabd_cli_run ... done
⠹ Configuring WordPress.mysqlcheck: Got error: 1045: Access denied for user 'username_here'@'172.19.0.5' (using password: YES) when trying to connect
The database container is left up and running, and if I go look in the database, I see it's configured to have root connect with no password:
MariaDB [(none)]> select user, host, password from mysql.user;
+-------------+-----------+----------+
| User | Host | Password |
+-------------+-----------+----------+
| mariadb.sys | localhost | |
| root | localhost | |
| root | % | |
+-------------+-----------+----------+
3 rows in set (0.003 sec)
In the main WordPress container, the wp-config.php file contains this little snippet:
...
// a helper function to lookup "env_FILE", "env", then fallback
function getenv_docker($env, $default) {
    if ($fileEnv = getenv($env . '_FILE')) {
        return file_get_contents($fileEnv);
    }
    else if ($val = getenv($env)) {
        return $val;
    }
    else {
        return $default;
    }
}
...
/** MySQL database username */
define( 'DB_USER', getenv_docker('WORDPRESS_DB_USER', 'username_here') );
/** MySQL database password */
define( 'DB_PASSWORD', getenv_docker('WORDPRESS_DB_PASSWORD', 'password_here') );
Given the error message, I assume the CLI container is trying to do something similar, but the environment variables WORDPRESS_* aren't set, and so it's using the defaults, which ... aren't working. What I think I need to do is track down something that's failing to set those variables earlier in the run process.
docker-compose.yml
version: '3.7'
services:
  mysql:
    image: mariadb
    ports:
      - '3306'
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'mysql:/var/lib/mysql'
  wordpress:
    build: .
    depends_on:
      - mysql
    image: wordpress
    ports:
      - '${WP_ENV_PORT:-8888}:80'
    environment:
      WORDPRESS_DB_NAME: wordpress
    volumes: &ref_0
      - 'wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
  tests-wordpress:
    depends_on:
      - mysql
    image: wordpress
    ports:
      - '${WP_ENV_TESTS_PORT:-8889}:80'
    environment:
      WORDPRESS_DB_NAME: tests-wordpress
    volumes: &ref_1
      - 'tests-wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
  cli:
    depends_on:
      - wordpress
    image: 'wordpress:cli'
    volumes: *ref_0
    user: '33:33'
  tests-cli:
    depends_on:
      - tests-wordpress
    image: 'wordpress:cli'
    volumes: *ref_1
    user: '33:33'
  composer:
    image: composer
    volumes:
      - '/Users/cwr/src/cwra/foo:/app'
  phpunit:
    image: 'wordpressdevelop/phpunit:latest'
    depends_on:
      - tests-wordpress
    volumes:
      - 'tests-wordpress:/var/www/html'
      - '/Users/cwr/src/cwra/foo:/var/www/html/wp-content/plugins/foo'
      - 'phpunit-uploads:/var/www/html/wp-content/uploads'
    environment:
      LOCAL_DIR: html
      WP_PHPUNIT__TESTS_CONFIG: /var/www/html/phpunit-wp-config.php
volumes:
  wordpress: {}
  tests-wordpress: {}
  mysql: {}
  phpunit-uploads: {}
docker logs could help you here, as it also shows the logs of exited containers (Source).
UPDATE: According to the OP, the actual issue is still unclear, but starting with a fresh Docker & Node installation did the trick and got wp-env running.
I needed to put 3 shards of a database on three different servers. So I created 3 servers in pgAdmin (s1, s2, s3), and then put one shard on each server. Then I tried to connect to one of the servers from R; however, I couldn't make the connection. I always get an error:
Error in postgresqlNewConnection(drv, ...) : RS-DBI driver: (could not connect postgres@172.17.0.1:5432 on dbname "postgres": could not connect to server: Operation timed out Is the server running on host "172.17.0.1" and accepting TCP/IP connections on port 5432?
My code is:
#install.packages("RPostgreSQL")
require("RPostgreSQL")
library(DBI)
# create a connection
# save the password so that we can "hide" it as best as we can by collapsing it
pw <- {
  "postgres"
}
# loads the PostgreSQL driver
drv <- dbDriver("PostgreSQL")
# creates a connection to the postgres database
con <- dbConnect(
  drv,
  dbname = "postgres",
  host = "172.17.0.1",
  port = 5432,
  user = "postgres",
  password = pw
)
rm(pw) # removes the password
pgAdmin snap
Did I write something wrong?
If this is using a container, make sure to forward port 5432 on 0.0.0.0, i.e. the container is listening on port 5432.
Also, you have to check this setting in the postgresql.conf file if you are not making the connection locally ONLY:
# - Connection Settings -
#listen_addresses = 'localhost' >>>> This should be = '*' instead of localhost
Save the conf and restart the service. Hope this helps!
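To make this concrete, here is a rough R sketch of what connecting from the host could look like once the port is published and listen_addresses has been changed; it assumes the container maps 5432 to 5432 on the host and that the postgres superuser password is "postgres", as in the question:

# Rough sketch, not verified against your setup.
library(DBI)
library(RPostgreSQL)

drv <- dbDriver("PostgreSQL")
con <- dbConnect(
  drv,
  dbname = "postgres",
  host = "localhost",   # the published port on the host, not the bridge IP
  port = 5432,          # assumes a 5432:5432 port mapping on the container
  user = "postgres",
  password = "postgres"
)
dbGetQuery(con, "SELECT version();")
dbDisconnect(con)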
I have a local Docker container running PostgreSQL. I want to be able to connect to and interact with this database from R running on my host machine (Mac OS).
I can connect using pgadmin4 via the following address
http://0.0.0.0:5434/browser/
then adding a new server:
Add new server.
General Tab --> name: tagbase
Connection Tab --> Host name/address: postgres
Connection Tab --> Port: 5432
Connection Tab --> Maintenance database: postgres
Connection Tab --> Username: tagbase
This works perfectly.
However, to connect from R I try:
require("RPostgreSQL")
# load the PostgreSQL driver
drv <- dbDriver("PostgreSQL")
# create a connection to the postgres database
con <- RPostgreSQL::dbConnect(drv, dbname = "postgres",
host = "localhost", port = 5434,
user = "tagbase", password = "tagbase")
This attempt simply hangs until it crashes R.
Perhaps a viable solution is something similar to this. Many thanks for any help.
EDIT - 20190207
Thanks for the comments. I have made the changes with no improvement but agreed the changes were necessary.
I successfully start this docker network (of 3 containers) via terminal as below. It looks to me like I want to connect to the postgres container at 0.0.0.0 on port 5432, correct?
$ docker-compose up
Starting tagbase-server_postgres_1_3f42d4fc1a77 ... done
Starting tagbase-server_pgadmin4_1_52ab92a49f22 ... done
Starting tagbase-server_tagbase_1_9d3a22c8be46 ... done
Attaching to tagbase-server_postgres_1_3f42d4fc1a77, tagbase-server_pgadmin4_1_52ab92a49f22, tagbase-server_tagbase_1_9d3a22c8be46
postgres_1_3f42d4fc1a77 | 2019-02-05 19:35:45.999 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
I thought I was connecting to the server via R exactly as I've done using pgadmin but the following doesn't seem to work:
# create a connection to the postgres database
con <- DBI::dbConnect(RPostgreSQL::PostgreSQL(), dbname = "postgres",
host = "0.0.0.0", port = 5432,
user = "tagbase", password = "tagbase")
Error in postgresqlNewConnection(drv, ...) :
RS-DBI driver: (could not connect tagbase@0.0.0.0:5432 on dbname "postgres":
FATAL: role "tagbase" does not exist)
I now realize pgAdmin is also running in the Docker container network. Thus, localhost for the pgAdmin connection is the database server. It seems like I need a solution like this.
Note: the source for the Docker builds is here, following the instructions here.
If you want to connect directly to a Postgres database inside a Docker container from outside the Docker world, you must expose a port on the Postgres container. So first, you need to edit the file "Dockerfile-postgres" and add EXPOSE 5432:
FROM postgres:10
COPY ./sqldb/tagbase-schema.sql /docker-entrypoint-initdb.d/
# Expose default postgres port
EXPOSE 5432
Then build and run the containers according to the provided instructions (checked on October 6, 2019):
$ docker-compose build
$ docker-compose up
Add the database using pgAdmin
Add New Server
General Tab --> name: tagbase
Connection Tab --> Host name/address: postgres
Connection Tab --> Port: 5432
Connection Tab --> Maintenance database: postgres
Connection Tab --> Username: tagbase
Edit your R script according to the database name and port:
# install.packages('RPostgreSQL')
library(RPostgreSQL)
# load the PostgreSQL driver
drv <- dbDriver("PostgreSQL")
# create a connection to the postgres database
con <- RPostgreSQL::dbConnect(drv, dbname = "tagbase",
host = "localhost", port = 5432,
user = "tagbase", password = "tagbase")
# Test query
temp <- dbGetQuery(con, 'select * from public.metadata_types')
# Evaluate output
str(temp)
# 'data.frame': 142 obs. of 8 variables:
# $ attribute_id : num 1 2 3 4 5 6 7 8 9 10 ...
# $ category : chr "instrument" "instrument" "instrument" "instrument" ...
# $ attribute_name: chr "instrument_name" "instrument_type" "firmware" "manufacturer" ...
# $ type : chr "string" "string" "string" "string" ...
# $ description : chr "Append an identifer that is unique within your organization. This is essential if a device is recycled." "Type of instrument" "Version number of the firmware used to build the device" "Name of manufacturer" ...
# $ example : chr "16P0100-Refurb2" "archival, popup, satellite, acoustic tag, or acoustic receiver" NA "Wildlife Computers, Microwave Telemetry, Lotek Wireless, Desert Star Systems, CEFAS, StarOddi, Sea Mammal Resea"| __truncated__ ...
# $ comments : chr "Devices might be reused, so the serial number will be the same. The only way to distinguish is by providing a u"| __truncated__ "Should be restricted to the examples provided." NA NA ...
# $ necessity : chr "required" "required" "required" "required" ...
# Disconnect from database
dbDisconnect(con)
I have an EC2 instance set up with my Shiny app and my PostgreSQL database, and I want to get the Shiny app to read from the database.
If I type psql and \conninfo while SSH-ed into my instance, I get:
You are connected to database "ubuntu" as user "ubuntu" via socket in "/var/run/postgresql" at port "5432".
When I use R in the ec2 command line and type the following, I can read from my database no problem!
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = "ubuntu", host = "/var/run/postgresql", port = 5432, user = "ubuntu", password = pw)
However, when I put these same lines in my Shiny app.R file, I get:
Error in postgresqlNewConnection(drv, ...) :
RS-DBI driver: (could not connect ubuntu#/var/run/postgresql:5432 on dbname "ubuntu": FATAL: Peer authentication failed for user "ubuntu")
I've tried so many different values for host like
host = "localhost"
host = "my ec2 public ip address"
host = "127.0.0.1"
for example and nothing has been working.
My security group for this EC2 instance has an inbound rule for port 5432.
Could this be it: why is one file green and the other pink? The green one is the one that works (local) and the pink one is on my instance.
Finally figured it out. This is the same problem as Getting error: Peer authentication failed for user "postgres", when trying to get pgsql working with rails, except that I was getting a different error for the same underlying problem.
The answer that worked for me is the second one:
1. Edit pg_hba.conf:
nano /etc/postgresql/9.x/main/pg_hba.conf
and change peer in this line
local all postgres peer
to
local all postgres trust
2. Restart the server:
sudo service postgresql restart
3. Log in to psql and set your password:
psql -U postgres
ALTER USER postgres with password 'your-pass';
4. Finally, change the pg_hba.conf line from
local all postgres trust
to
local all postgres md5
and that finally worked
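For reference, a minimal sketch of what the connection code in app.R looks like once md5 (password) authentication is in place; the database name, user, and password below are placeholders for whatever role you actually set a password for with ALTER USER:

library(DBI)
library(RPostgreSQL)

# With md5 in pg_hba.conf, the password is what matters, not which OS user
# the Shiny process runs as (which is what peer authentication checks).
drv <- dbDriver("PostgreSQL")
con <- dbConnect(
  drv,
  dbname = "ubuntu",               # database name, as reported by \conninfo
  host = "/var/run/postgresql",    # Unix socket directory, same as the working example
  port = 5432,
  user = "ubuntu",                 # placeholder: the role you gave a password to
  password = "your-pass"           # placeholder password set via ALTER USER
)
dbListTables(con)
dbDisconnect(con)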