I've been trying to figure out how to connect to a mariadb container while using podman-compose and I keep getting an error.
Here's what I've done:
Software Versions
podman version = 3.3.1
podman-compose version = podman-compose-0.1.7-2.git20201120.el8.noarch
docker-compose.yml
version: '3'
services:
  db:
    image: mariadb:latest
    environment:
      MARIADB_USER: user
      MARIADB_PASSWORD: pass
      MARIADB_DATABASE: testdb
      MARIADB_ROOT_PASSWORD: rootpass
    volumes:
      - db_data:/var/lib/mysql
When I spin this container up using: podman-compose up -d
Then try to connect to this database using:
podman exec -it project_db_1 mariadb -uroot -prootpass
I get this error:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
I get this error both as a regular user and as the root user.
Yet, when I run the same container with podman:
Podman Command
podman run -d \
--name mariadb \
-e MARIADB_USER=user \
-e MARIADB_PASSWORD=pass \
-e MARIADB_DATABASE=testdb \
-e MARIADB_ROOT_PASSWORD=rootpass \
mariadb:latest
Then try to connect to this database using:
podman exec -it mariadb mariadb -uroot -prootpass
I am able to successfully login.
Can anyone please explain to me what I'm doing wrong with podman-compose or why it's not allowing me to login with the credentials I've given?
OK, for anyone who wants to know what I did to fix this...
Firstly, I uninstalled podman & podman-compose because there were just too many issues with them, and installed docker & docker-compose instead.
Secondly, when I ran this setup with docker-compose, I got an error that hadn't been showing up in podman-compose:
Error: services.db.environment must be a mapping
After Googling that error, I found out that I needed to format the environment section like this:
version: '3'
services:
  db:
    image: mariadb:latest
    environment:
      - MARIADB_USER=user
      - MARIADB_PASSWORD=pass
      - MARIADB_DATABASE=testdb
      - MARIADB_ROOT_PASSWORD=rootpass
    volumes:
      - db_data:/var/lib/mysql
After I fixed that, docker-compose ran beautifully and I could connect to my database, even from another container. I did test podman-compose with this fix as well, and it still wouldn't work. So I'm just going to use docker until podman fixes these issues.
You need to make sure the server binds to the "public" address of the container. By default it binds to 0.0.0.0, and on that interface you cannot access it from other pods.
Add a command to the service and set --bind-address, like this:
services:
  db:
    image: docker.io/mariadb:latest
    command: --bind-address=db
This will make it listen on the correct interface :)
Related
I'm using Docker Compose locally with:
app container: Nginx & PHP-FPM with a Symfony 4 app
PostgreSQL container
Redis container
It works great locally but when deployed to the development Docker Swarm cluster, I can't login to the Symfony app.
The Swarm stack is the same as local, except for PostgreSQL which is installed on its own server (not a Docker container).
Using the profiler, I nearly always get the following error:
Token not found
Token "2df1bb" was not found in the database.
When I display the content of the var/log/dev.log file, I get these lines about my login attempts:
[2019-07-22 10:11:14] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"http://dev.ip/public/login","method":"GET"} []
[2019-07-22 10:11:14] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} []
[2019-07-22 10:11:14] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
The only thing I find potentially useful here is the Guard authenticator does not support the request. message, but I have no idea what to search for from there.
UPDATE:
Here is my docker-compose.dev.yml (removed redis container and changed app environment variables):
version: "3.7"
networks:
  web:
    driver: overlay
services:
  # Symfony + Nginx
  app:
    image: "registry.gitlab.com/my-image"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    networks:
      - web
    ports:
      - 80:80
    environment:
      APP_ENV: dev
      DATABASE_URL: pgsql://user:pass@0.0.0.0/my-db
      MAILER_URL: gmail://user@gmail.com:pass@localhost
Here is the Dockerfile.dev used to build the app image on development servers:
# Base image
FROM php:7.3-fpm-alpine
# Source code into:
WORKDIR /var/www/html
# Import Symfony + Composer
COPY --chown=www-data:www-data ./symfony .
COPY --from=composer /usr/bin/composer /usr/bin/composer
# Alpine Linux packages + PHP extensions
RUN apk update && apk add \
supervisor \
nginx \
bash \
postgresql-dev \
wget \
libzip-dev zip \
yarn \
npm \
&& apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
&& pecl install redis \
&& docker-php-ext-enable redis \
&& docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo_pgsql \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip \
&& composer install \
--prefer-dist \
--no-interaction \
--no-progress \
&& yarn install \
&& npm rebuild node-sass \
&& yarn encore dev \
&& mkdir -p /run/nginx
# Nginx conf + Supervisor entrypoint
COPY ./dev.conf /etc/nginx/conf.d/default.conf
COPY ./.htpasswd /etc/nginx/.htpasswd
COPY ./supervisord.conf /etc/supervisord.conf
EXPOSE 80 443
ENTRYPOINT /usr/bin/supervisord -c /etc/supervisord.conf
UPDATE 2:
I pulled my Docker images and ran the application using only the docker-compose.dev.yml (without the docker-compose.local.yml that I also use locally). I have been able to login; everything is okay.
So... It works with Docker Compose locally, but not in Docker Swarm on a remote server.
UPDATE 3:
I made the dev server leave the Swarm cluster and started the services using Docker Compose. It works.
The issue is about going from Compose to Swarm. I created an issue: docker/swarm #2956
Maybe it's not your specific case, but it could help other users who hit problems in Docker Swarm that are not present in Docker Compose.
I've been fighting this issue for over a week. I found that the default network for Docker Compose uses the bridge driver, while Docker Swarm uses the overlay driver.
Later, I read in the Caveats section of the Postgres Docker image repo that there's a problem with IPVS connection timeouts in overlay networks, and it refers to this blog for solutions.
I tried the first option and changed the endpoint_mode setting to dnsrr in my docker-compose.yml file:
db:
  image: postgres:12
  # Other settings ...
  deploy:
    endpoint_mode: dnsrr
Keep in mind that there are some caveats (mentioned in the blog) to consider. However, you could try the other options.
You may also find something useful in this issue, as they faced the same problem.
I have a project using ASP.NET Core and SQL Server. I am trying to put everything in docker containers. For my app I need to have some initial data in the database.
I am able to use the SQL Server docker image from Microsoft (microsoft/mssql-server-linux), but it is (obviously) empty. Here is my docker-compose.yml:
version: "3"
services:
  web:
    build: .\MyProject
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: "microsoft/mssql-server-linux"
    environment:
      SA_PASSWORD: "your_password1!"
      ACCEPT_EULA: "Y"
I have an SQL script file that I need to run on the database to insert initial data. I found an example for mongodb, but I cannot figure out which tool I can use instead of mongoimport.
You can achieve this by building a custom image. I'm currently using the following solution. Somewhere in your Dockerfile you should have:
RUN mkdir -p /opt/scripts
COPY database.sql /opt/scripts
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=Y
RUN /opt/mssql/bin/sqlservr --accept-eula & sleep 30 & /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/database.sql
Alternatively you can wait for a certain text to be outputted, useful when working on the dockerfile setup, as it is immediate. It's less robust as it relies on some 'random' text of course:
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -i /opt/scripts/database.sql
Don't forget to put a database.sql (with your script) next to the dockerfile, as that is copied into the image.
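If you want compose to build this custom image instead of pulling the base image directly, a minimal fragment could look like this (the ./db directory name is made up; it would contain the Dockerfile above plus database.sql):

```yaml
# hypothetical layout: ./db holds the Dockerfile and database.sql
services:
  db:
    build: ./db
```

The SA password and EULA acceptance are already baked in via the ENV lines in the Dockerfile, so they don't need to be repeated here.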
Roet's answer https://stackoverflow.com/a/52280924/10446284 didn't work for me.
The trouble was the bash ampersands firing sqlcmd too early, not waiting for sleep 30 to finish.
Our Dockerfile now looks like this:
FROM microsoft/mssql-server-linux:2017-GA
RUN mkdir -p /opt/scripts
COPY db-seed/seed.sql /opt/scripts/
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=true
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/seed.sql
Footnotes:
Now the bash command works like this:
run-async(sqlservr) & run-async(wait-and-run-sqlcmd)
We chose sleep 60 because the build of the docker image happens "offline", before the runtime environment is set up, so those 60 seconds don't occur at container runtime anymore. Giving the sqlservr command more time gives our teammates' machines more time to complete the docker build phase successfully.
One simple option is to just navigate to the container file system and copy the database files in, and then use a script to attach.
This quickstart
https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker
has an example of using sqlcmd in a docker container, although I'm not sure how you would add this to whatever build process you have.
I would like to use a docker-compose app to run unit tests on a wordpress plugin.
Following (mostly) this tutorial I have created four containers:
my-wpdb:
  image: mariadb
  ports:
    - "8081:3306"
  environment:
    MYSQL_ROOT_PASSWORD: dockerpass
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: dockerpass
my-wpcli:
  image: tatemz/wp-cli
  volumes_from:
    - my-wp
  links:
    - my-wpdb:mysql
  entrypoint: wp
  command: "--info"
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp
  links:
    - my-wpdb
This tutorial got me as far as creating the phpunit files (xml, tests, bin, .travis), with the exception that I had to install subversion manually:
docker exec wp_my-wp_1 apt-get update
docker exec wp_my-wp_1 apt-get install -y wget git curl zip vim
docker exec wp_my-wp_1 apt-get install -y apache2 subversion libapache2-svn libsvn-perl
And run the last part of bin/install-wp-tests.sh manually in the database container:
docker exec wp_my-wpdb_1 mysqladmin create wordpress_test --user=root --password=dockerpass --host=localhost --protocol=tcp
I can run phpunit: docker-compose run --rm my-wp phpunit --help.
I can specify the config xml file:
docker-compose run --rm my-wp phpunit --configuration /var/www/html/wp-content/plugins/my-plugin/phpunit.xml.dist
However, the test wordpress installation lives in the my-wp container's /tmp directory: /tmp/wordpress-tests-lib/includes/functions.php
I think I have to link the my-phpunit container's /tmp to the one in my-wp?
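One way to share that directory (a sketch, assuming you can move to the version 3 file format, since volumes_from is gone there; the wp-tests volume name is made up) is a named volume mounted at the same path in both services:

```yaml
# hypothetical: both services mount the same named volume, so the test
# install created in my-wp is visible to my-phpunit at the same path
version: "3"
services:
  my-wp:
    image: wordpress
    volumes:
      - wp-tests:/tmp/wordpress-tests-lib
  my-phpunit:
    image: phpunit/phpunit
    volumes:
      - wp-tests:/tmp/wordpress-tests-lib
volumes:
  wp-tests:
```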
This doesn't answer my question, but as a solution to the problem, there is a GitHub repo for a Docker image that provides the desired features: https://github.com/chriszarate/docker-wordpress as well as a wrapper you can invoke it through: https://github.com/chriszarate/docker-wordpress-vip
I wonder if for the initial set-up (noted in the question), it might make more sense to add Phpunit to the Wordpress container, rather than making a separate container for it.
I have two servers. Each one must send some data to the other. The address of the other server (or servers) is passed as an argument (--servers ...).
The problem is that when the container of dmserver0 is created, it doesn't find the host "dmserver1", since that container has not been created yet. If I use links, then there is an error because of the circular dependency.
How could I solve this?
This is my docker-compose.yml:
services:
  dmserver0:
    build: .
    command: nodejs dmserver.js --servers 'tcp://dmserver1:2221'
    container_name: dmserver_0
  dmserver1:
    build: .
    command: nodejs dmserver.js --servers 'tcp://dmserver0:2220'
    container_name: dmserver_1
And this is my Dockerfile:
FROM node:boron
RUN mkdir -p /var/www/forum
WORKDIR /var/www/forum
RUN apt-get update
RUN apt-get install -y libzmq-dev
RUN ln -s /usr/bin/nodejs /usr/bin/node
ADD package.json /var/www/forum
RUN npm install
ADD . /var/www/forum
Docker can't help you with this. This is an architectural problem in your server applications.
The solution is to modify your servers' connection functionality: instead of exiting on a failed connection, enter a retry loop in both servers.
This will allow both servers to come up and connect to each other in any arbitrary timespan. This approach has the benefit of being robust and independent of external factors affecting the start-up period.
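As a sketch of that idea at the shell level (the retry helper and its parameters are made up; doing the same inside the Node connection code itself is the more robust route), you could wrap the startup command in a retry loop:

```shell
#!/bin/sh
# retry: run a command until it succeeds, up to $1 attempts,
# sleeping $2 seconds between tries; returns 1 if it never succeeds
retry() {
  max=$1; delay=$2; shift 2
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}

# hypothetical usage as the compose command for dmserver0:
# retry 30 2 nodejs dmserver.js --servers 'tcp://dmserver1:2221'
```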
I'm a Docker newbie and I'm trying to set up my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage; to be more specific, a Symfony start page.
Moreover with this command
docker run -i -t testdocker_application /bin/bash
I'm able to login to the container.
My problem is that if I try to go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I wrong?
Here some infos about my env:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
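Putting those two keys into the config from the question, the service would look like this:

```yaml
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash
```

Then docker-compose run application should drop you into a bash shell inside the container, with both volumes mounted.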