Azure: Unable to use volumeMount with MariaDB container instance - mariadb

I'm trying to store my MariaDB data in an Azure Storage Account.
In my YAML I've got this to define the MariaDB image:
- name: mariadb
  properties:
    image: mariadb:latest
    environmentVariables:
      - name: "MYSQL_INITDB_SKIP_TZINFO"
        value: "1"
      - name: "MYSQL_DATABASE"
        value: "metrics"
      - name: "MYSQL_USER"
        value: "user"
      - name: "MYSQL_PASSWORD"
        value: "password"
      - name: "MYSQL_ROOT_PASSWORD"
        value: "root_password"
    ports:
      - port: 3306
        protocol: TCP
    resources:
      requests:
        cpu: 1.0
        memoryInGB: 1.5
    volumeMounts:
      - mountPath: /var/lib/mysql
        name: filesharevolume
My volume definition looks like this:
volumes:
  - name: filesharevolume
    azureFile:
      sharename: <share-name>
      storageAccountName: <name>
      storageAccountKey: <key>
When this container starts, however, it gets terminated with an error explaining that the ibdata1 file size doesn't match what's in the config file.
If I remove the volumeMount, the database image works fine.
Is there something I'm missing?

For this issue, the reason is explained in this note from the documentation:
Mounting an Azure Files share to a container instance is similar to a
Docker bind mount. Be aware that if you mount a share into a container
directory in which files or directories exist, these files or
directories are obscured by the mount and are not accessible while the
container runs.
The file share is mounted over the existing directory and hides its contents. MariaDB then rebuilds the ibdata1 file inside the (empty) share, and its size no longer matches what was there before.
When using an Azure file share to persist data, I recommend mounting it only to a directory that does not already exist in the image, or to a directory whose existing files do not affect the normal running of the application.
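As a rough sketch of that suggestion (the /mnt/datavolume path and the --datadir flag pass-through are assumptions, not something verified against ACI), you could mount the share at a path that doesn't exist in the image so nothing gets obscured:

    volumeMounts:
      - mountPath: /mnt/datavolume   # hypothetical empty path instead of /var/lib/mysql
        name: filesharevolume

and then point MariaDB's data directory at that path (for example by passing --datadir=/mnt/datavolume to the server), so the data still persists in the share.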

Related

Docker is slow after adding volumes (Wordpress)

I would like to use Docker for local development. When I create a container with Wordpress using Docker Compose, everything loads very quickly in the browser. It's much faster than using Local by Flywheel. The problem is that I do not have access to Wordpress files. To access these files, I added volumes to docker-compose.yml:
volumes:
  - ./wp-content:/var/www/html/wp-content
I can access the files now, but everything loads so slowly in the browser that using Docker loses its meaning.
Is it possible to speed it up in any way?
The problem is the volume's "consistency" type. Set it to "cached":
services:
  wordpress:
    ...
    volumes:
      - ./data:/data
      - ./scripts:/docker-entrypoint-initwp.d
      #- ./wp-content:/app/wp-content
      - type: bind
        source: ./wp-content
        target: /app/wp-content
        consistency: cached
      #- ./php-conf:/usr/local/etc/php
      - type: bind
        source: ./php-conf
        target: /usr/local/etc/php
        consistency: cached
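For reference, the same hint can also be written with the short volume syntax (a sketch using the same paths as above; the consistency flags only have an effect on Docker Desktop for Mac and are ignored elsewhere):

    volumes:
      - ./wp-content:/app/wp-content:cached
      - ./php-conf:/usr/local/etc/php:cached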
See here for more details.

How to get root password in Bitnami Wordpress from kubernetes shell?

I have installed WordPress in Rancher (docker.io/bitnami/wordpress:5.3.2-debian-10-r43). I have to make wp-config.php writable, but I get stuck when I get a shell inside this pod and try to log in as root:
kubectl exec -t -i --namespace=annuaire-p-brqcw annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7 bash
I cannot log in as root; no password matches the Bitnami WordPress installation.
wordpress@annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7:/$ su root
Password:
su: Authentication failure
What is the default password, or how can I change it?
I really need your help!
The WordPress container has been migrated to a "non-root" user
approach. Previously the container ran as the root user and the Apache
daemon was started as the daemon user. From now on, both the container
and the Apache daemon run as user 1001. You can revert this behavior
by changing USER 1001 to USER root in the Dockerfile.
No writing permissions will be granted on wp-config.php by default.
This means that the only way to run it as the root user is to create your own Dockerfile and change the user to root.
However, it's not recommended to run those containers as root, for security reasons.
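A minimal sketch of such a Dockerfile, based on the image tag from the question (again, not recommended for the security reasons above):

    FROM docker.io/bitnami/wordpress:5.3.2-debian-10-r43
    # Revert the non-root change described in the Bitnami note above
    USER root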
The simplest and most native Kubernetes way to change the file content on the Pod's container file system is to create a ConfigMap object from a file using the following command:
$ kubectl create configmap myconfigmap --from-file=foo.txt
$ cat foo.txt
foo test
(Check the ConfigMaps documentation for details how to update them.)
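For example, one common way to update that ConfigMap later from the same file (a sketch reusing the hypothetical foo.txt above; with older kubectl versions use --dry-run instead of --dry-run=client) is:

    $ kubectl create configmap myconfigmap --from-file=foo.txt --dry-run=client -o yaml | kubectl apply -f -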
then mount the ConfigMap into your container to replace the existing file, as follows
(the example requires some adjustments to work with the WordPress image):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx
      volumeMounts:
        - name: volname1
          mountPath: "/etc/wpconfig.conf"
          readOnly: true
          subPath: foo.txt
  volumes:
    - name: volname1
      configMap:
        name: myconfigmap
In the above example, the file in the ConfigMap's data: section replaces the original /etc/wpconfig.conf file (or creates it if the file doesn't exist) in the running container, without the need to build a new container.
$ kubectl exec -ti mypod -- bash
root@mypod:/# ls -lah /etc/wpconfig.conf
-rw-r--r-- 1 root root 9 Jun 4 16:31 /etc/wpconfig.conf
root@mypod:/# cat /etc/wpconfig.conf
foo test
Note that the file permissions are 644, which is enough for the file to be readable by a non-root user.
BTW, the Bitnami Helm chart also uses this approach: it relies on an existing ConfigMap in your cluster for adding a custom .htaccess, and on a PersistentVolumeClaim for mounting the WordPress data folder.

Docker with Symfony 4 - Unable to see the file changes

I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, PHP-FPM and nginx.
I have configured the application, but the performance was not great (~700 ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I went for the mount semantics configuration and configured the volumes to use the cached setting. Then I moved vendor to a separate volume, as it caused most of the performance issues.
As a second step I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly. But now I have realized that Docker is not reacting to changes in the code.
First, I thought it had something to do with the Symfony 4 cache, so I connected to the php container and ran php bin/console cache:clear. The cache was cleared, but Docker did not react to anything. I double-checked the files on both the web and php containers, and the files are changed there. I'm wondering if there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony/the container does not react to changes even after a complete image rebuild and removal of the mount semantics configuration and docker-sync. So, basically, it's plain Docker with a hello-world Symfony 4 application, and it does not react to changes. Changes are not even synced into the container.
Configuration:
# docker-compose-dev.yml
version: '3'

volumes:
  symfony-sync:
    external: true

services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor

networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start

stop:
	docker-compose stop
	docker-sync stop
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Also have a look at this repo for an example: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem will be in your host or Dockerfile. Be sure you don't enable OPcache for development.
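As a sketch of that last point, a development override in your PHP configuration (wherever the build/php image loads its ini files from; that location is an assumption here) could disable OPcache, or at least make it re-check files on every request:

    ; development settings: either turn OPcache off entirely...
    opcache.enable=0
    ; ...or keep it on but always revalidate changed files
    ;opcache.validate_timestamps=1
    ;opcache.revalidate_freq=0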

Kubernetes Minikube Secrets appear not mounted in Pod

I have a "Deployment" in Kubernetes which works fine in GKE, but fails in MiniKube.
I have a Pod with 2 containers:-
(1) Nginx as reverse proxy ( reads secrets and configMap volumes at /etc/tls & /etc/nginx respectively )
(2) A JVM based service listening on localhost
The problem in the minikube deployment is that the Nginx container fails to read the TLS certs which appear not to be there - i.e. the volume mount of the secrets to the Pod appears to have failed.
nginx: [emerg] BIO_new_file("/etc/tls/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/tls/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
But if I do "minikube logs" I get a large amount of seemingly "successful" tls volume mounts...
MountVolume.SetUp succeeded for volume "kubernetes.io/secret/61701667-eca7-11e6-ae16-080027187aca-scriptwriter-tls" (spec.Name: "scriptwriter-tls")
And the secrets themselves are in the cluster okay ...
$ kubectl get secrets scriptwriter-tls
NAME               TYPE      DATA      AGE
scriptwriter-tls   Opaque    3         1h
So it would appear that, as far as Minikube is concerned, all is well from a secrets point of view. But on the other hand, the nginx container can't see it.
I can't log on to the container either, since it keeps terminating.
For completeness the relevant sections from the Deployment yaml ...
Firstly the nginx config...
- name: nginx
  image: nginx:1.7.9
  imagePullPolicy: Always
  ports:
    - containerPort: 443
  lifecycle:
    preStop:
      exec:
        command: ["/usr/sbin/nginx", "-s", "quit"]
  volumeMounts:
    - name: "nginx-scriptwriter-dev-proxf-conf"
      mountPath: "/etc/nginx/conf.d"
    - name: "scriptwriter-tls"
      mountPath: "/etc/tls"
And secondly the volumes themselves, defined at the pod level ...
volumes:
  - name: "scriptwriter-tls"
    secret:
      secretName: "scriptwriter-tls"
  - name: "nginx-scriptwriter-dev-proxf-conf"
    configMap:
      name: "nginx-scriptwriter-dev-proxf-conf"
      items:
        - key: "nginx-scriptwriter.conf"
          path: "nginx-scriptwriter.conf"
Any pointers or help would be greatly appreciated.
I am a first-class numpty! :-) Sometimes the error is just the error! The problem was that the secrets were created from local $HOME/.ssh/* certs ... and if you generate them from different computers with different certs, then guess what?! All fixed now :-)
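As an aside, one way to sanity-check that the secret in the cluster actually contains the certificate you expect (the server.crt key name is assumed from the nginx error above) is to decode it and inspect it with openssl:

    $ kubectl get secret scriptwriter-tls -o jsonpath="{.data.server\.crt}" | base64 --decode | openssl x509 -noout -subject -fingerprint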

Symfony app deployment with docker

I come here because I'm developing an app with Symfony 3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search

volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local

networks:
  default:
nginx is clear, php is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before, I didn't use Docker; to deploy the web app I used Magallanes or Deployer.
With Docker I can use the docker-compose file and recreate the images and containers on the server; I can also save my containers as images or tar archives and load them on the server. That's fine for nginx and php-fpm, but what about Elasticsearch and the db? I need to keep their data for future updates of the code. Then, when I deploy the code, I need to run a Doctrine migration and maybe some other commands, and Deployer does that perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as embedded DNS, meaning you can reach other containers on the same network by name from your applications. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user defined network:
docker network create --driver bridge <networkname>
Example docker-compose service using the user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
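For the Compose file to attach to a network created outside of it with docker network create, you would also declare it as external at the top level (a sketch; <networkname> is the placeholder used above):

    networks:
      <networkname>:
        external: true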
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data.
Third: when you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container, mounts the volumes from the db container, mounts the current directory into the container as /backup, and uses the ubuntu image and the tar command to create a backup of /dbdata in the container (consider changing this to your DB's data directory, e.g. /var/lib/mysql) inside the /backup directory mounted from your Docker host. After the operation completes, the transient container (the ubuntu container used for creating the backup) is removed thanks to the --rm switch.
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command.
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
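For completeness, a sketch of creating that container with an empty volume before restoring (the dbstore2 name and /dbdata path match the command above; adapt them to your actual data directory):

    docker run -v /dbdata --name dbstore2 ubuntu /bin/bash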
