Docker is slow after adding volumes (WordPress)

I would like to use Docker for local development. When I create a container with WordPress using Docker Compose, everything loads very quickly in the browser. It's much faster than using Local by Flywheel. The problem is that I do not have access to the WordPress files. To access these files, I added volumes to docker-compose.yml:
volumes:
  - ./wp-content:/var/www/html/wp-content
I can access the files now, but everything loads so slowly in the browser that using Docker loses its meaning.
Is it possible to speed it up in any way?

The problem is the volume's consistency mode. Set it to cached:
services:
  wordpress:
    ...
    volumes:
      - ./data:/data
      - ./scripts:/docker-entrypoint-initwp.d
      #- ./wp-content:/app/wp-content
      - type: bind
        source: ./wp-content
        target: /app/wp-content
        consistency: cached
      #- ./php-conf:/usr/local/etc/php
      - type: bind
        source: ./php-conf
        target: /usr/local/etc/php
        consistency: cached
See the Docker documentation on bind mounts for more details. Note that the consistency options (cached, delegated) only have an effect on Docker Desktop for Mac; on Linux, bind mounts are native and the flag is ignored.

Related

Multiple docker-compose: Error: getaddrinfo EAI_AGAIN from frontend to backend

I have two separate folders, one for the backend and one for the frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend is a headless WordPress installation on nginx, intended to serve the frontend as an API. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network
  mysql:
    ...
    networks:
      - internal-network
  wordpress:
    ...
    networks:
      - internal-network
networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the Fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/enpoint', ...)
I also tried ports 80 and 8080, without success.
The sequence of commands I run is:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all backend containers run fine, and I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not an expert Docker user, so I might be missing something here, but I understand there may be a networking issue between the containers. I read many answers on this topic but couldn't figure it out.
Any recommendations?
Just to add a proper answer:
Generally, you should not be executing multiple docker-compose up -d commands.
If you want to combine two separate docker-compose configs and run them as one (slightly preferable), you can use the extends keyword as described in the docs; see the sketch after this list.
However, I would suggest that you treat it as a single docker-compose project, which can itself contain multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or in multiple repos; that's up to you.
A real working example that validates this approach: headless-wordpress-nextjs-starter-kit and its docker-compose.yml
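A minimal sketch of the extends approach (illustrative; note that extends was dropped in the v3 compose file format and later restored in the Compose Specification, so it requires a Compose version that supports it):
# frontend/docker-compose.yml (sketch)
services:
  nextjs:
    build: .
    ports:
      - 3000:3000
  nginx:
    # reuse the backend's nginx service definition instead of running a second project
    extends:
      file: ../backend/docker-compose.yml
      service: nginx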
I found this thread:
Communication between multiple docker-compose projects
Looking at the most upvoted answers there, I wonder if it is related to the network name prefix.
It seems the internal-network would be prefixed with frontend_. You can also try to locate the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified explicitly (because Docker Compose prefixes resource names with the project name by default). Your backend compose networks section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend compose file, so you should omit the docker network create ... command (you just need to bring the frontend up first). Alternatively, treat the network as external in both files and keep the command; in that case, use the named external network shown above in your frontend compose file as well.
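For the second option, the frontend networks section would then match the backend one; a sketch assuming you keep the docker network create internal-network command:
# frontend/docker-compose.yml — network created outside Compose
networks:
  internal-network:
    name: internal-network
    external: true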

Azure: Unable to use volumeMount with MariaDB container instance

I'm trying to store my MariaDB data in an Azure Storage Account.
In my YAML I've got this to define the MariaDB image:
- name: mariadb
  properties:
    image: mariadb:latest
    environmentVariables:
      - name: "MYSQL_INITDB_SKIP_TZINFO"
        value: "1"
      - name: "MYSQL_DATABASE"
        value: "metrics"
      - name: "MYSQL_USER"
        value: "user"
      - name: "MYSQL_PASSWORD"
        value: "password"
      - name: "MYSQL_ROOT_PASSWORD"
        value: "root_password"
    ports:
      - port: 3306
        protocol: TCP
    resources:
      requests:
        cpu: 1.0
        memoryInGB: 1.5
    volumeMounts:
      - mountPath: /var/lib/mysql
        name: filesharevolume
My volume definition looks like this:
volumes:
  - name: filesharevolume
    azureFile:
      sharename: <share-name>
      storageAccountName: <name>
      storageAccountKey: <key>
When this container starts, however, it is terminated with an error explaining that the ibdata1 file size doesn't match what's in the config file.
If I remove the volumeMount, the database image works fine.
Is there something I'm missing?
The reason for this issue is explained in this note from the Azure documentation:
Mounting an Azure Files share to a container instance is similar to a Docker bind mount. Be aware that if you mount a share into a container directory in which files or directories exist, these files or directories are obscured by the mount and are not accessible while the container runs.
The file share mounts over the existing directory and hides its contents. MariaDB then rebuilds the ibdata1 file, but the new file is empty and does not match the size recorded in the existing configuration.
When using an Azure file share to persist data, I recommend mounting it only to a directory that does not already exist in the image, or to one whose contents do not affect the normal running of the application.
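As an illustrative sketch (not from the original answer), you could mount the share at a path that does not exist in the image and point MariaDB's data directory there. This assumes ACI's command field replaces the image entrypoint, so the stock docker-entrypoint.sh is invoked explicitly, and that /mnt/data is otherwise unused:
- name: mariadb
  properties:
    image: mariadb:latest
    # hypothetical: keep the data on the share, mounted at a fresh path
    command: ["docker-entrypoint.sh", "mysqld", "--datadir=/mnt/data/mysql"]
    volumeMounts:
      - mountPath: /mnt/data
        name: filesharevolume
Be aware that databases on SMB-backed Azure Files can still run into locking and performance issues, so test this carefully.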

Docker with Symfony 4 - Unable to see the file changes

I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, php-fpm, and nginx.
I have the application configured, but the performance was not great (~700 ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I configured the volumes to use the cached consistency setting. Then I moved vendor to a separate volume, as it caused most of the performance issues.
Second, I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly. But now I realize that the containers are not reacting to changes in the code.
At first I thought it had something to do with the Symfony 4 cache, so I connected to the php container and ran php bin/console cache:clear. The cache was cleared, but nothing changed. I double-checked the files on both the web and php containers, and the changes are there. I'm wondering if there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony does not react to changes even after a complete image rebuild and removal of the consistency settings and docker-sync. So basically it's plain Docker with a hello-world Symfony 4 application, and it does not react to changes. Changes are not even synced to the container.
Configuration:
# docker-compose-dev.yml
version: '3'
volumes:
  symfony-sync:
    external: true
services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start

stop:
	docker-compose stop
	docker-sync stop
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Also try this repo as an example: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem is in your host or Dockerfile. Make sure you don't enable opcache for development.
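For example, a minimal sketch of one way to keep opcache off in development, assuming the php image loads extra ini files from /usr/local/etc/php/conf.d (the default in the official PHP images); the php-dev.ini file name is illustrative:
# docker-compose-dev.yml (sketch)
services:
  php:
    build: build/php
    volumes:
      # php-dev.ini contains: opcache.enable=0
      - ./php-dev.ini:/usr/local/etc/php/conf.d/zz-dev.ini:ro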

Default configuration for WordPress Docker Container

I'm currently working on a project using the WordPress API.
This is my docker-compose.yml file:
version: '3.1'
services:
  wordpress:
    image: wordpress
    volumes:
      - ./volumes/wordpress/:/var/www/html
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: root
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    volumes:
      - ./volumes/mysql/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./src/:/src
    ports:
      - 8081:8081
    depends_on:
      - wordpress
In order to use the WordPress API, I need to configure the WordPress container manually by going to http://localhost:8080/wp-admin and changing some settings.
The thing is, I need to make these settings changes automatic, because every time I remove the volume folder to reset the WordPress content, it also removes the settings.
Any idea on how I can achieve this?
I guess that all settings configured via the wp-admin section are stored in the database.
If that's the case, then you can do this:
1. Set up a first WordPress instance by running your docker-compose stack and completing the setup steps.
2. Stop the stack. At this point the mysql volume folder contains a database structure with a configured WordPress in it.
3. Store the contents of that folder somewhere.
Now, if you want to create another WordPress instance, you can edit the docker-compose.yml file to adjust the volume binding and make sure that the initial content of the mysql volume contains the data you saved in step 3 above.
When you start the new docker-compose stack, it will start from a populated database and you should have a preconfigured WordPress instance.
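A related sketch (not part of the original answer): instead of copying raw database files, you could export the configured database with mysqldump to a seed.sql and mount it via the mysql image's documented /docker-entrypoint-initdb.d hook, which replays any *.sql files whenever the data directory starts out empty (the ./seed path is illustrative):
mysql:
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: root
  volumes:
    # seed.sql in ./seed runs only on first start with an empty /var/lib/mysql
    - ./seed/:/docker-entrypoint-initdb.d/:ro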
You need to locate the file or folder that contains the settings you are changing.
Start the container, make the changes, and back up the settings file to your host machine using:
docker cp <container-name>:<path-to-settings> .
You can then create a custom image that replaces the default settings with the backed-up settings you copied to the host:
FROM wordpress
COPY <settings-from-host> <settings-path-in-container>
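Then build the image, for example with docker build -t my-wordpress . (the tag is illustrative), and point the wordpress service's image: at it in docker-compose.yml instead of the stock wordpress image.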

FTP into existing Docker Containers

I'm wondering whether it is possible to FTP into an already existing Docker container. For example, I'm using dockerfile/ghost in combination with jwilder/nginx-proxy, and once I deploy a container, I'd like the user to be able to FTP into the container running Ghost so they can upload additional files such as themes, stylesheets, etc. What would be the best method of accomplishing this? Thanks in advance!
You have a few choices:
run ftp in the Ghost container and expose a port
use a host directory to store the user content and give them FTP to the host (not the best choice)
map the same host directory into both the Ghost container and the FTP server container
Personally, I think the last one is the best, and also the most work, though the advantages will be worth it in the long run. I'm making the assumption that the uploaded content should survive container termination, which is why I recommend using a mapped host directory; if this is not the case, you can use linked volumes between the FTP container and the Ghost container.
Probably the most "dockery" way of doing this would be to run a container for your ftp server (something such as this https://github.com/gizur/docker-ftpserver) and a separate one for ghost, then mount a common volume.
The best way to synchronize these is with a version 2 docker-compose.yml
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
The docker-compose.yml file might look something like this:
version: '2'
services:
  ghost:
    build:
      context: ./ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "ftpsharevolume:/mnt/ftp"
    depends_on:
      - ftp
  ftp:
    image: someftpdockerimage
    volumes:
      - "ftpsharevolume:/srv/ftp"
    ports:
      - "21:21"
      - "20:20"
volumes:
  ftpsharevolume:
    driver: local
