Docker and Drupal 8 - settings.php

I'm trying to dockerize Drupal 8, and I'm running into an issue: after running Drupal 8 in a container and installing it, if I then remove the container and start it again, it prompts me to install it again.
The thing is, when Drupal is installed, a settings.php file is created with the database details.
I wanted to create a systemd unit file for launching the Drupal 8 container in a smart way that even if it's removed, it should start again next time with the same installation.
Someone recommended that I write a systemd unit file with ConditionPathExists= to mount settings.php based on whether it exists locally. However, I don't think this will fully work, because the settings.php file generated during installation inside the container wouldn't be persisted back to the host machine.
So how can I solve the issue of making a Docker container for Drupal that offers to install if it hasn't been installed yet, and from then on use the installed instance even if the container is removed and rebuilt?

I would highly recommend using the official Docker image for Drupal:
https://hub.docker.com/_/drupal/
It saves a lot of time, and if you still need to customize your environment, you can at least look at its Dockerfile to see how the community has done it.
Container persistence
When a container is stopped, it can be restarted. All its files are preserved, including any settings.php files that may have been created.
A brand new container, on the other hand, will always start from scratch; there is no simple way to avoid this. To persist data across container instances, you need to use volumes.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Here's how it's done:
#
# Create a data container
#
docker create \
-v /var/www/html/sites \
-v /var/www/private \
--name my-data \
drupal
#
# Run drupal without a db container (select sqlite on first install)
#
docker run --volumes-from my-data --name my-drupal -p 8080:80 -d drupal
Note:
You could use volume mappings to the host machine, but this data container pattern is more flexible, for example when upgrading drupal.
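If you do prefer host mappings, a rough equivalent of the data container above might look like this (a sketch only; the host paths under /srv/drupal are assumptions, not something defined by the official image):
#
# Alternative: bind-mount host directories instead of using a data container
#
docker run -d -p 8080:80 --name my-drupal \
-v /srv/drupal/sites:/var/www/html/sites \
-v /srv/drupal/private:/var/www/private \
drupal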
How it works
Drupal 8 is built on top of the official PHP language image.
Drupal 8.1 Dockerfile
PHP 7 Apache Dockerfile
In the PHP Dockerfile, note how Apache is run in the foreground:
CMD ["apache2-foreground"]
No need for systemd running inside the container.
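On the host, you do not necessarily need a systemd unit either: because settings.php lives in the my-data volumes, the web container can be removed and recreated at will, and a restart policy keeps it running across reboots. A sketch reusing the names from the commands above (the restart policy is an assumption, not part of the original answer):
#
# Recreate the web container at any time; settings.php lives in my-data
#
docker run -d -p 8080:80 --name my-drupal \
--restart unless-stopped \
--volumes-from my-data \
drupal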

Related

Unable to install IIS AspNetCoreModuleV2 in a dockerimage (and Azure Pipelines)

I have had a problem for a few days now with the "dotnet hosting bundle" and the AspNetCoreV2 IIS module in a Docker image.
I am creating a Docker image with many IIS modules and prerequisites needed to run our software. The image works well except for this AspNetCoreV2 module. When the container is created, I check the installed modules with Get-WebGlobalModule and it doesn't appear.
But when I start the quiet (or passive) installation manually inside the container, the module works and appears in the IIS module list.
I have tried many solutions (multi-stage builds with Microsoft's aspnetcore images, the latest version of dotnet_hosting_bundle.exe, and many others), but the issue is always the same.
I tried to automate the docker exec process that installs this module manually and to commit the result with Azure Pipelines and a Windows agent in a VM, but it doesn't work :(.
To do that, I use the following steps:
docker stop mycontainer
docker rm mycontainer
docker run --name mycontainer -d -it $(containerRegistry)/$(container_requirement_name):v1.0.$(Build.BuildId)
docker exec mycontainer powershell.exe -command Start-Process -FilePath 'C:\Program Files\MySoftware\PowerShell\Installer.Prerequisites\dotnet-hosting-3.1.2-win.exe' -ArgumentList "/passive","/install","/norestart" -PassThru -Wait
docker stop mycontainer
docker commit mycontainer $(containerRegistry)/$(container_requirement_name):v1.0.$(Build.BuildId).1
In the Start-Process output, I can see that the process is created but apparently not started.
I also tried with: cmd 'C:\Program Files\MySoftware\PowerShell\Installer.Prerequisites\dotnet-hosting-3.1.2-win.exe' /quiet /install
This task in Azure Pipelines runs without error, but when I pull the new image (pushed after these instructions), the module does not appear in Get-WebGlobalModule.
Also, the module is not present in Program Files.
I don't really understand how I can install this module. All the other modules work except this one.
Thank you very much in advance for your advice.
Best
Setting the preference variables with the command below fixed the issue above:
powershell -Command "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'Continue';"
The values of the preference variables affect how PowerShell operates and executes cmdlets. The container's default preference variable settings may be what caused PowerShell to fail to complete the installation. You can override these preference variables in your script.
Please see this document for more information about Preference Variables.
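As a hedged example of how that might be combined with the docker exec step from the question (same installer path as above; placing the preference variables in the same PowerShell session as the installer is an assumption), run from the Windows build agent so the $ variables are not expanded by a local shell:
docker exec mycontainer powershell.exe -Command "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'Continue'; Start-Process -FilePath 'C:\Program Files\MySoftware\PowerShell\Installer.Prerequisites\dotnet-hosting-3.1.2-win.exe' -ArgumentList '/passive','/install','/norestart' -PassThru -Wait"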

How do you persist the installation on a Magento Docker container?

I am busy setting up a dockerized environment to develop PHP for Magento.
The image I am using is the following: alexcheng/magento2.
The git repository for this does contain an install script.
When I run "docker-compose up -d" everything works fine, but I have to install Magento afresh each time the container goes down.
Any advice on how to deal with this? I am a relative newbie at using Docker, but I can't imagine that you would have to reinstall it each time.
Note: I don't think this has to do with data persistence, as a volume has been provisioned. When I include the line "RUN install-magento" in the Dockerfile, I get the following error when building:
/usr/bin/env: ‘bash\r’: No such file or directory
The command '/bin/sh -c install-magento' returned a non-zero code: 127
Any guidance would be appreciated. Thank you.
I'm a Docker newbie like you. I suggest learning Docker itself before leaning heavily on the docker-compose tool. When we need data persistence, the way to get it is with Docker volumes.
Check whether your docker-compose.yml has a volumes section and whether your user has permission on that path.
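As a quick, hedged way to check what is actually being persisted (the container name magento2_web_1 is only an example of what docker-compose might have created for you):
# list the named volumes Docker knows about
docker volume ls
# show which container paths are backed by volumes or bind mounts
docker inspect --format '{{ json .Mounts }}' magento2_web_1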

How do I give my app directory the correct permissions in order to run docker-compose up?

I am very new to Docker images, so I have been following a tutorial about how to run the Docker image for Meteor apps. It seems like I managed to install the base Docker image for Meteor apps from https://hub.docker.com/r/geoffreybooth/meteor-base/, but I am now having issues running it on my remote server using the docker-compose up command.
The below is what I get after running docker-compose up:
I thought using sudo docker-compose up might help but I still get the following:
Can someone kindly help point out how to resolve this issue.
Find below my environment details, hopefully you will find them of use in the attempt to help me:
I thought there might be something wrong with my Docker installation, so I ran sudo docker run hello-world, but the message "Hello from Docker! This message shows that your installation appears to be working correctly." suggests that the installation is fine.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
Find below the contents of my folder. Note that the app is the folder containing my app.
$ ls
Dockerfile app docker-compose.yml
The content of my Dockerfile is copied and pasted from the referenced Dockerfile content, the content of my docker-compose.yml is copied and pasted from the referenced docker-compose.yml, and the contents of my .dockerignore are copied and pasted from the referenced .dockerignore.
Kindly let me know if there are any other environment details that I need to share.
Looking forward to your help.

Running rocket.chat on Docker Compose. Where is the source code?

This is probably a stupid question, but I am running Rocket.Chat deployed with Docker Compose. I'm trying to customize the app, but I don't know where to access the source code. I don't really have a full grasp of what Docker is, and their docs are too confusing for me to understand. Any help? Can anyone point me in the right direction?
Usually the files are stored in a folder that is specified in the Dockerfile. In the case of Rocket.Chat, this is the /app folder inside the container.
I strongly suggest you do not edit source files directly inside your Docker container. The Rocket.Chat files inside a container are a compiled version, not the original source files.
If you still need access to the container's files, you can get a shell inside the container by running:
docker exec -it _your_container_name_ /bin/sh
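If you only want to inspect those compiled files rather than edit them in place, one hedged option is to copy them out to the host (the container name is whatever Docker Compose created for you, and the destination folder is an arbitrary example):
# copy the compiled app out of the container for inspection
docker cp _your_container_name_:/app ./rocketchat-app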

Docker: how to manage development and production settings?

I'm just getting started with Docker. With the official NGINX image on my OSX development machine (with Docker Machine as the Docker host), I ran up against the bug with sendfile and VirtualBox, which means the server fails to show changes I make to files.
The workaround for this is to use a modified nginx.conf file that turns off sendfile. This guy's solution adds an instruction in the Dockerfile to copy a customised conf file into the container. Alternatively, this guy maps the NGINX configuration to a new folder with a modified conf file.
This kind of thing works OK locally. But what if I don't need this modification on my cloud host? How should I handle this and other differences when it comes to deployment?
You could mount your custom nginx.conf into the container in development via e.g. --volume ./nginx/nginx.conf:/etc/nginx/nginx.conf and simply omit this parameter to docker run in production.
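For example, a minimal sketch of that split (the paths mirror the mapping above; the published ports are assumptions):
# development: bind-mount the sendfile-off config over the image default
docker run -d -p 8080:80 \
--volume "$(pwd)/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" \
nginx
# production: run the unmodified image with its baked-in configuration
docker run -d -p 80:80 nginx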
If using docker-compose, the two options I would recommend are:
Employ the limited support for environment variable interpolation and add something like the following under volumes in your container definition: ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf
Use a separate YAML file for production overrides; both approaches are sketched below.
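A sketch of how each option is typically invoked (the override file name docker-compose.prod.yml is an assumption):
# option 1: let compose interpolate APP_ENV into the volume path
APP_ENV=dev docker-compose up -d
# option 2: layer a production override file on top of the base file
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d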
