docker-compose.yml file:
web:
  build: ./code
  ports:
    - "80:80"
  volumes:
    - ./mount:/var/www/html
Dockerfile in ./code:
FROM wordpress
WORKDIR /var/www/html
RUN touch test.txt
This is a production environment I'm using to set up a simple WordPress blog (I've omitted the other services in docker-compose.yml and the Dockerfile for simplicity).
Here's what I'm doing:
Bind mounting a host directory at the container destination /var/www/html
Creating a test.txt file at build time
What's NOT working:
When I inspect /var/www/html in the container, I don't find my test.txt file
What I DO understand:
Bind mounting happens at run-time
In this particular case the file does get created, but when you bind mount the host directory, whatever the Dockerfile put at that path is hidden by the mount
When you use a named volume instead, it works
What I DON'T understand:
What are the ways in which you can get your latest code into the container which is using a bind mount to persist data?
How can one create a script that lets me achieve this at runtime?
How else can I achieve this considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)
Your data will be persisted at runtime. Everything stored in /var/www/html at runtime will be persisted in the host ./mount directory.
At build time, everything happens in Docker layers inside the container image.
If you want to do things before anything else, you could create a script, ADD (or COPY) it into your image, and use CMD or ENTRYPOINT to run the script when the container starts.
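A minimal sketch of that approach for this setup (the /usr/src/wordpress-extra path and the custom-entrypoint.sh name are made up for illustration; docker-entrypoint.sh and apache2-foreground are the official wordpress image's defaults):

Dockerfile in ./code:

FROM wordpress
# Create the extra file outside the future mount point so the bind mount can't hide it
RUN mkdir -p /usr/src/wordpress-extra && touch /usr/src/wordpress-extra/test.txt
COPY custom-entrypoint.sh /usr/local/bin/custom-entrypoint.sh
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/custom-entrypoint.sh"]
CMD ["apache2-foreground"]

custom-entrypoint.sh:

#!/bin/bash
set -e
# The bind mount is already in place at this point, so anything copied now
# also shows up in the host's ./mount directory
cp -rn /usr/src/wordpress-extra/. /var/www/html/
# Chain to the image's original entrypoint so the normal WordPress setup still runs
exec docker-entrypoint.sh "$@"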
In summary
What are the ways in which you can get your latest code into the container which is using a bind mount to persist data?
You add your latest code to the image (e.g. git clone, COPY, ADD, or whatever suits you). A container shouldn't be mutable, so you keep your code versioned in the image and define a separate folder to persist (e.g. for uploads).
How can one create a script that can let me achieve this in runtime?
If you want to do it at runtime, you add your shell script to the image and then run it from the entrypoint, as sketched above. That said, this is not the best approach for this use case.
How else can I achieve this considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)
IMHO, you should treat your images as the build artifact of your code. Your image should not be mutable and should reflect a specific point in your code's lifecycle. Define the paths that hold data, and make only those paths your mounts at the host level.
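For example, a rough sketch of what that could look like for this compose file (mounting only the uploads folder is an assumption about which data actually needs to persist):

web:
  build: ./code
  ports:
    - "80:80"
  volumes:
    # Code stays baked into the image; only user data is bind mounted
    - ./mount/uploads:/var/www/html/wp-content/uploads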
Related
I am new to Docker.
I can create an image using a Dockerfile and successfully call the WSO2 API.
I have hardcoded configuration in the deployment.toml file.
I want to update this information at Docker runtime for different environments - DEV, QA, etc.
deployment.toml file content -
[server]
offset = 22
How do I update the .toml file config at runtime?
https://ei.docs.wso2.com/en/7.2.0/micro-integrator/setup/dynamic_server_configurations/#environment-variables
Here it says you can specify something like:
offset = "${VariableName}"
but what do I put in my Dockerfile to update these variables at runtime?
I want to update this information at the docker runtime for different env - DEV,QA etc
There are multiple ways to achieve this; here are at least two we commonly use in our deployments.
Using a template for the config files
Basically the idea is to mount the deployment.toml (or other config files/folders) as ConfigMap values in Kubernetes or as a volume in plain Docker.
For each environment you can template the configuration using any deployment tool (Maven, Puppet, Ansible, any cloud devops tooling, ...). This approach allows you to update the configuration templates without needing a new image.
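A minimal sketch of the plain-Docker variant of that idea (the image name and the in-container config path are assumptions - use your product's actual conf directory):

services:
  wso2:
    image: my-wso2-image
    volumes:
      # Mount the environment-specific, pre-templated config over the one baked into the image
      - ./config/dev/deployment.toml:/home/wso2carbon/wso2mi/conf/deployment.toml:ro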
Template the configuration in the entrypoint
Create an entrypoint script which templates the configuration based on environment variables (e.g. using the sed utility) and then starts the application. Then use that entrypoint in the Dockerfile.
This approach doesn't need external configuration (volumes, templates), but if the template needs to be updated, you need a new image.
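A hedged sketch of such an entrypoint (the @OFFSET@ placeholder, the config path, and the start command are assumptions for illustration, not something prescribed by WSO2):

#!/bin/bash
set -e
# Substitute the placeholder in the baked-in config with the value from the environment
sed -i "s|@OFFSET@|${OFFSET:-0}|g" /path/to/conf/deployment.toml
# Hand over to whatever command the image normally runs
exec "$@"

And in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/path/to/start-server.sh"]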
Edit:
I haven't seen environment variables used in the deployment.toml before, as referred to in the question; it must be something new for WSO2. But if it is supported, it can make your life easier to just specify the env variables in the pod (this may be what you are missing):
specify an ENV value in the Dockerfile as the default
run the container with your desired value (the -e parameter for plain docker, or an environment entry in the compose or deployment config)
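A small sketch of those two steps, assuming the variable is called VariableName as in the question:

In the Dockerfile:

# Default value, used when nothing is passed at runtime
ENV VariableName=0

At runtime, override it per environment:

docker run -e VariableName=22 my-wso2-image

or, in docker-compose.yml:

    environment:
      - VariableName=22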
Define the variable using the ARG option in the Dockerfile.
Example:
ARG VariableName
Now the value can be given when building the image, as below.
docker build --build-arg VariableName=0 .
For more details on how to use ARG in a Dockerfile, please refer to https://docs.docker.com/engine/reference/builder/#arg
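Note that ARG values only exist while the image is being built; if the value also needs to be visible inside the running container (for example so the ${VariableName} reference in deployment.toml can resolve), a common pattern is to promote the build argument to an ENV - a small sketch:

ARG VariableName=0
# Promote the build argument to an environment variable so it is
# still available in the running container
ENV VariableName=${VariableName}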
Yesterday I created a docker container with
docker-compose up -d
(and a docker-compose.yaml file). It created a WordPress site, a database, phpMyAdmin, etc.
I made some changes to the wordpress installation, content, etc. I then shut it down with:
docker-compose down --volumes
This morning I wanted to run this container again, so I ran the docker-compose up -d command again, and when I visited the URL it showed the WordPress configuration wizard instead of the existing installation from yesterday. In hindsight, it makes sense; I'm not sure why I expected it not to create a new container. I then deleted the install* file from wp-admin, but it didn't help.
Are the changes from my yesterday's wp installation lost? Have I overwritten everything?
Generally, how can I restart an existing container with docker/docker-compose?
By using docker-compose down --volumes you are deleting everything, as the documentation describes:
Stops containers and removes containers, networks, volumes, and images created by up
You may use docker-compose start/stop instead to stop or start your running containers.
The command
docker-compose down
will stop all your containers, delete them, and remove any networks defined in your docker-compose file.
It does not remove your volumes, by the way (unless you additionally pass the -v flag to the command).
So your command
docker-compose down --volumes
will also remove any volumes.
If you want to persist your WordPress installation for development purposes but still be able to remove and recreate containers during development, you can mount volumes on your host machine, e.g. for your database data or also for your WordPress source code (if needed).
See also here: https://docs.docker.com/compose/wordpress/
Take a look at the docker compose file provided there and specifically take a look at the volume directives.
In the example the database files are mounted on your host machine so that they don't vanish if you remove the database container.
If you are already using volumes in your docker-compose file, then you can simply remove the --volumes flag from the docker-compose down command.
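For illustration, a minimal sketch along the lines of that documentation page (service names and the db_data volume name follow that example and are otherwise assumptions):

version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      # Named volume keeps the database files when the container is removed
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
volumes:
  db_data: {}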
You can recreate a single service from the compose file with the following command.
For example, say you have wordpress, mysql, and nginx services in your compose file:
docker-compose -f docker-compose.yml up -d --build wordpress
This command recreates just the wordpress container.
I'm hoping someone has some expertise with AWX running as a Docker container. We've switched on the Azure AD authentication and I'd like to hide the local login modal via CSS. It seems that the CSS files are generated on startup and any changes to app.xxxxxxxx.css in /var/lib/awx/public/static/css don't seem to have any effect, and are newly generated upon restart anyway. I was wondering if there was a source CSS file I could edit so I can make changes and keep them through a reboot. Any help would be appreciated.
Docker image: ansible/awx_web
AWX Version: 7.0.0.0
Here was my solution to this.
I copied the static folder from the awx_web container:
docker cp id:/var/lib/awx/public/static/ /somefolder/static/
This will copy all of the HTML/CSS/JS elements of the web application to a local folder so that you can edit the files and keep your changes through a reboot.
This particular issue required me to edit the app.xxxxxxx.css file in /static/css/, find the styling for "btn LoginModal-signInButton", and change the visibility to hidden.
The next step was to mount the locally copied static folder from earlier to the static folder inside the container. I navigated to the 'awxcompose' directory (in my case it was /var/awxcompose) and added the following line to the docker-compose.yml file under awx_web > volumes:
- "somefolder/static/:/var/lib/awx/public/static/:ro"
Then once I was ready to push my changes, I re-made the container using docker-compose:
docker-compose down && docker-compose up -d
And to make sure the containers remained in this state after a reboot, I added the following line to my crontab:
@reboot docker-compose -f /var/awxcompose/docker-compose.yml up -d
I'm uploading files into the 'var' directory (successfully). But when I want to fetch one of these files from 'var', all I get is No route found for "GET /var/uploads/images/....
Structure:
- app
- bin
- src
- var
  - cache
  - logs
  - uploads
    - images
- vendor
- web
For saving my files I'm using: '%kernel.project_dir%/var/uploads/images'
To retrieve a file I use: '/var/uploads/images/' . $fileName;
Where is my mistake?
P.S. I also tried to use a volume (I'm using Docker) in my docker-compose file, like this: - ./data/cabinet/uploads/images:/data/www/cabinet/var/uploads/images
And, unfortunately, not a single file was copied to this directory. What's wrong?
Thank you!
Since the document root points to the web directory, there's no way for the ../var path to be accessible. Either store uploads inside the web directory or create a symlink inside it pointing to var/data/uploads:
# Since I don't know your path tree inside the container,
# I assume the whole application is installed inside /var/www/html
# and that this is the default working directory
docker-compose exec YOUR_CONTAINER_NAME \
ln -rs /var/www/html/var/data web/var
Bear in mind that if you are using the Apache web server, the FollowSymLinks option has to be enabled, but it most probably is.
As for the second question, most likely you mapped an invalid directory; I guess this should be:
# Yet again I assume the whole application is inside /var/www/html
./data/cabinet/uploads/images:/var/www/html/data/cabinet/uploads/images
Is this some kind of production setup anyway? If it is not and this is merely a work in progress, then you should rather map your whole local directory as the container volume. This is how it's usually done in dev mode, so the mapping would be:
.:/var/www/html
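In docker-compose terms that would be roughly (the service name app and the in-container path are assumptions based on the answer above):

services:
  app:
    volumes:
      # Map the whole project into the container during development,
      # so uploads and code changes are visible on both sides
      - .:/var/www/html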
I'm using Docker to run a simple static web project, using the official nginx image. As a bower dependency I have a UI lib of my own that is shared between two of my projects. To facilitate development I created a volume from my local machine to serve local files through the /html folder inside the nginx container. It works fine this way.
But if I try to use bower link to create a link between a local copy of my UI lib and the bower dependency, the nginx web server is not able to find the folder, since the link points to my local machine.
I'm running the Docker VM on a Mac.
Has someone experienced something similar and has an idea about how to solve it?
Thanks,
I just ran into this issue and found a way to solve it nicely.
The problem is that when you mount the whole /html folder as a volume, the symlinks created by bower link are copied into your container, but not the actual folders they point at. When nginx tries to serve the file, it follows the symlink, but now INSIDE the container, where the path is invalid.
To fix this, create another volume that maps the symlink directly. This way, docker-compose will follow the symlink BEFORE mounting it into the container, therefore copying the actual folder contents. The nice thing about this is that in your local file system you still have the folder and the symlink working, so you can work as usual :)
Practical example:
My folder structure
/app
|--/bower_components
   |--/packageA
   |--/packageB -> symlink to /foo/bar/packageB
My compose file:
version: '2'
services:
  nginx:
    volumes:
      - .:/foo
      - ./bower_components/packageB:/foo/bower_components/packageB
    ...
Let me know if it worked, cheers!