I am new to docker.
I can create an image using a Dockerfile and successfully call the WSO2 API.
I have hardcoded configuration in the deployment.toml file.
I want to update this information at Docker runtime for different environments (DEV, QA, etc.).
deployment.toml file content -
[server]
offset = 22
How do I update the .toml config at runtime?
https://ei.docs.wso2.com/en/7.2.0/micro-integrator/setup/dynamic_server_configurations/#environment-variables
It says you can reference a variable like this:
offset = "${VariableName}"
but what do I put in my Dockerfile to set these variables at runtime?
I want to update this information at Docker runtime for different environments (DEV, QA, etc.)
There are multiple ways to achieve this; here are at least two that we commonly use in our deployments.
Using a template for the config files
Basically the idea is to mount deployment.toml (or other config files/folders) as a ConfigMap in Kubernetes or as a volume in plain Docker.
For each environment you can template the configuration using any deployment tool (Maven, Puppet, Ansible, any cloud DevOps tooling, ...). This approach allows you to update the configuration templates without needing a new image.
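For example, in plain Docker (a rough sketch; the host path, in-container config path, and image name are assumptions that depend on your WSO2 product and version):
# mount an environment-specific deployment.toml over the one baked into the image
docker run -d \
  -v /opt/config/qa/deployment.toml:/home/wso2carbon/wso2am/repository/conf/deployment.toml \
  wso2/wso2am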
Template the configuration in the entrypoint
Create an entrypoint script that templates the configuration based on environment variables (e.g. using the sed utility) and then starts the application. Then use that entrypoint in the Dockerfile.
This approach doesn't need external configuration (volumes, templates), but if the template needs to be updated, you need a new image.
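A minimal entrypoint sketch of that idea (the template path, the OFFSET variable, and the startup script name are assumptions):
#!/bin/bash
# substitute the {{OFFSET}} placeholder in the config template with the OFFSET
# environment variable (default 0); paths depend on your WSO2 product and version
sed "s/{{OFFSET}}/${OFFSET:-0}/g" /opt/templates/deployment.toml.template \
  > /home/wso2carbon/wso2am/repository/conf/deployment.toml
# then hand over to the actual server start script
exec /home/wso2carbon/wso2am/bin/api-manager.sh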
Edit:
I haven't seen environment variables used in deployment.toml before as referenced in the question; it must be something new for WSO2. But if it is supported, it can make your life easier to just specify the environment variables in the pod (this may be what you are missing):
specify an ENV value in the Dockerfile as the default
run the container with your desired value (the -e parameter in plain Docker, or an environment entry in your Compose or deployment config), as sketched below
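For example (a sketch; the variable name matches the question's deployment.toml, the image name is an assumption):
# Dockerfile: provide a default value
ENV VariableName=0
Then at runtime override it per environment:
docker run -e VariableName=22 my-wso2-image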
Define the variable using the ARG instruction in the Dockerfile.
Example:
ARG VariableName
Now the value can be given at build time as below (note that ARG values are only available while the image is being built, not in the running container).
docker build --build-arg VariableName=0 .
For more details on how to use ARG in a Dockerfile, please refer to https://docs.docker.com/engine/reference/builder/#arg
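If you also need the value inside the running container, a common pattern (just a sketch) is to promote the build argument to an environment variable:
# build-time argument with a default
ARG VariableName=0
# promote it to a runtime environment variable so it can also be overridden with -e
ENV VariableName=${VariableName}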
Related
Yesterday I created a docker container with
docker-compose up -d
(and a docker-compose.yaml file). It created a WordPress site, a database, phpMyAdmin, etc.
I made some changes to the WordPress installation, content, etc. I then shut it down with:
docker-compose down --volumes
This morning I wanted to run this container again, so I ran docker-compose up -d again. When I visited the URL, it showed the WordPress configuration wizard instead of the existing installation from yesterday. In hindsight, that makes sense; I'm not sure why I expected it not to create a new container. I then deleted the install* file from wp-admin, but it didn't help.
Are the changes from yesterday's WordPress installation lost? Have I overwritten everything?
Generally, how can I restart an existing container with docker/docker-compose?
By using docker-compose down --volumes you are deleting:
Stops containers and removes containers, networks, volumes, and images created by up
(see the docker-compose down documentation)
You may use docker-compose stop / docker-compose start instead to stop and start your running containers.
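For example:
docker-compose stop     # stops the containers but keeps them and their volumes
docker-compose start    # starts the same containers again with their state intact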
The command
docker-compose down
will stop all your containers, delete them, and remove any networks defined in your docker compose file.
It does not remove your volumes, by the way (unless you additionally pass the -v flag to the command).
So your command
docker-compose down --volumes
will also remove any volumes.
If you want to persist your WordPress installation for development purposes, but still want to be able to remove and recreate containers during development, you can mount volumes on your host machine, e.g. for your database data or also for your WordPress source code (if needed).
See also here: https://docs.docker.com/compose/wordpress/
Take a look at the docker compose file provided there, specifically at the volume directives.
In the example the database files are mounted on your host machine so that they don't vanish if you remove the database container.
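A minimal sketch along the lines of that guide (image versions, ports, passwords, and the bind-mounted wp-content path are assumptions):
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql          # named volume: survives "down" (but not "down --volumes")
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - ./wp-content:/var/www/html/wp-content   # bind mount for themes/plugins/uploads
volumes:
  db_data: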
If you are already using volumes in your docker compose file, then you can simply remove the --volumes flag from the docker-compose down command.
You can recreate a single service from the compose file with the following command.
For example, say you have wordpress, mysql and nginx services in your compose file:
docker-compose -f docker-compose.yml up -d --build wordpress
This command rebuilds and recreates only the wordpress container.
docker-compose.yml file:
web:
build: ./code
ports:
- "80:80"
volumes:
- ./mount:/var/www/html
Dockerfile in ./code:
FROM wordpress
WORKDIR /var/www/html
RUN touch test.txt
This is a production environment I'm using to set up a simple WordPress blog (other services in docker-compose.yml & Dockerfile omitted for simplicity).
Here's what I'm doing:
Bind mounting a host directory at container destination /var/www/html
Creating a test.txt file at build time
What's NOT working:
When I inspect /var/www/html in the container, I don't find my test.txt file
What I DO understand:
Bind mounting happens at run time
In this particular case the file does get created at build time, but when you mount the host directory over /var/www/html, it hides whatever the Dockerfile commands put there
When you use a named volume mount instead, it works
What I DON'T understand:
What are the ways to get your latest code into a container that is using a bind mount to persist data?
How can one create a script that achieves this at runtime?
How else can I achieve this, considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)?
Your data will be persisted at runtime: everything stored in /var/www/html at runtime will be persisted in the host ./mount directory.
At build time, everything happens in the Docker layers inside the container image.
If you want to do things before anything else starts, you can create a script, ADD (or COPY) it to your image, and use CMD or ENTRYPOINT to run it when the container starts, as sketched below.
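A rough sketch of that idea (the /usr/src/app location and the hand-off to the WordPress image's entrypoint are assumptions, not a recommendation to treat the bind mount as your code source):
# Dockerfile additions
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# entrypoint.sh (the shebang must be the first line of the real file)
#!/bin/bash
# copy the code baked into the image into the bind-mounted path;
# -n keeps files that already exist in the mount (e.g. uploads)
cp -rn /usr/src/app/. /var/www/html/
# hand off to the original image entrypoint and command
exec docker-entrypoint.sh apache2-foreground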
In summary:
What are the ways to get your latest code into a container that is using a bind mount to persist data?
You add your latest code to the image (i.e. git clone, COPY, ADD, or whatever suits you). A container shouldn't be mutable, so keep your code versioned and define a persistence folder (e.g. for uploads).
How can one create a script that achieves this at runtime?
If you want to do it at runtime, you add your shell script to the image and then run it from the entrypoint. Although, this is not the best approach for this use case.
How else can I achieve this, considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)?
IMHO, you should treat your images as builds of your code. Your image should not be mutable and must reflect a point in your code's lifecycle. Define the paths that hold data, and make those paths your mounts at the host level.
I have the following environment variable on my PHP container:
DATABASE_URL: mysql://root:${MYSQL_ROOT_PASSWORD}@db:3306/${MYSQL_DATABASE}
I tried to echo it inside my container and ran a DB migration; all is good. Now I used it in my .env file in Symfony like so:
DATABASE_URL=${DATABASE_URL}
When I try to log in, the app says:
Authentication request could not be processed due to a system problem.
When I manually put the full DATABASE_URL in .env, all is good.
I suspect that when I try to use the container's ENV, it doesn't get resolved correctly.
My question is: how can I use the actual container's environment variable?
Thanks!
Note:
I am on the dev environment.
I am not that familiar with Symfony, but it seems that Symfony never overwrites existing environment variables (ref: https://symfony.com/doc/current/components/dotenv.html).
What if you remove that line from your .env file? Since DATABASE_URL is already an environment variable, calling getenv('DATABASE_URL') in Symfony should return the correct value even if you did not define it in .env. All Dotenv does is write those key/value pairs as environment variables in your system; you don't need to define a variable again if it is already present.
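For example, if the variable is already defined on the container, something like this in the compose file should be enough (a sketch; the service name and values are assumptions), and the DATABASE_URL line can then simply be removed from .env:
services:
  php:
    environment:
      DATABASE_URL: "mysql://root:${MYSQL_ROOT_PASSWORD}@db:3306/${MYSQL_DATABASE}"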
Note that there are separate environments when you run PHP from the CLI and when it is run by the web server (when you access it from your browser).
For example, in the case of nginx + PHP-FPM, the variables that you see in your CLI by running printenv are not available to a PHP script run by nginx when you call getenv(). In order to set environment variables for PHP-FPM you can edit php-fpm.conf:
....
[www]
env[DATABASE_URL] = 'mysql://...'
....
If you use another web server, you should find out how to make env vars available to the PHP script.
Your DATABASE_URL=${DATABASE_URL} line in the .env file didn't work because DATABASE_URL was not set for PHP-FPM.
Hope this helps.
P.S. Note that the construction VAR=${VAR} does nothing, because Dotenv will not override VAR as it is already defined.
P.P.S. It is advised to use the .env file on your dev server and "real" env variables in staging/production.
I'm working on an SBT project that has to be built with options like:
-Xmx2G -Xss256M -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
This means that every new developer has to read the README and assign the options to SBT_OPTS in their bash profile or put them in the sbtopts file. Similarly, this has to be configured on Jenkins, and it applies to all projects (so if someone wants to use -XX:+UseG1GC with other projects, it becomes an issue). Is it possible to specify the required options in the build file itself? This seems logical to me, as the options are project-specific and without them you cannot build the project.
Create a .sbtopts file at the root of the build with the following contents:
-J-Xmx2G
-J-Xss256M
-J-XX:+UseConcMarkSweepGC
-J-XX:+CMSClassUnloadingEnabled
I am currently trying out Docker links between my app and db containers. I've checked my app container, and the environment variables are automatically set when I link the containers together.
What I want is for my config file, which is packaged into a jar file, to receive those environment variables and set the required values from them. Any advice or help?
This is how I create the config file in my jar file to connect to MySQL:
database { url="jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb" driver="com.mysql.jdbc.Driver"}
Updating the config file inside the jar would be quite overkill.
I think you have several choices:
read the config environment variables directly in your program
use the variables either directly or generate the config file from them
create a launch script (the details depend on your guest OS in Docker; sh/bash for Linux, etc.)
that script can generate a new config file from the environment and put it on the classpath before the jar, so your program sees it
EDIT: added an example
You can save this kind of launcher script in the Docker image; it dynamically creates the configuration before launching the actual program.
#!/bin/bash
# some default values for testing even without links to other container
MYSQL_PORT_3306_TCP_ADDR=${MYSQL_PORT_3306_TCP_ADDR:-127.0.0.1}
MYSQL_PORT_3306_TCP_PORT=${MYSQL_PORT_3306_TCP_PORT:-3306}
# generate the config file from the environment variables
cat << EOF > /opt/yourprogram/dbconfig.conf
database { url="jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb" driver="com.mysql.jdbc.Driver"
}
EOF
# launch the application with the generated config directory on the classpath
scala -classpath /opt/yourprogram YourProgram
What I did is write the sh file in my directory /tmp/restcore-1.0-SNAPSHOT/bin like this:
#!/bin/bash
echo "database { url=\"jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb\" driver=\"com.mysql.jdbc.Driver\" }" > myconf.conf
jar uf /tmp/restcore-SNAPSHOT/lib/com.organization.restcore-1.0-SNAPSHOT.jar /tmp/restcore-1.0-SNAPSHOT/bin/myconf.conf
After building the Dockerfile and running the sh file from CMD, I use cat myconf.conf to check the config file, and I can see the environment values set.