Flyway supports environment variables in config files.
Is there a way to make Flyway load these variables from a file, similar to what Docker and Node.js do with dotenv?
The content of the .env file is for example:
DB_URL=jdbc:postgresql://localhost:5432/db_name
And flyway.conf:
flyway.url=${DB_URL}
If you are using the flyway-maven-plugin, you currently have three options:
Defining Flyway properties in pom.xml, e.g.:
<properties>
<flyway.url>jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MySQL;INIT=CREATE SCHEMA IF NOT EXISTS "public";</flyway.url>
<flyway.user>root</flyway.user>
<flyway.password></flyway.password>
</properties>
Defining your Flyway properties in a .env or .conf file and passing it via flyway.configFiles:
mvn -Dflyway.configFiles=src/main/resources/some-env-file.env flyway:migrate
Contents of some-env-file.env:
flyway.url=jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MySQL;INIT=CREATE SCHEMA IF NOT EXISTS "public";
flyway.user=root
flyway.password=
Passing the properties directly as system properties when executing the Maven goal:
mvn -Dflyway.url="jdbc:h2:mem:public;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MySQL;INIT=CREATE SCHEMA IF NOT EXISTS public;" -Dflyway.user=root -Dflyway.password=root flyway:migrate
However, if you want to load properties from a file using the properties-maven-plugin and expose them as environment variables for the flyway-maven-plugin to use, that unfortunately does not work.
Here is the GitHub issue tracking this.
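A dotenv-like workflow can also be approximated in the shell itself (a sketch; the file name and variable are taken from the question): export the variables from .env before invoking Flyway, and the ${DB_URL} placeholder in flyway.conf will then resolve.

```shell
# Stand-in .env file, matching the one in the question
cat > .env <<'EOF'
DB_URL=jdbc:postgresql://localhost:5432/db_name
EOF

set -a        # auto-export every variable defined while this is active
. ./.env
set +a

echo "$DB_URL"
# Flyway can now resolve ${DB_URL}, e.g.: mvn flyway:migrate
```

This relies only on shell behavior, so it works the same for the CLI and the Maven plugin.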
I am new to Docker.
I can create an image using a Dockerfile and successfully call the WSO2 API.
I have hardcoded configuration in the deployment.toml file.
I want to update this configuration at container runtime for different environments (DEV, QA, etc.).
deployment.toml file content -
[server]
offset = 22
How can I update the .toml configuration at runtime?
https://ei.docs.wso2.com/en/7.2.0/micro-integrator/setup/dynamic_server_configurations/#environment-variables
It says you can reference a variable like:
offset = "${VariableName}"
but what do I put in my Dockerfile to set these variables at runtime?
I want to update this configuration at container runtime for different environments (DEV, QA, etc.).
There are multiple ways to achieve this; here are two that we commonly use in our deployments.
Using a template for the config files
The idea is to mount deployment.toml (or other config files/folders) as a ConfigMap in Kubernetes, or as a volume in plain Docker.
For each environment you can template the configuration using any deployment tool (Maven, Puppet, Ansible, any cloud DevOps tooling, ...). This approach lets you update the configuration templates without building a new image.
Template the configuration in the entrypoint
Create an entrypoint script that templates the configuration based on environment variables (e.g. using the sed utility) and then starts the application, and use that script as the entrypoint in the Dockerfile.
This approach needs no external configuration (volumes, templates), but if the template changes, you need to build a new image.
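A minimal sketch of such an entrypoint (the OFFSET variable, the file path, and the final server command are assumptions; the config file is created inline here only so the script is self-contained):

```shell
#!/bin/sh
# Hypothetical templating entrypoint: rewrite the offset in deployment.toml
# from the OFFSET environment variable, then start the server.
: "${OFFSET:=0}"                                   # default when OFFSET is unset
printf '[server]\noffset = 22\n' > deployment.toml # stand-in for the real file

# Replace the hardcoded value with the one from the environment
sed -i "s/^offset = .*/offset = ${OFFSET}/" deployment.toml
cat deployment.toml
# exec micro-integrator.sh                         # last step in a real entrypoint
```

Run the container with `-e OFFSET=22` (or an environment entry in Compose) to template a different value per environment.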
Edit:
I hadn't seen environment variables used in deployment.toml before, as referenced in the question; it must be something new in WSO2. But if it is supported, it can make your life easier to just specify the environment variables for the pod (this may be what you are missing):
Specify a default value with ENV in the Dockerfile.
Run the container with your own value (the -e flag for plain Docker, or an environment entry in your Compose file or deployment config).
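The two steps above could look like this (the base image, paths, and the OFFSET variable are assumptions; deployment.toml would contain offset = "${OFFSET}" as in the linked WSO2 docs):

```dockerfile
# Sketch only: image name and paths are illustrative assumptions.
FROM wso2/micro-integrator:latest

# Default value, used when the container is started without -e OFFSET=...
ENV OFFSET=0

# deployment.toml contains: offset = "${OFFSET}"
COPY deployment.toml /home/wso2/deployment.toml
```

At runtime, override the default per environment, e.g. `docker run -e OFFSET=22 my-image`.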
Define the variable using the ARG instruction in the Dockerfile.
Example:
ARG VariableName
The value can then be supplied at build time as below (note that ARG applies during docker build, not while the container is running):
docker build --build-arg VariableName=0 .
For more details on how to use ARG in a Dockerfile, please refer to https://docs.docker.com/engine/reference/builder/#arg
I have a dotnet core gRPC project and I'm trying to include route annotations in my proto files like below:
import "google/api/annotations.proto";
The file structure looks like this (because I imported the googleapis repository as a git submodule):
protos/
myproto.proto
googleapis/
google/
api/
annotations.proto
...
In a Go project this can be done with:
protoc -I . -I ./googleapis --go_out=plugins=grpc:. *.proto
where -I ./googleapis gives compiler the dir where it can find annotations.proto file and its dependencies.
But when using MSBuild in a .NET gRPC project with a config like the one below, I could not figure out how to include custom directories.
<ItemGroup>
<Protobuf Include="protos/*.proto" GrpcServices="Server" />
</ItemGroup>
The official docs mention an alternative of customizing the build target so that protoc can be invoked directly:
protoc --plugin=protoc-gen-grpc=$(gRPC_PluginFullPath) -I $(Protobuf_StandardImportsPath) ...
but the command above ignores the service definitions and does not generate server stub code (as mentioned here), while MSBuild does.
A workaround I found, though not ideal:
I realized the Grpc.Tools dotnet package ships some commonly used proto files, so I copied annotations.proto and its dependencies there (on macOS) and it worked:
`~/.nuget/packages/grpc.tools/2.25.0/build/native/include`
Updates:
Another workaround:
The project root directory is included by default, so using it as the base path and copying the imported proto files there also works (better, but still not ideal).
Any ideas how to include custom directories like above through MSBuild?
Finally figured it out. As Jan Tattermusch suggested, ProtoRoot specifies the directories in which both the MSBuild Include attribute and the proto import keyword look for files, although ProtoRoot works differently for Include than for import. So to support the file structure above, ProtoRoot must include all the different paths:
<Protobuf Include="protos/*.proto" ProtoRoot="./protos; ./protos/googleapis" ... />
Updates:
The above works for grpc-dotnet versions prior to v2.31.0. With newer versions, warnings appear saying the ProtoRoot path is not a prefix of the Include path. To fix this, use the config below instead:
<Protobuf Include="protos/*.proto" ProtoRoot="protos" AdditionalImportDirs="protos/googleapis" ... />
When I run composer install for my Symfony project, my .env file is overwritten by a newly generated one. What should I do to keep my .env intact?
Applications created after November 2018 use a slightly different system; read about the changes.
From the official documentation
.env.local : defines/overrides env vars for all environments but only in your local machine
.env : defines the default value of env vars.
The .env and .env.<environment> files should be committed to the shared repository because they are the same for all developers. However, the .env.local and .env.<environment>.local files should not be committed because only you will use them. In fact, the .gitignore file that comes with Symfony prevents them from being committed.
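For example, a local override might look like this (variable names and values are illustrative):

```
# .env — committed, shared defaults for all developers
DATABASE_URL=postgresql://app:app@127.0.0.1:5432/app

# .env.local — git-ignored, overrides .env on your machine only
DATABASE_URL=postgresql://me:secret@127.0.0.1:5432/app
```

Since composer only (re)generates defaults, keeping your machine-specific values in .env.local means they survive a composer install.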
I'm working on an SBT project that has to be built with the options like:
-Xmx2G -Xss256M -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
This means that every new developer has to read the README and either assign the options to SBT_OPTS in their bash profile or put them in the sbtopts file. Similarly, this has to be configured on Jenkins, and it applies to all projects (so if someone wants to use -XX:+UseG1GC with other projects, it becomes an issue). Is it possible to specify the required options in the build file itself? This seems logical to me, as the options are project-specific and without them you cannot build the project.
Create a .sbtopts file at the root of the build with these contents (the -J prefix tells the sbt launcher to pass the flag through to the underlying JVM):
-J-Xmx2G
-J-Xss256M
-J-XX:+UseConcMarkSweepGC
-J-XX:+CMSClassUnloadingEnabled
I have a Java Maven project. I'm using liquibase to update DB.
Locally, to update my db, I just run in command line:
mvn liquibase:update
In production environment, I don't have Maven installed.
What I need to achieve is through console, execute a command to run the liquibase scripts in a specific classpath.
Any ideas?
Edit:
OK, I'm trying to follow this approach. I put the following items in a folder:
liquibase jar
The war containing my application and the liquibase changesets
liquibase.properties, which contains the following:
url=jdbc:jtds:sqlserver://xxxxxxxx:xxxx/xxxxx
username=xxx
password=xxxxx
classpath=war_file.war
changeLogFile=WEB-INF/classes/sql/projectName/liquibase/liquibase.xml
Then, in a console, I execute:
java -jar liquibase-core-3.0.5.jar update
It works! It finds my liquibase.xml file and starts the liquibase update.
BUT, when it refers to a liquibase.xml that is inside another jar included in the lib, it fails, because I referenced it in liquibase.xml as:
<include file="../other_module/src/main/resources/sql/projectName/liquibase/liquibase.xml" />
How can I write this include without the "src/main/resources" prefix and still have the XML found?
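One approach worth trying (a sketch, not verified against this exact setup): add the other module's jar to the Liquibase classpath and reference the changelog by its classpath-relative path, since Liquibase resolves include paths against the classpath rather than the source tree:

```xml
<!-- liquibase.properties would then list both archives, e.g.
     classpath=war_file.war:other_module.jar
     (':' as the separator on Linux/macOS, ';' on Windows) -->
<include file="sql/projectName/liquibase/liquibase.xml"/>
```

The path inside the include must match the resource path as packaged in the jar, not the src/main/resources layout of the source project.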
By running the updateSQL goal on your dev machine:
mvn liquibase:updateSQL
you can generate a migration script:
└── target
└── liquibase
└── migrate.sql
This is one of my favourite features of Liquibase. Sometimes clients insist that all database schema changes be performed manually by their staff.
Another option is to build an auto-upgrade capability into your application; see the servlet listener.
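A sketch of the servlet-listener setup in web.xml (class and parameter names as used by Liquibase 3.x; verify against your version, and the JNDI datasource name is an assumption):

```xml
<context-param>
  <param-name>liquibase.changelog</param-name>
  <param-value>sql/projectName/liquibase/liquibase.xml</param-value>
</context-param>
<context-param>
  <param-name>liquibase.datasource</param-name>
  <param-value>java:comp/env/jdbc/myDataSource</param-value>
</context-param>
<listener>
  <listener-class>liquibase.integration.servlet.LiquibaseServletListener</listener-class>
</listener>
```

With this in place, the changelog runs automatically on application startup instead of via a separate console command.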