I'm on Symfony, but that's not very important. I have a .env file and I would like to use its variables in cloudbuild.yaml. Is there really no way to avoid duplication 😠?
Moreover, I read this article and saw that the author uses the YAML merge feature with GitLab hidden keys, which is very useful when the file is big. I tried to use this, but Cloud Build doesn't accept it; it seems to be impossible to use custom keys like in gitlab-ci.yaml. Any idea?
UPDATE
At build time we need env variables and a generic config file, to avoid changing a lot of values manually. So I would like to use hidden keys in cloudbuild.yaml, because I need the YAML merge feature to avoid code duplication.
This is my cloudbuild.yaml example without optimisation:
steps:
- name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-pgsql', '-f', 'docker/postgresql/Dockerfile', '.']
- name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-nginx', '--build-arg', 'VERSION=1.15.3', '-f', 'docker/nginx/Dockerfile', '.']
But I would like to have this, or something like that :
.build-template: &buildTemplate
args: ['build', '-t', 'gcr.io/$PROJECT_ID/${IMAGE_NAME}', '--build-arg', 'VERSION=${VERSION}', '-f', '${DOCKER_PATH}', '.']
steps:
- name: 'gcr.io/cloud-builders/docker'
<<: *buildTemplate
env: ['IMAGE_NAME=pgsql', 'VERSION=12', 'DOCKER_PATH=docker/postgresql/Dockerfile']
- name: 'gcr.io/cloud-builders/docker'
<<: *buildTemplate
env: ['IMAGE_NAME=nginx', 'VERSION=1.15.3', 'DOCKER_PATH=docker/nginx/Dockerfile']
I get this when I try to run cloud-build-local --dryrun=false . =>
Error loading config file: unknown field ".build-template" in cloudbuild.Build
Unfortunately, Google Cloud Build doesn't have a hidden-keys feature. I have created a Feature Request in the Public Issue Tracker on your behalf, where you can track all updates related to the feature request for hidden keys in Cloud Build.
You have to follow the cloudbuild.yaml schema, which is documented here. Since a build would be directly triggered from that yaml file, it is not possible to add other fields and do some sort of pre-processing to merge different files together.
The only options that are on the table as we speak:
Use global environment variables:
options:
env: [string, string, ...]
steps: [...]
Use step-specific environment variables:
steps:
- name: string
env: [string, string, ...]
Use substitution (with allow-loose):
substitutions:
_SUB_VALUE: world
options:
substitution_option: 'ALLOW_LOOSE'
steps:
- name: 'ubuntu'
args: ['echo', 'hello ${_SUB_VALUE}']
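Applied to the docker build steps from the question, a substitution-based sketch could look like this (the substitution name is illustrative; since substitutions are defined once per build, values that differ between steps still have to stay in each step's args):
substitutions:
  _NGINX_VERSION: '1.15.3'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-nginx',
         '--build-arg', 'VERSION=${_NGINX_VERSION}',
         '-f', 'docker/nginx/Dockerfile', '.']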
Source your [environment].env file in a build step:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- '-c'
- |
source ${BRANCH_NAME}.env
echo $${_MY_VARIABLE_1}
echo $${_MY_VARIABLE_2}
...
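To connect this last option back to the original .env question: a single bash step can source the file and run docker build itself, so the values live only in .env. A minimal sketch, assuming a .env file containing NGINX_VERSION=1.15.3 (the variable name is made up for illustration):
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      source .env
      docker build -t gcr.io/$PROJECT_ID/image-nginx \
        --build-arg VERSION=$${NGINX_VERSION} \
        -f docker/nginx/Dockerfile .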
Related
I'm not having much success figuring out how to override a group/package via a yaml file. I'll try to explain my problem using the example (files and folder structure) from the Hydra documentation: https://hydra.cc/docs/tutorials/structured_config/schema/.
config.yaml as:
defaults:
- base_config # --> reference to dataclass
- db: base_mysql # --> reference to dataclass
- _self_
debug: true
gives the expected (print when running the myapp.py):
db:
driver: mysql
host: localhost
port: 3306
user: ???
password: ???
Using the yaml file instead of the base_mysql dataclass is also fine, so with config.yaml as:
defaults:
- base_config
- db: mysql # --> reads db/mysql.yaml
- _self_
debug: true
prints again as expected
db:
driver: mysql
host: localhost
port: 3306
user: omry
password: secret
Overriding individual fields is as well fine, e.g. with config.yaml like
defaults:
- base_config
- db: mysql
- _self_
debug: true
db:
password: UpdatedPassword
What I'm not able to figure out is how to override the full db group with/via another yaml file - defining the structure via a dataclass and then overriding/setting the values like:
defaults:
- base_config
- db: base_mysql # --> reference to dataclass to define the structure
- _self_
debug: true
db: mysql # --> mysql.yaml
throws the following error:
In 'config': Validation error while composing config:
Merge error: str is not a subclass of MySQLConfig. value: mysql
full_key:
object_type=Config
Searching the internet/Stack Overflow already showed me that moving _self_ to the first position will get rid of the error - but then the composition order is "wrong".
Keeping the order as it is and using mysql.yaml for an override works well when done via the command line (python myapp.py db=mysql, with the line "db: mysql" not present), but for my use case it is much more convenient to handle it all via the yaml file(s).
Somehow I assume that the same functionality is available via the CLI and via files/code, and that I just did not manage to figure out how it works.
(hydra version 1.1 in a conda environment with python 3.9)
Thank you very much in advance for any help that you can provide.
If I understand correctly, you want to use the defaults list in your primary yaml file to merge together the base_mysql config with the mysql config. This will do the trick:
defaults:
- base_config
- db: [base_mysql, mysql]
- _self_
debug: true
Passing a list [base_mysql, mysql] of config names causes those configs base_mysql and mysql to be merged together. This is documented here -- see the "CONFIG_NAMES" alternative for specifying an option in the defaults list.
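As a quick sanity check, with the mysql.yaml shown earlier the db section of the printed config should come out the same as before, but now validated against the MySQLConfig schema:
db:
  driver: mysql
  host: localhost
  port: 3306
  user: omry
  password: secret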
Note that passing the CLI override db=mysql (as in python myapp.py db=mysql) results in modification of the defaults list; the resulting defaults list will be the same as if you had used the following in your yaml file:
defaults:
- base_config
- db: mysql
- _self_
debug: true
You can pass a list [base_mysql, mysql] of config names at the CLI like this:
python my_app.py 'db=[base_mysql, mysql]'
In a cloudbuild template, I have a step to deploy a Cloud function.
I'm trying to deploy the function, passing clear-text environment variables but also secretEnv in the same step.
I have tried several things without success, and the documentation is clear about the fact that we cannot use --set-env-vars and --env-vars-file nor --update-env-vars in the same command.
Has anyone succeeded in sending both variable types: clear-text ones from a file and secret ones with secretEnv?
The following definition successfully creates variables from .env.prod.yaml, but the USER and PASSWORD secrets are not created in the Cloud Function.
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['functions',
'deploy', 'my-function',
'--runtime', 'go111',
'--entry-point', 'MyFunction',
'--env-vars-file', '.env.prod.yaml',
'--trigger-topic', 'my-topic']
secretEnv: ['USER', 'PASSWORD']
secrets:
- kmsKeyName: projects/my-project/locations/global/keyRings/my-keyring/cryptoKeys/my-key
secretEnv:
USER: XXXXXXXXXXXXXXXXXXXXXXXXXXX
PASSWORD: XXXXXXXXXXXXXXXXXXXXXXXXXXX
Any ideas, best practices or nice workarounds?
Plaintext environment variables should be set on the build step itself in the env field:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
env:
- USER=my-username
secretEnv: ['PASSWORD']
secrets:
- kmsKeyName: projects/...
secretEnv:
PASSWORD: ajklddafjkalda....
Just deploy the function twice. First to deploy the changes and set the env variables from the file, and then again using the --update-env-vars to add encrypted variables.
As per the Updating environment variables documentation, the --update-env-vars will not remove all existing environment variables before adding new ones.
An example of the cloudbuild.yaml could be:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['functions', 'deploy', 'my-function',
'--runtime', 'go111',
'--entry-point', 'MyFunction',
'--env-vars-file', 'env.prod.yaml',
'--trigger-topic', 'my-topic']
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args: ['-c', 'gcloud functions deploy my-function --update-env-vars USER=$$USER,PASSWORD=$$PASSWORD']
secretEnv: ['USER', 'PASSWORD']
secrets:
- kmsKeyName: projects/[PROJECT-ID]/locations/global/keyRings/[KEYRING-NAME]/cryptoKeys/[KEY-NAME]
secretEnv:
USER: XXXXXXXXXXXXXXXXXXXXXXXXX
PASSWORD: XXXXXXXXXXXXXXXXXXXXX
I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
- name: tls-dhparam-vol
secret:
secretName: tls-dhparam
- name: nginx-template-volume
configMap:
name: nginx-template
items:
- key: nginx.tmpl
path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found from sources like this, but they don't seem to work (always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
- name: nginx-ingress-controller
image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods then run kubectl exec [POD_NAME] -it -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl.
This will leave you with the nginx.tmpl file you can then edit and push back up as a configmap. I would recommend though keeping custom changes to the template to a minimum as it can make it hard for you to update the controller in the future.
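If you prefer to keep the template in version control rather than creating the configmap imperatively, an equivalent declarative manifest is one option (a sketch; the template body itself is omitted here):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-template
data:
  nginx.tmpl: |
    # paste the nginx.tmpl contents copied out of the pod here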
Hope this helps!
I'm trying SaltStack after using Puppet for a while, but I can't understand their use of the word "state".
My understanding is that, for example, a light switch has 2 possible states - on or off. When I write my SLS configuration I am describing what state a server should be in. When I ask SaltStack to provision a server I issue the command salt '*' state.highstate. I understand that a server can be in a highstate (as described in my config) or not. All good so far.
But this page describes other states. It describes lowstate, highstate and overstate (amongst others) as layers. Does this mean a server passes through several states to get to a highstate? Or all states are maintained simultaneously as layers? Or can I configure multiple possible states in my SLS and have SaltStack switch between them? Or are they just layers to SaltStack that have 'state' in the name and I'm confused?
I'm probably missing something obvious, if anyone can nudge me in the right direction I think a lot of the documentation will become clear to me!
Here is top.sls, which contains:
# cat top.sls
base:
'*':
- httpd_require
and,
# cat httpd_require.sls
install_httpd:
pkg.installed:
- name: httpd
service.running:
- name: httpd
- enable: True
- require:
- file: install_httpd
file.managed:
- name: /var/www/html/index.html
- source: salt://index1.html
- user: root
- group: root
- mode: 644
- require:
- pkg: install_httpd
High state:
While working with state files (.sls) we can see all the aspects of the high state system. There are three specific components:
High data
SLS files
High state
Each individual State represents a piece of high data (the pkg.installed: block). Salt will compile all relevant SLS files referenced in top.sls. When these files are tied together using includes, and further glued together for use inside an environment using a top.sls file, they form a High State.
# salt 'remote_minion' state.show_highstate --out yaml
remote_minion:
install_httpd:
__env__: base
__sls__: httpd_require
file:
- name: /var/www/html/index.html
- source: salt://index1.html
- user: root
- group: root
- mode: 644
- require:
- pkg: install_httpd
- managed
- order: 10002
pkg:
- name: httpd
- installed
- order: 10000
service:
- name: httpd
- enable: true
- require:
- file: install_httpd
- running
- order: 10001
First, an order is declared. All States that are set to be first will have their order adjusted accordingly. Salt will then add 10000 to the last defined number (which is 0 by default), and add any States that are not explicitly ordered.
Salt will also add some variables that it uses internally, to know which environment (__env__) to execute the State in, and which SLS file (__sls__) the State declaration came from. Remember that the order is still no more than a starting point; the actual High State will be executed based first on requisites, and then on order.
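As a small illustration of explicit ordering, a state can be pinned ahead of the automatically numbered ones with the order keyword (a minimal sketch; the state ID and command are made up):
run_me_first:
  cmd.run:
    - name: echo "runs before the auto-ordered states"
    - order: 1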
"In other words, "High" data refers generally to data as it is seen by the user."
Low States:
"Low" data refers generally to data as it is ingested and used by Salt.
Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:
Low State
Low chunks
State module
Execution module(s)
# salt 'remote_minion' state.show_lowstate --out yaml
remote_minion:
- __env__: base
__id__: install_httpd
__sls__: httpd_require
fun: installed
name: httpd
order: 10000
state: pkg
- __env__: base
__id__: install_httpd
__sls__: httpd_require
enable: true
fun: running
name: httpd
order: 10001
require:
- file: install_httpd
state: service
- __env__: base
__id__: install_httpd
__sls__: httpd_require
fun: managed
group: root
mode: 644
name: /var/www/html/index.html
order: 10002
require:
- pkg: install_httpd
source: salt://index1.html
state: file
user: root
Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk on this list looks like this:
- __env__: base
__id__: install_httpd
__sls__: httpd_require
fun: installed
name: httpd
order: 10000
state: pkg
Each low chunk maps to a State module (in this case, pkg) and a function inside that State module (in this case, installed). An ID is also provided at this level (__id__). Salt will map relationships (that is, requisites) between States using a combination of the State and the __id__. If a name has not been declared by the user, then Salt will automatically use the __id__ as the name. Once a function inside a State module has been called, it will usually map to one or more execution modules which actually do the work.
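As a small illustration of that matching, a requisite in another state could point at the pkg part of install_httpd by combining the state module name with the __id__ (the new state ID and file paths here are made up):
configure_httpd:
  file.managed:
    - name: /etc/httpd/conf.d/site.conf
    - source: salt://site.conf
    - require:
      - pkg: install_httpd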
salt '*' state.highstate
'*' refers to all the minions connected to the master.
'state.highstate' is used to run all the states referenced in the top.sls defined on the master.
To invoke a specific state on all minions, use the following salt command, where (in the example given below) the state information for apache is defined in apache.sls.
salt '*' state.sls apache
To invoke the above salt call only on a specific minion, use the below command.
salt 'minion-name' state.sls apache
I don't know all the levels of state, but when you run:
salt '*' state.highstate
Saltstack applies the states you provide in /srv/salt/top.sls.
If you write nothing in it, you can't apply a highstate.
You can apply another state with this command:
salt '*' state.sls state.example
A highstate is just the collection of states that is applied to your server. There is a process in the background where Salt's "state compiler" goes through several stages preparing the data in order to produce the highstate, but you don't really need to worry about those.
Things like the lowstate can help with debugging, but aren't necessary for day to day usage. The highstate is only applied once.
This may seem at first to be pretty simple, but I can tell you I've been wracking my brains for a couple of days on this. I've read a lot of docs, sat on IRC with folks, and spoken to colleagues, and at this point I don't have an answer I really think holds up.
I've looked into a few possible approaches
reactor
orchestration runner
I don't like these two because of the top-down execution necessity... they seem tailored to orchestrating multiple node states, not workflows in a single node.
custom states
This is kind of something I would REALLY like to avoid, as this is a repeated workflow, and I don't want to build customizations like this. There's too much room for illegibility if I go down this path with my teammates.
requires / watches
These don't have a concept (that I am aware of) of applying a state repeatedly, or in a logical order / workflow.
And a few others I won't mention.
Without further discussion, here's my dilemma.
Goals:
Jenkins Master gets Deployed
We can unit.test the deployment as it proceeds
We only restart tomcat when necessary
We can update plugins on a per package basis
A big emphasis on good clean intuitively clear salt configs
Jenkins deployment is pretty straightforward. We drop in the packages and the configs, and we're set.
Unit testing is harder. As an example I've got this state file.
actions/version.sls:
# Hit's the jenkins CLI interface to check for version info
# This can be used to verify that jenkins is active and the version we want
# Import some info
{%- from 'jenkins/init.sls' import jenkins_home with context %}
# Install plugins in jenkins_plugins list
jenkins_version:
cmd.run:
- name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" version
- cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
- user: jenkins
actions.version basically verifies that Jenkins is running and queryable. We want to be sure of this during the build at several points.
For example, tomcat takes time to spin up, so we had to add a delay to that restart operation. If you check out start.sls below you can see that operation occurring. Note the open bug on init_delay: (see the link in start.sls below).
actions/start.sls:
# Starts the tomcat service
tomcat_start:
service.running:
- name: tomcat
- enable: True
- full_restart: True
# Not functional atm see --> https://github.com/saltstack/salt/issues/20631
# - init_delay: 120
# initiate a 120 second delay after any service start to let tomcat come up.
tomcat_wait:
module.run:
- name: test.sleep
- length: 60
include:
- jenkins.actions.version
Now we have this restart capability by doing an actions.stop and an actions.start. We have this actions.version state that we can use to verify that the system is ready to proceed with jenkins specific state workflows.
I want to do something kind of like this:
Install Jenkins --> Grab yaml of plugins --> Install plugins that need it
Pretty straightforward.
Except that, to loop through the yaml of plugins, I am using Jinja.
And now I have no way to call the start.sls and version.sls states and be sure that they can be repeatedly applied.
I am looking for a good way to do that.
This would be something akin to a jenkins.sls
{% set repo_username = "foo" -%}
{% set repo_password = "bar" -%}
include:
- jenkins.actions.version
- jenkins.actions.stop
- jenkins.actions.start
# Install Jenkins
jenkins:
pkg:
- installed
# Import Jenkins Plugins as List, and Working Path
{%- from 'jenkins/init.sls' import jenkins_home with context %}
{%- import_yaml "jenkins/plugins.sls" as jenkins_plugins %}
{%- import_yaml "jenkins/custom-plugins.sls" as custom_plugins %}
# Grab updated package list
jenkins-contact-update-server:
cmd.run:
- name: curl -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' > {{ jenkins_home }}/updates/default.json
- unless: test -f {{ jenkins_home }}/updates/default.json
- require:
- pkg: jenkins
- service: tomcat
# Install plugins in jenkins_plugins list
{% for plugin in jenkins_plugins %}
jenkins-plugin-{{ plugin }}:
cmd.run:
- name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
- unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
- cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
- user: jenkins
- require:
- pkg: jenkins
- service: tomcat
Here is where I am stuck. require won't do this, and lists of actions don't seem to schedule linearly in Salt. I need to be able to just verify that Jenkins is up and ready. I need to be able to restart tomcat after a single plugin in the iteration is added. I need to be able to do this to satisfy dependencies in the plugin order.
- sls: jenkins.actions.version
- sls: jenkins.actions.stop
- sls: jenkins.actions.start
# This can't work for several reasons
# - watch_in:
# - sls: jenkins-safe-restart
{% endfor %}
# Install custom plugins in the custom_plugins list
{% for cust_plugin,cust_plugin_url in custom_plugins.iteritems() %}
# manually downloading the plugin, because jenkins-cli.jar doesn't seem to work direct to artifactory URLs.
download-plugin-{{ cust_plugin }}:
cmd.run:
- name: curl -o {{ cust_plugin }}.jpi -O "https://{{ repo_username }}:{{ repo_password }}@{{ cust_plugin_url }}"
- unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
- cwd: /tmp
- user: jenkins
- require:
- pkg: jenkins
- service: tomcat
# installing the plugin ( REQUIRES TOMCAT RESTART AFTER )
custom-plugin-{{ cust_plugin }}:
cmd.run:
- name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/{{ cust_plugin }}.jpi
- unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ cust_plugin }}"
- cwd: /var/lib/tomcat/webapps/ROOT/WEB-INF/
- user: jenkins
- require:
- pkg: jenkins
- service: tomcat
{% endfor %}
You won't be able to achieve this without using reactors and beacons, and especially not without writing your own Python execution modules.
Jenkins Master gets Deployed
Write a jenkins execution module in python with a function install(...):. In that function you would manage any dependencies by either calling existing execution modules or by writing them yourself.
We can unit.test the deployment as it proceeds
Inside the install function of the jenkins module you would fire specific events depending on the results of the install.
if not _run_deployment_phase(...):
__salt__['event.send']('jenkins/install/error', {
'finished': False,
'message': "Something failed during the deployment!",
})
You would map that event to reactor sls files and handle it.
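A minimal sketch of that wiring, assuming the jenkins/install/error tag fired above (the file paths and the reaction itself are illustrative):
# /etc/salt/master.d/reactor.conf
reactor:
  - 'jenkins/install/error':
    - /srv/reactor/jenkins_install_error.sls

# /srv/reactor/jenkins_install_error.sls
log_jenkins_failure:
  local.cmd.run:
    - tgt: {{ data['id'] }}
    - arg:
      - 'logger "Jenkins install failed"'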
We only restart tomcat when necessary
Write a tomcat module. Add an _is_up(...) function where you would check if tomcat is up by parsing the tomcat logs for the result. Call the function inside a state module and add a mod_watch function.
def mod_watch(name, **kwargs):  # Salt calls mod_watch with the triggering state's arguments
# required dict to return
return_dict = {
"name": "Tomcat install",
"changes": {},
"result": False,
"comment": "",
}
if __salt__["tomcat._is_up"]():
return_dict["result"] = True
return_dict["comment"] = "Tomcat is up."
if __opts__["test"]:
return_dict["result"] = None
return_dict["comment"] = "comment here about what will change"
return return_dict
# execute changes now
return return_dict
Use your state module inside a state file.
install tomcat:
tomcat.install:
- name: ...
- user: ...
...
wait until tomcat is up:
cmd.run:
- name: ...
- watch:
- tomcat: install tomcat
We can update plugins on a per package basis
Add a function to your jenkins execution module named install_plugin. View pkg.install code to replicate interface.
A big emphasis on good clean intuitively clear salt configs
Write python execution modules for easy and maintainable configuration logic. Use that execution module inside your own state modules. Inside state files call your own state modules and supply individual configuration with any state renderer you like.
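For example, once such a jenkins state module exists, a state file could stay as small as this (a hedged sketch; the jenkins.plugin_installed state and its arguments are hypothetical and would live in your own state module):
install git plugin:
  jenkins.plugin_installed:
    - name: git
    - require:
      - tomcat: install tomcat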
States only execute once, by design. If you need the same action to occur multiple times, you need multiple states. Also, includes are only included a single time.
Rather than all of this include/require stuff you're doing, you should just put all of the code into a single sls file, and generate states through jinja iteration.
If what you're trying to do is add a bunch of plugins and config files, and then do restarts at the end, then you should really just execute everything in order: don't use require, and use listen or listen_in rather than watch or watch_in.
listen/listen_in cause triggered actions to happen at the end of a state run. They are similar to the concept of handlers in Ansible.
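A hedged sketch of that pattern, adapted to the plugin loop from the question (the plugin names are placeholders):
{% for plugin in ['git', 'ansicolor'] %}
install-plugin-{{ plugin }}:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin "{{ plugin }}"
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep "{{ plugin }}"
    - listen_in:
      - service: tomcat
{% endfor %}

# a single restart at the end of the run, and only if a plugin state reported changes
tomcat:
  service.running:
    - name: tomcat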
This is a pretty old question, but if you change your Jenkins/tomcat start/stop procedure to be a standard init/systemd/windows service (as all well-behaved services should be), you could have a service.running state for the Jenkins service and add this to each of your custom-plugin-{{ cust_plugin }} states:
require_in:
- service: jenkins
watch_in:
- service: jenkins
You could continue to use the cmd.run module with onchanges. You'd have to add onchanges_in: to each of the custom-plugin-{{ cust_plugin }} states, but you need to have at least one item in the onchanges list or the command will fire every time the state runs.
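A sketch of the onchanges_in variant (the state IDs and the restart command are illustrative):
custom-plugin-myplugin:
  cmd.run:
    - name: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" install-plugin /tmp/myplugin.jpi
    - unless: java -jar jenkins-cli.jar -s "http://127.0.0.1:8080" list-plugins | grep myplugin
    - onchanges_in:
      - cmd: restart-tomcat

restart-tomcat:
  cmd.run:
    - name: systemctl restart tomcat
Unlike listen, onchanges does not defer the restart to the end of the run; restart-tomcat simply runs in its normal position and is skipped unless the plugin state reports changes.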
If you use require you cause salt to re-order your states. If you want your states to run in order, just write them in the order you want them to run in.
Watch/watch_in will also re-order your states. If you use listen/listen_in instead, it'll queue the triggered actions to run in the order they were triggered at the end of the state run.
See:
http://ryandlane.com/blog/2014/07/14/truly-ordered-execution-using-saltstack/
http://ryandlane.com/blog/2015/01/06/truly-ordered-execution-using-saltstack-part-2/