In the standalone statsd-exporter we can pass a statsd mapping config file (YAML) via the --statsd.mapping-config parameter (reference), so we can map the metrics that Airflow sends. But when I use the Airflow Helm chart, I'm confused about how to add an additional mapping config to the default statsd-exporter that comes with the chart. Thank you
Actually, I found the parameter extraMappings=[] in values.yml, which makes it possible to add mappings, but I still don't know how to use it.
I believe you simply need to add the mappings as an array in your custom my_values.yaml:
extraMappings:
  - match: "test.*.*.counter"
    name: "..."
    labels:
      provider: "..."
  - match: "test2.*.*.counter"
    name: "$..."
    labels:
      provider: "..."
And then use your file:
helm install -f my_values.yaml CHART_SPECIFICATION
You can see some examples of mappings in the reference you mentioned.
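As a more concrete sketch (the metric, mapping name, and labels below are illustrative assumptions, not chart defaults), a mapping that turns Airflow's per-task duration metric into a single metric with dag_id/task_id labels could look like this in my_values.yaml:

extraMappings:
  # illustrative only: adjust the match pattern to the metrics your Airflow version actually emits
  - match: "airflow.dag.*.*.duration"
    name: "airflow_task_duration"
    labels:
      dag_id: "$1"
      task_id: "$2"

The entries follow the statsd-exporter mapping format, so $1 and $2 refer to the wildcards captured by the match pattern.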
We are using the S2i build command in our Azure DevOps pipeline with the below command task.
`./s2i build http://azuredevopsrepos:8080/tfs/IT/_git/shoppingcart --ref=S2i registry.access.redhat.com/ubi8/dotnet-31 --copy shopping-service`
The above command asks for a username and password when the task is executed.
How can we provide the username and password for the Git repository in the command we are trying to execute?
Git credential information can be put in a .gitconfig file in your home directory.*1
Looking at the documentation for the s2i CLI,*2 I couldn't find any information about authenticating against a secured Git repository.
I realized that an OpenShift BuildConfig uses the .gitconfig file while building a container image,*3 so that could work.
*1: https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage
*2: https://github.com/openshift/source-to-image/blob/master/docs/cli.md#s2i-build
*3: https://docs.openshift.com/container-platform/4.11/cicd/builds/creating-build-inputs.html#builds-gitconfig-file-secured-git_creating-build-inputs
I must admit I am unfamiliar with Azure DevOps pipelines; however, if this is running a build on OpenShift, you can create a secret with your credentials using oc.
oc create secret generic azure-git-credentials --from-literal=username=<your-username> --from-literal=password=<PAT> --type=kubernetes.io/basic-auth
Link the secret created above to the builder service account; this is the account OpenShift uses by default behind the scenes when running a new build.
oc secrets link builder azure-git-credentials
Lastly, you will want to link this source-secret to the build config.
oc set build-secret --source bc/<your-build-config> azure-git-credentials
Next time you run your build the credentials should be picked up from the source-secret in the build config.
You can also do this from the UI on OpenShift; the steps below replicate what was done above with oc, so choose one approach, not both.
Create a secret from YAML, modifying the fields indicated below:
kind: Secret
apiVersion: v1
metadata:
  name: azure-git-credentials
  namespace: <your-namespace>
data:
  password: <base64-encoded-password-or-PAT>
  username: <base64-encoded-username>
type: kubernetes.io/basic-auth
Then under the ServiceAccounts section on OpenShift, find and edit the 'builder' service account.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: builder
  namespace: xxxxxx
secrets:
  - name: azure-git-credentials ### only add this line, do not edit anything else.
And finally, edit the build config for your build, find the git entry, and add the sourceSecret entry:
source:
  git:
    uri: "https://github.com/user/app.git"
  ### Add the entries below ###
  sourceSecret:
    name: "azure-git-credentials"
I'm on Symfony, but that's not very important. I have a .env file and I would like to use its variables in cloudbuild.yaml. Is there no way to avoid duplication 😠?
Moreover, I read this article and saw that the author uses the YAML merge feature with a GitLab hidden key, which is very useful when the file is big. I tried to use this, but Cloud Build does not like it; it seems impossible to use custom keys like in gitlab-ci.yaml. Any idea?
UPDATE
In the build we need environment variables and a generic config file to avoid changing a lot of values manually. So I would like to use hidden keys in cloudbuild.yaml, because I need the YAML merge feature to avoid code duplication.
This is my cloudbuild.yaml example without optimisation :
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-pgsql', '-f', 'docker/postgresql/Dockerfile', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-nginx', '--build-arg', 'VERSION=1.15.3', '-f', 'docker/nginx/Dockerfile', '.']
But I would like to have this, or something like that :
.build-template: &buildTemplate
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${IMAGE_NAME}', '--build-arg', 'VERSION=${VERSION}', '-f', '${DOCKER_PATH}', '.']
steps:
  - name: 'gcr.io/cloud-builders/docker'
    <<: *buildTemplate
    env: ['IMAGE_NAME=pgsql', 'VERSION=12', 'DOCKER_PATH=docker/postgresql/Dockerfile']
  - name: 'gcr.io/cloud-builders/docker'
    <<: *buildTemplate
    env: ['IMAGE_NAME=nginx', 'VERSION=1.15.3', 'DOCKER_PATH=docker/nginx/Dockerfile']
I get this when I try to run cloud-build-local --dryrun=false . =>
Error loading config file: unknown field ".build-template" in cloudbuild.Build
Unfortunately, Google Cloud Build doesn't support hidden keys. I have created a Feature Request in the Public Issue Tracker on your behalf, where you can track all updates related to support for hidden keys in Cloud Build.
You have to follow the cloudbuild.yaml schema, which is documented here. Since a build is triggered directly from that YAML file, it is not possible to add other fields and do some sort of pre-processing to merge different files together.
The only options that are on the table as we speak:
Use global environment variables:
options:
  env: [string, string, ...]
steps: [...]
Use step-specific environment variables:
steps:
  - name: string
    env: [string, string, ...]
Use substitution (with allow-loose):
substitutions:
  _SUB_VALUE: world
options:
  substitution_option: 'ALLOW_LOOSE'
steps:
  - name: 'ubuntu'
    args: ['echo', 'hello ${_SUB_VALUE}']
Source your [environment].env file in a build step:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        source ${BRANCH_NAME}.env
        echo $${_MY_VARIABLE_1}
        echo $${_MY_VARIABLE_2}
        ...
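As a concrete illustration of the substitution option applied to the two Docker build steps from the question (the substitution name and value here are assumptions; adjust them to your own images and Dockerfiles), the file could look like this:

substitutions:
  _NGINX_VERSION: '1.15.3'
options:
  substitution_option: 'ALLOW_LOOSE'
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-pgsql', '-f', 'docker/postgresql/Dockerfile', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/image-nginx', '--build-arg', 'VERSION=${_NGINX_VERSION}', '-f', 'docker/nginx/Dockerfile', '.']

This does not remove the repeated step boilerplate the way a YAML anchor would, but it at least centralizes the values that vary between builds.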
I feel like I'm missing something pretty basic here, but can't find what I'm looking for.
Referring to the NGINX Ingress Controller documentation regarding command line arguments how exactly would you use these? Are you calling a command on the nginx-ingress-controller pod with these arguments? If so, what is the command name?
Can you provide an example?
Command line arguments are accepted by the Ingress controller executable. They can be set in the container spec of the nginx-ingress-controller Deployment manifest.
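For illustration, a minimal sketch of where the arguments sit in the Deployment (the image tag and the specific flags below are examples, not required values):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
        - name: nginx-ingress-controller
          image: registry.k8s.io/ingress-nginx/controller:v1.8.1   # example tag
          args:
            - /nginx-ingress-controller
            - --ingress-class=nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io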
Annotations documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
Command line arguments documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
If you run the command
kubectl describe deployment/nginx-ingress-controller --namespace <namespace>
you will find a snippet like this:
Args:
--default-backend-service=$(POD_NAMESPACE)/default-http-backend
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--annotations-prefix=nginx.ingress.kubernetes.io
These are all command line arguments of the ingress controller, as suggested. From here you can also change --annotations-prefix=nginx.ingress.kubernetes.io.
The default annotation prefix in nginx is nginx.ingress.kubernetes.io.
Note: the annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io.
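For example, with the default prefix an Ingress resource carries annotations like this (rewrite-target is just one common example; the service name is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80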
If you are using the Helm chart, then you can simply create a configmap named {{ include "ingress-nginx.fullname" . }}-tcp in the same namespace where the ingress controller is deployed. (Unfortunately, I wasn't able to figure out what the default value is for ingress-nginx.fullname... sorry. If someone knows, feel free to edit this answer.)
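Assuming the rendered full name came out as ingress-nginx (this depends on your release name, so treat it as a placeholder), such a configmap could look like the following; each key is the external port to expose and the value is namespace/service:port:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  "9000": "default/example-service:9000"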
If you need to specify a different namespace for the configmap, then you might be able to use the .Values.tcp.configMapNamespace property, but honestly, I wasn't able to find it applied anywhere in the code, so YMMV.
## Allows customization of the tcp-services-configmap
##
tcp:
  configMapNamespace: "" # defaults to .Release.Namespace
  ## Annotations to be added to the tcp config configmap
  annotations: {}
I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
  - name: tls-dhparam-vol
    secret:
      secretName: tls-dhparam
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found from sources like this, but they don't seem to work (always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods, then run kubectl exec [POD_NAME] -it -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl.
This will leave you with the nginx.tmpl file, which you can then edit and push back up as a configmap. I would recommend keeping custom changes to the template to a minimum, though, as they can make it hard to update the controller in the future.
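If you prefer to push the edited template up as a manifest rather than with kubectl create configmap --from-file, a minimal sketch (the name must match what the controller's configMap volume references) would be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-template
data:
  nginx.tmpl: |
    # paste the contents of your edited nginx.tmpl here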
Hope this helps!
Disclaimer: I found out what Prometheus is about a day ago.
I'm trying to use Prometheus with the nginx exporter.
I copy-pasted a config example from a Grafana dashboard and it works flawlessly with node-exporter, but when I try to adapt it to nginx-exporter, deployed in one pod with the nginx server, Prometheus outputs lots of junk in Targets (all open ports for all available IPs).
So I wonder how I should adapt the job so that it picks up only the needed container (with its name in labels, etc.).
- job_name: 'kubernetes-nginx-exporter'
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - api_servers:
        - 'https://kubernetes.default.svc'
      in_cluster: true
      role: container
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - source_labels: [__meta_kubernetes_role]
      action: replace
      target_label: kubernetes_role
    - source_labels: [__address__]
      regex: '(.*):10250'
      replacement: '${1}:9113'
      target_label: __address__
The right workaround was to add annotations to the deployment in the template section:
annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port: '9113'
and set role: pod in job_name: 'kubernetes-pods' (if not set).
That's it: your endpoints will be present only with the ports you provided and with all the needed labels.
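For context, a sketch of where those annotations sit in the Deployment manifest (names and images are placeholders, and the exporter sidecar is only hinted at); this relies on your kubernetes-pods job using the usual relabelling that honours the prometheus.io/* annotations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-exporter
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9113'
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
        # the nginx exporter sidecar listening on port 9113 would be the second container here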