Encrypting a Secret String in a Prometheus YAML file

I am using OAuth2.0 in this Prometheus YAML file, and don't want to expose the client_secret directly in the file. Does anybody know how to encrypt another file with the client secret (client_secret_file) so that Prometheus can decrypt and use it?
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

rule_files: []

scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      client_id: ""
      client_secret_file: ""
      scopes: []
      token_url: ""

IIUC, the solution is to use client_secret_file instead of client_secret.
While client_secret puts the secret directly in the Prometheus config, client_secret_file is a file reference to the secret: it doesn't disclose the secret to a viewer of the config, and the referenced file should not be checked into e.g. source control.
To my knowledge, there's no way to encrypt arbitrary sections of the Prometheus config.
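For illustration, a minimal sketch of the file-based approach (the file path, client_id and token_url values are placeholders, and it assumes a Prometheus version whose scrape_configs support oauth2):

# /etc/prometheus/secrets/oauth2-client-secret contains only the raw secret
# string; keep it readable only by the user Prometheus runs as (e.g. mode 0400).
scrape_configs:
  - job_name: "prometheus"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["localhost:8080"]
    oauth2:
      client_id: "my-client-id"                                         # placeholder
      client_secret_file: "/etc/prometheus/secrets/oauth2-client-secret"
      scopes: []
      token_url: "https://auth.example.com/oauth2/token"                # placeholder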

Related

Cloud Endpoints: is it possible to protect all API accesses using only a base URL?

As in, let's say my API is located at domain/_ah/api. We have domain/_ah/api/getUser, domain/_ah/api/stuff/getStuff, domain/_ah/api/stuff/moreStuff/postMoreStuff.
Is it possible to do that by only defining something like this?
swagger: '2.0'
info:
  title: "Cloud Endpoints + Cloud Run"
  description: "Sample API on Cloud Endpoints with a Cloud Run backend"
  version: "1.0.0"
host: "domain"
schemes:
  - "https"
produces:
  - "application/json"
x-google-backend:
  jwt_audience: "audience"
  address: "domain_backend"
  protocol: "h2"
paths:
  /_ah/api/*:
    get, post, put, etc:
      description: "Protects Base URL"
      operationId: "authInfoFirebase"
      security:
        - firebase: []
securityDefinitions:
  firebase:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://securetoken.google.com/<project_id>"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "<project_id>"
I am afraid Cloud Endpoints does not recognize wildcards as you specified.
Quoting the documentation:
“Endpoints only supports URL path template parameters that correspond to entire path segments (delimited by slashes /). URL path template parameters that correspond to partial path segments aren't supported.”[1]
A workaround for the wildcard is to use path templates.
You can use curly braces {} to mark parts of a URL as path parameters. Using your example:
domain/_ah/api/{value1}
domain/_ah/api/{value1}/{value2}
domain/_ah/api/{value1}/{value2}/{value3}
Just be careful not to overlap the path templates, like in this example:
/items/{itemId}          ---> This is valid
/items/{itemId}/subitem  ---> This is valid
/items/cat               ---> This is NOT valid (it overlaps /items/{itemId})
[1] https://cloud.google.com/endpoints/docs/openapi/openapi-limitations#url_path_templating
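For illustration, a hedged sketch of how the paths section could look with path templates instead of the wildcard (the operation IDs and parameter names are placeholders; each nesting level gets its own templated path, and the firebase security definition from the question is applied per operation):

paths:
  /_ah/api/{value1}:
    get:
      operationId: "getValue1"            # placeholder
      parameters:
        - name: value1
          in: path
          required: true
          type: string
      security:
        - firebase: []
      responses:
        '200':
          description: "OK"
  /_ah/api/{value1}/{value2}:
    get:
      operationId: "getValue1Value2"      # placeholder
      parameters:
        - name: value1
          in: path
          required: true
          type: string
        - name: value2
          in: path
          required: true
          type: string
      security:
        - firebase: []
      responses:
        '200':
          description: "OK"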

Using x-google-endpoints OpenAPI extension with multiple API versions in Cloud Endpoints

I'm trying to deploy two API versions to Google Cloud Endpoints but end up facing configuration issues during the deployment.
API definition api-1.yaml looks like this:
swagger: "2.0"
info:
description: "API"
title: "API"
version: "1.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v1"
x-google-api-name: v1
x-google-endpoints:
- name: "api.endpoints.GCP_PROJECT.cloud.goog"
target: "IP_ADDRESS"
...
This works just fine if deployed alone. However if api-2.yaml is added:
swagger: "2.0"
info:
description: "API"
title: "API"
version: "2.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v2"
x-google-api-name: v2
x-google-endpoints:
- name: "api.endpoints.GCP_PROJECT.cloud.goog"
target: "IP_ADDRESS"
...
This leads to a deployment error: OpenAPI spec is invalid. Multiple endpoint entries are defined in the extension 'x-google-endpoints'. At most one entry is allowed.
Removing the x-google-endpoints extension from one YAML file works, but it leaves the other YAML file incomplete, so it is not an optimal solution.
Could there be an issue with combining/validating the YAML files during deployment? Can the x-google-endpoints extension be used to define a .cloud.goog domain for versioned APIs?
There are two ways to do this:
1) The version is in the domain name, such as v1-api.endpoints.GCP_PROJECT.cloud.goog.
You define and deploy two services: one for v1 and another for v2. Each has its own IP, its own service name and its own backend. This is the most straightforward and easiest approach.
2) The version is in the path, such as api.endpoints.GCP_PROJECT.cloud.goog/v1. You can only define and deploy one service, but you have two backends. This one is trickier: you use the x-google-backend extension in the OpenAPI spec and deploy one Cloud Endpoints service.
You are currently using two ESP proxies: v1_esp -> v1, v2_esp -> v2.
Each proxy has its own IP, and you are trying to bind one domain name to two IPs. This is not supported.
My suggestion is to use one ESP, i.e. esp -> v1 + v2, by using x-google-backend, with the following in the openApi.yaml:
paths:
  /v1/path1:
    ...
    x-google-backend:
      address: v1_host
    # do above for all your path/methods
  /v2:
    ...
    x-google-backend:
      address: v2_host
    # do above for all your path/methods
x-google-endpoints:
  - name: "api.endpoints.GCP_PROJECT.cloud.goog"
    target: "IP_ADDRESS"
Add --enable_backend_routing and --rewrite to remove the /v1 and /v2 prefixes before sending the requests to your backends.
We have not tested such a deployment, but you can try it.
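For illustration, a hedged sketch of how those flags might be passed to an ESP container in a Kubernetes-style spec. The image tag, port and the exact --rewrite rule syntax are assumptions to verify against the ESP documentation; only the two flags themselves come from the answer above.

containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:1      # assumed ESP image
    args:
      - "--http_port=8081"
      - "--service=api.endpoints.GCP_PROJECT.cloud.goog"
      - "--rollout_strategy=managed"
      - "--enable_backend_routing"                            # flag named in the answer above
      - "--rewrite"                                           # flag named in the answer above
      - "^/v1/(.*)$ /$1"                                       # hypothetical rule stripping the /v1 prefix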

How to write Airflow logs to Elasticsearch?

I am using Airflow 1.10.5. I can't seem to find complete documentation or a sample on how to set up remote logging using Elasticsearch. I saw the Airflow documentation about logging, but it wasn't helpful. I am trying to write the Airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print into a file, then use something like filebeat to post the file content to ES; Airflow can then read them back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
I was able to achieve this using the filebeat shipper.
Input config section in filebeat.yml
<snip>
# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
A good doc to read, especially about Airflow --> ES.

SPNEGO authentication with uri module

Using curl I can access an HTTP resource on a web service with Kerberos / SPNEGO this way, after doing a kinit:
curl -X POST --negotiate -u : http://host.mydomain.net:14000/my/web/resource
You can see I just pass -u : without actually passing any user / password, and it works because of --negotiate.
With Ansible I can access the resource, but I need to put in my credentials:
- uri:
    url: "http://host.mydomain.net:14000/my/web/resource"
    return_content: true
    method: POST
    headers:
      Content-Type: "application/x-www-form-urlencoded"
    user: "{{ myuser }}"
    password: "{{ mypass }}"
  register: login

- debug:
    msg: "{{ login.content }}"
Now I'd like to access the resource using only Kerberos authentication, so that the executor uses its own credentials. I tried defining the user and password parameters as empty, but this fails.
So I'd like to know whether the uri module supports SPNEGO and how I should do it.
Thanks
curl committer here...
This will not work; curl cannot authenticate for you. The authentication has to happen at logon time to the machine/server. Since you want to automate that, create a service account, export the keytab and provide the keytab file to Ansible with the env var KRB5_CLIENT_KTNAME. This will work, but you need MIT Kerberos.
Please read my canonical answer to this. If you are in an Active Directory environment, you can easily use msktutil(1), which will do all the magic for you.
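For illustration, a hedged sketch of that setup. It assumes an Ansible version whose uri module offers a use_gssapi option, the Python GSSAPI bits available to the executing host, and a keytab exported for the service account; the keytab path is a placeholder:

- uri:
    url: "http://host.mydomain.net:14000/my/web/resource"
    method: POST
    return_content: true
    headers:
      Content-Type: "application/x-www-form-urlencoded"
    use_gssapi: true                                      # Negotiate/SPNEGO instead of user/password
  environment:
    KRB5_CLIENT_KTNAME: "/etc/ansible/service.keytab"     # hypothetical keytab path
  register: login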

How do I encrypt a build step?

I need a secret token to be part of a command executed by Travis CI, but I am in a public repository. I found that I can encrypt parts of .travis.yml to keep secrets safe. However, encrypting the command as in the following example fails with Y95MgqDf...Bc=}: No such file or directory
after_deploy:
- secure: "Y95MgqDf...Bc="
You don't encrypt the step. That does not appear to be supported by Travis.
Instead, encrypt only the secret part:
$ travis encrypt TOKEN=verysecret
secure: "CnLZ...lI="
Put the secret in an environment variable:
env:
  global:
    secure: CnLZ...lI=
Then dereference the environment variable when you need your secret.
after_deploy:
- mycommand $TOKEN
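
Putting it together, a minimal .travis.yml sketch (the secure value is the output of the travis encrypt call shown above, and mycommand is a placeholder for your real command):

env:
  global:
    secure: "CnLZ...lI="   # output of `travis encrypt TOKEN=verysecret`
after_deploy:
  - mycommand $TOKEN       # the decrypted TOKEN is available as an env var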
