Locating a valid NGINX template for nginx-ingress-controller? (Kubernetes)

I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
  - name: tls-dhparam-vol
    secret:
      secretName: tls-dhparam
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found from sources like this, but they don't seem to work (always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?

To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods, then run kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl (the -it flags aren't needed here and can add TTY artifacts to the redirected output).
This will leave you with the nginx.tmpl file you can then edit and push back up as a configmap. I would recommend though keeping custom changes to the template to a minimum as it can make it hard for you to update the controller in the future.
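For example, the whole round trip might look roughly like this (a sketch; the template path is the one mentioned above and can differ between controller versions):

# find the controller pod name (adjust the namespace with -n if needed)
kubectl get pods

# copy the template out of the running pod; dropping -it avoids TTY artifacts in the redirected file
kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl

# recreate the configmap that the controller manifest mounts
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl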
Hope this helps!

Related

Upgrading Kubernetes NGINX to use StackDriver new resource model in External Metrics

I have successfully set up NGINX as an ingress for my Kubernetes cluster on GKE. I have enabled and configured external metrics (and I am using an external metric in my HPA for auto-scaling). All good there and it's working well.
However, I have a deprecation warning in StackDriver around these external metrics. I have come to discover that these warnings are because of "old" resource types being used.
For example, using this command:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
I get this output:
{
  "metricName": "custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections",
  "metricLabels": {
    "metric.labels.controller_class": "nginx",
    "metric.labels.controller_namespace": "ingress-nginx",
    "metric.labels.controller_pod": "nginx-ingress-controller-[snip]",
    "metric.labels.state": "writing",
    "resource.labels.cluster_name": "[snip]",
    "resource.labels.container_name": "",
    "resource.labels.instance_id": "[snip]",
    "resource.labels.namespace_id": "ingress-nginx",
    "resource.labels.pod_id": "nginx-ingress-controller-[snip]",
    "resource.labels.project_id": "[snip]",
    "resource.labels.zone": "[snip]",
    "resource.type": "gke_container"
  },
  "timestamp": "2020-01-26T05:17:33Z",
  "value": "1"
}
Note that the "resource.type" field is "gke_container". As of the next version of Kubernetes this needs to be "k8s_container".
I have looked through the Kubernetes NGINX configuration to try to determine when (or if) an upgrade has been made to support the new StackDriver resource model, but I have failed so far. And I would rather not "blindly" upgrade NGINX if I can help it (even in UAT).
These are the Docker images that I am currently using:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
gcr.io/google-containers/prometheus-to-sd:v0.9.0
gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0
Could anyone help out here?
Thanks in advance,
Ben
Ok this has nothing to do with NGINX and everything to do with Prometheus (and specifically the Prometheus sidecar prometheus-to-sd).
For future readers if your Prometheus start-up looks like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
    - name: profiler
      containerPort: 6060
  command:
    - /monitor
    - --stackdriver-prefix=custom.googleapis.com
    - --source=nginx-ingress-controller:http://localhost:10254/metrics
    - --pod-id=$(POD_NAME)
    - --namespace-id=$(POD_NAMESPACE)
Then it needs to look like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
    - name: profiler
      containerPort: 6060
  command:
    - /monitor
    - --stackdriver-prefix=custom.googleapis.com
    - --source=nginx-ingress-controller:http://localhost:10254/metrics
    - --monitored-resource-type-prefix=k8s_
    - --pod-id=$(POD_NAME)
    - --namespace-id=$(POD_NAMESPACE)
That is, include the --monitored-resource-type-prefix=k8s_ option.
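To verify the change after redeploying, you could re-run the raw metrics query from the question and check the resource type (a hedged check; the metric name is the one used above):

kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
# the metricLabels block should now report:
#   "resource.type": "k8s_container"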

What is the command to execute command line arguments with NGINX Ingress Controller?

I feel like I'm missing something pretty basic here, but can't find what I'm looking for.
Referring to the NGINX Ingress Controller documentation regarding command line arguments, how exactly would you use these? Are you calling a command on the nginx-ingress-controller pod with these arguments? If so, what is the command name?
Can you provide an example?
Command line arguments are accepted by the ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest.
Annotation documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
Command line argument documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
If you run the command
kubectl describe deployment/nginx-ingress-controller --namespace <namespace>
you will find a snippet like this:
Args:
  --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  --annotations-prefix=nginx.ingress.kubernetes.io
These are the command line arguments of the ingress controller, as documented above. This is also where you can change the annotation prefix via --annotations-prefix.
Note: the annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io.
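As a rough illustration (a sketch of the relevant part of the Deployment manifest, not a complete spec; the custom prefix value is hypothetical):

containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      - --annotations-prefix=custom.ingress.kubernetes.io   # overrides the default nginx.ingress.kubernetes.io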
If you are using the Helm chart, then you can simply create a configmap named {{ include "ingress-nginx.fullname" . }}-tcp in the same namespace where the ingress controller is deployed. (Unfortunately, I wasn't able to figure out what the default value is for ingress-nginx.fullname... sorry. If someone knows, feel free to edit this answer.)
If you need to specify a different namespace for the configmap, then you might be able to use the .Values.tcp.configMapNamespace property, but honestly, I wasn't able to find it applied anywhere in the code, so YMMV.
## Allows customization of the tcp-services-configmap
##
tcp:
  configMapNamespace: ""  # defaults to .Release.Namespace

  ## Annotations to be added to the tcp config configmap
  annotations: {}
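For reference, the tcp config configmap maps an external port to namespace/service:port in its data section; a minimal sketch (the configmap name and backing service are placeholders, so adjust them to match your release):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-ingress-nginx-tcp   # hypothetical; must match the name the chart expects
  namespace: ingress-nginx
data:
  "9000": "default/example-service:8080"   # expose port 9000 and forward to example-service port 8080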

Environment Variables in Fission

Is there a way to set environment variables in fission? I can't seem to find anything on their documentation and do not want to put credentials in the codebase.
I wasn't sure if it would make sense to add it as a build variable but don't know how that would work with the cli.
As far as I know, support for environment variables is being worked on.
The relevant PR: https://github.com/fission/fission/pull/399
As a temporary workaround you could inject environment variables using a custom Fission environment. For example with the python environment:
FROM fission/python-env
ENV DB_CREDENTIALS=foobar
ENTRYPOINT ["python3"]
CMD ["server.py"]
Note that any function using the custom environment will have access to the environment variable(!)
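If you take that route, the image can be built and registered as its own environment via the fission CLI (image and environment names here are hypothetical):

docker build -t myregistry/python-env-with-creds .
docker push myregistry/python-env-with-creds
fission env create --name python-with-creds --image myregistry/python-env-with-creds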
I think a good way to store the credentials would be to keep them in ConfigMap resources in the Kubernetes cluster and then access them from our code.
You can follow this link to read more about how to access the configmap from fission code.
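As a minimal sketch of that approach in Python (assuming a ConfigMap named db-config is attached to the function, and that Fission mounts it at /configs/<namespace>/<name>/<key> as described in the linked docs):

# hypothetical helper for a Fission Python function
def read_config(key, namespace="default", name="db-config"):
    # Fission mounts attached ConfigMaps as files; the path layout is an assumption, check the docs above
    path = f"/configs/{namespace}/{name}/{key}"
    with open(path) as f:
        return f.read().strip()

def main():
    db_user = read_config("username")
    return f"connecting as {db_user}"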
You can also do this by setting up the environment's YAML spec:
apiVersion: fission.io/v1
kind: Environment
metadata:
  creationTimestamp: null
  name: func-name
spec:
  builder:
    command: build
    container:
      name: ""
      resources: {}
    image: fission/python-builder
  imagepullsecret: myregistrykey
  keeparchive: false
  poolsize: 3
  resources: {}
  runtime:
    image: addr_of_image
    podspec:
      containers:
        - name: container-name
          env:                 # here !!!!!!!!!!!!
            - name: value1
              value: "1"       # env values must be strings
            - name: value2
              value: "2"
          resources: {}
  version: 2
please read: https://doc.crds.dev/github.com/fission/fission

OS::Heat::SoftwareDeployment is staying stuck in CREATE_IN_PROGRESS status

I am trying to customise new instances created within OpenStack Mitaka, using HEAT templates. Using OS::Nova::Server with a script in user_data works fine.
Next the idea is to do additional steps via OS::Heat::SoftwareConfig.
The config is:
  type: OS::Nova::Server
  ....
  user_data_format: SOFTWARE_CONFIG
  user_data:
    str_replace:
      template:
        get_file: vm_init1.sh

config1:
  type: OS::Heat::SoftwareConfig
  depends_on: vm
  properties:
    group: script
    config: |
      #!/bin/bash
      echo "Running $0 OS::Heat::SoftwareConfig look in /var/tmp/test_script.log" | tee /var/tmp/test_script.log

deploy:
  type: OS::Heat::SoftwareDeployment
  properties:
    config:
      get_resource: config1
    server:
      get_resource: vm
The instance is set up nicely (the script vm_init1.sh above runs fine) and one can log in, but the "config1" example above is never executed.
Analysis
- The base image is Ubuntu 16.04, created with disk-image-create and including "vm ubuntu os-collect-config os-refresh-config os-apply-config heat-config heat-config-script"
- From "openstack stack resource list $vm" one see that deployment never fisnihe, with OS::Heat::SoftwareDeployment status=CREATE_IN_PROGRESS
- "openstack stack resource show $vm config1" shows resource_status=CREATE_COMPLETE
- Within the vm, /var/log/cloud-init-output.log shows the output of the script vm_init1.sh, but no trace of the 'config1' script. The log os-apply-config.log is empty, is that normal?
How does one troubleshoot OS::Heat::SoftwareDeployment configs?
(I have read https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#software-deployment-resources)
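For what it's worth, the usual places to look are the deployment resource on the Heat side and the os-collect-config service on the instance; a hedged sketch (assumes a systemd-based image built with the heat-config elements):

# on the controller: inspect the stuck deployment resource rather than the config resource
openstack stack resource show $vm deploy

# on the instance: os-collect-config polls for deployment metadata and runs the heat-config hooks,
# so its journal usually shows whether the metadata ever arrived
sudo journalctl -u os-collect-config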

Prometheus + nginx-exporter: collect only from <some_nginx_container_ip>:9113

Disclaimer: I found out what Prometheus is about a day ago.
I'm trying to use Prometheus with the nginx exporter.
I copy-pasted a config example from a Grafana dashboard and it works flawlessly with node-exporter, but when I try to adapt it to nginx-exporter, deployed in the same pod as the nginx server, Prometheus shows lots of junk in Targets (every open port on every available IP).
So I wonder how I should adapt the job so that it targets only the needed container (with its name in labels, etc.).
- job_name: 'kubernetes-nginx-exporter'
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - api_servers:
        - 'https://kubernetes.default.svc'
      in_cluster: true
      role: container
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - source_labels: [__meta_kubernetes_role]
      action: replace
      target_label: kubernetes_role
    - source_labels: [__address__]
      regex: '(.*):10250'
      replacement: '${1}:9113'
      target_label: __address__
The right workaround was to add annotations to the deployment's pod template section:
annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port: '9113'
and set role: pod in job_name: 'kubernetes-pods' (if not set).
That's it: your endpoints will show up only on the ports you provided, with all the needed labels.
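For context, the annotation-driven kubernetes-pods job typically looks something like this (a sketch of the standard example scrape config, not the exact one from this cluster):

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods that opted in via the prometheus.io/scrape annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # scrape the port declared in prometheus.io/port (9113 for the nginx exporter)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)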
