I have successfully set up NGINX as an ingress for my Kubernetes cluster on GKE. I have enabled and configured external metrics (and I am using an external metric in my HPA for auto-scaling). All good there and it's working well.
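For reference, the HPA references the external metric along these lines (simplified; the names and target value here are placeholders rather than my real config):
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metricName: custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections
        targetAverageValue: 100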
However, I have a deprecation warning in StackDriver around these external metrics. I have discovered that these warnings are caused by "old" resource types being used.
For example, using this command:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
I get this output:
{
  "metricName": "custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections",
  "metricLabels": {
    "metric.labels.controller_class": "nginx",
    "metric.labels.controller_namespace": "ingress-nginx",
    "metric.labels.controller_pod": "nginx-ingress-controller-[snip]",
    "metric.labels.state": "writing",
    "resource.labels.cluster_name": "[snip]",
    "resource.labels.container_name": "",
    "resource.labels.instance_id": "[snip]",
    "resource.labels.namespace_id": "ingress-nginx",
    "resource.labels.pod_id": "nginx-ingress-controller-[snip]",
    "resource.labels.project_id": "[snip]",
    "resource.labels.zone": "[snip]",
    "resource.type": "gke_container"
  },
  "timestamp": "2020-01-26T05:17:33Z",
  "value": "1"
}
Note that the "resource.type" field is "gke_container". As of the next version of Kubernetes this needs to be "k8s_container".
I have looked through the Kubernetes NGINX configuration to try to determine when (or if) an upgrade has been made to support the new StackDriver resource model, but I have failed so far. And I would rather not "blindly" upgrade NGINX if I can help it (even in UAT).
These are the Docker images that I am currently using:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
gcr.io/google-containers/prometheus-to-sd:v0.9.0
gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0
Could anyone help out here?
Thanks in advance,
Ben
OK, this has nothing to do with NGINX and everything to do with Prometheus (specifically the prometheus-to-sd sidecar).
For future readers: if your prometheus-to-sd container spec looks like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
    - name: profiler
      containerPort: 6060
  command:
    - /monitor
    - --stackdriver-prefix=custom.googleapis.com
    - --source=nginx-ingress-controller:http://localhost:10254/metrics
    - --pod-id=$(POD_NAME)
    - --namespace-id=$(POD_NAMESPACE)
then it needs to look like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
    - name: profiler
      containerPort: 6060
  command:
    - /monitor
    - --stackdriver-prefix=custom.googleapis.com
    - --source=nginx-ingress-controller:http://localhost:10254/metrics
    - --monitored-resource-type-prefix=k8s_
    - --pod-id=$(POD_NAME)
    - --namespace-id=$(POD_NAMESPACE)
That is, include the --monitored-resource-type-prefix=k8s_ option.
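Once the sidecar has restarted with the new flag, you can re-run the raw metrics query from the question and check that the resource type has switched over (new time series may take a minute or two to appear):
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
# resource.type should now read "k8s_container" instead of "gke_container"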
I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation creates a topology. Therefore, there is no way I can call the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to let me define what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is missing an example from which I could work out how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for this problem. I think my best option currently is to fork jelastic-jps/kubernetes and adapt the beforeinstall to my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it's created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer
actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all fields that can be used.
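As a rough sketch (untested; it simply reuses the node parameters from your question, so check the reference above for the exact field set), the addBalancer action could be filled in along these lines:
addBalancer:
  - install:
      envName: ${settings.envName}
      jps:
        type: update
        name: Add Balancer Node
        onInstall:
          - addNodes:
              - nodeGroup: bl
                nodeType: nginx-dockerized
                tag: 1.16.1
                displayName: Node balancing
                count: 1
                fixedCloudlets: 1
                cloudlets: 4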
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that public IP instead of the Shared Balancer as before.
In a nutshell, the Jelastic balancer instance currently doesn't provide Kubernetes Service LoadBalancer functionality, if that is exactly what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" worker group can be automatically used for LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
Please let us know if you have any further questions.
I am trying to run the Azure Forms Recognizer Label Tool in Azure Container instance.
I have followed the instructions given here.
I was able to deploy the container image but when I try to start it, it terminates with the following message:
Missing EULA=accept command line option. You must provide this to continue.
This is quite surprising, because this option has been specified in my YAML file (see below).
What can I do to fix this?
My YAML file:
apiVersion: 2018-10-01
location: West Europe
name: renecognitiveservice
imageRegistryCredentials: # This is required when pulling a non-public image
  - server: mcr.microsoft.com
    username: xxx
    password: xxx
properties:
  containers:
    - name: xxxeamlabelingtool
      properties:
        image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
        environmentVariables: # These env vars are required
          - name: eula
            value: accept
          - name: billing
            value: https://rk-formsrecognizer.cognitiveservices.azure.com/
          - name: apikey
            value: xxx
        resources:
          requests:
            cpu: 2 # Always refer to recommended minimal resources
            memoryInGb: 4 # Always refer to recommended minimal resources
        ports:
          - port: 5000
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
Apparently you can run it with this command:
"command": [
"./run.sh", "eula=accept"
],
This worked from the portal (see https://github.com/MicrosoftDocs/azure-docs/issues/46623).
This is what you want to add in the Azure portal while creating the container instance. You will find it in the "Advanced" tab:
"./run.sh", "eula=accept"
Afterwards you can access the IP address of that instance to open the label-tool.
I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
  - name: tls-dhparam-vol
    secret:
      secretName: tls-dhparam
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found from sources like this, but they don't seem to work (always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods and then run kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl.
This will leave you with the nginx.tmpl file you can then edit and push back up as a configmap. I would recommend though keeping custom changes to the template to a minimum as it can make it hard for you to update the controller in the future.
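Putting it together with the configmap command from the question, the round trip might look something like this (the pod name is a placeholder):
kubectl get pods
kubectl exec nginx-ingress-controller-xxxxx -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl
# tweak nginx.tmpl if needed, then publish it as the configmap the volume mount expects
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl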
Hope this helps!
I have a couple of Node.js backends running as pods in a Kubernetes setup, with an Ingress-managed nginx in front of them.
These backends are API servers and can return 400, 404, or 500 responses during normal operations. These responses provide meaningful data to the client: besides the status code, the body carries a JSON-serialized structure describing the error cause or suggesting a solution.
However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.
There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.
For reference, this is the output of kubectl describe ingress <ingress-name> (names and IPs redacted):
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com
                              /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:  <none>
I have solved this on Tectonic 1.7.9-tectonic.4.
In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.
In the config maps shown, you should see one named "tectonic-custom-error".
Open it and go to the YAML editor.
In the data field you should have an entry like this:
custom-http-errors: '404, 500, 502, 503'
which configures which HTTP response codes will be captured and shown with the custom Tectonic error page.
If you don't want some of those, just remove them, or clear them all.
It should take effect as soon as you save the updated config map.
Of course, you could do the same from the command line with kubectl edit:
$> kubectl edit cm tectonic-custom-error --namespace=tectonic-system
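For example, after trimming, the config map might look something like this (a sketch; keep only the codes you still want Tectonic to capture):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tectonic-custom-error
  namespace: tectonic-system
data:
  custom-http-errors: '502, 503'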
Hope this helps :)
Disclaimer: I found out what Prometheus is about a day ago.
I'm trying to use Prometheus with the nginx exporter.
I copy-pasted a config example from a Grafana dashboard and it works flawlessly with node-exporter, but when I try to adapt it to nginx-exporter (deployed in the same pod as the nginx server), Prometheus shows lots of junk in Targets (all open ports for all available IPs).
So I wonder how I should adapt the job so that it scrapes only the needed container (with its name in labels, etc.):
- job_name: 'kubernetes-nginx-exporter'
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - api_servers:
        - 'https://kubernetes.default.svc'
      in_cluster: true
      role: container
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - source_labels: [__meta_kubernetes_role]
      action: replace
      target_label: kubernetes_role
    - source_labels: [__address__]
      regex: '(.*):10250'
      replacement: '${1}:9113'
      target_label: __address__
The right workaround was to add annotations to the deployment's template section:
annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port: '9113'
and to set role: pod in the job_name: 'kubernetes-pods' job (if it's not already set).
That's it: your endpoints will show up only on the ports you provided and with all the needed labels.
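For reference, a typical kubernetes-pods job that honours those annotations looks roughly like this (Prometheus 2.x syntax; adapt the TLS/bearer-token settings to your setup):
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods that opted in via the prometheus.io/scrape annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # scrape the port given in the prometheus.io/port annotation (9113 here)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    # carry over pod labels, namespace and pod name
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name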