Azure Form Recognizer Label Tool Docker: Missing EULA=accept command line option. You must provide this to continue

I am trying to run the Azure Forms Recognizer Label Tool in an Azure Container Instance.
I have followed the instructions given here.
I was able to deploy the container image but when I try to start it, it terminates with the following message:
Missing EULA=accept command line option. You must provide this to continue.
This is quite surprising, because this option has been specified in my YAML file (see below).
What can I do to fix this?
My YAML file:
apiVersion: 2018-10-01
location: West Europe
name: renecognitiveservice
imageRegistryCredentials: # This is required when pulling a non-public image
- server: mcr.microsoft.com
  username: xxx
  password: xxx
properties:
  containers:
  - name: xxxeamlabelingtool
    properties:
      image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
      environmentVariables: # These env vars are required
      - name: eula
        value: accept
      - name: billing
        value: https://rk-formsrecognizer.cognitiveservices.azure.com/
      - name: apikey
        value: xxx
      resources:
        requests:
          cpu: 2 # Always refer to recommended minimal resources
          memoryInGb: 4 # Always refer to recommended minimal resources
      ports:
      - port: 5000
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups

Apparently you can run it with a command override:
"command": [
  "./run.sh", "eula=accept"
],
This worked from the portal.
https://github.com/MicrosoftDocs/azure-docs/issues/46623

This is what you want to add in the Azure portal while creating the container instance.
You will find this in the "Advanced" tab.
Afterwards you can access the IP address of that instance to open the label-tool.
"./run.sh", "eula=accept"

Related

Upgrading Kubernetes NGINX to use StackDriver new resource model in External Metrics

I have successfully set up NGINX as an ingress for my Kubernetes cluster on GKE. I have enabled and configured external metrics (and I am using an external metric in my HPA for auto-scaling). All good there and it's working well.
However, I have a deprecation warning in StackDriver around these external metrics. I have come to discover that these warnings are because of "old" resource types being used.
For example, using this command:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
I get this output:
{
  "metricName": "custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections",
  "metricLabels": {
    "metric.labels.controller_class": "nginx",
    "metric.labels.controller_namespace": "ingress-nginx",
    "metric.labels.controller_pod": "nginx-ingress-controller-[snip]",
    "metric.labels.state": "writing",
    "resource.labels.cluster_name": "[snip]",
    "resource.labels.container_name": "",
    "resource.labels.instance_id": "[snip]",
    "resource.labels.namespace_id": "ingress-nginx",
    "resource.labels.pod_id": "nginx-ingress-controller-[snip]",
    "resource.labels.project_id": "[snip]",
    "resource.labels.zone": "[snip]",
    "resource.type": "gke_container"
  },
  "timestamp": "2020-01-26T05:17:33Z",
  "value": "1"
}
Note that the "resource.type" field is "gke_container". As of the next version of Kubernetes this needs to be "k8s_container".
I have looked through the Kubernetes NGINX configuration to try to determine when (or if) an upgrade has been made to support the new StackDriver resource model, but I have failed so far. And I would rather not "blindly" upgrade NGINX if I can help it (even in UAT).
These are the Docker images that I am currently using:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
gcr.io/google-containers/prometheus-to-sd:v0.9.0
gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0
Could anyone help out here?
Thanks in advance,
Ben
OK, this has nothing to do with NGINX and everything to do with Prometheus (specifically the prometheus-to-sd sidecar).
For future readers: if your prometheus-to-sd container spec looks like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
  - name: profiler
    containerPort: 6060
  command:
  - /monitor
  - --stackdriver-prefix=custom.googleapis.com
  - --source=nginx-ingress-controller:http://localhost:10254/metrics
  - --pod-id=$(POD_NAME)
  - --namespace-id=$(POD_NAMESPACE)
Then it needs to look like this:
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.9.0
  ports:
  - name: profiler
    containerPort: 6060
  command:
  - /monitor
  - --stackdriver-prefix=custom.googleapis.com
  - --source=nginx-ingress-controller:http://localhost:10254/metrics
  - --monitored-resource-type-prefix=k8s_
  - --pod-id=$(POD_NAME)
  - --namespace-id=$(POD_NAMESPACE)
That is, include the --monitored-resource-type-prefix=k8s_ option.
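After rolling that change out, the query from the question can be repeated to confirm the new resource model is in effect:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|nginx-ingress-controller|nginx_ingress_controller_nginx_process_connections" | jq
The "resource.type" field in the output should now read "k8s_container" instead of "gke_container".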

Environment Variables in Fission

Is there a way to set environment variables in Fission? I can't seem to find anything in their documentation and do not want to put credentials in the codebase.
I wasn't sure if it would make sense to add them as a build variable, but I don't know how that would work with the CLI.
As far as I know, support for environment variables is being worked on.
The relevant PR: https://github.com/fission/fission/pull/399
As a temporary workaround you could inject environment variables using a custom Fission environment. For example with the python environment:
FROM fission/python-env
ENV DB_CREDENTIALS=foobar
ENTRYPOINT ["python3"]
CMD ["server.py"]
Note that any function using the custom environment will have access to the environment variable(!)
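To use the workaround you would build and push the image, then register it as a Fission environment. A rough sketch (the image and environment names are hypothetical):
docker build -t myregistry/python-env-creds .
docker push myregistry/python-env-creds
fission env create --name python-creds --image myregistry/python-env-creds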
I think a good way to store the credentials would be to store them in the Kubernetes cluster as ConfigMap resources and then access them in our code.
You can follow this link to read more about how to access the configmap from fission code.
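As a sketch of what that access could look like from a Python function (the ConfigMap name "db-creds", the "default" namespace, and the key name are hypothetical; the /configs/<namespace>/<name>/<key> mount path is the convention described in the Fission docs linked above):
def main():
    # Fission mounts ConfigMaps referenced by the function inside the pod
    with open("/configs/default/db-creds/DB_CREDENTIALS") as f:
        db_credentials = f.read().strip()
    # use the credentials here instead of hard-coding them in the codebase
    return "loaded %d characters of credentials" % len(db_credentials)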
You can also do this by setting up an Environment YAML spec:
apiVersion: fission.io/v1
kind: Environment
metadata:
  creationTimestamp: null
  name: func-name
spec:
  builder:
    command: build
    container:
      name: ""
      resources: {}
    image: fission/python-builder
  imagepullsecret: myregistrykey
  keeparchive: false
  poolsize: 3
  resources: {}
  runtime:
    podspec:
      containers:
      - name: container-name
        env: # set environment variables here
        - name: value1
          value: "1" # env var values must be strings in Kubernetes
        - name: value2
          value: "2"
        resources: {}
    image: addr_of_image
  version: 2
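Assuming the spec above is saved as env.yaml (a hypothetical filename), it can be applied like any other custom resource:
kubectl apply -f env.yaml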
Please read: https://doc.crds.dev/github.com/fission/fission

Locating a valid NGINX template for nginx-ingress-controller? (Kubernetes)

I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
- name: tls-dhparam-vol
  secret:
    secretName: tls-dhparam
- name: nginx-template-volume
  configMap:
    name: nginx-template
    items:
    - key: nginx.tmpl
      path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found from sources like this, but they don't seem to work (always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
- name: nginx-ingress-controller
  image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods, then run kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl (dropping -it, so the TTY doesn't add carriage returns to the redirected output).
This will leave you with the nginx.tmpl file, which you can then edit and push back up as a configmap. I would recommend keeping custom changes to the template to a minimum, though, as they can make it hard for you to update the controller in the future.
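Putting this together with the configmap command from the question, the whole round trip looks something like this ([POD_NAME] stands for the controller pod found via kubectl get pods):
# 1. Copy the template that ships inside the running controller pod
kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl
# 2. Edit nginx.tmpl locally, then publish it as the configmap the volume mount expects
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl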
Hope this helps!

OS::Heat::SoftwareDeployment is staying stuck in CREATE_IN_PROGRESS status

I am trying to customise new instances created within OpenStack Mitaka, using Heat templates. Using OS::Nova::Server with a script in user_data works fine.
Next the idea is to do additional steps via OS::Heat::SoftwareConfig.
The config is:
type: OS::Nova::Server
....
user_data_format: SOFTWARE_CONFIG
user_data:
  str_replace:
    template:
      get_file: vm_init1.sh
config1:
  type: OS::Heat::SoftwareConfig
  depends_on: vm
  properties:
    group: script
    config: |
      #!/bin/bash
      echo "Running $0 OS::Heat::SoftwareConfig look in /var/tmp/test_script.log" | tee /var/tmp/test_script.log
deploy:
  type: OS::Heat::SoftwareDeployment
  properties:
    config:
      get_resource: config1
    server:
      get_resource: vm
The instance is set up nicely (the script vm_init1.sh above runs fine) and one can log in, but the "config1" script above is never executed.
Analysis
- The base image is Ubuntu 16.04, created with disk-image-create and including "vm ubuntu os-collect-config os-refresh-config os-apply-config heat-config heat-config-script"
- From "openstack stack resource list $vm" one see that deployment never fisnihe, with OS::Heat::SoftwareDeployment status=CREATE_IN_PROGRESS
- "openstack stack resource show $vm config1" shows resource_status=CREATE_COMPLETE
- Within the VM, /var/log/cloud-init-output.log shows the output of the script vm_init1.sh, but no trace of the 'config1' script. The log os-apply-config.log is empty; is that normal?
How does one troubleshoot OS::Heat::SoftwareDeployment configs?
(I have read https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#software-deployment-resources)
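Not a confirmed fix, but two places that are commonly checked in this situation: the os-collect-config agent inside the VM (the deployment stays in CREATE_IN_PROGRESS until that agent polls Heat, runs the config, and signals the result back), and the deployment objects on the Heat side:
# On the VM: logs of the agent that fetches and runs SoftwareDeployment configs
sudo journalctl -u os-collect-config
# On the client side: inspect the deployments and their status
openstack software deployment list
openstack software deployment show <deployment-id>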

Network issue on scaling out a deployment on Cloudify

I am using Cloudify 3.3 and OpenStack Kilo.
After I have successfully installed a blueprint, I tried to scale out the host VM (associated with a floating IP W.X.Y.Z) using the default scale workflow. My expected result is that a new VM will be created with a new floating IP, say A.B.C.D, associated to it.
However, after the scale workflow has been completed, I found that the floating IP W.X.Y.Z has been disassociated from the original host VM while this floating IP has been associated to the newly created VM.
My testing "blueprint.yaml":
tosca_definitions_version: cloudify_dsl_1_2
imports:
- http://www.getcloudify.org/spec/cloudify/3.3/types.yaml
- http://www.getcloudify.org/spec/openstack-plugin/1.3/plugin.yaml
inputs:
  image:
    description: Openstack image ID
  flavor:
    description: Openstack flavor ID
  agent_user:
    description: agent username for connecting to the OS
    default: centos
node_templates:
  web_server_floating_ip:
    type: cloudify.openstack.nodes.FloatingIP
  web_server_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
      - remote_ip_prefix: 0.0.0.0/0
        port: 8080
  web_server:
    type: cloudify.openstack.nodes.Server
    properties:
      cloudify_agent:
        user: { get_input: agent_user }
      image: { get_input: image }
      flavor: { get_input: flavor }
    relationships:
    - type: cloudify.openstack.server_connected_to_floating_ip
      target: web_server_floating_ip
    - type: cloudify.openstack.server_connected_to_security_group
      target: web_server_security_group
I have tried to create a node_template with type cloudify.nodes.Tier and put all the things inside this container. However, the scale workflow cannot be executed normally in this case.
I wonder what I should do so that the newly created VM gets associated with a new floating IP?
Thanks, Sam
What you are describing is a "one to one" relationship between the node and the resources related to it.
Currently Cloudify does not support this kind of relationship and your blueprint is working just as it should.
This feature will be available as of Cloudify 3.4, which will be released in a few months.
