I am using Cloudify 3.3 and OpenStack Kilo.
After I successfully installed a blueprint, I tried to scale out the host VM (associated with a floating IP W.X.Y.Z) using the default scale workflow. My expected result was that a new VM would be created with a new floating IP, say A.B.C.D, associated with it.
However, after the scale workflow completed, I found that the floating IP W.X.Y.Z had been disassociated from the original host VM and associated with the newly created VM instead.
My testing "blueprint.yaml":
tosca_definitions_version: cloudify_dsl_1_2
imports:
  - http://www.getcloudify.org/spec/cloudify/3.3/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.3/plugin.yaml
inputs:
  image:
    description: Openstack image ID
  flavor:
    description: Openstack flavor ID
  agent_user:
    description: agent username for connecting to the OS
    default: centos
node_templates:
  web_server_floating_ip:
    type: cloudify.openstack.nodes.FloatingIP
  web_server_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          port: 8080
  web_server:
    type: cloudify.openstack.nodes.Server
    properties:
      cloudify_agent:
        user: { get_input: agent_user }
      image: { get_input: image }
      flavor: { get_input: flavor }
    relationships:
      - type: cloudify.openstack.server_connected_to_floating_ip
        target: web_server_floating_ip
      - type: cloudify.openstack.server_connected_to_security_group
        target: web_server_security_group
I have tried to create a node_template of type cloudify.nodes.Tier and put everything inside this container. However, the scale workflow cannot be executed normally in that case.
What should I do so that the newly created VM is associated with a new floating IP?
Thanks, Sam
What you are describing is a "one-to-one" relationship between the node and the resources related to it.
Currently Cloudify does not support this kind of relationship, and your blueprint is working just as it should.
This feature will be available as of Cloudify 3.4, which will be released in a few months.
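For reference, the feature is expected to arrive as "scaling groups", declared with the DSL 1.3 groups/policies syntax. A sketch of how the blueprint above might use it (treat the exact syntax as an assumption until 3.4 ships):

groups:
  vm_and_ip:
    members: [web_server, web_server_floating_ip]
policies:
  scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [vm_and_ip]

Scaling the vm_and_ip group would then clone the floating IP node together with the server, so each new VM gets its own address.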
I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation creates a topology of its own, so there is no way I can pass it the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to allow defining what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believed would solve my problem. However, the documentation is not very clear; it is missing an example from which I could work out how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for this problem. I think my best option currently is to fork jelastic-jps/kubernetes and adapt the beforeinstall section to my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it is created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer

actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all fields that can be used.
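Filling in the elided part above, the addNodes block could look something like this sketch (the field set mirrors the env.topology snippet from your question; verify the exact field names, e.g. flexibleCloudlets vs cloudlets, against the addNodes reference):

- addNodes:
    - nodeType: nginx-dockerized
      nodeGroup: bl
      tag: 1.16.1
      displayName: Node balancing
      count: 1
      fixedCloudlets: 1
      flexibleCloudlets: 4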
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this Balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that public IP instead of the Shared Balancers as before.
In a nutshell, the Jelastic Balancer instance currently doesn't provide the Kubernetes LoadBalancer service functionality, if that is what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" workers can then be used automatically by LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
Please let us know if you have any further questions.
I am trying to run the Azure Forms Recognizer Label Tool in Azure Container instance.
I have followed the instructions given here.
I was able to deploy the container image but when I try to start it, it terminates with the following message:
Missing EULA=accept command line option. You must provide this to continue.
This is quite surprising, because this option has been specified in my YAML file (see below).
What can I do to fix this?
My YAML file:
apiVersion: 2018-10-01
location: West Europe
name: renecognitiveservice
imageRegistryCredentials: # This is required when pulling a non-public image
  - server: mcr.microsoft.com
    username: xxx
    password: xxx
properties:
  containers:
    - name: xxxeamlabelingtool
      properties:
        image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
        environmentVariables: # These env vars are required
          - name: eula
            value: accept
          - name: billing
            value: https://rk-formsrecognizer.cognitiveservices.azure.com/
          - name: apikey
            value: xxx
        resources:
          requests:
            cpu: 2 # Always refer to recommended minimal resources
            memoryInGb: 4 # Always refer to recommended minimal resources
        ports:
          - port: 5000
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
Apparently you can run it with the command:
"command": [
"./run.sh", "eula=accept"
],
Worked from the portal
https://github.com/MicrosoftDocs/azure-docs/issues/46623
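In the YAML from the question, that override would correspond to a command entry on the container properties; a sketch (ACI container properties accept a command string array; the remaining fields are elided):

properties:
  containers:
    - name: xxxeamlabelingtool
      properties:
        image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
        command: ["./run.sh", "eula=accept"] # pass the EULA acceptance as a startup argument
        ...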
This is what you want to add in the Azure portal while creating the container instance; you will find the field in the "Advanced" tab:
"./run.sh", "eula=accept"
Afterwards you can access the IP address of that instance to open the label-tool.
I am trying to customise new instances created within OpenStack Mitaka, using Heat templates. Using OS::Nova::Server with a script in user_data works fine.
Next, the idea is to do additional steps via OS::Heat::SoftwareConfig.
The config is:
vm:
  type: OS::Nova::Server
  ....
  user_data_format: SOFTWARE_CONFIG
  user_data:
    str_replace:
      template:
        get_file: vm_init1.sh

config1:
  type: OS::Heat::SoftwareConfig
  depends_on: vm
  properties:
    group: script
    config: |
      #!/bin/bash
      echo "Running $0 OS::Heat::SoftwareConfig look in /var/tmp/test_script.log" | tee /var/tmp/test_script.log

deploy:
  type: OS::Heat::SoftwareDeployment
  properties:
    config:
      get_resource: config1
    server:
      get_resource: vm
The instance is set up nicely (the script vm_init1.sh above runs fine) and one can log in, but the "config1" script above is never executed.
Analysis
- The base image is Ubuntu 16.04, created with disk-image-create and including "vm ubuntu os-collect-config os-refresh-config os-apply-config heat-config heat-config-script"
- From "openstack stack resource list $vm" one see that deployment never fisnihe, with OS::Heat::SoftwareDeployment status=CREATE_IN_PROGRESS
- "openstack stack resource show $vm config1" shows resource_status=CREATE_COMPLETE
- Within the VM, /var/log/cloud-init-output.log shows the output of the script vm_init1.sh, but no trace of the 'config1' script. The os-apply-config.log log is empty; is that normal?
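For completeness, the exact commands behind the checks above (a sketch; the systemd unit name for os-collect-config is my assumption, based on the elements baked into the image):

# on the client: watch the stack's deployment resources
openstack stack resource list $vm
openstack stack resource show $vm deploy
# inside the VM: verify os-collect-config is polling Heat and applying configs
sudo systemctl status os-collect-config
sudo journalctl -u os-collect-config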
How does one troubleshoot OS::Heat::SoftwareDeployment configs?
(I have read https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#software-deployment-resources)
I'm getting an error when trying to deploy an instance in Amazon. I'm using Cloudify 3.2.1.
My blueprint:
...
node_templates:
  host:
    type: cloudify.aws.nodes.Instance
    properties:
      image_id: { get_input: image }
      instance_type: { get_input: size_wordpress }
...
My inputs:
...
size_wordpress: t2.small
...
Error:
<Code>VPCResourceNotSpecified</Code>
<Message>The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.</Message>
How can I solve this?
T2 instance types require a VPC; they cannot run in EC2-Classic.
You can either launch the instance in a VPC or use a different instance type.
EC2 instances
Cloudify VPC spec
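For the VPC route, a minimal sketch of the blueprint changes (the type and relationship names come from the cloudify-aws-plugin's VPC support; treat them as an assumption to verify against the plugin version you have installed):

node_templates:
  vpc:
    type: cloudify.aws.nodes.VPC
    properties:
      cidr_block: 10.0.0.0/16
  subnet:
    type: cloudify.aws.nodes.Subnet
    properties:
      cidr_block: 10.0.0.0/24
    relationships:
      - type: cloudify.aws.relationships.subnet_contained_in_vpc
        target: vpc
  host:
    type: cloudify.aws.nodes.Instance
    properties:
      image_id: { get_input: image }
      instance_type: { get_input: size_wordpress }
    relationships:
      - type: cloudify.aws.relationships.instance_contained_in_subnet
        target: subnet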
Is there a way to assign multiple IPs from a subnet to a server using Heat templates? I defined a resource for a port using fixed IPs, like below. I then used this resource to create a port on an OS::Nova::Server, but I see only one IP from the subnet assigned. Is there a way to assign two IPs from the subnet?
resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      network: "a_network"
      fixed_ips: [
        {
          "subnet_id": "a_subnet_id",
          "subnet_id": "a_subnet_id"
        }
      ]
Running on our system, I was able to use something like this to get a couple of IP addresses:
resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      network_id: "a_network"
      fixed_ips:
        - subnet_id: a_subnet_id
        - subnet_id: a_subnet_id
I think the problem you've got is that both your subnet_id definitions are within the same map. (NB: there seem to have been some property name changes to drop the _id suffix in later releases.)
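To confirm that both addresses were actually allocated, an outputs entry like this sketch can help (fixed_ips is a standard attribute of OS::Neutron::Port):

outputs:
  a_port_ips:
    description: All fixed IPs assigned to the port
    value: { get_attr: [ a_port, fixed_ips ] }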