How to properly auto-scale groups of VMs in Cloudify?

I'm using Cloudify community version 19.01.24.
I'm trying to figure out how to auto-scale a group of two VMs. Here's what I came up with so far (irrelevant parts skipped):
monitored_vm1_port:
  type: cloudify.openstack.nodes.Port
  properties:
    openstack_config: *openstack_config
  relationships:
    - type: cloudify.relationships.contained_in
      target: proxy_server_network

monitored_vm2_port:
  type: cloudify.openstack.nodes.Port
  properties:
    openstack_config: *openstack_config
  relationships:
    - type: cloudify.relationships.contained_in
      target: proxy_server_network

monitored_vm1_host:
  type: cloudify.openstack.nodes.Server
  properties:
    image: { get_input: image }
    flavor: { get_input: flavor }
    resource_id: { concat: ['monitored_vm1-', { get_input: client_name }] }
    agent_config:
      user: { get_input: agent_user }
      key: { get_property: [ keypair, private_key_path ] }
  interfaces:
    cloudify.interfaces.monitoring_agent:
      install:
        implementation: diamond.diamond_agent.tasks.install
        inputs:
          diamond_config:
            interval: 10
      start: diamond.diamond_agent.tasks.start
      stop: diamond.diamond_agent.tasks.stop
      uninstall: diamond.diamond_agent.tasks.uninstall
    cloudify.interfaces.monitoring:
      start:
        implementation: diamond.diamond_agent.tasks.add_collectors
        inputs:
          collectors_config:
            NetworkCollector: {}
  relationships:
    - type: cloudify.openstack.server_connected_to_port
      target: monitored_vm1_port
    - type: cloudify.openstack.server_connected_to_keypair
      target: keypair

monitored_vm2_host:
  type: cloudify.openstack.nodes.Server
  properties:
    image: { get_input: image }
    flavor: { get_input: flavor }
    resource_id: { concat: ['monitored_vm2-', { get_input: client_name }] }
    agent_config:
      user: { get_input: agent_user }
      key: { get_property: [ keypair, private_key_path ] }
  interfaces:
    cloudify.interfaces.monitoring_agent:
      install:
        implementation: diamond.diamond_agent.tasks.install
        inputs:
          diamond_config:
            interval: 10
      start: diamond.diamond_agent.tasks.start
      stop: diamond.diamond_agent.tasks.stop
      uninstall: diamond.diamond_agent.tasks.uninstall
    cloudify.interfaces.monitoring:
      start:
        implementation: diamond.diamond_agent.tasks.add_collectors
        inputs:
          collectors_config:
            NetworkCollector: {}
  relationships:
    - type: cloudify.openstack.server_connected_to_port
      target: monitored_vm2_port
    - type: cloudify.openstack.server_connected_to_keypair
      target: keypair

groups:
  vm_group:
    members: [monitored_vm1_host, monitored_vm2_host]
  scale_up_group:
    members: [monitored_vm1_host, monitored_vm2_host]
    policies:
      auto_scale_up:
        type: scale_policy_type
        properties:
          policy_operates_on_group: true
          scale_limit: 2  # max additional instances
          scale_direction: '<'
          scale_threshold: 31457280
          service_selector: .*monitored_vm1_host.*network.eth0.rx.bit
          cooldown_time: 60
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1
                scalable_entity_name: vm_group
                scale_compute: true

policies:
  vm_group_scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [vm_group]
The blueprint gets deployed correctly, and the scale workflow is triggered according to the specified condition (traffic on the VM's interface), but it fails during creation of the new VM instances with the following error:
2019-11-18 14:54:46,591:ERROR: Task nova_plugin.server.create[f736f81c-7f8c-4f82-a280-8352c1d01bff] raised:
Traceback (most recent call last):
(...)
NonRecoverableError: Port 3b727b5e-a2ec-47cc-b711-37cb80a7b4e5 is still in use. [status_code=409]
It looks like Cloudify is trying to spawn the new instances using the existing ports, which is weird.
So I thought maybe I should explicitly place the VMs' ports in the scaling group as well, so they get replicated along with the VMs. I tried it like this:
vm_group:
  members: [monitored_vm1_host, monitored_vm1_port, monitored_vm2_host, monitored_vm2_port]
But in that case I get an error about missing object relationships, already at the blueprint validation stage:
Invalid blueprint - Node 'monitored_vm1_host' and 'monitored_vm1_port' belong to some shared group but they are not contained in any shared node, nor is any ancestor node of theirs.
in: /opt/manager/resources/blueprint-with-scaling-d79fed3d-0b3b-4459-a851-fedd9ecf50c6/blueprint-with-scaling.yaml
I've gone through the documentation and any examples I could find (there are not many), but they're unclear to me.
How can I get it to scale correctly?

You got the first error, as you said, because Cloudify tried to scale the VM and connect it to the port that was already bound to the first VM.
The second error means that you cannot scale the port if it does not depend on a node that is also scaled; this is to avoid scaling resources that can't be scaled.
The solution would be to have a node of type cloudify.nodes.Root and connect the port to it by relationship; if the port depends on this node and this node is part of the scale group, you will be able to scale.
Your blueprint would then have something like this:
my_relationship_node:
  type: cloudify.nodes.Root

port:
  type: cloudify.openstack.nodes.Port
  properties:
    openstack_config: *openstack_config
  relationships:
    - type: cloudify.relationships.connected_to
      target: public_network
    - type: cloudify.relationships.depends_on
      target: public_subnet
    - type: cloudify.openstack.port_connected_to_security_group
      target: security_group
    - type: cloudify.openstack.port_connected_to_floating_ip
      target: ip
    - type: cloudify.relationships.contained_in
      target: my_relationship_node
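Putting this together with the original blueprint, the groups/policies section would then presumably list the VMs, their ports, and the helper node. This is a rough sketch only, assuming each monitored port is re-parented onto such a Root node as shown above; the exact membership may still need adjusting for your topology:

groups:
  vm_group:
    members: [monitored_vm1_host, monitored_vm1_port, monitored_vm2_host, monitored_vm2_port, my_relationship_node]

policies:
  vm_group_scale_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [vm_group]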
I hope it helps.

Related

Openstack Heat Template output for powershell command

I created a Heat template with PowerShell commands, and the template stops getting executed after adding the deployment section. I added this section to get the output of the commands. Below is the template I am using:
heat_template_version: 2016-10-14
description: Template to install HyperV Feature in Server
resources:
  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: Net_External_16
  instance:
    type: OS::Nova::Server
    properties:
      name: machine2
      flavor: LARGE
      networks:
        - network: 71xxxx85-8a24-475b-9xxc-169xxxxxbb0
      security_groups:
        - default
        - all_open
      block_device_mapping_v2:
        - device_name: /dev/vpa
          volume_id: {get_resource: volume}
          delete_on_termination: "true"
  volume:
    type: OS::Cinder::Volume
    properties:
      size: 25
      image: 51xxxxxbe-44e6-4206-920c-xxxxxxxxxx
      name: {get_param: volumename}
  ps_script:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: |
            #ps1_sysnative
            $log = New-Item "C:\check_file.txt" -Type File
            start-sleep -s 20
            install-windowsfeature -Name DNS -IncludeManagementTools
            start-sleep -s 60
            $pass = "_parameter_1_"
            Add-content $log $pass
          params:
            _parameter_1_: {get_param: parameter1}
  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: {get_resource: floating_ip}
      port_id: {get_attr: [instance, addresses, 71xxxxx85-8a24-4xxb-9xxc-16xxxx84bb0, 0, port]}
  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ps_script}
      server: {get_resource: instance }
outputs:
  instance_ip:
    description: Ipaddress
    value: {get_attr: [instance,addresses]}
  result:
    description: Checkoutput
    value: {get_attr: [deployent]}
Has anybody tried this same method, or is there any other solution to get the output of PowerShell commands executed from a template?
Add the line user_data_format: SOFTWARE_CONFIG under the OS::Nova::Server properties:
instance:
  type: OS::Nova::Server
  properties:
    name: machine2
    flavor: LARGE
    networks:
      - network: 71xxxx85-8a24-475b-9xxc-169xxxxxbb0
    security_groups:
      - default
      - all_open
    block_device_mapping_v2:
      - device_name: /dev/vpa
        volume_id: {get_resource: volume}
        delete_on_termination: "true"
    user_data_format: SOFTWARE_CONFIG
This line is required when there is a separate resource for software configuration.
Also, there is a typo in the outputs section: deployent -> deployment
outputs:
  instance_ip:
    description: Ipaddress
    value: { get_attr: [instance, addresses] }
  result:
    description: Checkoutput
    value: { get_attr: [deployment] }
Note: add a space after { and before }. For example:
{ get_resource: volume }
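As a side note, if the goal is specifically the console output of the PowerShell script, OS::Heat::SoftwareDeployment also exposes deploy_stdout and deploy_stderr attributes, so the output could reference those instead of the whole resource. A minimal sketch, not tested against this template:

outputs:
  result:
    description: Checkoutput
    value: { get_attr: [deployment, deploy_stdout] }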

What is the OpenStack HEAT syntax for multiple fixed_ips as a parameter

I am trying to create a HEAT template that will use 'allowed_address_pairs' and neutron ports to support the concept of a virtual IP address shared between instances for an application similar to VRRP.
I've followed the examples from http://superuser.openstack.org/articles/implementing-high-availability-instances-with-neutron-using-vrrp and from https://github.com/nvpnathan/heat/blob/master/allowed-address-pairs.yaml to come up with my own template to achieve this, and it works great for a single virtual IP address.
Here is what that template looks like:
heat_template_version: 2013-05-23
description: Simple template using allowed_address_pairs for a virtual IP
parameters:
  image:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    default: "cirros"
  flavor:
    type: string
    label: Flavor
    description: Type of instance (flavor) to be used
    default: "t1.small"
  key:
    type: string
    label: Key name
    description: Name of key-pair to be used for compute instance
    default: "mykey"
  ext_network:
    type: string
    label: External network name or ID
    description: External network that can assign a floating IP
    default: "provider"
  test_virtual_ip:
    type: string
    label: Virtual IP address
    description: Virtual IP address that can be used on different instances
    default: "192.168.10.101"

resources:
  # Create the internal test network
  test_net:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: test_net

  # Create a subnet on the test network
  test_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: test_subnet
      cidr: 192.168.10.2/24
      enable_dhcp: true
      allocation_pools: [{end: 192.168.10.99, start: 192.168.10.10}]
      gateway_ip: 192.168.10.1
      network_id: { get_resource: test_net }

  # Create router for the test network
  test_router:
    type: OS::Neutron::Router
    properties:
      admin_state_up: true
      name: test_router
      external_gateway_info: { "network": { get_param: ext_network }}

  # Create router interface and attach to subnet
  test_router_itf:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: test_router }
      subnet_id: { get_resource: test_subnet }

  # Create extra port for a virtual IP address
  test_vip_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_net }
      fixed_ips:
        - ip_address: { get_param: test_virtual_ip }

  # Create instance ports that have an internal IP and the virtual IP
  instance1_test_vip_port:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: test_net }
      allowed_address_pairs:
        - ip_address: { get_param: test_virtual_ip }
      security_groups:
        - default

  # Create instances
  test_instance_1:
    type: OS::Nova::Server
    properties:
      name: instance1
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key }
      networks:
        - port: { get_resource: instance1_test_vip_port }
      user_data_format: RAW
      user_data: |
        #cloud-config
        password: mysecret
        chpasswd: { expire: False }
        ssh_pwauth: True
        final_message: "The system is up after $UPTIME sec"

outputs:
  instance1_ip:
    description: IP address of the first instance
    value: { get_attr: [test_instance_1, first_address] }
So far so good. Now I need to take this to the next level and assign multiple IP addresses that can be used as virtual IPs within an instance. The problem is that it is not known in advance how many will be needed when the instance is launched, so it needs to be a parameter and cannot simply be hard-coded as
- ip_address: {get_param: ip1}
- ip_address: {get_param: ip2}
and so on
In other words, the parameter test_virtual_ip needs to be a list of IP addresses rather than a single IP address, e.g. "192.168.10.101, 192.168.10.102, 192.168.10.103".
This impacts the definitions for test_vip_port and instance1_test_vip_port, but I can't figure out the correct syntax.
I tried this:
# Create extra port for a virtual IP address
test_vip_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: test_net }
    fixed_ips: [{ get_param: test_virtual_ip }]

# Create instance ports that have an internal IP and the virtual IP
instance1_test_vip_port:
  type: OS::Neutron::Port
  properties:
    admin_state_up: true
    network_id: { get_resource: test_net }
    allowed_address_pairs: [{ get_param: test_virtual_ip }]
    security_groups:
      - default
But I get the error "unicode object has no attribute get" when I try to launch the stack.
What is the proper syntax for providing a list of IP addresses as a parameter to OS::Neutron::Port fixed_ips and allowed_address_pairs?
The only solution I was able to get to work was to use the repeat/for_each construct and define the parameter as a comma_delimited_list as follows:
test_virtual_ip:
  type: comma_delimited_list
  label: Virtual IP address
  description: Virtual IP address that can be used on different instances
  default: "192.168.10.101,192.168.10.102"

test_vip_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: test_net }
    fixed_ips:
      repeat:
        for_each:
          <%ipaddr%>: { get_param: test_virtual_ip }
        template:
          ip_address: <%ipaddr%>
A couple of details for this to work:
- Your Heat template version must support the repeat/for_each construct; I used heat_template_version: 2016-04-08.
- Don't include any spaces in the list of IP addresses or you will get validation errors.
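The question also asked about allowed_address_pairs; the same repeat construct should apply there as well, since that property is likewise a list of maps with an ip_address key. A sketch, reusing the comma_delimited_list parameter and the port from the question:

instance1_test_vip_port:
  type: OS::Neutron::Port
  properties:
    admin_state_up: true
    network_id: { get_resource: test_net }
    allowed_address_pairs:
      repeat:
        for_each:
          <%ipaddr%>: { get_param: test_virtual_ip }
        template:
          ip_address: <%ipaddr%>
    security_groups:
      - default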
firewall_rules:
  - { get_resource: heat_firewall_tcp_22 }
  - { get_resource: heat_firewall_tcp_43 }
  - { get_resource: heat_firewall_tcp_53 }
  - { get_resource: heat_firewall_tcp_80 }
  - { get_resource: heat_firewall_tcp_443 }
This works fine for multiple entries in a resource of type OS::Neutron::FirewallPolicy.
  - { get_resource: heat_firewall_pol_web_1 }
  - { get_resource: heat_firewall_pol_dns_1 }
  - { get_resource: fw_pol_ssh_1 }
This does not work, throwing an "expecting some sort of string value" error for a resource of type OS::Neutron::Firewall. I am guessing there is no general standard for formatting multiple entries in YAML?
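For what it's worth, that error is probably not about YAML list formatting in general: OS::Neutron::FirewallPolicy accepts a list in its firewall_rules property, while OS::Neutron::Firewall takes a single policy reference in its firewall_policy_id string property, so a list is rejected there. A minimal sketch, assuming the policy resource names from the snippet above:

my_firewall:
  type: OS::Neutron::Firewall
  properties:
    name: web_firewall
    # firewall_policy_id expects one policy reference (a string/ID), not a list
    firewall_policy_id: { get_resource: heat_firewall_pol_web_1 }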

OpenStack how to route 2 subnet of the same Network

I'm relatively new to OpenStack and I cannot find how to route between 2 subnets of the same network.
My topology is the following:
1. 1 network,
2. 2 subnets in the network: sub1 (192.168.10.0/24) and sub2 (192.168.20.0/24)
An instance in sub1 cannot see another instance in sub2.
Q1: Is this normal? Why are subnets not routed by default?
I tried to add a router, but a router only seems possible between an internal network and a public network, not between subnets.
Q2: So what is the best solution for two instances in two subnets of the same network to communicate?
Many thanks in advance.
For one network to talk to a different network, you need a router. I don't know where you got the idea that routers only route between public and private networks; to the router, they are simply two different networks.
You have two networks: 192.168.10.0/24 and 192.168.20.0/24. For either network to communicate with the other, you need at least one router in between them. A single router is the simplest, since it will not involve routing protocols or statically defined routes.
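In Heat terms, the minimal expression of this answer is just a router with a RouterInterface on each subnet; a sketch, assuming subnet resources named sub1 and sub2 (the full stack in the next answer does the same thing and adds host_routes):

subnet_router:
  type: OS::Neutron::Router
  properties:
    name: subnet_router
    admin_state_up: true
router_itf_sub1:
  type: OS::Neutron::RouterInterface
  properties:
    router_id: { get_resource: subnet_router }
    subnet: { get_resource: sub1 }
router_itf_sub2:
  type: OS::Neutron::RouterInterface
  properties:
    router_id: { get_resource: subnet_router }
    subnet: { get_resource: sub2 }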
OK, after some tries I finally found a solution and I want to share it with you.
First, as Ron said above, a router is not necessarily a gateway to public networks.
To be precise, I want to have only one network with several subnets, not 2 networks.
The solution is to give a router an interface on each subnet AND to add routing information to each subnet using the 'host_routes' feature.
A Heat stack doing this is the following:
subnet_public:
  type: OS::Neutron::Subnet
  properties:
    name: PublicSubnet
    cidr: 192.168.11.0/24
    network: { get_resource: network_public }
    allocation_pools: [ { "start": '192.168.11.1', "end": '192.168.11.253' } ]
    dns_nameservers: [ 'xx.xx.xx.xx', ... ]
    enable_dhcp: True
    gateway_ip: 192.168.11.254
    host_routes: [ { 'destination': '192.168.12.0/24', 'nexthop': '192.168.11.254' }, { 'destination': '192.168.13.0/24', 'nexthop': '192.168.11.254' } ]
    ip_version: 4
    # tenant_id: { get_param: tenantId }
subnet_appli:
  type: OS::Neutron::Subnet
  properties:
    name: ApplicationSubnet
    cidr: 192.168.12.0/24
    network: { get_resource: network_public }
    allocation_pools: [ { "start": '192.168.12.1', "end": '192.168.12.253' } ]
    dns_nameservers: [ 'xx.xx.xx.xx', ... ]
    enable_dhcp: True
    gateway_ip: 192.168.12.254
    host_routes: [ { 'destination': '192.168.11.0/24', 'nexthop': '192.168.12.254' }, { 'destination': '192.168.13.0/24', 'nexthop': '192.168.12.254' } ]
    ip_version: 4
    # tenant_id: { get_param: tenantId }
subnet_database:
  type: OS::Neutron::Subnet
  properties:
    name: DatabaseSubnet
    cidr: 192.168.13.0/24
    network: { get_resource: network_public }
    allocation_pools: [ { "start": '192.168.13.1', "end": '192.168.13.253' } ]
    dns_nameservers: [ 'xx.xx.xx.xx', ... ]
    enable_dhcp: True
    gateway_ip: 192.168.13.254
    host_routes: [ { 'destination': '192.168.11.0/24', 'nexthop': '192.168.13.254' }, { 'destination': '192.168.12.0/24', 'nexthop': '192.168.13.254' } ]
    ip_version: 4
    # tenant_id: { get_param: tenantId }
#
# Router
router_nat:
  type: OS::Neutron::Router
  properties:
    name: routerNat
    admin_state_up: True
    external_gateway_info: { "network": 'ext-net' }
gateway_itf:
  type: OS::Neutron::RouterInterface
  depends_on: [ network_public, subnet_public, router_nat ]
  properties:
    router_id: { get_resource: router_nat }
    subnet: { get_resource: subnet_public }
router_appli_itf:
  type: OS::Neutron::RouterInterface
  depends_on: [ network_public, subnet_appli, router_nat ]
  properties:
    router_id: { get_resource: router_nat }
    subnet: { get_resource: subnet_appli }
router_database_itf:
  type: OS::Neutron::RouterInterface
  depends_on: [ network_public, subnet_database, router_nat ]
  properties:
    router_id: { get_resource: router_nat }
    subnet: { get_resource: subnet_database }

Openstack Heat template for flat network

I have configured a 2-node OpenStack (Icehouse) setup and Heat is also configured. When I create an instance using a HOT template, it launches successfully. But when I try to create the flat network using my YAML file, it shows the error below:
"Unable to create the network. No tenant network is available for allocation"
heat_template_version: 2013-05-23
description: Simple template to deploy a single compute instance
resources:
  provider_01:
    type: OS::Neutron::ProviderNet
    properties:
      physical_network: physnet2
      shared: true
      network_type: flat
  network_01:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: External2
      shared: true
      # admin tenant id
      tenant_id: 6ec23610836048ddb8f9294dbf89a41e
  subnet_01:
    type: OS::Neutron::Subnet
    properties:
      name: Subnet2
      network_id: { get_resource: network_01 }
      cidr: 192.168.56.0/24
      gateway_ip: 192.168.56.1
      allocation_pools: [{"start": 192.168.56.50, "end": 192.168.56.70}]
      enable_dhcp: true
  port_01:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: network_01 }
      #security_groups: "default"
heat_template_version: 2014-10-16
description: Template to create a tenant network along with router config
parameters:
  ImageId:
    type: string
    label: cirros-0.3.2-x86_64
    description: cirros-0.3.2-x86_64
resources:
  demo-net:
    type: OS::Neutron::Net
    properties:
      name: demo-net
  demo-subnet:
    type: OS::Neutron::Subnet
    properties:
      name: demo-subnet
      network_id: { get_resource: demo-net }
      cidr: 10.10.0.0/24
      gateway_ip: 10.10.0.1
  my_instance:
    type: OS::Nova::Server
    properties:
      name: "demo_test_nw_01"
      image: { get_param: ImageId }
      flavor: "m1.tiny"
      networks:
        - network: { get_resource: demo-net }

exclude fields in _source mapping with foqelasticabundle

I have the attachment plugin for Elasticsearch to index all the files stored in Document. I would like to exclude the file content from being stored in _source.
My config file looks like:
document:
  mappings:
    id: { index: not_analyzed }
    path: {}
    name: { boost: 5 }
    file:
      type: attachment
      store: "yes"
      fields:
        title: { store: "yes" }
        file: { term_vector: "with_positions_offsets", store: yes }
      analyzer: standard
      boost: 2
  persistence:
    driver: orm
    model: ACF\CaseBundle\Entity\Document
    listener:
    finder:
    provider:
      batch_size: 100
  _source:
    excludes:
      file: ~
When I run foq:elastica:populate, I still see the "file" attribute being stored in _source. I cannot figure out what is missing. Please help.
If anyone else comes across this problem: if you exclude your properties as follows, it should just work:
_source:
  excludes:
    [ file ]
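In other words, excludes needs to be a YAML list of property names rather than a map with null values. In context it would sit roughly like this under the type configuration (a sketch; the exact nesting may vary with the bundle version):

document:
  mappings:
    # ... field mappings as in the question ...
  _source:
    excludes: [ file ]
  persistence:
    driver: orm
    model: ACF\CaseBundle\Entity\Document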
