Is there a way to assign multiple IPs from a subnet to a server using Heat templates? I defined a port resource with fixed IPs, as shown below, and then used it to create a port on an OS::Nova::Server, but only one IP from the subnet is assigned. Is there a way to assign two IPs from the subnet?
resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      network: "a_network"
      fixed_ips: [
        {
          "subnet_id" : "a_subnet_id",
          "subnet_id" : "a_subnet_id"
        }
      ]
Running on our system, I was able to use something like this to get a couple of IP addresses:
resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      network_id: "a_network"
      fixed_ips:
        - subnet_id: a_subnet_id
        - subnet_id: a_subnet_id
I think the problem is that both of your subnet_id definitions are inside the same map. (NB: there seem to have been property name changes in later releases that drop the _id suffix.)
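For reference, on newer releases the same port would look something like the sketch below; the network/subnet property names without the _id suffix replaced the older ones, so check the resource reference for your release:
resources:
  a_port:
    type: OS::Neutron::Port
    properties:
      network: "a_network"
      fixed_ips:
        - subnet: a_subnet_id
        - subnet: a_subnet_id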
I have made a Heat template that starts up some servers and installs Puppet. In the template I set each server's hostname like this:
properties:
  name: dir
Some servers actually get the hostname I set, but a few end up with ".novalocal" appended to it.
An example: for one server I have
properties:
  name: server1
yet the actual hostname is server1.novalocal.
Any idea what causes this? I am at a total loss.
Reference:
Neutron Network DNS Suffix via DHCP
Nova appends the default domain name .novalocal to the hostname. This can be resolved by setting dhcp_domain to an empty string in nova.conf on the Control node.
# This option allows you to specify the domain for the DHCP server.
#
# Possible values:
#
# * Any string that is a valid domain name.
#
#dhcp_domain = novalocal
dhcp_domain =
FYI, as @Дмитрий Работягов mentioned, this option has been moved to the [api] section; here is change 480616 on the OpenStack code review system.
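On those newer releases the equivalent setting would therefore sit in the [api] section of nova.conf; a minimal sketch of that variant:
[api]
# leave empty so Nova does not append a domain suffix to the hostname
dhcp_domain =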
I am trying to run the Azure Forms Recognizer Label Tool in Azure Container instance.
I have followed the instructions given here.
I was able to deploy the container image but when I try to start it, it terminates with the following message:
Missing EULA=accept command line option. You must provide this to continue.
This is quite surprising, because this option is specified in my YAML file (see below).
What can I do to fix this?
My YAML file:
apiVersion: 2018-10-01
location: West Europe
name: renecognitiveservice
imageRegistryCredentials: # This is required when pulling a non-public image
  - server: mcr.microsoft.com
    username: xxx
    password: xxx
properties:
  containers:
    - name: xxxeamlabelingtool
      properties:
        image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
        environmentVariables: # These env vars are required
          - name: eula
            value: accept
          - name: billing
            value: https://rk-formsrecognizer.cognitiveservices.azure.com/
          - name: apikey
            value: xxx
        resources:
          requests:
            cpu: 2 # Always refer to recommended minimal resources
            memoryInGb: 4 # Always refer to recommended minimal resources
        ports:
          - port: 5000
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
Apparently you can run it with command:
"command": [
"./run.sh", "eula=accept"
],
It worked from the portal.
https://github.com/MicrosoftDocs/azure-docs/issues/46623
This is what you want to add in the Azure portal while creating the container instance.
You will find this in the "Advanced" tab.
Afterwards you can access the IP address of that instance to open the label-tool.
"./run.sh", "eula=accept"
I'm getting an error when trying to deploy an instance in Amazon. I'm using Cloudify 3.2.1.
My blueprint:
...
node_templates:
  host:
    type: cloudify.aws.nodes.Instance
    properties:
      image_id: { get_input: image }
      instance_type: { get_input: size_wordpress }
...
My inputs:
...
size_wordpress: t2.small
...
Error:
<Code>VPCResourceNotSpecified</Code>
<Message>The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.</Message>
How can I solve this?
T2 instance types require a VPC; they cannot run in EC2-Classic.
You can either use a VPC or choose a different instance type.
EC2 instances
Cloudify VPC spec
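If you just want to get past the error without modelling a VPC in the blueprint, switching the input to a non-T2 instance type is enough. A minimal sketch of the inputs file (m3.medium is only an illustration; any non-T2 type available in your region works):
...
size_wordpress: m3.medium # T2 types require a VPC; this one does not
...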
If this explanation exists somewhere, I've spent 3 months trying to find it and failed. I come from a Puppet background, but for various reasons I really want to try replacing it with Salt.
I've got a basic setup working and I can write my own states and see them work without any issues; the documentation on that is pretty clear. Where I'm stuck is implementing a community Salt formula. I can include a formula with its basic setup and it works fine, but I cannot figure out how to override its defaults from my pillar data. This seems to be where the Salt documentation is weakest.
The documentation states that you should check the pillar.example for how to configure the formula. The pillar.example shows the configuration itself clearly, but neither the documentation nor the pillar.example tells you how to include it in your pillar data.
In my case I'm trying to use the snmp-formula. I've got a basic setup for my salt file structure, which you can see from my file roots:
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula
Inside base I have two pillars:
base/
  top.sls
  common.sls
top.sls is very simple:
base:
  '*':
    - common
common.sls has all common config:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop

snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
In common.sls I've included the snmp-formula and then followed the pillar.example from the formula to customize the configuration. However, when I run a test with this I get the following error:
Data failed to compile:
----------
Detected conflicting IDs, SLS IDs need to be globally unique.
The conflicting ID is 'snmp' and is found in SLS 'base:common' and SLS 'base:snmp'
I'm not sure how to proceed with this. It seems like I would have to modify the community formula directly to achieve what I want, which feels like the wrong idea. I want to keep the community formula up to date with its repository, and coming from a Puppet perspective I should be overriding a module's defaults as needed, not modifying the module directly.
Can someone please make the missing connection for me? How do I implement the pillar.example?
The Salt formula in question is here:
https://github.com/saltstack-formulas/snmp-formula
I have finally figured this out, and it came down to a fundamental misunderstanding of the difference between 'file_roots' and 'pillar_roots', as well as 'pillars' vs. 'states'. I don't feel the Getting Started guide is very clear about these, so I'll explain, but first the answer.
ANSWER:
To implement the above pillar.example, simply create a dedicated snmp.sls file in your 'base' environment in your pillar data:
/srv/pillar/snmp.sls:
snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
        mask: 80
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
Your pillar_roots must also include a top.sls (not to be confused with the top.sls in your file_roots for your states), like this:
/srv/pillar/top.sls:
base:
  '*':
    - snmp
IMPORTANT: This directory and this top.sls for pillar data must not live in, or be included by, your file_roots! This is where I was going wrong. For a complete picture, this is the config I now have:
/etc/salt/master: (snippet)
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula

pillar_roots:
  base:
    - /srv/pillar
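With that in place, the pillar tree is just these two files (the state tree under /srv/salt/base stays as it was):
/srv/pillar/
  top.sls
  snmp.sls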
Inside /srv/salt/base I have a top.sls which includes common.sls for the 'base' environment. This is where the snmp-formula and its states are included.
/srv/salt/base/top.sls:
base:
  '*':
    - common
/srv/salt/base/common.sls:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop
Now the snmp key in the pillar data no longer conflicts with the ID of the snmp state that the formula brings in through the state data.
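As a quick sanity check (assuming the '*' targeting from the pillar top.sls above), you can refresh the pillar and confirm the formula sees the data before applying the states:
salt '*' saltutil.refresh_pillar
salt '*' pillar.get snmp:conf:location
salt '*' state.highstate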
I am using Cloudify 3.3 and OpenStack Kilo.
After successfully installing a blueprint, I tried to scale out the host VM (associated with floating IP W.X.Y.Z) using the default scale workflow. My expected result was that a new VM would be created with a new floating IP, say A.B.C.D, associated with it.
However, after the scale workflow completed, I found that the floating IP W.X.Y.Z had been disassociated from the original host VM and associated with the newly created VM instead.
My testing "blueprint.yaml":
tosca_definitions_version: cloudify_dsl_1_2

imports:
  - http://www.getcloudify.org/spec/cloudify/3.3/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.3/plugin.yaml

inputs:
  image:
    description: Openstack image ID
  flavor:
    description: Openstack flavor ID
  agent_user:
    description: agent username for connecting to the OS
    default: centos

node_templates:
  web_server_floating_ip:
    type: cloudify.openstack.nodes.FloatingIP

  web_server_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          port: 8080

  web_server:
    type: cloudify.openstack.nodes.Server
    properties:
      cloudify_agent:
        user: { get_input: agent_user }
      image: { get_input: image }
      flavor: { get_input: flavor }
    relationships:
      - type: cloudify.openstack.server_connected_to_floating_ip
        target: web_server_floating_ip
      - type: cloudify.openstack.server_connected_to_security_group
        target: web_server_security_group
I have tried creating a node_template of type cloudify.nodes.Tier and putting everything inside this container, but the scale workflow could not be executed normally in that case.
What should I do so that the newly created VM gets associated with a new floating IP?
Thanks, Sam
What you are describing is a "one to one" relationship between the node and the resources related to it.
Currently Cloudify does not support this kind of relationship and your blueprint is working just as it should.
This feature will be available as of Cloudify 3.4, which will be released in a few months.