Error VPCResourceNotSpecified - cloudify

I'm getting an error when trying to deploy an instance in Amazon. I'm using Cloudify 3.2.1.
My blueprint:
...
node_templates:
  host:
    type: cloudify.aws.nodes.Instance
    properties:
      image_id: { get_input: image }
      instance_type: { get_input: size_wordpress }
...
My inputs:
...
size_wordpress: t2.small
...
Error:
<Code>VPCResourceNotSpecified</Code>
<Message>The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.</Message>
How can I solve this?

T2 instance types can only run inside a VPC, not in EC2-Classic.
You can either launch the instance into a VPC or use a different instance type.
EC2 instances
Cloudify VPC spec
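If you want to keep the t2 instance type, the blueprint needs a subnet for the instance to live in. A minimal sketch, assuming an AWS plugin version that ships VPC/subnet support (verify the type and relationship names against your plugin.yaml; the node names and CIDR blocks here are illustrative):

node_templates:
  my_vpc:
    type: cloudify.aws.nodes.VPC
    properties:
      cidr_block: 10.0.0.0/16
  my_subnet:
    type: cloudify.aws.nodes.Subnet
    properties:
      cidr_block: 10.0.0.0/24
    relationships:
      - type: cloudify.aws.relationships.subnet_contained_in_vpc
        target: my_vpc
  host:
    type: cloudify.aws.nodes.Instance
    properties:
      image_id: { get_input: image }
      instance_type: { get_input: size_wordpress }
    relationships:
      # placing the instance in the subnet satisfies the VPC requirement
      - type: cloudify.aws.relationships.instance_contained_in_subnet
        target: my_subnet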

Related

How to create a network of type vxlan using OS::Neutron::ProviderNet?

I'm trying to create a network of type vxlan using the OS::Neutron::ProviderNet resource.
Following is a snippet of the template:
lab_private_net:
  type: OS::Neutron::ProviderNet
  properties:
    network_type: vxlan
    segmentation_id: 101
    name:
      str_replace:
        template: $rand-private-network
        params:
          $rand: { get_resource: randstr }
But the Heat service answers with an error:
ERROR: segmentation_id not allowed for flat network type.
clean_up CreateStack: ERROR: segmentation_id not allowed for flat network type.
Openstack Release: Zed
Installation: openstack-ansible, source code.
Base OS: Rocky Linux 9
openstack client version: 6.0.0
heat_template_version: 2021-04-16
command used:
openstack stack create -t example_network_creation.yml my_stack -vvvv
Heat Response:
https://paste.openstack.org/raw/bjCG57OKSLPGypJpwn05/
Creating the OpenStack provider network using the openstack client works without a problem.
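For comparison, the working client-side call would be something along these lines (flag names per current openstack client releases; the network name is illustrative):

openstack network create lab-private-network \
  --provider-network-type vxlan \
  --provider-segment 101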

Ansible Hetzner Cloud - Create a server in private network

I am using Ansible to create a server in the Hetzner Cloud, the playbook reads:
- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{server_hostname}}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{server_location}}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{hetzner_secret}}"
    private_networks: ipfire
  register: server
My aim is to integrate the new server into the private network named 'ipfire' that I previously created. The server should not be accessible from the internet, so I have disabled IPv4 and IPv6. Instead, I'd like to reach the server by connecting to the private network 'ipfire' via OpenVPN and then ssh in from there.
Unfortunately, I get an error message as follows:
PLAY [Order servers] ********************************************************************************************************
TASK [hetznerserver : create the server at Hetzner] *************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (hetzner.hcloud.hcloud_server) module: private_networks. Supported parameters include: rebuild_protection, api_token, location, enable_ipv6, upgrade_disk, ipv4, endpoint, ipv6, firewalls, server_type, state, force, labels, ssh_keys, delete_protection, image, id, name, enable_ipv4, placement_group, force_upgrade, user_data, datacenter, rescue_mode, allow_deprecated_image, volumes, backups."}
PLAY RECAP ******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The private_networks parameter does not seem to work like this?
Error messages like Unsupported parameters for (<moduleName>) module: <givenParameter>. Supported parameters include: <supportedParametersList> usually mean that the installed version of the module does not recognize the parameter you passed.
Therefore one may need to look up the respective documentation, in this case hcloud_server module – Create and manage cloud servers on the Hetzner Cloud.
If the documentation shows that the parameters in question are available, this indicates
either a version mismatch, meaning the installed version of the module is too old and an update is necessary,
or a bug within the module code, in which case further debugging and investigation within the module code is necessary.
Code and Documentation Links
Ansible collection docs: hetzner.hcloud
GitHub: ansible-collections/hetzner.hcloud
After further investigation it might turn out that the parameter in question was introduced only recently, for example:
Github hetzner.hcloud Issue #150 "Unable to create cloud server without public ipv4 and ipv6"
Github hetzner.hcloud Pull #160 "Add possibility to specify private network when creating or updating servers"
which in your example case indicates that you'll need to update the Ansible collection in question, since the parameter wasn't available in your installed version of the module but was introduced as of v1.9.0.
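A sketch of the fix under that assumption: upgrade the collection, then pass the network as a list, since in the newer module versions private_networks takes a list of network names or IDs (check the documentation of your installed version):

ansible-galaxy collection install hetzner.hcloud --upgrade

- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{server_hostname}}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{server_location}}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{hetzner_secret}}"
    # private_networks requires hetzner.hcloud >= 1.9.0 and expects a list
    private_networks:
      - ipfire
  register: server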

How to solve the error `com.facebook.thrift.protocol.TProtocolException: Expected protocol id xxx` in NebulaGraph Exchange?

The following error occurs when NebulaGraph Exchange is running:
com.facebook.thrift.protocol.TProtocolException: Expected protocol id xxx
Attached is the configuration file for execution, with configuration instructions in the comments.
{
  # NebulaGraph configuration
  nebula: {
    address: {
      # Specify the IP addresses and ports for Graph and Meta services.
      # If there are multiple addresses, the format is "ip1:port","ip2:port","ip3:port".
      # Addresses are separated by commas.
      graph: ["127.0.0.1:9669"]
      meta: ["127.0.0.1:9559"]
    }
    # The account entered must have write permission for the NebulaGraph space.
    user: root
    pswd: nebula
    # Fill in the name of the graph space you want to write data to in NebulaGraph.
    space: basketballplayer
    connection: {
      timeout: 3000
      retry: 3
    }
    execution: {
      retry: 3
    }
    error: {
      max: 32
      output: /tmp/errors
    }
    rate: {
      limit: 1024
      timeout: 1000
    }
  }
}
Check that the NebulaGraph service ports are configured correctly.
For source, RPM, or DEB installations, configure the port number corresponding to --port in the configuration file of each service.
For Docker installations, configure the Docker-mapped port numbers as follows:
Execute docker-compose ps in the nebula-docker-compose directory, for example:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
nebula-docker-compose_graphd_1 /usr/local/nebula/bin/nebu ... Up (healthy) 0.0.0.0:33205->19669/tcp, 0.0.0.0:33204->19670/tcp, 0.0.0.0:9669->9669/tcp
nebula-docker-compose_metad0_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:33165->19559/tcp, 0.0.0.0:33162->19560/tcp, 0.0.0.0:33167->9559/tcp, 9560/tcp
nebula-docker-compose_metad1_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:33166->19559/tcp, 0.0.0.0:33163->19560/tcp, 0.0.0.0:33168->9559/tcp, 9560/tcp
nebula-docker-compose_metad2_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:33161->19559/tcp, 0.0.0.0:33160->19560/tcp, 0.0.0.0:33164->9559/tcp, 9560/tcp
nebula-docker-compose_storaged0_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:33180->19779/tcp, 0.0.0.0:33178->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:33183->9779/tcp, 9780/tcp
nebula-docker-compose_storaged1_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:33175->19779/tcp, 0.0.0.0:33172->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:33177->9779/tcp, 9780/tcp
nebula-docker-compose_storaged2_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:33184->19779/tcp, 0.0.0.0:33181->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:33185->9779/tcp, 9780/tcp
Check the Ports column to find the Docker-mapped port numbers, for example:
The port number available for the Graph service is 9669.
The port numbers for the Meta service are 33167, 33168, and 33164.
The port numbers for the Storage service are 33183, 33177, and 33185.
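With the mapped ports from this example output, the address block in the Exchange configuration above would then read (your port numbers will differ):

address: {
  graph: ["127.0.0.1:9669"]
  meta: ["127.0.0.1:33167","127.0.0.1:33168","127.0.0.1:33164"]
}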
This error means that the client's RPC call to the remote Thrift port failed. Normally it is either a wrong Thrift port (i.e. the graph client talking to metad/storaged) or a mismatched RPC version (i.e. a 2.6.0 graph client talking to graphd 3.x).

How do I add an nginx load balancer to a kubernetes cluster on Jelastic?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation creates a topology of its own. Therefore, there is no way I can pass in the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to allow defining what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is missing an example from which I could infer how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for this problem. I think my best option currently is to fork jelastic-jps/kubernetes and adapt the beforeinstall logic to my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it is created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer
actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the addNodes action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all the fields that can be used.
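Combining that skeleton with the balancer node from the question, the addBalancer action might look roughly like this (field names per the cloudscripting addNodes reference; verify against your platform version):

addBalancer:
  - install:
      envName: ${settings.envName}
      jps:
        type: update
        name: Add Balancer Node
        onInstall:
          - addNodes:
              # balancer node definition taken from the question
              - nodeGroup: bl
                nodeType: nginx-dockerized
                tag: 1.16.1
                displayName: Node balancing
                count: 1
                fixedCloudlets: 1
                cloudlets: 4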
The latest published version of K8s for Jelastic is v1.16.6, so you can use it in your manifest.
But please note that via this balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that IP rather than via the Shared Balancer as before.
In a nutshell, the Jelastic balancer instance currently doesn't provide the Kubernetes LoadBalancer service functionality, if that is what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to "cp" workers will be automatically usable by LoadBalancers created inside the Kubernetes cluster. We expect this functionality to land in 1.16.8+.
Please let us know if you have any further questions.

Network issue on scaling out a deployment on Cloudify

I am using Cloudify 3.3 and OpenStack Kilo.
After successfully installing a blueprint, I tried to scale out the host VM (associated with floating IP W.X.Y.Z) using the default scale workflow. My expected result was that a new VM would be created with a new floating IP, say A.B.C.D, associated with it.
However, after the scale workflow completed, I found that floating IP W.X.Y.Z had been disassociated from the original host VM and associated with the newly created VM instead.
My testing "blueprint.yaml":
tosca_definitions_version: cloudify_dsl_1_2
imports:
  - http://www.getcloudify.org/spec/cloudify/3.3/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.3/plugin.yaml
inputs:
  image:
    description: Openstack image ID
  flavor:
    description: Openstack flavor ID
  agent_user:
    description: agent username for connecting to the OS
    default: centos
node_templates:
  web_server_floating_ip:
    type: cloudify.openstack.nodes.FloatingIP
  web_server_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          port: 8080
  web_server:
    type: cloudify.openstack.nodes.Server
    properties:
      cloudify_agent:
        user: { get_input: agent_user }
      image: { get_input: image }
      flavor: { get_input: flavor }
    relationships:
      - type: cloudify.openstack.server_connected_to_floating_ip
        target: web_server_floating_ip
      - type: cloudify.openstack.server_connected_to_security_group
        target: web_server_security_group
I have tried creating a node_template of type cloudify.nodes.Tier and putting everything inside this container. However, the scale workflow could not be executed normally in that case.
What should I do so that the newly created VM is associated with a new floating IP?
Thanks, Sam
What you are describing is a "one to one" relationship between the node and the resources related to it.
Currently, Cloudify does not support this kind of relationship, and your blueprint is working just as it should.
This feature will be available as of Cloudify 3.4, which will be released in a few months.
