Cloudify 3.3 - Use of existing network (no Floating IP) - openstack

We want to deploy a Cloudify Manager into an OpenStack project in which there is only an external network (named public_net) with public IP addresses.
In other words, each Cloudify VM (both the Manager and the application VMs) should be attached directly to the external network (no Floating IP).
The Cloudify CLI machine, on the other hand, was created outside of OpenStack.
How should we configure the OpenStack plugin to implement this scenario?

When bootstrapped with the OpenStack blueprint, the Cloudify Manager is connected to two networks:
the network through which it can reach the OpenStack API (the external network), and
a management network that it creates.
If you want to use the external network as the management network, you should change the blueprint so that this network is declared as an external resource (use_external_resource: true) and set the network name to public_net.
Your blueprint will then look like this:
management_network:
  type: cloudify.openstack.nodes.Network
  properties:
    use_external_resource: true
    resource_id: public_net
    openstack_config: *openstack_configuration
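A server node in the same blueprint can then be attached to that network through a relationship. A rough sketch only (the my_server node name and its omitted properties are illustrative, not taken from your blueprint):

my_server:
  type: cloudify.openstack.nodes.Server
  properties:
    openstack_config: *openstack_configuration
  relationships:
    # connect the server's NIC to the existing external network
    - type: cloudify.relationships.connected_to
      target: management_network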

Related

OpenStack Kolla Ansible: reconfigure all compute nodes to enable provider networks

How do I reconfigure all compute nodes to enable provider networks in Kolla Ansible OpenStack?
As far as I know, OpenStack supports creating provider networks once the kolla-ansible deploy succeeds, because the relevant groups are enabled by default in the all-in-one or multinode inventory file, something like this:
# Neutron
[neutron-server:children]
control
[neutron-dhcp-agent:children]
neutron
[neutron-l3-agent:children]
neutron
Since neutron-l3-agent is enabled by default, you can also create self-service networks, not only provider networks.
IIUC, you can create a provider network and launch an instance on it, as the documentation describes:
$ openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
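Continuing that documented example, you would then add a subnet and boot an instance on the network. A sketch with illustrative names and addressing (adjust the range to your physical network):

$ openstack subnet create --network provider \
    --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    provider-subnet
$ openstack server create --image cirros --flavor m1.tiny \
    --network provider provider-instance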
One more thing: your network infrastructure (such as switches and routers) must be configured correctly before you use the provider network.
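If you later change Neutron-related settings in globals.yml, you can push them out with the reconfigure action, optionally limited to Neutron. A sketch assuming a multinode inventory file:

$ kolla-ansible -i ./multinode reconfigure --tags neutron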

Rancher: unable to connect to a cluster IP from another project

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have 2 projects:
In project 1, I have deployed one Apache container that listens on port 80 on a cluster IP.
[image: network isolation config]
In the second project, I am unable to connect to the project 1 cluster IP.
Does Project Network Isolation also block traffic to the cluster IP between the two projects?
Thank you.
Other answers have correctly pointed out how a ClusterIP works from the standpoint of Kubernetes alone; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project", which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects cannot exchange traffic on the cluster overlay network.
This creates the situation observed by the OP: when "Project Network Isolation" is enabled, a ClusterIP in one Project cannot exchange traffic with a traffic source in a different Project, even though they are in the same Kubernetes cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
and while that document seems to limit the scope to Pods, because Pods and ClusterIPs are allocated from the same network, it applies to ClusterIPs as well.
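For illustration, the isolation can be relaxed selectively with an extra NetworkPolicy in the target namespace. This is a sketch only, with assumed names: project-1-ns stands for the namespace holding the Apache pod, and p-xxxxx for the other project's ID as carried in Rancher's field.cattle.io/projectId namespace label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-other-project
  namespace: project-1-ns
spec:
  podSelector: {}                  # applies to every pod in project-1-ns
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              field.cattle.io/projectId: p-xxxxx   # illustrative project ID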
Kubernetes ClusterIPs are restricted to communication within the cluster. A good read on ClusterIP, NodePort and LoadBalancer can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other.
If your intention is to make services in two different clusters communicate, then use one of the methods below:
Deploy an overlay network for your node group
Cluster peering

Use more than one physical interface on devstack ocata

I have a barebones PC with 6 physical interfaces to 'play' around with devstack. I'd like to deploy VMs on it and connect them to different Ethernet interfaces. Currently, I'm only able to use the interface associated with the br-ex bridge. I've tried to define several OVS bridges, assign the physical interfaces to different bridges, and define a mapping with more than one bridge, but that doesn't seem to work.
Has anyone had any success with this?
Thanks, /PA
You will need to create "provider networks", which allow you to associate neutron networks with specific physical interfaces. You can find documentation on creating provider networks in the install guide.
Once everything is configured correctly, you can attach new Nova servers to your provider networks by passing the appropriate --network argument when you boot a new server.
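For illustration, here is a sketch of the extra configuration this implies, with all names assumed (a second bridge br-eth2 carrying one of the spare NICs, mapped to a physical network label physnet2); exact file locations can differ on devstack:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = public,physnet2

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = public:br-ex,physnet2:br-eth2

$ openstack network create --external --provider-network-type flat \
    --provider-physical-network physnet2 eth2-net
$ openstack server create --image cirros --flavor m1.tiny \
    --network eth2-net test-vm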

Does each Kaa node in the cluster need to have a separate public IP address?

I am trying to set up a Kaa cluster with 3 kaa-node servers. I would like to know whether each node (bootstrap service & operations service) must have its own public IP address; otherwise, will the endpoints be unable to reach them?
But I have only one public IP address and one domain name. Each node has its own local IP address. How can I set up this Kaa cluster?
On each node:
open the kaa-node.properties file in the /etc/kaa-node/conf directory;
change the thrift_host and transport_public_interface properties to the node's local IP address.
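For example (a sketch only; 10.0.0.11 stands for the node's actual local address):

# /etc/kaa-node/conf/kaa-node.properties (excerpt)
thrift_host=10.0.0.11
transport_public_interface=10.0.0.11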
Then you need to integrate kaa-node with the following services: Zookeeper, and the SQL and NoSQL databases.
For more information, refer to the following documentation page.
As an alternative, you can set up the Kaa cluster using a Docker environment; see the documentation page for that as well.
Please take into account that the Docker extension is supported from Kaa version 0.10.

How to integrate OpenDaylight with OpenStack? How does OpenDaylight operate when integrated with OpenStack?

I was trying to integrate ODL with Neutron by enabling the odl service in the local.conf file, but the ./stack.sh script did not complete properly.
How can I integrate the two?
And how does Neutron work with and without ODL?
Neutron provides "networking as a service" between interface devices managed by other OpenStack services, but it is essentially an API that translates requests into real changes in the core of the network; OpenDaylight is what actually manages the core of the network itself.
While the OpenDaylight controller provides several ways to integrate with OpenStack, you might use the VTN features available on the OpenDaylight controller. In this integration, VTN Manager works as the network service provider for OpenStack. The VTN Manager features enable OpenStack to work in a pure OpenFlow environment in which all switches in the data plane are OpenFlow switches.
So you can follow this tutorial: https://wiki.opendaylight.org/view/Release/Helium/VTN/User_Guide/OpenStack_Support#How_to_set_up_OpenStack_for_the_integration_with_VTN_Manager
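For reference, a hedged sketch of the local.conf lines typically used to pull OpenDaylight in through the networking-odl devstack plugin (check the plugin's README for the options valid in your release; the repository URL and ODL_MODE value below are assumptions to adapt):

[[local|localrc]]
enable_plugin networking-odl https://opendev.org/openstack/networking-odl
ODL_MODE=allinone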

Resources