Is there a way to specify a network ID for the network and subnet during creation?
neutron net-create test-net --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 22
neutron subnet-create test-net --name test-subnet --allocation-pool start=10.153.9.20,end=10.153.9.34 --gateway 10.153.8.1 10.153.8.0/22
The answer is simply no. This option is not available from the command-line client. For the supported parameters, refer to:
neutron net-create
neutron subnet-create
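If you want to double-check which options the client accepts, its built-in help lists them (assuming the legacy python-neutronclient is installed):

# print every supported option for each command
neutron help net-create
neutron help subnet-create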
As far as I can see, there is no hint that this is going to change in a future release.
If for any reason you still need to specify the network ID ad hoc, it is possible (I do not know exactly how) to do it manually by changing the values post-creation, but I discourage this.
I installed OpenStack using DevStack on a VirtualBox VM running Ubuntu 18.04. I am trying to create a provider network with the following command:
neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared
This command returns the following error:
neutronclient.common.exceptions.BadRequest: Invalid input for operation:
physical_network 'physnet_em1' unknown for VLAN provider network.
Neutron server returns request_ids: ['req-7a0bfe13-b4c3-4408-bc60-8d36e8bc3f9a']
I would like to know how to proceed.
You should use the openstack-client commands like openstack network create ..., because the per-component client commands, like your neutron net-create, are deprecated. There are a few special cases that are only possible with the client of a single component, but most functionality is covered by the openstack-client. Unfortunately, documentation often still uses the old commands, because many documents are not up to date.
To avoid the error you hit, you only need to remove --provider:physical_network=physnet_em1 and --provider:segmentation_id=500 from your command. The physical network and VLAN range should be defined in the ml2_conf.ini of the neutron-server, for example like this:
[ml2]
type_drivers = flat,vlan,vxlan
...
[ml2_type_vlan]
network_vlan_ranges = physnet_em1:171:280
...
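After editing ml2_conf.ini, the neutron-server has to pick the changes up; a restart is the usual way (a sketch assuming a systemd-managed host; under DevStack the unit name differs, e.g. devstack@q-svc):

# reload the ML2 configuration
sudo systemctl restart neutron-server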
So with neutron net-create mgmt --provider:network_type=vlan --shared it works in my test deployment (at least there is no error in the terminal; I have not tested the network connection). The openstack-command for this task would be openstack network create --provider-network-type vlan mgmt --share --external.
Normally, as far as I know, a flat network type is used for the provider network instead of vlan, because the provider network should normally not be directly connected to any VM. The other, non-provider networks can be vlan or vxlan and are then connected to the provider network via a neutron router. An openstack-command for this could be: openstack network create --provider-network-type flat --provider-physical-network physnet_em1 mgmt --share --external. For flat networks you have the possibility to define a provider physical network on the command line.
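For completeness, a flat provider network usually also needs a subnet before it is usable; a minimal sketch, assuming physnet_em1 and a placeholder 203.0.113.0/24 range:

openstack network create --provider-network-type flat --provider-physical-network physnet_em1 --share --external mgmt
openstack subnet create --network mgmt --subnet-range 203.0.113.0/24 --allocation-pool start=203.0.113.10,end=203.0.113.50 --gateway 203.0.113.1 mgmt-subnet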
Some documentation, such as https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html, also uses flat as the provider network type.
How can I set the hostname of a VM to the VM name in OpenStack?
I can set the hostname using cloud-init, but I do not know how to set it to a 'parameter', that is, how to make cloud-init / OpenStack pass in the VM name.
That's done automatically when the OpenStack metadata services are running. As a matter of fact, if your cloud images are prepared to use cloud-init, your OpenStack deployment has the metadata service running, and there is no "preserve_hostname: true" in your cloud-init config (normally at /etc/cloud/cloud.cfg).
Any name that you give to the instance will be passed as the hostname via the metadata service to your instance.
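In other words, the relevant cloud-init setting in /etc/cloud/cloud.cfg should look like this (the default on most Ubuntu cloud images):

# let cloud-init apply the instance name delivered by the metadata service
preserve_hostname: false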
Do the following test in any of your instances. Run the following command:
ec2metadata
If that fails, either the cloud-init software is incomplete, or the metadata service is not reachable from the instance.
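Alternatively, you can query the metadata service directly; the endpoint below is part of the EC2-compatible metadata API that OpenStack serves:

# fetch the hostname the instance receives from the metadata service
curl http://169.254.169.254/latest/meta-data/hostname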
I have 3 network nodes running neutron-server.
Only one of these nodes is attached to the external network.
I use ML2 with Open vSwitch.
In the bridge mappings of the node connected to the external network (which provides access via floating IPs), I have external_net mapped to the correct bridge.
On the other nodes I do not have this mapping defined, and I do not have the interfaces.
The issue I have is the following:
When I try to start a virtual machine that is connected to the external network, I see this error in the logs:
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node002 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node003 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
This happens on both nodes (node002 and node003) because they do not have this network defined. So is this a bug, or is such a setup not valid?
Thank you
In a typical OpenStack deployment you do not bind Nova instances directly to the external network. As you have already surmised, this won't work because that network isn't provisioned on the compute hosts.
Instead, you attach your instances to an internal network, and then you assign floating IP addresses from the external network using, e.g., nova floating-ip-create and nova floating-ip-associate.
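A minimal sketch of that flow, assuming an external network named external_net and an instance named test-vm (both placeholder names):

# allocate a floating IP from the external network's pool
nova floating-ip-create external_net
# associate the allocated address (example value) with the instance
nova floating-ip-associate test-vm 203.0.113.25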
An alternative solution is to use "provider external networks", an arrangement in which your Nova instances are attached directly to L2 networks with external connectivity, rather than relying on the floating-IP NAT solution described above.
The reason behind the error was bad configuration on the nodes that do not host the provider network, mainly in the ML2 configuration file ml2_conf.ini.
The parameter flat_networks should be set to the appropriate value on each node.
On the node which is connected to all flat networks (including the internal network), it should be set to:
flat_networks = *
On a node that does not host all flat networks (the provider network, for instance):
flat_networks = physical_internal
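For context, this parameter lives in the [ml2_type_flat] section of ml2_conf.ini, so on the externally connected node the relevant block would look like this:

[ml2_type_flat]
# '*' allows any physical network name to be used for flat networks on this node
flat_networks = *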
I believe it won't work. You need to have bound ports on all 3 of your network nodes.
A quick test would be to stop the neutron-server, neutron-dhcp-agent, neutron-l3-agent and neutron-metadata-agent services on the 2 network nodes that are not bound to external ports... and test again.
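On a systemd-based node, that test could look like this (the unit names below are common defaults and may differ per distribution):

# stop the neutron services on the two nodes without external connectivity
sudo systemctl stop neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent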
I am trying to get non-SR-IOV PCI passthrough working with OpenStack Liberty, but have not been successful.
These are the steps I followed:
1. Create pci_passthrough_whitelist in nova.conf of the compute node as pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"} (see the nova.conf sketch after this list).
2. As SR-IOV is not used, do not add sriovnicswitch as a mechanism driver in ml2, and do not do any ML2 SR-IOV configuration. Do not configure pci_passthrough_alias, as the alias does not support BDF (address).
3. Create a neutron net: neutron net-create test_os_nw --provider:physical_network test_phy_nw --provider:network_type flat. (Is flat OK, or should I use vlan or vxlan type networks?)
4. Create a port with direct vnic_type: neutron port-create test_os_nw --name pci.port --binding:vnic_type direct
5. Boot an instance with this port: nova boot --flavor m1.small --image ubuntu --nic port-id=$(neutron port-show pci.port -F id -f value) test.vm
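For reference, step 1 might look like this in nova.conf (a sketch assuming the Liberty-era layout, where the option sits in the [DEFAULT] section on the compute node):

[DEFAULT]
# whitelist the device at PCI address (BDF) 0000:89:00.0 for passthrough
pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"}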
Two questions in this regard:
1. Are the steps mentioned above correct, and am I missing anything?
2. Is the process to achieve PCI passthrough (non-SR-IOV) different from SR-IOV PCI passthrough? If it is different, can you please share a link to it (or, better, give a quick summary of the process)?
After some more experimenting and reading, I figured out that BDF-based passthrough is supported only for SR-IOV (as of Liberty).
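For non-SR-IOV devices, the usual route instead goes through a PCI alias keyed on vendor and product IDs rather than a BDF address; a hedged sketch (the IDs and alias name below are placeholders for the actual device):

# nova.conf on the compute node (pci_alias is also needed where nova-api runs):
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "my_pci_dev"}

# then request the device through a flavor extra spec instead of a Neutron port:
nova flavor-key m1.small set "pci_passthrough:alias"="my_pci_dev:1"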
First, I followed "The Riak Fast Track" tutorial exactly to build a four-node Riak cluster.
Then, I changed the 127.0.0.1 IP to 192.168.1.11 in the dev[1-4]/etc/app.config files, and reinstalled the cluster (deleted dev[1-4], fresh install).
But Riak tells me:
Node 'dev1@192.168.1.11' is not reachable when I issue dev2/bin/riak-admin cluster join dev1@192.168.1.11
What's wrong?
+1 to what Brian Roach said in the comment.
Make sure to update the node name and IP address in both the app.config files AND the vm.args files before you start up the node.
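For dev1, the vm.args change would look something like this (a sketch assuming the Fast Track dev layout; the IPs in app.config, e.g. the http and pb_ip entries, must match):

## dev1/etc/vm.args: the node name must use the new IP
-name dev1@192.168.1.11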
Make sure node dev1 is up and reachable before issuing a cluster join command from dev2.
Meaning, make sure dev1/bin/riak ping returns a 'pong', etc.
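Put together, the check before joining could look like this (paths as in the Fast Track tutorial):

# start and verify dev1, then join from dev2
dev1/bin/riak start
dev1/bin/riak ping    # should answer 'pong'
dev2/bin/riak start
dev2/bin/riak-admin cluster join dev1@192.168.1.11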