OpenStack Cloud-Init Set Hostname to VM-Name - openstack

How can I set the hostname of a VM to the VM-Name in OpenStack?
I can set the hostname using cloud-init, but I do not know how to set it to a 'parameter', that is, how to make cloud-init / OpenStack pass in the VM-Name.

That's done automatically when the OpenStack metadata service is running. As a matter of fact, if your cloud images are prepared to use cloud-init, your OpenStack deployment has the metadata service running, and there is no "preserve_hostname: True" in your cloud-init config (normally at /etc/cloud/cloud.cfg), then any name that you give to the instance will be passed as the "hostname" via the metadata service to your instance.
Do the following test in any of your instances. Run the following command:
ec2metadata
If that fails, either the cloud-init software is incomplete, or the metadata service is not reachable from the instances.
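For reference, you can also check both things by hand; the IP below is the standard metadata endpoint and preserve_hostname is the cloud-init option mentioned above:
curl http://169.254.169.254/latest/meta-data/hostname
grep preserve_hostname /etc/cloud/cloud.cfg
The first command should print the name you gave the instance; the second should either return nothing or show preserve_hostname: false.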

Related

Neutron - Invalid input for operation: physical_network 'physnet_em1' unknown for VLAN provider network

I installed Openstack using Devstack on a VirtualBox VM running Ubuntu 18.04. I am trying to create a provider network with the following command:
neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared
This command returns the following error:
neutronclient.common.exceptions.BadRequest: Invalid input for operation:
physical_network 'physnet_em1' unknown for VLAN provider network.
Neutron server returns request_ids: ['req-7a0bfe13-b4c3-4408-bc60-8d36e8bc3f9a']
I would like to know how to proceed.
You should use the openstack client commands like openstack network create ..., because the per-project client commands, like your neutron net-create, are deprecated. There are a few special cases that are only possible with the client of the individual component, but most functionality is covered by the openstack client. Unfortunately, documentation often still shows the old commands, because many documents are not up to date.
To avoid the error you had, you only need to remove --provider:physical_network=physnet_em1 and --provider:segmentation_id=500 from your command. The physical network and VLAN range should be defined in the ml2_conf.ini of the neutron server, for example like this:
[ml2]
type_drivers = flat,vlan,vxlan
...
[ml2_type_vlan]
network_vlan_ranges = physnet_em1:171:280
...
So with neutron net-create mgmt --provider:network_type=vlan --shared it works in my test deployment (at least there is no error in the terminal; I have not tested the network connection). The openstack command for this task would be openstack network create --provider-network-type vlan mgmt --share --external.
Normally, as far as I know, a flat network type is used for the provider network instead of vlan, because the provider network should normally not be directly connected to any VM. The other, non-provider networks can be vlan or vxlan and are then connected to the provider network with a neutron router. An openstack command for this could be: openstack network create --provider-network-type flat --provider-physical-network physnet_em1 mgmt --share --external. For flat networks you have the possibility to define a provider physical network on the command line.
Some documentation, such as https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html, also uses a flat network as the provider network type.
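Putting it together, a typical flat provider network plus subnet with the unified client might look roughly like this (the network/subnet names and the address range are just placeholders):
openstack network create --provider-network-type flat --provider-physical-network physnet_em1 --share --external provider
openstack subnet create --network provider --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 provider-subnet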

Kaa node service fails to start mongodb and zookeeper

We are trying to setup a Single Node Kaa server(version 0.10.0) in an Ubuntu 16.04 machine.
Followed the documentation given here
We were unable to connect to the admin UI after starting the kaa node service.
On investigating further we could see that the MongoDB and ZooKeeper services were not started, so we manually started those services. After that we were able to connect to the Kaa admin UI. Do we need any additional steps to get these services running on kaa-node start?
I set up kaaproject with the guide for my Ubuntu 16.04.1 LTS VM, and ZooKeeper was not running by default on my server either, so I had to install the daemon (which also starts ZooKeeper on boot):
sudo apt-get install zookeeperd
Check if zookeeper is running:
netstat -ntlp | grep 2181
This should show a java process listening on port 2181 (ZooKeeper runs inside the JVM).
With mongodb I had the problem that there was not enough space available for the journal files. I fixed this by increasing the available disk space and setting smallfiles=true in /etc/mongod.conf.
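In the old-style (non-YAML) /etc/mongod.conf that is just one extra line, followed by a service restart (the service may be named mongodb or mongod depending on the package):
smallfiles=true
sudo service mongodb restart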
You probably have some trouble with the service configuration. Check whether auto-start is enabled for MongoDB / ZooKeeper with the following command:
$ systemctl is-enabled ${service-name}
if the output is:
disabled
then auto-start is disabled for the specified service, and you should run the following to enable it:
$ systemctl enable ${service-name}
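If that is the case, enabling and starting both services would look roughly like this (service names as installed, e.g. mongodb/mongod and zookeeper):
sudo systemctl enable mongodb zookeeper
sudo systemctl start mongodb zookeeper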

bdf based pci-passthrough (non SRIOV) using OpenStack Liberty

I am trying to get non-SRIOV PCI passthrough working with OpenStack Liberty, but have not been successful.
These are the steps I followed:
Create pci_passthrough_whitelist in nova.conf of the compute node as pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"}
As SRIOV is not used, do not add sriovnicswitch as a mechanism driver in ml2, and do not do any ml2 SRIOV configuration. Do not configure pci_passthrough_alias, as the alias does not support a BDF (address).
Create a neutron net: neutron net-create --name test_os_nw --provider:physical_network test_phy_nw --provider:physical_network_type flat (is flat OK, or should I use vlan or vxlan type networks?)
Create a port with direct vnic_type: neutron port-create --name pci.port --binding:vnic_type direct
Boot an instance with this port: nova boot --flavor m1.small --image ubuntu --nic port-id=$(neutron port-show pci.port -F id -f value) test.vm
Two questions in this regard:
Are the steps mentioned above correct, and am I missing anything?
Is the process to achieve PCI passthrough (non-SRIOV) different from SRIOV PCI passthrough? If it is different, can you please share a link to it (or, better, give a quick summary of the process)?
After some more experimenting and reading, I figured out that BDF-based passthrough is supported only for SRIOV (as of Liberty).
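For completeness, the usual non-SRIOV route in Liberty matches devices by vendor/product ID and an alias rather than a BDF; a rough sketch (the IDs, alias name, and flavor below are placeholders):
# nova.conf on the compute node
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
# nova.conf on the controller (nova-api / scheduler)
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "test_pci"}
# request the device via a flavor extra spec
nova flavor-key m1.small set "pci_passthrough:alias"="test_pci:1"
The scheduler also needs PciPassthroughFilter enabled for this to take effect.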

SaltStack : is it possible to apply states on the master and if so, how?

I am a total beginner with SaltStack but I have managed to setup some states on a machine and run them on a minion.
What I have right now is a Debian machine setup with salt-master as well as another Debian setup as salt-minion.
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
Is there a command I can run to apply the states on the master? (so far I was unable to find it)
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Thanks!
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
You can do that by following these steps (a command-level sketch follows the list):
Install salt-minion on your development machine
Edit /etc/salt/minion to point to your master (vi /etc/salt/minion and change the following : master: salt -> master: 127.0.0.1)
(optional) Edit /etc/salt/minion_id to something that is meaningful to you
Start up your salt-minion
Use salt-key to accept your minion's key
Use your salt-master to control your minion as if it were any other salt-minion
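As a rough sketch of those steps on Debian (the minion id dev-master is just an example):
sudo apt-get install salt-minion
# in /etc/salt/minion set:  master: 127.0.0.1
echo 'dev-master' | sudo tee /etc/salt/minion_id
sudo service salt-minion restart
sudo salt-key -a dev-master    # accept the new minion key on the master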
Is there a command I can run to apply the states on the master?
The salt-master doesn't really run the state files, the salt-minions do. If you followed the above steps then you can target your salt-master to run highstate with the following command:
salt 'the_value_of_/etc/salt/minion_id' state.highstate
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Yup. I think you have an idea as to what you need to do and just need a push in the right direction.
Install both Minion and Master on a single node
I call such a node a Master Minion. No steps provided here, since you already know them based on the question.
Some conceptual info instead:
In short, Master never applies states. Instead, it triggers Minions (the local Master Minion in this case).
Salt Minion and Master are two separate services with independent runtime & configuration.
While both instances use common software, at runtime they talk to each other over the network (location-independent).
If you can apply states on a remote Minion, the same mechanism is used for the local Minion as well.
Additional info
There are two ways to apply states:
Master-side salt command to "push" states to multiple remote minions.
rpm -qf $(which salt)
salt-master-2015.5.3-4.fc22.noarch
Minion-side salt-call command to "pull" states on single local minion.
rpm -qf $(which salt-call)
salt-minion-2015.5.3-4.fc22.noarch
As long as only one minion is involved, it's simpler to use salt-call for the same effect:
salt-call state.highstate
Minion-side salt-call provides advantages, especially for testing, isolation, and troubleshooting:
It makes network issues (if any) more obvious.
It safely applies states only on the single local minion (there is no way to specify more than one).
It shows debug output directly in the local terminal:
salt-call -l debug test.ping
On the last point: salt-call --local can also be used in a masterless setup, using no network at all.
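For example, in a completely masterless setup the highstate can be applied like this:
salt-call --local state.highstate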
Now it's near the end of 2015. Let's review some more possibilities for salt master self-control:
Install a minion alongside the salt master on the same box
This one has been widely discussed in the above two answers.
Use salt-ssh + salt-run state.orchestrate
Setup steps:
Step 1: install salt-ssh
Step 2: modify the roster file (e.g. /etc/salt/roster on CentOS 6). The default installation already provides some examples. Since you probably ssh into the salt master anyway, username / password / private key setup should not be a problem for you. For example, to control a salt master vagrant box, this sample should do:
localhost:
  host: 127.0.0.1
  user: vagrant
  passwd: vagrant
  sudo: True
Now, steal from the official tutorial with a little twist:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: 'localhost'
    - ssh: 'true'
    - arg:
      - touch /tmp/test.txt
And run it with:
salt-run state.orchestrate orch.cleanfoo
Check the /tmp directory on your salt master vagrant box to see whether the test.txt file is there.
This approach should also work for states. Either way you need to install something. I prefer the second way since, in general, calling salt master self-control (to provision some work) is just a step before I actually call minions to process other states.
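Note that once the roster entry is in place you can also skip the orchestrate runner and target the master directly with salt-ssh, for example:
salt-ssh 'localhost' state.highstate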

Accessing remote Fuse/Karaf console using SSH

I have a Fuse ESB standalone server running in a RHEL box. I want to connect to the Karaf console remotely to manage the bundles.
If I close my current session, how do I get back to my Karaf console again?
I have my Fuse ESB configured to use port 8101 for SSH. Will I be able to connect to it directly through my SSH client (PuTTY)?
Or do I need another Fuse ESB instance locally to access the remote Fuse instance?
Either way I am not able to connect; it says access denied. Is there any other, easier way to connect to a remote Fuse/Karaf instance?
I even tried using Client.sh from the bin directory; it says authentication failure. But I have created a JAAS user with the Admin role.
By the way, is just a user enough to do this? Or does it also need public/private key configuration?
What is the usual approach for managing a remote Fuse/Karaf instance?
You can find many details in the JBoss Fuse documentation (the successor to Fuse ESB) at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/
And there is a chapter on remote connecting to containers here
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html-single/Configuring_and_Running_JBoss_Fuse/index.html#ESBRuntimeRemote
You need to pass in credentials for a user on the container that is valid and is in the admin role.
The karaf shell also has a jaas command group, which allows you to list the users and their roles, etc., as well as add new users. You can also do some user management from the FMC web console that is part of Fuse ESB.
You might also want to check your iptables rules:
http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
- $ sudo iptables -I INPUT -p tcp -m tcp --dport 8101 -j ACCEPT
- $ sudo service iptables save
- $ service iptables restart
From another karaf instance you can run this command
JBossFuse:karaf#root> ssh -l username -P password -p port hostname
e.g.
- JBossFuse:karaf#root> ssh -l smx -P smx -p 8101 10.234.12.12
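If you only have a plain SSH client (OpenSSH, PuTTY) available, connecting straight to the remote container also works; with OpenSSH it would look like this (user and host are placeholders for your JAAS user and server):
ssh -p 8101 admin@fuse-host
Enter the JAAS user's password when prompted.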
You have to make sure that the ssh role name that is defined in etc/org.apache.karaf.shell.cfg
# sshRole defines the role required to access the console through ssh
#
sshRole = ssh
matches the one in etc/user.properties
#
# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,ssh
