OpenStack Cinder volume resize does not take effect without a VM reboot

When I extend a Cinder volume that is in the "in-use" state, the volume gets extended, but inside the VM the extension does not show up; it still reports the same size.
Only after a reboot does the change appear inside the VM (Windows Disk Management).
Is there a way to force this change on the fly, without having to reboot the server?
The problem is that we cannot stop the I/O on the disk, so we cannot really afford a reboot.
Any direction on how to solve this problem would be appreciated. Thanks.
OpenStack version: Xena
Hypervisor: KVM

Add "qemu guest agent" app to windows image . try this document for the purpose of intalling guest agent in windows
https://pve.proxmox.com/wiki/Qemu-guest-agent
then activate guest agent via image properties
openstack image set IMG-UUID --property hw_qemu_guest_agent=yes
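With the agent in place, extending an attached volume should be picked up by the guest without a reboot. A rough sketch of the flow (the volume UUID and the 200 GB target size are placeholders, and older clients may need --os-volume-api-version 3.42 or newer to extend an in-use volume):
openstack volume set --size 200 VOLUME-UUID
or, with the cinder client: cinder --os-volume-api-version 3.42 extend VOLUME-UUID 200
Inside the Windows guest, run "Rescan Disks" in Disk Management (or Update-HostStorageCache in PowerShell) and then extend the partition onto the new space.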
Hope this is helpful.

Related

Can't change Replication properties for Linux VM in Azure Site Recovery

We replicate some Windows and Linux VMs from Hyper-V to Azure.
As part of recovery preparation we need to change VM parameters, e.g. the size, NSG, and the resource group to be recovered into (different from the ASR vault RG), via the Set-AzRecoveryServicesAsrReplicationProtectedItem cmdlet.
For Windows it works well, but for Linux we're getting "Failed to update the LicenseType for the physical or virtual machine."
We don't need to change anything related to the license, just the RG to be recovered into.
Why is this issue happening, and how can we work around it?
The solution is to specify the -LicenseType parameter as NoLicenseType.
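A hedged sketch of the call (here $rpi is the protected item obtained from Get-AzRecoveryServicesAsrReplicationProtectedItem and $targetRgId is the ID of the resource group you want to recover into; both are placeholders):
Set-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $rpi -RecoveryResourceGroupId $targetRgId -LicenseType NoLicenseType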

Problem communicating over a local area network (LAN) with ROS on WSL2

I am a developer of ROS projects. Recently I have been trying ROS (Melodic) on WSL2 (Windows Subsystem for Linux), and everything works great. But I ran into trouble when I wanted another PC on the same local area network (LAN) to communicate with it. Before setting environment variables like ROS_MASTER_URI and ROS_IP, I knew that since WSL2 runs on Hyper-V, the IP shown inside WSL2 is not the one on the real LAN. I have to run commands like the ones below so that everyone on the LAN can reach a specific host:PORT on WSL2:
netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr
netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$wsl_addr
But here comes a new question:
The nodes that use TCPROS to communicate with each other get a random port every time I launch the file.
How can I handle this kind of problem?
Or is there any information on the internet that I can have a look at?
Thank you.
The root problem is described in WSL issue #4150. To quote from that thread,
WSL 2 seems to NAT it's virtual network, instead of making it bridged
to the host NIC.
Option 1 - Port forwarding script on login
Note: From #kraego's comment (and the edited question, which I'm just seeing based on the comment), this is probably not a good option for ROS, since the port numbers are randomly assigned. This makes port forwarding something that would have to be done dynamically.
There are a number of workarounds described in that issue, for which you've already figured out the first part (the port forwarding). The primary technique seems to be to create a PowerShell script to detect the IP address and create the port forwarding rules that runs upon Windows login. This particular comment near the top of the thread seems to be the canonical go-to answer, although many people have posted their tweaks or alternatives throughout the very long thread.
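Just to illustrate the shape of such a script (a minimal sketch, not ROS-ready, since the port below would have to be whichever ones ROS randomly picked; 11311 is used here only as an example because it is the default ROS master port; run it in an elevated PowerShell):
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]
netsh interface portproxy delete v4tov4 listenport=11311 listenaddress=0.0.0.0
netsh interface portproxy add v4tov4 listenport=11311 listenaddress=0.0.0.0 connectport=11311 connectaddress=$wslIp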
One downside - I believe the script that is mentioned there needs to be run at logon since the WSL subsystem seems to only want to run when a user is logged in. I've found that attempting to run a WSL service or instance through Windows OpenSSH results in that instance/service shutting down soon after the SSH session is closed, unless the user is already logged into Windows with a WSL instance opened.
Option 2 - WSL1
I would also propose that, assuming it fits your workflow and if ROS works on it (it may not, given the device access you need, but I'm not sure), you can simply use WSL1 instead of WSL2 to avoid this. You can try this out by:
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.
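The in-place route mentioned above is a single command (run from PowerShell or cmd; the distro name is whatever wsl -l prints):
wsl --set-version <DistroName> 1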

Debugging poor I/O performance on an OpenStack block device (OpenStack kolla: Queens)

I have an OpenStack VM that is getting really poor performance on its root disk - less than 50 MB/s writes. My setup is 10 GbE, OpenStack deployed using kolla (the Queens release), with storage on Ceph. I'm trying to follow the path through the infrastructure to identify where the performance bottleneck is, but I'm getting lost along the way:
nova show lets me see which hypervisor (an Ubuntu 16.04 machine) the VM is running on, but once I'm on the hypervisor I don't know what to look at. Where else can I look?
Thank you!
My advice is to check the performance between the host (hypervisor) and Ceph first. If you are able to create a Ceph block device, you can map it with the rbd command, create a filesystem, and mount it; then you can measure the device I/O performance with sysstat tools such as iostat, iotop, dstat, vmstat, or even sar.
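A rough sketch of that test, assuming a pool named volumes and a throwaway image name (both are placeholders; run this on the hypervisor, which needs a working Ceph client configuration):
rbd create volumes/perftest --size 10240          # 10 GiB scratch image
rbd map volumes/perftest                          # prints a device name such as /dev/rbd0
mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt
dd if=/dev/zero of=/mnt/testfile bs=1M count=2048 oflag=direct   # crude sequential write test
iostat -xm 2                                      # watch throughput and await while the test runs
umount /mnt && rbd unmap /dev/rbd0 && rbd rm volumes/perftest    # clean up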

Accessing Serial Ports from an Application That Runs in Flatpak

I recently updated my IDE from an older version 5 to MonoDevelop 6 using Flatpak, on Ubuntu 16.04 LTS.
I have an application that interacts with a serial port, basically a USB/RS-232 adapter connecting a device to my computer.
I have no issue accessing the USB port (/dev/ttyUSB0) when I debug the application in MonoDevelop 5. However, the device directory (/dev/) that I have access to using MonoDevelop 6 is completely different from the one I see in Linux, and there is no ttyUSB0 in that folder.
I believe this is because Flatpak runs the application in a sandbox. So, if that is the reason, how can I access a serial port?
Thanks.
Most likely that's because Flatpak is blocking access to the serial device.
Unfortunately at the moment I don't think there is a way to give access specifically to the serial devices, so you'd need to give access to all:
$ flatpak run --device=all com.xamarin.MonoDevelop
What this does is essentially mount the host's /dev inside the sandbox, so the app has full access to it.
It's a pretty big hole in the sandbox, but sometimes it's needed until all the permission handling stuff gets implemented.
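If you want to avoid passing the flag on every launch, the same (admittedly broad) permission can be made persistent per-app with an override:
$ flatpak override --user --device=all com.xamarin.MonoDevelop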

How to get instances back after reboot in OpenStack

After successfully installing devstack and launching instances, once I reboot the machine I have to start all over again and I lose all the instances that were launched. I tried rejoin-stack but it did not work. How can I get the instances back after a reboot?
You might set resume_guests_state_on_host_boot = True in nova.conf. The file should be located at /etc/nova/nova.conf
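For reference, the option lives in the [DEFAULT] section (restart the nova-compute service afterwards so it takes effect):
[DEFAULT]
resume_guests_state_on_host_boot = True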
I've found some old discussion here: http://www.gossamer-threads.com/lists/openstack/dev/8772
AFAIK, at the present time OpenStack (Icehouse) is still not completely aware of the environments inside it, so it can't restore them completely after a reboot. The instances will be there (virsh domains), but even if you start them manually or via nova flags, I'm not sure whether the other facilities will handle this correctly (e.g. whether Neutron will correctly configure all L3 rules according to the DB records, etc.). Honestly, I'm pretty sure they won't...
The answer depends on what you need to achieve:
If you need a template environment (e.g. a similar set of instances and networks each time after a reboot), you may just script everything. In other words, just make a bash script that creates everything you need and run it each time after stack.sh (a minimal sketch follows this list). Make sure you're starting with a clean environment, since the OpenStack DB state persists between ./unstack and ./stack.sh or ./rejoin-stack.sh (you might try to just clean the DB, or delete it; stack.sh will build it back).
If you need a persistent environment (e.g. you don't want to lose the VMs and the whole infrastructure state after a reboot), I'm not aware of a way to do this with OpenStack. For example, the Neutron agents (which configure iptables, DHCP, etc.) do not save state and are driven by events from the Neutron service. They will not recover after a reboot, so the network will be dead. I'll be very glad if someone shares a method for such a recovery.
In general, I think OpenStack is not focusing on this and will not during the nearest release cycles. The common approach is to have a multi-node environment where each node is replaceable.
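A minimal sketch of such a re-provisioning script (every name here, i.e. the network, image, flavor, and server, is just an example; adapt it to whatever your environment provides):
#!/bin/bash
# re-create a throwaway demo environment after each stack.sh run
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 10.0.10.0/24 demo-subnet
openstack server create --image cirros-0.5.2-x86_64-disk --flavor m1.tiny --network demo-net demo-vm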
See http://docs.openstack.org/high-availability-guide/content/ch-intro.html for reference
devstack is an ephemeral environment. It is not supposed to survive a reboot; this is not supported behavior.
That being said, you might find success in re-initializing the environment by running
./unstack.sh
followed by
./stack.sh
again.
Again, devstack is an ephemeral environment. Its primary purpose is to run gate testing for OpenStack's CI infrastructure.
Or try ./rejoin-stack.sh to re-join the previous screens.
