I am new to openstack for virtualization.
I can reboot an instance in two ways: a soft reboot or a hard reboot.
I understand the difference between a warm and a cold reboot on a physical computer, but what is the difference between a soft and a hard reboot on a VM?
Thanks
Apart from the documentation that has already been mentioned in this thread:
http://docs.openstack.org/user-guide/cli-reboot-an-instance.html
A hard reboot also affects the virtual machine at the hypervisor level. Example: if you are using a libvirt-based hypervisor (QEMU/KVM), the instance control file (the libvirt XML representing the virtual machine) gets reconstructed from scratch.
That's very useful when the instance storage space (/var/lib/nova/instances/INSTANCE_UUID) suffers any kind of problem, or, in general, whenever you need OpenStack to reconstruct the libvirt definitions.
It affects both the libvirt XML definition normally stored under /etc/libvirt/qemu and the copy at /var/lib/nova/instances/INSTANCE_UUID.
So, in summary: use a hard reboot if you need to fully reset/reboot the instance down to the hypervisor level. As you can see, it's more like a power-cycle on steroids.
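For reference, both reboot types can be triggered from the CLI covered in that documentation; a minimal example, assuming your instance is called SERVER_NAME:

# Soft reboot: asks the guest OS to restart gracefully
nova reboot SERVER_NAME

# Hard reboot: power-cycles the instance and rebuilds its libvirt definition
nova reboot --hard SERVER_NAME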
Hope this helps!
I am a developer of ROS projects. Recently I have been trying out ROS (Melodic) on WSL2 (Windows Subsystem for Linux), and everything works just great. But I ran into trouble when I wanted another PC on the same local area network (LAN) to communicate with it. Before setting environment variables like ROS_MASTER_URI and ROS_IP, I knew that since WSL2 runs on Hyper-V, the IP shown inside WSL2 is not the one on the real LAN. I have to run commands like the one below so that everyone on the LAN can reach a specific host:PORT on WSL2:
netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr
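The corresponding add rule, which is what actually creates the forwarding (connectaddress being the WSL2 IP; the variables here are placeholders), is along these lines:

netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$wsl2_ip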
But here comes a new question:
The nodes, which use TCPROS to communicate with each other, get a random port every time I launch the file.
How can I handle this kind of problem?
Or is there any information on the internet that I can look at?
Thank you.
The root problem is described in WSL issue #4150. To quote from that thread,
WSL 2 seems to NAT it's virtual network, instead of making it bridged
to the host NIC.
Option 1 - Port forwarding script on login
Note: from #kraego's comment (and the edited question, which I'm just seeing based on the comment), this is probably not a good option for ROS, since the port numbers are randomly assigned. This makes port forwarding something that would have to be done dynamically.
There are a number of workarounds described in that issue, for which you've already figured out the first part (the port forwarding). The primary technique seems to be a PowerShell script, run at Windows login, that detects the WSL IP address and creates the port forwarding rules. This particular comment near the top of the thread seems to be the canonical go-to answer, although many people have posted their tweaks or alternatives throughout the very long thread.
One downside: I believe the script mentioned there needs to be run at logon, since the WSL subsystem seems to only want to run while a user is logged in. I've found that attempting to run a WSL service or instance through Windows OpenSSH results in that instance/service shutting down soon after the SSH session is closed, unless the user is already logged into Windows with a WSL instance open.
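A trimmed-down sketch of what such a logon script does (the port numbers are just examples, 11311 being the default ROS master port; the script from the linked comment is more robust and must run elevated for netsh to work):

# PowerShell: forward a fixed set of ports to the current WSL2 address
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]
$ports = @(11311, 9090)
foreach ($port in $ports) {
    netsh interface portproxy delete v4tov4 listenport=$port listenaddress=0.0.0.0
    netsh interface portproxy add v4tov4 listenport=$port listenaddress=0.0.0.0 connectport=$port connectaddress=$wslIp
}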
Option 2 - WSL1
I would also propose that, assuming it fits your workflow and if ROS works on it (it may not, given the device access you need, but I'm not sure), you can simply use WSL1 instead of WSL2 to avoid this. You can try this out by:
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.
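For example (from PowerShell; the distro name and paths are placeholders):

wsl --export Ubuntu-20.04 C:\wsl-backup\ubuntu.tar
wsl --import Ubuntu-WSL1 C:\wsl\Ubuntu-WSL1 C:\wsl-backup\ubuntu.tar --version 1

And if you do want to convert in place instead, wsl --set-version Ubuntu-20.04 1 does that without keeping a copy.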
So last night I got an email from google-cloud-compliance saying that one of my VM instances has some critical problems and will be suspended after 72 hours if the pattern continues and no appeal is filed. Below is the mail I received.
We have recently detected that your Google Cloud Project has been
performing intrusion attempts against a third-party and appears to be
violating our Terms of Service. Specifically, we detected port
scanning on remote port 22 originating from your Compute Engine
project targeting more than 4451 IP addresses between 2019-04-02 09:31
and 2019-04-02 09:55 (Pacific Time). Please check the traffic
originating from all your instances and fix any other instances that
may be impacted by this.
To access the VM via SSH you have to add your public key to the instance itself, and only a minimal Django project is deployed on the instance, so I don't think it was due to either of these things. So my question is: what caused this, and how can I secure my VM instance?
I would strongly recommend that you delete your VM instance, as you cannot be completely sure to what extent it has been compromised.
Once everything has been recreated, you can take some measures to try to prevent this from happening again. As stated in the comments, I would use strong passwords and make sure that all the software you are using is properly updated and patched, especially Django.
Also, I would take a look at the firewall rules, following the least-privilege best practice. For example, you can make sure that only the IPs you use to access your instance are allowed on port 22.
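For example, a rule along these lines (the rule name and source IP are placeholders) limits SSH to a single address on the default network:

gcloud compute firewall-rules create allow-ssh-from-my-ip --network=default --allow=tcp:22 --source-ranges=203.0.113.10/32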
Finally, I would suggest you take a look at this. It is a new Beta feature that allows you to detect “DDoS attacks originating inside your organization”. You can also check these best practices in order to harden SSH access. Especially remarkable among them is https://www.sshguard.net/, which is able to recognize patterns such as several login failures within a few seconds and then block the offending IP.
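On the SSH daemon side, several of those recommendations come down to a couple of lines in /etc/ssh/sshd_config (restart sshd after changing them):

PasswordAuthentication no
PermitRootLogin no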
Is it possible in OpenStack to have two VMs instantiated on the same host, where:
VM1 is instantiated from the "unpinned" 2-vCPU flavor (hw:cpu_policy not set)
VM2 is instantiated from the "pinned" 2-vCPU flavor (hw:cpu_policy=dedicated)
and be sure that VM2's pinned vCPUs (thus physical CPUs) will not be used by VM1?
When reading the 'CPU topologies' section in the OpenStack docs it says:
Caution: Host aggregates should be used to separate pinned instances
from unpinned instances as the latter will not respect the resourcing
requirements of the former.
So according to the above it looks like it's not possible. I would like to confirm that.
Because if you can't mix pinned and unpinned VMs on one host, that seems like a huge limitation, doesn't it? I'm asking in the telecom context, where pinning is often a must for some VMs (VNFCs) but not for others, and sometimes it's desirable to have them on the same host.
At the moment Nova does not support mixed pinned/unpinned on a single VM (at least not out of the box), but it is a feature they are looking at implementing.
You can read about the current suggested implementation here.
Additional reading on work related to this blueprint is available here.
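In the meantime, the host-aggregate separation that the quoted caution refers to is usually set up along these lines (a sketch; host, aggregate and flavor names are just examples, and it relies on the AggregateInstanceExtraSpecsFilter scheduler filter being enabled):

openstack aggregate create --property pinned=true pinned-hosts
openstack aggregate add host pinned-hosts compute-1
openstack aggregate create --property pinned=false unpinned-hosts
openstack aggregate add host unpinned-hosts compute-2
openstack flavor set vm2-flavor --property hw:cpu_policy=dedicated --property aggregate_instance_extra_specs:pinned=true
openstack flavor set vm1-flavor --property aggregate_instance_extra_specs:pinned=false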
After a successful installation of devstack and launching instances, once I reboot the machine I need to start all over again and I lose all the instances that were launched back then. I tried rejoin-stack but it did not work. How can I get the instances back after a reboot?
You might set resume_guests_state_on_host_boot = True in nova.conf. The file should be located at /etc/nova/nova.conf
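For reference, the option lives in the [DEFAULT] section; a minimal snippet:

# /etc/nova/nova.conf
[DEFAULT]
resume_guests_state_on_host_boot = True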
I've found some old discussion http://www.gossamer-threads.com/lists/openstack/dev/8772
AFAIK, at the present time OpenStack (Icehouse) is still not completely aware of the environments inside it, so it can't restore them completely after a reboot. The instances will be there (as virsh domains), but even if you start them manually or via nova flags, I'm not sure whether the other facilities will handle this correctly (e.g. whether neutron will correctly configure all L3 rules according to the DB records, etc.). Honestly, I'm pretty sure they won't...
The answer depends on what you need to achieve:
If you need a template environment (e.g. a similar set of instances and networks each time after a reboot), you may just script everything. In other words, make a bash script that creates everything you need and run it each time after stack.sh (see the sketch after this list). Make sure you're starting with a clean environment, since the OpenStack DB state remains between ./unstack.sh and ./stack.sh or ./rejoin-stack.sh (you might try to just clean the DB, or delete it; stack.sh will build it back).
If you need a persistent environment (e.g. you don't want to lose the VMs and the whole infrastructure state after a reboot), I'm not aware of a way to do this using OpenStack. For example, the neutron agents (which configure iptables, DHCP, etc.) do not save state and are driven by events from the Neutron service. They will not restore themselves after a reboot, so the network will be dead. I'll be very glad if someone shares a method for such a recovery.
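As a rough illustration of the first option, such a rebuild script might look like this (the image, flavor and network names are hypothetical; run it from the devstack directory):

#!/usr/bin/env bash
set -e
source openrc admin admin
# Recreate the network the instances will use
openstack network create demo-net
openstack subnet create demo-subnet --network demo-net --subnet-range 10.0.10.0/24
# Recreate a couple of instances on that network
NET_ID=$(openstack network show demo-net -f value -c id)
openstack server create --image cirros-0.3.5-x86_64-disk --flavor m1.tiny --nic net-id=$NET_ID vm1
openstack server create --image cirros-0.3.5-x86_64-disk --flavor m1.tiny --nic net-id=$NET_ID vm2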
In general I think OpenStack is not focusing on this and will not focus on it during the nearest release cycles. The common approach is to have a multi-node environment where each node is replaceable.
See http://docs.openstack.org/high-availability-guide/content/ch-intro.html for reference
Devstack is an ephemeral environment. It is not supposed to survive a reboot; this is not supported behavior.
That being said, you might find success in re-initializing the environment by running
./unstack.sh
followed by
./stack.sh
again.
Again, devstack is an ephemeral environment. Its primary purpose is to run gate testing for OpenStack's CI infrastructure.
Or try ./rejoin-stack.sh to re-join the previous screen sessions.
I have a VMware Player (Workstation 9) virtual machine running Ubuntu 12.04 on an Ubuntu 12.10 (13.10 kernel) host, using a bridged connection set to replicate the physical network connection. Everything usually works properly in a variety of locations. But at one location that I often frequent, the IP address of the virtual machine changes roughly every 10 minutes, rendering the VM entirely useless, as it is a PostgreSQL server and thus needs a dedicated local IP. Not only that, but when I copied a database dump into a shared folder, the file ended up getting corrupted.
I can verify that the network caused this problem, as the actual file on the VM was not corrupted. I managed to temporarily solve the problem by going into the local modem and setting a DHCP reservation for the VM's MAC address. Everything was working and files were not getting corrupted. However, it only lasted temporarily, and another random address was assigned, breaking several running processes on my machine. Between my machine and the router/gateway there is a redundant Apple router on the network that is likely causing the issue, but I cannot just throw it away or deactivate it, as it is not my network.
Furthermore, DHCP leases work just fine for every other machine on the network, so I believe the root issue is with VMware.
I have no clue what could possibly cause something like this to occur, as IP address assignment is one of those things that normally "just works". I am thinking about just switching to VirtualBox, as I have used it in the past and never had a problem (except with properly running Windows 8). However, I have never actually seen any article suggesting VirtualBox over VMware, as the latter supposedly performs better and has more intuitive shared folder support. Obviously though, any benefit from a shared folder is negated if it just shares corrupt garbage.
So you manually set a MAC address on your VM? In the past, I've seen VMs change MACs quite often, but generally only after a reboot or cold start. It shouldn't happen on the fly... You could install Wireshark and grab a few packet captures to see if anything in there points you in the direction of the root cause.
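One more thing worth checking, assuming your adapter is ethernet0: you can pin the MAC in the VM's .vmx file so it never regenerates, e.g.

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:12:34"

(The address here is just an example; manually assigned static MACs are supposed to fall in VMware's 00:50:56:00:00:00-00:50:56:3F:FF:FF range.)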