How to increase capacity of OpenStack hypervisor local disks - openstack

I set up OpenStack with DevStack, but I expect to run short of local storage, so I want to add more. We have a 2TB hard drive configured as a RAID array that we would like to add to the OpenStack installation. Does anyone know how to do this?

If you are talking about ephemeral storage, I believe DevStack puts that on /opt/stack/data/nova/instances (you may want to double-check that on your system). You would have to mount your disk there.
You may want to consider putting the entire /opt/stack onto the disk, since other large storage consumers such as Cinder and Swift would then have plenty of space as well.
I am a bit worried, though. What are your plans for this DevStack? Note that DevStack is suitable for testing, experimenting and learning, but nothing else. Among other problems, a DevStack cloud can't be rebooted without manual intervention.
EDIT:
I would try it as follows:
delete all instances
stop Nova, e.g. systemctl stop devstack@n-*
mount the hard drive somewhere, for example on /mnt/2tb
copy /opt/stack/data/nova to the drive, e.g. cd /opt/stack/data/nova; find . | cpio -pdumva /mnt/2tb
mount the 2TB drive on /opt/stack/data/nova
The first two steps are precautions against copying inconsistent data.
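Putting steps 2-4 together, a minimal sketch, assuming the RAID array shows up as /dev/sdb1 (a hypothetical name; check lsblk on your system) and your instances use ephemeral storage:
sudo systemctl stop "devstack@n-*"          # stop all Nova services
sudo mkdir -p /mnt/2tb
sudo mount /dev/sdb1 /mnt/2tb               # hypothetical device name
cd /opt/stack/data/nova
sudo find . | sudo cpio -pdumva /mnt/2tb    # copy, preserving permissions and timestamps
sudo umount /mnt/2tb
sudo mount /dev/sdb1 /opt/stack/data/nova   # remount the drive in its final place
Add a matching /etc/fstab entry so the mount survives a reboot.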
Better yet, reinstall that server, mount the 2TB drive on /opt, then rebuild the DevStack.
Note that this only works if your instances use ephemeral storage. If they use volumes, you need to copy the Cinder data instead of the Nova data.

Related

How to backup kvm volumes using incremental disk backup using "push" and "pull" mode in libvirt

I am working on a task to back up VM image volumes to another server and location. The problem is, I don't want to copy the whole image file each time I start a backup job; I want to back up the whole image only once and then back up incrementally each time after that.
Is there any way to do that? I don't want to use snapshots, because as the number of snapshots grows, it has an impact on volume performance.
If there is another way, or a way to use snapshots more efficiently, please tell me.
I have tried volume snapshots locally; I want to know how to do this externally, or any other efficient way to do incremental external backups.
I've been working on a project that lets you create full and incremental or differential backups of libvirt/KVM based virtual machines. It works by using the latest features provided by the libvirt and qemu projects (changed block tracking). So if the libvirt/qemu versions you are using support these features, you might want to give it a try:
https://github.com/abbbi/virtnbdbackup
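Usage is roughly as follows (a sketch; the domain name "vm1" is an example, and flags may differ between versions, so check the project README):
$ virtnbdbackup -d vm1 -l full -o /backup/vm1    # one-time full backup of domain "vm1"
$ virtnbdbackup -d vm1 -l inc -o /backup/vm1     # subsequent incremental backups into the same set
$ virtnbdrestore -i /backup/vm1 -o /restore/vm1  # rebuild the image from the backup chain
Each incremental run only transfers the blocks that changed since the previous backup, which is exactly the "no full copy every time" behaviour you are after.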

Shared Folders in Xen Hypervisor

I recently started using the Xen hypervisor, migrating from VirtualBox. My host system is Ubuntu 15.04 and the guest is Windows 7. I wanted to know if there is any way we can use shared folders similar to VirtualBox?
Thanks
To share files, you need a shared filesystem. There are two main classes of these:
network filesystems: NFS, Samba, 9p, etc.
clustered filesystems: GFS, OCFS2, CXFS, etc. These are designed for SAN systems where several hosts access the same storage box. In the VM case, if you create a single partition accessible from several VMs, you get exactly the same situation (a shared block device) and need the same solution.
What definitely won't work is to use a 'normal' filesystem (ext3/4, XFS, ReiserFS, FAT, HPFS, NTFS, etc.) on a shared partition, just as it won't work on a shared block device. Since every filesystem aggressively caches metadata to avoid rereading the disk on every access, a VM won't be 'notified' if another one modifies a directory, so it won't 'notice' any change. Worse, since the cached metadata is now inconsistent with the content of the disk, any write will result in a heavily corrupted filesystem.
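As a concrete example of the first class, a minimal NFS setup between the Ubuntu host and a Linux guest could look like this (the export path and subnet are assumptions; a Windows 7 guest would instead use a Samba share mapped as a network drive):
# on the host: export a directory over NFS
$ sudo apt-get install nfs-kernel-server
$ echo '/srv/shared 192.168.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
$ sudo exportfs -ra
# in a Linux guest: mount it
$ sudo mount -t nfs 192.168.0.1:/srv/shared /mnt/shared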
PS:
There was a project called XenFS which looked promising, but never reached a stable release.

Live migration on Openstack

I'm working on a project on OpenStack. I have installed OpenStack by creating two virtual machines, one for the controller node and the other for the compute node.
Actually, I want to test an example of live migration on OpenStack, and I have found a video which describes the approach. As the video shows, I need to have two compute nodes, and I want to know if I can just create a second compute node now, or whether this second compute node should have been created during the installation of OpenStack.
This is the link of the video that I have watched: https://www.youtube.com/watch?v=_4vJUYFGbEM
Thank you
It doesn't matter when you add the compute nodes (during the install or later on). Please also remember that live migration piggybacks on the hypervisor, so depending on the hypervisor one uses, this may or may not be possible.
Please look at http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations to ensure that the migration capability exists.
It simply boils down to a few things:
The storage is not moved in the case of a live migration, so if you have a VM with instance storage, you will need a shared file system such as NFS; if the instance is backed by a Cinder volume, you can do the migration without shared storage.
The nova-compute service needs to be installed on the destination.
The hypervisor version should be the same.
I hope this clarifies things.
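Once the configuration is in place, triggering the migration itself is a single client call; for example (the command form varies with client version, and "myvm"/"compute2" are hypothetical names):
$ nova live-migration myvm compute2               # classic novaclient syntax
$ openstack server migrate --live compute2 myvm   # openstackclient equivalent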
Either works. OpenStack allows you to dynamically add and remove compute nodes from a cloud environment.
Please refer to http://docs.openstack.org/admin-guide/compute-configuring-migrations.html for extra details.
Live migration of light instances can be done over the network without shared storage, but for heavy instances, shared storage or a shared volume is preferred. Since you have two compute nodes, their Nova storage should be shared storage.
Long answer short, in my perspective:
You can add/remove a compute node at any time from an OpenStack installation.
For adding a compute node, follow the installation guide for adding a new compute node, right from the environment setup.
Also, don't forget to install the networking part on your new compute node.

Vagrant shared/synced folders permissions

From my research I understand that VirtualBox synced folders have their permissions set up during the mounting process. I am unable to change them later, so the permissions MUST be the same for every single file/folder in the synced folder. When I try to change them, with or without superuser privileges, the changes are reverted straight away.
How can this work with, for example, the Symfony PHP framework, where different files/folders need different permissions? (E.g. app/console needs execute rights, but I don't want 7XX everywhere.)
I have found in a different but similar question (Vagrant and symfony2) that I could set the permissions to 777 for everything in the Vagrantfile; however, this is not desirable, as I use Git for my source code, which is then deployed to the live environment. Running everything under 777 in production is, nicely put, not correct.
How do you people cope with this? What are your permissions setups?
A possible solution could be using the rsync synced folder strategy, along with the vagrant rsync and vagrant rsync-auto commands.
In this way you'll lose bidirectional sync, but you can manage file permissions and ownership.
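A minimal Vagrantfile sketch of that strategy (the box name and exclude list are assumptions; adjust to your project):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # one-way rsync copy instead of a VirtualBox mount; the files then live
  # on the guest filesystem, so chmod/chown work on them as usual
  config.vm.synced_folder ".", "/vagrant",
    type: "rsync",
    rsync__exclude: [".git/"]
end
Run vagrant rsync-auto in a separate terminal to keep pushing changes to the guest as you edit on the host.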
I am in a similar situation. I started using Vagrant mount options, and found out that as I upgraded parts of my tech stack (kernel, VirtualBox, Vagrant, Guest Additions) I started getting different behavior while trying to set permissions in synced folders.
At some point, I was perfectly fine updating a few of the permissions in my shell provisioner. At first, the changes were being reflected in both the guest and the host. At another point in time, it was being done the way I expected, with the changes being reflected only in the guest and not in the host file system. After updating the kernel and VirtualBox on my host, I noticed that permission changes in the guest are being reflected on the host only.
I was trying to use DKMS to compile VirtualBox against an older version of my kernel. No luck yet.
Now that I have a little more experience, I can actually answer this question.
There are three solutions to this problem:
Use Git on your host system, because Vagrant's basic shared folder setup somehow forces 777 (at least on Windows hosts).
Use Vagrant's NFS shared folders option (not available on Windows hosts out of the box); a sketch follows below.
Configure a more complex rsync setup, as mentioned in Emyl's answer (slower sync speeds).
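A sketch of option 2, assuming a Linux or Mac host (the box name and the private network IP are arbitrary; NFS synced folders require a host-only network with a static IP):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # NFS synced folders need a private network to reach the host's NFS export
  config.vm.network "private_network", ip: "192.168.50.10"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end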

How to work on several machines, keep them in sync without Internet?

For quite a while now, I have been using Dropbox to sync a Git repository on several virtual machines (one Windows, one Mac, one Linux). I would commit my changes on one of the machines and Dropbox would take care of syncing the changes of the files and the repo onto the other machines.
This works very seamlessly. I code on OS X, test the code on Windows and Linux, maybe make some changes there, then commit from one of the three.
However, it has three major drawbacks:
It requires an internet connection. I frequently have to rely on my cellphone for internet connectivity, which is unreliable if I'm on a train and only good for a few hundred MB per month.
Dropbox syncs EVERYTHING including object files, Visual Studio debug databases and just a whole lot of unnecessary stuff that does not need to be synced.
It always goes through Dropbox servers, which is fine for some minor project or some open source stuff, but I don't want to push my work files to an untrusted server.
So, how do you manage an environment like this?
Edit:
Actually, all three virtual machines live on the very same laptop, so network connections between them are not a problem. Also, I frequently code on one OS and compile on another, and go back and forth until I have found all the errors. I don't want to spam the company repo with hundreds of incremental commits.
Edit 2:
To give you an idea of what I am looking for, here is a partial solution I came up with: on each machine, I created a Git repository of the files I want to work with. Typically, I will start working on a bug/feature on one machine, then commit my work. On the next machine, I will call git reset origin to load the changes from the first machine, then continue working on the commit using git commit --amend. This goes back and forth a few times. Once I am done, I finally commit the changes for real (no more amending) and start working on the next feature/bug.
However, this workflow feels cumbersome and inelegant. What I am looking for is something that results in the same output, one commit on the repo, but is created fluently between the three machines.
You could consider setting up your own software versioning server.
Most clients for these servers have implementations on varying OS's and platforms.
But if you want to communicate between machines that are not in a LAN, you're going to need an internet connection.
The versioning server's network communication can be exposed over NAT through a gateway to the internet. You could implement security by setting up a tunneling mechanism: any client would then tunnel to a gateway server and then communicate with the versioning server.
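For instance, with SVN's svnserve (which listens on port 3690) behind a gateway, such a tunnel could look like this (the host names are hypothetical):
$ ssh -L 3690:svnhost:3690 user@gateway.example.com   # forward the svnserve port through the gateway
$ svn checkout svn://localhost/myrepo                 # the client now reaches svnhost via the tunnel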
As for control over which files are actually versioned: I have some experience with SVN, with which you can select at the file level which files to add to versioning. The SVN client will then simply ignore the rest of the files and directories.
Edit:
Reading the edit of the original author's question:
Maybe set up a 4th virtual machine containing the versioning server. SVN isn't (by any stretch of the imagination) hard to manage (RTM). Have the three virtual machines connect to the server on the 4th. (This is of course only possible if the machines can run in parallel on the same hardware.)
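Setting up such a server comes down to a couple of commands (the paths are examples):
$ svnadmin create /srv/svn/myrepo   # create the repository
$ svnserve -d -r /srv/svn           # serve it as a daemon on the default port 3690
The three VMs can then check out svn://<address-of-the-4th-VM>/myrepo.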
If you can share a disk between the three, put the master repo on that. (Make sure you make backups! Especially for removable media.)
Edit: In more detail, you can have your master repo on a USB stick or a shared partition on your hard drive (as you indicate you are using multiple virtual machines on the same hardware).
To set up a private git repo, simply create an empty directory and run git init.
Assuming you are on your Ubuntu box and have an USB stick with a file system which you can read and write in all your operating systems, mounted in /media/usbgit, run this:
vnix$ mkdir /media/usbgit/mycode
vnix$ cd /media/usbgit/mycode
vnix$ git init
Initialized empty Git repository in /media/usbgit/mycode/.git/
(Given that you already have a git repo, you probably just want to clone it to the USB stick instead:
vnix$ cd /media/usbgit
vnix$ git clone /home/you/work/wherever/mycode
Initialized empty Git repository in /media/usbgit/mycode/.git/
This will now contain all the commits from the repo you pulled from.)
Now you have an empty repository which you can clone and pull from on all the boxes. Once you have the USB stick mounted, you can clone from it.
vnix$ cd
vnix$ mount | fgrep usbgit
/dev/whatever on /media/usbgit type NTFS (rw)
vnix$ git clone /media/usbgit/mycode
Initialized empty Git repository in /home/you/mycode/.git/
warning: You appear to have cloned an empty repository.
All of this is doable with SVN too (use svnadmin create to initialize a repository, and svn checkout file:///media/usbgit/mycode to check it out), but you will lose the benefits of a distributed VCS, which seem useful in your scenario.
In particular, with a distributed VCS, you can have multiple private repositories (each working directory is a repository in its own right) and you can sync with and pull from your private master and a public repository, e.g. on GitHub; just make sure you know what you have where.
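For example, continuing with the clone from above, the day-to-day flow between two machines could be (assuming the branch is master):
vnix$ cd /media/usbgit/mycode
vnix$ git pull /home/you/work/wherever/mycode master   # update the master repo on the stick from the first machine
vnix$ cd ~/mycode
vnix$ git pull origin master                           # on another machine: origin points at the USB repo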
