What filesystem type is appropriate for an OpenStack Cinder volume?

What filesystem type is appropriate for an OpenStack Cinder volume?
How do you go about modifying the Cinder service of an already deployed Juju-charms OpenStack so it uses volumes on an external iSCSI target as persistent storage?
Thanks in advance.

The XFS, Btrfs and ext4 file systems all provide numerous advantages. XFS was developed by Silicon Graphics and is a mature and stable filesystem. Btrfs, by contrast, is a copy-on-write filesystem: it supports file creation timestamps and checksums that verify metadata integrity, so it can detect bad copies of data and repair them from the good copies. It also supports writable snapshots, transparent compression and other features.
If you are using Ceph storage, the usual advice is to use BlueStore.
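As a concrete sketch of putting XFS on a Cinder volume (the size, the server name and the device name /dev/vdb are assumptions; check lsblk inside the guest after attaching):
openstack volume create --size 50 data-vol
openstack server add volume my-server data-vol
# inside the instance, once the volume is attached:
sudo mkfs.xfs /dev/vdb
sudo mount /dev/vdb /mnt/data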

Related

How to increase capacity of OpenStack hypervisor local disks

I set up OpenStack with Devstack, but I think local storage will run short, so I want to add more. We have a 2 TB RAID-backed hard drive that we'd like to add to the OpenStack host. Do you know how to do that?
If you are talking about ephemeral storage, I believe Devstack puts that on /opt/stack/data/nova/instances (you may want to double-check that on your system). You would have to mount your disk there.
You may want to consider putting the entire /opt/stack onto the disk, since other large storage consumers such as Cinder and Swift would then have lots of space as well.
I am a bit worried. What are your plans with this Devstack? Note that Devstack is suitable for testing, experimenting and learning, but nothing else. Among other problems, a Devstack cloud can't be rebooted without manual intervention.
EDIT:
I would try it as follows:
delete all instances
stop Nova, e.g. systemctl stop 'devstack@n-*'
mount the hard drive somewhere, for example on /mnt/2tb
copy /opt/stack/data/nova to the drive, e.g. cd /opt/stack/data/nova; find . | cpio -pdumva /mnt/2tb
mount the 2TB drive on /opt/stack/data/nova
The first two steps are precautions against copying inconsistent data.
Better yet, reinstall that server, mount the 2TB drive on /opt, then rebuild the Devstack.
Note that this only works if your instances use ephemeral storage. If they use volumes, you need to copy the Cinder data instead of the Nova data.
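Put together, a sketch of the commands (the device /dev/sdb1 is an assumption; adjust to your RAID device):
sudo systemctl stop 'devstack@n-*'          # stop all Nova services
sudo mkdir -p /mnt/2tb
sudo mount /dev/sdb1 /mnt/2tb               # temporary mount point
cd /opt/stack/data/nova
sudo sh -c 'find . | cpio -pdumva /mnt/2tb' # copy the Nova data
sudo umount /mnt/2tb
sudo mount /dev/sdb1 /opt/stack/data/nova   # final mount point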

rdiff-backup-like storage on Artifactory

I am looking for a way to store files in an Artifactory repository in a storage-efficient way, uploading and downloading only the difference between the local version and the remote one, in order to save disk space, bandwidth and time.
There are two good utilities that work this way: rsync and rdiff-backup. Surely there are others.
Is there a way to organize something similar with Artifactory stack?
What is rsync:
DESCRIPTION
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
JFrog CLI includes functionality called "Sync Deletes", which allows you to sync files between the local file system and Artifactory.
This functionality is supported by both the "jfrog rt upload" and "jfrog rt download" commands. Both commands accept the optional --sync-deletes flag.
When uploading, the value of this flag specifies a path in Artifactory under which to sync the files after the upload. Once the upload finishes, this path will contain only the files uploaded during that operation; any other files under it will be deleted.
The same goes for downloading, but this time the value of the --sync-deletes flag specifies a path in the local file system, under which any files that were not downloaded from Artifactory are deleted.
Read more about this at the following link:
https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory
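For example (the repository and path names below are placeholders):
jfrog rt upload "build/*" my-repo/app/ --sync-deletes="my-repo/app/"
jfrog rt download "my-repo/app/*" app/ --sync-deletes="app/"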

Can Artifactory age artifacts to S3?

We have an Artifactory solution deployed and I am trying to figure out if it can meet my use case. Typically, artifacts are deleted within a week or so and fit in X GB of local storage, but we'd like to be able to:
Keep some artifacts around much longer, and since they are accessed infrequently, store them in AWS S3.
Sometimes artifacts can't be cleaned up in time, so we'd like to burst to the cloud when local storage overflows.
I was thinking I could do the following:
Local repository of X GB
Repo pointing to S3
Virtual repo in front of both of these
Set up a plugin to move artifacts from local to S3 according to our policies
However, I can't figure out what a Filestore is in Artifactory, and how you'd have two Repositories backed by different filestores.
Anyone have pointers to documentation or anything that can help? The docs I can find are rather slim on the high level details of filestores and repositories.
The Artifactory binary provider does not support configuring multiple storage backends, so it is impossible to use S3 and NFS in parallel. The main reason for this limitation is that Artifactory uses checksum-based storage, which stores each binary only once and keeps pointers from all relevant repositories. For that reason, Artifactory does not manage separate storage per repository.
For archiving purposes, one of the possible solutions is setting up another Artifactory instance which will take care of archiving. This instance can be connected to an S3 storage backend.
You can use replication to synchronize the two instances (without syncing deletes). You can have one or more repositories in your master Artifactory containing the artifacts that should be archived; those artifacts will be replicated to the archive Artifactory and can later be deleted from the master.
You can use a user plugin to decide which artifacts should be moved to the archive repository.
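As a sketch, a push replication from the master to the archive instance could be configured through Artifactory's replication REST API (the URLs, repository keys, credentials and cron expression below are all placeholders):
curl -u admin -X PUT "https://master.example.com/artifactory/api/replications/archive-local" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://archive.example.com/artifactory/archive-local",
    "username": "replicator",
    "password": "<password>",
    "enabled": true,
    "cronExp": "0 0 2 * * ?",
    "syncDeletes": false
  }'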

Shared Folders in Xen Hypervisor

I recently started using the Xen hypervisor, migrating from VirtualBox. My host system is Ubuntu 15.04 and the guest is Windows 7. Is there any way to use shared folders, similar to VirtualBox?
Thanks
To share files, you need a shared filesystem. There are two main classes of these:
network filesystems: NFS, Samba, 9p, etc.
clustered filesystems: GFS, OCFS2, CXFS, etc. They're designed for SAN systems where several hosts access the same storage box. In the VM case, if you create a single partition accessible from several VMs, you get exactly the same situation (a shared block device) and need the same solution.
What definitely won't work is using a 'normal' filesystem (ext3/4, XFS, ReiserFS, FAT, HPFS, NTFS, etc.) on a shared partition, just as it won't work on a shared block device. Since every filesystem aggressively caches metadata to avoid rereading the disk on every access, a VM won't be 'notified' if another one modifies a directory, so it won't 'notice' any change. Worse, since the cached metadata is no longer consistent with the content of the disk, any write will result in a heavily corrupted filesystem.
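For the Windows 7 guest in this question, the network-filesystem route would typically mean Samba on the Ubuntu host. A sketch (the share path and user name are placeholders):
# on the Ubuntu host (dom0):
sudo apt-get install samba
sudo tee -a /etc/samba/smb.conf <<'EOF'
[shared]
   path = /srv/share
   read only = no
EOF
sudo smbpasswd -a youruser
sudo systemctl restart smbd
# in the Windows 7 guest, map \\<host-ip>\shared as a network drive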
PS:
There was a project called XenFS which looked promising, but never reached a stable release.

Mounting two locations with html5fs and different PERSISTENT and TEMPORARY types

I am using the nacl_io library in my project. Is it possible to mount two locations with the html5fs file system but with different types, PERSISTENT and TEMPORARY, at the same time?
Thanks.
This is supported.
The nacl-spawn library in the naclports repo, which is used to build command line tools, does this by default. It mounts a temporary html5fs at /tmp and persistent ones at /mnt/html5 and /home/user.
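If you want to set this up yourself rather than relying on nacl-spawn, a minimal sketch with nacl_io's mount() could look like this (the mount points and the expected_size value are arbitrary choices):
#include "ppapi/c/pp_instance.h"
#include "ppapi/c/ppb.h"
#include "nacl_io/nacl_io.h"
#include <sys/mount.h>

void SetupFilesystems(PP_Instance instance, PPB_GetInterface get_interface) {
  nacl_io_init_ppapi(instance, get_interface);  /* must run before mount() */
  /* persistent HTML5 filesystem; expected_size is a quota hint in bytes */
  mount("", "/persistent", "html5fs", 0, "type=PERSISTENT,expected_size=1048576");
  /* temporary HTML5 filesystem for scratch data */
  mount("", "/tmp", "html5fs", 0, "type=TEMPORARY");
}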
