OpenStack Nova mount volumes

I have one volume mounted as /vda, and I created a new volume which is /vdb. Now, when I mount this new volume, I guess I can mount it onto an existing directory on vda, hence growing the overall size of the existing directory?

larsks is correct. This is mostly an operating-systems question, and it would be better asked on another site like Server Fault, Unix & Linux, or Ask Ubuntu.
But since you are here, and since you asked in the context of OpenStack, I'll answer here.
Now when I mount this new volume I guess I can mount it to an existing directory on vda? Hence growing the overall size of the existing directory?
The short answer is No. That's not how UNIX / Linux filesystems work.
When an Openstack Volume is attached to an Instance, it is viewed by the guest OS as a virtual disk device. You then (typically) format it to contain a file system and mount the file system on top of a directory in the root file system.
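For example, a minimal sketch of that sequence inside the guest, assuming the new Volume shows up as /dev/vdb and the mount point is /mnt/data (both names are placeholders):
mkfs.ext4 /dev/vdb          # create a file system on the new virtual disk
mkdir -p /mnt/data          # directory to mount it on
mount /dev/vdb /mnt/data    # mount the new file system over that directory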
(For a real physical disk, you normally partition the disk as well, and put the file systems into some of the partitions. It is not normal practice to do that with a Volume.)
What you want here is a single file system that spans multiple Volumes; i.e. multiple virtual disks. This is possible if you use Linux LVM (see https://opensource.com/business/16/9/linux-users-guide-lvm). However, it is not possible to do an in-place conversion of an existing ordinary device into an LVM; see https://serverfault.com/questions/241987/convert-full-hard-drive-to-lvm-without-external-storage.
You probably need to do something like this (a rough command sketch follows these steps):
Read all about LVM ... and decide if you can deal with the extra complexity.
Back up all your data!!
Attach your new Volume to the Instance.
Set it up for LVM, create an LVM volume on it, and format that volume with a file system.
Mount the volume.
Copy the file system from your existing Volume into the new file system.
Unmount the file systems for the old volume and the new volume.
Add the old volume into LVM and make it part of the existing LVM volume.
Resize the file system so that it can use all of the space in the combined LVM volume.
Mount the file system.
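As a rough command sketch of the steps above, assuming the new Volume appears as /dev/vdb, the old Volume as /dev/vdc with its data mounted on /mnt/old, and an ext4 file system (all names are placeholders, not taken from your setup):
pvcreate /dev/vdb                        # initialise the new volume for LVM
vgcreate datavg /dev/vdb                 # create a volume group on it
lvcreate -n datalv -l 100%FREE datavg    # one logical volume using all the space
mkfs.ext4 /dev/datavg/datalv             # put a file system on the logical volume
mount /dev/datavg/datalv /mnt/new
rsync -a /mnt/old/ /mnt/new/             # copy the existing data across
umount /mnt/old /mnt/new
pvcreate /dev/vdc                        # WARNING: this wipes the old volume
vgextend datavg /dev/vdc                 # add the old volume to the volume group
lvextend -l +100%FREE datavg/datalv      # grow the logical volume into the new space
e2fsck -f /dev/datavg/datalv             # check the file system before resizing
resize2fs /dev/datavg/datalv             # grow it to fill the combined logical volume
mount /dev/datavg/datalv /mnt/new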
If you have a file system split over two Volumes, your data integrity will depend on both of them. I think you are better off either using the two Volumes as separate filesystems or creating a single large Volume. Did you know that you can resize a Volume in some circumstances?
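For reference, a hedged sketch of the resize route (the volume name and target size are placeholders; depending on your OpenStack release the volume may need to be detached first, and the file system still has to be grown inside the guest afterwards):
openstack volume set --size 200 my-volume    # extend the Cinder volume to 200 GB
resize2fs /dev/vdb                           # then grow the ext4 file system inside the instance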

How to increase capacity of OpenStack hypervisor local disks

I set up OpenStack with DevStack this time, but I think there will be a shortage of local storage, so I'm going to add some. We have a 2 TB hard drive (a RAID array) that we want to add to the OpenStack host; do you know how to do that?
If you are talking about ephemeral storage, I believe Devstack puts that on /opt/stack/data/nova/instances (you may want to double-check that on your system). You would have to mount your disk there.
You may want to consider putting the entire /opt/stack onto the disk, since other large storage consumers such as Cinder and Swift would then have lots of space as well.
I am a bit worried. What are your plans with this Devstack? Note that Devstack is suitable for testing, experimenting and learning, but nothing else. Among other problems, a Devstack cloud can't be rebooted without manual intervention.
EDIT:
I would try it as follows:
delete all instances
stop Nova, e.g. systemctl stop devstack@n-*
mount the hard drive somewhere, for example on /mnt/2tb
copy /opt/stack/data/nova to the drive, e.g. cd /opt/stack/data/nova; find . | cpio -pdumva /mnt/2tb
mount the 2TB drive on /opt/stack/data/nova
The first two steps are precautions against copying inconsistent data.
Better yet, reinstall that server, mount the 2 TB drive on /opt, then rebuild the Devstack.
Note that this only works if your instances use ephemeral storage. If they use volumes, you need to copy the Cinder data instead of the Nova data.
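Put together, a rough sketch of the whole move, assuming the 2 TB drive is /dev/sdb1 (device name is a placeholder):
systemctl stop 'devstack@n-*'                       # stop the Nova services
mkfs.ext4 /dev/sdb1                                 # format the new drive, if not done already
mkdir -p /mnt/2tb && mount /dev/sdb1 /mnt/2tb
cd /opt/stack/data/nova && find . | cpio -pdumva /mnt/2tb
cd / && umount /mnt/2tb
mount /dev/sdb1 /opt/stack/data/nova                # the copied data now sits where Nova expects it
echo '/dev/sdb1 /opt/stack/data/nova ext4 defaults 0 2' >> /etc/fstab
systemctl start 'devstack@n-*'                      # start Nova again (list the units explicitly if the glob matches nothing)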

Docker layer reuse

Artifactory at the moment stores multiple duplicate Docker image layers. If image A and image B both depend on layer SHA__12345, then Artifactory will store both layer copies. That is not a problem unless the layer SHA__12345 is a gigabyte in size. In that case you can run out of space really quickly.
Is there a way in Artifactory to deduplicate overlapping layers for storage reasons?
Thanks!
Artifactory uses checksum-based storage:
A file that is uploaded to Artifactory, first has its SHA1 checksum calculated, and is then renamed to its checksum. It is then hosted in the configured filestore in a directory structure made up of the first two characters of the checksum. For example, a file whose checksum is "ac3f5e56..." would be stored in directory "ac"; a file whose checksum is "dfe12a4b..." would be stored in directory "df" and so forth.
In parallel, Artifactory creates a database entry mapping the file's checksum to the path it was uploaded to in a repository. This way of storing binaries optimizes many operations in Artifactory, since they are implemented through simple database transactions rather than actually manipulating files.
One implication of this is that artifacts are deduplicated in general. Any two artifacts with the same checksum will point to the same file in storage, even if they're in different repositories. This applies to docker layers, as well as all other artifacts. So you shouldn't be having any issues with this.
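If you want to convince yourself, you can compare a layer's checksum with what is on disk. A rough sketch, assuming a default local filestore under $ARTIFACTORY_HOME/data/filestore (the exact path depends on your version and filestore configuration):
sha1sum layer.tar.gz
# suppose this prints: ac3f5e56...  layer.tar.gz
ls -l $ARTIFACTORY_HOME/data/filestore/ac/ac3f5e56...
# however many images or repositories reference that layer, only this one copy exists on disk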

I can't pack my Data.fs because it is too large (more than 500 GB)

Unfortunately, I have a more than 500 GB ZODB Data.fs in my Plone site (Plone 5.05).
So I have no way to use bin/zeopack to pack it,
and this is seriously affecting performance.
What should I do?
I assume you're running out of space on the volume containing your data.
First, try turning off pack-keep-old in your zeoserver settings:
[zeoserver]
recipe = plone.recipe.zeoserver
...
pack-keep-old = false
This will disable the creation of a .old copy of the Data.fs file and matching blobs. That may allow you to complete your pack.
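A sketch of the sequence, assuming a standard buildout-based install where the ZEO part is called zeoserver (the script names are whatever your buildout generates):
bin/buildout -N        # regenerate the ZEO server configuration with pack-keep-old disabled
bin/zeoserver restart  # restart the ZEO server so the new setting takes effect
bin/zeopack            # pack the storage; no .old copy will be kept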
Alternatively, create a matching Zope/Plone install on a separate machine or volume with more storage and copy over the data files. Run zeopack there. Copy the now-packed storage back.

Why is rsync so slow?

I use rsync to back up a 60 GB folder from my laptop to an external USB drive. Only 4 GB of data has been added, yet it took a long time to finish: 2 hours.
Here is the command :
rsync -av --exclude=target/ --exclude=".git/" --delete --link-dest=$destdir/backup.1 $element $destdir/backup.0
Do you have an explanation?
What slows down rsync more: a lot of small files, or big binary files (photos)?
As I don't exactly know your system, I am making a few assumptions here. If they don't match your situation, please clarify your question and I'll happily update my answer.
I am assuming you have a lot of files, regardless of their sizes, in the location you are copying from. This will make rsync rather slow, due to the design of the rsync protocol.
rsync works like this:
1. Build a file-list of the source location.
2. For all files in the source location:
a. Get the size and the mtime (modification timestamp)
b. Compare it with the size and mtime of the copy in the destination location
c. If they differ, copy the file from the source to the destination
Done.
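To see which files rsync actually decides to transfer, i.e. which ones fail the size/mtime check in step 2, a dry run with itemized output can help; reusing the command from the question:
rsync -avn --itemize-changes --exclude=target/ --exclude=".git/" --delete --link-dest=$destdir/backup.1 $element $destdir/backup.0
# -n performs a dry run; --itemize-changes prints one line per file explaining why it would be transferred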
If you just have a few files, this will obviously be faster than with many files. Your USB drive might be your bottleneck, as retrieving the size and timestamp will cause a lot of jumps in the inode table.
Maybe a tool like iotop (in case you're on Linux; similar tools are available for almost all platforms) can help you identify the bottleneck.
The --delete option can also cause a slow rsync, if retrieving the complete file list of the target location is slow (which is probable for an external, rotating USB disk). To verify that this is the problem, on any OS with a Bash shell, just type time ls -Ral <target-location> > filelist.txt (redirecting the output to a file, since printing data to the screen is much slower). If this takes a lot longer than for your source disk, your target disk could be the bottleneck.

Unix - server gets polluted - find out where new files get stored

My server has no available space left on disk. Yesterday I deleted 200 GB of data; today it is full again. Some process must be writing files. How do I find out where the new huge files are being stored?
Use df to check partition usage.
Use du to find sizes of folders.
I tend to do this:
du -sm /mount/point/* | sort -n
This gives you a list with the size of folders in MB in the /mount/point folder.
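To catch files that appeared or grew recently, you can also combine find with size and modification-time filters, for example (the thresholds are just examples):
find / -xdev -type f -size +100M -mtime -1 -exec ls -lh {} + 2>/dev/null
# files over 100 MB modified within the last 24 hours, staying on the root file system (-xdev)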
Also, if you have X, you can use baobab or similar utilities to explore disk usage.
PS: check the log files. For example, if you have Tomcat installed, it tends to generate a crazy amount of logs if not configured properly.
