XenServer storage overbooking

I cannot find any information about this topic. Can I overbook the storage for my virtual disks on XenServer 6.2? E.g. I have 2 TB of storage and 2 VMs with a 2 TB virtual disk each.

I'm using XenServer 6.2 with around 12 VMs and 4 TB of storage.
One of the things I've noticed is that even though the size of a VDI (virtual disk) is fixed, the amount actually used on disk varies depending on how full the virtual disks are.
For example, if you create the VMs with 2 TB each, this is a possible scenario:
VM1, after install and some moderate use, is using 1 TB out of 2 TB. Its VHD file would take up around 1 TB on the storage.
VM2, after install, is using 200 GB out of 2 TB. Likewise, it takes up around that much on the storage.
The problems you might encounter depend on the total amount of space you have on your SR.
When the VDIs grow, they don't release that space afterwards.
Let's say you delete 500 GB from VM1: it will still use 1 TB on the storage. And if VM2 grows past your storage limit, problems will appear; the OS will probably stop working when there's no more space to allocate.
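A quick way to keep an eye on this is to compare what the SR has actually used against what has been promised to VDIs (a sketch using the xe CLI; the SR name-label is only an example, substitute your own, and fill in the SR uuid from the first command):

# Per-SR view: promised allocation vs. real usage
xe sr-list name-label="Local storage" params=uuid,physical-size,physical-utilisation,virtual-allocation
# Per-VDI view within that SR
xe vdi-list sr-uuid=<sr-uuid> params=name-label,virtual-size,physical-utilisation

If virtual-allocation is larger than physical-size, the SR is overbooked and physical-utilisation is the number to watch.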
I hope my answer helps you.

I would suggest you use NFS, or re-create your LUN with an ext3 filesystem SR.
This will let you over-assign the storage, but you have to make sure the actual disk usage doesn't go beyond your 2 TB.
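For reference, creating an NFS SR looks roughly like this (a sketch; the server address and export path are placeholders):

xe sr-create name-label="NFS storage" type=nfs content-type=user shared=true \
  device-config:server=192.0.2.10 device-config:serverpath=/export/xenserver

NFS SRs store VDIs as sparse VHD files, so space is only consumed as the guests actually write data.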

Related

How many volumes can Ceph support?

You can use ceph-volume lvm create --filestore --data example_vg/data_lv --journal example_vg/journal_lv to create a Ceph volume, but I want to know how many volumes Ceph can support. Can it be infinite?
Ceph can serve an infinite number of volumes to clients, but that is not really your question.
ceph-volume is used to prepare a disk to be consumed by Ceph for serving capacity to users. The prepared volume will be served by an OSD and join the RADOS cluster, adding its capacity to the cluster's.
If your question is how many disks you can attach to a single cluster today, the sensible answer is “a few thousand”. You can push farther using a few tricks. Scale increases over time, but I would say 2,500-5,000 OSDs is a reasonable limit today.
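Once OSDs have joined, you can see how many there are and how much raw capacity the cluster has with the standard status commands (output layout varies a bit by release):

ceph -s        # cluster health, including the OSD count
ceph osd df    # per-OSD size and utilisation
ceph df        # total raw capacity and per-pool usage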

I want to cache frequently used static content on disk

We are going to deploy a storage server without RAID (we have lots of data but limited storage for now, and the data is not critical), so we will assign a subdomain to each of the 12 x 8 TB drives for our clients to download from.
Clients will be downloading content through a static URL over HTTP (http://subdomain1.xyzwebsite.com/folder1/file1.mkv). Our server is powerful, with 128 GB of RAM, 6 x 2-core processors and a 10-gigabit NIC, but without RAID, multiple clients downloading from the same drive will become a bottleneck. To work around this I started looking into Varnish Cache, but I don't have a clear picture of how it would serve the data (I don't understand setting the object size and manually setting the cache location to RAM or disk).
NOTE: each file can range from 500 MB to 4 GB in size.
We do not want a separate server for caching; we want to use this powerful server for it. My idea for a solution: the data sits on one drive, and if possible, frequently used content (files downloaded in the last 12 or 24 hours) would be copied/mirrored/cached to a second drive, with the same file served under the same subdomain.
NOTE: Nginx knows which file is accessed via access.log.
Scenario:
There are 12 drives (plus 2 separate OS drives, which I'm not counting here). I will store data on 11 drives and use the 12th drive as a copy/mirror/cache for all of them. I know how HTTP works: even if I add multiple IPs to the same domain, a client can only download from one IP at a time (I will add multiple IP addresses on the same server). So data would be served round-robin; if one client is downloading from one IP, another client might get to download from the second IP.
Now I don't know how to implement this. I tried searching for solutions but did not find any. There are two main problems:
How to copy/mirror/cache only the frequently used data from the 11 drives to the 12th drive and serve from it
If I add a second IP address entry for the same subdomain and the data is not on the 12th drive, how will it be fetched?
An Nginx- or Varnish-based solution on the same server is required; if a RAM-based cache can be added as well, that would be good too.
Varnish can be used for this, but unfortunately not the open source version.
Varnish Enterprise features the so-called Massive Storage Engine (MSE), which uses both disk and RAM to store large volumes of data.
Instead of using files to store objects, MSE uses pre-allocated large files with filesystem-like behavior. This is much faster and less prone to disk fragmentation.
In MSE you can configure how individual disks should behave and how much storage per disk is used. Each disk or group of disks can be tagged.
Based on Varnish Enterprise's MSE VMOD, you can then control what content is stored on each disk or group of disks.
You can decide how content is distributed to disk based on content type, URL, content size, disk usage and many other parameters. You can also choose not to persist content on disk, but just keep content in memory.
Regardless of this MSE VMOD, "hot content" will automatically be buffered from disk into memory. There are also "waterlevel" settings you can tune to decide how to automatically ensure that enough space is always available.
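If you end up going the open-source route instead, the "hot files on a dedicated cache drive" idea can be approximated with Nginx's proxy_cache: one internal vhost serves the data drive directly, and a front-end vhost caches popular responses on the 12th drive. A rough sketch only; the paths, sizes, internal port and server name are placeholders, not a tuned configuration:

cat > /etc/nginx/conf.d/hot-cache.conf <<'EOF'
# Cache lives on the 12th drive; objects unused for 24h are evicted.
proxy_cache_path /mnt/disk12/cache levels=1:2 keys_zone=hot:512m
                 max_size=7000g inactive=24h use_temp_path=off;

# Internal vhost serving files straight from one data drive.
server {
    listen 127.0.0.1:8080;
    root /mnt/disk1;
}

# Public vhost: serves from the cache drive when possible, otherwise
# fetches from the internal vhost and stores a copy.
server {
    listen 80;
    server_name subdomain1.xyzwebsite.com;
    location / {
        proxy_cache hot;
        proxy_cache_key $uri;
        proxy_cache_valid 200 206 24h;
        proxy_cache_min_uses 2;     # only cache files requested more than once
        proxy_pass http://127.0.0.1:8080;
    }
}
EOF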

Why is direct output to network share much slower than inter-buffering?

This is an Arch Linux system where I mounted a network device over SSHFS (SFTP) using GVFS, managed by the Nemo file manager. I'm using HandBrake to convert a video that lies on my SSD.
Observations:
If I encode the video using Handbrake and set the destination to a folder on the SSD, I get 100 FPS
If I copy a file from the SSD to the network share (without Handbrake), I get 3 MB/s
However, if I combine both (using Handbrake with the destination set to a folder on the network share), I get 15 FPS and 0.2 MB/s, both being significantly lower than the available capacities.
I suppose this is a buffering problem. But where does it reside? Is it Handbrake's fault, or perhaps GVFS caching not enough? Long story short, how can the available capacities be fully used in this situation?
When accessing the file over SFTP, HandBrake requests small portions of the file rather than the entire thing, which means it starts and finishes lots of transfers and adds that much more overhead.
Your best bet for solving this is to transfer the ENTIRE file to the SSD before performing the encoding. 3 MB/s is slower than direct access to an older, large-capacity mechanical drive, so it will not give you the performance you are looking for; writing directly to a network share is not recommended unless you can speed up those transfers significantly.
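In practice that workflow is just three steps (a sketch; the mount point, file names and preset are placeholders):

rsync -ah --progress ~/remote-share/input.mkv ~/encode/        # pull the source onto the SSD
HandBrakeCLI -i ~/encode/input.mkv -o ~/encode/output.mp4 --preset "Fast 1080p30"
rsync -ah --progress ~/encode/output.mp4 ~/remote-share/       # push the result back in one sequential transfer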

MariaDB recommended RAM, disk and core capacity?

I am not able to find MariaDB's recommended RAM, disk and number-of-cores capacity. We are setting up at an initial level with a very minimal data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!!!
Seeing that micro-service architecture has been spreading rapidly over the last few years, and each micro-service usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400 MB of RAM, which isn't noticeable on a database server with 64 GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128 MB, you're well over your 512 MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512 MB of RAM is tight, but probably adequate if only MariaDB is running. You would, however, need to aggressively shrink various settings in my.cnf (see the sketch below). Even 1 GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between operating systems and between versions of MariaDB.
Turn off most of the Performance_schema. If all the flags are turned on, lots of RAM is consumed.
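As an illustration of the kind of shrinking mentioned above, a my.cnf fragment for a small VM might look like this (the values are illustrative rather than tuned, and the config file path varies by distribution):

cat > /etc/mysql/conf.d/small-footprint.cnf <<'EOF'
[mysqld]
performance_schema      = OFF    # skips a few hundred MB of instrumentation buffers
innodb_buffer_pool_size = 64M    # default is 128M
key_buffer_size         = 8M
tmp_table_size          = 16M
max_heap_table_size     = 16M
max_connections         = 25
EOF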
20 years ago I had MySQL running on my personal 256 MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk; if you have only a few MB of data, then disk is not an issue.
Look at it this way: what is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably could not work, even before you add apps.
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers run a few databases, each with a handful of tables and a limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means a better cache and better performance. However, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same servers) consistently use more memory (about double) than the database.
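For reference, the resident memory of the server process can be checked directly (the process is named mysqld on most installs, mariadbd on newer MariaDB packages):

ps -C mysqld -o pid,rss,cmd      # RSS column is resident memory in KB
ps -C mariadbd -o pid,rss,cmd    # same check for newer MariaDB packages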

Upgrading an Amazon EC2 instance from t1.micro to medium: instance storage remains the same

We have been using a micro instance during our development phase. But now, as we are about to go live, we want to upgrade our instance to type medium.
I followed these simple steps: stop the running instance, change the instance type to medium, and then start the instance again. I can see the instance is upgraded in terms of memory, but the storage still shows as 8 GB. According to the published configuration, an m1.medium instance should have 1 x 410 GB of storage.
Am I doing anything wrong or missing out something? Please help!
Keep in mind, EBS storage (which you are currently using) and instance storage (which is what you are looking for) are two different things in EC2.
EBS storage is similar to a SAN volume. It exists outside of the host. You can create multiple EBS volumes of up to 1 TB each and attach them to any instance size. Smaller instances have less available bandwidth to EBS volumes, so they will not be able to take effective advantage of very many volumes.
Instance storage is essentially hard drives attached to the host. While it is included in the instance cost, it comes with some caveats. It is not persistent: if you stop your instance, or the host fails for any reason, the data stored on the instance store is lost. For this reason, it has to be explicitly enabled when the instance is first launched.
Generally, it is not recommended to use instance storage unless you are comfortable with, and have designed your infrastructure around, its non-persistence.
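If what you actually need is more persistent space, attaching an additional EBS volume is usually the simpler route (a sketch with the AWS CLI; the size, availability zone, IDs and device names are placeholders):

aws ec2 create-volume --size 410 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# inside the instance, create a filesystem and mount it (the kernel may expose /dev/sdf as /dev/xvdf)
sudo mkfs.ext4 /dev/xvdf && sudo mkdir -p /data && sudo mount /dev/xvdf /data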
The sizes listed for the instance types are just defaults. If you create an image from a running micro instance, it will keep that storage size as its default, even if the image is later launched as a medium instance.
But you can change the storage size when launching the instance, and you can also change the default storage size when creating an image.
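In the console this is done in the storage step of the launch wizard; with the AWS CLI the equivalent is a block-device mapping (a sketch; the AMI ID and size are placeholders, and /dev/sda1 is the usual root device name on older Amazon Linux AMIs):

aws ec2 run-instances --image-id ami-12345678 --instance-type m1.medium \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100}}]'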
WARNING: this resizes the volume. It will not necessarily resize the existing partition on it, nor the file system on that partition. On Linux it resized everything automagically (IIRC); on a Windows instance you will have to resize things yourself. For other OSes I have no idea.
I had a similar situation. I created an m1.medium instance with a 400 GB volume, but when I log into the shell and issue the command
df -h
... it shows an 8 GB partition.
However, the command
sudo fdisk -l
showed that the device was indeed 400 GB. The problem is that the root filesystem is still at its default 8 GB size, and it needs to be grown to fill the device. The command to do this is:
sudo resize2fs -f /dev/xvda1
where /dev/xvda1 is the mounted root volume. Use the 'df -h' command to be sure you have the right volume name.
Then simply reboot the instance, log in again, and you'll see that df -h now reports nearly 400 GB of available space. Problem solved.
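On newer HVM instances the root volume usually does carry a partition table, in which case the partition has to be grown before the filesystem (a sketch; growpart ships in the cloud-utils/cloud-guest-utils package, and the device names are examples):

lsblk                        # confirm the disk and partition names
sudo growpart /dev/xvda 1    # grow partition 1 to fill the disk
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem to fill the partition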
