Shared Folders in Xen Hypervisor

I recently started using the Xen hypervisor after migrating from VirtualBox. My host system is Ubuntu 15.04 and the guest is Windows 7. Is there any way to use shared folders, similar to VirtualBox?
Thanks

To share files, you need a shared filesystem. There are two main
classes of these:
network filesystems: NFS, Samba, 9p, etc. (a Samba share is sketched below)
clustered filesystems: GFS, OCFS2, CXFS, etc. These are designed for
SAN setups where several hosts access the same storage box. In the VM
case, if you create a single partition accessible from several VMs you
get exactly the same situation (a shared block device) and need the
same solution.
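For a host-to-guest share, the simplest route here is a network filesystem. A minimal sketch, assuming the Windows 7 guest can reach the Ubuntu host over the Xen bridge network (the share name, path and user below are made up):
# On the Ubuntu host:
$ sudo apt-get install samba
$ sudo mkdir -p /srv/xenshare
$ sudo tee -a /etc/samba/smb.conf <<'EOF'
[xenshare]
   path = /srv/xenshare
   read only = no
   valid users = myuser
EOF
$ sudo smbpasswd -a myuser        # myuser must already exist as a system account
$ sudo systemctl restart smbd
# In the Windows 7 guest, map \\<host-ip>\xenshare as a network drive.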
What definitely won't work is to use a 'normal' filesystem (ext3/4,
XFS, ReiserFS, FAT, HPFS, NTFS, etc.) on a shared partition, just as
it won't work on a shared block device. Every such filesystem
aggressively caches metadata to avoid rereading the disk on every
access, so one VM won't be 'notified' if another modifies a directory
and won't 'notice' any change. Worse, since the cached metadata is no
longer consistent with the contents of the disk, any write will
result in a heavily corrupted filesystem.
PS:
There was a project called XenFS which looked promising, but never reached a stable release.

Related

How to increase capacity of open stack hypervisor local disks

I set up OpenStack with DevStack this time, but I think local storage will run short, so I want to add more. We have a 2 TB hard drive in a RAID array that we would like to add to the OpenStack node. Do you know how to do that?
If you are talking about ephemeral storage, I believe Devstack puts that on /opt/stack/data/nova/instances (you may want to double-check that on your system). You would have to mount your disk there.
You may want to consider putting the entire /opt/stack onto the disk, since other large storage consumers such as Cinder and Swift would then have lots of space as well.
I am a bit worried. What are your plans with this Devstack? Note that Devstack is suitable for testing, experimenting and learning, but nothing else. Among other problems, a Devstack cloud can't be rebooted without manual intervention.
EDIT:
I would try it as follows:
delete all instances
stop Nova, e.g. systemctl stop devstack@n-*
mount the hard drive somewhere, for example on /mnt/2tb
copy /opt/stack/data/nova to the drive, e.g. cd /opt/stack/data/nova; find . | cpio -pdumva /mnt/2tb
mount the 2TB drive on /opt/stack/data/nova
The first two steps are precautions against copying inconsistent data.
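A concrete command sketch of the steps above (the RAID device name /dev/md0 and the mount points are assumptions; adjust to your system):
# after deleting all instances:
$ sudo systemctl stop "devstack@n-*"
$ sudo mkdir -p /mnt/2tb
$ sudo mount /dev/md0 /mnt/2tb
$ cd /opt/stack/data/nova && sudo find . | sudo cpio -pdumva /mnt/2tb
$ sudo umount /mnt/2tb
$ sudo mount /dev/md0 /opt/stack/data/nova    # add an /etc/fstab entry to make this permanent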
Better yet, reinstall that server, mount the 2TB drive on /opt, then rebuild DevStack.
Note that this only works if your instances use ephemeral storage. If they use volumes, you need to copy the Cinder data instead of the Nova data.

JxBrowser: (why) can I (not) use URI path for cache directories?

I evaluated JxBrowser a short while ago. The following questions came to mind:
Can I use Java URIs to "reroute" all temporary files from the underlying Chromium engine through a custom FileSystemProvider like encFs4J?
The reason I want to do that is to comply with data privacy laws. Since browsers cannot be forced by a web application to clear their cache or to store temporary files in a safe manner, I thought I could use JxBrowser for this. If I can handle all files myself, I can do some crypto magic so that (almost) no one has access to the data besides my application.
There is an API to define the directories via BrowserContextParams.
However, only absolute paths are allowed. URIs are not accepted.
Instead of doing
BrowserContext context = new BrowserContext(new BrowserContextParams("C:\\Chromium\\Data"));
Browser browser1 = new Browser(context);
I would like to do something like
BrowserContext context = new BrowserContext(new BrowserContextParams(new URI("enc+file:///C:/Chromium/Data")));
Browser browser1 = new Browser(context);
Does anyone know of a way to tap into the file handling routines of a process like JxBrowser? Can I somehow add this functionality like a wrapper around it?
I considered using something like VeraCrypt for this. But this is no good in terms of usability, since you have to install virtual hard drive drivers. This is overkill for a rather simple issue.
The underlying Chromium engine in JxBrowser does not use the Java IO API to access files. Only a path string to the data directory is passed to the Chromium engine, and it decides by itself how to handle all IO operations.
There is a mode in Chromium called incognito. In that mode all the files, including cookies, cache and history, are stored in memory; nothing is stored on the hard drive, so once you close the application, all the data is cleared automatically. If this meets your requirements, we could investigate how to enable incognito mode in JxBrowser.
I will accept Artem's answer to the original question. Incognito / private browser sessions, as long as they do not store anything on the hard disk, would be a perfect and simple solution.
Furthermore, I want to share my research on this topic. The following answer is not related to JxBrowser but applies to any 3rd-party applications and libraries which do not support URI paths or require additional safeguarding of (temporary) files.
Option 1: RamDisk
needed: kernel-mode driver for a RAM disk
privileges: admin once (to install the driver)
usability: might be seamless, if the application can handle the RAM disk in code (not researched)
Install a RAM disk which can "catch" the files. If the RAM disk only persists while the application is running, it is already cleaned up automatically. (not researched for feasibility)
With a custom RAM disk implementation one could perform additional steps; a sketch of the idea follows.
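A minimal sketch of the RAM disk idea on Linux using tmpfs (on Windows a third-party RAM disk driver would be needed instead; the size and mount point are assumptions):
# create a 512 MB RAM-backed filesystem and point the browser data directory at it
$ sudo mkdir -p /mnt/ramcache
$ sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramcache
# everything under /mnt/ramcache disappears at unmount or shutdown
$ sudo umount /mnt/ramcache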
Option 2: Virtual File System, e.g. VeraCrypt
needed: VeraCrypt, kernel mode driver
privileges: admin once (to install the driver)
usability: user has to mount container manually before using the application
Due to usability issues this was not further researched.
Option 3: embedded SMB server with local share
needed: SMB server implementation (e.g. JVLAN for Java), creating a server and share in code
privileges: user (a non-privileged port such as 1445 can be used under Linux, etc.)
usability: seamless for the user, but quite a complicated solution for a simple issue
Steps: start an SMB server in code, add a share and user authentication (optional), mount the share to a local drive (Windows) or mount point (Linux), and use an absolute path to access the files on the locally mounted share. If the application crashes, the volatile / in-memory key for the "real" file encryption of the SMB server is lost and the files are safe from other eyes.
This option also has more potential, like clearing files once they are read, controlling access by third-party apps, and many more, even freakier, ideas.

Drupal & NFS Directory

I have two parallel Drupal web servers running (together serving one Drupal instance) and now I need to set up NFS. My experience with multiple Drupal servers is that each Drupal instance (server) uses its own aggregated JS + CSS files (stored in the sites/default/files/js and sites/default/files/css folders), which cannot be shared: the files cannot be the same for both servers; each uses its own.
Based on these issues, my questions are:
How does NFS actually work between multiple Drupal servers?
Which directories will be/need to be shared between?
What will happen to Aggregated Files?
What will happen to Web User Uploads paths and files? (Need any configuration in Drupal?)
Can anyone share their knowledge/experience, please?
You can definitely make NFS work with Drupal.
I do not understand why you do not want to share the files directory between both servers.
In fact you have two solutions:
1) Share the whole source tree, starting at the web directory root, or even higher up if you have external directories for private files
2) Share only the changing directories and keep the code base synchronised before and after any upgrade via some rsync commands. In this case you need to share between servers (see the sketch after this list):
the files directories (project/www/sites/default/files and any per-site files directories)
the private files directories (project/private) <-- it's an example
the php temporary upload path (project/tmp for example); check that both servers use the same folder (it's a php setting) and that this folder is shared.
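A minimal sketch of solution 2 using NFS (host names, export options and paths are assumptions):
# /etc/exports on the server that holds the shared data (e.g. web1):
/project/www/sites/default/files  web2(rw,sync,no_subtree_check)
/project/private                  web2(rw,sync,no_subtree_check)
/project/tmp                      web2(rw,sync,no_subtree_check)
# on the other web server (web2):
$ sudo mount -t nfs web1:/project/www/sites/default/files /project/www/sites/default/files
$ sudo mount -t nfs web1:/project/private /project/private
$ sudo mount -t nfs web1:/project/tmp /project/tmp
# keep the code base itself in sync around upgrades, excluding the shared files:
$ rsync -az --delete --exclude 'sites/default/files' /project/www/ web2:/project/www/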
Before Drupal 7 I would have used solution 1; now the number of internal filesystem tree traversals Drupal launches on many occasions makes it very slow on a slow filesystem (and NFS is usually quite slow). Using APC with all filesystem checks disabled (apc.stat, apc.stat_ctime, etc.) does not prevent Drupal from trying to access every file on your filesystem on many occasions. So solution 2 is to be preferred.
I did not experience any problems with file aggregation with such installations.

Is it possible and safe to create a RamDisk on an existing EC2 instance? If possible, how to do that?

I have an EC2 instance that currently holds our web site, and we are thinking about pointing our "Temporary ASP.NET Files" compilation folder to a RamDisk on EC2.
Not sure if it's a good idea and couldn't find much info around.
Ramdisks are normally used to speed up access to commonly used files. In your case I assume that speed is not the issue, but that you want to save space on your main drive. In that case I would use the EC2 instance storage: in contrast to the EBS-backed main storage, it does not persist when the instance is stopped or terminated.
Here is a posting on Serverfault that shows you how to mount the instance storage. Also have a look at the instance storage in the AWS documentation. Then you would have to move your Temporary ASP.NET Files folder to one of the instance-backed disks.
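For reference, a minimal sketch of preparing an instance-store volume on a Linux instance (device names vary by instance type; on a Windows instance the ephemeral disk usually just appears as an extra drive letter that you can point the compilation folder at):
# list block devices and find the ephemeral one (often /dev/xvdb on older instance types)
$ lsblk
$ sudo mkfs -t ext4 /dev/xvdb          # device name is an assumption
$ sudo mkdir -p /mnt/ephemeral
$ sudo mount /dev/xvdb /mnt/ephemeral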

Keeping dot files synched across machines?

Like most *nix people, I tend to play with my tools and get them configured just the way that I like them. This was all well and good until recently. As I do more and more work, I tend to log onto more and more machines, and have more and more stuff that's configured great on my home machine, but not necessarily on my work machine, or my web server, or any of my work servers...
How do you keep these config files updated? Do you just manually copy them over? Do you have them stored somewhere public?
I've had pretty good luck keeping my files under a revision control system. It's not for everyone, but most programmers should be able to appreciate the benefits.
Read "Keeping Your Life in Subversion" for an excellent description, including how to handle non-dotfile configuration (like cron jobs via the svnfix script) on multiple machines.
I also use subversion to manage my dotfiles. When I login to a box my confs are automagically updated for me. I also use github to store my confs publicly. I use git-svn to keep the two in sync.
Getting up and running on a new server is just a matter of running a few commands. The create_links script just creates the symlinks from the .dotfiles folder items into my $HOME, and also touches some files that don't need to be checked in.
$ cd
# checkout the files
$ svn co https://path/to/my/dotfiles/trunk .dotfiles
# remove any files that might be in the way
$ .dotfiles/create_links.sh unlink
# create the symlinks and other random tasks needed for setup
$ .dotfiles/create_links.sh
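The create_links.sh itself isn't shown here; a hypothetical minimal version matching the description above might look like this (file layout and the unlink behaviour are assumptions):
#!/bin/sh
# Symlink everything in ~/.dotfiles into $HOME; "unlink" removes the links instead.
DOTFILES="$HOME/.dotfiles"
for f in "$DOTFILES"/.[!.]*; do
    name=$(basename "$f")
    case "$name" in .svn|.git) continue ;; esac   # skip VCS metadata
    if [ "$1" = "unlink" ]; then
        rm -f "$HOME/$name"
    else
        ln -sfn "$f" "$HOME/$name"
    fi
done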
It seems like everywhere I look these days I find a new thing that makes me say "Hey, that'd be a good thing to use Dropbox for".
Rsync is about your best solution. Examples can be found here:
http://troy.jdmz.net/rsync/index.html
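For example, a one-liner along these lines pushes a few dotfiles to a remote host over ssh (the file list and host are assumptions):
$ rsync -avz -e ssh ~/.bashrc ~/.vimrc ~/.gitconfig user@remotehost:~/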
I use git for this.
There is a wiki/mailing list dedicated to the topic.
vcs-home
I would definitely recommend homesick. It uses git and automatically symlinks your files. homesick track tracks a new dotfile, while homesick symlink symlinks new dotfiles from the repository into your home folder. This way you can even have more than one repository.
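A quick sketch of that workflow (the castle name and repository URL are placeholders):
$ gem install homesick
$ homesick clone git@github.com:yourname/dotfiles.git   # creates the "dotfiles" castle
$ homesick symlink dotfiles                             # link its files into $HOME
$ homesick track .vimrc dotfiles                        # start tracking a new dotfile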
You could use rsync. It works through ssh which I've found useful since I only setup new servers with ssh access.
Or, create a tar file that you move around everywhere and unpack.
I store them in my version control system.
I use svn ... having a public and a private repository ... so as soon as I get on a server I just
svn co http://my.rep/home/public
and have all my dot files ...
I store mine in a git repository, which allows me to easily merge beyond system dependent changes, yet share changes that I want as well.
I keep master versions of the files under CM control on my main machine, and where I need to, arrange to copy the updates around. Fortunately, we have NFS mounts for home directories on most of our machines, so I actually don't have to copy all that often. My profile, on the other hand, is rather complex - and has provision for different PATH settings, etc, on different machines. Roughly, the machines I have administrative control over tend to have more open source software installed than machines I use occasionally without administrative control.
So, I have a random mix of manual and semi-automatic process.
There is netskel where you put your common files on a web server, and then the client program maintains the dot-files on any number of client machines. It's designed to run on any level of client machine, so the shell scripts are proper sh scripts and have a minimal amount of dependencies.
Svn here, too. Rsync or unison would be a good idea, except that sometimes stuff stops working and I wonder what was in my .bashrc file last week. Svn is a life saver in that case.
Now I use Live Mesh which keeps all my files synchronized across multiple machines.
I put all my dotfiles into a folder in Dropbox and then symlink them on each machine. Changes made on one machine are available to all the others almost immediately. It just works.
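For example (the Dropbox folder layout here is an assumption):
$ ln -s ~/Dropbox/dotfiles/.vimrc  ~/.vimrc
$ ln -s ~/Dropbox/dotfiles/.bashrc ~/.bashrc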
Depending on your environment you can also use (fully backed up) NFS shares ...
Speaking of storing dotfiles publicly, there are
http://www.dotfiles.com/
and
http://dotfiles.org/
But it would be really painful to update your files manually, as (AFAIK) neither of these services provides any API.
The latter is really minimalistic (no contact form, no information about who made/owns it, etc.).
briefcase is a tool to facilitate keeping dotfiles in git, including those with private information (such as .gitconfig).
By keeping your configuration files in a public git repository, you can share your settings with others. Any secret information is kept in a single file outside the repository (it's up to you to back up and transport this file).
-- http://jim.github.com/briefcase
mackup
https://github.com/lra/mackup
lra/mackup is a utility for Linux & Mac systems that will sync application preferences using almost any popular shared storage provider (Dropbox, iCloud, Google Drive). It works by replacing the dot files with symlinks.
It also has a large library of hundreds of applications that are supported https://github.com/lra/mackup/tree/master/mackup/applications
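Basic usage is roughly as follows (assuming Dropbox as the storage backend; check the project README for the current commands):
$ pip install --user mackup
$ mackup backup    # move supported config files into the storage folder and leave symlinks behind
$ mackup restore   # on a new machine, recreate the symlinks from the synced copies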
