Copy OpenStack instance volume manually

I have a server with an OpenStack deployment on top. I've created an instance with a 10 GB volume and an Ubuntu image.
I would like to copy the volume manually using dd in order to analyze it. The volumes are supposed to be located in /dev under the names dm-x, where x is a number; the ID linking each volume to its instance can be found from the OpenStack dashboard and by looking at the symlinks in /dev/stack-volumes-lvmdriver1-id.
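For reference, the copy was done with a command along these lines (the device number and paths are illustrative, not the exact ones used):

# dm-3 stands in for the volume's device-mapper node
sudo dd if=/dev/dm-3 of=/home/user/volume-copy.img bs=4M status=progress
file /home/user/volume-copy.img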
Well, once I've dd'd the desired dm-x into a file in my home directory, I obtain a DOS/MBR file. But shouldn't it be a logical volume?
Thanks in advance.

Related

oc rsync fails due to no space on the disk in /tmp storage

I'm copying files from an OpenShift pod to a UNIX server. The files are gigabytes in size. I'm using oc rsync on the UNIX server, but it uses the /tmp directory as a cache while copying. The file size is greater than the size of /tmp, so I'm getting "no space left on the device".
Is there any way to point the cache at a different folder, or can we avoid the cache entirely?
You can try setting the TMP or TEMP variable to point to another directory with enough space.
The documentation should mention the proper variable if it's not one of the two above.
The following worked for me.
export TMPDIR="folder where data should be cached"
oc rsync pod:source_path target_path
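Putting it together with illustrative names (the pod and paths below are examples, not from the original setup):

export TMPDIR=/data/oc-cache   # any directory with enough free space
df -h "$TMPDIR"                # sanity-check the available space first
oc rsync mypod:/var/dumps/ /backup/dumps/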
Thanks to @Romeo Ninov for pointing me in the right direction.

Copy or move file into directory with parallel processing from another process

I'm running two processes on AIX. Process one generates several files; process two backs up all files that are in a backup directory.
Process one will copy or move the files into the backup directory. Since process two is always running in the background, there is a risk that it starts backing up a file that is still being copied or moved and is therefore incomplete. How can I avoid this problem?
Process one should create files in another directory (on the same disk) and, once a file is fully written, move it into the final directory, as sketched below. A move within one filesystem is an atomic operation, so process two will only find complete files.
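A minimal sketch of that pattern (paths are illustrative; the staging and backup directories must be on the same filesystem):

# process one: write into a hidden staging directory, then rename into place
staging=/backup/.staging
mkdir -p "$staging"
cp /data/report.txt "$staging/report.txt.$$"     # the slow copy happens here
mv "$staging/report.txt.$$" /backup/report.txt   # rename() is atomic within one filesystem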
Edit: on AIX, /usr/bin/istat helps to make sure that two directories (or files) are on the same disk/partition/device, e.g.
for Dir in /home /home/zsiga /tmp; do
    /usr/bin/istat "$Dir" | grep device
done
Result:
Inode 2 on device 10/8 Directory
Inode 33 on device 10/8 Directory
Inode 2 on device 10/7 Directory
The first two are on the same disk/partition/device (10/8); the last one is on another device (10/7).

TFS 2015 Build vNext: Upload Files via WebDAV?

We want to upload files to a WebDAV folder and tried to use the "Windows Machine File Copy" task for that (it uses ROBOCOPY internally), but we don't see any way to enter the destination folder in the right form.
1. We tried the UNC path
\\our.website.de@SSL\DavWWWRoot\remote.php\webdav\
2. We also mounted this location as a network drive first and tried to use the drive letter,
but neither attempt succeeded.
Is "Windows Machine File Copy" the right type of task for that or is there any custom task out there that accomplishes what we need?
Thx...
You should specify Machines and Folder Path separately; do not combine them into one path.
Machines:
Example: dbserver.fabrikam.com, dbserver_int.fabrikam.com:5986,192.168.34:5986
Folder Path:
The folder on the Windows machine(s) to which the files will be copied.
Example: C:\FabrikamFibre\Web
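An illustrative split for your case might look like this (values hypothetical; the target must be reachable as a local Windows path on that machine):

Machines: our.website.de
Folder Path: C:\inetpub\davroot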
For more details, please refer to this link: Deploy: Windows Machine File Copy

How to automate working directory change

I usually switch between Windows and Mac while accessing my R code from Google Drive. One of the repetitive tasks I need to do whenever I switch between my desktop and laptop is to (un-)comment the file path to the respective directory where my Google Drive is located. Can anyone share code to automate this? I am already doing this in Stata.
Usually, for each project or analysis that I start I use a "config-like" R file which looks more or less like this:
.job <- list ()
## rootDir in my laptop
.job$base_data_dir <- file.path ("", "home", "dmontaner", "datos")
## rootDir in my server
##.job$base_data_dir <- file.path ("", "scratch", "datos")
In this "config" file I set the root directory where I am keeping the data in each machine. I keep a different "config" file in each machine and do not synchronize them via dropbox.
Then I start my R scripts with this line:
try (source (".job.r"))
and when I have to address any file or folder I do:
setwd (file.path (.job$base_data_dir, "raw_data"))
...
setwd (file.path (.job$base_data_dir, "results"))
This way, as long as you keep the same internal structure of the data directory on both machines, you can set the base or root dir where it is located and reach the data on either machine.
Also, the file.path function takes care of the differences between operating systems.
In the R session I start the config variable with a dot so that it is a hidden variable and does not show up when I do ls() or similar things.
That's my solution:
setwd(ifelse(.Platform$OS.type=="unix", "/Users/.../Google Drive", "C:/Users/.../Google Drive/"))

Atomic rsync at directory level with minimum temporary storage

I have some files on a remote host (in a directory) and I want to pull them to the local host with rsync atomically at the directory level (in a distributed setup). One way I could think of is the trivial approach of backing up the files on the local host and then replacing the old files with the new ones, but that is not efficient as far as disk space is concerned: e.g. the files total 10 GB while the diff is just 100 MB.
Is there a way to store just the rsync diff on local host in temporary location and then update the files on local host?
You could do it like this:
Run rsync between the local host and a temp folder on the remote host. To make sure you only transfer the diff, use the --link-dest option and link to the real folder on the remote host.
You'd basically have a command like this:
rsync --link-dest="/var/www" --archive "/localhost/path/www/" "remote@example.com:/var/www_update_20131129"
(With /var/www being the files to update and /var/www_update_20131129/ being the "temp" folder)
Once the rsync operation is done, you can swap the www_update_20131129/ and the real www/ folders on the remote host (possibly by soft-linking www/ to www_update_20131129/), as sketched below.
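One way to do the final swap, assuming /var/www on the remote host is already a symlink to the current release directory (mv -T is GNU coreutils; paths taken from the example above):

ssh remote@example.com '
  ln -s /var/www_update_20131129 /var/www.new   # stage a new symlink
  mv -T /var/www.new /var/www                   # rename(2) flips the link atomically
'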
