How to back up KVM volumes using incremental disk backup with "push" and "pull" mode in libvirt - openstack

I am working on a task to back up VM image volumes to another server and location. The problem is, I don't want to copy the whole image file each time I start a backup job; I want to copy the whole image only once and then back up incrementally each time I back up the VM.
Is there any way to do that? I don't want to use snapshots, because as the number of snapshots increases, volume performance suffers.
If there is another way, or a way to use snapshots more efficiently, please tell me.
I have tried volume snapshots locally; I want to know how to do this externally, or any other efficient way to do incremental external backups.

I've been working on a project that lets you create full and incremental or differential backups of libvirt/KVM-based virtual machines. It works by using the latest features provided by the libvirt and qemu projects (changed block tracking). So if the libvirt/qemu versions you are using support these features, you might want to give it a try:
https://github.com/abbbi/virtnbdbackup
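For illustration, a typical invocation might look like the following (the domain name vm1 and the target directory are assumptions; check the project's README for the exact options):
# one-time full backup of domain vm1
virtnbdbackup -d vm1 -l full -o /backup/vm1
# later runs only save the blocks changed since the last backup
virtnbdbackup -d vm1 -l inc -o /backup/vm1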


How to convert a snapshot to an Image in OpenStack?

It seems that snapshots and instances are very similar (e.g. https://serverfault.com/questions/527449/why-does-openstack-distinguish-images-from-snapshots).
However, I've been unable to share snapshots globally (i.e. across all projects). Note that I'm a user of the OpenStack installation, not an administrator of it.
Assuming that images don't suffer the same limitation as snapshots, is there a procedure for converting a snapshot to an image? If not, maybe I should ask a separate question, but my cloud admin tells me it needs to be an image.
For download:
glance image-download
Initially I tried this:
openstack image save --file NixOS.qcow2 5426bbcf-06b3-42f3-b117-37395e7dde83
However, the reported size of NixOS.qcow2 was always 0 bytes. Not good. The issue was apparently related to the fact that 0 bytes is also what OpenStack and Horizon reported for the size of the snapshot. So something weird was going on, but functionally I could still use the snapshot to create instances without issue.
I then created a volume from the snapshot in Horizon (with the instance shut off; I couldn't create a volume while it was shelved), then used this command to create an image from the newly created volume (NixSnapVol):
openstack image create --volume NixSnapVol NixSnapVol-img
Interestingly, the reported size went from 41GB to 45GB; maybe that was part of the issue. Anyway, it seems to work now, and as a bonus the image is now RAW instead of qcow2, so I don't need to do the conversion (our system largely benefits from using RAW as we have a Ceph backend).
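For reference, the volume-from-snapshot step can also be done on the command line instead of in Horizon (the snapshot name NixOS-snap is an assumption; the rest matches the example above):
# create a volume from the snapshot, then an image from that volume
openstack volume create --snapshot NixOS-snap NixSnapVol
openstack image create --volume NixSnapVol NixSnapVol-img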

Can a serverless build service (e.g. AWS CodeBuild) be used to build Android AOSP ROMs?

As far as I understand, AWS CodeBuild is frequently used to build Android apps.
Could a serverless build service like CodeBuild also be used to build a complete custom ROM based on AOSP?
The output should be the device-specific image files, e.g. boot.img, system.img, ...
The idea is to avoid setting up and maintaining a dedicated (virtual) machine with the full AOSP build environment.
Maybe, but probably not. Building AOSP requires 16GB of RAM; this is a hard requirement. I've tried it with less: you can get away with 12GB of RAM and 4GB of swap, but 4GB of RAM with 12GB of swap doesn't work.
Anyway, why does this matter?
Because the largest compute type available for AWS CodeBuild offers 15GB of memory.
It's also just impractical. The AOSP source code is ~80GB in size and takes hours to download in full. You don't want to do that every time; at most, you want to sync with the latest changes and move on, as in the commands below.
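For reference, the initial checkout and a later incremental sync look roughly like this (the manifest branch is only an example):
# one-time initialization against an AOSP release branch
repo init -u https://android.googlesource.com/platform/manifest -b android-12.0.0_r1
# downloads everything the first time, only the changes on later runs
repo sync -j8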
AWS instances are also virtualized, which has a significant impact on build time.
As much as I love the cloud, if you want to set up a build server for AOSP, your best bet is to purchase a decent Linux workstation to act as your build server. It's a bit of up-front cost, but you'll get it back a hundredfold in development time saved.

Sonatype Nexus - Full Blob folder

The blobs folder on my Sonatype Nexus has completely filled the server's disk.
Does anyone know how to make room? Is there an automatic way to free that space, or do I have to do it manually?
And lastly: what happens if I completely remove all the data in the directory blobs/default/content?
Thank you all in advance
Marco
In NXRM3, the blobstore contains all the components of your repository manager. If your disk is full, you will not be able to write anything more to NXRM and risk corruption of existing data.
Cleanup can be performed using scheduled tasks. What you need varies based on which formats your system is using. You can find more general information here: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-ConfiguringandExecutingTasks
It is important to note that you must run the "Compact blob store" task after any cleanup is done, otherwise the space will not be freed.
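To verify that compaction actually freed space, you can check the blobstore's size on disk before and after running the task (the path below is a common default; yours may differ):
du -sh /opt/sonatype-work/nexus3/blobs/default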
However, if you have reached full disk space, it is advisable to shut down and restore from a backup in case there is corruption, preferably giving yourself a larger disk for your blobstore before restarting.
RE "what happens if I completety remove all the data in the directory blobs/default/content": That is in effect removing all data from NXRM in the default blobstore. You will have no components if you do that.

Transfer content from one Alfresco instance to another (same version) on another server

What would be the best way to transfer repository content from one Alfresco instance (Enterprise Edition) to another instance running on a different server? Currently we copy the entire Alfresco database & file system under alf_data, but that requires downtime on the servers.
I need a mechanism without downtime whereby the repository data is copied from one instance to another. Is there any way this is possible?
In addition to Heiko's solution, you might be interested in:
The out-of-the-box replication service, which wouldn't be good for replicating your entire repo, but can be used for replicating a handful of nodes from one server to another.
A solution from Parashift which allows one- and two-way replication of nodes between servers.
An Alfresco presentation on using Apache Camel and Apache Kafka to replicate nodes between servers. This is available through Alfresco's professional services organization, but it may make it into the product at some point. Or you could use it as inspiration to write your own solution.
What is your intention? A standby system, a real copy, an external private cloud with a subset of data?
If you just need a 100% clone, you can script backup & restore without downtime on the source server. Downtime is limited to the db and index restore on the target system. Your script shouldn't copy live data from the solr index; use the backup created by the solr backup job instead. Depending on the database you use, online db backup shouldn't be an issue. A rough sketch of such a script follows below.
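A minimal sketch, assuming PostgreSQL and typical installation paths (every path, database name, and backup location here is an assumption; adapt them to your installation):
# online dump of the Alfresco database while the server keeps running
pg_dump -U alfresco alfresco > /backup/alfresco-db.sql
# copy the contentstore; rsync keeps later runs incremental
rsync -a /opt/alfresco/alf_data/contentstore/ /backup/contentstore/
# copy the snapshot written by the solr backup job, not the live index
rsync -a /opt/alfresco/alf_data/solrBackup/ /backup/solr/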
Our Alfresco Virtual Appliance has preconfigured scripts and jobs for this task to start an additional Alfresco instance from snapshot backups without copying the contentstore (we call this Alfresco Time Machine).
If your aim is an external private cloud server or a road-warrior solution, ecm4u has a commercial Alfresco module to sync a subset of modified nodes very efficiently, including metadata/types/aspects (the list of types and aspects needs to be defined). This sync provides a REST interface for automation as well as manual execution from Alfresco's admin console. We support a mix of Alfresco versions and editions. At the moment this sync is implemented as a unidirectional sync, but it could be extended to a bidirectional sync.
I recently did this task of installing 2 Alfresco instances locally, running on 2 different ports.
While performing some tasks, I realized that the 2 instances having the same Repository ID was creating issues.
I was able to change the repository ID of one of them by following the steps below:
Update alfresco-global.properties:
db.name="Add new DB Name"
(Alfresco will create a database with the db.name given here while initializing), then restart the server.
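Concretely, the relevant lines in alfresco-global.properties might look like this (the database name is an example; keep your existing connection settings):
db.name=alfresco2
db.username=alfresco
db.password=alfresco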
If you are still facing issues, try deleting the solr indexes under the alf_data folder.

Set up for working offline and uploading to remote site

I am currently using Komodo Edit for coding, and have a setup where I use MAMP, a local install of Drupal, and SASS to build my site offline.
Once it's ready to test online, I upload it to the remote site. However, I then end up working sometimes on the remote site and sometimes on the local one, and am finding some problems.
I'm not using SASS on the remote site, so I am working in the CSS file. I don't have all the Drupal panels in code yet, so I'm having to rebuild them on the remote site, and I'm making tweaks and changes on the fly.
I end up with two slightly different versions of the site and need to keep track of the changes I want to keep from both. What can I do to clean up my workflow?
It would be better if I could work entirely locally and then sync that to the remote environment.
With something like NetBeans, I think I could have a local copy of the site running, then right-click and upload each file onto the remote server so there are two copies of each file.
I could do with some advice as to what the cleanest setup is.
I have an actual dev server with its own IP and a live server with its own IP, but they both connect to the same MySQL server. There are two databases set up, db1 and db2.
I use a PHP script with basic SQL instructions to:
Check which db is in use.
Back up that db (say, db1).
Sync the databases.
Import that db into db2.
Once that is done, I use this:
cp webdb2.settings.php settings.php
So I always have two databases available and can roll right back if something went wrong, in this case with:
cp webdb1.settings.php settings.php
This seems to be a pretty good system. I only ever work on the dev server and then push to the live one with the process above; a sketch of that process follows below.
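A minimal sketch of the backup-and-sync step, assuming MySQL (the database names, user, and paths are assumptions):
# dump the database currently in use; the dump doubles as the backup
mysqldump -u drupal -p db1 > /backup/db1.sql
# import the dump into the second database
mysql -u drupal -p db2 < /backup/db1.sql
# point Drupal at the freshly synced database
cp webdb2.settings.php settings.php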
The best thing that you can do is work offline and then sync the content (themes/modules) when you have to.
If you work on both you could lose data or lose track of code changes.
