Sonatype Nexus 3 server: manually remove items

I'm using Nexus 3 (the sonatype-work/nexus3 directory). It still doesn't have a cleanup process set up, so the disk has filled up. Are there any commands I can use to find unused images and data? And how can I delete files manually?

No, you should not delete any of the files manually; otherwise it will lead to issues with retrieving data from your repositories. You should set up cleanup policies that will maintain the disk space for you (a small usage-check sketch follows the links below). You can learn more from these resources:
Cleanup policies
Keeping disk usage low
Storage management
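If you want to keep an eye on usage from a script rather than the UI, newer Nexus 3 releases expose a read-only blob store listing over REST. The following is a minimal sketch, assuming the GET /service/rest/v1/blobstores endpoint and the size fields it returns are present in your version; the URL and credentials are placeholders, so check your instance's API browser before relying on it.

    # Minimal sketch: report blob store usage via the Nexus 3 REST API.
    # Assumes GET /service/rest/v1/blobstores exists in your Nexus version and
    # that the size fields below match its response; verify in the API browser
    # (Administration -> System -> API). URL and credentials are placeholders.
    import requests

    NEXUS_URL = "http://nexus.example.com:8081"   # placeholder base URL
    AUTH = ("admin", "admin-password")            # placeholder credentials

    resp = requests.get(f"{NEXUS_URL}/service/rest/v1/blobstores", auth=AUTH)
    resp.raise_for_status()

    for store in resp.json():
        used = store.get("totalSizeInBytes", 0)
        free = store.get("availableSpaceInBytes", 0)
        print(f"{store.get('name', '?'):20s} used={used / 1e9:8.2f} GB free={free / 1e9:8.2f} GB")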

Related

How to back up KVM volumes with incremental disk backups using "push" and "pull" mode in libvirt

I am working on a task to back up VM image volumes to another server and location. The problem is, I don't want to copy the whole image file each time I start a backup job; I want to back up the whole image only once and then back up incrementally each time I run a backup of the VM.
Is there any way to do that? I don't want to use snapshots, because as the number of snapshots increases, they have an impact on volume performance.
If there is another way, or a way to use snapshots more efficiently, please tell me.
I have tried volume snapshots locally; I want to know how to do this externally, or any other efficient way to do incremental external backups.
I've been working on a project that allows you to create full and incremental or differential backups of libvirt/KVM-based virtual machines. It works by using the latest features provided by the libvirt and QEMU projects (changed block tracking). So if the libvirt/QEMU versions you are using support these features, you might want to give it a try:
https://github.com/abbbi/virtnbdbackup
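Purely as an illustration of the full-then-incremental workflow, here is a small Python wrapper around the tool. The command-line flags used (-d for the domain, -l full/inc for the backup level, -o for the output directory) are recalled from the project's README and are an assumption; verify them against the version you install.

    # Sketch of a full + incremental backup cycle with virtnbdbackup.
    # The flags (-d domain, -l full|inc, -o output dir) are assumptions taken
    # from the project's README; double-check them for your installed version.
    import subprocess
    from pathlib import Path

    def backup(domain: str, target: Path, level: str) -> None:
        """Run virtnbdbackup for `domain` at the given level ('full' or 'inc')."""
        target.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["virtnbdbackup", "-d", domain, "-l", level, "-o", str(target)],
            check=True,
        )

    if __name__ == "__main__":
        dest = Path("/backup/vm1")   # placeholder target directory
        backup("vm1", dest, "full")  # first run: full backup
        backup("vm1", dest, "inc")   # later runs: only changed blocks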

Automatic backup/restore at server start

I would like an automatic backup of a schema to a file on every load, and on a restart of icCube an automatic restore of the last backup. And of course an automatic cleanup of those files. This way we would have a lot less downtime on a restart.
It looks like icCube supports this with backup and/or offline data, but I can't get that working as described above. Is what I want possible, and how?
You can activate the backup in the schema file (Advanced Properties).
Now every time you load the schema, it will create a backup.
And if you set "Load On Startup" as well, icCube is going to load the last available backup.
There is no automatic cleanup; for that purpose you can use the REST API available in the latest icCube. Otherwise, you can clean up the backup files created in the ~/icCube-data/backup folder yourself.
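If you are not on a version that exposes the REST API, a small housekeeping job is enough for the cleanup part. Below is a minimal sketch that keeps the most recent backups under ~/icCube-data/backup and deletes the rest; the directory and the "keep 10" policy are placeholders to adapt to your installation.

    # Minimal sketch: prune old icCube backup files, keeping the newest N.
    # The backup directory and KEEP count are placeholders; adjust them to
    # your installation before scheduling this (for example via cron).
    from pathlib import Path

    BACKUP_DIR = Path.home() / "icCube-data" / "backup"  # adjust to your install
    KEEP = 10                                            # backups to retain

    backups = sorted(BACKUP_DIR.glob("*"), key=lambda p: p.stat().st_mtime, reverse=True)
    for old in backups[KEEP:]:
        if old.is_file():
            print(f"removing {old}")
            old.unlink()
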
Hope that helps.

Can Artifactory age artifacts to S3?

We have an Artifactory solution deployed and I am trying to figure out if it can meet my use case. The normal use case is that artifacts are deleted within a week or so and can normally fit in X GB of local storage, but we'd like to be able to:
Keep some artifacts around much longer, and since they are accessed infrequently, store them in AWS S3.
Sometimes artifacts can't be cleaned up in time, so we'd like to burst to the cloud when local storage overflows.
I was thinking I could do the following:
Local repository of X GB
Repo pointing to S3
Virtual repo in front of both of these
Set up a plugin to move artifacts from local to S3 according to our policies
However, I can't figure out what a filestore is in Artifactory, and how you'd have two repositories backed by different filestores.
Anyone have pointers to documentation or anything that can help? The docs I can find are rather slim on the high level details of filestores and repositories.
The Artifactory binary provider does not support configuring multiple storage backends, so it is impossible to use S3 and NFS in parallel. The main reason for this limitation is that Artifactory uses checksum-based storage, which stores each binary only once and keeps pointers from all relevant repositories. For that reason, Artifactory does not manage separate storage per repository.
For archiving purposes, one possible solution is setting up another Artifactory instance that takes care of archiving. This instance can be connected to an S3 storage backend.
You can use replication to synchronize the two instances (without syncing deletes). You can have one or more repositories in your master Artifactory containing artifacts that should be archived; those artifacts will be replicated to the archive Artifactory and can later be deleted from the master.
You can use a user plugin to decide which artifacts should be moved to the archive repository.
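Artifactory user plugins are written in Groovy and run inside the server; purely to illustrate the same policy driven from the outside, here is a hedged Python sketch that uses the standard AQL search and artifact DELETE REST endpoints to remove artifacts older than 30 days from the master once replication has pushed them to the archive instance. The base URL, repository name, credentials and the 30-day cutoff are placeholders.

    # Sketch: delete artifacts from the master instance once they are old
    # enough to have been replicated to the archive instance. Uses the
    # standard AQL search and artifact DELETE endpoints; the repo name,
    # credentials and 30-day cutoff are placeholders for your own policy.
    import requests

    BASE = "https://artifactory.example.com/artifactory"  # placeholder
    AUTH = ("admin", "api-key-or-password")               # placeholder

    aql = 'items.find({"repo":"libs-release-local","created":{"$before":"30d"}})'
    resp = requests.post(f"{BASE}/api/search/aql", data=aql,
                         headers={"Content-Type": "text/plain"}, auth=AUTH)
    resp.raise_for_status()

    for item in resp.json().get("results", []):
        path = f"{item['repo']}/{item['path']}/{item['name']}"
        print(f"deleting {path} from master (already replicated to archive)")
        requests.delete(f"{BASE}/{path}", auth=AUTH).raise_for_status()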

Sonatype Nexus - Full Blob folder

The blobs folder on my Sonatype Nexus has completely filled the server's disk.
Does anyone know how to make room? Is there an automatic way to free that space, or do I have to do it manually?
And, lastly: what happens if I completely remove all the data in the directory blobs/default/content?
Thank you all in advance
Marco
In NXRM3, the blobstore contains all the components of your repository manager. If your disk is full, you will not be able to write anything more to NXRM and risk corruption of existing data.
Cleanup can be performed using scheduled tasks. What you need varies based on which formats your system is using. You can find more general information here: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-ConfiguringandExecutingTasks
It is important to note that you must run the "Compact blob store" task after any cleanup is done, otherwise the space will not be freed.
However, if you have reached full disk space, it is advisable to shut down and restore from a backup in case there is corruption, preferably giving yourself a larger disk for your blobstore before restarting.
RE "what happens if I completely remove all the data in the directory blobs/default/content": that is, in effect, removing all data from NXRM in the default blobstore. You will have no components if you do that.

Can you use rsync to replicate block changes in a Berkeley DB file?

I have a Berkeley DB file that is quite large (~1 GB), and I'd like to replicate the small changes that occur (weekly) to an alternate location without having the entire file re-written at the target location.
Does rsync properly handle Berkeley DBs with its block-level algorithm?
Does anyone have an alternative so that only the changes are written to the Berkeley DB files that are targets of replication?
Thanks!
Rsync handles files perfectly well at the block level. Problems with databases can come into play in a number of ways:
Caching
File locking
Synchronization/transaction logs
If you can ensure that no applications have the Berkeley DB open during the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
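One practical detail: by default rsync rebuilds the destination into a temporary file and renames it, so the whole file is rewritten at the target even when little has changed. The --inplace option patches the existing destination file instead. A minimal sketch of a weekly job, assuming the database is quiesced first and with placeholder paths:

    # Sketch of a weekly rsync of a quiesced Berkeley DB file. --inplace makes
    # rsync patch the destination file directly instead of rebuilding it into
    # a temporary copy; only run this while no application has the DB open.
    # The source path and remote target are placeholders.
    import subprocess

    SRC = "/var/lib/myapp/data.bdb"                              # placeholder
    DEST = "backup@standby.example.com:/var/lib/myapp/data.bdb"  # placeholder

    subprocess.run(["rsync", "-av", "--inplace", SRC, DEST], check=True)
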
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB: https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability: http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html
