The blobs folder on my Sonatype Nexus has completely filled the server's disk.
Does anyone know how to make room? Is there an automatic way to free that space, or do I have to do it manually?
And finally: what happens if I completely remove all the data in the directory blobs/default/content?
Thank you all in advance
Marco
In NXRM3, the blobstore contains all the components of your repository manager. If your disk is full, you will not be able to write anything more to NXRM and risk corruption of existing data.
Cleanup can be performed using scheduled tasks. What you need varies based on which formats your system is using. You can find more general information here: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-ConfiguringandExecutingTasks
It is important to note that you must run the "Compact blob store" task after any cleanup is done, otherwise the space will not be freed.
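If you prefer to kick those tasks off from the command line, NXRM3 also exposes a tasks REST endpoint. A minimal sketch, assuming the tasks have already been created in the UI; the hostname, credentials and task ID below are placeholders for your own:

    # list scheduled tasks to find the IDs of your cleanup and "Compact blob store" tasks
    curl -u admin:yourpassword "http://nexus.example.com:8081/service/rest/v1/tasks"

    # trigger a task by its ID (placeholder shown)
    curl -u admin:yourpassword -X POST "http://nexus.example.com:8081/service/rest/v1/tasks/<taskId>/run"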
However, if you have reached full disk space, it is advisable to shut down and restore from a backup in case there is corruption, preferably giving yourself a larger disk for your blobstore before restarting.
RE "what happens if I completety remove all the data in the directory blobs/default/content": That is in effect removing all data from NXRM in the default blobstore. You will have no components if you do that.
Related
I am working on a task to back up VM image volumes to another server and location. The problem is, I don't want to copy the whole image file each time I start a backup job; I want to back up the whole image only once and then back up incrementally each time I take a backup of a VM.
Is there any way to do that? I don't want to use snapshots, because when the number of snapshots increases it will have an impact on volume performance.
If there is another way, or a way to use snapshots more efficiently, please tell me.
I have tried volume snapshots locally; I want to know how to do this externally, or any other efficient way to do incremental external backups.
I've been working on a project that allows you to create full and incremental or differential backups of libvirt/kvm based virtual machines. It works by using the latest features provided by the libvirt and qemu projects (changed block tracking). So if the libvirt/qemu versions you are using support these features, you might want to give it a try:
https://github.com/abbbi/virtnbdbackup
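A rough usage sketch (the flags below are from the project's README as I remember it, so double-check them against your installed version; the domain name and backup path are placeholders):

    # initial full backup of the domain "vm1"
    virtnbdbackup -d vm1 -l full -o /backup/vm1

    # later runs only transfer the blocks qemu has tracked as changed
    virtnbdbackup -d vm1 -l inc -o /backup/vm1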
I'm using Nexus (sonatype-work/nexus3). It didn't have a cleanup process, so the disk has filled up. Are there any commands I can use to find unused images and data? And how can I delete files manually?
No, you should not delete any of the files manually, otherwise it will lead to issues with retrieving data from your repositories. You should set up some cleanup policies that will maintain the disk space for you - you can learn more about them from these resources:
Cleanup policies
Keeping disk usage low
Storage management
I am just starting with Meteor, creating some test/practice apps. After I have created the app and run it, the .meteor folder size balloons to 500 MB. Each practice app adds 500 MB or so to my working folder.
I am not playing with any huge data sets on anything, my database will be less than 10 MB.
As I sync my work folder with my laptop, it is a major pain to back it up. How can I reduce the size of the default mongodb while creating a practice app, so that backing it up or syncing the folder is easier?
Also, even when I copy the whole app folder to the new location, it does not run, likely because the database is stored somewhere else.
Can I save the database to the same folder as the app, so that just copying the folder over will enable me to continue working on the laptop as well?
Sorry if the question is too noobish.
Thanks for your time.
meteor reset >>> deletes my database. I want to be able to preserve it.
Yes, this can be a pain and is unavoidable by default at present. However, a couple of ideas that might be useful:
If you have multiple meteor apps, it's possible to use the same DB for each, as per #elfoslav: link. However, note that you have to supply the env variable every time or create a shell script for when you start meteor, otherwise it'll create a new db for you if you run meteor on its own just once!
If it's just portability of the app you're concerned about, get comfortable with mongodump and mongorestore, which will yield bson files containing just your database contents (i.e. about 10mb) which are pretty easy to insert back into another instance of mongoDB, so that you only have to copy these backwards and forwards. Here is a guide to doing this with your Meteor DB, and here is a great gist from #olizilla.
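A rough sketch of both ideas, assuming the usual Meteor dev conventions (the local MongoDB listens on the app port + 1, i.e. 3001, with a database named "meteor"); the shared db name and paths below are placeholders:

    # idea 1: point every practice app at one shared MongoDB instead of a per-app .meteor db
    MONGO_URL=mongodb://localhost:27017/shared_practice_db meteor

    # idea 2: dump just the contents (a few MB of BSON) while the app is running...
    mongodump --host 127.0.0.1 --port 3001 -d meteor -o ./db-dump

    # ...and restore them into the copy on the laptop
    mongorestore --host 127.0.0.1 --port 3001 -d meteor ./db-dump/meteor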
Have you tried the MongoDB configuration options below to limit the space it occupies?
storage.smallFiles
Type: boolean. Default: false.
Sets MongoDB to use a smaller default file size. The storage.smallFiles option reduces the initial size for data files and limits the maximum size to 512 megabytes. storage.smallFiles also reduces the size of each journal file from 1 gigabyte to 128 megabytes. Use storage.smallFiles if you have a large number of databases that each holds a small quantity of data.
storage.journal.enabled
Type: boolean. Default: true on 64-bit systems, false on 32-bit systems.
Enables the durability journal to ensure data files remain valid and recoverable. This option applies only when you specify the --dbpath option. The mongod enables journaling by default on 64-bit builds of versions after 2.0.
Refer to: http://docs.mongodb.org/manual/reference/configuration-options/
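As a minimal sketch of how those options could be applied when starting mongod yourself (this assumes an older mongod with the MMAPv1 storage engine, where these settings exist; the dbpath is a placeholder):

    # start mongod with smaller preallocated data and journal files
    mongod --dbpath ./practice-db --smallfiles

    # the equivalent settings in a YAML config file passed via --config:
    #   storage:
    #     smallFiles: true
    #     journal:
    #       enabled: true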
Background
I have a database that's been corrupted, and I want to save as much of the data as possible.
I have tried to SQL-dump the data with numerous tools, without success.
Always the same error message:
Error: database disk image is malformed
I'm pretty sure this happened due to a power failure.
Approach?
Now, the database is in fact a file, and I'm wondering whether it is possible to treat it as such and try to save as much data as possible.
I'm guessing that when the db is opened by a tool or program, it first checks its headers.
In my case I get the error message right away. I'm assuming that the headers are corrupt or mismatched, and because of that no tool will try to read the payload.
In the documentation (http://www.sqlite.org/fileformat2.html) there are explanations of the header offsets.
Questions: Is this a reasonable approach? Is it possible to repair, modify or exchange the headers on the corrupted db? And how do I do it?
Despite several replies in multiple threads on SO to the contrary, SQLite databases can be recovered from corruption!
I have requested an update from the SQLite team in their FAQ (http://www.sqlite.org/faq.html#q20), but in the meantime, here are a couple of options.
The FAQs state:
"...If SQLITE_SECURE_DELETE is not used and VACUUM has not been run, then some of the deleted content might still be in the database file, in areas marked for reuse. But, again, there exist no procedures or tools that we know of to help you recover that data."
and:
"...Depending how badly your database is corrupted, you may be able to recover some of the data by using the CLI to dump the schema and contents to a file and then recreate. Unfortunately, once humpty-dumpty falls off the wall, it is generally not possible to put him back together again."
There are in fact at least two excellent tools to do data recovery for whole SQLite databases and individual records, and they can help in cases of hardware failure, software errors or human problems. It will not be 100% pristine, but the situation is not hopeless.
PhotoRec is open source and multi-platform. While historically it was used for images and PDFs, it now supports SQLite recovery (http://www.cgsecurity.org/wiki/File_Formats_Recovered_By_PhotoRec), along with 220+ binary file types. If a database (or entire directory) is deleted, PhotoRec can often restore the db file to a sane-enough state to be opened and exported. There are pre-compiled versions of the app freely available for Windows, Mac and Linux.
In addition, the commercial product Epilog by CCL Forensics can do very advanced record recovery, including retrieving data from the write-ahead log (WAL) transaction files. It is a few hundred dollars, but it can do fairly amazing forensic reconstruction on SQLite data (both native binary db files as well as raw disk images).
Both the above have saved my hide several times, so passing this along for others who may have lost hope in deleted/corrupted SQLite databases (as well as genuine forensics for popular use cases, like mobile phones, browsers, address books, etc.).
Once you've regenerated/exported data, it's always a good idea to verify your backup schemes and definitely run a pragma integrity_check periodically, along with vacuuming.
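For completeness, the cheapest first attempt the FAQ hints at can be done with the sqlite3 CLI itself before reaching for PhotoRec or Epilog. A sketch with placeholder filenames (the .recover command only exists in reasonably recent sqlite3 builds):

    # periodic health check
    sqlite3 mydata.db "PRAGMA integrity_check;"

    # dump whatever is still readable and rebuild a fresh database from it
    sqlite3 corrupt.db ".dump" > dump.sql
    sqlite3 recovered.db < dump.sql

    # newer sqlite3 builds also ship a dedicated recovery command
    sqlite3 corrupt.db ".recover" > recover.sql
    sqlite3 recovered2.db < recover.sql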
I have requested that the official FAQ be updated to at least mention that one can google "sqlite recovery" or something if it's verboten to mention other projects/products by name.
Cheers.
I have a Berkeley DB file that is quite large (~1GB) and I'd like to replicate small changes that occur (weekly) to an alternate location without having the entire file be re-written at the target location.
Does rsync properly handle Berkeley DBs with its block-level algorithm?
Does anyone have an alternative that only writes changes to the Berkeley DB files that are the targets of replication?
Thanks!
Rsync handles files perfectly, at the block level. The problems with databases can come into play in a number of ways:
Caching
File locking
Synchronization/transaction logs
If you can ensure that no applications have the Berkeley DB open during the period of the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
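A rough sketch of combining those mechanisms with rsync (paths and hostnames are placeholders; db_hotbackup ships with the Berkeley DB utilities, but verify the flags against your BDB version):

    # take a consistent hot backup of the environment into a staging directory
    db_hotbackup -h /var/lib/mydb-env -b /var/backups/mydb-staging

    # then let rsync's delta algorithm send only the changed blocks to the remote target
    rsync -av --partial /var/backups/mydb-staging/ backuphost:/srv/replica/mydb/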
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability -> http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html