Nexus blob size too big after crash - nexus

So I use Nexus OSS for private repo purposes. We ran into an issue where the disk ran out of space, and after long hours of debugging we were able to recover. However, it now seems that my blob store is bigger than it should be by at least 100GB.
This repo is the main culprit:
"maven-releases": {
"reclaimableBytes": 0,
"totalBytes": 170605745239
}
See how it's still roughly 160GB even though I completely deleted the repo and emptied it (I did run the Compact blob store task).
Above you can see my total blob size, which should be under or around 10GB at this point. Does anybody have an idea how I can clean that repo and everything in it, even though it's not visible in Nexus?
It feels like the blob store and component database are out of sync.
Just as an FYI, this is a raw repo and not an actual Maven repo.
Any help would be appreciated.

Sounds like https://issues.sonatype.org/browse/NEXUS-17740. If that's right, you can follow that ticket for progress on an eventual fix.
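In the meantime, you can cross-check the numbers Nexus itself reports for the blob store. A minimal sketch, assuming a reasonably recent NXRM 3.x where the blob store REST endpoint is available; the host and credentials are placeholders:

# Lists every blob store with its blob count and total size in bytes
curl -u admin:yourpassword "http://localhost:8081/service/rest/v1/blobstores"

If the total reported there is still ~160GB after running the Compact blob store task, that matches the orphaned-blob behaviour described in the ticket.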

Related

Binary provider has no content

I suddenly had to take over responsibility for Artifactory (the responsible employee left). I've never worked with it before, and I've spent the day trying to learn the product and figure things out.
Problem Context:
Artifactory is deployed on a VM in Azure (Ubuntu); the mounted disk has Artifactory deployed on it (OSS 6.0.2 rev 60002900).
The disk got full, so the application crashed.
I increased the disk size, repartitioned and re-mounted, and Artifactory came up again - but now I'm getting the following error message in the browser:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'b8135c33f045ab2cf45ae4d256148d66373a0c89'"
} ]
}
I have searched a bit and found various possible solutions.
This one: Artifactory has lost track of local artifacts
which seems to be the most promising, since the context of our issue is similar, but I don't see those paths - i.e. I do see the filestore and everything in it, but not the other paths/files mentioned in the conversation.
I also found this: https://www.jfrog.com/jira/browse/RTFACT-6324 but again not finding the paths in our deployment.
To the best of my understanding it seems that if I somehow "reinstall" the filestore and/or database things should work?
Is there a clear manual, or something basic I'm missing? I'd like to avoid having to install everything from scratch.
Any help would be most appreciated, as our entire dev org is now more or less blocked and trying to work around this locally until it is resolved.
I am a JFrog Support Engineer and we saw your issue; we will contact you on other channels in order to help you resolve it.
Edit:
After reaching out, we found that this issue was caused by a specific file that was corrupted/missing from your filestore, and after deleting this file and re-pulling it the issue was solved.
To further elaborate on this issue and what can cause it:
Artifactory implements checksum-based storage. All files deployed to or cached in Artifactory are renamed to their checksum value and saved in the filestore, and Artifactory creates a pointer in the database containing the name, checksum and some other properties of the file. This makes storage more efficient, since each file is only saved once in the filestore but can have multiple pointers in the database (in various locations inside Artifactory - different repositories or even archives).
When a file gets corrupted in the filestore or deleted (without deleting it from Artifactory), this issue can manifest, since there is still a pointer to the file in Artifactory's database but the binary itself no longer exists in the filestore.
This can have various causes, such as connection issues with NFS/S3/other types of storage, files being corrupted or deleted from the filestore, and so on.
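To make that concrete: with the default file-based binary provider, the binary behind a checksum typically lives at $ARTIFACTORY_HOME/data/filestore/<first two characters of the SHA-1>/<SHA-1>. A minimal sketch of how you could check whether a given pointer still has a backing binary, using the checksum from the error above as an example (the path assumes the default layout; adjust for your installation):

# SHA-1 taken from the "Binary provider has no content" error message
SHA1=b8135c33f045ab2cf45ae4d256148d66373a0c89
# With the default filestore layout the binary should sit under <filestore>/<first two chars>/<full sha1>
ls -l "$ARTIFACTORY_HOME/data/filestore/${SHA1:0:2}/$SHA1"
# If the file is missing or truncated, the database pointer has nothing to serve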
Another edit:
You can also use a user plugin called "filestoreIntegrity" that goes through all the pointers to files in your database and checks whether they exist in the filestore. This way you can find out whether there are corrupted or missing files and fix the issue.

How to convert a snapshot to an image in OpenStack?

It seems that snapshots and images are very similar (e.g. https://serverfault.com/questions/527449/why-does-openstack-distinguish-images-from-snapshots).
However, I've been unable to share snapshots publicly/globally (i.e. across all projects). Note, I'm a user of the OpenStack installation, not an administrator of it.
Assuming that images don't suffer from the same limitation as snapshots, is there a procedure for converting a snapshot to an image? If not, maybe I should ask a separate question, but my cloud admin tells me it needs to be an image.
For download:
glance image-download
Initially I tried this:
openstack image save --file NixOS.qcow2 5426bbcf-06b3-42f3-b117-37395e7dde83
However, the reported size of NixOS.qcow2 was always 0 bytes. Not good. The issue was apparently related to the fact that 0 bytes is also what OpenStack and Horizon reported for the size of the snapshot. So something weird was going on, but functionally I could still use the snapshot to create instances without issue.
I then created a volume from the snapshot in Horizon (with the instance shut off; I couldn't create a volume while it was shelved), then used this command to create an image from the newly created volume (NixSnapVol):
openstack image create --volume NixSnapVol NixSnapVol-img
Interestingly, the reported size went from 41GB to 45GB; maybe that was part of the issue. Anyway, it seems to work now, and as a bonus the result is RAW instead of qcow2, so I don't need to do the conversion (our system benefits a lot from using RAW since we have a Ceph backend).
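For reference, roughly the same snapshot -> volume -> image flow can be done with the openstack CLI alone. This is only a sketch; the names are the ones used above, and the IDs/sizes are placeholders:

# If the snapshot is a volume snapshot, create a volume directly from it
openstack volume create --snapshot <snapshot-id> NixSnapVol
# If it is an instance snapshot (stored by Glance as an image), create the volume from that image instead
openstack volume create --image <snapshot-image-id> --size <size-in-GB> NixSnapVol
# Then turn the volume into a regular image that can be shared like any other
openstack image create --volume NixSnapVol NixSnapVol-img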

Sonatype Nexus - Full Blob folder

The blobs folder on my Sonatype Nexus has completely filled the server's disk.
Does anyone know how to make room? Is there an automatic way to free that space, or do I have to do it manually?
And lastly: what happens if I completely remove all the data in the directory blobs/default/content?
Thank you all in advance
Marco
In NXRM3, the blobstore contains all the components of your repository manager. If your disk is full, you will not be able to write anything more to NXRM and risk corruption of existing data.
Cleanup can be performed using scheduled tasks. What you need varies based on which formats your system is using. You can find more general information here: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-ConfiguringandExecutingTasks
It is important to note that you must run the "Compact blob store" task after any cleanup is done, otherwise the space will not be freed.
However, if you have reached full disk space, it is advisable to shut down and restore from a backup in case there is corruption, preferably giving yourself a larger disk for your blobstore before restarting.
RE "what happens if I completety remove all the data in the directory blobs/default/content": That is in effect removing all data from NXRM in the default blobstore. You will have no components if you do that.

How do I delete wp-config from github recursively?

So I am a noob. Looking back, I don't know what I was thinking, but I just realized I have uploaded my wp-config file for WordPress to GitHub, which means my access keys and database login are out for the world to see. In the short term I have made the repository private, but I need to figure out how to remove the file from all of the repository's commits. I found this: https://help.github.com/articles/remove-sensitive-data/, but I'm afraid I don't quite understand it and am not sure how to use it. I have Git Shell, but I have only really used the GitHub software. Can anyone walk me through the steps to take? Or am I better off deleting the entire repository and starting over?
Even if you converted it to private, it was online for a while. Check GitHub's red danger warning:
Danger: Once you have pushed a commit to GitHub, you should consider
any data it contains to be compromised. If you committed a password,
change it!
Change the password, then try this repo cleaner:
https://rtyley.github.io/bfg-repo-cleaner/
You'll need Java. If you consider it too much work, just delete and recreate the repo, but change the exposed password anyway.
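If you keep the repository and want to scrub the file from history, the BFG workflow is roughly the following (a sketch based on the BFG documentation; the repo URL is a placeholder and wp-config.php is assumed to be the file name). Note that BFG protects your latest commit, so delete the file from the current branch and push that first:

# Work on a fresh mirror clone so the rewrite covers every branch and tag
git clone --mirror https://github.com/<you>/<repo>.git
java -jar bfg.jar --delete-files wp-config.php <repo>.git
cd <repo>.git
# Expire the old references and garbage-collect the now-unreferenced blobs
git reflog expire --expire=now --all
git gc --prune=now --aggressive
# Push the rewritten history back to GitHub
git push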

"sqlite3.OperationalError: database or disk is full" on Lustre

I have this error in my application log:
sqlite3.OperationalError: database or disk is full
As plenty of disk space is available and my SQLite database does not appear to be corrupted (integrity_check did not report any error), why is this happening and how can I debug it?
I am using the Lustre filesystem (with flock set), and until now, it worked perfectly.
Versions are:
Python 2.6.6
SQLite 3.3.6
It's probably too late for the original poster, but I just had this problem and couldn't find an answer so I'll document my findings in the hope that it will help others:
As it turns out, an SQLite database actually can get full even if there's plenty of disk space, because it has a limit for the number of pages in a database file:
http://www.sqlite.org/pragma.html#pragma_max_page_count
In my case the value was 1073741823, which meant that in combination with a page size of 1024 Bytes the database maxed out at 1 TB and returned the "database or disk is full" error.
The good news is that you can raise the limit; for example double it by issuing PRAGMA max_page_count = 2147483646;.
The limit doesn't seem to be saved in the database file, though, so you have to run it in your application every time you open the database.
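To check whether you are actually hitting the limit, compare the page count with the maximum and, if needed, raise it on the connection you are using. A sketch with the sqlite3 command-line shell; the database file name is a placeholder, and since the setting is per-connection your application must issue the PRAGMA itself after opening the database:

# Current number of pages, page size and the configured maximum
sqlite3 mydatabase.db "PRAGMA page_count; PRAGMA page_size; PRAGMA max_page_count;"
# Raise the limit for this connection only (it is not stored in the database file)
sqlite3 mydatabase.db "PRAGMA max_page_count = 2147483646;"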
By default, SQLite uses /tmp as its temporary directory (not memory). If /tmp is too small, you will get a "disk is full" error. In that case, change the temporary directory like this: export TMPDIR=<big file system>.
I had the same problem too. If your host or PC's storage is full, delete some files from your system and the problem goes away.

Resources