How do I delete old snapshot versions? - Nexus

The Problem
Our Nexus (v2.14) manages several hosted repositories as well as a couple of proxy repositories. Inspecting a specific proxy snapshot repository, I can see that a handful of GAVs are in storage (the latest developments). There are multiple (timestamped) snapshot versions within the latest GAVs, some of them several weeks old. I want to get rid of the older snapshot versions and free disk space.
The Question
I might be on an entirely wrong track, but I've tried running three different kinds of jobs now in order to free some space, and none of them worked:
Evict Unused Proxied Items From Repository Caches
Remove Snapshots From Repository
Remove Unused Snapshots From Repository
The latter two, I assume, aren't meant to work with proxy repositories (they fail with the error "The repository with ID=XY is not valid for Snapshot Removal Task!").
The first one, for which I found documentation somewhere, is the job that should be used in my case, I think.
I set it up so that it evicts all items that are older than 7 days.
When I run the task, I expect to not find any snapshot versions that are older than 7 days in the "Browse Storage" view, but there are.
It's entirely possible that my expectations are wrong. But my question stands nonetheless:
How do I remove old snapshot versions from a proxied snapshot repository in order to free disk space?
Thanks for the help!
PS: I have emptied the trash after running each job, just in case, but no dice.
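One hedged fallback, in case the built-in tasks never cooperate: Nexus 2 exposes a content REST resource that can delete paths from a repository's storage directly. A minimal sketch, assuming the default /nexus context path; the host, credentials, repository ID, and GAV path below are all placeholders:

# Delete one stale timestamped snapshot version folder from storage
curl -u admin:admin123 -X DELETE "http://nexus.example.com/nexus/service/local/repositories/my-snapshots-proxy/content/com/example/app/1.2.3-SNAPSHOT/"

Emptying the trash afterwards (as in the PS above) would still be needed to actually reclaim the disk space.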

Related

How to delete outdated Firebase Cloud function containers from GC Storage?

So recently Firebase started charging for Cloud Functions container storage: https://firebase.google.com/pricing
There is no free usage tier; storage is billed at $0.026/GB.
I have deployed 2 functions several times (no more than 10 times; I can't remember the exact count, but it's still pretty low, IMO). Now I am already being billed a small amount (fractions of a cent for now). So it seems that if I deploy the functions another few dozen times, I'll get close to a dollar, because old (and unused) containers are not deleted from the storage bucket.
Is there a way to safely delete outdated, unused containers to free some space? It may seem that a few cents aren't worth the time, but still, that's not what a free tier should be like.
I found that the only robust solution to this ongoing issue (for now) is to periodically remove all of the artifact files (following Doug's instructions). As noted by others, removing only some of the files can cause subsequent deploy errors (I experienced these).
IMPORTANT: Only delete the artifact files, NOT the folders as this can also cause issues.
You can do partial or full deploys as normal without any issues (it seems that the artifact files are only referenced during the build/deploy process).
Not ideal by a long shot, but at least reduces the storage usage to the minimum (until it starts accumulating again).
Edit: I have experimented with Lifecycle rules on the artifacts bucket to try to automate the clearing out of the containers, but the available conditions cannot guarantee that ALL files will be cleared in one hit (which is what you need).
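For what it's worth, the kind of rule I experimented with looks roughly like this (the bucket name is a placeholder, and as noted an age-based condition can't guarantee a full clear in one hit):

# lifecycle.json - delete artifact objects older than 3 days (example threshold)
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 3}}]}

# Apply it to the artifacts bucket
gsutil lifecycle set lifecycle.json gs://artifacts.YOUR_PROJECT_ID.appspot.com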
For convenience, you can see the artifacts bucket from within the Firebase Storage UI by selecting the "Add Bucket" option and importing the buckets from GCP.
Go to the Cloud console
Select "Cloud Storage -> Browser" from the products in the hamburger menu.
You will see multiple storage buckets there. Simply dig into the buckets that start with "artifacts" or end with "cloudbuild" and delete the old files (by date) that you don't want.
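If you prefer the command line, a rough equivalent looks like this (the bucket name and object layout are assumptions that vary per project, and per the warning above this removes files, not folders):

# List the stored container artifacts, then delete the ones you don't want
gsutil ls -r gs://artifacts.YOUR_PROJECT_ID.appspot.com/containers/images/
gsutil -m rm "gs://artifacts.YOUR_PROJECT_ID.appspot.com/containers/images/sha256:*"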
In the case of Firebase Cloud Functions, you can see from their documentation (the "lifecycle of a background function" section):
When you update the function by deploying updated code, instances for older versions are cleaned up along with build artifacts in Cloud Storage and Container Registry, and replaced by new instances.
When you delete the function, all instances and zip archives are cleaned up, along with related build artifacts in Cloud Storage and Container Registry. The connection between the function and the event provider is removed.
It means that there is no need to clean up manually; the firebase deploy scripts do it automatically.
You should not remove build artifacts, since Cloud Functions scale automatically and new instances are built from these artifacts.
I don't really think cost is a problem here: at $0.026/GB, and assuming very roughly 500 MB of artifacts per function, you would need about 77 functions ($1 / (0.5 GB × $0.026/GB) ≈ 77) to pay $1 for their artifact storage. Also, artifact size should not grow with every deploy, since it's basically the size of the dependencies, which are more or less independent of the number of deployed functions.

How to convert a snapshot to an Image in OpenStack?

It seems that snapshots and images are very similar (e.g. https://serverfault.com/questions/527449/why-does-openstack-distinguish-images-from-snapshots).
However, I've been unable to share snapshots globally (i.e. publicly, across all projects). Note that I'm a user of the OpenStack installation, not an administrator of it.
Assuming that Images don't suffer from the same limitation as Snapshots, is there a procedure for converting a snapshot to an image? If not, maybe I should ask a separate question, but my cloud admin tells me it needs to be an image.
For download:
glance image-download
Initially I tried this:
openstack image save --file NixOS.qcow2 5426bbcf-06b3-42f3-b117-37395e7dde83
However, the reported size of NixOS.qcow2 was always 0 bytes. Not good. The issue was apparently related to the fact that 0 bytes is also what OpenStack and Horizon reported for the size of the snapshot. So something weird was going on, but functionally I could still use the snapshot to create instances without issue.
I then created a volume from the snapshot in Horizon (with the instance shut off; I couldn't create a volume while it was shelved), then used this command to create an image from the newly created volume (NixSnapVol):
openstack image create --volume NixSnapVol NixSnapVol-img
Interestingly, the reported size went from 41GB to 45GB; maybe that was part of the issue. Anyway, it seems to work now, and as a bonus the result is now RAW instead of qcow2, so I don't need to do the conversion (our system largely benefits from using RAW since we have a Ceph backend).
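For reference, the whole workaround as CLI steps (a sketch from my case; the volume size is an assumption and must be at least the snapshot's virtual size):

# 1. Create a volume from the snapshot (an instance snapshot is just a Glance image)
openstack volume create --image 5426bbcf-06b3-42f3-b117-37395e7dde83 --size 45 NixSnapVol
# 2. Upload the volume back to Glance as a regular image
openstack image create --volume NixSnapVol NixSnapVol-img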

Azure Git Deploy Doesn't Include More Recent Commits

I am deploying a .NET application to Azure via a git deployment. However, when I launch the deployment I can see it spinning on the website and eventually complete, but it lists an older commit message and not the most recent commits. I also confirmed in /site/deployments/ that it is using an older commit ID. What could possibly cause the deploy to appear as though it works but essentially ignore a bunch of commits after a certain point? I also confirmed in the code that it isn't getting updated.
It looks like it is just taking the commit on top (the one deployed a few weeks back) and updating the date of it to be today.
UPDATE: I will also say that this branch was initially an autodeploy, and it was failing due to permissions for some reason. I have a bunch of instances with a similar code base and setup, and all of them work. Not sure if the fact that it was an autodeploy at first makes a difference, but why would pushing directly to the remote's master look like it works but not include updates?
UPDATE 2:
Here are photos of the progression. There are definitely commits after the one it keeps reusing, which I cherry-picked from other branches.
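One way to sanity-check what the Azure remote actually holds (a sketch, assuming the deployment remote is named azure and the deployed branch is master):

# Compare the remote's tip with your local tip
git ls-remote azure refs/heads/master
git rev-parse master
# If they differ, push the branch explicitly
git push azure master:master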

Alfresco failed to find new folder

I'm using Alfresco through CMIS.
On one of our environments, we have an issue.
We want to create a folder and put some docs in it.
This works fine in all our environments except one.
In this one, we can create the folder.
But when we do a search to find the folder, the folder isn't found.
After that, I can find it with the Share GUI.
I have no error message in the Share app.
Does anyone have an idea what the issue could be?
Promoting a comment to an answer...
When using Alfresco with SOLR, you need to be aware that the SOLR index isn't quite real-time. Close to real time, sure, but it's asynchronous, so there's always a lag. (It's an eventually consistent index, not a fully real-time one.)
There's a lot of information on the Alfresco and SOLR Wiki, including the way you can query what the current lag is.
If the lag is very low (e.g. on a lightly loaded system), you may find that SOLR catches up almost instantly, and newly created items show up in the search results immediately. However, it's more normal to have to wait a little bit, especially on more heavily loaded systems.
If no new results are showing up even after several minutes, you'll want to follow the instructions on the wiki or the SOLR Monitoring and Troubleshooting docs to work out why and fix it.
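In practice this means a CMIS query issued immediately after creating the folder may legitimately come back empty. A minimal retry sketch over the CMIS 1.1 browser binding (the endpoint URL, credentials, and folder name are assumptions for a default Alfresco install):

# Poll the CMIS query endpoint until the new folder shows up in the index (max 5 tries)
for i in 1 2 3 4 5; do
  curl -s -u admin:admin \
    "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser" \
    --data-urlencode "cmisaction=query" \
    --data-urlencode "statement=SELECT * FROM cmis:folder WHERE cmis:name = 'myNewFolder'" \
    | grep -q myNewFolder && break
  sleep 5
done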

Nexus Repository Manager

We want to install Nexus in an environment where more than 100 developers will use it.
What is the maximum load Nexus can handle? We might have more than 50 simultaneous requests for artifacts (a fresh local repo is used on every build).
I also want multiple instances to share the repo storage (I have tried it and it does not work, but I'm wondering if anyone has managed to do this). We want to have one instance of Nexus in read mode that is in sync with another one in read/write mode. Any possibilities?
Please share your thoughts. Thanks in advance.
We have many huge customers running Nexus without load problems, including shops on the order of tens of thousands of users. That's in addition to the large public instances at:
http://repository.apache.org
http://nexus.codehaus.org
http://maven.atlassian.com
https://repository.jboss.org/nexus
http://oss.sonatype.org
and many others.
Two Nexus instances can't effectively share the entire storage folder because Lucene is used for many things and those files aren't shareable. It might be possible to share just the repo folder, but the indexes and caches will be out of date on the standby.
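If you do experiment with a warm standby anyway, a hedged sketch of "sharing just the repo folder" is to replicate only the artifact storage and let each instance keep its own indexes (paths assume a default Nexus 2 sonatype-work layout):

# Replicate artifact storage only; Lucene indexes stay per-instance
rsync -a --delete /opt/sonatype-work/nexus/storage/ standby:/opt/sonatype-work/nexus/storage/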
Redundancy is something we're working on in commercial Nexus versions.