Get size of specific repository in Nexus 3 - nexus

How can I get the size of a specific repository in Nexus 3?
For example, Artifactory shows the repository "size on disk" in its UI.
Does Nexus have something similar? If not, how can I get this information with a script?

You can use an admin task with the Groovy script nx-blob-repo-space-report.groovy from https://issues.sonatype.org/browse/NEXUS-14837 (for me it turned out to be too slow).
Or you can get it from the database:
1. Log in on the Nexus server as the user that owns the Nexus installation (e.g. nexus).
2. Go to the application directory (e.g. /opt/nexus):
$ cd /opt/nexus
3. Run the OrientDB console:
$ java -jar ./lib/support/nexus-orient-console.jar
4. Connect to the local database (e.g. /opt/sonatype-work/nexus3/db/component):
> CONNECT PLOCAL:/opt/sonatype-work/nexus3/db/component admin admin
5. Find the repository row id in the #RID column by its repository_name value:
> select * from bucket limit 50;
6. Sum the size of all assets with the repository row id found in the previous step:
> select sum(size) from asset where bucket = #15:9;
The result should look something like this (apparently in bytes):
+----+------------+
|# |sum |
+----+------------+
|0 |224981921470|
+----+------------+
The database connection steps are taken from https://support.sonatype.com/hc/en-us/articles/115002930827-Accessing-the-OrientDB-Console
Other useful queries:
Total size by repository name (instead of steps 5 and 6):
> select sum(size) from asset where bucket.repository_name = 'releases';
top 10 repositories by size:
> select bucket.repository_name as repository,sum(size) as bytes from asset group by bucket.repository_name order by bytes desc limit 10;

Assign each repository to its own Blob Store.
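If each repository is backed by its own blob store, a rough "size on disk" figure can also be read straight from the filesystem. A minimal sketch, assuming the default blob store location under /opt/sonatype-work/nexus3/blobs and a hypothetical blob store named my-repo-blobstore:
# disk usage of one blob store (includes soft-deleted blobs until the "Compact blob store" task runs)
$ du -sh /opt/sonatype-work/nexus3/blobs/my-repo-blobstore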

You can refer to the GitHub project below. The script can help clear storage space on a Nexus repository by analyzing the stats it generates. The script features prompt-based user input, lets you search/filter the results, generates a CSV output file, and prints the output to the console in a tabular format.
Nexus Space Utilization - GitHub
You may also refer to the post below on the same topic.
Nexus Space Utilization - Post

Related

JFrog Artifactory system restore failing

In our organization we are running the Artifactory Pro edition with daily exports of data to a NAS drive (full system export). Every night it runs for around 4 hours and reports that the "system export was successful". The time has come to migrate our instance to a PostgreSQL-based setup (it is running on Derby now). I have read that you need to do this with a full system import.
A few numbers:
Artifacts: almost 1 million
Data size: over 2 TB of data
Export data volume: over 5 TB of data
If you were also wondering why the export data volume is more than 2 times bigger than the disk space usage, our guess is that Docker images are deduplicated (per layer) when stored in the Docker registry, but on export the deduplication is lost.
Also, I had success migrating the instance by rsync'ing the data over to another server and then starting exactly the same setup there. Worked just fine.
When starting exactly the same setup on another machine (clean install) and running system import, it fails with the following log:
[/data/artifactory/logs/artifactory.log] - "errors" : [ {
[/data/artifactory/logs/artifactory.log] - "code" : "INTERNAL_SERVER_ERROR",
[/data/artifactory/logs/artifactory.log] - "message" : "Unable to import access server",
[/data/artifactory/logs/artifactory.log] - "detail" : "File '/root/.jfrog-access/etc/access.bootstrap.json' does not exist"
[/data/artifactory/logs/artifactory.log] - } ]
[/data/artifactory/logs/artifactory.log] - }
Full log is here: https://pastebin.com/ANZBiwHC
The /root/.jfrog-access directory is the Access home directory (Access uses Derby as well).
What am I missing here?
There are a couple of things we were doing wrong according to the Artifactory documentation:
Export is not the proper way to back up a big instance. When running Artifactory with Derby, it is sufficient to rsync the filestore and derby directories to the NAS (see the sketch at the end of this answer).
Incremental export across several versions of Artifactory is NOT supported. Meaning that if you took a full export on version 4.x.x, then upgraded to version 5.x.x, then to version 6.x.x, and had incremental exports along the way, then your export will NOT be importable into version 6.x.x. After each version upgrade it is necessary to create a new full export of the instance.
I resolved the situation by removing the export and doing full system export (around 30 hours). Full system export was successfully imported on another instance (around 12 hours).
P.S. The error is still cryptic to me.
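For reference, a minimal rsync backup sketch along the lines of the first point above. The paths are assumptions based on an ARTIFACTORY_HOME of /data/artifactory and a NAS mounted at /mnt/nas, so adjust them to your layout:
# ideally run while Artifactory is stopped or quiesced so the DB and filestore stay consistent
rsync -a --delete /data/artifactory/data/filestore/ /mnt/nas/artifactory-backup/filestore/
rsync -a --delete /data/artifactory/data/derby/ /mnt/nas/artifactory-backup/derby/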

Upload dSYMS to Firebase via Fastlane

I am struggling to upload dSYM files to Firebase via Fastlane. I have a lane that looks like the following:
desc "Fetch and upload dSYM files to Firebase Crashlytics"
lane :refresh_dsyms_firebase do |options|
  download_dsyms(version: options[:version])
  upload_symbols_to_crashlytics(gsp_path: "./App/GoogleService-Info.plist")
  clean_build_artifacts
end
I confirmed that this is the correct path to the plist file, but when I try to run the lane, at first I see the following:
[17:22:47]: invalid byte sequence in UTF-8
[17:22:47]: invalid byte sequence in UTF-8
[17:22:47]: invalid byte sequence in UTF-8
and then one of these for every dSYM file found:
[17:22:48]: Uploading '70DBE65E-227E-3754-89F2-EEFA6B8EEC2F.dSYM'...
[17:22:48]: Shell command exited with exit status instead of 0.
I am trying to determine exactly what I am missing from this process. Does anyone have ideas? I am fairly new to Fastlane, so definitely assume I could be missing something basic. (Although, that empty exit status is a bit weird).
fastlane 2.107.0
EDIT (June 7th, 2021):
I updated the answer from my own to one that was helpful to me at the time this was written.
There are many other great answers on this page about using Fastlane as well - please check them out.
This may not be an option for most, but I just ended up fixing this by starting over. It may not be entirely obvious if you came over from Fabric, but I figured I would just rip off the band-aid. My original setup was using Fabric (Answers)/Firebase Crashlytics, which is the Fabric->Firebase migration path; although subtle, the configuration between the two is slightly different and causes issues with upload_symbols_to_crashlytics.
Remove support for Fabric Answers, or replace it with https://firebase.google.com/docs/analytics/ios/start
Remove the Fabric declaration in Info.plist
Modify your existing run script in Build Phases: replace your existing run script with "${PODS_ROOT}/Fabric/run" and add $(BUILT_PRODUCTS_DIR)/$(INFOPLIST_PATH) to the input files
In your AppDelegate, remove Fabric.with([Crashlytics.self]); you can also remove the import Fabric, as this is now covered by Firebase
Unlink Fabric, re-onboard Firebase Crashlytics, and select the new integration.
desc "Upload any dsyms in the current directory to crashlytics"
lane :upload_dsyms do |options|
  download_dsyms(version: version_number, build_number: build_number)
  upload_symbols_to_crashlytics(gsp_path: "./App/Resources/GoogleService-Info.plist")
  clean_build_artifacts
end
Anyone interested in this can follow the thread here: https://github.com/fastlane/fastlane/issues/13096
TL;DR: When you call
upload_symbols_to_crashlytics(gsp_path: "./App/GoogleService-Info.plist")
it will call a binary from the installed Fabric pod called upload-symbols, and the invocation will look something like this:
./Pods/Fabric/upload-symbols -a db4d085462b3cd8e3ac3b50f118e273f077497b0 -gsp ./App/GoogleService-Info.plist -p ios /private/var/folders/x1/x6nqt4997t38zr9x7zwz72kh0000gn/T/d30181115-8238-1fr38bo/D4CE43B9-9350-3FEE-9E71-9E31T39080CD.dSYM
You'll notice that it is called with both the Fabric API key and the GoogleService-Info.plist path. I do not know why, but this causes it not to upload. You'll have to temporarily remove the Fabric configuration information from your Info.plist file before running the Fastlane lane (remember to re-add the Fabric configuration afterwards).
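A sketch of how that temporary removal could be scripted, assuming the Fabric configuration sits under a top-level Fabric key in Info.plist; the App/Info.plist path, the placeholder version, and the lane name reused from the question are all assumptions to adapt to your project:
# back up Info.plist, strip the Fabric block, upload, then restore it
cp App/Info.plist App/Info.plist.bak
/usr/libexec/PlistBuddy -c "Delete :Fabric" App/Info.plist
bundle exec fastlane refresh_dsyms_firebase version:1.2.3   # 1.2.3 is a placeholder version
mv App/Info.plist.bak App/Info.plist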
First, you need to use upload_symbols_to_crashlytics, but before using it you have to download your dSYMs from App Store Connect. To do that, use download_dsyms with the version and build_number parameters. Fastlane will also ask you for your app_identifier, so I advise passing it explicitly so the build is not interrupted while waiting for your answer.
desc "Upload any dsyms in the current directory to Crashlytics of firebase"
lane :upload_dsyms do |options|
version_number = get_version_number(target: "your_app_target")
build_number = get_build_number
download_dsyms(
app_identifier: "your_app_identifier",
version: version_number,
build_number: build_number
)
upload_symbols_to_crashlytics(gsp_path: "./your_app_name or your_app_target/another_directroy/GoogleService-Info.plist")
clean_build_artifacts
end
For my app it was:
desc "Upload any dsyms in the current directory to Crashlytics of firebase"
lane :upload_dsyms do |options|
version_number = get_version_number(target: "Movies")
build_number = get_build_number
download_dsyms(
app_identifier: "com.vngrs.Movies.demo",
version: version_number,
build_number: build_number
)
upload_symbols_to_crashlytics(gsp_path: "./Movies/Resources/GoogleService-Info.plist")
clean_build_artifacts
end
Details
Xcode Version 11.3.1 (11C504)
Firebase tools 7.14.0
Fastlane 2.143.0
Solution
./fastlane/Pluginfile
gem 'fastlane-plugin-firebase_app_distribution'
Lane
before_all do
  # Update fastlane
  update_fastlane
  # Update fastlane plugins
  sh("fastlane update_plugins")
  # Update firebase tools
  sh("curl -sL firebase.tools | upgrade=true bash")
end
Usage 1.
Download dSYMs from App Store Connect and upload them to Firebase:
download_dsyms(version: '1.0', build_number: '1')
upload_symbols_to_crashlytics(gsp_path: "./App/Environment/production/GoogleService-Info-production.plist")
Usage 2.
Get dSYMs from the archive (after a build) and upload them to Firebase:
gym(
  configuration: 'Release',
  scheme: 'MyApp',
  include_bitcode: true
)
upload_symbols_to_crashlytics(gsp_path: "./App/Environment/production/GoogleService-Info-production.plist")
Info
get_version_number
download_dsyms
sh
update_fastlane
upload_symbols_to_crashlytics
This worked for me:
desc "Download and upload dSYMs to Firebase"
lane :upload_dsyms do
  download_dsyms # This will download all versions
  upload_symbols_to_crashlytics(
    gsp_path: "Path to your google plist",
    binary_path: "./Pods/FirebaseCrashlytics/upload-symbols") # this points to the upload-symbols binary from the FirebaseCrashlytics pod
  clean_build_artifacts # Delete the local dSYM files
end
This works for me.
1. Create a new lane in your Fastfile
# Fastfile
default_platform(:ios)
# Constants
XCODE_PROJECT = "PROJECT.xcodeproj"
XCODE_WORKSPACE = "PROJECT.xcworkspace"
PRODUCTION_GSP_PATH = "./APP_ROOT/GoogleService-Info.plist"
platform :ios do
  desc "Download production dSYMs and upload them to Crashlytics"
  lane :upload_crashlytics_prod do
    download_dsyms(
      version: get_version_number(xcodeproj: XCODE_PROJECT),
      build_number: get_build_number(xcodeproj: XCODE_PROJECT)
    )
    upload_symbols_to_crashlytics(gsp_path: PRODUCTION_GSP_PATH)
  end
end
2. Now call it from your shell script or terminal
fastlane upload_crashlytics_prod

Deleting artifacts in artifactory

I want to delete artifacts in Artifactory. I googled and found this link:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API
Here the Delete Build REST API is what we are going for at the moment. Can anyone give me a general idea of how the command should look using curl? Also, what do I need to specify as the build name?
For deleting a single artifact or folder you should use the Delete Item API, for example
curl -uadmin:password -XDELETE http://localhost:8080/artifactory/libs-release-local/ch/qos/logback/logback-classic/0.9.9
Notice that you will need a user with delete permissions.
If all goes well you should expect a response with a 204 status and no content.
The Delete Builds API, on the other hand, is intended for deleting build information and is only relevant if you are using the Artifactory build integration.
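For that case, a hedged example of what the curl call could look like (endpoint and parameters as documented for the Delete Builds REST API; verify them against the docs for your Artifactory version). The build name is the name under which your CI server publishes build info, as shown in the Builds tab:
# delete build numbers 51 and 52 of the build named "my-build", including their artifacts
curl -uadmin:password -X DELETE "http://localhost:8080/artifactory/api/build/my-build?buildNumbers=51,52&artifacts=1"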
Nowadays there's a tool that can be used for it (note that I am a contributor to that tool):
https://github.com/devopshq/artifactory-cleanup
Assume I have 10 repositories and I want to keep only the last 20 artifacts in 5 of them and an unlimited number in the other 5.
The rule for those repositories would look like:
# artifactory-cleanup.yaml
artifactory-cleanup:
  server: https://repo.example.com/artifactory
  # $VAR is auto populated from environment variables
  user: $ARTIFACTORY_USERNAME
  password: $ARTIFACTORY_PASSWORD

  policies:
    - name: reponame
      rules:
        - rule: Repo
          name: "reponame"
        - rule: KeepLatestNFiles
          count: 20
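A possible way to run it, sketched under the assumption that the CLI options are as described in the project README (check artifactory-cleanup --help for your version; by default the tool should only report what it would delete):
pip install artifactory-cleanup
# dry run by default; the project documents a separate flag to actually delete
artifactory-cleanup --config artifactory-cleanup.yaml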

How to retrieve currently applied node configuration from Riak v2.0+

Showing currently applied configuration values
In v2.0+ of Riak there is a new command option: riak config effective
which I read as meaning it will tell you the currently running values of Riak:
At any time, you can get a snapshot of currently applied
configurations through the command line. For a listing of all of the
configs currently applied in the node
Config changes applied only on start of each node?
In multiple places in the Riak documentation there are statements like:
Remember that you must stop and then re-start each node when you
change storage backends or modify any other configuration
Problem:
However, when I make a change to a setting (I've tested this in both riak.conf and advanced.config) without restarting the node, I still see the newest value when running riak config effective.
ie:
Start node: riak start
View current setting for log level: riak config effective | grep log.console.level
log.console.level = info
Change the level to debug (something that will output a lot to console.log)
Re-run: riak config effective | grep log.console.level, we get:
log.console.level = debug
Checking the console log file for debug output with cat /var/log/riak/console.log | grep debug gives no results (indicating the config change has not been applied).
So the question is, how can I retrieve and verify what config setting each Riak node is running under?
When Riak starts, it creates two files: 'app..config' and 'vm..config'. The default location is in a 'generated.configs' directory under the platform data directory (usually /var/lib/riak).
These files will contain the settings that were in place when Riak was started. The command riak config effective processes the current riak.conf and advanced.config files.
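So to verify what a node is actually running with, compare the generated files against what riak config effective reports. A small sketch, assuming the default platform data directory /var/lib/riak (adjust the paths and the grep pattern to the setting you changed):
# what the current riak.conf/advanced.config would produce
riak config effective | grep log.console.level
# what the node was actually started with
ls -lt /var/lib/riak/generated.configs/
grep "level" /var/lib/riak/generated.configs/app.*.config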

Deleting artifacts older than 2 years from local nexus repository

We're running Nexus on some old hardware which is limited in disk space, and we would like to remove artifacts older than a certain threshold.
Is there any way to do this other than a combination of find and curl?
There is a scheduled task that can automatically remove old snapshot releases:
http://www.sonatype.com/people/2009/09/nexus-scheduled-tasks/
http://www.sonatype.com/books/nexus-book/reference/confignx-sect-managing-tasks.html
Unfortunately, this does not work for hosted release repositories.
As mentioned in a Sonatype blog post linked from a comment on the blog in gavenkoa's answer, since Nexus 2.5 there is a built-in "Remove Releases From Repository" scheduled task, which can be configured to delete old releases while keeping a defined number.
This is sufficient to meet our needs.
Delete all files that have not been accessed for more than 100 days and have not been modified for more than 200 days:
find . -type f -atime +100 -mtime +200 -delete
To cleanup empty directories:
find . -type d -empty -delete
Or alternatively look at https://github.com/akquinet/nexus_cleaner/blob/master/nexus_clean.sh and the corresponding blog entry http://blog.akquinet.de/2013/12/09/how-to-clean-your-nexus-release-repositories/ (delete all except the last 10 releases).
Auto-purge Docker images from Nexus 3 that are older than 30 days (you can change this) and have not been downloaded:
https://gist.github.com/anjia0532/4a7fee95fd28d17f67412f48695bb6de
# nexus3's username and pwd
username = 'admin'
password = 'admin123'
# nexus host
nexusHost = 'http://localhost:8081'
# purge repo
repoName = 'docker'
# older than days
days = 30
#change and run it
For Nexus 2, you can use my Spring Boot application https://github.com/vernetto/nexusclean. You can define rules based on date and on a minimum number of artifacts to retain, and it generates "rm -rf" commands (using the REST API is damn slow).
For Nexus 3, I would definitely use a Groovy script run as an admin "Execute script" task. One is posted here: groovy script to delete artifacts on nexus 3 (not nexus 2)
