In our organization we run Artifactory Pro edition with daily exports of data to a NAS drive (a full system export). Every night it runs for around 4 hours and reports that the "system export was successful". The time has come to migrate our instance to a PostgreSQL-backed setup (it is running on Derby now). I have read that you need to do this with a full system import.
A few numbers:
Artifacts: almost 1 million
Data size: over 2TB of data
Export data volume: over 5TB of data
If you are also wondering why the export data volume is more than twice the disk space usage: our guess is that Docker images are deduplicated (per layer) when stored in the Docker registry, but on export that deduplication is lost.
Also, I have had success migrating the instance by rsync'ing the data over to another server and then starting exactly the same setup there. It worked just fine.
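For what it's worth, a minimal sketch of that rsync-based migration, assuming the Artifactory home is /data/artifactory (as in the log below), that the same Artifactory version is installed on the target, and that the service unit is called artifactory; the hostname newserver is a placeholder:

# stop Artifactory so the Derby database and the filestore are consistent
sudo systemctl stop artifactory
# copy the whole home directory (filestore, Derby database, config) to the new machine
rsync -aH --delete /data/artifactory/ newserver:/data/artifactory/
# then start the identical installation on the new machine
ssh newserver sudo systemctl start artifactory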
When I start exactly the same setup on another machine (a clean install) and run a system import, it fails with the following log:
[/data/artifactory/logs/artifactory.log] - "errors" : [ {
[/data/artifactory/logs/artifactory.log] - "code" : "INTERNAL_SERVER_ERROR",
[/data/artifactory/logs/artifactory.log] - "message" : "Unable to import access server",
[/data/artifactory/logs/artifactory.log] - "detail" : "File '/root/.jfrog-access/etc/access.bootstrap.json' does not exist"
[/data/artifactory/logs/artifactory.log] - } ]
[/data/artifactory/logs/artifactory.log] - }
Full log is here: https://pastebin.com/ANZBiwHC
The /root/.jfrog-access directory is the Access home directory (Access uses Derby as well).
What am I missing here?
There are a couple of things we were doing wrong according to the Artifactory documentation:
Export is not the proper way to back up a big instance. When running Artifactory with Derby, it is sufficient to rsync the filestore and derby directories to the NAS (see the sketch below).
Incremental export across several versions of Artifactory is NOT supported. Meaning that if you took a full export on version 4.x.x, then upgraded to version 5.x.x, then to version 6.x.x, and ran incremental exports along the way, that export will NOT import into version 6.x.x. After each version upgrade it is necessary to create a new full export of the instance.
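A hedged sketch of what that backup could look like, assuming the default layout under /data/artifactory and a NAS mounted at /mnt/nas (both placeholders for your paths):

# incrementally mirror the binary store and the embedded Derby database to the NAS;
# run this while Artifactory is stopped (or quiet) so the two stay consistent
rsync -aH --delete /data/artifactory/data/filestore/ /mnt/nas/artifactory-backup/filestore/
rsync -aH --delete /data/artifactory/data/derby/ /mnt/nas/artifactory-backup/derby/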
I resolved the situation by removing the old export and doing a fresh full system export (around 30 hours). The full system export was then successfully imported on the other instance (around 12 hours).
P.S. The error is still cryptic to me.
OS: Linux 5.9.16-1-MANJARO
Electron version: 10.1.5
BetterSqlite version: 7.1.2
I am currently writing an application using Electron and BetterSqlite.
I build the AppImage like this:
npm run build && electron-builder build
This is how I access the database from my code:
db = new Database(
  path.join(__dirname, `/${dbName}`).replace("/app.asar", "")
);
I have added the database file to the package using:
"extraResources": [
  "public/build/Database.db"
],
But when I open the AppImage I get the following error message:
SqliteError: attempt to write a readonly database
The database seems to be inaccessible because the /tmp/.mountxxx mount point is read-only.
This behavior does not occur when I open the application in the development folder, since that is not a read-only directory.
Is there a way to use the database from the /tmp/.mountxxx directory?
How would I go about accessing the database another way?
Thank you in advance.
I have searched for a way to use the AppImage mount point for reading and writing, but I have not found anything. I will be using the user's home directory to store the database instead.
As the error says, when an AppImage is executed the AppDir is mounted as a read-only filesystem.
To work around this you need to copy the database file into the user's home directory using a startup script. For example, you can copy it to "$HOME/.cache/com.myapp/appdata.db" and then use this new copy.
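Something along these lines (an untested sketch; it assumes the main process, the better-sqlite3 package, and that electron-builder places the extraResources file at <resources>/public/build/Database.db - adjust the path if your unpacked AppImage shows it elsewhere):

const fs = require("fs");
const path = require("path");
const { app } = require("electron");
const Database = require("better-sqlite3");

let db;

app.whenReady().then(() => {
  // read-only copy shipped inside the AppImage via "extraResources"
  // (assumed to land under process.resourcesPath, mirroring the project path)
  const bundledDb = path.join(process.resourcesPath, "public", "build", "Database.db");

  // writable per-user location, e.g. ~/.config/<app name>/Database.db on Linux
  const userDb = path.join(app.getPath("userData"), "Database.db");

  // copy the bundled database on first run only, then always open the writable copy
  fs.mkdirSync(path.dirname(userDb), { recursive: true });
  if (!fs.existsSync(userDb)) {
    fs.copyFileSync(bundledDb, userDb);
  }
  db = new Database(userDb);
});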
I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified in destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and I am.
Question #1: Is there any other known cause for getting this error? I've scoured the resource's GitHub page without finding anything. Any ideas why I may be getting the permissions error?
Without an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so, I use fly hijack to get a shell in the corresponding container. I notice that Docker is installed in the container, so the next step, I think, would be to do a docker import on the tarball of the image I'm trying to push and then a docker push to push it to the repo. When attempting to run the import I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running privileged? I thought the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))

jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. I found after much troubleshooting that destination_password wasn't being picked up properly because it contains special characters and wasn't quoted. I fixed the issue by quoting the password in the vars YAML file passed to fly with --load-vars-from.
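For anyone hitting the same thing, a hedged sketch of what the fix amounts to; the target name ci, the pipeline name, and the file names pipeline.yml / credentials.yml are placeholders, and the password shown is made up:

# credentials.yml -- quote any value containing YAML-special characters
# (things like :, #, {, [ or leading/trailing spaces), otherwise it may not
# survive parsing intact:
#
#   destination_username: deployer
#   destination_password: "p@ss#w0rd:123"
#
# then set the pipeline, loading the vars file with -l (--load-vars-from):
fly -t ci set-pipeline -p update-image -c pipeline.yml -l credentials.yml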
How can I get the size of a specific repository in Nexus 3?
For example, Artifactory shows the repository "size on disk" in its UI.
Does Nexus have something similar? If not, how can I get this information with a script?
You can use an admin task with the Groovy script nx-blob-repo-space-report.groovy from https://issues.sonatype.org/browse/NEXUS-14837 - for me it turned out to be too slow.
Or you can get it from the database:
1. Log in as the user that owns the Nexus installation on the Nexus server (e.g. nexus).
2. Go to the application directory (e.g. /opt/nexus):
$ cd /opt/nexus
3. Run the OrientDB console:
$ java -jar ./lib/support/nexus-orient-console.jar
4. Connect to the local database (e.g. /opt/sonatype-work/nexus3/db/component):
> CONNECT PLOCAL:/opt/sonatype-work/nexus3/db/component admin admin
5. Find the repository's row id in the #RID column by its repository_name value:
> select * from bucket limit 50;
6. Get the sum of the sizes of all assets with the bucket row id found in the previous step:
> select sum(size) from asset where bucket = #15:9;
The result should look like this (apparently in bytes):
+----+------------+
|# |sum |
+----+------------+
|0 |224981921470|
+----+------------+
The Nexus database connection steps are taken from https://support.sonatype.com/hc/en-us/articles/115002930827-Accessing-the-OrientDB-Console
Other useful queries:
Total size by repository name (instead of steps 5 and 6):
> select sum(size) from asset where bucket.repository_name = 'releases';
Top 10 repositories by size:
> select bucket.repository_name as repository,sum(size) as bytes from asset group by bucket.repository_name order by bytes desc limit 10;
Assign each repository to its own blob store; Nexus shows the size of each blob store in its UI, which then gives you a per-repository size.
You can refer to the GitHub project below. The script can help clear storage space on a Nexus repository by analyzing the stats it generates. The script features prompt-based user input, search/filter of the results, CSV output file generation, and printing the output to the console in a tabular format.
Nexus Space Utilization - GitHub
You may also refer to the post below on the same topic.
Nexus Space Utilization - Post
I'm using Artifactory OSS 4.1.0 and Java 1.8.0_51.
When I try to download one of my local artifacts from the Artifactory web interface, I get this:
{
  "errors" : [ {
    "status" : 500,
    "message" : "Could not process download request: Binary provider has no content for 'bab1c4e18f6c5edfb65b2503a388dea2fed0deb8'"
  } ]
}
But I found this file in my Artifactory data area: ./files/ba/bab1c4e18f6c5edfb65b2503a388dea2fed0deb8, and upon further inspection it is the WAR file I tried to download.
I've come across other people on the web with the same error message, but their issue was with caching external artifacts, and their workaround was to delete the cache.
Does anyone have an idea what's going on and how I can fix the problem? BTW, I did stop and restart our Artifactory server, but with no noticeable difference.
Artifactory doesn't store the binaries under the ./files directory, but under $ARTIFACTORY_HOME/data/filestore.
It looks like you had a symbolic link from the files directory to the filestore directory and this link was deleted.
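A quick, hedged way to check that on the server (using the checksum from the error above; $ARTIFACTORY_HOME is wherever your data directory lives):

# is "files" a symlink, and where does it point?
ls -ld $ARTIFACTORY_HOME/data/files $ARTIFACTORY_HOME/data/filestore
# does the missing binary actually exist in the real filestore?
find $ARTIFACTORY_HOME/data/filestore -name bab1c4e18f6c5edfb65b2503a388dea2fed0deb8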
As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied @ rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I had misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key. So my config for the berks-api now looks like this:
{
  "home_path":"/etc/berkshelf/api-server",
  "endpoints":[
    {
      "type":"chef_server",
      "options":
      {
        "url":"https://api.opscode.com/organizations/my-organization",
        "client_key":"/etc/berkshelf/api-server/berkshelf.pem",
        "client_name":"berkshelf"
      }
    }
  ],
  "build_interval":5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory, then copy it into place from within Linux.
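A hedged sketch of that two-step copy; the hostname berks-api-server is a placeholder, and the last line assumes the service is managed by runit, as the /etc/service/berks-api path suggests:

# from the workstation: upload the key to the home directory first
scp berkshelf.pem user@berks-api-server:~/

# on the server: move it into place and restart the service
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
# make sure the user the berks-api service runs as can read this file
sudo sv restart berks-api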
Having done this, the service starts and works perfectly.