JFrog Artifactory - how to upload symlinks properly and download the destination

I have the following artifactory repo and directory structure:
my-repository/binaries/binary.file
my-repository/dir1/
my-repository/dir2/
...
I would like to add a symlink to binary.file in dir1 and dir2.
Assuming that I've added binary.file as follows:
$ jfrog rt u binary.file my-repository/binaries/binary.file
So here's where I fail to understand the concept of symlinks in Artifactory.
I've recreated the same directory structure on the local filesystem and created a symlink:
$ find .
./my-repository/
./my-repository/binaries/
./my-repository/binaries/binary.file
./my-repository/dir1/
./my-repository/dir2/
$ cd my-repository/dir1/
$ ln -s ../binaries/binary.file .
$ jfrog rt u --symlinks binary.file my-repository/dir1/
...
$ cd /tmp
$ jfrog rt dl --validate-symlinks my-repository/dir1/binary.file
Log path: /home/user/.jfrog/logs/jfrog-cli.2020-04-09.18-42-13.19989.log
{
"status": "failure",
"totals": {
"success": 0,
"failure": 1
}
}
[Error] Download finished with errors, please review the logs.
$ cat /home/user/.jfrog/logs/jfrog-cli.2020-04-09.18-42-13.19989.log
[Info] Searching items to download...
[Info] [Thread 2] Downloading my-repository/dir1/binary.file
[Error] Symlink validation failed, target doesn't exist: ../binaries/binary.file
If I remove the --validate-symlinks argument, an empty file gets downloaded instead. What is the proper way to upload symlinks, and conversely, to download them? Is there a way to download the target file by referencing its symlink?
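For context on what actually gets stored: JFrog CLI uploads a symlink as a zero-byte artifact whose target path is kept in a property (symlink.dest, judging by the validation error above), not as a server-side copy of the target's content. You can confirm what was uploaded with the item-properties REST call; the host and credentials below are placeholders:
$ curl -u user:password "https://my-artifactory/artifactory/api/storage/my-repository/dir1/binary.file?properties"
If the artifact carries only a symlink.dest property and no content, that is why a plain download produces an empty file.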

Related

How to specify putting a zip into JFrog Artifactory?

I have a directory like /tmp/some-js/ (with a lot of folders).
I added it to a zip file, /tmp/some-js.zip.
I have a structure in Artifactory like /npm-dev/some-js/*
I uploaded this zip to Artifactory with the command:
curl -u user:api-key -k -X PUT https://xxx.xx.xx.xx:8081/artifactory/npm_dev/some-js/ -T /tmp/some-js.zip
And I ended up with the directory /npm-dev/some-js/some-js.zip/* in Artifactory.
Is there a way to unpack the contents of some-js.zip into /npm-dev/some-js?
Uploading an archive file (such as a zip file) to Artifactory and extracting its content to a specific directory is done by sending the X-Explode-Archive header with the PUT request:
PUT https://<jfrog-platform>/artifactory/the-repo/path/to/dir/file.zip
X-Explode-Archive: true
<file-content>
The content of file.zip will be extracted and deployed under the-repo/path/to/dir/, preserving the relative directory structure in the zip file. So if file.zip has the following structure:
foo/
|- bar.txt
|- baz.txt
The following files will be created in Artifactory:
the-repo/path/to/dir/foo/bar.txt
the-repo/path/to/dir/foo/baz.txt
Using curl and the details in the question:
curl -u user:api-key \
-k \
-X PUT \
https://xxx.xx.xx.xx:8081/artifactory/npm_dev/some-js/some-js.zip \
-T /tmp/some-js.zip \
-H "X-Explode-Archive: true"
For more information, see the documentation on Deploy Artifacts from Archive.
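If you prefer JFrog CLI over raw curl, its upload command exposes the same behavior via the --explode flag (a sketch, assuming a CLI version that supports it; paths taken from the question):
$ jfrog rt u /tmp/some-js.zip npm_dev/some-js/some-js.zip --explode=true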

Error - Artifactory response: 405 Method Not Allowed

I'm trying to download a file from my JFrog Artifactory instance to my local machine with the CLI command:
jfrog rt dl --user *username* --password *password* -url https://*domain*.jfrog.io/artifactory/*my-folder-name*/ --flat=false * c:/jfrog/
I'm getting:
Log path: C:\Users\Administrator\.jfrog\logs\jfrog-cli.2020-08-19.18-38-11.3780.log
{ "status": "failure",
"totals": {
"success": 0,
"failure": 0
}
}
[Error] Download finished with errors, please review the logs.
From the logs:
[Error] Artifactory response: 405 Method Not Allowed
But when I run jfrog rt ping, I get:
"OK"
The reason you are getting 405 is that JFrog CLI is trying to ping Artifactory using the URL you passed: https://domain.jfrog.io/artifactory/my-folder-name/. To overcome this, point --url at the Artifactory root and move the repository into the download pattern:
jfrog rt dl --user username --password password --url https://domain.jfrog.io/artifactory/ "<repository_key>/" --flat=false c:/jfrog/
For example, if I want to download all the artifacts from the "jars" folder of the "generic-local" repository, the JFrog CLI command would be:
$ jfrog rt dl --user admin --password password --url http://localhost:8081/artifactory "generic-local/jars/" --flat=false
It should download all the artifacts under "generic-local/jars" into the current directory.
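As a side note, you can avoid repeating --url, --user, and --password on every call by storing the connection once with jfrog rt c (a sketch; the server ID "myserver" is arbitrary):
$ jfrog rt c myserver --url=https://domain.jfrog.io/artifactory --user=username --password=password
$ jfrog rt dl "<repository_key>/" c:/jfrog/ --flat=false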

How can I auto-create the .docker folder in the home directory when spinning up a VM cluster (gce_vm_cluster) on gcloud through R?

I create VMs using the following command in R:
vms <- gce_vm_cluster(vm_prefix = vm_base_name,
                      cluster_size = cluster_size,
                      docker_image = my_docker,
                      ssh_args = list(username = "test_user",
                                      key.pub = "/home/test_user/.ssh/google_compute_engine.pub",
                                      key.private = "/home/test_user/.ssh/google_compute_engine"),
                      predefined_type = "n1-highmem-2")
Now when I SSH into the VMs, I do not find the .docker folder in the home directory:
test_user@test_server_name:~$ gcloud beta compute --project "my_test_project" ssh --zone "us-central1-a" "r-vm3"
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .ssh
Now the command below gives an error (obviously):
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
Unable to find image 'gcr.io/my_test_project/myimage:version1' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I need to run the docker-credential-gcr configure-docker command to get the .docker folder and its config.json file:
test_user@r-vm3 ~ $ docker-credential-gcr configure-docker
/home/test_user/.docker/config.json configured to use this credential helper for GCR registries
test_user@r-vm3 ~ $ ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .docker .ssh
Now,
test_user@r-vm3 ~ $ docker pull gcr.io/my_test_project/myimage:version1
version1: Pulling from my_test_project/myimage
Digest: sha256:98abc76543d2e10987f6ghi5j4321098k7654321l0987m65no4321p09qrs87654t
Status: Image is up to date for gcr.io/my_test_project/myimage:version1
gcr.io/my_test_project/myimage:version1
What I am trying to resolve:
I need .docker/config.json to appear on the VMs without SSHing in and running the docker-credential-gcr configure-docker command.
How about creating a bash script, uploading it to a Cloud Storage bucket, and calling it while creating the cluster? Also, you mentioned "R"; are you talking about an R script?
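A minimal sketch of that suggestion, assuming the VMs honor standard GCE startup-script metadata (the script and bucket names are illustrative, and gce_vm_cluster may offer its own way to pass metadata):
#!/bin/bash
# startup.sh (hypothetical name): create ~/.docker/config.json for
# test_user at boot, so no manual SSH step is needed.
sudo -u test_user docker-credential-gcr configure-docker
Upload the script and reference it when creating the instances:
$ gsutil cp startup.sh gs://my-bucket/startup.sh
$ gcloud compute instances create r-vm3 --metadata startup-script-url=gs://my-bucket/startup.sh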

`ipfs add` on a mounted directory: will the complete data be downloaded even if it is already cached in the local IPFS repo?

If a user adds the same file/folder to IPFS (when it is already cached), IPFS will not fetch any new blocks into the ipfs repo.
$ ipfs repo stat
NumObjects: 2796
RepoSize: 168392957
$ ipfs add -r folder
added QmP41fuJg5BcqVeikmUD1fjQhqZUsyUMMiAorMWmtqGnZY folder
$ ipfs repo stat
NumObjects: 2802
RepoSize: 168502656
$ ipfs add -r folder
added QmP41fuJg5BcqVeikmUD1fjQhqZUsyUMMiAorMWmtqGnZY folder
$ ipfs repo stat
NumObjects: 2802 # Remains constant; no new block is fetched.
RepoSize: 168502656
As I understand it, ipfs goes through all the hashes of the chunks in its local repo, and if it cannot find a matching hash, it fetches the corresponding block.
At this stage, I am wondering: if I add a file (for example, 1 GB) to IPFS from a URL using curl, or from a mounted directory, on the first try the complete 1 GB file has to be downloaded and added to ipfs.
But when I re-do the same operation, since the 1 GB file is already cached in ipfs, will the complete 1 GB still be downloaded just so ipfs can check the hashes of its chunks?
$ sshfs user@<IP>:/home/user/directory ~/mountedDirectory
$ ipfs add -r ~/mountedDirectory
$ umount -f ~/mountedDirectory
$ sshfs user@<IP>:/home/user/directory ~/mountedDirectory
$ ipfs add -r ~/mountedDirectory # To do this, must all files on the mounted directory be downloaded?
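A sketch that makes the trade-off visible: ipfs can only compute the chunk hashes by reading the data, so a re-add still has to pull every byte across the sshfs mount; the local cache only saves the repo writes, not the transfer. The --only-hash flag demonstrates this, since it reads the whole tree but stores nothing:
$ ipfs add -r -n ~/mountedDirectory # -n/--only-hash: hash locally, write no blocks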

Git subtree & remote information not available to other users

Git subtree & remote information is missing from the .git/config file in workspaces other than the one where the commit was made.
Other users who pulled the git repo are not able to see the remote repo information in their .git/config file.
They are not able to update or modify the subtrees.
I used the following commands to add the subtree
$ git remote add -f github.com/google/cadvisor https://github.com/google/cadvisor.git
$ git merge -s ours --no-commit github.com/google/cadvisor/master
$ git read-tree --prefix=github.com/google/cadvisor -u github.com/google/cadvisor/master
$ git commit -m ""
What is the best way to get it working?
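Worth noting as a partial answer (a sketch, not a subtree-specific mechanism): remotes live in the local .git/config, which git never pushes or pulls, so each collaborator has to recreate the remote in their own clone before they can work with the subtree:
$ git remote add -f github.com/google/cadvisor https://github.com/google/cadvisor.git
A small setup script committed to the repo is a common way to make sure everyone adds the same remotes.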
