I am passing the following
- pipe: JfrogDev/artifactory-docker:0.2.12
  variables:
    ARTIFACTORY_URL: "${JFROG_URL}"
    ARTIFACTORY_USER: "${JFROG_USER}"
    ARTIFACTORY_PASSWORD: "${JFROG_PASSWORD}"
    DOCKER_IMAGE_TAG: "xxxx/web-interface:${BITBUCKET_BUILD_NUMBER}"
    DOCKER_TARGET_REPO: "local-containers"
And I am getting back the following
Status: Downloaded newer image for jfrog-int-docker-open-docker.bintray.io/artifactory-docker:0.2.12
INFO: Starting pipe execution...
jfrog rt config --url=$JFROG_URL --user=$JFROG_USER --password=$JFROG_PASSWORD --interactive=false
[Error] Wrong number of arguments. You can read the documentation at https://www.jfrog.com/confluence/display/CLI/JFrog+CLI
I can't find anything wrong based on the documentation. It looks like the pipe makes a CLI call and that call is failing.
It looks like there is an issue in the pipe: it tries to configure the server without the mandatory server-id argument:
https://bitbucket.org/JfrogDev/artifactory-docker/src/3b32b5d01d31c07acadb2a0d29a240110f88d59d/pipe/pipe.sh#lines-65
Instead, you can use the new jfrog-setup-cli pipe. This pipe downloads and configures the JFrog CLI.
For example, in your case:
- pipe: jfrog/jfrog-setup-cli:1.0.0
- source ./jfrog-setup-cli.sh
- jfrog rt docker-push "xxxx/web-interface:${BITBUCKET_BUILD_NUMBER}" local-containers
It requires populating one secured environment variable prefixed with JF_ARTIFACTORY_ (e.g. JF_ARTIFACTORY_MYSERVER) with a JFrog CLI server token. This token is created locally on your machine using the jfrog rt config export command.
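Roughly, the local steps would look like this (the server ID, URL, and credentials below are placeholders, so treat this as a sketch rather than the exact commands):
# Configure the server locally, then export a token for CI use
jfrog rt config my-server --url=https://mycompany.jfrog.io/artifactory --user=myuser --password=mypass --interactive=false
jfrog rt config export my-server
# Store the printed token in a secured Bitbucket variable such as JF_ARTIFACTORY_MYSERVER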
Read more about this pipe here.
Read more about the jfrog rt config export command here.
Using the jfrog CLI (jfrog rt s) I can dump file information of my repo to stdout, but this information does not contain the stored checksum. I see a similar question, "Artifactory CLI - Jfrog - How to get binary Hash code (SHA1, SHA256) through jfrog CLI", but the answer is only about searching for a specific checksum. Not being very familiar with jfrog at all, can someone suggest a simple method (it has to use jfrog, please) for dumping the checksum info for all files, or a specific file, in the repo?
Since version 1.36.0 of JFrog CLI, the search command also returns the SHA1 and MD5 of the files. For example:
$ jfrog rt s repo/path/file
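A couple of hedged usage examples (repo and path names are placeholders); the search command accepts wildcard patterns, so it works for a whole folder as well as a single file:
# All files under a folder
jfrog rt s "your_repo/your_path/*"
# A single file
jfrog rt s "your_repo/your_path/your_file"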
The jfrog rt search command theoretically supports returning sha256 sums if you're using jfrog-cli version 1.36.0 or greater, Artifactory server 5.5 or greater, and a correctly configured database.
Under the hood, the jfrog rt search command utilizes AQL to generate a query which it sends to the server. The default query performs items.find().include(*), which returns all of the supported fields. I guess if the database isn't set up right then SHA-256 sums aren't supported (this seems to be the case at my workplace).
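For reference, you can issue much the same query yourself against the AQL REST endpoint with plain curl; a hedged sketch (URL, credentials, and repo name are placeholders, and the sha256 field only comes back if your server supports it):
# Illustrative raw AQL call; adjust URL, credentials, and repo to your environment
curl -u your_user:your_password -H "Content-Type: text/plain" -X POST "https://your_artifactory_url/artifactory/api/search/aql" -d 'items.find({"repo":"your_repo"}).include("name","path","actual_sha1","sha256")'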
Fortunately, there's an alternative which works even on old versions of jfrog-cli (I've tested this with 1.26.2). It uses the jfrog rt curl command to grab the metadata directly from the server. Note that jfrog rt curl doesn't support the standard --url, --access-token, or --apikey parameters, so you'll need to configure a connection to the server using jfrog rt c first (don't forget --interactive=false if you're automating this; a minimal setup sketch appears after the JSON output below). Once you've done that, the magic incantation you're looking for is:
jfrog rt curl -XGET "api/storage/your_repo/your_file"
This will return a JSON blob like the following:
{
  "repo" : "your_repo",
  "path" : "/your_path/your_file",
  "created" : "2020-07-21T21:28:20.663Z",
  "createdBy" : "token:your-token",
  "lastModified" : "2020-07-21T21:28:27.277Z",
  "modifiedBy" : "token:your-token",
  "lastUpdated" : "2020-07-21T21:28:27.287Z",
  "downloadUri" : "https://your_artifactory_url/artifactory/your_repo/your_path/your_file",
  "mimeType" : "application/x-gzip",
  "size" : "1198168",
  "checksums" : {
    "sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
    "md5" : "a012711241ba3a5bd4a04e833001d490",
    "sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
  },
  "originalChecksums" : {
    "sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
    "md5" : "a012711241ba3a5bd4a04e833001d490",
    "sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
  },
  "uri" : "https://your_artifactory_url/artifactory/api/storage/your_repo/your_path/your_file"
}
The originalChecksums are from when the artifact was first uploaded. If the artifact has been tampered with on the server then the regular checksums may be different. For this reason I'd recommend validating against the originalChecksums unless you're operating in an environment where the same artifacts are expected to be overwritten.
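As mentioned above, jfrog rt curl relies on a server configured beforehand with jfrog rt c. A minimal non-interactive sketch (the server ID, URL, and credentials are placeholders; adjust to your setup):
# One-time server configuration so that jfrog rt curl knows where to send requests
jfrog rt c my-server --url=https://your_artifactory_url/artifactory --user=your_user --apikey=your_api_key --interactive=false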
If you're looking for a quick and dirty way to extract the returned checksums from the JSON blob then try this ugly hack I threw together in bash (note that this won't work if you collapse the whitespace first):
#!/bin/bash
...
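# Pull both sha256 values ("checksums" first, then "originalChecksums") out of the pretty-printed JSON in $response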
checksums=($(sed -n -E "s/^\\s+\\\"sha256\\\"\\s:\\s\\\"(.*)\\\"\$/\\1/p" <<< "$response"))
checksum="${checksums[0]}"
original_checksum="${checksums[1]}"
If you have the option, I'd recommend using a more robust json parser instead.
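For instance, if jq is available, something along these lines should be sturdier than the sed hack (the path is a placeholder; an untested sketch):
# Fetch the metadata once, then pick out the checksums with jq
response=$(jfrog rt curl -XGET "api/storage/your_repo/your_file")
checksum=$(jq -r '.checksums.sha256' <<< "$response")
original_checksum=$(jq -r '.originalChecksums.sha256' <<< "$response")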
I'm trying to upload a file to my repo using the following command in JFrog CLI:
jfrog rt u ".\Builds\mybuild.apk" buildrepo-generic-local --url=https://buildrepo-generic-local.artifactory.mycompany.com/artifactory/buildrepo-generic-local/Gitlab/mybuild/client/Test --user=myuser --password=mypass --build-name BuildName --build-number=0000
but it appends the name of the repo to the URL, so it actually gets uploaded to:
https://buildrepo-generic-local.artifactory.mycompany.com/artifactory/buildrepo-generic-local/Gitlab/mybuild/client/Test/buildrepo-generic-local/mybuild.apk
and not where I want it, which is:
https://buildrepo-generic-local.artifactory.mycompany.com/artifactory/buildrepo-generic-local/Gitlab/mybuild/client/Test/mybuild.apk
I'm new, so it's probably me doing something wrong. Can anyone help here?
The second argument of the upload command is the destination path in Artifactory. In your case that is:
buildrepo-generic-local/Gitlab/mybuild/client/Test/mybuild.apk
The --url flag is the URL of your Artifactory instance, without a path:
--url=https://buildrepo-generic-local.artifactory.mycompany.com/artifactory/
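Putting the two together, the full command would look roughly like this (credentials and build flags copied from the question; untested):
jfrog rt u ".\Builds\mybuild.apk" "buildrepo-generic-local/Gitlab/mybuild/client/Test/mybuild.apk" --url=https://buildrepo-generic-local.artifactory.mycompany.com/artifactory/ --user=myuser --password=mypass --build-name=BuildName --build-number=0000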
For more details, see the JFrog CLI documentation.
I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified in destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and I am.
Question #1: is there any other known cause for getting this error? I've scoured the resource github page without finding anything. Any ideas why I may be getting the permissions error?
Without having an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so, I use fly hijack to get a shell in the corresponding container. I notice that docker is installed in the container, so the next step, I think, would be to do a docker import on the tarball for the image I'm trying to push and then perform a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought that the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))

jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. Found after much troubleshooting that the destination_password wasn't being picked up properly due to special characters and a lack of quotes. Fixed the issue by properly quoting the password in the YAML vars file included with the --load-vars flag.
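For anyone hitting the same thing, a hedged sketch of what that fix looks like in practice (the target name, file names, and password below are made up):
# credentials.yml passed to fly; quoting keeps YAML from mangling special characters, e.g.:
#   destination_username: ci-user
#   destination_password: "p@ss!w0rd#with:specials"
fly -t my-target set-pipeline -p update-image -c pipeline.yml -l credentials.yml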
Hello, I want to use automatic deployment on Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed to the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen security I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
And to store the Meteor deployment token in those variables instead of in a file.
For the deployment we use the following in the command:
METEOR_SESSION_FILE=deployment_token.json
And my question is: is there any way to use a variable (string) where the token is needed, something like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?
Some research, after having the same problem, brought me to this article, which solves the problem that you can't feed Meteor the JSON directly from an env var, in the following simple way:
Add the JSON file content as an env var and then echo it out into a file on deploy.
echo "$METEOR_TOKEN_FILE" > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json
Thanks to this article I figured it out.
Save the JSON settings as an env variable and then, in the deployment process:
echo "$DEPLOY_SESSION_FILE" > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username
I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However it only works when anonymous users are allowed to deploy into Artifactory. I realized that sbt is not really passing my credentials to Artifactory but, instead, logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration in order to prevent anonymous users from deploying new Artifacts, when I run "sbt publish" I get the following output:
[error] Unable to find credentials for [Artifactory Realm @ <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
try:
host=<Artifactory IP>
old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
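For reference, a complete ~/.ivy2/.credentials using the corrected host line would look something like this (placeholders kept from the question; a sketch, not tested against your server):
# Write the credentials file with the host given as hostname/IP only (no scheme, port, or path)
cat > ~/.ivy2/.credentials <<'EOF'
realm=Artifactory Realm
host=<Artifactory IP>
user=<my user name>
password=<my password>
EOF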
I had a different problem: I had the wrong realm set in my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
The error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm @ myhost].