Is there a way to retrieve an artifact with the maven timestamp as it was originally uploaded by maven?
From the Jenkins logs:
Uploading: http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-20160504.182015-2.tar.gz
Results from the Artifactory REST API:
$ curl -X GET 'http://artifactory.foo/artifactory/api/search/gavc?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&c=*&repos=libs-snapshot-local'
{
"results" : [ {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT-sources.jar"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.pom"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.tar.gz"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.war"
} ]
}
I'd like to get the same name as it was uploaded with, via wget or equivalent...
What I want to achieve:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query the REST API to get the latest artifact link that includes the timestamp in the name, with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link_with_timestamp
What I currently achieve, which does not satisfy my need:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query the REST API via a gavc search with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link
Conclusion: as stated in the accepted answer, the problem was in the Artifactory config itself. To achieve what I wanted, I needed the snapshots to be unique.
As long as your repository is configured to use unique snapshots (or to use client snapshot policy and you use Maven 3 and up), you can always use the Maven timestamp as a version. Replacing it with -SNAPSHOT is a "runtime" trick to make the resolution easier.
If your repository is configured to use non-unique snapshots, the files are actually stored with -SNAPSHOT instead of version and override previous snapshots (don't do that).
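With unique snapshots enabled, the timestamped name can be resolved without parsing directory listings. A minimal sketch, reusing the host, repository, and coordinates from the question, and assuming your Artifactory version exposes the latestVersion search endpoint (check your server's REST API documentation):

```shell
# Ask Artifactory for the latest unique snapshot version string,
# e.g. "1.16.0-20160504.182015-2"
latest=$(curl -s "http://artifactory.foo/artifactory/api/search/latestVersion?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&repos=libs-snapshot-local")

# Substitute it into the artifact name and download
wget "http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-${latest}.tar.gz"
```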
Related
I've tried everything I can think of to get this to work, but in every attempt no errors are returned (the logs say everything succeeded and one file was uploaded); yet in Artifactory, although the build info is published, nothing appears in the target repo.
I tried using PowerShell and the REST API to publish, but that just gave me a 502 gateway error. No matter what combination of values I used, I always got this unhelpful error.
I have been forced to revert to using the 'legacy patterns' because this is the only thing that works AT ALL.
The pattern I am using is:
%ProjectPublishDir%\**\*.* => path/to/my/APP/%Major-Version%.%Minor-Version%/%ArtifactNamePrefix%/%ArtifactFileName%
Where all these values are parameters defined in TeamCity. This works like a charm.
However, if I add this at the end of my powershell to create the artifact:
#Write-Host "##teamcity[publishArtifacts '%ProjectPublishDir%\**\*.* => %ArtifactsDir%\%ArtifactFileName%']"
Then I replace legacy patterns with file specs for my PowerShell step and use this file spec:
{
"files": [
{
"pattern": "%ArtifactsDir%\%ArtifactFileName%",
"target": "my_repo_name/path/to/my/APP/%Major-Version%.%Minor-Version%/%ArtifactNamePrefix%/",
"props": "type=zip;status=ready"
}
]
}
The log says the artifact was generated (which I verified on the build agent is true) and that it was sent to Artifactory successfully, but when I check in artifactory, the build info is there but the artifact is missing.
Every example I can find is too simple, and judging from the basic examples out there, what I have above should work. So what are the docs missing that I am not doing, or doing incorrectly?
I have a problem deploying a Node.js (Angular) application from a Bitbucket repository to Firebase using a Cloud Build trigger.
I performed the steps from this article: https://cloud.google.com/build/docs/deploying-builds/deploy-firebase
Necessary APIs are turned on,
Service account has all required permissions,
Firebase community builder is deployed and visible in Container Registry,
cloudbuild.json file is added to the repository,
Cloud Build Trigger is created and it points to one specific branch of my repository.
The problem is that after running the Cloud Build trigger I receive the following error: "Error: Not in a Firebase app directory (could not locate firebase.json)"
What could be the reason for this error? Is it possible to tell the trigger where in the repository the Firebase application and firebase.json file are located?
EDIT:
My cloudbuild.json file is quite simple:
{
"steps": [
{
"name": "gcr.io/PROJECT_NAME/firebase",
"args": [
"deploy",
"--project",
"PROJECT_NAME",
"--only",
"functions"
]
}
]
}
Logs in Cloud Build Trigger history:
starting build "build_id"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/link
* branch branch_id -> FETCH_HEAD
HEAD is now at d8bc2f1 Add cloudbuild.json file
BUILD
Pulling image: gcr.io/PROJECT/firebase
Using default tag: latest
latest: Pulling PROJECT/firebase
1e987daa2432: Already exists
a0edb687a3da: Already exists
6891892cc2ec: Already exists
684eb726ddc5: Already exists
b0af097f0da6: Already exists
154aee36a7da: Already exists
37e5835696f7: Pulling fs layer
62eb6e670f1d: Pulling fs layer
47e62615d9f9: Pulling fs layer
428cea824ccd: Pulling fs layer
765a2c722bf9: Pulling fs layer
b3f5d0a285e3: Pulling fs layer
428cea824ccd: Waiting
765a2c722bf9: Waiting
b3f5d0a285e3: Waiting
47e62615d9f9: Verifying Checksum
47e62615d9f9: Download complete
62eb6e670f1d: Verifying Checksum
62eb6e670f1d: Download complete
765a2c722bf9: Verifying Checksum
765a2c722bf9: Download complete
b3f5d0a285e3: Verifying Checksum
b3f5d0a285e3: Download complete
37e5835696f7: Verifying Checksum
37e5835696f7: Download complete
428cea824ccd: Verifying Checksum
428cea824ccd: Download complete
37e5835696f7: Pull complete
62eb6e670f1d: Pull complete
47e62615d9f9: Pull complete
428cea824ccd: Pull complete
765a2c722bf9: Pull complete
b3f5d0a285e3: Pull complete
Digest: sha256:4b6b7214d6344c8247130bf3f5443a2851e39aed3ececb32dfb2cc25a5c07e44
Status: Downloaded newer image for gcr.io/PROJECT/firebase:latest
gcr.io/PROJECT/firebase:latest
Error: Not in a Firebase app directory (could not locate firebase.json)
ERROR
ERROR: build step 0 "gcr.io/PROJECT/firebase" failed: step exited with non-zero status: 1
The firebase.json file is placed in the repository at the following path: /apps/firebase/firebase.json
I can't see any way to point to that path in the Cloud Build trigger config.
I'm pretty sure that you aren't in the correct directory when you run the command. To check this, you can add this step:
{
"steps": [
{
"name": "gcr.io/cloud-builders/gcloud",
"entrypoint": "ls",
"args": ["-la"]
}
]
}
If you aren't in the correct directory and you need to be in ./app/firebase to run your deploy, you can add this parameter to your step:
{
"steps": [
{
"name": "gcr.io/PROJECT_NAME/firebase",
"dir": "app/firebase",
"args": [
"deploy",
"--project",
"PROJECT_NAME",
"--only",
"functions"
]
}
]
}
Let me know if that works better.
Using the jfrog CLI (jfrog rt s) I can dump file information of my repo to stdout, but this information does not contain the stored checksum. I see a similar question
"Artifactory CLI - Jfrog - How to get binary Hash code (SHA1, SHA256) through jfrog CLI" but the answer is only about searching for a specific checksum. Not being very familiar with jfrog at all, can someone suggest a simple method (has to use jfrog please) for dumping the checksum info for all or a specific file in the repo?
Since version 1.36.0 of JFrog CLI, the search command also returns the SHA1 and MD5 of the files. For example:
$ jfrog rt s repo/path/file
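For illustration only, the search output is a JSON array in which each entry carries the checksums alongside the path. The exact fields can vary by CLI version, so treat this shape as indicative rather than authoritative (the values below are placeholders):

```json
[
  {
    "path": "repo/path/file",
    "type": "file",
    "size": 1198168,
    "sha1": "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
    "md5": "a012711241ba3a5bd4a04e833001d490"
  }
]
```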
The jfrog rt search command theoretically supports returning sha256 sums if you're using jfrog-cli version 1.36.0 or greater, Artifactory server 5.5 or greater, and a correctly configured database.
Under the hood, the jfrog rt search command utilizes AQL to generate a query which it sends to the server. The default query performs items.find().include(*), which returns all of the supported fields. I guess if the database isn't set up right then SHA-256 sums aren't supported (this seems to be the case at my workplace).
Fortunately, there's an alternative which works even on old versions of the jfrog-cli (I've tested this with 1.26.2). It utilizes the jfrog rt curl command to directly grab the metadata from the server. Note that jfrog rt curl doesn't support the standard --url, --access-token, or --apikey parameters, so you'll need to configure a connection to the server using jfrog rt c first (don't forget to use --interactive=false if you're automating this). Once you've done that, the magic incantation you're looking for is:
jfrog rt curl -XGET "api/storage/your_repo/your_file"
This will return a JSON blob like the following:
'{
"repo" : "your_repo",
"path" : "/your_path/your_file",
"created" : "2020-07-21T21:28:20.663Z",
"createdBy" : "token:your-token",
"lastModified" : "2020-07-21T21:28:27.277Z",
"modifiedBy" : "token:your-token",
"lastUpdated" : "2020-07-21T21:28:27.287Z",
"downloadUri" : "https://your_artifactory_url/artifactory/your_repo/your_path/your_file",
"mimeType" : "application/x-gzip",
"size" : "1198168",
"checksums" : {
"sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
"md5" : "a012711241ba3a5bd4a04e833001d490",
"sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
},
"originalChecksums" : {
"sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
"md5" : "a012711241ba3a5bd4a04e833001d490",
"sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
},
"uri" : "https://your_artifactory_url/artifactory/api/storage/your_repo/your_path/your_file"
}'
The originalChecksums are from when the artifact was first uploaded. If the artifact has been tampered with on the server then the regular checksums may be different. For this reason I'd recommend validating against the originalChecksums unless you're operating in an environment where the same artifacts are expected to be overwritten.
If you're looking for a quick and dirty way to extract the returned checksums from the JSON blob then try this ugly hack I threw together in bash (note that this won't work if you collapse the whitespace first):
#!/bin/bash
...
checksums=($(sed -n -E "s/^\\s+\\\"sha256\\\"\\s:\\s\\\"(.*)\\\"\$/\\1/p" <<< "$response"))
checksum="${checksums[0]}"
original_checksum="${checksums[1]}"
If you have the option, I'd recommend using a more robust json parser instead.
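As a concrete illustration of the "more robust parser" suggestion, here is a minimal Python sketch that pulls both SHA-256 values out of a response like the one above (the response here is abbreviated to the checksum fields; in practice it is the body returned by the jfrog rt curl call):

```python
import json

# Abbreviated example of the storage-API response shown above
response = """
{
  "checksums" : {
    "sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
    "md5" : "a012711241ba3a5bd4a04e833001d490",
    "sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
  },
  "originalChecksums" : {
    "sha1" : "5e288fe94da1fed0b4ce6695c7a984426e9f5a78",
    "md5" : "a012711241ba3a5bd4a04e833001d490",
    "sha256" : "d22e76b6cc0b4e2e29076d08f8209dec2b7d9c28e71a13d989175a1272ac3da7"
  }
}
"""

# A real JSON parser is insensitive to whitespace, unlike the sed hack above
data = json.loads(response)
checksum = data["checksums"]["sha256"]
original_checksum = data["originalChecksums"]["sha256"]
```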
Related: Artifactory aql: find builds of job with given property
As described in the blog, I want to query Artifactory with this AQL, using the JFrog CLI:
items.find(
{
"repo":"snapshot-local",
"artifact.module.build.name":"foo",
"artifact.item.#vcs.Revision":"aabbccddee123456"
}
).include("artifact.module.build.number")
My understanding of the file specs is that it should be along those lines:
{
"files":
[
{
"aql":{
"items.find":{
"repo":"snapshot-local",
"artifact.module.build.name":"foo",
"artifact.item.#vcs.Revision":"aabbccddee123456"
}
}
}
]
}
However, I am not sure how to request the artifact.module.build.number property.
How can I get the same behavior as with cURL, using .include("artifact.module.build.number") in the request?
Today the CLI's AQL support does not permit modifying the schema of the object returned.
This means you cannot modify the "include" and add fields from a different domain.
I would therefore (in your case) use curl (as you suggested).
Something like:
items.find({
"repo":"snapshot-local",
"artifact.module.build.name":"foo",
"artifact.item.#vcs.Revision":"aabbccddee123456"
}).include("artifact.module.build.name","artifact.module.build.number")
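For reference, an AQL query like the one above is normally sent to Artifactory's api/search/aql endpoint as a plain-text POST body. A sketch with placeholder host and credentials, keeping the property key exactly as written in the question:

```shell
curl -u user:password \
     -X POST "http://artifactory.example.com/artifactory/api/search/aql" \
     -H "Content-Type: text/plain" \
     -d 'items.find({
           "repo":"snapshot-local",
           "artifact.module.build.name":"foo",
           "artifact.item.#vcs.Revision":"aabbccddee123456"
         }).include("artifact.module.build.name","artifact.module.build.number")'
```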
I am checking if I can use kairosdb for my project. I was checking out the REST APIs, and I have a use case where I need to save both my device state and status (state tells if the device is on or off; status tells if the device is occupied or empty).
kairosdb version: 1.1.1
I came across this link https://kairosdb.github.io/docs/build/html/restapi/AddDataPoints.html
but when I try to post data from a REST client I get a 400 Bad Request error:
{"errors":["Unregistered data point type 'complex-number'"]}
The request I am posting is:
{
"name": "device_data",
"type": "complex-number",
"datapoints": [
[
1470897496,
{
"state": 0,
"status": "empty"
}
]
],
"tags": {
"device_id": "abc123"
}
}
I tried doing the same in Java as specified in https://kairosdb.github.io/docs/build/html/kairosdevelopment/CustomData.html and I get the same error.
Please let me know how to use complex numbers or custom data types from REST.
Recently, I figured out how to use this, following the example from the official KairosDB documentation.
create two files called ComplexDataPoint.java and ComplexDataPointFactory.java, then paste in the code provided by the tutorial: https://kairosdb.github.io/docs/build/html/kairosdevelopment/CustomData.html#example-for-creating-custom-types
download the KairosDB source, then extract the .zip file.
paste the 2 files in /KAIROSDB_DOWNLOADED_SOURCE/src/main/java/org/kairosdb/core/datapoints/
edit CoreModule.java at /KAIROSDB_DOWNLOADED_SOURCE/src/main/java/org/kairosdb/core/, adding the following line to the method protected void configure():
bind(ComplexDataPointFactory.class).in(Singleton.class);
open a terminal, cd to KAIROSDB_DOWNLOADED_SOURCE/, then follow the instructions in the file how_to_build.txt
when complete, this creates a folder called build; the compiled kairosdb jar file is located in KAIROSDB_DOWNLOADED_SOURCE/build/jar
in your kairosdb installation folder, back up the kairosdb-X.X.X.jar file in YOUR_KAIROSDB_INSTALLATION/lib:
mv kairosdb-X.X.X.jar kairosdb-X.X.X.jar.backup
mv the newly compiled jar file to YOUR_KAIROSDB_INSTALLATION/lib
modify the configuration file by adding the following line:
kairosdb.datapoints.factory.complex=org.kairosdb.core.datapoints.ComplexDataPointFactory
restart your kairosdb
For your query, since the registered name is kairosdb.datapoints.factory.complex, replace complex-number with complex in your query string.
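Putting this together with the original request, the type field would then be "complex" rather than "complex-number". Note this is only a sketch: the datapoint value must match whatever your ComplexDataPointFactory deserializes, and the tutorial's complex type stores a real/imaginary pair, so the state/status object from the question would need a correspondingly adapted factory:

```json
{
  "name": "device_data",
  "type": "complex",
  "datapoints": [
    [
      1470897496,
      {
        "state": 0,
        "status": "empty"
      }
    ]
  ],
  "tags": {
    "device_id": "abc123"
  }
}
```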
Hope this helps! I am now having a problem plotting the complex data; I am still figuring that out...