Is there a way to download an artifact using AQL?
I have a query sent with:
curl -u user:pass \
-X POST https://artifactoryURL/artifactory/api/search/aql \
-H 'content-type: text/plain' \
-T query.aql
And my query.aql:
items.find(
{
"repo":{"$eq":"repo"},
"$and": [
{
"path": { "$match": "path/*"},
"name": { "$match": "*.rpm"}
}
]
}
)
.sort({ "$desc": ["modified"] })
.limit(1)
Now that I know it returns what I want, is there a way to change the request from api/search/aql to something like api/download/aql and get those items?
EDIT:
I had also tried doing this with the JFrog CLI, but it doesn't fully support AQL (sort and limit didn't work).
This is the command I tried:
jfrog rt s --spec=query-aql.json
And the spec that failed to sort and limit results:
{
"files": [
{
"aql": {
"items.find": {
"repo": "repo",
"$and": [
{
"path": { "$match": "path/*"},
"name": { "$match": "*.rpm"}
}
]
}
},
"sort": {
"$asc": ["modified"]
},
"limit": 1
}
]
}
EDIT 2:
Added a jfrog-cli-go issue: https://github.com/JFrogDev/jfrog-cli-go/issues/56
An easy way to use your AQL script to download files from Artifactory is to use the JFrog CLI, as mentioned here: https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-Download,CopyandMoveCommandsSpecSchema
The CLI can be downloaded as an executable for Linux, Mac, or Windows and should fit your needs.
With the curl command, the only thing you can do is parse the result of your AQL query and perform a download request for each file.
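For the curl-only route, here is a minimal Python sketch of that parse-and-download step. The base URL is the placeholder from the question, and the response shape (a "results" list with repo/path/name fields) mirrors the api/search/aql output; actually fetching each URL is left to curl or requests:

```python
# Placeholder from the question -- replace with your Artifactory instance.
BASE_URL = "https://artifactoryURL/artifactory"

def download_urls(aql_response):
    """Turn an AQL search response into one download URL per item."""
    urls = []
    for item in aql_response["results"]:
        # Each AQL result carries repo, path, and name fields.
        urls.append("{}/{}/{}/{}".format(
            BASE_URL, item["repo"], item["path"], item["name"]))
    return urls

# Sample shaped like the api/search/aql response.
sample = {"results": [
    {"repo": "repo", "path": "path/sub", "name": "pkg-1.0.rpm"},
]}
for url in download_urls(sample):
    print(url)
```

Each printed URL can then be fetched with `curl -O -u user:pass <url>`.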
I was just looking for a very similar thing: use a download spec file to download the latest artifact from a given repo and path. I don't care whether I use AQL or not; I just want it in the download-spec.json file. If you go to the link above, look at Example 5.
Modified for your example:
{
"files": [
{
"pattern": "path/*.rpm",
"target": "my/download/path/",
"sortBy": ["created"],
"sortOrder": "desc",
"limit": 1
}
]
}
The JFrog CLI supports --limit, --sort-order, and --sort-by arguments.
The following works for me:
jfrog rt search --spec=/tmp/jfrogfilespec.json --sort-by created --sort-order=desc --limit 1
The contents of the json spec file are:
{ "files": [ { "aql": { "items.find": { "repo":{"$eq":"my-release-repo"}, "name":{"$match":"my-artifact-prefix*"} } } } ] }
This generates the following query (according to debug mode):
items.find( { "repo":{"$eq":"my-release-repo"}, "name":{"$match":"my-artifact-prefix*"} } ).include("name","repo","path","actual_md5","actual_sha1","size","type","created").sort({"$desc":["created"]}).limit(1)
What is frustrating is that I cannot seem to find a way to use "jfrog rt search" with a filespec that allows me to influence the "include" modifier portion of the search.
I'm working with the JFrog CLI and need to clean up artifacts from a folder under a repository, keeping only the 5 latest artifacts (latest by created date).
I have already written some code that removes artifacts created 7 or more days ago, but I need to keep the 5 latest artifacts. Does anyone have any ideas?
{
"files": [
{
"aql": {
"items.find": {
"repo": "maven-repo",
"path": {"$match":"com/mqjbnd64/7.1"},
"name": {"$match":"*"},
"$or": [
{
"$and": [
{
"created": { "$before":"7d" }
}
]
}
]
}
}
}
]
}
You can create an initial query sorting by created date and limiting the number of records returned to 5.
Then you can execute another query to get all artifacts in this path, and delete the ones not returned by the previous query.
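The keep-the-newest-five logic can be sketched in Python. This is a hedged sketch: the item shape mirrors AQL results, and actually running the two queries and issuing the deletes against Artifactory is left out:

```python
def artifacts_to_delete(all_items, keep=5):
    """Sort newest-first by created date and return everything past
    the first `keep` items, i.e. the artifacts to delete."""
    newest_first = sorted(all_items, key=lambda item: item["created"],
                          reverse=True)
    return newest_first[keep:]

# Items shaped like AQL results; ISO-8601 dates sort correctly as strings.
items = [{"name": "lib-%d.jar" % n,
          "created": "2021-01-%02dT00:00:00Z" % n} for n in range(1, 9)]
# With 8 items and keep=5, the 3 oldest come back for deletion.
print([i["name"] for i in artifacts_to_delete(items)])
```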
I'm trying to get the repo names under a Project using the Bitbucket API. The current link on the documentation says to use
curl -u username:pwd http://${bitbucket-url}/rest/api/1.0/projects/${projectkey}/repos/
Response:
{
"size": 1,
"limit": 25,
"isLastPage": true,
"values": [
{
"slug": "my-repo",
"id": 1,
"name": "My repo",
"scmId": "git",
"state": "AVAILABLE",
"statusMessage": "Available",
"forkable": true,
"project": {
"key": "PRJ",
"id": 1,
"name": "My Cool Project",
"description": "The description for my cool project.",
"public": true,
"type": "NORMAL",
"links": {
"self": [
{
"href": "http://link/to/project"
}
]
}
},
"public": true,
"links": {
"clone": [
{
"href": "ssh://git#/PRJ/my-repo.git",
"name": "ssh"
},
{
"href": "https:///scm/PRJ/my-repo.git",
"name": "http"
}
],
"self": [
{
"href": "http://link/to/repository"
}
]
}
}
],
"start": 0
}
But I only need the repo names from the response.
import json
import os
import subprocess

# These must be set before running: the Bitbucket repos REST URL for the
# project, plus credentials.
# bb_url = "http://<bitbucket-url>/rest/api/1.0/projects/<projectkey>/repos"
# bb_user, bb_pwd = "username", "password"

base_dir = os.getcwd()
DETACHED_PROCESS = 0x00000008  # Windows-only process creation flag

page = 0
while True:
    cmd = ('curl --url "' + bb_url + '?pagelen=100&page=' + str(page) +
           '" --user ' + bb_user + ':' + bb_pwd +
           ' --request GET --header "Accept: application/json"')
    output = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE,
                              creationflags=DETACHED_PROCESS).communicate()
    datastore = json.loads(output[0].decode("utf-8"))
    values = datastore.get("values", [])
    if len(values) == 0:
        break
    # Append each repository slug to repositoryList.txt.
    with open(os.path.join(base_dir, "repositoryList.txt"), "a+") as f:
        for repo in values:
            f.write(repo["slug"] + "\n")
    page = page + 1
This script will get the list of all the repositories in your project and write them to the file repositoryList.txt.
With a bash command:
repoNamesJson=$(curl -D- -X GET -H "Authorization: Basic <encoded user password here>" -H "Content-Type: application/json" https://yourstash/rest/api/1.0/projects/ad/repos?limit=100000)
repoNames=$(echo $repoNamesJson | awk -v RS=',' '/{"slug":/ {print}' | sed -e 's/{"slug":/''/g' | sed -e 's/"/''/g')
echo $repoNames
With the stashy Python library:
import stashy

bitbucket = stashy.connect("host", "username", "password")
projects = bitbucket.projects.list()
for project in projects:
    for repo in bitbucket.projects[project["key"]].repos.list():
        print(repo["name"])
        print(repo["project"]["key"])
You can use BitBucket API partial responses in order to limit the fields returned by the API.
Taking excerpts from the doc page:
[...] use the fields query parameter.
The fields parameter supports 3 modes of operation:
Removal of select fields (e.g. -links)
Pulling in additional fields not normally returned by an endpoint, while still getting all the default fields (e.g. +reviewers)
Omitting all fields, except those specified (e.g. owner.display_name)
The fields parameter can contain a list of multiple comma-separated field names (e.g. fields=owner.display_name,uuid,links.self.href). The parameter itself is not repeated.
So in your case it would be something like:
curl -u username:pwd
http://${bitbucket-url}/rest/api/1.0/projects/${projectkey}/repos?fields=values.slug
Though I must say the JSON output is not flat; it retains its original structure:
{
"values": [
{
"slug": "your repo slug #1"
},
...
So, if you actually want only a list with each repo slug on its own line, there's still some legwork to do.
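That legwork is small in Python; this sketch assumes the response shape shown above (a "values" list whose entries each carry a "slug"):

```python
def repo_slugs(response):
    """Flatten the partial response down to a list of repo slugs."""
    return [v["slug"] for v in response.get("values", [])]

# Sample shaped like the ?fields=values.slug response above.
sample = {"values": [{"slug": "repo-one"}, {"slug": "repo-two"}]}
print("\n".join(repo_slugs(sample)))  # one slug per line
```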
In docs for Google Analytics API Response body contains queryCost and resourceQuotasRemaining. But, when I do
curl -i -H 'Content-Type: application/json' -X POST 'https://analyticsreporting.googleapis.com/v4/reports:batchGet?access_token=mytoken' -d '{"reportRequests":[{"viewId":"ga:myviewId","dateRanges":[{"startDate":"2019-12-04","endDate":"2019-12-04"}],"dimensions":[{"name":"ga:campaign"},{"name":"ga:adContent"},{"name":"ga:keyword"},{"name":"ga:currencyCode"}],"dimensionFilterClauses":[{"filters":[{"dimensionName":"ga:sourceMedium","operator":"EXACT","expressions":["Yandex.Market / cpc"]}]},{"filters":[{"dimensionName":"ga:campaign","operator":"PARTIAL","expressions":["msk"]}]}],"metrics":[{"expression":"ga:goal12Completions"}],"metricFilterClauses":[{"filters":[{"metricName":"ga:goal12Completions","operator":"GREATER_THAN","comparisonValue":"0"}]}],"hideTotals":true,"hideValueRanges":true}]}'
the response body does not contain queryCost or resourceQuotasRemaining:
{"reports":[{"columnHeader":{"dimensions":[...],"metricHeader":{"metricHeaderEntries":[...]}},"data":{"rows":[{"dimensions":[...],"metrics":[...]}],"rowCount":1,"isDataGolden":true}}]}
If I add "useResourceQuotas": true to the JSON in the POST, I get the error "The request is not eligible for resource quotas. Check if account is premium and whitelisted." (code 400).
How can I get information about query cost, quotas remaining, and other limit stats using the API? Or is that possible only for premium accounts?
Pete,
The resource-based quota feature is only available to Analytics 360 users.
Thanks,
Ilya
Your request doesn't include useResourceQuotas = true; it defaults to false.
{
"reportRequests": [
{
"viewId": "ga:xxxx",
"dateRanges": [
{
"startDate": "2019-12-04",
"endDate": "2019-12-04"
}
],
"metrics": [
{
"expression": "ga:users"
}
],
"hideTotals": true,
"hideValueRanges": true
}
],
"useResourceQuotas": true
}
Result
{
"reports": [
{
"columnHeader": {
"metricHeader": {
"metricHeaderEntries": [
{
"name": "ga:users",
"type": "INTEGER"
}
]
}
},
"data": {
"rows": [
{
"metrics": [
{
"values": [
"1298"
]
}
]
}
],
"rowCount": 1,
"isDataGolden": true
}
}
],
"resourceQuotasRemaining": {
"dailyQuotaTokensRemaining": 100000,
"hourlyQuotaTokensRemaining": 25000
}
}
useResourceQuotas doesn't work with every request. I would suggest you go through yours, adding different things, to see what the exact problem is. Start by removing all those filters. Once you figure out exactly which one is giving you the error with useResourceQuotas, let me know and I will ping the team about having the documentation updated. It doesn't currently say that there should be any issue with using it. I can't test your request; I don't have any accounts with goals set up like that to test with.
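One way to do that bisection systematically is to generate variants of the report request with one optional clause removed at a time, then re-send each until the 400 disappears. A sketch (the clause list is an assumption based on the request above; sending each variant to the Reporting API is left to the caller):

```python
# Optional clauses that could be stripped one at a time; assumed from
# the report request shown in the question.
OPTIONAL_CLAUSES = ["dimensionFilterClauses", "metricFilterClauses",
                    "dimensions", "hideTotals", "hideValueRanges"]

def narrowed_requests(report_request):
    """Yield (removed_clause, trimmed_request) pairs, each variant
    missing exactly one optional clause present in the original."""
    for clause in OPTIONAL_CLAUSES:
        if clause in report_request:
            trimmed = {k: v for k, v in report_request.items()
                       if k != clause}
            yield clause, trimmed

request = {"viewId": "ga:myviewId",
           "dimensions": [{"name": "ga:campaign"}],
           "metricFilterClauses": [{"filters": []}]}
for removed, variant in narrowed_requests(request):
    print("without %s -> keys %s" % (removed, sorted(variant)))
```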
I'm trying to query all artifacts that are older than 6 months old. I'm able to delete them if I hard code a date into my query.
{
"files": [
{
"aql": {
"items.find": {
"repo": "foobar",
"$or": [
{
"$and": [
{
"modified": { "$lt": "2016-10-18T21:26:52.000Z"}
}
]
}
]
}
}
}
]
}
jfrog rt del --spec /tmp/foo.spec --dry-run
How can I do a query with a relative date (e.g. today minus 6 months)?
I'm going to put this into a cron job, and I'd rather not munge a spec file every time the cron job runs.
AQL queries support relative time operators.
In this case, modify the query:
"modified": { "$lt": "2016-10-18T21:26:52.000Z"}
To:
"modified": { "$before": "6mo"}
See the full documentation at AQL Relative Time Operators.
I am trying to import a cURL command into Postman, but it says I have a missing argument. The command works when I run it at the command line. Here is the cURL command:
curl -XGET 'http://scpt-wc-ap.sys.bombast.net:9200/_all/_search?pretty' -d '{
"query": {
"filtered": {
"query": {
"bool": {
"should": [
{
"query_string": {
"query": "source:smart_connect"
}
}
]
}
},
"filter": {
"bool": {
"must": [
{
"range": {
"#timestamp": {
"from": 1439899750653,
"to": 1439903350653
}
}
}
]
}
}
}
},
"highlight": {
"fields": {},
"fragment_size": 2147483647,
"pre_tags": [
"#start-highlight#"
],
"post_tags": [
"#end-highlight#"
]
},
"size": 500,
"sort": [
{
"_score": {
"order": "desc",
"ignore_unmapped": true
}
}
]
}'
Any idea what is wrong with the cURL command? Like I said, it works when I run it at the command line.
Perhaps you can just help me translate the cURL command into an HTTP request. I assume it's an HTTP GET, but I don't know what the -d flag is doing besides passing data in some form.
Your curl command needs a space between -X and GET.
See curl GET vs curl -X GET for more information.
Edit: It looks like this is just a Postman thing. Weird; maybe a bug?
I was able to successfully import your request into postman using this:
curl -X GET 'http://scpt-wc-ap.sys.bombast.net:9200/_all/_search?pretty' -d '{"query":{"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"source:smart_connect"}}]}},"filter":{"bool":{"must":[{"range":{"#timestamp":{"from":1439899750653,"to":1439903350653}}}]}}}},"highlight":{"fields":{},"fragment_size":2147483647,"pre_tags":["#start-highlight#"],"post_tags":["#end-highlight#"]},"size":500,"sort":[{"_score":{"order":"desc","ignore_unmapped":true}}]}'
Copying the curl request from the web info window in Safari resulted in a curl request with -XGET (without a space).
Adding the space between -X and GET made importing into Postman possible.