How do I use the Artifactory REST API to search for artifacts by name since a specified date?

I tried making a GET request to this url:
https://artifactory.xxxxxx.com/artifactory/api/search/dates?dateFields=created&from=1675050197406&name=yyyyy-*.tgz/
but I ran into this error:
{
  "errors": [
    {
      "status": 500,
      "message": "Item not found for id '2649778966'"
    }
  ]
}
Thanks in advance.

You can use AQL (Artifactory Query Language) to search for artifacts in Artifactory by name and last-modified date. Here is an example AQL query that finds artifacts whose name starts with "my-artifact" and that were modified after Jan 1, 2022:
items.find(
  {
    "name": {"$match": "my-artifact*"},
    "$and": [
      {"modified": {"$gt": "2022-01-01T00:00:00.000Z"}}
    ]
  }
)
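To actually run such a query, POST it as plain text to the documented api/search/aql endpoint. A minimal Python sketch (the base URL and credentials below are placeholders, not from the question):
# Minimal sketch: run the AQL query above against the Artifactory REST API.
import requests

aql = '''items.find({
    "name": {"$match": "my-artifact*"},
    "modified": {"$gt": "2022-01-01T00:00:00.000Z"}
})'''

resp = requests.post(
    "https://artifactory.example.com/artifactory/api/search/aql",  # placeholder base URL
    data=aql,
    headers={"Content-Type": "text/plain"},  # AQL body must be sent as plain text
    auth=("user", "password"),               # or an API key header, depending on your setup
)
resp.raise_for_status()
for item in resp.json()["results"]:
    print(item["repo"], item["path"], item["name"])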

Related

Artifactory: how can I obtain a complete dump of all fields for an artifact?

I'm interested in the value of one field (Module ID) but there seems to be no way of obtaining this specifically. A complete dump of all field values would also suffice but I haven't succeeded in finding a way to do that either. I've looked at and tried the searches available within the documentation here: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-SEARCHES
If it helps, I'm trying to query an on-premise installation of Artifactory.
More fields can be added to the AQL using the "include" element.
For example, to list all artifacts under the "libs-release-local" repository, including their module names, run the following query:
items.find(
  {
    "repo": {"$eq": "libs-release-local"}
  }
).include("artifact.module")
Response example:
{
  "results": [
    {
      "repo": "libs-release-local",
      "path": "org/jfrog/test/multi2/2.17.0",
      "name": "multi2-2.17.0.jar",
      "type": "file",
      "size": 1022,
      "created": "2021-09-11T13:51:33.878Z",
      "created_by": "deployer",
      "modified": "2021-09-11T13:51:33.631Z",
      "modified_by": "deployer",
      "updated": "2021-09-11T13:51:33.881Z",
      "artifacts": [
        {
          "modules": [
            {
              "module.name": "org.jfrog.test:multi2:2.17.0"
            }
          ]
        }
      ]
    }
  ]
}
You can find all of the required information under AQL documentation.
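For completeness, here is a small hedged sketch (in Python, with field names taken from the response example above) of pulling the module name out of such a response:
# Sketch: extract module names from an AQL response that used .include("artifact.module").
response = {  # stands in for the JSON returned by the api/search/aql endpoint
    "results": [
        {
            "repo": "libs-release-local",
            "name": "multi2-2.17.0.jar",
            "artifacts": [{"modules": [{"module.name": "org.jfrog.test:multi2:2.17.0"}]}],
        }
    ]
}

for item in response["results"]:
    for artifact in item.get("artifacts", []):
        for module in artifact.get("modules", []):
            print(item["name"], "->", module["module.name"])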

AWS Step Functions: Filter an array using JsonPath

I need to filter an array in my AWS Step Functions state. This seems like something I should easily be able to achieve with JsonPath but I am struggling for some reason.
The state I want to process looks like this:
{
  "items": [
    {
      "id": "A"
    },
    {
      "id": "B"
    },
    {
      "id": "C"
    }
  ]
}
I want to filter this array by removing entries for which id is not in a specified whitelist.
To do this, I define a Pass state in the following way:
"ApplyFilter": {
"Type": "Pass",
"ResultPath": "$.items",
"InputPath": "$.items.[?(#.id in ['A'])]",
"Next": "MapDeployments"
}
This makes use of the JsonPath in operator.
Unfortunately when I execute the state machine I receive an error:
{
  "error": "States.Runtime",
  "cause": "An error occurred while executing the state 'ApplyFilter' (entered at the event id #8). Invalid path '$.items.[?(@.id in ['A'])]' : com.jayway.jsonpath.InvalidPathException: com.jayway.jsonpath.InvalidPathException: Space not allowed in path"
}
However, I don't understand what is incorrect about the syntax. When I test the expression outside of Step Functions, everything works correctly.
What is wrong with what I have done? Is there another way of achieving this sort of filter using JsonPath?
According to the official AWS docs for Step Functions, the following are not supported in paths: @ .. , : ? *
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-paths.html
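Since filter expressions like ?() and the in operator are rejected in Step Functions paths, a common workaround (not part of the answer above) is to do the filtering in a Lambda Task state instead. A minimal Python handler sketch, assuming the whitelist is passed in as part of the state input:
# Hypothetical Lambda handler that filters items by a whitelist of ids.
# Assumes the state input looks like {"items": [{"id": "A"}, ...], "allowed_ids": ["A"]}.
def lambda_handler(event, context):
    allowed = set(event.get("allowed_ids", []))
    filtered = [item for item in event.get("items", []) if item.get("id") in allowed]
    # Return the filtered list so it can be merged back into the state via ResultPath.
    return {"items": filtered}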

Retrieving previously calculated route [HERE Maps] using RouteeId fails with RouteNotReconstructed error

I'm calculating routes based on user input, then storing the routeId and any additional information I need. The shape of the route is something I only need occasionally, when the user wants to see a preview of the route again.
Since I don't want to store all the points of the shape, I tried using the getroute endpoint (https://developer.here.com/documentation/routing/topics/resource-get-route.html#resource-get-route) but I get this response:
{
  "_type": "ns2:RoutingServiceErrorType",
  "type": "ApplicationError",
  "subtype": "RouteNotReconstructed",
  "details": "Error is NGEO_ERROR_ROUTE_DESERIALIZATION",
  "additionalData": [
    {
      "key": "error_code",
      "value": "NGEO_ERROR_ROUTE_DESERIALIZATION"
    }
  ],
  "metaInfo": {
    "timestamp": "2018-08-01T15:01:56Z",
    "mapVersion": "8.30.86.150",
    "moduleVersion": "7.2.201830-34436",
    "interfaceVersion": "2.6.34",
    "availableMapVersion": [
      "8.30.86.150"
    ]
  }
}
So the question is: why do I get this error? Following the API documentation (https://developer.here.com/documentation/routing/topics/resource-type-error-route-not-reconstructed.html) I can rule out a wrong routeId (it works for routes saved e.g. today, but not for older ones).
The route was calculated using the same API version (7.2).
Is the routeId only stored for a certain amount of time?
If so, for how long?
RouteID changes with map version.
https://developer.here.com/documentation/routing/topics/request-route-information.html
You'd need to recalculate routes periodically to get up-to-date RouteIDs.
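No code is given in this answer, but as a hedged sketch only: recalculating the route against the legacy 7.2 calculateroute endpoint and requesting the routeId attribute might look like the following in Python (the app_id/app_code credentials, the coordinates, and the routeAttributes parameter are assumptions to verify against the Routing API documentation):
# Hedged sketch: re-calculate a route with the legacy HERE Routing API 7.2
# and request a fresh routeId (credentials and coordinates are placeholders).
import requests

params = {
    "app_id": "YOUR_APP_ID",        # assumption: legacy app_id/app_code authentication
    "app_code": "YOUR_APP_CODE",
    "waypoint0": "geo!52.5160,13.3779",
    "waypoint1": "geo!52.5206,13.3862",
    "mode": "fastest;car",
    "routeAttributes": "routeId",   # ask the service to include routeId in the response
}
resp = requests.get("https://route.api.here.com/routing/7.2/calculateroute.json", params=params)
resp.raise_for_status()
route_id = resp.json()["response"]["route"][0]["routeId"]
print(route_id)  # store this new routeId alongside your own data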

Artifactory delete all artifacts older than 6 months

How would you delete all artifacts that match a pattern (e.g. older than 6 months) from Artifactory?
Using either curl or the Go library.
The JFrog CLI takes a 'spec file' to search for artifacts; see the JFrog CLI documentation for information on spec files.
Create an AQL search query to find just the artifacts you want. For example, if your AQL search were:
/tmp/foo.query
items.find(
  {
    "repo": "foobar",
    "modified": { "$lt": "2016-10-18T21:26:52.000Z" }
  }
)
And you could find the artifacts like so:
curl -X POST -u admin:<api_key> https://artifactory.example.com/artifactory/api/search/aql -T foo.query
Then the spec file would be
/tmp/foo.spec
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "foobar",
          "$or": [
            {
              "$and": [
                {
                  "modified": { "$lt": "2016-10-18T21:26:52.000Z" }
                }
              ]
            }
          ]
        }
      }
    }
  ]
}
And you could delete the matching artifacts using the JFrog CLI (written in Go) like so:
jfrog rt del --spec /tmp/foo.spec --dry-run
Instead of an absolute modified date, you can also use a relative date:
"modified": { "$before":"6mo" }
If you get a 405 Method Not Allowed error, verify that your API key or password is correct, and try using PUT instead of POST.
Install the JFrog CLI.
Configure the JFrog CLI to make API calls:
jfrog rt c Artifactory --url=https://<url>/artifactory --apikey=<api key>  # an API key can be generated from your user profile
Create a spec file called artifactory.spec. You can change the find query as required. The query below will find artifacts with:
repo - docker-staging
path - * (any path)
download status - null (never downloaded)
created - before 6 months ago
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": {"$eq": "docker-staging"},
          "path": {"$match": "*"},
          "name": {"$match": "*"},
          "stat.downloads": {"$eq": null},
          "$or": [
            {
              "$and": [
                {
                  "created": { "$before": "6mo" }
                }
              ]
            }
          ]
        }
      }
    }
  ]
}
You can search for the packages available to delete with:
jfrog rt s --spec artifactory.spec
Run the delete command
jfrog rt del --spec artifactory.spec
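If you prefer to script this directly against the REST API rather than the JFrog CLI (the question also mentions curl), here is a hedged Python sketch; the api/search/aql endpoint and per-artifact DELETE requests are documented Artifactory REST API calls, while the base URL, repository name, and API key are placeholders:
# Hedged sketch: find artifacts created more than 6 months ago via AQL, then delete each one.
import requests

BASE = "https://artifactory.example.com/artifactory"   # placeholder base URL
HEADERS = {"X-JFrog-Art-Api": "<api key>", "Content-Type": "text/plain"}

aql = 'items.find({"repo": "docker-staging", "created": {"$before": "6mo"}})'
results = requests.post(f"{BASE}/api/search/aql", data=aql, headers=HEADERS).json()["results"]

for item in results:
    # Each result carries repo/path/name; the artifact URL is <base>/<repo>/<path>/<name>.
    url = f"{BASE}/{item['repo']}/{item['path']}/{item['name']}"
    print("would delete", url)
    # requests.delete(url, headers=HEADERS)  # uncomment to actually delete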
We've built a tool for Artifactory artifact management (cleanup): https://github.com/devopshq/artifactory-cleanup
For this case the configuration would look like:
# artifactory-cleanup.yaml
artifactory-cleanup:
  server: https://repo.example.com/artifactory
  # $VAR is auto populated from environment variables
  user: $ARTIFACTORY_USERNAME
  password: $ARTIFACTORY_PASSWORD

  policies:
    - name: Remove all files from repo-name-here older than 7 days
      rules:
        - rule: Repo
          name: "reponame"
        - rule: DeleteOlderThan
          days: 7
Then run it to see what it will remove, and when you are ready, add the --destroy flag:
artifactory-cleanup

Google Cloud Datastore runQuery returning 412 "no matching index found"

** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API, from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not officially appear to be a way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore outside of App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
  "query": {
    "kinds": [
      {
        "name": "CodeProse.Pogo.Tests.TestPerson"
      }
    ],
    "filter": {
      "compositeFilter": {
        "operator": "and",
        "filters": [
          {
            "propertyFilter": {
              "operator": "equal",
              "property": {
                "name": "DepartmentCode"
              },
              "value": {
                "integerValue": "123"
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "greaterThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 50
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "lessThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 100
              }
            }
          }
        ]
      }
    }
  }
}
with the following response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "FAILED_PRECONDITION",
        "message": "no matching index found.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "no matching index found."
  }
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the xml definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
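As a hedged illustration only (the kind and property names are taken from the query in the question, and the format follows the index.yaml example shown further down), the corresponding index definition might look like:
indexes:
- kind: CodeProse.Pogo.Tests.TestPerson
  properties:
  - name: DepartmentCode
  - name: HourlyRate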
For gcloud-node I fixed it with these three links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and, most importantly, https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml, which explains how to write your index.yaml file.
As explained in the last link, an index allows complex queries to run faster by storing the result set of the query. When you get "no matching index found", it means you tried to run a complex query involving order or filter clauses for which no index exists. To make your query work, you need to define the indexes that represent the query in a config file and create them in the Google Datastore indexes. Here is how to fix it:
create an index.yaml file in a folder named for example indexes in your app directory by following the directives for the python conf file: https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml or get inspiration from the gcloud-node tests in https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
see https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes
wait for the indexes to be built; in your developer console under Cloud Datastore/Indexes, the interface should display "Serving" once the index is built
once it is serving, your query should work
For example for this query:
var q = ds.createQuery('project')
  .filter('tags =', category)
  .order('-date');
index.yaml looks like:
indexes:
- kind: project
  ancestor: no
  properties:
  - name: tags
  - name: date
    direction: desc
Try not to order the result. After removing orderby(), it worked for me.
