Force Artifactory to use numerical comparison when searching? - artifactory

I am trying to find the latest (or earliest, depending on the comparison operator) version of an RPM package (the RPM bit is important). I am using an AQL query similar to this one:
items.find(
    { "$and" : [
        { "@rpm.metadata.name": { "$eq": "awesome_package" } },
        { "@rpm.metadata.version": { "$gte": "19.300.0.58" } }
    ]}
)
.include("@rpm.metadata.version")
.sort({ "$asc": [ "name" ] })
As already answered in the Artifactory Knowledge Base, it's impossible to sort on properties, so instead of simply sorting on @rpm.metadata.version and taking the top result with .limit(1), I must use a property condition in the find clause.
It appears though that Artifactory's built-in comparison is purely lexicographic, so for the query above I get the following result:
{
  "results" : [ {
    "repo" : "yum-private-local",
    "path" : "some/path",
    "name" : "awesome_package-19.300.0.9-1.noarch.rpm",
    "properties" : [ {
      "key" : "rpm.metadata.version",
      "value" : "19.300.0.9"
    } ]
  },{
    "repo" : "yum-private-local",
    "path" : "some/path",
    "name" : "awesome_package-19.300.0.58-0.noarch.rpm",
    "properties" : [ {
      "key" : "rpm.metadata.version",
      "value" : "19.300.0.58"
    } ]
  },{
    "repo" : "yum-private-local",
    "path" : "some/path",
    "name" : "awesome_package-19.300.0.59-0.noarch.rpm",
    "properties" : [ {
      "key" : "rpm.metadata.version",
      "value" : "19.300.0.59"
    } ]
  } ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 3,
    "total" : 3
  }
}
This result includes version 19.300.0.9, which, according to the RPM spec, is older than what I am searching for (>= 19.300.0.58) and shouldn't be included in the results, but Artifactory finds it nonetheless, most likely because its search comparisons are lexicographic (as strings, "19.300.0.9" >= "19.300.0.58", since "9" > "5").
Also note the ordering of the results, which does appear to use numerical sorting (version "19.300.0.9" comes before "19.300.0.58" and "19.300.0.59").
Question: is it possible to force Artifactory to use numerical (SemVer) comparison in search criteria? If not, is there any other way I can exclude irrelevant versions from the result list?

Although this is not exactly in line with what was asked: instead of sorting on name, sorting on the created field would also be helpful.
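For instance, a sketch of such a query (this assumes the most recently created artifact is also the latest version, which may not hold for re-uploads; the property keys are the ones from the question):
items.find(
    { "@rpm.metadata.name": { "$eq": "awesome_package" } }
)
.include("name", "created", "@rpm.metadata.version")
.sort({ "$desc": [ "created" ] })
.limit(1)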

Related

JFrog Artifactory AQL - Pagination mechanism

I am currently trying to query Artifactory for 25 results at a time.
So I used:
items.find({"name" : {"$match":"somefile*"}}).limit(25)
But I also need the total number of results, so that I can calculate the number of pages.
If there are 500 results, I want to get that number as well.
For example:
500/25 = 20 pages.
You can use the search command of the JFrog CLI with the --count option to get only the number of results.
Or, using AQL through the REST API, you can first run the query without a limit and read the total number of results from the total count in the range field:
{
  "results" : [
    {
      "repo" : "libs-release-local",
      "path" : "org/jfrog/artifactory",
      "name" : "artifactory.war",
      "type" : "item type",
      "size" : "75500000",
      "created" : "2015-01-01T10:10:10",
      "created_by" : "Jfrog",
      "modified" : "2015-01-01T10:10:10",
      "modified_by" : "Jfrog",
      "updated" : "2015-01-01T10:10:10"
    }
  ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 1,
    "total" : 1 // <---- total number of results
  }
}
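For the paging itself, AQL also supports an offset, so a sketch of fetching the third page of 25 results could look like this (the name pattern is the one from the question):
items.find({"name" : {"$match" : "somefile*"}})
.sort({"$asc" : ["name"]})
.offset(50)
.limit(25)
And the JFrog CLI variant mentioned above would look roughly like this (the repository path and pattern are placeholders):
jfrog rt s "generic-local/somefile*" --count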

Artifactory aql: find builds of job with given property

I am trying to query which build number(s) produced artifacts from build foo with artifact property vcs.Revision=aabbccddee123456.
In Artifactory 5.1.3.
Here is what I have tried so far:
curl -u user:apikey -i -X POST https://artifactory.foobar.com/artifactory/api/search/aql -H "content-type:text/plain" -T query.json
query.json:
builds.find(
  {
    "module.artifact.item.repo" : "snapshot-local",
    "name" : "foo",
    "module.artifact.item.@vcs.Revision" : "aabbccddee123456"
  }
)
However, none of these three conditions returns anything, even when used individually:
builds.find({"module.artifact.item.repo":"snapshot-local"})
returns nothing,
builds.find({"name":"foo"})
returns the same empty response, and
builds.find({"module.artifact.item.@vcs.Revision":"aabbccddee123456"})
also returns this:
{
  "results" : [ ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 0,
    "total" : 0
  }
}
What am I doing wrong here? I do see in the webapp the builds I published with this name, and with the correct artifact properties.
Here's a working solution that will give build numbers (since giving admin rights to query builds is not a solution for us):
query.json:
items.find(
  {
    "repo" : "snapshot-local",
    "artifact.module.build.name" : "foo",
    "artifact.item.@vcs.Revision" : "aabbccddee123456"
  }
).include("artifact.module.build.number")
This returns a list of all the artifacts that were built with the relevant properties, with the build number attached, e.g:
{
  "results" : [ {
    "repo" : "snapshot-local",
    "path" : "foo/42",
    "name" : "a.out",
    "type" : "file",
    "size" : 123456789,
    "created" : "2018-07-05T12:34:56.789+09:00",
    "created_by" : "jenkins",
    "modified" : "2018-07-05T12:34:56.789+09:00",
    "modified_by" : "jenkins",
    "updated" : "2018-07-05T12:34:56.789+09:00",
    "artifacts" : [ {
      "modules" : [ {
        "builds" : [ {
          "build.number" : "42"
        } ]
      } ]
    } ]
  },
  [SNIP]
  }
  ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 30,
    "total" : 30
  }
}
I can then parse this to extract build.number.
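For example, one way to pull just the build numbers out of that response (a sketch using jq on top of the same curl call as in the question):
curl -s -u user:apikey -X POST https://artifactory.foobar.com/artifactory/api/search/aql -H "content-type:text/plain" -T query.json \
  | jq -r '.results[].artifacts[].modules[].builds[]["build.number"]' \
  | sort -u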
Certain AQL queries require a user with admin permissions.
To ensure that non-privileged users do not gain access to information without the right permissions, users without admin privileges have the following restrictions:
The primary domain in the query may only be item.
The following three fields must be included in the include directive: name, repo, and path.
In your case, you are using the build domain in the query, which requires admin permissions.
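So an items-based query that also satisfies those restrictions could look roughly like this (a sketch based on the workaround above; note that name, repo and path are spelled out in the include, and the item's own property can be referenced directly as @vcs.Revision):
items.find(
  {
    "repo" : "snapshot-local",
    "artifact.module.build.name" : "foo",
    "@vcs.Revision" : "aabbccddee123456"
  }
).include("name", "repo", "path", "artifact.module.build.number")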

Firebase: structuring data via per-user copies? Risk of data corruption?

I am implementing an Android + Web (Angular) + Firebase app which has a many-to-many relationship: User <-> Widget (Widgets can be shared with multiple users).
Considerations:
List all the Widgets that a User has.
A User can only see the Widgets which are shared with him/her.
Be able to see all Users with whom a given Widget is shared.
A single Widget can be owned/administered by multiple Users with equal rights (modify the Widget and change with whom it is shared). Similar to how Google Drive does sharing to specific users.
One approach to implementing the fetching (join-style) would be to follow this advice: https://www.firebase.com/docs/android/guide/structuring-data.html ("Joining Flattened Data"), using multiple listeners.
However, I have doubts about this approach, because I have discovered that data loading would be worryingly slow (at least on Android) - I asked about it in another question: Firebase Android: slow "join" using many listeners, seems to contradict documentation.
So, this question is about another approach: per-user copies of all the Widgets that a user has, as used in the Firebase+Udacity tutorial "ShoppingList++" ( https://www.firebase.com/blog/2015-12-07-udacity-course-firebase-essentials.html ).
Their structure looks like this:
In particular this part - userLists:
"userLists" : {
"abc#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
}
},
"xyz#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
},
"-KByb0imU7hFzWTK4eoM" : {
"listName" : "List2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1457044332539
},
"timestampLastChanged" : {
"timestamp" : 1457044332539
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044332539
}
}
}
},
As you can see, a copy of the shopping list "Test List 1 Rename 2" appears in two places (once for each of the two users).
And here is the rest for completeness:
{
  "ownerMappings" : {
    "-KBt0MDWbvXFwNvZJXTj" : "xyz@gmail,com",
    "-KByb0imU7hFzWTK4eoM" : "xyz@gmail,com"
  },
  "sharedWith" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "shoppingListItems" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "-KBt0heZh-YDWIZNV7xs" : {
        "bought" : false,
        "itemName" : "item",
        "owner" : "xyz@gmail,com"
      }
    }
  },
  "uidMappings" : {
    "google:112894577549422030859" : "abc@gmail,com",
    "google:117151367009479509658" : "xyz@gmail,com"
  },
  "userFriends" : {
    "xyz@gmail,com" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "users" : {
    "abc@gmail,com" : {
      "email" : "abc@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Agenda TEST",
      "timestampJoined" : {
        "timestamp" : 1456950523145
      }
    },
    "xyz@gmail,com" : {
      "email" : "xyz@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Karol Depka",
      "timestampJoined" : {
        "timestamp" : 1456952940258
      }
    }
  }
}
However, before I jump into implementing a similar structure in my app, I would like to clarify a few doubts.
Here are my interrelated questions:
1. In their ShoppingList++ app, they only permit a single "owner" - assigned in the ownerMappings node. Thus no-one else can rename the shopping list. I would like to have multiple "owners"/admins, with equal rights. Would such a keep-copies-per-user structure still work for multiple owner/admin users, without risking data corruption/"desynchronization" or "pranks"?
2. Could data corruption arise in scenarios like this: User1 goes offline and renames Widget1 to Widget1Prim. While User1 is offline, User2 shares Widget1 with User3 (User3's copy would not yet be aware of the rename). User1 goes online and sends the info about the rename of Widget1 (only to his own and User2's copies, of which the client code was aware at the time of the rename - not updating User3's copy). Now, in a naive implementation, User3 would have the old name, while the others would have the new name. This would probably be rare, but it is still a bit worrying.
3. Could/should the data corruption scenario in point 2 be resolved by having some process (e.g. on AppEngine) listen to changes and ensure proper propagation to all user copies?
4. And/or could/should the data corruption scenario in point 2 be resolved by implementing redundant listening to both changes of sharing and renaming, and propagating the changes to per-user copies, to handle the special case? Most of the time this would not be necessary, so it could result in a performance/bandwidth penalty and complicated code. Is it worth it?
5. Going forward, once we have multiple versions deployed "in the wild", wouldn't it become unwieldy to evolve the schema, given how much of the data-handling responsibility lies with the code in the clients? For example, if we add a new relationship that the older client versions don't yet know about, doesn't it seem fragile? Then, back to the server-side syncer-ensurer process on e.g. AppEngine (described in question 3)?
6. Would it be a good idea to also have a "master reference copy" of every Widget / shopping-list, so as to give a good "source of truth" for any syncer-ensurer type of operations that would update per-user copies?
7. Any special considerations/traps/blockers regarding rules.json / rules.bolt permissions for data structured in such a (redundant) way?
PS: I know about atomic multi-path updates via updateChildren() - would definitely use them.
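For illustration, fanning a rename out to every per-user copy in one atomic write could look roughly like this (a sketch against the userLists structure above, using the current Firebase Android SDK's updateChildren(); the legacy com.firebase.client API has an equivalent method; the helper and the hard-coded keys are purely illustrative):
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ServerValue;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RenameFanOut {
    // Fan a list rename out to all per-user copies in one atomic multi-path update.
    // The user keys would normally come from reading the sharedWith/<listId> node first.
    static void renameList(String listId, String newName, List<String> userKeys) {
        DatabaseReference root = FirebaseDatabase.getInstance().getReference();
        Map<String, Object> fanOut = new HashMap<>();
        for (String userKey : userKeys) {
            // One path per user copy; updateChildren() applies all paths atomically.
            fanOut.put("/userLists/" + userKey + "/" + listId + "/listName", newName);
            fanOut.put("/userLists/" + userKey + "/" + listId
                    + "/timestampLastChanged/timestamp", ServerValue.TIMESTAMP);
        }
        root.updateChildren(fanOut);
    }

    public static void main(String[] args) {
        renameList("-KBt0MDWbvXFwNvZJXTj", "Test List 1 Rename 3",
                Arrays.asList("abc@gmail,com", "xyz@gmail,com"));
    }
}
Note that if the client only knows about the copies it has already read, this still has the stale-User3-copy problem described in point 2; the sketch only shows the mechanics of the multi-path write.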
Any other hints/observations welcome. TIA.
I suggest having only one copy of a widget for the entire system. It would have an origin user ID, and a set of users that have access to it. The widget tree can hold user permissions and change history. Any time a change is made, a branch is added to the tree. Branches can then be "promoted" to the "master" kind of like GIT. This would guarantee data integrity because past versions are never changed or deleted. It would also simplify your fetches... I think :)
{
  "users" : {
    "bob" : {
      "widgets" : {
        "xxx" : {
          "widgetKey" : "xyz",
          "permissions" : "*",
          "lastEdit" : "..."
        }
      }
    },
    ...
  },
  "widgets" : {
    "xyz" : {
      "masterKey" : "abc",
      "data" : { ... },
      "owner" : "bob"
    },
    ...
  },
  "widgetHistory" : {
    "xyz" : {
      "v1" : {
        "data" : { ... }
      },
      "v2" : { ... },
      "v3" : { ... }
    },
    "123" : {
      ...
    },
    ...
  }
}

Drupal 7 Rules - on cron, check date field and if past set field [Status] from “active” to “ended”

OK... let me start by saying I know there is a similar post here (How to create a Drupal rule to check (on cron) a date field and if passed set field "status" to "ended"?) but the answer on that post does not work. Step 4 (In the component add the condition 'Data comparison' and select node:type) does not work, or even exist as an option.
What I need to do is this:
On cron: if the content type is event and the end date has passed the current date, then change the status field (a select list) from Active to Ended.
I was able to do this by using the event "Content is viewed", but I really need it to work when cron is run.
Side note: with the current version I have ("Content is viewed") it does change Active to Ended, but for some reason it also deletes the title of the node, which is strange because the title field is required by Drupal... any idea why that is happening?
Not sure if it helps but here is an export of what I have done myself:
{ "rules_event_status" : {
"LABEL" : "Event Status",
"PLUGIN" : "reaction rule",
"ACTIVE" : false,
"REQUIRES" : [ "rules", "php" ],
"ON" : [ "node_view" ],
"IF" : [
{ "node_is_of_type" : { "node" : [ "node" ], "type" : { "value" : { "event" : "event" } } } },
{ "AND" : [] },
{ "php_eval" : { "code" : "\/\/dpm(strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]));\r\nif (time() \u003E strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]))\r\n{\r\n return true;\r\n}" } }
],
"DO" : [
{ "data_set" : { "data" : [ "node:field-event-status" ], "value" : "Ended" } }
]
}
}
Any help is very much appreciated.
Thanks
C
To use any custom fields, or fields created by modules other than node, you have to add the condition "Entity has field" to your rule; this makes the field "visible" and accessible for later work.
Side note: I think you can do the date comparison without php_eval. Just add another "Entity has field" condition and then create a "Data comparison" condition; there should be tokens available for your needs.
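For example, the conditions block of the export might end up looking roughly like this (a sketch written from memory, not a tested export - the exact data selector for the end-date value depends on how the date field is configured, so build it through the Rules UI and treat these machine names as assumptions):
"IF" : [
  { "node_is_of_type" : { "node" : [ "node" ], "type" : { "value" : { "event" : "event" } } } },
  { "entity_has_field" : { "entity" : [ "node" ], "field" : "field_event_date_time" } },
  { "data_is" : { "data" : [ "node:field-event-date-time:value2" ], "op" : "<", "value" : [ "site:current-date" ] } }
],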
Not sure I fully understand the question: rules can be triggered by cron.
You should be able to get it to run when cron executes by setting the rule's "React on event" attribute to "System > Cron maintenance tasks are executed".
Am I missing something?
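In the export above that would amount to replacing the trigger line, roughly like this (machine name from memory, so verify it by re-exporting from the Rules UI):
"ON" : [ "cron" ],
Bear in mind that with a cron trigger there is no longer a node in context, so the rule also needs a way to load the affected event nodes (for example Rules' "Fetch entity by property" action) before the conditions and the data_set action can run.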

Rule-based node creation: commerce product + product display node set

I'm trying to bind a Commerce product type to my own custom-type node (serving as a display node). The goal is to enter new data in as few places as possible, so I'm exploring rule-based creation of one type upon creation of the other. It seems like both directions work. Of the two, though, I prefer automatic creation of a Commerce Product upon user creation of the custom-type node, which will then serve as the product display.
I was wondering if anyone has been through this choice and can recommend one. Also, is the commerce_product_display_manager module necessary?
Commerce Product Display Manager is not necessary; I've gotten this to work and I've never used that module.
I went for the route of automatically creating a Node after saving the Product.
Below is my Rules export for this:
{ "rules_create_product_display" : {
"LABEL" : "Create Product Display",
"PLUGIN" : "reaction rule",
"REQUIRES" : [ "rules", "entity" ],
"ON" : [ "commerce_product_insert" ],
"IF" : [
{ "data_is" : { "data" : [ "commerce-product:type" ], "value" : "**PRODUCT_TYPE**" } }
],
"DO" : [
{ "entity_create" : {
"USING" : {
"type" : "node",
"param_type" : "**NODE_TYPE**",
"param_title" : "[commerce-product:title]",
"param_author" : [ "commerce-product:creator" ]
},
"PROVIDE" : { "entity_created" : { "entity_created" : "Created entity" } }
}
},
{ "data_set" : {
"data" : [ "entity-created:**PRODUCT_REFERENCE**" ],
"value" : [ "commerce-product" ]
}
}
]
}
}
You'll need to substitute your own values for:
PRODUCT_TYPE (product type that has been created)
NODE_TYPE (node type being created)
PRODUCT_REFERENCE (field that will reference the created product)
Sorry I can't dedicate more time to a better answer right now; let me know if you'd like me to elaborate on the process of creating the above using the GUI.
The above example was useful but here is a more specific one:
{ "rules_create_product_display_on_product_creation" : {
"LABEL" : "Create Product Display on Product creation",
"PLUGIN" : "reaction rule",
"REQUIRES" : [ "rules", "entity" ],
"ON" : [ "commerce_product_insert" ],
"IF" : [
{ "entity_is_of_type" : { "entity" : [ "commerce-product" ], "type" : "commerce_product" } }
],
"DO" : [
{ "entity_create" : {
"USING" : {
"type" : "node",
"param_type" : "product_display",
"param_title" : "[commerce-product:title]",
"param_author" : [ "commerce-product:creator" ]
},
"PROVIDE" : { "entity_created" : { "entity_created" : "Created entity" } }
}
},
{ "data_set" : {
"data" : [ "entity-created:field-product:0" ],
"value" : [ "commerce-product" ]
}
}
]
}
}
The only problem I had was with the second action ("data_set") - it was important to select "entity-created:field-product:0", not "entity-created:field-product", because we want to assign a specific product and not a list of products.
This example uses the standard product display node type (product_display), but you can change it to the one you are using. Also keep in mind that this works for only one product type - a separate rule should be created for every product type. You may also create a rule to delete the product display node when the product is deleted.
This rule is useful only when you have a one product - one product display relationship. If you need to add more products per product display (colours, images with different prices), then you have to use the Commerce Bulk Product Creation module.
