Index setting path in Cosmos DB

In Cosmos DB, I have an index setting that looks like this:
"includedPaths": [
    {
        "path": "/PartitionKey/*"
    }
]
What would be the difference if I changed the Path to only this: "/PartitionKey"?

Changing the path to "/PartitionKey" means indexing won't be enabled on the nested nodes in the document, i.e. anything under /PartitionKey/. The /* wildcard should be used if you are planning to query on sub-properties of the property at this path.
From the documentation:
the /* wildcard can be used to match any elements below the node
If you don't have nested nodes then this should be good enough. Note, however, that the ? character at the end of the index path is required to serve queries that filter on this property itself:
/PartitionKey/?
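As an illustration, an includedPaths entry covering both cases might look like this (a sketch; /SomeObject is a hypothetical property with nested nodes, not something from the question):

```json
"includedPaths": [
    {
        "path": "/PartitionKey/?"
    },
    {
        "path": "/SomeObject/*"
    }
]
```

Here /PartitionKey/? serves queries on the scalar value of PartitionKey itself, while /SomeObject/* also indexes everything nested below /SomeObject.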

Firebase rule based on path name

I have a security rule written like this:
match /databases/{database}/documents {
  match /collection_COUNTRY_EN/{docId} {
    allow...
  }
  match /collection_COUNTRY_ES/{docId} {
    allow...
  }
}
The rules are identical for all the countries. Is there a way to use a regex in the match path so the same rule applies to every collection that starts with a common prefix and ends with a country code?
Or do I have to structure my data in a different way?
Thanks for your time.
Security rules do not support regex in the path match. You can only wildcard on the full name of a path segment.
What you might want to do instead is organize all your common top-level collections as subcollections under a known document, and apply the same rules to each of them that way:
match /countries/data/{countryCollection}/{docId} {
  allow...
}
This would apply the same permissions to all country subcollections organized under /countries/data, which can be an empty document, or even a non-existent document.
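Put together, a minimal rules file along those lines might look like this (a sketch; the `allow read, write: if request.auth != null` condition is just a placeholder for whatever your real per-country rule is):

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /countries/data/{countryCollection}/{docId} {
      allow read, write: if request.auth != null;
    }
  }
}
```

With this layout, adding a new country means creating a new subcollection under /countries/data, with no rules change required.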

Using parameters in Azure Service Bus Subscription SqlFilter expression

I am using an ARM template to try and deploy a subscription to an Azure Service Bus Topic which filters messages based on the To system property. I would like to pull the value for the filter from an ARM template parameter, but I can't seem to get the template to resolve the param in the SqlExpression.
Below is the template I have been messing around with. I thought I could maybe just toggle the requiresPreprocessing switch to get it to resolve the param on deployment, but no dice. I also played with trying to escape it using double square brackets or colons as shown in the link below
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-sql-filter#propertyname
{
  "apiVersion": "2017-04-01",
  "name": "[concat(parameters('mynamespace'), '/', parameters('topic'), '/', parameters('myVariable'),'/direct')]",
  "type": "Microsoft.ServiceBus/namespaces/topics/subscriptions/rules",
  "dependsOn": [
    "[resourceId('Microsoft.ServiceBus/namespaces', parameters('mynamespace'))]",
    "[resourceId('Microsoft.ServiceBus/namespaces/topics', parameters('mynamespace'), parameters('topic'))]",
    "[resourceId('Microsoft.ServiceBus/namespaces/topics/subscriptions', parameters('mynamespace'), parameters('topic'), parameters('myVariable'))]"
  ],
  "properties": {
    "filterType": "SqlFilter",
    "sqlFilter": {
      "sqlExpression": "sys.To=[parameters('myVariable')] OR sys.To IS NULL",
      "requiresPreprocessing": true
    }
  }
}
What I am getting is the string exactly as it is displayed in the sqlExpression, but I would like to get the value that the variable resolves to in a single quoted string.
Topic subscription rules may only take static values, so the problem might be caused by giving a dynamic value to the sys.To property. You could try a static value in place of [parameters('myVariable')] to confirm.
You could use: "[concat('sys.To=', parameters('myVariable'), ' OR sys.To IS NULL')]".
You cannot use inline expressions inside a string value in an ARM template; you have to make the whole value an expression and, in this case, use concat to glue the parts together.
Hint: including single quotes is difficult, so a variable like this might come in handy:
"SQ": "'"
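Combining the concat approach with the single-quote variable, the rule's properties could be written like this (a sketch; it assumes the SQ variable above is defined in the template's variables section, so the resolved expression wraps the parameter value in single quotes):

```json
"properties": {
  "filterType": "SqlFilter",
  "sqlFilter": {
    "sqlExpression": "[concat('sys.To=', variables('SQ'), parameters('myVariable'), variables('SQ'), ' OR sys.To IS NULL')]"
  }
}
```

Because the expression is resolved by ARM at deployment time, the deployed rule contains the literal parameter value rather than the expression text.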

Write single value on firebase database without deleting the existing ones

I'm trying to add a single value to my database without deleting the existing ones, because I need them all. The structure is something like /user/fav_post/{post_id_1,post_id_2,...}. The "fav_post" node is initially empty and, as time goes by, the user adds more fav_posts.
One way that solves this problem is downloading all favorite posts, putting them in a HashMap, adding the new post, and then pushing them all back to the database, but this does not seem very optimal.
So what I am trying to achieve is to have all the favorite posts and to display them to the user.
mDatabase.child("USERS")
.child(currentUser.getUid())
.child("favorites")
.setValue(postID);
Edit: the end result should look like this:
root
-User
--FavPosts
---postID1(String)
And when the user favorites another post the result should look like this:
root
-User
--FavPosts
---postID1(String)
---postID2(String)
What you're looking for is a set: an unordered collection of unique values. In the Firebase Database you'd store the post Id as a key and (since Firebase doesn't allow you to store a key without a value) true as the value.
So:
root: {
  User: {
    FavPosts: {
      postID1: true,
      postID2: true
    }
  }
}
You'd set these values with:
mDatabase.child("USERS")
.child(currentUser.getUid())
.child("favorites")
.child(postID)
.setValue(true);
setValue() will overwrite the entire contents of the location. If you want to just add or update child values, use updateChildren() instead.
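The set-of-keys idea can also be illustrated in plain Java, independent of the Firebase SDK (FavSet and addFavorite are hypothetical names used only for this sketch): putting true under the post id is idempotent, which is why storing favorites this way never duplicates entries.

```java
import java.util.HashMap;
import java.util.Map;

public class FavSet {
    // Mirrors writing `true` under child(postID): re-adding the same key is a no-op.
    public static Map<String, Boolean> addFavorite(Map<String, Boolean> favs, String postId) {
        favs.put(postId, true);
        return favs;
    }

    public static void main(String[] args) {
        Map<String, Boolean> favs = new HashMap<>();
        addFavorite(favs, "postID1");
        addFavorite(favs, "postID2");
        addFavorite(favs, "postID1"); // duplicate add changes nothing
        System.out.println(favs.size()); // prints 2
    }
}
```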

Artifactory AQL delete empty folders

How do I delete empty folders (folders without any content) using Artifactory AQL?
I have the following AQL query to find files that are older than 12 weeks and were never downloaded, which I will delete with a script.
items.find(
  {
    "repo": {"$eq": "libs-release-local"},
    "stat.downloads": {"$eq": null},
    "created": {"$before": "12w"}
  }
)
This leaves me with empty folders, how do I specify an AQL query that finds all empty folders?
From Artifactory Query Language documentation: if type is not specified in the query, the default type searched for is file.
By adding a type to the query you can control the result type: file, folder or both.
For example:
items.find(
  {
    "repo": {"$eq": "libs-release-local"},
    "stat.downloads": {"$eq": null},
    "created": {"$before": "12w"},
    "type": {"$eq": "any"}
  }
)
If you are not married to the idea of using AQL, note that there is an Empty Folder Clean-up plugin by JFrog.

Filter property based searches in Artifactory

I'm looking to use the Artifactory property search
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ArtifactSearch%28QuickSearch%29
Currently this will pull json listing any artifact that matches my properties.
"results" : [
{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver/lib-ver.pom"
},{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver2/lib-ver2.pom"
}
]
I need to be able to filter the artifacts I get back, as I'm only interested in a certain classifier. The GAVC search supports this with &c=classifier.
I can do it in code if this isn't possible via the interface.
Any help appreciated.
Since the release of AQL in Artifactory 3.5, it is now the official and preferred way to find artifacts.
Here's an example similar to your needs:
items.find(
  {
    "$and": [
      {"@license": {"$eq": "GPL"}},
      {"@version": {"$match": "1.1.*"}},
      {"name": {"$match": "*.jar"}}
    ]
  }
)
To run the query in Artifactory, copy the query to a file named aql.query.
Run the following command from the directory that contains the aql.query file:
curl -X POST -uUSER:PASSWORD 'http://HOST:PORT/artifactory/api/search/aql' -T aql.query
Don't forget to replace the placeholders (USER, PASSWORD, HOST and PORT) with real values.
In the example:
The first two criteria filter items by properties.
The third criterion filters items by the artifact name (in our case the artifact name should end with .jar).
For more details on how to write AQL queries, see the AQL documentation.
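Closer to the original question: for a Maven artifact the classifier is part of the file name (artifactId-version-classifier.jar), so a name pattern can approximate a classifier filter (a sketch; "sources" is just an example classifier):

```
items.find(
  {
    "repo": {"$eq": "libs-release-local"},
    "name": {"$match": "*-sources.jar"}
  }
)
```

This runs the same way as the query above, via a POST to api/search/aql.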
Old answer
Currently you can't combine the property search with GAVC search.
So you have two options:
Executing one of them (whichever gives you more precise results) and then filter the JSON list on the client by a script
Writing an execution user plugin that will execute the search by using the Searches service and then filter the results on the server side.
Of course, the latter is preferable.