Indexing the partition key in Azure Cosmos DB

Suppose I have the following data in my container:
{
"id": "1DBF704E-1623-4844-DC86-EFA729A5C048",
"firstName": "Wylie",
"lastName": "Ramsey",
"country": "AZ",
"city": "Tucson"
}
I use the field "id" as the item id and the field "country" as the partition key. When I query on a specific partition key:
SELECT * FROM c WHERE c.country = "AZ"
(get all the people in "AZ")
Should I add an index on "country", or will I get one by default since I declared "country" as my partition key?
Is there a difference when using the SDK (meaning: adding the new PartitionKey("AZ") option and then sending the query as mentioned above)?
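For reference, with the Python SDK (azure-cosmos; the SDK choice, account endpoint, and names here are assumptions, since the question doesn't name a language), the two query shapes look roughly like this:
from azure.cosmos import CosmosClient

# Placeholders: substitute your own account endpoint, key, and names.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("people")

# Scoped to a single partition: the SDK routes the request to one partition.
in_partition = list(container.query_items(
    query='SELECT * FROM c WHERE c.country = "AZ"',
    partition_key="AZ",
))

# Without a partition key, the same query fans out to every partition.
cross_partition = list(container.query_items(
    query='SELECT * FROM c WHERE c.country = "AZ"',
    enable_cross_partition_query=True,
))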

I created a collection with 50,000 records and disabled indexing on all properties.
Indexing policy:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [], // Included nothing
"excludedPaths": [
{
"path": "/\"_etag\"/?"
},
{
"path": "/*" // Exclude all paths
}
]
}
Querying by id cost 2.85 RUs.
Querying by PartitionKey cost 580 RUs.
Indexing policy with PartitionKey (country) added:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/country/*"
}
],
"excludedPaths": [
{
"path": "/\"_etag\"/?"
},
{
"path": "/*" // Exclude all paths
}
]
}
Adding an index on the PartitionKey brought it down to 2.83 RUs.
So the answer to this is yes: if you have disabled the default indexing policy and need to query by partition key, then you should add an index on it.
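A minimal sketch of setting such a policy from the Python SDK (azure-cosmos; the database and container names are placeholders), assuming the container is created with "/country" as the partition key path:
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("mydb")

# Mirrors the second indexing policy above: exclude everything except /country.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/country/*"}],
    "excludedPaths": [{"path": '/"_etag"/?'}, {"path": "/*"}],
}

container = database.create_container_if_not_exists(
    id="people",
    partition_key=PartitionKey(path="/country"),
    indexing_policy=indexing_policy,
)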

In my opinion, it's good practice to query with the partition key in the Cosmos DB SQL API; here's the official doc related to it.
By the way, the Cosmos DB SQL API indexes all properties by default. If you'd like to override the default settings and customize the indexing policy, this doc may provide more details.

Related

Non Recursive Index CosmosDb

As a Cosmos DB (SQL API) user I would like to index all non-object, non-array properties inside an object.
By default the /* index in Cosmos indexes every property; our data set is getting extremely large (and expensive), so this strategy is no longer optimal. We store our metadata at the root and our customer data wrapped inside an object property named data.
Our platform restricts queries on the data path to value-type properties, so indexing the objects and arrays nested under the data path only slows down writes and costs RUs to store, without ever being used.
I have tried several iterations of index policies but cannot find one that fits. Example:
{
"partitionKey": "f402a704-19bb-4f4d-93e6-801c50280cf6",
"id": "4a7a11e5-00b5-4def-8e80-132a8c083f24",
"data": {
"country": "Belgium",
"employee": 250,
"teammates": [
{ "name": "Jake", "id": 123 ...},
{ "name": "kyle", "id": 3252352 ...}
],
"user": {
"name": "Brian",
"addresses": [{ "city": "Moscow" ...}, { "city": "Moscow" ...}]
}
}
}
In this case I want to index only the root properties as well as /data/employee and /data/country.
Policies like /data/* will not work, because they would then index /data/teammates/name ... and so on.
/data/? assumes data is a value type, which it never will be, so this doesn't work.
/data/, /data/*/? and /data/*? are not accepted by Cosmos as valid policies.
Additionally, I can't simply exclude /data/teammates/ and /data/user/, because what is inside data is completely dynamic: while that might cover this use case, there are several hundred thousand others it would not.
I have tried many iterations, but the options don't work for various reasons. Is there a way to support what I am trying to do?
The following indexing policy will index the properties you are asking for:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/partitionKey/?"
},
{
"path": "/data/country/?"
},
{
"path": "/data/employee/?"
}
],
"excludedPaths": [
{
"path": "/*"
}
]
}
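If the container already exists, the policy can be swapped in without recreating it; here is a sketch with the Python SDK (azure-cosmos), where the container id and partition key path are assumptions:
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("mydb")

policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [
        {"path": "/partitionKey/?"},
        {"path": "/data/country/?"},
        {"path": "/data/employee/?"},
    ],
    "excludedPaths": [{"path": "/*"}],
}

# Re-indexing runs in the background after the policy change.
database.replace_container(
    "my-container",                            # container id (assumed name)
    partition_key=PartitionKey(path="/partitionKey"),
    indexing_policy=policy,
)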

Azure Search service index pointing to multiple DocumentDB collections

How can I load data from two separate Azure Cosmos DB collections into a single Azure Search index? I need a way to join the data from the two collections, similar to an inner join in SQL, and load the result into the Azure Search service.
I have two collections in Azure Cosmos DB.
One is for products; a sample of its documents is below.
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "a4853bf5-9c58-4fb5-a1ff-fc3ab575b4c8",
"name": "New Product",
"createDate": "2018-09-19T10:04:35.1951552Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-10-05T13:46:24.7048358Z",
"updatedBy": "DIJdyXMudaqeAdsw1SiNyJKRIi7Ktio5#clients"
}
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "b9b6c3bc-a8f8-470f-ac93-be589eb1da16",
"name": "New Product 2",
"createDate": "2018-09-19T11:02:02.6919008Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-09-19T11:02:02.6919008Z",
"updatedBy": "00000000-0000-0000-0000-000000000000"
}
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "98b3647a-3b40-4a00-bd0f-2a397bd48b68",
"name": "New Product 7",
"createDate": "2018-09-20T09:42:28.2913567Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-09-20T09:42:28.2913567Z",
"updatedBy": "00000000-0000-0000-0000-000000000000"
}
The other collection is for ProductType, with the sample document below.
{
"description": null,
"links": null,
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"id": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"name": "ProductType1_186",
"createDate": "2018-09-18T23:54:43.9395245Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-10-05T13:29:44.019851Z",
"updatedBy": "DIJdyXMudaqeAdsw1SiNyJKRIi7Ktio5#clients"
}
The product type id is referenced in the product collection, and that is the field which links the two collections.
I want to load the above two collections into the same Azure Search index, and I expect the index fields to be populated somewhat like below.
If you use product id as the key, you can simply point two indexers at the same index, and Azure Search will merge the documents automatically. For example, here are two indexer definitions that would merge their data into the same index:
{
"name" : "productIndexer",
"dataSourceName" : "productDataSource",
"targetIndexName" : "combinedIndex",
"schedule" : { "interval" : "PT2H" }
}
{
"name" : "sampleIndexer",
"dataSourceName" : "sampleDataSource",
"targetIndexName" : "combinedIndex",
"schedule" : { "interval" : "PT2H" }
}
Learn more about the Create Indexer API here.
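The same two indexers could also be created programmatically; a sketch assuming the Python azure-search-documents SDK, with a placeholder endpoint and admin key:
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import SearchIndexer

client = SearchIndexerClient("https://<service>.search.windows.net",
                             AzureKeyCredential("<admin-key>"))

# Both indexers target the same index, so their documents are merged by key.
for name, source in [("productIndexer", "productDataSource"),
                     ("sampleIndexer", "sampleDataSource")]:
    client.create_or_update_indexer(SearchIndexer(
        name=name,
        data_source_name=source,
        target_index_name="combinedIndex",
    ))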
However, it appears that the two collections share the same fields. This means that the fields from the document which gets indexed last will replace the fields from the document that got indexed first. To avoid this, I would recommend replacing the fields that match the 00000000-0000-0000-0000-000000000000 pattern with null in your Cosmos DB query. For example:
SELECT productTypeId, (createdBy != "00000000-0000-0000-0000-000000000000" ? createdBy : null) as createdBy FROM products
This exact query may not work for your use case. See the query syntax reference for more information.
Please let me know if you have any questions, or something is not working as expected.

Model Firebase realtime database in JSON schema

The Firebase database uses a subset of JSON, so it seems obvious to use JSON Schema to describe the data model. This would make it possible to use tools that generate HTML forms or TypeScript models from it, or generate random test data.
My question: How would one model key-value pairs in JSON schema, where the key is an id?
Example: (borrowed from firebase spec)
{
"users": {
"mchen": {
"name": "Mary Chen",
// index Mary's groups in her profile
"groups": {
// the value here doesn't matter, just that the key exists
"alpha": true,
"charlie": true
}
},
...
The group name here is used as a group id. In this reference (the groups object), as well as in the group object itself, the id is used as the property name.
The JSON schema for the above example is:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"users": {
"type": "object",
"properties": {
"mchen": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"groups": {
"type": "object",
"properties": {
"alpha": {
"type": "boolean"
},
"charlie": {
"type": "boolean"
}
}
}
}
}
}
}
}
}
What I would need for the example is something like the following, where NAME is a placeholder for the property name and NAME_TYPE defines its type.
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"users": {
"type": "object",
"properties": {
NAME: {
"type": "object",
NAME_TYPE: "string",
"properties": {
"name": {
"type": "string"
},
"groups": {
"type": "object",
"properties": {
NAME: {
NAME_TYPE: "string",
"type": "boolean"
}
}
}
}
}
}
}
}
}
(Maybe I am on the completely wrong path here or maybe JSON schema isn't able to model the required structure.)
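For what it's worth, JSON Schema's patternProperties and additionalProperties keywords come close to the NAME placeholder above: they validate keys by a regex (or accept any key) rather than listing them. A rough sketch, expressed and checked here with the Python jsonschema package:
import jsonschema

# Assumption: user ids and group ids are simple alphanumeric keys.
schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "users": {
            "type": "object",
            "patternProperties": {
                "^[a-zA-Z0-9_-]+$": {          # any user id
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "groups": {
                            "type": "object",
                            # any group id, value must be boolean
                            "additionalProperties": {"type": "boolean"},
                        },
                    },
                },
            },
        },
    },
}

jsonschema.validate(
    {"users": {"mchen": {"name": "Mary Chen",
                         "groups": {"alpha": True, "charlie": True}}}},
    schema,
)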
There are certainly arrays in Firebase, but they are situational, should be used only in certain use cases, and should generally be avoided.
The Firebase structure you posted is very common, and there are key:value pairs in your structure, so the question is a tad unclear, but I'll give it a shot.
'groups' is the parent key, and its values are the child key:value pairs (group1: value, group2: value).
The group1, group2 keys you listed are essentially the same as the ids listed in the first example, other than not being an array; i.e. arrays have sequential, hard-coded indexes (0th, 1st, 2nd, etc.), whereas the keys in Firebase are open-ended and can generally be set to any alphanumeric value. They are used more to refer to a specific node than to enforce a particular order (I'm speaking generally here).
In the Firebase structure, those keys could be id0, id1, id2..., or a, b, c..., or a timestamp, or an auto-generated Firebase id (childByAutoId), which would also make them 'sequential'.
However, you can get into trouble assigning your own keys such as id0, id1, etc.:
id0
id1
id2
...
id9
id10
id11
The reality here is that the actual order will be
id0
id1
id10
id11
id2
The key point is that if you are going to read the data back in sequentially, set the keys up accordingly. You may also want to consider generating your keys with childByAutoId (see the docs for language specifics) and using orderBy on one of the child values, such as a timestamp or an index.
'groups': {
'auto-generated id': {
'name': 'alpha',
'index': 0,
'timestamp': '20160814074146'
...
},
'auto-generated id': {
'name': 'charlie',
'index': 1,
'timestamp': '20160814073600'
...
},
...
}
In the above case, I can orderBy name, index, or timestamp.
Ordering by name or index will read the nodes in the order they are listed; if we order by timestamp, then the charlie node will be loaded first. Leveraging the child values to order by is very flexible.
Also, you can limit the set of data you load in with startingAt and endingAt. So, for example, say you want to load nodes starting at node 10 and ending at node 14: that is easily done with non-array JSON data, but not easily done if the data is stored in an array, as the entire array must be read in.
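A sketch of that pattern with the Firebase Admin SDK for Python (the SDK choice, credentials file, and database URL are assumptions):
import firebase_admin
from firebase_admin import credentials, db

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"),      # placeholder path
    {"databaseURL": "https://<project>.firebaseio.com"},
)

groups = db.reference("groups")
# push() generates chronologically ordered auto ids for the keys.
groups.push({"name": "alpha", "index": 0, "timestamp": "20160814074146"})
groups.push({"name": "charlie", "index": 1, "timestamp": "20160814073600"})

# Order and window on a child value; charlie comes back first because its
# timestamp is earlier.
ordered = groups.order_by_child("timestamp").limit_to_first(10).get()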

Allow a DynamoDB User to modify a subset of documents

How can I create a DynamoDB policy that allows the corresponding IAM users to modify just a subset of the documents in a table?
For example, let's say there is an attribute published,
and I want this IAM user to perform PutItem and UpdateItem
on documents which have published: false.
You can only use DynamoDB Fine-Grained Access Control on the hash key value. You could save each item with "draft" prepended to its hash key value and use the following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:UpdateItem",
],
"Resource": [
"arn:aws:dynamodb:REGION:ACCOUNT_NUMBER:table/TABLE_NAME"
],
"Condition": {
"ForAllValues:StringLike": {
"dynamodb:LeadingKeys": ["draft*"],
}
}
}
]
}
Adapted from https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/FGAC_DDB.html
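As a usage sketch (boto3; the table name, region, and the "pk" attribute name for the hash key are placeholders): under this policy, the restricted user can only write items whose hash key starts with "draft".
import boto3

table = boto3.resource("dynamodb", region_name="REGION").Table("TABLE_NAME")

# Allowed for the restricted user: the hash key carries the "draft" prefix.
table.put_item(Item={"pk": "draft#article-42", "published": False})

# Denied for the restricted user: no "draft" prefix on the hash key.
table.put_item(Item={"pk": "article-42", "published": True})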

Google Cloud Datastore avoid entity overhead when fetching

Is it possible to avoid the entity overhead when fetching data from Cloud Datastore? Ideally I'd love to have SQL-like results: simple arrays/objects with key:values instead of
{
"batch": {
"entityResultType": "FULL",
"entityResults": [
{
"entity": {
"key": {
"partitionId": {
"datasetId": "e~******************"
},
"path": [
{
"kind": "Answer",
"id": "*******************"
}
]
},
"properties": {
"value": {
"stringValue": "12",
"indexed": false
},
"question": {
"integerValue": "120"
}
}
}
}
],
"endCursor": "********************************************",
"moreResults": "MORE_RESULTS_AFTER_LIMIT",
"skippedResults": null
}
}
which has just too much overhead for me (I plan on running queries over thousands of entities). I just couldn't find in the docs whether that's possible.
You can use projection queries to query for specific entity properties.
In your example, you could use a projection query to avoid returning the entity's key in the query results.
The other fields are part of the API's data representation and are required to be returned; however, many of them (e.g. endCursor) are returned only once per query batch, so the overhead is low when there are many results.
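A sketch with the Python client library (google-cloud-datastore is an assumption; the kind and property names come from the example above). Note that only indexed properties can appear in a projection, so the non-indexed "value" property would not qualify:
from google.cloud import datastore

client = datastore.Client()

# Projection query: only the named (indexed) properties come back.
query = client.query(kind="Answer")
query.projection = ["question"]

for entity in query.fetch(limit=1000):
    print(entity["question"])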
