How do I load data from two separate Azure Cosmos DB collections into a single Azure Search index? I need a way to join the data from the two collections, similar to an inner join in SQL, and load the result into the Azure Search service.
I have two collections in Azure Cosmos DB.
One is for Product; sample documents are shown below.
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "a4853bf5-9c58-4fb5-a1ff-fc3ab575b4c8",
"name": "New Product",
"createDate": "2018-09-19T10:04:35.1951552Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-10-05T13:46:24.7048358Z",
"updatedBy": "DIJdyXMudaqeAdsw1SiNyJKRIi7Ktio5#clients"
}
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "b9b6c3bc-a8f8-470f-ac93-be589eb1da16",
"name": "New Product 2",
"createDate": "2018-09-19T11:02:02.6919008Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-09-19T11:02:02.6919008Z",
"updatedBy": "00000000-0000-0000-0000-000000000000"
}
{
"description": null,
"links": [],
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"productTypeId": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"id": "98b3647a-3b40-4a00-bd0f-2a397bd48b68",
"name": "New Product 7",
"createDate": "2018-09-20T09:42:28.2913567Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-09-20T09:42:28.2913567Z",
"updatedBy": "00000000-0000-0000-0000-000000000000"
}
The other collection is for ProductType, with the sample document below.
{
"description": null,
"links": null,
"replaces": "00000000-0000-0000-0000-000000000000",
"replacedBy": "00000000-0000-0000-0000-000000000000",
"id": "ccd0bc73-c4a1-41bf-9c96-454a5ba1d025",
"name": "ProductType1_186",
"createDate": "2018-09-18T23:54:43.9395245Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"updateDate": "2018-10-05T13:29:44.019851Z",
"updatedBy": "DIJdyXMudaqeAdsw1SiNyJKRIi7Ktio5#clients"
}
The productTypeId is referenced in the Product collection, and that is the field that links the two collections.
I want to load the above two collections into the same Azure Search index, and I expect the index fields to be populated somewhat like below.
If you use product id as the key, you can simply point two indexers at the same index, and Azure Search will merge the documents automatically. For example, here are two indexer definitions that would merge their data into the same index:
{
"name" : "productIndexer",
"dataSourceName" : "productDataSource",
"targetIndexName" : "combinedIndex",
"schedule" : { "interval" : "PT2H" }
}
{
"name" : "sampleIndexer",
"dataSourceName" : "sampleDataSource",
"targetIndexName" : "combinedIndex",
"schedule" : { "interval" : "PT2H" }
}
Learn more about the Create Indexer API in the Azure Search documentation.
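For reference, the two data sources those indexers point at would each target one of the Cosmos DB collections. This is only a sketch: the collection names, account, key, and database are placeholders, and older API versions use the type "documentdb" instead of "cosmosdb".
{
    "name" : "productDataSource",
    "type" : "cosmosdb", // "documentdb" on older API versions
    "credentials" : { "connectionString" : "AccountEndpoint=https://<account>.documents.azure.com;AccountKey=<key>;Database=<database>" }, // placeholders
    "container" : { "name" : "Product" } // assumed collection name
}
{
    "name" : "sampleDataSource",
    "type" : "cosmosdb",
    "credentials" : { "connectionString" : "AccountEndpoint=https://<account>.documents.azure.com;AccountKey=<key>;Database=<database>" }, // placeholders
    "container" : { "name" : "ProductType" } // assumed collection name
}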
However, it appears that the two collections share the same fields. This means that the fields from the document which gets indexed last will replace the fields from the document that got indexed first. To avoid this, I would recommend replacing the fields that match the 00000000-0000-0000-0000-000000000000 pattern with null in your Cosmos DB query. For example:
SELECT products.productTypeId, (products.createdBy != "00000000-0000-0000-0000-000000000000" ? products.createdBy : null) AS createdBy FROM products
This exact query may not work for your use case. See the query syntax reference for more information.
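To tie the two parts together: a query like this is supplied through the container definition of the data source. Here is the hypothetical productDataSource sketch again with a query added (the selected fields, including id so the index key comes through, are assumptions based on your sample documents):
{
    "name" : "productDataSource",
    "type" : "cosmosdb",
    "credentials" : { "connectionString" : "AccountEndpoint=https://<account>.documents.azure.com;AccountKey=<key>;Database=<database>" }, // placeholders
    "container" : {
        "name" : "Product", // assumed collection name
        "query" : "SELECT products.id, products.productTypeId, (products.createdBy != '00000000-0000-0000-0000-000000000000' ? products.createdBy : null) AS createdBy FROM products"
    }
}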
Please let me know if you have any questions, or something is not working as expected.
Thanks
Matt
Related
The following is my Company object as I store it in Cosmos DB. All the essential information about employees is in the employees property. I also have a departments property that defines both the departments and their members.
{
"id": "company-a",
"name": "Company A",
"employees": [
{
"id": "john-smith",
"name": "John Smith",
"email": "john.smith#example.com"
},
{
"id": "jane-doe",
"name": "Jane Doe",
"email": "jane.doe#example.com"
},
{
"id": "brian-green",
"name": "Brian Green",
"email": "brian.green#example.com"
}
],
"departments": [
{
"id": "it",
"name": "IT Department",
"members": [
{
"id": "john-smith",
"name": "John Smith",
"isDepartmentHead": true
},
{
"id": "brian-green",
"name": "Brian Green",
"isDepartmentHead": false
}
]
},
{
"id": "hr",
"name": "HR Department",
"members": [
{
"id": "jane-doe",
"name": "Jane Doe",
"isDepartmentHead": true
}
]
}
]
}
I'm trying to return the member list of a particular department, including each employee's email, which comes from the employees property.
Here's what I did, but it includes all employees in the output:
SELECT dm.id, dm.name, e.email, dm.isDepartmentHead
FROM Companies c
JOIN d IN c.departments
JOIN dm IN d.members
JOIN e IN c.employees
WHERE c.id = "company-a" AND d.id = "hr"
The correct output would be:
[
{
"id": "jane-doe",
"name": "Jane Doe",
"email": "jane.doe#example.com",
"isDepartmentHead": true
}
]
How do I form my SQL statement to get all members of a department AND include employees' email addresses?
I'm pretty sure you cannot write a query like this. You are trying to correlate data across two arrays in the same query, which I don't think is possible (at least I've never been successful doing it).
Even if this were possible, there are other issues with your data model: it will not scale. You need to avoid unbounded or very large arrays within documents (e.g. employees and departments), and you do not want to store unrelated data in the same document. The objective is to model data around your high-concurrency operations, the way you actually use it. This reduces both latency and cost.
There are many ways you could remodel this data. But if this is a very small data set, you could probably do something like the model below with a partition key of companyId (assuming that you always query within a single company). This stores all employees for one company in the same logical partition, which can hold up to 20 GB of data. I would also model it so that one document stores the data specific to the company itself (address, phone number, number of employees, etc.), with id and companyId having the same value. This lets you store materialized aggregates, such as the number of employees, and update them in a transaction. Also, since this approach mixes different types of entities in one container (a bonus of a NoSQL database), you need a discriminator property that lets you filter for a specific entity type so you can deserialize the results directly into your model classes.
Here is a data model you could try (please note: you need to determine whether this works for you by scaling it up to the amount of data you expect to store, and by testing and measuring the RU/s cost of the CRUD operations you will execute with high concurrency).
Example company document:
{
"id": "aaaaa",
"companyId": "aaaaa",
"companyName": "Company A",
"type": "company",
"numberOfEmployees: 3,
"addresses": [
{
"address1": "123 Main Street",
"address2": "",
"city": "Los Angeles",
"state": "California",
"zip": "92568"
}
]
}
Then an employee document like this:
{
"id": "jane-doe",
"companyId": "aaaaa",
"type": "employee",
"employeeId": "jane-doe",
"employeeName": "Jane Doe",
"employeeEmail": "jane.doe#example.com",
"departmentId": "hr",
"departmentName": "HR Department",
"isDepartmentHead": true
}
Then last, here's the query to get the data you need.
SELECT
c.employeeId,
c.employeeName,
c.employeeEmail,
c.isDepartmentHead
FROM c
WHERE
c.companyId = "company-a" AND
c.type = "employee" AND
c.departmentId = "hr"
Suppose I have the following data in my container:
{
"id": "1DBF704E-1623-4844-DC86-EFA729A5C048",
"firstName": "Wylie",
"lastName": "Ramsey",
"country": "AZ",
"city": "Tucson"
}
I use the field "id" as the item id and the field "country" as the partition key. When I query on a specific partition key:
SELECT * FROM c WHERE c.country = "AZ"
(get all the people in "AZ")
Should I add "country" as an index or I will get it by default, since I declered "country" as my partition key?
Is there a diference when using the SDK (meaning: adding the new PartitionKey("AZ") option and then sending the query as mentioned above)?
I created a collection with 50,000 records and disabled indexing on all properties.
Indexing policy:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [], // Included nothing
"excludedPaths": [
{
"path": "/\"_etag\"/?"
},
{
"path": "/*" // Exclude all paths
}
]
}
Querying by id cost 2.85 RUs.
Querying by PartitionKey cost 580 RUs.
Indexing policy with PartitionKey (country) added:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/country/*"
}
],
"excludedPaths": [
{
"path": "/\"_etag\"/?"
},
{
"path": "/*" // Exclude all paths
}
]
}
Adding an index on the PartitionKey brought it down to 2.83 RUs.
So the answer to this is yes: if you have disabled the default indexing policy and need to query by partition key, then you should add an index for it.
In my opinion, it's good practice to query with the partition key in the Cosmos DB SQL API; the official documentation covers this.
By the way, the Cosmos DB SQL API indexes all properties by default. If you'd like to override the default setting and customise the indexing policy, the indexing policy documentation provides more details.
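For reference, a customised policy uses the same shape shown above. The default policy, which indexes every property and excludes only the system _etag property, looks roughly like this; you would override it by adding your own included and excluded paths:
{
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [
        {
            "path": "/*" // index every property by default
        }
    ],
    "excludedPaths": [
        {
            "path": "/\"_etag\"/?" // system property, not useful to index
        }
    ]
}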
The following json represents two documents in a Cosmos DB container.
How can I write a query that gets any document that has an item with an id of item_1 and a value of bar?
I've looked into ARRAY_CONTAINS, but can't get it to work with arrays inside arrays.
I've also tried some things with any. Although I can't seem to find any documentation on how to use it, any seems to be a valid function, as I do get syntax highlighting in the Cosmos DB explorer in the Azure Portal.
For the any function I tried things like SELECT * FROM c WHERE c.pages.any(p, p.items.any(i, i.id = "item_1" AND i.value = "bar")).
The id fields are unique, so if it's easier to find any document that contains an object with the right id and value, that would be fine too.
[
{
"type": "form",
"id": "form_a",
"pages": [
{
"name": "Page 1",
"id": "page_1",
"items": [
{
"id": "item_1",
"value": "foo"
}
]
}
]
},
{
"type": "form",
"id": "form_b",
"pages": [
{
"name": "Page 1",
"id": "page_1",
"items": [
{
"id": "item_1",
"value": "bar"
}
]
}
]
}
]
I think a JOIN combined with a WHERE clause can handle arrays within arrays. Please test the SQL below:
SELECT c.id FROM c
join pages in c.pages
where array_contains(pages.items,{"id": "item_1","value": "bar"},true)
Output:
[
    {
        "id": "form_b"
    }
]
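An alternative sketch that joins through both levels of nesting and filters on the item fields directly (equivalent for this data):
SELECT c.id
FROM c
JOIN p IN c.pages
JOIN i IN p.items
WHERE i.id = "item_1" AND i.value = "bar"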
With the sample JSON shown below, I am trying to retrieve all documents that contain at least one Category (an array of objects wrapped underneath Categories) whose Text contains 'drink', using the following query, but the returned result is empty. Can someone help me get this right?
SELECT items.id
,items.description
,items.Categories
FROM items
WHERE ARRAY_CONTAINS(items.Categories.Category.Text, "drink")
{
"id": "1dbaf1d0-6549-11a0-88a8-001256957023",
"Categories": {
"Category": [{
"Type": "GS1",
"Id": "10000266",
"Text": "Stimulants/Energy Drinks Ready to Drink"
}, {
"Type": "GS2",
"Id": "10000266",
"Text": "Healthy Drink"
}]
}
},
Note: the JSON is a bit weird in that the array is wrapped by an object; this JSON was converted from XML, hence the result. So please assume I do not have any control over how this object is saved as JSON.
You need to flatten the document in your query to get the result you want by joining the array back to the main document. The query you want would look like this:
SELECT items.id, items.Categories
FROM items
JOIN Category IN items.Categories.Category
WHERE CONTAINS(LOWER(Category.Text), "drink")
However, because there is no concept of a DISTINCT query, this will return the document once for each Category item that contains the word "drink". So this query would produce your example document twice, like this:
[
{
"id": "1dbaf1d0-6549-11a0-88a8-001256957023",
"Categories": {
"Category": [
{
"Type": "GS1",
"Id": "10000266",
"Text": "Stimulants/Energy Drinks Ready to Drink"
},
{
"Type": "GS2",
"Id": "10000266",
"Text": "Healthy Drink"
}
]
}
},
{
"id": "1dbaf1d0-6549-11a0-88a8-001256957023",
"Categories": {
"Category": [
{
"Type": "GS1",
"Id": "10000266",
"Text": "Stimulants/Energy Drinks Ready to Drink"
},
{
"Type": "GS2",
"Id": "10000266",
"Text": "Healthy Drink"
}
]
}
}
]
This could be problematic and expensive if the Categories array holds a lot of Category items that have "drink" in them.
You can cut that down if you are only interested in a single Category by changing the query to:
SELECT items.id, Category
FROM items
JOIN Category IN items.Categories.Category
WHERE CONTAINS(LOWER(Category.Text), "drink")
This produces a more concise result in which only the id field is repeated and each matching Category item shows up once:
[{
"id": "1dbaf1d0-6549-11a0-88a8-001256957023",
"Category": {
"Type": "GS1",
"Id": "10000266",
"Text": "Stimulants/Energy Drinks Ready to Drink"
}
},
{
"id": "1dbaf1d0-6549-11a0-88a8-001256957023",
"Category": {
"Type": "GS2",
"Id": "10000266",
"Text": "Healthy Drink"
}
}]
Otherwise, you will have to filter the results when you get them back from the query to remove duplicate documents.
If it were me and I were building a production system with this requirement, I'd use Azure Search; there is documentation on hooking it up to DocumentDB.
If you don't want to do that, and we must live with the constraint that you can't change the shape of the documents, the only way I can think of to do this is to use a User Defined Function (UDF) like this:
function GetItemsWithMatchingCategories(categories, matchingString) {
    // Expects the Category array (e.g. items.Categories.Category) and a search string.
    if (Array.isArray(categories)) {
        var lowerMatchingString = matchingString.toLowerCase();
        for (var index = 0; index < categories.length; index++) {
            var category = categories[index];
            var categoryName = category.Text.toLowerCase();
            // Case-insensitive substring match on the category text.
            if (categoryName.indexOf(lowerMatchingString) >= 0) {
                return true;
            }
        }
    }
    // No category matched (or the input was not an array).
    return false;
}
Note: the code above was modified by the asker after actually trying it out, so it's somewhat tested.
You would use it with a query like this:
SELECT * FROM items WHERE udf.GetItemsWithMatchingCategories(items.Categories.Category, "drink")
Also, note that this will result in a full table scan (unless you can combine it with other criteria that can use an index) which may or may not meet your performance/RU limit constraints.
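To illustrate the shape of that combination (contrived, since here the extra criterion is just the document id from your sample), adding an indexable predicate alongside the UDF lets the index narrow the candidate set before the UDF runs:
SELECT * FROM items
WHERE items.id = "1dbaf1d0-6549-11a0-88a8-001256957023"
AND udf.GetItemsWithMatchingCategories(items.Categories.Category, "drink")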
I'm using Symfony2 with Doctrine ODM (MongoDB). I need to create an Elasticsearch index; that by itself is not hard. My Product collection structure (abridged):
{
"_id": ObjectId("5239656f60663de206b1053e"),
"category": {
"$ref": "Category",
"$id": ObjectId("50cb515760663d3577000043"),
"$db": "<dbName>"
},
"name": "<productName>"
}
Category collection:
{
"_id": ObjectId("50cb515760663d3577000043"),
"name": "<categoryName>"
}
The category field in the Product collection has three sub-fields, which are created by Doctrine. I need to create an index comprising only productName and categoryName.
How do I do that?