Can we use JSON to search in Azure Cognitive Search like MongoDB? - azure-cosmosdb

I am working with the Cosmos DB SQL API and I want to run JSON-formatted query strings. In MongoDB you can run queries like:
{
  "$and": [
    {
      "images": { "$exists": true },
      "$where": "this.something.length > 1"
    },
    { "location": "core" }
  ]
}
Is there a way to run similar queries in Azure Cognitive Search and Cosmos DB?

For Cosmos DB for NoSQL, JSON queries are supported through JSON expressions, and there is a small sample for the Cosmos DB for NoSQL indexer: a projection with a JSON expression query:
SELECT VALUE { "id":c.id, "Name":c.contact.firstName, "Company":c.company, "_ts":c._ts } FROM c WHERE c._ts >= #HighWaterMark ORDER BY c._ts
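To make the shape of that projection concrete, here is a small Python sketch that applies the same filter, projection, and ordering to in-memory documents. The sample documents and the high_water_mark value are made up for illustration; in the real indexer query, #HighWaterMark stays a placeholder that the service substitutes.

```python
# Hypothetical sample documents standing in for Cosmos DB items.
docs = [
    {"id": "1", "contact": {"firstName": "Ada"}, "company": "Initech", "_ts": 100},
    {"id": "2", "contact": {"firstName": "Lin"}, "company": "Globex", "_ts": 50},
]
high_water_mark = 60  # stand-in for the #HighWaterMark placeholder

# Filter on _ts, reshape each document, then order by _ts -
# the same three steps the SELECT VALUE query performs.
projected = sorted(
    (
        {
            "id": d["id"],
            "Name": d["contact"]["firstName"],
            "Company": d["company"],
            "_ts": d["_ts"],
        }
        for d in docs
        if d["_ts"] >= high_water_mark
    ),
    key=lambda d: d["_ts"],
)
print(projected)
```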

Related

Azure Data Factory Cosmos DB SQL API: 'DateTimeFromParts' is not a recognized built-in function name

I am using a Copy Activity in my Data Factory (V2) to query Cosmos DB (NoSQL/SQL API). I have a WHERE clause that builds a datetime from parts using the DateTimeFromParts function. This query works fine when I execute it in the Cosmos DB Data Explorer query window, but when I use the same query from my Copy Activity I get the following error:
"message":"'DateTimeFromParts' is not a recognized built-in function name."}]}
ActivityId: ac322e36-73b2-4d54-a840-6a55e456e15e, documentdb-dotnet-sdk/2.5.1 Host/64-bit
I am trying to convert a string attribute like '20221231' (which translates to Dec 31, 2022) to a date so I can compare it with the current date. I use DateTimeFromParts to build the date. Is there another way to convert '20221231' to a valid date?
SELECT * FROM c WHERE
DateTimeFromParts(StringToNumber(LEFT(c.userDate, 4)), StringToNumber(SUBSTRING(c.userDate, 4, 2)), StringToNumber(RIGHT(c.userDate, 2))) < GetCurrentDateTime()
I suspect the error might be because the documentdb-dotnet-sdk is an old version. Is there a way to specify which SDK to use in the activity?
I tried to repro this and got the same error.
Instead of changing the format of the userDate column with the DateTimeFromParts function, try converting the GetCurrentDateTime() output to the userDate column's format.
Workaround query:
SELECT * FROM c
WHERE c.userDate < REPLACE(LEFT(GetCurrentDateTime(), 10), '-', '')
Input data
[
  {
    "id": "1",
    "userDate": "20221231"
  },
  {
    "id": "2",
    "userDate": "20211231"
  }
]
Output data
[
  {
    "id": "2",
    "userDate": "20211231"
  }
]
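The workaround is safe because yyyyMMdd strings sort lexicographically in the same order as the dates they encode, so a plain string comparison matches a date comparison. A small Python sketch (the helper names are made up for illustration) showing the two approaches agree:

```python
from datetime import datetime

def before_today(user_date: str) -> bool:
    # yyyyMMdd strings sort lexicographically in chronological order,
    # so comparing strings is equivalent to comparing the dates.
    today = datetime.utcnow().strftime("%Y%m%d")
    return user_date < today

def before_today_parsed(user_date: str) -> bool:
    # Same check done by actually parsing the string into a date.
    return datetime.strptime(user_date, "%Y%m%d").date() < datetime.utcnow().date()

# Both approaches agree for well-formed yyyyMMdd values.
print(before_today("20211231") == before_today_parsed("20211231"))
```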
Apologies for the slow reply here. Holidays slowed getting an answer for this.
There is a workaround that allows you to use SDK v3, which then gives you access to the DateTimeFromParts() system function released in .NET SDK v3.13.0.
Option 1: Use AAD authentication (i.e., a Service Principal, or a System- or User-assigned Managed Identity) for the Linked Service object in ADF to Cosmos DB. This automatically picks up the .NET SDK v3.
Option 2: Modify the linked service template. First, click Manage in the ADF designer, then click Linked Services, select the connection, and click the {} icon to open the JSON template; you can then set useV3 to true. Here is an example:
{
  "name": "<CosmosDbV3>",
  "type": "Microsoft.DataFactory/factories/linkedservices",
  "properties": {
    "annotations": [],
    "type": "CosmosDb",
    "typeProperties": {
      "useV3": true,
      "accountEndpoint": "<https://sample.documents.azure.com:443/>",
      "database": "<db>",
      "accountKey": {
        "type": "SecureString",
        "value": "<account key>"
      }
    }
  }
}

Not able to connect to Cosmos DB Table API using Data Factory

The Cosmos Table API key is stored as the secret "cosmostablekey".
Created another secret stored in Key Vault as below.
{
  "name": "CosmosDbSQLAPILinkedService",
  "properties": {
    "type": "CosmosDb",
    "typeProperties": {
      "connectionString": "AccountEndpoint=https://XXXXXXX.table.cosmos.azure.com:443/;Database=TablesDB",
      "accountKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "ls_cosmos_key",
          "type": "LinkedServiceReference"
        },
        "secretName": "cosmostablekey"
      }
    },
    "connectVia": {
      "referenceName": "AutoResolveIntegrationRuntime",
      "type": "IntegrationRuntimeReference"
    }
  }
}
When I tried to create the linked service, I used "key authentication" as the authentication type in ADF, and when I tested the connection I got the below error:
Error code
9082
Details
The CosmosDb key is in a wrong format.
The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Activity ID: f0c9c682-12de-4b53-95e9-7abe7ea722b7.
I am sure I copied the key string properly into Key Vault.
Used this for reference to connect to Cosmos DB from ADF:
microsoftdoctoconnectcosmosDB
Thanks for the quick help.
The issue got resolved.
I needed to reference only the cosmostablekey secret in the linked service for key authentication. Moreover, the endpoint needs to be specified as
https://XXXX.documents.azure.com:443/
instead of https://XXXX.table.azure.com:443/
It's working fine for me now.

How do I delete multiple records from an AWS Amplify GraphQL API?

I've made a GraphQL API in an AWS Amplify React Native app. The API contains the model Transaction. AWS Amplify provides CRUD operations out of the box, and I can delete a single transaction no problem.
However, I would like to delete all transactions that meet certain criteria. How do I delete multiple transactions using this stack (AWS Amplify + GraphQL API, React Native)?
You can send one request with a batch delete mutation, using listTransaction to get the filtered transactions first:
import { API, graphqlOperation } from "aws-amplify";

// Build one aliased delete per transaction so they all run in a single request
const txnMutation: any = transactions.map((txn, i) => {
  return `mutation${i}: deleteTransaction(input: {id: "${txn.id}"}) { id }`;
});

await API.graphql(
  graphqlOperation(`
    mutation batchMutation {
      ${txnMutation.join("\n")}
    }
  `)
);
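The trick above is GraphQL field aliasing: each deleteTransaction call gets a unique alias (mutation0, mutation1, ...) so all deletes execute in one request. A minimal Python sketch of just the string-building step (deleteTransaction matches the Amplify-generated mutation name; the HTTP call itself is omitted):

```python
def build_batch_delete(ids):
    # One aliased deleteTransaction field per id; aliases must be unique
    # within the mutation document.
    fields = "\n".join(
        f'mutation{i}: deleteTransaction(input: {{id: "{txn_id}"}}) {{ id }}'
        for i, txn_id in enumerate(ids)
    )
    return f"mutation batchMutation {{\n{fields}\n}}"

print(build_batch_delete(["a1", "b2"]))
```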

Spatial Indexing not working with ST_DISTANCE queries and '<'

Spatial indexing does not seem to be working on a collection that contains a document with GeoJSON coordinates. I've tried using the default indexing policy, which inherently provides spatial indexing on all fields.
I've tried creating a new Cosmos DB account, database, and collection from scratch, without any success getting the spatial indexing to work with an ST_DISTANCE query.
I've set up a simple collection with the following indexing policy:
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/\"location\"/?",
      "indexes": [
        {
          "kind": "Spatial",
          "dataType": "Point"
        },
        {
          "kind": "Range",
          "dataType": "Number",
          "precision": -1
        },
        {
          "kind": "Range",
          "dataType": "String",
          "precision": -1
        }
      ]
    }
  ],
  "excludedPaths": [
    {
      "path": "/*"
    },
    {
      "path": "/\"_etag\"/?"
    }
  ]
}
The document that I've inserted into the collection:
{
  "id": "document1",
  "type": "Type1",
  "location": {
    "type": "Point",
    "coordinates": [
      -50,
      50
    ]
  },
  "name": "TestObject"
}
The query that should return the single document in the collection:
SELECT * FROM f WHERE f.type = "Type1" and ST_DISTANCE(f.location, {'type': 'Point', 'coordinates':[-50,50]}) < 200000
is not returning any results. If I explicitly query without using the spatial index, like so:
SELECT * FROM f WHERE f.type = "Type1" and ST_DISTANCE({'type': 'Point', 'coordinates':[f.location.coordinates[0],f.location.coordinates[1]]}, {'type': 'Point', 'coordinates':[-50,50]}) < 200000
it returns the document as it should, but it doesn't take advantage of the indexing, which I will need because I will be storing a lot of coordinates.
This seems to be the same issue referenced here. If I add a second document far away and change the '<' to '>' in the first query it works!
I should mention this is only occurring on Azure. When I use the Azure Cosmos Db Emulator it works perfectly! What is going on here?! Any tips or suggestions are much appreciated.
UPDATE: I found out why the query works on the Emulator and not on Azure - the database on the Emulator doesn't have provisioned (shared) throughput among its collections, while I created the database in Azure with provisioned throughput to keep costs down (i.e., 4 collections sharing 400 RU/s). I created a database without provisioned throughput in Azure, and the query works with spatial indexing! I will log this issue with Microsoft to see if there is a reason for this behavior.
Thanks for following up with additional details regarding a fixed collection being the solution, but I did want to get some additional information.
The Cosmos DB Emulator now supports containers:
By default, you can create up to 25 fixed-size containers (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers, using the Azure Cosmos Emulator. By modifying the PartitionCount value, you can create up to 250 fixed-size containers or 50 unlimited containers, or any combination of the two that does not exceed 250 fixed-size containers (where one unlimited container = 5 fixed-size containers). However, it's not recommended to run the Emulator with more than 200 fixed-size containers, because of the overhead this adds to disk I/O operations, which results in unpredictable timeouts when using the endpoint APIs.
So, I want to see which version of the Emulator you were using. The current version is azure-cosmosdb-emulator-2.2.2.
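For reference, ST_DISTANCE returns the great-circle distance in meters, so the < 200000 predicate asks for points within 200 km of the query point. A client-side haversine sketch (an approximation assuming a spherical Earth; the helper name is made up) can be used to double-check which documents the query should return:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    # Great-circle distance in meters using the haversine formula,
    # under a spherical-Earth approximation (mean radius 6,371 km).
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# The stored point and the query point from the question are identical,
# so the distance is 0 m, which is < 200000 - the document should match.
print(haversine_m(-50, 50, -50, 50))
```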

Int value is different when querying via C# vs azure portal query explorer

I am seeing the value of an Int64 field differ between a query via the C# API and the Query Explorer in the Azure portal.
Document
[
  {
    "_id": "15072358-f9eb-4e92-bde1-18e038484042",
    "messageId": "15072358-f9eb-4e92-bde1-18e038484042",
    "async": true,
    "sequence": 0,
    "sender": "me#direct.example.org",
    "recipient": "you#direct.example.org",
    "transmittedTicks": 636352784545156500,
    "receivedTicks": 636352784546356500,
    "processed": true,
    "id": "15072358-f9eb-4e92-bde1-18e038484042",
    "_rid": "un4kAO--TAABAAAAAAAAAA==",
    "_self": "dbs/un4kAA==/colls/un4kAO--TAA=/docs/un4kAO--TAABAAAAAAAAAA==/",
    "_etag": "\"00005c09-0000-0000-0000-5963c8bc0000\"",
    "_attachments": "attachments/",
    "_ts": 1499711676
  }
]
C# using the DocumentDB NuGet package:
var query = client.CreateDocumentQuery<Expectation>(documentUri)
    .OrderBy(i => i.transmittedTicks)
    .Select(i => i.transmittedTicks)
    .AsDocumentQuery();
results in the first value as 636352784545156480
SQL Query Explorer
SELECT c.transmittedTicks FROM c order by c.transmittedTicks
results in the first value as 636352784545156500
This reads like a precision issue, and I see there have been similar issues in the past. Are there still outstanding issues, or is this expected behavior?
Are there still outstanding issues, or is this expected behavior?
I can also reproduce the issue you mentioned. I will report it to the Azure Cosmos DB team. If there is any update, I will post it here.
You could also raise an issue on GitHub or give feedback to the Azure Cosmos DB team.
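The observed rounding is consistent with IEEE-754 double precision: 636352784545156500 needs more than the 53 bits of mantissa a 64-bit double provides, so any layer that passes the JSON number through a double (the C# path appears to do so here) rounds it to the nearest representable value, which is exactly the ...480 the API returned. A quick Python check of that rounding:

```python
# Integers above 2**53 cannot all be represented exactly as IEEE-754 doubles.
original = 636352784545156500
as_double = float(original)  # force the value through a 64-bit double

print(int(as_double))        # 636352784545156480 - the value the C# query showed
print(original > 2**53)      # True: outside the exact-integer range of a double
```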
