I am trying to add vertices (and eventually edges) to a local Cosmos DB graph using the Gremlin console. I've been following this tutorial. However, whenever I try to add a vertex, I get an error about the partition key.
My query:
g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('pk', 'pk')
The error:
ActivityId : cd07f7be-d824-40fa-8137-0f2726a9c26d
ExceptionType : GraphRuntimeException
ExceptionMessage :
Gremlin Query Execution Error: Cannot add a vertex where the partition key property has value 'null'.
Source : Microsoft.Azure.Cosmos.Gremlin.Core
GremlinRequestId : cd07f7be-d824-40fa-8137-0f2726a9c26d
Context : graphcompute
Scope : graphcomp-execquery
GraphInterOpStatusCode : GraphRuntimeError
HResult : 0x80131500
Type ':help' or ':h' for help.
Display stack trace? [yN]
How can I fix my query and insert the data?
I made the same mistake of mixing the two values up. When you add your Azure database, you have to specify a partition key; I picked '/client'.
Now when I do my query, I have to add this property:
.property('client', 'pk')
-- the first value has to be the partition key property name itself ('client' here), and the second one is the partition key value ('pk' is just a placeholder, short for 'partitionKey'). Then in your document you have to add a property:
client: 'TESTCLIENTID'
But again, a lot of this depends on what your partitioning strategy is, which is something you have to decide upfront for each collection; this video from Azure explains things in more detail quite well.
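Putting that together with the query from the question, a corrected insert might look like this (a sketch assuming the collection was created with a '/client' partition key path; 'TESTCLIENTID' is a placeholder value):
g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('client', 'TESTCLIENTID')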
I don't have a CosmosDB test environment, but there's a public sample project:
Getting started with Azure Cosmos DB: Graph API
Looks like you have to add a pk property (which most likely means "partition key", and should be configurable somehow).
You don't need to add a partition key in your g.addV. I looked at what the "Quick start" tab creates for you in the portal, which is the "graphdb/mycollection" database/collection. You can create your own, which works fine without specifying a partition key when adding a vertex... Just specify
Partition key
/_partitionKey
and check the checkbox
My partition key is larger than 100 bytes
That solved it for me anyway.
I had mixed up the partition key label and value. Reversing these fixed my issue.
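To illustrate with a hypothetical '/client' partition key path, the mixed-up and corrected calls look like this:
//wrong: the partition key value is used as the property name
g.addV('person').property('TESTCLIENTID', 'client')
//right: the property name matches the partition key path, the value comes second
g.addV('person').property('client', 'TESTCLIENTID')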
On an Azure Cosmos DB Gremlin instance,
I have 2 vertices A and B linked by an edge E.
Both vertices have a 'name' property.
I'd like to run a query which will take A's name and put it in B.
When I run
g.V("AId").as("a").oute().inv().hasLabel('B').property("name",select('a').values('name'))
I get the following error:
GraphRuntimeException ExceptionMessage : Gremlin Query Execution Error: Cannot create ValueField on non-primitive type GraphTraversal.
It looks like the select operator is not correctly used.
Thank you for your help
EDITED based on discussion in comments
You have oute and inv in lower case. In general, Gremlin steps use camelCase naming, such as outE and inV (outside of specific GLVs), but in the comments it was mentioned that CosmosDB will accept all-lowercase step names. Assuming, therefore, that this is not the issue here, the query as written looks fine in terms of generic Gremlin. The example below was run using TinkerGraph, and uses the same select mechanism to pick the property value.
gremlin> g.V(3).as("a").outE().inV().has('code','LHR').property("name",select('a').values('city'))
==>v[49]
gremlin> g.V(49).values('name')
==>Austin
What you are observing may be specific to CosmosDB and it's probably worth contacting their support folks to double check.
I have a Cosmos DB account and a Synapse workspace linked. Everything almost works when using Synapse to create SQL views over the Cosmos data.
In Cosmos I have one data set with a property that is always a zero. I know it is actually a decimal because it is a price and future data is likely to contain decimal prices.
In Synapse I need to project this data into an SQL view where that column is correctly a decimal(19,4).
When I run an OPENROWSET query against the Cosmos data and attempt to specify the type for this property:
select *
from OPENROWSET(
'CosmosDb',
'account=myaccount;database=myDatabase;region=theRegion;key=xxxxxxxxxxxxxxx',
[myCollection])
with (
[salesPrice] float '$.salesPrice')
as testQuery
I get the error:
Column 'salesPrice' of type 'FLOAT' is not compatible with external data type 'Parquet physical type: INT64', please try with 'BIGINT'.
Obviously a BIGINT here is going to fail as soon as I get a true decimal price.
I think the Parquet physical type is being inferred as INT64 because in Cosmos all the values for this column are integer zeros. I guess more generally it would be the same problem if the Cosmos property contained only integer values.
How can I force the type of salesPrice to be a decimal or float?
(I don't want to get sidetracked here on float vs decimal for monetary values; I understand the difference, and this error happens either way.)
UPDATE
This problem also manifests itself in another way, even without specifying a schema in OPENROWSET.
In a new CosmosDb collection insert a document such as:
{
"myid" : 1,
"price" : 0
}
If I wait a minute or so I can query this document from Synapse with:
select *
from OPENROWSET(
'myCosmosDb',
'account=myAccount;database=myDatabase;region=myRegion;key=xxxxxxxxxxxxxxxxxxx',
[myCollection])
as testQuery;
and I get the expected results.
Now add a second document:
{
"myid" : 1,
"price" : 1.1
}
and re-run the query and I get the same error:
Column 'price' of type 'FLOAT' is not compatible with external data type 'Parquet physical type: INT64', please try with 'BIGINT'
Is there any way to work around or prevent these kinds of errors?
How about storing the values as strings in the document, like this:
{
"myid" : "1",
"price" : "1.1"
}
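If you go the string route, the view side can cast the values back; a sketch under the assumption that 'price' is now stored as a string (TRY_CAST returns NULL for malformed values):
select try_cast(testQuery.[price] as decimal(19,4)) as price
from OPENROWSET(
'myCosmosDb',
'account=myAccount;database=myDatabase;region=myRegion;key=xxxxxxxxxxxxxxxxxxx',
[myCollection])
with (
[price] varchar(32) '$.price')
as testQuery;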
When querying Cosmos DB, there is an option to set enableCrossPartitionQuery to true.
I am wondering what happens if I do not set it? Which partition will be used for the query?
Thanks
If your collection is partitioned, then query, update, and delete operations need a partition key setting.
If you don't set one, you may see an error complaining about the missing partition key.
For this situation, if you don't want to set any partition key, or you don't know which partition the row data belongs to, you can set enableCrossPartitionQuery = true to avoid the error. Setting enableCrossPartitionQuery = true means the request will scan all the partitions to filter the data, so of course query performance is bound to decline.
BTW, if your data size is small, I think the impact may be small. However, if the data size is large, I suggest trying your best to avoid setting this property.
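For reference, this is roughly how the flag is passed in the classic Microsoft.Azure.DocumentDB (.NET) SDK; the account, database, and collection names below are placeholders:
using System;
using System.Linq;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "myAuthKey");
// fan the query out across all partitions instead of targeting a single one
var options = new FeedOptions { EnableCrossPartitionQuery = true };
var results = client.CreateDocumentQuery<dynamic>(
UriFactory.CreateDocumentCollectionUri("myDatabase", "myCollection"),
"SELECT * FROM c WHERE c.status = 'active'",
options).ToList();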
I tested the sample project https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started.git and indeed it doesn't require a partition key when the container is partitioned.
However, based on the statements in the Cosmos DB REST API documentation, and from my test of the Java SDK, the partition key is required when querying a partitioned container. Anyway, what I want to say is that if you meet an error indicating a missing partition key, you can try adding enableCrossPartitionQuery = true to solve it. Mostly, I still suggest providing a partition key for query performance.
I'm using a partitioned CosmosDB graph collection.
Is there a simple way to "move" a node (and its outbound links) from one partition to another? Can this be done atomically?
I tried this: (the partition key is '/tenantId')
//create the node
g.addV('testme').property('id','id123').property('tenantId','mytenant1')
//...create more nodes and edges...
//change node's partition key
g.V('id123').has('tenantId','mytenant1').property('tenantId','mytenant2')
// ^^^ fails:
// GraphRuntimeException ExceptionMessage :
// Gremlin Query Execution Error:
// Update Vertex Properties: The partition property cannot be updated
As the error explains, the partition key value cannot be updated. It's immutable.
However, if you delete the document and add it with an updated partition key value then that will work. Keep in mind that whatever you code in order to do that should have some rollback logic in it in case the new insertion fails.
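A sketch of that delete-and-recreate flow in plain Gremlin, reusing the values from the question (copying any remaining properties and rebuilding the edges is left out here):
//read the vertex first so its properties and edges can be recreated later
g.V('id123').has('tenantId','mytenant1').valueMap(true)
//drop the old vertex (this also removes its edges)
g.V('id123').has('tenantId','mytenant1').drop()
//recreate it under the new partition key value
g.addV('testme').property('id','id123').property('tenantId','mytenant2')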
I'm trying to get the total number of items in a Dynamodb table. Given below is the C# code that I use.
var context = this.DynamoDBContext;
var someClassResults = context.Scan<SomeClass>(null);
int itemCount = someClassResults.Count();
When I try to execute this, it throws the error below:
"Unable to convert [Amazon.DynamoDBv2.DocumentModel.Document] of type Amazon.DynamoDBv2.DocumentModel.Document to System.String"
Is it a mismatch between a property's data type in "SomeClass" and the actual item's properties in the DB? Can someone please help?
Found the issue. I use two different programs: one to insert data into DynamoDB and one to read data from it. Both of these programs are supposed to use the same "SomeClass", but unfortunately one of the properties was altered in the "SomeClass" that I use to read the data (or run the count query) from DynamoDB. Once I fixed the data type mismatch, it worked fine.
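As a hypothetical illustration of that kind of mismatch (the table and property names here are invented): if the writer stores an attribute as a nested object but the reader's class declares it as a string, the scan fails with exactly this conversion error.
using Amazon.DynamoDBv2.DataModel;

[DynamoDBTable("SomeTable")]
public class SomeClass
{
    [DynamoDBHashKey]
    public string Id { get; set; }

    // the writer side stored this attribute as a nested map (a DynamoDB Document);
    // declaring it as string on the reader side triggers:
    // "Unable to convert ... Document to System.String"
    public string Details { get; set; }
}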