CosmosDB SQL query syntax for if statement

I'm trying to find the correct syntax for doing an if/case type of statement in an Azure Cosmos DB SQL query. Here is the document that I have:
{
  "CurrentStage": "Stage2",
  "Stage1": {
    "Title": "Stage 1"
  },
  "Stage2": {
    "Title": "Stage 2"
  },
  "Stage3": {
    "Title": "Stage 3"
  }
}
What I want to do is create a query that looks something like
Select c.CurrentStage,
  if (CurrentStage == 'Stage1') { c.Stage1.Title }
  else if (CurrentStage == 'Stage2') { c.Stage2.Title }
  else if (CurrentStage == 'Stage3') { c.Stage3.Title } as Title
From c
Obviously the document and query that I have are a lot more complicated than this, but this gives you the general idea of what I'm trying to do. I need one of the fields in the select to vary based on other fields in the document.

While the UDF suggested by Jay Gong may be more comfortable if you need to reuse this function a lot, you can do this without a UDF using the ternary operator syntax.
For example:
select
  c.CurrentStage = 'Stage1' ? c.Stage1.Title
  : c.CurrentStage = 'Stage2' ? c.Stage2.Title
  : c.CurrentStage = 'Stage3' ? c.Stage3.Title
  : 'your default value should you wish one'
  as title
from c
Advice: the plain SQL solution has the benefit over a UDF that it is self-contained and does not require setting up logic on the server before executing. Also, note that versioning the logic is simpler if it is stored entirely in client apps, not shared across client and server as in the UDF case. UDFs do have their uses (e.g. heavy reuse across queries), but usually it's better to do without.

I suggest using a User Defined Function in Cosmos DB.
UDF code:
function stage(c){
  switch(c.CurrentStage){
    case "Stage1": return c.Stage1.Title;
    case "Stage2": return c.Stage2.Title;
    case "Stage3": return c.Stage3.Title;
    default: return "";
  }
}
SQL:
Select c.CurrentStage,
udf.stage(c) as Title
From c
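For completeness: the UDF has to be registered on the container before this query will run. Here is a minimal sketch of doing that from the JavaScript SDK (assuming @azure/cosmos; the endpoint, key, database, and container names are placeholders):

const { CosmosClient } = require("@azure/cosmos");

async function registerStageUdf() {
  const client = new CosmosClient({ endpoint: "https://<account>.documents.azure.com", key: "<key>" });
  const container = client.database("<database>").container("<container>");
  // Register once; afterwards the function is callable in queries as udf.stage(c).
  await container.scripts.userDefinedFunctions.create({
    id: "stage",
    body: "function stage(c){ switch(c.CurrentStage){ case 'Stage1': return c.Stage1.Title; case 'Stage2': return c.Stage2.Title; case 'Stage3': return c.Stage3.Title; default: return ''; } }"
  });
}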
Hope it helps you.

Related

DynamoDB transactional insert with multiple conditions (PK/SK attribute_not_exists and SK attribute_exists)

I have a table with PK (String) and SK (Integer) - e.g.
PK_id                 SK_version   Data
-------------------------------------------------------
c3d4cfc8-8985-4e5...  1            First version
c3d4cfc8-8985-4e5...  2            Second version
I can do a conditional insert to ensure we don't overwrite the PK/SK pair using ConditionalExpression (in the GoLang SDK):
putWriteItem := dynamodb.Put{
    TableName:           aws.String("example_table"),
    Item:                itemMap,
    ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version)"),
}
However I would also like to ensure that the SK_version is always consecutive but don't know how to write the expression. In pseudo-code this is:
putWriteItem := dynamodb.Put{
    TableName:           aws.String("example_table"),
    Item:                itemMap,
    ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version) **AND attribute_exists(SK_version = :SK_prev_version)**"),
}
Can someone advise how I can write this?
in SQL I'd do something like:
INSERT INTO example_table (PK_id, SK_version, Data)
SELECT {pk}, {sk}, {data}
WHERE NOT EXISTS (
    SELECT 1
    FROM example_table
    WHERE PK_id = {pk}
      AND SK_version = {sk}
)
AND EXISTS (
    SELECT 1
    FROM example_table
    WHERE PK_id = {pk}
      AND SK_version = {sk} - 1
)
Thanks
A condition check is applied to a single item; it cannot span multiple items. In other words, you simply need multiple condition checks. DynamoDB has a transactWriteItems API which performs multiple condition checks, along with writes/deletes, atomically. The code below is in Node.js.
const previousVersionCheck = {
  TableName: 'example_table',
  Key: {
    PK_id: 'prev_pk_id',
    SK_version: 'prev_sk_version'
  },
  ConditionExpression: 'attribute_exists(PK_id)'
}

const newVersionPut = {
  TableName: 'example_table',
  Item: {
    // your item data
  },
  ConditionExpression: 'attribute_not_exists(PK_id)'
}

await documentClient.transactWrite({
  TransactItems: [
    { ConditionCheck: previousVersionCheck },
    { Put: newVersionPut }
  ]
}).promise()
The transaction has two operations: one validates against the previous version, and the other is a conditional write. If either condition check fails, the whole transaction fails.
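Worth knowing on the error path: when a condition check fails, the call rejects with a TransactionCanceledException. A rough sketch of handling it (the exact error shape varies between SDK versions, so treat this as illustrative):

try {
  await documentClient.transactWrite({
    TransactItems: [
      { ConditionCheck: previousVersionCheck },
      { Put: newVersionPut }
    ]
  }).promise()
} catch (err) {
  if (err.code === 'TransactionCanceledException') {
    // a condition check failed: the previous version is missing,
    // or the new version already exists -- surface this as a conflict
  } else {
    throw err
  }
}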
You are hitting your head on some of the differences between a SQL and a no-SQL database. DynamoDB is, of course, a no-SQL database. It does not, out of the box, support optimistic locking. I see two straightforward options:
Use a software layer to give you locking on your DynamoDB table. This may or may not be feasible depending on how often updates are made to your table. How fast 'versions' are generated and the maximum time your application can be gated on the lock will likely tell you if this can work for you. I am not familiar with Go, but the Java API supports this. Again, this isn't a built-in feature of DynamoDB. If there is no Go equivalent, you could use the technique described in the link to 'lock' the table for updates. Generally speaking, locking a no-SQL DB isn't a typical pattern, as it isn't what these databases were created to do (part of which is achieving large scale on unstructured documents to allow fast access to many consumers at once).
Stop using an incrementor to guarantee uniqueness. Typically, incrementors are frowned upon in DynamoDB, in part due to the lack of intrinsic support for them and in part because, given how DynamoDB shards data, you don't want a lot of similarity between records. Using a UUID will solve the uniqueness problem, but if you are porting an existing application that means more changes to the elements that create that ID and to the code that reads it (perhaps adding a creation-time field so you can tell which is newest, or prepending or appending an epoch time to the UUID to do the same). Here is a pertinent link to a SO question explaining why to use UUIDs instead of incrementing integers.
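To make option 2 concrete, here is a rough Node.js sketch of what the item shape might become (the SK_created and Version_id field names are invented for illustration):

const { randomUUID } = require('crypto')

const item = {
  PK_id: 'c3d4cfc8-8985-4e5...',  // same partition key as before
  SK_created: Date.now(),         // sort key: creation time instead of a counter
  Version_id: randomUUID(),       // unique without any cross-writer coordination
  Data: 'First version'
}
// Reading the newest version is then a Query on PK_id with
// ScanIndexForward: false and Limit: 1.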
Based on Hung Tran's answer, here is a Go example:
checkItem := dynamodb.TransactWriteItem{
    ConditionCheck: &dynamodb.ConditionCheck{
        TableName:           aws.String("example_table"),
        ConditionExpression: aws.String("attribute_exists(pk_id) AND attribute_exists(version)"),
        Key:                 map[string]*dynamodb.AttributeValue{"pk_id": {S: id}, "version": {N: prevVer}},
    },
}
putItem := dynamodb.TransactWriteItem{
    Put: &dynamodb.Put{
        TableName:           aws.String("example_table"),
        ConditionExpression: aws.String("attribute_not_exists(pk_id) AND attribute_not_exists(version)"),
        Item:                data,
    },
}
writeItems := []*dynamodb.TransactWriteItem{&checkItem, &putItem}
_, err := db.TransactWriteItems(&dynamodb.TransactWriteItemsInput{TransactItems: writeItems})
if err != nil {
    // either condition check failing cancels the whole transaction
    panic(err) // handle appropriately in real code
}

Translate Elastic Search numeric field to a text value

I have an Elastic Search cluster with a lot of nice data, that I have created some nice Kibana dashboards for.
For the next level I decided to take a look at scripted fields to make some of the dashboards even nicer.
I want to translate some of the numeric fields into more easily understandable text values. As an example of what I want to do and what I have tried, I will use the HTTP response status code field, which most will understand quite easily but which also illustrates the problem.
We log the numeric status code (200, 201, 302, 400, 404, 500 etc.) I can create a data table visualization that tells me the count for each of these status codes. But I would like to display the text reason in my dashboard.
I can create a painless script with a lot of IF statements like this:
if (doc['statuscode'].value == 200) {return "OK";}
if (doc['statuscode'].value == 201) {return "Created";}
if (doc['statuscode'].value == 400) {return "Bad Request";}
return doc['statuscode'].value;
But that isn't very nice, I think.
Since I will most likely have about 150 different values and that list won't change very often, I can live with maintaining a static map. But I haven't found any examples of implementing a map or dictionary in Painless scripting.
I was thinking of implementing something like this:
Map reasonMap;
reasonMap[200] = 'OK';
reasonMap[201] = 'Created';
def reason = reasonMap[doc['statuscode'].value];
if (reason != null) {
  return reason;
}
return doc['statuscode'].value;
I haven't been able to make this code work though. The question is also whether this will perform well enough for a map with up to 150 values.
Thanks
EDIT
After some trial and error... and a lot of googling, this is what I came up with that works (notice that the key needs to start with a character and not a number):
def reasonMap = [
  's200': 'OK',
  's201': 'Created'
];
def key = 's' + doc['statuscode'].value;
def reason = reasonMap[key];
if (reason != null) {
  return reason;
}
return doc['statuscode'].value;
Should it be
def reason = reasonMap[doc['statuscode'].value];
It will perform well with a Map of 150 values.

Gremlin graph traversal backtracking

I have some code that generates gremlin traversals dynamically based on a query graph that a user supplies. It generates gremlin that relies heavily on backtracking. I've found some edge cases that are generating gremlin that doesn't do what I expected it to, but I also can't find anything online about the rules surrounding the usage of some of these pipes (like 'as' and 'back' in this case). An example of one of these edge cases:
g.V("id", "some id").as('1').inE("edgetype").outV.has("#class", "vertextype").as('2').inE("edgetype").outV.has("#class", "vertextype").filter{(it.getProperty("name") == "Bob")}.outE("edgetype").as('target').inV.has("id", "some id").back('2').outE("edgetype").inV.has("id", "some other id").back('target')
The goal of the traversal is to return the edges that were 'as'd as 'target'. When I run this against my OrientDB database it encounters a null exception. I narrowed down the exception to the final pipe in the traversal: back('target'). I'm guessing that the order of the 'as's and 'back's matters and that the as('target') went 'out of scope' as soon as back('2') was executed. So a few questions:
Is my understanding (that 'target' goes out of scope because I backtracked to before it was 'as'd) correct?
Is there anywhere I can learn the rules surrounding backtracking without having to reverse engineer the gremlin source?
Edit:
I found the relevant source code:
public static List<Pipe> removePreviousPipes(final Pipeline pipeline, final String namedStep) {
    final List<Pipe> previousPipes = new ArrayList<Pipe>();
    // walk backwards until the AsPipe with the given name is found
    for (int i = pipeline.size() - 1; i >= 0; i--) {
        final Pipe pipe = pipeline.get(i);
        if (pipe instanceof AsPipe && ((AsPipe) pipe).getName().equals(namedStep)) {
            break;
        } else {
            previousPipes.add(0, pipe);
        }
    }
    // strip the collected pipes off the end of the pipeline
    for (int i = 0; i < previousPipes.size(); i++) {
        pipeline.remove(pipeline.size() - 1);
    }
    if (pipeline.size() == 1)
        pipeline.setStarts(pipeline.getStarts());
    return previousPipes;
}
It looks like the BackFilterPipe does not remove any of the AsPipes between the back pipe and the named as pipe, but they must not be visible to pipes outside the BackFilterPipe. I guess my algorithm for generating the code is going to have to be smarter about detecting when the target is within a meta pipe.
Turns out the way I was using 'as' and 'back' made no sense. When you consider that everything between the 'back('2')' and the 'as('2')' is contained within the BackFilterPipe's inner pipeline, it becomes clear that meta pipes can't be overlapped like this: as('2'), as('target'), back('2'), back('target').

No Idea how to create a specific MapReduce in CouchDB

I've got 3 types of documents in my db:
{
  param: "a",
  timestamp: "t"
} (Type 1)

{
  param: "b",
  partof: "a"
} (Type 2)

{
  param: "b",
  timestamp: "x"
} (Type 3)
(I can't alter the layout...;-( )
Type 1 defines a start timestamp, it's like the start event. A Type 1 is connected to several Type 3 docs by Type 2 documents.
I want to get the latest Type 3 (highest timestamp) and the corresponding type 1 document.
How may I organize my Map/Reduce?
Easy. For highly relational data, use a relational database.
As user jhs stated before me, your data is relational, and if you can't change it, then you might want to reconsider using CouchDB.
By relational we mean that each "type 1" or "type 3" document in your data "knows" only about itself, while "type 2" documents hold the knowledge about the relation between documents of the other types. With CouchDB, you can only index by fields in the documents themselves, going at most one level deeper when querying with include_docs=true. Thus, what you asked for cannot be achieved with a single CouchDB query, because some of the desired data is two levels away from the requested document.
Here is a two-query solution:
{
  "views": {
    "param-by-timestamp": {
      "map": "function(doc) { if (doc.timestamp) emit(doc.timestamp, [doc.timestamp, doc.param]); }",
      "reduce": "function(keys, values) { return values.reduce(function(p, c) { return c[0] > p[0] ? c : p }) }"
    },
    "partof-by-param": {
      "map": "function(doc) { if (doc.partof) emit(doc.param, doc.partof); }"
    }
  }
}
You query it first with param-by-timestamp?reduce=true to get the latest timestamp in value[0] and its corresponding param in value[1], and then query again with partof-by-param?key="<what you got in the previous query>". If you need to fetch the full documents together with the timestamp and param, then you will have to play with include_docs=true and emit the correct _id values.
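Stitched together from a client, the two queries would look roughly like this (plain HTTP against CouchDB's view API using Node 18+ fetch; the database and design document names are placeholders):

async function latestWithType1(baseUrl) {
  // Query 1: reduce to the latest timestamp and its param
  const res1 = await fetch(baseUrl + '/mydb/_design/app/_view/param-by-timestamp?reduce=true')
  const latest = (await res1.json()).rows[0].value  // [timestamp, param]

  // Query 2: follow the type 2 relation from that param back to the type 1 param
  const key = encodeURIComponent(JSON.stringify(latest[1]))
  const res2 = await fetch(baseUrl + '/mydb/_design/app/_view/partof-by-param?key=' + key)
  return (await res2.json()).rows[0].value  // the "partof" param pointing at the type 1 doc
}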

Flex: Binding to an MXML-esque "binding string" in action script?

Is it possible to specify MXML-esque "binding strings" in ActionScript?
For example, I want to be able to do something like:
MXMLBinding(this, "first_item",
    this, "{myArrayCollection.getItemAt(0)}");

MXMLBinding(this, ["nameLbl", "text"],
    this, "Name: {somePerson.first} {somePerson.last}");
Edit: thanks for the responses and suggestions… Basically, it seems like you can't do this. I've dug around and figured out why.
(Shameless plug)
BindageTools can do this:
Bind.fromProperty(this, "myArrayCollection", itemAt(0))
    .toProperty(this, "first_item");

Bind.fromAll(
    Bind.fromProperty(this, "somePerson.first"),
    Bind.fromProperty(this, "somePerson.last")
)
    .format("Name: {0} {1}")
    .toProperty(this, "nameLbl.text");
Note that BindageTools puts the source object first and the destination last (whereas BindingUtils puts the destination first and the source last).
Using ChangeWatcher (e.g., via BindingUtils.bindProperty or .bindSetter) is the way to go, yes. I admit it's a strange notation, but once you get used to it, it makes sense, works perfectly and is quite flexible, too.
Of course, you could always wrap those functions yourself somehow, if the notation bugged you -- both methods are static, so doing so in a way that feels more appropriate to your application should be a fairly straightforward exercise.
I could use BindingUtils or ChangeWatcher, but then I'd end up with code that looks something like this:
…
BindingUtils.bindSetter(updateName, this, ["somePerson", "first"]);
BindingUtils.bindSetter(updateName, this, ["somePerson", "last"]);
…
protected function updateName(...ignored):void {
    this.nameLbl.text = "Name: " + somePerson.first + " " + somePerson.last;
}
Which is just a little bit ugly… And the first example, binding to arrayCollection.getItemAt(0), is even worse.
Does the first parameter (function) of the BindingUtils.bindSetter method accept anonymous methods?
BindingUtils.bindSetter(function()
{
    this.nameLbl.text = "Name: " + somePerson.first + " " + somePerson.last;
}, this, ["somePerson", "last"]);
I hate anonymous methods, and obviously this is even uglier, so I won't recommend it even if it works; I'm just wondering whether it does.
Never the answer anyone wants to hear, but just manage this stuff with getters/setters in ActionScript. With a proper MVC, it's dead simple to manually set your display fields.
public function set myArrayCollection(value:Array):void {
    myAC = new ArrayCollection(value);
    first_item = myAC.getItemAt(0); // or value[0];
}
etc....
Alright, so I've done some digging, and here's what's up.
Bindings in MXML are, contrary to reason, set up by Java code (modules/compiler/src/java/flex2/compiler/as3/binding/DataBindingFirstPassEvaluator.java, if I'm not mistaken) at compile time.
For example, the binding first_item="{myArrayCollection.getItemAt(0)}" is expanded into, among other things, this:
// writeWatcher id=0 shouldWriteSelf=true class=flex2.compiler.as3.binding.PropertyWatcher shouldWriteChildren=true
watchers[0] = new mx.binding.PropertyWatcher("foo",
    { propertyChange: true }, // writeWatcherListeners id=0 size=1
    [ bindings[0] ],
    propertyGetter);

// writeWatcher id=1 shouldWriteSelf=true class=flex2.compiler.as3.binding.FunctionReturnWatcher shouldWriteChildren=true
watchers[1] = new mx.binding.FunctionReturnWatcher("getItemAt",
    target,
    function():Array { return [ 0 ]; },
    { collectionChange: true },
    [bindings[0]],
    null);

// writeWatcherBottom id=0 shouldWriteSelf=true class=flex2.compiler.as3.binding.PropertyWatcher
watchers[0].updateParent(target);

// writeWatcherBottom id=1 shouldWriteSelf=true class=flex2.compiler.as3.binding.FunctionReturnWatcher
// writeEvaluationWatcherPart 1 0 parentWatcher
watchers[1].parentWatcher = watchers[0];
watchers[0].addChild(watchers[1]);
This means that it is simply impossible to set up curly-brace MXML-style bindings at runtime, because the code to do it does not exist in ActionScript.
