Convert SQL query to LookML (Looker)

I need to transform this query to LookML:
SELECT Accounts_Unlock_Price,
Accounts_Upfront_Price,
Portfolio_Derived_Previous_Cumulative_Paid,
Portfolio_Derived_Previous_Cumulative_Paid/(Accounts_Unlock_Price - Accounts_Upfront_Price) * 100 AS FRR
FROM Accounts,
Portfolio_derived_20
WHERE Accounts.Accounts_Angaza_ID = Portfolio_derived_20.Portfolio_Derived_Account_Angaza_ID

How this is structured in LookML will depend on your model. SQL isn't directly convertible to LookML; LookML generates SQL rather than translating it.
LookML uses view files to describe tables, and you've got two tables here, so you'll need two view files. They might look something like this, though I'm just guessing:
view: accounts {
  sql_table_name: Accounts ;;

  dimension: Accounts_Unlock_Price {
    type: number
    sql: ${TABLE}.Accounts_Unlock_Price ;;
  }

  dimension: Accounts_Upfront_Price {
    type: number
    sql: ${TABLE}.Accounts_Upfront_Price ;;
  }

  dimension: Portfolio_Derived_Previous_Cumulative_Paid {
    type: number
    sql: ${TABLE}.Portfolio_Derived_Previous_Cumulative_Paid ;;
  }

  dimension: FRR {
    type: number
    sql: ${Portfolio_Derived_Previous_Cumulative_Paid} / (${Accounts_Unlock_Price} - ${Accounts_Upfront_Price}) * 100 ;;
  }

  dimension: Angaza_ID {
    type: number
    sql: ${TABLE}.Accounts_Angaza_ID ;;
  }
}
view: Portfolio_derived {
  sql_table_name: Portfolio_derived_20 ;;
  # don't know what's in this file

  dimension: Account_Angaza_ID {
    type: number
    sql: ${TABLE}.Portfolio_Derived_Account_Angaza_ID ;;
  }
}
Once you've defined the fields in views, you need to join them in an explore, so you can actually query them.
I'm again just guessing, and it looks like you're doing a CROSS join here but I'm not sure.
explore: accounts_angaza {
  view_name: accounts
  sql_always_where: ${Portfolio_derived.Account_Angaza_ID} = ${accounts.Angaza_ID} ;;

  join: Portfolio_derived {
    type: cross
  }
}
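Since the comma join plus the WHERE equality in the original SQL behaves like an inner join on the Angaza ID, another option (still a guess, reusing the field names from the views above) is to express that relationship with sql_on instead of a cross join plus sql_always_where:
explore: accounts_angaza {
  view_name: accounts

  join: Portfolio_derived {
    type: inner
    relationship: many_to_one  # assumption; depends on your data
    sql_on: ${Portfolio_derived.Account_Angaza_ID} = ${accounts.Angaza_ID} ;;
  }
}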
This would let you open that explore in the Explore UI and visually select Unlock price, upfront price, cumulative paid, and FRR, and query that. That's the easy part, "building the query". The harder part is laying the framework for it, which is what I've described a bit above.
This might be a helpful resource if you're comparing SQL to Looker queries, as it explains how Looker generates SQL: https://docs.looker.com/data-modeling/learning-lookml/how-looker-generates-sql
This explains how joins work and has links out to the lower-level components you have to build before you can join stuff. https://docs.looker.com/data-modeling/learning-lookml/working-with-joins
Good luck! If you've got more questions, there's a whole lotta Lookers that hang out at https://discourse.looker.com so that could be a good resource for future Looker questions too.

Related

Lighthouse @paginate with all records

I am using the @paginate directive on my query and my client wants to get all records of posts from the query. This is my code:
posts: [Post!]! @paginate
I tested these queries:
posts(first: 0) { id }   # works but doesn't get all records
posts(first: -1) { id }  # error
One way to get all records was to use the value of total inside paginatorInfo and make a new query with that value as first:
posts(first: 0) {
  paginatorInfo {
    total
  }
}
Making two queries just to get all records is bad for performance.
The best approach I found was to make a new query (so there are now two queries for posts, with different directives and names), like:
allPosts: [Post!]! @all
Other approaches, but not so clean:
Set pagination.max_count to null in config/lighthouse.php and request (first: 100000000) (the int is 32-bit, so it has a limit); see the config sketch after this list.
Change the @paginate to a @field(resolver: ...) and do the pagination in plain PHP.
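For the max_count option above, the relevant bit of config/lighthouse.php would look roughly like this (just a sketch based on the pagination.max_count setting named above; check the config file shipped with your Lighthouse version):
// config/lighthouse.php (excerpt)
return [
    // ...
    'pagination' => [
        // null removes the upper bound, so a huge (first: ...) value is accepted
        'max_count' => null,
    ],
];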

Find ONLY soft deleted rows with Sequelize

I'm running a database with Sequelize and SQLite, and I use soft deletes to archive the data.
I'm aware that with .findAll({ paranoid: false }) I can find all rows, including the soft-deleted ones. However, I would like to find ONLY the soft-deleted ones.
Is there any way to achieve this? Or is there perhaps a way to do "set operations" with two data results, like finding the relative complement of one in the other?
For this, you can add the following condition to the where clause:
deletedAt: { [Op.not]: null }
For example:
const { Op } = require('sequelize'); // needed for the Op.not operator

const projects = await db.Project.findAndCountAll({
  paranoid: false, // include soft-deleted rows in the result set
  order: [['createdAt', 'DESC']],
  where: { employer_id: null, deletedAt: { [Op.not]: null } }, // keep only soft-deleted rows
  limit: parseInt(size),
  offset: (page - 1) * parseInt(size),
});
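If you need the "only soft-deleted" filter in several places, a named scope keeps it in one spot. This is only a sketch: the scope name onlyTrashed is made up, and it assumes your Sequelize version honours paranoid inside scopes:
const { Op } = require('sequelize');

// Reusable "only soft-deleted rows" filter
db.Project.addScope('onlyTrashed', {
  paranoid: false,
  where: { deletedAt: { [Op.not]: null } },
});

const trashed = await db.Project.scope('onlyTrashed').findAll();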

DynamoDB transactional insert with multiple conditions (PK/SK attribute_not_exists and SK attribute_exists)

I have a table with PK (String) and SK (Integer) - e.g.
PK_id SK_version Data
-------------------------------------------------------
c3d4cfc8-8985-4e5... 1 First version
c3d4cfc8-8985-4e5... 2 Second version
I can do a conditional insert to ensure we don't overwrite the PK/SK pair using a ConditionExpression (in the Go SDK):
putWriteItem := dynamodb.Put{
    TableName:           aws.String("example_table"),
    Item:                itemMap,
    ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version)"),
}
However, I would also like to ensure that SK_version is always consecutive, but I don't know how to write the expression. In pseudo-code this is:
putWriteItem := dynamodb.Put{
TableName: "example_table",
Item: itemMap,
ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version) **AND attribute_exists(SK_version = :SK_prev_version)**"),
}
Can someone advise how I can write this?
In SQL I'd do something like:
INSERT INTO example_table (PK_id, SK_version, Data)
SELECT {pk}, {sk}, {data}
WHERE NOT EXISTS (
SELECT 1
FROM example_table
WHERE PK_id = {pk}
AND SK_version = {sk}
)
AND EXISTS (
SELECT 1
FROM example_table
WHERE PK_id = {pk}
AND SK_version = {sk} - 1
)
Thanks
A condition check applies to a single item; it cannot span multiple items. In other words, you simply need multiple condition checks. DynamoDB has a TransactWriteItems API which performs multiple condition checks along with writes/deletes. The code below is in Node.js.
const previousVersionCheck = {
  TableName: 'example_table',
  Key: {
    PK_id: 'prev_pk_id',
    SK_version: 'prev_sk_version'
  },
  // fails unless the previous version already exists
  ConditionExpression: 'attribute_exists(PK_id)'
}

const newVersionPut = {
  TableName: 'example_table',
  Item: {
    // your item data
  },
  // fails if this PK/SK pair has already been written
  ConditionExpression: 'attribute_not_exists(PK_id)'
}

await documentClient.transactWrite({
  TransactItems: [
    { ConditionCheck: previousVersionCheck },
    { Put: newVersionPut }
  ]
}).promise()
The transaction has two operations: one validates against the previous version, and the other is a conditional write. If either condition check fails, the whole transaction fails.
You are hitting your head on some of the differences between a SQL and a NoSQL database. DynamoDB is, of course, a NoSQL database. It does not, out of the box, support optimistic locking. I see two straightforward options:
Use a software layer to give you locking on your DynamoDB table. This may or may not be feasible depending on how often updates are made to your table. How fast 'versions' are generated, and the maximum time your application can be gated on the lock, will likely tell you whether this can work for you. I am not familiar with Go, but the Java API supports this. Again, this isn't a built-in feature of DynamoDB. If there is no Go equivalent, you could use the technique described in the link to 'lock' the table for updates. Generally speaking, locking a NoSQL database isn't a typical pattern, as it isn't what such databases were built for (part of which is achieving large scale on unstructured documents so many consumers can access them quickly at once).
Stop using an incrementor to guarantee uniqueness. Typically, incrementors are frowned upon in DynamoDB, partly because of the lack of intrinsic support for them and partly because of how DynamoDB shards data: you don't want a lot of similarity between records. Using a UUID will solve the uniqueness problem, but if you are porting an existing application that means more changes to the code that creates the ID and to the code that reads it (perhaps including a creation-time field so you can tell which is the newest, or prepending or appending an epoch time to the UUID to do the same). Here is a pertinent link to an SO question explaining why to use UUIDs instead of incrementing integers.
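A rough sketch of option 2 in Go; the github.com/google/uuid package and the epoch prefix are illustrative assumptions, not something from the question:
import (
    "fmt"
    "time"

    "github.com/google/uuid"
)

// newVersionID returns a key that is globally unique (UUID) but still roughly
// sortable by creation time, by prepending an epoch timestamp as suggested above.
func newVersionID() string {
    return fmt.Sprintf("%d-%s", time.Now().Unix(), uuid.NewString())
}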
Based on Hung Tran's answer, here is a Go example:
checkItem := dynamodb.TransactWriteItem{
    ConditionCheck: &dynamodb.ConditionCheck{
        TableName: aws.String("example_table"),
        // fails unless the previous version already exists
        ConditionExpression: aws.String("attribute_exists(pk_id) AND attribute_exists(version)"),
        Key:                 map[string]*dynamodb.AttributeValue{"pk_id": {S: id}, "version": {N: prevVer}},
    },
}
putItem := dynamodb.TransactWriteItem{
    Put: &dynamodb.Put{
        TableName: aws.String("example_table"),
        // fails if this pk_id/version pair has already been written
        ConditionExpression: aws.String("attribute_not_exists(pk_id) AND attribute_not_exists(version)"),
        Item:                data,
    },
}
writeItems := []*dynamodb.TransactWriteItem{&checkItem, &putItem}
_, _ = db.TransactWriteItems(&dynamodb.TransactWriteItemsInput{TransactItems: writeItems})
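The snippet above discards the returned error; in practice you would check it, since a failed condition check surfaces as a TransactionCanceledException. A hedged sketch of that handling, assuming aws-sdk-go v1 and its awserr package:
import (
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/service/dynamodb"
)

// ...
_, err := db.TransactWriteItems(&dynamodb.TransactWriteItemsInput{TransactItems: writeItems})
if err != nil {
    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == dynamodb.ErrCodeTransactionCanceledException {
        // One of the condition checks failed: either the previous version is
        // missing, or this pk_id/version pair was already written.
    }
    // handle or return any other error
}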

Azure Cosmos DB (NOT IS_DEFINED OR ) clause with JOIN always evaluates to false

I have a document:
{
  contact: {
    id: '123'
  },
  channels: [
    {
      ... some channel info ...
    }
  ],
  lastUpdatedEpoch: 1583937675
}
And I have the following query, which doesn't return the above document:
SELECT p FROM p JOIN c IN p.channels
WHERE (NOT IS_DEFINED(p.lastUpdatedEpoch) OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
But when I remove the NOT IS_DEFINED check, it correctly returns the document:
SELECT p FROM p JOIN c IN p.channels
WHERE (p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
I also tried replacing the NOT IS_DEFINED clause with FALSE, and it returns the document:
SELECT p FROM p JOIN c IN p.channels
WHERE (FALSE OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
Also, if I remove the JOIN, the query works as expected and returns the document:
SELECT p FROM p
WHERE (NOT IS_DEFINED(p.lastUpdatedEpoch) OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
To me this behavior is unexpected. When lastUpdatedEpoch is defined, I expect the same result from the first and second queries (aside from the fact that NOT IS_DEFINED will prevent the index from being used).
Could someone please explain what's going on here?
I tried to reproduce your issue on my side but could not; the result is as expected for me. (Screenshots of the test sample data and SQL output are omitted here.)
It seems that you did not reference any columns from channels. I suggest you create some simple test data to verify whether your SQL is right, then compare it with your actual data.
I contacted the Cosmos DB team, and they were able to give some insight into the issue.
There was a new optimization put in recently to allow inequality and NOT IS_DEFINED expressions to utilize the index. There was an issue with this optimization, and the team has disabled the feature for now. If you observe this issue with your cluster, please contact their support team.

Form Running Totals, Ax 2009

Is there an example anywhere of a form that performs running totals in a column located within a grid? The user's ordering and filtering of the grid would affect the running-totals column.
I can easily do this if the grid is ordered only by transaction date, but to honor the user's ordering and filtering I presume we would have to use the datasource range() and rangecount() functions (see SysQuery::mergeRanges() for an example), then iterate over these to apply the filtering, and then include the dynalinks. The same goes for the ordering, although that is more complicated.
Any suggestions appreciated. Any appreciations suggested (as in: vote the question up!).
You could implement it as a form datasource display method using this strategy:
Copy the form's datasource query (no need for SysQuery::mergeRanges):
QueryRun qr = new QueryRun(ledgerTrans_qr.query());
Iterate and sum over your records using qr, stop after the current record:
while (qr.next())
{
    lt = qr.getNo(1);
    total += lt.AmountMST;

    if (lt.RecId == _lt.RecId)
        break;
}
This could be made more performant if the sorting order was fixed (using sum(AmountMST) and adding a where constraint; see the sketch at the end of this section).
Return the total.
This is of course very inefficient (quadratic time, O(n^2)).
Caching the results (in a map) may make it usable if there are not too many records.
Update: a working example.
Any observations or criticisms of the code below are most welcome. Jan's observation about the method being slow is still valid. As you can see, it's a modification of his original answer.
//BP Deviation Documented
display AmountMST XXX_runningBalanceMST(LedgerTrans _trans)
{
    LedgerTrans localLedgerTrans;
    AmountMST   amountMST;
    ;
    // Walk the datasource in its current (user-defined) sort order and
    // accumulate until we reach the record this method is displayed for.
    localLedgerTrans = this.getFirst();
    while (localLedgerTrans)
    {
        amountMST += localLedgerTrans.AmountMST;

        if (localLedgerTrans.RecId == _trans.RecId)
        {
            break;
        }

        localLedgerTrans = this.getNext();
    }
    return amountMST;
}
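For the fixed-sort-order optimization Jan mentions, an aggregate query avoids the per-record loop entirely. The sketch below assumes the running order is AccountNum/TransDate/Voucher; adjust the where clause to whatever the real, fixed sort order is:
// Running total up to and including _trans, valid only for a fixed sort order.
display AmountMST XXX_runningBalanceByDateMST(LedgerTrans _trans)
{
    LedgerTrans sumTrans;
    ;
    select sum(AmountMST) from sumTrans
        where sumTrans.AccountNum == _trans.AccountNum
           && (sumTrans.TransDate <  _trans.TransDate
           || (sumTrans.TransDate == _trans.TransDate && sumTrans.Voucher <= _trans.Voucher));

    return sumTrans.AmountMST;
}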
