Find ONLY soft-deleted rows with Sequelize - sqlite

I'm running a database with Sequelize and SQLite, and I use soft deletes to archive the data.
I'm aware that with .findAll({ paranoid: false }) I can find all rows, including the soft-deleted ones. However, I would like to find ONLY the soft-deleted ones.
Is there any way to achieve this? Or is there perhaps a way to do "set operations" on two result sets, like finding the relative complement of one in the other?

For this, you can add the following condition to the where clause:
deletedAt: { [Op.not]: null }
For example:
const { Op } = require('sequelize'); // Op must be imported for the operator symbols

const projects = await db.Project.findAndCountAll({
  paranoid: false, // include soft-deleted rows in the search
  order: [['createdAt', 'DESC']],
  where: { employer_id: null, deletedAt: { [Op.not]: null } }, // keep only rows that have been soft-deleted
  limit: parseInt(size),
  offset: (page - 1) * parseInt(size),
});
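If several queries need this, here is a minimal sketch of a reusable helper (the findOnlyDeleted name and the options handling are illustrative, not from the original answer; it assumes the model uses the default deletedAt column):

```
const { Op } = require('sequelize');

// Hypothetical helper: run a findAll on any paranoid model,
// but return only the soft-deleted rows.
function findOnlyDeleted(model, options = {}) {
  return model.findAll({
    ...options,
    paranoid: false, // let Sequelize return soft-deleted rows...
    where: { ...(options.where || {}), deletedAt: { [Op.not]: null } }, // ...then keep only them
  });
}

// e.g. const archived = await findOnlyDeleted(db.Project, { order: [['createdAt', 'DESC']] });
```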

Related

DynamoDB transactional insert with multiple conditions (PK/SK attribute_not_exists and SK attribute_exists)

I have a table with PK (String) and SK (Integer) - e.g.
PK_id                 SK_version   Data
-------------------------------------------------------
c3d4cfc8-8985-4e5...  1            First version
c3d4cfc8-8985-4e5...  2            Second version
I can do a conditional insert to ensure we don't overwrite the PK/SK pair using a ConditionExpression (in the Go SDK):
putWriteItem := dynamodb.Put{
    TableName:           aws.String("example_table"),
    Item:                itemMap,
    ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version)"),
}
However, I would also like to ensure that SK_version is always consecutive, but I don't know how to write the expression. In pseudo-code this is:
putWriteItem := dynamodb.Put{
    TableName:           aws.String("example_table"),
    Item:                itemMap,
    ConditionExpression: aws.String("attribute_not_exists(PK_id) AND attribute_not_exists(SK_version) **AND attribute_exists(SK_version = :SK_prev_version)**"),
}
Can someone advise how I can write this?
In SQL I'd do something like:
INSERT INTO example_table (PK_id, SK_version, Data)
SELECT {pk}, {sk}, {data}
WHERE NOT EXISTS (
    SELECT 1
    FROM example_table
    WHERE PK_id = {pk}
      AND SK_version = {sk}
)
AND EXISTS (
    SELECT 1
    FROM example_table
    WHERE PK_id = {pk}
      AND SK_version = {sk} - 1
)
Thanks
A condition check applies to a single item; it cannot span multiple items. In other words, you simply need multiple condition checks. DynamoDB has a transactWriteItems API which performs multiple condition checks, along with writes/deletes, atomically. The code below is in Node.js.
// Condition check: the previous version must already exist.
const previousVersionCheck = {
  TableName: 'example_table',
  Key: {
    PK_id: 'prev_pk_id',
    SK_version: 'prev_sk_version'
  },
  ConditionExpression: 'attribute_exists(PK_id)'
};

// Put: the new version must not exist yet.
const newVersionPut = {
  TableName: 'example_table',
  Item: {
    // your item data
  },
  ConditionExpression: 'attribute_not_exists(PK_id)'
};

await documentClient.transactWrite({
  TransactItems: [
    { ConditionCheck: previousVersionCheck },
    { Put: newVersionPut }
  ]
}).promise();
The transaction has two operations: one is a validation against the previous version, and the other is a conditional write. If either condition check fails, the whole transaction fails.
You are hitting your head on some of the differences between a SQL and a no-SQL database. DynamoDB is, of course, a no-SQL database. It does not, out of the box, support optimistic locking. I see two straightforward options:
Use a software layer to give you locking on your DynamoDB table. This may or may not be feasible, depending on how often your table is updated. How fast 'versions' are generated, and the maximum time your application can be gated on the lock, will likely tell you whether this can work for you. I am not familiar with Go, but the Java API supports this. Again, this isn't a built-in feature of DynamoDB. If there is no Go equivalent, you could use the technique described in the link to 'lock' the table for updates. Generally speaking, locking a no-SQL DB isn't a typical pattern, as it isn't what the technology was created for (part of which is achieving large scale on unstructured documents, allowing fast access by many consumers at once).
Stop using an incrementor to guarantee uniqueness. Incrementors are typically frowned upon in DynamoDB, partly due to the lack of intrinsic support for them and partly because of how DynamoDB shards: you don't want a lot of similarity between records. Using a UUID will solve the uniqueness problem, but if you are porting an existing application, that means more changes to the code that creates the ID and to the code that reads it (perhaps adding a creation-time field so you can tell which record is newest, or prepending or appending an epoch time to the UUID to do the same). Here is a pertinent link to an SO question explaining why to use UUIDs instead of incrementing integers.
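As a hedged sketch of that second option (the helper name is illustrative, not from the original answer): an ID built from a fixed-width epoch timestamp plus a UUID stays unique without coordination and still sorts by creation time:

```
const { randomUUID } = require('crypto'); // built into Node 14.17+

// Hypothetical helper: a sortable, collision-free version identifier.
function newVersionId() {
  const epochMs = String(Date.now()).padStart(14, '0'); // fixed width keeps lexicographic order
  return `${epochMs}_${randomUUID()}`;
}

// Newest versions sort last, so "which is newest" becomes a simple string comparison.
```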
Based on Hung Tran's answer, here is a Go example:
// Condition check: the previous version must exist.
checkItem := dynamodb.TransactWriteItem{
    ConditionCheck: &dynamodb.ConditionCheck{
        TableName:           aws.String("example_table"),
        ConditionExpression: aws.String("attribute_exists(pk_id) AND attribute_exists(version)"),
        Key:                 map[string]*dynamodb.AttributeValue{"pk_id": {S: id}, "version": {N: prevVer}},
    },
}

// Put: the new version must not exist yet.
putItem := dynamodb.TransactWriteItem{
    Put: &dynamodb.Put{
        TableName:           aws.String("example_table"),
        ConditionExpression: aws.String("attribute_not_exists(pk_id) AND attribute_not_exists(version)"),
        Item:                data,
    },
}

writeItems := []*dynamodb.TransactWriteItem{&checkItem, &putItem}
if _, err := db.TransactWriteItems(&dynamodb.TransactWriteItemsInput{TransactItems: writeItems}); err != nil {
    // handle the failed transaction (e.g. a condition check did not pass)
}

Azure Cosmos DB (NOT IS_DEFINED OR ) clause with JOIN always evaluates to false

I have a document:
{
  contact: {
    id: '123'
  },
  channels: [
    {
      ... some channel info ...
    }
  ],
  lastUpdatedEpoch: 1583937675
}
And I have following query which doesn't return the above document:
SELECT p FROM p JOIN c IN p.channels
WHERE (NOT IS_DEFINED(p.lastUpdatedEpoch) OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
But when I remove the NOT IS_DEFINED check, it correctly returns the document:
SELECT p FROM p JOIN c IN p.channels
WHERE (p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
I also tried replacing the NOT IS_DEFINED clause with FALSE, and it returns the document:
SELECT p FROM p JOIN c IN p.channels
WHERE (FALSE OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
Also, if I remove the JOIN, the query works as expected and returns the document:
SELECT p FROM p
WHERE (NOT IS_DEFINED(p.lastUpdatedEpoch) OR p.lastUpdatedEpoch < 1585733881)
AND p.contact.id = '123'
To me this behavior is unexpected. When lastUpdatedEpoch is defined, I expect the same result from the first and second query (aside from the fact that NOT IS_DEFINED will cause the index not to be used).
Could someone please explain what's going on here?
I tried to reproduce your issue on my side but failed; the result is as expected for me.
(Test sample data and SQL output screenshots omitted.)
It seems that you did not reference any columns from channels. I suggest you create some simple test data to verify whether your SQL is right, then compare it with your actual data.
I contacted the CosmosDB team, and they were able to give some insight into the issue.
A new optimization was recently put in to allow inequality and NotIsDefined expressions to utilize the index. There was an issue with this optimization, and the team has disabled the feature for now. If you observe this issue on your account, please contact their support team.

Normalize overuse of "precondition" endpoints in Collection and its folders & environments

I have been using Postman a lot and have built many useful things in it, but now I need to implement one more thing.
Briefly:
I need to create test cases that verify correct record counts for every institution.
Here is how I handle it now.
Structure:
Collection
  Folder
    Districts (folder)
      some folders with tests
    Colleges (folder)
      some folders with tests
    Schools (folder)
      Principal (folder with tests, inside the Schools folder)
with these requests:
POST: Create a list
with this code in its Tests tab:
var jList = JSON.parse(responseBody);
postman.setEnvironmentVariable("list_id", jList.data.id); // save the new list's id for the following requests
POST: Add some filters to list
GET: lists/{{list_id}} where the Tests tab contains:
```
var allLists = JSON.parse(responseBody);
pm.test("test count", function () {
    const value = allLists.data.count;
    pm.expect(typeof value === 'number').to.eql(true);
    pm.expect(value > 0 && value < 999999).to.eql(true);
}); // check that the count (produced in step 2) is in the expected range
```
DELETE: delete this list
I suspect I'm doing 'overhead work' by adding the POST (create a list) and DELETE endpoints in every folder. Can I somehow move them out to the collection level or a variable, executing the create before every "POST: Add some filters to list" and the DELETE after it?
Something like a general beforeEach and afterEach.
Perhaps I can even pull this (below) out into one shared place for every separate test?
pm.test("test count", function () {
    const value = allLists.data.count;
    pm.expect(typeof value === 'number').to.eql(true);
    pm.expect(value > 0 && value < 999999).to.eql(true);
});
Here is an example of how it looks. If it's not difficult, please give me some advice. Thanks!
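For what it's worth, a hedged sketch of one way to get a beforeEach/afterEach: Postman runs collection-level and folder-level scripts before/after every request inside them, so the shared assertion can live in the collection's Tests tab, guarded so it only fires for the relevant request (the URL check below is illustrative, not from the original setup):

```
// Collection-level Tests script (a sketch): runs after every request in the
// collection, so it acts like a global afterEach. The path guard is hypothetical.
if (pm.request.method === 'GET' && pm.request.url.getPath().match(/\/lists\//)) {
    const allLists = pm.response.json();
    pm.test('test count', function () {
        const value = allLists.data.count;
        pm.expect(typeof value === 'number').to.eql(true);
        pm.expect(value > 0 && value < 999999).to.eql(true);
    });
}
```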

Query with multiple where clauses in Firebase

I'm having a bit of trouble with a Firebase query, mainly due to the size of the dataset I am querying.
What I would like to achieve is:
Find all tshirts where brandStartsWith = 'A' and salesRank is between 1 and 100
I've started to pad this out, but I'm running into an issue whereby I can't seem to get the data, because there are over 300,000 records under tshirts.
If I call it within React when the page loads, after a while I get the following error in the console:
Uncaught RangeError: Invalid string length
Here is the code I am using to get started, but I'm not sure where to go from here. Looking at the solutions on this question, it seems I need to download the data per my query below and then sort it on the client side, which I can't seem to do.
firebase.database().ref('tshirts')
  .orderByChild('brandStartsWith')
  .equalTo('A')
  .once('value', function (snapshot) {
    console.log(snapshot.val())
  })
You're going to need to create a combined key, as you can only use one where clause at a time.
{
  "tShirts" : [
    {
      "brandStartsWith" : "A",
      "salesRank" : 5,
      "brandStartsWith_salesRank" : "A_00005" // pad to as many digits as your sales ranks need
    },
    {
      "brandStartsWith" : "B",
      "salesRank" : 108,
      "brandStartsWith_salesRank" : "B_00108"
    },
    {
      "brandStartsWith" : "C",
      "salesRank" : 52,
      "brandStartsWith_salesRank" : "C_00052"
    }
  ]
}
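A small sketch of building that padded composite key on write (the helper name is illustrative; the five-digit width matches the sample data above, so adjust it to your largest rank):

```
// Hypothetical helper: combine the brand letter and a zero-padded rank into one key.
function brandRankKey(brandStartsWith, salesRank) {
  return brandStartsWith + '_' + String(salesRank).padStart(5, '0');
}

// e.g. brandRankKey('A', 5) === 'A_00005'
```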
This will allow you to do this query:
firebase.database().ref('tshirts')
  .orderByChild('brandStartsWith_salesRank')
  .startAt('A_00001')
  .endAt('A_00100')
  .once('value', function (snapshot) {
    console.log(snapshot.val())
  })
Don't forget to update your database rules with an ".indexOn": "brandStartsWith_salesRank" entry for tshirts.

How to get 3 data that have the highest value of a column in the table

I am using knex and bookshelf, and my table consists of author, title, content, and count columns; each row looks like this:
author: 'John Doe',
title: 'aaaaa',
content: 'aaaaaaaa',
count: 54
I want to retrieve data based on the value of count: specifically, the 3 rows with the highest count values.
If I want to retrieve all data, I am doing like this:
router.get('/', (req, res) => {
  Article.forge().fetchAll().then(article => {
    res.json(article);
  })
})
Is there any way I can do something like forge({ count: <the 3 rows with the highest count value> })?
What should I add to the code to achieve this?
Combine orderBy with fetchPage:
Article
  .forge()
  .orderBy('-count') // the '-' prefix means descending
  .fetchPage({
    pageSize: 3
  })
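As a sketch of how that might slot into the route from the question (assuming the same Article model, and that Bookshelf's pagination plugin is enabled so fetchPage is available):

```
router.get('/', (req, res) => {
  Article
    .forge()
    .orderBy('-count')          // highest counts first
    .fetchPage({ pageSize: 3 }) // keep only the top three
    .then(articles => res.json(articles));
});
```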
This highlights a reason why my team is removing Bookshelf and just using plain knex: unless you want to fetch related models, it's simpler to work without the ORM layer. The equivalent knex code is:
knex('articles')
  .orderBy('count', 'desc')
  .limit(3)
This is slightly simpler, and the resulting rows' properties can be accessed directly, i.e. rows[0].id rather than rows[0].get('id').
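For completeness, a self-contained sketch of the knex version; the client and connection settings are placeholders, not from the original answer:

```
// Assumed setup: a SQLite database with an 'articles' table as described above.
const knex = require('knex')({
  client: 'sqlite3',
  connection: { filename: './dev.sqlite3' },
  useNullAsDefault: true,
});

knex('articles')
  .orderBy('count', 'desc') // highest counts first
  .limit(3)                 // keep only the top three rows
  .then(rows => console.log(rows));
```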
