I am struggling to find the correct syntax to filter on a String Set/List target attribute (tags) by an array of strings. The idea is that an item passes the filter if its SS attribute contains all of the passed strings; the passed strings do not need to match every string in the target attribute, so the more strings you pass, the more specific the results.
// Compile tags into a list
let tagSqlValues = {};
let tagSqlStatement = query.tags.map((tag: string, index: number) => {
    let tagParam = `:tag${index}`;
    tagSqlValues[tagParam] = tag;
    return `${tagParam} in tags`;
}).join(" and ");
// Console Logs
// tagSqlStatement = :tag0 in tags and :tag1 in tags (also tried tags contains :tag0 and tags contains :tag1)
// tagSqlValues = {":tag0":"Modern",":tag1":" Spring"}
let params = {
    TableName: "Art",
    FilterExpression: tagSqlStatement,
    ExpressionAttributeValues: tagSqlValues,
};
let results = await this.DDB_CLIENT.scan(params).promise();
// Console Logs
// "Invalid FilterExpression: Syntax error; token: \"tags\", near: \"in tags and\""
// "Invalid FilterExpression: Syntax error; token: \"contains\", near: \"tags contains :tag0\""
I've tried several variations with IN and CONTAINS without luck. Is this possible with DynamoDB?
It looks like my CONTAINS syntax was wrong. I did a little digging and found this answer by Zanon. With a minor modification to include the and join, it seems like the filter is working as expected!
// Compile tags into a list
let tagSqlValues = {};
let tagSqlStatement = query.tags.map((tag: string, index: number) => {
    let tagParam = `:tag${index}`;
    tagSqlValues[tagParam] = tag;
    return `contains(tags, ${tagParam})`;
}).join(" and ");
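For completeness, here is a sketch of the whole call with the corrected syntax. The trim() is my own addition, since the logged values above show a leading space in " Spring" and contains() compares strings exactly:
// Trim each tag, then build one contains() clause per tag, AND-ed together
let tagSqlValues = {};
let tagSqlStatement = query.tags.map((tag: string, index: number) => {
    let tagParam = `:tag${index}`;
    tagSqlValues[tagParam] = tag.trim();
    return `contains(tags, ${tagParam})`;
}).join(" and ");

// tagSqlStatement = "contains(tags, :tag0) and contains(tags, :tag1)"
let params = {
    TableName: "Art",
    FilterExpression: tagSqlStatement,
    ExpressionAttributeValues: tagSqlValues,
};
let results = await this.DDB_CLIENT.scan(params).promise();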
I have code that looks similar to this
const attr = event.target.value;
const query = filter.merge({
    [attr]: !filter[attr]
});
where filter is an Immutable Record. Flow complains because an index signature declaring the expected key/value type is missing.
Can I somehow tell Flow that this is OK, or do I have to use a $FlowFixMe?
I'm using Immutable v4.0.0-rc.14
Solved by using a switch statement
switch (attr) {
    case "something":
        const query = filter.merge({
            something: !filter.something
        });
        break;
    case etc...
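For reference, a minimal sketch of the completed switch; "somethingElse" is a hypothetical second key, and declaring query once outside the switch avoids redeclaring a const in each case clause:
let query;
switch (attr) {
    case "something":
        query = filter.merge({ something: !filter.something });
        break;
    case "somethingElse": // hypothetical second key
        query = filter.merge({ somethingElse: !filter.somethingElse });
        break;
    default:
        query = filter;
}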
I am trying to append to a string set (array of strings) column, which may or may not already exist, in a DynamoDB table. I referred to SO questions like this and this when writing my UpdateExpression.
My code looks like this.
const AWS = require('aws-sdk')
const dynamo = new AWS.DynamoDB.DocumentClient()

const updateParams = {
    // The table definitely exists.
    TableName: process.env.DYNAMO_TABLE_NAME,
    Key: {
        email: user.email
    },
    // The column may or may not exist, which is why I am combining list_append with if_not_exists.
    UpdateExpression: 'SET #column = list_append(if_not_exists(#column, :empty_list), :vals)',
    ExpressionAttributeNames: {
        '#column': 'items'
    },
    ExpressionAttributeValues: {
        ':vals': ['test', 'test2'],
        ':empty_list': []
    },
    ReturnValues: 'UPDATED_NEW'
}

dynamo.update(updateParams).promise().catch((error) => {
    console.log(`Error: ${error}`)
})
However, I am getting this error: ValidationException: An operand in the update expression has an incorrect data type. What am I doing incorrectly here?
[Update]
Thanks to Nadav Har'El's answer, I was able to make it work by amending the params to use the ADD operation instead of SET.
const updateParams = {
    TableName: process.env.DYNAMO_TABLE_NAME,
    Key: {
        email: user.email
    },
    UpdateExpression: 'ADD items :vals',
    ExpressionAttributeValues: {
        ':vals': dynamo.createSet(['test', 'test2'])
    }
}
A list and a string set are not the same type: a string set can only hold strings, while a list may hold values of any type (including nested lists and objects), its element types don't need to match, and it can also hold duplicate items. So if your original item is indeed a string set, not a list, this explains why this operation cannot work.
To add items to a string set, use the ADD operation, not SET. The parameter you give to ADD should be a set (not a list; I don't know the magic JS syntax to specify this, check your docs) with a bunch of elements. If the attribute already exists, these elements will be added to it (dropping duplicates), and if the attribute doesn't already exist, it will be set to the set of these elements. See the documentation here: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html#DDB-UpdateItem-request-UpdateExpression
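In the JavaScript DocumentClient the distinction looks like this (a minimal sketch; the asker's update above shows the same createSet call):
const AWS = require('aws-sdk')
const dynamo = new AWS.DynamoDB.DocumentClient()

// A plain JS array is marshalled as a DynamoDB List (type L):
const asList = ['test', 'test2']
// createSet() produces a DynamoDB Set (type SS for strings), which is what ADD needs:
const asSet = dynamo.createSet(['test', 'test2'])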
Hello kind Stackoverflow folks,
I'm trying to create a function to guard off code from being executed at run-time with an incorrect Flow type present.
My understanding is that the way to do this at run-time is by refining (checking) that the type matches what is required, using Flow to make sure no cases are missed along the way.
A simple case: I have a string input that I would like to confirm matches an enum/union type. I have this working as I'd expect with literals, e.g.
/* @flow */
type typeFooOrBaa = "foo" | "baa"

const catchType = (toCheck: string): void => {
    // Working check
    if (toCheck === "foo" || toCheck === "baa") {
        // No Flow errors
        const checkedValue: typeFooOrBaa = toCheck
        // ... do something with the checkedValue
    }
};
Try it over here
Naturally, I would like to avoid embedding literals.
One of the things I've tried is the equivalent object key test, which doesn't work :-( e.g.
/* @flow */
type typeFooOrBaa = "foo" | "baa"
const fooOrBaaObj = {"foo": 1, "baa": 2}

const catchType = (toCheck: string): void => {
    // Non-working check
    if (fooOrBaaObj[toCheck]) {
        /*
        The next assignment generates the following Flow error:
        Cannot assign `toCheck` to `checkedVariable` because: Either string [1] is incompatible
        with string literal `foo` [2]. Or string [1] is incompatible with string literal `baa` [3].
        */
        const checkedVariable: typeFooOrBaa = toCheck
    }
};
Try it over here
Is it possible to achieve something like this without having to go down the full flow-runtime route? If so how is it best done?
Thanks for your help.
One approach that appears to work is to use the const object which defines the allowed values to:
Generate a union type using the $Keys utility.
Use that union type to create a map object where the keys are the desired input (in our case strings) and the values are "maybe"s of the type that needs refining.
Here's the example from earlier reworked so that it:
Sets the type up as we'd expect to allow either "foo" or "baa" but nothing else.
Detects when a string is suitably refined so that it only contains "foo" or "baa".
Detects when a string might contain something else other than what's expected.
Credit to #vkurchatkin for his answer that helped me crack this (finally).
/* @flow */
// Example of how to persuade Flow to detect safe, adequately refined usage of a union type
// at runtime, and its unsafe, inadequately refined counterparts.

const fooOrBaaObj = {foo: 'foo', baa: 'baa'}
type typeFooOrBaa = $Keys<typeof fooOrBaaObj>
// NB: $Keys is used in the type definition to avoid aliasing typeFooOrBaa === string,
// which allows things like the line below to correctly spot problems.
//const testFlowSpotsBadDefinition: typeFooOrBaa = "make_flow_barf"

const fooOrBaaMap: { [key: string]: ?typeFooOrBaa } = fooOrBaaObj;
// NB: The "?" maybe signifier in the definition is essential to inform Flow that indexing into
// the map "might" produce a "null". Without it, the subsequent correct detection of unsafe
// unrefined variables fails.

const catchType = (toCheck: string): void => {
    const myValue = fooOrBaaMap[toCheck];
    if (myValue) {
        // Detects refined safe usage
        const checkedVariable: typeFooOrBaa = myValue
    }
    // Uncommenting the following line correctly causes Flow to flag the unsafe type. Must have the
    // "?" in the map definition to get Flow to spot this.
    //const testFlowSpotsUnrefinedUsage: typeFooOrBaa = myValue
}
Have a play with it over here
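A quick usage sketch of the refinement above:
catchType("foo")   // myValue is "foo", the branch is taken, the assignment is safe
catchType("nope")  // myValue is undefined, the branch is skipped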
You can type the object as {[typeFooOrBaa]: number}, but Flow will not enforce that all members of typeFooOrBaa exist in the object.
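A minimal sketch of that typing (variable name mine), showing that a missing member is not flagged:
/* @flow */
type typeFooOrBaa = "foo" | "baa"
// Flow accepts the indexer annotation, but does not require every union member:
const partial: { [key: typeFooOrBaa]: number } = { foo: 1 } // no error despite the absent "baa"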
I'm working with FullCalendar and I'm trying to get resources as a function
resources: function(callback){
    var manageEvent = new ManageEvent();
    var request = manageEvent.getEmployees();
    request.always(function (param) {
        //location.reload();
        var list = [];
        var emp;
        for (var elem in param) {
            emp = param[elem];
            list.push({
                'id': emp['cp_collaboratore'],
                'title': emp['cognome_col']
            });
        }
        var t = JSON.stringify(list);
        callback(t);
    });
    request.catch(function (param) {
        alert('errore');
    });
},
I checked the variable 't' through log and it shows the following result:
[{"id":"1","title":"name_1"},{"id":"2","title":"name_2"},{"id":"3","title":"name_3"},{"id":"5","title":"name_4"},{"id":"9","title":"name_5"}]
but it doesn't work and shows the following error message:
Uncaught TypeError: resourceInputs.map is not a function
at ResourceManager.setResources
You just need to write
callback(list);
t in your code is a string, because you converted your list array into a string using JSON.stringify(). But fullCalendar expects an actual array, not a string. It can't run functions or read individual properties from a string.
You can remove the line var t = JSON.stringify(list); completely, it's not needed.
Generally the only reason you'd use stringify() is if you wanted to log the value to your console for debugging, or convert the array into JSON if you wanted to send it somewhere else using AJAX. It makes no sense to pass arrays and objects around inside JavaScript as serialised strings, when you can just use the objects themselves.
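Putting it together, a minimal sketch of the corrected resources function (same names as in the question's code):
resources: function(callback) {
    var manageEvent = new ManageEvent();
    var request = manageEvent.getEmployees();
    request.always(function (param) {
        var list = [];
        for (var elem in param) {
            var emp = param[elem];
            list.push({
                'id': emp['cp_collaboratore'],
                'title': emp['cognome_col']
            });
        }
        callback(list); // pass the array itself, no JSON.stringify()
    });
    request.catch(function () {
        alert('errore');
    });
},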
I am trying to build an app that stores and shows book quotes by title and by author. I am using Firebase for the backend. My Firebase data structure looks like this.
When a book quote is added, I know the author. So to store the quote in author automatically, I am trying to use Firebase Functions.
I have tried two approaches,
Merge quotes from the author with quotes from the book when the book is updated.
exports.newQuotesTrigger = functions.database.ref('library/{bookAndAuthor}').onWrite((snap, context) => {
    const message = snap;
    console.log('Retrieved message content: ', message);
    const newValue = message.after.val();
    const oldValue = message.before.val();
    const author = snakeCase(newValue.author);
    admin.database().ref('authors/' + author).child('quotes').set(newValue.quotes);
    console.log('Updated author quotes');
    return message;
});
Just push the difference of new quotes and old quotes from the book
exports.newQuotesTrigger = functions.database.ref('library/{bookAndAuthor}').onWrite((snap, context) => {
    const message = snap;
    console.log('Retrieved message content: ', message);
    const newValue = message.after.val();
    const oldValue = message.before.val();
    const newQuotes = newValue.quotes || [];
    const oldQuotes = oldValue.quotes || [];
    const diff = arrayDiff(newQuotes, oldQuotes);
    if (diff) {
        console.log('Quotes were updated for ', {title: newValue.title, author: newValue.author});
        const author = snakeCase(newValue.author);
        admin.database().ref('authors/' + author).child('quotes').push(diff);
        console.log('Updated author quotes');
    }
    return message;
});
Neither approach appends/inserts the updated quotes properly. I haven't found a way to append/insert into a Firebase database array.
You have to use update in order to "update specific children of a node without overwriting other child nodes", see:
https://firebase.google.com/docs/database/web/read-and-write#update_specific_fields
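A quick sketch of the difference (ref standing in for any database reference):
// set() replaces the entire node, removing children you don't mention:
ref.set({ a: 1 })
// update() writes only the named children and leaves the rest intact:
ref.update({ a: 1 })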
Your first piece of code should work with update if you slightly change your structure as follows, with auto-generated IDs for the quotes.
Database

authors
  - nassim
    - quoteID1: "...."   <- auto-generated ID
    - quoteID2: "...."   <- auto-generated ID
    - quoteID3: "...."   <- auto-generated ID
Cloud Function
In your first version of the code, replace these lines
admin.database().ref('authors/' + author).child('quotes').set(newValue.quotes);
console.log('Updated author quotes');
return message;
with these:
const quotesObject = newValue.quotes;
var updates = {};
Object.keys(quotesObject).forEach(function (key) {
    let quote = quotesObject[key];
    // Use the admin SDK here (the client "firebase" namespace is not available in a Cloud Function)
    const newQuoteKey = admin.database().ref('authors/' + author).child('quotes').push().key;
    updates[newQuoteKey] = quote;
});
return admin.database().ref('authors/' + author).child('quotes').update(updates);
Another important point is that you are not returning a promise in your Cloud Functions. You should return the promise from the update (or set) and not the message. See https://www.youtube.com/watch?v=652XeeKNHSk&t=26s
In case you really have to keep quote IDs you generate yourself (i.e. 0, 1, 2, etc.) in a sequence, you will have to manipulate arrays: get the previous array of values, add the new quote, and overwrite the existing set of quotes with the new array. A lot of effort! Especially since with auto-generated IDs you will not lose the quote order: the quotes are still saved in the order they were written.
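If you really did need those sequential IDs, here is a hedged sketch of that read-modify-write using a transaction, so concurrent writers don't clobber each other (appendQuote is a name I made up):
function appendQuote(author, newQuote) {
    // Read the current array, append, and write the whole array back atomically.
    return admin.database().ref('authors/' + author).child('quotes')
        .transaction(function (currentQuotes) {
            var quotes = currentQuotes || [];
            quotes.push(newQuote);
            return quotes;
        });
}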